
Proof of concept OSD code. #10

Open
wants to merge 2 commits into base: master
Conversation

rdoeffinger

It works for displaying a usable OSD in MPlayer at full-screen
resolution, even when using hardware decoding, but it is far
too slow for fullscreen use, for example.

Signed-off-by: Reimar Döffinger <[email protected]>

@rzk
Member

rzk commented Nov 22, 2013

Awesome! But may I ask, how slow is "too slow"? Just a user question; it would be great to have some data on which formats and resolutions work well with the OSD. If this is the only OSD solution possible, it might even be worth adding the info to the wiki so users know what to expect.

@jemk
Member

jemk commented Nov 22, 2013

I added a branch with my idea for OSD support. It is far from optimal too, but at least it's pretty fast, and I have many optimizations in mind. It seems we duplicated some work; sorry for not pushing this earlier.

@rdoeffinger
Author

I think it's not exactly duplicated code.
Note that the speed issue is mostly due to my choosing a naive design, because I just wanted to test the idea.
The only non-fixable issue is that proper alpha blending is not possible.
I have to admit I really do not like having a hard dependency on another, independent hardware feature like G2D; my approach only needs colorkey support.
I wonder what your opinion is on how feasible/sensible it would be to support both?
Just to be clear: I don't mean this as an attempt to push my solution (who knows if I'll ever have time to finish it), it just sounded like a nice idea in my head.

@jemk
Member

jemk commented Nov 22, 2013

No, of course it's not duplicate work; only vdp_output_surface_put_bits_indexed() overlaps.

Well, it doesn't strictly depend on G2D; this could be done in software too. It even needs to get a software fallback for SoCs without G2D. But most sunxi SoCs have G2D, so why not use it.

The main difference is that my version puts a second disp layer over the video with real alpha blending. I'm still not quite sure whether this is really better than using X (a bit more optimized). It will be necessary to support "both", since a second alpha-blended layer makes it impossible to use a color key, so window overlapping wouldn't work. I had many ideas for how to do all this, but not much time to try them.
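As a concrete illustration of the software fallback mentioned above (for SoCs without G2D), here is a minimal sketch of blending one ARGB OSD pixel over an opaque video pixel. The function name and the choice of straight (non-premultiplied) alpha are assumptions for illustration, not code from either branch:

```c
#include <stdint.h>

/* Hypothetical software fallback for SoCs without G2D:
 * blend a 32-bit ARGB OSD pixel over an opaque video pixel
 * using straight (non-premultiplied) alpha. */
static uint32_t blend_argb_over(uint32_t osd, uint32_t video)
{
    uint32_t a = osd >> 24;
    uint32_t r = ((osd >> 16 & 0xff) * a + (video >> 16 & 0xff) * (255 - a)) / 255;
    uint32_t g = ((osd >>  8 & 0xff) * a + (video >>  8 & 0xff) * (255 - a)) / 255;
    uint32_t b = ((osd       & 0xff) * a + (video       & 0xff) * (255 - a)) / 255;
    return 0xff000000u | (r << 16) | (g << 8) | b; /* result is opaque */
}
```

A per-pixel loop like this over a full frame is exactly the kind of work G2D can offload, which is why using the blitter where it exists is attractive despite the extra hardware dependency.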

@ssvb

ssvb commented Dec 17, 2013

Hi guys. It's great to see some progress with OSD support. We talked about this with @jemk some time ago on the #linux-sunxi IRC channel, and in my opinion the xorg ddx driver has the best knowledge of which part of the window is visible at any time, so it can handle fully visible or partially overlapped windows in the most optimal way.

On the technical side, we can implement basic DRI2 support for libvdpau-sunxi in https://github.com/ssvb/xf86-video-fbturbo and get it nicely integrated. The DRI2 workflow is that clients may do two things:

  1. request the dri2 buffers (the dri2 buffers are created on the x server side)
  2. request to swap dri2 buffers

This may look rather invasive, except that we can treat a "dri2 buffer" not as a real buffer with pixels, but as just some sort of unique token. The cedar kernel driver can be extended with a new ioctl to support storing and retrieving data by token. So the workflow may look like this:

  1. The VDPAU client asks the X server for a new dri2 buffer (DRI2GetBuffers)
  2. The X server generates a unique token and returns it to the client
  3. The client does whatever it wants with its own memory buffers and then calls the cedar driver ioctl to store data with the detailed information about the video frame using this token as a unique handle
  4. The VDPAU client asks the X server to DRI2SwapBuffers
  5. The ddx driver in the X server uses the dri2 buffer token to retrieve the detailed data about the video frame (the addresses of the video surface, the bitmap surface and anything else) via the cedar driver ioctl.
  6. Then using the frame data, the ddx tries to present it on the screen in the most optimal way possible. If the window is fully visible - configures the sunxi disp layers. If the window is partially overlapped - uses a software rendering fallback or g2d fallback.
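The token-based handoff in steps 3 and 5 can be mocked in plain C. Everything here is hypothetical (the struct layout, function names, and in-memory table stand in for the proposed cedar driver ioctls, which do not exist yet):

```c
#include <stdint.h>

/* Hypothetical stand-in for the proposed cedar ioctl interface:
 * the VDPAU client stores frame details under a DRI2-buffer token,
 * and the ddx later retrieves them by the same token. */
struct frame_info {          /* what the ddx needs to present a frame */
    uint32_t video_surface;  /* address of the video surface   */
    uint32_t bitmap_surface; /* address of the OSD bitmap      */
    int width, height;
};

#define MAX_TOKENS 16
static struct {
    uint32_t token;
    struct frame_info info;
    int used;
} store[MAX_TOKENS];

/* client side (step 3): store frame data under the token */
static int store_by_token(uint32_t token, const struct frame_info *info)
{
    for (int i = 0; i < MAX_TOKENS; i++)
        if (!store[i].used) {
            store[i].used  = 1;
            store[i].token = token;
            store[i].info  = *info;
            return 0;
        }
    return -1; /* table full */
}

/* ddx side (step 5): look up frame data after DRI2SwapBuffers */
static int retrieve_by_token(uint32_t token, struct frame_info *out)
{
    for (int i = 0; i < MAX_TOKENS; i++)
        if (store[i].used && store[i].token == token) {
            *out = store[i].info;
            store[i].used = 0; /* token is consumed by the swap */
            return 0;
        }
    return -1; /* unknown token */
}
```

The point of the indirection is that the X server never touches pixel data; it only brokers tokens, and the heavy lifting stays between the client and the kernel driver.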

Does this look reasonable? I was actually intending to implement some sort of prototype, but just haven't had much free time lately. Still, I hope to get something useful done during the upcoming Christmas holidays.

Support for hue is there in principle, but would need tweaking;
due to the very limited range supported by the driver I didn't
manage to do that.
The hardware actually has full support for arbitrary CSC matrices,
but since that is not exposed, a hack like this is the best I could
think of.
The patch also fixes vdp_generate_csc_matrix to write the data to
the correct location; previously it would write outside
the matrix.

Signed-off-by: Reimar Döffinger <[email protected]>
4 participants