This commit makes Looking Glass always use the OpenGL renderer when
running on Wayland. The EGL renderer is broken on Wayland and can't
reasonably be fixed until SDL is dropped entirely (as per
https://github.com/gnif/LookingGlass/issues/306).
Until that time, the OpenGL renderer provides a much better
Wayland-native experience.
eglSwapBuffers is allowed to block when the swap interval (set via
eglSwapInterval) is nonzero.
parameter. On Wayland, Mesa will block until a frame callback arrives.
If an application is not visible, a compositor is free to not schedule
frame callbacks (in order to save CPU time rendering something that is
entirely invisible).
Currently, starting Looking Glass from a terminal, hiding it
entirely, and sending ^C will cause Looking Glass to hang while joining
the render thread, until the window is made visible again.
Calling eglDestroySurface is insufficient to unblock eglSwapBuffers, as
it attempts to grab the same underlying mutex.
Instead, this commit sets a swap interval of 0 via eglSwapInterval when
running on Wayland, so that eglSwapBuffers never blocks waiting for a
frame callback. This is not entirely ideal, as it *does* mean Looking
Glass submits buffers while hidden, but it seems better than hanging on
exit.
It also forces the opengl:vsync and egl:vsync options off when running
on Wayland, as they are meaningless there.
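
A minimal sketch of the approach (the helper below is illustrative,
not the actual client code), assuming Wayland is detected via the
XDG_SESSION_TYPE environment variable:

    /* Sketch: choose the EGL swap interval based on the session type.
     * On Wayland we force an interval of 0 so that eglSwapBuffers can
     * never block waiting for a frame callback. */
    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>
    #include <EGL/egl.h>

    static bool on_wayland(void)
    {
      /* Assumption: XDG_SESSION_TYPE is a good-enough heuristic. */
      const char *session = getenv("XDG_SESSION_TYPE");
      return session && strcmp(session, "wayland") == 0;
    }

    void setup_swap_interval(EGLDisplay display, bool vsync)
    {
      eglSwapInterval(display, on_wayland() ? 0 : (vsync ? 1 : 0));
    }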
Note: This only works with the KVMFR kernel module in a VM->VM
configuration. If this causes issues, it can be disabled with the new
option `app:allowDMA`.
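
For example, assuming the client's usual `module:name=value`
command-line option syntax applies to this option:

    ./looking-glass-client app:allowDMA=no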
The texture buffer may still be in use if we try to re-map it
immediately. Instead, only map it when we actually need it mapped, and
unmap it immediately after advancing the offset, allowing the render
thread to continue while the unmap operation occurs.
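
A rough sketch of the resulting pattern (the types and helpers below
are hypothetical, named only to illustrate the ordering):

    #include <stdint.h>
    #include <string.h>

    typedef struct Texture Texture;           /* opaque; hypothetical */
    void *texture_map(Texture *tex);          /* hypothetical helpers */
    void  texture_unmap(Texture *tex);
    void  texture_advance_offset(Texture *tex, size_t size);

    /* Map only for the duration of the write, and unmap right after
     * advancing the offset, so the render thread is never stalled
     * behind a long-lived mapping. */
    void texture_write(Texture *tex, const uint8_t *src, size_t size)
    {
      void *dst = texture_map(tex);        /* map only when needed */
      memcpy(dst, src, size);
      texture_advance_offset(tex, size);   /* publish the new data */
      texture_unmap(tex);                  /* the render thread may run
                                              while the unmap completes */
    }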
This is a major change to how the LG client performs its updates. In
the past, LG would operate at a fixed FPS regardless of the incoming
update speed and/or frequency. This change allows LG to dynamically
increase its FPS in order to better sync with the guest as its rate
changes.
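
One way to picture the change (an illustrative sketch, not the actual
client code): instead of sleeping on a fixed timer, the render loop
waits on a condition signaled by frame arrival, with a timeout as a
fallback.

    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    /* Illustrative only: wake the render thread as soon as a new
     * guest frame arrives instead of ticking at a fixed rate. */
    static pthread_mutex_t frameLock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  frameCond  = PTHREAD_COND_INITIALIZER;
    static bool            frameReady = false;

    void frame_arrived(void)          /* called from the receive path */
    {
      pthread_mutex_lock(&frameLock);
      frameReady = true;
      pthread_cond_signal(&frameCond);
      pthread_mutex_unlock(&frameLock);
    }

    void render_tick(const struct timespec *deadline)
    {
      pthread_mutex_lock(&frameLock);
      while (!frameReady)
        if (pthread_cond_timedwait(&frameCond, &frameLock, deadline))
          break;                      /* timed out: redraw anyway */
      frameReady = false;
      pthread_mutex_unlock(&frameLock);
      /* ... render the most recent frame here ... */
    }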
[Why]
Recent versions of Mesa may have trouble with surface creation, resulting in
errors like:
egl.c:428 | egl_render_startup | Failed to create EGL surface (eglError: 0x300b)
[How]
Replace eglGetDisplay() with eglGetPlatformDisplay(). This requires EGL
1.5, but it should be supported by any desktop driver released in the
past few years.
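
A minimal sketch of the substitution, assuming the native display
handle (a wl_display* on Wayland or an X11 Display*) is already
available:

    #include <stdbool.h>
    #include <EGL/egl.h>
    #include <EGL/eglext.h>  /* EGL_PLATFORM_*_KHR enums */

    /* Ask for a display on an explicit platform instead of letting
     * eglGetDisplay() guess from the native handle. */
    EGLDisplay get_display(void *native, bool wayland)
    {
      return eglGetPlatformDisplay(
          wayland ? EGL_PLATFORM_WAYLAND_KHR : EGL_PLATFORM_X11_KHR,
          native, NULL);
    }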
This changes the method of the memory copy from the host application to
the client. Instead of performing a full copy from the capture device
into shared memory and then flagging the new frame, we instead set a
write pointer, flag the client that there is a new frame, and then copy
in chunks of 1024 bytes until the entire frame is copied. Upon seeing
the new frame flag, the client begins polling the write pointer at high
frequency, and on each update copies as much as it can into the
texture.
This should improve latency but also slightly increase CPU usage on the
client due to the high frequency polling.
While it is recommended to use memory barriers when updating a buffer
in this way, since we double buffer it is unlikely we will corrupt a
prior frame; and even if we do, since it's just texture data, at worst
we might see a tear.
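
A simplified sketch of the scheme (names and the flagging mechanism are
illustrative; C11 atomics stand in for the write-pointer updates, even
though, as noted above, the real code gets away without barriers):

    #include <stdatomic.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define CHUNK_SIZE 1024

    struct FrameBuffer
    {
      atomic_uint_least32_t wp;  /* write pointer: bytes written so far */
      uint8_t               data[];
    };

    void signal_new_frame(void);   /* hypothetical: raises the flag */

    /* Host side: flag the new frame first, then stream it in chunks,
     * advancing the write pointer as each chunk lands. */
    void host_copy(struct FrameBuffer *fb, const uint8_t *src, size_t size)
    {
      atomic_store(&fb->wp, 0);
      signal_new_frame();
      for (size_t off = 0; off < size; )
      {
        size_t n = size - off < CHUNK_SIZE ? size - off : CHUNK_SIZE;
        memcpy(fb->data + off, src + off, n);
        off += n;
        atomic_store(&fb->wp, (uint_least32_t)off);
      }
    }

    /* Client side: after seeing the flag, poll the write pointer at
     * high frequency and copy whatever has arrived into the texture. */
    void client_copy(struct FrameBuffer *fb, uint8_t *tex, size_t size)
    {
      size_t rp = 0;
      while (rp < size)
      {
        size_t wp = atomic_load(&fb->wp);
        if (wp > rp)
        {
          memcpy(tex + rp, fb->data + rp, wp - rp);
          rp = wp;
        }
      }
    }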
This feature allows the use of the key combination <super>+N to
increase the brightness of the screen when using monitors with poor
backlighting. It can help in some games.
N = Night vision