The actual time between opening the device and the device starting to pull
data can range from nearly instantaneous to hundreds of milliseconds. To
minimise startup latency, open the device as soon as the
first playback data is received from Spice. If the device starts earlier
than required, insert a period of silence at the beginning of playback to
avoid underrunning. If it starts later, just accept the higher latency and
let the adaptive resampling deal with it.
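As a rough illustration of this approach (the helper and variable names below
are placeholders, not the project's actual code):

```c
#include <stdint.h>

// Placeholder for the real audio writer.
extern void audioWriteSilence(int frames);

// Sketch of the start alignment: if the device became ready before the first
// Spice data arrived, pad the gap with silence; if it became ready later,
// start with the higher latency and let adaptive resampling correct it.
static void alignStreamStart(int64_t deviceReadyNs, int64_t firstDataNs,
    int sampleRate)
{
  if (deviceReadyNs >= firstDataNs)
    return; // device was late: accept the latency, the resampler adapts

  const int64_t gapNs     = firstDataNs - deviceReadyNs;
  const int     gapFrames = (int)(gapNs * sampleRate / 1000000000LL);
  audioWriteSilence(gapFrames); // device was early: pre-fill with silence
}
```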
We can set the startup latency for the next playback far more precisely if
the device is already open. Keep the device open for only 30 seconds after
playback stops, to avoid holding it open unnecessarily forever.
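A minimal sketch of the idle timeout, assuming a periodic tick and placeholder
helpers:

```c
#include <stdbool.h>
#include <stdint.h>

#define IDLE_CLOSE_NS (30LL * 1000000000LL)

// Placeholder for the real close call.
extern void audioDeviceClose(void);

// Close the device only once 30 seconds have passed with no playback.
static void audioIdleCheck(bool deviceOpen, bool playing,
    int64_t lastPlaybackNs, int64_t nowNs)
{
  if (deviceOpen && !playing && nowNs - lastPlaybackNs > IDLE_CLOSE_NS)
    audioDeviceClose();
}
```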
Underruns can still happen quite easily at the beginning of playback,
particularly at very low latency settings. Further increase the startup
latency to avoid this.
Many X11 window managers present an application on their taskbar as a
combination of the application name and icon imagery pulled from the X
property _NET_WM_ICON. Applications built with frameworks such as Qt or GTK
have this property populated by the framework. This commit adds the
_NET_WM_ICON atom and populates it with a 64x64 icon of Looking Glass.
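For reference, the property is an array of CARDINALs: width, height, then
width*height packed ARGB pixels. A minimal sketch of setting it with plain
Xlib (the helper name and icon source are illustrative):

```c
#include <stdint.h>
#include <stdlib.h>
#include <X11/Xlib.h>
#include <X11/Xatom.h>

// Illustrative only: set _NET_WM_ICON from an ARGB buffer. The data is
// passed as unsigned long values with format 32.
static void setWindowIcon(Display * dpy, Window win,
    const uint32_t * argb, unsigned width, unsigned height)
{
  const unsigned  count = 2 + width * height;
  unsigned long * data  = malloc(count * sizeof(*data));
  if (!data)
    return;

  data[0] = width;
  data[1] = height;
  for (unsigned i = 0; i < width * height; ++i)
    data[2 + i] = argb[i];

  const Atom netWmIcon = XInternAtom(dpy, "_NET_WM_ICON", False);
  XChangeProperty(dpy, win, netWmIcon, XA_CARDINAL, 32,
      PropModeReplace, (unsigned char *)data, count);
  free(data);
}
```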
The desktop doesn't need its own sampler; there is already an identically
configured one in `desktop->texture`.
For some reason, using the texture sampler fixes a black screen issue
with my GTX 660 using the 470.86 driver. Maybe we are hitting some limit
on how many samplers can be allocated?
PipeWire startup latency varies wildly depending on what else is, or was
last, using the audio device. In the worst case, PipeWire can request two
full buffers within a very short period of time immediately at the start of
playback, so make sure we've got enough data in the buffer to support this.
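As a sketch, with assumed names, the start condition amounts to:

```c
#include <stdbool.h>

// Do not start the stream until at least two full periods worth of data are
// buffered, so a double pull right at startup cannot underrun.
static bool readyToStart(int bufferedFrames, int maxPeriodFrames)
{
  return bufferedFrames >= 2 * maxPeriodFrames;
}
```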
The target latency is now based upon the device maximum period size
(which may be configured by setting the `PIPEWIRE_LATENCY` environment
variable if using PipeWire), with some allowance for timing jitter from
Spice and the audio device.
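Roughly, and with an illustrative jitter allowance rather than the project's
exact figure, the target works out to:

```c
// Rough shape of the target latency selection; the 5ms jitter allowance is
// illustrative, not the project's exact value.
static int targetLatencyFrames(int maxPeriodFrames, int sampleRate)
{
  const int jitterFrames = sampleRate * 5 / 1000; // Spice + device jitter
  return maxPeriodFrames + jitterFrames;
}
```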
PipeWire can change the period size dynamically at any time which must be
taken into account when selecting the target latency to avoid underruns
when the period size is increased. This is explained in detail within the
commit body.
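A sketch of how such a change could be handled; the period size would come
from the PipeWire process callback, and all names are assumptions:

```c
// React to a runtime period size increase by rebasing the target latency on
// the new maximum period plus the same jitter allowance as before.
static void onPeriodChange(int periodFrames, int sampleRate,
    int * maxPeriodFrames, int * targetFrames)
{
  if (periodFrames <= *maxPeriodFrames)
    return;

  *maxPeriodFrames = periodFrames;
  *targetFrames    = periodFrames + sampleRate * 5 / 1000;
}
```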
The recent `pwnkit` exploit brought this to my attention; not that we are a
setuid process, but we should still do this properly... who knows where this
code might get used in the future.
Previously this was hardcoded to 100ms, which is far too high in most
instances. Instead, we now get the initial period size and use whichever is
greater: 50ms or the period size.
The idea is to reduce the amount of time it takes for the latency to
come down after initial stream start.
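In other words, roughly (names assumed):

```c
// Pick the initial target latency: whichever is greater of 50ms or the
// device's initial period size.
static int initialLatencyFrames(int periodFrames, int sampleRate)
{
  const int floorFrames = sampleRate * 50 / 1000; // 50ms floor
  return periodFrames > floorFrames ? periodFrames : floorFrames;
}
```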
This removes the need for locking while also giving a better result in
the graph output. Also, when the graph is disabled via the overlay
options, it no longer causes redraws.
This change allows the audiodevs to return the minimum period frames
needed to start playback instead of having to rely on a pull to obtain
these details.
Additionally, we use this information to select an initial start latency,
as well as to train the desired latency in order to keep it as low as
possible.
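A hypothetical sketch of the interface shape, not the project's exact
definitions:

```c
// The audiodev start call returns the minimum period frames needed before
// playback can begin, rather than the core waiting for the first pull.
typedef struct
{
  // returns the minimum number of frames required to start playback
  int (*start)(int channels, int sampleRate);
}
AudioDevOps;

static int playbackStartFrames(const AudioDevOps * ops,
    int channels, int sampleRate)
{
  // used both to pick the initial start latency and to train the desired
  // latency down toward the device minimum
  return ops->start(channels, sampleRate);
}
```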