[client] audio: use actual device period if larger than expected maximum

This is rare and I'm not sure what causes it, but PipeWire sometimes uses a
larger period size than requested (e.g., we might request a period size of
512 frames, but PipeWire uses 2048 anyway). When this happens, we get stuck
in a permanent state of underrun because the target latency is too low.

With this change, we use the actual device period in the target latency
calculation if it is larger than the expected maximum. We may still get
some glitches at the beginning of playback (because the startup latency is
based on the expected maximum period size), but playback will recover after
a few seconds as the stream adjusts to the new target latency.
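
To illustrate the effect with the numbers from the example above, here is a
minimal standalone sketch of the target latency calculation before and after
this change. The 48 kHz sample rate and the 10 ms buffer latency are assumed
values for the illustration, not values taken from this commit:

#include <stdio.h>

static int max_int(int a, int b) { return a > b ? a : b; }

int main(void)
{
  const int sampleRate         = 48000; /* assumed sample rate            */
  const int configLatencyMs    = 10;    /* assumed audioBufferLatency     */
  const int expectedMaxPeriod  = 512;   /* period size we requested       */
  const int actualDevicePeriod = 2048;  /* period PipeWire actually used  */

  /* Before: target is based only on the expected maximum period. */
  double before = expectedMaxPeriod * 1.1 +
                  configLatencyMs * sampleRate / 1000.0;

  /* After: the larger actual period is used, so the target covers it. */
  int maxPeriodFrames = max_int(expectedMaxPeriod, actualDevicePeriod);
  double after = maxPeriodFrames * 1.1 +
                 configLatencyMs * sampleRate / 1000.0;

  /* Prints ~1043.2 vs ~2732.8 frames: the old target is far below one
   * 2048-frame device period, so the buffer drains on every device read
   * and playback underruns permanently; the new target covers it. */
  printf("before: %.1f frames, after: %.1f frames\n", before, after);
  return 0;
}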
Chris Spencer 2022-10-19 18:21:52 +01:00 committed by Geoffrey McRae
parent 7e42e6cdce
commit 081a0a419d

@@ -525,12 +525,15 @@ void audio_playbackData(uint8_t * data, size_t size)
   }
 
   /* Determine the target latency. This is made up of the maximum audio device
-   * period (plus a little extra to absorb timing jitter) and a configurable
+   * period (or the current actual period, if larger than the expected maximum),
+   * plus a little extra to absorb timing jitter, and a configurable
    * additional buffer period. The default is set high enough to absorb typical
    * timing jitter from qemu. */
   int configLatencyMs = max(g_params.audioBufferLatency, 0);
+  int maxPeriodFrames =
+    max(audio.playback.deviceMaxPeriodFrames, spiceData->devPeriodFrames);
   double targetLatencyFrames =
-    audio.playback.deviceMaxPeriodFrames * 1.1 +
+    maxPeriodFrames * 1.1 +
     configLatencyMs * audio.playback.sampleRate / 1000.0;
 
   /* If the device is currently at a lower period size than its maximum (which