Previously, all recordings/typescripts were readable only by the service
user that created them (guacd). This prevented other services, such as
the Guacamole web application, from reading them. Instead,
recordings/typescripts should at least be group-readable.
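
As a sketch of what the change amounts to (assuming the file is created
with open(); the exact helper and flags used by guacd may differ):

    #include <fcntl.h>
    #include <sys/stat.h>

    /* Hypothetical helper: create a recording/typescript readable by the
     * owning group (mode 0640) rather than by the owner alone (0600), so
     * that services sharing guacd's group can read it. */
    int create_recording(const char* path) {
        return open(path, O_CREAT | O_EXCL | O_WRONLY,
                S_IRUSR | S_IWUSR | S_IRGRP);
    }
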
Without these changes, RDP-specific resources like the CLIPRDR and RDPEI
channels may remain from past connections if the RDP connection is
dynamically reconnected via the "Reconnect" display resize method,
resulting in assertion failures or memory errors if those stale
resources are reused after the reconnect completes.
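
Conceptually, the fix is to free and clear this per-connection channel
state during teardown so that a later reconnect starts from scratch. The
struct and function names below are invented for illustration and do not
match the actual guacamole-server source:

    #include <stdlib.h>

    /* Hypothetical per-connection channel state. */
    typedef struct rdp_channels {
        void* cliprdr; /* clipboard channel state */
        void* rdpei;   /* multi-touch input channel state */
    } rdp_channels;

    /* Free and clear channel state on disconnect so that a dynamic
     * "Reconnect" allocates fresh resources instead of reusing stale
     * pointers from the previous connection. */
    static void rdp_channels_free(rdp_channels* channels) {
        free(channels->cliprdr);
        channels->cliprdr = NULL;
        free(channels->rdpei);
        channels->rdpei = NULL;
    }
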
Tested on top of Guacamole 1.3.0
The following work beyond a simple smoke test:
- æøå
- \
- |
- dead acute á
- dead grave à
- dead umlaut ö
- dead circumflex ê
- dead tilde ~
The RDP specification for the AUDIO_INPUT channel requires that all
audio be sent in packets of a specific size. Guacamole does correctly
limit itself to sending packets of this size to the RDP server, but will
send quite a few of these packets all at once if it has received more
audio data than the RDP packet size. This is OK in principle (the
Guacamole client should be able to send audio in packets of whatever
size it chooses), but may overwhelm the software running within the RDP
server if the amount of data received exceeds the available buffer
space, resulting in dropped samples.
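
Roughly, the pre-existing flush behavior is the following: buffered
audio is carved into correctly-sized packets and all complete packets
are sent immediately. The names here are placeholders rather than actual
guacamole-server functions:

    #include <stddef.h>

    /* Stand-in for whatever transmits one AUDIO_INPUT packet. */
    extern void send_packet(const unsigned char* data, size_t length);

    /* Send every complete packet at once; any remainder smaller than
     * packet_size stays buffered. This is correct per-packet, but can
     * flood the remote audio buffer when length >> packet_size. */
    static size_t flush_all(const unsigned char* data, size_t length,
            size_t packet_size) {
        size_t offset = 0;
        while (length - offset >= packet_size) {
            send_packet(data + offset, packet_size);
            offset += packet_size;
        }
        return offset; /* bytes consumed */
    }
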
As there is no way to know the size of the remote audio buffer, we need
to instead ensure that audio is streamed as close to real time as
possible, with each audio packet of N bytes not being sent until roughly
the amount of time represented by those N bytes has elapsed since the
last packet. This throttling ensures that software expecting to process
audio in real time should never run out of buffer space.
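
The real-time duration of an N-byte packet follows directly from the PCM
format. For example (parameter names assumed, not taken from the
source):

    /* Wall-clock duration, in milliseconds, represented by a packet of
     * the given size, for raw PCM of the given rate, channel count, and
     * bytes per sample. A 16-bit, 44100 Hz, stereo packet of 1764 bytes
     * works out to 10 ms. */
    static int packet_duration_ms(int bytes, int rate, int channels,
            int bytes_per_sample) {
        return bytes * 1000 / (rate * channels * bytes_per_sample);
    }
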
That said, if we never exceed the per-packet data rate by occasionally
sending a packet earlier than real time would dictate, unavoidable
latency in sending/receiving audio data would accumulate over time. For example,
if each audio packet represents 10ms of audio data, but we receive that
audio packet 10.1ms after the previous packet, we need to adjust the
timing of the next audio packet(s) to account for that additional 0.1ms.
Simply waiting 10ms after sending each packet would cause that 0.1ms to
accumulate each time it occurs, eventually resulting in noticeable
latency and finally running out of buffer space.
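
Put differently, each packet's send time should be scheduled against
when the previous packet was due, not a fixed interval after it was
actually sent. A sketch of the distinction, using invented names:

    /* Assumed helper returning a monotonic timestamp in microseconds. */
    extern long now_us(void);

    static long next_due_us; /* earliest time the next packet may be sent */

    static void schedule_next(long duration_us) {

        /* Drifting version: waiting a fixed duration after the *actual*
         * send lets every small delay carry forward and accumulate:
         *
         *     next_due_us = now_us() + duration_us;
         */

        /* Non-drifting version: schedule against the previous due time,
         * so occasional early sends absorb small delays over time. */
        next_due_us += duration_us;
    }
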
Thus, these changes:
1. Leverage a flush thread and per-packet scheduling to ensure that each
flushed audio packet does not exceed the equivalent real time rate.
2. Calculate the amount of additional latency from the amount of data
received beyond the required packet size, and amortize scheduling
corrections to account for that latency over the next several audio
packets.
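
The sketch below shows roughly how (1) and (2) fit together; it is
illustrative only, and every name in it (audio_buffer, next_packet,
excess_latency_us, and so on) is invented rather than taken from the
guacamole-server source:

    #include <pthread.h>
    #include <unistd.h>

    /* Invented per-stream state guarded by a mutex. */
    typedef struct audio_buffer {
        pthread_mutex_t lock;
        /* ... buffered PCM data, format parameters, etc. ... */
    } audio_buffer;

    /* Assumed helpers: pull at most one packet of buffered audio,
     * transmit a packet, compute a packet's real-time duration, and
     * report how much latency has built up beyond a single packet. */
    extern int  next_packet(audio_buffer* buffer, unsigned char* packet);
    extern void send_packet(const unsigned char* packet, int length);
    extern long packet_duration_us(int length);
    extern long excess_latency_us(audio_buffer* buffer);

    static void* audio_flush_thread(void* data) {

        audio_buffer* buffer = (audio_buffer*) data;
        unsigned char packet[2048];

        for (;;) {

            pthread_mutex_lock(&buffer->lock);
            int length = next_packet(buffer, packet);
            long correction = excess_latency_us(buffer) / 8; /* amortized */
            pthread_mutex_unlock(&buffer->lock);

            if (length > 0) {

                send_packet(packet, length);

                /* Wait out the real-time duration of this packet, less an
                 * amortized share of any accumulated latency, so the next
                 * packet may go out slightly early when we are behind. */
                long wait = packet_duration_us(length) - correction;
                if (wait > 0)
                    usleep(wait);

            }

            /* Nothing buffered; check again shortly. */
            else
                usleep(1000);

        }

        return NULL;

    }
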
This ensures that audio is streamed exactly as it is received when it
already matches the packet size expected by the RDP server, while audio
received in a different or varying size is buffered and throttled to
stay within the expectations of software running within the RDP server.