Audio streams form the central functionality of the sound server. Data is routed, converted and mixed from several sources before it is passed along to a final output. Currently, there are three forms of audio streams: playback streams, record streams and upload streams.
To access a stream, a pa_stream object must be created using pa_stream_new(). At this point the audio sample format and mapping of channels must be specified. See Sample Format Specifications and Channel Maps for more information about those structures.
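As a rough sketch, creating the client-side object might look like the following, assuming a pa_context named ctx that has already reached the ready state; the stream name and sample parameters are arbitrary:

    #include <stdio.h>
    #include <pulse/pulseaudio.h>

    /* ctx is assumed to be a pa_context that has already reached PA_CONTEXT_READY */
    static pa_stream *create_stream(pa_context *ctx) {
        pa_sample_spec ss;
        pa_channel_map map;
        pa_stream *stream;

        ss.format = PA_SAMPLE_S16NE;      /* signed 16 bit, native endianness */
        ss.rate = 44100;                  /* 44.1 kHz */
        ss.channels = 2;                  /* stereo */

        pa_channel_map_init_stereo(&map); /* standard front-left/front-right mapping */

        stream = pa_stream_new(ctx, "Music", &ss, &map);
        if (!stream)
            fprintf(stderr, "pa_stream_new() failed: %s\n",
                    pa_strerror(pa_context_errno(ctx)));
        return stream;
    }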
This first step will only create a client-side object representing the stream. To use the stream, a server-side object must also be created and associated with the local object. Depending on which type of stream is desired, a different function is needed: pa_stream_connect_playback() for playback streams, pa_stream_connect_record() for record streams, or pa_stream_connect_upload() for uploading samples to the sample cache.
Similar to how connections are done in contexts, connecting a stream will not generate a pa_operation object. Also like contexts, the application should register a state change callback, using pa_stream_set_state_callback(), and wait for the stream to enter an active state.
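A minimal sketch of registering a state callback and connecting a playback stream; the callback body and the choice of NULL for device and buffer attributes (server defaults) are only illustrative:

    static void stream_state_cb(pa_stream *s, void *userdata) {
        switch (pa_stream_get_state(s)) {
            case PA_STREAM_READY:
                /* the server-side object exists; data may now be transferred */
                break;
            case PA_STREAM_FAILED:
            case PA_STREAM_TERMINATED:
                /* connection lost, or the stream was killed on the server */
                break;
            default:
                /* PA_STREAM_UNCONNECTED or PA_STREAM_CREATING */
                break;
        }
    }

    /* after creating the stream: */
    pa_stream_set_state_callback(stream, stream_state_cb, NULL);

    /* NULL device name and NULL buffer attributes select the server defaults */
    if (pa_stream_connect_playback(stream, NULL, NULL,
                                   PA_STREAM_NOFLAGS, NULL, NULL) < 0)
        fprintf(stderr, "pa_stream_connect_playback() failed\n");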
Playback and record streams always have a server-side buffer as part of the data flow. The size of this buffer needs to be chosen as a compromise between low latency and robustness against buffer overflows and underruns.
The buffer metrics may be controlled by the application. They are described with a pa_buffer_attr structure, which contains the fields maxlength, tlength, prebuf, minreq and fragsize.
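The values below are only an illustration of how the structure might be filled in; ss is assumed to be the sample specification from the earlier sketch, and (uint32_t) -1 asks the server to pick a default for that metric:

    pa_buffer_attr attr;

    attr.maxlength = (uint32_t) -1;  /* absolute upper limit of the server-side buffer */
    attr.tlength   = pa_usec_to_bytes(250 * PA_USEC_PER_MSEC, &ss); /* ~250 ms target fill level (playback) */
    attr.prebuf    = (uint32_t) -1;  /* start threshold; 0 disables pausing on underrun */
    attr.minreq    = (uint32_t) -1;  /* minimum request size for write callbacks */
    attr.fragsize  = (uint32_t) -1;  /* fragment size (record streams only) */

    /* pass &attr (optionally together with PA_STREAM_ADJUST_LATENCY) to
     * pa_stream_connect_playback() or pa_stream_connect_record() */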
If PA_STREAM_ADJUST_LATENCY is set, the tlength/fragsize parameters are interpreted slightly differently than described above when passed to pa_stream_connect_record() and pa_stream_connect_playback(): the overall latency, which comprises the server-side playback buffer length, the hardware playback buffer length and additional latencies, will be adjusted so that it matches tlength (or fragsize for record streams). Set PA_STREAM_ADJUST_LATENCY if you want to control the overall playback latency of your stream; unset it if you want to control only the latency induced by the server-side, rewritable playback buffer. The server will try to fulfill the client's latency request as well as possible. However, if the underlying hardware cannot change its buffer length, or can change it only within a limited range, the actually resulting latency might differ from what the client requested. Thus, for synchronization purposes, clients always need to check the actual measured latency via pa_stream_get_latency() or a similar call, and not make any assumptions about the available latency. The function pa_stream_get_buffer_attr() will always return the actual size of the server-side per-stream buffer in tlength/fragsize, regardless of whether PA_STREAM_ADJUST_LATENCY is set or not.
The server-side per-stream playback buffers are indexed by a write and a read index. The application writes to the write index and the sound device reads from the read index. The read index increases monotonically, while the write index may be freely controlled by the application. Subtracting the read index from the write index gives the current fill level of the buffer. The read/write indexes are 64-bit values, measured in bytes, and they never wrap. The current read/write index may be queried using pa_stream_get_timing_info() (see below for more information). In case of a buffer underrun the read index is equal to or larger than the write index. Unless the prebuf value is 0, PulseAudio will temporarily pause playback in such a case and wait until the buffer is filled up to prebuf bytes again. If prebuf is 0, the read index may be larger than the write index, in which case silence is played. If the application writes data to indexes lower than the read index, the data is immediately lost.
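For instance, once timing information is available (see below), the current fill level could be derived roughly like this:

    const pa_timing_info *ti = pa_stream_get_timing_info(stream);

    if (ti && !ti->write_index_corrupt && !ti->read_index_corrupt) {
        int64_t fill = ti->write_index - ti->read_index;  /* bytes currently buffered */
        if (fill < 0)
            fill = 0;   /* the read index has overtaken the write index: underrun */
        printf("buffer fill level: %lld bytes\n", (long long) fill);
    }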
Once the stream is up, data can start flowing between the client and the server. Two different access models can be used to transfer the data: a callback-based model, in which the library invokes callbacks registered with pa_stream_set_write_callback() and pa_stream_set_read_callback() whenever data can be written or read, and a polled model, in which the application itself queries pa_stream_writable_size() and pa_stream_readable_size().
It is also possible to mix the two models freely.
Once there is data/space available, it can be transferred using either pa_stream_write() for playback, or pa_stream_peek() / pa_stream_drop() for record. Make sure you do not overflow the playback buffers as data will be dropped.
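A sketch of the callback-based model for playback; fill_with_audio() is a hypothetical helper that produces the actual sample data, and <stdlib.h> is needed for malloc()/free():

    /* called by PulseAudio whenever at least 'length' bytes may be written */
    static void stream_write_cb(pa_stream *s, size_t length, void *userdata) {
        void *data = malloc(length);
        if (!data)
            return;

        fill_with_audio(data, length);   /* hypothetical helper producing samples */

        /* free is passed as the free callback, so the buffer is released
         * automatically once the library is done with it */
        pa_stream_write(s, data, length, free, 0, PA_SEEK_RELATIVE);
    }

    /* registered before connecting, or once the stream is ready: */
    pa_stream_set_write_callback(stream, stream_write_cb, NULL);

For record streams, the analogous read callback would call pa_stream_peek() to obtain the next fragment and pa_stream_drop() to release it afterwards.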
The transfer buffers can be controlled through a number of operations: pa_stream_cork() pauses or resumes the stream, pa_stream_flush() drops all buffered data, pa_stream_trigger() starts playback immediately without waiting for the buffer to fill up, and pa_stream_prebuf() re-enables that prebuffering behaviour afterwards.
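For example, pausing and later resuming a stream might look roughly like this; the returned pa_operation objects should be unreferenced:

    pa_operation *o;

    /* pause (cork) the stream */
    o = pa_stream_cork(stream, 1, NULL, NULL);
    if (o)
        pa_operation_unref(o);

    /* ... later, resume (uncork) it again */
    o = pa_stream_cork(stream, 0, NULL, NULL);
    if (o)
        pa_operation_unref(o);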
A client application may freely seek in the playback buffer. To accomplish that, the pa_stream_write() function takes a seek mode and an offset argument. The seek mode is one of PA_SEEK_RELATIVE, PA_SEEK_ABSOLUTE, PA_SEEK_RELATIVE_ON_READ and PA_SEEK_RELATIVE_END.
If an application just wants to append some data to the output buffer, PA_SEEK_RELATIVE and an offset of 0 should be used.
After a call to pa_stream_write() the write index will be left at the position right after the last byte of the written data.
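A sketch of two typical cases, assuming buf and len hold already prepared sample data; passing NULL as the free callback makes the library copy the data:

    /* append at the current write index (the usual case) */
    pa_stream_write(stream, buf, len, NULL, 0, PA_SEEK_RELATIVE);

    /* overwrite the block that was just written: seek the write index
     * back by its length before writing */
    pa_stream_write(stream, buf, len, NULL, -(int64_t) len, PA_SEEK_RELATIVE);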
A major problem with networked audio is the increased latency caused by the network. To remedy this, PulseAudio supports an advanced system of monitoring the current latency.
To get the raw data needed to calculate latencies, call pa_stream_get_timing_info(). This will give you a pa_timing_info structure that contains everything that is known about the server side buffer transport delays and the backend active in the server. (Besides other things it contains the write and read index values mentioned above.)
This structure is updated every time a pa_stream_update_timing_info() operation is executed. (That is, before the first call to this function the timing information structure is not available!) Since it is a lot of work to keep this structure up to date manually, PulseAudio can do it automatically for you: if PA_STREAM_AUTO_TIMING_UPDATE is passed when connecting the stream, PulseAudio will automatically update the structure every 100 ms and every time a function is called that might invalidate the previously known timing data (such as pa_stream_write() or pa_stream_flush()). Please note, however, that there is always a short time window when the data in the timing information structure is out of date. PulseAudio tries to mark these situations by setting the write_index_corrupt and read_index_corrupt fields accordingly.
The raw timing data in the pa_timing_info structure is usually hard to deal with. Therefore a simpler interface is available: you can call pa_stream_get_time() or pa_stream_get_latency(). The former returns the current playback time of the hardware since the stream was started; the latter returns the overall time a sample that you write now takes to be played by the hardware. These two functions base their calculations on the same data that is returned by pa_stream_get_timing_info(), so the same rules for keeping the timing data up to date apply here. In case the write or read index is corrupted, these two functions will fail with PA_ERR_NODATA set.
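A sketch of querying the current latency; ctx is assumed to be the stream's pa_context:

    pa_usec_t latency;
    int negative;

    if (pa_stream_get_latency(stream, &latency, &negative) < 0) {
        if (pa_context_errno(ctx) == PA_ERR_NODATA)
            fprintf(stderr, "no timing data received from the server yet\n");
    } else {
        printf("current latency: %s%llu usec\n",
               negative ? "-" : "", (unsigned long long) latency);
    }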
Since updating the timing info structure usually requires a full network round trip and some applications monitor the timing very often, PulseAudio offers a timing interpolation system. If PA_STREAM_INTERPOLATE_TIMING is passed when connecting the stream, pa_stream_get_time() and pa_stream_get_latency() will try to interpolate the current playback time/latency by estimating the number of samples that have been played back by the hardware since the last regular timing update. It is especially useful to combine this option with PA_STREAM_AUTO_TIMING_UPDATE, which will enable you to monitor the current playback time/latency very precisely and very frequently without requiring a network round trip every time.
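In practice this means passing both flags when connecting the stream, for example (reusing the stream and attr variables assumed in the earlier sketches):

    pa_stream_flags_t flags = PA_STREAM_INTERPOLATE_TIMING |
                              PA_STREAM_AUTO_TIMING_UPDATE;

    pa_stream_connect_playback(stream, NULL, &attr, flags, NULL, NULL);

    /* pa_stream_get_time()/pa_stream_get_latency() can now be called frequently
     * and cheaply; the values are interpolated between the automatic updates */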
Even with the best precautions, buffers will sometimes over- or underflow. To handle this gracefully, the application can be notified when this happens. Callbacks are registered using pa_stream_set_overflow_callback() and pa_stream_set_underflow_callback().
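For example, an application might simply log such events; the callbacks shown here are purely illustrative:

    static void stream_overflow_cb(pa_stream *s, void *userdata) {
        fprintf(stderr, "playback buffer overflow, data was dropped\n");
    }

    static void stream_underflow_cb(pa_stream *s, void *userdata) {
        fprintf(stderr, "playback buffer underflow\n");
    }

    /* ... */
    pa_stream_set_overflow_callback(stream, stream_overflow_cb, NULL);
    pa_stream_set_underflow_callback(stream, stream_underflow_cb, NULL);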
PulseAudio allows applications to fully synchronize multiple playback streams that are connected to the same output device. That means the streams will always be played back sample-by-sample synchronously. If stream operations like pa_stream_cork() are issued on one of the synchronized streams, they are simultaneously issued on the others.
To synchronize a stream to another, just pass the "master" stream as the last argument to pa_stream_connect_playback(). To make sure that the freshly created stream doesn't start playback right away, make sure to pass PA_STREAM_START_CORKED and, after all streams have been created, uncork them all with a single call to pa_stream_cork() for the master stream.
To make sure that a particular stream doesn't stop playing when a server-side buffer underrun happens on it, while the other synchronized streams continue playing and hence deviate from it, you need to pass a pa_buffer_attr with prebuf set to 0 when connecting it.
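A rough sketch of attaching a second, synchronized stream; master and slave are assumed to be already created pa_stream objects on the same context:

    pa_buffer_attr attr;

    attr.maxlength = (uint32_t) -1;   /* server defaults for everything ... */
    attr.tlength   = (uint32_t) -1;
    attr.minreq    = (uint32_t) -1;
    attr.fragsize  = (uint32_t) -1;
    attr.prebuf    = 0;               /* ... except prebuf: never pause on underrun */

    /* start corked and pass the master stream as the final sync argument */
    pa_stream_connect_playback(slave, NULL, &attr,
                               PA_STREAM_START_CORKED, NULL, master);

    /* once all synchronized streams are created and have data queued,
     * uncorking the master (also connected with PA_STREAM_START_CORKED)
     * starts them all at the same sample */
    pa_operation *o = pa_stream_cork(master, 0, NULL, NULL);
    if (o)
        pa_operation_unref(o);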
When a stream has served its purpose, it must be disconnected with pa_stream_disconnect(). If you only unreference it, it will live on and consume resources both locally and on the server until you disconnect the context.
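A minimal teardown sketch:

    /* tear down the server-side stream object ... */
    pa_stream_disconnect(stream);

    /* ... and drop the local reference; the pa_stream object is freed
     * once no references to it are left */
    pa_stream_unref(stream);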