diff --git a/docs/gstreamer-app/docs.md b/docs/gstreamer-app/docs.md index 17436d4a3..ccf7aa2e6 100644 --- a/docs/gstreamer-app/docs.md +++ b/docs/gstreamer-app/docs.md @@ -239,10 +239,10 @@ the maximum amount of time to wait for a sample a `gst::Sample` or NULL when the appsink is stopped or EOS or the timeout expires. Call `gst_sample_unref` after usage. - + Signal that the end-of-stream has been reached. This signal is emitted from the streaming thread. - + Signal that a new preroll sample is available. This signal is emitted from the streaming thread and only when the @@ -254,7 +254,7 @@ or from any other thread. Note that this signal is only emitted when the "emit-signals" property is set to `true`, which it is not by default for performance reasons. - + Signal that a new sample is available. This signal is emitted from the streaming thread and only when the @@ -266,7 +266,7 @@ or from any other thread. Note that this signal is only emitted when the "emit-signals" property is set to `true`, which it is not by default for performance reasons. - + Get the last preroll sample in `appsink`. This was the sample that caused the appsink to preroll in the PAUSED state. @@ -289,7 +289,7 @@ element is set to the READY/NULL state. # Returns a `gst::Sample` or NULL when the appsink is stopped or EOS. - + This function blocks until a sample or EOS becomes available or the appsink element is set to the READY/NULL state. @@ -308,7 +308,7 @@ If an EOS event was received before any buffers, this function returns # Returns a `gst::Sample` or NULL when the appsink is stopped or EOS. - + Get the last preroll sample in `appsink`. This was the sample that caused the appsink to preroll in the PAUSED state. @@ -337,7 +337,7 @@ the maximum amount of time to wait for the preroll sample # Returns a `gst::Sample` or NULL when the appsink is stopped or EOS or the timeout expires. 
- + This function blocks until a sample or EOS becomes available or the appsink element is set to the READY/NULL state or the timeout expires. @@ -606,13 +606,13 @@ be connected to. A stream_type stream ## `type_` the new state - + Notify `appsrc` that no more buffers are available. - + Signal that the source has enough data. It is recommended that the application stops calling push-buffer until the need-data signal is emitted again to avoid excessive buffer queueing. - + Signal that the source needs more data. In the callback or from another thread you should call push-buffer or end-of-stream. @@ -623,7 +623,7 @@ You can call push-buffer multiple times until the enough-data signal is fired. ## `length` the amount of bytes needed. - + Adds a buffer to the queue of buffers that the appsrc element will push to its source pad. This function does not take ownership of the buffer so the buffer needs to be unreffed after calling this function. @@ -632,7 +632,7 @@ When the block property is TRUE, this function can block until free space becomes available in the queue. ## `buffer` a buffer to push - + Adds a buffer list to the queue of buffers and buffer lists that the appsrc element will push to its source pad. This function does not take ownership of the buffer list so the buffer list needs to be unreffed @@ -645,7 +645,7 @@ Feature: `v1_14` ## `buffer_list` a buffer list to push - + Extracts a buffer from the provided sample and adds the extracted buffer to the queue of buffers that the appsrc element will push to its source pad. This function sets the appsrc caps based on the caps @@ -659,7 +659,7 @@ When the block property is TRUE, this function can block until free space becomes available in the queue. ## `sample` a sample from which to extract the buffer to push - + Seek to the given offset. The next push-buffer should produce buffers from the new `offset`. This callback is only called for seekable stream types.
diff --git a/docs/gstreamer-audio/docs.md b/docs/gstreamer-audio/docs.md index ddaff50e0..99a5307de 100644 --- a/docs/gstreamer-audio/docs.md +++ b/docs/gstreamer-audio/docs.md @@ -1,4 +1,162 @@ + +This is the base class for audio sinks. Subclasses need to implement the +::create_ringbuffer vmethod. This base class will then take care of +writing samples to the ringbuffer, synchronisation, clipping and flushing. + +# Implements + +[`AudioBaseSinkExt`](trait.AudioBaseSinkExt.html), [`gst_base::BaseSinkExt`](../gst_base/trait.BaseSinkExt.html), [`gst::ElementExt`](../gst/trait.ElementExt.html), [`gst::ObjectExt`](../gst/trait.ObjectExt.html), [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html) + +Trait containing all `AudioBaseSink` methods. + +# Implementors + +[`AudioBaseSink`](struct.AudioBaseSink.html), [`AudioSink`](struct.AudioSink.html) + +Create and return the `AudioRingBuffer` for `self`. This function will +call the ::create_ringbuffer vmethod and will set `self` as the parent of +the returned buffer (see `gst::ObjectExt::set_parent`). + +# Returns + +The new ringbuffer of `self`. + +Get the current alignment threshold, in nanoseconds, used by `self`. + +# Returns + +The current alignment threshold used by `self`. + +Get the current discont wait, in nanoseconds, used by `self`. + +# Returns + +The current discont wait used by `self`. + +Get the current drift tolerance, in microseconds, used by `self`. + +# Returns + +The current drift tolerance used by `self`. + +Queries whether `self` will provide a clock or not. See also +gst_audio_base_sink_set_provide_clock. + +# Returns + +`true` if `self` will provide a clock. + +Get the current slave method used by `self`. + +# Returns + +The current slave method used by `self`. + +Informs this base class that the audio output device has failed for +some reason, causing a discontinuity (for example, because the device +recovered from the error, but lost all contents of its ring buffer). 
+This function is typically called by derived classes, and is useful +for the custom slave method. + +Controls the sink's alignment threshold. +## `alignment_threshold` +the new alignment threshold in nanoseconds + +Sets the custom slaving callback. This callback will +be invoked if the slave-method property is set to +GST_AUDIO_BASE_SINK_SLAVE_CUSTOM and the audio sink +receives and plays samples. + +Setting the callback to NULL causes the sink to +behave as if the GST_AUDIO_BASE_SINK_SLAVE_NONE +method were used. +## `callback` +a `GstAudioBaseSinkCustomSlavingCallback` +## `user_data` +user data passed to the callback +## `notify` +called when user_data becomes unused + +Controls how long the sink will wait before creating a discontinuity. +## `discont_wait` +the new discont wait in nanoseconds + +Controls the sink's drift tolerance. +## `drift_tolerance` +the new drift tolerance in microseconds + +Controls whether `self` will provide a clock or not. If `provide` is `true`, +`gst::ElementExt::provide_clock` will return a clock that reflects the datarate +of `self`. If `provide` is `false`, `gst::ElementExt::provide_clock` will return +NULL. +## `provide` +new state + +Controls how clock slaving will be performed in `self`. +## `method` +the new slave method + +A window of time in nanoseconds to wait before creating a discontinuity as +a result of breaching the drift-tolerance. + +A window of time in nanoseconds to wait before creating a discontinuity as +a result of breaching the drift-tolerance. + +Controls the amount of time in microseconds that clocks are allowed +to drift before resynchronisation happens. + +Controls the amount of time in microseconds that clocks are allowed +to drift before resynchronisation happens. + +This is the base class for audio sources. Subclasses need to implement the +::create_ringbuffer vmethod. This base class will then take care of +reading samples from the ringbuffer, synchronisation and flushing. 
+ +# Implements + +[`AudioBaseSrcExt`](trait.AudioBaseSrcExt.html), [`gst_base::BaseSrcExt`](../gst_base/trait.BaseSrcExt.html), [`gst::ElementExt`](../gst/trait.ElementExt.html), [`gst::ObjectExt`](../gst/trait.ObjectExt.html), [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html) + +Trait containing all `AudioBaseSrc` methods. + +# Implementors + +[`AudioBaseSrc`](struct.AudioBaseSrc.html), [`AudioSrc`](struct.AudioSrc.html) + +Create and return the `AudioRingBuffer` for `self`. This function will call +the ::create_ringbuffer vmethod and will set `self` as the parent of the +returned buffer (see `gst::ObjectExt::set_parent`). + +# Returns + +The new ringbuffer of `self`. + +Queries whether `self` will provide a clock or not. See also +gst_audio_base_src_set_provide_clock. + +# Returns + +`true` if `self` will provide a clock. + +Get the current slave method used by `self`. + +# Returns + +The current slave method used by `self`. + +Controls whether `self` will provide a clock or not. If `provide` is `true`, +`gst::ElementExt::provide_clock` will return a clock that reflects the datarate +of `self`. If `provide` is `false`, `gst::ElementExt::provide_clock` will return NULL. +## `provide` +new state + +Controls how clock slaving will be performed in `self`. +## `method` +the new slave method + +Actual configured size of audio buffer in microseconds. + +Actual configured audio latency in microseconds. Audio channel positions. @@ -89,6 +247,728 @@ Wide right (between front right and side right) Surround left (between rear left and side left) Surround right (between rear right and side right) + +This base class is for audio decoders turning encoded data into +raw audio samples. + +GstAudioDecoder and subclass should cooperate as follows. + +## Configuration + + * Initially, GstAudioDecoder calls `start` when the decoder element + is activated, which allows subclass to perform any global setup. 
+ Base class (context) parameters can already be set according to subclass + capabilities (or possibly upon receiving more information in subsequent + `set_format`). + * GstAudioDecoder calls `set_format` to inform subclass of the format + of input audio data that it is about to receive. + While unlikely, it might be called more than once, if changing input + parameters requires reconfiguration. + * GstAudioDecoder calls `stop` at end of all processing. + +As of configuration stage, and throughout processing, GstAudioDecoder +provides various (context) parameters, e.g. describing the format of +output audio data (valid when output caps have been set) or current parsing state. +Conversely, subclass can and should configure context to inform +base class of its expectation w.r.t. buffer handling. + +## Data processing + * Base class gathers input data, and optionally allows subclass + to parse this into subsequently manageable (as defined by subclass) + chunks. Such chunks are subsequently referred to as 'frames', + though they may or may not correspond to one (or more) audio format frames. + * Input frame is provided to subclass' `handle_frame`. + * If codec processing results in decoded data, subclass should call + `AudioDecoder::finish_frame` to have decoded data pushed + downstream. + * Just prior to actually pushing a buffer downstream, + it is passed to `pre_push`. Subclass should either use this callback + to arrange for additional downstream pushing or otherwise ensure such + custom pushing occurs after at least a method call has finished since + setting src pad caps. + * During the parsing process GstAudioDecoderClass will handle both + srcpad and sinkpad events. Sink events will be passed to subclass + if `event` callback has been provided. + +## Shutdown phase + + * GstAudioDecoder class calls `stop` to inform the subclass that data + parsing will be stopped. + +Subclass is responsible for providing pad template caps for +source and sink pads.
The pads need to be named "sink" and "src". It also +needs to set the fixed caps on srcpad, when the format is ensured. This +is typically when base class calls subclass' `set_format` function, though +it might be delayed until calling `AudioDecoder::finish_frame`. + +In summary, above process should have subclass concentrating on +codec data processing while leaving other matters to base class, +such as most notably timestamp handling. While it may exert more control +in this area (see e.g. `pre_push`), it is very much not recommended. + +In particular, base class will try to arrange for perfect output timestamps +as much as possible while tracking upstream timestamps. +To this end, if deviation between the next ideal expected perfect timestamp +and upstream exceeds `AudioDecoder:tolerance`, then resync to upstream +occurs (which would always happen if the tolerance mechanism is disabled). + +In non-live pipelines, baseclass can also (configurably) arrange for +output buffer aggregation which may help to reduce large(r) numbers of +small(er) buffers being pushed and processed downstream. Note that this +feature is only available if the buffer layout is interleaved. For planar +buffers, the decoder implementation is fully responsible for the output +buffer size. + +On the other hand, it should be noted that baseclass only provides limited +seeking support (upon explicit subclass request), as full-fledged support +should rather be left to upstream demuxer, parser or the like. This simple +approach caters for seeking and duration reporting using estimated input +bitrates. + +Things that subclass needs to take care of: + + * Provide pad templates + * Set source pad caps when appropriate + * Set user-configurable properties to sane defaults for format and + implementing codec at hand, and convey some subclass capabilities and + expectations in context. + + * Accept data in `handle_frame` and provide decoded results to + `AudioDecoder::finish_frame`.
If it is prepared to perform + PLC, it should also accept NULL data in `handle_frame` and provide + data for the indicated duration. + +# Implements + +[`AudioDecoderExt`](trait.AudioDecoderExt.html), [`gst::ElementExt`](../gst/trait.ElementExt.html), [`gst::ObjectExt`](../gst/trait.ObjectExt.html), [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html) + +Trait containing all `AudioDecoder` methods. + +# Implementors + +[`AudioDecoder`](struct.AudioDecoder.html) + +Helper function that allocates a buffer to hold an audio frame +for `self`'s current output format. +## `size` +size of the buffer + +# Returns + +allocated buffer + +Collects decoded data and pushes it downstream. + +`buf` may be NULL in which case the indicated number of frames +are discarded and considered to have produced no output +(e.g. lead-in or setup frames). +Otherwise, source pad caps must be set when it is called with valid +data in `buf`. + +Note that a frame received in `AudioDecoderClass.handle_frame`() may be +invalidated by a call to this function. +## `buf` +decoded data +## `frames` +number of decoded frames represented by decoded data + +# Returns + +a `gst::FlowReturn` that should be escalated to caller (of caller) + +Collects decoded data and pushes it downstream. This function may be called +multiple times for a given input frame. + +`buf` may be NULL in which case it is assumed that the current input frame is +finished. This is equivalent to calling `AudioDecoder::finish_frame` +with a NULL buffer and frames=1 after having pushed out all decoded audio +subframes using this function. + +When called with valid data in `buf` the source pad caps must have been set +already. + +Note that a frame received in `AudioDecoderClass.handle_frame`() may be +invalidated by a call to this function.
+ +Feature: `v1_16` + +## `buf` +decoded data + +# Returns + +a `gst::FlowReturn` that should be escalated to caller (of caller) + +Lets `AudioDecoder` sub-classes know the memory `allocator` +used by the base class and its `params`. + +Unref the `allocator` after use. +## `allocator` +the `gst::Allocator` +used +## `params` +the +`gst::AllocationParams` of `allocator` + + +# Returns + +an `AudioInfo` describing the input audio format + + +# Returns + +currently configured decoder delay + +Queries decoder drain handling. + +# Returns + +TRUE if drainable handling is enabled. + +MT safe. + + +# Returns + +currently configured byte to time conversion setting + +Sets the variables pointed to by `min` and `max` to the currently configured +latency. +## `min` +a pointer to storage to hold minimum latency +## `max` +a pointer to storage to hold maximum latency + + +# Returns + +currently configured decoder tolerated error count. + +Queries decoder's latency aggregation. + +# Returns + +aggregation latency. + +MT safe. + +Queries decoder required format handling. + +# Returns + +TRUE if required format handling is enabled. + +MT safe. + +Returns the current parsing (sync and eos) state. +## `sync` +a pointer to a variable to hold the current sync state +## `eos` +a pointer to a variable to hold the current eos state + +Queries decoder packet loss concealment handling. + +# Returns + +TRUE if packet loss concealment is enabled. + +MT safe. + + +# Returns + +currently configured plc handling + +Queries current audio jitter tolerance threshold. + +# Returns + +decoder audio jitter tolerance threshold. + +MT safe. + +Sets the audio decoder tags and how they should be merged with any +upstream stream tags. This will override any tags previously-set +with `AudioDecoderExt::merge_tags`. + +Note that this is provided for convenience, and the subclass is +not required to use this and can still do tag handling on its own.
+## `tags` +a `gst::TagList` to merge, or NULL +## `mode` +the `gst::TagMergeMode` to use, usually `gst::TagMergeMode::Replace` + +Negotiates the currently configured `AudioInfo` with downstream elements. +Unmarks GST_PAD_FLAG_NEED_RECONFIGURE in any case, but marks it again if +negotiation fails. + +# Returns + +`true` if the negotiation succeeded, else `false`. + +Returns caps that express `caps` (or sink template caps if `caps` == NULL) +restricted to rate/channels/... combinations supported by downstream +elements. +## `caps` +initial caps +## `filter` +filter caps + +# Returns + +a `gst::Caps` owned by caller + +Sets caps for the allocation query that differ from the caps set on the +pad. Use this function before calling +`AudioDecoder::negotiate`. If set to `None`, the allocation +query will use the caps from the pad. + +Feature: `v1_10` + +## `allocation_caps` +a `gst::Caps` or `None` + +Configures decoder drain handling. If drainable, subclass might +be handed a NULL buffer to have it return any leftover decoded data. +Otherwise, it is not considered so capable and will only ever be passed +real data. + +MT safe. +## `enabled` +new state + +Allows the base class to perform estimated byte-to-time conversion. +## `enabled` +whether to enable byte to time conversion + +Sets decoder latency. +## `min` +minimum latency +## `max` +maximum latency + +Sets the number of tolerated decoder errors: a tolerated error only triggers +a warning, while exceeding the tolerated count leads to a fatal error. You can set +-1 to never return fatal errors. The default is +GST_AUDIO_DECODER_MAX_ERRORS. +## `num` +max tolerated errors + +Sets decoder minimum aggregation latency. + +MT safe. +## `num` +new minimum latency + +Configures decoder format needs. If enabled, subclass needs to be +negotiated with format caps before it can process any data. It will then +never be handed any data before it has been configured.
+Otherwise, it might be handed data without having been configured and +is then expected to be able to do so either by default +or based on the input data. + +MT safe. +## `enabled` +new state + +Configure output caps on the srcpad of `self`. Similar to +`AudioDecoder::set_output_format`, but allows subclasses to specify +output caps that can't be expressed via `AudioInfo`, e.g. caps that have +caps features. + +Feature: `v1_16` + +## `caps` +(fixed) `gst::Caps` + +# Returns + +`true` on success. + +Configure output info on the srcpad of `self`. +## `info` +`AudioInfo` + +# Returns + +`true` on success. + +Enable or disable decoder packet loss concealment, provided subclass +and codec are capable and allow handling plc. + +MT safe. +## `enabled` +new state + +Indicates whether or not subclass handles packet loss concealment (plc). +## `plc` +new plc state + +Configures decoder audio jitter tolerance threshold. + +MT safe. +## `tolerance` +new tolerance + +Lets `AudioDecoder` sub-classes decide if they want the sink pad +to use the default pad query handler to reply to accept-caps queries. + +By setting this to true it is possible to further customize the default +handler with `GST_PAD_SET_ACCEPT_INTERSECT` and +`GST_PAD_SET_ACCEPT_TEMPLATE` +## `use_` +if the default pad accept-caps query handling should be used + +This base class is for audio encoders turning raw audio samples into +encoded audio data. + +GstAudioEncoder and subclass should cooperate as follows. + +## Configuration + + * Initially, GstAudioEncoder calls `start` when the encoder element + is activated, which allows subclass to perform any global setup. + + * GstAudioEncoder calls `set_format` to inform subclass of the format + of input audio data that it is about to receive. Subclass should + set up for encoding and configure various base class parameters + appropriately, notably those directing desired input data handling.
+ While unlikely, it might be called more than once, if changing input + parameters require reconfiguration. + + * GstAudioEncoder calls `stop` at end of all processing. + +As of configuration stage, and throughout processing, GstAudioEncoder +maintains various parameters that provide required context, +e.g. describing the format of input audio data. +Conversely, subclass can and should configure these context parameters +to inform base class of its expectation w.r.t. buffer handling. + +## Data processing + + * Base class gathers input sample data (as directed by the context's + frame_samples and frame_max) and provides this to subclass' `handle_frame`. + * If codec processing results in encoded data, subclass should call + `AudioEncoder::finish_frame` to have encoded data pushed + downstream. Alternatively, it might also call + `AudioEncoder::finish_frame` (with a NULL buffer and some number of + dropped samples) to indicate dropped (non-encoded) samples. + * Just prior to actually pushing a buffer downstream, + it is passed to `pre_push`. + * During the parsing process GstAudioEncoderClass will handle both + srcpad and sinkpad events. Sink events will be passed to subclass + if `event` callback has been provided. + +## Shutdown phase + + * GstAudioEncoder class calls `stop` to inform the subclass that data + parsing will be stopped. + +Subclass is responsible for providing pad template caps for +source and sink pads. The pads need to be named "sink" and "src". It also +needs to set the fixed caps on srcpad, when the format is ensured. This +is typically when base class calls subclass' `set_format` function, though +it might be delayed until calling `AudioEncoder::finish_frame`. + +In summary, above process should have subclass concentrating on +codec data processing while leaving other matters to base class, +such as most notably timestamp handling. While it may exert more control +in this area (see e.g. `pre_push`), it is very much not recommended. 
+ +In particular, base class will either favor tracking upstream timestamps +(at the possible expense of jitter) or aim to arrange for a perfect stream of +output timestamps, depending on `AudioEncoder:perfect-timestamp`. +However, in the latter case, the input may not be so perfect or ideal, which +is handled as follows. An input timestamp is compared with the expected +timestamp as dictated by input sample stream and if the deviation is less +than `AudioEncoder:tolerance`, the deviation is discarded. +Otherwise, it is considered a discontinuity and subsequent output timestamp +is resynced to the new position after performing configured discontinuity +processing. In the non-perfect-timestamp case, an upstream variation +exceeding tolerance only leads to marking DISCONT on subsequent outgoing buffers +(while timestamps are adjusted to upstream regardless of variation). +While DISCONT is also marked in the perfect-timestamp case, this one +optionally (see `AudioEncoder:hard-resync`) +performs some additional steps, such as clipping of (early) input samples +or draining all currently remaining input data, depending on the direction +of the discontinuity. + +If perfect timestamps are arranged, it is also possible to request baseclass +(usually set by subclass) to provide additional buffer metadata (in OFFSET +and OFFSET_END) fields according to granule defined semantics currently +needed by oggmux. Specifically, OFFSET is set to granulepos (= sample count +including buffer) and OFFSET_END to corresponding timestamp (as determined +by same sample count and sample rate). + +Things that subclass needs to take care of: + + * Provide pad templates + * Set source pad caps when appropriate + * Inform base class of buffer processing needs using context's + frame_samples and frame_bytes. + * Set user-configurable properties to sane defaults for format and + implementing codec at hand, e.g. those controlling timestamp behaviour + and discontinuity processing.
+ * Accept data in `handle_frame` and provide encoded results to + `AudioEncoder::finish_frame`. + +# Implements + +[`AudioEncoderExt`](trait.AudioEncoderExt.html), [`gst::ElementExt`](../gst/trait.ElementExt.html), [`gst::ObjectExt`](../gst/trait.ObjectExt.html), [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html) + +Trait containing all `AudioEncoder` methods. + +# Implementors + +[`AudioEncoder`](struct.AudioEncoder.html) + +Helper function that allocates a buffer to hold an encoded audio frame +for `self`'s current output format. +## `size` +size of the buffer + +# Returns + +allocated buffer + +Collects encoded data and pushes encoded data downstream. +Source pad caps must be set when this is called. + +If `samples` < 0, then best estimate is all samples provided to encoder +(subclass) so far. `buf` may be NULL, in which case the next `samples` samples +are considered discarded, e.g. as a result of discontinuous transmission, +and a discontinuity is marked. + +Note that samples received in `AudioEncoderClass.handle_frame`() +may be invalidated by a call to this function. +## `buffer` +encoded data +## `samples` +number of samples (per channel) represented by encoded data + +# Returns + +a `gst::FlowReturn` that should be escalated to caller (of caller) + +Lets `AudioEncoder` sub-classes know the memory `allocator` +used by the base class and its `params`. + +Unref the `allocator` after use. +## `allocator` +the `gst::Allocator` +used +## `params` +the +`gst::AllocationParams` of `allocator` + + +# Returns + +an `AudioInfo` describing the input audio format + +Queries encoder drain handling. + +# Returns + +TRUE if drainable handling is enabled. + +MT safe. + + +# Returns + +currently configured maximum handled frames + + +# Returns + +currently maximum requested samples per frame + + +# Returns + +currently minimum requested samples per frame + +Queries encoder hard minimum handling. + +# Returns + +TRUE if hard minimum handling is enabled.
+ +MT safe. + +Sets the variables pointed to by `min` and `max` to the currently configured +latency. +## `min` +a pointer to storage to hold minimum latency +## `max` +a pointer to storage to hold maximum latency + + +# Returns + +currently configured encoder lookahead + +Queries if the encoder will handle granule marking. + +# Returns + +TRUE if granule marking is enabled. + +MT safe. + +Queries encoder perfect timestamp behaviour. + +# Returns + +TRUE if the perfect timestamp setting is enabled. + +MT safe. + +Queries current audio jitter tolerance threshold. + +# Returns + +encoder audio jitter tolerance threshold. + +MT safe. + +Sets the audio encoder tags and how they should be merged with any +upstream stream tags. This will override any tags previously-set +with `AudioEncoderExt::merge_tags`. + +Note that this is provided for convenience, and the subclass is +not required to use this and can still do tag handling on its own. + +MT safe. +## `tags` +a `gst::TagList` to merge, or NULL to unset + previously-set tags +## `mode` +the `gst::TagMergeMode` to use, usually `gst::TagMergeMode::Replace` + +Negotiates the currently configured `gst::Caps` with downstream elements. +Unmarks GST_PAD_FLAG_NEED_RECONFIGURE in any case, but marks it again if +negotiation fails. + +# Returns + +`true` if the negotiation succeeded, else `false`. + +Returns caps that express `caps` (or sink template caps if `caps` == NULL) +restricted to channel/rate combinations supported by downstream elements +(e.g. muxers). +## `caps` +initial caps +## `filter` +filter caps + +# Returns + +a `gst::Caps` owned by caller + +Sets caps for the allocation query that differ from the caps set on the +pad. Use this function before calling +`AudioEncoder::negotiate`. If set to `None`, the allocation +query will use the caps from the pad. + +Feature: `v1_10` + +## `allocation_caps` +a `gst::Caps` or `None` + +Configures encoder drain handling.
If drainable, subclass might +be handed a NULL buffer to have it return any leftover encoded data. +Otherwise, it is not considered so capable and will only ever be passed +real data. + +MT safe. +## `enabled` +new state + +Sets the maximum number of frames accepted at once (assumed minimally 1). +Requires `frame_samples_min` and `frame_samples_max` to be equal. + +Note: This value will be reset to 0 every time before +`AudioEncoderClass.set_format`() is called. +## `num` +number of frames + +Sets the number of samples (per channel) the subclass needs to be handed, +at most; if 0, it will be handed all available samples. + +If an exact number of samples is required, `AudioEncoderExt::set_frame_samples_min` +must be called with the same number. + +Note: This value will be reset to 0 every time before +`AudioEncoderClass.set_format`() is called. +## `num` +number of samples per frame + +Sets the number of samples (per channel) the subclass needs to be handed, +at least; if 0, it will be handed all available samples. + +If an exact number of samples is required, `AudioEncoderExt::set_frame_samples_max` +must be called with the same number. + +Note: This value will be reset to 0 every time before +`AudioEncoderClass.set_format`() is called. +## `num` +number of samples per frame + +Configures encoder hard minimum handling. If enabled, subclass +will never be handed fewer samples than it configured, which otherwise +might occur near end-of-data handling. Instead, the leftover samples +will simply be discarded. + +MT safe. +## `enabled` +new state + +Set the codec headers to be sent downstream whenever requested. +## `headers` +a list of + `gst::Buffer` containing the codec header + +Sets encoder latency. +## `min` +minimum latency +## `max` +maximum latency + +Sets encoder lookahead (in units of input rate samples). + +Note: This value will be reset to 0 every time before +`AudioEncoderClass.set_format`() is called. +## `num` +lookahead + +Enable or disable encoder granule handling. + +MT safe.
+## `enabled` +new state + +Configure output caps on the srcpad of `self`. +## `caps` +`gst::Caps` + +# Returns + +`true` on success. + +Enable or disable encoder perfect output timestamp preference. + +MT safe. +## `enabled` +new state + +Configures encoder audio jitter tolerance threshold. + +MT safe. +## `tolerance` +new tolerance Enum value describing the most common audio formats. @@ -266,6 +1146,83 @@ Layout of the audio samples for the different channels. interleaved audio non-interleaved audio + +The format of the samples in the ringbuffer. + +samples in linear or float + +samples in mulaw + +samples in alaw + +samples in ima adpcm + +samples in mpeg audio (but not AAC) format + +samples in gsm format + +samples in IEC958 frames (e.g. AC3) + +samples in AC3 format + +samples in EAC3 format + +samples in DTS format + +samples in MPEG-2 AAC ADTS format + +samples in MPEG-4 AAC ADTS format + +samples in MPEG-2 AAC raw format (Since: 1.12) + +samples in MPEG-4 AAC raw format (Since: 1.12) + +samples in FLAC format (Since: 1.12) + +This is the simplest base class for audio sinks; it only requires +subclasses to implement a set of simple functions: + +* `open()`: Open the device. + +* `prepare()`: Configure the device with the specified format. + +* `write()`: Write samples to the device. + +* `reset()`: Unblock writes and flush the device. + +* `delay()`: Get the number of samples written but not yet played +by the device. + +* `unprepare()`: Undo operations done by prepare. + +* `close()`: Close the device. + +All scheduling of samples and timestamps is done in this base class +together with `AudioBaseSink` using a default implementation of a +`AudioRingBuffer` that uses threads.
+ +# Implements + +[`AudioBaseSinkExt`](trait.AudioBaseSinkExt.html), [`gst_base::BaseSinkExt`](../gst_base/trait.BaseSinkExt.html), [`gst::ElementExt`](../gst/trait.ElementExt.html), [`gst::ObjectExt`](../gst/trait.ObjectExt.html), [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html) + +This is the most simple base class for audio sources that only requires +subclasses to implement a set of simple functions: + +* `open()` :Open the device. +* `prepare()` :Configure the device with the specified format. +* `read()` :Read samples from the device. +* `reset()` :Unblock reads and flush the device. +* `delay()` :Get the number of samples in the device but not yet read. +* `unprepare()` :Undo operations done by prepare. +* `close()` :Close the device. + +All scheduling of samples and timestamps is done in this base class +together with `AudioBaseSrc` using a default implementation of a +`AudioRingBuffer` that uses threads. + +# Implements + +[`AudioBaseSrcExt`](trait.AudioBaseSrcExt.html), [`gst_base::BaseSrcExt`](../gst_base/trait.BaseSrcExt.html), [`gst::ElementExt`](../gst/trait.ElementExt.html), [`gst::ObjectExt`](../gst/trait.ObjectExt.html), [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html) `AudioStreamAlign` provides a helper object that helps tracking audio stream alignment and discontinuities, and detects discontinuities if diff --git a/docs/gstreamer-base/docs.md b/docs/gstreamer-base/docs.md index a5ee06a6f..4e0b612ab 100644 --- a/docs/gstreamer-base/docs.md +++ b/docs/gstreamer-base/docs.md @@ -678,7 +678,7 @@ Feature: `v1_16` # Returns The running time based on the position - + Force minimum upstream latency (in nanoseconds). When sources with a higher latency are expected to be plugged in dynamically after the aggregator has started playing, this allows overriding the minimum @@ -687,7 +687,7 @@ account when larger than the actually reported minimum latency. Feature: `v1_16` - + Force minimum upstream latency (in nanoseconds). 
When sources with a higher latency are expected to be plugged in dynamically after the aggregator has started playing, this allows overriding the minimum @@ -762,12 +762,12 @@ Feature: `v1_14` The buffer in `self` or NULL if no buffer was queued. You should unref the buffer after usage. - + Enables the emission of signals such as `AggregatorPad::buffer-consumed` Feature: `v1_16` - + Enables the emission of signals such as `AggregatorPad::buffer-consumed` Feature: `v1_16` @@ -1097,14 +1097,14 @@ This function can be used to set the timestamps based on the offset into the frame data that the picture starts. ## `offset` offset into current buffer - + If set to `true`, baseparse will unconditionally force parsing of the incoming data. This can be required in the rare cases where the incoming side-data (caps, pts, dts, ...) is not trusted by the user and wants to force validation and parsing of the incoming data. If set to `false`, decision of whether to parse the data or not is up to the implementation (standard behaviour). - + If set to `true`, baseparse will unconditionally force parsing of the incoming data. This can be required in the rare cases where the incoming side-data (caps, pts, dts, ...) is not trusted by the user and wants to @@ -1589,63 +1589,63 @@ not required. The amount of bytes to pull when operating in pull mode. The amount of bytes to pull when operating in pull mode. - + Enable the last-sample property. If `false`, basesink doesn't keep a reference to the last buffer arrived and the last-sample property is always set to `None`. This can be useful if you need buffers to be released as soon as possible, eg. if you're using a buffer pool. - + Enable the last-sample property. If `false`, basesink doesn't keep a reference to the last buffer arrived and the last-sample property is always set to `None`. This can be useful if you need buffers to be released as soon as possible, eg. if you're using a buffer pool. 
- + The last buffer that arrived in the sink and was used for preroll or for rendering. This property can be used to generate thumbnails. This property can be `None` when the sink has not yet received a buffer. - + Control the maximum amount of bits that will be rendered per second. Setting this property to a value bigger than 0 will make the sink delay rendering of the buffers when it would exceed to max-bitrate. - + Control the maximum amount of bits that will be rendered per second. Setting this property to a value bigger than 0 will make the sink delay rendering of the buffers when it would exceed to max-bitrate. - + Maximum amount of time (in nanoseconds) that the pipeline can take for processing the buffer. This is added to the latency of live pipelines. Feature: `v1_16` - + Maximum amount of time (in nanoseconds) that the pipeline can take for processing the buffer. This is added to the latency of live pipelines. Feature: `v1_16` - + The additional delay between synchronisation and actual rendering of the media. This property will add additional latency to the device in order to make other sinks compensate for the delay. - + The additional delay between synchronisation and actual rendering of the media. This property will add additional latency to the device in order to make other sinks compensate for the delay. - + The time to insert between buffers. This property can be used to control the maximum amount of buffers per second to render. Setting this property to a value bigger than 0 will make the sink create THROTTLE QoS events. - + The time to insert between buffers. This property can be used to control the maximum amount of buffers per second to render. Setting this property to a value bigger than 0 will make the sink create THROTTLE QoS events. - + Controls the final synchronisation, a negative value will render the buffer earlier while a positive value delays playback. This property can be used to fix synchronisation in bad files. 
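The throttle-time property above is expressed in nanoseconds between buffers, so capping the render rate means converting a target buffers-per-second figure into an interval. A minimal sketch of that conversion (an illustration in plain Rust, not part of the bindings' API; the helper name is hypothetical):

```rust
// Illustrative helper: derive a `throttle-time` value (nanoseconds to
// insert between buffers) that caps rendering at a given number of
// buffers per second. A value of 0 leaves throttling disabled.
fn throttle_time_ns(max_buffers_per_sec: u64) -> u64 {
    const NSEC_PER_SEC: u64 = 1_000_000_000;
    if max_buffers_per_sec == 0 {
        0 // 0 means no throttling
    } else {
        NSEC_PER_SEC / max_buffers_per_sec
    }
}

fn main() {
    // Capping a sink at 25 buffers per second corresponds to inserting
    // 40 ms (40_000_000 ns) between buffers.
    assert_eq!(throttle_time_ns(25), 40_000_000);
    println!("throttle-time for 25 buffers/s: {} ns", throttle_time_ns(25));
}
```

The resulting value would be set as the sink's throttle-time property; when it is bigger than 0 the sink creates THROTTLE QoS events as described above.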
- + Controls the final synchronisation; a negative value will render the buffer earlier while a positive value delays playback. This property can be used to fix synchronisation in bad files. diff --git a/docs/gstreamer-check/docs.md index 9583a1467..11f512829 100644 --- a/docs/gstreamer-check/docs.md +++ b/docs/gstreamer-check/docs.md @@ -1318,13 +1318,13 @@ Blocks until at least `count` clock notifications have been requested from use `TestClock::wait_for_multiple_pending_ids` instead. ## `count` the number of pending clock notifications to wait for - + When a `TestClock` is constructed it will have a certain start time set. If the clock was created using `TestClock::new_with_start_time` then this property contains the value of the `start_time` argument. If `TestClock::new` was called the clock started at time zero, and thus this property contains the value 0. - + When a `TestClock` is constructed it will have a certain start time set. If the clock was created using `TestClock::new_with_start_time` then this property contains the value of the `start_time` argument. If diff --git a/docs/gstreamer-editing-services/docs.md index db9160af0..31cfe046a 100644 --- a/docs/gstreamer-editing-services/docs.md +++ b/docs/gstreamer-editing-services/docs.md @@ -386,9 +386,9 @@ from the splitting or `None` if the clip can't be split. The GESLayer where this clip is being used. If you want to connect to its notify signal you should connect to it with g_signal_connect_after as the signal emission can be stopped in the first phase. - + The formats supported by the clip. - + The formats supported by the clip. The `Container` base class. @@ -557,12 +557,12 @@ The gst-launch like bin description of the effect a newly created `Effect`, or `None` if something went wrong. - + The description of the effect bin with a gst-launch-style pipeline description.
Example: "videobalance saturation=1.5 hue=+0.5" - + The description of the effect bin with a gst-launch-style pipeline description. @@ -629,21 +629,21 @@ The new empty group. The duration (in nanoseconds) which will be used in the container The duration (in nanoseconds) which will be used in the container - + The in-point at which this `Group` will start outputting data from its contents (in nanoseconds). Ex : an in-point of 5 seconds means that the first outputted buffer will be the one located 5 seconds in the controlled resource. - + The in-point at which this `Group` will start outputting data from its contents (in nanoseconds). Ex : an in-point of 5 seconds means that the first outputted buffer will be the one located 5 seconds in the controlled resource. - + The maximum duration (in nanoseconds) of the `Group`. - + The maximum duration (in nanoseconds) of the `Group`. The position of the object in its container (in nanoseconds). @@ -795,9 +795,9 @@ the `Clip` that was added. Will be emitted after the clip was removed from the layer. ## `clip` the `Clip` that was removed - + Sets whether transitions are added automagically when clips overlap. - + Sets whether transitions are added automagically when clips overlap. The priority of the layer in the `Timeline`. 0 is the highest @@ -967,9 +967,9 @@ the `Timeline` to set on the `self`. `true` if the `timeline` could be successfully set on the `self`, else `false`. - + Audio sink for the preview. - + Audio sink for the preview. Pipeline mode. See `GESPipelineExt::set_mode` for more @@ -983,9 +983,9 @@ Timeline to use in this pipeline. See also Timeline to use in this pipeline. See also `GESPipelineExt::set_timeline` for more info. - + Video sink for the preview. - + Video sink for the preview. The `Project` is used to control a set of `Asset` and is a @@ -1427,6 +1427,9 @@ we land at that position in the stack of layers inside the timeline. 
If `new_layer_priority` is greater than the number of layers present in the timeline, it will move to the end of the stack of layers. + +Feature: `v1_16` + ## `layer` The layer to move to `new_layer_priority` ## `new_layer_priority` @@ -1547,16 +1550,16 @@ the `Track` that was added to the timeline Will be emitted after the track was removed from the timeline. ## `track` the `Track` that was removed from the timeline - + Sets whether transitions are added automagically when clips overlap. - + Sets whether transitions are added automagically when clips overlap. Current duration (in nanoseconds) of the `Timeline` - + Distance (in nanoseconds) from which a moving object will snap with its neighbours. 0 means no snapping. - + Distance (in nanoseconds) from which a moving object will snap with its neighbours. 0 means no snapping. @@ -1632,6 +1635,9 @@ The `duration` of `self` The `inpoint` of `self` +Feature: `v1_16` + + # Returns The priority of the first layer the element is in (note that only @@ -1725,7 +1731,8 @@ be copied to, meaning it will become the start of `self` # Returns -Paste `self` copying the element +New element resulting from pasting `self`, +or `None` Edits `self` in ripple mode. It allows you to modify the start of `self` and move the following neighbours accordingly. @@ -1916,21 +1923,21 @@ the property that changed The duration (in nanoseconds) which will be used in the container The duration (in nanoseconds) which will be used in the container - + The in-point at which this `TimelineElement` will start outputting data from its contents (in nanoseconds). Ex: an in-point of 5 seconds means that the first outputted buffer will be the one located 5 seconds in the controlled resource. - + The in-point at which this `TimelineElement` will start outputting data from its contents (in nanoseconds). Ex: an in-point of 5 seconds means that the first outputted buffer will be the one located 5 seconds in the controlled resource.
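The start/in-point/duration relationship described for timeline elements can be sketched as a simple time mapping: content at `in_point + offset` in the resource is output at timeline time `start + offset`. A minimal illustration in plain Rust (the function is hypothetical, not part of the GES API):

```rust
// Hypothetical sketch: map a timeline time to the position inside the
// element's underlying resource, per GES start/in-point/duration
// semantics. All values are in nanoseconds.
fn source_position(start: u64, in_point: u64, duration: u64, timeline_time: u64) -> Option<u64> {
    if timeline_time < start || timeline_time >= start + duration {
        return None; // the element is not active at this timeline time
    }
    Some(in_point + (timeline_time - start))
}

fn main() {
    const SEC: u64 = 1_000_000_000;
    // An element placed at 10 s with a 5 s in-point: the buffer output at
    // timeline time 12 s comes from 7 s into the controlled resource.
    assert_eq!(
        source_position(10 * SEC, 5 * SEC, 20 * SEC, 12 * SEC),
        Some(7 * SEC)
    );
    // Outside [start, start + duration) the element outputs nothing.
    assert_eq!(source_position(10 * SEC, 5 * SEC, 20 * SEC, 5 * SEC), None);
}
```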
- + The maximum duration (in nanoseconds) of the `TimelineElement`. - + The maximum duration (in nanoseconds) of the `TimelineElement`. The name of the object @@ -2106,22 +2113,22 @@ Default value: 0 Whether layer mixing is activated or not on the track. Whether layer mixing is activated or not on the track. - + Caps used to filter/choose the output stream. Default value: `GST_CAPS_ANY`. - + Caps used to filter/choose the output stream. Default value: `GST_CAPS_ANY`. - + Type of stream the track outputs. This is used when creating the `Track` to specify in generic terms what type of content will be outputted. It also serves as a 'fast' way to check what type of data will be outputted from the `Track` without having to actually check the `Track`'s caps property. - + Type of stream the track outputs. This is used when creating the `Track` to specify in generic terms what type of content will be outputted. @@ -2481,10 +2488,10 @@ Sets whether the clip is a still image or not. Sets whether the audio track of this clip is muted or not. ## `mute` `true` to mute `self` audio track, `false` to unmute it - + Whether this uri clip represents a still image or not. This must be set before create_track_elements is called. - + Whether this uri clip represents a still image or not. This must be set before create_track_elements is called. @@ -2512,6 +2519,9 @@ Trait containing all `UriClipAsset` methods. [`UriClipAsset`](struct.UriClipAsset.html) Finalize the request of an async `UriClipAsset` + +Feature: `v1_16` + ## `res` The `gio::AsyncResult` from which to get the newly created `UriClipAsset` diff --git a/docs/gstreamer-pbutils/docs.md index 50bdc1251..9761e62ad 100644 --- a/docs/gstreamer-pbutils/docs.md +++ b/docs/gstreamer-pbutils/docs.md @@ -65,7 +65,7 @@ asynchronous mode. Stops the discovery of any pending URIs and clears the list of pending URIs (if any).
- + Will be emitted in async mode when all information on a URI could be discovered, or an error occurred. @@ -79,9 +79,9 @@ the results `DiscovererInfo` discovery. You must not free this `glib::Error`, it will be freed by the discoverer. - + Will be emitted in async mode when all pending URIs have been processed. - + This signal is emitted after the source element has been created for the URI being discovered, so that it can be configured by setting additional properties (e.g. set a proxy server for an http source, or set the device @@ -91,15 +91,15 @@ This signal is usually emitted from the context of a GStreamer streaming thread. ## `source` source element - + Will be emitted when the discoverer starts analyzing the pending URIs - + The duration (in nanoseconds) after which the discovery of an individual URI will time out. If the discovery of a URI times out, the `DiscovererResult::Timeout` will be set on the result flags. - + The duration (in nanoseconds) after which the discovery of an individual URI will time out. diff --git a/docs/gstreamer-rtp/docs.md index d4fffaed4..13b1b0f93 100644 --- a/docs/gstreamer-rtp/docs.md +++ b/docs/gstreamer-rtp/docs.md @@ -1 +1,172 @@ + +Different types of feedback messages. + +Invalid type + +Generic NACK + +Temporary Maximum Media Stream Bit Rate Request + +Temporary Maximum Media Stream Bit Rate + Notification + +Request an SR packet for early + synchronization + +Picture Loss Indication + +Slice Loss Indication + +Reference Picture Selection Indication + +Application layer Feedback + +Full Intra Request Command + +Temporal-Spatial Trade-off Request + +Temporal-Spatial Trade-off Notification + +Video Back Channel Message + +Different types of SDES content.
+ +Invalid SDES entry + +End of SDES list + +Canonical name + +User name + +User's electronic mail address + +User's phone number + +Geographic user location + +Name of application or tool + +Notice about the source + +Private extensions + +Different RTCP packet types. + +Invalid type + +Sender report + +Receiver report + +Source description + +Goodbye + +Application defined + +Transport layer feedback. + +Payload-specific feedback. + +Extended report. + +Types of RTCP Extended Reports, those are defined in RFC 3611 and other RFCs +according to the [IANA registry](https://www.iana.org/assignments/rtcp-xr-block-types/rtcp-xr-block-types.xhtml). + +Invalid XR Report Block + +Loss RLE Report Block + +Duplicate RLE Report Block + +Packet Receipt Times Report Block + +Receiver Reference Time Report Block + +Delay since the last Receiver Report + +Statistics Summary Report Block + +VoIP Metrics Report Block + +Feature: `v1_16` + + +Standard predefined fixed payload types. + +The official list is at: +http://www.iana.org/assignments/rtp-parameters + +Audio: +reserved: 19 +unassigned: 20-23, + +Video: +unassigned: 24, 27, 29, 30, 35-71, 77-95 +Reserved for RTCP conflict avoidance: 72-76 + +ITU-T G.711. mu-law audio (RFC 3551) + +RFC 3551 says reserved + +RFC 3551 says reserved + +GSM audio + +ITU G.723.1 audio + +IMA ADPCM wave type (RFC 3551) + +IMA ADPCM wave type (RFC 3551) + +experimental linear predictive encoding + +ITU-T G.711 A-law audio (RFC 3551) + +ITU-T G.722 (RFC 3551) + +stereo PCM + +mono PCM + +EIA & TIA standard IS-733 + +Comfort Noise (RFC 3389) + +Audio MPEG 1-3. 
+ +ITU-T G.728 Speech coder (RFC 3551) + +IMA ADPCM wave type (RFC 3551) + +IMA ADPCM wave type (RFC 3551) + +ITU-T G.729 Speech coder (RFC 3551) + +See RFC 2029 + +ISO Standards 10918-1 and 10918-2 (RFC 2435) + +nv encoding by Ron Frederick + +ITU-T Recommendation H.261 (RFC 2032) + +Video MPEG 1 & 2 (RFC 2250) + +MPEG-2 transport stream (RFC 2250) + +Video H263 (RFC 2190) + +The transfer profile to use. + +invalid profile + +the Audio/Visual profile (RFC 3551) + +the secure Audio/Visual profile (RFC 3711) + +the Audio/Visual profile with feedback (RFC 4585) + +the secure Audio/Visual profile with feedback (RFC 5124) diff --git a/docs/gstreamer-webrtc/docs.md b/docs/gstreamer-webrtc/docs.md index 51c46b683..4a7dda95f 100644 --- a/docs/gstreamer-webrtc/docs.md +++ b/docs/gstreamer-webrtc/docs.md @@ -1,4 +1,14 @@ + +GST_WEBRTC_BUNDLE_POLICY_NONE: none +GST_WEBRTC_BUNDLE_POLICY_BALANCED: balanced +GST_WEBRTC_BUNDLE_POLICY_MAX_COMPAT: max-compat +GST_WEBRTC_BUNDLE_POLICY_MAX_BUNDLE: max-bundle +See https://tools.ietf.org/html/draft-ietf-rtcweb-jsep-24`section`-4.1.1 +for more information. 
+ +Feature: `v1_16` + GST_WEBRTC_DTLS_SETUP_NONE: none GST_WEBRTC_DTLS_SETUP_ACTPASS: actpass @@ -16,6 +26,24 @@ GST_WEBRTC_DTLS_TRANSPORT_STATE_CLOSED: closed GST_WEBRTC_DTLS_TRANSPORT_STATE_FAILED: failed GST_WEBRTC_DTLS_TRANSPORT_STATE_CONNECTING: connecting GST_WEBRTC_DTLS_TRANSPORT_STATE_CONNECTED: connected + +GST_WEBRTC_DATA_CHANNEL_STATE_NEW: new +GST_WEBRTC_DATA_CHANNEL_STATE_CONNECTING: connection +GST_WEBRTC_DATA_CHANNEL_STATE_OPEN: open +GST_WEBRTC_DATA_CHANNEL_STATE_CLOSING: closing +GST_WEBRTC_DATA_CHANNEL_STATE_CLOSED: closed +See http://w3c.github.io/webrtc-pc/`dom`-rtcdatachannelstate`` + +Feature: `v1_16` + + + +none + +ulpfec + red + +Feature: `v1_14_1` + GST_WEBRTC_ICE_COMPONENT_RTP, GST_WEBRTC_ICE_COMPONENT_RTCP, @@ -42,6 +70,14 @@ GST_WEBRTC_ICE_ROLE_CONTROLLING: controlling # Implements [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html) + +GST_WEBRTC_ICE_TRANSPORT_POLICY_ALL: all +GST_WEBRTC_ICE_TRANSPORT_POLICY_RELAY: relay +See https://tools.ietf.org/html/draft-ietf-rtcweb-jsep-24`section`-4.1.1 +for more information. 
+ +Feature: `v1_16` + GST_WEBRTC_PEER_CONNECTION_STATE_NEW: new GST_WEBRTC_PEER_CONNECTION_STATE_CONNECTING: connecting @@ -50,6 +86,15 @@ GST_WEBRTC_PEER_CONNECTION_STATE_DISCONNECTED: disconnected GST_WEBRTC_PEER_CONNECTION_STATE_FAILED: failed GST_WEBRTC_PEER_CONNECTION_STATE_CLOSED: closed See http://w3c.github.io/webrtc-pc/`dom`-rtcpeerconnectionstate`` + +GST_WEBRTC_PRIORITY_TYPE_VERY_LOW: very-low +GST_WEBRTC_PRIORITY_TYPE_LOW: low +GST_WEBRTC_PRIORITY_TYPE_MEDIUM: medium +GST_WEBRTC_PRIORITY_TYPE_HIGH: high +See http://w3c.github.io/webrtc-pc/`dom`-rtcprioritytype`` + +Feature: `v1_16` + @@ -69,6 +114,15 @@ See htt [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html) + +GST_WEBRTC_SCTP_TRANSPORT_STATE_NEW: new +GST_WEBRTC_SCTP_TRANSPORT_STATE_CONNECTING: connecting +GST_WEBRTC_SCTP_TRANSPORT_STATE_CONNECTED: connected +GST_WEBRTC_SCTP_TRANSPORT_STATE_CLOSED: closed +See http://w3c.github.io/webrtc-pc/`dom`-rtcsctptransportstate`` + +Feature: `v1_16` + GST_WEBRTC_SDP_TYPE_OFFER: offer GST_WEBRTC_SDP_TYPE_PRANSWER: pranswer diff --git a/docs/gstreamer/docs.md b/docs/gstreamer/docs.md index 8a9b7c17f..971f05902 100644 --- a/docs/gstreamer/docs.md +++ b/docs/gstreamer/docs.md @@ -442,15 +442,15 @@ the `Element` that was added to the bin Will be emitted after the element was removed from the bin. ## `element` the `Element` that was removed from the bin - + If set to `true`, the bin will handle asynchronous state changes. This should be used only if the bin subclass is modifying the state of its children on its own. - + If set to `true`, the bin will handle asynchronous state changes. This should be used only if the bin subclass is modifying the state of its children on its own. - + Forward all children messages, even those that would normally be filtered by the bin. This can be interesting when one wants to be notified of the EOS state of individual elements, for example. 
@@ -459,7 +459,7 @@ The messages are converted to an ELEMENT message with the bin as the source. The structure of the message is named 'GstBinForwarded' and contains a field named 'message' of type GST_TYPE_MESSAGE that contains the original forwarded message. - + Forward all children messages, even those that would normally be filtered by the bin. This can be interesting when one wants to be notified of the EOS state of individual elements, for example. @@ -2049,13 +2049,13 @@ a `Message` matching the usage. MT safe. - + A message has been posted on the bus. This signal is emitted from a GSource added to the mainloop. this signal will only be emitted when there is a mainloop running. ## `message` the message that has been posted asynchronously - + A message has been posted on the bus. This signal is emitted from the thread that posted the message so one has to be careful with locking. @@ -3299,6 +3299,12 @@ This signal will be emitted from an arbitrary thread, most likely not the application's main thread. ## `synced` if the clock is synced now + +The type of the clock entry + +a single shot timeout + +a periodic timeout request The return value of a clock operation. @@ -9913,35 +9919,35 @@ Unref after usage. Emit the pad-created signal for this template when created by this pad. ## `pad` the `Pad` that created it - + This signal is fired when an element creates a pad from this template. ## `pad` the pad that was created. - + The capabilities of the pad described by the pad template. - + The capabilities of the pad described by the pad template. - + The direction of the pad described by the pad template. - + The direction of the pad described by the pad template. - + The type of the pad described by the pad template. Feature: `v1_14` - + The type of the pad described by the pad template. Feature: `v1_14` - + The name template of the pad template. - + The name template of the pad template. - + When the pad described by the pad template will become available. 
- + When the pad described by the pad template will become available. Opaque structure. @@ -10184,11 +10190,11 @@ the pipeline run as fast as possible. MT safe. ## `clock` the clock to use - + Whether or not to automatically flush all messages on the pipeline's bus when going from READY to NULL state. Please see `PipelineExt::set_auto_flush_bus` for more information on this option. - + Whether or not to automatically flush all messages on the pipeline's bus when going from READY to NULL state. Please see `PipelineExt::set_auto_flush_bus` for more information on this option. @@ -12056,12 +12062,12 @@ the path to scan # Returns `true` if registry changed - + Signals that a feature has been added to the registry (possibly replacing a previously-added one by the same name) ## `feature` the feature that has been added - + Signals that a plugin has been added to the registry (possibly replacing a previously-added one by the same name) ## `plugin` @@ -12832,23 +12838,23 @@ Feature: `v1_10` ## `tags` a `TagList` - + The `Caps` of the `Stream`. - + The `Caps` of the `Stream`. - + The unique identifier of the `Stream`. Can only be set at construction time. - + The unique identifier of the `Stream`. Can only be set at construction time. - + The `StreamType` of the `Stream`. Can only be set at construction time. - + The `StreamType` of the `Stream`. Can only be set at construction time. - + The `TagList` of the `Stream`. - + The `TagList` of the `Stream`. A collection of `Stream` that are available.
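The stream-type property described above is a flags value, which is what makes the "fast" check possible without inspecting caps. A minimal sketch of such a membership test, using plain Rust constants that mirror the C `GstStreamType` flag values rather than the bindings' own bitflags type (an illustration, not bindings code):

```rust
// Flag values mirroring GstStreamType from GStreamer core; plain
// constants are used here instead of the gstreamer crate's types.
const STREAM_TYPE_AUDIO: u32 = 1 << 1;
const STREAM_TYPE_VIDEO: u32 = 1 << 2;

// Fast check: is video content among the stream's advertised types?
fn is_video(stream_type: u32) -> bool {
    stream_type & STREAM_TYPE_VIDEO != 0
}

fn main() {
    // A stream carrying both audio and video advertises both flags.
    let av = STREAM_TYPE_AUDIO | STREAM_TYPE_VIDEO;
    assert!(is_video(av));
    assert!(!is_video(STREAM_TYPE_AUDIO));
    println!("A/V stream contains video: {}", is_video(av));
}
```

In the actual bindings the same check would be done through the stream-type getter and its flags type; the bitwise test above is only meant to show why the property is cheaper than parsing the `Caps`.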