Update documentation

This commit is contained in:
Sebastian Dröge 2019-12-18 16:52:00 +02:00
parent bb321f7fa8
commit 98dcb27abd
9 changed files with 1297 additions and 99 deletions


@ -239,10 +239,10 @@ the maximum amount of time to wait for a sample
a `gst::Sample` or NULL when the appsink is stopped or EOS or the timeout expires.
Call `gst_sample_unref` after usage.
<!-- trait AppSinkExt::fn connect_eos -->
<!-- impl AppSink::fn connect_eos -->
Signal that the end-of-stream has been reached. This signal is emitted from
the streaming thread.
<!-- trait AppSinkExt::fn connect_new_preroll -->
<!-- impl AppSink::fn connect_new_preroll -->
Signal that a new preroll sample is available.
This signal is emitted from the streaming thread and only when the
@ -254,7 +254,7 @@ or from any other thread.
Note that this signal is only emitted when the "emit-signals" property is
set to `true`, which it is not by default for performance reasons.
<!-- trait AppSinkExt::fn connect_new_sample -->
<!-- impl AppSink::fn connect_new_sample -->
Signal that a new sample is available.
This signal is emitted from the streaming thread and only when the
@ -266,7 +266,7 @@ or from any other thread.
Note that this signal is only emitted when the "emit-signals" property is
set to `true`, which it is not by default for performance reasons.
<!-- trait AppSinkExt::fn connect_pull_preroll -->
<!-- impl AppSink::fn connect_pull_preroll -->
Get the last preroll sample in `appsink`. This was the sample that caused the
appsink to preroll in the PAUSED state.
@ -289,7 +289,7 @@ element is set to the READY/NULL state.
# Returns
a `gst::Sample` or NULL when the appsink is stopped or EOS.
<!-- trait AppSinkExt::fn connect_pull_sample -->
<!-- impl AppSink::fn connect_pull_sample -->
This function blocks until a sample or EOS becomes available or the appsink
element is set to the READY/NULL state.
@ -308,7 +308,7 @@ If an EOS event was received before any buffers, this function returns
# Returns
a `gst::Sample` or NULL when the appsink is stopped or EOS.
<!-- trait AppSinkExt::fn connect_try_pull_preroll -->
<!-- impl AppSink::fn connect_try_pull_preroll -->
Get the last preroll sample in `appsink`. This was the sample that caused the
appsink to preroll in the PAUSED state.
@ -337,7 +337,7 @@ the maximum amount of time to wait for the preroll sample
# Returns
a `gst::Sample` or NULL when the appsink is stopped or EOS or the timeout expires.
<!-- trait AppSinkExt::fn connect_try_pull_sample -->
<!-- impl AppSink::fn connect_try_pull_sample -->
This function blocks until a sample or EOS becomes available or the appsink
element is set to the READY/NULL state or the timeout expires.
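The blocking-with-timeout contract described here can be modeled with a plain channel (a hypothetical stand-in, not the real `AppSink` API; the real call returns a `gst::Sample` or `None`):

```rust
use std::sync::mpsc;
use std::time::Duration;

// Hypothetical stand-in for a sample; the real type would be gst::Sample.
#[derive(Debug, PartialEq)]
struct Sample(u64);

// Models the try_pull_sample contract: block until a sample arrives or the
// timeout expires, returning None on timeout (as the real call returns NULL).
fn try_pull_sample(rx: &mpsc::Receiver<Sample>, timeout: Duration) -> Option<Sample> {
    rx.recv_timeout(timeout).ok()
}

fn main() {
    let (tx, rx) = mpsc::channel();
    tx.send(Sample(1)).unwrap();
    // A queued sample is returned immediately.
    assert_eq!(try_pull_sample(&rx, Duration::from_millis(10)), Some(Sample(1)));
    // An empty queue makes the call wait out the timeout and return None.
    assert_eq!(try_pull_sample(&rx, Duration::from_millis(10)), None);
    println!("ok");
}
```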
@ -606,13 +606,13 @@ be connected to.
A stream_type stream
## `type_`
the new state
<!-- trait AppSrcExt::fn connect_end_of_stream -->
<!-- impl AppSrc::fn connect_end_of_stream -->
Notify `appsrc` that no more buffers are available.
<!-- trait AppSrcExt::fn connect_enough_data -->
<!-- impl AppSrc::fn connect_enough_data -->
Signal that the source has enough data. It is recommended that the
application stops calling push-buffer until the need-data signal is
emitted again to avoid excessive buffer queueing.
<!-- trait AppSrcExt::fn connect_need_data -->
<!-- impl AppSrc::fn connect_need_data -->
Signal that the source needs more data. In the callback or from another
thread you should call push-buffer or end-of-stream.
@ -623,7 +623,7 @@ You can call push-buffer multiple times until the enough-data signal is
fired.
## `length`
the amount of bytes needed.
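The recommended flow control (pause push-buffer after enough-data, resume on the next need-data) can be sketched as a watermark model; the struct and thresholds below are illustrative, not part of the API:

```rust
// Minimal model (not the real appsrc API) of the queue signalling:
// "need-data" fires when the queue drains to a low watermark and
// "enough-data" when it fills past a high watermark, so the application
// should pause push-buffer calls between enough-data and the next need-data.
struct QueueModel {
    queued: usize,
    low: usize,
    high: usize,
}

impl QueueModel {
    // Returns true when "enough-data" would be emitted.
    fn push_buffer(&mut self) -> bool {
        self.queued += 1;
        self.queued >= self.high
    }

    // Returns true when "need-data" would be emitted.
    fn consume(&mut self) -> bool {
        self.queued = self.queued.saturating_sub(1);
        self.queued <= self.low
    }
}

fn main() {
    let mut q = QueueModel { queued: 0, low: 1, high: 3 };
    assert!(!q.push_buffer()); // 1 queued, keep pushing
    assert!(!q.push_buffer()); // 2 queued
    assert!(q.push_buffer()); // 3 queued -> enough-data, stop pushing
    assert!(!q.consume()); // 2 queued, still above low watermark
    assert!(q.consume()); // 1 queued -> need-data, resume pushing
    println!("ok");
}
```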
<!-- trait AppSrcExt::fn connect_push_buffer -->
<!-- impl AppSrc::fn connect_push_buffer -->
Adds a buffer to the queue of buffers that the appsrc element will
push to its source pad. This function does not take ownership of the
buffer so the buffer needs to be unreffed after calling this function.
@ -632,7 +632,7 @@ When the block property is TRUE, this function can block until free space
becomes available in the queue.
## `buffer`
a buffer to push
<!-- trait AppSrcExt::fn connect_push_buffer_list -->
<!-- impl AppSrc::fn connect_push_buffer_list -->
Adds a buffer list to the queue of buffers and buffer lists that the
appsrc element will push to its source pad. This function does not take
ownership of the buffer list so the buffer list needs to be unreffed
@ -645,7 +645,7 @@ Feature: `v1_14`
## `buffer_list`
a buffer list to push
<!-- trait AppSrcExt::fn connect_push_sample -->
<!-- impl AppSrc::fn connect_push_sample -->
Extracts a buffer from the provided sample and adds it
to the queue of buffers that the appsrc element will
push to its source pad. This function sets the appsrc caps based on the caps
@ -659,7 +659,7 @@ When the block property is TRUE, this function can block until free space
becomes available in the queue.
## `sample`
a sample from which to extract the buffer to push
<!-- trait AppSrcExt::fn connect_seek_data -->
<!-- impl AppSrc::fn connect_seek_data -->
Seek to the given offset. The next push-buffer should produce buffers from
the new `offset`.
This callback is only called for seekable stream types.


@ -1,4 +1,162 @@
<!-- file * -->
<!-- struct AudioBaseSink -->
This is the base class for audio sinks. Subclasses need to implement the
::create_ringbuffer vmethod. This base class will then take care of
writing samples to the ringbuffer, synchronisation, clipping and flushing.
# Implements
[`AudioBaseSinkExt`](trait.AudioBaseSinkExt.html), [`gst_base::BaseSinkExt`](../gst_base/trait.BaseSinkExt.html), [`gst::ElementExt`](../gst/trait.ElementExt.html), [`gst::ObjectExt`](../gst/trait.ObjectExt.html), [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html)
<!-- trait AudioBaseSinkExt -->
Trait containing all `AudioBaseSink` methods.
# Implementors
[`AudioBaseSink`](struct.AudioBaseSink.html), [`AudioSink`](struct.AudioSink.html)
<!-- trait AudioBaseSinkExt::fn create_ringbuffer -->
Create and return the `AudioRingBuffer` for `self`. This function will
call the ::create_ringbuffer vmethod and will set `self` as the parent of
the returned buffer (see `gst::ObjectExt::set_parent`).
# Returns
The new ringbuffer of `self`.
<!-- trait AudioBaseSinkExt::fn get_alignment_threshold -->
Get the current alignment threshold, in nanoseconds, used by `self`.
# Returns
The current alignment threshold used by `self`.
<!-- trait AudioBaseSinkExt::fn get_discont_wait -->
Get the current discont wait, in nanoseconds, used by `self`.
# Returns
The current discont wait used by `self`.
<!-- trait AudioBaseSinkExt::fn get_drift_tolerance -->
Get the current drift tolerance, in microseconds, used by `self`.
# Returns
The current drift tolerance used by `self`.
<!-- trait AudioBaseSinkExt::fn get_provide_clock -->
Queries whether `self` will provide a clock or not. See also
`AudioBaseSinkExt::set_provide_clock`.
# Returns
`true` if `self` will provide a clock.
<!-- trait AudioBaseSinkExt::fn get_slave_method -->
Get the current slave method used by `self`.
# Returns
The current slave method used by `self`.
<!-- trait AudioBaseSinkExt::fn report_device_failure -->
Informs this base class that the audio output device has failed for
some reason, causing a discontinuity (for example, because the device
recovered from the error, but lost all contents of its ring buffer).
This function is typically called by derived classes, and is useful
for the custom slave method.
<!-- trait AudioBaseSinkExt::fn set_alignment_threshold -->
Controls the sink's alignment threshold.
## `alignment_threshold`
the new alignment threshold in nanoseconds
<!-- trait AudioBaseSinkExt::fn set_custom_slaving_callback -->
Sets the custom slaving callback. This callback will
be invoked if the slave-method property is set to
GST_AUDIO_BASE_SINK_SLAVE_CUSTOM and the audio sink
receives and plays samples.
Setting the callback to NULL causes the sink to
behave as if the GST_AUDIO_BASE_SINK_SLAVE_NONE
method were used.
## `callback`
a `GstAudioBaseSinkCustomSlavingCallback`
## `user_data`
user data passed to the callback
## `notify`
called when user_data becomes unused
<!-- trait AudioBaseSinkExt::fn set_discont_wait -->
Controls how long the sink will wait before creating a discontinuity.
## `discont_wait`
the new discont wait in nanoseconds
<!-- trait AudioBaseSinkExt::fn set_drift_tolerance -->
Controls the sink's drift tolerance.
## `drift_tolerance`
the new drift tolerance in microseconds
<!-- trait AudioBaseSinkExt::fn set_provide_clock -->
Controls whether `self` will provide a clock or not. If `provide` is `true`,
`gst::ElementExt::provide_clock` will return a clock that reflects the datarate
of `self`. If `provide` is `false`, `gst::ElementExt::provide_clock` will return
NULL.
## `provide`
new state
<!-- trait AudioBaseSinkExt::fn set_slave_method -->
Controls how clock slaving will be performed in `self`.
## `method`
the new slave method
<!-- trait AudioBaseSinkExt::fn get_property_discont_wait -->
A window of time in nanoseconds to wait before creating a discontinuity as
a result of breaching the drift-tolerance.
<!-- trait AudioBaseSinkExt::fn set_property_discont_wait -->
A window of time in nanoseconds to wait before creating a discontinuity as
a result of breaching the drift-tolerance.
<!-- trait AudioBaseSinkExt::fn get_property_drift_tolerance -->
Controls the amount of time in microseconds that clocks are allowed
to drift before resynchronisation happens.
<!-- trait AudioBaseSinkExt::fn set_property_drift_tolerance -->
Controls the amount of time in microseconds that clocks are allowed
to drift before resynchronisation happens.
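The interplay of the two properties documented above can be sketched in plain Rust (hypothetical helper names; the real logic lives inside the base class):

```rust
// Sketch of the documented decision: clock drift beyond drift-tolerance
// (microseconds) must persist for the whole discont-wait window
// (nanoseconds) before a discontinuity is created.
fn drift_exceeds_tolerance(drift_us: i64, tolerance_us: i64) -> bool {
    drift_us.abs() > tolerance_us
}

fn should_create_discont(first_breach_ns: u64, now_ns: u64, discont_wait_ns: u64) -> bool {
    now_ns - first_breach_ns >= discont_wait_ns
}

fn main() {
    // 50 µs of drift against a 40 µs tolerance breaches the threshold...
    assert!(drift_exceeds_tolerance(50, 40));
    assert!(!drift_exceeds_tolerance(-30, 40));
    // ...but a discontinuity is only created once the breach has lasted
    // for the full discont-wait window (here 1 second, in nanoseconds).
    assert!(!should_create_discont(0, 500_000_000, 1_000_000_000));
    assert!(should_create_discont(0, 1_000_000_000, 1_000_000_000));
    println!("ok");
}
```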
<!-- struct AudioBaseSrc -->
This is the base class for audio sources. Subclasses need to implement the
::create_ringbuffer vmethod. This base class will then take care of
reading samples from the ringbuffer, synchronisation and flushing.
# Implements
[`AudioBaseSrcExt`](trait.AudioBaseSrcExt.html), [`gst_base::BaseSrcExt`](../gst_base/trait.BaseSrcExt.html), [`gst::ElementExt`](../gst/trait.ElementExt.html), [`gst::ObjectExt`](../gst/trait.ObjectExt.html), [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html)
<!-- trait AudioBaseSrcExt -->
Trait containing all `AudioBaseSrc` methods.
# Implementors
[`AudioBaseSrc`](struct.AudioBaseSrc.html), [`AudioSrc`](struct.AudioSrc.html)
<!-- trait AudioBaseSrcExt::fn create_ringbuffer -->
Create and return the `AudioRingBuffer` for `self`. This function will call
the ::create_ringbuffer vmethod and will set `self` as the parent of the
returned buffer (see `gst::ObjectExt::set_parent`).
# Returns
The new ringbuffer of `self`.
<!-- trait AudioBaseSrcExt::fn get_provide_clock -->
Queries whether `self` will provide a clock or not. See also
`AudioBaseSrcExt::set_provide_clock`.
# Returns
`true` if `self` will provide a clock.
<!-- trait AudioBaseSrcExt::fn get_slave_method -->
Get the current slave method used by `self`.
# Returns
The current slave method used by `self`.
<!-- trait AudioBaseSrcExt::fn set_provide_clock -->
Controls whether `self` will provide a clock or not. If `provide` is `true`,
`gst::ElementExt::provide_clock` will return a clock that reflects the datarate
of `self`. If `provide` is `false`, `gst::ElementExt::provide_clock` will return NULL.
## `provide`
new state
<!-- trait AudioBaseSrcExt::fn set_slave_method -->
Controls how clock slaving will be performed in `self`.
## `method`
the new slave method
<!-- trait AudioBaseSrcExt::fn get_property_actual_buffer_time -->
Actual configured size of audio buffer in microseconds.
<!-- trait AudioBaseSrcExt::fn get_property_actual_latency_time -->
Actual configured audio latency in microseconds.
<!-- enum AudioChannelPosition -->
Audio channel positions.
@ -89,6 +247,728 @@ Wide right (between front right and side right)
Surround left (between rear left and side left)
<!-- enum AudioChannelPosition::variant SurroundRight -->
Surround right (between rear right and side right)
<!-- struct AudioDecoder -->
This base class is for audio decoders turning encoded data into
raw audio samples.
GstAudioDecoder and subclass should cooperate as follows.
## Configuration
* Initially, GstAudioDecoder calls `start` when the decoder element
is activated, which allows subclass to perform any global setup.
Base class (context) parameters can already be set according to subclass
capabilities (or possibly upon receiving more information in subsequent
`set_format`).
* GstAudioDecoder calls `set_format` to inform subclass of the format
of input audio data that it is about to receive.
While unlikely, it might be called more than once, if changing input
parameters require reconfiguration.
* GstAudioDecoder calls `stop` at end of all processing.
As of configuration stage, and throughout processing, GstAudioDecoder
provides various (context) parameters, e.g. describing the format of
output audio data (valid when output caps have been set) or current parsing state.
Conversely, subclass can and should configure context to inform
base class of its expectation w.r.t. buffer handling.
## Data processing
* Base class gathers input data, and optionally allows subclass
to parse this into subsequently manageable (as defined by subclass)
chunks. Such chunks are subsequently referred to as 'frames',
though they may or may not correspond to 1 (or more) audio format frame.
* Input frame is provided to subclass' `handle_frame`.
* If codec processing results in decoded data, subclass should call
`AudioDecoder::finish_frame` to have decoded data pushed
downstream.
* Just prior to actually pushing a buffer downstream,
it is passed to `pre_push`. Subclass should either use this callback
to arrange for additional downstream pushing or otherwise ensure such
custom pushing occurs after at least a method call has finished since
setting src pad caps.
* During the parsing process GstAudioDecoderClass will handle both
srcpad and sinkpad events. Sink events will be passed to subclass
if `event` callback has been provided.
## Shutdown phase
* GstAudioDecoder class calls `stop` to inform the subclass that data
parsing will be stopped.
Subclass is responsible for providing pad template caps for
source and sink pads. The pads need to be named "sink" and "src". It also
needs to set the fixed caps on srcpad, when the format is ensured. This
is typically when base class calls subclass' `set_format` function, though
it might be delayed until calling `AudioDecoder::finish_frame`.
In summary, above process should have subclass concentrating on
codec data processing while leaving other matters to base class,
such as most notably timestamp handling. While it may exert more control
in this area (see e.g. `pre_push`), it is very much not recommended.
In particular, base class will try to arrange for perfect output timestamps
as much as possible while tracking upstream timestamps.
To this end, if deviation between the next ideal expected perfect timestamp
and upstream exceeds `AudioDecoder:tolerance`, then resync to upstream
occurs (which would happen always if the tolerance mechanism is disabled).
In non-live pipelines, baseclass can also (configurably) arrange for
output buffer aggregation which may help to reduce large(r) numbers of
small(er) buffers being pushed and processed downstream. Note that this
feature is only available if the buffer layout is interleaved. For planar
buffers, the decoder implementation is fully responsible for the output
buffer size.
On the other hand, it should be noted that baseclass only provides limited
seeking support (upon explicit subclass request), as full-fledged support
should rather be left to upstream demuxer, parser or alike. This simple
approach caters for seeking and duration reporting using estimated input
bitrates.
Things that the subclass needs to take care of:
* Provide pad templates
* Set source pad caps when appropriate
* Set user-configurable properties to sane defaults for format and
implementing codec at hand, and convey some subclass capabilities and
expectations in context.
* Accept data in `handle_frame` and provide decoded results to
`AudioDecoder::finish_frame`. If it is prepared to perform
PLC, it should also accept NULL data in `handle_frame` and provide for
data for indicated duration.
# Implements
[`AudioDecoderExt`](trait.AudioDecoderExt.html), [`gst::ElementExt`](../gst/trait.ElementExt.html), [`gst::ObjectExt`](../gst/trait.ObjectExt.html), [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html)
<!-- trait AudioDecoderExt -->
Trait containing all `AudioDecoder` methods.
# Implementors
[`AudioDecoder`](struct.AudioDecoder.html)
<!-- trait AudioDecoderExt::fn allocate_output_buffer -->
Helper function that allocates a buffer to hold an audio frame
for `self`'s current output format.
## `size`
size of the buffer
# Returns
allocated buffer
<!-- trait AudioDecoderExt::fn finish_frame -->
Collects decoded data and pushes it downstream.
`buf` may be NULL in which case the indicated number of frames
are discarded and considered to have produced no output
(e.g. lead-in or setup frames).
Otherwise, source pad caps must be set when it is called with valid
data in `buf`.
Note that a frame received in `AudioDecoderClass.handle_frame`() may be
invalidated by a call to this function.
## `buf`
decoded data
## `frames`
number of decoded frames represented by decoded data
# Returns
a `gst::FlowReturn` that should be escalated to caller (of caller)
<!-- trait AudioDecoderExt::fn finish_subframe -->
Collects decoded data and pushes it downstream. This function may be called
multiple times for a given input frame.
`buf` may be NULL in which case it is assumed that the current input frame is
finished. This is equivalent to calling `AudioDecoder::finish_frame`
with a NULL buffer and frames=1 after having pushed out all decoded audio
subframes using this function.
When called with valid data in `buf` the source pad caps must have been set
already.
Note that a frame received in `AudioDecoderClass.handle_frame`() may be
invalidated by a call to this function.
Feature: `v1_16`
## `buf`
decoded data
# Returns
a `gst::FlowReturn` that should be escalated to caller (of caller)
<!-- trait AudioDecoderExt::fn get_allocator -->
Lets `AudioDecoder` sub-classes know the memory `allocator`
used by the base class and its `params`.
Unref the `allocator` after use.
## `allocator`
the `gst::Allocator`
used
## `params`
the
`gst::AllocationParams` of `allocator`
<!-- trait AudioDecoderExt::fn get_audio_info -->
# Returns
a `AudioInfo` describing the input audio format
<!-- trait AudioDecoderExt::fn get_delay -->
# Returns
currently configured decoder delay
<!-- trait AudioDecoderExt::fn get_drainable -->
Queries decoder drain handling.
# Returns
TRUE if drainable handling is enabled.
MT safe.
<!-- trait AudioDecoderExt::fn get_estimate_rate -->
# Returns
currently configured byte to time conversion setting
<!-- trait AudioDecoderExt::fn get_latency -->
Sets the variables pointed to by `min` and `max` to the currently configured
latency.
## `min`
a pointer to storage to hold minimum latency
## `max`
a pointer to storage to hold maximum latency
<!-- trait AudioDecoderExt::fn get_max_errors -->
# Returns
currently configured decoder tolerated error count.
<!-- trait AudioDecoderExt::fn get_min_latency -->
Queries decoder's latency aggregation.
# Returns
aggregation latency.
MT safe.
<!-- trait AudioDecoderExt::fn get_needs_format -->
Queries decoder required format handling.
# Returns
TRUE if required format handling is enabled.
MT safe.
<!-- trait AudioDecoderExt::fn get_parse_state -->
Return current parsing (sync and eos) state.
## `sync`
a pointer to a variable to hold the current sync state
## `eos`
a pointer to a variable to hold the current eos state
<!-- trait AudioDecoderExt::fn get_plc -->
Queries decoder packet loss concealment handling.
# Returns
TRUE if packet loss concealment is enabled.
MT safe.
<!-- trait AudioDecoderExt::fn get_plc_aware -->
# Returns
currently configured plc handling
<!-- trait AudioDecoderExt::fn get_tolerance -->
Queries current audio jitter tolerance threshold.
# Returns
decoder audio jitter tolerance threshold.
MT safe.
<!-- trait AudioDecoderExt::fn merge_tags -->
Sets the audio decoder tags and how they should be merged with any
upstream stream tags. This will override any tags previously-set
with `AudioDecoderExt::merge_tags`.
Note that this is provided for convenience, and the subclass is
not required to use this and can still do tag handling on its own.
## `tags`
a `gst::TagList` to merge, or NULL
## `mode`
the `gst::TagMergeMode` to use, usually `gst::TagMergeMode::Replace`
<!-- trait AudioDecoderExt::fn negotiate -->
Negotiates with downstream elements according to the currently configured `AudioInfo`.
Unmarks GST_PAD_FLAG_NEED_RECONFIGURE in any case, but marks it again if
negotiation fails.
# Returns
`true` if the negotiation succeeded, else `false`.
<!-- trait AudioDecoderExt::fn proxy_getcaps -->
Returns caps that express `caps` (or sink template caps if `caps` == NULL)
restricted to rate/channels/... combinations supported by downstream
elements.
## `caps`
initial caps
## `filter`
filter caps
# Returns
a `gst::Caps` owned by caller
<!-- trait AudioDecoderExt::fn set_allocation_caps -->
Sets the caps to use in the allocation query, when they differ from the
pad's caps. Use this function before calling
`AudioDecoder::negotiate`. When set to `None`, the allocation
query will use the caps from the pad.
Feature: `v1_10`
## `allocation_caps`
a `gst::Caps` or `None`
<!-- trait AudioDecoderExt::fn set_drainable -->
Configures decoder drain handling. If drainable, subclass might
be handed a NULL buffer to have it return any leftover decoded data.
Otherwise, it is not considered so capable and will only ever be passed
real data.
MT safe.
## `enabled`
new state
<!-- trait AudioDecoderExt::fn set_estimate_rate -->
Allows baseclass to perform byte to time estimated conversion.
## `enabled`
whether to enable byte to time conversion
<!-- trait AudioDecoderExt::fn set_latency -->
Sets decoder latency.
## `min`
minimum latency
## `max`
maximum latency
<!-- trait AudioDecoderExt::fn set_max_errors -->
Sets the number of tolerated decoder errors: a tolerated error is only
warned about, but exceeding the tolerated count leads to a fatal error. Set
-1 to never return fatal errors. The default is
GST_AUDIO_DECODER_MAX_ERRORS.
## `num`
max tolerated errors
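The documented error-budget semantics reduce to a one-line predicate (illustrative only; the hypothetical helper name is not part of the API):

```rust
// Models set_max_errors semantics as documented: up to `max_errors` decode
// errors are only warned about; exceeding the count becomes fatal, and
// -1 means errors are never escalated to a fatal flow return.
fn is_fatal(error_count: i32, max_errors: i32) -> bool {
    max_errors >= 0 && error_count > max_errors
}

fn main() {
    assert!(!is_fatal(100, -1)); // -1: never fatal
    assert!(!is_fatal(10, 10)); // still within the tolerated budget
    assert!(is_fatal(11, 10)); // budget exceeded -> fatal error
    println!("ok");
}
```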
<!-- trait AudioDecoderExt::fn set_min_latency -->
Sets decoder minimum aggregation latency.
MT safe.
## `num`
new minimum latency
<!-- trait AudioDecoderExt::fn set_needs_format -->
Configures decoder format needs. If enabled, subclass needs to be
negotiated with format caps before it can process any data. It will then
never be handed any data before it has been configured.
Otherwise, it might be handed data without having been configured and
is then expected to be able to do so, either by default
or based on the input data.
MT safe.
## `enabled`
new state
<!-- trait AudioDecoderExt::fn set_output_caps -->
Configure output caps on the srcpad of `self`. Similar to
`AudioDecoder::set_output_format`, but allows subclasses to specify
output caps that can't be expressed via `AudioInfo` e.g. caps that have
caps features.
Feature: `v1_16`
## `caps`
(fixed) `gst::Caps`
# Returns
`true` on success.
<!-- trait AudioDecoderExt::fn set_output_format -->
Configure output info on the srcpad of `self`.
## `info`
`AudioInfo`
# Returns
`true` on success.
<!-- trait AudioDecoderExt::fn set_plc -->
Enable or disable decoder packet loss concealment, provided subclass
and codec are capable and allow handling plc.
MT safe.
## `enabled`
new state
<!-- trait AudioDecoderExt::fn set_plc_aware -->
Indicates whether or not subclass handles packet loss concealment (plc).
## `plc`
new plc state
<!-- trait AudioDecoderExt::fn set_tolerance -->
Configures decoder audio jitter tolerance threshold.
MT safe.
## `tolerance`
new tolerance
<!-- trait AudioDecoderExt::fn set_use_default_pad_acceptcaps -->
Lets `AudioDecoder` sub-classes decide if they want the sink pad
to use the default pad query handler to reply to accept-caps queries.
By setting this to true it is possible to further customize the default
handler with `GST_PAD_SET_ACCEPT_INTERSECT` and
`GST_PAD_SET_ACCEPT_TEMPLATE`.
## `use_`
if the default pad accept-caps query handling should be used
<!-- struct AudioEncoder -->
This base class is for audio encoders turning raw audio samples into
encoded audio data.
GstAudioEncoder and subclass should cooperate as follows.
## Configuration
* Initially, GstAudioEncoder calls `start` when the encoder element
is activated, which allows subclass to perform any global setup.
* GstAudioEncoder calls `set_format` to inform subclass of the format
of input audio data that it is about to receive. Subclass should
setup for encoding and configure various base class parameters
appropriately, notably those directing desired input data handling.
While unlikely, it might be called more than once, if changing input
parameters require reconfiguration.
* GstAudioEncoder calls `stop` at end of all processing.
As of configuration stage, and throughout processing, GstAudioEncoder
maintains various parameters that provide required context,
e.g. describing the format of input audio data.
Conversely, subclass can and should configure these context parameters
to inform base class of its expectation w.r.t. buffer handling.
## Data processing
* Base class gathers input sample data (as directed by the context's
frame_samples and frame_max) and provides this to subclass' `handle_frame`.
* If codec processing results in encoded data, subclass should call
`AudioEncoder::finish_frame` to have encoded data pushed
downstream. Alternatively, it might also call
`AudioEncoder::finish_frame` (with a NULL buffer and some number of
dropped samples) to indicate dropped (non-encoded) samples.
* Just prior to actually pushing a buffer downstream,
it is passed to `pre_push`.
* During the parsing process GstAudioEncoderClass will handle both
srcpad and sinkpad events. Sink events will be passed to subclass
if `event` callback has been provided.
## Shutdown phase
* GstAudioEncoder class calls `stop` to inform the subclass that data
parsing will be stopped.
Subclass is responsible for providing pad template caps for
source and sink pads. The pads need to be named "sink" and "src". It also
needs to set the fixed caps on srcpad, when the format is ensured. This
is typically when base class calls subclass' `set_format` function, though
it might be delayed until calling `AudioEncoder::finish_frame`.
In summary, above process should have subclass concentrating on
codec data processing while leaving other matters to base class,
such as most notably timestamp handling. While it may exert more control
in this area (see e.g. `pre_push`), it is very much not recommended.
In particular, base class will either favor tracking upstream timestamps
(at the possible expense of jitter) or aim to arrange for a perfect stream of
output timestamps, depending on `AudioEncoder:perfect-timestamp`.
However, in the latter case, the input may not be so perfect or ideal, which
is handled as follows. An input timestamp is compared with the expected
timestamp as dictated by input sample stream and if the deviation is less
than `AudioEncoder:tolerance`, the deviation is discarded.
Otherwise, it is considered a discontinuity and the subsequent output timestamp
is resynced to the new position after performing configured discontinuity
processing. In the non-perfect-timestamp case, an upstream variation
exceeding tolerance only leads to marking DISCONT on the subsequent outgoing buffer
(while timestamps are adjusted to upstream regardless of variation).
While DISCONT is also marked in the perfect-timestamp case, this one
optionally (see `AudioEncoder:hard-resync`)
performs some additional steps, such as clipping of (early) input samples
or draining all currently remaining input data, depending on the direction
of the discontinuity.
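The tolerance check described above can be sketched as follows (a simplified model; real timestamps are `gst::ClockTime` values and the policy is internal to the base class):

```rust
// Models the documented perfect-timestamp policy: keep the internally
// generated ("perfect") timestamp while the upstream deviation stays within
// tolerance, otherwise resync to the upstream timestamp and mark DISCONT.
fn next_output_ts(perfect_ts: u64, upstream_ts: u64, tolerance: u64) -> (u64, bool) {
    let deviation = perfect_ts.abs_diff(upstream_ts);
    if deviation <= tolerance {
        (perfect_ts, false) // deviation discarded, no discont
    } else {
        (upstream_ts, true) // resync to upstream, DISCONT marked
    }
}

fn main() {
    // Deviation within tolerance: keep the perfect timestamp, no DISCONT.
    assert_eq!(next_output_ts(1_000, 1_005, 10), (1_000, false));
    // Deviation beyond tolerance: resync to upstream and mark DISCONT.
    assert_eq!(next_output_ts(1_000, 1_100, 10), (1_100, true));
    println!("ok");
}
```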
If perfect timestamps are arranged, it is also possible to request baseclass
(usually set by subclass) to provide additional buffer metadata (in OFFSET
and OFFSET_END) fields according to granule defined semantics currently
needed by oggmux. Specifically, OFFSET is set to granulepos (= sample count
including buffer) and OFFSET_END to corresponding timestamp (as determined
by same sample count and sample rate).
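The granule-defined OFFSET/OFFSET_END fields described here reduce to simple arithmetic (a sketch; `GST_SECOND` is the nanoseconds-per-second constant and the helper name is hypothetical):

```rust
// Granule-style metadata as described: OFFSET carries the running sample
// count including the current buffer (the granulepos), OFFSET_END the
// matching timestamp derived from that count and the sample rate.
const GST_SECOND: u64 = 1_000_000_000;

fn granule_fields(samples_so_far: u64, samples_in_buf: u64, rate: u64) -> (u64, u64) {
    let offset = samples_so_far + samples_in_buf; // granulepos
    let offset_end = offset * GST_SECOND / rate; // timestamp in nanoseconds
    (offset, offset_end)
}

fn main() {
    // One second of 44.1 kHz audio already pushed, one more second in this
    // buffer: OFFSET is the running sample count, OFFSET_END its timestamp.
    let (offset, offset_end) = granule_fields(44_100, 44_100, 44_100);
    assert_eq!(offset, 88_200);
    assert_eq!(offset_end, 2 * GST_SECOND);
    println!("ok");
}
```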
Things that the subclass needs to take care of:
* Provide pad templates
* Set source pad caps when appropriate
* Inform base class of buffer processing needs using context's
frame_samples and frame_bytes.
* Set user-configurable properties to sane defaults for format and
implementing codec at hand, e.g. those controlling timestamp behaviour
and discontinuity processing.
* Accept data in `handle_frame` and provide encoded results to
`AudioEncoder::finish_frame`.
# Implements
[`AudioEncoderExt`](trait.AudioEncoderExt.html), [`gst::ElementExt`](../gst/trait.ElementExt.html), [`gst::ObjectExt`](../gst/trait.ObjectExt.html), [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html)
<!-- trait AudioEncoderExt -->
Trait containing all `AudioEncoder` methods.
# Implementors
[`AudioEncoder`](struct.AudioEncoder.html)
<!-- trait AudioEncoderExt::fn allocate_output_buffer -->
Helper function that allocates a buffer to hold an encoded audio frame
for `self`'s current output format.
## `size`
size of the buffer
# Returns
allocated buffer
<!-- trait AudioEncoderExt::fn finish_frame -->
Collects encoded data and pushes encoded data downstream.
Source pad caps must be set when this is called.
If `samples` < 0, then best estimate is all samples provided to encoder
(subclass) so far. `buf` may be NULL, in which case next number of `samples`
are considered discarded, e.g. as a result of discontinuous transmission,
and a discontinuity is marked.
Note that samples received in `AudioEncoderClass.handle_frame`()
may be invalidated by a call to this function.
## `buffer`
encoded data
## `samples`
number of samples (per channel) represented by encoded data
# Returns
a `gst::FlowReturn` that should be escalated to caller (of caller)
<!-- trait AudioEncoderExt::fn get_allocator -->
Lets `AudioEncoder` sub-classes know the memory `allocator`
used by the base class and its `params`.
Unref the `allocator` after use.
## `allocator`
the `gst::Allocator`
used
## `params`
the
`gst::AllocationParams` of `allocator`
<!-- trait AudioEncoderExt::fn get_audio_info -->
# Returns
a `AudioInfo` describing the input audio format
<!-- trait AudioEncoderExt::fn get_drainable -->
Queries encoder drain handling.
# Returns
TRUE if drainable handling is enabled.
MT safe.
<!-- trait AudioEncoderExt::fn get_frame_max -->
# Returns
currently configured maximum handled frames
<!-- trait AudioEncoderExt::fn get_frame_samples_max -->
# Returns
currently maximum requested samples per frame
<!-- trait AudioEncoderExt::fn get_frame_samples_min -->
# Returns
currently minimum requested samples per frame
<!-- trait AudioEncoderExt::fn get_hard_min -->
Queries encoder hard minimum handling.
# Returns
TRUE if hard minimum handling is enabled.
MT safe.
<!-- trait AudioEncoderExt::fn get_latency -->
Sets the variables pointed to by `min` and `max` to the currently configured
latency.
## `min`
a pointer to storage to hold minimum latency
## `max`
a pointer to storage to hold maximum latency
<!-- trait AudioEncoderExt::fn get_lookahead -->
# Returns
currently configured encoder lookahead
<!-- trait AudioEncoderExt::fn get_mark_granule -->
Queries if the encoder will handle granule marking.
# Returns
TRUE if granule marking is enabled.
MT safe.
<!-- trait AudioEncoderExt::fn get_perfect_timestamp -->
Queries encoder perfect timestamp behaviour.
# Returns
TRUE if perfect timestamp setting enabled.
MT safe.
<!-- trait AudioEncoderExt::fn get_tolerance -->
Queries current audio jitter tolerance threshold.
# Returns
encoder audio jitter tolerance threshold.
MT safe.
<!-- trait AudioEncoderExt::fn merge_tags -->
Sets the audio encoder tags and how they should be merged with any
upstream stream tags. This will override any tags previously-set
with `AudioEncoderExt::merge_tags`.
Note that this is provided for convenience, and the subclass is
not required to use this and can still do tag handling on its own.
MT safe.
## `tags`
a `gst::TagList` to merge, or NULL to unset
previously-set tags
## `mode`
the `gst::TagMergeMode` to use, usually `gst::TagMergeMode::Replace`
<!-- trait AudioEncoderExt::fn negotiate -->
Negotiates with downstream elements based on the currently configured `gst::Caps`.
Unmarks GST_PAD_FLAG_NEED_RECONFIGURE in any case, but marks it again if
negotiation fails.
# Returns
`true` if the negotiation succeeded, else `false`.
<!-- trait AudioEncoderExt::fn proxy_getcaps -->
Returns caps that express `caps` (or sink template caps if `caps` == NULL)
restricted to channel/rate combinations supported by downstream elements
(e.g. muxers).
## `caps`
initial caps
## `filter`
filter caps
# Returns
a `gst::Caps` owned by caller
<!-- trait AudioEncoderExt::fn set_allocation_caps -->
Sets the caps to use in the allocation query, which may differ from the
caps set on the pad. Use this function before calling
`AudioEncoder::negotiate`. When set to `None`, the allocation query
will use the caps from the pad.
Feature: `v1_10`
## `allocation_caps`
a `gst::Caps` or `None`
<!-- trait AudioEncoderExt::fn set_drainable -->
Configures encoder drain handling. If drainable, subclass might
be handed a NULL buffer to have it return any leftover encoded data.
Otherwise, it is not considered so capable and will only ever be passed
real data.
MT safe.
## `enabled`
new state
<!-- trait AudioEncoderExt::fn set_frame_max -->
Sets the maximum number of frames accepted at once (assumed minimally 1).
Requires `frame_samples_min` and `frame_samples_max` to be equal.
Note: This value will be reset to 0 every time before
`AudioEncoderClass.set_format`() is called.
## `num`
number of frames
<!-- trait AudioEncoderExt::fn set_frame_samples_max -->
Sets the maximum number of samples (per channel) the subclass needs to be
handed; if 0, it will be handed all available samples.
If an exact number of samples is required, `AudioEncoderExt::set_frame_samples_min`
must be called with the same number.
Note: This value will be reset to 0 every time before
`AudioEncoderClass.set_format`() is called.
## `num`
number of samples per frame
<!-- trait AudioEncoderExt::fn set_frame_samples_min -->
Sets the minimum number of samples (per channel) the subclass needs to be
handed; if 0, it will be handed all available samples.
If an exact number of samples is required, `AudioEncoderExt::set_frame_samples_max`
must be called with the same number.
Note: This value will be reset to 0 every time before
`AudioEncoderClass.set_format`() is called.
## `num`
number of samples per frame
<!-- trait AudioEncoderExt::fn set_hard_min -->
Configures encoder hard minimum handling. If enabled, the subclass
will never be handed fewer samples than it configured, which otherwise
might occur near end-of-data handling. Instead, the leftover samples
will simply be discarded.
MT safe.
## `enabled`
new state
<!-- trait AudioEncoderExt::fn set_headers -->
Set the codec headers to be sent downstream whenever requested.
## `headers`
a list of
`gst::Buffer` containing the codec header
<!-- trait AudioEncoderExt::fn set_latency -->
Sets encoder latency.
## `min`
minimum latency
## `max`
maximum latency
<!-- trait AudioEncoderExt::fn set_lookahead -->
Sets encoder lookahead (in units of input rate samples)
Note: This value will be reset to 0 every time before
`AudioEncoderClass.set_format`() is called.
## `num`
lookahead
<!-- trait AudioEncoderExt::fn set_mark_granule -->
Enable or disable encoder granule handling.
MT safe.
## `enabled`
new state
<!-- trait AudioEncoderExt::fn set_output_format -->
Configure output caps on the srcpad of `self`.
## `caps`
`gst::Caps`
# Returns
`true` on success.
<!-- trait AudioEncoderExt::fn set_perfect_timestamp -->
Enable or disable encoder perfect output timestamp preference.
MT safe.
## `enabled`
new state
<!-- trait AudioEncoderExt::fn set_tolerance -->
Configures encoder audio jitter tolerance threshold.
MT safe.
## `tolerance`
new tolerance
<!-- enum AudioFormat -->
Enum value describing the most common audio formats.
<!-- enum AudioFormat::variant Unknown -->
@ -266,6 +1146,83 @@ Layout of the audio samples for the different channels.
interleaved audio
<!-- enum AudioLayout::variant NonInterleaved -->
non-interleaved audio
<!-- enum AudioRingBufferFormatType -->
The format of the samples in the ringbuffer.
<!-- enum AudioRingBufferFormatType::variant Raw -->
samples in linear or float
<!-- enum AudioRingBufferFormatType::variant MuLaw -->
samples in mulaw
<!-- enum AudioRingBufferFormatType::variant ALaw -->
samples in alaw
<!-- enum AudioRingBufferFormatType::variant ImaAdpcm -->
samples in ima adpcm
<!-- enum AudioRingBufferFormatType::variant Mpeg -->
samples in mpeg audio (but not AAC) format
<!-- enum AudioRingBufferFormatType::variant Gsm -->
samples in gsm format
<!-- enum AudioRingBufferFormatType::variant Iec958 -->
samples in IEC958 frames (e.g. AC3)
<!-- enum AudioRingBufferFormatType::variant Ac3 -->
samples in AC3 format
<!-- enum AudioRingBufferFormatType::variant Eac3 -->
samples in EAC3 format
<!-- enum AudioRingBufferFormatType::variant Dts -->
samples in DTS format
<!-- enum AudioRingBufferFormatType::variant Mpeg2Aac -->
samples in MPEG-2 AAC ADTS format
<!-- enum AudioRingBufferFormatType::variant Mpeg4Aac -->
samples in MPEG-4 AAC ADTS format
<!-- enum AudioRingBufferFormatType::variant Mpeg2AacRaw -->
samples in MPEG-2 AAC raw format (Since: 1.12)
<!-- enum AudioRingBufferFormatType::variant Mpeg4AacRaw -->
samples in MPEG-4 AAC raw format (Since: 1.12)
<!-- enum AudioRingBufferFormatType::variant Flac -->
samples in FLAC format (Since: 1.12)
<!-- struct AudioSink -->
This is the simplest base class for audio sinks that only requires
subclasses to implement a set of simple functions:
* `open()`: Open the device.
* `prepare()`: Configure the device with the specified format.
* `write()`: Write samples to the device.
* `reset()`: Unblock writes and flush the device.
* `delay()`: Get the number of samples written but not yet played
by the device.
* `unprepare()`: Undo operations done by prepare.
* `close()`: Close the device.
All scheduling of samples and timestamps is done in this base class
together with `AudioBaseSink` using a default implementation of an
`AudioRingBuffer` that uses threads.
# Implements
[`AudioBaseSinkExt`](trait.AudioBaseSinkExt.html), [`gst_base::BaseSinkExt`](../gst_base/trait.BaseSinkExt.html), [`gst::ElementExt`](../gst/trait.ElementExt.html), [`gst::ObjectExt`](../gst/trait.ObjectExt.html), [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html)
<!-- struct AudioSrc -->
This is the simplest base class for audio sources that only requires
subclasses to implement a set of simple functions:
* `open()`: Open the device.
* `prepare()`: Configure the device with the specified format.
* `read()`: Read samples from the device.
* `reset()`: Unblock reads and flush the device.
* `delay()`: Get the number of samples in the device but not yet read.
* `unprepare()`: Undo operations done by prepare.
* `close()`: Close the device.
All scheduling of samples and timestamps is done in this base class
together with `AudioBaseSrc` using a default implementation of an
`AudioRingBuffer` that uses threads.
# Implements
[`AudioBaseSrcExt`](trait.AudioBaseSrcExt.html), [`gst_base::BaseSrcExt`](../gst_base/trait.BaseSrcExt.html), [`gst::ElementExt`](../gst/trait.ElementExt.html), [`gst::ObjectExt`](../gst/trait.ObjectExt.html), [`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html)
<!-- struct AudioStreamAlign -->
`AudioStreamAlign` provides a helper object that helps tracking audio
stream alignment and discontinuities, and detects discontinuities if
@ -678,7 +678,7 @@ Feature: `v1_16`
# Returns
The running time based on the position
<!-- trait AggregatorExt::fn get_property_min_upstream_latency -->
Force minimum upstream latency (in nanoseconds). When sources with a
higher latency are expected to be plugged in dynamically after the
aggregator has started playing, this allows overriding the minimum
@ -687,7 +687,7 @@ account when larger than the actually reported minimum latency.
Feature: `v1_16`
<!-- trait AggregatorExt::fn set_property_min_upstream_latency -->
Force minimum upstream latency (in nanoseconds). When sources with a
higher latency are expected to be plugged in dynamically after the
aggregator has started playing, this allows overriding the minimum
@ -762,12 +762,12 @@ Feature: `v1_14`
The buffer in `self` or NULL if no buffer was
queued. You should unref the buffer after usage.
<!-- trait AggregatorPadExt::fn get_property_emit_signals -->
Enables the emission of signals such as `AggregatorPad::buffer-consumed`
Feature: `v1_16`
<!-- trait AggregatorPadExt::fn set_property_emit_signals -->
Enables the emission of signals such as `AggregatorPad::buffer-consumed`
Feature: `v1_16`
@ -1097,14 +1097,14 @@ This function can be used to set the timestamps based on the offset
into the frame data that the picture starts.
## `offset`
offset into current buffer
<!-- trait BaseParseExt::fn get_property_disable_passthrough -->
If set to `true`, baseparse will unconditionally force parsing of the
incoming data. This can be required in the rare cases where the incoming
side-data (caps, pts, dts, ...) is not trusted by the user, who wants to
force validation and parsing of the incoming data.
If set to `false`, decision of whether to parse the data or not is up to
the implementation (standard behaviour).
<!-- trait BaseParseExt::fn set_property_disable_passthrough -->
If set to `true`, baseparse will unconditionally force parsing of the
incoming data. This can be required in the rare cases where the incoming
side-data (caps, pts, dts, ...) is not trusted by the user and wants to
@ -1589,63 +1589,63 @@ not required.
The amount of bytes to pull when operating in pull mode.
<!-- trait BaseSinkExt::fn set_property_blocksize -->
The amount of bytes to pull when operating in pull mode.
<!-- trait BaseSinkExt::fn get_property_enable_last_sample -->
Enable the last-sample property. If `false`, basesink doesn't keep a
reference to the last buffer arrived and the last-sample property is always
set to `None`. This can be useful if you need buffers to be released as soon
as possible, eg. if you're using a buffer pool.
<!-- trait BaseSinkExt::fn set_property_enable_last_sample -->
Enable the last-sample property. If `false`, basesink doesn't keep a
reference to the last buffer arrived and the last-sample property is always
set to `None`. This can be useful if you need buffers to be released as soon
as possible, eg. if you're using a buffer pool.
<!-- trait BaseSinkExt::fn get_property_last_sample -->
The last buffer that arrived in the sink and was used for preroll or for
rendering. This property can be used to generate thumbnails. This property
can be `None` when the sink has not yet received a buffer.
<!-- trait BaseSinkExt::fn get_property_max_bitrate -->
Control the maximum amount of bits that will be rendered per second.
Setting this property to a value bigger than 0 will make the sink delay
rendering of the buffers when it would exceed the max-bitrate.
<!-- trait BaseSinkExt::fn set_property_max_bitrate -->
Control the maximum amount of bits that will be rendered per second.
Setting this property to a value bigger than 0 will make the sink delay
rendering of the buffers when it would exceed the max-bitrate.
<!-- trait BaseSinkExt::fn get_property_processing_deadline -->
Maximum amount of time (in nanoseconds) that the pipeline can take
for processing the buffer. This is added to the latency of live
pipelines.
Feature: `v1_16`
<!-- trait BaseSinkExt::fn set_property_processing_deadline -->
Maximum amount of time (in nanoseconds) that the pipeline can take
for processing the buffer. This is added to the latency of live
pipelines.
Feature: `v1_16`
<!-- trait BaseSinkExt::fn get_property_render_delay -->
The additional delay between synchronisation and actual rendering of the
media. This property will add additional latency to the device in order to
make other sinks compensate for the delay.
<!-- trait BaseSinkExt::fn set_property_render_delay -->
The additional delay between synchronisation and actual rendering of the
media. This property will add additional latency to the device in order to
make other sinks compensate for the delay.
<!-- trait BaseSinkExt::fn get_property_throttle_time -->
The time to insert between buffers. This property can be used to control
the maximum amount of buffers per second to render. Setting this property
to a value bigger than 0 will make the sink create THROTTLE QoS events.
<!-- trait BaseSinkExt::fn set_property_throttle_time -->
The time to insert between buffers. This property can be used to control
the maximum amount of buffers per second to render. Setting this property
to a value bigger than 0 will make the sink create THROTTLE QoS events.
<!-- trait BaseSinkExt::fn get_property_ts_offset -->
Controls the final synchronisation, a negative value will render the buffer
earlier while a positive value delays playback. This property can be
used to fix synchronisation in bad files.
<!-- trait BaseSinkExt::fn set_property_ts_offset -->
Controls the final synchronisation, a negative value will render the buffer
earlier while a positive value delays playback. This property can be
used to fix synchronisation in bad files.
@ -1318,13 +1318,13 @@ Blocks until at least `count` clock notifications have been requested from
use `TestClock::wait_for_multiple_pending_ids` instead.
## `count`
the number of pending clock notifications to wait for
<!-- impl TestClock::fn get_property_start_time -->
When a `TestClock` is constructed it will have a certain start time set.
If the clock was created using `TestClock::new_with_start_time` then
this property contains the value of the `start_time` argument. If
`TestClock::new` was called the clock started at time zero, and thus
this property contains the value 0.
<!-- impl TestClock::fn set_property_start_time -->
When a `TestClock` is constructed it will have a certain start time set.
If the clock was created using `TestClock::new_with_start_time` then
this property contains the value of the `start_time` argument. If
@ -386,9 +386,9 @@ from the splitting or `None` if the clip can't be split.
The GESLayer where this clip is being used. If you want to connect to its
notify signal you should connect to it with g_signal_connect_after as the
signal emission can be stopped in the first phase.
<!-- trait ClipExt::fn get_property_supported_formats -->
The formats supported by the clip.
<!-- trait ClipExt::fn set_property_supported_formats -->
The formats supported by the clip.
<!-- struct Container -->
The `Container` base class.
@ -557,12 +557,12 @@ The gst-launch like bin description of the effect
a newly created `Effect`, or `None` if something went
wrong.
<!-- trait EffectExt::fn get_property_bin_description -->
The description of the effect bin with a gst-launch-style
pipeline description.
Example: "videobalance saturation=1.5 hue=+0.5"
<!-- trait EffectExt::fn set_property_bin_description -->
The description of the effect bin with a gst-launch-style
pipeline description.
@ -629,21 +629,21 @@ The new empty group.
The duration (in nanoseconds) which will be used in the container
<!-- trait GroupExt::fn set_property_duration -->
The duration (in nanoseconds) which will be used in the container
<!-- trait GroupExt::fn get_property_in_point -->
The in-point at which this `Group` will start outputting data
from its contents (in nanoseconds).
E.g. an in-point of 5 seconds means that the first outputted buffer will
be the one located 5 seconds into the controlled resource.
<!-- trait GroupExt::fn set_property_in_point -->
The in-point at which this `Group` will start outputting data
from its contents (in nanoseconds).
E.g. an in-point of 5 seconds means that the first outputted buffer will
be the one located 5 seconds into the controlled resource.
<!-- trait GroupExt::fn get_property_max_duration -->
The maximum duration (in nanoseconds) of the `Group`.
<!-- trait GroupExt::fn set_property_max_duration -->
The maximum duration (in nanoseconds) of the `Group`.
<!-- trait GroupExt::fn get_property_start -->
The position of the object in its container (in nanoseconds).
@ -795,9 +795,9 @@ the `Clip` that was added.
Will be emitted after the clip was removed from the layer.
## `clip`
the `Clip` that was removed
<!-- trait LayerExt::fn get_property_auto_transition -->
Sets whether transitions are added automagically when clips overlap.
<!-- trait LayerExt::fn set_property_auto_transition -->
Sets whether transitions are added automagically when clips overlap.
<!-- trait LayerExt::fn get_property_priority -->
The priority of the layer in the `Timeline`. 0 is the highest
@ -967,9 +967,9 @@ the `Timeline` to set on the `self`.
`true` if the `timeline` could be successfully set on the `self`,
else `false`.
<!-- trait GESPipelineExt::fn get_property_audio_sink -->
Audio sink for the preview.
<!-- trait GESPipelineExt::fn set_property_audio_sink -->
Audio sink for the preview.
<!-- trait GESPipelineExt::fn get_property_mode -->
Pipeline mode. See `GESPipelineExt::set_mode` for more
@ -983,9 +983,9 @@ Timeline to use in this pipeline. See also
<!-- trait GESPipelineExt::fn set_property_timeline -->
Timeline to use in this pipeline. See also
`GESPipelineExt::set_timeline` for more info.
<!-- trait GESPipelineExt::fn get_property_video_sink -->
Video sink for the preview.
<!-- trait GESPipelineExt::fn set_property_video_sink -->
Video sink for the preview.
<!-- struct Project -->
The `Project` is used to control a set of `Asset` and is a
@ -1427,6 +1427,9 @@ we land at that position in the stack of layers inside
the timeline. If `new_layer_priority` is greater than the number
of layers present in the timeline, it will move to the end of the
stack of layers.
Feature: `v1_16`
## `layer`
The layer to move at `new_layer_priority`
## `new_layer_priority`
@ -1547,16 +1550,16 @@ the `Track` that was added to the timeline
Will be emitted after the track was removed from the timeline.
## `track`
the `Track` that was removed from the timeline
<!-- trait TimelineExt::fn get_property_auto_transition -->
Sets whether transitions are added automagically when clips overlap.
<!-- trait TimelineExt::fn set_property_auto_transition -->
Sets whether transitions are added automagically when clips overlap.
<!-- trait TimelineExt::fn get_property_duration -->
Current duration (in nanoseconds) of the `Timeline`
<!-- trait TimelineExt::fn get_property_snapping_distance -->
Distance (in nanoseconds) from which a moving object will snap
with its neighbours. 0 means no snapping.
<!-- trait TimelineExt::fn set_property_snapping_distance -->
Distance (in nanoseconds) from which a moving object will snap
with its neighbours. 0 means no snapping.
<!-- struct TimelineElement -->
@ -1632,6 +1635,9 @@ The `duration` of `self`
The `inpoint` of `self`
<!-- trait TimelineElementExt::fn get_layer_priority -->
Feature: `v1_16`
# Returns
The priority of the first layer the element is in (note that only
@ -1725,7 +1731,8 @@ be copied to, meaning it will become the start of `self`
# Returns
New element resulting of pasting `self`
or `None`
<!-- trait TimelineElementExt::fn ripple -->
Edits `self` in ripple mode. It allows you to modify the
start of `self` and move the following neighbours accordingly.
@ -1916,21 +1923,21 @@ the property that changed
The duration (in nanoseconds) which will be used in the container
<!-- trait TimelineElementExt::fn set_property_duration -->
The duration (in nanoseconds) which will be used in the container
<!-- trait TimelineElementExt::fn get_property_in_point -->
The in-point at which this `TimelineElement` will start outputting data
from its contents (in nanoseconds).
E.g. an in-point of 5 seconds means that the first outputted buffer will
be the one located 5 seconds into the controlled resource.
<!-- trait TimelineElementExt::fn set_property_in_point -->
The in-point at which this `TimelineElement` will start outputting data
from its contents (in nanoseconds).
E.g. an in-point of 5 seconds means that the first outputted buffer will
be the one located 5 seconds into the controlled resource.
<!-- trait TimelineElementExt::fn get_property_max_duration -->
The maximum duration (in nanoseconds) of the `TimelineElement`.
<!-- trait TimelineElementExt::fn set_property_max_duration -->
The maximum duration (in nanoseconds) of the `TimelineElement`.
<!-- trait TimelineElementExt::fn get_property_name -->
The name of the object
@ -2106,22 +2113,22 @@ Default value: 0
Whether layer mixing is activated or not on the track.
<!-- trait GESTrackExt::fn set_property_mixing -->
Whether layer mixing is activated or not on the track.
<!-- trait GESTrackExt::fn get_property_restriction_caps -->
Caps used to filter/choose the output stream.
Default value: `GST_CAPS_ANY`.
<!-- trait GESTrackExt::fn set_property_restriction_caps -->
Caps used to filter/choose the output stream.
Default value: `GST_CAPS_ANY`.
<!-- trait GESTrackExt::fn get_property_track_type -->
Type of stream the track outputs. This is used when creating the `Track`
to specify in generic terms what type of content will be outputted.
It also serves as a 'fast' way to check what type of data will be outputted
from the `Track` without having to actually check the `Track`'s caps
property.
<!-- trait GESTrackExt::fn set_property_track_type -->
Type of stream the track outputs. This is used when creating the `Track`
to specify in generic terms what type of content will be outputted.
@ -2481,10 +2488,10 @@ Sets whether the clip is a still image or not.
Sets whether the audio track of this clip is muted or not.
## `mute`
`true` to mute `self` audio track, `false` to unmute it
<!-- trait UriClipExt::fn get_property_is_image -->
Whether this uri clip represents a still image or not. This must be set
before create_track_elements is called.
<!-- trait UriClipExt::fn set_property_is_image -->
Whether this uri clip represents a still image or not. This must be set
before create_track_elements is called.
<!-- trait UriClipExt::fn get_property_mute -->
@ -2512,6 +2519,9 @@ Trait containing all `UriClipAsset` methods.
[`UriClipAsset`](struct.UriClipAsset.html)
<!-- impl UriClipAsset::fn finish -->
Finalize the request of an async `UriClipAsset`
Feature: `v1_16`
## `res`
The `gio::AsyncResult` from which to get the newly created `UriClipAsset`
@ -65,7 +65,7 @@ asynchronous mode.
<!-- impl Discoverer::fn stop -->
Stop the discovery of any pending URIs and clears the list of
pending URIS (if any).
<!-- impl Discoverer::fn connect_discovered -->
Will be emitted in async mode when all information on a URI could be
discovered, or an error occurred.
@ -79,9 +79,9 @@ the results `DiscovererInfo`
discovery. You must not free
this `glib::Error`, it will be freed by
the discoverer.
<!-- impl Discoverer::fn connect_finished -->
Will be emitted in async mode when all pending URIs have been processed.
<!-- impl Discoverer::fn connect_source_setup -->
This signal is emitted after the source element has been created for
the URI being discovered, so it can be configured by setting additional
properties (e.g. set a proxy server for an http source, or set the device
@ -91,15 +91,15 @@ This signal is usually emitted from the context of a GStreamer streaming
thread.
## `source`
source element
<!-- impl Discoverer::fn connect_starting -->
Will be emitted when the discoverer starts analyzing the pending URIs
<!-- impl Discoverer::fn get_property_timeout -->
The duration (in nanoseconds) after which the discovery of an individual
URI will time out.
If the discovery of a URI times out, the `DiscovererResult::Timeout` will be
set on the result flags.
<!-- impl Discoverer::fn set_property_timeout -->
The duration (in nanoseconds) after which the discovery of an individual
URI will time out.
@ -1 +1,172 @@
<!-- file * -->
<!-- enum RTCPFBType -->
Different types of feedback messages.
<!-- enum RTCPFBType::variant FbTypeInvalid -->
Invalid type
<!-- enum RTCPFBType::variant RtpfbTypeNack -->
Generic NACK
<!-- enum RTCPFBType::variant RtpfbTypeTmmbr -->
Temporary Maximum Media Stream Bit Rate Request
<!-- enum RTCPFBType::variant RtpfbTypeTmmbn -->
Temporary Maximum Media Stream Bit Rate
Notification
<!-- enum RTCPFBType::variant RtpfbTypeRtcpSrReq -->
Request an SR packet for early
synchronization
<!-- enum RTCPFBType::variant PsfbTypePli -->
Picture Loss Indication
<!-- enum RTCPFBType::variant PsfbTypeSli -->
Slice Loss Indication
<!-- enum RTCPFBType::variant PsfbTypeRpsi -->
Reference Picture Selection Indication
<!-- enum RTCPFBType::variant PsfbTypeAfb -->
Application layer Feedback
<!-- enum RTCPFBType::variant PsfbTypeFir -->
Full Intra Request Command
<!-- enum RTCPFBType::variant PsfbTypeTstr -->
Temporal-Spatial Trade-off Request
<!-- enum RTCPFBType::variant PsfbTypeTstn -->
Temporal-Spatial Trade-off Notification
<!-- enum RTCPFBType::variant PsfbTypeVbcn -->
Video Back Channel Message
<!-- enum RTCPSDESType -->
Different types of SDES content.
<!-- enum RTCPSDESType::variant Invalid -->
Invalid SDES entry
<!-- enum RTCPSDESType::variant End -->
End of SDES list
<!-- enum RTCPSDESType::variant Cname -->
Canonical name
<!-- enum RTCPSDESType::variant Name -->
User name
<!-- enum RTCPSDESType::variant Email -->
User's electronic mail address
<!-- enum RTCPSDESType::variant Phone -->
User's phone number
<!-- enum RTCPSDESType::variant Loc -->
Geographic user location
<!-- enum RTCPSDESType::variant Tool -->
Name of application or tool
<!-- enum RTCPSDESType::variant Note -->
Notice about the source
<!-- enum RTCPSDESType::variant Priv -->
Private extensions
<!-- enum RTCPType -->
Different RTCP packet types.
<!-- enum RTCPType::variant Invalid -->
Invalid type
<!-- enum RTCPType::variant Sr -->
Sender report
<!-- enum RTCPType::variant Rr -->
Receiver report
<!-- enum RTCPType::variant Sdes -->
Source description
<!-- enum RTCPType::variant Bye -->
Goodbye
<!-- enum RTCPType::variant App -->
Application defined
<!-- enum RTCPType::variant Rtpfb -->
Transport layer feedback.
<!-- enum RTCPType::variant Psfb -->
Payload-specific feedback.
<!-- enum RTCPType::variant Xr -->
Extended report.
<!-- enum RTCPXRType -->
Types of RTCP Extended Reports. These are defined in RFC 3611 and other RFCs
according to the [IANA registry](https://www.iana.org/assignments/rtcp-xr-block-types/rtcp-xr-block-types.xhtml).
<!-- enum RTCPXRType::variant Invalid -->
Invalid XR Report Block
<!-- enum RTCPXRType::variant Lrle -->
Loss RLE Report Block
<!-- enum RTCPXRType::variant Drle -->
Duplicate RLE Report Block
<!-- enum RTCPXRType::variant Prt -->
Packet Receipt Times Report Block
<!-- enum RTCPXRType::variant Rrt -->
Receiver Reference Time Report Block
<!-- enum RTCPXRType::variant Dlrr -->
Delay since the last Receiver Report
<!-- enum RTCPXRType::variant Ssumm -->
Statistics Summary Report Block
<!-- enum RTCPXRType::variant VoipMetrics -->
VoIP Metrics Report Block
Feature: `v1_16`
<!-- enum RTPPayload -->
Standard predefined fixed payload types.
The official list is at:
http://www.iana.org/assignments/rtp-parameters
Audio:
reserved: 19
unassigned: 20-23,
Video:
unassigned: 24, 27, 29, 30, 35-71, 77-95
Reserved for RTCP conflict avoidance: 72-76
<!-- enum RTPPayload::variant Pcmu -->
ITU-T G.711 mu-law audio (RFC 3551)
<!-- enum RTPPayload::variant 1016 -->
RFC 3551 says reserved
<!-- enum RTPPayload::variant G721 -->
RFC 3551 says reserved
<!-- enum RTPPayload::variant Gsm -->
GSM audio
<!-- enum RTPPayload::variant G723 -->
ITU G.723.1 audio
<!-- enum RTPPayload::variant Dvi48000 -->
IMA ADPCM wave type (RFC 3551)
<!-- enum RTPPayload::variant Dvi416000 -->
IMA ADPCM wave type (RFC 3551)
<!-- enum RTPPayload::variant Lpc -->
experimental linear predictive encoding
<!-- enum RTPPayload::variant Pcma -->
ITU-T G.711 A-law audio (RFC 3551)
<!-- enum RTPPayload::variant G722 -->
ITU-T G.722 (RFC 3551)
<!-- enum RTPPayload::variant L16Stereo -->
stereo PCM
<!-- enum RTPPayload::variant L16Mono -->
mono PCM
<!-- enum RTPPayload::variant Qcelp -->
EIA & TIA standard IS-733
<!-- enum RTPPayload::variant Cn -->
Comfort Noise (RFC 3389)
<!-- enum RTPPayload::variant Mpa -->
Audio MPEG 1-3.
<!-- enum RTPPayload::variant G728 -->
ITU-T G.728 Speech coder (RFC 3551)
<!-- enum RTPPayload::variant Dvi411025 -->
IMA ADPCM wave type (RFC 3551)
<!-- enum RTPPayload::variant Dvi422050 -->
IMA ADPCM wave type (RFC 3551)
<!-- enum RTPPayload::variant G729 -->
ITU-T G.729 Speech coder (RFC 3551)
<!-- enum RTPPayload::variant Cellb -->
See RFC 2029
<!-- enum RTPPayload::variant Jpeg -->
ISO Standards 10918-1 and 10918-2 (RFC 2435)
<!-- enum RTPPayload::variant Nv -->
nv encoding by Ron Frederick
<!-- enum RTPPayload::variant H261 -->
ITU-T Recommendation H.261 (RFC 2032)
<!-- enum RTPPayload::variant Mpv -->
Video MPEG 1 & 2 (RFC 2250)
<!-- enum RTPPayload::variant Mp2t -->
MPEG-2 transport stream (RFC 2250)
<!-- enum RTPPayload::variant H263 -->
Video H263 (RFC 2190)
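The static payload numbers above also fix the RTP clock rate (RFC 3551, sections 4.5 and 5). A plain-Rust sketch of an illustrative subset of that table (a local helper, not a binding API):

```rust
// RTP clock rates for a few static payload numbers (RFC 3551).
// Illustrative subset; the full assignment list is in the IANA registry.
fn clock_rate(payload_type: u8) -> Option<u32> {
    match payload_type {
        0 | 8 => Some(8_000),    // PCMU / PCMA
        9 => Some(8_000),        // G722: RTP clock stays 8 kHz despite 16 kHz audio
        10 | 11 => Some(44_100), // L16 stereo / mono
        14 => Some(90_000),      // MPA
        26 | 31 | 32 | 33 | 34 => Some(90_000), // JPEG, H261, MPV, MP2T, H263
        72..=76 => None,         // reserved for RTCP conflict avoidance
        _ => None,               // dynamic, unassigned, or not covered here
    }
}
```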
<!-- enum RTPProfile -->
The transfer profile to use.
<!-- enum RTPProfile::variant Unknown -->
invalid profile
<!-- enum RTPProfile::variant Avp -->
the Audio/Visual profile (RFC 3551)
<!-- enum RTPProfile::variant Savp -->
the secure Audio/Visual profile (RFC 3711)
<!-- enum RTPProfile::variant Avpf -->
the Audio/Visual profile with feedback (RFC 4585)
<!-- enum RTPProfile::variant Savpf -->
the secure Audio/Visual profile with feedback (RFC 5124)
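Each profile corresponds to an SDP "proto" token. A sketch using a local enum mirroring `RTPProfile` (the generated enum lives in the bindings):

```rust
// SDP "proto" tokens for the RTP transfer profiles.
#[derive(Clone, Copy)]
enum Profile { Unknown, Avp, Savp, Avpf, Savpf }

fn sdp_proto(p: Profile) -> Option<&'static str> {
    match p {
        Profile::Avp => Some("RTP/AVP"),     // RFC 3551
        Profile::Savp => Some("RTP/SAVP"),   // RFC 3711
        Profile::Avpf => Some("RTP/AVPF"),   // RFC 4585
        Profile::Savpf => Some("RTP/SAVPF"), // RFC 5124
        Profile::Unknown => None,
    }
}
```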


@ -1,4 +1,14 @@
<!-- file * -->
<!-- enum WebRTCBundlePolicy -->
GST_WEBRTC_BUNDLE_POLICY_NONE: none
GST_WEBRTC_BUNDLE_POLICY_BALANCED: balanced
GST_WEBRTC_BUNDLE_POLICY_MAX_COMPAT: max-compat
GST_WEBRTC_BUNDLE_POLICY_MAX_BUNDLE: max-bundle
See https://tools.ietf.org/html/draft-ietf-rtcweb-jsep-24#section-4.1.1
for more information.
Feature: `v1_16`
<!-- enum WebRTCDTLSSetup -->
GST_WEBRTC_DTLS_SETUP_NONE: none
GST_WEBRTC_DTLS_SETUP_ACTPASS: actpass
@ -16,6 +26,24 @@ GST_WEBRTC_DTLS_TRANSPORT_STATE_CLOSED: closed
GST_WEBRTC_DTLS_TRANSPORT_STATE_FAILED: failed
GST_WEBRTC_DTLS_TRANSPORT_STATE_CONNECTING: connecting
GST_WEBRTC_DTLS_TRANSPORT_STATE_CONNECTED: connected
<!-- enum WebRTCDataChannelState -->
GST_WEBRTC_DATA_CHANNEL_STATE_NEW: new
GST_WEBRTC_DATA_CHANNEL_STATE_CONNECTING: connecting
GST_WEBRTC_DATA_CHANNEL_STATE_OPEN: open
GST_WEBRTC_DATA_CHANNEL_STATE_CLOSING: closing
GST_WEBRTC_DATA_CHANNEL_STATE_CLOSED: closed
See <ulink url="http://w3c.github.io/webrtc-pc/#dom-rtcdatachannelstate">http://w3c.github.io/webrtc-pc/#dom-rtcdatachannelstate</ulink>
Feature: `v1_16`
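Per the RTCDataChannelState spec linked above, a data channel only moves forward through this lifecycle and never reopens. A sketch with a local enum mirroring `WebRTCDataChannelState` (illustration only, not a binding API):

```rust
// Data channel lifecycle, in spec order; PartialOrd follows declaration order.
#[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
enum ChannelState { New, Connecting, Open, Closing, Closed }

// Transitions only go forward; e.g. Closed never returns to Open.
fn can_transition(from: ChannelState, to: ChannelState) -> bool {
    to > from
}
```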
<!-- enum WebRTCFECType -->
<!-- enum WebRTCFECType::variant None -->
none
<!-- enum WebRTCFECType::variant UlpRed -->
ulpfec + red
Feature: `v1_14_1`
<!-- enum WebRTCICEComponent -->
GST_WEBRTC_ICE_COMPONENT_RTP,
GST_WEBRTC_ICE_COMPONENT_RTCP,
@ -42,6 +70,14 @@ GST_WEBRTC_ICE_ROLE_CONTROLLING: controlling
# Implements
[`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html)
<!-- enum WebRTCICETransportPolicy -->
GST_WEBRTC_ICE_TRANSPORT_POLICY_ALL: all
GST_WEBRTC_ICE_TRANSPORT_POLICY_RELAY: relay
See https://tools.ietf.org/html/draft-ietf-rtcweb-jsep-24#section-4.1.1
for more information.
Feature: `v1_16`
<!-- enum WebRTCPeerConnectionState -->
GST_WEBRTC_PEER_CONNECTION_STATE_NEW: new
GST_WEBRTC_PEER_CONNECTION_STATE_CONNECTING: connecting
@ -50,6 +86,15 @@ GST_WEBRTC_PEER_CONNECTION_STATE_DISCONNECTED: disconnected
GST_WEBRTC_PEER_CONNECTION_STATE_FAILED: failed
GST_WEBRTC_PEER_CONNECTION_STATE_CLOSED: closed
See <ulink url="http://w3c.github.io/webrtc-pc/#dom-rtcpeerconnectionstate">http://w3c.github.io/webrtc-pc/#dom-rtcpeerconnectionstate</ulink>
<!-- enum WebRTCPriorityType -->
GST_WEBRTC_PRIORITY_TYPE_VERY_LOW: very-low
GST_WEBRTC_PRIORITY_TYPE_LOW: low
GST_WEBRTC_PRIORITY_TYPE_MEDIUM: medium
GST_WEBRTC_PRIORITY_TYPE_HIGH: high
See <ulink url="http://w3c.github.io/webrtc-pc/#dom-rtcprioritytype">http://w3c.github.io/webrtc-pc/#dom-rtcprioritytype</ulink>
Feature: `v1_16`
<!-- struct WebRTCRTPReceiver -->
@ -69,6 +114,15 @@ See <ulink url="http://w3c.github.io/webrtc-pc/#dom-rtcpeerconnectionstate">htt
[`glib::object::ObjectExt`](../glib/object/trait.ObjectExt.html)
<!-- enum WebRTCRTPTransceiverDirection -->
<!-- enum WebRTCSCTPTransportState -->
GST_WEBRTC_SCTP_TRANSPORT_STATE_NEW: new
GST_WEBRTC_SCTP_TRANSPORT_STATE_CONNECTING: connecting
GST_WEBRTC_SCTP_TRANSPORT_STATE_CONNECTED: connected
GST_WEBRTC_SCTP_TRANSPORT_STATE_CLOSED: closed
See <ulink url="http://w3c.github.io/webrtc-pc/#dom-rtcsctptransportstate">http://w3c.github.io/webrtc-pc/#dom-rtcsctptransportstate</ulink>
Feature: `v1_16`
<!-- enum WebRTCSDPType -->
GST_WEBRTC_SDP_TYPE_OFFER: offer
GST_WEBRTC_SDP_TYPE_PRANSWER: pranswer


@ -442,15 +442,15 @@ the `Element` that was added to the bin
Will be emitted after the element was removed from the bin.
## `element`
the `Element` that was removed from the bin
<!-- trait GstBinExt::fn get_property_async-handling -->
<!-- trait GstBinExt::fn get_property_async_handling -->
If set to `true`, the bin will handle asynchronous state changes.
This should be used only if the bin subclass is modifying the state
of its children on its own.
<!-- trait GstBinExt::fn set_property_async-handling -->
<!-- trait GstBinExt::fn set_property_async_handling -->
If set to `true`, the bin will handle asynchronous state changes.
This should be used only if the bin subclass is modifying the state
of its children on its own.
<!-- trait GstBinExt::fn get_property_message-forward -->
<!-- trait GstBinExt::fn get_property_message_forward -->
Forward all children messages, even those that would normally be filtered by
the bin. This can be interesting when one wants to be notified of the EOS
state of individual elements, for example.
@ -459,7 +459,7 @@ The messages are converted to an ELEMENT message with the bin as the
source. The structure of the message is named 'GstBinForwarded' and contains
a field named 'message' of type GST_TYPE_MESSAGE that contains the original
forwarded message.
<!-- trait GstBinExt::fn set_property_message-forward -->
<!-- trait GstBinExt::fn set_property_message_forward -->
Forward all children messages, even those that would normally be filtered by
the bin. This can be interesting when one wants to be notified of the EOS
state of individual elements, for example.
@ -2049,13 +2049,13 @@ a `Message` matching the
usage.
MT safe.
<!-- trait BusExt::fn connect_message -->
<!-- impl Bus::fn connect_message -->
A message has been posted on the bus. This signal is emitted from a
GSource added to the mainloop. This signal will only be emitted when
there is a mainloop running.
## `message`
the message that has been posted asynchronously
<!-- trait BusExt::fn connect_sync_message -->
<!-- impl Bus::fn connect_sync_message -->
A message has been posted on the bus. This signal is emitted from the
thread that posted the message so one has to be careful with locking.
@ -3299,6 +3299,12 @@ This signal will be emitted from an arbitrary thread, most likely not
the application's main thread.
## `synced`
if the clock is synced now
<!-- enum ClockEntryType -->
The type of the clock entry
<!-- enum ClockEntryType::variant Single -->
a single shot timeout
<!-- enum ClockEntryType::variant Periodic -->
a periodic timeout request
<!-- enum ClockReturn -->
The return value of a clock operation.
<!-- enum ClockReturn::variant Ok -->
@ -9913,35 +9919,35 @@ Unref after usage.
Emit the pad-created signal for this template when created by this pad.
## `pad`
the `Pad` that created it
<!-- trait PadTemplateExt::fn connect_pad_created -->
<!-- impl PadTemplate::fn connect_pad_created -->
This signal is fired when an element creates a pad from this template.
## `pad`
the pad that was created.
<!-- trait PadTemplateExt::fn get_property_caps -->
<!-- impl PadTemplate::fn get_property_caps -->
The capabilities of the pad described by the pad template.
<!-- trait PadTemplateExt::fn set_property_caps -->
<!-- impl PadTemplate::fn set_property_caps -->
The capabilities of the pad described by the pad template.
<!-- trait PadTemplateExt::fn get_property_direction -->
<!-- impl PadTemplate::fn get_property_direction -->
The direction of the pad described by the pad template.
<!-- trait PadTemplateExt::fn set_property_direction -->
<!-- impl PadTemplate::fn set_property_direction -->
The direction of the pad described by the pad template.
<!-- trait PadTemplateExt::fn get_property_gtype -->
<!-- impl PadTemplate::fn get_property_gtype -->
The type of the pad described by the pad template.
Feature: `v1_14`
<!-- trait PadTemplateExt::fn set_property_gtype -->
<!-- impl PadTemplate::fn set_property_gtype -->
The type of the pad described by the pad template.
Feature: `v1_14`
<!-- trait PadTemplateExt::fn get_property_name-template -->
<!-- impl PadTemplate::fn get_property_name_template -->
The name template of the pad template.
<!-- trait PadTemplateExt::fn set_property_name-template -->
<!-- impl PadTemplate::fn set_property_name_template -->
The name template of the pad template.
<!-- trait PadTemplateExt::fn get_property_presence -->
<!-- impl PadTemplate::fn get_property_presence -->
When the pad described by the pad template will become available.
<!-- trait PadTemplateExt::fn set_property_presence -->
<!-- impl PadTemplate::fn set_property_presence -->
When the pad described by the pad template will become available.
<!-- struct ParseContext -->
Opaque structure.
@ -10184,11 +10190,11 @@ the pipeline run as fast as possible.
MT safe.
## `clock`
the clock to use
<!-- trait PipelineExt::fn get_property_auto-flush-bus -->
<!-- trait PipelineExt::fn get_property_auto_flush_bus -->
Whether or not to automatically flush all messages on the
pipeline's bus when going from READY to NULL state. Please see
`PipelineExt::set_auto_flush_bus` for more information on this option.
<!-- trait PipelineExt::fn set_property_auto-flush-bus -->
<!-- trait PipelineExt::fn set_property_auto_flush_bus -->
Whether or not to automatically flush all messages on the
pipeline's bus when going from READY to NULL state. Please see
`PipelineExt::set_auto_flush_bus` for more information on this option.
@ -12056,12 +12062,12 @@ the path to scan
# Returns
`true` if registry changed
<!-- trait RegistryExt::fn connect_feature_added -->
<!-- impl Registry::fn connect_feature_added -->
Signals that a feature has been added to the registry (possibly
replacing a previously-added one by the same name)
## `feature`
the feature that has been added
<!-- trait RegistryExt::fn connect_plugin_added -->
<!-- impl Registry::fn connect_plugin_added -->
Signals that a plugin has been added to the registry (possibly
replacing a previously-added one by the same name)
## `plugin`
@ -12832,23 +12838,23 @@ Feature: `v1_10`
## `tags`
a `TagList`
<!-- trait StreamExt::fn get_property_caps -->
<!-- impl Stream::fn get_property_caps -->
The `Caps` of the `Stream`.
<!-- trait StreamExt::fn set_property_caps -->
<!-- impl Stream::fn set_property_caps -->
The `Caps` of the `Stream`.
<!-- trait StreamExt::fn get_property_stream-id -->
<!-- impl Stream::fn get_property_stream_id -->
The unique identifier of the `Stream`. Can only be set at construction
time.
<!-- trait StreamExt::fn set_property_stream-id -->
<!-- impl Stream::fn set_property_stream_id -->
The unique identifier of the `Stream`. Can only be set at construction
time.
<!-- trait StreamExt::fn get_property_stream-type -->
<!-- impl Stream::fn get_property_stream_type -->
The `StreamType` of the `Stream`. Can only be set at construction time.
<!-- trait StreamExt::fn set_property_stream-type -->
<!-- impl Stream::fn set_property_stream_type -->
The `StreamType` of the `Stream`. Can only be set at construction time.
<!-- trait StreamExt::fn get_property_tags -->
<!-- impl Stream::fn get_property_tags -->
The `TagList` of the `Stream`.
<!-- trait StreamExt::fn set_property_tags -->
<!-- impl Stream::fn set_property_tags -->
The `TagList` of the `Stream`.
<!-- struct StreamCollection -->
A collection of `Stream` that are available.