mirror of
https://gitlab.freedesktop.org/gstreamer/gstreamer.git
synced 2024-06-09 01:29:28 +00:00
doc: remove details about sinkpad caps and batch
parent 8f3b7e4fd9
commit aec7a85128
@@ -268,20 +268,7 @@ make negotiation of tensor-decoder in particular difficult.

### Inference Sinkpad(s) Capabilities

Sinkpad capabilities, before being constrained based on the model, can be any
media type, including `tensor`. Note that multiple sinkpads can be present.
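As a purely hypothetical illustration (the caps strings below are assumptions for this sketch, not taken from any real element's pad template), a sink pad could start out unconstrained and then be narrowed once the model's expected input is known:

```
# Before model constraints (hypothetical pad template): any media type
ANY

# After being constrained by a model that expects 224x224 RGB input
# (illustrative caps only):
video/x-raw, format=RGB, width=224, height=224
```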

#### Batch Inference

To support batch inference, the `tensor` media type needs to be used. Batching
is a method used to spread the fixed time cost of scheduling work on an
accelerator (GPU). Multiple samples (buffers) are aggregated into a batch that
is pushed to the inference. When batching is used, the output tensor will also
contain analytics results in the form of a batch. Un-batching requires
information on how the batch was formed: buffer timestamp, buffer source, media
type, buffer caps. A batch can be formed from a single source, forming the
batch with time multiplexing, or from multiple sources, with time and source
multiplexing. Once multiplexed, the batch can be pushed to the inference
element as a tensor media.
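The un-batching requirement described above (keeping each aggregated buffer's timestamp, source, and media type so per-source results can be recovered) can be sketched in plain C. This is a hypothetical illustration only, not the GStreamer API: `SampleInfo`, `Batch`, and the function names are invented for this sketch.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Per-sample bookkeeping needed to un-batch results: timestamp,
 * source, and media type of each aggregated buffer (hypothetical,
 * stands in for something like a TensorBatchMeta). */
typedef struct {
    uint64_t    timestamp;  /* buffer timestamp */
    int         source_id;  /* which sinkpad/stream the buffer came from */
    const char *media_type; /* e.g. "video/x-raw" */
} SampleInfo;

#define MAX_BATCH 8

typedef struct {
    float      data[MAX_BATCH][4]; /* toy tensor: 4 values per sample */
    SampleInfo info[MAX_BATCH];    /* metadata used later to un-batch */
    size_t     size;
} Batch;

/* Aggregate one sample into the batch, recording its provenance. */
static int batch_add(Batch *b, const float sample[4], SampleInfo info) {
    if (b->size >= MAX_BATCH)
        return 0; /* batch full: caller should push it downstream */
    memcpy(b->data[b->size], sample, sizeof(float) * 4);
    b->info[b->size] = info;
    b->size++;
    return 1;
}

/* Un-batch: recover a given source's result via the stored metadata. */
static const float *batch_result_for_source(const Batch *b, int source_id,
                                            SampleInfo *out_info) {
    for (size_t i = 0; i < b->size; i++) {
        if (b->info[i].source_id == source_id) {
            if (out_info)
                *out_info = b->info[i];
            return b->data[i];
        }
    }
    return NULL; /* no sample from that source in this batch */
}
```

Time multiplexing corresponds to several `batch_add` calls with the same `source_id` at different timestamps; source multiplexing uses distinct `source_id` values.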

TODO: Describe TensorBatchMeta media type.

### Inference Srcpad(s) Capabilities