Commit graph

29 commits

Author SHA1 Message Date
Guillaume Desmottes 403004a85e fix typos
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1170>
2023-04-10 13:35:32 +02:00
Sebastian Dröge 3b4c48d9f5 Fix various new clippy warnings
Part-of: <https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/1062>
2023-01-25 10:31:19 +02:00
François Laignel d39aabe054 ts/Task: don't drain sub tasks after state transition and iteration
Subtasks are used when the current async processing needs to execute
a `Future` from within a sync function (e.g. a call to a C function).
In this case, `Context::block_on` would block the whole `Context`,
leading to a deadlock.

The main use case for this is the `Pad{Src,Sink}` functions:
when we call `PadSrc::push` and the peer pad is a `PadSink`, we want
`PadSrc::push` to complete only after the async function on the
`PadSink` completes. In this case, the `PadSink` async function
is added as a subtask of the current scheduler task and
`PadSrc::push` only returns once the subtask has been executed.

In `runtime::Task` (`Task` here is the execution Task with a
state machine, not a scheduler task), we used to spawn state
transition actions and the iteration loop (leading to a new
scheduler Task). At the time, it seemed convenient to
automatically drain sub tasks after a state transition action
or an iteration, so that users wouldn't have to worry about it,
similarly to the `Pad{Src,Sink}` case.

In the current implementation, the `Task` state machine operates
directly on the target `Context`. State transition actions and
the iteration loop are no longer spawned. It now seems pointless to
abstract the subtask draining away from the user: either they
transitively use a mechanism such as `Pad{Src,Sink}`, which already
handles this automatically, or they add subtasks on purpose, in
which case they know better when the subtasks must be drained.
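
A minimal, self-contained sketch of the subtask pattern described above (hypothetical names, not the actual `runtime` API): sync code queues the `Future` instead of blocking on it, and the async caller drains the queue before returning.

```rust
use std::cell::RefCell;
use std::collections::VecDeque;
use std::future::Future;
use std::pin::Pin;

// A queued subtask: a boxed Future to be driven later by the async caller.
type SubTask = Pin<Box<dyn Future<Output = ()> + 'static>>;

thread_local! {
    // Stand-in for the current scheduler task's own subtask list.
    static SUB_TASKS: RefCell<VecDeque<SubTask>> = RefCell::new(VecDeque::new());
}

// Called from sync code (e.g. a C callback) running on the Context:
// queue the Future instead of blocking on it.
fn add_sub_task(fut: impl Future<Output = ()> + 'static) {
    SUB_TASKS.with(|q| q.borrow_mut().push_back(Box::pin(fut)));
}

// Awaited by the async caller (e.g. at the end of `PadSrc::push`) so that
// it only returns once all queued subtasks have completed.
async fn drain_sub_tasks() {
    loop {
        let next = SUB_TASKS.with(|q| q.borrow_mut().pop_front());
        match next {
            Some(sub_task) => sub_task.await,
            None => break,
        }
    }
}
```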
2022-09-13 07:29:50 +00:00
François Laignel 61c62ee1e8 ts/timers: multiple improvements
This commit improves threadshare timer predictability
by making better use of the current time slice.

Added a dedicated timer BTreeMap for 'after' timers (those
that are guaranteed to fire no sooner than the expected
instant) so as to avoid the previous workaround, which added
half the max throttling duration. These timers can now
be checked against the reactor processing instant.

Oneshot timers only need to be polled as `Future`s, whereas
intervals are `Stream`s. This also reduces the size of
oneshot timers and makes users call `next` on intervals.
Intervals can also implement `FusedStream`, which helps
when they are used in constructs such as `select!`.

Also drop the `time` module, which was kept for
compatibility when the `executor` was migrated from the
tokio-based implementation to the smol-like one.
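
A sketch of the resulting usage pattern, using `async_io::Timer` as a stand-in for the in-tree timer types (which differ, but follow the same shape: oneshot timers are `Future`s, intervals are `Stream`s):

```rust
use std::time::Duration;

use async_io::Timer;
use futures::StreamExt;

async fn timers() {
    // A oneshot timer is a plain `Future`: await it directly.
    Timer::after(Duration::from_millis(20)).await;

    // An interval is a `Stream`: pull ticks with `next()`, or combine it
    // with other sources in `select!` once fused.
    let mut ticks = Timer::interval(Duration::from_millis(10)).take(3);
    while let Some(_instant) = ticks.next().await {
        // handle one tick
    }
}
```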
2022-09-13 07:29:50 +00:00
François Laignel 2bb071a950 ts/runtime: slight optimizations for sub tasks related operations
Using callgrind with the standalone test showed opportunities for
improvement in sub task addition and draining.

All sub task additions were performed after making sure we were
operating on a Context Task. The Context and Task were checked
again when adding the sub task.

Draining sub tasks was performed in a loop at every call place,
checking whether there were remaining sub tasks first. This
commit implements the loop and the checks directly in
`executor::Task::drain_subtasks`, saving one `Mutex` lock and
one `thread_local` access per iteration when there are sub
tasks to drain.

The `PadSink` function wrappers were performing redundant checks
on the `Context` presence and were adding the delayed Future only
when there already were sub tasks.
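
A hypothetical shape of the change, assuming a `Mutex`-protected queue inside the executor task; the point is that the loop and the emptiness check live inside `drain_subtasks` rather than at each call place:

```rust
use std::collections::VecDeque;
use std::future::Future;
use std::pin::Pin;
use std::sync::Mutex;

type SubTask = Pin<Box<dyn Future<Output = ()> + Send + 'static>>;

struct ExecTask {
    sub_tasks: Mutex<VecDeque<SubTask>>,
}

impl ExecTask {
    // The loop and the emptiness check live here, not at the call places.
    async fn drain_subtasks(&self) {
        loop {
            // One lock per batch: take everything currently queued.
            let batch: Vec<SubTask> = self.sub_tasks.lock().unwrap().drain(..).collect();
            if batch.is_empty() {
                return;
            }
            for sub_task in batch {
                // Executing a subtask may queue new ones; the outer loop
                // picks them up on the next pass.
                sub_task.await;
            }
        }
    }
}
```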
2022-08-18 18:42:18 +02:00
François Laignel 8b54c3fed6 ts/Task: split iterate into try_next and handle_item
The previous Task iteration model suffered from the following
shortcomings:

- When an iteration was engaged it could be cancelled at
  await points by Stop or Flush state transitions,
  which could lead to inconsistent states.
- When an iteration was engaged it could not be cancelled
  by a Pause state transition so as to prevent data loss.
  This meant we couldn't block on the Pause request because
  the mechanism couldn't guarantee Paused would be reached
  in a timely manner.

This commit splits the Task iteration into:

- `try_next`: this function returns a future that waits
  for a new iteration to begin. The regular use case is
  to return an item to process. The item can be left as
  `()` if `try_next` acts as a tick generator. It can
  also return an error. This function can be cancelled at
  await points when a state transition request occurs.
- `handle_item`: this function is called with the item
  returned by `try_next` and is guaranteed to run to
  completion even if a transition request is received.

Note that this model takes the common Future cancellation
pitfalls in Rust into account: only `try_next` can be cancelled
at await points, while `handle_item` always runs to completion.
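
A rough sketch of the resulting split (simplified signatures, not the in-tree `TaskImpl` trait):

```rust
use futures::future::BoxFuture;

// Simplified signatures; not the in-tree `TaskImpl` trait.
trait TaskImpl: Send {
    type Item: Send;

    // Awaits the next item (or just the next tick when Item = ()).
    // Cancellable at await points when a state transition is requested.
    fn try_next(&mut self) -> BoxFuture<'_, Result<Self::Item, String>>;

    // Processes the item returned by `try_next`. Once started, it is
    // driven to completion even if a transition request arrives.
    fn handle_item(&mut self, item: Self::Item) -> BoxFuture<'_, Result<(), String>>;
}

// A trivial tick generator: `try_next` yields `()`, `handle_item` does
// the actual per-tick work.
struct Tick;

impl TaskImpl for Tick {
    type Item = ();

    fn try_next(&mut self) -> BoxFuture<'_, Result<(), String>> {
        // A real element would await a socket, a channel, a timer, ...
        Box::pin(async { Ok::<_, String>(()) })
    }

    fn handle_item(&mut self, _item: ()) -> BoxFuture<'_, Result<(), String>> {
        Box::pin(async { Ok::<_, String>(()) })
    }
}
```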
2022-08-10 20:02:53 +02:00
François Laignel d4061774a4 ts/Task: return a future for state transitions
State transition request functions hid the synchronization
details from the caller:

- If the transition was requested from a Context, a subtask was
  added, or the transition ack was not awaited if the new transition
  was requested from a running transition or an iteration function.
- If the transition was not requested from a Context, the current
  thread was blocked until the ack was received.

This strategy simplified the code in elements, but it suffered from
the following shortcomings:

- The `prepare` transition request didn't comply with the above
  strategy and would always return an `Async` form. This was
  designed to accommodate the `Prepare` function of elements
  such as `ts-tcpclientsrc`, which takes time due to
  TCP socket connection delays. The idea was that the actual
  transition result would be available after calling `start`.
  This was a disadvantage for elements which would prefer to
  error out immediately in the event of a preparation failure.
- Hiding the transition request synchronization from the caller
  meant that they had no option but to rely on the internal
  mechanism. E.g. it was not possible to `start` from another
  async runtime without blocking. It was also not possible
  to request a transition and choose not to await the
  ack.

This commit introduces a more flexible API for state
transition requests (see the usage sketch at the end of this message):

- The transition request functions now return a `TransitionStatus`,
  which is a Future.
- When an error occurs immediately (e.g. the transition
  request is not authorized due to the current state of the Task),
  the `TransitionStatus` is resolved immediately and can be
  `check`ed for errors. This is useful for functions such as
  `prepare` in the case of `ts-tcpclientsrc` (see above).
  This is also useful for `pause` because, in the current design,
  that transition is always async. Note, however, that `pause` is
  foreseen to adhere to the same behaviour as the other transition
  requests in the near future [1].
- If the caller chooses to await the ack and doesn't know
  whether they are running on a ts Context (e.g. in `Pad{Src,Sink}`
  handlers), they can call `await_maybe_on_context`. This is mostly
  the same behaviour as the one that used to be applied internally.
- If the caller knows for sure they can't possibly block an async
  executor, they can call `block_on`, which is more explicit but
  will nonetheless make sure no ts Context is being blocked. This
  last check was introduced because it was considered low overhead
  and should help prevent misuses in cases where the above
  functions should be used instead.

[1]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/793#note_1464400
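
A mock illustrating how a caller can consume a transition request under the new API; only the method names (`check`, `await_maybe_on_context`, `block_on`) come from this commit message, the types here are stand-ins:

```rust
#[derive(Debug)]
struct TransitionError;

// Mock: the real `TransitionStatus` is a Future with richer state.
struct TransitionStatus(Result<(), TransitionError>);

impl TransitionStatus {
    // Errors known immediately (e.g. transition not authorized) can be
    // checked without awaiting the ack.
    fn check(self) -> Result<(), TransitionError> {
        self.0
    }

    // Await the ack, whether or not the caller runs on a ts Context.
    async fn await_maybe_on_context(self) -> Result<(), TransitionError> {
        self.0
    }

    // Block the current thread until the ack; the caller guarantees it
    // is not blocking an async executor.
    fn block_on(self) -> Result<(), TransitionError> {
        self.0
    }
}

struct Task;

impl Task {
    fn prepare(&self) -> TransitionStatus {
        TransitionStatus(Ok(()))
    }
    fn start(&self) -> TransitionStatus {
        TransitionStatus(Ok(()))
    }
    fn unprepare(&self) -> TransitionStatus {
        TransitionStatus(Ok(()))
    }
}

// From async code (e.g. a Pad handler): fail fast, then await the ack.
async fn start_from_async(task: &Task) -> Result<(), TransitionError> {
    task.prepare().check()?;
    task.start().await_maybe_on_context().await
}

// From sync code known not to run on an async executor.
fn shutdown_from_sync(task: &Task) -> Result<(), TransitionError> {
    task.unprepare().block_on()
}
```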
2022-08-09 19:48:06 +02:00
François Laignel 625fce3934 ts/Task: spawn StateMachine on ts Context
Task state machines used to execute in an executor from the `futures`
crate. State transition actions and iteration functions were then
spawned on the target threadshare Context.

This commit directly spawns the task state machine on the threadshare
Context. This simplifies code a bit and paves the way for the changes
described in [1].

Also introduces the struct `StateMachineHandle`, which gathers the
fields used to communicate and synchronize with the StateMachine.
`StateMachine::run` is renamed `spawn` and returns a `StateMachineHandle`.

[1]: https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/793#note_1464400
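
A hypothetical sketch of what such a handle can look like (field names and types are illustrative, not the in-tree ones):

```rust
use futures::channel::{mpsc, oneshot};

// Illustrative subset of the triggering events a state machine reacts to.
enum TriggeringEvent {
    Start,
    Stop,
    Unprepare,
}

// Everything needed to communicate and synchronize with the spawned
// state machine, gathered in one place.
struct StateMachineHandle {
    // Push state transition triggering events to the machine.
    triggering_evt_tx: mpsc::UnboundedSender<TriggeringEvent>,
    // Resolves when the state machine terminates.
    join: oneshot::Receiver<()>,
}
```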
2022-08-09 19:48:06 +02:00
François Laignel 28a62e622e ts/scheduler: rename awake / wake_up as unpark 2022-08-09 13:17:21 +00:00
François Laignel 833331ab66 ts/Task: wake up after the triggering event is pushed
The scheduler is woken up when aborting a task loop, but not after
a triggering event is pushed. This can cause throttling to induce
long state transitions for pipelines with many streams.

Observed for Unprepare with:

GST_DEBUG=ts-benchmark:4 ../../target/debug/examples/benchmark 2000 ts-udpsrc 2 20 5000
2022-08-09 13:17:21 +00:00
François Laignel 8eb8ea0e7d ts/rt/Task: use lightweight executor blocking on ack or join handle
The previous version used `Context::block_on_or_add_sub_task`, which
spawns a full-fledged executor with a timer and an io Reactor for no
reason when we just need to wait for a Receiver or a JoinHandle.
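
A sketch of the kind of lightweight blocking meant here: waiting for an ack `Receiver` needs no timer or io Reactor, so `futures::executor::block_on` is enough (names other than `block_on` are illustrative):

```rust
use futures::channel::oneshot;
use futures::executor::block_on;

// Waiting for a transition ack: a plain futures executor is enough,
// no timer or io Reactor involved.
fn wait_for_ack(ack_rx: oneshot::Receiver<Result<(), String>>) -> Result<(), String> {
    block_on(ack_rx).unwrap_or_else(|_canceled| Err("ack sender dropped".into()))
}
```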
2022-03-28 08:47:32 +00:00
François Laignel c1615d01e6 ts/rt/Task: wake up the iteration loop when it needs to be aborted
When the iteration loop is throttling, the call to `abort` on the
`loop_abort_handle` returns immediately, but the actual `Future`
for the iteration loop is aborted only when the scheduler throttling
completes. State transitions which require the loop to be aborted and
which are serialized at the pipeline level can thus incur long delays.

This commit makes sure the Task Context's scheduler is woken up as soon
as the task loop is aborted.
2022-03-28 08:47:32 +00:00
François Laignel 72d9d3dc58 generic/threadshare: fix for nightly build 2022-02-22 00:18:28 +01:00
François Laignel 422ea740ca Update to gst::_log_macro_
See the details:
https://gitlab.freedesktop.org/gstreamer/gstreamer-rs/-/merge_requests/980
2022-02-21 20:50:01 +01:00
Sebastian Dröge 51f8e963d6 Add SPDX-License-Identifier to all file headers 2022-01-15 21:18:47 +02:00
François Laignel 6163589ac7 ts/executor: replace tokio with smol-like implementation
The threadshare executor was based on a modified version of tokio
which implemented the throttling strategy in the BasicScheduler.
The upstream tokio codebase has significantly diverged from what it
was when the throttling strategy was implemented, making it hard
to follow. This means that we can hardly get updates from the
upstream project, and when we cherry-pick fixes, we can't reflect
the state of the project in our fork's version. As a consequence,
tools such as cargo-deny can't check for RUSTSEC fixes in our fork.

The smol ecosystem makes it quite easy to implement and maintain
a custom async executor. This MR imports the smol parts that
need modifications to comply with the threadshare model and implements
a throttling executor in place of the tokio fork.

tokio-specific networking types are replaced with Async wrappers
in the spirit of [smol-rs/async-io]. Note, however, that the Async
wrappers needed modifications in order to use the per-thread
Reactor model. This means that higher-level upstream networking
crates such as [async-net] cannot be used with our Async
implementation.

Based on the example benchmark with ts-udpsrc, performance seems on par
with what we achieved using the tokio fork.

Fixes https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/issues/118

Related to https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/604
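
The usage pattern of such an `Async` wrapper, shown with the upstream `async_io::Async` type as a stand-in (the in-tree wrapper exposes a similar surface but registers with the per-thread threadshare Reactor):

```rust
use std::net::UdpSocket;

use async_io::Async;

async fn echo_once() -> std::io::Result<()> {
    // Wrap a plain std socket; readiness is driven by the Reactor.
    let socket = Async::<UdpSocket>::bind(([127, 0, 0, 1], 5004))?;

    let mut buf = [0u8; 1500];
    let (len, peer) = socket.recv_from(&mut buf).await?;
    socket.send_to(&buf[..len], peer).await?;

    Ok(())
}
```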
2021-12-25 11:25:56 +00:00
François Laignel cd0773662f ts: migrate most tests so that they don't use tokio 2021-12-25 11:25:56 +00:00
François Laignel 27b9f0d868 Improve usability thanks to opt-ops
The option-operations crate simplifies handling of `Option`s,
which come up frequently when dealing with `ClockTime`.
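
A small sketch of the ergonomics this brings, using `u64` in place of `ClockTime`; the trait and method names are assumed from the option-operations documentation:

```rust
// Trait and method names as assumed from the option-operations docs.
use option_operations::{OptionCheckedAdd, OptionSaturatingAdd};

fn example() {
    let pts: Option<u64> = Some(1);
    let duration: Option<u64> = Some(u64::MAX);

    // No nested `match`/`and_then` over the two `Option`s:
    assert!(pts.opt_checked_add(duration).is_err()); // overflow detected
    assert_eq!(pts.opt_saturating_add(duration), Some(u64::MAX));
}
```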
2021-10-18 15:09:47 +02:00
Sebastian Dröge 052365ba1a Fix various needless-borrow clippy warnings and others 2021-07-30 13:53:35 +03:00
François Laignel 8f81cb8812 generic: migrate to new ClockTime design 2021-06-05 10:36:21 +02:00
Sebastian Dröge 3d617371af Update for macro renames 2020-12-20 20:43:45 +02:00
Sebastian Dröge df6a229f58 Fix or silence various clippy warnings 2020-11-19 15:31:50 +00:00
Sebastian Dröge a022bbe260 Fix some new clippy warnings 2020-07-28 18:52:11 +03:00
François Laignel 244f6dd6f7 threadshare: fix Transition naming 2020-05-25 18:31:49 +02:00
François Laignel 725eb0a093 threadshare: spawn StateMachine on the futures::executor::ThreadPool
StateMachines are spawned on a runtime::Context, which uses a tokio
runtime. The StateMachine doesn't need all the features from tokio,
such as the io and timer drivers.

This commit makes use of a light-weight futures executor to spawn
the StateMachines.
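
A sketch of the pattern, assuming the `futures` crate's `ThreadPool` (behind its `thread-pool` feature):

```rust
// Requires the `futures` crate with its `thread-pool` feature.
use futures::executor::ThreadPool;

fn main() {
    let pool = ThreadPool::new().expect("state machine pool");

    // From the pool's point of view, a state machine is just a
    // `Future<Output = ()>` driven to completion.
    pool.spawn_ok(async {
        // run the state machine
    });
}
```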
2020-05-25 18:31:49 +02:00
François Laignel f0793587f6 threadshare/TaskImpl: allow transition hooks to fail...
... and add error handlers for the iterate and transition hooks with
default implementations.
2020-05-25 18:31:49 +02:00
François Laignel 5c9bbc6818 threadshare: Task: reduce memory usage
Using dynamic dispatch instead of monomorphization reduces the lib size
and memory use without significantly affecting CPU usage or throughput.
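
An illustration of the trade-off (types are illustrative, not the in-tree ones):

```rust
trait TaskImpl: Send {
    fn iterate(&mut self);
}

// Monomorphized: the whole Task machinery is duplicated for every
// element-specific implementation, growing the library size.
struct TaskGeneric<Impl: TaskImpl> {
    imp: Impl,
}

// Dynamic dispatch: a single instance of the Task machinery, at the
// cost of a vtable call per invocation.
struct TaskDyn {
    imp: Box<dyn TaskImpl>,
}
```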
2020-05-25 18:31:49 +02:00
François Laignel 1bea2ad279 threadshare: introduce TaskImpl trait
TaskImpl is the trait for specific Task behaviour. It is the basis
of a new Task model. The main motivation for this model is to ease
thread-safe implementations of state transitions.

See https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs/-/merge_requests/298
2020-05-25 18:31:48 +02:00
Arun Raghavan dc3c8fd049 Drop gst-plugin- prefix in plugin directory name 2020-04-05 19:10:47 +00:00
Renamed from generic/gst-plugin-threadshare/src/runtime/task.rs