diff --git a/basics.md b/basics.md
index 53421ba..8ae00f9 100644
--- a/basics.md
+++ b/basics.md
@@ -8,7 +8,7 @@ These examples assume that bash variable `SRC` to be set to a video file (e.g. a
 export SRC=/home/me/videos/test.mp4
 ```
 
-### Play a video (with audio) 
+### Play a video (with audio)
 
 The magical element `playbin` can play anything:
 
@@ -40,13 +40,13 @@ which could also have been done as:
 gst-launch-1.0 -v filesrc location="$SRC" ! decodebin ! autovideosink
 ```
 
-### Play just the audio from a video 
+### Play just the audio from a video
 
 ```
 gst-launch-1.0 -v uridecodebin uri="file://$SRC" ! autoaudiosink
 ```
 
-### Visualise the audio: 
+### Visualise the audio:
 
 ```
 gst-launch-1.0 filesrc location=$SRC ! qtdemux name=demux demux.audio_0 ! queue ! decodebin ! audioconvert ! wavescope ! autovideosink
@@ -70,7 +70,7 @@ gst-launch-1.0 -v filesrc location="$SRC" ! decodebin ! videoconvert ! vertigotv
 Try also ‘rippletv’, ‘streaktv’, ‘radioactv’, ‘optv’, ‘quarktv’, ‘revtv’, ‘shagadelictv’, ‘warptv’ (I like), ‘dicetv’, ‘agingtv’ (great), ‘edgetv’ (could be great on real stuff)
 
-### Add a clock 
+### Add a clock
 
 ```
 gst-launch-1.0 -v filesrc location="$SRC" ! decodebin ! clockoverlay font-desc="Sans, 48" ! videoconvert ! autovideosink
@@ -113,7 +113,7 @@ gst-launch-1.0 filesrc location=$SRC ! \
     autovideosink
 ```
 
-### Play an MP3 audio file 
+### Play an MP3 audio file
 
 Set the environment variable `$AUDIO_SRC` to be the location of the MP3 file. Then:
diff --git a/capturing_images.md b/capturing_images.md
deleted file mode 100644
index e69de29..0000000
diff --git a/images.md b/images.md
index 2873544..ef3d857 100644
--- a/images.md
+++ b/images.md
@@ -46,7 +46,7 @@ The `pngenc` element can create a single PNG:
 gst-launch-1.0 videotestsrc ! pngenc ! filesink location=foo.png
 ```
 
-### Capture an image as JPEG 
+### Capture an image as JPEG
 
 The `jpegenc` element can create a single JPEG:
diff --git a/installing.md b/installing.md
index 1cf2d34..30b27be 100644
--- a/installing.md
+++ b/installing.md
@@ -40,7 +40,7 @@ You can also compile and install yourself - see (https://gstreamer.freedesktop.o
 
 The GStreamer APT packages are excellent.
 
-For a good install command, see `https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c`.
+For a good install command, see (https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c).
diff --git a/mixing.md b/mixing.md
index 70252f2..43293b6 100644
--- a/mixing.md
+++ b/mixing.md
@@ -98,7 +98,7 @@ gst-launch-1.0 \
 
 Use the `audiomixer` element to mix audio. It replaces the `adder` element, which struggles under some circumstances (according to the [GStreamer 1.14 release notes](https://gstreamer.freedesktop.org/releases/1.14/)).
 
-### Mix two (or more) test audio streams 
+### Mix two (or more) test audio streams
 
 Here we use two different frequencies (tones):
 
@@ -110,7 +110,7 @@ gst-launch-1.0 \
 ```
 
 
-### Mix two test streams, dynamically 
+### Mix two test streams, dynamically
 
 [This Python example](python_examples/audio_dynamic_add.py) shows a dynamic equivalent of this example - the second test source is only mixed when the user presses Enter.
 
@@ -127,7 +127,7 @@ gst-launch-1.0 \
 ```
 
 
-### Mix a test stream with an MP3 file 
+### Mix a test stream with an MP3 file
 
 Because the audio streams are from different sources, they must each be passed through `audioconvert`.
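+To make this concrete, here is a minimal sketch of such a pipeline (my illustration, not the page's own example; it assumes `$AUDIO_SRC` points at an MP3 file, as on the basics page):
+
+```
+gst-launch-1.0 \
+    audiotestsrc ! audioconvert ! audiomixer name=mix ! autoaudiosink \
+    filesrc location="$AUDio_SRC" ! decodebin ! audioconvert ! mix.
+```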
@@ -139,7 +139,7 @@ gst-launch-1.0 \
 ```
 
 
-## Mixing video & audio together 
+## Mixing video & audio together
 
 ### Mix two fake video sources and two fake audio sources
 
@@ -256,7 +256,7 @@ gst-launch-1.0 \
     demux2. ! queue2 ! decodebin ! videoconvert ! videoscale ! video/x-raw,width=320,height=180 ! videomix.
 ```
 
-## Fading 
+## Fading
 
 It's often nicer to fade between sources than to abruptly cut between them. This can be done with both video (temporarily blending using the alpha channel) and audio (lowering the volume of one whilst raising it on another). It's not possible to do this on the command line... although `alpha` and `volume` can be set, they can only be set to discrete values.
diff --git a/network_transfer.md b/network_transfer.md
index 65ede6b..0a88072 100644
--- a/network_transfer.md
+++ b/network_transfer.md
@@ -83,13 +83,13 @@ gst-launch-1.0 \
     rtpmp2tdepay ! decodebin name=decoder ! autoaudiosink decoder. ! autovideosink
 ```
 
-### How to receive with VLC 
+### How to receive with VLC
 
 To receive a UDP stream, an `sdp` file is required. An example can be found at https://gist.github.com/nebgnahz/26a60cd28f671a8b7f522e80e75a9aa5
 
 ## TCP
 
-### Audio via TCP 
+### Audio via TCP
 
 To send a test stream:
diff --git a/queues.md b/queues.md
index e749e42..41f5614 100644
--- a/queues.md
+++ b/queues.md
@@ -22,7 +22,7 @@ gst-launch-1.0 videotestsrc ! queue ! autovideosink
 ```
 
 Queues add latency, so the general advice is not to add them unless you need them.
 
-## Queue2 
+## Queue2
 
 Confusingly, `queue2` is not a replacement for `queue`. It's not obvious when to use one or the other.
diff --git a/rtmp.md b/rtmp.md
index 8e73792..4d9675d 100644
--- a/rtmp.md
+++ b/rtmp.md
@@ -144,7 +144,7 @@ gst-launch-1.0 videotestsrc is-live=true ! \
     voaacenc bitrate=96000 ! audio/mpeg ! aacparse ! audio/mpeg, mpegversion=4 ! mux.
 ```
 
-### Send a file over RTMP 
+### Send a file over RTMP
 
 Audio & video:
diff --git a/sharing_and_splitting_pipelines.md b/sharing_and_splitting_pipelines.md
index e89d5e7..f2bed24 100644
--- a/sharing_and_splitting_pipelines.md
+++ b/sharing_and_splitting_pipelines.md
@@ -77,7 +77,7 @@ I've used `proxysink` and `proxysrc` to split larger pipelines into smaller ones
 
 Unlike _inter_ below, _proxy_ will keep timing in sync. This is great if it's what you want... but if you want pipelines to have their own timing, it might not be right for your needs.
 
-### gstproxy documentation 
+### gstproxy documentation
 
 * Introduced by the blog mentioned above (http://blog.nirbheek.in/2018/02/decoupling-gstreamer-pipelines.html)
 * Example code on proxysrc here: https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-bad-plugins/html/gst-plugins-bad-plugins-proxysrc.html
@@ -108,7 +108,7 @@ A slightly more interesting example can be found at
 * that when the video ends, the other pipelines continue.
 
 
-## inter (intervideosink/intervideosrc and their audio & subtitle counterparts) 
+## inter (intervideosink/intervideosrc and their audio & subtitle counterparts)
 
 The 'inter' versions of 'proxy' are dumber. They don't attempt to sync timings. But this can be useful if you want pipelines to be more independent. (Pros and cons of this are discussed [here](http://gstreamer-devel.966125.n4.nabble.com/How-to-connect-intervideosink-and-intervideosrc-for-IPC-pipelines-td4684567.html).)
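+As a minimal sketch of the pattern (my illustration, not this page's own example; note that both halves must run in the same process, and the `channel` property pairs a sink with its matching source):
+
+```
+gst-launch-1.0 \
+    videotestsrc ! intervideosink channel=demo \
+    intervideosrc channel=demo ! videoconvert ! autovideosink
+```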
diff --git a/srt.md b/srt.md
index 517e72d..f54b667 100644
--- a/srt.md
+++ b/srt.md
@@ -20,7 +20,7 @@ And create a receiver client like this:
 gst-launch-1.0 -v srtclientsrc uri="srt://127.0.0.1:8888" ! decodebin ! autovideosink
 ```
 
-## Server receiving the AV 
+## Server receiving the AV
 
 To have the server receive, rather than send, swap 'srtclientsrc' for 'srtserversrc'. Likewise, to have the client send rather than receive, swap 'srtserversink' for 'srtclientsink'.
diff --git a/tee.md b/tee.md
index bb5aa0b..6e5e06f 100644
--- a/tee.md
+++ b/tee.md
@@ -2,7 +2,7 @@
 
 This page describes the `tee` element, which allows audio & video streams to be sent to more than one place.
 
-## Tee to two local video outputs 
+## Tee to two local video outputs
 
 Here's a simple example that shows the video test source twice (using `autovideosink`)
 
@@ -14,7 +14,7 @@ gst-launch-1.0 \
     t. ! queue ! videoconvert ! autovideosink
 ```
 
-## Tee to two different video outputs 
+## Tee to two different video outputs
 
 Here's an example that sends video to both `autovideosink` and a TCP server (`tcpserversink`). Note how `async=false` is required on both sinks, because the encoding step on the TCP branch takes longer, and so the timing will be different.
 
@@ -44,7 +44,7 @@ gst-launch-1.0 videotestsrc ! \
     t. ! queue ! x264enc ! mpegtsmux ! tcpserversink port=7001 host=127.0.0.1 recover-policy=keyframe sync-method=latest-keyframe
 ```
 
-## Tee on inputs 
+## Tee on inputs
 
 You can also use `tee` in order to do multiple things with inputs. This example combines two audio visualisations:
diff --git a/test_streams.md b/test_streams.md
index b7bf510..28b7c0d 100644
--- a/test_streams.md
+++ b/test_streams.md
@@ -60,7 +60,7 @@ You can change the *volume* by setting the `volume` property between `0` and `1`
 ```
 gst-launch-1.0 audiotestsrc volume=0.1 ! autoaudiosink
 ```
 
-### White noise 
+### White noise
 
 Set `wave` to `white-noise`:
 
@@ -70,7 +70,7 @@ gst-launch-1.0 audiotestsrc wave="white-noise" ! autoaudiosink
 ```
 
 There are variations (e.g. _red noise_) - see the [docs](https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-plugins/html/gst-plugins-base-plugins-audiotestsrc.html) for a complete list.
 
-### Silence 
+### Silence
 
 If you need an audio stream with nothing in it:
diff --git a/writing_to_files.md b/writing_to_files.md
index 94055ae..799e7e8 100644
--- a/writing_to_files.md
+++ b/writing_to_files.md
@@ -5,7 +5,7 @@ The [`filesink`](https://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstre
 
 Note that, when using the command-line, the `-e` parameter ensures the output file is correctly completed on exit.
 
-### Write to an mp4 file 
+### Write to an mp4 file
 
 This example creates a test video (animated ball moving, with clock), and writes it as an MP4 file. Also added is an audio test source - a short beep every second.
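+As a minimal sketch of such a pipeline (my illustration, not the page's own example; the `x264enc` and `voaacenc` encoder choices are assumptions, and `-e` finalises the MP4 on exit):
+
+```
+gst-launch-1.0 -e \
+    videotestsrc pattern=ball ! clockoverlay font-desc="Sans, 48" ! videoconvert ! x264enc ! mp4mux name=mux ! filesink location=test.mp4 \
+    audiotestsrc wave=ticks ! audioconvert ! voaacenc ! mux.
+```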