onnx: add README outlining install and test instructions

Part-of: <https://gitlab.freedesktop.org/gstreamer/gstreamer/-/merge_requests/5816>
Aaron Boxer 2023-12-15 15:21:50 -05:00
parent 5f3d4215a0
commit e2ee207367
3 changed files with 65 additions and 30 deletions


@@ -0,0 +1,38 @@
# ONNX Build Instructions
### Build
1. Do a recursive checkout of [onnxruntime tag 1.15.1](https://github.com/microsoft/onnxruntime), as shown below.
1. `$SRC_DIR` and `$BUILD_DIR` are local source and build directories.
1. To run with CUDA, both the [CUDA](https://developer.nvidia.com/cuda-downloads) and [cuDNN](https://docs.nvidia.com/deeplearning/cudnn/archives/cudnn_762/cudnn-install/index.html) libraries must be installed; a quick sanity check is sketched after the commands below.
```
$ cd $SRC_DIR
$ git clone --recursive https://github.com/microsoft/onnxruntime.git && cd onnxruntime && git checkout -b v1.15.1 refs/tags/v1.15.1
$ mkdir -p $BUILD_DIR/onnxruntime && cd $BUILD_DIR/onnxruntime
```
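If you plan to build with CUDA, a quick way to confirm that the toolkit and the cuDNN headers are in place (a sketch assuming the default `/usr/local/cuda` install used below):
```
$ nvcc --version
$ ls /usr/local/cuda/include/cudnn*.h
```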
1. CPU
```
$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install
```
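After `make install`, you can confirm the runtime library is visible to the loader (a sketch assuming the default `/usr/local` prefix):
```
$ sudo ldconfig
$ ldconfig -p | grep onnxruntime
```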
2. CUDA
```
$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_CUDA=ON -Donnxruntime_CUDA_HOME=/usr/local/cuda -Donnxruntime_CUDNN_HOME=/usr/local/cuda -DCMAKE_CUDA_ARCHITECTURES=native -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install
```
3. Intel oneDNN
   1. Install [Intel oneDNN](https://www.intel.com/content/www/us/en/developer/tools/oneapi/onednn.html).
   2. Clone, build and install the [Khronos OpenCL SDK](https://github.com/KhronosGroup/OpenCL-SDK.git); a sketch follows below. Build dependencies for Fedora are:
      `sudo dnf install libudev-devel libXrandr-devel mesa-libGLU-devel mesa-libGL-devel libX11-devel intel-opencl`
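One possible sequence (a sketch; the install prefix is chosen to match the `-Donnxruntime_DNNL_OPENCL_ROOT` path used in the next step):
```
$ cd $SRC_DIR
$ git clone --recursive https://github.com/KhronosGroup/OpenCL-SDK.git
$ cmake -S OpenCL-SDK -B OpenCL-SDK/build -DCMAKE_INSTALL_PREFIX=$SRC_DIR/OpenCL-SDK/install
$ cmake --build OpenCL-SDK/build --target install
```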
   3. Build and install `onnxruntime`:
```
$ cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_DNNL=ON -Donnxruntime_DNNL_GPU_RUNTIME=ocl -Donnxruntime_DNNL_OPENCL_ROOT=$SRC_DIR/OpenCL-SDK/install $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install
```
Note: for a Fedora build, you must add the following to `CMAKE_CXX_FLAGS`: `-Wno-stringop-overflow -Wno-array-bounds`.
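For example, the CPU configure line from above with the extra flags added (the same flags apply to the CUDA and oneDNN variants):
```
$ cmake -DCMAKE_CXX_FLAGS="-Wno-stringop-overflow -Wno-array-bounds" \
    -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF \
    -Donnxruntime_BUILD_UNIT_TESTS=OFF $SRC_DIR/onnxruntime/cmake \
    && make -j$(nproc) && sudo make install
```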


@@ -30,17 +30,13 @@
*
* ## Example launch command:
*
* The test image file, model file (SSD) and label file can be found here:
* https://gitlab.collabora.com/gstreamer/onnx-models
*
* GST_DEBUG=ssdobjectdetector:5 \
* gst-launch-1.0 multifilesrc location=onnx-models/images/bus.jpg ! \
* jpegdec ! videoconvert ! onnxinference execution-provider=cpu model-file=onnx-models/models/ssd_mobilenet_v1_coco.onnx ! \
* ssdobjectdetector label-file=onnx-models/labels/COCO_classes.txt ! videoconvert ! autovideosink
*
*/


@@ -27,49 +27,50 @@
* This element can apply an ONNX model to video buffers. It attaches
* the tensor output to the buffer as a @ref GstTensorMeta.
*
* To install ONNX on your system, recursively clone the repository
* https://github.com/microsoft/onnxruntime.git, check out tag v1.15.1,
* then build and install with cmake:
*
* CPU:
*
* cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF \
* -Donnxruntime_BUILD_UNIT_TESTS=OFF $SRC_DIR/onnxruntime/cmake \
* && make -j$(nproc) && sudo make install
*
*
* CUDA:
*
* cmake -Donnxruntime_BUILD_SHARED_LIB=ON -DBUILD_TESTING=OFF \
* -Donnxruntime_BUILD_UNIT_TESTS=OFF -Donnxruntime_USE_CUDA=ON \
* -Donnxruntime_CUDA_HOME=/usr/local/cuda \
* -Donnxruntime_CUDNN_HOME=/usr/local/cuda -DCMAKE_CUDA_ARCHITECTURES=native \
* -DCMAKE_CUDA_COMPILER=/usr/local/cuda/bin/nvcc \
* $SRC_DIR/onnxruntime/cmake && make -j$(nproc) && sudo make install
*
*
* where:
*
* 1. $SRC_DIR and $BUILD_DIR are local source and build directories.
* 2. To run with CUDA, both CUDA and cuDNN libraries must be installed.
*
* ## Example launch command:
*
* The test image file, model file (SSD) and label file can be found here:
* https://gitlab.collabora.com/gstreamer/onnx-models
*
* GST_DEBUG=ssdobjectdetector:5 \
* gst-launch-1.0 multifilesrc location=onnx-models/images/bus.jpg ! \
* jpegdec ! videoconvert ! onnxinference execution-provider=cpu model-file=onnx-models/models/ssd_mobilenet_v1_coco.onnx ! \
* ssdobjectdetector label-file=onnx-models/labels/COCO_classes.txt ! videoconvert ! autovideosink
*
*
* Note: in order for downstream tensor decoders to correctly parse the tensor
* data in the GstTensorMeta, metadata must be attached to the ONNX model
* assigning a unique string id to each output layer. These unique string ids
* and corresponding GQuark ids are currently stored in the tensor decoder's
* header file, in this case gstssdobjectdetector.h. If the metadata is absent,
* the pipeline will fail.
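*
* For illustration, a minimal sketch of attaching such a key/value pair with
* the Python onnx package (the layer name and id below are hypothetical
* placeholders, not values defined by this plugin):
*
*   import onnx
*   model = onnx.load("model.onnx")
*   prop = model.metadata_props.add()   # new ModelProto metadata entry
*   prop.key = "output_layer_name"      # hypothetical output layer name
*   prop.value = "Gst.Model.ExampleId"  # hypothetical unique string id
*   onnx.save(model, "model.onnx")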
*
* As a convenience, there is a python script
* currently stored at