################################################################################
This repository includes YOLO-series project implementations based on DeepStream.
It helps you build your own DeepStream project and become familiar with the deployment process; you can deploy any inference model, including YOLOv5 and YOLOv4.
More model deployment projects, such as an OCR project based on YOLO object detection, will be added later.
Reference:
You can git clone the repositories below and prepare your own models, or test the conversion with their pretrained models:
yolov5: https://github.com/ultralytics/yolov5
yolov4: https://github.com/Tianxiaomo/pytorch-YOLOv4.git
Download their weights!
Check your NVIDIA graphics driver, CUDA, cuDNN, TensorRT, and DeepStream versions.
NOTE: Once you have confirmed that your environment is configured correctly, you can start to create a project!
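One way to verify each component is installed (a sketch assuming a Debian-based system with a standard DeepStream install; package names and query commands may differ on your platform):

```shell
nvidia-smi                      # NVIDIA driver and GPU visibility
nvcc --version                  # CUDA toolkit version
dpkg -l | grep -i cudnn         # cuDNN packages (Debian-based systems)
dpkg -l | grep -i tensorrt      # TensorRT packages
deepstream-app --version-all    # DeepStream and its dependency versions
```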
pip install -r requirement.txt
cd Deepstream_Project
cd Deepstream_Yolo
cd trans_project
cd yolov4_convert
python demo_darknet2onnx.py <cfgFile> <weightFile> <imageFile> <batchSize>
python demo_pytorch2onnx.py <weight_file> <image_path> <batch_size> <n_classes> <IN_IMAGE_H> <IN_IMAGE_W>
trtexec --onnx=<your_onnx_name>.onnx --explicitBatch --saveEngine=yolov4_1_3_320_512_fp16.engine --workspace=4096 --fp16
python -m onnxsim ./weights/yolov4.onnx ./weights/yolov4_sim.onnx
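Putting the steps above together, a concrete run might look like the following (the cfg, weights, and image file names are assumptions for illustration; substitute your own):

```shell
# Hypothetical file names; use your own cfg, weights, and test image
python demo_darknet2onnx.py yolov4.cfg yolov4.weights dog.jpg 1
# Simplify the exported ONNX graph before building the engine
python -m onnxsim ./weights/yolov4.onnx ./weights/yolov4_sim.onnx
# Build an FP16 TensorRT engine from the simplified model
trtexec --onnx=./weights/yolov4_sim.onnx --explicitBatch \
        --saveEngine=yolov4_1_3_320_512_fp16.engine --workspace=4096 --fp16
```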
mkdir build && cd build
cmake ..
make
./yolov4_convert ../config.yaml ../images
<your config file> <your test data folder>
cd yolov5_convert
python demo_pytorch2onnx.py --weights ./weights/yolov5x.pt --img 640 --batch 1
<your model path> <image size> <batch size>
python convert.py --weights <your model path> --img-size <image size> --batch-size <batch size, default 1>
python -m onnxsim ./weights/yolov5x.onnx ./weights/yolov5x_sim.onnx
<input> <output>
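For example, convert.py might be invoked like this (the weight path, input size, and batch size are assumptions to adapt to your model):

```shell
# Hypothetical example invocation of convert.py
python convert.py --weights ./weights/yolov5x.pt --img-size 640 --batch-size 1
```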
mkdir build && cd build
cmake ..
make
./yolov5_trt ../config.yaml ../images
<your config file> <your test data folder>
Reference: https://github.com/wang-xinyu/tensorrtx
NOTE: To use this part, you need the YOLOv4 and YOLOv5 project sources and their models to convert the model files, so prepare them first.
source activate <yolov5 conda env name>
cd tensorrtx/yolov5
python gen_wts.py <input weights file> <output file name>
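For example (the checkpoint and output names are assumptions for illustration):

```shell
# Hypothetical paths: dump the PyTorch checkpoint to a .wts weights file
python gen_wts.py ./weights/yolov5s.pt yolov5s.wts
```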
mkdir build && cd build
cmake ..
make
[NOTE]: You should edit line 13 in yolov5.cpp (#define NET x // s m l x) to configure your yolov5 model variant.
./yolov5 -x   (or -s / -l / -m, matching the NET variant you configured)
sudo ./yolov5 -d ../samples
cp -r yolov5*.engine ../../engine_models/
cp -r libmyplugins.so ../../engine_models/
In any case, the steps above are just one way to obtain the engine file for the model you trained; you can also use another method, or modify the code, to generate the engine file.
cd nvdsinfer_custom_impl_Yolo
cmake ..
make
cd ..
deepstream-app -c deepstream_app_config_yoloV4.txt
LD_PRELOAD=./libmyplugins.so deepstream-app -c deepstream_app_config_yoloV5.txt
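For reference, the nvinfer config file that the deepstream-app config points at typically carries a [property] section along these lines. Every path, the class count, and the parse function name below are assumptions; adapt them to your own model and plugin build:

```ini
[property]
gpu-id=0
# 1/255, normalizing 8-bit pixels to [0,1]
net-scale-factor=0.0039215686274
model-engine-file=../engine_models/yolov5s.engine
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=80
# custom bbox parser library built in nvdsinfer_custom_impl_Yolo
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
parse-bbox-func-name=NvDsInferParseCustomYoloV5
```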