[CVPR Workshop 2025] Eva-EVS is an action-prediction method suitable for deployment on an AMD Xilinx DPU.

pbonazzi/eva


Low-Latency Obstacle Avoidance on an FPGA 🔵

References & Citation ✉️

Code base to reproduce the results in:

@inproceedings{Bonazzi2025CVPRW,
    author    = {Bonazzi, Pietro and Vogt, Christian and Jost, Michael and Khacef, Lyes and Paredes-Valles, Federico and Magno, Michele},
    title     = {Towards Low-Latency Event-based Obstacle Avoidance on a FPGA-Drone},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshop (CVPRW)},
    month     = {June},
    year      = {2025},
}
@inproceedings{Bonazzi2025IJCNN,
    author    = {Bonazzi, Pietro and Vogt, Christian and Jost, Michael and Qin, Haotong and Khacef, Lyes and Paredes-Valles, Federico and Magno, Michele},
    title     = {Fusing Events and Frames with Self-Attention Network for Ball Collision Detection},
    booktitle = {International Joint Conference on Neural Networks (IJCNN)},
    month     = {June},
    year      = {2025},
}

Leave a star to support our open-source initiative! ⭐️

The main folder containing all the modules is named eva, which stands for Event Vision for Action prediction.

1. Installation instructions 🚀

1.1. Downloads 📥

To download the dataset, click here.

Click on this link to download the models.

1.2.a. Conda env 🛠️

conda create -n eva-env python=3.10 -y
conda activate eva-env
pip install h5py numpy matplotlib opencv-python pandas scipy dtaidistance \
  pytorch-lightning torchvision torch fire wandb torchprofile onnx scikit-learn python-dotenv \
  pybind11 tensorboard tensorboardX
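To sanity-check the environment before running anything, here is a minimal stdlib-only sketch (the helper is illustrative, not part of eva; note that some pip names differ from their import names, e.g. opencv-python imports as cv2, scikit-learn as sklearn, and python-dotenv as dotenv):

```python
import importlib.util

def missing_packages(names):
    """Return the packages from `names` that cannot be imported in this env."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Import names corresponding to the pip install command above.
required = ["h5py", "numpy", "matplotlib", "cv2", "pandas", "scipy",
            "pytorch_lightning", "torchvision", "torch", "fire", "wandb",
            "onnx", "sklearn", "dotenv", "pybind11", "tensorboard"]
print("missing:", missing_packages(required))
```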

1.2.b. Docker containers 🛠️

Set the dataset path variable (ABCD_DATA_PATH) to your installation directory, then build the Docker container:

docker build -t eva-pytorch .

Run the docker container:

docker run --gpus all -it --rm \
  -v "$ABCD_DATA_PATH":/workspace/data \
  -e DISPLAY=$DISPLAY \
  --network host \
  --volume /tmp/.X11-unix:/tmp/.X11-unix \
  eva-pytorch

1.3. OpenEB for Event Preprocessing 🛠️

In your shell, inside the code directory, install OpenEB by following these instructions:

git clone https://github.com/prophesee-ai/openeb.git --branch 4.6.0 
cd openeb && mkdir build && cd build 
cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=OFF 
cmake --build . --config Release -- -j $(nproc) 
. utils/scripts/setup_env.sh 
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib 
export HDF5_PLUGIN_PATH=$HDF5_PLUGIN_PATH:/usr/local/lib/hdf5/plugin 
export HDF5_PLUGIN_PATH=$HDF5_PLUGIN_PATH:/usr/local/hdf5/lib/plugin
cd ../..
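The two exports above are easy to lose in a fresh shell, and a missing HDF5_PLUGIN_PATH typically only surfaces later as an unreadable HDF5 event file. A tiny, hypothetical helper to check them before launching the pipeline (variable names taken from the commands above):

```python
import os

REQUIRED_VARS = ("LD_LIBRARY_PATH", "HDF5_PLUGIN_PATH")

def missing_env(required=REQUIRED_VARS):
    """Return the required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

if missing_env():
    print("Re-run the OpenEB setup exports; missing:", missing_env())
```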

2. Reproduce the results 🚀

First, visualize a recording 👀

python3 -m eva.data.subclasses.flight --id=1
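eva's exact preprocessing lives in eva.data; as general background, event streams are commonly binned into per-polarity count histograms before visualization or inference. A generic NumPy sketch, with names that are illustrative rather than eva's API:

```python
import numpy as np

def events_to_histogram(x, y, p, height, width):
    """Accumulate events into a 2-channel (one per polarity) count image.

    x, y : integer pixel coordinates of each event
    p    : polarity of each event (0 or 1)
    """
    hist = np.zeros((2, height, width), dtype=np.float32)
    # np.add.at accumulates repeated indices, so multiple events
    # at the same pixel are counted, not overwritten.
    np.add.at(hist, (p, y, x), 1.0)
    return hist

# Three events: two positive at pixel (x=2, y=1), one negative at (0, 0).
h = events_to_histogram(x=np.array([2, 2, 0]), y=np.array([1, 1, 0]),
                        p=np.array([1, 1, 0]), height=4, width=4)
```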

Then, evaluate the EVS and RGB models 📚 For example, Tab. 3 of [1] can be reproduced as follows:

python3 -m scripts.eval.cvprw.tab_3

3. Train a model 🏋️‍♂️

To train the fusion model:

python3 -m scripts.train --inputs_list='["dvs","rgb"]' --name=fusion-model --frequency="rgb"

To train the event-based model:

python3 -m scripts.train --inputs_list='["dvs"]' --name=dvs-model --frequency="rgb"

To train the RGB-based model:

python3 -m scripts.train --inputs_list='["rgb"]' --name=rgb-model --frequency="rgb"

4. Deploy the event-based model on the AMD Kria K26 📦

Deployment instructions coming soon.
