- [x] [FastFlow3d](https://arxiv.org/abs/2103.01306): RA-L 2021, a basic backbone model.
- [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024, their pre-trained weights can easily be converted into our format through [the script](tools/zerof2ours.py).
- [ ] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021, 3x faster than the original version thanks to [our CUDA speedup](assets/cuda/README.md), with the same (slightly better) performance. Done coding, public after review.
- [ ] [FastNSF](https://arxiv.org/abs/2304.09121): ICCV 2023. Done coding, public after review.
- [x] [Flow4D](https://arxiv.org/abs/2407.07995): RA-L 2025, leveraging a 4D voxel network for LiDAR scene flow estimation.
- [ ] [ICP-Flow](https://arxiv.org/abs/2402.17351): CVPR 2024. Done coding, public after review.
- [ ] ... more on the way
</details>
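The ZeroFlow entry above mentions converting released checkpoints into this repo's format via a script. Such a conversion is essentially a state-dict key remap; the sketch below illustrates the idea under the assumption of PyTorch-style (plain dict) state dicts. The prefix names are hypothetical, not the script's real mapping — see [tools/zerof2ours.py](tools/zerof2ours.py) for the authoritative version.

```python
# Hypothetical sketch of a checkpoint conversion like tools/zerof2ours.py:
# remap state-dict key prefixes from the source naming scheme to ours.
# The prefixes "model." and "network." below are assumptions for illustration.
def convert_state_dict(src, old_prefix="model.", new_prefix="network."):
    converted = {}
    for key, value in src.items():
        if key.startswith(old_prefix):
            key = new_prefix + key[len(old_prefix):]
        converted[key] = value  # keys without the prefix pass through unchanged
    return converted

print(convert_state_dict({"model.encoder.weight": 0.5, "head.bias": 0.1}))
```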
💡: Want to learn how to add your own network in this structure? Check the [Contribute section] to know more about the code, and feel free to add your method and bibtex [here](#cite-us) by pull request.
---
<!--📜 Changelog:

- 🎁 2025/1/28 14:58: Updated the codebase to collect all methods in one repository, following the [Pointcept](https://github.com/Pointcept/Pointcept) repo.
- 🤗 2024/11/18 16:17: Updated the model and demo data download links through HuggingFace; personally, I found `wget` from the HuggingFace link much faster than Zenodo.
- 2024/09/26 16:24: All code is uploaded and tested. You can try training directly by downloading the demo data or pretrained weights (through [HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow)/[Zenodo](https://zenodo.org/records/13744999)) for evaluation.
- 🔥 2024/07/02: Check the self-supervised version in our new ECCV'24 work [SeFlow](https://github.com/KTH-RPL/SeFlow), ranked 1st among self-supervised methods on the new leaderboard.-->
## 0. Installation
cd assets/cuda/mmcv && python ./setup.py install && cd ../../..
cd assets/cuda/chamfer3D && python ./setup.py install && cd ../../..
```
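After building, you can quickly check whether the compiled extensions are importable from your Python environment. This is a minimal sketch; `chamfer3D` below is an assumed module name, so check each `assets/cuda/*/setup.py` for the names it actually installs.

```python
import importlib.util

def check_extensions(names):
    # Report which extension modules are importable in the current environment,
    # without actually importing (and thus initializing) them.
    return {name: importlib.util.find_spec(name) is not None for name in names}

# "chamfer3D" is an assumed module name; read assets/cuda/*/setup.py for the real ones.
print(check_extensions(["chamfer3D"]))
```

If a module reports `False`, rerun the corresponding `setup.py install` step above inside the same environment.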
Or you can always choose [Docker](https://en.wikipedia.org/wiki/Docker_(software)) for an isolated environment and free yourself from installation; pull the image as follows.

If you have a different architecture, please build the image yourself with `cd OpenSceneFlow && docker build -t zhangkin/opensf .` by going through the [build-docker-image](assets/README.md#build-docker-image) section.

```bash
# option 1: pull from docker hub
docker pull zhangkin/opensf

# run container
docker run -it --gpus all -v /dev/shm:/dev/shm -v /home/kin/data:/home/kin/data --name opensceneflow zhangkin/opensf /bin/zsh

# and it is better to recompile the CUDA extensions against your own GPU inside the container:
cd /home/kin/workspace/OpenSceneFlow/assets/cuda/mmcv && /opt/conda/envs/opensf/bin/python ./setup.py install
cd /home/kin/workspace/OpenSceneFlow/assets/cuda/chamfer3D && /opt/conda/envs/opensf/bin/python ./setup.py install
```
## 1. Data Preparation
Refer to [dataprocess/README.md](dataprocess/README.md) for dataset download instructions. Currently, we support **Argoverse 2**, **Waymo**, and **custom datasets** (more datasets will be added in the future).
After downloading, convert the raw data to `.h5` format for easy training, evaluation, and visualization. Follow the steps in [dataprocess/README.md#process](dataprocess/README.md#process). For a quick start, use our **mini processed dataset**, which includes one scene in `train` and `val`. It is pre-converted to `.h5` format with label data ([Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)/[HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip)).
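To make the expected layout concrete, here is a small helper sketching where a processed scene file would live. The directory layout and file naming below are assumptions based on the mini demo data description (one scene per split, one `.h5` file per scene); the authoritative layout is in [dataprocess/README.md](dataprocess/README.md).

```python
from pathlib import Path

def scene_h5_path(data_root: str, split: str, scene_id: str) -> Path:
    # Assumed layout: <data_root>/<split>/<scene_id>.h5, with "train" and "val"
    # splits as in the mini demo data. See dataprocess/README.md for the real layout.
    if split not in ("train", "val"):
        raise ValueError(f"unknown split: {split}")
    return Path(data_root) / split / f"{scene_id}.h5"

print(scene_h5_path("/home/kin/data", "val", "demo_scene"))
```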
These works were partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP), funded by the Knut and Alice Wallenberg Foundation, and Prosense (2020-02963), funded by Vinnova.
The computations were enabled by the supercomputing resource Berzelius provided by National Supercomputer Centre at Linköping University and the Knut and Alice Wallenberg Foundation, Sweden.
*OpenSceneFlow* is designed by [Qingwen Zhang](https://kin-zhang.github.io/) from the DeFlow and SeFlow projects. If you find it useful, please cite our works:

```bibtex
@inproceedings{zhang2024seflow,
  author={Zhang, Qingwen and Yang, Yi and Li, Peizheng and Andersson, Olov and Jensfelt, Patric},
  title={{SeFlow}: A Self-Supervised Scene Flow Method in Autonomous Driving},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024},
  pages={353--369},
  organization={Springer},
  doi={10.1007/978-3-031-73232-4_20}
}
@inproceedings{zhang2024deflow,
  author={Zhang, Qingwen and Yang, Yi and Fang, Heng and Geng, Ruoyu and Jensfelt, Patric},
  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
  title={{DeFlow}: Decoder of Scene Flow Network in Autonomous Driving},
  year={2024},
  pages={2105--2111},
  doi={10.1109/ICRA57147.2024.10610278}
}
```

And the excellent works of our collaborators:

```bibtex
@article{kim2025flow4d,
  author={Kim, Jaeyeul and Woo, Jungwan and Shin, Ukcheol and Oh, Jean and Im, Sunghoon},
  journal={IEEE Robotics and Automation Letters},
  title={{Flow4D}: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation},
  year={2025},
  volume={10},
  number={4},
  pages={3462--3469},
  doi={10.1109/LRA.2025.3542327}
}
@article{khoche2025ssf,
  title={{SSF}: Sparse Long-Range Scene Flow for Autonomous Driving},
  author={Khoche, Ajinkya and Zhang, Qingwen and Sanchez, Laura Pereira and Asefaw, Aron and Mansouri, Sina Sharif and Jensfelt, Patric},
  journal={arXiv preprint arXiv:2501.17821},
  year={2025}
}
```
Feel free to contribute your method and add your bibtex here by pull request!
❤️: Evaluation metric from [BucketedSceneFlowEval](https://github.com/kylevedder/BucketedSceneFlowEval); README reference from [Pointcept](https://github.com/Pointcept/Pointcept); many thanks to [ZeroFlow](https://github.com/kylevedder/zeroflow) ...