**SSF: Sparse Long-Range Scene Flow for Autonomous Driving**

*Ajinkya Khoche, Qingwen Zhang, Laura Pereira Sánchez, Aron Asefaw, Sina Sharif Mansouri and Patric Jensfelt*

International Conference on Robotics and Automation (**ICRA**) 2025
## 1. Data Preparation
Check [dataprocess/README.md](dataprocess/README.md#argoverse-20) for downloading tips for the raw Argoverse 2 dataset and for the [commands to preprocess the data into h5 files](dataprocess/README.md#process).

Another good way to try the code quickly is the **mini processed dataset**: we directly provide one scene each inside `train` and `val`.
It is already converted to `.h5` format and processed with the label data.
You can download it from [Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)/[HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip) and extract it to the data folder.
Then you can directly use this mini processed demo data to run the [training script](#2-quick-start).
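After extracting, you can sanity-check a downloaded scene file by listing its top-level keys. A minimal sketch using `h5py`; the helper name, example path, and exact key layout are illustrative, not part of this repo:

```python
import h5py

def list_h5_keys(path):
    """Return (and print) the top-level groups/datasets in an .h5 scene file."""
    with h5py.File(path, "r") as f:
        keys = sorted(f.keys())
    for k in keys:
        print(k)
    return keys

# Example (path is illustrative):
# list_h5_keys("data/val/<scene_id>.h5")
```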
<!-- And the terminal will output the command for you to submit the result to the online leaderboard. You can follow [this section for evalai](https://github.com/KTH-RPL/DeFlow?tab=readme-ov-file#2-evaluation).

If you select `av2_mode=test`, the output is a zip file that you can submit to the Online Leaderboard.

Note: The leaderboard results in the DeFlow & SeFlow main papers use [version 1](https://eval.ai/web/challenges/challenge-page/2010/evaluation), as [version 2](https://eval.ai/web/challenges/challenge-page/2210/overview) was updated after DeFlow & SeFlow.

Check all detailed result files (presented in our paper Table 1) in [this discussion](https://github.com/KTH-RPL/DeFlow/discussions/2). -->
```bash
# Step 1: since the evalai environment may conflict with the deflow one, we directly create a new one:
mamba create -n py37 python=3.7
mamba activate py37
pip install "evalai"

# Step 2: log in to EvalAI and register your team
evalai set-token <your token>

# Step 3: copy the command printed above and submit to the leaderboard
```