chore(base): update demo data and weights in the codebase.
* eval and submit: use the new saved zip file name and output the submit command directly.
* dataloader: add num_frames to the dataloader for later use.
* demo data & weights: update all weights for the preprocess_v2 data, where the ego pose is centered at the sensor center.
* config: update the monitor metric to the new one in all mode configs.
README.md: 30 additions & 20 deletions
@@ -6,9 +6,9 @@ SeFlow: A Self-Supervised Scene Flow Method in Autonomous Driving
[poster coming soon]
[video coming soon]
- 2024/07/09 16:34: I'm working on updating code here now. **Not fully ready yet** until Jul'15.
+ 2024/07/16 17:18: Most of the code is already uploaded and tested. You can try training directly by downloading the demo data. The processing script will be ready when the paper is published.
- Pre-trained weights for models are available in [Zenodo](https://zenodo.org/records/12632962) link. Check usage in [2. Evaluation](#2-evaluation) or [3. Visualization](#3-visualization).
+ Pre-trained weights for models are available at the [Zenodo](https://zenodo.org/records/12751363) link. Check usage in [2. Evaluation](#2-evaluation) or [3. Visualization](#3-visualization).
Task: __Self-Supervised__ Scene Flow Estimation in Autonomous Driving. No human labels needed. Real-time inference (15-20 Hz on an RTX 3090).
@@ -32,9 +32,10 @@ You can try following methods in our code without any effort to make your own be
- - [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024, their pre-trained weights can be converted into our format easily through [the script](TODO).
+ - [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024, their pre-trained weights can be converted into our format easily through [the script](tests/zerof2ours.py).
- [ ] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021, 3x faster than the original version thanks to [our CUDA speed-up](assets/cuda/README.md), with the same (slightly better) performance. Done coding, public after review.
- [ ] [FastNSF](https://arxiv.org/abs/2304.09121): ICCV 2023. Done coding, public after review.
+ <!-- - [ ] [Flow4D](https://arxiv.org/abs/2407.07995): 1st supervised network on the new leaderboard. Done coding, public after review. -->
@@ -71,25 +72,23 @@ docker run -it --gpus all -v /dev/shm:/dev/shm -v /home/kin/data:/home/kin/data
## 1. Run & Train
- Note: Preparing raw data and processing train data only needs to be run once for the task. No need to repeat the data processing steps until you delete all data.
+ Note: Preparing raw data and processing train data only needs to be run once for the task. No need to repeat the data processing steps until you delete all data. We use [wandb](https://wandb.ai/) to log the training process, and you may want to change all `entity="kth-rpl"` to your own entity.
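Switching the wandb entity can be done with a quick `sed` substitution. The sketch below is illustrative only: `demo_train_conf.yaml` is a made-up file name (locate the real occurrences first, e.g. with `grep -rl 'kth-rpl' .`), and GNU sed is assumed for the `-i` flag.

```shell
# Illustrative only: demo_train_conf.yaml is a made-up file name standing in
# for whichever repo files actually contain the entity string.
printf 'entity="kth-rpl"\n' > demo_train_conf.yaml
# Replace the default entity with your own wandb entity (GNU sed assumed).
sed -i 's/entity="kth-rpl"/entity="my-team"/' demo_train_conf.yaml
cat demo_train_conf.yaml
```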
### Data Preparation
- Check [dataprocess/README.md](dataprocess/README.md#argoverse-20) for downloading tips for the raw Argoverse 2 dataset.
+ Check [dataprocess/README.md](dataprocess/README.md#argoverse-20) for downloading tips for the raw Argoverse 2 dataset. Or, if you only want the **mini processed dataset** to try the code quickly, we directly provide one scene inside `train` and `val`, already converted to `.h5` format and processed with the label data.
+ You can download it from [Zenodo](https://zenodo.org/record/12751363) and extract it to the data folder.
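A possible download-and-extract sketch for the mini dataset, assuming `wget` and `unzip` are available; the archive file name below is a guess, so check the Zenodo record page for the actual file, and the target directory reuses the data root from the docker example earlier in this README.

```shell
# The file name demo_data.zip is an assumption; browse the Zenodo record
# (12751363) to confirm the real demo-data file name before running.
wget https://zenodo.org/record/12751363/files/demo_data.zip
unzip demo_data.zip -d /home/kin/data   # data root used in the docker example
```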
- Maybe you only want to have the mini processed dataset to try the code quickly, We directly provide one scene inside `train` and `val`. It already converted to `.h5` format and processed with the label data.
- <!-- You can download it from [Zenodo](https://zenodo.org/record/12632962) and extract it to the data folder. -->
```bash
# TODO: update the link later when the data is ready
```
Extract all data to the unified h5 format. [Runtime: normally about 10 minutes total to run the following commands on my desktop, 45 minutes on the cluster I used.]
Process train data for self-supervised learning. Only the training data needs this step. [Runtime: normally about 15 hours on my desktop, 3 hours on the cluster with five nodes running in parallel.]
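The concrete commands for these two steps are elided in this diff view. A rough sketch is given below; the script names, flags, and paths are all assumptions and should be checked against the repository's actual README and scripts.

```shell
# Step 1 (assumed script and flags): extract raw Argoverse 2 logs into the
# unified .h5 format described above.
python dataprocess/extract_av2.py --data_mode train --output_dir /home/kin/data/av2/preprocess_v2

# Step 2 (assumed script and flags): compute the self-supervised labels;
# only the train split needs this step.
python process.py --data_dir /home/kin/data/av2/preprocess_v2/sensor/train
```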
Or you can directly download the pre-trained weight from [Zenodo](https://zenodo.org/records/12751363/files/seflow_best.ckpt) and skip the training step.
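For example, the released checkpoint at the URL given above can be fetched into a local folder (the `checkpoints` directory name is just a convention here):

```shell
mkdir -p checkpoints
# URL taken verbatim from the text above; -P sets the download directory.
wget -P checkpoints https://zenodo.org/records/12751363/files/seflow_best.ckpt
```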
### Other Benchmark Models
You can also train the supervised baseline model in our paper with the following command. [Runtime: around 10 hours on 4x A100 GPUs.]
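The command itself is not shown in this diff. A hypothetical invocation might look like the following, where the entry point, the override syntax, and the `<baseline_model>` placeholder are all assumptions to be checked against the repo's actual training interface:

```shell
# Hypothetical: replace <baseline_model> with the supervised model name from
# the paper, and adjust the flags to the repository's real training interface.
python train.py model=<baseline_model> gpus=4
```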
@@ -129,27 +130,36 @@ Since in training, we save all hyper-parameters and model checkpoints, the only
```bash
# downloaded pre-trained weight, or train by yourself
```

- We provide a script to visualize the results of the model. You can specify the checkpoint path and the data path to visualize the results. The step is quickly similar to evaluation.
+ We also provide a script to visualize the results of the model. You can specify the checkpoint path and the data path to visualize the results. The steps are quite similar to evaluation.
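The visualization command is not shown in this diff. A hypothetical call with the two paths the text mentions is sketched below; the script name and argument names are assumptions.

```shell
# Hypothetical script and arguments: point it at a checkpoint and the data
# you want to visualize, per the description above.
python visualization.py --checkpoint checkpoints/seflow_best.ckpt --data_dir /home/kin/data/av2
```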