README.md (+9 −5 lines)
@@ -6,7 +6,8 @@ SeFlow: A Self-Supervised Scene Flow Method in Autonomous Driving
[poster coming soon]
[video coming soon]
- 2024/07/16 17:18: Most of codes already uploaded and tested. You can to try training directly by downloading demo data. The process script will be public when the paper published.
+ 2024/07/16 17:18: Most of the code is already uploaded and tested. You can try training directly by [downloading](https://zenodo.org/records/12751363) the demo data, or the pretrained weights for evaluation.
+ The processing script will be made public when the paper is published.
Pre-trained weights for the models are available at the [Zenodo](https://zenodo.org/records/12751363) link. Check usage in [2. Evaluation](#2-evaluation) or [3. Visualization](#3-visualization).
@@ -40,7 +41,7 @@ You can try following methods in our code without any effort to make your own be
</details>
- 💡: Want to learn how to add your own network in this structure? Check [Contribute](assets/README.md#contribute) section and know more about the code.
+ 💡: Want to learn how to add your own network in this structure? Check the [Contribute](assets/README.md#contribute) section to learn more about the code. Feel free to open a pull request!
## 0. Setup
@@ -117,12 +118,12 @@ Or you can directly download the pre-trained weight from [Zenodo](https://zenodo
You can also train the supervised baseline model in our paper with the following command. [Runtime: around 10 hours on 4x A100 GPUs.]
Note: You may find settings here that differ from the paper: all methods enlarge the learning rate to 2e-4 and decrease the epochs to 20 for faster convergence (through our analysis, we also found this gave better performance).
- However, we kept the setting on lr=2e-6 and 50 epochs in the paper experiment for fair comparison with ZeroFlow where we directly use their provided weights etc.
+ However, we kept the setting of lr=2e-6 and 50 epochs in the paper experiments for a fair comparison with ZeroFlow, where we directly use their provided weights.
The terminal will also output the command for you to submit the result to the online leaderboard. You can follow [this section for evalai](https://github.com/KTH-RPL/DeFlow?tab=readme-ov-file#2-evaluation).
+ Check all detailed result files (presented in our paper Table 1) in [this discussion](https://github.com/KTH-RPL/DeFlow/discussions/2).
The CUDA versions of `pytorch::pytorch-cuda` and `nvidia::cudatoolkit` need to be the same. [Reference link](https://github.com/pytorch/pytorch/issues/90673#issuecomment-1563799299)
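The version-matching requirement above can be sketched as a small helper. This is a minimal illustration, not an official compatibility rule: it assumes agreement on major.minor is what matters, which is a simplification.

```python
def cuda_versions_match(pytorch_cuda: str, cudatoolkit: str) -> bool:
    """Return True if two CUDA version strings agree on major.minor.

    Illustrative assumption: patch-level differences (e.g. "11.8" vs
    "11.8.0") are ignored; only the first two components are compared.
    """
    return pytorch_cuda.split(".")[:2] == cudatoolkit.split(".")[:2]

print(cuda_versions_match("11.8", "11.8.0"))  # True: patch level ignored
print(cuda_versions_match("11.8", "12.1"))    # False: major versions differ
```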
- 3. In cluster have error: `pandas ImportError: /lib64/libstdc++.so.6: version GLIBCXX_3.4.29' not found`
+ 3. On a cluster, the error `pandas ImportError: /lib64/libstdc++.so.6: version 'GLIBCXX_3.4.29' not found`
Solved by `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/proj/berzelius-2023-154/users/x_qinzh/mambaforge/lib`
## Contribute
If you want to contribute a new model, here are some tips you can follow:
- 1. Dataloader: we believe all data could be process to `.h5`, we named as different scene and inside a scene, the key of each data is timestamp.
+ 1. Dataloader: we believe all data can be processed into `.h5` files, one per scene; inside a scene, the key of each frame is its timestamp. Check [dataprocess/README.md](../dataprocess/README.md#process) for more details.
2. Model: All model files can be found [here: scripts/network/models](../scripts/network/models). You can look at deflow and fastflow3d to see how to implement a new model.
3. Loss: All loss files can be found [here: scripts/network/loss_func.py](../scripts/network/loss_func.py). There are already three loss functions inside the file; you can add a new one following the same pattern.
4. Training: Once you have implemented the model, add it to the config files [here: conf/model](../conf/model) and train it using the command `python 1_train.py model=your_model_name`. One more note: if the `res_dict` from your model's output is different, you may need to add a matching pattern in `def training_step` and `def validation_step`.
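The scene/timestamp layout described in step 1 can be sketched as follows. This is an illustration under stated assumptions, not the repo's actual schema: the integer-valued string keys, the frame payloads, and the consecutive-pair logic are all hypothetical, standing in for whatever the real `.h5` processing produces.

```python
def frame_pairs(scene: dict) -> list:
    """Yield consecutive (t0, t1) timestamp-key pairs from one scene.

    Assumed layout (illustrative): one file per scene, with each
    top-level key a frame timestamp; scene flow is typically computed
    between consecutive frames, so the dataloader's basic unit is a
    pair of adjacent timestamps.
    """
    keys = sorted(scene.keys(), key=int)  # timestamps sort numerically
    return list(zip(keys, keys[1:]))

# A toy "scene": timestamp -> frame payload (placeholders here).
scene = {"101": None, "100": None, "102": None}
print(frame_pairs(scene))  # [('100', '101'), ('101', '102')]
```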