
Commit 7c0d70f

committed
docs(README): document leaderboard submission process and fix typos
docs(workflow): auto-mark stale GitHub issues
perf(vis): faster label view via per-label painting
1 parent c9bf4b1 commit 7c0d70f

5 files changed

Lines changed: 93 additions & 15 deletions

.github/issue_stale.yaml

Lines changed: 16 additions & 0 deletions
```diff
@@ -0,0 +1,16 @@
+# Number of days of inactivity before an issue becomes stale
+daysUntilStale: 60
+# Number of days of inactivity before a stale issue is closed
+daysUntilClose: 7
+# Issues with these labels will never be considered stale
+exemptLabels:
+  - backlog
+# Label to use when marking an issue as stale
+staleLabel: stale
+# Comment to post when marking an issue as stale. Set to `false` to disable
+markComment: >
+  This issue has been automatically marked as stale because it has not had
+  recent activity. It will be closed if no further activity occurs. Thank you
+  for your contributions.
+# Comment to post when closing a stale issue. Set to `false` to disable
+closeComment: false
```
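Under this config, an issue with no activity and no `backlog` label is marked stale after 60 days and closed 7 days later. A quick sketch of that timeline in plain Python (the constants are copied from the config; the helper itself is illustrative, not part of the stale bot):

```python
from datetime import date, timedelta

DAYS_UNTIL_STALE = 60  # daysUntilStale from the config above
DAYS_UNTIL_CLOSE = 7   # daysUntilClose from the config above

def stale_timeline(last_activity: date):
    """Return (stale_date, close_date) for an issue with no exempt labels."""
    stale = last_activity + timedelta(days=DAYS_UNTIL_STALE)
    return stale, stale + timedelta(days=DAYS_UNTIL_CLOSE)

# e.g. an issue last touched on New Year's Day 2024
s, c = stale_timeline(date(2024, 1, 1))
```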

README.md

Lines changed: 9 additions & 5 deletions
````diff
@@ -6,7 +6,8 @@ SeFlow: A Self-Supervised Scene Flow Method in Autonomous Driving
 [poster coming soon]
 [video coming soon]
 
-2024/07/16 17:18: Most of codes already uploaded and tested. You can to try training directly by downloading demo data. The process script will be public when the paper published.
+2024/07/16 17:18: Most of the code is already uploaded and tested. You can try training directly by [downloading](https://zenodo.org/records/12751363) the demo data, or the pretrained weights for evaluation.
+The processing script will be made public when the paper is published.
 
 Pre-trained weights for models are available at the [Zenodo](https://zenodo.org/records/12751363) link. Check usage in [2. Evaluation](#2-evaluation) or [3. Visualization](#3-visualization).
 
@@ -40,7 +41,7 @@ You can try following methods in our code without any effort to make your own be
 
 </details>
 
-💡: Want to learn how to add your own network in this structure? Check [Contribute](assets/README.md#contribute) section and know more about the code.
+💡: Want to learn how to add your own network to this structure? Check the [Contribute](assets/README.md#contribute) section to learn more about the code. Feel free to open a pull request!
 
 ## 0. Setup
 
@@ -117,12 +118,12 @@ Or you can directly download the pre-trained weight from [Zenodo](https://zenodo
 
 You can also train the supervised baseline models from our paper with the following commands. [Runtime: around 10 hours on 4x A100 GPUs.]
 ```bash
-python 1_train.py model=fastflow3d lr=2e-4 epochs=20 batch_size=16 loss_fn=deflowLoss
-python 1_train.py model=deflow lr=2e-4 epochs=20 batch_size=16 loss_fn=ff3dLoss
+python 1_train.py model=fastflow3d lr=2e-4 epochs=20 batch_size=16 loss_fn=ff3dLoss
+python 1_train.py model=deflow lr=2e-4 epochs=20 batch_size=16 loss_fn=deflowLoss
 ```
 
 Note: you may find different settings in the paper: here all methods enlarge the learning rate to 2e-4 and decrease the epochs to 20 for faster convergence (through analysis, we also found this gives better performance).
-However, we kept the setting on lr=2e-6 and 50 epochs in the paper experiment for fair comparison with ZeroFlow where we directly use their provided weights etc.
+However, we kept lr=2e-6 and 50 epochs in the paper experiments for a fair comparison with ZeroFlow, where we directly use their provided weights.
 
 ## 2. Evaluation
 
@@ -142,6 +143,9 @@ python 2_eval.py checkpoint=/home/kin/seflow_best.ckpt av2_mode=test leaderboard
 python 2_eval.py checkpoint=/home/kin/seflow_best.ckpt av2_mode=test leaderboard_version=2
 ```
 
+The terminal will output the command for submitting the result to the online leaderboard; you can follow [this section for EvalAI](https://github.com/KTH-RPL/DeFlow?tab=readme-ov-file#2-evaluation).
+
+Check all detailed result files (presented in Table 1 of our paper) in [this discussion](https://github.com/KTH-RPL/DeFlow/discussions/2).
 
 ## 3. Visualization
 
````

assets/README.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -70,14 +70,14 @@ python -c "from assets.cuda.chamfer3D import nnChamferDis;print('successfully im
 The cuda versions of `pytorch::pytorch-cuda` and `nvidia::cudatoolkit` need to be the same. [Reference link](https://github.com/pytorch/pytorch/issues/90673#issuecomment-1563799299)
 
 
-3. In cluster have error: `pandas ImportError: /lib64/libstdc++.so.6: version `GLIBCXX_3.4.29' not found`
+3. On the cluster you may see the error: `pandas ImportError: /lib64/libstdc++.so.6: version 'GLIBCXX_3.4.29' not found`
 Solved by `export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/proj/berzelius-2023-154/users/x_qinzh/mambaforge/lib`
 
 
 ## Contribute
 
 If you want to contribute a new model, here are some tips you can follow:
-1. Dataloader: we believe all data could be process to `.h5`, we named as different scene and inside a scene, the key of each data is timestamp.
+1. Dataloader: we believe all data can be processed into `.h5` files, named per scene; inside a scene, each frame is keyed by its timestamp. Check [dataprocess/README.md](../dataprocess/README.md#process) for more details.
 2. Model: all model files can be found [here: scripts/network/models](../scripts/network/models). You can look at deflow and fastflow3d to see how to implement a new model.
 3. Loss: all loss functions are in [here: scripts/network/loss_func.py](../scripts/network/loss_func.py). There are three loss functions already in the file; you can add a new one following the same pattern.
 4. Training: once you have implemented the model, add it to the config files [here: conf/model](../conf/model) and train it with `python 1_train.py model=your_model_name`. One more note: if your model's `res_dict` output is different, you may need to add a pattern in `def training_step` and `def validation_step`.
```
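For tip 1, a minimal sketch of reading such a scene file (assuming `h5py` is installed; the file path, timestamp keys, and `lidar` field below are hypothetical, for illustration only):

```python
import os
import tempfile

import h5py
import numpy as np

def read_scene(path):
    """Load one scene file: each top-level group is a timestamp,
    each dataset inside it is one field of that frame."""
    with h5py.File(path, "r") as f:
        return {ts: {k: f[ts][k][()] for k in f[ts]} for ts in sorted(f.keys())}

# build a toy scene file with two timestamped frames (hypothetical layout)
tmp = os.path.join(tempfile.mkdtemp(), "scene.h5")
with h5py.File(tmp, "w") as f:
    for ts in ("315967376859506000", "315967376959702000"):
        g = f.create_group(ts)
        g.create_dataset("lidar", data=np.zeros((4, 3), dtype=np.float32))

scene = read_scene(tmp)
```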

assets/slurm/dufolabel_sbatch.py

Lines changed: 58 additions & 0 deletions
```diff
@@ -0,0 +1,58 @@
+"""
+# Created: 2023-11-30 17:02
+# Copyright (C) 2023-now, RPL, KTH Royal Institute of Technology
+# Author: Qingwen Zhang (https://kin-zhang.github.io/)
+#
+#
+# Description: Write sbatch files for DUFO jobs on cluster (SLURM), no GPU needed
+# Reference:
+# * ZeroFlow data sbatch: https://github.com/kylevedder/zeroflow/blob/master/data_prep_scripts/split_nsfp_jobs_sbatch.py
+
+# Run with the following commands (only train needs to be processed with the dufo label)
+- python assets/slurm/dufolabel_sbatch.py --split 50 --total 700 --interval 1 --data_dir /home/kin/data/av2/preprocess/sensor/train --data_mode train
+- python assets/slurm/dufolabel_sbatch.py --split 100 --total 800 --interval 2 --data_dir /proj/berzelius-2023-154/users/x_qinzh/dataset/waymo/fix_preprocess/train
+"""
+
+import fire, time, os
+def main(
+    data_dir: str = "/proj/berzelius-2023-154/users/x_qinzh/av2/preprocess/sensor/train",
+    split: int = 50,
+    total: int = 2001,
+    interval: int = 1,
+):
+    # total+1 because range() excludes its end point
+    for i in range(0, total + 1, split):
+        scene_range = [i, min(i + split, total + 1)]
+        print(scene_range)
+        sbatch_file_content = \
+f"""#!/bin/bash
+#SBATCH -J pack_{scene_range[0]}_{scene_range[1]}
+#SBATCH --gpus 0
+#SBATCH --cpus-per-task 32
+#SBATCH --mem 64G
+#SBATCH --mincpus=32
+#SBATCH -t 1-00:00:00
+#SBATCH --mail-type=END,FAIL
+#SBATCH --mail-user=qingwen@kth.se
+#SBATCH --output /proj/berzelius-2023-154/users/x_qinzh/seflow/logs/slurm/0_lidar/%J_{scene_range[0]}_{scene_range[1]}.out
+#SBATCH --error /proj/berzelius-2023-154/users/x_qinzh/seflow/logs/slurm/0_lidar/%J_{scene_range[0]}_{scene_range[1]}.err
+
+cd /proj/berzelius-2023-154/users/x_qinzh/seflow
+export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/proj/berzelius-2023-154/users/x_qinzh/mambaforge/lib
+
+/proj/berzelius-2023-154/users/x_qinzh/mambaforge/envs/seflow/bin/python 0_process.py \\
+    --data_dir {data_dir} \\
+    --interval {interval} \\
+    --scene_range {scene_range[0]},{scene_range[1]}
+
+"""
+        # write the sbatch file and submit it
+        with open("tmp_sbatch.sh", "w") as f:
+            f.write(sbatch_file_content)
+        print("Write sbatch file: tmp_sbatch.sh")
+        os.system("sbatch tmp_sbatch.sh")
+
+if __name__ == '__main__':
+    start_time = time.time()
+    fire.Fire(main)
+    print(f"\nTime used: {(time.time() - start_time)/60:.2f} mins")
```
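The chunking in `main` above splits `total` scenes into `split`-sized SLURM jobs; it can be isolated into a small helper for sanity-checking the ranges before submitting anything (a sketch mirroring the loop, not part of the script itself):

```python
def scene_ranges(total: int, split: int):
    """Mirror of the range(0, total + 1, split) loop in dufolabel_sbatch.py:
    yields [start, end) pairs; total + 1 because range() excludes its end."""
    for i in range(0, total + 1, split):
        yield [i, min(i + split, total + 1)]

# e.g. the docstring's first example: 700 scenes in chunks of 50
ranges = list(scene_ranges(total=700, split=50))
```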

tests/scene_flow.py

Lines changed: 8 additions & 8 deletions
```diff
@@ -93,18 +93,18 @@ def vis(
     pose_flow = pc0[:, :3] @ ego_pose[:3, :3].T + ego_pose[:3, 3] - pc0[:, :3]
 
     pcd = o3d.geometry.PointCloud()
-    # pcd.points = o3d.utility.Vector3dVector(pc0[:, :3][~gm0])
-    # pcd.colors = o3d.utility.Vector3dVector(flow_color[~gm0])
-    pcd.points = o3d.utility.Vector3dVector(pc0[:, :3])
     if flow_mode in ['dufo_label', 'label']:
         labels = data[flow_mode]
-        pcd.colors = o3d.utility.Vector3dVector(np.ones_like(pc0[:, :3]))
-        for i in range(pc0.shape[0]):
-            if labels[i] <= 0:
-                continue
+        pcd_i = o3d.geometry.PointCloud()
+        for label_i in np.unique(labels):
+            pcd_i.points = o3d.utility.Vector3dVector(pc0[labels == label_i][:, :3])
+            if label_i <= 0:
+                pcd_i.paint_uniform_color([1.0, 1.0, 1.0])
             else:
-                pcd.colors[i] = color_map[labels[i] % len(color_map)]
+                pcd_i.paint_uniform_color(color_map[label_i % len(color_map)])
+            pcd += pcd_i
     elif flow_mode in data:
+        pcd.points = o3d.utility.Vector3dVector(pc0[:, :3])
         flow = data[flow_mode] - pose_flow  # ego motion compensation here.
         flow_color = flow_to_rgb(flow) / 255.0
         is_dynamic = np.linalg.norm(flow, axis=1) > 0.1
```
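The per-label `paint_uniform_color` above replaces a Python loop over every point. The same color assignment can be sketched with plain NumPy fancy indexing, without Open3D; the white background for `label <= 0` and the `% len(color_map)` wrap-around follow the diff, while the function name and toy shapes are illustrative:

```python
import numpy as np

def label_colors(labels: np.ndarray, color_map: np.ndarray) -> np.ndarray:
    """Map integer labels to RGB colors in one vectorized pass.

    labels: (N,) int array; values <= 0 are background, painted white.
    color_map: (K, 3) float array of RGB colors in [0, 1].
    """
    colors = np.ones((labels.shape[0], 3))  # default: white background
    fg = labels > 0                          # foreground mask
    colors[fg] = color_map[labels[fg] % len(color_map)]
    return colors

# toy example: 3 points, a 2-color map (red, green)
cmap = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
out = label_colors(np.array([0, 1, 2]), cmap)
```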

0 commit comments

Comments
 (0)