Commit 4090eea

docs(README): docs typo and normal dataloader key check
1 parent 103478a

2 files changed: 12 additions & 8 deletions

README.md: 7 additions & 3 deletions
````diff
@@ -58,11 +58,11 @@ docker run -it --gpus all -v /dev/shm:/dev/shm -v /home/kin/data:/home/kin/data
 
 ## 1. Run & Train
 
-Note: Prepare raw data and process train data only needed run once for the task. No need to run till you delete all data.
+Note: Preparing the raw data and processing the train data only needs to be done once per task. There is no need to repeat the data processing steps unless you delete all data.
 
 ### Data Preparation
 
-Check [dataprocess/README.md](dataprocess/README.md#argoverse-20) for downloading tips for the raw Argoverse 2 dataset
+Check [dataprocess/README.md](dataprocess/README.md#argoverse-20) for tips on downloading the raw Argoverse 2 dataset.
 
 Maybe you only want to have the mini processed dataset to try the code quickly, We directly provide one scene inside `train` and `val`. It already converted to `.h5` format and processed with the label data.
 <!-- You can download it from [Zenodo](https://zenodo.org/record/12632962) and extract it to the data folder. -->
````
````diff
@@ -91,19 +91,23 @@ python 0_process.py --data_dir /home/kin/data/av2/preprocess/sensor/train --scen
 
 ### Train the model
 
-Train SeFlow needed to specify the loss function, we set the config of our best model in the leaderboard.
+Training SeFlow requires specifying the loss function; we provide the config of our best model on the leaderboard. [Runtime: around 18 hours on 4x A100 GPUs.]
 
 ```bash
 python 1_train.py model=deflow lr=2e-4 epochs=20 batch_size=16 loss_fn=seflowLoss "add_seloss={chamfer_dis: 1.0, static_flow_loss: 1.0, dynamic_chamfer_dis: 1.0, cluster_based_pc0pc1: 1.0}" "model.target.num_iters=2" "model.val_monitor=val/Dynamic/Mean"
 ```
 
 ### Other Benchmark Models
 
+You can also train the supervised baseline models from our paper with the following commands. [Runtime: around 10 hours on 4x A100 GPUs.]
 ```bash
 python 1_train.py model=fastflow3d lr=2e-4 epochs=20 batch_size=16 loss_fn=deflowLoss
 python 1_train.py model=deflow lr=2e-4 epochs=20 batch_size=16 loss_fn=ff3dLoss
 ```
 
+Note: You may notice different settings in the paper: here all methods use a larger learning rate (2e-4) and fewer epochs (20) for faster convergence (our analysis also found this gives better performance).
+However, we kept lr=2e-6 and 50 epochs in the paper experiments for a fair comparison with ZeroFlow, where we directly use their provided weights etc.
 
 ## 2. Evaluation
 
 You can view Wandb dashboard for the training and evaluation results or upload result to online leaderboard.
````

scripts/utils/mics.py: 5 additions & 5 deletions
````diff
@@ -285,11 +285,11 @@ def __getitem__(self, index):
             data_dict['pc0'] = f[key]['lidar'][:]
             data_dict['gm0'] = f[key]['ground_mask'][:]
             data_dict['pose0'] = f[key]['pose'][:]
-            for label_key in ['dufo_label', 'label']:
-                if label_key in f[key]:
-                    data_dict[label_key] = f[key][label_key][:]
-            if self.flow_view and self.vis_name in f[key]:
-                data_dict[self.vis_name] = f[key][self.vis_name][:]
+            for flow_key in [self.vis_name, 'dufo_label', 'label']:
+                if flow_key in f[key]:
+                    data_dict[flow_key] = f[key][flow_key][:]
+
+            if self.flow_view:
                next_timestamp = str(self.data_index[index+1][1])
                data_dict['pose1'] = f[next_timestamp]['pose'][:]
                data_dict['pc1'] = f[next_timestamp]['lidar'][:]
````
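The "normal dataloader key check" in this change treats the visualization key and the label keys uniformly: each is copied into `data_dict` only if it actually exists in the HDF5 group, instead of gating the visualization key on `self.flow_view`. A minimal sketch of that pattern, using a plain dict in place of the h5py group and hypothetical names (`collect_optional_keys`, `flow_est`):

```python
# Sketch of the key-check pattern from the commit: copy each optional key
# only if the source group contains it. A plain dict stands in for the
# h5py group `f[key]`; `vis_name` mimics `self.vis_name`.
def collect_optional_keys(group, vis_name):
    data_dict = {}
    # One loop now covers the visualization key and both label keys.
    for flow_key in [vis_name, 'dufo_label', 'label']:
        if flow_key in group:
            data_dict[flow_key] = group[flow_key]
    return data_dict

# 'label' is absent here, so it is skipped rather than raising a KeyError.
group = {'flow_est': [0.1, 0.2], 'dufo_label': [1, 0]}
print(collect_optional_keys(group, 'flow_est'))
# → {'flow_est': [0.1, 0.2], 'dufo_label': [1, 0]}
```

The same membership test works unchanged on a real `h5py.Group`, since it also supports the `in` operator for dataset names.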
