
Commit f7e59aa

Authored by Kin-Zhang and jykim94
Merge Flow4D model into codebase (#1)
* feat(flow4d): update flow4d model. docs(conf): update flow4d README and conf
* Add Flow4D details to README.md
* docs: add flow4d bib also.
* fix(trainer): validation step with res may be None, fix with if.
* docs(dockerfile): update dockerfile for convenient env setup.
* update README and delete useless info
* update bib at the end of readme but with link to jump at the beginning.
* docs(README): update print message and ignore warning info for np in eval script.

Co-authored-by: jykim94 <89293559+jykim94@users.noreply.github.com>
1 parent 67a6d47 commit f7e59aa

14 files changed: 632 additions and 87 deletions


Dockerfile

Lines changed: 12 additions & 20 deletions
@@ -2,22 +2,16 @@
 FROM nvidia/cuda:11.7.1-devel-ubuntu20.04
 ENV DEBIAN_FRONTEND noninteractive
 
-RUN apt update && apt install -y --no-install-recommends \
-    git curl vim rsync htop
+RUN apt update && apt install -y git curl vim rsync htop
 
-RUN curl -o ~/miniconda.sh -LO https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh && \
-    chmod +x ~/miniconda.sh && \
-    ~/miniconda.sh -b -p /opt/conda && \
-    rm ~/miniconda.sh && \
+RUN curl -o ~/miniforge3.sh -LO https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-x86_64.sh && \
+    chmod +x ~/miniforge3.sh && \
+    ~/miniforge3.sh -b -p /opt/conda && \
+    rm ~/miniforge3.sh && \
     /opt/conda/bin/conda clean -ya && /opt/conda/bin/conda init bash
 
-RUN curl -o ~/mamba.sh -LO https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh && \
-    chmod +x ~/mamba.sh && \
-    ~/mamba.sh -b -p /opt/mambaforge && \
-    rm ~/mamba.sh && /opt/mambaforge/bin/mamba init bash
-
 # install zsh and oh-my-zsh
-RUN apt install -y wget git zsh tmux vim g++
+RUN apt update && apt install -y wget git zsh tmux vim g++
 RUN sh -c "$(wget -O- https://github.com/deluan/zsh-in-docker/releases/download/v1.1.5/zsh-in-docker.sh)" -- \
     -t robbyrussell -p git \
     -p https://github.com/agkozak/zsh-z \
@@ -26,18 +20,16 @@ RUN sh -c "$(wget -O- https://github.com/deluan/zsh-in-docker/releases/download/
     -p https://github.com/zsh-users/zsh-syntax-highlighting
 
 RUN printf "y\ny\ny\n\n" | bash -c "$(curl -fsSL https://raw.githubusercontent.com/Kin-Zhang/Kin-Zhang/main/scripts/setup_ohmyzsh.sh)"
-RUN /opt/conda/bin/conda init zsh && /opt/mambaforge/bin/mamba init zsh
+RUN /opt/conda/bin/conda init zsh && /opt/conda/bin/mamba init zsh
 
 # change to conda env
 ENV PATH /opt/conda/bin:$PATH
-ENV PATH /opt/mambaforge/bin:$PATH
 
-RUN mkdir -p /home/kin/workspace && cd /home/kin/workspace && git clone https://github.com/KTH-RPL/SeFlow.git
-WORKDIR /home/kin/workspace/SeFlow
+RUN mkdir -p /home/kin/workspace && cd /home/kin/workspace && git clone https://github.com/KTH-RPL/OpenSceneFlow.git
+WORKDIR /home/kin/workspace/OpenSceneFlow
 RUN apt-get update && apt-get install libgl1 -y
 # need to read the gpu device info to compile the cuda extension
-RUN cd /home/kin/workspace/SeFlow && /opt/mambaforge/bin/mamba env create -f environment.yaml
-RUN cd /home/kin/workspace/SeFlow/assets/cuda/mmcv && /opt/mambaforge/envs/seflow/bin/python ./setup.py install
-RUN cd /home/kin/workspace/SeFlow/assets/cuda/chamfer3D && /opt/mambaforge/envs/seflow/bin/python ./setup.py install
-
+RUN cd /home/kin/workspace/OpenSceneFlow && /opt/conda/bin/mamba env create -f environment.yaml
+RUN cd /home/kin/workspace/OpenSceneFlow/assets/cuda/mmcv && /opt/conda/envs/opensf/bin/python ./setup.py install
+RUN cd /home/kin/workspace/OpenSceneFlow/assets/cuda/chamfer3D && /opt/conda/envs/opensf/bin/python ./setup.py install
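The `RUN printf "y\ny\ny\n\n" | bash -c ...` line in the Dockerfile above scripts the answers to an interactive installer by feeding them through stdin. A minimal sketch of the same trick, using a stand-in reader function instead of the real `setup_ohmyzsh.sh` (which is not reproduced here):

```shell
# consume three "y" answers and a trailing empty line, like an installer prompt loop
fake_installer() {
  read a1; read a2; read a3; read blank
  echo "answers: $a1 $a2 $a3"
}

printf "y\ny\ny\n\n" | fake_installer
# prints: answers: y y y
```

The same pattern works for any non-interactive build step where a script insists on prompting.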

README.md

Lines changed: 84 additions & 54 deletions
@@ -1,22 +1,16 @@
 <p align="center">
-<!-- pypi-strip -->
 <picture>
-<!-- <source media="(prefers-color-scheme: dark)" srcset="https://raw.githubusercontent.com/Pointcept/Pointcept/main/docs/logo_dark.png">
-<source media="(prefers-color-scheme: light)" srcset="https://raw.githubusercontent.com/Pointcept/Pointcept/main/docs/logo.png"> -->
-<!-- /pypi-strip -->
 <img alt="opensceneflow" src="assets/docs/logo.png" width="600">
-<!-- pypi-strip -->
 </picture><br>
-<!-- /pypi-strip -->
 </p>
 
 OpenSceneFlow is a codebase for point cloud scene flow estimation.
 It is also an official implementation of the following paper (sorted by the time of publication):
 
-<!-- - **Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation**
+- **Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation**
 *Jaeyeul Kim, Jungwan Woo, Ukcheol Shin, Jean Oh, Sunghoon Im*
 IEEE Robotics and Automation Letters (**RA-L**) 2025
-[ Backbone ] [ Supervised ] - [ [arXiv](https://arxiv.org/abs/2407.07995) ] [ [Project](https://github.com/dgist-cvlab/Flow4D) ] &rarr; [here](#flow4d) -->
+[ Backbone ] [ Supervised ] - [ [arXiv](https://arxiv.org/abs/2407.07995) ] [ [Project](https://github.com/dgist-cvlab/Flow4D) ] &rarr; [here](#flow4d)
 
 - **SSF: Sparse Long-Range Scene Flow for Autonomous Driving**
 *Ajinkya Khoche, Qingwen Zhang, Laura Pereira Sánchez, Aron Asefaw, Sina Sharif Mansouri and Patric Jensfelt*
@@ -34,51 +28,31 @@ International Conference on Robotics and Automation (**ICRA**) 2024
 [ Backbone ] [ Supervised ] - [ [arXiv](https://arxiv.org/abs/2401.16122) ] [ [Project](https://github.com/KTH-RPL/DeFlow) ] &rarr; [here](#deflow)
 
 
+💞 If you find *OpenSceneFlow* useful to your research, please cite [our works 📖](#cite-us) and give a star 🌟 as encouragement. (੭ˊ꒳​ˋ)੭✧
 
-<details> <summary>🎁 <b>One repository, All methods!</b> </summary>
+🎁 <b>One repository, All methods!</b> Additionally, *OpenSceneFlow* integrates the following excellent work: [ICLR'24 ZeroFlow](https://arxiv.org/abs/2305.10424), [ICCV'23 FastNSF](https://arxiv.org/abs/2304.09121), [RA-L'21 FastFlow](https://arxiv.org/abs/2103.01306), [NeurIPS'21 NSFP](https://arxiv.org/abs/2111.01253).
 
-- [x] [FastFlow3d](https://arxiv.org/abs/2103.01306): RA-L 2021
+<details> <summary> Summary of them:</summary>
+
+- [x] [FastFlow3d](https://arxiv.org/abs/2103.01306): RA-L 2021, a basic backbone model.
 - [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024, their pre-trained weight can convert into our format easily through [the script](tools/zerof2ours.py).
 - [ ] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021, 3x faster than the original version because of [our CUDA speed up](assets/cuda/README.md), same (slightly better) performance. Done coding, public after review.
 - [ ] [FastNSF](https://arxiv.org/abs/2304.09121): ICCV 2023. Done coding, public after review.
-- [ ] [Flow4D](https://arxiv.org/abs/2407.07995): Under Review. Done coding, public after review.
-- [ ] ... more on the way
+- [ ] [ICP-Flow](https://arxiv.org/abs/2402.17351): CVPR 2024. Done coding, public after review.
 
 </details>
 
-## Citation
-
-If you find *OpenSceneFlow* useful to your research, please cite our work as encouragement. (੭ˊ꒳​ˋ)੭✧
-
-```
-@inproceedings{zhang2024seflow,
-  author={Zhang, Qingwen and Yang, Yi and Li, Peizheng and Andersson, Olov and Jensfelt, Patric},
-  title={{SeFlow}: A Self-Supervised Scene Flow Method in Autonomous Driving},
-  booktitle={European Conference on Computer Vision (ECCV)},
-  year={2024},
-  pages={353–369},
-  organization={Springer},
-  doi={10.1007/978-3-031-73232-4_20},
-}
-@inproceedings{zhang2024deflow,
-  author={Zhang, Qingwen and Yang, Yi and Fang, Heng and Geng, Ruoyu and Jensfelt, Patric},
-  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
-  title={{DeFlow}: Decoder of Scene Flow Network in Autonomous Driving},
-  year={2024},
-  pages={2105-2111},
-  doi={10.1109/ICRA57147.2024.10610278}
-}
-```
+💡: Want to learn how to add your own network in this structure? Check the [Contribute section] to know more about the code. Feel free to add your method and bibtex [here](#cite-us) by pull request.
 
 ---
 
-📜 Changelog:
+<!-- 📜 Changelog:
 
 - 🎁 2025/1/28 14:58: Update the codebase to collect all methods in one repository, reference [Pointcept](https://github.com/Pointcept/Pointcept) repo.
 - 🤗 2024/11/18 16:17: Update model and demo data download links through HuggingFace. Personally I found `wget` from the HuggingFace link is much faster than Zenodo.
 - 2024/09/26 16:24: All codes already uploaded and tested. You can try training directly by downloading (through [HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow)/[Zenodo](https://zenodo.org/records/13744999)) demo data or pretrained weight for evaluation.
 - 2024/07/24: Merging SeFlow & DeFlow code together, lighter setup and easier running.
-- 🔥 2024/07/02: Check the self-supervised version in our new ECCV'24 [SeFlow](https://github.com/KTH-RPL/SeFlow). The 1st ranking in new leaderboard among self-supervise methods.
+- 🔥 2024/07/02: Check the self-supervised version in our new ECCV'24 [SeFlow](https://github.com/KTH-RPL/SeFlow). The 1st ranking in new leaderboard among self-supervise methods. -->
 
 ## 0. Installation
 
@@ -97,33 +71,49 @@ cd assets/cuda/mmcv && python ./setup.py install && cd ../../..
 cd assets/cuda/chamfer3D && python ./setup.py install && cd ../../..
 ```
 
-<!-- Or you can always choose [Docker](https://en.wikipedia.org/wiki/Docker_(software)), which gives you an isolated environment and frees you from installation; you can pull it by:
-If you have a different arch, please build it yourself with `cd OpenSceneFlow && docker build -t zhangkin/opensf .` by going through the [build-docker-image](assets/README.md/#build-docker-image) section.
+Or you can always choose [Docker](https://en.wikipedia.org/wiki/Docker_(software)), which gives you an isolated environment and frees you from installation; you can pull it by:
+If you have a different arch, please build it yourself with `cd OpenSceneFlow && docker build -t zhangkin/opensf .` by going through the [build-docker-image](assets/README.md#build-docker-image) section.
+
 ```bash
 # option 1: pull from docker hub
-docker pull zhangkin/seflow
+docker pull zhangkin/opensf
 
 # run container
-docker run -it --gpus all -v /dev/shm:/dev/shm -v /home/kin/data:/home/kin/data --name seflow zhangkin/seflow /bin/zsh
-``` -->
+docker run -it --gpus all -v /dev/shm:/dev/shm -v /home/kin/data:/home/kin/data --name opensceneflow zhangkin/opensf /bin/zsh
+# and it is better to recompile the cuda extensions for your own gpu device:
+cd /home/kin/workspace/OpenSceneFlow/assets/cuda/mmcv && /opt/conda/envs/opensf/bin/python ./setup.py install
+cd /home/kin/workspace/OpenSceneFlow/assets/cuda/chamfer3D && /opt/conda/envs/opensf/bin/python ./setup.py install
+```
 
 
 ## 1. Data Preparation
 
-Check [dataprocess/README.md](dataprocess/README.md#argoverse-20) for downloading tips for the raw Argoverse 2 dataset and [data preprocessed to h5 files commands](dataprocess/README.md#process).
-Another good way to try the code quickly is using the **mini processed dataset**; we directly provide one scene inside `train` and `val`.
-It is already converted to `.h5` format and processed with the label data.
-You can download it from [Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)/[HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip) and extract it to the data folder.
-Then you can directly use this mini processed demo data to run the [training script](#2-quick-start).
+Refer to [dataprocess/README.md](dataprocess/README.md) for dataset download instructions. Currently, we support **Argoverse 2**, **Waymo**, and **custom datasets** (more datasets will be added in the future).
+
+After downloading, convert the raw data to `.h5` format for easy training, evaluation, and visualization. Follow the steps in [dataprocess/README.md#process](dataprocess/README.md#process). For a quick start, use our **mini processed dataset**, which includes one scene in `train` and `val`. It is pre-converted to `.h5` format with label data ([Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)/[HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip)).
+
 
 ```bash
 wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/demo_data.zip
 unzip demo_data.zip -d /home/kin/data/av2
 ```
 
+Once extracted, you can directly use this dataset to run the [training script](#2-quick-start) without further processing.
+
 ## 2. Quick Start
 
-<!-- ### Flow4D -->
+### Flow4D
+
+Train Flow4D with the leaderboard submission config. [Runtime: around 18 hours on 4x RTX 3090 GPUs.]
+
+```bash
+python train.py model=flow4d lr=1e-3 epochs=15 batch_size=8 num_frames=5 loss_fn=deflowLoss "voxel_size=[0.2, 0.2, 0.2]" "point_cloud_range=[-51.2, -51.2, -3.2, 51.2, 51.2, 3.2]"
+```
+
+Pretrained weight can be downloaded through:
+```bash
+wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/flow4d_best.ckpt
+```
 
 <!-- ### SSF -->
 
@@ -132,7 +122,7 @@ unzip demo_data.zip -d /home/kin/data/av2
 Training SeFlow requires specifying the loss function; we set the config of our best model on the leaderboard. [Runtime: around 11 hours on 4x A100 GPUs.]
 
 ```bash
-python train.py model=deflow lr=2e-4 epochs=9 batch_size=16 loss_fn=seflowLoss "add_seloss={chamfer_dis: 1.0, static_flow_loss: 1.0, dynamic_chamfer_dis: 1.0, cluster_based_pc0pc1: 1.0}" "model.target.num_iters=2" "model.val_monitor=val/Dynamic/Mean"
+python train.py model=deflow lr=2e-4 epochs=9 batch_size=16 loss_fn=seflowLoss "add_seloss={chamfer_dis: 1.0, static_flow_loss: 1.0, dynamic_chamfer_dis: 1.0, cluster_based_pc0pc1: 1.0}" "model.target.num_iters=2"
 ```
 
 Pretrained weight can be downloaded through:
@@ -213,11 +203,51 @@ python tools/visualization_rerun.py --data_dir /home/kin/data/av2/h5py/demo/trai
 https://github.com/user-attachments/assets/07e8d430-a867-42b7-900a-11755949de21
 
 
-## Acknowledgement
+## Cite Us
 
-These works were partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation and Prosense (2020-02963) funded by Vinnova.
-The computations were enabled by the supercomputing resource Berzelius provided by the National Supercomputer Centre at Linköping University and the Knut and Alice Wallenberg Foundation, Sweden.
+*OpenSceneFlow* is designed by [Qingwen Zhang](https://kin-zhang.github.io/) from the DeFlow and SeFlow projects. If you find it useful, please cite our works:
+
+```bibtex
+@inproceedings{zhang2024seflow,
+  author={Zhang, Qingwen and Yang, Yi and Li, Peizheng and Andersson, Olov and Jensfelt, Patric},
+  title={{SeFlow}: A Self-Supervised Scene Flow Method in Autonomous Driving},
+  booktitle={European Conference on Computer Vision (ECCV)},
+  year={2024},
+  pages={353–369},
+  organization={Springer},
+  doi={10.1007/978-3-031-73232-4_20},
+}
+@inproceedings{zhang2024deflow,
+  author={Zhang, Qingwen and Yang, Yi and Fang, Heng and Geng, Ruoyu and Jensfelt, Patric},
+  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
+  title={{DeFlow}: Decoder of Scene Flow Network in Autonomous Driving},
+  year={2024},
+  pages={2105-2111},
+  doi={10.1109/ICRA57147.2024.10610278}
+}
+```
+
+And our excellent collaborators' works as follows:
+
+```bibtex
+@article{kim2025flow4d,
+  author={Kim, Jaeyeul and Woo, Jungwan and Shin, Ukcheol and Oh, Jean and Im, Sunghoon},
+  journal={IEEE Robotics and Automation Letters},
+  title={Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation},
+  year={2025},
+  volume={10},
+  number={4},
+  pages={3462-3469},
+  doi={10.1109/LRA.2025.3542327}
+}
+@article{khoche2025ssf,
+  title={SSF: Sparse Long-Range Scene Flow for Autonomous Driving},
+  author={Khoche, Ajinkya and Zhang, Qingwen and Sanchez, Laura Pereira and Asefaw, Aron and Mansouri, Sina Sharif and Jensfelt, Patric},
+  journal={arXiv preprint arXiv:2501.17821},
+  year={2025}
+}
+```
 
-<!-- *OpenSceneFlow* is designed by [Qingwen Zhang](https://kin-zhang.github.io/). It -->
+Feel free to contribute your method and add your bibtex here by pull request!
 
-❤️: Evaluation Metric from [BucketedSceneFlowEval](https://github.com/kylevedder/BucketedSceneFlowEval); README reference from [Pointcept](https://github.com/Pointcept/Pointcept); Many thanks to [ZeroFlow](https://github.com/kylevedder/zeroflow) ...
+❤️: [BucketedSceneFlowEval](https://github.com/kylevedder/BucketedSceneFlowEval); [Pointcept](https://github.com/Pointcept/Pointcept); [ZeroFlow](https://github.com/kylevedder/zeroflow) ...
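The `train.py` invocations in the README above configure everything through Hydra-style `key=value` overrides (`lr=1e-3`, `"voxel_size=[0.2, 0.2, 0.2]"`, `model.target.num_iters=2`, ...). As a rough sketch of what such tokens mean — a toy stand-in, not the project's actual parser, which is Hydra — here is how dotted keys and literal values map onto a nested config dict:

```python
import ast

def apply_overrides(cfg, tokens):
    """Toy stand-in for Hydra CLI overrides: 'a.b=1' sets cfg['a']['b'] = 1."""
    for tok in tokens:
        key, raw = tok.split("=", 1)
        try:
            val = ast.literal_eval(raw)  # numbers, lists like [0.2, 0.2, 0.2]
        except (ValueError, SyntaxError):
            val = raw                    # bare strings such as 'deflowLoss'
        node = cfg
        *parents, leaf = key.split(".")
        for p in parents:
            node = node.setdefault(p, {})  # create nested sections on demand
        node[leaf] = val
    return cfg

cfg = apply_overrides({}, ["lr=1e-3", "num_frames=5",
                           "voxel_size=[0.2, 0.2, 0.2]",
                           "model.target.num_iters=2"])
print(cfg["voxel_size"], cfg["model"]["target"]["num_iters"])  # [0.2, 0.2, 0.2] 2
```

This is why list-valued overrides in the README are quoted: the shell must pass `voxel_size=[0.2, 0.2, 0.2]` as a single token.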

conf/config.yaml

Lines changed: 3 additions & 3 deletions
@@ -5,11 +5,11 @@ defaults:
 
 slurm_id: 00000
 
-wandb_mode: offline # [offline, disabled, online]
+wandb_mode: disabled # [offline, disabled, online]
 wandb_project_name: seflow
 
-train_data: /home/kin/data/av2/preprocess_v2/demo/sensor/train
-val_data: /home/kin/data/av2/preprocess_v2/demo/sensor/val
+train_data: /home/kin/data/av2/h5py/demo/train
+val_data: /home/kin/data/av2/h5py/demo/val
 
 output: ${model.name}-${slurm_id}

conf/eval.yaml

Lines changed: 1 addition & 1 deletion
@@ -1,5 +1,5 @@
 
-dataset_path: /home/kin/data/av2/preprocess_v2/sensor
+dataset_path: /home/kin/data/av2/h5py/sensor
 checkpoint: /home/kin/model_zoo/deflow.ckpt
 av2_mode: val # [val, test]
 save_res: False # [True, False]

conf/model/flow4d.yaml

Lines changed: 9 additions & 0 deletions
@@ -0,0 +1,9 @@
+name: flow4d
+
+target:
+  _target_: src.models.Flow4D
+  voxel_size: ${voxel_size}
+  point_cloud_range: ${point_cloud_range}
+  num_frames: ${num_frames}
+
+val_monitor: val/Dynamic/Mean
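The `_target_` key in the new `conf/model/flow4d.yaml` follows Hydra's object-instantiation convention: the string names a class, and the sibling keys become constructor kwargs (the `${...}` entries are interpolated from the top-level config first). A minimal stand-in for `hydra.utils.instantiate`, using a stdlib class as a hypothetical target since `src.models.Flow4D` needs the full training environment:

```python
import importlib

def instantiate(cfg):
    """Resolve cfg['_target_'] to a class and call it with the remaining keys."""
    cfg = dict(cfg)  # don't mutate the caller's config
    module_path, _, cls_name = cfg.pop("_target_").rpartition(".")
    cls = getattr(importlib.import_module(module_path), cls_name)
    return cls(**cfg)

# hypothetical target for illustration; the real config points at src.models.Flow4D
obj = instantiate({"_target_": "fractions.Fraction",
                   "numerator": 1, "denominator": 3})
print(obj)  # 1/3
```

With the real config, `model=flow4d` on the command line selects this file, and `num_frames=5` flows through the `${num_frames}` interpolation into the constructor.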

environment.yaml

Lines changed: 1 addition & 0 deletions
@@ -31,6 +31,7 @@ dependencies:
   - dztimer
   - av2==0.2.1
   - dufomap==1.0.0
+  - spconv-cu117
 
 # Reason about the version fixed:
 # setuptools==68.5.1: https://github.com/aws-neuron/aws-neuron-sdk/issues/893

envprocess.yaml

Lines changed: 7 additions & 4 deletions
@@ -6,13 +6,13 @@ dependencies:
   - python=3.8
   - pytorch::pytorch=2.0.0
   - pytorch::torchvision
-  - mkl==2024.0.0
   - numba
-  - numpy
+  - numpy==1.22
   - pandas
   - pip
   - scipy
   - tqdm
+  - scikit-learn
   - fire
   - hdbscan
   - s5cmd
@@ -21,10 +21,13 @@ dependencies:
   - nuscenes-devkit
   - av2==0.2.1
   - waymo-open-dataset-tf-2.11.0==1.5.0
-  - dufomap==1.0.0
+  - open3d==0.18.0
   - linefit
   - dztimer
+  - dufomap==1.0.0
+  - evalai
 
 # Reason about the version fixed:
 # numpy==1.22: package conflicts, need numpy higher or same 1.22
-# mkl==2024.0.0: https://github.com/pytorch/pytorch/issues/123097
+# open3d==0.18.0: because 0.17.0 has a bug when setting the view json file
+# dufomap==1.0.0: in case later updates are not compatible with the code

process.py

Lines changed: 1 addition & 1 deletion
@@ -75,7 +75,7 @@ def run_cluster(
             del f[key]['label']
             f[key].create_dataset('label', data=np.array(cluster_label).astype(np.int16))
         print(f"==> Scene {scene_id} finished, used: {(time.time() - start_time)/60:.2f} mins")
-    print(f"Data inside {str(data_path)} finished. Check the result with vis() function if you want to visualize them.")
+    print(f"Data inside {str(data_path)} finished. Check the result with tools/visualization.py if you want to visualize them.")
 
 def run_dufo(
     data_dir: str ="/home/kin/data/av2/preprocess/sensor/train",

src/models/__init__.py

Lines changed: 19 additions & 1 deletion
@@ -1,2 +1,20 @@
+"""
+# Created: 2024-11-21 20:12
+# Copyright (C) 2023-now, RPL, KTH Royal Institute of Technology
+# Author: Qingwen Zhang (https://kin-zhang.github.io/)
+#
+# This file is part of OpenSceneFlow (https://github.com/KTH-RPL/OpenSceneFlow)
+# If you find this repo helpful, please cite the respective publication as
+# listed on the above website.
+"""
+
 from .deflow import DeFlow
-from .fastflow3d import FastFlow3D
+from .fastflow3d import FastFlow3D
+
+# the following needs an extra package installed:
+# * pip install spconv-cu117
+try:
+    from .flow4d import Flow4D
+except ImportError as e:
+    print("\033[93m--- WARNING [model]: Model with SparseConv is not imported, as it requires the spconv lib which is not installed.\033[0m")
+    print(f"\033[91m--- Detail error message\033[0m: {e}")
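The `try/except ImportError` in the diff above is a common optional-dependency guard: models that need `spconv` degrade to a console warning instead of breaking `import src.models` for users who never train Flow4D. A minimal sketch of the pattern, using a deliberately nonexistent, made-up module name so the fallback branch is what actually runs:

```python
# 'spconv_stand_in_pkg' is a hypothetical name chosen so the except branch
# fires anywhere; in the real file the guarded import is `from .flow4d import Flow4D`.
try:
    import spconv_stand_in_pkg  # noqa: F401
    HAS_SPARSE_BACKEND = True
except ImportError as e:
    HAS_SPARSE_BACKEND = False
    print(f"\033[93m--- WARNING [model]: sparse-conv models skipped\033[0m: {e}")

# callers can branch on the flag instead of crashing at import time
print("sparse backend available:", HAS_SPARSE_BACKEND)
```

A module-level flag like this lets downstream code (e.g. a model registry) list only the models whose dependencies are actually importable.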
