This repository contains code and models for our paper:
- Download the model weights and place them in the `weights` folder:

  Monodepth:
  -

- Set up dependencies:

  ```shell
  pip install -r requirements.txt
  ```

  The code was tested with Python 3.7, PyTorch 1.8.0, OpenCV 4.5.1, and timm 0.4.5.
- Place one or more input images in the folder `input`.

- Run a monocular depth estimation model:

  ```shell
  python run_monodepth.py
  ```

- The results are written to the folder `output_monodepth`.
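The depth predictions written to `output_monodepth` are typically stored as integer image files, which requires rescaling the floating-point network output. Below is a minimal sketch of that kind of post-processing; the function name and the exact scaling are assumptions for illustration, not the code used in `run_monodepth.py`.

```python
import numpy as np

def depth_to_uint16(depth: np.ndarray) -> np.ndarray:
    """Min-max normalize a depth/disparity map to the full uint16 range.

    Hypothetical helper: sketches how a floating-point prediction could be
    mapped to a 16-bit image before saving; the repository's actual scaling
    may differ.
    """
    d_min, d_max = depth.min(), depth.max()
    if d_max - d_min > np.finfo(np.float64).eps:
        out = (depth - d_min) / (d_max - d_min) * 65535.0
    else:
        # Constant input: avoid division by zero, write an all-zero map.
        out = np.zeros_like(depth, dtype=np.float64)
    return out.astype(np.uint16)

pred = np.array([[0.1, 0.5], [0.9, 1.3]])
img = depth_to_uint16(pred)
```

Normalizing per-image like this preserves relative depth ordering but discards absolute scale, which is why metric evaluation (e.g. on KITTI or NYUv2) works on the raw predictions instead.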
Use the flag `-t` to switch between different models. Possible options are `dpt_hybrid` (default) and `dpt_large`, e.g. `python run_monodepth.py -t dpt_large`.
Additional models:

- Monodepth finetuned on KITTI: `dpt_hybrid_kitti-cb926ef4.pt`
- Monodepth finetuned on NYUv2: `dpt_hybrid_nyu-2ce69ec7.pt`

Run with

```shell
python run_monodepth.py -t [dpt_hybrid_kitti|dpt_hybrid_nyu]
```
Hints on how to evaluate monodepth models can be found here: https://github.com/intel-isl/DPT/blob/main/EVALUATION.md
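For orientation, the metrics commonly reported for monocular depth models are absolute relative error, RMSE, and the delta < 1.25 accuracy. The helper below is a hypothetical sketch of those standard metrics, not the exact protocol from EVALUATION.md; consult that document for the masking and scaling rules used in the paper.

```python
import numpy as np

def monodepth_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Standard monocular depth metrics: AbsRel, RMSE, delta < 1.25.

    Hypothetical helper for illustration; pixels with gt == 0 are treated
    as invalid and excluded, a common convention for KITTI/NYUv2.
    """
    mask = gt > 0
    pred, gt = pred[mask], gt[mask]
    abs_rel = np.mean(np.abs(pred - gt) / gt)
    rmse = np.sqrt(np.mean((pred - gt) ** 2))
    # Threshold accuracy: fraction of pixels within a 1.25x ratio of gt.
    ratio = np.maximum(pred / gt, gt / pred)
    delta1 = np.mean(ratio < 1.25)
    return {"abs_rel": abs_rel, "rmse": rmse, "delta1": delta1}

gt = np.array([[1.0, 2.0], [4.0, 0.0]])    # 0 marks an invalid pixel
pred = np.array([[1.1, 2.0], [4.0, 3.0]])
m = monodepth_metrics(pred, gt)
```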