```bash
conda env create -f environment.yml
conda activate meta_portrait_base
```
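
If the environment was created successfully, you can optionally confirm that it is active before moving on (a quick sanity check, not part of the original instructions):

```bash
# Optional: the environment should show up in the list, and `python` should
# resolve to the interpreter it provides.
conda info --envs | grep meta_portrait_base
python --version
```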

## Base Model

### Inference Base Model

Download the [checkpoint of the base model](https://drive.google.com/file/d/1Kmdv3w6N_we7W7lIt6LBzqRHwwy1dBxD/view?usp=share_link) and put it in `base_model/checkpoint`. We also provide [preprocessed example data for inference](https://drive.google.com/file/d/166eNbabM6TeJVy7hxol2gL1kUGKHi3Do/view?usp=share_link); download the data, unzip it, and put it in `data`.

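These files can also be fetched from the command line with the third-party [gdown](https://github.com/wkentaro/gdown) tool. The sketch below uses the file IDs from the links above, but the archive name and the assumption that it unpacks into `data/` are ours, not the repository's.

```bash
# Hedged example: download the base-model checkpoint and the example data with gdown.
pip install gdown
mkdir -p base_model/checkpoint
gdown "https://drive.google.com/uc?id=1Kmdv3w6N_we7W7lIt6LBzqRHwwy1dBxD" -O base_model/checkpoint/ckpt_base.pth.tar
gdown "https://drive.google.com/uc?id=166eNbabM6TeJVy7hxol2gL1kUGKHi3Do" -O example_data.zip   # archive name is arbitrary
unzip example_data.zip   # assumed to unpack into the `data/` directory mentioned above
```
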
Then run inference with the following commands:

```bash
cd base_model
python inference.py --save_dir /path/to/output --config config/meta_portrait_256_eval.yaml --ckpt checkpoint/ckpt_base.pth.tar
```

### Train Base Model from Scratch

Train the warping network first. Then, modify the path of `warp_ckpt` in `config/meta_portrait_256_pretrain_full.yaml` so that it points to the warping checkpoint you just trained, and train the full model using the following command:
```bash
python main.py --config config/meta_portrait_256_pretrain_full.yaml --fp16 --stage Full --task Pretrain
```
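
If you prefer to edit the config non-interactively, here is a minimal sketch. It assumes `config/meta_portrait_256_pretrain_full.yaml` contains a `warp_ckpt:` key and that your warping checkpoint lives at the placeholder path shown; adjust both to your own run.

```bash
# Hedged example: point warp_ckpt at the checkpoint produced by the warping stage.
# GNU sed syntax; on macOS/BSD use `sed -i ''` instead of `sed -i`.
sed -i 's|^\( *warp_ckpt:\).*|\1 /path/to/warp_stage_checkpoint.pth.tar|' config/meta_portrait_256_pretrain_full.yaml
grep warp_ckpt config/meta_portrait_256_pretrain_full.yaml   # verify the edit
```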

### Meta Training for Faster Personalization of Base Model

Starting from the standard pretrained checkpoint, you can further improve the model's personalized adaptation speed with meta-learning using the following command:
```bash
python main.py --config config/meta_portrait_256_pretrain_meta_train.yaml --fp16 --stage Full --task Meta --remove_sn --ckpt /path/to/standard_pretrain_ckpt
```
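
Meta-training is a full training run, so one common pattern (not required by the repo) is to launch it in the background and keep a log:

```bash
# Generic long-running-job pattern; the log file name is arbitrary and the
# checkpoint path is the same placeholder used in the command above.
nohup python main.py --config config/meta_portrait_256_pretrain_meta_train.yaml --fp16 \
    --stage Full --task Meta --remove_sn --ckpt /path/to/standard_pretrain_ckpt \
    > meta_train.log 2>&1 &
tail -f meta_train.log
```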

## Temporal Super-resolution Model

### Base Environment

- Python >= 3.7 (we recommend [Anaconda](https://www.anaconda.com/download/#linux) or [Miniconda](https://docs.conda.io/en/latest/miniconda.html))
- [PyTorch >= 1.7](https://pytorch.org/)
- System: Linux with an NVIDIA GPU and [CUDA](https://developer.nvidia.com/cuda-downloads)
- Run the following steps from the [sr_model](sr_model) directory (see the environment sketch after this list)
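
One way to satisfy these requirements is a fresh conda environment; the environment name and Python version below are assumptions of this sketch, not prescribed by the repository.

```bash
# Illustrative environment setup for the temporal super-resolution model.
conda create -n metaportrait_sr python=3.8 -y
conda activate metaportrait_sr
nvidia-smi       # confirm the NVIDIA driver and GPU are visible
cd sr_model      # run the remaining steps from the sr_model directory
```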

### Data and checkpoint

Download the [dataset](

### Installation


```bash
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
# Install a modified basicsr - https://github.com/xinntao/BasicSR
python setup.py develop
```
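
After installation, a quick optional check that the CUDA-enabled PyTorch wheel is working:

```bash
# Should print the installed version (e.g. 1.12.1+cu116) and True on a GPU machine.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```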

### Quick Inference

Checkpoint for inference: `pretrained_ckpt/temporal_gfpgan.pth`

Example command to run face temporal super-resolution:

```bash
python -m torch.distributed.launch --nproc_per_node=1 --master_port=4321 Experimental_root/test.py -opt options/test/same_id.yml --launcher pytorch
```
You may adjust `nproc_per_node` to match the number of GPUs on your machine.
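
For example, on a machine with two visible GPUs the launch might look like this; the GPU ids are illustrative.

```bash
# Illustrative two-GPU launch; adjust CUDA_VISIBLE_DEVICES and nproc_per_node to your machine.
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port=4321 \
    Experimental_root/test.py -opt options/test/same_id.yml --launcher pytorch
```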

## Citing MetaPortrait

```
@misc{zhang2022metaportrait,
      title={MetaPortrait: Identity-Preserving Talking Head Generation with Fast Personalized Adaptation},
      author={Bowen Zhang and Chenyang Qi and Pan Zhang and Bo Zhang and HsiangTao Wu and Dong Chen and Qifeng Chen and Yong Wang and Fang Wen},
      year={2022},
      eprint={2212.08062},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## Acknowledgements
