
STARVE: Style TrAnsfeR for VidEos

This is the final project for the class CSCI 1430: Introduction to Computer Vision.

Team members (alphabetical by first name): Yicheng Shi, Yuchen Zhou, Yue Wang,
and Zichuan Wang.

In this project, we explored video style transfer with TensorFlow 2. Our work is heavily based on the paper *Artistic Style Transfer for Videos* and this repo, written in Lua and C++. We also refer to this tutorial for basic functions.

See this notebook tutorial for how to run the model.

See this notebook tutorial for how to compile Caffe and DeepMatching-GPU (optional).

Deliverables:

- Report
- Slides
- Video Demo

Optic Flow

If we calculate optic flow with 2 CPU cores, it takes about 35 s per frame on average:

```
Optic flow: 100% 120/120 [1:09:26<00:00, 34.72s/it]
```

When using the GPU version of DeepMatching, it takes about 6 s per frame on average, a roughly 6x speed-up:

```
DeepMatching: 100% 120/120 [00:18<00:00, 6.68it/s]
Optic flow: 100% 120/120 [12:14<00:00, 6.12s/it]
```
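Once computed, the per-frame optic flow is what enables temporal consistency: the previous stylized frame can be warped onto the current one before blending. A minimal NumPy sketch of such a backward warp is below; the function name `warp_frame` and the nearest-neighbor sampling are our own simplifications for illustration (the paper this project follows uses bilinear sampling plus per-pixel consistency weights):

```python
import numpy as np

def warp_frame(frame, flow):
    """Backward-warp `frame` with a dense optic flow field.

    frame: (H, W) or (H, W, C) array.
    flow:  (H, W, 2) array; flow[y, x] = (dx, dy) is the displacement
           that moves a pixel from the previous frame to this one.
    Uses nearest-neighbor sampling and clamps coordinates at the border.
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # For each target pixel, look up the source pixel it came from:
    # target = source + flow, hence source = target - flow.
    src_x = np.clip(np.rint(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys - flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]
```

With a zero flow field this returns the frame unchanged; a constant flow of `(1, 0)` shifts the image one pixel to the right, with the border column repeated due to clamping.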