This repository contains the code for the continuous training and operation of a production-level weed mapping detector. The input consists of aerial images of crops captured by drones equipped with a commercial multispectral camera. The model's objective is to distinguish crops from weeds at the pixel level across a complete orthomosaic map with improved accuracy.
The model output is an image with the same dimensions as the input orthomosaic map, in which crops (green) and weeds (red) are automatically identified at the pixel level.
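As a minimal sketch of this output format, the snippet below converts a per-pixel class map (such as a segmentation model's argmax prediction) into the colored overlay described above. The class ids and the exact colors are assumptions for illustration, not the repository's actual encoding.

```python
import numpy as np

# Hypothetical class encoding: 0 = background, 1 = crop, 2 = weed.
CLASS_COLORS = {
    0: (0, 0, 0),    # background -> black
    1: (0, 255, 0),  # crop -> green
    2: (255, 0, 0),  # weed -> red
}

def colorize_mask(class_map: np.ndarray) -> np.ndarray:
    """Map an (H, W) integer class map to an (H, W, 3) RGB image."""
    rgb = np.zeros((*class_map.shape, 3), dtype=np.uint8)
    for class_id, color in CLASS_COLORS.items():
        rgb[class_map == class_id] = color
    return rgb

# Example: a tiny 2x2 "orthomosaic" prediction.
pred = np.array([[1, 2], [0, 1]])
overlay = colorize_mask(pred)  # overlay[0, 0] is green, overlay[0, 1] is red
```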
- Installation: Setting up the project
- Dataset Format: Creating the HuggingFace Dataset for training and evaluation
- Configs and Finetune Training
- DriUNet: Walkthrough for Training DriUNet baseline
- AutoEncoder + DriUNet: Walkthrough for Training DriUNet with an autoencoder as a compressor
- Model Zoo: Links to pre-trained checkpoints