This project tracks people across multiple surveillance cameras. You select a set of videos spanning several cameras, and the model extracts the distinct people that appear in them.
| Model and method | mAP | R@1 | R@5 | R@10 | R@20 |
|---|---|---|---|---|---|
| ResNet50 with Proxy Anchor loss | 79% | 92% | 97% | 98% | 99% |
| ResNet50, classic method | 68% | 85% | 93% | 95% | 97% |
| OSNet with Proxy Anchor loss | 62% | 83% | 93% | 96% | 98% |
| OSNet, classic method | 73% | 90% | 96% | 97% | 98% |
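In this table, R@k (Rank@k) is the fraction of queries whose top-k ranked gallery images contain the correct identity, and mAP averages retrieval precision over all correct matches per query. Below is a minimal sketch of how these metrics can be computed from L2-normalized embeddings (illustrative only; the names are hypothetical and this is not the repository's evaluation code):

```python
import numpy as np

def retrieval_metrics(query_emb, gallery_emb, query_ids, gallery_ids,
                      ks=(1, 5, 10, 20)):
    """Compute Rank@k and mAP from L2-normalized embedding matrices."""
    # Cosine similarity between every query and every gallery image.
    sims = query_emb @ gallery_emb.T                    # (num_query, num_gallery)
    # Gallery indices sorted by descending similarity, per query.
    order = np.argsort(-sims, axis=1)
    # True where the ranked gallery image shares the query's identity.
    matches = gallery_ids[order] == query_ids[:, None]

    rank_at_k = {k: float(matches[:, :k].any(axis=1).mean()) for k in ks}

    # Average precision per query: precision evaluated at each correct hit.
    aps = []
    for row in matches:
        hits = np.flatnonzero(row)          # 0-based ranks of correct matches
        if hits.size:
            aps.append(np.mean(np.arange(1, hits.size + 1) / (hits + 1)))
    return rank_at_k, float(np.mean(aps))
```

The standard Market1501 protocol additionally filters out same-camera and junk matches for each query; that step is omitted here for brevity.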
- Install Docker.
- Clone this repository.
- Open a terminal in the cloned directory.
- Copy the videos you want to analyze into the `data` directory in the project root.
- Build the containers:

  ```
  docker-compose build
  ```

- Start the containers:

  ```
  docker-compose up
  ```

- Download the model from [this folder](https://drive.google.com/drive/folders/13YaXlI3IyP27bf4-rXBVmw7fvPXlcGBW?usp=sharing).
- Open http://localhost:8501 in your browser to launch the app.
Before you proceed with the notebooks, make sure you follow the steps below:
- Install the Python dependencies:

  ```
  pip install -r requirements.txt
  ```

- Download the [Market1501](http://zheng-lab.cecs.anu.edu.au/Project/project_reid.html) dataset.
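For context, Market1501 encodes its labels in the image file names: a name like `0002_c1s1_000451_03.jpg` means person 0002 seen by camera 1. A small hypothetical helper (not part of this repository) that recovers person and camera IDs:

```python
import re
from pathlib import Path

# Market-1501 file names follow personID_cCsS_frame_bbox.jpg,
# e.g. 0002_c1s1_000451_03.jpg -> person 2, camera 1.
PATTERN = re.compile(r"(-?\d+)_c(\d+)")

def iter_market1501(image_dir):
    """Yield (path, person_id, camera_id) for every labeled image."""
    for path in sorted(Path(image_dir).glob("*.jpg")):
        match = PATTERN.match(path.name)
        if match is None:
            continue
        pid, cam = int(match.group(1)), int(match.group(2))
        if pid == -1:  # junk images, conventionally excluded from training
            continue
        yield path, pid, cam
```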
All the notebooks except train_classic use Proxy Anchor loss for metric-learning-based training.
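For reference, Proxy Anchor loss (Kim et al., CVPR 2020) gives every identity a learnable proxy, pulls each embedding toward the proxy of its own identity, and pushes it away from all other proxies. A minimal PyTorch sketch with the paper's default hyperparameters; it is illustrative and may differ from the exact implementation in the notebooks:

```python
import torch
import torch.nn.functional as F

class ProxyAnchorLoss(torch.nn.Module):
    """Proxy Anchor loss (Kim et al., CVPR 2020)."""

    def __init__(self, num_classes, embedding_dim, margin=0.1, alpha=32.0):
        super().__init__()
        self.proxies = torch.nn.Parameter(torch.empty(num_classes, embedding_dim))
        torch.nn.init.kaiming_normal_(self.proxies, mode="fan_out")
        self.margin, self.alpha = margin, alpha

    def forward(self, embeddings, labels):
        # Cosine similarity between each embedding and each identity proxy.
        sims = F.normalize(embeddings) @ F.normalize(self.proxies).T   # (B, C)
        is_pos = F.one_hot(labels, self.proxies.size(0)).bool()        # (B, C)

        # Positive term: pull embeddings toward their own identity's proxy,
        # averaged over the proxies that have samples in the batch.
        pos = torch.where(is_pos, torch.exp(-self.alpha * (sims - self.margin)),
                          torch.zeros_like(sims))
        num_pos_proxies = is_pos.any(dim=0).sum().clamp(min=1)
        pos_term = torch.log1p(pos.sum(dim=0)).sum() / num_pos_proxies

        # Negative term: push embeddings away from every other proxy,
        # averaged over all proxies.
        neg = torch.where(is_pos, torch.zeros_like(sims),
                          torch.exp(self.alpha * (sims + self.margin)))
        neg_term = torch.log1p(neg.sum(dim=0)).mean()

        return pos_term + neg_term
```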
- `Person extractor-Video.ipynb`: contains the modules used to extract people and their features from videos.
- `train_market_OSNET.ipynb`: trains OSNet with hold-out validation.
- `train_market_resnet.ipynb`: trains ResNet with hold-out validation.
- `train_market_resnet-xylinx_dataloader.ipynb`: trains and validates ResNet using the standard evaluation protocol.
- `train_classic`: trains the network as a classification problem, using the classic architecture.
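Here, "classic" refers to the common baseline of training the backbone as an identity classifier with cross-entropy, then dropping the classification head at inference and using the penultimate features as embeddings. A hypothetical sketch of that setup, assuming a recent torchvision and Market1501's 751 training identities:

```python
import torch
import torchvision

NUM_IDENTITIES = 751  # training identities in Market-1501

# ImageNet-pretrained ResNet50 with the classifier resized to the identities.
model = torchvision.models.resnet50(weights="IMAGENET1K_V2")
model.fc = torch.nn.Linear(model.fc.in_features, NUM_IDENTITIES)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)

def train_step(images, labels):
    """One optimization step of plain identity classification."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference, drop the head so the model outputs 2048-d features
# that can be compared with cosine or Euclidean distance.
model.fc = torch.nn.Identity()
```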