Uses a match video to estimate the camera's pose, which is then used to automatically track robot actions and positions.
The cycle for finding and identifying tags is as follows:
- A YOLOv8 model detects where AprilTags are on the field (the QR-code-looking things)
- Those detections are cropped and upscaled with a custom PyTorch model (AI/ML)
- Finally, OpenCV's AprilTag detection reads the data from each tag and matches it to its real-life location.
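The crop-and-upscale step in the cycle above can be sketched roughly like this. `crop_and_upscale` is a hypothetical helper, and plain nearest-neighbor pixel repetition stands in for the project's custom PyTorch upscaler:

```python
import numpy as np

def crop_and_upscale(frame, box, scale=2):
    """Crop a detected tag region and upscale it before decoding.

    `box` is (x1, y1, x2, y2) in pixel coordinates. Nearest-neighbor
    repetition is a stand-in here for the custom PyTorch model.
    """
    x1, y1, x2, y2 = box
    crop = frame[y1:y2, x1:x2]
    # Repeat each pixel `scale` times along both axes.
    return np.repeat(np.repeat(crop, scale, axis=0), scale, axis=1)

# A fake 100x100 grayscale frame with a white 10x10 "tag" at (40, 40).
frame = np.zeros((100, 100), dtype=np.uint8)
frame[40:50, 40:50] = 255

# Crop a slightly larger box around the tag and 2x upscale it.
tag = crop_and_upscale(frame, (35, 35, 55, 55), scale=2)
print(tag.shape)  # (40, 40)
```

The larger tag image is then easier for the downstream AprilTag decoder to read than the original low-resolution crop.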
Currently it only identifies the AprilTags it finds, displaying their positions on the video feed and outputting them. It is also common for it to find no tags at all, but I assure you it is running properly.
- Clone the repo
- `uv sync` to install dependencies
- `uv run main.py test.mp4` to execute the script on a video. This will take a minute to start up, as this is a large project.
- Detected tags are displayed; identified tags are saved in `output`
- Average tag positions to get more accurate locations (WIP)
- Estimate camera specifications (focal length, distortion)
- Estimate pose of camera
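The first roadmap item, averaging tag positions across frames, could look something like this minimal stdlib sketch (`average_tag_positions` and the `(tag_id, x, y)` tuple format are assumptions for illustration, not the project's actual API):

```python
from collections import defaultdict
from statistics import mean

def average_tag_positions(detections):
    """Average per-frame (x, y) estimates for each tag ID.

    `detections` is a list of (tag_id, x, y) tuples collected across
    frames; noisy single-frame estimates average toward a more
    accurate location.
    """
    by_id = defaultdict(list)
    for tag_id, x, y in detections:
        by_id[tag_id].append((x, y))
    return {
        tag_id: (mean(p[0] for p in pts), mean(p[1] for p in pts))
        for tag_id, pts in by_id.items()
    }

# Tag 3 was seen in two frames with slightly different estimates.
detections = [(3, 1.0, 2.0), (3, 3.0, 4.0), (7, 5.0, 5.0)]
averaged = average_tag_positions(detections)
print(averaged)  # {3: (2.0, 3.0), 7: (5.0, 5.0)}
```

Once the averaged tag locations are matched to their known field positions, they become correspondences for the later camera-calibration and pose-estimation steps.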