
GSU Drone Recognition Model Framework

A framework for running a model on a Raspberry Pi to detect people and stop a drone when a person is detected, preventing the drone from flying over them. Tested on Arch Linux and Debian/Raspbian, but the Docker version should run on any amd64/arm64 platform that supports Docker.

All Python source code is licensed under the MIT License.

To convert the YOLO model to tflite for use in the framework, use https://github.com/NathanCodeGitHub/Updated-Python-3.12-Ultralytics-PyTorch-To-TensorFlow-Lite-Converter.
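If you prefer to do the conversion in Python directly, the core of it can be sketched with the Ultralytics export API (a minimal sketch, assuming the `ultralytics` package is installed and that `best.pt` is your trained checkpoint; the linked converter repo handles version pinning and edge cases that this sketch does not):

```python
def export_to_tflite(weights: str = "best.pt") -> str:
    """Convert a trained YOLO .pt checkpoint to a .tflite file.

    Requires the `ultralytics` package; returns the path of the exported
    model, which the framework can then load. The import is done lazily
    so this sketch can be read/imported without ultralytics installed.
    """
    from ultralytics import YOLO

    model = YOLO(weights)               # load the trained PyTorch weights
    return model.export(format="tflite")  # writes a .tflite next to the weights
```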

Setup (On RaspberryPi)

Normal

  1. Install Picamera2 using apt install python3-picamera2
  2. Connect desired camera module to Raspberry Pi
  3. Ensure the camera is functioning using libcamera-hello
  4. Enable UART0 and UART3 using dtoverlay so the Pi can connect to both the drone's telemetry port and a telemetry transceiver module.
    • dtoverlay=uart0-pi5 and dtoverlay=uart3-pi5
    • Add these lines to /boot/firmware/config.txt to make the change permanent.
  5. Ensure the connection between the Pi and both the drone and the device receiving the telemetry works (same baud rate, correct connection pins).
    • You may need to create a cable of your own, connecting the TX/RX/Ground pins of the telemetry module and drone telemetry port to the respective GPIO pins.
    • GPIO 14 and 15 are TX/RX for UART0/ttyAMA0, and GPIO 9 and 10 are TX/RX for UART3/ttyAMA3. You can use other UARTs as long as you specify the correct tty when running the script.
    • Make sure you do NOT connect the telemetry port to a 5v pin on the Pi, as this might overload one or both devices.
  6. Run [prepare-pi.sh] to create a Python virtual environment, install the necessary dependencies, and run the module.
  7. Telemetry should be available via MAVLink if you connect the other end of the telemetry module to a computer, for use in Mission Planner and similar ground-control software.
  8. An object detection live feed should be available at the Pi's IP address on its network connection, on port 8080.
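The stop-on-person behavior the steps above set up can be sketched as a small gating function (a hypothetical sketch: the class ID, confidence threshold, and names here are assumptions, not the framework's actual code, and a real implementation would follow a True result by sending a hold/loiter command over the drone's UART):

```python
# Hypothetical sketch of the stop-on-person decision logic.
PERSON_CLASS_ID = 0   # "person" is class 0 in COCO-trained YOLO models
CONF_THRESHOLD = 0.5  # assumed minimum confidence to count a detection

def should_stop(detections):
    """Decide whether the drone should hold position.

    detections: iterable of (class_id, confidence) pairs from the model.
    Returns True if any sufficiently confident person detection is present.
    """
    return any(
        class_id == PERSON_CLASS_ID and conf >= CONF_THRESHOLD
        for class_id, conf in detections
    )
```

For example, should_stop([(0, 0.9), (2, 0.8)]) returns True (a confident person detection), while should_stop([(2, 0.8)]) and should_stop([(0, 0.3)]) return False.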

Docker

  1. Perform steps 2-5 of the Normal setup above to prepare the hardware.

  2. Run apt install docker.io to install Docker.

  3. Clone this project and run [docker-prepare-pi.sh]

  4. OR, run the following command to use the prebuilt image:

    docker run -t \
    -p 8080:8080 \
    --device /dev/video0:/dev/video0 \
    -e DRONE_PORT=/dev/ttyAMA3 \
    -e DRONE_OUTPUT=/dev/ttyAMA0 \
    ghcr.io/bluejaycoder/model:main
  5. The live feed will also be available on port 8080 on the Pi's IP address.
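The DRONE_PORT and DRONE_OUTPUT environment variables tell the container which serial devices to use. How such variables might be read inside the script can be sketched as follows (hypothetical function name; the defaults here simply mirror the docker run example above, and the actual script's names and fallbacks may differ):

```python
import os

def get_serial_config():
    """Read the serial-port configuration from the environment.

    Returns (drone_port, drone_output); defaults mirror the docker run
    example above when the variables are unset.
    """
    drone_port = os.environ.get("DRONE_PORT", "/dev/ttyAMA3")      # drone telemetry UART
    drone_output = os.environ.get("DRONE_OUTPUT", "/dev/ttyAMA0")  # transceiver UART
    return drone_port, drone_output
```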

Setup (On other hardware, for testing)

  1. Run apt install docker.io to install Docker (or otherwise install Docker according to your system).

  2. Run [docker-prepare.sh] to build and run with sane defaults, OR

  3. Run the following commands to build and then run the container

    docker build . -t bluejaycoder/model:main
    
    docker run -t \
    -p 8080:8080 \
    --device /dev/video0:/dev/video0 \
    -e DRONE_PORT= \
    -e DRONE_OUTPUT= \
    bluejaycoder/model:main
  4. OR, run this to pull the prebuilt image:

    docker run -t \
    -p 8080:8080 \
    --device /dev/video0:/dev/video0 \
    -e DRONE_PORT= \
    -e DRONE_OUTPUT= \
    ghcr.io/bluejaycoder/model:main
  5. The live feed will be available at http://127.0.0.1:8080.
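A quick way to confirm the feed is up without opening a browser (a minimal sketch, assuming the server answers plain HTTP GETs on port 8080; the function name is illustrative):

```python
import urllib.request
import urllib.error

def feed_status(url="http://127.0.0.1:8080", timeout=5):
    """Return the HTTP status code of the live feed, or None if unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except (urllib.error.URLError, OSError):
        return None
```

A return value of 200 means the stream is being served; None usually means the container is not running or the port mapping is missing.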