Kickstarting the Autonomous Drone Project 🚁
The Bachelor's thesis begins: defining the stack for a drone that can track objects entirely on-board.
The Mission
The goal is straightforward but technically heavy: build a drone capable of autonomous object tracking subject only to on-board constraints (power, compute, weight). No external motion-capture systems, no heavy ground stations.
The Stack Dilemma
I spent the last 48 hours deciding between ArduPilot and PX4.
- ArduPilot: Extremely stable and feature-rich, but the MAVROS integration felt slightly more "legacy".
- PX4: Feels more native to the ROS 2 ecosystem (especially with the Micro XRCE-DDS agent).
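For context on why PX4 feels more ROS 2-native: instead of going through MAVROS, PX4 talks to ROS 2 via the Micro XRCE-DDS agent. A minimal setup sketch (port 8888 is the default from the PX4 docs; adjust to your network):

```shell
# Start the Micro XRCE-DDS Agent so PX4's uORB topics appear as ROS 2 topics.
MicroXRCEAgent udp4 -p 8888

# In a second terminal, verify the bridge by listing the PX4 topics:
ros2 topic list | grep fmu
```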
Verdict: Neither, for now. I switched to the DJI Tello for the initial prototyping phase. Why? It's lightweight, has a well-documented Python API (`djitellopy`), and lets me focus entirely on the computer vision algorithms (YOLO) by offloading flight stability to the drone's internal controller.
Processing Unit
I need a setup that allows rapid iteration.
The Strategy: Offboard Edge Computing.
Instead of running the heavy object detection on the drone, I'll stream the video feed to a laptop
(or Edge Server), process the frames with YOLOv8 to find the target (Person), and send velocity commands
back to the drone over Wi-Fi (UDP) to keep the target centered.
```yaml
# expected_architecture.yaml
hardware:
  drone: "DJI Tello (Ryze Tech)"
  camera: "720p Built-in"
software:
  language: "Python 3.10"
  library: "djitellopy"
  ai_model: "YOLOv8 (Ultralytics)"
task: "Person Following (Visual Servoing)"
```
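The control loop described above (detect person, compute error from frame center, send velocity commands) can be sketched as a simple P-controller. The names (`Command`, `servo_command`, `follow`) and the gains are my own placeholders, assuming `djitellopy` and `ultralytics` as listed in the architecture; treat it as a starting point, not the final controller:

```python
from dataclasses import dataclass

@dataclass
class Command:
    yaw: int       # clockwise yaw velocity, -100..100 (Tello RC units)
    forward: int   # forward/backward velocity, -100..100

def servo_command(cx: float, box_h: float, frame_w: int, frame_h: int,
                  k_yaw: float = 0.25, k_fwd: float = 60.0,
                  target_h: float = 0.4) -> Command:
    """P-controller: center the person horizontally and hold a set distance.

    cx       -- x-coordinate of the detection's center (pixels)
    box_h    -- height of the bounding box (pixels), used as a distance proxy
    target_h -- desired box height as a fraction of the frame height
    """
    err_x = cx - frame_w / 2               # >0 means target is right of center
    err_d = target_h - box_h / frame_h     # >0 means target is too far away
    yaw = int(max(-100, min(100, k_yaw * err_x)))
    fwd = int(max(-100, min(100, k_fwd * err_d)))
    return Command(yaw=yaw, forward=fwd)

def follow(tello, model):
    """One control tick: detect the largest person and steer toward it.

    `tello` is a djitellopy.Tello, `model` an ultralytics YOLO instance.
    """
    frame = tello.get_frame_read().frame   # BGR frame from the UDP video stream
    h, w = frame.shape[:2]
    result = model(frame, classes=[0], verbose=False)[0]  # class 0 = "person" (COCO)
    if len(result.boxes) == 0:
        tello.send_rc_control(0, 0, 0, 0)  # no target: hover in place
        return
    # Pick the largest box (presumably the closest person) as the target.
    box = max(result.boxes, key=lambda b: float(b.xywh[0][3]))
    cx, _, _, bh = (float(v) for v in box.xywh[0])
    cmd = servo_command(cx, bh, w, h)
    # send_rc_control args: left/right, forward/back, up/down, yaw.
    tello.send_rc_control(0, cmd.forward, 0, cmd.yaw)
```

Keeping `servo_command` pure (pixels in, velocities out) means the control law can be unit-tested on the ground, without a drone in the loop.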
Next Steps
- Flash PX4 firmware to the Pixhawk.
- Configure the offboard-control switch on the RC transmitter (safety first!).
- Hello World: make the drone take off to 1 m and land via a Python script.
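The "Hello World" step above might look like this with `djitellopy` (its `connect`, `takeoff`, `move_up`, and `land` calls are real API; the `to_cm` helper and the exact climb distance are my assumptions, since the Tello's auto-takeoff altitude already covers part of the metre):

```python
def to_cm(meters: float) -> int:
    """Tello movement commands take integer centimeters, clamped to 20-500."""
    cm = int(round(meters * 100))
    return max(20, min(500, cm))

def hello_world() -> None:
    """Take off, climb a little toward ~1 m, and land."""
    from djitellopy import Tello  # Ryze Tello SDK wrapper

    tello = Tello()
    tello.connect()                    # opens the UDP command link
    print(f"Battery: {tello.get_battery()}%")
    tello.takeoff()                    # auto-takeoff hovers below 1 m
    tello.move_up(to_cm(0.2))          # top up the altitude; tune on real hardware
    tello.land()

if __name__ == "__main__":
    hello_world()
```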