Researchers train drones to perform flips, rolls, and loops with AI

In a new paper published on the preprint server arXiv.org, researchers at Intel, the University of Zurich, and ETH Zurich describe an AI system that enables autonomous drones to perform acrobatics like barrel rolls, loops, and flips with only onboard sensing and computation. By training entirely in simulation on demonstrations from an expert controller, the system can be deployed directly onto a real-world robot without fine-tuning, according to the coauthors.

Acrobatic flight with drones is extremely challenging. Human pilots often train for years to master moves like power loops and rolls, and existing autonomous systems that perform agile maneuvers require external sensing and computation. Still, acrobatics are worth pursuing because they stress every one of a drone’s components: vision-based systems often fail due to factors like motion blur, and the harsh requirements of fast, precise control at high speeds make controllers difficult to tune, since even the tiniest mistake can result in catastrophic outcomes.

The researchers’ technique entails training a controller to predict actions from a series of drone sensor measurements and user-defined reference trajectories. A front-facing camera image, the reference trajectories, and inertial measurements serve as inputs to the system, while the output is an action in the form of thrust and angular velocity values.
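To make that input-output structure concrete, here is a minimal sketch of such a policy in PyTorch. The layer sizes, feature-point count, and variable names are illustrative assumptions rather than values from the paper:

```python
import torch
import torch.nn as nn

class AcrobaticsPolicy(nn.Module):
    """Illustrative policy: abstracted sensor inputs in, thrust and body rates out.

    All dimensions here are assumptions for this sketch, not values from the paper.
    """

    def __init__(self, n_feature_points=40, imu_dim=6, traj_dim=30):
        super().__init__()
        # Each tracked feature point contributes (u, v) image coordinates.
        input_dim = 2 * n_feature_points + imu_dim + traj_dim
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, 128),
            nn.ReLU(),
            nn.Linear(128, 4),  # [collective thrust, roll rate, pitch rate, yaw rate]
        )

    def forward(self, feature_points, imu, reference_trajectory):
        x = torch.cat([feature_points.flatten(1), imu, reference_trajectory], dim=-1)
        return self.net(x)

# Example forward pass with a batch of one.
policy = AcrobaticsPolicy()
action = policy(
    torch.zeros(1, 40, 2),   # tracked feature points (u, v)
    torch.zeros(1, 6),       # gyroscope and accelerometer readings
    torch.zeros(1, 30),      # sampled reference trajectory
)
print(action.shape)  # torch.Size([1, 4])
```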

The controller trains via privileged learning, where a policy learns from demonstrations provided by a so-called privileged expert. This expert has access to privileged information that isn’t available to the controller, and it’s built on a planning and control pipeline that tracks a reference trajectory from the drone’s full state (i.e., its position and orientation). To facilitate the transfer from simulation to reality, the controller doesn’t access raw sensor data like color images; instead, it acts on an abstraction of the input in the form of feature points (which depend on scene structure and motion) extracted via computer vision. A series of checks ensures the reference maneuvers don’t exceed the drone’s physical limits.
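As a rough illustration of the privileged-learning loop, the sketch below trains a student policy to imitate an expert that sees the full simulator state. The helper functions, data collection, and loss are simplified stand-ins, not the paper’s actual pipeline:

```python
import torch
import torch.nn as nn

def train_privileged_imitation(student, expert_action_fn, rollout_fn, epochs=10):
    """Imitation-learning sketch: the student flies in simulation, and the
    privileged expert (which sees the full state) relabels every visited
    observation with the action it would have taken there.

    `expert_action_fn` and `rollout_fn` are hypothetical helpers assumed to
    exist for this example.
    """
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        # rollout_fn runs the student in the simulator and returns, per step,
        # the abstracted observations (feature points, IMU, reference) and the
        # corresponding full privileged states.
        observations, privileged_states = rollout_fn(student)
        expert_actions = expert_action_fn(privileged_states)  # expert relabeling
        predicted_actions = student(observations)
        loss = loss_fn(predicted_actions, expert_actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return student
```

Because the privileged state never appears in the student’s inputs, the trained policy depends only on quantities available onboard the real drone, which is what allows deployment without fine-tuning.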

The coauthors chose the open source Gazebo simulator to train their policies, simulating the AscTec Hummingbird multirotor rather than the custom quadrotor they used in real-world experiments. They then tested the policies’ robustness by having the custom quadrotor perform a loop, a roll, and a flip at high accelerations and fast angular velocities.

The results over 10 training runs show that the controller completed each maneuver successfully 100% of the time, without intervention or breaks. “Our approach is the first to enable an autonomous flying machine to perform a wide range of acrobatic maneuvers that are highly challenging even for expert human pilots,” the researchers wrote. “We have shown that designing appropriate abstraction of the input facilitates direct transfer of the policies from simulation to physical reality. The presented methodology is not limited to autonomous flight and can enable progress in other areas of robotics.”


