
Mixed-Reality Simulation Environment for a Swarm of Autonomous Indoor Quadcopters

Christoph Steup, Faculty of Computer Science, University of Magdeburg, Germany, [email protected]

Sanaz Mostaghim, Faculty of Computer Science, University of Magdeburg, Germany, [email protected]

Lukas Mäurer, Faculty of Computer Science, University of Magdeburg, Germany, [email protected]

Vladimir Velinov, Faculty of Computer Science, University of Magdeburg, Germany, [email protected]

Abstract—This paper presents a new platform for developing and testing a swarm of autonomous quadcopters for indoor applications without using any external positioning systems. The major goal of this platform is to develop new swarm-intelligence algorithms for real-world applications. Research in the area of swarm robotics develops methodologies for the autonomous behaviour of individuals which can adapt themselves to the dynamics of the environment and are robust against failures. Our goal is to project theoretical swarm-robotics algorithms onto a platform of autonomous flying robots. This platform does not need any external infrastructure, especially no external localization or computation. Additionally, we want to gain the benefits provided by swarm robotics such as scalability, robustness against failures, adaptation to the environment and spatial distribution. The dynamic behaviour of the copters enforces a rigorous design of the copter as a technical system as well as an extensive mathematical foundation of the swarm algorithms to reach stability in the expected behaviour. However, not all influences may be incorporated in the design of the system or included in the swarm behaviour. To address this, we propose a mixed-reality simulation containing purely virtual copters and virtual twins of real copters. This approach enables us to compare the behaviour of real and virtual copters in realistic scenarios incorporating inter-copter and copter-to-environment influences. This paper presents our generic architecture including the incorporated communication and sensory equipment. Additionally, we evaluate the effect of the quality of the sensors on the results of the simulation. We present results on the current modelling of our swarm of copters evaluated using the mixed-reality testbed. Additionally, we show the relevance of the proposed approach based on the observed cross-talk between sensors and its devastating effects on the swarm behaviour.

I. INTRODUCTION

Currently, the industrial and scientific world is evaluating the usage of autonomous flying entities for different purposes. They are envisioned to heavily impact logistics applications, as proposed by Amazon and DHL [1]. They may also be used in search-and-rescue scenarios to increase the chance of survival for humans [2]. However, the development of single autonomous robots for this purpose is always difficult because of issues with hardware, sensors and environmental effects. Therefore, scientific solutions often rely on simulations to prove their theories and approaches. Unfortunately, there is a huge gap between simulation and reality, which is not easy to overcome.

On the one hand, simulations may oversimplify systems because certain effects are not taken into account which afterwards prove to be very relevant. On the other hand, finding realistic parameters for the simulations is difficult and sometimes not even possible. We propose a mixed-reality simulation as a mechanism to overcome these challenges in the context of autonomous swarm robotics for flying entities.

This paper proposes a realistic, fast and scalable simulation of a swarm of flying robots to virtually extend the swarm. Simulated robots are connected to real ones through pose estimation and sensor data injection and behave like their physical counterparts. This allows a verification of the simulated behaviour of the robots against real robots. Additionally, the scalability of the system may be tested with less effort, since replicating virtual robots is much easier.

The paper continues with an overview of related work regarding mixed-reality simulations in Section II. Afterwards, the flying robots, their relevant components and the used model are described in Section III. The paper continues with the description of the simulation environment in Section IV and the laboratory environment in Section V. The necessary modifications to physical and virtual copters to enable the mixed-reality simulation are described in Section VI. The evaluation of the described mixed-reality simulation is presented in Section VII before the paper ends with the conclusion in Section VIII.

II. RELATED WORK

The literature on swarm robotics is very rich in terms of algorithms for navigation, search, planning and formation. Several projects have dedicated a large amount of research to swarm robotics, such as Swarm-bots [3], I-SWARM [4], SFly (swarm of flying micro-robots) [5], RoboBee [6] and Swarmanoid1. [7] provides a very detailed overview of the literature from the swarm engineering point of view. Swarm engineering is an emerging discipline that aims at defining systematic and well-founded procedures for modelling, designing and realizing a swarm robotics system. In contrast to ground

1http://www.swarmanoid.org


robots, aerial robots have significantly different dynamics, require substantially more energy to locomote [8], [9], and the small payload entails reduced sensing and processing capabilities. However, all these solutions need a lot of effort to be developed and tested on real hardware. Therefore, a lot of research is focused on simulated environments, where the influence of the environment can be controlled and the swarm may be scaled up or down for evaluation purposes. Unfortunately, the results of this research depend on the accuracy of the simulation, which is very difficult to assess.

The concept of mixed reality as explained by Milgram and Kishino in [10] may provide a solution to that problem. They describe how virtual worlds and reality can be blended into each other in technical scenarios. In a broader sense, everything where simulation and reality influence each other can be seen as a mixed-reality system. This includes, e.g., hardware-in-the-loop applications, as they are commonly used for the development of embedded control units, or augmented reality, where virtual information is added to the perceived reality. This enables a gradual transition from purely simulated to purely hardware-based development and evaluation processes.

Chen, MacDonald and Wünsche [11] aim to build a generic mixed-reality framework, which can be integrated into different simulation solutions and handles all scales of virtualization, from mainly simulated to mainly real-world implemented. They introduce a function library for Gazebo2, which is capable of abstracting sensors, actuators and other objects and routing them either to the simulation or the real world. This enables their work to be flexibly used in hardware or software development for robotics, so developers can test their approaches early and rapidly switch to a real-world test. The described approach lacks a clear use case, as it is very generic and heavily aims towards flexibility. The used software components also hinder the integration of the approach into low-power systems.

Burgbacher, Steinicke and Hinrichs sketch a possibility to apply mixed-reality simulations to the development of real-world multi-robot projects in [12]. Their focus is less on the technical challenges of integrating reality into a simulation in real time, but on how to include mixed-reality simulation in project workflows. They describe a workflow that starts purely simulative and eventually includes hardware components as soon as they become ready during the progress of a development project. In their example, they develop and apply image processing algorithms on pictures generated by a simulated copter equipped with a virtual camera. In the next step, the simulated quadcopter is swapped for a real one, whereas the camera is still virtual. Burgbacher, Steinicke and Hinrichs contribute a neat use case for mixed reality with quadcopter simulation, but their implemented scenario does not make high demands on scalability and real-time communication, as their ideas of swarm scenarios are only outlined.

2http://gazebosim.org

Finally, the authors of [13] use mixed reality to enhance quadcopter swarms. Their application includes simulated humans, game engines and other robots. In contrast to Chen, MacDonald and Wünsche, Hönig et al. focus on special scenarios. On the simulation side, they rely on available models of the used robots. Their implementation of swarms uses very small and comparatively simple quadcopters and relies heavily on a precise external camera tracking system. This limits the application to situations where such a tracking system can be provided. The copters are not autonomous, but are controlled in a centralized way by the simulation. The purely centralized approach enables them to improve the quality of the information on the real world that is sent to the simulation. The external camera system provides data with less noise and less drift error over time and does not need to rely on low-energy wireless communication. Therefore, a high-precision simulation can be achieved that accurately controls the "drones", even with less precise models and very simple quadcopters with minimal sensors.

Compared to existing solutions, we aim for fully autonomous copters embedded in a mixed-reality setup. This provides the benefit of inserting and removing copters at runtime without any changes to the system. At the same time, the copters are usable even without the mixed-reality infrastructure. To this end, we try to avoid dependencies on external hardware as much as possible. However, the simulation may use external hardware if it is beneficial. The simulation also needs to be real-time capable to allow a dynamic interaction between the autonomous real copters and the virtual ones handled in the simulation.

III. FLYING ROBOTS

The used copters are especially developed to include many different sensors to enhance their environmental perception capabilities, which are needed for fully autonomous indoor behaviour. However, quadcopters are typically limited in size, load capacity and power storage, which together with the sensor payload defines the maximum flight time and the dynamics of the copter. To handle the sensory equipment and still provide good dynamics and medium flight times, the quadcopters are set up with powerful motors and large batteries. In this paper the copters use a configuration with four sonar sensors for xy-distance estimation and a single infrared sensor for height measurement. The sonars are chosen because they provide reliable wide-angle detection, and the IR sensor provides fast response times and easy integration.

The current configuration of the FINken consists of:
• X-frame with 200 mm diagonal motor distance
• Li-Po battery for 10 min of flight (3 cells, 900 mAh)
• Motors: MN1804-20, 2400 kV, max. 10 A, 5×3" propellers
• Overall weight of 350 g
• Embedded autopilot including a 10-axis IMU (Paparazzi Lisa/MX 2.1)
• RC control with a 2.4 GHz spectrum protocol
• 802.15.4-based communication
• SD-card logging via SPI
• IR height sensor (Sharp GP2Y0A60SZLF)
• Ultrasound object sensors (MaxBotix MB1232)

The copter is programmed with our fork3 of the Paparazzi autopilot framework [14]. The changes include additional sensors and the adaptation to the autonomous indoor use case. Hence, our version of the software does not use GPS and implements object evasion with ultrasound as well as dedicated height control using a distance-to-ground sensor. We have developed two new modes of flight: Mixed-Manual mode and Wall-Avoid mode. In Mixed-Manual mode the copter controls the thrust and yaw axes by itself; the pitch and roll axes need to be controlled by the pilot via RC commands. This modus operandi is used for calibration and in most manual flight scenarios, as it is much easier to control the copter in Mixed-Manual mode than in fully manual flight. The Wall-Avoid mode allows fully autonomous flight. The copter is controllable by the algorithms or a remote-control device as long as it can keep a safe distance to all objects sensed by the ultrasound sensors. If an object is detected, the copter is autonomously repelled, enabling a stable baseline behaviour. The whole parametrization of the copter favours stability over dynamics. The resulting copters are very limited in speed and also do not turn fast, which is beneficial for the external sensors attached to them.
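To illustrate the principle of the repelling reaction, the following Python sketch overrides the attitude commands when a sonar reading falls below a safety threshold. All names, gains and sign conventions are assumptions for illustration only; the actual behaviour is implemented in our Paparazzi fork.

```python
# Minimal sketch of a Wall-Avoid style reaction, assuming four sonars
# aligned with the body axes. Thresholds, gains and sign conventions are
# hypothetical, not the parameters of the real FINken firmware.

SAFE_DISTANCE = 1.0   # metres; assumed safety threshold
GAIN = 10.0           # degrees of attitude command per metre of intrusion

def wall_avoid(front, back, left, right, pitch_cmd, roll_cmd):
    """Overrides pilot/algorithm attitude commands near obstacles."""
    if front < SAFE_DISTANCE:            # obstacle ahead: pitch backwards
        pitch_cmd = GAIN * (SAFE_DISTANCE - front)
    elif back < SAFE_DISTANCE:           # obstacle behind: pitch forward
        pitch_cmd = -GAIN * (SAFE_DISTANCE - back)
    if left < SAFE_DISTANCE:             # obstacle left: roll right
        roll_cmd = GAIN * (SAFE_DISTANCE - left)
    elif right < SAFE_DISTANCE:          # obstacle right: roll left
        roll_cmd = -GAIN * (SAFE_DISTANCE - right)
    return pitch_cmd, roll_cmd
```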

A. Quadcopter modeling

Fig. 1: Forces and torques of a quadcopter (rotor forces F1–F4, gravity FG, rotor torques τ1–τ4, pitch θ, roll φ, yaw ψ, axes x, y, z)

To enable a realistic physical simulation, the physical properties of a single quadcopter need to be modelled. In this section we limit the description to the necessary extensions of a generic physics simulation. A quadcopter is an aircraft with 4 independent rotors. In our case, we assume the rotors to be identical, mounted symmetrically to the body in the xy-plane, with parallel thrust vectors pointing in the same direction. In the following we use two coordinate systems to describe the behaviour. The first is the simulation's coordinate system formed by the x-, y- and z-axes. The copter has its own coordinate system keeping the copter's center of mass at its origin, which is denoted by the axes x′, y′, z′.

3Available online at https://github.com/ovgu-FINken/paparazzi

When rotor i is powered, it turns with the angular velocity ω_i, creating a force along the rotor's thrust axis, which is equivalent to the quadcopter's body axis z′, and a torque τ_i around the rotor's axis.

F_i = k\omega_i^2, \quad \tau_i = d\omega_i^2 + I_M\dot{\omega}_i \qquad (1)

The constant k depends on air density and rotor geometry, d is the drag constant for the rotor drive train and I_M is the moment of inertia of the rotor, which adds a torque during angular acceleration. However, due to the small diameters and lightweight plastic rotors, this contribution is comparatively small and may be omitted.

The combined force of the rotors is F_{sum} = \sum_{i=1}^{4} F_i. This results in a thrust F_b relative to the body with F_b = (0, 0, F_{sum})^T. The torque depends on the individual angular velocities of the rotors and their directions, as visible in Equation 2.

\tau_b = \begin{pmatrix} \tau_\phi \\ \tau_\theta \\ \tau_\psi \end{pmatrix} = \begin{pmatrix} \frac{1}{\sqrt{2}} l k_{torque} (F_1 + F_2 - F_3 - F_4) \\ \frac{1}{\sqrt{2}} l k_{torque} (-F_1 + F_2 + F_3 - F_4) \\ \sum_{i=1}^{4} \tau_i \end{pmatrix} \qquad (2)

The rotors are mounted at distance l from the copter's center of mass, and the copter arms form a 45° angle with the x′- and y′-axes, resulting in a lever arm of l/\sqrt{2} around the axis for which their thrust creates a torque, as described by Luukkonen [15].

The thrust of the copter can be decomposed into three components F′_x, F′_y, F′_z along the simulation's axes using the copter's current pitch θ and roll φ according to Equation 3.

\begin{pmatrix} F'_x \\ F'_y \\ F'_z \end{pmatrix} = \begin{pmatrix} -\sin\theta\cos\phi \\ -\sin\phi \\ \cos\phi\cos\theta \end{pmatrix} F_{sum} \qquad (3)

To keep the copter in the air, the forces generated by the thrust of the four rotors have to compensate the force F_G generated by the weight of the quadcopter. Any difference between them induces an acceleration a_z of the copter along the simulation's z-axis depending on the mass of the copter m_c. Accelerations along the x- and y-axes of the simulation additionally depend on the yaw angle ψ of the copter.

\begin{pmatrix} a_x \\ a_y \\ a_z \end{pmatrix} = \frac{1}{m_c} \begin{pmatrix} \cos\psi \, F'_x - F_{Dx} \\ \sin\psi \, F'_y - F_{Dy} \\ F'_z - F_G \end{pmatrix} \qquad (4)

Our simulation assumes that the copters fly at very low speed. Therefore we omit the air drag for accelerations in the x and y directions and set F_{Dx} and F_{Dy} to 0.

Equations 2 and 4 show that the whole behaviour of the copter results from the rotational speeds of the rotors. The next section extends the realism of the model through a particle simulation of the air flow through each rotor of the copter.
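For illustration, Equations 1 to 4 can be collected into a short evaluation routine. The following Python sketch uses placeholder constants, not identified FINken parameters, and omits the I_M term of Equation 1 and the drag terms of Equation 4 as discussed above.

```python
import numpy as np

# Minimal sketch of the rigid-body model of Equations 1-4.
k, d, l, k_torque = 1e-6, 1e-7, 0.1, 1.0  # rotor/geometry constants (assumed)
m_c, g = 0.35, 9.81                       # copter mass [kg], gravity [m/s^2]

def copter_state_derivatives(omega, pitch, roll, yaw):
    """omega: angular velocities of the four rotors [rad/s] as an array."""
    F = k * omega**2                      # Eq. 1: thrust per rotor
    tau = d * omega**2                    # Eq. 1: drag torque, I_M omitted
    F_sum = F.sum()
    tau_b = np.array([                    # Eq. 2: body torques
        l * k_torque / np.sqrt(2) * ( F[0] + F[1] - F[2] - F[3]),
        l * k_torque / np.sqrt(2) * (-F[0] + F[1] + F[2] - F[3]),
        tau.sum()])
    F_p = F_sum * np.array([              # Eq. 3: decomposed thrust
        -np.sin(pitch) * np.cos(roll),
        -np.sin(roll),
         np.cos(roll) * np.cos(pitch)])
    a = np.array([                        # Eq. 4: accelerations, drag omitted
        np.cos(yaw) * F_p[0],
        np.sin(yaw) * F_p[1],
        F_p[2] - m_c * g]) / m_c
    return a, tau_b

# Near-hover example: identical rotor speeds yield zero torque and a small
# vertical acceleration.
print(copter_state_derivatives(np.full(4, 930.0), 0.0, 0.0, 0.0))
```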

B. Rotor Modeling

Due to manufacturing tolerances and external influences like the air stream, the forces F_i and torques τ_i generated at a certain angular velocity ω as described in Equation 1 differ for each rotor. In the previous Section III-A we assumed that all rotors are identical. As this assumption does not hold, a particle simulation is used to determine the forces F_i and torques τ_i of the rotors. Using four particle objects with identical parameters, the model of Section III-A may still be used, but the particle simulation adds some noise, which makes the copter's behaviour more realistic.

The particle simulation is used to simulate the airstream generated by the rotor. The particle object can be configured with a particle size s_px, a particle density ρ_px and the maximum number of particles n_px it can hold. A simulation of rotors spinning in a particle cloud is computationally too expensive, so the particles are generated below the rotor, modelling the air stream. For the following explanations it is assumed that the copter is hovering, that the free-stream velocity v_0 of the air around the quadcopter is zero and that the air is incompressible, which is valid as long as the stream velocity is well below the speed of sound [16]. Also, a homogeneous stream velocity under the whole rotor area is assumed, which is sufficiently accurate for this case.

Based on our assumptions, momentum theory gives the thrust F_i of a single rotor as the product of the mass flow rate \dot{m} and the final speed v_f of the air accelerated by the rotor:

F_i = \dot{m} v_f \qquad (5)

This means that, in order to simulate the thrust F_i with the particle object, the mass of the particles and the final stream velocity are needed. Neither the mass of the airstream nor the final stream velocity is easy to measure, but the thrust F_i when hovering is easily calculated from the weight of the copter.

F_i = \frac{F_G}{4} \qquad (6)

F_G = m_c g \qquad (7)

The mass flow rate \dot{m}, though not directly measurable, can be obtained from the air density ρ_0 and the volumetric flow rate \dot{V} through the rotor as shown in Equation 8. The air density is constant (we assume standard conditions), and the volumetric flow rate depends on the area A covered by the rotor and the air velocity v_r in the rotor plane.

\dot{m} = \rho_0 \dot{V} = \rho_0 A v_r \qquad (8)

Note that the air stream velocity v_r in the rotor plane is different from the final air stream velocity v_f the air reaches behind the rotor. The reason for this is that, because of conservation of energy, the power P the rotor puts into the air stream has to equal the energy E_k the air stream carries per unit of time, as in Equations 11 and 12. For the first derivative of the kinetic energy E_k in Equation 10, note that the velocity is considered constant during hovering.

E_k = \frac{1}{2} m v_f^2 \qquad (9)

\dot{E}_k = \dot{m} \frac{v_f^2}{2} \qquad (10)

P = F_i v_r = \dot{E}_k \qquad (11)

F_i v_r = \dot{m} \frac{v_f^2}{2} \qquad (12)

Inserting Equation 5 into Equation 12 shows the relation between v_r and v_f.

\dot{m} v_f v_r = \dot{m} \frac{v_f^2}{2} \qquad (14)

v_r = \frac{v_f}{2} \qquad (15)

With the air velocity v_r in the rotor plane, the thrust F_i of a rotor can be calculated from the rotor area A and the air density ρ_0, which are known.

F_i = 2 \rho_0 A v_r^2 \qquad (16)

Equations 16 and 15 together give a formula to determine the velocity v_r when the copter hovers.

v_r = \sqrt{\frac{F_i}{2 \rho_0 A}} \qquad (17)

As written in the introduction of this section, the particle simulation includes the parameters particle density ρ_px, particle size s_px and particle rate n_px, denoting the number of particles created per unit of time. Particle density and particle size should be constant, as the air stream is considered incompressible, so when leaving the hovering state, the particle rate has to change according to Equation 8 [17].

The particles are spherical, with the particle size s_px as the sphere's diameter, so the mass m_px of a single particle can be calculated as in Equation 18.

m_{px} = V_{px} \rho_{px} = \frac{\pi}{6} s_{px}^3 \rho_{px}, \quad V_{px} = \frac{\pi}{6} s_{px}^3 \qquad (18)

The mass flow rate \dot{m} of the particles is the product of the particle rate n_px and the particle mass m_px.

\dot{m} = n_{px} m_{px} = n_{px} \frac{\pi}{6} s_{px}^3 \rho_{px} \qquad (19)

During flight, if the air stream velocity v_f changes, the mass flow changes as well according to Equation 8, so the mass flow rate needs to be expressed as a function of the air stream velocity. Equating Equation 19 with Equation 8 in Equation 20 relates the particle rate n_px to the already known parameters particle mass m_px, air density ρ_0, rotor area A and the final air stream velocity v_f in Equation 21.

n_{px} m_{px} = \rho_0 A v_r \qquad (20)

n_{px} = \frac{\rho_0 A}{m_{px}} v_r = \frac{\rho_0 A}{2 m_{px}} v_f \qquad (21)

Now the particle simulation can be parameterized based on the hovering copter, and all parameters except for the air stream speed are constant. Therefore, the copter's dynamics can be simulated by connecting the air stream velocity to the throttle, so the copter's thrust will be adjusted accordingly.

The thrust of the rotor can be expressed as Equation 22 by inserting Equation 19 into Equation 5.

F_i = n_{px} \frac{\pi}{6} s_{px}^3 \rho_{px} v_f \qquad (22)

Particle size s_px, mass m_px and rate n_px can be arbitrarily chosen based on performance and accuracy needs, as long as Equation 20 is satisfied.
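The whole derivation can be traced numerically. The following Python sketch evaluates Equations 6, 7, 15, 17, 18, 21 and 22 for the hovering case; the rotor area and particle properties are illustrative assumptions, not measured values.

```python
import math

# Sketch of the particle-object parameterization derived above.
rho_0 = 1.2               # air density [kg/m^3], standard conditions
m_c, g = 0.35, 9.81       # copter mass [kg], gravity [m/s^2]
A = math.pi * 0.063**2    # rotor disc area for a 5" rotor [m^2] (assumed)

F_i = m_c * g / 4                       # Eqs. 6/7: per-rotor hover thrust
v_r = math.sqrt(F_i / (2 * rho_0 * A))  # Eq. 17: velocity in the rotor plane
v_f = 2 * v_r                           # Eq. 15: final stream velocity

s_px, rho_px = 0.005, rho_0             # particle size [m], density (chosen)
m_px = math.pi / 6 * s_px**3 * rho_px   # Eq. 18: single-particle mass
n_px = rho_0 * A / (2 * m_px) * v_f     # Eq. 21: particle rate [1/s]

thrust = n_px * m_px * v_f              # Eq. 22: reproduced per-rotor thrust
assert abs(thrust - F_i) < 1e-9         # parameterization is consistent
```

Because Equation 20 is built into the choice of n_px, the reproduced thrust equals the hover thrust regardless of the chosen particle size and density.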

IV. SIMULATION ENVIRONMENT

Among the available simulation frameworks for quadcopters such as JSBSim [18], ROS/Gazebo [19] and V-REP [20], we selected V-REP [20] because it provides a versatile, highly customizable simulation environment, mainly developed for robots, as visible in Figure 2. For physics simulation we use its integrated Bullet physics engine, including a particle simulation. Additionally, we use the flexible programming system it offers through its external API, the communication between subcomponents of V-REP using signals, and the included scripting language Lua. It also provides a rich set of sensor simulations and scene visualization mechanisms.

Fig. 2: API structure of V-REP4

To use our flying robot and the modelled behaviour, we reuse the CAD model used to build the copter for the simulation. The CAD model is very detailed, which is problematic for the performance of the physics simulation. Therefore we split the model into multiple layers that describe different aspects of the copter. The layers are visualized in Figure 3. The physics simulation uses a rough layer of simple shapes approximating the distribution of mass within the copter to handle its rigid-body dynamics. The rotors are only visualized; their behaviour is implemented as a Lua script. The CAD model itself is used for visualization and collision checking. The sonar sensors are modelled using V-REP's proximity sensor system with appropriate parameters. The control of the copter is done either by moving and interacting with the control sphere or by directly controlling the parameters using V-REP's signal system.

Fig. 3: The different abstraction layers of the modelled quadcopter in a V-REP scene. The visible layers are from left to right: Control Sphere, CAD-Model, Physics Model and Sensor Model

For each simulated copter, the signal control interface consists of the pitch, roll and yaw angles and the applied thrust, exactly as for the real copter. Since the real copters already contain autonomous height control, a similar controller is implemented for the virtual ones. Additionally, we evaluate the noise model of the sonar distance sensors and apply it to the perfect virtual distance values to create more realistic sensor data.
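The noise injection itself is straightforward, as the following Python sketch shows; the zero-mean Gaussian with a standard deviation of 0.05 m matches the configuration used in Section VII-D, while the maximum range is an assumed sensor property.

```python
import random

# Sketch of the sensor-noise injection: perturb the ideal distance reported
# by a simulated proximity sensor with the noise model estimated from the
# real sonars.
SIGMA = 0.05  # standard deviation [m], as used in Section VII-D

def noisy_distance(ideal_distance, max_range=6.0):
    d = ideal_distance + random.gauss(0.0, SIGMA)
    return min(max(d, 0.0), max_range)  # clamp to the sensor's valid range
```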

The simulated environment contains simple objects like boxes and walls. For this approach this is sufficient, since our copters are not equipped with cameras or other high-resolution sensors that could detect details of these objects.

The needed control algorithms for attitude and height as well as the particle-based simulation of the rotors are implemented using V-REP's included Lua scripting engine. We additionally implement the wall-avoidance and mixed-reality behaviour in the simulation to have a fully virtual twin of a real copter. This allows an easy integration of the copter-specific behaviour with the simulation. To communicate with components outside of V-REP we use its external API, which allows access to most of the objects present in the scene.
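As a minimal sketch of this interface, the following Python snippet drives a simulated copter through the signal system via the legacy V-REP remote API binding (the vrep module shipped with V-REP). The signal names are our assumptions for illustration; they must match whatever the copter's Lua script reads.

```python
import vrep  # legacy V-REP remote API Python binding shipped with V-REP

# Sketch: set the four actuation signals of one simulated copter.
client = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
if client != -1:
    for name, value in (('finken1_pitch', 2.5), ('finken1_roll', 0.0),
                        ('finken1_yaw', 0.0), ('finken1_thrust', 0.55)):
        # hypothetical signal names; values in degrees / normalized thrust
        vrep.simxSetFloatSignal(client, name, value, vrep.simx_opmode_oneshot)
    vrep.simxFinish(client)
```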

V. LAB ENVIRONMENT

The lab consists of an arena for at most two copters, equipped with cushioned walls and floor and a camera-based tracking and identification system. The arena is used to limit the movement range of the copters and to provide some safety for the experimenters. It has a size of 4 m × 3 m and a maximum height of 2.5 m. The copters are typically configured to fly in the xy-plane at a height of 0.4 to 1.2 m. The arena is surrounded by sonar-reflecting foils and nets to protect human spectators and provide some orientation to the copters. Even though our copters fly in a specified arena, they have no knowledge of the arena's size and structure. This enables us to use the system also outside of the lab without modification. Other projects performed in similar environments usually optimise the weight, sacrificing the independence of their copters from ground components. This is done by utilising external tracking systems and outsourcing computing power to ground-based servers [21], which we do not do. The used tracking system is only responsible for tracking the path of the copter for analysis; the data is never fed to the copters themselves.

VI. MIXED REALITY EXTENSION

To enable the mixed-reality simulation, three major challenges need to be solved. Each real copter needs a virtual twin exactly following its motion in the simulation. Additionally, the real copter needs to react to the sensory input of its virtual twin. Finally, all components need to view virtual and real copters homogeneously. To solve these challenges we used the architecture visible in Figure 4.

Fig. 4: Communication flow between V-REP and Quadcopter (layers: Physics, Sensing/Actuation, Stability, Swarm)

We use a single simulation instance as the synchronization point of virtual and real copters, as this simulation contains all environmental, modelling and sensor information necessary to distribute homogeneous views to the individual copters. This simulation is connected to an external copter management system that pairs virtual and real copters and translates the incoming and outgoing data of copters and infrastructure into information for the simulation. The copter management is linked to the ground control station network observing and controlling the copters. There, the virtual sensory data of the simulation is transmitted to the real copter to be integrated into its behaviour.

A. Communication

The first and second challenges are closely coupled with the communication between the simulation and the real copters. In our mixed-reality simulation we use three different communication links. The first link connects the copter to its ground station network and uses 802.15.4 modules to handle the communication. These modules also allow communication between copters as well as distance measurement between copters. However, the bandwidth is limited to 2 MBit/s. Paparazzi delivers a serialization mechanism for messages transmitted between the copter and the Paparazzi ground station, so only a specification of the message contents is necessary. These messages are transformed into Ivy-Bus messages inside the ground station. The Ivy-Bus constitutes a content-based publish/subscribe network that allows any component to subscribe itself to any data produced within the network. The subscription is done using regular expressions, because the transmitted messages are always ASCII strings. This allows the ground control application as well as the simulation and the logging components to access the data of all copters and the simulation in a flexible way. Ivy-Bus transmissions can therefore be viewed as multicast packets. The data to and from the copters that is relevant to the simulation is forwarded to the copter management infrastructure, which assigns it to a virtual twin copter and transmits it to the simulation instance. To communicate with the simulator we chose to use the external API, which is socket-based and allows local as well as remote connections. V-REP provides bindings for different languages, so we did not handle the low-level socket communication, but rather use the existing API of V-REP to change the state or transfer information to and from the virtual twins.
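The content-based subscription principle of the Ivy-Bus can be illustrated in a few lines of pure Python. This is a sketch of the mechanism only, not the Ivy middleware itself, and the message format shown is hypothetical.

```python
import re

# Sketch of content-based publish/subscribe over ASCII telemetry strings:
# components bind regular expressions; capture groups deliver the payload.
subscriptions = []

def bind(pattern, callback):
    subscriptions.append((re.compile(pattern), callback))

def publish(message):
    for pattern, callback in subscriptions:
        match = pattern.match(message)
        if match:
            callback(*match.groups())

bind(r'^(\w+) ATTITUDE ([-\d.]+) ([-\d.]+) ([-\d.]+)$',
     lambda ac, phi, theta, psi: print(f'{ac}: roll={phi} pitch={theta}'))
publish('finken1 ATTITUDE 1.5 -0.3 87.0')  # hypothetical message format
```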

This architecture has multiple benefits. We can distribute simulation, ground control station, logging and telemetry on different machines to share the workload. Additionally, we can have multiple telemetry receivers that connect to different copters to share the workload and to decrease usage of the medium if the copters are far away or the transmission power is low.

1) Management of virtual and real entities: Through the mechanism described in Section VI-A we are able to connect our virtual and real entities on a communication level. However, a specific module is necessary to pair a real copter with its virtual twin and route the information accordingly. To this end, we implemented a special software component that uses Paparazzi's XML-based aircraft description file. We use the information in the aircraft description to decide whether a copter is purely virtual (existing only in the simulation) or real. For the purely virtual ones, only their state is routed to the ground station network and no information ever reaches the real copters. The real copters are routed bidirectionally: position and state information is received through the telemetry link and the ground station network, and virtual sensor data is forwarded from the simulation to the copter. The management component is configurable through an XML file that allows simulated copters to be arbitrarily linked to real copters. The module also automatically spawns new copters as needed in the virtual environment. The module uses the V-REP API to query and change the scene to represent the current real-world state, by synchronizing the actuation commands of the virtual twins with the real copters. Additionally, the commands are modified to fit the estimated position of the real copters to the position of the virtual twins.
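The routing decision can be sketched as follows. The XML schema below is purely hypothetical; the real module parses Paparazzi's aircraft description files together with a separate pairing configuration.

```python
import xml.etree.ElementTree as ET

# Sketch of deriving a routing table from an XML description of copters.
CONFIG = '''<copters>
  <copter name="finken1" type="real"    twin="finken1_sim"/>
  <copter name="ghost1"  type="virtual"/>
</copters>'''  # hypothetical schema for illustration

def routing_table(xml_text):
    table = {}
    for node in ET.fromstring(xml_text).iter('copter'):
        if node.get('type') == 'real':
            # bidirectional: telemetry in, virtual sensor data out
            table[node.get('name')] = ('bidirectional', node.get('twin'))
        else:
            # purely virtual: state is only published to the ground network
            table[node.get('name')] = ('state-only', None)
    return table

print(routing_table(CONFIG))
```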


B. Position Synchronization

The pose of the real copters is very important to the simulation environment, as it defines the orientation of the virtual sensors and the sensing results of the purely virtual copters. Therefore, we use a hybrid approach to synchronize the position of the real copter with the virtual environment. We directly use the current actuation commands, which are pitch and roll angle, yaw turn rate and thrust. These can be used to predict the short-term movement of the copter in the simulation.

In order to estimate the position, we have used the MEDUSA localization system developed by Zug et al. [22]. The system calculates the two-dimensional position of a mobile robot moving in a rectangular arena using eight proximity sensors and a gyroscope. It uses the yaw angle of the robot and the distances to the walls from the proximity sensors to calculate a set of possible positions where the quadcopter could be. Afterwards, it iterates through all possible points and calculates the distances imaginary proximity sensors would measure at that particular point and angle of rotation. Comparing the real readings from the quadcopter's proximity sensors with the calculated distances, a probability function is computed indicating the probability that the real quadcopter is situated at the particular point. The point with the highest probability is chosen as the quadcopter's position.

The MEDUSA positioning system turned out to be an easy and fast way to get an estimate of the position, since the quadcopter has at least four sonar sensors, positioned at 90 degrees from each other, and a gyroscope, which provides the angle of rotation. The fact that the original positioning system used eight distance sensors was not a problem, since it is possible to estimate the position even with two sensors; a higher number of sensors just provides more fault tolerance and more accuracy. The algorithm is implemented in the firmware of the copter and the results are transmitted to the management node.
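The core idea of the MEDUSA-style search can be sketched as follows: predict, for every candidate position, what the four sonars would measure against the rectangular arena walls, and score the candidate by its agreement with the real readings. Arena size and sensor layout follow Section V; the simple squared-error score is our simplification of the probability function in [22].

```python
import math

W, H = 4.0, 3.0  # arena size [m], see Section V

def predicted(x, y, yaw):
    """Distances to the arena walls along the four sonar directions."""
    dists = []
    for k in range(4):  # four sonars at 90 degree offsets
        dx, dy = math.cos(yaw + k * math.pi / 2), math.sin(yaw + k * math.pi / 2)
        tx = (W - x) / dx if dx > 0 else x / -dx if dx < 0 else float('inf')
        ty = (H - y) / dy if dy > 0 else y / -dy if dy < 0 else float('inf')
        dists.append(min(tx, ty))  # first wall hit along this direction
    return dists

def estimate(readings, yaw, step=0.05):
    """Grid search for the position best matching the sonar readings."""
    best, best_err = None, float('inf')
    for i in range(1, int(W / step)):
        for j in range(1, int(H / step)):
            x, y = i * step, j * step
            err = sum((p - r) ** 2
                      for p, r in zip(predicted(x, y, yaw), readings))
            if err < best_err:
                best, best_err = (x, y), err
    return best
```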

Finally, we created a link from our camera-based reference tracking system to the management node to provide high-precision locations. However, the camera-based tracking lags behind the other data sources and may therefore create additional disturbances.

These heterogeneous tracking mechanisms enable us to use the system in a wide range of experiments with different setups.

C. Sensor Synchronization

To make virtual objects visible to the real copters, the sensor information of the virtual copter's sensors needs to be fused with the sensor data of the real copter. We do not use the management node to handle the fusion of the sensor data, since transmitting the real copter's sensory data to the module and back after fusion would introduce additional latency. Therefore we directly forward the virtual sensor data to the real copter through the ground station network and telemetry link and fuse it directly on the copter. The used sonar sensors provide a one-dimensional value with the special property that the closest object is reported. This is exploited in the fusion, because only the closest object and therefore the smallest distance is observable by the sensor. Consequently, the virtual sensors are integrated by taking the minimum of the real and virtual distance measurements. This ensures that the real copter will try to avoid any detected object, regardless of whether the object is detected in the real world or in the simulation. As an additional property, our simulation environment does not need to directly reflect the real world, as the copter's behaviour will always be safe.
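The fusion rule is a per-sensor minimum, as the following Python sketch shows; the function name and the example values are illustrative, while the real fusion runs in our Paparazzi fork on the copter itself.

```python
# Sketch of the on-copter sonar fusion: because a sonar always reports the
# closest object, real and virtual readings can be merged by taking the
# minimum per sensor.
def fuse_sonars(real_readings, virtual_readings):
    return [min(r, v) for r, v in zip(real_readings, virtual_readings)]

# A virtual obstacle at 0.8 m overrides a free real reading of 2.4 m:
print(fuse_sonars([2.4, 3.1, 1.0, 2.2], [0.8, 3.5, 2.0, 2.2]))
# -> [0.8, 3.1, 1.0, 2.2]
```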

VII. EVALUATION

To evaluate the performance and accuracy of our mixed-reality simulation, we conducted three types of experiments. The first was a manual control experiment to check the communication setup and the latency, as well as the baseline synchronization between simulation and real copter. The second experiment compared the virtual representation against the autonomously flying real copter. In this experiment the virtual arena contained additional obstacles to create virtual sensor readings smaller than the real ones, testing the sensor fusion within the real copter. The last experiment was a long-term test evaluating a hybrid-controlled copter with two additional virtual copters. This experiment was a public demo at an open-day at the faculty.

A. Performance of the Simulation

The V-REP simulation is quite performant, running close to real time even on an old 2.4 GHz Core 2 Duo (T8300, 4 GB RAM). On this machine, running OS X, the execution of the Lua scripts takes the most time with typically 32 ms to 42 ms. The distance sensor handling takes 14 ms to 18 ms and the Bullet physics engine accounts for 10 ms to 12 ms. A modern computer, even with a low-voltage i7, runs the simulation in real time without problems. A second computer, running Windows 10 with a Core i7-6650U CPU and 16 GB RAM, computes the Lua scripts in 26 ms to 31 ms, the distance sensor handling in 6 ms to 7 ms and the Bullet physics in 6 ms to 8 ms. These times were obtained with the V-REP internal profiler.

Profiling inside the Lua scripts showed that the quadcopter main script, with the controllers and e.g. logging functions, only takes 1 ms to 5 ms according to time measurements with simGetSystemTimeInMs(). The rotor scripts need 1 ms to 5 ms each, which might be optimizable. However, it seems that the biggest factor is the V-REP internal handling of Lua.

B. Communication Timings

The rate at which the V-REP simulated quadcopter receives the telemetry data from the flying quadcopter is crucial for the mixed-reality simulation. The period at which a message is sent is configurable in the Paparazzi system, but there are several factors that influence the message frequency. The Paparazzi ground station link module distributes the received telemetry on the Ivy-Bus, and a slight delay is expected as a result of message forwarding. The Ivy-Bus is a content-based publish/subscribe communication protocol, and the processing of the regular expressions can be expensive. The final link of the chain is our Java communication bridge.

Considering the above factors that can introduce a delay in the message transmission, we decided to measure how frequently the messages are sent to V-REP. Since V-REP does not provide any methods to measure the communication lag, we measured how fast the copter management module sends the parameters to V-REP. Figure 5a shows the results of the measurements with a single quadcopter flying. The status message is configured to be sent every 22 ms. We can observe that the average period at which the management module sends the message to the virtual quadcopter is approx. 27 ms. Some messages are lost, which can be observed through the peaks in the diagram. However, they seldom exceed the 50 ms simulation time step of V-REP. A surprising fact is that some messages are sent even more frequently than the configured 22 ms. This stems from the internal telemetry serialization strategy of the Paparazzi firmware on the copter.

In order to evaluate the performance of the Ivy-Bus when there are more agents, we did the same test with two flying quadcopters. Figure 5b shows the same pattern as Figure 5a, but the average value is slightly increased.

Fig. 5: Measured periodicity of copter communications in the management module. Above: a single real copter flying. Below: two copters flying

C. Simulation Accuracy

To evaluate the accuracy of the simulation, we compared the orientation of the real copter and the simulated model. We let the real copter fly freely in the arena using the Wall-Avoid control and linked the simulated copter to it. We did not use the external tracking camera to feed the position of the real copter to the simulation, in order to better observe the synchronization of the command values. V-REP, Paparazzi and the management module all ran on the same machine.

Fig. 6: Overlay plot of real (imu) and virtual (vrep) actuation commands over time: (a) pitch of the real FINken vs. pitch of the V-REP FINken; (b) roll of the real FINken vs. roll of the V-REP FINken

The real quadcopter gradually changed its yaw angle, since the magnetometer does not provide reliable values indoors due to interference from electromagnetic fields. Fortunately, the sensors of the copter are symmetrical, so yaw is not needed for stable movements. The first analysis focuses on the pitch and roll angles of the simulated copter. To evaluate these, the time bases of the internal log of the copter and of the simulation need to be synchronized. This was done manually using indicative sensor data like the height, which changes heavily at takeoff. Figure 6a shows that the simulated copter nicely adopts the real copter's movements. The first spike at about 7 s shows the takeoff. From 12 s to 22 s it can be seen how the real copter got close to a wall and started to avoid it. The flight was stopped after about 43 s; the huge spikes at the end of the graph show the response when the copter fell to the ground. The virtual copter started to drift through the arena at approximately 30 s. Figure 6a shows that the virtual and real pitch deviate at 30 s for a short time, which causes this drift.
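We performed the time-base synchronization manually; for illustration only, the same alignment could be automated by cross-correlating the two height channels and shifting by the lag of maximal correlation, as in the following sketch.

```python
import numpy as np

# Sketch: align two log time bases on a shared feature such as the height
# jump at takeoff. Both inputs are assumed to be equally sampled arrays.
def align(log_height, sim_height):
    a = log_height - log_height.mean()
    b = sim_height - sim_height.mean()
    corr = np.correlate(a, b, mode='full')
    lag = corr.argmax() - (len(b) - 1)  # samples to shift sim vs. log
    return lag
```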

A more detailed plot of the pitch comparison is shown in Figure 7. Every spike of the real copter's movement is directly followed by a spike in the same direction of the simulated FINken. The smaller movements do not correspond exactly, as the simulated copter contains its own attitude controller. Thus, keeping the simulated FINken stable in the air has a higher priority than following the real copter's movements. The graph shows some points, e.g. before 17 s, where the movement of the simulated FINken appears to precede the real FINken's movement. This can be explained by the controller of the simulated copter, which aims to stabilize the copter and therefore modifies the actual attitude commands of the real FINken. Interestingly, the roll of the copter as shown in Figure 6b does not fit as well as the pitch, despite identical controllers. In this flight, we observed a logging error in V-REP: the huge spikes in the virtual copter's roll angle could not be observed in the screen capture of the simulation. When comparing the spikes in Figure 6b with the plot of the pitch in Figure 6a, one can notice that the erroneous spikes in the roll correspond to the valid spikes in the pitch. An explanation could be that the rotation matrix for the dummy object, which is linked to the simulated copter's body, was not computed correctly. Unfortunately, we could not reproduce this error, but it shows that one should be careful when using sensor data, be it from hardware sensors or from software.

Fig. 7: Detailed view of the pitch angles of the simulated and real copter

Fig. 7: Detailed view of the pitch angles of simulated and realcopter

Fig. 8: Heat-map visualizing the position (in x and y coordinates) of the copters (in the arena of size 4 m × 3 m) in the single-copter (left) and two-copter (right) simulation scenarios


The plots of the angular responses of the simulation show that it is possible to let a virtual FINken mimic a real flying one by transmitting the actuation commands. The response of the model does not match exactly, but achieving this would require excessive tuning of the simulation model, which is out of the scope of this work.

D. Behaviour Comparison of Virtual and Real Copter

We start with a behaviour test of the simulation. In the simulation, the distance sensors are configured to have zero-mean Gaussian noise with a standard deviation of 0.05 m. The simulated copter has no access to its global position and can only change its pitch, roll, yaw and thrust values, similar to the real copters. We let the simulation run for approximately 90 seconds in all experiments.

Figure 8 illustrates a heat-map of the positions of the copters over 90 seconds with a sampling rate of 40 Hz. We observe that in both experiments the copters mainly move close to their initial positions and deliver a stable movement.

Similar to the experiments in the simulation, multiple real copters are tested in the arena. The data transmitted by a copter contains its state information including attitude, distances, accelerations and turn rates. Additionally, we use the camera-based tracking system to evaluate the movements with high accuracy. The experiments are performed for about 9 minutes and the sampling frequency of the tracking system is 50 Hz.

Fig. 9: Behaviour of a single copter in the arena, showing the x position together with the front and back sonar distances over time. The left graph shows the simulated behaviour, while the right one shows the real behaviour

Fig. 10: Heat-map of the positions (in x and y coordinates) of a single copter flying fully autonomously in Wall-Avoid mode (left) and together with a manually controlled copter (not shown here) as in Experiment 3 (right)


The first experiment is dedicated to analysing the behaviour of one single autonomous copter and delivers the basic behaviour as expected from the simulation. We observe that the copter can reliably avoid the other "swarm entities", which are in this case the walls of the arena. Since no particular control command is sent to the copter, it starts with a random movement within the arena, as illustrated in Figure 10 (left graph). Here the copter has the initial position (1.9, 1.6). The copter can autonomously stay in the middle of the arena and keep a distance of 1.20 m to the walls. The basic behaviour can additionally be observed in Figure 9 (right graph), which shows the values of the distance sensors along the x-axis together with the resulting attitude of the copter.

The second experiment is meant to evaluate the behaviour of two copters flying autonomously in the swarm behaviour mode. The experiment shows a very strong instability of the copters, as they moved erratically. One important observation is that most of the time the copters crash into the walls, but do not collide with each other. There are two possible explanations for this behaviour:

1) Strong interactions between the copters caused by airflow
2) Disturbance of the sensory data of one copter by the other

Fig. 11: Visualization of the y position and the left and right sonar distances of the fully autonomous copter with a second copter (left) partially manually controlled and (right) with activated sonars and deactivated rotors

To identify the problem, we performed two additional experiments.

In the first additional experiment, one copter is flying autonomously and the other one manually. This is meant to give us an estimate of the airflow-based interaction between copters. To this end, one copter was controlled partially manually, with deactivated distance sensors: the height of the copter is controlled autonomously, but the movement in the xy-plane is done manually. As illustrated in Figure 10 (right graph), the fully autonomous copter is very stable. We additionally measured the sensory data as shown in Figure 11 (left graph) and observed that, even though the sensor values are very noisy, the copter is relatively stable. The noise of the sonar data is partially created by the "walls", which are made of foils and move once a copter is in their vicinity. Consequently, even if both copters are close to each other, the airflow generated by them has a much stronger impact on the walls than on the copters themselves.

The goal of the fourth experiment is to evaluate the interaction between the distance sensors. To this end, we let one copter fly fully autonomously in the arena and added a second copter without letting it fly. Afterwards we rotated the non-flying copter slowly around its z-axis. At specific positions, we observed heavy disturbances in the sonar data of the fully autonomous copter, shown in Figure 11 (right graph), leading to unstable behaviour. Unfortunately, these disturbances occurred periodically, which induced an oscillation in the copter, leading to a crash. Therefore, the cross-talk between the sonar sensors of the copters was the cause of the instability in the second experiment.

The above experiments show that a critical interaction of the sensors was not modelled in the simulation, which led to completely different results between simulation and reality. The mixed reality allowed us to rapidly switch between the different experimental setups and synchronize the parameters and the environmental conditions. Additionally, we could easily access the different data sources in a homogeneous way to compare and analyze the system.

E. Demonstration Setup Using External Positioning

The final experiment was a demonstration at an open-day at the faculty, where people could fly a real copter using the hybrid mode. The hybrid mode allowed the pilot to control the copter without worrying about hitting obstacles in the arena. The copter was tracked using our camera-based tracking system, which acquired the position and orientation of the copter. The simulation contained two additional copters that viewed each other and the virtual twin of the real copter as obstacles and avoided them. The goal of the pilot was to reach one of the virtual copters to catch it. In this setup the back-propagation of sensor data from the virtual twin to the real copter was deactivated, to give the pilot more control.

The system worked flawlessly for 8 hours and delivered good performance. People could fly the copter after a small introduction, and the simulated environment allowed us to enhance the experience. The pilot actually flew the copter using only the visualization of the simulation, because otherwise the target copters were not visible. This showed the stability of the position and orientation synchronization between real copter and virtual twin.

The mixed reality allowed us to provide a system testing the swarm formation algorithm without being forced to solve the sonar cross-talk issue directly. The problem could be circumvented through the virtual copters, which induce no cross-talk on the real copter. The experiment showed that the mixed-reality system is stable and well suited for rapid development. Additionally, it provides easy means to enhance the experience for users.

VIII. CONCLUSION AND FUTURE WORK

This paper presented an architecture to virtually extend a swarm of quadcopters through a physical simulation environment. The system showed realistic behaviour and a decreased setup time for experiments and demonstrations. Additionally, the unified data path allows an easy cross-validation of simulation models and real copter behaviours by comparing the behaviour of real and virtual entities and their interactions. The system was able to include partially and fully autonomous behaviours on different levels, while still providing independence of simulation and real system if necessary. The used simulation and copter frameworks allowed for easy extension regarding visualization, behaviour and input sensors.

Our next steps are to decouple the simulation from the external positioning system to increase the applicability of the test setup to experiments. Additionally, we want to enhance the performance of the simulation to simulate larger swarms of copters, enabling realistic scalability analyses of swarm algorithms. Finally, we aim towards a simulation system that directly executes the same code as runs on the copters. This would even allow us to detect implementation errors.


REFERENCES

[1] DHL, "DHL testing delivery drones," September 2013, [Online; accessed 23.09.2016]. Available: http://www.dhl.com/en/press/releases/releases_2014/group/dhl_parcelcopter_launches_initial_operations_for_research_purposes.html

[2] W. Hoffman, "Drone swarms will soon be used for search-and-rescue operations," April 2016, [Online; accessed 23.09.2016]. Available: https://www.inverse.com/article/14368-drone-swarms-will-soon-be-used-for-search-and-rescue-operations

[3] M. Dorigo, V. Trianni, E. Sahin, R. Groß, T. H. Labella, G. Baldassarre, S. Nolfi, F. Mondada, J.-L. Deneubourg, D. Floreano, and L. M. Gambardella, "Evolving self-organizing behaviors for a swarm-bot," Autonomous Robots, vol. 17, pp. 223–245, 2004.

[4] J. Seyfried, M. Szymanski, N. Bender, R. Estana, M. Thiel, and H. Wörn, "The I-SWARM project: Intelligent small world autonomous robots for micro-manipulation," in Swarm Robotics Workshop: State-of-the-art Survey, E. Sahin and W. M. Spears, Eds. Springer, 2005, pp. 70–83.

[5] M. Achtelik, M. Achtelik, Y. Brunet, M. Chli, S. Chatzichristofis, J. Decotignie, K. Doth, F. Fraundorfer, L. Kneip, D. Gurdan, L. Heng, E. Kosmatopoulos, L. Doitsidis, G. H. Lee, S. Lynen, A. Martinelli, L. Meier, M. Pollefeys, D. Piguet, A. Renzaglia, D. Scaramuzza, R. Siegwart, J. Stumpf, P. Tanskanen, C. Troiani, and S. Weiss, "SFly: Swarm of micro flying robots," in Intelligent Robots and Systems (IROS), IEEE/RSJ International Conference on, 2012, pp. 2649–2650.

[6] R. Wood, R. Nagpal, and G.-Y. Wei, "Flight of the robobees," Scientific American, vol. 308, no. 3, pp. 60–65, 2013.

[7] M. Brambilla, E. Ferrante, M. Birattari, and M. Dorigo, "Swarm robotics: a review from the swarm engineering perspective," Swarm Intelligence, vol. 7, no. 1, pp. 1–41, 2013.

[8] J. Roberts, J.-C. Zufferey, and D. Floreano, "Energy management for indoor hovering robots," in Intelligent Robots and Systems (IROS), IEEE/RSJ International Conference on, 2008, pp. 1242–1247.

[9] T. Stirling and D. Floreano, "Energy-time efficiency in aerial swarm deployment," in Distributed Autonomous Robotic Systems, ser. Springer Tracts in Advanced Robotics, A. Martinoli et al., Eds. Springer, 2013, vol. 83, pp. 5–18.

[10] P. Milgram and F. Kishino, "A taxonomy of mixed reality visual displays," IEICE Transactions on Information and Systems, vol. 77, no. 12, pp. 1321–1329, 1994.

[11] I. Y.-H. Chen, B. A. MacDonald, and B. C. Wünsche, "A flexible mixed reality simulation framework for software development in robotics," Journal of Software Engineering for Robotics, vol. 2, no. 1, pp. 40–54, 2011.

[12] U. Burgbacher, F. Steinicke, and K. H. Hinrichs, "Mixed reality simulation framework for multimodal remote sensing," in IUI 2011 Workshop on Location Awareness for Mixed and Dual Reality (LAMDa). DFKI, 2011. Available: http://viscg.uni-muenster.de/publications/2011/BSH11/?clang=1

[13] W. Hönig, C. Milanes, L. Scaria, T. Phan, M. Bolas, and N. Ayanian, "Mixed reality for robotics," in Intelligent Robots and Systems (IROS), 2015 IEEE/RSJ International Conference on, 2015. Available: http://www-bcf.usc.edu/~ayanian/files/Ayanian_IROS2015a.pdf

[14] G. Hattenberger, M. Bronz, and M. Gorraz, "Using the Paparazzi UAV system for scientific research," in International Micro Air Vehicle Conference and Competition (MAV). Delft Univ., 2014.

[15] T. Luukkonen, "Modelling and control of quadcopter," Independent research project in applied mathematics, Espoo, 2011.

[16] B. Lautrup, Physics of Continuous Matter: Exotic and Everyday Phenomena in the Macroscopic World, 2nd ed. CRC Press, 2011.

[17] C. Deeg, "Modeling, simulation, and implementation of an autonomously flying robot," Ph.D. dissertation, TU Berlin, 2006. Available: http://www.carstendeeg.de/marvin/dissertation/aerodynamik.html

[18] J. S. Berndt and A. De Marco, "Progress on and usage of the open source flight dynamics model software library, JSBSim," in AIAA Modeling and Simulation Technologies Conference, 2009, pp. 10–12.

[19] J. Meyer, A. Sendobry, S. Kohlbrecher, U. Klingauf, and O. Stryk, "Comprehensive simulation of quadrotor UAVs using ROS and Gazebo," in Simulation, Modeling, and Programming for Autonomous Robots: Third International Conference, SIMPAR 2012, Tsukuba, Japan, November 5–8, 2012, Proceedings. Springer, 2012, pp. 400–411.

[20] E. Rohmer, S. P. N. Singh, and M. Freese, "V-REP: A versatile and scalable robot simulation framework," in Intelligent Robots and Systems (IROS), 2013 IEEE/RSJ International Conference on, 2013, pp. 1321–1326.

[21] A. Kushleyev, D. Mellinger, C. Powers, and V. Kumar, "Towards a swarm of agile micro quadrotors," Autonomous Robots, vol. 35, no. 4, pp. 287–300, 2013.

[22] S. Zug, C. Steup, A. Dietrich, and K. Brezhnyev, "Design and implementation of a small size robot localization system," in Robotic and Sensors Environments (ROSE), 2011 IEEE International Symposium on, September 2011, pp. 25–30.