Modeling and Control of Unmanned Aerial Vehicles
– Current Status and Future Directions
George Vachtsevanos, Ben Ludington, Johan Reimann, Georgia Institute of Technology
Panos Antsaklis, Notre Dame University
Kimon Valavanis, University of South Florida
Abstract

Recent military and civil actions worldwide have highlighted the potential utility of Unmanned Aerial Vehicles (UAVs). Both fixed wing and rotary aircraft have contributed significantly to the success of several military and surveillance/rescue operations. Future combat operations will continue to place unmanned aircraft in challenging conditions such as the urban warfare environment. However, the poor reliability, reduced autonomy and operator workload requirements of current unmanned vehicles present a roadblock to their success. It is anticipated that future operations will require multiple UAVs performing in a cooperative mode, sharing resources and complementing other air or ground assets. Surveillance and reconnaissance tasks that rely on UAVs require sophisticated modeling, planning and control technologies. This paper reviews the current status of UAV technologies with emphasis on recent developments aimed at improved UAV autonomy and reliability and discusses future directions and technological challenges for the immediate future. We view the assembly of multiple and heterogeneous vehicles as a “system of systems” where individual UAVs function as sensors or agents. Thus, networking, computing and communications issues must be addressed as the UAVs are tasked to perform surveillance and reconnaissance missions in an urban environment. The same scenario arises in similar civil applications such as forest fire detection, rescue operations, pipeline monitoring, etc. A software (middleware) platform enables real time reconfiguration, plug-and-play and other quality of service functions. Multiple UAVs, flying in a swarm, constitute a network of distributed (in the spatio-temporal sense) sensors that must be coordinated to complete a complex mission. Current R&D activities are discussed that concern issues of modeling, planning and control. Here, optimum terrain coverage, target tracking and adversarial reasoning strategies require new technologies to deal with issues of system complexity, uncertainty management and computational efficiency [Vachtsevanos, et al., 2004]. We will pose the major technical challenges arising in the “system of systems” approach and state the need for new modeling, networking, communications and computing technologies that must be developed and validated if such complex unmanned systems as UAVs are to perform efficiently, effectively and in conjunction with manned systems, in a variety of application domains. We will conclude by proposing possible solutions to these challenges.
I. INTRODUCTION
The future urban warfare, as well as search and
rescue, border patrol, Homeland security and
other applications, will utilize an unprecedented
level of automation in which human-operated,
autonomous, and semi-autonomous air and
ground platforms will be linked through a
coordinated control system. Networked UAVs
bring a new dimension to future combat systems
that must include adaptable operational procedures, planning and deconfliction of assets coupled with the technology to realize such concepts. The technical challenges the control designer is facing for autonomous collaborative operations stem from real-time sensing, computing and communications requirements, environmental and operational uncertainty, hostile threats and the emerging need for improved UAV and UAV team autonomy and reliability. Figure 1 shows the autonomous control level trend according to the DoD UAV Roadmap [Office of Secretary of Defense, 2002]. The same roadmap details the need for new technologies that will address single vehicle and multi-vehicle autonomy issues. The challenges increase significantly as we move up the hierarchy of the chart shown in Figures 2(a) and (b), from single vehicle surveillance to multi-vehicle coordinated control. Moderate success has been reported thus far in meeting the lower echelon challenges. Achieving the ultimate goal of full autonomy for a swarm of vehicles executing a complex reconnaissance mission still remains a major challenge. To meet these challenges, innovative coordinated planning and control technologies such as distributed artificial intelligence (DAI), computational intelligence and soft computing, as well as game theory and dynamic optimization, have been investigated intensively in recent years. However, in this area, more work has been focused on solving particular problems, such as formation control and autonomous search, while less attention has been paid to the system architecture, especially from an implementation and integration point of view. Other significant concerns relate to inter-UAV communications, links to command and control, contingency management, etc.

Figure 1: Autonomous Control Level Trend
Figure 2(a): The Autonomous Control Level Chart
Figure 2(b): The Autonomous Control Level Chart
We will review briefly in this paper a few of the challenges referred to above and suggest possible approaches to these problems. The intent is to motivate, through application examples, the modeling, control and communication concerns and highlight those new directions that are needed to assist in arriving at satisfactory solutions. We will emphasize the synergy of tools and methodologies stemming from various domains as well as the resurfacing of classical mathematical notions that may be called upon now to solve difficult spatio-temporal dynamic situations. Recent advances in computing and communications promise to accommodate the on-line real time implementation of such mathematical algorithms that were considered intractable some years back.
II. System Architecture
While networked and autonomous UAVs can be
centrally controlled, this requires that each UAV
communicates all the data from its sensors to a
central location and receives all the control
signals. Network failures and communication delays are one of the main concerns in the design of cooperative control systems. On the other hand, distributed intelligent agent systems provide an environment in which agents autonomously coordinate, cooperate, negotiate, make decisions and take actions to meet the objectives of a particular application or mission. The autonomous nature of agents allows for efficient processing and communication among distributed resources.
For the purpose of coordinated control of
multiple UAVs, each individual UAV in the
team is considered as an agent or sensor with
particular capabilities engaged in executing a
portion of the mission. The primary task of a
typical team of UAVs is to execute faithfully
and reliably a critical mission while satisfying
local survivability conditions. In order to define
the application domain, we adopt an assumed
mission scenario of a group of UAVs executing
reconnaissance and surveillance (RS) missions
in an urban warfare environment, as depicted in
Figure 3.
Figure 3: A Team of 5 UAVs Executing RS Missions in an Urban Warfare Environment (the depicted scenario includes GTMax, GTMav and OAV rotorcraft, a fixed wing UAV, a manned vehicle, ground sensors, soldiers, a sniper, a moving target, and a commander/operator station)
A “system of systems” approach suggests a hierarchical architecture for the coordinated control of multiple UAVs. The hierarchical architecture, shown in Figure 4, features an upper level with global situation awareness and team mission planning, a middle level with local knowledge, formation control and obstacle avoidance, and a low level that interfaces with onboard baseline controllers, sensors, communication and weapon systems. Each level consists of several interacting agents with dedicated functions. The formation control
problem is viewed as a Pursuit Game of n
pursuers and n evaders. Stability of
the
formation of vehicles is guaranteed if the
vehicles can reach their destinations within a
specified time, assuming that the destination
points are avoiding the vehicles in an optimal
fashion. The vehicle model is simplified to a point mass with an acceleration limit. Collision avoidance is achieved by designing the value function so that it ensures that the vehicles move away from one another when they come too close to one another. Simulation results are provided to verify the performance of the proposed algorithms.
Figure 4: A Generic Hierarchical Multi-agent System Architecture. The figure depicts three levels (Level 3: Global Knowledge, Level 2: Local Knowledge, Level 1: Behavioral Knowledge) populated by agents such as the Team Mission Planning/Re-planning, Global Situation Awareness, Global Performance Measurement, Formation Control, Local Mission Planning, Knowledge Fusion, QoS Assessment, Moving Obstacle Avoidance, Local Situation Awareness, Sensing, Vehicle Control, FDI/Reconfigurable Control, Communication and Weapon System agents, interfaced with Command & Control and manned vehicles.
The highest level of the control hierarchy
features functions of global situation awareness
and teamwork. The mission planning agent is
able to generate and refine mission plans for the
team, generate or select flight routes, and create
operational orders. It is also responsible for
keeping track of the team’s plan, goals, and team
members’ status. The overall mission is usually
planned by the command and control center
based on the capabilities of each individual
UAV agent, and is further decomposed into
tasks/subtasks which are finally allocated to the
UAV assets (individually or in coordination with
other vehicles). This can usually be cast as a
constrained optimization problem and tackled
with various approaches, such as integer programming, graph theory, etc. Market based methods [Dunbar and Murray, 2002], [Voos, 1999] and especially auction theory [Clearwater, 1996], [Walsh and Wellman, 1998], [Engelbrecht, et al., 1983] can be applied as a solution to autonomous mission re-planning.
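As an illustration of how an auction-based allocation might look in practice, the sketch below implements a simple single-round, first-price task auction: each UAV bids its estimated cost for each task and every task goes to the lowest bidder. This is a generic textbook mechanism offered only as a hedged example; the cost model and function names are assumptions, not the allocation schemes of the cited references.

from typing import Dict, Tuple

def estimate_cost(uav_position: Tuple[float, float], task_position: Tuple[float, float]) -> float:
    # Assumed cost model: straight-line distance from the UAV to the task.
    dx = uav_position[0] - task_position[0]
    dy = uav_position[1] - task_position[1]
    return (dx * dx + dy * dy) ** 0.5

def auction_tasks(uavs: Dict[str, Tuple[float, float]],
                  tasks: Dict[str, Tuple[float, float]]) -> Dict[str, str]:
    """Assign each task to the UAV submitting the lowest bid (cost)."""
    assignment = {}
    for task_id, task_pos in tasks.items():
        bids = {uav_id: estimate_cost(pos, task_pos) for uav_id, pos in uavs.items()}
        assignment[task_id] = min(bids, key=bids.get)
    return assignment

# Example: three UAVs bidding on two reconnaissance waypoints.
uavs = {"uav1": (0.0, 0.0), "uav2": (5.0, 5.0), "uav3": (10.0, 0.0)}
tasks = {"observe_A": (1.0, 1.0), "observe_B": (9.0, 1.0)}
print(auction_tasks(uavs, tasks))   # {'observe_A': 'uav1', 'observe_B': 'uav3'}

In a networked team the same bidding round would be run over the communication agent, with ties and dropped bids handled by the mission re-planning agent.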
Planning the UAVs’ flight route is also an
integral part of mission planning. A modified A*
search algorithm, which attempts to minimize a
suitable cost function consisting of the weighted
sum of distance, hazard and maneuverability
measures [Bertsekas, 1992], [Vachtsevanos et. al,
1997], can be utilized to facilitate the design of
the route planner. In the case of a leader-
follower scenario, an optimal route is generated
for the leader, while the followers fly in close
formation in the proximity of the leader.
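The sketch below shows one way such a weighted-cost A* search could be set up on a grid: the edge cost is a weighted sum of distance, a hazard score and a maneuverability (turn) penalty, and an admissible distance heuristic guides the search. The weights, grid representation and function names are assumptions for illustration, not the specific formulation of the cited route planner.

import heapq
from typing import Dict, List, Tuple

def plan_route(grid_hazard: Dict[Tuple[int, int], float],
               start: Tuple[int, int], goal: Tuple[int, int],
               w_dist: float = 1.0, w_hazard: float = 5.0, w_turn: float = 0.5) -> List[Tuple[int, int]]:
    """Modified A*: minimize weighted distance + hazard + maneuverability cost."""
    def heuristic(p):
        # Admissible straight-line distance to the goal.
        return ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2) ** 0.5

    frontier = [(heuristic(start), 0.0, start, None)]   # (f, g, cell, previous heading)
    best_g, parent = {start: 0.0}, {start: None}
    while frontier:
        _, g, cell, heading = heapq.heappop(frontier)
        if cell == goal:
            path = [cell]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]
        for step in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (cell[0] + step[0], cell[1] + step[1])
            if nxt not in grid_hazard:
                continue
            turn_cost = w_turn if heading is not None and step != heading else 0.0
            g_new = g + w_dist + w_hazard * grid_hazard[nxt] + turn_cost
            if g_new < best_g.get(nxt, float("inf")):
                best_g[nxt], parent[nxt] = g_new, cell
                heapq.heappush(frontier, (g_new + heuristic(nxt), g_new, nxt, step))
    return []   # no route found

For a leader-follower team, the planner would be run for the leader's route only, with the followers commanded to hold a fixed offset formation.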
The global situation awareness agent, interacting
with the knowledge fusion agent, evaluates the
world conditions based on data gathered from
each UAV (and ground sensors if available) and
reasons about the enemy's likely actions. Adversarial reasoning and deception reasoning are two important tasks executed here. The global performance measurement agent measures the performance of the team and suggests team re-configuration or mission re-planning, whenever necessary. Quality of service (QoS) is assessed to make the best effort to accomplish the mission and meet the predefined quality criteria. Real world implementation of this level is not limited to the agents depicted in the figure. For example, in heterogeneous agent societies, knowledge of coordination protocols and languages may also reside [Sousa and Pereira, 2003].
III. Formation Control
The problem of finding a control algorithm,
which will ensure that multiple autonomous
vehicles can maintain a
formation while
traversing a desired path and avoid inter-vehicle
collisions, will be referred to as the formation
control problem. The formation control problem
has recently received considerable attention due
in part to its wide range of applications in
aerospace and robotics. A classic example
involving the implementation of the virtual
potential problem is presented in [Howard et. al,
2000]. The authors performed simulations on a
two-dimensional system, which proved to be
well behaved. However, as they mention in their
conclusion, the drawback of the virtual potential
function approach is the possibility of being
“trapped” in local minima. Hence, if local
minima exist, one cannot guarantee that the
system is stable. In [Baras, et. al, 2003], the
individual trajectories of autonomous vehicles
moving in formation were generated by solving
the optimal control problem at each time step.
This is computationally demanding and hence
not possible to perform in real-time with current
hardware.
This paper views the formation control problem from a two player differential game perspective, which provides a framework to determine acceptable initial vehicle deployment conditions and also provides insight into acceptable formation maneuvers that can be performed while maintaining the formation.
The formation control problem can be regarded
as a Pursuit Game, except that it is, in general,
much more complex in terms of the combined
dynamical equations, since the system consists
of n pursuers and n evaders instead of only one
of each. However, if the group of vehicles is
viewed as the pursuer and the group of desired
points in the formation as the evader, the
problem is essentially reduced to the standard but much more complex pursuit game.
Differential Game Theory was initially used to
determine optimal military strategies in continuous time conflicts governed by some
given dynamics and constraints [Isaacs, 1965].
One such application is the so-called Pursuit
Game in which a pursuer has to collide with an
evading target. Naturally, in order to solve such
a problem it is advantageous to know the
dynamics and the positional information of both
the evader and the pursuer, that is, the Pursuit
Game will be viewed as a Perfect Information
Game.
Stability of the formation of vehicles is guaranteed if the vehicles can reach their destination within some specified time, assuming that the destination points are avoiding the vehicles in an optimal fashion. It seems counterintuitive that the destination points should be avoiding the vehicles optimally; however, if the vehicles can reach the points under such conditions then they will always be able to reach their destination.
As a consequence of our stability criterion, it is
necessary not only to determine the control
strategies of the vehicles but also the optimal
avoidance strategies of the desired points. Let us
label the final control vector of the vehicles by
φ and the final control vector of the desired
points by ψ . Then, the main equation which has
to be satisfied is:
$$\min_{\phi}\,\max_{\psi}\left[\,\sum_{j} V_j \cdot f_j(\vec{x},\phi,\psi) + G(\vec{x},\phi,\psi)\right] = 0 \qquad (1)$$
which has to be true for both φ and ψ.
The $f_j(\vec{x},\phi,\psi)$ term is the jth dynamic equation governing the system, and $V_j$ is the corresponding Value of the game. $G(\vec{x},\phi,\psi)$ is a predetermined function which, when integrated, provides the payoff of the game. Notice that the only quantity that is not specified in the equation is the $V_j$ term.
From the main equation it is possible to
determine the retrograde path equations (RPEs),
which will have to be solved to determine the
actual paths traversed by the vehicles in the
formation. However, initial conditions of the
retrograde path equations will have to be
considered in order to integrate the RPEs. These
initial condition requirements provide us with
the ability to introduce tolerance boundaries,
within which we say that the formation has
settled. Such boundaries add complexity to the problem; however, they also
provide a framework for positional measurement
errors.
The above formulation suggests a way for approaching the solution to the differential game. However, how does one ensure that inter-vehicle collisions are avoided? To ensure this, it is necessary to consider the payoff function determined by the integral of $G(\vec{x},\phi,\psi)$. As an example, if we simply seek that the vehicles must reach their goal within a certain time $\tau$, then naturally $G(\vec{x},\phi,\psi) = 1$. This can be verified by evaluating $\int_0^{\tau} G(\vec{x},\phi,\psi)\,dt = \tau$. Hence, we have restricted our solutions to the initial vehicle deployment, which will ensure that the vehicles will reach the desired points in $\tau$ time. However, if $G(\vec{x},\phi,\psi)$ is changed to penalize proximity of vehicles to one another, only initial conditions that ensure collision free trajectories will be valid. However, $G(\vec{x},\phi,\psi)$ does not provide the means to perform the actual collision avoidance, but merely limits the solution space. So, in order to incorporate collision avoidance into the controller, one can either change the value function or add terms to the system of dynamic equations.
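To make the preceding point concrete, the short sketch below shows one possible proximity-penalizing payoff integrand G: it equals 1 (pure time-to-capture) when the vehicles are well separated and grows as any pair of vehicles comes closer than a safety radius. The functional form and the penalty weight are assumptions chosen for illustration only; the paper does not prescribe a particular penalty.

from itertools import combinations
from typing import List, Tuple

def payoff_integrand(positions: List[Tuple[float, float, float]],
                     safety_radius: float = 1.0, weight: float = 10.0) -> float:
    """G = 1 plus a penalty that grows as vehicles come closer than safety_radius."""
    g = 1.0
    for (x1, y1, z1), (x2, y2, z2) in combinations(positions, 2):
        d = ((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2) ** 0.5
        if d < safety_radius:
            g += weight * (safety_radius - d) / safety_radius
    return g

# Integrating this G along a candidate trajectory yields a payoff that exceeds the
# plain time-to-capture whenever an inter-vehicle conflict occurs, so only
# collision-free initial conditions remain inside the admissible solution set.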
IV. Two-Vehicle Example
In order to illustrate some of the advantages and
disadvantages with
the differential game
approach to formation control, consider the
following system of simple point “Helicopters”,
that is, points that can move in three dimensions
governed by the following dynamic equations:
$$\begin{aligned}
\dot{x}_i &= v_{x_i}, & \dot{v}_{x_i} &= F_i\cos(\phi_{2i-1})\sin(\phi_{2i}) - k_i\, v_{x_i},\\
\dot{y}_i &= v_{y_i}, & \dot{v}_{y_i} &= F_i\sin(\phi_{2i-1})\sin(\phi_{2i}) - k_i\, v_{y_i},\\
\dot{z}_i &= v_{z_i}, & \dot{v}_{z_i} &= F_i\cos(\phi_{2i}) - k_i\, v_{z_i},
\end{aligned}$$

where $i = 1, 2$.
The two desired “points” are described by one
set of dynamic equations. This simply implies
that there is a constant distance separating the
two desired points, and that the formation can
only perform translations and not rotations in the
three dimensional space. Hence the dynamic
equations become:
$$\begin{aligned}
\dot{x}_d &= v_{x_d}, & \dot{v}_{x_d} &= F_d\cos(\psi_1)\sin(\psi_2) - k_d\, v_{x_d},\\
\dot{y}_d &= v_{y_d}, & \dot{v}_{y_d} &= F_d\sin(\psi_1)\sin(\psi_2) - k_d\, v_{y_d},\\
\dot{z}_d &= v_{z_d}, & \dot{v}_{z_d} &= F_d\cos(\psi_2) - k_d\, v_{z_d}.
\end{aligned}$$
In the above dynamical systems, the $k_i$ and $k_d$ factors are simply linear drag terms to ensure that the velocities are bounded, and the $F_i$ and $F_d$ terms are the magnitudes of the applied forces. Figure 5 shows the coordinate system and the associated angles.
Figure 5: Definition of Angles
Substituting the dynamical equations into the
main equation (1), we obtain the following
expressions:
$$\min_{\phi}\Big[\, F_1\big(V_{v_{x1}}\cos(\phi_1)\sin(\phi_2) + V_{v_{y1}}\sin(\phi_1)\sin(\phi_2) + V_{v_{z1}}\cos(\phi_2)\big)$$
$$\qquad\;+\, F_2\big(V_{v_{x2}}\cos(\phi_3)\sin(\phi_4) + V_{v_{y2}}\sin(\phi_3)\sin(\phi_4) + V_{v_{z2}}\cos(\phi_4)\big)\Big]$$

and

$$\max_{\psi}\Big[\, F_d\big(V_{v_{xd}}\cos(\psi_1)\sin(\psi_2) + V_{v_{yd}}\sin(\psi_1)\sin(\psi_2) + V_{v_{zd}}\cos(\psi_2)\big)\Big] \qquad (2)$$
To obtain the control law that results from the
max-min solution of equation (2), the following
lemma is used:
Lemma 1: Let $a, b \in \mathbb{R}$. Then

$$\max_{\theta}\big(a\cos(\theta) + b\sin(\theta)\big) = \rho, \qquad \rho = \sqrt{a^2 + b^2},$$

and the maximum is obtained where $\cos(\theta) = a/\rho$ and $\sin(\theta) = b/\rho$.
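This is the standard harmonic-addition identity; a one-line justification (added here for completeness) is

$$a\cos(\theta) + b\sin(\theta) = \rho\left(\frac{a}{\rho}\cos(\theta) + \frac{b}{\rho}\sin(\theta)\right) = \rho\cos(\theta - \theta^{*}), \qquad \cos(\theta^{*}) = \frac{a}{\rho},\ \sin(\theta^{*}) = \frac{b}{\rho},$$

which attains its maximum value $\rho$ at $\theta = \theta^{*}$.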
By combining Lemma 1 with Equation 2, the
following control strategy for vehicle 1 is found:
$$\cos(\phi_1) = -\frac{V_{v_{x1}}}{\rho_1}, \quad \sin(\phi_1) = -\frac{V_{v_{y1}}}{\rho_1}, \quad \cos(\phi_2) = -\frac{V_{v_{z1}}}{\rho_2}, \quad \sin(\phi_2) = -\frac{\rho_1}{\rho_2},$$

where

$$\rho_1 = \sqrt{V_{v_{x1}}^2 + V_{v_{y1}}^2} \quad \text{and} \quad \rho_2 = \sqrt{V_{v_{x1}}^2 + V_{v_{y1}}^2 + V_{v_{z1}}^2}.$$
Similar results are obtained for vehicle 2. For the
optimal avoidance strategy of the desired
points, we obtain the following:
$$\cos(\psi_1) = +\frac{V_{v_{xd}}}{\rho_{1d}}, \quad \sin(\psi_1) = +\frac{V_{v_{yd}}}{\rho_{1d}}, \quad \cos(\psi_2) = +\frac{V_{v_{zd}}}{\rho_{2d}}, \quad \sin(\psi_2) = +\frac{\rho_{1d}}{\rho_{2d}},$$

with $\rho_{1d}$ and $\rho_{2d}$ defined analogously for the desired points.

From this, we see that the retrograde equations have the following form:

$$\mathring{x}_1 = -v_{x_1}, \qquad \mathring{v}_{x_1} = -F_1\frac{V_{v_{x1}}}{\rho_2} + k_1\, v_{x_1}, \qquad \mathring{V}_{x_1} = 0, \qquad \mathring{V}_{v_{x1}} = V_{x_1} - k_1\, V_{v_{x1}}.$$
For this example, the final value will be zero,
and occurs when the difference between the
desired position and the actual position is zero.
Naturally, to obtain a more general solution, a
solution manifold should be used; however, in
order to display the utility of this approach, the
previously mentioned
final conditions will
suffice. The closed form expression of the value
function is then of the form:
$$V_{v_{x1}} = (x_1 - x_d)\cdot\frac{1 - e^{-k_1 t}}{k_1}.$$
It should be noted that the above analysis could be performed on a reduced set of differential equations, where each equation would express the differences in distance and velocity, and hence reduce the number of differential equations by a factor of 2. However, for the sake
of clarity, the analysis is performed on the actual
position and velocity differential equations.
Furthermore, it should also be noted that this
solution closely resembles the isotropic rocket
pursuit game described in [Isaacs, 1965]. This is
due to the fact that the dynamic equations are
decoupled, and hence working within a three-
dimensional framework will not change the
problem considerably.
V. Simulation Results
From the closed form expression of the control
presented in the previous section, it is obvious
that the optimal strategies are in fact bang-bang
controllers. Since the forces in the system are
not dependent on the proximity of the vehicles
to the desired points, there will always exist
some positional error. It is however possible to
resolve this problem simply by switching controllers at some error threshold, or by introducing terms that minimize the force terms $F_1$ and $F_2$ as the vehicles approach the desired points.
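A minimal simulation sketch of the two point-"helicopter" example is given below: it integrates the dynamics of Section IV with forward Euler and applies the bang-bang strategy derived above, here approximated by pointing the full force along the position error (a simplification of the value-gradient direction), together with the error-threshold switch mentioned above. The time step, gains and switch radius are illustrative assumptions, not the authors' simulation parameters.

import math

def step_vehicle(state, target, F=2.0, k=0.5, dt=0.01, switch_radius=0.05):
    """One Euler step of a point-mass 'helicopter' chasing a desired point.

    state = [x, y, z, vx, vy, vz]; target = (xd, yd, zd).
    The applied force has constant magnitude F (bang-bang) aimed along the
    position error; inside switch_radius the force is cut to zero, which is
    one of the switching fixes mentioned in the text.
    """
    x, y, z, vx, vy, vz = state
    ex, ey, ez = target[0] - x, target[1] - y, target[2] - z
    dist = math.sqrt(ex * ex + ey * ey + ez * ez)
    if dist > switch_radius:
        ax = F * ex / dist - k * vx
        ay = F * ey / dist - k * vy
        az = F * ez / dist - k * vz
    else:
        ax, ay, az = -k * vx, -k * vy, -k * vz
    return [x + vx * dt, y + vy * dt, z + vz * dt,
            vx + ax * dt, vy + ay * dt, vz + az * dt]

# Two vehicles tracking two points moving on a circle of radius three,
# separated by a constant offset (translation-only formation).
states = [[3.0, 0.0, 0.0, 0.0, 0.0, 0.0], [-3.0, 0.0, 0.0, 0.0, 0.0, 0.0]]
offsets = [0.0, math.pi]
for n in range(5000):
    t = n * 0.01
    for i, off in enumerate(offsets):
        target = (3.0 * math.cos(0.2 * t + off), 3.0 * math.sin(0.2 * t + off), 1.0)
        states[i] = step_vehicle(states[i], target)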
Figure 6: Two-Vehicle Simulation with Sufficient
Vehicle Velocities
Figure 7: Two-Vehicle Simulation with Insufficient
Vehicle Velocities
The above plot shows the tracking capabilities of
the derived controller. The two vehicles are
attempting to follow two parameterized circular
trajectories with a radius of three. In Figure 6 the
vehicles can move quickly enough to actually
reach the desired trajectories, while in Figure 7
the velocities of the vehicles are not sufficient to
reach the desired trajectories. In the latter case,
the vehicles simply move in a smaller circle,
which ensures that the error remains constant.
VI. Target Tracking

The term target tracking is often used to refer to the task of finding/estimating the motion parameters (mainly the location and direction) of a moving target in a time sequence of measurements. This task is achievable as long as the target is within the sensor's field of view (FOV). If the target keeps moving away to the point that it runs off the FOV, the target tracking task will fail to track the moving target until the target re-enters the sensor's FOV. To address such a problem, the sensor is mounted on a moving platform such as a UAV. We call the new setup (the sensor plus the UAV) an agent. Thus, we can start a second task, other than the target tracking task, to (reactively or proactively) move the sensor to guarantee that the target stays in view. That second task is what we call the agent placement task. The work presented in this paper is of the active sensing-based target tracking variety, in which both tasks discussed above are integrated.
There exists a number of efforts to formally
describe the dynamic agent placement problem
for target tracking. The choice is made to use a formulation of the variety of Weighted Cooperative Multi-robot Observation of Multiple Moving Targets (W-CMOMMT) (Werger and Mataric, 2000), (Werger and Mataric, 2001) since it captures the multiple-observer-multiple-target scenario with target
prioritization. W-CMOMMT can be shown to be
an NP-hard problem [Hegazy and Vachtsevanos,
2004].
The agent (sensor) placement problem is formulated by defining a global utility function to be optimized given a graph representing the region of interest, a team of agents and a set of targets. A coarse motion model is developed first, where target transitions follow a stochastic model described by an Mth order Markov chain.
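A minimal sketch of such a coarse target-motion model is shown below for the first-order case (M = 1): the region of interest is a graph of cells and the target hops between adjacent cells according to a row-stochastic transition matrix. The graph, the transition probabilities and the function names are illustrative assumptions; the cited formulation is more general (Mth order, weighted targets).

import random
from typing import Dict, List

# Hypothetical region-of-interest graph: each cell lists its reachable neighbors.
region_graph: Dict[str, List[str]] = {
    "A": ["A", "B"], "B": ["A", "B", "C"], "C": ["B", "C", "D"], "D": ["C", "D"],
}

# First-order Markov transition probabilities over neighboring cells (rows sum to 1).
transition: Dict[str, Dict[str, float]] = {
    cell: {nxt: 1.0 / len(neighbors) for nxt in neighbors}
    for cell, neighbors in region_graph.items()
}

def sample_target_path(start: str, steps: int) -> List[str]:
    """Sample a target trajectory over the graph from the Markov chain."""
    path, cell = [start], start
    for _ in range(steps):
        nxt_cells = list(transition[cell].keys())
        weights = list(transition[cell].values())
        cell = random.choices(nxt_cells, weights=weights, k=1)[0]
        path.append(cell)
    return path

print(sample_target_path("A", 10))   # e.g. ['A', 'B', 'C', 'B', ...]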