Formation Control of Nonholonomic Mobile Robots with
Omnidirectional Visual Servoing and Motion Segmentation
René Vidal
Omid Shakernia
Shankar Sastry
Department of Electrical Engineering & Computer Sciences
University of California at Berkeley, Berkeley CA 94720-1772
{rvidal, omids, sastry}@eecs.berkeley.edu
Abstract— We consider the problem of having a team of
nonholonomic mobile robots follow a desired leader-follower
formation using omnidirectional vision. By specifying the
desired formation in the image plane, we translate the control
problem into a separate visual servoing task for each follower.
We use a rank constraint on the omnidirectional optical flows
across multiple frames to estimate the position and velocities
of the leaders in the image plane of each follower. We show
that the direct feedback-linearization of the leader-follower
dynamics suffers from degenerate configurations due to the
nonholonomic constraints of the robots and the nonlinearity
of the omnidirectional projection model. We therefore design
a nonlinear tracking controller that avoids such degenerate
configurations, while preserving the formation input-to-state
stability. Our control
law naturally incorporates collision
avoidance by exploiting the geometry of omnidirectional
cameras. We present simulations and experiments evaluating
our omnidirectional vision-based formation control scheme.
I. INTRODUCTION
The problem of controlling a formation of ground and
aerial vehicles is gaining significant importance in the
control and robotics communities thanks to applications in
air traffic control, satellite clustering, automatic highways,
and mobile robotics. Previous work in formation control
(see Section I-B for a brief review) assumes that commu-
nication among the robots is available and concentrates
on aspects of the problem such as stability, controller
synthesis, and feasibility.
In the absence of communication, the formation control
problem becomes quite challenging from a sensing view-
point due to the need for simultaneous estimation of the
motion of multiple moving objects. Das et al. [2] tackle
vision-based formation control with feedback-linearization
by employing a clever choice of coordinates in the con-
figuration space. They mitigate sensing difficulties by
painting each leader a different color, and then using
color tracking to detect and track the leaders.
Fig. 1. Motion segmentation for two mobile robots based on their omnidirectional optical flows: (a) one frame of the sequence; (b) segmentation results.
Our approach is to translate the formation control prob-
lem from the configuration space into a separate visual
servoing control
task for each follower. In Section II
we show how to estimate the position and velocities of
each leader in the image plane of the follower by using
a rank constraint on the central panoramic optical flow
across multiple frames. We also derive the leader-follower
dynamics in the image plane of the follower for a cali-
brated camera undergoing planar motion. In Section III we
show that the direct feedback-linearization of the leader-
follower dynamics suffers from degenerate configurations
due to the nonholonomic constraints of the robots and the
nonlinearity of the central panoramic projection model.
We therefore design a nonlinear tracking controller that
avoids such degenerate configurations, while maintaining
the formation input-to-state stability. Our control
law
naturally incorporates collision avoidance by exploiting
the geometry of central panoramic cameras. In Section IV
we present simulations and experiments validating our
omnidirectional vision-based formation control scheme.
Section V concludes the paper.
A. Contributions of this paper
In this paper, we present a novel approach to vision-based formation control of nonholonomic robots equipped with central panoramic cameras in which the detection and tracking of the leaders is based solely on their motions in the image plane, as illustrated in Fig. 1.
B. Previous work
Swaroop et al. [10] proposed the notion of string stability for line formations and derived sufficient conditions for a formation to be string stable. Pant et al. [7] generalized string stability to formations in a planar mesh through the concept of mesh stability. Tanner et al. [12]
concentrated on formations in acyclic graphs and studied the effect of feedback and feedforward on the input-to-state stability of the formation. Fax et al. [5] analyzed the stability of formations in arbitrary graphs and proposed a Nyquist-like stability criterion that can be derived from the spectral properties of the graph Laplacian. Egerstedt and Hu [4] proposed the use of formation constraint functions to decouple the coordination and tracking problems, while maintaining the stability of the formation. Stipanovic et al. [9] studied the design of decentralized control laws that result in stable formations, provided that the leader's desired velocity is known. Desai et al. [3] proposed a graph-theoretic approach for coordinating transitions between two formations. Tabuada et al. [11] studied the conditions under which a desired formation is feasible, i.e. whether it is possible to design a trajectory that both maintains the formation and satisfies the kinematic constraints of the robots.
II. CENTRAL PANORAMIC FORMATION DYNAMICS
A. Central panoramic camera model and its optical flow
Central panoramic cameras are realizations of omnidi-
rectional vision systems that combine a mirror and a lens
and have a unique effective focal point. Building on the
results of [6], we show in [8] that the image $(x, y)^T$ of a 3D point $q = (X, Y, Z)^T \in \mathbb{R}^3$ obtained by a calibrated central panoramic camera with parameter $\xi \in [0, 1]$ can be modeled as a projection onto the surface
$$z = f_\xi(x, y) \triangleq \frac{1 - \xi^2(x^2 + y^2)}{1 + \xi\sqrt{1 + (1 - \xi^2)(x^2 + y^2)}} \qquad (1)$$
followed by orthographic projection onto the $XY$ plane, as illustrated in Fig. 2. The composition of these two projections gives¹:
$$\begin{bmatrix} x \\ y \end{bmatrix} = \frac{1}{Z + \xi\sqrt{X^2 + Y^2 + Z^2}} \begin{bmatrix} X \\ Y \end{bmatrix} \triangleq \frac{1}{\lambda} \begin{bmatrix} X \\ Y \end{bmatrix}. \qquad (2)$$
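To make the model concrete, here is a minimal numerical sketch of (1) and (2) in Python; the function names are ours, introduced for illustration, not part of any released implementation:

```python
import numpy as np

def project_central_panoramic(q, xi):
    """Project a 3D point q = (X, Y, Z) with a calibrated central
    panoramic camera of parameter xi in [0, 1], following (2)."""
    X, Y, Z = q
    lam = Z + xi * np.sqrt(X**2 + Y**2 + Z**2)   # scale factor lambda
    return np.array([X / lam, Y / lam])          # image point (x, y)

def f_xi(x, y, xi):
    """Retina surface z = f_xi(x, y) of (1), onto which the point is
    projected before the orthographic projection onto the XY plane."""
    r2 = x**2 + y**2
    return (1 - xi**2 * r2) / (1 + xi * np.sqrt(1 + (1 - xi**2) * r2))

# Example: a paracatadioptric camera (xi = 1) viewing a ground point.
x, y = project_central_panoramic(np.array([2.0, 1.0, -1.0]), xi=1.0)
z = f_xi(x, y, xi=1.0)   # equals Z / lambda by construction
```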
When the camera moves in the $XY$ plane, its angular and linear velocities are given by $\Omega = (0, 0, \Omega_z)^T \in \mathbb{R}^3$ and $V = (V_x, V_y, 0)^T \in \mathbb{R}^3$, respectively. Relative to the camera, the point $q$ evolves as $\dot{q} = \Omega \times q + V$. This induces a motion in the central panoramic image plane, which can be computed by differentiating (2) with respect to time. We show in [8] that the optical flow $(\dot{x}, \dot{y})^T$ induced by a central panoramic camera undergoing a planar motion $(\Omega, V)$ is given by:
$$\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} = \begin{bmatrix} y \\ -x \end{bmatrix} \Omega_z + \frac{1}{\lambda} \begin{bmatrix} 1 - \rho x^2 & -\rho xy \\ -\rho xy & 1 - \rho y^2 \end{bmatrix} \begin{bmatrix} V_x \\ V_y \end{bmatrix}, \qquad (3)$$
where $\lambda = Z + \xi\sqrt{X^2 + Y^2 + Z^2}$ is an unknown scale factor, $z = f_\xi(x, y)$ and $\rho \triangleq \xi^2/(1 + z)$.
¹Notice that $\xi = 0$ corresponds to perspective projection, while $\xi = 1$ corresponds to paracatadioptric projection (parabolic mirror with orthographic lens).
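For reference, the flow (3) can be evaluated directly; the following sketch (reusing `f_xi` from the previous listing, with $\lambda$ assumed known) mirrors the equation term by term:

```python
def central_panoramic_flow(x, y, Omega_z, V, lam, xi):
    """Optical flow (3) of the image point (x, y) under planar camera
    motion (Omega_z, V), with V = (Vx, Vy) and scale factor lam."""
    z = f_xi(x, y, xi)
    rho = xi**2 / (1 + z)
    A = np.array([[1 - rho * x**2, -rho * x * y],
                  [-rho * x * y,   1 - rho * y**2]])
    return np.array([y, -x]) * Omega_z + (A @ V) / lam
```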
Fig. 2. Central panoramic projection model: the point $q = (X, Y, Z)^T$ is projected onto the virtual retina $z = f_\xi(x, y)$, then orthographically onto the image plane.
B. Central panoramic motion segmentation
Consider a central panoramic camera observing k lead-
ers moving in the XY plane. We now describe how to
estimate the image positions of the leaders from measure-
ments of their optical flows across multiple frames. To this
end, let $(x_i, y_i)^T$, $i = 1, \dots, n$, be a pixel in the zeroth frame associated with one of the leaders and let $(\dot{x}_{ij}, \dot{y}_{ij})^T$ be its optical flow in frame $j = 1, \dots, m$ relative to the zeroth. From (3) we have $[\dot{x}_{ij} \ \ \dot{y}_{ij}] = S_i M_j^T$, where
$$S_i = \begin{bmatrix} y_i & -x_i & \frac{1 - \rho_i x_i^2}{\lambda_i} & \frac{-\rho_i x_i y_i}{\lambda_i} & \frac{1 - \rho_i y_i^2}{\lambda_i} \end{bmatrix} \in \mathbb{R}^{1 \times 5}$$
$$M_j = \begin{bmatrix} \Omega_{zj} & 0 & V_{xj} & V_{yj} & 0 \\ 0 & \Omega_{zj} & 0 & V_{xj} & V_{yj} \end{bmatrix} \in \mathbb{R}^{2 \times 5}.$$
Therefore the optical flow matrix $W \in \mathbb{R}^{n \times 2m}$ associated with a single leader satisfies
$$W \triangleq \begin{bmatrix} \dot{x}_{11} & \dot{y}_{11} & \cdots & \dot{x}_{1m} & \dot{y}_{1m} \\ \vdots & \vdots & & \vdots & \vdots \\ \dot{x}_{n1} & \dot{y}_{n1} & \cdots & \dot{x}_{nm} & \dot{y}_{nm} \end{bmatrix} = \tilde{S}\tilde{M}^T \qquad (4)$$
where $\tilde{S} = [S_1^T \ S_2^T \cdots S_n^T]^T \in \mathbb{R}^{n \times 5}$ denotes the structure matrix and $\tilde{M} = [M_1^T \ M_2^T \cdots M_m^T]^T \in \mathbb{R}^{2m \times 5}$ denotes the motion matrix. We conclude that, for a single leader-follower configuration moving in the $XY$ plane, the collection of central panoramic optical flows across multiple frames lies on a 5-dimensional subspace of $\mathbb{R}^{2m}$. More generally, the optical flow matrix associated with $k$ independently moving leaders can be decomposed as:
$$W = \begin{bmatrix} \tilde{S}_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \tilde{S}_k \end{bmatrix} \begin{bmatrix} \tilde{M}_1^T \\ \vdots \\ \tilde{M}_k^T \end{bmatrix} = SM^T \qquad (5)$$
where $S \in \mathbb{R}^{n \times 5k}$ and $M \in \mathbb{R}^{2m \times 5k}$. In practice, however, the optical flow matrix will not be block diagonal,
because the segmentation of the image measurements is
not known, i.e. we do not know which pixels correspond to
which leader. We showed in [8] that one can recover the
block diagonal structure of W , hence the segmentation
of the image measurements, by looking at its leading
singular vector v. Since the entries of v are equal for
pixels corresponding to the same leader and different
otherwise, one can determine which pixels correspond
to which leader by thresholding v. We use the center
of gravity of each group of pixels as the pixel position
for that leader. Note that, in practice, there will be an
extra group of pixels corresponding to static points in the
ground plane, whose motion is simply the motion of the
camera. For a formation control scenario with few leaders,
we can always identify this group of pixels as the largest
one in the image. Since this group does not correspond to
a leader, we do not compute its center of gravity.
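The segmentation step itself is a few lines of linear algebra. The sketch below assumes the flows have already been stacked into $W$ as in (4)-(5); the grouping tolerance `tol` is a tuning parameter we introduce for illustration:

```python
import numpy as np

def segment_leaders(W, tol=1e-3):
    """Group the n pixels by thresholding the leading (left) singular
    vector of the optical flow matrix W (n x 2m); entries of v are
    equal for pixels moving with the same leader."""
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    v = U[:, 0]
    labels = -np.ones(len(v), dtype=int)
    centers = []                       # one representative entry per group
    for i, vi in enumerate(v):
        for g, c in enumerate(centers):
            if abs(vi - c) < tol:
                labels[i] = g
                break
        else:
            centers.append(vi)
            labels[i] = len(centers) - 1
    return labels

def leader_positions(labels, pixels):
    """Center of gravity of each group, dropping the largest group,
    which corresponds to static points on the ground plane."""
    groups = [np.flatnonzero(labels == g) for g in range(labels.max() + 1)]
    groups.sort(key=len)               # largest group (background) last
    return [pixels[idx].mean(axis=0) for idx in groups[:-1]]
```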
C. Central panoramic leader-follower dynamics
Consider now the following nonholonomic kinematic model for the dynamics of each leader $\ell$ and follower $f$:
$$\dot{X}_i = v_i\cos\theta_i, \quad \dot{Y}_i = v_i\sin\theta_i, \quad \dot{\theta}_i = \omega_i, \quad i = \ell, f \qquad (6)$$
where the state $(X_i, Y_i, \theta_i) \in SE(2)$, and the inputs $v_i$ and $\omega_i$ are the linear and angular velocities, respectively. Let $T_i = (X_i, Y_i, 0)^T \in \mathbb{R}^3$. We showed in [13] that the relative angular and linear velocities of leader $\ell$ relative to follower $f$, $\Omega_{\ell f} \in \mathbb{R}^3$ and $V_{\ell f} \in \mathbb{R}^3$, are given by:
2
(!‘
(cid:10)‘f =
1
0
03
5
where F‘f , F‘f (T‘; Tf ; (cid:18)‘; (cid:18)f ; v‘; !‘)
0
0
13
5
!f ); V‘f =
(cid:0)2
(cid:0)
4
2
4
cos((cid:18)‘
sin((cid:18)‘
(cid:18)f )
(cid:18)f )
(cid:0)
(cid:0)
v‘
(cid:0)
cos((cid:18)f ) sin((cid:18)f )
sin((cid:18)f ) cos((cid:18)f )
(cid:21)
(cid:20)
(cid:21)
(cid:0)
(cid:21)(cid:20)
(cid:20)
Consider now a central panoramic camera mounted on-
board each follower. We assume that the mounting is
such that the camera coordinate system coincides with
that of the follower, i.e. the optical center is located at
$(X, Y, Z) = 0$ in the follower frame and the optical axis
equals the Z axis. Therefore, we can replace the above
expressions for $\Omega_{\ell f}$ and $V_{\ell f}$ in (3) to obtain the optical
flow of a pixel associated with leader ‘ in the image plane
of follower f as:
$$\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} = -\begin{bmatrix} \frac{1 - \rho x^2}{\lambda} & y \\ \frac{-\rho xy}{\lambda} & -x \end{bmatrix} \begin{bmatrix} v_f \\ \omega_f \end{bmatrix} + \begin{bmatrix} \frac{1 - \rho x^2}{\lambda} & \frac{-\rho xy}{\lambda} & y \\ \frac{-\rho xy}{\lambda} & \frac{1 - \rho y^2}{\lambda} & -x \end{bmatrix} \begin{bmatrix} F_{\ell f} \\ \omega_\ell \end{bmatrix}.$$
Since $z = f_\xi(x, y)$ and $\lambda = Z/z$, if we assume a ground plane constraint, i.e. if we assume that $Z = Z_{ground} < 0$ is known, then we can write the equations of motion of a pixel as the drift-free control system
$$\begin{bmatrix} \dot{x} \\ \dot{y} \end{bmatrix} = H(x, y)u_f + d_{\ell f} \qquad (8)$$
where $u_f = (v_f, \omega_f)^T \in \mathbb{R}^2$ is the control action for the follower and $d_{\ell f} \in \mathbb{R}^2$ can be thought of as an external input that depends on the state and control action of the leader and the state of the follower.
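Under the ground plane assumption, $H(x, y)$ in (8) is fully determined by the pixel position, $\xi$ and $Z_{ground}$. A sketch, reusing `f_xi` from Section II-A; the signs follow our reconstruction of the flow equation above:

```python
def H_matrix(x, y, xi, Z_ground):
    """Input matrix H(x, y) of the pixel dynamics (8) under the
    ground plane constraint Z = Z_ground < 0."""
    z = f_xi(x, y, xi)          # retina surface (1)
    lam = Z_ground / z          # lambda = Z / z
    rho = xi**2 / (1 + z)
    return -np.array([[(1 - rho * x**2) / lam,  y],
                      [(-rho * x * y) / lam,   -x]])
```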
III. OMNIDIRECTIONAL VISUAL SERVOING
In this section, we design a control law uf for each
follower to keep a desired distance $r_d$ and angle $\alpha_d$ from each leader in the image plane. That is, we assume that we are given a desired pixel location $(x_d, y_d)$ for each leader, where $(x_d, y_d) = (r_d\cos\alpha_d, r_d\sin\alpha_d)$.
A. Visual servoing by feedback-linearization
Let us first apply feedback-linearization to the control system (8) with output $(x, y)^T$. We observe that the system has a well defined vector relative degree of $(1, 1)$ for all pixels $(x, y)$ such that $H(x, y)$ is of rank 2, i.e. whenever $x \neq 0$ and $x^2 + y^2 \neq 1/\xi^2$. In this case, the relative degree of the system is $1 + 1 = 2$, thus the zero dynamics of the system are trivially exponentially minimum phase. Therefore the control law
$$u_f = -H(x, y)^{-1}\left(d_{\ell f} + \begin{bmatrix} k_1(x - x_d) \\ k_2(y - y_d) \end{bmatrix}\right) \qquad (9)$$
results in a locally exponentially stable system around $(x_d, y_d)$ whenever $k_1 > 0$ and $k_2 > 0$.
Notice however that the control law (9) is undefined whenever $x = 0$ or $x^2 + y^2 = 1/\xi^2$. The first degenerate configuration $x = 0$ arises from the nonlinearity of the central panoramic projection model and the nonholonomic constraints of the robots. For instance, consider a (static) point in the ground for which $x = 0$. Then the $y$ component of the flow $\dot{y}$ is zero. Such a flow can be generated by purely translating the follower, or by purely rotating the follower, or by an appropriate rotation-translation combination. In other words, given the optical flow of that pixel, we cannot tell whether the follower is rotating or translating. Notice also that, due to the nonholonomic constraints of the robots, if $x = 0$ and $y - y_d \neq 0$, then the robot cannot instantaneously compensate the error since it cannot translate along its $Y$ axis. On the other hand, the second degenerate configuration $x^2 + y^2 = 1/\xi^2$ corresponds to the set of pixels on the outer circle of an omnidirectional image. These pixels are projections of 3D points at infinity, i.e. they correspond to the horizon $z = 0$. Therefore, the degenerate configuration $x^2 + y^2 = 1/\xi^2$ is not so critical from a control point of view, because it can be avoided by assuming a finite arena. We therefore assume that $x^2 + y^2 \leq r_{max}^2 < 1/\xi^2$ from now on.
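A sketch of (9) with an explicit guard for the two degenerate configurations just discussed; it reuses `H_matrix` from the sketch above, and the gains and the threshold `eps` are illustrative placeholders, not values from our experiments:

```python
def control_feedback_linearization(p, d_lf, p_d, xi, Z_ground,
                                   k1=1.0, k2=1.0, eps=1e-2):
    """Control law (9): u_f = -H(x, y)^{-1} (d_lf + K (p - p_d)).
    Returns None near x = 0 and x^2 + y^2 = 1/xi^2, where H(x, y)
    loses rank and (9) is undefined."""
    x, y = p
    if abs(x) < eps or abs(x**2 + y**2 - 1.0 / xi**2) < eps:
        return None              # degenerate: caller must handle it
    err = np.array([k1 * (x - p_d[0]), k2 * (y - p_d[1])])
    return -np.linalg.solve(H_matrix(x, y, xi, Z_ground), d_lf + err)
```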
B. Visual servoing by nonlinear feedback
Although the control law (9) guarantees locally that $(x(t), y(t)) \to (x_d, y_d)$ asymptotically, this requires that $x(t) \neq 0$ for all $t$ and $x_d \neq 0$. Therefore,
a) one cannot specify a desired formation with $x_d = 0$;
b) even if $x_d \neq 0$, the controller will saturate when the leader crosses the follower's $Y$ axis at $x = 0$.
Since the latter case is fairly common in most formation configurations, we now design a slightly different controller that avoids this degeneracy, while maintaining the
input-to-state stability of the formation. We first rewrite
the leader-follower dynamics in polar coordinates $(r, \alpha)$
so as to exploit the geometry of the central panoramic
camera. The dynamics become:
$$\begin{bmatrix} \dot{r} \\ \dot{\alpha} \end{bmatrix} = -\begin{bmatrix} \frac{(1 - \rho r^2)\cos\alpha}{\lambda} & 0 \\ -\frac{\sin\alpha}{r\lambda} & -1 \end{bmatrix} \begin{bmatrix} v_f \\ \omega_f \end{bmatrix} + \tilde{d}_{\ell f} \qquad (10)$$
where $\tilde{d}_{\ell f}$ is the external input in polar coordinates. Rather than exactly inverting the dynamics as in (9), we use the pseudo-feedback linearizing control law:
$$u_f = \begin{bmatrix} \frac{\lambda\cos\alpha}{1 - \rho r^2} & 0 \\ -\frac{\sin\alpha\cos\alpha}{r(1 - \rho r^2)} & -1 \end{bmatrix}\left(\begin{bmatrix} k_1(r - r_d) \\ k_2(\alpha - \alpha_d) \end{bmatrix} + \tilde{d}_{\ell f}\right). \qquad (11)$$
With this controller, the closed-loop dynamics on the tracking errors $e_r = r - r_d$ and $e_\alpha = \alpha - \alpha_d$ become:
$$\begin{bmatrix} \dot{e}_r \\ \dot{e}_\alpha \end{bmatrix} = \begin{bmatrix} -k_1\cos^2(\alpha)\, e_r \\ -k_2\, e_\alpha \end{bmatrix} + \begin{bmatrix} \sin^2(\alpha) & 0 \\ 0 & 0 \end{bmatrix}\tilde{d}_{\ell f}. \qquad (12)$$
Therefore, $\alpha(t) \to \alpha_d$ asymptotically when $k_2 > 0$. On the other hand, after solving the first order differential equation for the error $e_r$ we obtain:
$$e_r(t) = e_r(t_0)\exp\left(-k_1\int_{\tau = t_0}^{t}\cos^2(\alpha(\tau))\,d\tau\right) + \int_{\tau = t_0}^{t}\sin^2(\alpha(\tau))\,d_r(\tau)\exp\left(-k_1\int_{\sigma = \tau}^{t}\cos^2(\alpha(\sigma))\,d\sigma\right)d\tau$$
where $d_r$ is the $r$ component of $\tilde{d}_{\ell f}$. A straightforward calculation shows that $|d_r(t)| \leq |v_\ell(t)/Z| + |\omega_\ell(t)|$.
j
j
Thus, if k1 > 0, (cid:11)(t)
(cid:25)=2 from some t on, and
=
(cid:6)
the leader velocities (v‘; !‘) are uniformly bounded, then
the formation is input-to-state stable (ISS)2. Now, since
t0)), the formation is ISS
(cid:11)(t) = (cid:11)d +e(cid:11)(t0) exp(
k2(t
except when (cid:11)d =
(cid:25)=2 and e(cid:11)(t0) = 0. Furthermore,
if the leader velocities are constant, so is the steady-state
tracking error. One may overcome this error by adding an
integral term to the controller (11).
(cid:0)
(cid:6)
(cid:0)
(cid:6)
Remark 3.1: Notice that the controller (11) is discontinuous at $e_\alpha = \pm\pi$ due to the identification of $S^1$ with $\mathbb{R}$, together with the fact that the seemingly continuous feedback term $k_2 e_\alpha$ does not respect the underlying topology of $S^1$. One could use smooth feedback instead, e.g. $k_2\sin(e_\alpha)$, at the cost of a spurious critical point at $\pm\pi$. Since the topology of the annulus dictates that such spurious critical points are inevitable for smooth vector fields, we prefer the discontinuous controller (11) for the benefit of greater performance.
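In code, the polar controller (11) reads as below; the signs follow our reconstruction (they are consistent with the closed-loop form (12)), the angle wrap makes the discontinuity of Remark 3.1 explicit, and the gains are placeholders:

```python
def control_polar(r, alpha, d_polar, r_d, alpha_d, lam, rho,
                  k1=1.0, k2=1.0):
    """Pseudo-feedback linearizing control law (11) in the polar
    image coordinates (r, alpha)."""
    e_r = r - r_d
    # wrap the angular error to [-pi, pi); controller (11) is
    # discontinuous at e_alpha = +/- pi (Remark 3.1)
    e_alpha = np.mod(alpha - alpha_d + np.pi, 2 * np.pi) - np.pi
    c, s = np.cos(alpha), np.sin(alpha)
    B = np.array([[lam * c / (1 - rho * r**2),        0.0],
                  [-s * c / (r * (1 - rho * r**2)),  -1.0]])
    return B @ (np.array([k1 * e_r, k2 * e_alpha]) + d_polar)
```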
C. Estimation of the feedforward term
In order to implement either controller (9) or controller (11), we need to feedforward the unknown external input $d_{\ell f} \in \mathbb{R}^2$. Although this term is a function of the state and control of the leader and the state of the follower, we do not need to measure any of these quantities. Instead, we only need to estimate the two-dimensional vector $d_{\ell f}$, which can be easily done from the output of the motion segmentation algorithm developed in Section II-B. To this end, let $(x_w, y_w)$ and $(\dot{x}_w, \dot{y}_w)$ be the position and optical flow of a pixel that corresponds to a static 3D point in the world such that $x_w \neq 0$. From (8)³, the velocities of the follower causing that optical flow are given by⁴:
$$u_f = H(x_w, y_w)^{-1}\begin{bmatrix} \dot{x}_w \\ \dot{y}_w \end{bmatrix}. \qquad (13)$$
Now let $(x_\ell, y_\ell)$ and $(\dot{x}_\ell, \dot{y}_\ell)$ be the position and optical flow of a pixel that corresponds to a 3D point on leader $\ell$. From (8), the external disturbance can be estimated as:
$$d_{\ell f} = \begin{bmatrix} \dot{x}_\ell \\ \dot{y}_\ell \end{bmatrix} - H(x_\ell, y_\ell)H(x_w, y_w)^{-1}\begin{bmatrix} \dot{x}_w \\ \dot{y}_w \end{bmatrix}. \qquad (14)$$
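This two-step estimate is immediate given the segmentation output; a sketch, reusing `H_matrix` and a single static pixel (footnote 4 suggests using several in a least-squares solve instead):

```python
def estimate_feedforward(p_leader, flow_leader, p_static, flow_static,
                         xi, Z_ground):
    """Estimate d_lf via (13)-(14): recover the follower's own motion
    from a static-ground pixel, then subtract the flow it induces at
    the leader's pixel from the leader's measured flow."""
    Hw = H_matrix(p_static[0], p_static[1], xi, Z_ground)
    Hl = H_matrix(p_leader[0], p_leader[1], xi, Z_ground)
    u_f = np.linalg.solve(Hw, flow_static)    # follower velocities, (13)
    return flow_leader - Hl @ u_f             # external input d_lf, (14)
```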
D. Collision avoidance
Although the control law (9) guarantees local stability
of the leader-follower formation, it does not guarantee that
the follower will not run into the leader. For example,
imagine that the follower is initially in front of the leader
and that the desired formation is with the follower behind
the leader. Since the closed-loop dynamics are linear in the
error $(x - x_d, y - y_d)$, the follower will apply a negative linear speed, and will most likely run into the leader.
Thanks to the geometry of central panoramic cameras,
collisions can be avoided by ensuring that
the leader
stays far enough away from the center of the image.
Effectively, our choice of image coordinates $(r, \alpha)$ for the controller (11) reveals the safe configurations as a simple constraint on $r$, namely $r_{min} \leq r \leq r_{max}$. Furthermore, the control law (11) is the gradient of a potential function
$$V(r, \alpha) = \frac{k_1(r - r_d)^2 + k_2(\alpha - \alpha_d)^2}{2}, \qquad (15)$$
which points transversely away from the safety boundary, and has a unique minimum at $(r_d, \alpha_d)$ (assuming $r_d > r_{min}$). Following Cowan et al. [1], one can modify $V(r, \alpha)$ to yield a proper navigation function whose associated controller guarantees collision avoidance.
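For completeness, a minimal sketch of the potential (15) and the safety constraint on $r$; the bounds `r_min` and `r_max` are deployment-specific assumptions:

```python
def potential(r, alpha, r_d, alpha_d, k1=1.0, k2=1.0):
    """Potential function (15) underlying controller (11)."""
    return 0.5 * (k1 * (r - r_d)**2 + k2 * (alpha - alpha_d)**2)

def is_safe(r, r_min, r_max):
    """Safe configurations: the leader's image radius stays within
    [r_min, r_max], away from the image center (collision) and the
    outer circle (horizon)."""
    return r_min <= r <= r_max
```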
IV. EXPERIMENTAL RESULTS
We tested our segmentation algorithm in a real se-
quence. Fig. 1(a) shows one out of 200 frames taken
by a paracatadioptric camera ((cid:24) = 1) observing two
moving robots. Fig. 1(b) shows the results of applying
the segmentation algorithm in Section II-B. The sequence
is correctly segmented into two independent motions.
³Notice that the second term in (8) is zero in this case, because the point in 3D space is static, i.e. $(v_\ell, \omega_\ell) = (0, 0)$.
⁴Notice that in the presence of noise one may improve the estimation of $u_f$ in (13) by using more than one pixel and solving the equations in a least squares sense.
Fig. 3. Omnidirectional vision-based formation control scheme: the controller maps the desired $(r_d, \alpha_d)$ and the camera measurements $(r, \alpha, \tilde{d}_{\ell f})$ of the leader to the follower inputs $(v_f, \omega_f)$.
We tested our omnidirectional vision-based formation control scheme (see Fig. 3) by having three nonholonomic robots start in a V-Formation and then follow a Line-Formation with $(r_d, \alpha_d) = (1/\sqrt{2}, 0)$, as illustrated in Fig. 4. Since $\alpha_d = 0$, we choose to use controller (9) in polar coordinates with the parameters $\xi = 1$, $k_1 = 2.5$ and $k_2 = 1.76$. Fig. 5 shows the simulation results. For $t \in [0, 29]$ the leader moves with $v_\ell = 0.5$ and $\omega_\ell = 0$ and the followers move from the initial configuration to the desired one. Notice how the followers automatically avoid collision due to Follower1 trying to move in between Follower2 and the leader. For $t \in [29, 36]$ the leader changes its angular velocity to $\omega_\ell = 1$, thus moving in a circle. Follower1 starts rotating to the right to follow the leader, but soon realizes that the leader is coming towards it, and hence it backs up to avoid collision. For $t \in [36, 55]$ the leader changes its angular velocity to $\omega_\ell = 0$, and the followers are able to return to the desired formation. For $t \in [55, 60]$ the leader turns at $\omega_\ell = 0.5$ and the followers are able to keep the formation. For $t \in [60, 100]$ the leader turns at $\omega_\ell = 0.1$ and the followers maintain the formation into a line and a circle.
V. CONCLUSIONS AND FUTURE WORK
We have presented a novel approach to formation
control of nonholonomic mobile robots equipped with
central panoramic cameras. Our approach uses motion segmentation techniques to estimate the position and velocities of each leader, and omnidirectional visual servoing for tracking and collision avoidance. We showed
voing for tracking and collision avoidance. We showed
that direct feedback-linearization of the leader-follower
dynamics leads to asymptotic tracking, but suffers from
degenerate configurations. We then presented a nonlinear
controller that avoids singularities, but can only guarantee
input-to-state stability of the formation.
Future work will include combining the two controllers
presented in this paper in a hybrid theoretic formulation
that allows the design of a feedback control law that
avoids singularities and guarantees asymptotic tracking.
We would also like to explore the design of alternative
control laws that do not use optical flow estimates in the
computation of the feedforward term. We also plan to
implement our formation control scheme on the Berkeley
test bed of unmanned ground and aerial vehicles.
Fig. 4. Formation configurations: V-Formation and Line-Formation, each with leader L and followers F1, F2.
VI. ACKNOWLEDGMENTS
We thank Dr. Noah Cowan for his insightful comments
on the preparation of the final manuscript. We also gratefully acknowledge the support of ONR grant N00014-00-1-0621.
VII. REFERENCES
[1] N. Cowan, J. Weingarten, and D. Koditschek. Visual servoing via navigation functions. IEEE Transactions on Robotics and Automation, 18(4):521–533, 2002.
[2] A. Das, R. Fierro, V. Kumar, J. Ostrowski, J. Spletzer, and C. Taylor. A framework for vision based formation control. IEEE Transactions on Robotics and Automation, 18(5):813–825, 2002.
[3] J. Desai, J. Ostrowski, and V. Kumar. Modeling and control of formations of nonholonomic robots. IEEE Transactions on Robotics and Automation, 17(6):905–908, 2001.
[4] M. Egerstedt and X. Hu. Formation constrained multi-agent control. IEEE Transactions on Robotics and Automation, 17(6):947–951, 2001.
[5] A. Fax and R. Murray. Graph Laplacians and stabilization of vehicle formations. In International Federation of Automatic Control World Congress, 2002.
[6] C. Geyer and K. Daniilidis. A unifying theory for central panoramic systems and practical implications. In European Conference on Computer Vision, pages 445–461, 2000.
[7] A. Pant, P. Seiler, T. Koo, and K. Hedrick. Mesh stability of unmanned aerial vehicle clusters. In American Control Conference, pages 62–68, 2001.
[8] O. Shakernia, R. Vidal, and S. Sastry. Multibody motion estimation and segmentation from multiple central panoramic views. In IEEE ICRA, 2003.
[9] D. Stipanovic, G. Inalhan, R. Teo, and C. Tomlin. Decentralized overlapping control of a formation of unmanned aerial vehicles. In IEEE Conference on Decision and Control, pages 2829–2835, 2002.
[10] D. Swaroop and J. Hedrick. String stability of interconnected systems. IEEE Transactions on Automatic Control, 41:349–357, 1996.
[11] P. Tabuada, G. Pappas, and P. Lima. Feasible formations of multi-agent systems. In American Control Conference, pages 56–61, 2001.
[12] H. Tanner, V. Kumar, and G. Pappas. The effect of feedback and feedforward on formation ISS. In IEEE ICRA, pages 3448–3453, 2002.
[13] R. Vidal, O. Shakernia, and S. Sastry. Omnidirectional vision-based formation control. In Fortieth Annual Allerton Conference on Communication, Control and Computing, pages 1625–1634, 2002.
[Fig. 5, three panels: (top) leader and followers trajectories with snapshots at t = 0, 5, 8, 20, 29, 32, 34, 45, 60, 70; (middle) follower-to-leader distance in pixels; (bottom) follower-to-leader angle in degrees; legend: Leader, Follower 1, Follower 2.]
Fig. 5. Simulation results for a Line-Formation. For $t \in [0, 10]$ the followers move from their initial V-Formation to the desired Line-Formation, while avoiding a collision due to Follower1 moving in between Follower2 and the leader. The leader abruptly rotates for $t \in [29, 36]$, but the followers are able to both avoid collision and later return to the desired line. For $t > 36$, they maintain their formation into a line, circle, line and a circle. Notice that it is not possible to maintain zero angular error during circular motion, because of the nonholonomic kinematic constraints of the robots.