RESEARCHERS
ROBERTI Flavio
Book chapters
Title:
Nonlinear Stable Formation Control using Omnidirectional Images
Author(s):
GAVA, C.; VASSALLO, R.; FLAVIO ROBERTI; CARELLI, R.
Book:
Computer Vision
Publisher:
I-Tech Education and Publishing
References:
Year: 2008; pp. 71-98
Abstract:
Many applications are better performed by a multi-robot team than by a single agent. Multi-robot systems may execute tasks faster and more efficiently, and may also be more robust to failure than a single robot. Some tasks cannot be accomplished by a single robot at all and are only achievable by a group (Parker, 2003; Cao et al., 1997). Another known advantage of multi-robot systems is that, instead of using one expensive robot with high processing capacity and many sensors, one can sometimes use a team of simpler, inexpensive robots to solve the same task. Examples of tasks that are well performed by cooperative robots include search and rescue missions, load pushing, perimeter surveillance or cleaning, surrounding tasks, mapping and exploration. In these cases, robots may share information in order to complement their data, preventing duplicate searching of an already visited area or alerting the others to concentrate their efforts on a specific place. The group may also move into a desired position or arrangement to perform the task, or join forces to pull or push loads. Although multi-robot systems provide additional capabilities and functionality, they also bring new challenges. One of these challenges is formation control. To successfully perform a task, it is often necessary to bring the robots to specific positions and orientations. Within the field of robot formation control, control is typically done either in a centralized or a decentralized way. In a centralized approach a leader, which can be a robot or an external computer, monitors and controls the other robots, usually called followers. It coordinates the tasks, poses and actions of its teammates. Most of the time, the leader concentrates all relevant information and decides for the whole group.
The centralized approach is a good strategy for small teams of robots, especially when the team is implemented with simple robots, only one computer and few sensors to control the entire group. In (Carelli et al., 2003) a centralized control is applied to coordinate the movement of a number of non-holonomic mobile robots so that they reach a pre-established desired formation that can be fixed or dynamic. There are also so-called leader-follower formation controls, as in (Oliver & Labrosse, 2007; Consolini et al., 2007), in which the followers must track and follow the leader robot. The approach in (Oliver & Labrosse, 2007) is based on visual information and uses a set of images of the back of the leader robot that is tracked by the follower robot. In (Consolini et al., 2007), a leader-follower formation control is introduced in which the follower's position is not rigidly fixed but varies within suitable cones centered in the leader reference frame. On the other hand, when considering a team with a large number of robots under centralized control, the complexity rises significantly, demanding greater computational capacity as well as a larger communication system. In this case, a decentralized approach is preferable. Usually in decentralized control there is no supervisor, and each robot makes its decisions based on its own duties and its relative position to neighbouring teammates. Some researchers propose decentralized techniques for controlling robot formation (Desai et al., 2001; Do, 2007) or for cooperation on tasks such as exploration and mapping (Franchi et al., 2007; Correl & Martinoli, 2007; Rekleitis et al., 2005). There are also scalable approaches that control a large robotic group while maintaining the stability of the whole team's control law (Feddema et al., 2002). Moreover, some models are based on biologically-inspired cooperation and behaviour-based schemes using the subsumption approach (Kube & Zhang, 1993; Balch & Arkin, 1998; Fierro et al., 2005).
In these behaviour-based cases stability is often attained because they rely on stable controllers at the lower level, while coordination is done at a higher level. The work presented in this chapter addresses the issue of multi-robot formation control using a centralized approach. Specifically, the principal concern is how to achieve and maintain a desired formation of a simple and inexpensive mobile robot team based only on visual information. A leader robot, equipped with the necessary computational power and a suitable sensor, is responsible for formation control, while the other teammates have very limited processing capacity, with a simple microcontroller and modest sensors such as wheel encoders for velocity feedback. The team is therefore composed of one leader and several simple, inexpensive followers. This hierarchy naturally calls for a centralized control architecture. The leader runs a nonlinear formation controller designed and proven stable through Lyapunov theory. A nonlinear rather than a linear controller was chosen because it provides a way of dealing with the intrinsic nonlinearities of the physical system and handles all configurations of the teammates, thus resulting in a more reliable option. It combines a pose controller with a compensation controller to achieve team formation and to take the leader's motion into account, respectively. To control team formation it is necessary to estimate the poses of the robots that form the group. Computer vision has been used in many cooperative tasks because it allows localizing teammates, detecting obstacles and extracting rich information from the environment. In addition, vision systems with a wide field of view are very attractive for robot cooperation.
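To give a flavour of the pose-plus-compensation structure described above, the following is a minimal sketch of a classic Lyapunov-stable kinematic controller for a unicycle follower, with a feed-forward term for the leader's motion. It is an illustrative stand-in, not the chapter's actual control law: the gains `k_rho` and `k_alpha`, the function name and the exact compensation form are assumptions.

```python
import math

def formation_control_step(e_x, e_y, e_theta, v_leader, w_leader,
                           k_rho=0.8, k_alpha=1.5):
    """One step of a pose + compensation controller (hedged sketch).

    (e_x, e_y): follower position error expressed in the leader frame.
    e_theta:    follower heading error.
    Gains k_rho, k_alpha are illustrative, not taken from the chapter.
    Returns the follower's commanded linear and angular velocities.
    """
    rho = math.hypot(e_x, e_y)                 # distance to the desired slot
    alpha = math.atan2(e_y, e_x) - e_theta     # bearing toward the slot
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to (-pi, pi]

    # Pose-controller part: drives rho and alpha to zero; this structure
    # admits a standard Lyapunov argument for the kinematic unicycle.
    v = k_rho * rho * math.cos(alpha)
    w = k_alpha * alpha

    # Compensation part: feed the leader's own motion forward so the
    # formation is maintained while the leader keeps moving.
    v += v_leader * math.cos(e_theta)
    w += w_leader
    return v, w
```

With zero formation error and a stationary leader the commanded velocities are zero, as one would expect of an equilibrium of the closed loop.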
One way of increasing the field of view is to use omnidirectional images (360° horizontal view) (Nayar, 1997) obtained with catadioptric systems, which are formed by coupling a convex mirror (parabolic, hyperbolic or elliptic) with lenses and a camera (Baker & Nayar, 1999). Such systems can greatly improve the perception of the environment, of other agents and of objects, making task execution and cooperation easier. Interesting work on cooperative robotics using omnidirectional images can be found in (Das et al., 2002; Vidal et al., 2004) and (Zhu et al., 2000). In (Das et al., 2002), every robot has its own catadioptric system, allowing a decentralized strategy and eliminating the need for communication between the robots. The authors propose a framework in which a robot can switch between controllers to follow one or two leaders, depending on the environment. However, all the processing is done on an external computer, and the use of many omnidirectional systems (one per robot) makes the team expensive. In (Vidal et al., 2004), a scenario is developed in which each follower uses optical flow to estimate the leaders' relative positions, allowing the group to visually maintain a desired formation. The computational cost of the optical flow calculations is high, however, and results are shown only through simulations. The work in (Zhu et al., 2000) proposes a cooperative sensing strategy in which panoramic sensors distributed on teammate robots synthesize virtual stereo sensors for human detection and tracking. Its main focus is the stereo composition, and it does not address team formation. In this work, we propose a formation controller based on omnidirectional vision and nonlinear techniques that runs onboard the leader robot. To drive all followers into a specific formation, the controller considers the desired formation parameters, the leader's linear and angular velocities and the followers' current poses.
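One convenient property of central catadioptric systems such as the one described above is that the azimuth of a scene point is preserved: the angle of an image point around the mirror axis equals the point's horizontal bearing in the mirror frame. A minimal sketch of extracting that bearing follows; the function name and the assumption that the axis projection (cx, cy) is known from calibration are ours, not the chapter's.

```python
import math

def pixel_to_bearing(u, v, cx, cy):
    """Horizontal bearing of an image point in an omnidirectional image.

    Central catadioptric systems preserve azimuth, so the angle of the
    pixel (u, v) around the image centre (cx, cy) -- the projection of
    the mirror axis, assumed known from calibration -- equals the
    target's bearing in the mirror/robot frame. Range estimation would
    additionally require the mirror's radial profile and is omitted.
    """
    return math.atan2(v - cy, u - cx)
```

This is why a single omnidirectional image on the leader suffices to measure the direction toward every follower at once; only the radial (range) component needs the mirror geometry.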
The desired parameters and leader velocities are assumed to be known from a higher-level controller that drives the leader robot along an appropriate trajectory. The followers' poses, used as feedback by the controller, are estimated with an omnidirectional vision system formed by a hyperbolic mirror combined with a colour camera and mounted on the leader, which allows it to see all followers in a single image. It is worth mentioning that although omnidirectional vision is used to estimate the followers' positions and orientations, the proposed controller is independent of the sensor used to implement the feedback. Followers are identified by rectangles of different colours placed on top of their platforms. Through a set of image processing techniques such as motion segmentation and colour tracking, followed by Kalman filtering combined with Least Squares and the RANSAC algorithm for optimization, the followers' poses are reliably estimated. These poses are then used by the nonlinear controller to define the followers' linear and angular velocities so as to achieve and maintain the desired formation. Note that we focus on team formation during robot motion; obstacle avoidance and task coordination are not addressed at this stage. Simulations and real experiments were carried out with different team formations. Current results show that the system performs well even with noisy, low-resolution images. The main contribution of this work is that stable formation control is achieved based solely on visual information processed entirely onboard the leader. Moreover, there is no need for an absolute reference frame or a limited working area, since the vision system is carried by the leader and measurements are taken relative to it. Related works usually rely on an expensive robot team, use a fixed camera to observe the environment, or perform all computations on an external computer.
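The Kalman filtering step mentioned in the pose-estimation pipeline can be illustrated with a minimal constant-velocity filter that smooths a follower's measured (x, y) position across frames. This is a generic sketch, not the chapter's implementation: the constant-velocity motion model and the values of `dt`, `q` and `r` are assumptions for illustration.

```python
import numpy as np

def kalman_track(measurements, dt=0.1, q=1e-2, r=1.0):
    """Smooth noisy (x, y) position measurements of one follower.

    Constant-velocity Kalman filter sketch: state = [x, y, vx, vy].
    dt, q (process noise) and r (measurement noise) are illustrative
    values, not taken from the chapter. Returns the filtered positions.
    """
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                      # positions advance by vel*dt
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                     # only position is observed
    Q = q * np.eye(4)                           # process noise covariance
    R = r * np.eye(2)                           # measurement noise covariance

    x = np.zeros(4)                             # initial state
    P = np.eye(4)                               # initial state covariance
    estimates = []
    for z in measurements:
        # Predict: propagate state and covariance through the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: blend the prediction with the new measurement.
        S = H @ P @ H.T + R                     # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ (np.asarray(z, float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        estimates.append(x[:2].copy())
    return estimates
```

In the chapter's pipeline a filter of this kind would sit after colour segmentation, with Least Squares and RANSAC used to make the geometric fit robust to segmentation outliers before the filtered poses are handed to the controller.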