Collective machine cognition: Autonomous dynamic mapping and planning using a hybrid team of aerial and ground-based robots
Monitoring, surveillance and manipulation of the real world is a complex problem, further aggravated when the relevant task information is distributed in space and time. Here we present a novel solution to this ambient intelligence problem that combines a number of components: a cognitive system that uses inference techniques to develop, in real time, a world model that controls the collaborative properties of a team of autonomous robots. Each robot is equipped with a set of sensors. Among their capabilities, the robots have an autonomous reactive system that assures their integrity in compromising situations, such as collision avoidance and mine detection, while at the same time they are used by a global cognitive system to sample and interact with their environment. The cognitive system, which defines the distal behaviors of the robots, uses the information from their sensors to generate an online, continuously evolving world model and an interactive 3D representation of the world. The world model, which contains relevant information about the location of robots, obstacles and goals, is maintained by means of the so-called Joint Probabilistic Data Association (JPDA) method. This Bayesian inference method yields a world model that is robust to inconsistent data and noise, and that is ultimately used for the online generation of a collaborative strategy to plan and update the path that each robot has to autonomously follow in order to achieve the overarching goal of the cognitive system. Here we detail the application of this approach to the MAV08 challenge, where not only mapping and planning are involved but also mine detection and the coordination of the robot team with humans acting in the task space.

Grand scheme of the integrated autonomous dynamic mapping and planning approach.
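To make the role of JPDA concrete, the following is a minimal single-target, one-dimensional sketch of the idea: each candidate measurement (which may be clutter) receives an association probability from its Gaussian likelihood, a "missed detection" hypothesis absorbs outliers, and the state is updated with the probability-weighted innovation. The parameter names (`p_detect`, `clutter`) and the fixed gain are illustrative assumptions for this sketch, not the authors' implementation.

```python
import math

def jpda_update(pred, sigma, measurements, p_detect=0.9, clutter=0.1):
    """Single-target JPDA-style update in 1D (illustrative sketch).

    pred:         predicted target position (prior mean)
    sigma:        measurement noise standard deviation
    measurements: candidate observations, some of which may be clutter
    Returns the updated position estimate.
    """
    # Gaussian likelihood of each measurement given the prediction
    def likelihood(z):
        return math.exp(-0.5 * ((z - pred) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    # One association hypothesis per measurement, plus the
    # "no detection / all clutter" hypothesis.
    weights = [p_detect * likelihood(z) for z in measurements]
    w_miss = (1.0 - p_detect) * clutter
    total = sum(weights) + w_miss

    betas = [w / total for w in weights]

    # Expected innovation over all association hypotheses: far-away
    # clutter gets a tiny beta and barely moves the estimate.
    innovation = sum(b * (z - pred) for b, z in zip(betas, measurements))
    gain = 0.5  # simplified fixed Kalman gain for this sketch
    return pred + gain * innovation
```

The robustness claimed in the text shows up here as soft association: an outlier measurement never hijacks the estimate, because its weight is shared against the plausible measurements and the miss hypothesis.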
In this view, robots are used as autonomous dynamic sensing platforms that contribute information to a central mapping and planning stage. The planning stage defines goal positions that the robots attempt to reach using their local proximal sensory-motor capabilities, e.g. collision avoidance, mine detection, etc. The aerial vehicle is guided by a human pilot, and the information it gathers is added to the world model. The state of the task environment is transformed into a 3D representation of the task area, to which objects and terrain features are added in real time. The human operator inspects the 3D model and makes decisions on future actions while making his or her own annotations in the language of the virtual world.
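One cycle of the sense → fuse → plan → dispatch loop described above can be sketched as follows. All class and function names here (`WorldModel`, `plan_goal`) are hypothetical, and grid-cell fusion stands in for the full JPDA-based association described in the text.

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    """Central world model fed by observations from all robots (sketch)."""
    obstacles: set = field(default_factory=set)

    def fuse(self, observations):
        # In the full system this is where data association would
        # reject clutter; here we simply accumulate obstacle cells.
        self.obstacles.update(observations)

def plan_goal(world, robot_pos, candidates):
    """Pick the nearest candidate goal cell that is not an obstacle."""
    free = [c for c in candidates if c not in world.obstacles]
    if not free:
        return robot_pos  # hold position when no safe goal exists
    return min(free, key=lambda c: abs(c[0] - robot_pos[0]) + abs(c[1] - robot_pos[1]))

# One planning cycle: fuse observations from aerial and ground
# robots, then dispatch a goal position to a ground robot.
world = WorldModel()
world.fuse({(2, 2), (3, 3)})
goal = plan_goal(world, (0, 0), [(2, 2), (1, 3), (5, 0)])
```

The division of labor mirrors the caption: the central stage only chooses *distal* goals, while reaching them safely (collision and mine avoidance) is left to each robot's local reactive layer.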