Shield AI Fundamentals: On Mapping
Written by Vibhav Ganesh, Senior Autonomy Engineer.
The Role of Mapping in Autonomy
As the name suggests, the key component of any autonomous robotic system is its autonomy: the loop of perception, cognition, and action that enables a robot to determine what it should do, when it should do it, and how. In the following paragraphs, I will cover mapping, an integral part of the perception stage of this loop.
In order to effectively operate within a complex and dynamic environment, a robot must be able to represent its surroundings. To do so, it seeks the answer to the question, “What does the world look like?” by creating a digital representation of the world called a map.
Mapping enables the robot to make informed decisions. An understanding of the world is crucial for the system to interact with its surroundings: a robot that does not know what the world looks like cannot figure out where to move or how to avoid obstacles, and thus cannot perform even its most basic operations.
How Mapping Works
In order to create a map, a robot requires a stream of sensor observations, such as LiDAR and IMU measurements. The approach we employ first fuses these sensor observations into a state estimate, which tells the robot where it is relative to some fixed reference frame. A subset of the sensor observations also provides visibility into the structure of the environment: each individual exteroceptive sensor observation can be thought of as a partial snapshot of the environment. A robot constructs a more complete understanding of its environment, or a map, by combining these snapshots using the state estimate. This is done by choosing a representation of the environment and updating the beliefs about the environment with each new observation-and-pose pair.
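To make the snapshot-combining step concrete, here is a toy Python sketch (not our actual pipeline) that projects local 2D range points into a fixed world frame using a hypothetical pose estimate of (x, y, heading). Two partial snapshots taken from different poses land in a common frame, where they can be merged into one map:

```python
import math

def transform_to_world(pose, local_points):
    """Project a local sensor snapshot into the fixed world frame
    using the robot's state estimate: (x, y, heading in radians)."""
    x, y, theta = pose
    world_points = []
    for px, py in local_points:
        # Rotate the point by the robot's heading, then translate
        # by the robot's estimated position.
        wx = x + px * math.cos(theta) - py * math.sin(theta)
        wy = y + px * math.sin(theta) + py * math.cos(theta)
        world_points.append((wx, wy))
    return world_points

# Two partial snapshots taken from different poses merge into one cloud.
snapshot_a = transform_to_world((0.0, 0.0, 0.0), [(1.0, 0.0)])
snapshot_b = transform_to_world((1.0, 0.0, math.pi / 2), [(1.0, 0.0)])
combined = snapshot_a + snapshot_b
```

The key point is that the state estimate is what makes the snapshots composable: without a shared reference frame, each observation is only meaningful locally.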
Types of Mapping
There are many different kinds of maps an autonomous system can create, such as Gaussian process regression maps, Normal Distributions Transform (NDT) maps, OctoMaps, and Occupancy Grids.
One type of map we use at Shield AI is the Occupancy Grid. Occupancy Grids discretize the world into squares (in 2D) or voxels (in 3D) to create a grid. A map is derived from this grid by using the state estimate to project sensor observations into the grid and update the probability that each cell or voxel is occupied. This is a simple, yet powerful, representation that allows a robot to effectively represent the world while handling the uncertainty of sensor measurements and dynamic obstacles.
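A minimal Python sketch of the idea follows, assuming a hypothetical log-odds sensor model (the constants L_HIT and L_MISS are illustrative, not production values). Storing log-odds rather than raw probabilities makes the Bayesian update additive: each agreeing observation simply nudges a cell's value up or down.

```python
import math

class OccupancyGrid2D:
    """Minimal 2D occupancy grid storing log-odds per cell.

    Log-odds make Bayesian updates additive: each observation adds a
    constant for a 'hit' or subtracts one for a 'miss'. Constants here
    are illustrative only.
    """
    L_HIT, L_MISS = 0.85, -0.4

    def __init__(self, width, height, resolution=0.1):
        self.resolution = resolution
        self.log_odds = [[0.0] * width for _ in range(height)]

    def _cell(self, wx, wy):
        # Convert world coordinates (meters) to grid indices.
        return int(wy / self.resolution), int(wx / self.resolution)

    def update(self, wx, wy, occupied):
        row, col = self._cell(wx, wy)
        self.log_odds[row][col] += self.L_HIT if occupied else self.L_MISS

    def probability(self, wx, wy):
        row, col = self._cell(wx, wy)
        # Convert log-odds back to a probability via the logistic function.
        return 1.0 / (1.0 + math.exp(-self.log_odds[row][col]))

grid = OccupancyGrid2D(100, 100)
for _ in range(3):                  # three consistent 'hit' observations
    grid.update(1.0, 2.0, occupied=True)
p = grid.probability(1.0, 2.0)      # confidence grows with agreeing evidence
```

An unobserved cell sits at log-odds zero, i.e. probability 0.5, which captures "unknown" rather than forcing a free/occupied guess.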
Challenges in Mapping
Because a robot’s view of the world is inherently uncertain, probability and statistics play an important role in mapping. Not accounting for uncertainty can make the robotic system brittle. Likewise, if uncertainty is improperly handled, a map can easily become corrupted and present an incorrect view of the world. Uncertain state estimates and noisy, or missing, sensor information can lead to poor exploration performance for the robot if accidentally incorporated in the map.
One issue that may arise from uncertain state estimation is position estimation drift. Drift occurs when individual state estimates are chained together: if each estimate carries some small error, the accumulated error grows as more estimates are added. One way to address uncertainty in state estimation, and thus position estimation drift, is to maintain a graphical history of the locations the robot has visited and use that graph structure to construct the map. By keeping this historical log, the robot can adjust and update its map over time as more information is obtained and as areas are re-explored. This technique mitigates the effects of uncertain state estimates; without it, the robot would accumulate drift over time and develop a corrupted view of the environment.
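The compounding of per-step error can be illustrated with a short Python simulation (a toy dead-reckoning model, not our estimator): each incremental displacement estimate carries a small Gaussian error, and the average absolute position error grows with the number of steps, like a random walk.

```python
import random

def average_drift(steps, trials=200, step_noise=0.05, seed=42):
    """Average absolute position error after dead-reckoning `steps`
    unit moves, where each move's estimate has small Gaussian error.
    Per-step errors compound, so expected drift grows with steps."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        error = 0.0
        for _ in range(steps):
            error += rng.gauss(0.0, step_noise)  # per-step estimate error
        total += abs(error)
    return total / trials

short_run = average_drift(10)     # few steps: little accumulated drift
long_run = average_drift(1000)    # many steps: drift grows markedly
```

A graph-based history counteracts exactly this effect: when the robot recognizes a previously visited location, it can correct the whole chain of estimates rather than trusting each noisy step forever.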
Uncertainty in sensor information often stems from the limitations of the sensors themselves. Although sensor technology has improved significantly in recent years, many sensors still produce a fair bit of noise, and every sensor has difficulty perceiving certain things in particular kinds of environments. Understanding each sensor and its limitations is key to developing a good map.
In addition to the aforementioned difficulties, another challenge for today's mapping technology is working within limited-compute systems. Computing the optimal combination of high-data-rate sensor information over a long period of time is expensive, so assumptions and relaxations are used to make the problem tractable for SWaP-constrained (size, weight, and power) systems, such as a quadrotor like Nova. At Shield AI, as we continue to build our mapping framework, we approach this problem from both an engineering and a fundamental research perspective to define the next generation of mapping work.