What is SLAM


SLAM is short for simultaneous localization and mapping; it is also known as CML, concurrent mapping and localization. The problem can be stated as follows: place a robot at an unknown location in an unknown environment, and ask whether the robot can incrementally build a consistent map of that environment as it moves, while simultaneously determining its own position within the map. A consistent map here is one that lets the robot reach every accessible part of the environment.

SLAM was first proposed by Smith, Self, and Cheeseman in 1988. Because of its theoretical importance and practical value, many researchers consider it the key to building truly autonomous mobile robots.

SLAM usually includes the following steps: feature extraction, data association, state estimation, state update, and feature update. SLAM can be applied to both 2D and 3D motion; here, we will discuss only motion in 2D.
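The five steps above can be sketched as one iteration of a processing loop. This is an illustrative skeleton, not a real SLAM library: the function names, the (angle, range) feature representation, and the crude nearest-neighbor matching are all simplifying assumptions.

```python
# Hypothetical sketch of one SLAM iteration with simple point
# features; all names and data layouts here are illustrative.

def extract_features(scan):
    # Feature extraction: treat each valid (angle, range) return
    # from the ranging unit as a candidate point feature.
    return [(a, r) for a, r in scan if r > 0]

def associate(features, landmarks, tol=0.5):
    # Data association: match each feature to the nearest known
    # landmark within a tolerance; unmatched features are new.
    matches, new = [], []
    for f in features:
        best, best_d = None, tol
        for i, lm in enumerate(landmarks):
            d = abs(f[0] - lm[0]) + abs(f[1] - lm[1])
            if d < best_d:
                best, best_d = i, d
        (matches if best is not None else new).append((best, f))
    return matches, new

def slam_step(state, landmarks, odometry, scan):
    # State estimation: predict the pose (x, y, theta) from odometry.
    x, y, th = state
    dx, dy, dth = odometry
    state = (x + dx, y + dy, th + dth)
    # State update: a real system would now correct this pose with an
    # EKF using the matched landmarks; that correction is omitted here.
    features = extract_features(scan)
    matches, new = associate(features, landmarks)
    # Feature update: add newly observed landmarks to the map.
    landmarks.extend(f for _, f in new)
    return state, landmarks
```

A real implementation differs mainly in the state update, where the matched landmarks feed an EKF correction instead of being ignored.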

Robot Chassis

When learning SLAM, the robot platform matters: it must be mobile and must carry at least one ranging unit. We focus on indoor wheeled robots and the SLAM algorithms they run, without considering more complex motion models such as legged robots.

The main factors to consider when choosing a robot platform are ease of use, localization performance, and price. Localization performance measures how well the robot can estimate its own position from its motion alone. As a rule of thumb, the odometry error should stay below 2% of the distance traveled, and the steering error below 5% of the turn angle. In general, the robot estimates its position and heading in a Cartesian coordinate system from its own motion.
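Estimating position from motion alone is plain dead reckoning, and it can be sketched in a few lines. This is a minimal sketch assuming a simple "drive straight, then turn" motion model; real odometry would integrate wheel-encoder readings continuously.

```python
import math

def dead_reckon(pose, d, dtheta):
    # Advance an (x, y, theta) pose by distance d along the current
    # heading, then rotate in place by dtheta. Pure odometry like this
    # accumulates error, which is why SLAM must correct it with
    # sensor observations.
    x, y, theta = pose
    x += d * math.cos(theta)
    y += d * math.sin(theta)
    return (x, y, theta + dtheta)

# Drive a 1 m square: four legs of 1 m, each followed by a 90° turn.
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    pose = dead_reckon(pose, 1.0, math.pi / 2)
# With perfect measurements the robot ends up back at the origin;
# with a 2% distance error per leg, the closure error grows with
# every lap, so drift is unavoidable without external correction.
```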

Building a robot platform from scratch is time-consuming and usually unnecessary; a mature off-the-shelf development platform serves just as well. Here, we use a very simple robot development platform that we developed ourselves.

At present, the common ranging units are laser, ultrasonic, and vision-based range finders, of which laser ranging is the most widely used. A laser ranging unit is accurate and efficient, and its output needs little post-processing. Its disadvantage is price, although relatively cheap laser ranging units have begun to appear. Another problem is that the laser beam passes straight through glass surfaces, so glass walls go undetected. Laser ranging also cannot be used underwater.

Another commonly used method is ultrasonic ranging, which, along with other acoustic ranging techniques, was widely used in the past. Compared with a laser ranging unit it is cheaper, but its measurement accuracy is lower: a laser beam diverges by only about 0.25°, so it can be treated as a straight line, whereas an ultrasonic beam spreads up to 30°, which makes the measurement much coarser. Underwater, however, sound penetrates far better than light, so acoustic ranging is the standard choice there. The most commonly used ultrasonic ranging unit is the Polaroid ultrasonic transducer.
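The effect of the beam angle on accuracy is easy to quantify: the lateral footprint of a cone-shaped beam grows with distance. A small sketch makes the laser-versus-sonar gap concrete (the 0.25° and 30° figures come from the text above; the 3 m test distance is an arbitrary example).

```python
import math

def beam_width(distance, beam_angle_deg):
    # Lateral footprint of a ranging beam at a given distance.
    # The wider the cone, the more ambiguous the obstacle's bearing:
    # the sensor only knows the target lies somewhere in this span.
    return 2 * distance * math.tan(math.radians(beam_angle_deg / 2))

# At 3 m, a 0.25° laser beam spans about 1.3 cm, while a 30°
# ultrasonic cone spans about 1.6 m.
laser = beam_width(3.0, 0.25)
sonar = beam_width(3.0, 30.0)
```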

A third commonly used method is visual ranging. Traditionally, measuring distance by vision required heavy computation, and the results were sensitive to changes in lighting; in a dark room, visual ranging is essentially unusable. In recent years, however, solutions to these problems have emerged. Visual ranging is usually done with binocular (stereo) or trinocular vision, and it lets the robot perceive its surroundings more the way a human does. Vision also yields far more information than laser or ultrasonic range finders, but more information means higher processing cost; with improving algorithms and computing power, this is gradually becoming manageable.

General Process of SLAM

SLAM consists of several processes whose final goal is to update the estimate of the robot's position. Because a position estimated from the robot's motion alone accumulates large errors, we cannot rely on the motion model by itself. After the position is predicted from the robot's motion equations, we correct it using the environment information obtained from the ranging unit. This correction is usually done by extracting environment features and re-observing their positions after the robot moves. At the heart of SLAM is the extended Kalman filter (EKF), which estimates the robot's position from all of this information. The selected features are generally referred to as landmarks; the EKF continuously estimates both the position of the robot and the positions of the landmarks in the surrounding environment.
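The predict/correct cycle at the heart of the EKF can be shown in one dimension. This is a heavily simplified sketch: a real EKF-SLAM state stacks the 2D robot pose and every landmark position into one vector with a full covariance matrix, whereas here we track only a scalar robot position x and observe the range to a single landmark at a known location.

```python
def predict(x, P, u, Q):
    # Motion update: move by odometry u; the position uncertainty P
    # grows by the motion noise Q, because dead reckoning drifts.
    return x + u, P + Q

def correct(x, P, z, landmark, R):
    # Measurement update: z is the measured range to a landmark at a
    # known 1-D position; R is the measurement noise.
    y = z - (landmark - x)   # innovation: measured minus expected range
    H = -1.0                 # Jacobian of the range w.r.t. x
    S = H * P * H + R        # innovation covariance
    K = P * H / S            # Kalman gain
    return x + K * y, (1 - K * H) * P

# The robot is truly at 1.0 but, after moving, estimates itself at 1.2.
# Observing the true range (4.0) to a landmark at 5.0 pulls the
# estimate back toward the truth and shrinks the uncertainty.
x, P = predict(1.0, 0.4, 0.2, 0.1)       # predicted x = 1.2, P = 0.5
x, P = correct(x, P, 4.0, 5.0, 0.1)
```

The design point is that prediction always inflates uncertainty and correction always reduces it; SLAM alternates the two so the error stays bounded instead of growing without limit.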


SLAM is still an active research topic, and there are many areas we have not addressed, for example the loop-closure problem: how the robot recognizes that it has returned to a place it visited before. The robot should be able to identify re-appearing landmarks and fuse the new observations with the information recorded when each landmark first appeared, and it should update the landmark information before relying on a known location again.

In addition, we can combine SLAM with an occupancy grid, both to render the map in a format that humans can read and to plan the robot's path.
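Mapping range readings into a grid can be sketched briefly. This is a minimal sketch under stated assumptions: a fixed-size world, a 0.1 m cell resolution, and hit-marking only; a real occupancy-grid mapper would also trace the free cells along each beam (for example with Bresenham's line algorithm) and accumulate occupancy probabilities rather than setting binary flags.

```python
import math

RES = 0.1    # metres per grid cell (assumed resolution)
SIZE = 100   # grid is SIZE x SIZE cells, centred on the origin

def to_grid(pose, angle, rng):
    # Convert one (angle, range) reading taken at `pose` into the
    # index of the grid cell where the beam hit an obstacle.
    x, y, theta = pose
    gx = int((x + rng * math.cos(theta + angle)) / RES) + SIZE // 2
    gy = int((y + rng * math.sin(theta + angle)) / RES) + SIZE // 2
    return gx, gy

# Mark three laser returns from a robot at the origin facing +x.
grid = [[0] * SIZE for _ in range(SIZE)]
for angle, rng in [(-0.1, 2.0), (0.0, 2.0), (0.1, 2.0)]:
    gx, gy = to_grid((0.0, 0.0, 0.0), angle, rng)
    grid[gy][gx] = 1   # cell the beam terminated in is occupied
```

Once the map is in this form, standard grid-based planners (e.g. A*) can search it for a path, which is what combining SLAM with the grid method buys us.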