Robot Localization: An Introduction
Abstract
Robot localization is the process of determining where a mobile robot is located with respect to its environment. Localization is one of the most fundamental competencies required by an autonomous robot as the knowledge of the robot's own location is an essential precursor to making decisions about future actions. In a typical robot localization scenario, a map of the environment is available and the robot is equipped with sensors that observe the environment as well as monitor its own motion. The localization problem then becomes one of estimating the robot position and orientation within the map using information gathered from these sensors. Robot localization techniques need to be able to deal with noisy observations and generate not only an estimate of the robot location but also a measure of the uncertainty of the location estimate. This article provides an introduction to estimation-theoretic solutions to the robot localization problem. It begins by discussing the mathematical models used to describe the robot motion and observations from the sensors. Two of the most common probabilistic techniques, the extended Kalman filter and the particle filter, that can be used to combine information from sensors to compute an estimate of the robot location are then discussed in detail and illustrated by simple examples. A brief summary of the large body of literature on robot localization is presented next. Appendices that present the essential mathematical background and alternative techniques are provided. The MATLAB code of the localization algorithms is also available.
1 Introduction
Robot localization provides an answer to the question: Where is the robot now? A reliable solution to this question is required for performing useful tasks, as the knowledge of current location is essential for deciding what to do next (1, 2). This article focuses on solutions to the robot localization problem when a map of the environment is available. The problem then becomes one of estimating the robot pose (position and orientation) relative to the coordinate frame in which the map is defined. Typically, the information available for computing the robot location is gathered using onboard sensors, which the robot uses to observe its environment and its own motion. Given the space limitations, alternative scenarios are excluded from the following discussion: for example, those where sensors such as surveillance cameras are placed in the environment to observe the robot, or where the robot is equipped with a receiver that provides an estimate of its location based on information from an external source (e.g., the satellite-based Global Positioning System (GPS)).
A mobile robot equipped with sensors to monitor its own motion (e.g., wheel encoders and inertial sensors) can compute an estimate of its location relative to where it started if a mathematical model of the motion is available. This is known as odometry or dead reckoning. The errors present in the sensor measurements and the motion model make robot location estimates obtained from dead reckoning more and more unreliable as the robot navigates in its environment. Errors in dead reckoning estimates can be corrected when the robot can observe its environment using sensors and is able to correlate the information gathered by these sensors with the information contained in a map. How this can be achieved within a probabilistic framework will be discussed in this article.
The formulation of the robot localization problem depends on the type of the map available as well as on the characteristics of the sensors used to observe its environment. In one possible formulation, the map contains locations of some prominent landmarks or features present in the environment and the robot is able to measure the range and/or bearing to these features relative to the robot. Alternatively, the map could be in the form of an occupancy grid that provides the occupied and free regions of an environment, and the sensors on board the robot measure the distance to the nearest occupied region in a given direction. As the information from sensors is usually corrupted by noise, it is necessary to estimate not only the robot location but also the measure of the uncertainty associated with the location estimate. Knowledge of the reliability of the location estimate plays an important role in the decision-making processes used in mobile robots as catastrophic consequences may follow if decisions are made assuming that the location estimates are perfect when they are uncertain. Bayesian filtering (3) is a powerful technique that can be applied to obtain an estimate of the robot location and the associated uncertainty. Both the extended Kalman filter (EKF) and the particle filter provide tractable approximations to Bayesian filtering, and they are the focus of this article.
2 Vehicle Model and Sensor Models
The mathematical models describing the behavior of the robot and the sensors mounted on it are the most important components in the formulation of the robot localization problem. The vehicle kinematic model describes the equations governing the robot motion in response to control actions. Figure 1 illustrates a differential drive robot operating on a two-dimensional plane where the forward velocity and the angular velocity of the robot body can be controlled using two motors that drive the two wheels. The differential equation that describes how the robot position and orientation evolve with time as a function of its forward and angular velocity is known as the robot motion model.
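As an illustration, a simple discrete-time version of such a motion model can be sketched as follows (a minimal Python sketch; the function name, the unicycle form, and the absence of noise terms are simplifying assumptions for illustration, not the article's exact equations):

```python
import math

def motion_model(x, y, phi, v, omega, dt):
    """Noise-free unicycle motion model: propagate the pose (x, y, phi)
    forward by time step dt given forward velocity v and angular velocity omega."""
    x_new = x + v * dt * math.cos(phi)
    y_new = y + v * dt * math.sin(phi)
    phi_new = phi + omega * dt
    return x_new, y_new, phi_new
```

For example, a robot commanded to move forward at 1 m/s for 2 s from the origin with zero heading ends up at (2, 0) with unchanged heading.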

The relationship between the observations from the sensors and the location of the robot in the map is known as the sensor model. The sensor model is dependent on the characteristics of the sensor mounted on the robot as well as on the way the map of the environment is represented. As discussed in Section 1, the map of the environment is typically defined either using coordinates of known landmarks or features, or in the form of an occupancy grid where the status of each grid cell defines whether the area represented by the cell is occupied or free space. Figure 2 illustrates a map with four landmarks, while Figure 3 shows an occupancy grid map where the occupied areas are shaded.


In the following sections, a vehicle model and the sensor models typically used for robot localization are described. For simplicity and clarity of notation, it is assumed that the robot has three degrees of freedom and is moving in a two-dimensional plane.
2.1 Vehicle Model
2.2 Sensor Model for Landmark-Based Maps
Laser range finders and ultrasonic sensors are the most common sensors used for obtaining range and bearing measurements to landmarks. For a sensor that is only able to observe the bearing, for example, a camera, the bearing component of the observation model alone becomes the sensor model. A simple robot localization problem with a landmark-based map is illustrated in Figure 2.
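A range-bearing observation model of this kind can be sketched as follows (an illustrative Python sketch; the function name and the pose and landmark conventions are assumptions consistent with the standard formulation):

```python
import math

def observe_landmark(pose, landmark):
    """Range-bearing observation of a landmark from robot pose (x, y, phi).
    Returns (range, bearing), with the bearing measured relative to the
    robot heading and wrapped to (-pi, pi]."""
    x, y, phi = pose
    lx, ly = landmark
    dx, dy = lx - x, ly - y
    rng = math.hypot(dx, dy)
    bearing = math.atan2(dy, dx) - phi
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap angle
    return rng, bearing
```

A bearing-only sensor such as a camera would provide just the second component of this observation.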
2.3 Sensor Model for Occupancy Grid Maps
Occupancy grid maps provide a discretized representation of an environment where each of the grid cells is classified into two categories: occupied or free. Consider the scenario where a sensor on the robot can determine the distance to the nearest occupied grid cell along a given direction. A laser range finder is such a sensor. These sensors consist of a laser beam that rotates at a relatively high speed on the order of tens of revolutions per second and measures the distance to the obstacle that reflects it. If there is no obstacle within the sensor range, a reflection is not received and the sensor typically reports a nominal maximum distance dmax. Although the range measurements obtained depend on the environment and the robot location, it is not feasible to find an analytical observation model of the form equation 3 in this scenario. However, given an estimate of the robot location and the grid map, the expected value of a range measurement can be numerically obtained using ray casting (4). This makes it possible to evaluate the likelihood of a given pose, which is sufficient in some localization approaches such as a particle filter (see Section 4 for details).
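The ray-casting computation of an expected range measurement can be sketched as follows (a simplified Python sketch assuming unit-size grid cells and a fixed step size; a real implementation would typically use an exact grid-traversal algorithm instead of fixed stepping):

```python
import math

def ray_cast(grid, x, y, angle, d_max, step=0.1):
    """Expected range along `angle` from (x, y): step along the ray until
    an occupied cell is hit or d_max is reached. `grid` is a 2D list of
    0 (free) / 1 (occupied) cells of unit size; returns d_max if the ray
    leaves the grid or no obstacle is encountered."""
    d = 0.0
    while d < d_max:
        px = x + d * math.cos(angle)
        py = y + d * math.sin(angle)
        i, j = math.floor(py), math.floor(px)  # row index from y, column from x
        if i < 0 or j < 0 or i >= len(grid) or j >= len(grid[0]):
            return d_max  # ray left the map without a reflection
        if grid[i][j] == 1:
            return d      # distance to the nearest occupied cell
        d += step
    return d_max
```

Given a pose hypothesis, comparing such expected ranges with the actual scan yields the likelihood needed by a particle filter.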
Figure 3 shows a simple example of a robot localization problem where a laser range finder observes an environment described using an occupancy grid. The robot moves a few steps in the environment. The scan provided by the sensor at the first pose is shown in red.
3 Extended Kalman Filter for Localization in Landmark-Based Maps
In its most fundamental form, the problem is to estimate the robot poses xi (i = 0, …, k + 1) that best agree with all robot actions and all sensor observations. This can be formulated as a nonlinear least-squares problem using the motion and observation models derived in Section 2. The solution to the resulting optimization problem can then be calculated using an iterative scheme such as Gauss–Newton to obtain the robot trajectory and, as a consequence, the current robot pose. Appendices A and B provide the details on how both linear and nonlinear least-squares problems can be solved, and how the localization problem can be formulated as a nonlinear least-squares problem. The dimensionality of the problem is 3(k + 1) for two-dimensional motion, and given that the sampling rates of modern sensors are on the order of tens of hertz, this strategy quickly becomes computationally intractable.
If the noises associated with the sensor measurements can be approximated using Gaussian distributions, and an initial estimate for the robot location at time 0, described using a Gaussian distribution x0 ∼ N(x̂0, P0) with known mean x̂0 and covariance P0, is available (in this article, x̂ is used to denote the estimated value of x), an approximate solution to this nonlinear least-squares problem can be obtained using an EKF. The EKF effectively summarizes all the measurements obtained in the past in the estimate of the current robot location and its covariance matrix. When a new observation from the sensor becomes available, the current robot location estimate and its covariance are updated to reflect the new information gathered. Essential steps of the EKF-based localization algorithm are described in the following:
Recursive application of the above equations every instant a new observation is gathered yields an updated estimate for the current robot location and its uncertainty. This recursive structure makes the EKF one of the most computationally efficient algorithms available for robot localization.
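The predict-update cycle described above can be sketched as follows (an illustrative Python/NumPy sketch for a single range-bearing observation of one known landmark; the function name, the noise matrices Q and R, and the Jacobian layout are assumptions consistent with the standard EKF formulation, not the article's exact equations):

```python
import numpy as np

def ekf_step(x, P, u, z, landmark, Q, R, dt=1.0):
    """One EKF predict/update cycle for 2D localization with a
    range-bearing observation of a known landmark.
    x: pose estimate [x, y, phi]; P: 3x3 covariance;
    u: control [v, omega]; z: measurement [range, bearing];
    Q: 2x2 control-noise covariance; R: 2x2 measurement-noise covariance."""
    v, w = u
    # --- Prediction: propagate the mean through the motion model ---
    x_pred = np.array([x[0] + v * dt * np.cos(x[2]),
                       x[1] + v * dt * np.sin(x[2]),
                       x[2] + w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(x[2])],   # Jacobian w.r.t. state
                  [0, 1,  v * dt * np.cos(x[2])],
                  [0, 0,  1]])
    G = np.array([[dt * np.cos(x[2]), 0],           # Jacobian w.r.t. control
                  [dt * np.sin(x[2]), 0],
                  [0, dt]])
    P_pred = F @ P @ F.T + G @ Q @ G.T
    # --- Update: linearized range-bearing observation model ---
    dx, dy = landmark[0] - x_pred[0], landmark[1] - x_pred[1]
    r = np.hypot(dx, dy)
    z_hat = np.array([r, np.arctan2(dy, dx) - x_pred[2]])
    H = np.array([[-dx / r, -dy / r, 0],
                  [dy / r**2, -dx / r**2, -1]])
    innov = z - z_hat
    innov[1] = np.arctan2(np.sin(innov[1]), np.cos(innov[1]))  # wrap angle
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ innov
    P_new = (np.eye(3) - K @ H) @ P_pred
    return x_new, P_new
```

Note how the update shrinks the covariance: an accurate observation leaves the predicted mean unchanged but reduces the uncertainty of the estimate.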
An important prerequisite for EKF-based localization is the ability to associate measurements obtained with specific landmarks present in the environment. Landmarks may be artificial, for example, laser reflectors, or natural geometric features present in the environment such as line segments, corners, or planes (5, 6). In many cases, the observation itself does not contain any information as to which particular landmark is being observed. Data association is the process in which a decision is made as to the correspondence between an observation from the sensor and a particular landmark. Data association is critical to the operation of an EKF-based localizer, as catastrophic failure may result if data association decisions are incorrect.
EKF relies on approximating the nonlinear motion and observation models using linear equations and that the sensor noises can be approximated using Gaussian distributions. These are reasonable assumptions under many practical conditions and therefore EKF is the obvious choice for solving the robot localization problem when the map of the environment consists of clearly identifiable landmarks.
Figure 4 shows the result of EKF localization for the simple problem given in Figure 2. The ground truth of the robot poses and the estimated robot poses are shown in red and blue, respectively. The 95% confidence ellipses obtained from the covariance matrices in the EKF estimation process are also shown in the figure.

4 Particle Filter for Localization in Grid Maps
There are two important situations where the EKF is not the method of choice for robot localization. The first is when the environment is represented by an occupancy grid. The sensor model for occupancy grid maps described in Section 2.3 is not an analytic model but is based on the numerical process of ray casting, and as such is unsuitable for use with an EKF. The other situation is when the initial robot location is completely unknown, usually known as the global localization problem. In this case, the location of the robot needs to be described using an arbitrary probability distribution; thus, the Gaussian assumption that is the basis of the EKF formulation is violated. In general, manipulating arbitrary probability distributions is computationally intractable. One possible strategy is to discretize the space of possible robot locations and thus deal with manipulating a discrete probability distribution. This method is known as Markov localization. The computational burden associated with Markov localization is proportional to the size of the environment and the resolution of the discretization, making this strategy unsuitable in many situations. Markov localization is described in Appendix D.
In particle filter localization (also known as Monte Carlo localization) (7), rather than discretizing the space of robot locations, a weighted set of robot location estimates, termed particles, is used to describe the probability distribution of the robot location. As the computations are focused on the particles, and more particles are placed at more probable robot locations, particle filters provide a more efficient alternative to Markov localization. The number of particles used determines the accuracy of the representation. However, increasing the number of particles to obtain a higher accuracy leads to a more costly estimation process.
In the particle filter, each particle in effect provides a guess as to the location of the robot. Thus, each particle is represented by three variables (x, y (position), φ (orientation)) for a robot operating in a two-dimensional plane. Each particle i has a weight wi that indicates the contribution of that particular particle to the probability distribution. The sum of the weights of all particles is set to 1, that is, w1 + w2 + ⋯ + wn = 1, where n is the total number of particles used. A collection of such guesses describes the best knowledge available, usually termed the belief, as to the true location of the robot. In the case of global localization, the initial robot location is completely unknown; therefore, all locations of the environment are equally likely to contain the robot. Thus, a set of equally weighted particles uniformly distributed in the environment is used to represent the belief of the robot location. During the localization process, this belief is updated as more and more information is acquired from the sensors.
- Prediction: When the robot is commanded to move, the new belief is obtained by moving each particle according to the motion model equation 2 with randomly generated process noise.
- Update: When a new sensor observation is received, the belief is updated using an observation model. In this step, the weights of the particles are changed to reflect the likelihood that the true robot location coincides with the corresponding particle. In the case of the jth observation from a laser range finder, ray casting from each particle is used to obtain an expected measurement d̂j. If the actual measurement is given by dj and the sensor noise is assumed to be zero mean with a variance σ², the likelihood can be computed using a Gaussian distribution based on
p(dj) ∝ exp(−(dj − d̂j)²/(2σ²)) (16)
As multiple independent range observations are acquired at a given instant, the likelihood of obtaining a sequence of observations is computed by multiplying the individual likelihoods together. Once the likelihoods of all the particles are computed, these are normalized to obtain the weight of each of the particles.
- Resampling:This is performed to avoid the situation where a small number of particles with large weights dominate the representation of the belief. One common strategy used for resampling (10) is as follows:
- Compute an estimate of the effective number of particles as
neff = 1/((w1)² + (w2)² + ⋯ + (wn)²) (17)
- If neff is less than a threshold, then draw n particles from the current particle set with probabilities proportional to their weights. Replace the current particle set with this new one. Set the weight of each particle to 1/n.
- The resulting set of particles represents the updated belief of the robot location.
This process is repeated as new control actions are taken and new observations become available. The mean or the mode of the corresponding probability distribution can be used if a numerical value for the best estimate of the robot location is desired.
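The prediction, update, and resampling steps above can be sketched together as follows (a minimal Python sketch; the noise magnitudes, the n/2 resampling threshold, and the `expected_meas` callback, which in a grid map would wrap a ray-casting routine, are illustrative assumptions):

```python
import math, random

def pf_update(particles, weights, control, measurement, expected_meas, sigma,
              rng=random):
    """One particle-filter cycle: predict with noisy motion, weight each
    particle by a Gaussian range likelihood, and resample when the effective
    particle count drops too low.
    particles: list of [x, y, phi]; expected_meas(pose) -> predicted range."""
    v, w, dt = control
    # Prediction: move each particle with randomly sampled process noise
    for p in particles:
        vn = v + rng.gauss(0, 0.05)
        wn = w + rng.gauss(0, 0.02)
        p[0] += vn * dt * math.cos(p[2])
        p[1] += vn * dt * math.sin(p[2])
        p[2] += wn * dt
    # Update: scale each weight by the measurement likelihood, then normalize
    for i, p in enumerate(particles):
        d_hat = expected_meas(p)
        weights[i] *= math.exp(-(measurement - d_hat) ** 2 / (2 * sigma ** 2))
    total = sum(weights) or 1.0
    weights[:] = [w_i / total for w_i in weights]
    # Resampling: draw a fresh equally weighted set if n_eff is too small
    n = len(particles)
    n_eff = 1.0 / sum(w_i ** 2 for w_i in weights)
    if n_eff < n / 2:
        idx = rng.choices(range(n), weights=weights, k=n)
        particles[:] = [list(particles[i]) for i in idx]
        weights[:] = [1.0 / n] * n
    return particles, weights
```

After a few such cycles, the particle set concentrates around poses whose expected measurements agree with the actual sensor readings.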
Figure 5 shows the result of particle filter localization for the simple problem given in Figure 3. The particles, the best estimate of the robot location, and the ground truth robot location at the last step are shown in green, blue, and red, respectively.

5 Alternative Localization Techniques
When a map of the environment is not available, the robot localization problem becomes significantly more challenging. In this case, the problem becomes one of estimating both the robot location and landmark locations simultaneously. In the past 15 years, robust and efficient methods for dealing with robot localization in unknown environments, also known as the simultaneous localization and mapping (SLAM) problem (3), have emerged. SLAM in a feature-based environment has been well studied and it has been shown that both EKF and least-squares optimization could be used to reliably solve SLAM in most cases (11, 12). The availability of efficient solvers for very large-scale nonlinear least-squares optimization problems (13) has resulted in real-time solutions to the robot location estimation in unknown environments. Due to their low cost and the availability of algorithms that extract rich information, cameras have become an important sensor in many robot navigation applications. A comprehensive survey on vision-based navigation is reported in Reference (14), while a recent strategy for SLAM using information from a camera is reported in Reference (15). Combining information from monocular cameras with inertial sensors makes it possible to obtain reliable localization in real time with a high level of accuracy (16).
An alternative method for localization in unknown environments is to simply use the information from a sensor such as a laser range finder to obtain the translation and rotation relating two robot poses from which two laser scans are taken. This is done through scan matching, which aligns the two scans. A collection of such relative pose estimates can be used to formulate a nonlinear least-squares problem that can be solved to estimate all the robot poses (17). A modern solver such as g2o (13) could be used to efficiently obtain the location estimates. Note that a representation of the environment (a map) is not estimated. This strategy is called pose-graph SLAM (18). For a feature-based or pose-graph SLAM technique to be effective, a strategy to recognize the fact that the robot has returned to areas it has visited before, known as loop closure, is required. The reliable detection of loop closure is one of the remaining challenges for both feature-based SLAM and pose-graph SLAM (19).
Many other interesting robot localization problems and alternative solutions have been reported in the literature. Typical examples are localization using signal strengths from wireless infrastructure (20) and RFIDs (21), fusion of information from GPS and inertial navigation units (22), and an optimization technique for localization in grid maps (23) based on chamfer distance (24) that does not rely on many tuning parameters as is usually the case with particle filters.
The problem of estimating robot location given a map is now considered a solved problem, although highly dynamic environments populated with people pose significant challenges in practical deployments. The more complex problem of continuously estimating robot location within an unknown environment over a long period is still the subject of much research.
Appendix A: Least Squares Problem and Gauss–Newton Iteration
1 Linear Least Squares Problem and Solution
2 Weighted Linear Least Squares Problem and Solution
3 Nonlinear Least Squares and Gauss–Newton Iteration
In general, a closed-form solution to a nonlinear least squares problem cannot be obtained. Many techniques for solving nonlinear least squares problems are therefore based on iteration.
Iterating until convergence leads to the optimum solution, provided that the initial guess X0 is sufficiently close to the solution. This is known as the Gauss–Newton iteration.
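The Gauss–Newton iteration can be illustrated with a toy nonlinear least squares problem closely related to localization: estimating a 2D position from range measurements to known beacons (an illustrative Python/NumPy sketch; the problem setup and function name are assumptions, not the article's exact formulation):

```python
import numpy as np

def gauss_newton(beacons, ranges, x0, iters=10):
    """Gauss-Newton iteration for a toy nonlinear least-squares problem:
    estimate a 2D position from range measurements d_i to known beacons b_i.
    Residual r_i(x) = ||x - b_i|| - d_i; Jacobian row J_i = (x - b_i)^T / ||x - b_i||."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diffs = x - beacons                      # shape (m, 2)
        dists = np.linalg.norm(diffs, axis=1)    # predicted ranges
        r = dists - ranges                       # residual vector
        J = diffs / dists[:, None]               # Jacobian of the residuals
        dx = np.linalg.solve(J.T @ J, -J.T @ r)  # normal-equations step
        x = x + dx
    return x
```

With noise-free, consistent measurements and a reasonable initial guess, the iteration converges rapidly to the true position.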
4 Weighted Nonlinear Least Squares Problem
Appendix B: Least Squares Method for Landmark Based Localization
The least squares method provides a way to optimally estimate the robot trajectory using all the available robot motion and observation information up to time k + 1. In the least squares method, we estimate all the robot poses together, instead of only the last pose as in the case of both the EKF and the particle filter based robot localization.
Now the localization problem can be formulated as a nonlinear least squares problem as in equation 22 and could be solved using Gauss–Newton iteration by equation 23.
Appendix C: Intuitive Explanations of the Kalman Filter
1 Kalman Filter Equations
In this case, the state estimation can be obtained using the following Kalman filter equations, assuming the initial state x0 follows a known Gaussian distribution N(x̂0, P0).
In the following, we will provide some intuitive explanations of the Kalman filter equations for the one-dimensional case.
2 One-Dimensional Gaussian Distribution and Its Information
Intuitively speaking, the larger the uncertainty, the smaller the information.
3 Important Properties of 1D Gaussian Distributions
- For any constant a,
(39)
- For any constant u,
(40)
- For two independent random variables x and y,
(41)
4 One-Dimensional Kalman Filter Prediction
This section shows that the Kalman filter prediction equation can be obtained easily from the properties of 1D Gaussian distributions listed in Section 3.
5 One-Dimensional Kalman Filter Update
The prior information about xk+1 is given by equation 45 (after the prediction but before the update). So we have two pieces of information about xk+1: one from observation equation 52 and one from prior equation 45.
Note that the sum of the two weights is 1.
Figure 6 illustrates the update step of Kalman filter.

These are equations 33 and 34 (when all variables are scalars): the difference between the observation and the predicted observation is the innovation, its variance is the innovation variance S, and the weight applied to it is the Kalman gain K.
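The scalar update described above can be checked numerically (a small Python sketch; the function name and argument names are illustrative):

```python
def kalman_1d_update(x_prior, var_prior, z, var_obs):
    """Fuse a prior estimate and an observation of a scalar state.
    The result is an information-weighted average of the two, which is
    equivalent to the standard update x + K*(z - x) with Kalman gain K."""
    w_obs = var_prior / (var_prior + var_obs)    # weight on the observation
    w_prior = var_obs / (var_prior + var_obs)    # weight on the prior
    x_new = w_obs * z + w_prior * x_prior        # the two weights sum to 1
    var_new = var_prior * var_obs / (var_prior + var_obs)
    return x_new, var_new
```

For equal variances the fused estimate is simply the midpoint of the prior and the observation, with half the variance of either.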
Appendix D: Markov Localization
In Markov localization, rather than relying on a set of particles to represent a probability distribution, the state space is discretized into a grid and the probability that the robot is present in a particular grid cell is used to describe the estimate of the robot location. In a two-dimensional scenario, the discretization is over a three-dimensional space incorporating the robot position and orientation.
- Prediction: In this step, the new control input uk and the previous belief bel(k) are used to compute the predicted belief. The predicted probability that the robot is at any possible location j can be computed using
(61) This equation is obtained using the law of total probability. Here P(xk+1 = j|xk = i, uk) is the conditional probability that can be obtained from the motion model.
- Update:In this step, using Bayes' theorem, information in the new observation zk+1 is fused with the prediction to obtain the new belief at time k + 1 as follows:
(62)where the conditional probability P(zk+1|xk+1 = i) can be obtained from the sensor model.
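The prediction and update steps of equations 61 and 62 can be sketched on a one-dimensional grid as follows (a minimal Python sketch; the `motion_kernel` dictionary representation of P(xk+1 | xk, uk) is an illustrative assumption):

```python
def markov_localize_step(belief, motion_kernel, likelihood):
    """One Markov localization cycle on a 1D grid.
    Prediction: law of total probability with a discrete motion model,
    where motion_kernel[d] = P(the robot moved d cells).
    Update: Bayes' rule with the per-cell observation likelihood,
    followed by normalization."""
    n = len(belief)
    # Prediction: bel_pred(j) = sum_i P(x = j | x = i, u) * bel(i)
    bel_pred = [0.0] * n
    for i, b in enumerate(belief):
        for d, p in motion_kernel.items():
            j = i + d
            if 0 <= j < n:
                bel_pred[j] += p * b
    # Update: bel(i) proportional to P(z | x = i) * bel_pred(i)
    bel = [likelihood[i] * bel_pred[i] for i in range(n)]
    total = sum(bel)
    return [b / total for b in bel]
```

Starting from a uniform belief, a single informative observation after a deterministic move already concentrates the belief on the consistent cell.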