MUR Blog - Driverless SLAM

The purpose of SLAM is to simultaneously generate a map of cones and localise the car within that track. The control team will then take this map and pose as an input to race along the track.

This post outlines the basic interfaces, steps and definitions needed to achieve EKF-SLAM.

The first SLAM technique we will use will be EKF-SLAM based. It was selected first because it is simpler than most other algorithms and is well known and widely used. If we find that better performance can be achieved, we may later expand the scope slightly to use more advanced techniques.

Extended Kalman Filters

Extended Kalman filters extend the regular Kalman filtering method, which assumes a linear model, to account for non-linear models by linearising about the current estimate. If the true model is not linear, error creeps into a regular Kalman filter and the estimate may diverge when the non-linear components are not negligible.
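
As a minimal sketch of these two steps (a generic illustration, not the team's actual implementation), the Python snippet below runs one predict/update cycle; the model functions `f` and `h` and their Jacobians `F_jac` and `H_jac` are placeholders for whichever motion and sensor models are chosen.

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F_jac, H_jac, Q, R):
    """One EKF cycle: a priori prediction followed by a posteriori measurement update.

    x, P         : current state estimate and covariance
    u, z         : control input and measurement
    f, h         : non-linear motion and measurement models (placeholders)
    F_jac, H_jac : Jacobians of f and h evaluated at the current estimate
    Q, R         : process and measurement noise covariances
    """
    # A priori step: propagate the state through the non-linear motion model,
    # but propagate the covariance through its linearisation (the Jacobian).
    x_pred = f(x, u)
    F = F_jac(x, u)
    P_pred = F @ P @ F.T + Q

    # A posteriori step: correct the prediction with the measurement.
    H = H_jac(x_pred)
    y = z - h(x_pred)                    # innovation
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```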

Good examples and explanations of the EKF and EKF-SLAM can be found in Probabilistic Robotics (Sebastian Thrun, Wolfram Burgard, Dieter Fox) and Python Robotics.

[Figure: EKF-SLAM path from Python Robotics. Black = dead reckoning, blue = true trajectory, red = estimated trajectory, green = landmark estimates.]

Definition of interfaces

In order to define the problem more exactly, we define a precise state vector that will be passed from the perception team to the control team. Based on discussions with the control team, the pose of the car and a list of x-y cone coordinates will be sufficient.

The inputs to the SLAM system will be from:

  • Internal car state sensors: IMU, GPS
  • The control inputs: Desired angular velocity, Desired acceleration
  • Lidar/Camera cone estimation algorithms

The outputs will be in the form:

$$\begin{bmatrix} x_c & y_c & \theta_c & x_1 & y_1 & \cdots & x_N & y_N \end{bmatrix}^T$$

where $(x_c, y_c, \theta_c)$ is the estimated pose of the car and $(x_i, y_i)$ is the estimated position of the $i$-th of the $N$ cones mapped so far.
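
As an illustration only, that output could be carried in a small container like the one below; the names (`SlamOutput`, `cones`, and so on) are hypothetical, and the real interface agreed with the control team (for example a ROS message) may look different.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SlamOutput:
    """Hypothetical container for the SLAM output passed to the control team."""
    x: float      # car position in the map frame, metres
    y: float
    theta: float  # car heading, radians
    cones: List[Tuple[float, float]] = field(default_factory=list)  # estimated (x, y) of each cone

# Example: a car at the origin with two mapped cones
output = SlamOutput(x=0.0, y=0.0, theta=0.0, cones=[(3.2, 1.1), (3.3, -1.0)])
```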

Model Update step

In order for an extended Kalman filter to predict the future state during the a priori (prediction) step, we use a dynamical model of the state. After this step, the a posteriori (update) step integrates the sensor measurements.

$$\dot{\mathbf{x}} = f(\mathbf{x}, \mathbf{u})$$

The control effort input will be in the form

$$\mathbf{u} = \begin{bmatrix} a \\ \omega \end{bmatrix}$$

where $a$ is the desired acceleration and $\omega$ is the desired angular velocity, therefore for the car state $\begin{bmatrix} x_c & y_c & \theta_c & v \end{bmatrix}^T$:

$$\begin{bmatrix} \dot{x}_c \\ \dot{y}_c \\ \dot{\theta}_c \\ \dot{v} \end{bmatrix} = \begin{bmatrix} v \cos\theta_c \\ v \sin\theta_c \\ \omega \\ a \end{bmatrix}$$

Each of the landmark cones, denoted by $\mathbf{m}_i = \begin{bmatrix} x_i & y_i \end{bmatrix}^T$, should not move at all.

Therefore:

$$\dot{\mathbf{m}}_i = \begin{bmatrix} \dot{x}_i \\ \dot{y}_i \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$

Defining this total state vector update step as:

$$\mathbf{x}_{k+1} = f(\mathbf{x}_k, \mathbf{u}_k) = \mathbf{x}_k + \Delta t \begin{bmatrix} v_k \cos\theta_{c,k} \\ v_k \sin\theta_{c,k} \\ \omega_k \\ a_k \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

where the zero rows correspond to the stationary landmark cones.
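
A minimal sketch of this update step is given below, assuming the state ordering $[x_c, y_c, \theta_c, v, x_1, y_1, \dots, x_N, y_N]$ and the control vector $\mathbf{u} = [a, \omega]$ defined above; the function name and timestep handling are illustrative rather than taken from the actual codebase.

```python
import numpy as np

def motion_model(state, u, dt):
    """Discretised update x_{k+1} = f(x_k, u_k) for the full SLAM state.

    Assumes the state is ordered [x_c, y_c, theta_c, v, x_1, y_1, ..., x_N, y_N]
    and the control input is u = [a, omega]; both orderings are assumptions
    made for this sketch.
    """
    a, omega = u
    xc, yc, theta, v = state[0], state[1], state[2], state[3]

    nxt = state.copy()
    # Integrate the car's kinematic model over one timestep.
    nxt[0] = xc + v * np.cos(theta) * dt
    nxt[1] = yc + v * np.sin(theta) * dt
    nxt[2] = theta + omega * dt
    nxt[3] = v + a * dt
    # Landmark cones are stationary, so entries 4 onwards are left unchanged.
    return nxt
```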

Data Association

Data association will be achieved by repeatedly selecting the smallest Mahalanobis distance between measurements and already-known landmarks. If the smallest distance exceeds a tuneable threshold, which will be tailored through testing and modelling, the measurement is taken to be of a new landmark and is added to the state.

Mahalanobis Distance

The Mahalanobis distance works by computing the distance in terms of standard deviations across the parameter space. This allows the data association step to use the covariance matrix to choose the most likely landmark to associate with a new measurement, rather than the spatially nearest one. These are not necessarily the same thing, as the cone detection algorithm may end up being accurate in measuring cone range but poor in measuring angular position. It is mathematically defined as follows.

$$D_M = \sqrt{(\mathbf{z} - \hat{\mathbf{z}})^T S^{-1} (\mathbf{z} - \hat{\mathbf{z}})}$$

where $\mathbf{z}$ is the new measurement, $\hat{\mathbf{z}}$ is the predicted measurement of an already known landmark and $S$ is the corresponding innovation covariance.
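
The sketch below shows how this distance and the gating rule described above could be combined in the data association step; the `gate` value and the list of predicted landmark observations `(z_pred, S)` are assumptions made for illustration.

```python
import numpy as np

def mahalanobis_distance(z, z_pred, S):
    """Distance between a measurement z and a predicted landmark observation z_pred,
    weighted by the innovation covariance S (i.e. measured in standard deviations)."""
    innovation = z - z_pred
    return float(np.sqrt(innovation.T @ np.linalg.inv(S) @ innovation))

def associate(z, predicted_observations, gate=3.0):
    """Return the index of the known landmark with the smallest Mahalanobis distance
    to measurement z, or None if every distance exceeds the gate, in which case the
    measurement is treated as a new landmark to be appended to the state.

    predicted_observations is an assumed list of (z_pred, S) pairs, one per known
    landmark; gate=3.0 is a placeholder for the tuneable threshold described above.
    """
    best_idx, best_dist = None, np.inf
    for i, (z_pred, S) in enumerate(predicted_observations):
        d = mahalanobis_distance(z, z_pred, S)
        if d < best_dist:
            best_idx, best_dist = i, d
    return best_idx if best_dist <= gate else None
```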

A good explanation is found here

Code under development

The current state of the code is open sourced and available here.

About the Author:


Jack McRobbie
Spatial & Perception Engineer, 2020