
4 Reasons to use 6D Map Matching Technology for Autonomous Driving


Building a self-driving car isn’t easy, but the right map can complement any autonomous vehicle and make its journeys safer. Let me share my top four reasons why you should use 6D map matching in your self-driving car.

Map matching solves the localization problem.

To profit from the content of an HD map, map-relative localization is indispensable: ultimately, you want to know where map features such as lane markings are relative to your car. That is equivalent to knowing where your car is relative to the map, and that, in turn, is exactly map matching.
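To make this equivalence concrete, here is a minimal 2D sketch with made-up pose and feature coordinates: once map matching gives you the vehicle pose in the map frame, one matrix inversion expresses any map feature in the vehicle frame.

```python
import numpy as np

def pose_matrix(x, y, yaw):
    """Homogeneous 2D pose of the vehicle in the map frame."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Hypothetical lane-marking point stored in the map frame (metres).
p_map = np.array([12.0, 3.0, 1.0])

# Localization (map matching) yields the vehicle pose in the map frame.
T_map_vehicle = pose_matrix(x=10.0, y=2.0, yaw=np.pi / 2)

# Knowing the vehicle pose relative to the map is the same as knowing
# map features relative to the vehicle: invert the pose and transform.
p_vehicle = np.linalg.inv(T_map_vehicle) @ p_map
print(np.round(p_vehicle[:2], 3))  # feature position in the vehicle frame
```

A real system does this in 3D with a full 6D pose, but the principle is identical.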

Here is a sequence showing an example of map-matching-based localization. With this approach, you are not negatively affected by poor GPS conditions (like in this tunnel). Contrary to what you might think, you are also not affected by map accuracy problems, as long as you can guarantee sufficient local precision of your map (more on that later).

The orange lane markings that you can see in this sequence were annotated during the one-time mapping process (the bottom sequence). They are 3D vector features and part of an HD map. Through map matching, these lane markings are localized relative to the car, and I back-projected them onto the live re-localization image stream.

In the next section, I will show you some more concrete applications of the map + localization combo. But ask yourself this: Do you know a lane marking detector that solves this task equally well? Please let me know in the comments.

Note that the vector features you have seen in the video are not the features used for localization. Think of the map instead as consisting of two layers: an HD map layer with features relevant to autonomous driving (like the boundaries of lanes), and a localization layer, which contains features suitable for map matching. For lidar-based systems, the localization layer typically consists of a large point cloud. The Atlatec system is video-based, and its localization layer is a textured 3D model of the mapped area.
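The two-layer idea can be sketched as a toy data model (the names are illustrative, not Atlatec’s actual format). The key point is that both layers live in one shared map frame, so a pose estimated against the localization layer is directly valid for querying the HD map layer:

```python
from dataclasses import dataclass, field

@dataclass
class HDMapLayer:
    # Lane boundaries as polylines of (x, y, z) points in the map frame.
    lane_boundaries: dict = field(default_factory=dict)

@dataclass
class LocalizationLayer:
    # E.g. a point cloud (lidar) or a textured 3D model (camera),
    # expressed in the *same* map frame.
    landmarks: list = field(default_factory=list)

@dataclass
class HDMap:
    hd_layer: HDMapLayer
    loc_layer: LocalizationLayer
    frame_id: str = "map"  # one shared frame keeps the layers aligned

world = HDMap(HDMapLayer({"lane_1": [(0, 0, 0), (50, 0, 0)]}),
              LocalizationLayer([(10.0, 2.5, 1.2)]))
print(world.frame_id)
```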

The HD map layer and the localization layer must be perfectly aligned. Achieving this is a no-brainer if both layers are created from the same sensor data. It becomes difficult if different sensors are used (cross-calibration is required), and next to impossible if different mapping traversals, different sensor modalities (lidar vs. camera), different mapping vehicles or even different mapping companies are involved. Both layers should therefore be created by the same party. I believe this is why the current leaders in autonomous driving (Waymo, Cruise and Zoox, according to this article) are still building their own maps.

Maps provide information that is hard or impossible for the car to see.

The ultimate purpose of having a map in autonomous driving is to improve situational awareness. Maps can complement and even replace true perception. With a map and map-relative localization, you do not have to see features, but you still know that they are there.

This is especially useful for “invisible” information that is difficult for the car to perceive actively with its onboard sensors, such as traffic rules. Traffic rules require a vehicle to detect not only traffic lanes, but also the relations between them. Here are some examples of how this looks with a Lanelet2 map and map matching:
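The relational idea can be sketched as a toy lookup (this is not the Lanelet2 API, and the lanelet names are made up). The point is that the rule is attached to a *pair* of lanes, which is nothing a camera can see directly:

```python
# Right-of-way relations between pairs of lanelets, as stored in the map.
right_of_way = {
    # (yielding lanelet, lanelet with priority)
    ("lanelet_7", "lanelet_3"): "yield",
}

def must_yield(ego_lanelet, crossing_lanelet):
    """The map, not the sensors, supplies this answer."""
    return right_of_way.get((ego_lanelet, crossing_lanelet)) == "yield"

print(must_yield("lanelet_7", "lanelet_3"))
```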

Now look at this picture of the raw point cloud output from a lidar scanner (our car is the blue box):

When using a map of the lane structure as a backdrop, those point clusters suddenly start making sense. Situational awareness is improved:
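One way to sketch this lane-structure backdrop is a simple lane-association check over lidar cluster centroids (all coordinates below are made-up numbers in the map frame):

```python
import numpy as np

# Ego-lane centerline as a polyline in the map frame (metres).
centerline = np.array([[0.0, 0.0], [20.0, 0.0], [40.0, 0.0]])
lane_half_width = 1.75

def in_lane(point, centerline, half_width):
    """True if point lies within half_width of any centerline segment."""
    p = np.asarray(point, dtype=float)
    for a, b in zip(centerline[:-1], centerline[1:]):
        ab = b - a
        # Closest point on the segment a..b to p.
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        if np.linalg.norm(p - (a + t * ab)) <= half_width:
            return True
    return False

# Centroids of two raw point clusters from the lidar scan.
clusters = {"cluster_A": (15.0, 0.5), "cluster_B": (15.0, 5.0)}
for name, c in clusters.items():
    tag = "in ego lane" if in_lane(c, centerline, lane_half_width) else "off lane"
    print(name, tag)
```

A cluster inside the ego lane is a very different object from one on the sidewalk, even if the raw points look the same.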

Map matching alerts vehicle vision to areas of interest.

This video shows a traffic light state detector guided by an HD map in combination with map-matching-based localization. The position of the traffic lights is stored in the map. Through map matching, it is possible to create a digital zoom onto the traffic lights. This makes detection of the traffic light state extremely simple, even if the lights are only a few pixels in size in the original image. Note that this is only possible with full 6D pose matching, i.e. both position (x, y, z) and orientation (yaw, pitch, roll) must be determined with pixel precision.
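The digital zoom can be sketched with a standard pinhole projection (the intrinsics and the traffic light position below are made-up example values): given the full 6D pose, the mapped 3D position of the light is projected into the live image and a small region of interest is cropped around it.

```python
import numpy as np

# Illustrative pinhole intrinsics: fx, fy = 800 px, principal point (640, 360).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def project(p_cam, K):
    """Project a 3D point in camera coordinates to pixel coordinates."""
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Traffic light 30 m ahead, 5 m up, slightly left of the camera
# (camera convention: x right, y down, z forward).
light_cam = np.array([-1.0, -5.0, 30.0])
u, v = project(light_cam, K)

# Digital zoom: crop a small region of interest around the projected pixel.
half = 20  # pixels
roi = (int(u) - half, int(v) - half, int(u) + half, int(v) + half)
print((round(u, 1), round(v, 1)), roi)
```

An error of a degree in orientation shifts the projection by many pixels at 30 m, which is why pixel-precise 6D matching is needed here.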

Traffic light state detection is one obvious case for the potential of guiding vision algorithms. But note that all the examples in the earlier video above show similar potential. Most traffic rules imply the existence of a certain conflict area that must be scrutinized by sensors.

Map matching neutralizes map accuracy problems.

There is no single word or number to appropriately describe how “good” a map is.

Accuracy and precision are often used synonymously to describe the properties of a map, but in scientific use, they are distinct. This is a schematic picture of a map with high precision, but low accuracy:

In this case, the map is offset slightly from the true position. How do you recognize that this is an accuracy problem?

Because, at least locally, the maps can be brought into almost perfect alignment with just a small shift, like so:

When using map matching technology for localization, this sort of accuracy problem is irrelevant and will disappear (more on that later).
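A small numerical sketch (with illustrative numbers) shows why: a pure accuracy error is a constant global offset, and estimating the rigid shift that best aligns map and observations removes it exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

truth = rng.uniform(0, 100, size=(50, 2))   # true feature positions
offset = np.array([1.2, -0.8])              # global GPS-induced bias (metres)
map_points = truth + offset                 # precise but inaccurate map

# Map matching: the least-squares translation between the two point
# sets is simply the mean offset.
shift = (truth - map_points).mean(axis=0)
aligned = map_points + shift

print(np.abs(aligned - truth).max())  # residual after alignment: ~0
```

Precision errors, by contrast, vary from point to point and survive any global shift, which is why local precision is the property you actually have to guarantee.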

In mapping systems, accuracy is determined by satellite navigation (GPS), so accuracy problems are unavoidable in maps of areas with bad satellite visibility, like urban canyons or tunnels.

In contrast to the example above, this map shows a pure precision problem:

The root cause of the precision problem here is twofold. First, the resolution is too low: the map has been sampled accurately, but there aren’t enough data points. Second, the underlying geometric model (connect the dots with straight lines) is too simple. Together, these yield a low-precision map. It could be improved either by increasing the resolution (denser sampling of points) or by using more sophisticated modelling (e.g. fitting a spline curve through the points).

How do you recognize that this is a precision problem?

Because it cannot be brought into alignment with reality just by shifting it around a bit.

Note that resolution and modelling errors like in the above example are only one possible source of precision problems. Others include sensor noise or calibration problems in the mapping system, or a non-smooth mapping trajectory.
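The resolution half of the fix described above can be sketched numerically: with the same connect-the-dots model, sampling ten times more densely shrinks the worst-case modelling error dramatically (the road curve here is illustrative):

```python
import numpy as np

def road(x):
    """Illustrative true road geometry: a gentle curve."""
    return np.sin(x / 10.0)

def max_interp_error(n_samples):
    """Worst-case error of a connect-the-dots model of the road."""
    xs = np.linspace(0, 30, n_samples)            # accurate samples
    dense = np.linspace(0, 30, 3001)              # evaluation grid
    model = np.interp(dense, xs, road(xs))        # straight-line model
    return np.max(np.abs(model - road(dense)))

coarse = max_interp_error(4)    # low resolution
fine = max_interp_error(40)     # 10x denser sampling
print(round(coarse, 4), round(fine, 6))
```

The samples themselves are perfectly accurate in both cases; only the density (and hence the precision of the reconstructed geometry) differs.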

Sometimes, accuracy of a map is called global accuracy, while precision is called local accuracy of the map.


So, with superior localization, better situational awareness, guided perception and immunity to map accuracy problems, 6D map matching technology is pushing autonomous vehicles toward safer driving and new locations, such as urban areas. At Atlatec, we believe that the future is in self-driving vehicles, and that map-based technologies are the key component to bring them onto the road. In some ways, you can see map matching as a shortcut around the difficult problem of computer vision. With maps and map matching, autonomous vehicles gain knowledge of their environment and can make decisions and solve problems faster.

Want to know more about our map matching? Then check out our website or leave us a comment below!
