Cognition for Autonomous Cars Using 6D Localization

To drive autonomously, vehicles need software that emulates the routines of natural human cognition. An autonomous vehicle must understand the world around it, and this environmental context can be provided in the form of a machine-readable, high-definition “semantic map.”

Civil Maps has addressed this knowledge gap by developing techniques for localizing a vehicle in six degrees of freedom: the three translational axes (x, y, z) and the three rotational axes (roll, pitch, yaw). Localization in 6DoF allows the 3D semantic map to be projected into the field of view of perception sensors such as LiDAR, cameras, and radar.
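The projection step can be sketched as follows. This is an illustrative example, not Civil Maps' implementation: it assumes a pinhole camera model, a hypothetical `project_map_point` helper, and a Z-Y-X (yaw-pitch-roll) rotation convention. Given the vehicle's 6DoF pose in the map frame, each 3D map point is transformed into the camera frame and projected to pixel coordinates.

```python
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw angles (radians), Z-Y-X order."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project_map_point(point_world, pose_6dof, K):
    """Project a 3D semantic-map point into the camera image.

    pose_6dof = (x, y, z, roll, pitch, yaw) of the camera in the map
    frame; K is a 3x3 camera intrinsic matrix. Returns (u, v) pixel
    coordinates, or None if the point is behind the camera.
    """
    x, y, z, roll, pitch, yaw = pose_6dof
    R = rotation_matrix(roll, pitch, yaw)
    t = np.array([x, y, z])
    # Inverse pose: map-frame point expressed in the camera frame.
    p_cam = R.T @ (np.asarray(point_world, dtype=float) - t)
    if p_cam[2] <= 0:
        return None  # behind the image plane
    uv = K @ p_cam
    return uv[:2] / uv[2]
```

For example, with the camera at the map origin and identity orientation, a map point 10 m straight ahead projects to the principal point of the image. Errors in any of the six pose parameters shift every projected annotation accordingly, which is why all six must be estimated.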

Localization in 6DoF enables selective attention, which in turn reduces bandwidth and compute requirements.
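One way to picture selective attention: once map features are projected into the image, the perception stack only needs to examine the pixels around them. The sketch below, with a hypothetical `attention_roi` helper, computes a padded, image-clipped bounding box around the projected features so that detectors run on a small crop rather than the full frame.

```python
import numpy as np

def attention_roi(projected_points, margin, img_w, img_h):
    """Bounding box (u_min, v_min, u_max, v_max) around projected map
    features, padded by `margin` pixels and clipped to the image.

    Only this region of interest is handed to downstream detectors,
    cutting the pixels processed per frame.
    """
    pts = np.asarray(projected_points, dtype=float)
    u_min, v_min = pts.min(axis=0) - margin
    u_max, v_max = pts.max(axis=0) + margin
    u_min = max(int(u_min), 0)
    v_min = max(int(v_min), 0)
    u_max = min(int(u_max), img_w)
    v_max = min(int(v_max), img_h)
    return u_min, v_min, u_max, v_max
```

If a projected stop sign spans a few hundred pixels of a multi-megapixel frame, verifying it inside this crop touches a small fraction of the sensor data, which is the bandwidth/compute saving the attention mechanism provides.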

Without 6DoF localization, the semantic map projections will not align accurately with the physical objects that the car's sensors are recording. A vehicle lacking six-degrees-of-freedom localization cannot track its own position precisely, and will therefore misjudge the locations of expected signs, signals, and other roadway infrastructure.