SENSOR FUSION


  1. What is sensor fusion and what part does it play in autonomous driving?
  2. What is the status quo for working with sensor data?
  3. How is Civil Maps a standout when it comes to sensor fusion?


i.  What is sensor fusion and what part does it play in autonomous driving?

Sensor fusion is the process of combining data from multiple sensors to improve system performance, take full advantage of each sensor, and produce a more accurate “ground truth.” Although the individual sensor readings are often “noisy” or imperfect, the fused data can be used for many functions, including vehicle localization, mapping, hypothesis testing, crowdsourcing, and perception. In order to drive efficiently and safely, the car’s decision engine needs to digest as much relevant data as it can get. Furthermore, by fusing different sensor data together, the decision engine gains a more robust understanding of its environment than if the sensor data were fed to it as separate entities. In other words, fused sensor data is greater than the sum of its parts.
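
To make the idea concrete, here is a minimal sketch of one classic fusion technique, inverse-variance weighting. The sensor readings, variances, and function names are hypothetical; this is an illustration of the principle, not Civil Maps’ production algorithm:

    import numpy as np

    def fuse(estimates, variances):
        """Inverse-variance weighted fusion of independent sensor estimates.

        More reliable sensors (smaller variance) get larger weights, and the
        fused variance is smaller than any single sensor's variance.
        """
        weights = 1.0 / np.asarray(variances, dtype=float)
        fused_estimate = np.sum(weights * np.asarray(estimates, dtype=float)) / np.sum(weights)
        fused_variance = 1.0 / np.sum(weights)
        return fused_estimate, fused_variance

    # Hypothetical readings: LiDAR and radar both measure range to an obstacle.
    lidar_range, lidar_var = 25.3, 0.04   # precise, but degrades in rain or fog
    radar_range, radar_var = 24.8, 0.25   # noisier, but robust to weather
    rng, var = fuse([lidar_range, radar_range], [lidar_var, radar_var])
    print(f"fused range: {rng:.2f} m, variance: {var:.3f}")

In this toy example the fused variance (about 0.034) is lower than either sensor’s alone, which is the quantitative sense in which the fused result exceeds the sum of its parts.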


ii.  What is the status quo for working with sensor data?

During the map creation process, many companies in our industry store their raw sensor data on a dedicated hard drive storage array, typically located in the trunk of the vehicle. The sensor data is later offloaded from the car and often physically shipped to another location for further processing, before the resulting map information ends up back in the car. Because our processed data is very lightweight, Civil Maps does not need to store raw data in the car and can bypass that step entirely. After converting the data to a common representational format, we compress the resulting metadata and send it (using just 3G and 4G connectivity) to the cloud for further processing.
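
The pipeline below sketches that flow under simple assumptions: a JSON-based common format and gzip compression. The schema, field names, and sensor IDs are illustrative placeholders, not Civil Maps’ actual representational format:

    import gzip
    import json

    def to_common_format(sensor_id, timestamp, readings):
        # Normalize one sensor's output into a shared representational
        # format; the field names here are illustrative placeholders.
        return {"sensor": sensor_id, "t": timestamp, "readings": readings}

    def prepare_upload(records):
        # Serialize and compress so the payload is light enough to send
        # over a cellular (3G/4G) link instead of shipping hard drives.
        return gzip.compress(json.dumps(records).encode("utf-8"))

    records = [
        to_common_format("lidar_front", 1700000000.0, [25.3, 25.1, 24.9]),
        to_common_format("radar_front", 1700000000.0, [24.8]),
    ]
    blob = prepare_upload(records)
    print(f"{len(blob)} compressed bytes ready for cellular upload")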


iii.  How is Civil Maps a standout when it comes to sensor fusion?

Civil Maps’ Sensor Fusion platform combines and correlates data from multiple sensor feeds. Our algorithms convert raw sensor data, on board the vehicle, into information the car can use in real time. We work with a variety of sensors and have created a hardware abstraction layer (HAL) to streamline processing different sensor configurations and sensing devices from different manufacturers. The HAL serves as a translation layer, enabling virtually any hardware device to work productively with our software.
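
In software terms, a HAL of this kind can be pictured as a common interface that every vendor-specific driver implements. The sketch below is a simplified illustration of the pattern; the class and method names are hypothetical, not Civil Maps’ actual API:

    from abc import ABC, abstractmethod

    class LidarDriver(ABC):
        # Common interface: fusion code upstream depends only on this,
        # never on any vendor's proprietary data format.
        @abstractmethod
        def read_point_cloud(self):
            """Return points as (x, y, z) tuples in the vehicle frame."""

    class VendorALidar(LidarDriver):
        # Hypothetical vendor driver: translates a proprietary packet
        # layout into the common point-cloud representation.
        def read_point_cloud(self):
            return [(p["x"], p["y"], p["z"]) for p in self._poll_device()]

        def _poll_device(self):
            return [{"x": 1.0, "y": 0.5, "z": 0.2}]  # stubbed device I/O

    def localize(driver):
        # Upstream consumers work with any driver that implements the HAL.
        points = driver.read_point_cloud()
        print(f"received {len(points)} points")

    localize(VendorALidar())

Supporting a new device then means writing one driver class, not reworking the fusion pipeline.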

While many companies in our industry focus on a primary set of sensor data (such as LiDAR, camera, GPS, radar, or IMU), we have designed our system to be sensor agnostic and highly flexible. Civil Maps has extensive experience fusing multiple data streams to create a rich, convergent dataset that can easily be read and interpreted by machines, without requiring massive computing power. Drawing on experience working with multiple automotive OEMs at once, Civil Maps has built its sensor fusion to work with a vast array of sensor configurations, unlike many of our competitors, who require customers to use their specific sensor configuration. This enables R&D teams to innovate faster, without using up engineering time writing drivers for every device they want to test. We have found that our customers prefer this flexible approach.
