1. How did the company get started and what did it set out to do better?
  2. What is Cognition for Cars?
  3. What is the fundamental problem you are solving for self-driving cars?
  4. Where is your place in the framework for how an autonomous car drives?
  5. What are the key technical differentiators of your stack?
  6. If I am already using a “supercomputer” in my research car, what can I get from Civil Maps’ technology?
  7. What happens when the vehicle encounters bad cellular reception while driving? And how will this work?
  8. Do cars need high definition maps?

Back to FAQ


i.  How did the company get started and what did it set out to do better?

Civil Maps started developing its technology in 2015, with an emphasis on safety and scalability. Our R&D team began working on 3D mapping for heavy industries — to support rail, pipeline, and power transmission projects as well as municipal asset management. Through those experiences, we’ve gained insights and skills for dealing with logistical complexity and large volumes of sensor data. While many players in the autonomy space are adapting legacy mapping and robotic systems for autonomous vehicle applications, we have built our technology stack from the ground up to meet the needs of autonomous driving at scale.

Back to Top

ii.  What is Cognition for Cars?

Cognition for Cars is a “brain-inspired” concept that draws on parallels between human and robotic driving. We all have a cognitive map, a memory of places we’ve been before. We know how to get around our bedroom in the dark using our own cognitive map of the room. When we drive through our neighborhoods with ease, we plan our path efficiently because we know what to expect and where to look out for trouble spots. A key difference in our technology is that it enables cars to “inherit” this kind of memory and use it to know exactly where the vehicle’s sensors should devote attention. Through sensor fusion, advanced localization in six degrees of freedom (6 DoF), HD Semantic Maps, Augmented Reality Maps, and crowdsourcing of map data, we are providing cognition to cars.

Back to Top

iii.  What is the fundamental problem you are solving for self-driving cars?

We develop technology enabling autonomous vehicles to build, globally share, and inherit a “mental model” of the world, giving them the context of surrounding roadway conditions (static objects) and providing them with the ability to focus attention selectively. Unlike basic navigational tools, our software informs the car where it needs to look, based on its exact position and orientation. With these prioritization tools, the car is able to use its sensors more efficiently and perform many self-driving functions with relatively minimal in-vehicle computation. This lightweight, computationally inexpensive design is a key benefit we offer: it enables cars that use our software to crowdsource data, sharing the wealth of sensor observations generated by other vehicles in the same network to make better decisions. Each vehicle continually tests its own sensor observations against the aggregated base reference map. This hypothesis testing ensures up-to-date situational awareness that is continually refined and propagated back into Civil Maps’ network of vehicles.
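The hypothesis-testing loop described above can be sketched in a few lines. This is an illustrative toy, not Civil Maps’ actual implementation; the function names, the 2D positions, and the 0.5 m threshold are all assumptions made for the example.

```python
import math

def flag_map_updates(map_features, observations, threshold_m=0.5):
    """Compare live sensor observations against the base reference map.

    Each feature/observation is (feature_id, x, y) in a local metric frame.
    Returns the IDs whose observed position disagrees with the map by more
    than `threshold_m`, i.e. candidate updates to propagate back to the cloud.
    """
    base = {fid: (x, y) for fid, x, y in map_features}
    updates = []
    for fid, x, y in observations:
        if fid not in base:
            updates.append(fid)          # new feature not in the map yet
            continue
        bx, by = base[fid]
        if math.hypot(x - bx, y - by) > threshold_m:
            updates.append(fid)          # feature moved or the map is stale
    return updates

base_map = [("sign_17", 10.0, 4.0), ("lane_3", 0.0, 1.8)]
observed = [("sign_17", 10.1, 4.0), ("lane_3", 0.0, 3.0), ("cone_9", 5.0, 2.0)]
print(flag_map_updates(base_map, observed))  # ['lane_3', 'cone_9']
```

Observations that agree with the map confirm it; disagreements are exactly the small deltas worth sharing with the rest of the network.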

Back to Top

iv.  Where is your place in the framework for how an autonomous car drives?

There are many ways to categorize the technological components necessary for robust autonomous driving; at its simplest, we see these as belonging to two distinct categories:

1. Vehicular Cognition and Intelligence: responsible for perceiving the surroundings, sensor fusion, localizing the vehicle, creating the car’s “mental model” of the world, and generating a trajectory through the environment.

2. Vehicle Platform Manipulation: responsible for stabilizing the vehicle, ensuring safety, and “driving”, i.e. trajectory execution (propulsion, steering, braking).

Civil Maps fits in the first category: vehicular cognition and (artificial) driving intelligence. Unlike most companies in our space, we have developed vertically across all the high impact facets of the cognition layer, from crowdsourced cartography and edge-mapping to centimeter-accurate localization in 6D and sensor fusion.

Back to Top

v.  What are the key technical differentiators of your stack?

1. Multi-sensor Data Fusion with Streamlined Data Validation (processing and fusing raw data from any or all of: LiDAR, cameras, radar, GPS, IMU, etc.)

With a flexible approach to sensor configuration and inputs, Civil Maps processes a wide and growing variety of raw sensor data, and our software safeguards data integrity and accuracy. Nuanced data validation and sensor fusion are vitally important: merely ingesting a wealth of sensor information does not guarantee that the resulting map is high quality. The level of detail and accuracy across sensors falls along a broad spectrum, which is why some call sensor data fusion an “art” rather than a rote process.

Advantages: Our software suite is built to be sensor agnostic and can easily integrate with other OEM designed systems and specifications. 
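One classical building block behind multi-sensor fusion is inverse-variance weighting: estimates from more certain sensors count for more. The sketch below is a textbook one-dimensional illustration, not Civil Maps’ fusion pipeline; the sensor names and variances are invented for the example.

```python
def fuse_estimates(estimates):
    """Fuse independent 1-D position estimates by inverse-variance weighting.

    `estimates` is a list of (value, variance) pairs, e.g. from GPS and
    LiDAR-based localization. Sensors with smaller variance get
    proportionally more weight, and the fused variance is always smaller
    than the best individual one, which is the point of fusing at all.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused_value = sum(w * v for w, (v, _) in zip(weights, estimates)) / total
    fused_variance = 1.0 / total
    return fused_value, fused_variance

# GPS says 100.0 m with variance 4.0; LiDAR localization says 101.0 m with 1.0
value, variance = fuse_estimates([(100.0, 4.0), (101.0, 1.0)])
print(round(value, 2), round(variance, 2))  # 100.8 0.8
```

Note how the fused answer sits much closer to the low-variance LiDAR estimate, and how a wildly inaccurate input would pull the result only in proportion to its (low) weight, which is one reason validation of each sensor’s claimed accuracy matters so much.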
2. Compression Innovation – (KB/kilometer vs. GB/kilometer of map data)

From day one, we have architected our systems for lightweight data transfer and storage to enable crowdsourcing of HD map data at a global scale. From large volumes of raw sensor data, we ingest, extract, and classify, in-car, the portions most meaningful to vehicular navigation. We compress the data down to kilobytes and send only the updated information to the cloud. In doing so, the map data can be shared over existing 3G and 4G networks.

Advantages: The raw sensor data associated with autonomous vehicle technology (especially from LiDAR) is extremely voluminous. Because of the massive amounts of information generated, data is typically moved from hard drive to hard drive (1 TB/week, multiple GB/km), which requires several layers of data management and thousands of people to validate the data. Consequently, this approach to mapping is costly, and maps updated this way refresh disappointingly slowly. Our approach, by contrast, is dynamically scalable and easy to deploy. For example, we can fit the map of an entire continent on an SD card or email the map of an entire city.
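The kilobytes-versus-gigabytes idea can be made concrete with a toy example: extract only semantic features (not raw point clouds), serialize them compactly, and compress before upload. The feature schema, function name, and size figures below are assumptions for illustration, not Civil Maps’ actual format.

```python
import json
import zlib

def encode_map_update(features):
    """Serialize extracted map features and compress them for upload.

    `features` stands in for the in-car extraction output: semantic objects
    (type, position, attributes) rather than raw sensor returns. A raw
    LiDAR sweep runs to megabytes; a semantic delta for the same stretch
    of road compresses to kilobytes, which is what makes uploading over
    ordinary 3G/4G links practical.
    """
    payload = json.dumps(features, separators=(",", ":")).encode("utf-8")
    return zlib.compress(payload, level=9)

features = [
    {"type": "lane_marking", "pos": [37.7749, -122.4194], "style": "dashed"},
    {"type": "stop_sign", "pos": [37.7750, -122.4191], "facing_deg": 92},
] * 50  # 100 features, roughly one stretch of urban road for the example

blob = encode_map_update(features)
print(len(blob) < 2048)  # True: the whole update fits in under 2 KB
```

The real gain comes from discarding raw points before this step; compression only shaves down what extraction has already made small.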
3. Computational Efficiency

Autonomous vehicles rely on processes that must run continuously and in real time, such as understanding their location, mapping on the go, and updating a map database. Accordingly, many of Civil Maps’ key technical design choices center on quickly prioritizing and processing map creation, map usage, localization, and data management without dedicating costly GPUs to those tasks or packing numerous CPUs into the car. With our architecture, Civil Maps can perform these functions in real time on a relatively inexpensive ARM processor (the kind commonly used in mobile phones). While some service providers deliver HD mapping products on hard drives that are swapped in and out of the car, Civil Maps software is designed for integration with existing and planned OEM ADAS systems and will significantly enhance the performance of those products.

Advantages: High computational efficiency enables the car to utilize the processed sensor data more quickly. It also brings down the unit cost per development car because it removes the need for expensive equipment dedicated to mapping and map updating. This design also improves overall safety, as compute resources can be freed up and devoted to other important tasks such as pedestrian, collision, or obstacle avoidance.
4. Crowdsourced Edge Mapping

(Conventional Mapping Processes: map data is recorded to hard drive > physically shipped > processed in the cloud > published back to the cars.)

To create our HD Semantic Maps and Augmented Reality Maps, our data pipeline processes the full-resolution incoming sensor data in-vehicle. This pipeline reduces the data set and minimizes the compute, bandwidth, and time needed to make that data useful. After extracting the relevant map information and environmental context, a subset of that data is rendered down and transmitted to a database in the cloud over wireless 3G and 4G connectivity for further processing, aggregation, and validation, all while the car is driving and in use. The anonymized data can then be shared back to fleets of cars on the road. Civil Maps’ feature extraction, vehicle trajectory creation (the path the car will take), and state-transition detection (e.g., observing a signal light change) occur within cars running our software, and every vehicle in the network can share what it has observed, dramatically increasing group awareness and safety.

With the Atlas DevKit, our low-cost hardware + software reference platform, developers can easily get started creating and updating maps on the edge.

Advantages: Edge processing of raw sensor data streams enables Civil Maps to localize the vehicle and provide cognition in real time, which offers significant safety benefits for a self-driving car. The more quickly a vehicle interprets incoming sensor data, the sooner it can take the appropriate action; the faster an autonomous car completes the cycle of sensing, analyzing, understanding, and decision-making, the better off the car and its passengers will be. This loop will only get faster as sensors and machine learning improve over time. Another key benefit of Civil Maps’ edge processing is cost reduction. Because we compress the data and run feature extraction in-car, only a massively reduced dataset is sent to the cloud. In operational terms, as the process becomes much more efficient, data management costs drop dramatically.
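The ingest-extract-reduce flow of edge mapping can be sketched as a small staged pipeline. Everything here is a hypothetical stand-in, assuming a per-frame object list and a 6-DoF pose stream; it shows the shape of the flow, not Civil Maps’ actual code.

```python
# Static map-relevant classes; dynamic objects (pedestrians, cars) are dropped.
RELEVANT = {"lane_marking", "traffic_light", "sign", "curb"}

def extract_features(raw_frame):
    """Stand-in for in-vehicle feature extraction: keep only the semantic
    objects relevant to mapping, discarding the rest of the frame."""
    return [obj for obj in raw_frame["objects"] if obj.get("kind") in RELEVANT]

def reduce_and_package(features, pose):
    """Reduce the extraction output to the small record that actually leaves
    the car: the features plus the 6-DoF pose they were observed from."""
    return {"pose": pose, "features": features}

def edge_pipeline(raw_frames, pose_stream):
    """Ingest -> extract -> reduce, frame by frame, while driving.
    Each yielded packet is what would be transmitted to the cloud, where
    aggregation and validation happen off the vehicle."""
    for frame, pose in zip(raw_frames, pose_stream):
        feats = extract_features(frame)
        if feats:                      # transmit only when there is news
            yield reduce_and_package(feats, pose)

frames = [
    {"objects": [{"kind": "traffic_light", "state": "red"},
                 {"kind": "pedestrian"}]},   # pedestrian is dynamic: dropped
    {"objects": [{"kind": "pedestrian"}]},   # nothing map-worthy: no packet
]
poses = [(0.0, 0.0, 0.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0, 0.0, 0.05)]
packets = list(edge_pipeline(frames, poses))
print(len(packets))  # 1
```

The generator structure mirrors the streaming nature of the real problem: frames arrive continuously, and only the occasional small packet leaves the car.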
5. Adaptive Localization in Six Degrees of Freedom (6 DoF)

Civil Maps uses adaptive, vision-based localization techniques, generating situational awareness based on the car’s environmental context. Specifically, our localization in 6D enables us to provide an augmented reality visualization to the car that shows how it should prioritize its attention and enables it to quickly locate relevant road features (our Augmented Reality Maps product). This is a key strength and differentiator for our technology suite. Localization in 6D limits the scope of what the sensors actively search for while driving, reducing real-time compute and freeing up resources for safety-related tasks. By superimposing this layer of metadata on top of what the car’s sensors are seeing in real time, Civil Maps ensures the detailed positional accuracy of its maps while showcasing the most important features of a roadway to the vehicle.

Advantages: Adaptive localization in 6D reduces the computational load on the car. It also enhances safety, as the car can stay focused on the tasks at hand, preventing accidents and aiding navigation through complex and dynamic environments.
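Why does a 6-DoF pose let the car know where to look? Once you know the vehicle’s position and orientation, a mapped feature’s world coordinates can be transformed into the vehicle frame, giving a small region of interest to search instead of the whole scene. The sketch below is a simplified illustration with invented coordinates; for brevity it applies only the yaw rotation (a production localizer would apply the full roll/pitch/yaw rotation).

```python
import math
from dataclasses import dataclass

@dataclass
class Pose6DoF:
    """Vehicle pose in six degrees of freedom: 3 translations + 3 rotations."""
    x: float; y: float; z: float            # metres, world frame
    roll: float; pitch: float; yaw: float   # radians

def feature_in_vehicle_frame(pose, feature_xyz):
    """Express a map feature's world position relative to the vehicle.

    Knowing where a mapped feature *should* appear lets the car point a
    narrow region of interest at it rather than scanning everywhere.
    """
    dx = feature_xyz[0] - pose.x
    dy = feature_xyz[1] - pose.y
    dz = feature_xyz[2] - pose.z
    c, s = math.cos(-pose.yaw), math.sin(-pose.yaw)   # yaw-only for brevity
    return (c * dx - s * dy, s * dx + c * dy, dz)

# Vehicle at (10, 5), heading along world +y (yaw = 90 degrees).
pose = Pose6DoF(x=10.0, y=5.0, z=0.0, roll=0.0, pitch=0.0, yaw=math.pi / 2)
# The HD map says a stop sign sits at world (10, 25, 2): 20 m straight ahead.
fx, fy, fz = feature_in_vehicle_frame(pose, (10.0, 25.0, 2.0))
print(round(fx, 1), round(fy, 1), round(fz, 1))  # 20.0 0.0 2.0
```

The result, 20 m ahead, dead center, 2 m up, is exactly the prior the perception system needs: a small window in which the sign must appear if the map and localization are correct.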
6. Synthetic Training

Civil Maps has developed a suite of synthetic testing and validation solutions to simulate driving environments and scenarios. This training dramatically increases the iteration speed for automotive OEM research and development.

By creating detailed, high definition 3D environments with physically based rendering (i.e., virtual worlds with real physics-based properties), we can drive a simulated vehicle and develop raycasting models to create synthetic LiDAR sensor data. By generating synthetic datasets based on real-world scenarios and infrastructure, Civil Maps’ synthetics systems can approach 100% ground truth in hypothesis testing.

Advantages: Our synthetics technology drastically reduces the cost and timeline for testing and refining an autonomous vehicle platform for our customers. Instead of employing hundreds of people as cartographers and data validators for tens of thousands of hours, we can accomplish the same testing in just a few days or weeks using dynamic machine learning frameworks, refined and enhanced with synthetic training.

Back to Top

vi.  If I am already using a “supercomputer” in my research car, what can I get from Civil Maps’ technology?

With Civil Maps (using an ARM processor), autonomous vehicles can map, navigate, localize, and gain cognition for driving. If an organization is already dedicated to using multiple GPUs onboard, Civil Maps’ software will free up computing resources for other driving tasks like reacting to other vehicles, dynamic object detection, and pedestrian detection. By taking a loosely-coupled approach to hardware and sensor integration, Civil Maps offers OEMs and other developers the ability to better prioritize computing resources, or otherwise downsize expensive onboard computing stacks.

Back to Top

vii.  What happens when the vehicle encounters bad cellular reception while driving? How will this work?

With dynamic scalability and an onboard, real-time approach to vehicular cognition, Civil Maps’ software caches a ~50 km radius of map data around a vehicle’s last known location. If reception is poor in any particular area, a Civil Maps-enhanced vehicle can drive from the map cache and continue generating new edge-case observations to contribute back to the mapping network once connectivity returns.
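The cache-fallback decision amounts to a distance check against the last prefetch location. The sketch below is a hypothetical illustration of that logic (the function names and policy are assumptions); only the ~50 km radius comes from the description above.

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def map_source(vehicle, cache_center, cache_radius_km=50.0, online=True):
    """Decide where map data comes from when connectivity drops.

    `cache_center` is the location around which tiles were last prefetched.
    Inside the cached radius the map can be served locally regardless of
    reception; outside it with no link, the vehicle falls back to live
    sensing until connectivity returns.
    """
    in_cache = haversine_km(*vehicle, *cache_center) <= cache_radius_km
    if online:
        return "cloud"
    return "local_cache" if in_cache else "sensors_only"

sf = (37.7749, -122.4194)        # cache was filled around San Francisco
oakland = (37.8044, -122.2712)   # ~13 km away: well inside the 50 km cache
print(map_source(oakland, sf, online=False))  # local_cache
```

In practice the prefetch would be refreshed as the vehicle moves, so the "sensors only" branch is reached only after driving a long way with no signal at all.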

Back to Top

viii.  Do cars need high definition maps?

Conventional 2D navigation systems (the familiar mapping applications on mobile phones) are not useful for autonomous vehicles. To drive safely, self-driving cars need precise, machine-readable, location-based information, and it is well known that GPS alone is insufficient for autonomous driving. Autonomous vehicles can drive without high definition maps, but overall safety, compliance with regional laws, passenger experience, and confidence in the driving systems improve significantly when HD maps are used. Our HD Semantic Maps help a vehicle and its passengers anticipate the yet-unseen and “remember” previously encountered environments. This anticipation gives the car foresight and facilitates a smoother ride, reducing abrupt, sickness-inducing stops. HD Semantic Maps also enhance safety by providing environmental context well beyond the range supported by conventional real-time sensing.

One example involves reaction time. A car traveling at 60 mph covers half a mile in about 30 seconds, so an obstacle its sensors first detect half a mile ahead leaves roughly 30 seconds of margin; when weather or terrain cuts the effective sensing range, that margin shrinks sharply, and the foresight a map provides becomes critical. Moreover, highly detailed crowdsourced maps can act as an independent sensor on the car, helping the vehicle through challenging driving conditions. These conditions include situations where the vehicle can’t see lane markings, signs, curbs, or buildings due to changing weather, time of day, or other circumstances. Thus, safety can be enhanced through observational redundancy. When it comes to difficult driving conditions such as sun glare, snow, dense fog, or rain, a detailed, up-to-date map becomes a must-have, not a luxury. Lastly, a map has more traditional roles as well, informing tasks such as route planning. Civil Maps’ approach also uses our HD Semantic Maps to produce a real-time visualization for the passenger of the car’s intentions and how it understands its environment (the Augmented Reality Map).
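The reaction-time arithmetic is simple enough to check directly. The 300 ft fog figure below is an invented example, not a measured sensing range.

```python
def reaction_time_s(speed_mph, sensing_range_miles):
    """Seconds available to react to an obstacle first seen at the edge of
    the effective sensing range, at a constant speed."""
    return sensing_range_miles / speed_mph * 3600.0

# At 60 mph, an obstacle first seen half a mile out leaves ~30 s of margin:
print(round(reaction_time_s(60, 0.5), 1))          # 30.0
# But if fog cuts effective sensing to 300 ft (~0.057 mi), margin collapses:
print(round(reaction_time_s(60, 300 / 5280), 1))   # 3.4
```

The second number is the case that motivates map-based foresight: the map still "sees" the road geometry and fixed hazards well beyond what the degraded sensors can.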

Back to Top