At Civil Maps, we’ve created technology that allows vehicles to navigate roads using crowdsourced self-driving car maps rather than real-time mapping. Our approach outperforms real-time mapping in cost, safety, and readiness for mass-market adoption. With it, our partners will be able to build the first mass-market self-driving cars at low cost and with fast implementation.
For self-driving and conventional cars alike, the Atlas DevKit offers a quick-start approach for anyone needing low-cost, rapid geospatial data collection, real-time localization, and mapping in one package. It comes in three configurations to fit varying needs: the Atlas DevKit, a plug-and-play system for any vehicle; the Atlas DevKit Lite, which integrates with vehicles that have existing sensors; and the Atlas DevKit SDK, for implementation in hardware-ready vehicles.
The Atlas DevKit HAL (Hardware Abstraction Layer) allows seamless integration with any number of active and passive sensors, ensuring compatibility across vehicle types and locations. This lets our customers use their own vehicles and keeps Civil Maps sensor- and vehicle-agnostic. The abstraction layer also isolates the mission-critical software from any hardware-specific code, decoupling the integration process from the localization logic.
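The idea of such an abstraction layer can be sketched as follows. This is a minimal illustration, not Civil Maps’ actual code: the class and vendor names are assumptions, chosen only to show how vendor-specific drivers stay confined to adapters while the localization logic sees one neutral interface.

```python
from abc import ABC, abstractmethod

# Hypothetical hardware abstraction layer: localization code depends
# only on this interface, never on vendor-specific drivers.
class Sensor(ABC):
    @abstractmethod
    def read(self) -> dict:
        """Return one frame of sensor data in a vendor-neutral format."""

class VendorALidar(Sensor):
    # Vendor-specific details are confined to this adapter.
    def read(self) -> dict:
        return {"type": "lidar", "points": [(1.0, 2.0, 0.5)]}

class VendorBCamera(Sensor):
    def read(self) -> dict:
        return {"type": "camera", "pixels": [[0, 255], [128, 64]]}

def localize(sensors: list[Sensor]) -> list[str]:
    # Mission-critical logic sees only the abstract interface,
    # so swapping hardware requires no changes here.
    return [s.read()["type"] for s in sensors]

print(localize([VendorALidar(), VendorBCamera()]))  # ['lidar', 'camera']
```

Because only the adapters touch hardware-specific code, supporting a new sensor means writing one new adapter, with no change to the localization logic.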
Atlas DevKit Lite
The Atlas DevKit Lite is the go-to option for customers who wish to test Civil Maps’ system on vehicles with existing sensor systems. Like the plug-and-play Atlas DevKit, the Atlas DevKit Lite lets users set up a vehicle and begin testing quickly. It is compact and integrates with a vehicle using minimal connections.
By following simple instructions, customers enter the sensor types and their locations into a UI to configure the unit, and can confirm immediately that it is working. If sensors need to be moved, simply enter the new values in the UI. Users can save profiles to quickly test the performance of different configurations in the real world.
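A sensor-configuration profile of this kind might be represented as below. This is a sketch under assumptions: the field names and JSON layout are illustrative, not the actual DevKit schema.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical representation of one sensor entry in a profile.
@dataclass
class SensorConfig:
    sensor_type: str   # e.g. "lidar", "camera"
    x: float           # mounting position on the vehicle, in metres
    y: float
    z: float
    roll: float        # mounting orientation, in degrees
    pitch: float
    yaw: float

def save_profile(name: str, sensors: list[SensorConfig]) -> str:
    # Serialize a named configuration so it can be reloaded later.
    return json.dumps({"profile": name,
                       "sensors": [asdict(s) for s in sensors]})

def load_profile(blob: str) -> list[SensorConfig]:
    data = json.loads(blob)
    return [SensorConfig(**s) for s in data["sensors"]]

roof = [SensorConfig("lidar", 0.0, 0.0, 1.8, 0.0, 0.0, 0.0)]
blob = save_profile("roof-rack-test", roof)
assert load_profile(blob) == roof
```

Saving profiles as plain serialized records is what makes it cheap to switch between mounting configurations: moving a sensor only changes a few numbers, not the software.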
The Context Engine is a critical component of establishing situational awareness for autonomous vehicles. The Context Engine database consists of geographical features, observable paths, event triggers, and semantics. Together, these elements are referred to as a 3D Semantic Map. 3D Semantic Maps are autonomous-vehicle-readable cached maps containing centimeter-accurate 3D digital road infrastructure and the relationships between geographic features. The maps are highly detailed in terms of geographic feature types (for example, lane edges, signs, and signals) and geographic feature geometry (for example, road curvature in 3D and curb heights). They also contain semantic information between objects (for example, the relationship of a stop sign to the lane centerlines where the sign applies).
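The structure described above can be sketched as a small data model. The names here are assumptions for illustration, not Civil Maps’ actual schema: features carry 3D geometry, and separate semantic links relate them, such as a stop sign to the lane centerlines it governs.

```python
from dataclasses import dataclass, field

@dataclass
class Feature:
    feature_id: str
    kind: str                                   # e.g. "stop_sign", "lane_centerline"
    geometry: list[tuple[float, float, float]]  # 3D points (hypothetical units: metres)

@dataclass
class SemanticLink:
    relation: str   # e.g. "applies_to"
    source: str     # feature_id of the governing object
    target: str     # feature_id of the governed object

@dataclass
class SemanticMap:
    features: dict[str, Feature] = field(default_factory=dict)
    links: list[SemanticLink] = field(default_factory=list)

    def applicable_lanes(self, sign_id: str) -> list[str]:
        # Resolve which lane centerlines a sign governs via semantic links.
        return [l.target for l in self.links
                if l.source == sign_id and l.relation == "applies_to"]

m = SemanticMap()
m.features["sign1"] = Feature("sign1", "stop_sign", [(10.0, 5.0, 2.1)])
m.features["lane1"] = Feature("lane1", "lane_centerline",
                              [(0.0, 0.0, 0.0), (20.0, 0.0, 0.0)])
m.links.append(SemanticLink("applies_to", "sign1", "lane1"))
print(m.applicable_lanes("sign1"))  # ['lane1']
```

Keeping relationships as explicit links, rather than implicit in geometry, is what makes such a map machine-readable: the vehicle can ask which rules apply to its lane without re-deriving them from sensor data.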
The maps provide unique advantages over competing maps, including:
- A highly scalable process for generating and updating maps
- High refresh rates
- Crowdsourcing vehicle sensor information instead of using dedicated data-collection vehicles
- Centimeter-accurate 3D information on feature geometry
- Small search areas for scanning the environment
- A small over-the-air data footprint
The Context Engine, coupled with other Civil Maps technology, enables:
- Wider operating envelope for autonomous vehicles
- Safer operation
- Anticipation of events: the map updates in real time
- Beyond-visual-range perception: augments the 100 m range of vision sensors
- Understanding of the road & its rules: semantic maps contain the purpose of road objects and their relationships
- Driving at conventional-car speeds
- Smaller sensing & compute requirements, and lower costs