Working with LiDAR – Webinar 1 


A few weeks ago, Civil Maps hosted our very first webinar from the conference room of our shiny new offices in downtown San Francisco. It was the first part of a series we’re presenting called Getting Started with Sensors. We talked about LiDAR, one of the sensor types we use for creating 3D semantic map overlays that provide a new level of cognition to autonomous vehicles. If you watch the full video playlist of the webinar, it’s obvious we were excited to share what we’ve been doing. We also got to release some free, open-source code and a few other goodies on GitHub. Below are some highlights from the session:

How the Civil Maps System Works with LiDAR

To understand the role LiDAR plays in our maps, you have to know something about our system. Civil Maps uses LiDAR, along with other types of sensors, to create depth maps and point clouds while the vehicle is driving. That point cloud is then processed into 3D semantic maps and into the information needed to localize the vehicle against those maps. Both the map and the localization data are exchanged with the cloud, so the map can be kept up to date and used for autonomous driving.

Localization Makes It Real

By localizing in six degrees of freedom (6DoF: x, y, z, roll, pitch, yaw) and combining that with our 3D semantic maps, we are able to give cognition to the vehicle. Those six dimensions, as opposed to just three, enable us to project an augmented reality map directly into the field of view of the car’s sensor space. This helps inform the car’s decision engine. We are able to isolate, very precisely, where the vehicle should look to understand what the road infrastructure around it is telling it. The projections can actually guide the car to foveate its sensors toward that critical information (such as traffic lights, pedestrian crossings, and signal lights), based on where the car is at that moment in time (lane-centric). The added bonus: this reduces the computational load on the car. The car gets the context it needs without having to process every single pixel it sees.
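To make that concrete, here is a minimal sketch (generic rigid-body math in Python with NumPy, not Civil Maps’ localization code) of how a map feature stored in world coordinates could be expressed in the vehicle’s frame once the 6DoF pose is known:

```python
import numpy as np

def rotation_from_rpy(roll, pitch, yaw):
    """Build a 3x3 rotation matrix from roll, pitch, yaw (radians), Z-Y-X order."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def world_to_vehicle(point_world, vehicle_xyz, roll, pitch, yaw):
    """Express a world-frame map feature (e.g. a traffic light) in the vehicle frame."""
    R = rotation_from_rpy(roll, pitch, yaw)
    return R.T @ (np.asarray(point_world) - np.asarray(vehicle_xyz))

# Example: a traffic light roughly 30 m ahead and 5 m up, vehicle yawed 10 degrees.
traffic_light = [130.0, 205.0, 5.0]   # world coordinates (m)
pose_xyz = [100.0, 200.0, 0.0]        # vehicle position (m)
print(world_to_vehicle(traffic_light, pose_xyz, 0.0, 0.0, np.radians(10)))
```

Once a feature like that traffic light is expressed in the sensor frame, the car knows exactly which part of its field of view deserves attention.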

The Great Equalizer: Abstraction Layer

It all starts with a sensor suite built into a car or mounted on top of the vehicle in one of our Atlas DevKits. We write drivers for any kind of sensor a client would like to use, with LiDAR, inertial measurement units (IMUs), and GPS being the most common. Since each sensor outputs its own type of data, our drivers translate all of that information into a common format. We’re big fans of abstraction; as David puts it in the webinar, “Throughout the entire team, we’ve never come to regret [putting the effort] into abstraction.”
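A toy version of that idea might look like the following; the class and field names here are hypothetical and stand in for the real drivers, which are more involved:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import List

@dataclass
class NormalizedSample:
    """Common format every driver emits, regardless of sensor type (illustrative only)."""
    timestamp_us: int   # microseconds since epoch
    sensor_id: str      # which physical device produced the sample
    payload: dict       # sensor-specific fields mapped onto shared keys

class SensorDriver(ABC):
    """Each LiDAR, IMU, or GPS unit gets a driver that speaks this interface."""

    @abstractmethod
    def read_raw(self) -> bytes:
        """Pull one packet or frame from the device."""

    @abstractmethod
    def normalize(self, raw: bytes) -> List[NormalizedSample]:
        """Translate the device's native encoding into the common format."""
```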

Though we’re focused on autonomous vehicles, it’s obvious that our abstraction layer has applications well beyond the roadway, from general robotics to IoT devices or any system with a suite of sensors.

The shared object libraries in our abstraction layer allow us to stitch together the sensors’ data into a single point cloud.
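For intuition, the stitching step can be pictured as applying each sensor’s mounting transform (its extrinsics) and concatenating the results; this is a hedged sketch, not the library’s actual interface:

```python
import numpy as np

def stitch_point_clouds(clouds, extrinsics):
    """Merge per-sensor point clouds (each an N_i x 3 array in its sensor's frame)
    into one vehicle-frame cloud, given each sensor's 4x4 mounting transform."""
    merged = []
    for points, T in zip(clouds, extrinsics):
        points = np.asarray(points, dtype=float)
        T = np.asarray(T, dtype=float)
        homogeneous = np.hstack([points, np.ones((len(points), 1))])
        merged.append((homogeneous @ T.T)[:, :3])
    return np.vstack(merged)
```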

Going Tiny in Our Cloud

Civil Maps has architected its system for efficient storage to enable crowdsourcing. Other systems’ packets are typically 2–10 GB per kilometer. Depending on the geography and characteristics of the particular areas we are serving and running our stack on, our data gets compressed into signatures that range from about 100 to 120 KB.

How LiDAR Works

Atlas DevKit with LiDAR

LiDAR captures its environment by firing laser beams at objects that surround it, much the same way dolphins use sound, or radar uses radio waves. A LiDAR unit may have 8, 16, 32, or 64 beams positioned at strategic angles to produce a highly accurate 360°, spherical view of the environment. Time of Flight (ToF) tech allows the LiDAR to measure how far away objects are based on how long it takes the beams to bounce back. LiDAR also tracks the retro-reflectivity of the objects it sees.
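The time-of-flight relationship itself is just distance = (speed of light × round-trip time) / 2; a tiny example to make that concrete:

```python
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def tof_distance_m(round_trip_seconds: float) -> float:
    """Distance to the target: the beam travels out and back, so divide by two."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

# A return after about 667 nanoseconds corresponds to a target roughly 100 m away.
print(tof_distance_m(667e-9))  # ~99.98 m
```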

LiDAR scan visualized

We use both of these measurements at Civil Maps. Of course we need to know how close or far away things are. We use retro-reflectivity to ascertain the nature of the LiDAR’s targets. For example, road signage is highly reflective, while foliage (trees, bushes, and the like) is not. This helps us focus on the things a vehicle’s decision engine needs to know about.
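As a toy example of how retro-reflectivity can be used (the threshold here is made up for illustration; real classification is more nuanced), highly reflective returns such as signage can be pulled out of a cloud like this:

```python
import numpy as np

def likely_signage(points_xyz, reflectivity, threshold=0.8):
    """Return the points whose normalized reflectivity exceeds a threshold.
    Retro-reflective targets like road signs bounce far more energy back than foliage."""
    mask = np.asarray(reflectivity) >= threshold
    return np.asarray(points_xyz)[mask]
```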

Free Source Code — What it does

To make our abstraction layer device-agnostic, we normalize each sensor’s data into a single set of Civil Maps parameters and values. The free, open-source code repository we’re releasing contains our Velodyne VLP-16 abstraction layer.
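For a feel of what “normalizing” means in the VLP-16’s case, here is a stripped-down decoder for one data block of the sensor’s 1,206-byte UDP packet, based on the publicly documented packet format; this is an illustrative sketch, not the code in the repository:

```python
import struct

# VLP-16 packet layout (per the published user manual): 12 x 100-byte data blocks,
# followed by a 4-byte timestamp and 2 factory bytes. Each block holds an azimuth
# plus two firing sequences of the 16 lasers (32 records of 3 bytes each).
DISTANCE_RESOLUTION_M = 0.002  # distances are reported in 2 mm increments

def parse_data_block(block: bytes):
    """Decode one 100-byte data block into (azimuth_deg, [(distance_m, reflectivity), ...])."""
    assert len(block) == 100
    flag, azimuth_hundredths = struct.unpack_from("<HH", block, 0)
    assert flag == 0xEEFF  # block start marker bytes 0xFF 0xEE, read little-endian
    returns = []
    for i in range(32):
        distance_raw, reflectivity = struct.unpack_from("<HB", block, 4 + 3 * i)
        returns.append((distance_raw * DISTANCE_RESOLUTION_M, reflectivity))
    return azimuth_hundredths / 100.0, returns
```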

We’re hoping to get our code into the hands of industry engineers as a way of making support for the system as ubiquitous as possible. The release also serves as a “taste” for potential customers, and really, we’re just happy to have engineers play with it.

A big part of translating the VLP-16 data has to do with converting the LiDAR’s native spherical coordinates into the Cartesian x/y/z data with which we generate our 3D semantic map overlays. We go into some of the math involved in this part of the webinar, if you’re interested.
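The conversion itself is the standard spherical-to-Cartesian one; the sketch below follows the axis convention in Velodyne’s VLP-16 documentation, with R the measured range, the elevation angle fixed per laser, and the azimuth given by the sensor’s rotation (this is not the repository’s code):

```python
import math

def spherical_to_cartesian(r_m, elevation_deg, azimuth_deg):
    """Convert a single LiDAR return (range, elevation angle, azimuth) to x/y/z.
    Follows the VLP-16 documentation's convention, where y points along the
    sensor's zero-azimuth direction."""
    omega = math.radians(elevation_deg)  # fixed per laser: -15 to +15 degrees on the VLP-16
    alpha = math.radians(azimuth_deg)    # rotational position of the firing
    x = r_m * math.cos(omega) * math.sin(alpha)
    y = r_m * math.cos(omega) * math.cos(alpha)
    z = r_m * math.sin(omega)
    return x, y, z

# Example: a 20 m return from the +1 degree laser at an azimuth of 45 degrees.
print(spherical_to_cartesian(20.0, 1.0, 45.0))
```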

Sample Code Run Through

Also of interest: we ran some of our code on a LiDAR view of the conference room to generate a point cloud that we uploaded to CloudCompare for visualization. Check out the video:
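If you want to try the same thing with your own capture, CloudCompare opens plain ASCII point lists; here is a minimal sketch of writing one (the file name and field order are just examples):

```python
def write_ascii_cloud(path, points):
    """Write points as 'x y z intensity' lines; CloudCompare can open this as an
    ASCII cloud and lets you map the columns on import."""
    with open(path, "w") as f:
        for x, y, z, intensity in points:
            f.write(f"{x:.3f} {y:.3f} {z:.3f} {intensity}\n")

write_ascii_cloud("conference_room.xyz", [(1.0, 2.0, 0.5, 37), (1.1, 2.0, 0.5, 40)])
```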

Next Up: IMU and GPS

In the next session in a few weeks (REGISTER HERE), we’ll get into how we use IMUs and GPS, separately and together, through sensor fusion. It should be fun and informative. We hope you can join us.