Smart LiDAR Sensors for Level 3, 4 & 5 Vehicle Automation

Today’s autonomous vehicle sensors are unintelligent. They are typically mapping- or survey-grade devices that produce 2D imagery from cameras or 3D distance measurements from LiDAR. In the case of LiDAR, each distance return sensed by the unit must be processed individually for some autonomous vehicle functions and must be anchored and processed in groups for others.

Today, LiDAR point clouds are used for many precision, non-real-time functions such as mapping, surveying, and as-built design. Simple point-cloud LiDAR requires substantial computer post-processing to achieve the desired results. Simply placing a point-cloud LiDAR in an autonomous vehicle’s sensor suite therefore relegates it to a few simple tasks such as collision avoidance and obstacle detection. Higher-order functionality such as localization, vehicle trajectory creation, and object identification is not possible with dumb LiDAR because of its limited resolution and throughput and because of the real-time processing burden it places on the control system.

Adding resolution and throughput to next-generation LiDAR does not by itself solve the higher-order functionality problem. As an example, assume a 35-million-point-per-second LiDAR similar to a version proposed herein. To use this data stream for higher-order autonomous navigation functions, the point cloud processing system must perform several tasks in real time: point registration, intra-scan registration, indexing, 3D segmentation, object bundling, depth-map application and alignment to imagery, and computation of object orientations. These real-time functions require processing on the order of 75 billion graphics operations per second. As LiDAR throughput increases to 300 million points per second in the near future, the burden placed on the navigation system grows to about 30 trillion graphics operations per second.
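To make one of those tasks concrete, the sketch below shows the least-squares rigid-alignment step (the Kabsch/SVD solution for points with known correspondences) that sits at the core of many scan-registration loops. It is a generic illustration of point registration under that assumption, not the specific algorithm used by any particular sensor or navigation system.

```python
import numpy as np

def rigid_align(source: np.ndarray, target: np.ndarray):
    """Least-squares rigid alignment (Kabsch/SVD) of two point sets with
    known correspondences -- the core step inside a scan-registration loop.
    `source` and `target` are (N, 3) arrays; returns (R, t) such that
    R @ source[i] + t approximates target[i]."""
    src_mean, tgt_mean = source.mean(axis=0), target.mean(axis=0)
    src_c, tgt_c = source - src_mean, target - tgt_mean
    # Cross-covariance of the centered point sets.
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    # Reflection guard: force a proper rotation (determinant = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_mean - R @ src_mean
    return R, t
```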

Autonomous vehicle navigation systems that utilize smart sensors will allow for distributed processing within the vehicle and will dramatically reduce the graphics horsepower required in the navigation control unit. Smart sensors will move functionality such as edge detection, corner detection, 3D segmentation, normal-vector determination for all objects, and object identification for localization into the sensor itself, covering features including, but not limited to, road edges, signs, pavement markings, signals, and overhead bridge structures.
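As one illustration of the kind of computation that can move into the sensor, the sketch below estimates a per-point surface normal by fitting a plane to the point’s local neighborhood with PCA, a standard point-cloud technique. It is a generic sketch, not Facet’s implementation; the brute-force neighbor search is kept only for clarity.

```python
import numpy as np

def estimate_normal(points: np.ndarray, query_idx: int, k: int = 16) -> np.ndarray:
    """Estimate the surface normal at one point of a point cloud.

    Standard PCA plane fit: the normal is the eigenvector of the
    neighborhood covariance matrix with the smallest eigenvalue.
    `points` is an (N, 3) array of XYZ returns.
    """
    query = points[query_idx]
    # Brute-force k-nearest-neighbor search; a real sensor would use a
    # spatial index such as a k-d tree or voxel grid.
    dists = np.linalg.norm(points - query, axis=1)
    neighbors = points[np.argsort(dists)[:k]]

    centered = neighbors - neighbors.mean(axis=0)
    cov = centered.T @ centered / k
    eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigenvectors[:, 0]                      # smallest-eigenvalue direction

    # Orient the normal toward the sensor origin so signs are consistent.
    if np.dot(normal, -query) < 0:
        normal = -normal
    return normal
```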

[Figure 1]

Figure 1 shows a LiDAR point cloud for a scene along a roadway. A 700,000 point-per-second dumb LiDAR unit yields roughly 100,000 points per second in the vehicle’s forward-facing view. Processing these 100,000 points per second requires roughly 10 GGOPS (giga graphics operations per second) for simple functions and 75 GGOPS for higher-order autopilot and autonomous functions.

[Figure 2]

Figure 2 shows a LiDAR point cloud along the same roadway, this time from a forward-facing solid-state LiDAR unit capable of generating 35 million points per second. Processing these 35 million points per second requires over 2 TGOPS (tera graphics operations per second) for simple functions and over 30 TGOPS for higher-order autopilot and autonomous functions.
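A back-of-envelope model shows why the burden grows this quickly: if the per-point processing cost is roughly constant, the required throughput scales linearly with point rate. The per-point operation counts in the sketch below are assumptions chosen only to roughly match the order of magnitude of the figures quoted above.

```python
# Back-of-envelope scaling: required throughput grows linearly with point rate.
# The per-point operation counts are illustrative assumptions, not measured
# costs for any particular processing pipeline.

SIMPLE_OPS_PER_POINT = 1.0e5        # assumed cost of "simple" functions
HIGHER_ORDER_OPS_PER_POINT = 7.5e5  # assumed cost of higher-order functions

def ggops(points_per_second: float, ops_per_point: float) -> float:
    """Required graphics operations per second, expressed in GGOPS."""
    return points_per_second * ops_per_point / 1e9

for rate in (1.0e5, 35.0e6):  # dumb LiDAR forward view vs. solid-state unit
    print(f"{rate:,.0f} pts/s: "
          f"~{ggops(rate, SIMPLE_OPS_PER_POINT):,.0f} GGOPS simple, "
          f"~{ggops(rate, HIGHER_ORDER_OPS_PER_POINT):,.0f} GGOPS higher-order")

# 100,000 pts/s    -> ~10 GGOPS simple,    ~75 GGOPS higher-order
# 35,000,000 pts/s -> ~3,500 GGOPS simple, ~26,250 GGOPS higher-order
```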

[Figure 3]

Figure 3 shows the same roadway scanned by the 35 million point-per-second LiDAR, but this time the point cloud stream is replaced by, or augmented with, object vectors from the sensor. The processing burden on the autopilot or autonomous control system drops from 30 TGOPS for the higher-order functions to about 100 MGOPS (mega graphics operations per second), a 300,000x decrease.
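The reduction follows from the change in data volume: instead of millions of raw returns per second, the control unit receives a short list of object descriptors per frame. The sketch below shows one hypothetical shape such an object vector could take; the type name and fields are illustrative assumptions, not Facet’s actual interface.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObjectVector:
    """Hypothetical per-object descriptor emitted by a smart LiDAR sensor
    in place of (or alongside) the raw point stream."""
    object_class: str                      # e.g. "road_edge", "sign", "pavement_marking"
    centroid: Tuple[float, float, float]   # position in the sensor frame, meters
    extents: Tuple[float, float, float]    # bounding-box dimensions, meters
    orientation: Tuple[float, float, float, float]  # unit quaternion
    normal: Tuple[float, float, float]     # dominant surface normal
    confidence: float                      # 0.0 - 1.0

# A frame containing tens of such descriptors is orders of magnitude smaller
# than a frame of millions of raw returns, which is what lets the control
# unit's processing budget fall from the TGOPS range to the MGOPS range.
frame: List[ObjectVector] = [
    ObjectVector("sign", (12.4, -3.1, 2.2), (0.8, 0.05, 0.8),
                 (0.0, 0.0, 0.0, 1.0), (-1.0, 0.0, 0.0), 0.97),
]
```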

Facet smart sensor architecture and functionality are related to one or more of certain Facet-owned or Facet-licensed US and international patent rights. facet-tech.com/patent-marking/