Interview: Autonomous vehicles are being trained to "see" faster

Posted Mar 4, 2018 by Tim Sandle
Scale API has announced the launch of its Sensor Fusion Annotation API for advanced 3D perception, covering LIDAR (Light Detection and Ranging) and RADAR (Radio Detection and Ranging) data. The goal is to help self-driving cars process images in real time.
File photo: A self-driving car on the road in Mountain View
Grendelkhan (CC BY-SA 4.0)
It is anticipated that autonomous cars will free up over a billion hours humans now spend driving; save lives lost to collisions caused by human emotion and error; and generate new business revenue. As part of these developments, the company Scale API is accelerating computer vision training through a platform that combines machine learning with human insight.
The aim is to shape artificial intelligence to become more instinctive and "street smart." Scale API has recently launched its Sensor Fusion Annotation API for advanced 3D perception. The technology sets out to elevate artificial intelligence beyond simple collision avoidance by providing more comprehensive object recognition capabilities.
To understand more about the technology and developments with autonomous cars, Digital Journal spoke with Alexandr Wang, founder and CEO of Scale API.
Digital Journal: Why is there so much interest in LiDAR?
Alexandr Wang: LiDAR sensors use lasers to help cars understand where they are in their environment. They inform the car of how far away objects are, how fast they may be moving, and where they are located relative to the car’s position on the street. This information can then be used to render 3D point cloud data.
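To make the principle concrete, here is a minimal sketch of how a LiDAR return becomes a point in a 3D point cloud: the sensor times a laser pulse's round trip to get range, then combines that range with the beam's angles to produce Cartesian coordinates. The function names are illustrative, not part of any real LiDAR SDK.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def pulse_distance(round_trip_s: float) -> float:
    """Range from a laser pulse's round-trip time (time of flight).

    The pulse travels to the object and back, so divide by two.
    """
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def return_to_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert one range/angle return into an (x, y, z) point-cloud point."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)
```

Sweeping many such beams across the scene, millions of times per second, is what yields the dense 3D point clouds that perception algorithms are trained on.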
The interest in LiDAR is spurred by the rapid advancement in its technology and affordability. Some solid-state LiDARs have the potential to cost as little as a few thousand dollars, or even a few hundred (GM Cruise, for example, acquired Strobe to pursue this). As LiDAR becomes more cost-effective, we can expect Level 4 autonomous fleets equipped with it, since consumers won’t have to shoulder the high ticket price themselves.
DJ: Is LIDAR the optimal technology for driverless cars?
Wang: LiDAR will play a critical role in making self-driving vehicles a reality. When it comes to autonomous vehicles, LiDAR sensors can help make them significantly safer, even if cars can potentially drive without it.
There are exceptions, however. Rain, snow and other treacherous weather conditions can cause LiDAR to fail. That being said, when LiDAR performs well, it excels where other technologies fall short. It is extremely useful for near-field understanding of surrounding objects.
While we see self-driving cars as the initial use case, LiDAR and RADAR are important for nearly all fields of robotics and computer vision, augmenting image data with valuable 3D perception. That includes use cases like drones, surveying imagery, and other robots (delivery robots, manufacturing robots, security robots, and more). For example, Liberty Mutual uses Scale API to augment its drone efforts, building machine learning algorithms to automatically detect roof damage from drone imagery.
Visualization of section of Sierra Nevada forest structure developed by the SNAMP Spatial Team using Lidar data.
University of California Cooperative Extension Forestry
DJ: What are the main competitor technologies to LiDAR?
Wang: That’s an interesting question. While RADAR is also an object-detection system, it complements LiDAR rather than competing against it. RADAR is less precise but has a longer range. Used together, the two technologies can build a fuller 3D perception of the scene.
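The complementary roles Wang describes can be sketched as a simple fusion rule: trust the more precise LiDAR return when the object is within LiDAR's usable range, and fall back to the coarser but longer-reaching RADAR beyond it. The range threshold and data shapes here are illustrative assumptions, not any production fusion algorithm.

```python
from dataclasses import dataclass
from typing import Optional

# Assumed usable range for a typical automotive LiDAR; real systems vary.
LIDAR_MAX_RANGE_M = 120.0

@dataclass
class Detection:
    range_m: float   # distance to the detected object, metres
    source: str      # which sensor produced this detection

def fuse(lidar: Optional[Detection], radar: Optional[Detection]) -> Optional[Detection]:
    """Prefer the precise LiDAR return for near-field objects;
    fall back to RADAR, which is coarser but reaches farther."""
    if lidar is not None and lidar.range_m <= LIDAR_MAX_RANGE_M:
        return lidar
    return radar
```

Real sensor-fusion stacks go much further (probabilistic filtering, cross-sensor association), but the division of labour is the same: each sensor covers the other's weak spot.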
Companies like Tesla aren’t direct competitors, but they are building new, higher-resolution radar systems; their approach relies solely on cameras and RADAR, however.
DJ: Does LiDAR have any drawbacks?
Wang: LiDAR does still have a few drawbacks. As I said before, it performs best in clear, well-lit conditions but may fall short when it’s snowing or foggy. For that reason, most autonomous vehicles employ several sensor types in combination for more accurate detection.
DJ: What’s the idea behind the Sensor Fusion API?
Wang: The largest bottleneck for the development of high-performing perception algorithms is access to high quality labeled data for training. With the launch of Scale’s Sensor Fusion API, we’re delivering the only solution that’s able to handle full sensor fusion labeling in 3D, which is extremely valuable for any autonomous vehicle or robotics company.
It’s very easy to send data to Scale API automatically using our Sensor Fusion and Image Annotation APIs. The labeled data is then returned to our customers automatically via callbacks. Some of our customers have hooked up their integrations so that once a disengagement occurs on one of their vehicles, that data gets automatically sent to us to label. Once the data is sent back, a trigger signals the retraining of the algorithms. Companies like Voyage and Embark have been waiting for this technology and are incredibly excited that we can partner with them to provide it.
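The workflow Wang outlines (submit a capture for labeling, receive labeled data via callback, trigger retraining) can be sketched as below. The field names and task type are hypothetical illustrations of the pattern, not Scale's actual API schema.

```python
import json

def build_annotation_task(point_cloud_url: str, callback_url: str) -> str:
    """Package a captured LiDAR scan as a labeling-task payload.

    Hypothetical schema: a task type, the sensor data to label,
    and the URL where labeled results should be delivered.
    """
    return json.dumps({
        "task_type": "sensor_fusion_annotation",
        "attachment": point_cloud_url,
        "callback_url": callback_url,
    })

def on_labels_received(payload: str) -> bool:
    """Callback handler: when labeled data arrives, decide whether
    to signal retraining. Returns True if any annotations came back."""
    task = json.loads(payload)
    annotations = task.get("response", {}).get("annotations", [])
    return len(annotations) > 0  # True -> kick off a retraining job
```

The key design point is the callback: the customer's pipeline never polls for results; labeled data pushes itself back in and the retraining trigger fires from there.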
DJ: How did Scale develop the technology?
Wang: Scale’s engineering team is made up of machine learning, computer science, and electrical engineering experts from organizations like MIT, CMU, Harvard, Stanford, Google, and Facebook. We’ve worked closely with our partners like Alphabet to develop advanced proprietary technology which allows for the generation of the highest quality data.
DJ: Which companies are you working with?
Wang: Companies that use Scale’s API include GM Cruise, Uber, nuTonomy, Alphabet, Embark, Voyage, Starsky Robotics and many more.
DJ: By which date in the future do you think self-driving cars will be the norm?
Wang: It’s extremely hard to say when the technology will be widely available in every geography. It will likely not be long, say 1-2 years, before self-driving cars are available in fleets in certain US cities. It will take longer, say 3-5 years, for these fleets to reach other geographies due to the investment required. Right now, most self-driving technology relies on HD maps for the cars to be able to perform, and these maps are expensive to generate and maintain. Finally, it may be even longer before consumers can buy a self-driving car themselves, since the technology will have to be fully baked.