Scoping the Latest Advancements and Companies in the Autonomous Vehicle Industry

Naila Moloo
9 min read · Aug 21, 2021

Imagine getting into a car, telling it to take you to the nearby mall, and sitting back and relaxing as you are driven to your destination. Self-driving cars are becoming more and more advanced, and companies are working to make that scenario a reality. It's not sci-fi anymore, either: we're nearly there.

The concept of autonomous vehicles is straightforward: build a car that detects and tracks nearby objects and reacts accordingly. This would substantially increase safety on the roads, seeing as 94% of serious crashes are due to human error. Estimates suggest car accidents could decrease by up to 90%, preventing as much as $190 billion in damages and health costs and, most importantly, saving thousands of lives. The efficiency of self-driving cars would also reduce vehicle pollution, road congestion, and environmental impact.

AV Levels

Before delving into the latest advancements within the field, we first need an understanding of the six levels of autonomy, from Level 0 to Level 5.

Level 0: The driver is in complete control of the vehicle, and no functions are automated.

Level 1: The car can intervene slightly in driving to keep you safe, but the driver still controls most functions.

Level 2: At least two primary functions are automated, and the features communicate with each other. These systems still require drivers to keep their eyes on the road. Most widely available self-driving cars fall into this category.

Level 3: The car can drive itself under limited conditions, handling steering, accelerating, and passing other cars on its own. The driver must still remain alert and ready to take over.

Level 4: The rider is not required to take over driving at any time, and the car is almost completely autonomous.

Level 5: For now, this is theoretical. The car can drive itself under any conditions and on any road, and may not even have a steering wheel or pedals.

Tesla’s Technology and Its Relationship With Lidar

Tesla is without doubt a leader in self-driving cars. Their technology is built on a deep neural network fed by cameras, ultrasonic sensors, radar, and an onboard computer for detection. A defining feature of Tesla's approach is that they don't use lidar. Lidar stands for light detection and ranging; it fires pulsed laser light and measures how long the pulses take to return, giving the distance to objects around the car. Radar works on the same principle, but sends out radio waves instead of light waves.
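The ranging math behind both sensors is the same time-of-flight idea. Here is a minimal sketch (my own illustration, not any vendor's actual code) of how a round-trip pulse time becomes a distance:

```python
# Both lidar (light) and radar (radio waves) estimate range from the
# round-trip time of a pulse traveling at the speed of light.
C = 299_792_458.0  # speed of light in m/s

def range_from_round_trip(t_seconds: float) -> float:
    """Distance to a target, given a pulse's round-trip time."""
    # The pulse travels out and back, so the one-way distance is half the path.
    return C * t_seconds / 2.0

# A lidar return arriving 1 microsecond after emission:
print(round(range_from_round_trip(1e-6), 1))  # ~149.9 m
```

The same formula explains lidar's precision requirements: resolving centimeters of range means resolving fractions of a nanosecond in timing.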

Nearly every other self-driving company uses lidar, but the catch is that you have to pre-map an entire environment, including the traffic lights, the lanes, and their connections, for every single location. This is difficult to deploy globally, because mapping every environment is not realistic, and it is also a big contributor to cost.

In Tesla’s case, the computer vision system has to be very precise and function without predefined information about the location it is navigating, relying solely on video feeds. The company's belief is that although this is challenging, once you get it to work it can be scaled all over the world. Tesla is already developing this successfully, and has begun to ship cars without radar, which is used in more or less every other self-driving car today. Tesla believes its vision system is now accurate enough to depend purely on cameras.

“Lidar is a fool’s errand. Anyone relying on lidar is doomed. Doomed! [They are] expensive sensors that are unnecessary. It’s like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it’s ridiculous, you’ll see.” -Elon Musk

Lidar Companies

Luminar Technologies is likely the biggest and most well-known lidar company, valued at $7.7 billion. What's special about their sensors is that they use a longer wavelength of laser light (1,550 nanometers, versus the 905 used by most lidar). This allows the technology to spot even small and low-reflectivity objects at a range of 250 meters, which is pretty huge! Tesla itself was experimenting with lidar from Luminar when looking at safety and development.

Some other significant lidar companies include Ouster, which offers over 75 beam configurations, has designed a compact lidar with no moving parts, and is targeting the mainstream vehicle market with a roadmap that will reduce the price to under $10. There is also Velodyne, which has a variety of sensor lines: Velarray, a sensor that produces a directional image; Alpha Puck, a sensor with a range of up to 300 meters; VelaDome, a compact and embeddable lidar; and Velabit, a micro lidar.


Waymo is a subsidiary of Alphabet, the parent company of Google. It is an example of a company that uses lidar, because they believe it can fill the gaps left by neural networks. Their vehicles have lidar sensors placed all around the outside that send millions of laser pulses in different directions, measuring how long the pulses take to bounce back off objects.

Waymo operates a commercial self-driving taxi service called “Waymo One” in the greater Phoenix, Arizona area, using fully driverless vehicles. People can use it to get around several cities in metro Phoenix. However, it is not yet in general use around the world, because it will take time to map different areas (which can partly be attributed to the use of lidar).


Uber, which sold its driverless car subsidiary to the start-up Aurora in late 2020, used lidar from Velodyne mounted on top of the car. A concern here is that snow and fog can block the lidar's lasers, and its accuracy decreases with range, though for shorter distances it's decent. Uber's cars also carried a front-mounted radar and short- and long-range optical cameras. Looking to the future, several people at Uber, like Eric Meyhofer, head of the advanced technology division, think that within five years lidar won't be needed because cameras and radar will be accurate enough, but for now he still considers it a necessity.


Mercedes-Benz recently came out with Drive Pilot, a new driving assistance system. It falls into the Level 3 AV category and lets the car do all the driving without continual supervision. It was created as an evolution of the company's earlier Distronic system, launched in 2013, which used cameras and radar; Drive Pilot differs in its addition of lidar. With high-definition mapping and improved cameras, Drive Pilot is planned to debut on German highways later this year.


The Cruise Origin is a self-driving car being developed to achieve autonomy Level 4. Custom-built on the foundation of the Chevy Bolt, the Cruise Origin will have no steering wheel, mirrors, or pedals. The design is ‘purely designed around the rider’ and enables a spacious cabin with doors three times larger than the average car door. Cruise now has over $9 billion in funding. The interior is made for shared rides, a goal of the company because it could put fewer cars on the road. If you could order a shared car that would drive up minutes later, why own one? Household transportation costs could drop drastically, and onboard screens could display a drop-off and pick-up schedule for the passengers.

Although the sensor suite may change before the vehicle goes into production, Cruise's technology is similar to the standard configuration of radar, cameras, and lidar. The computing hardware is cooled via the battery system, allowing for quiet rides and ideal temperatures.


Lastly, there’s Ford, one of Tesla’s largest competitors. Ford and Argo AI’s self-driving vehicles are built on the Escape Hybrid platform and include high-resolution cameras alongside advanced radar and lidar. An underfloor liquid-cooled battery helps reduce gasoline consumption and supports the power needs of the self-driving system. As the design readies for commercialization, Ford will test its vehicles across cities like Detroit, Palo Alto, DC, and Austin.

Examining Ethics

Something else to consider is the ethics behind self-driving cars: if an accident is unavoidable, who should a vehicle save and who should it harm? There is a lot of debate around this topic, but one way to reason about such decisions is with a Partially Observable Markov Decision Process (POMDP), a mathematical model valuable for autonomous vehicle decision making. In a plain Markov Decision Process (MDP), the agent fully observes its environment and decides which action to take based on the present state, prior states, and predicted future states. A POMDP, on the other hand, works with probabilities, because knowledge of the environment is only partial. Rather than making decisions with complete knowledge, the agent maintains a probability distribution over possible states and weighs how likely each action is to change the state of affairs.
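To make the "probability over states" idea concrete, here is a toy sketch (my own hypothetical scenario, not a production AV stack) of the Bayesian belief update at the heart of a POMDP. A full POMDP also folds in a state-transition step; this static-scene version omits it for clarity:

```python
# Hidden states: is there a pedestrian behind the parked car, or is it clear?
states = ["pedestrian", "clear"]

# Observation model: P(sensor reports a "blob" | true state).
# Sensors are imperfect, so both states can produce the same reading.
p_blob_given = {"pedestrian": 0.9, "clear": 0.2}

def update_belief(belief: dict, saw_blob: bool) -> dict:
    """One Bayesian belief update for a static scene (no transition step)."""
    posterior = {}
    for s in states:
        likelihood = p_blob_given[s] if saw_blob else 1.0 - p_blob_given[s]
        posterior[s] = likelihood * belief[s]
    z = sum(posterior.values())  # normalize so it stays a distribution
    return {s: p / z for s, p in posterior.items()}

belief = {"pedestrian": 0.5, "clear": 0.5}     # start maximally uncertain
belief = update_belief(belief, saw_blob=True)  # sensor reports a blob
print(round(belief["pedestrian"], 3))  # 0.818
```

One ambiguous sensor reading shifts the belief from 50/50 to roughly 82% "pedestrian"; the car never claims certainty, it just acts on the updated probabilities.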

Reinforcement learning, in which a system learns from rewards and penalties, is one of the most common training strategies. Solving a POMDP requires selecting the optimal policy under uncertainty, discounting future reward states. A system needs training across many environments so it can make decisions based on past interactions and their outcomes (reward or penalty). Essentially, this is extrapolation: drawing on the past for proficiency in the present.
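The reward-and-penalty loop described above can be sketched with the simplest form of reinforcement learning, tabular Q-learning. The states and actions here are hypothetical, invented just to show the update rule and the discount factor:

```python
# Bare-bones tabular Q-learning: Q(s, a) estimates the long-run value of
# taking action a in state s, learned from rewards and penalties.
actions = ["brake", "continue"]
Q = {("obstacle_ahead", a): 0.0 for a in actions}

alpha, gamma = 0.5, 0.9  # learning rate; discount for future reward states

def q_update(state, action, reward, next_best):
    # Nudge Q(s, a) toward the observed reward plus discounted future value.
    Q[(state, action)] += alpha * (reward + gamma * next_best - Q[(state, action)])

# Simulated experience: braking near an obstacle earns a reward,
# continuing incurs a penalty (a "consequence").
for _ in range(20):
    q_update("obstacle_ahead", "brake", reward=+1.0, next_best=0.0)
    q_update("obstacle_ahead", "continue", reward=-1.0, next_best=0.0)

best = max(actions, key=lambda a: Q[("obstacle_ahead", a)])
print(best)  # brake
```

After repeated experience the table encodes the safer choice; real systems replace the table with a neural network, but the reward-driven update is the same idea.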

Overall, self-driving cars are showing a lot of promise, and in the next decade our transportation industry will look very different. Check out some of these resources if you want to learn more!

Thank you so much for reading this! I’m a 15-year-old passionate about sustainability, and am the author of “Chronicles of Illusions: The Blue Wild”. If you want to see more of my work, connect with me on LinkedIn, Twitter, or subscribe to my monthly newsletter!