January 15, 2023
20 mins

Intro to self-driving cars

Originally published in October 2017 here

Introduction

Let’s start with three pieces of little-known automotive history:

In 1995 a typical desktop computer had 4MB of RAM and a 33MHz processor running Windows 3.1. In the same year a team led by the German engineer Ernst Dickmanns, in collaboration with Mercedes, built a car that drove almost 160km (!) continuously without any driver intervention on a trip from Munich to Copenhagen. They had just shown that autonomous cars can work even with the comparatively slow computer hardware available at the time.

Almost a hundred years earlier - at around 1910 - almost 40% of American cars were electric and most people expected electric cars to be the future of mobility. Only 20% of cars ran on gasoline at that time, with the remaining 40% being steam-powered.

The first car-sharing model emerged in Switzerland in 1948, with the first larger commercial offerings appearing in the 1990s in Germany and Switzerland. Berlin-based StattAuto, for example, had hundreds of cars available to its members.

So even though all of these developments have been going on for decades, the traditional car OEMs are currently being seriously challenged by three trends:

  • Human drivers to autonomous vehicles
  • Gasoline to electric vehicles
  • Ownership to mobility services

The effect of these trends will change the car industry more in the next 10-20 years than it has changed in the last 50 years combined.

Human drivers to autonomous vehicles

More than 20 years after Ernst Dickmanns’ record drive between Munich and Copenhagen, Google seems to have taken the lead in this space. Last year Google drove over 500,000 fully autonomous miles on public roads in California with only 124 human interventions required. All other OEMs drove below 10,000 miles each, at least in California. While Google currently seems to lead the race towards fully automated driving, Tesla, BMW, Audi and other OEMs already have very sophisticated assistants active in their existing fleets. Among other things, those assistants are capable of driving the car hands-free on a highway in good weather conditions. Tesla’s assistant - called Autopilot - for example has driven over 222 million miles without driver intervention across all Teslas sold. It seems the challenge of autonomous driving has entered the well-known R&D cycle of the computer industry: the core capabilities have mostly been invented - we just need a few more years of the typical “smaller, cheaper, better” to make autonomous vehicles technically and economically viable.

Gasoline to electric vehicles

Electric vehicles currently have a market share of below 1%. While this is still far from the 40% market share they had in 1910, they are definitely on the rise.

This became clear when Tesla opened up pre-orders for the more affordable Tesla Model 3: over 400,000 customers paid 1,000 USD in 2016 to get on the waiting list for a car to be delivered in 2018. This is impressive and unheard of in the car industry. However, Tesla will have massive problems building up the production capacity to deliver those cars, and even if it manages to do so, this is still less than 1% of the roughly 77 million cars sold worldwide every year.

Still, in some countries like Norway electric vehicles have already reached a market share of over 20%. Furthermore, China is expected to heavily support electric vehicles with regulation, which may speed up the shift to electric vehicles significantly. For now the market remains a niche, but let’s also not forget that low-volume products with high demand sometimes make it really big - a lesson Nokia CEO Olli-Pekka Kallasvuo had to learn in 2008 when he called the iPhone a “niche product”.

Ownership to mobility services

The trend towards mobility services is probably the most developed already. UBER in the Western world and Didi Chuxing in the East (mostly China) have heavily increased the usage of these “on demand” mobility services. They were able to undercut taxi prices by leveraging existing consumer cars, using car-pooling and avoiding taxi regulatory costs. They also managed to significantly increase convenience for consumers with app-based hailing and automatic payment. Other models like the on-demand rental companies Car2Go, DriveNow or COUP have also proven popular with consumers. More and more consumers, especially in cities, no longer want to own a car and don’t even bother to get a driver’s license. For example, in the US 16-24 year olds are over 20% less likely to have a driver’s license than 30 years ago. BCG research for the WEF has shown that once these mobility services operate fully autonomously, they could reduce the required number of vehicles by almost 60%.

This threatens the position of the car industry in the value chain. If more and more customers use mobility services instead of owning their own car, car makers will change from B2C companies with their own pricing & branding power into B2B companies, basically becoming suppliers to UBER and other mobility service providers. This would eventually lead to lower margins for those companies and the risk of being replaced over the long term.

Autonomous vehicles are the most important trend

So which of these trends is the most important one? Which trend should the OEMs focus on? We argue in this article that the trend towards autonomous vehicles is probably the most important one to get right.

From a customer perspective, electric vehicles are not fundamentally better than gasoline cars. Yes, they have better acceleration, yes, they are better for the environment, and yes, they are quieter. On the other hand, they take longer to recharge, have less range and currently still cost far more. They are just a different trade-off, similar to a diesel compared to a regular gasoline car. You can also compare EVs vs. gasoline cars to a MacBook vs. a Windows laptop. MacBooks are exciting and maybe a little nicer than Windows laptops, but at the end of the day you can do the same things with them, so most people will buy the cheaper Windows laptop. Therefore, as long as the economics don’t work out, most people will still buy a gasoline car because they just want to get from A to B as cheaply as possible. EV adoption will follow the economics: once the total cost of ownership gets cheaper than that of gasoline cars, they will be adopted at scale. However, this still seems several years off and gives OEMs enough time to develop & scale their own offerings. A fast-follow strategy seems reasonable here.

Mobility services, on the other hand, are already having an impact on the bottom line of OEMs by reducing the demand to own a car, especially in urban areas. This development also has tremendous strategic weight as it threatens the position of OEMs in the value chain: it could turn OEMs from proud B2C companies into B2B suppliers of UBER and other mobility service companies. Still, mobility services are not the most important trend, because the key to winning the “mobility services war” is autonomous driving. Fully autonomous vehicles would reduce the costs of a model like UBER by over 60%, which gives the first company operating “robo taxis” a massive competitive advantage. This cost reduction will also lead to a massive increase in the adoption of those services because it makes them almost as cost-effective as (current) public transport.

Elon Musk outlined in his “Master Plan, Part Deux” how OEMs can utilize this development: once their cars are capable of driving autonomously, OEMs can let car owners earn money with their cars while they don’t need them, using the OEM’s own version of UBER as the default option. The OEMs could further incentivize this by packaging it into the financing deal for the car and by combining it with all necessary insurance. It is pretty much the same strategy Microsoft used to push its own web browser into the market: it just packaged the browser with Windows and managed to push Netscape Navigator out of the market. So yes, OEMs need to win in the mobility services sector to keep their position in the automotive value chain. But in order to win in mobility services, they need to win in autonomous driving first.

Furthermore, autonomous driving will be important even before full autonomy has been reached. A useful framework for discussing this is the set of automation levels defined by the Society of Automotive Engineers (SAE):

  • Level 0: Automated system has no vehicle control, but may issue warnings.
  • Level 1: Driver must be ready to take control at any time. Automated system may include features such as Adaptive Cruise Control (ACC), Parking Assistance with automated steering, and Lane Keeping Assistance (LKA) Type II in any combination.
  • Level 2: The driver is obliged to detect objects and events and respond if the automated system fails to respond properly. The automated system executes accelerating, braking, and steering. The automated system can deactivate immediately upon takeover by the driver.
  • Level 3: Within known, limited environments (such as freeways), the driver can safely turn their attention away from driving tasks, but must still be prepared to take control when needed.
  • Level 4: The automated system can control the vehicle in all but a few environments such as severe weather. The driver must enable the automated system only when it is safe to do so. When enabled, driver attention is not required.
  • Level 5: Other than setting the destination and starting the system, no human intervention is required. The automated system can drive to any location where it is legal to drive and make its own decisions.

The current state of technology for car OEMs is between Level 2 and Level 3, with current market leaders like Tesla & Mercedes expected to reach Level 3 in the next 1-3 years.
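To make the taxonomy easier to reason about in software, here is a small illustrative sketch in Python. The enum names and the attention helper are my own simplification of the levels above, not an official SAE artifact.

```python
# Illustrative encoding of the SAE levels above; the attention rule is a
# simplification of the definitions, not an official SAE artifact.
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def driver_attention_required(level: SAELevel) -> bool:
    """Levels 0-2 need constant supervision; at Level 3 the driver may look
    away but must stay ready to take over; at Levels 4-5 no attention is
    needed while the system is active."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_attention_required(SAELevel.CONDITIONAL_AUTOMATION))  # False (but driver must stay available)
```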

Freeing up (most of) the commute time somewhere above Level 3 will be the “iPhone moment” of the car industry. The iPhone enabled a full range of applications on the go in a convenient way that was previously only possible on a stationary device: email, browsing, messaging, music, video & more. You could do most of these things on feature phones as well, but not in a way that was viable for most of the population. The iPhone was the first true MVP for these applications. The same will be true for Level 3+ autonomous vehicles. Once the “Autopilot” feature works reliably, e.g. for the highway part of your daily commute, you will be able to do email, video, browsing etc. during that time. This is the true “hard” advantage of owning a (semi-)autonomous vehicle compared to the “soft” advantages of owning an electric vehicle discussed above. So while owning an EV vs. a gasoline car is like having a MacBook vs. a Windows laptop, owning an AV is like having an iPhone vs. a Nokia feature phone. This is why getting autonomous vehicles right is the most important objective for OEMs.

Technology required for Autonomous Vehicles

So how can car OEMs develop Autonomous Vehicles? What technologies & capabilities do they need to reach fully autonomous vehicles?

An AV is supposed to replace a human driver - so what does a driver typically do?

  • Sees and senses the situation on the road
  • Interprets and judges the situation
  • Handles brakes, steering wheel & gas

Seeing and sensing the situation on the road

Unfortunately it is not enough to put a 360-degree camera on the driver’s seat. Human eyes are surprisingly good at adapting to different circumstances like darkness, rain or fog, and cameras are not there yet. You will notice this whenever you try to take a picture in a bar without flash, even with your newest iPhone. Humans are also really good at perceiving depth and distance, at knowing the context of the current situation, and a lot more. While the road surface may just look wet, a human driver will know that it may be icy, because it is winter and she is driving on a morning in the north of Sweden.

Since no single camera is good enough for the task, the solution is to combine several cameras & sensors. There is no “gold standard” yet that is used by everybody in the industry, but most OEMs will use all or a selection of the following cameras & sensors:

Video Camera (mono or stereo)

Records a 2D or 3D video of the surroundings. Resulting images need to be interpreted by software and are not very accurate. Affected by weather, e.g. blind when snow is on the camera.

LIDAR

Uses lasers to “scan” the environment and measure distances. Much more accurate than stereo video cameras but also more expensive. Cannot see color (e.g. traffic lights) and is still affected by weather.

Radar

Very good at detecting objects and their distance regardless of weather. Can be short-range or long-range. Mostly targeted in only one direction. Cannot see color and cannot detect all kinds of objects.

Ultrasonic

Great for detecting objects very close to the car that may be in the blind spots of cameras or LIDAR.

GPS

Detects the position of the car. Sometimes complemented by odometry to improve accuracy.

Maps

Offline maps which can be complemented & updated via connectivity to a central server.

Connectivity

An LTE or similar module like in a mobile phone. The car needs to be able to update itself with the latest information about traffic and construction, and it also needs to be able to send information about the real world back to the server.

Inertial Measurement Unit (IMU)

A combination of accelerometers and gyroscopes that reports the car’s specific force. It basically provides the information a driver feels in their gut when braking or accelerating. Similar to the motion sensors in your mobile phone that you use to play games.
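In software, the readings from all of these sensors typically arrive as one timestamped bundle per cycle that is then handed to the fusion layer. The following is a purely hypothetical sketch of such a container; all field names and units are made up for illustration and do not reflect any OEM’s actual interface.

```python
# Hypothetical sketch of how readings from the sensors above might be bundled
# per time step before fusion. Field names, types and units are illustrative.
from dataclasses import dataclass, field

@dataclass
class SensorFrame:
    timestamp: float                                     # seconds since start of drive
    camera_image: bytes = b""                            # encoded frame from the video camera
    lidar_points: list = field(default_factory=list)     # (x, y, z) points in meters
    radar_objects: list = field(default_factory=list)    # (distance, relative_speed) pairs
    ultrasonic_cm: list = field(default_factory=list)    # near-field distances around the car
    gps_lat_lon: tuple = (0.0, 0.0)                      # raw GPS fix
    imu_accel: tuple = (0.0, 0.0, 0.0)                   # specific force in m/s^2
    imu_gyro: tuple = (0.0, 0.0, 0.0)                    # angular rates in rad/s

# One such frame would be produced many times per second and handed to the
# fusion and interpretation layers described in the next section.
frame = SensorFrame(timestamp=0.0, gps_lat_lon=(48.137, 11.575))
```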

Interpreting and judging the situation

The data from these sensors is then fused together, for example by so-called SLAM (Simultaneous Localization and Mapping) algorithms, so that the car has enough information about its surroundings and its context in a machine-readable form.
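To give a feel for what “fusing” noisy sensors means, here is a minimal sketch of a one-dimensional Kalman-style update that blends an odometry prediction with a noisy GPS reading. It is not a SLAM implementation, and the numbers are invented; real localization stacks work in 3D with far more sensors.

```python
# Minimal 1D sensor-fusion sketch: combine a dead-reckoning (odometry)
# prediction with a noisy GPS position, weighted by their uncertainties.
def kalman_update(estimate, variance, measurement, measurement_variance):
    """Blend a prediction with a new measurement, weighted by uncertainty."""
    gain = variance / (variance + measurement_variance)
    new_estimate = estimate + gain * (measurement - estimate)
    new_variance = (1 - gain) * variance
    return new_estimate, new_variance

position, variance = 0.0, 1.0              # initial belief about position along the road (meters)

odometry_steps = [1.0, 1.1, 0.9, 1.0]      # measured displacement per step
gps_readings = [1.3, 2.0, 2.8, 4.2]        # noisy absolute positions

for moved, gps in zip(odometry_steps, gps_readings):
    position += moved                      # predict from odometry
    variance += 0.2                        # odometry drift adds uncertainty
    position, variance = kalman_update(position, variance, gps, 0.5)
    print(f"fused position: {position:.2f} m (variance {variance:.2f})")
```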

This information then needs to be interpreted, and decisions need to be made based on that interpretation. The main technology required for this task is machine learning - or, more specifically, mostly computer vision. The problem with cars in the real world is that basically anything can & will happen at some point. There is no way to predict all potential situations on the road and program the computer accordingly. That is why the normal approach to programming a computer does not work here: the typical “if exactly this happens, then do exactly that” approach would be both too time-consuming to program and still not good enough for all potential situations on the road.

Machine learning approaches this problem in a different way: you show the computer as much data as you can from real-world driving situations, together with the appropriate behaviour in each of those situations. You then loosely simulate the human brain with neural networks to enable the computer to detect patterns in this vast amount of data. The computer will detect patterns on its own, like “the driver brakes at red lights”. Those patterns are fuzzy enough that the traffic lights can come in all kinds of shapes and forms. Basically, the computer programs itself with something like “if something (not exactly) like this happens, then do something (not exactly) like that”. Those self-derived rules get better the more data the computer sees. If a computer has never seen an elk on the street, it will not know what to do. However, if it has already seen 24,323 horses on the street, it will probably know that the appropriate reaction to an elk is similar to that for a horse.
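As a toy illustration of this “learn from examples instead of hand-coding rules” idea, the sketch below trains a tiny neural network to map camera frames to the action a human driver took (brake or don’t brake). The data here is random placeholder tensors and the network is far smaller than anything used in a real car; it only shows the mechanics, assuming PyTorch is available.

```python
# Toy sketch of learning from labeled driving data using PyTorch.
# Dataset, labels and network size are placeholders, not a real OEM pipeline.
import torch
import torch.nn as nn

# Tiny convolutional network mapping a camera frame to an action label
# (0 = keep going, 1 = brake). Real systems are vastly larger and also
# consume LIDAR, radar and map data.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for recorded driving data: camera frames plus the action the
# human driver actually took in each situation.
frames = torch.rand(64, 3, 96, 96)      # 64 RGB frames, 96x96 pixels
actions = torch.randint(0, 2, (64,))    # 0 = no brake, 1 = brake

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(frames)
    loss = loss_fn(logits, actions)     # penalize disagreeing with the human driver
    loss.backward()
    optimizer.step()

# The more (and more varied) real-world examples the model sees, the better
# its fuzzy "if something like this, do something like that" rules become.
```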

That is why it is so important for all car OEMs to get real-life driving data for their specific set of sensors. Tesla has an advantage here because it claims to have equipped its current cars with Level 5 capable sensors. Tesla is getting more data every day than any other OEM right now and plans to enable Level 5 driving once it has gathered enough real-world data and refined its algorithms sufficiently.

The brain of a self-driving car is this set of algorithms that fuses the sensor data and then interprets it using machine learning techniques. This is where the magic happens.

Handling brakes, steering wheel & gas

This part, called actuation, is relatively easy since it has mostly been solved already. Most modern cars already have electronically controlled steering, brakes & throttle. So all it takes is to correctly calculate the amount of throttle/brake/steering needed for the desired outcome. Computers are good at this since they have much better reaction times than humans and can adjust their output 30 times per second or more if needed.
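As a simple illustration of how such a control loop can look, here is a sketch of a PID controller that converts the gap between desired and actual speed into a throttle/brake command 30 times per second. The gains and the toy vehicle model are invented for illustration, not taken from any production system.

```python
# Hypothetical sketch of the actuation step: a basic PID controller turning
# "desired speed" into a throttle/brake command many times per second.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controller = PID(kp=0.5, ki=0.05, kd=0.1)
speed, target = 20.0, 25.0           # current and desired speed in m/s
dt = 1.0 / 30.0                      # control loop running 30 times per second

for _ in range(90):                  # simulate 3 seconds
    command = controller.step(target - speed, dt)
    command = max(-1.0, min(1.0, command))   # clamp: +1 full throttle, -1 full brake
    speed += command * 2.0 * dt              # toy vehicle response model
print(f"speed after 3 s: {speed:.1f} m/s")
```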

The road to Level 5 autonomous vehicles

It seems pretty clear that we will get to Level 5 autonomous driving in 5-15 years. The interesting question is which road we will take to get there.

There are currently two different schools of thought in the market: the traditional OEMs want to continuously improve their advanced driver assistance systems with more & more features until they have reached full autonomy. Google has chosen a different approach, starting from scratch and aiming directly for Level 4/5 autonomy. Chris Urmson, former Director of Self-Driving Cars at Google, framed it like this in his TED talk: “You cannot assume to be able to fly, if you always jump”.

However, at the end of the day the difference between those two approaches is a bit arbitrary. Both approaches will end up with a similar set of hardware & software, as described in the chapter above. The difference is just that Tesla & the other OEMs already release versions of their cars with Level 2-4 autonomy, with the hardware in some cars not being ready for full Level 5 autonomy, while Google probably won’t start before Level 4, with hardware that is fully capable of Level 5 as well.

While Google’s approach of building a full AV from scratch seems great in theory, it has proven to be more complicated than expected. The release of a real product has been delayed several times already, and many high-profile employees have recently quit and started their own companies. Some of those delays are due to the complexity of building the car itself - not because of problems with the AV brain. Google has now rebranded its effort as WAYMO and wants to work more closely with OEMs. Currently the approach taken by Tesla seems to be the fastest way to bring value to its customers. Since the end of 2016, all delivered cars are supposed to be capable of Level 5 autonomy hardware-wise - even though the software is still somewhere between Level 2 and Level 3 autonomy. It remains to be seen whether Elon Musk’s assumption that the hardware is “Level 5 capable” will turn out to be correct, but assuming it is, Tesla can now optimize its software against a stable hardware base and will also get more real-world driving data from its exact set of sensors than any other OEM or technology provider in the world. As mentioned above, this is important due to the nature of how the “brain” of an AV is developed - it is not engineers implementing all the rules themselves. It is engineers developing a learning algorithm and then feeding it with real-world driving data. This approach also seems reasonable because even Level 3 autonomy will have “hard” advantages for customers: if you are able to do something else even for only half of your daily commute, that already frees up a lot of time for daily commuters - the core demographic of car owners.

The downside of the “continuous improvement” model is that there will be problems with humans trusting their half-developed autopilots & assistants too much, resulting in new types of accidents and problems with the public acceptance of those technologies. Still, it is very likely that those problems will be overcome because the net effect of those new technologies will still be to decrease accidents - not increase them. There are also lots of ideas for how assistants can help prevent those problems during this phase before reaching Level 5 autonomy. For example, Nissan is developing a system where “human call centers” basically take over driving the car when the autopilot encounters a situation it cannot handle yet. NVIDIA simply plans to alert the driver when they are required again. Toyota plans to have a “Guardian Angel” permanently running in the background, taking over control of the car if required. It looks like the “continuous improvement” model is the way to go.

Conclusion

The development of self-driving cars is the most important change in the automotive industry since the invention of the car itself. It is a question of "when" and not "if". Autonomous Vehicles have the potential to impact the car industry as much as the iPhone impacted the phone industry. I will discuss how OEMs can avoid becoming the next Nokia in my next article :)