Autonomous Driving

This page is my attempt to explain what autonomous driving is and how it works. I’ve tried to make a complex subject easier to understand without oversimplifying it. The emphasis is on Tesla’s implementation. There are three sections:

  • Part 1: What is Autonomous Driving?
  • Part 2: How Does It Work?
  • Part 3: The Future

Part 1: What is Autonomous Driving?

I’m retired from a career in software development and know enough about autonomous driving to know that I don’t know enough about it. Please leave your comments below. What did you like? What did you not understand? What did I get wrong?

I hate driving! Driving on highways is boring. Sitting in traffic jams is frustrating. Surviving rush hour traffic is scary.

Even though I loathe driving, I recently completed a five-year road trip. Together, my truck and RV were 52’ long, 13’ tall, and weighed 12 tons. Yet, I hauled it to 48 of the United States plus two trips to Canada and two trips to Mexico. Imagine 50,000 miles of white-knuckle driving!

I do not want to share the road with some old codger. Unfortunately, I’m getting to be an old codger!

I decided I wanted a car that will drive me!

Definition

The Society of Automotive Engineers (SAE) defines six levels of driving automation, ranging from 0 (fully manual) to 5 (fully autonomous). These levels have been adopted by the U.S. Department of Transportation. The six levels are:

  • Level 0: No Driving Automation
  • Level 1: Driver Assistance
  • Level 2: Partial Driving Automation
  • Level 3: Conditional Driving Automation
  • Level 4: High Driving Automation
  • Level 5: Full Driving Automation

Level 0 – No help: No automation at all. Fully manual driving. That’s all we had a few decades ago.

Level 1 – No Feet: All but the lowest-priced cars sold in the United States today have at least Level 1 automation. Examples are adaptive cruise control and automated emergency braking. Some new cars, including models from Audi, BMW, Ford, Kia, Mercedes-Benz, Volvo, Nissan, and Infiniti, have auto-steering that helps keep the car reasonably centered in its lane on highways.

Level 2 – No Hands: As of this writing, several companies sell cars with Level 2 automation. For example, GM has Super Cruise and Tesla has Autopilot. More will be released soon. These semi-autonomous systems handle the steering, acceleration, and braking on highways but require that the driver be alert and ready to take control at any time.

Level 3 – No Eyes: Many companies, including GM, Tesla, BMW, Mercedes, and Audi, have prototypes for Level 3 automation. Mercedes was recently approved for Level 3 driving in Germany and in the state of Nevada under very limited circumstances. The others have not yet received regulatory approval. Level 3, when available, will allow eyes-off driving in certain conditions, such as on limited-access highways.

Level 4 – No Brain: A few companies have Level 4 autonomous cars today, but all are “geofenced” to very restricted areas. That means that they are fully autonomous but only operate inside tiny zones using HD maps. A prime example is Waymo operating in specific parts of Phoenix, Arizona. They work well in tightly constrained environments, like a college campus or a bus route. All current systems are for commercial use only, mainly as taxis or bus services.

Level 5 – No Human: A few companies are testing prototypes with Level 5 automation. All of them are geofenced to very specific, limited areas, such as a bus route. Some do not even have a steering wheel or a brake. These will soon be common in highly constrained environments, such as college campuses. This level of automation will eventually become common but will be geofenced for the foreseeable future.

Is It Safe?

Are autonomously driven cars safer than human-driven cars? Yes!

Would you know that from watching the news? No!

Whenever an autonomous car is involved in an accident, it makes the national news. Is it common? No. In 2020, there were 38,680 deaths in automobile crashes nationwide. That’s more than 100 deaths per day! If even one of those involves a car with autonomous driving capabilities, it makes the evening news. The other 99.99% of crashes never make the national news unless many cars were involved. This lopsided reporting gives a false impression that autonomously driven cars aren’t safe.

The goal is to make autonomous cars much, much safer than human-driven cars. Current statistics show that human-driven cars are involved in approximately six times as many accidents as autonomously driven cars. That difference improves every year.

Imagine a future world where almost all cars are autonomous. If they caused one death every day, would that be good? Of course, it would. Statistically, 1.35 million people are killed each year on roadways around the world. That’s approximately one every 23 seconds! If that were reduced to one per day, it would be roughly 3,700 times safer!

Cars are never going to provide much better physical crash protection than they do today. In the United States, all cars must have passive collision protection (airbags, seat belts). All cars must have crumple zones and pass rigorous testing. Crashes due to mechanical failure are rare. The problem now is the “nut behind the wheel”. Drivers often text while driving, even though that’s illegal in nearly every state. They drive far above the legal speed limit. They drive under the influence of alcohol and drugs. They fall asleep while driving. Automated cars don’t do that!

Autonomous cars have a 360-degree view of their environment which they reevaluate many times per second. They have no “blind spots”. They have excellent night vision and aren’t blinded by the headlights of oncoming traffic. They know more about their surroundings than I do! They already drive better than most humans, including me.

“I Heard They’re Dangerous”

Many people absolutely hate autonomous driving cars and claim that they are dangerous and deadly. On the other hand, some people love autonomous driving cars and predict that they will soon be ubiquitous. Neither is right. The truth is hiding somewhere in the middle.

It’s human nature to fear anything we don’t understand. But autonomous driving is worse than that. Some people are consciously distorting the truth. One example is Consumer Reports, which stated in an article that Teslas will drive with no one in the driver’s seat. This is absurd! Their claim was quickly discredited by many sources. Every Tesla requires that the driver’s seatbelt be buckled, that weight be detected on the driver’s seat, and that the driver’s hands be detected on the steering wheel. Anyone who goes to that much trouble to defeat the safeguards is intentionally misusing the system. In its latest release, each Tesla uses its interior camera to confirm that the driver’s eyes are on the road and not looking at a cell phone.

I think that one of the problems is that Tesla refers to its driver assistance system as Full Self-Driving (FSD). Some claim that’s false advertising. According to Tesla: “All Tesla cars require active driver supervision and are not autonomous.” Personally, I’d prefer that they called it Enhanced Autopilot or Limited Self-Driving.

Another problem is that some drivers become complacent, expecting too much from the current technology. Sometimes the cars seem to actually think, which they don’t. This leads people to relax and over-rely on the technology. Fortunately, as of this writing, there have been very few fatal accidents involving Tesla’s FSD Beta program, even though more than 100,000 Beta testers are using it on public highways every day. And many reports claim that FSD has saved lives by avoiding accidents.

A real problem with autonomously driven cars is objects that aren’t moving. For example, a widely seen video was posted years ago of a Tesla driving at full speed into an overturned truck. How could it not have seen the truck? Surprisingly, the hardest objects to analyze are unusual non-moving objects. Almost everything visible while driving doesn’t move: trees, houses, bridges, signs, etc. Almost everything seen by radar isn’t moving. As a result, systems relying on radar tend to ignore non-moving objects. It takes incredible computing power to analyze and then ignore the thousands of non-moving objects it sees.
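
To make that concrete, here is a minimal sketch (purely illustrative; the names and threshold are my own invention, not any vendor’s actual code) of why a naive radar filter discards a stopped car along with the trees and bridges:

    from dataclasses import dataclass

    @dataclass
    class RadarReturn:
        range_m: float      # distance to the reflecting object, in meters
        closing_mps: float  # speed at which the object approaches us, m/s

    def moving_targets(returns, own_speed_mps, threshold_mps=1.0):
        """Keep only returns from objects actually moving in the world frame.
        A car stopped in the road has world speed ~0, so this naive filter
        throws it away along with the trees, signs, and bridges."""
        kept = []
        for r in returns:
            world_speed = r.closing_mps - own_speed_mps  # ~0 for stationary objects
            if abs(world_speed) > threshold_mps:
                kept.append(r)
        return kept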

And, finally, there’s the cynic’s “follow the money” advice. The most vocal opponents of autonomous driving are legacy auto manufacturers, car dealers, and others with self-interested financial motivations.

My Tesla

In 2019, Tesla announced that it would be introducing the Model Y, a “Crossover SUV”. It was expected to ship in the Fall of 2020. I pre-ordered one, assuming that it would be delivered around the time that my five-year road trip was over.

Two unexpected events occurred. One, the pandemic hit while I was in Mexico. It took me several months to drive back to New England through all of the lockdowns and travel restrictions. Two, the Model Y shipped earlier than expected. The result was that my new Tesla and I both arrived in New Hampshire in August 2020.

When I placed my pre-order, I configured it with the Full Self-Driving (FSD) “capability”. It didn’t have full self-driving. It still doesn’t. What it has is the hardware that Tesla thinks will enable it to support full self-driving in the future. In other words, I’m one of those who paid Tesla a lot of money to develop a feature that no one knows will ever work!

All Tesla cars come with Autopilot, which keeps the car centered in its lane on divided highways. There are millions of Teslas on the road with Autopilot.

When my car arrived, it came with what’s generally called the “Public FSD Release”. In addition to highways, this software also worked on most two-lane roads. It changed lanes automatically. It read speed limit signs. It stopped at red lights but wouldn’t go on green lights without human intervention.

In November of 2021, I qualified to receive what Tesla calls “FSD Beta”. About 100,000 Teslas (all in the United States) had FSD Beta. Recently, Tesla merged FSD Beta into the main release, allowing anyone who purchased the feature to use it. It does extremely well with traffic lights. It stops on red, starts on green, and decides which to do on yellow. The most common driving mistake it makes is getting in the wrong lane, such as moving into the “left turn only” lane when it isn’t turning left. The scariest is when it suddenly stops in the middle of the road for no apparent reason, called “phantom braking”. It has a long way to go!


Part 2: How Does It Work?

The Hardware

My Tesla is loaded with sensors. It has nine high-resolution cameras: three forward-facing, one rear-facing, two on each side, plus one inside.

The ninth camera, mounted inside, is used to confirm that the human driver is paying attention. If it senses me holding my cell phone, it will refuse to continue driving. (That feature alone would reduce fatalities if installed in all cars.) Tesla claims that the images from the inside camera aren’t saved or transmitted. I just have to believe them. Best not to do anything inside the car that you wouldn’t want recorded!

Most autonomous cars also use radar (radio detection and ranging). These shoot out a microwave beam looking for reflections. Many brands have LiDAR (light detection and ranging) equipment. These use a laser beam to scan the area to determine distances. Nearly all of them use sonar (sound navigation and ranging). These send out an ultrasonic beep and listen for an echo.

Having more sensors is not necessarily always better. Too many sensors will increase the cost and complexity of the system. What happens if sensors disagree? For example, what if the LiDAR and the radar return incompatible results? Should it always choose the LiDAR info? And, if so, what’s the point of installing the radar?

If every car had radar, LiDAR and sonar, there is a huge potential for them to interfere with each other. It would be like a noisy room with everyone trying to shout over each other.

Tesla is currently the only major company attempting to perform autonomous driving using only cameras. Elon Musk, the CEO of Tesla, points out that humans rely almost entirely on visual input. We don’t have laser beams shooting out of our eyes!

The significant advantage of radar is that it directly senses the distance and the speed of nearby objects. However, it is low-resolution and can’t see colors. LiDAR is also very good at detecting distance but is likewise low-resolution and can only see its own reflected light. Sonar is excellent at detecting distance but only works at very short range and has extremely poor resolution. None of these sensors can detect color or read signs. None of them can detect stop signs or distinguish red lights from green lights. Thus, none of them is capable of autonomous driving without a vision system.

The problem with using only a vision system is that it cannot directly determine distance and speed. Tesla has solved this problem using a neural net. (More on that later in this article.)

The Software

People are not worried about the hardware of self-driving cars. We understand cameras and think of the other sensors as being similar. It’s the software that scares people. How does the car know what to do? How can we trust it?

In the early days of Artificial Intelligence (AI), most research was done with knowledge-based systems. Humans would encode their domain knowledge as “rules”. For example, chess-playing programs were designed by grandmaster chess players. Physicians and specialists designed medical diagnosis systems.

For an autonomous driving example, consider the stop sign. How can you recognize a stop sign? In the United States, the sign is octagonal. The sign has the letters “S”, “T”, “O”, “P” in white paint on a red background. That will work most of the time. But what if there’s a branch obstructing some of the letters? What if a truck hit the sign and the sign is no longer octagonal? What if the sign is aimed at a different road and is not for your traffic lane? What if the stop sign is handheld at a construction site? It’s essentially impossible to recognize all stop signs using a knowledge-based approach.
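
A toy sketch makes the brittleness obvious (every rule and constant below is hypothetical; real rule-based systems were far more elaborate but failed in the same way):

    def looks_like_stop_sign(num_corners, dominant_color, visible_text):
        # The hand-written "domain knowledge" rules:
        return (num_corners == 8              # octagonal...
                and dominant_color == "red"   # ...red...
                and visible_text == "STOP")   # ...with all four letters visible

    # A branch hides the "S":  visible_text == "TOP"  -> False
    # A truck bent the sign:   num_corners != 8       -> False
    # Each exception demands yet another hand-written rule,
    # and the list of exceptions never ends.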

A completely different approach to Artificial Intelligence (AI) is neural nets (NN). Unlike knowledge-based systems, neural nets can make decisions when only partial data is available and they can learn!

When I was working with neural nets in the 1980s, the nets had a few dozen nodes (called neurons) with a few hundred connections. Think of these as knobs that you can tune. These tiny neural nets were amazingly powerful. The U. S. Postal Service used small neural nets to read handwritten zip codes on mail.

By comparison, your brain is essentially a neural net with nearly 100 billion neurons and on the order of 100 trillion connections. Tesla engineers jokingly refer to a human brain as a “meat computer”. We “train” this neural network by sending children to school for a decade or more.

Tesla cars use a neural net with about a quarter-million neurons. They are not nearly as smart as the human brain but they are incredibly powerful.

This description of neural nets is vastly oversimplified but helps to explain the concept.

When initialized, the neural net has all of its “knobs” set to random values. It is “trained” with a series of images. Some are labeled “this is a stop sign”. Some are labeled “this is not a stop sign”. The neural net is “trained” by making minor adjustments to its knobs over and over and over again. Each time it guesses right, it turns some of those knobs up. Each time it guesses wrong, it turns some of those knobs down. After thousands of iterations, it starts to make intelligent decisions. After millions of iterations, the results are astounding.
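
Here is a minimal sketch of that “knob turning” in code, using a single artificial neuron and the classic perceptron update (real systems use millions of weights and gradient descent, but the feedback loop is the same idea):

    import random

    def train(examples, epochs=1000, lr=0.01):
        """examples: list of (features, label) pairs, label 1 = stop sign, 0 = not."""
        n = len(examples[0][0])
        weights = [random.uniform(-1, 1) for _ in range(n)]  # knobs start random
        bias = 0.0
        for _ in range(epochs):
            for features, label in examples:
                total = sum(w * x for w, x in zip(weights, features)) + bias
                guess = 1 if total > 0 else 0
                error = label - guess  # 0 if right; +1 or -1 if wrong
                # A wrong guess nudges each knob slightly toward a better answer.
                weights = [w + lr * error * x for w, x in zip(weights, features)]
                bias += lr * error
        return weights, bias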

Once the initial training is complete, the system is tested with images it has never seen before. If it guesses correctly which images are stop signs and which aren’t, then the training is complete. If not, you generate and label images representing the ones it got wrong, and resume the training. (More on training later in this article.)

There are three major algorithmic steps involved in autonomous driving (a rough code skeleton follows the list):

  1. Sensing (surveying the environment)
  2. Planning (deciding where to drive)
  3. Action (controlling the steering, acceleration, braking, etc.)
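
As a skeleton (the function and object names here are hypothetical, not Tesla’s), the three phases chain into a loop that repeats many times per second:

    import time

    def sense(sensors):
        """Phase 1: fuse raw sensor data into a model of the surroundings."""
        return {"obstacles": sensors.read()}

    def plan_path(world):
        """Phase 2: choose a safe corridor; None means no safe option found."""
        if world["obstacles"]:
            return None
        return {"steer": 0.0, "speed_mps": 20.0}

    def act(vehicle, plan):
        """Phase 3: issue the actual control commands."""
        vehicle.steer(plan["steer"])
        vehicle.set_speed(plan["speed_mps"])

    def drive_loop(sensors, vehicle, hz=10.0):
        while True:
            world = sense(sensors)
            plan = plan_path(world)
            if plan is None:
                vehicle.request_driver_takeover()  # can't decide: hand control back
            else:
                act(vehicle, plan)
            time.sleep(1.0 / hz)  # then re-evaluate, roughly hz times per second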

Phase 1: Sensing

All autonomous driving systems use similar techniques. In this article, I will focus on Tesla, since that is the only system with which I have personal experience.

For obvious competitive reasons, Tesla is tight-lipped about the specifics of its algorithms. Its algorithms are incredibly complex and constantly improving. Nonetheless, some general concepts have been announced.

Tesla refers to the first step in the sensing phase as Sensor Fusion. The external cameras return eight separate images, and there’s not enough information in any one of them to enable autonomous driving. Each Tesla uses a neural net to generate an “occupancy network”: it combines the images from all of its cameras into a single database of “voxels” (volumetric pixels), the three-dimensional (3D) equivalent of pixels. Each voxel has kinematic information (distance, speed, and direction). The camera images overlap much like the views from your two eyes, allowing the system to use the parallax effect to determine distance, just as your brain does. Using its neural nets, it guesses where each voxel will be in the next frame and uses the errors in those guesses to adjust its estimates. Within a few frames (less than a second), it accurately knows how far away each voxel is and where it’s heading. It even keeps track of voxels that are not currently visible because they are occluded, that is, blocked by a car or another object.
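
To make “kinematic information per voxel” concrete, here is one way it might be represented (purely illustrative; Tesla’s actual occupancy network is a learned 3D grid, not a Python class):

    from dataclasses import dataclass

    @dataclass
    class Voxel:
        x: float; y: float; z: float     # position relative to the car, meters
        vx: float; vy: float; vz: float  # estimated velocity, m/s
        occupied: float                  # confidence that something is there (0..1)
        occluded: bool                   # currently hidden behind another object

    def predict(v, dt):
        """Guess where this voxel will be dt seconds from now. Comparing the
        guess with the next camera frame yields the error used to refine
        the distance and speed estimates."""
        return Voxel(v.x + v.vx * dt, v.y + v.vy * dt, v.z + v.vz * dt,
                     v.vx, v.vy, v.vz, v.occupied, v.occluded)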

The second step is classifying these primitive shapes into object types. The car uses several neural nets to classify simple shapes into objects, such as cars, trucks, signs, road markings, traffic cones, etc. This is where the enormous training data set comes into play. It’s amazing how many exceptional cases exist. Fortunately, it doesn’t have to classify everything perfectly. Fire hydrants come in many different colors and shapes. But it doesn’t really matter to the automation since fire hydrants don’t move. Similarly, it’s hard to distinguish between a motorcycle and a bicycle, but the difference isn’t critical since its speed and direction are known. In some cases, the system might encounter something completely new, like an unusual construction vehicle or farm equipment. It’s OK if the system can’t classify it because the voxel information tells it where the unknown object is and its speed and direction.

Simultaneously, the algorithm must determine what Traffic Rules apply. The car uses its GPS to know its current jurisdiction. In the United States, speed limit signs are in miles/hour. In France, speed limit signs are in kilometers/hour. In the United States, the steering wheel is on the left and we drive on the right. In England, they have “starboard” driving; the steering wheel is on the right and they drive on the left. In the U. S. Virgin Islands, they have “gutter” driving; the steering wheel is on the left and they also drive on the left. In the United States, we drive counterclockwise around rotaries. In England, they drive clockwise around “roundabouts”. Some places have concentric rotaries; the inner ring goes one way; the outer ring goes the other way.
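
In code, that jurisdiction lookup might amount to a simple table keyed by the GPS fix (illustrative values only, not any vendor’s actual data):

    RULES = {
        "US": {"speed_units": "mph",  "drive_side": "right", "rotary": "counterclockwise"},
        "FR": {"speed_units": "km/h", "drive_side": "right", "rotary": "counterclockwise"},
        "GB": {"speed_units": "mph",  "drive_side": "left",  "rotary": "clockwise"},
        "VI": {"speed_units": "mph",  "drive_side": "left",  "rotary": "clockwise"},  # U.S. Virgin Islands
    }

    def rules_for(jurisdiction):
        """jurisdiction: a country/territory code derived from the GPS fix."""
        return RULES[jurisdiction]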

And, finally, the system must determine which direction it should drive. How many driving lanes are there? Are there one-way restrictions, left-turn-only lanes, etc.?

Phase 2: Planning

The next step is to Plan where to drive. Using a combination of its neural nets and knowledge of the local rules of the road, it knows where everything is, what it is, and where it’s heading.

One might naively assume that the planning stage is trivial. The car knows where it is, where the other cars are, and where the other road users are (pedestrians, bicycles, etc.). Can’t it just find a route that doesn’t involve hitting anything?

Unfortunately, it’s much, much more complicated. In fact, planning is the phase that autonomous cars have the most difficulty with.

Planning would be trivial if nothing were moving, but that’s rarely the case. The car must guess where everything is headed. The Tesla software creates a “collision avoidance field” indicating where the vehicle can travel and where it can’t. It uses yet another neural net to predict many possible actions for each of the other moving objects. Will another driver stop? Pull out into traffic? Tesla cars even monitor the braking pattern of cars entering an intersection to anticipate when someone might run a red light. After computing thousands of possible scenarios, the car computes a safe “corridor” where it won’t hit anything or break any driving regulations. It makes long-range decisions but isn’t obliged to follow them. It will repeat the entire analysis one-tenth of a second from now. The default is always safety.
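
A very rough sketch of the idea (this is not Tesla’s algorithm; it just shows safety as the hard constraint and progress as the tie-breaker):

    def plan(candidates, predicted_obstacles, min_gap_m=2.0):
        """candidates: possible trajectories, each a list of (x, y) points.
        predicted_obstacles: (x, y) positions other road users might occupy."""
        def is_safe(traj):
            return all((px - ox) ** 2 + (py - oy) ** 2 >= min_gap_m ** 2
                       for (px, py) in traj
                       for (ox, oy) in predicted_obstacles)

        safe = [t for t in candidates if is_safe(t)]
        if not safe:
            return None  # no safe corridor: slow down and replan, or hand over
        # Among safe trajectories, prefer the one that makes the most headway.
        return max(safe, key=lambda t: t[-1][0])

As the text notes, the real system re-runs this entire evaluation roughly every tenth of a second, so any single plan is cheap to abandon.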

Phase 3: Action

The final phase is to issue Control commands to the steering, braking, and accelerator to implement its plan. If the car can’t decide what to do, it tells the driver to take over. If the driver still doesn’t take control, the car turns on its four-way blinkers, stops, and refuses to proceed.

The first two phases run continuously in every Tesla, whether or not FSD is engaged or even purchased; only the final Control phase is gated. Tesla calls this “shadow” mode. It gives Tesla the ability to collect data even when its software isn’t driving the car. For example, if the owner slams on the brakes (and has permitted data collection), the car automatically uploads its status, allowing Tesla to use that real-world information to improve the system.
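
A hypothetical sketch of that shadow-mode trigger (the names and threshold are my assumptions, not Tesla’s):

    HARD_BRAKE_MPS2 = -6.0  # hypothetical deceleration threshold

    def on_brake_event(decel_mps2, fsd_engaged, consented, snapshot, upload):
        """Called whenever the human driver brakes."""
        if decel_mps2 <= HARD_BRAKE_MPS2 and consented and not fsd_engaged:
            # The human braked hard where the software wasn't driving:
            # exactly the kind of real-world example worth learning from.
            upload(snapshot())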

The final actions to control the car are only performed if FSD (full self-driving) has been purchased and is currently enabled.

Training

Neural nets are used by all of the developers of self-driving cars.

It’s clear from the above vastly simplified description that the key to the safety and comfort of autonomous driving is the quality of the Neural Nets. The quality of the decisions is the direct result of the quality and the quantity of the training data used when generating each Neural Net.

To understand how Tesla trains its neural nets, let’s reconsider stop signs.

First, employees collected and labeled thousands of images containing stop signs. These became the initial training data set for recognizing stop signs.

Next, Tesla collected additional data through queries to the cars in its fleet. There are over a million Teslas on the road today. All of them have the full software installed and running, even if only in “shadow” mode. The query asked the fleet for images that the car thought might be stop signs but wasn’t sure about. Tesla now had thousands of additional images of signs that either are or are not stop signs.

Tesla had a staff of about one thousand humans who manually labeled these new images and added them to the training data. Once this iteration was complete, the fleet was queried again. Eventually, the fleet got extremely good at recognizing stop signs.

The training system also considers map data. If the maps say that an intersection has a stop sign but the car couldn’t find one, that case needs to be investigated and added to the training data. Conversely, if the car sees a stop sign but the map says there isn’t one, that also needs to be investigated.

To create huge datasets of labeled data, Tesla uses its computer systems to automatically label images. For example, suppose the training system has one frame of a video showing an object obscured by fog or snow. It can fast-forward in the video to see the object when it becomes clearer. The training computer now knows with certainty what the object is and can automatically label the object in the earlier frame and add it to the training set.
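
A sketch of that backward label propagation (the classify function and threshold are hypothetical):

    def auto_label(frames, classify, confidence=0.99):
        """frames: video frames in time order.
        classify(frame) -> (label, conf). Returns (frame, label) pairs,
        including earlier frames where the object was unclear."""
        labeled = []
        known_label = None
        # Walk backward in time so foggy early frames can inherit
        # the label from the first clear later view.
        for frame in reversed(frames):
            label, conf = classify(frame)
            if conf >= confidence:
                known_label = label  # a clear view: trust it
            if known_label is not None:
                labeled.append((frame, known_label))
        return list(reversed(labeled))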

Tesla is the only company with over a billion miles of training data.

In addition, Tesla creates even more data by simulating events that are too rare or too dangerous to collect from the real world. For example, it can take an actual image and then digitally simulate a moose crossing the road. The training computer then reverses the sensor fusion stage, creating highly accurate simulations of what the cameras would have seen had there actually been a moose on the road. This data can be automatically labeled as containing a moose because the computer put the moose in the simulation in the first place.

Training the neural net is done by one of the most powerful supercomputers in the world. It runs day and night, continuously iterating over its vast data set. Once the training reaches the desired level and is thoroughly tested, the results are compressed and downloaded into the fleet of Teslas. Then, a new training run is started.

Tesla is building an even more powerful supercomputer called Dojo (named for the training room in a martial arts studio). It is being built from D1 devices, custom-designed computer chips with multiple compute cores and memory systems optimized for neural net computations. These are assembled into “Training Tiles”. Each tile is capable of 9 petaflops. That’s 9,000,000,000,000,000 floating-point operations per second per tile. (The human brain is estimated to be the equivalent of 100–1,000 petaflops.)

Tesla isn’t announcing how many Training Tiles will eventually be in their Dojo supercomputer. With only 100 tiles, it could approach the raw processing power of the human brain!


Part 3: The Future

Trust

Whenever anyone gets in my car for the first time, they freak out! I did, too. It’s scary when the steering wheel moves all by itself! My daughter-in-law is still not comfortable riding in my car. One of my sisters refuses to allow me to enable FSD if she’s in my car!

My car’s self-driving is similar to a 16-year-old who just received their driver’s license. It knows what to do and is generally correct but occasionally makes poor decisions. Whenever the car starts to do anything different than I would do under the same circumstances, I immediately take control. I don’t pause to wonder why.

It can change lanes all by itself to pass other cars or prepare for an upcoming exit. I don’t like surprises in traffic. So, I disabled that feature. If it wants to change lanes, it displays why. I have to acknowledge and approve this using the turn signal.

Gradually, you learn to accept that the car does the right thing almost all of the time. On rare occasions, I have to prevent my car from getting in the left-turn-only lane when it’s not turning left or vice versa. But, human drivers frequently make that mistake, too.

You don’t have to guess what the car will do; it constantly displays its intent on its screen. Part of the screen is an animation, a simplified view of the situation. You know that it can see the road lines because they are displayed on the screen. It shows the location and status of each traffic light that it sees. The ones in or near your lane show red, green, or yellow icons. The traffic lights that don’t apply to you appear as grey icons. If a vehicle is approaching from behind, it will show on the screen as an icon of a car, an SUV, a motorcycle, a truck, etc. When it sees a speed limit sign, it displays an icon where the sign is, with the new speed limit appearing in the icon. The path it’s planning to take is displayed in blue. If it wants to move to the left lane, it displays where it’s planning to be and shows how much space it needs. If there’s already a car there, that car is highlighted and the Tesla waits before changing lanes. You don’t have to guess what it sees; it tells you constantly!

It doesn’t take long to learn when to trust it to drive and when to drive it yourself. Occasionally, I forget who’s driving, it or me. Once I side-swiped a curb because I thought it was driving. Now, I keep one hand on the steering wheel if it’s driving and two hands on the steering wheel if I’m driving. It keeps learning. Over time, I expect it will be driving more often and I will be driving less often.

Together, my car and I make a good team. It constantly worries about the cars in front of me suddenly stopping. It keeps the car centered in my lane. It often sees stop signs long before I do. It sometimes sees pedestrians at night that I don’t see. I worry about whether or not that car up ahead is going to suddenly pull out into traffic. I carefully observe any cars that are weaving or speeding. Together, we’re safer than either of us alone.

The Near Future

“The first 90 percent of the code accounts for the first 90 percent of the development time. The remaining 10 percent of the code accounts for the other 90 percent of the development time.” — Tom Cargill, Bell Labs

Software complexity is notoriously difficult to estimate. Everyone vastly underestimates how hard it is to handle the “corner cases”.

There are two very different approaches to autonomous driving:

  • Limiting the situations
  • Limiting the locations

Almost all automotive vendors limit the situations under which their cars and trucks can operate. In all other situations, they rely on the human driver to take control. These companies always start with the simplest operations, like staying a safe distance behind the car in front or steering to the center of the lane. Gradually, they expand the car’s abilities so that it can drive in more and more complicated situations. The problem is that this approach gets exponentially harder as the environment gets more complex. For example, radar is extremely good at determining the distance to the next car. However, it receives an echo from every object, most of which should be ignored, like trees, fences, telephone poles, and bridges. So these systems typically ignore anything that doesn’t move. That means they will also ignore a car stopped in the middle of the road! Expanding these systems to city driving is an extremely hard problem.

A few companies limit the locations where the car can go. These companies produce taxi-like vehicles that are not for sale to the public. These vehicles are fully autonomous but extremely limited in where they can go. For example, Waymo has vehicles operating autonomously in the suburbs outside of Phoenix, Arizona, and in San Francisco and Mountain View, California. These companies rely on highly detailed, centimeter-accurate, HD (high-definition) maps that include detailed 3D information on road markings, traffic signs, etc. These maps are labor-intensive to produce and require constant supervision to maintain accuracy. Generating and maintaining these HD maps automatically, so the approach can scale, is an extremely hard problem.

Neither approach can be used for truly autonomous driving without much larger, more capable neural nets or other self-learning systems.

Will most cars be autonomous in the future? Probably. Once self-driving cars are statistically ten times safer than human drivers, there will be strong motivations to save lives. Simulations have shown that traffic jams are eliminated and traffic flows more smoothly when only a small percentage of cars are autonomous. This future isn’t that distant for new cars. But it will take more than a decade for older cars to be gradually replaced with newer cars.

Will self-driving cars ever be required? Not likely. People enjoy driving classic cars. They might use a self-driving car for commuting. But they will want to drive themselves some of the time. People still ride horses.

V2V (Vehicle to Vehicle) communication might be required in the future. Today’s autonomous cars must guess what the other vehicles will do. Should I veer right to avoid the crash or left? V2V won’t just help drivers survive a crash—it will help them avoid the crash altogether.

The Trolley Problem

Some people claim that autonomous driving systems will eventually have to find a solution to the trolley problem. If you’re not familiar with it, the trolley problem is a thought experiment:

There is a runaway trolley barreling down the tracks. Ahead, on the tracks, there are five people who don’t see the trolley. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options:

  • Do nothing and allow the trolley to kill the five people on the main track.
  • Pull the lever, diverting the trolley onto the side track, where it will kill one person who otherwise would have lived.

Which is the more ethical option? Or, more simply: what is the right thing to do?

This is an extremely hard problem that humans haven’t solved. In fact, studies have shown that people’s responses are not the same in different cultures.

Deontological ethics says that an action is right or wrong based on a moral code. Utilitarian ethics says to choose the course of action with the most favorable outcome. Legal considerations suggest that taking no action is the only legally protected response.

I think that the trolley problem is outside the domain of self-driving cars. The way the problem is stated artificially rules out the possibility of simply stopping or making a noise. Either of those would always be the preferred solution. It’s always best not to kill anyone!

Autonomous systems will take the cautious approach and stop long before they hit anyone. Autonomous cars are patient and never in a hurry. They simply won’t drive so fast that the trolley problem becomes a problem.

The Distant Future

Where is this headed?

Society is fast approaching “The Singularity”, when machine intelligence exceeds human intelligence. The Dojo supercomputer being built by Tesla could exceed the human brain in terms of the number of “nodes” and hence pure processing power.

Movies stoke our fears of a dystopian future ruled by computers: The Matrix, Terminator, and I, Robot.

I don’t think that’s a realistic future. Certainly not in this century.

Why? All neural nets are optimized for some specific task or process. For example, autonomous cars are optimized to drive safely, with comfort and speed as secondary considerations.

The current Neural Net technology cannot reach what’s called “general artificial intelligence”. Most certainly, the existing systems can’t consciously decide to “kill all humans”. That would require a new generation of artificial intelligence.

Personally, I’m more concerned about humans. They already possess “general intelligence” and have a history of subjugating others, misusing every new technology for their own power and ambition. Autonomous killing machines, which are inevitable if not already under development, will be optimized to seek out and destroy their targets while minimizing collateral damage. A swarm of such systems would be difficult or impossible to defend against.

It’s expected that someone somewhere will eventually figure out how to accomplish artificial general intelligence but not in the near future. It’s the humans that we need to worry about!

Hope you enjoyed my rant about autonomously driven cars. Please leave a note in the comments section with your feedback.
