The self-driving Ford Fusion hybrid took off at a leisurely pace and after about 50 yards slowed as it approached the pedestrian crosswalk. When the vehicle’s sensors detected no human presence nearby, it drove past the crosswalk and continued to a stop sign.
It stopped. And waited — a little too long for automated driving engineer Jakob Hoellerbauer, who took control of the vehicle, turned left and returned the Fusion to autonomous mode.
A vehicle that was halted near the stop sign had confused the car, which, Ford employee Schuyler Cohen explained, was set to drive very defensively for safety reasons.
Cohen, Ford’s supervisor of autonomous vehicles localization and mapping, and Hoellerbauer gave journalists a short demonstration of Ford’s self-driving vehicles program in Dearborn, Mich., last week as part of the company’s annual Further with Ford event. They and two other teams prompted the Fusions to drive a 10-minute course around Ford’s Dearborn campus, which includes public roads and some challenging obstacles, such as a left turn at a traffic light that lacks a left-turn arrow.
That’s tricky for the autonomous vehicle, because it has to wait for an opening in oncoming traffic, and the “view” of the vehicle’s sensors can be obscured by oncoming vehicles that are turning left, said Greg Stevens, global manager of automated driving at Ford.
Stevens emphasized that the vehicle was driving in a public setting, on public roads, with Ford employees driving and walking around the campus.
“There’s nothing in any way staged, set up or controlled,” he said.
The vehicle’s very defensive driving setting — think back to your first driver’s ed experience — required the car to wait until pedestrians had cleared any crosswalks — and walked a few steps beyond — before the journey continued. That defensiveness prolonged the wait time on one occasion, because Ford’s Dearborn campus can be a busy place.
At four-way stop signs, the vehicle deferred to human-driven cars if they arrived at about the same time.
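The crosswalk rule described above — wait until every pedestrian has cleared the crosswalk and walked a few steps beyond — can be reduced to a toy predicate. This is purely an illustration; the clearance distance and function name are assumptions, not anything Ford disclosed.

```python
# Hedged illustration of the "cleared the crosswalk and a few steps beyond"
# rule. CLEARANCE_M is an assumed buffer, not a Ford specification.

CLEARANCE_M = 2.0  # assumed "few steps beyond" distance, in meters


def may_proceed(pedestrian_offsets_m):
    """pedestrian_offsets_m: each pedestrian's distance past the crosswalk
    edge, in meters (0.0 means still inside the crosswalk).

    The car proceeds only when every pedestrian is past the buffer."""
    return all(offset > CLEARANCE_M for offset in pedestrian_offsets_m)


print(may_proceed([0.0, 3.5]))  # False: one pedestrian still in the crosswalk
print(may_proceed([2.5, 3.5]))  # True: everyone is a few steps clear
```

The conservative buffer is what made the car feel over-cautious to the engineers on a busy campus: the predicate stays false until the last pedestrian is well clear.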
Cohen, who sat in the passenger seat collecting video data on a laptop, said the engineers set no course for the vehicle but simply instructed it to travel to one point on Ford’s campus and then return to the starting point. Like a run-of-the-mill GPS system, the vehicle’s computer chose the best path.
The car’s sensors, including radar and LiDAR (lasers emitted from silver, soda-can-size cylinders on the vehicle’s roof), constantly collected data and compared it to the map in the car’s computer. The lasers, for example, determine how quickly oncoming vehicles are driving and whether any gap is big enough for a left turn. If the sensors detect an obstacle in the road, such as a box that has fallen off a truck, the vehicle stops. At one point during the test ride, the vehicle slowed because of an overhanging tree branch. When it determined that no obstacle was in the road, it proceeded.
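The gap-acceptance judgment for that unprotected left turn can be sketched in a few lines. This is a simplified, hypothetical model — the time thresholds, function names and flat list of (distance, speed) readings are assumptions for illustration, not Ford’s actual planner, which must also handle occluded sensor views and far richer predictions.

```python
# Illustrative sketch only: a simplified gap-acceptance check for an
# unprotected left turn. All thresholds are hypothetical assumptions.

TURN_TIME_S = 4.0      # assumed time needed to clear the intersection
SAFETY_MARGIN_S = 2.0  # assumed extra buffer, reflecting defensive tuning


def gap_is_acceptable(oncoming_vehicles):
    """Return True if every oncoming vehicle would take longer to reach the
    intersection than the turn (plus a safety margin) requires.

    oncoming_vehicles: list of (distance_m, speed_mps) tuples, the kind of
    estimates a radar/LiDAR stack might report."""
    for distance_m, speed_mps in oncoming_vehicles:
        if speed_mps <= 0:  # stopped or receding traffic poses no conflict here
            continue
        time_to_arrival = distance_m / speed_mps
        if time_to_arrival < TURN_TIME_S + SAFETY_MARGIN_S:
            return False  # gap too small: keep waiting
    return True


# A car 40 m away at 15 m/s arrives in about 2.7 s -> keep waiting.
print(gap_is_acceptable([(40.0, 15.0)]))   # False
# A car 120 m away at 15 m/s arrives in 8 s -> the turn fits.
print(gap_is_acceptable([(120.0, 15.0)]))  # True
```

The occlusion problem Stevens mentioned is exactly what such a check cannot handle on its own: a vehicle hidden behind a left-turning truck never appears in the input list, which is one reason the real system drives so defensively.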
At Ford’s event last week, many presentations focused on, or at least touched upon, the future of autonomous vehicles.
Ford CEO Mark Fields told visitors that the company expects the technology to be nothing short of revolutionary.
He said Ford believes that the automation of the automobile will define the next decade and will have an impact akin to Ford’s moving assembly line 100 years ago.
Ford is investing in Silicon Valley operations and expanding its fleet of self-driving test vehicles to prepare for the revolution. Ten vehicles are transporting passengers in Dearborn now. By the end of the year, it will be 30. And by the end of next year, Ford plans to have a fleet of 90.
The automaker projects that in 10 years, 20 percent of the vehicles on the road will not require a human driver.
Before that can happen, though, a few obstacles remain, Fields said, including the adoption of good regulations, persuading people to give up driving and building the vehicles at a price consumers can afford. Engineers also still have to overcome significant technological problems: Cars have to understand and follow the rules of the road — but they also have to understand when not to follow them.
Stevens, the company’s autonomous vehicle guru, told IL that the challenge lies with humans’ unpredictability. Sometimes, he said, people will run a red light or leave from a four-way stop in the wrong order, because they’re not paying attention or because they’re in a hurry.
Introducing self-driving cars would be much easier, he said, if the whole world, from one day to the next, adopted autonomous vehicles that all followed the same rules. However, for years, if not decades, artificial intelligence-controlled vehicles will have to share the roads with human-steered machines.
“Humans are far less predictable than computers are,” Stevens said.
The trolley problem
Beyond the technical problems, Ford is also working with the Mobility Transformation Center at the University of Michigan to address some of the ethical challenges that arise as artificial intelligences make decisions for the “bags of saltwater.”
For example, what should an autonomous vehicle do if it is about to crash into an obstacle in heavy traffic in a busy city? Ideally, it should swerve to miss the obstacle to protect the vehicle’s occupants. But should it swerve onto the sidewalk? Or into oncoming traffic? How will it weigh the risk of injury or death to its occupants against the risk to those in another vehicle? What if it cannot avoid striking a pedestrian? Should it be allowed to choose which pedestrian to strike? The woman in her prime, because she is most likely to survive the impact? The elderly person, because he has the fewest years left to live? As humans relinquish ever more control to AI, decades-old thought experiments such as the trolley problem are starting to mirror real-world problems.
Stevens said regulators and the industry will have to reach a consensus on such matters, because they extend far beyond engineering challenges.
As more autonomous and semi-autonomous vehicles hit the road — Uber just introduced some driverless Ford Fusion taxis in Pittsburgh — and the first crashes involving them occur, governments are starting to take notice.
The Obama administration said Monday that it would release “a new Federal Automated Vehicles Policy to help facilitate the responsible introduction of automated vehicles to make transportation safer, cleaner, more accessible, and more efficient.”
Vox.com reported that the policy addresses aspects including regulatory challenges, testing and safety.
According to the German business magazine Wirtschaftswoche, the traffic minister of Germany, home of such brands as Volkswagen, Mercedes and BMW, this month created an ethics commission for autonomous driving and recently proposed some laws akin to Asimov’s Three Laws of Robotics. Highlights of the law proposed by Alexander Dobrindt:
- Damage to property is preferable to damage to people.
- In dangerous situations, the AI must not be allowed to classify people by size, age, etc.
- If something happens, the manufacturer is responsible.
Dobrindt also said that the human drivers in autonomous vehicles have to pay some basic attention to vehicle operation. That means they can check their email, watch TV or read a book — but they cannot take a nap, because they must be able to intervene at any moment.
Making sure that the AI in self-driving cars adheres to complex rules will require lots of coding, and Ford said its autonomous Fusion already includes 17 million lines of code. That’s more than the software in a Boeing 787 Dreamliner.