Is semi-autonomous driving safe? Drivers' "full trust" is the biggest hidden danger


NetEase Technology News, July 8. In a recent Vox article, senior writer Timothy B. Lee poses a question that every company working on driverless cars must answer: offer a range of advanced driver-assistance features and transition gradually to driverless operation, or jump straight to full automation? Full automation will take time to achieve, but semi-autonomous cars carry a major hidden danger: consumers quickly come to trust the technology completely and let their guard down while the car drives itself.

The following is the main content of the article:

A pivotal moment in the debate over driverless cars came on May 7, 2016, when Joshua Brown died after his Tesla crashed into a tractor-trailer. Brown had engaged Tesla's Autopilot mode, which failed to detect the white side of the trailer against the brightly lit sky. His car struck the trailer at full speed (74 mph); the impact sheared off the roof and killed Brown.

Many characterized the accident as the first fatal crash of a driverless car. It drew the attention of the National Transportation Safety Board (NTSB), which last month released hundreds of pages of new details about the incident.

But the companies in the auto industry trying to make driverless cars a reality dispute this characterization. Tesla maintains that Autopilot is not driverless technology but something closer to an advanced form of cruise control. The company points out that when Autopilot is engaged, drivers are expected to keep their hands on the steering wheel and their eyes on the road ahead. Computer logs released by the NTSB last month showed that Brown's car warned him seven times to put his hands on the wheel. Each time, Brown placed his hands on the wheel for a few seconds, just long enough to make the warning disappear, and then let go again.

In January, the US National Highway Traffic Safety Administration sided with Tesla, concluding its investigation without finding any defect in the Autopilot system. "Not all systems can do all things," said agency spokesman Bryan Thomas, noting that there are driving scenarios that automatic emergency braking systems are not designed to handle.

Still, Brown's death illustrates one of the biggest challenges automakers face on the road to bringing driverless cars to market. Most car companies plan to get there incrementally: over the next few years they will sell cars with ever more advanced driver-assistance features, culminating in fully automated cars with no pedals or steering wheel.

But this strategy carries a hidden danger: people come to trust the technology too quickly, just as Brown did. After hundreds or even thousands of miles of flawless driving, drivers may stop paying attention to the road, and that can be fatal. How do you make a driverless car safer? The industry's answer reveals a great deal about our driving habits, and about how easily we become dependent on new technology.

Semi-autonomous cars constantly remind drivers to pay attention

Tesla's current approach is to have its semi-automated cars prod drivers into watching the road. Since Brown's death, the company has redoubled its efforts on this kind of technology.

Earlier this month, I visited Tesla's headquarters in Palo Alto and test-drove a Model S. Before I was allowed to take the car out alone, a Tesla employee gave me an orientation, emphasizing that Autopilot is not a fully automated driving system. She pointed out that before engaging Autopilot for the first time, the driver must read and accept a disclaimer displayed on the 17-inch screen in the Tesla's console, which stresses the need to keep both hands on the steering wheel.

This is not just casual advice. The Model S has sensors that detect whether my hands are on the wheel. If they are not, the dashboard eventually starts flashing warnings until I put them back.

Joshua Brown's Tesla warned him too, seven times, but he never paid enough attention. Since the accident, Tesla has instituted a stricter "three strikes" rule: a driver who ignores three consecutive warnings loses access to Autopilot for the rest of the trip. If the driver still does not take the wheel, the car assumes he is incapacitated, slowly comes to a stop, and turns on its hazard lights.
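To make the escalation concrete, here is a minimal sketch of how a "three strikes" hands-on-wheel monitor might be structured. All names, thresholds, and behaviors here are hypothetical; Tesla has not published its actual implementation.

    # Hypothetical sketch of a hands-on-wheel warning escalator.
    # Strike counts and actions mirror the behavior described above,
    # not Tesla's real code.

    class HandsOnWheelMonitor:
        MAX_STRIKES = 3  # "three strikes" locks out Autopilot

        def __init__(self):
            self.strikes = 0
            self.locked_out = False  # Autopilot disabled for rest of trip

        def on_warning_ignored(self):
            # A flashing-dashboard warning timed out with no response.
            self.strikes += 1
            if self.strikes >= self.MAX_STRIKES:
                self.locked_out = True

        def on_hands_detected(self):
            # Torque on the wheel clears the current warning streak.
            if not self.locked_out:
                self.strikes = 0

        def next_action(self, hands_on_wheel: bool) -> str:
            if hands_on_wheel:
                return "continue"
            if self.locked_out:
                # Driver presumed incapacitated: stop safely, hazards on.
                return "slow_to_stop_with_hazards"
            return "flash_dashboard_warning"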

Other car companies are developing similar technology. Audi, for example, will soon introduce a feature called "traffic jam pilot" that lets drivers take their hands off the wheel on the highway at speeds of up to 35 miles per hour. (Audi says hands-off driving at full highway speed is still four or five years away.) During a recent test drive, Audi engineer Kaushik Raghu told me that traffic jam pilot will include a "driver availability monitoring system" to make sure the driver is not asleep or looking backward.

Cadillac's recently announced Super Cruise, a highway-driving technology, offers similar safeguards. An infrared camera mounted on the steering column determines whether the driver is looking at the road ahead or glancing down at a phone. If the driver's eyes stay off the road for too long, the car begins to chime until the driver looks up again.

A visit to Nauto, a startup developing advanced driver-monitoring technology, gave me a sense of how Super Cruise probably works. Nauto's windshield-mounted device watches the driver and determines whether his or her eyes are on the road. Integrated into a driverless car, or even an ordinary one, this kind of technology could prevent a great deal of distracted driving and many highway deaths.

Why an almost-perfect autopilot system is still dangerous

Chris Urmson, who led Google's team of driverless-car engineers, calls the choice between offering driver assistance and taking the driver out of the loop entirely "one of the biggest open debates" in the industry. Urmson left Google last year to found his own startup.

Google, which recently renamed its driverless-car project Waymo, has pondered this question for years. As early as 2014, Google driverless-car engineer Nathaniel Fairfield gave a talk at a computer vision conference describing how Google had built driverless technology for freeway driving, then concluded that the approach was too dangerous. Waymo customers will never be allowed to touch the steering wheel.

Early versions of Google's technology worked much like Tesla's Autopilot or the Audi prototype I had driven a few weeks earlier: the human handled the streets at the beginning and end of a trip, but could switch on driverless mode on the freeway. Well-marked lanes and the absence of pedestrians and other obstacles make highway driving a comparatively easy computer science problem.

The problem, Fairfield pointed out, is that people start trusting the system too soon. "People go from skepticism at first, to partial trust, to far too much trust," he said. After watching Google's freeway technology perform flawlessly for a few hours at a stretch, employees placed "full trust" in it.

Cameras inside the cars recorded employees climbing into the back seat, reaching out of the windows, and even kissing while the cars drove themselves, John Markoff reported in the New York Times.

A technology that drives flawlessly for 100 or even 1,000 miles can still make the occasional catastrophic error. Yet after mile upon mile of error-free automated driving, a driver is unlikely to be watching the road closely. That means that when the car suddenly hits a situation it cannot handle, handing control back to the human may make things worse rather than better.

This is a problem the aviation industry has grappled with for decades. In 2009, Air France Flight 447 crashed into the ocean en route from Rio de Janeiro to Paris after ice on the plane's sensors caused the autopilot to disengage. Lacking experience flying the aircraft without computer assistance, the pilot made a basic error: he pulled the nose up when he should have pushed it down. The plane stalled and fell into the ocean, killing all 228 people on board.

"Obviously, automation technology is responsible for this accident, but there is some controversy about how much responsibility it has to bear." A 2015 article from the well-known online magazine "Slate" pointed out in the incident, "Maybe The system has been designed so badly that the pilot feels confused, or may have relied on automation technology for many years to make pilots unprepared and do not know how to take over the aircraft."

This is the paradox of automation: the better an automated system performs, the more humans come to depend on it, and the more they are caught off guard when they suddenly have to take over.

It is easy to imagine the same thing happening with cars, and younger drivers may be especially vulnerable. As driver-assistance technology spreads, more and more young drivers will have never driven without computer help. If their cars are designed to hand driving back to the human when something unexpected happens, those drivers may be caught off guard and make mistakes at a life-or-death moment.

The closer a car comes to full automation, the harder it is for its driver to stay vigilant. A driver who goes thousands of miles without making a single driving decision will inevitably let his mind wander, even if the car forces him to keep his hands on the wheel and his eyes on the road. Past a certain level of capability, letting the car puzzle out what to do may be safer than handing the wheel to a human who is confused and out of practice.

Waymo wants to go straight to full automation

Google ultimately concluded that semi-automated driving is a technological dead end. Instead, the company set out to build a fully automated car, one that never requires intervention from a human driver.

Urmson, the former Google engineer, argues that driver assistance and full automation "are really two very different technologies."

"With a driver-assistance system, most of the time it is better for the assistance features to do nothing at all," Urmson said in an April talk. "You want them to act only when you are all but certain they will prevent an accident. That steers you toward particular technology choices, and those choices limit the system's ability to handle unexpected situations."

Waymo, by contrast, is trying to build a car that never hands the wheel to its human passengers. That means its software must choose a reasonable, safe response in every conceivable situation, and it means building in backup systems so the car can cope even when a major component fails.

"Each of our driverless cars is equipped with a secondary computer in case the main computer goes offline, however unlikely that is," Google wrote in a 2016 report. "Its sole responsibility is to monitor the main computer and, if necessary, bring the car safely to the side of the road or to a complete stop. It can even handle details like avoiding a stop in the middle of an intersection, where the car would endanger other drivers."
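The watchdog pattern the report describes can be sketched in a few lines. This is an illustrative toy with hypothetical names and timings, not Waymo's actual design:

    import time

    HEARTBEAT_TIMEOUT_S = 0.2  # hypothetical silence threshold

    class BackupComputer:
        """Monitors the main computer; takes over only if it goes silent."""

        def __init__(self):
            self.last_heartbeat = time.monotonic()

        def on_heartbeat(self):
            # The main computer pings this on every control cycle.
            self.last_heartbeat = time.monotonic()

        def check(self) -> str:
            # Decide the backup's action for this cycle.
            silence = time.monotonic() - self.last_heartbeat
            if silence < HEARTBEAT_TIMEOUT_S:
                return "monitor"  # main computer healthy; do nothing
            # Main computer offline: execute a minimal safe mission,
            # e.g. pull over, and never stop inside an intersection.
            return "pull_over_or_stop_safely"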

Building software that can handle any surprise, plus bulletproof backup hardware, is a much harder technical problem than building driver-assistance systems that count on a human to handle the tricky parts. But it has a big advantage: a car that never hands control to a human never has to worry about catching that human off guard.

For now, Waymo still puts drivers behind the wheel, but they are Waymo employees specially trained to handle driverless cars. However tedious the job gets, they must keep watching the road; it helps that they are paid to do so. And in recent months the job has become extremely boring: according to regulatory filings, Waymo's driverless cars traveled more than 600,000 miles in California and handed control back to human drivers only 124 times.

That works out to one "disengagement" roughly every 5,000 miles (a disengagement is an incident in which the human driver has to take over, either because the automated-driving software fails or for safety reasons), more than a fourfold improvement over 2015 and the best performance to date of any company testing driverless cars on California roads. At this rate of progress, Waymo could surpass human driving safety within a few years. If the California data shows anything, it is that competitors like GM, Ford, BMW, and Mercedes have a long way to go to catch up.
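The arithmetic behind that figure, using the round numbers from the filings cited above:

    # Miles per disengagement from the round figures in the filings.
    miles_driven = 600_000   # "more than 600,000 miles" in California
    disengagements = 124     # times control returned to the human driver

    print(f"{miles_driven / disengagements:,.0f} miles per disengagement")
    # -> 4,839 with these round inputs; the exact mileage in the
    #    filings is somewhat higher, which rounds to about 5,000.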

Waymo currently looks like the clear leader in driverless cars, but its bet on going straight to full automation carries real risk. Pushing reliability from, say, 99.99% to 99.9999% could take years, and in the meantime companies pursuing the incremental strategy can keep shipping products and edging into the market.
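To see why that last jump is so hard: each additional pair of nines cuts the permitted failure rate a hundredfold. Treating reliability as the fraction of failure-free miles is my own illustrative framing, not the article's definition:

    # Failure-rate arithmetic for the reliability jump mentioned above.
    for reliability in (0.9999, 0.999999):
        failures = (1 - reliability) * 1_000_000
        print(f"{reliability:.4%} reliable -> "
              f"{failures:,.0f} failures per million miles")

    # 99.9900% reliable -> 100 failures per million miles
    # 99.9999% reliable -> 1 failure per million miles
    # The second system must be 100x more dependable than the first.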

A big advantage of the incremental strategy is the data it generates; many experts believe that amassing data will make or break a driverless-car program. Waymo's cars have logged more than 3 million miles on public roads, raw material for refining its software. Tesla, by contrast, has collected more than 1 billion miles of real-world sensor data from its customers' cars. All that extra data could help Tesla progress faster toward full automation, and ultimately catch up to and overtake first-mover Waymo. (Lebang)
