SwedeSpeed - Volvo Performance Forum banner
1 - 20 of 28 Posts

·
Administrator
Joined
·
55,932 Posts
Discussion Starter · #1 ·
Volvo Cars is looking ahead to when its drivers can sit back and enjoy free time in their car on the daily commute.

At the 2016 Consumer Electronics Show (CES) Volvo revealed that it is developing intelligent, high bandwidth, streaming capabilities with its technology partner, Ericsson, that will ensure drivers and passengers get the most out of their time travelling in an autonomous Volvo.

“We recently unveiled our design vision for fully autonomous cars with Concept 26. Now we are actively working on future solutions to deliver the best user experience in fully autonomous mode. Imagine a highway full of autonomous cars with their occupants sitting back watching their favourite TV shows in high definition. This new way of commuting will demand new technology, and a much broader bandwidth to ensure a smooth and enjoyable experience,” said Anders Tylman, General Manager of the Volvo Monitoring & Concept Center at Volvo Car Group.





Interruption-free media streaming
Autonomous drive will bring a paradigm shift in mobile network demands. Volvo and Ericsson believe that this shift will see an increased need for consistent, high-bandwidth coverage beyond densely populated areas such as city centres and suburbs.

Utilising Ericsson’s network and cloud expertise, Volvo Cars’ aim is to deliver a high quality, interruption-free experience in its cars whilst on the move. By predicting your route and looking ahead at network conditions, content can be tailored to the duration of each trip and intelligently buffered to deliver a high quality and uninterrupted viewing experience.
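The route-aware buffering described above can be sketched in a few lines. This is a purely illustrative toy with invented segment data and an invented `buffer_needed` helper; nothing here is Volvo's or Ericsson's actual design:

```python
# Toy model of route-aware pre-buffering (all numbers invented).
# Each route segment: (duration in seconds, predicted downlink in Mbit/s).
def buffer_needed(route, stream_rate):
    """Walk the route backwards, tracking the media deficit (in Mbit)
    that must already be buffered when each segment begins. The value
    left over at the start is what to download before departure."""
    deficit = 0.0
    for duration, bandwidth in reversed(route):
        # Each second in this segment consumes stream_rate Mbit of video
        # while the network can only deliver `bandwidth` Mbit.
        deficit = max(0.0, deficit + (stream_rate - bandwidth) * duration)
    return deficit

# A 2-minute dead zone right at the start of a trip, 5 Mbit/s HD stream:
print(buffer_needed([(120, 1.0), (600, 20.0)], 5.0))  # -> 480.0 Mbit
```

The same dead zone placed later in the trip needs no pre-buffering at all, because the surplus bandwidth of the earlier segments fills the buffer en route — which is presumably why predicting the route matters.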

“Our research shows that almost 70 per cent of all mobile data traffic will be from video in the coming years. This requires an innovative connectivity, cloud and analytics solution that is not only capable of serving multiple moving vehicles across a highway, but also has the capacity to provide the high quality, uninterrupted video service today’s consumer is accustomed to,” said Claes Herlitz, Head of Automotive Services at Ericsson.

By learning the most common routes and times of travel and understanding media preferences, future Volvo cars will be able to provide one-click navigation and a customised preference based list of potential media - allowing customers to choose routes and select content tailored to the amount of autonomous time that is available during their commute.

Personalized and optimized content
“If you want to watch the latest episode of your favourite series, the car will know how long the journey needs to take and can optimize the route and driving control accordingly. With autonomous drive it is no longer just a question of getting from A to B quickly – it’s about the experience you wish to have in the car – how you wish to spend the time you are saving. With our future autonomous drive technology we will provide people with the freedom to choose the way they would like to commute and the content they would like to experience,” concluded Anders Tylman.
 

·
Registered
Joined
·
1,009 Posts
Huh. With the driver (as shown) being so far away from the steering wheel and pedals and eyes off the road, that means the driver is no longer able to correct the vehicle in case any system fault occurs in the auto-drive. So, if, say, the steering motor fails and turns the wheel hard left into oncoming traffic, the driver can't correct in time. So is the system being designed to be zero, one, or two fault tolerant in case any sensor, computer, or actuator fails (for any reason)?

And that leads to another question: what happens when the automation and the human interact with disastrous results, as in China Airlines 140? That was the Airbus A300 where the autopilot and flight crew got into a duel and the autopilot won, putting the airplane into a nose-high attitude and full stall only 500 feet off the deck, resulting in a nose-plant into the ground and the deaths of 264 people.

If anyone's interested, what happened was that the Pilot Flying (PF) had engaged the autopilot for landing but did not engage auto-throttle. The PF manually manipulated the throttles and inadvertently hit the TOGA (Take-Off/Go-Around) button on the throttle levers. This cued the autopilot to command 'nose up' to take the airplane around and put it into a nose-up climb configuration. Neither the PF nor the captain saw the "TOGA" light lit on the instrument panel, and they continued the approach.

The PF was focused on landing, so when the autopilot started to raise the nose for what it thought was a legitimate TOGA, the PF pushed nose-down on the yoke. The autopilot, detecting that the nose was not rising, dialed in nose-up trim. This caused the PF to push in more nose-down control input, causing the autopilot to dial in more nose-up trim. This duel between the PF and the automation continued until full nose-up trim was applied and the PF was applying full nose-down force on the yoke. Note: the autopilot did NOT disconnect when the PF's control force passed the normal disconnect threshold, because a little piece of software prevented that from happening below 1,400 feet. There was a software patch to correct this; however, it hadn't been installed in the aircraft yet.

So, after the autopilot finally gave up and said to the PF "your airplane," the captain applied power to go around. However, because the trim was set in a nose-high position, the nose went to the sky, the captain was unable to lower it, altitude peaked around 1,700 feet, the aircraft stalled, the nose dropped, and the entire ship did a face plant a few miles from the runway, killing 264 people. All from hitting one button at the wrong time.

I wonder what Volvo's stance will be on fault tolerance of the system to prevent this sort of thing. Dual-fault tolerant? That needs at least 4 independent systems of sensors, controllers, and actuators voting against each other, so any system whose answer differs from the other 3 is thrown out; then, if one of those fails, the remaining 3 can still vote out the one with a different solution. Or single-fault tolerant, where 3 complete systems vote against each other to ignore the one that has the fault? Or none, where the system fails and the car veers into opposing traffic or directly into a tree on its own (and by the time the driver can respond it's too late)? Ooh, lawsuits over the latter, I'd imagine.

Interesting times ahead.
 

·
Registered
Joined
·
4,289 Posts
Volvo stated it will accept (huge) liability when the car is running in autopilot mode. I guess Volvo will have to buy very high commercial insurance for that.

But there are similar risks in traditional cars: unintended acceleration, electric steering lockup, even the car catching fire in extreme cases, etc.
 

·
Registered
Joined
·
1,009 Posts
But there are similar risks in traditional cars: unintended acceleration, electric steering lockup, even the car catching fire in extreme cases, etc.
Right, but in current cars the driver is in the loop and can countermand the fault. Well, except in the case of the Toyota unintended acceleration, where the driver had to remove one hand from the steering wheel while the car was accelerating at full throttle and hold the start/stop button for 3 seconds to shut the engine off, vs. just turning off the key the old-fashioned way. The driver was then left with only one hand to steer the car as it approached 100 mph.

Ugh. Clearly they never thought of that fault scenario.

But with auto-drive, especially as depicted above with reading the paper and watching a movie, the driver is completely disengaged from the ability to detect and correct faults. And that means fault protection must be incorporated into the system. Otherwise people are going to die, especially if it's a single-string (no fault-tolerant) system.
 

·
Registered
Joined
·
4,289 Posts
There is probably as much fault detection, fault tolerance, and safety fallback as a company that sees safety as its top priority can provide. For example, both front and side radar can be used to detect sideways clearance. It is indeed a tough question how to prove the vehicle is safe 99.99% (or even more) of its autopilot driving time, or on par with a jet plane. Only time will tell.

Sent from my SD4930UR using Tapatalk
 

·
Registered
Joined
·
1,009 Posts
To be truly 'safe' one has to have at least 3 complete independent systems. That's 3 complete sets of sensors, control computers, and actuators. They talk to each other and compare the actions they're about to take. If all 3 agree, the action is taken. If two of the three agree and if there's one 'odd-man-out,' then the two that agree vote the miscreant 'off-line' and ignore the miscreant's proposed solution. At that point the driver will have to get 'back in the loop' because if any sensor, computer, or actuator fails on the remaining two systems then the computers have no idea which one is right.

The Space Shuttle actually had 4 primary computers all voting against each other (they had to be two-fault tolerant, so they could have two computers/sensors/actuators fail and still fly safely; the three-computer system I describe above is only single-fault tolerant). Then there was a fifth computer that ran a completely different software build, developed by a completely independent team (in case a software-derived fault affected all 4 primary computers simultaneously), that would vote all 4 of the primaries off-line and take over. That system worked fine; the hardware issues that killed the Shuttle twice were both in the zero-fault-tolerance category.
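The two-out-of-three voting logic above fits in a few lines. This is a toy illustration only; the `vote` function, the agreement tolerance, and the steering-angle commands are all invented for the example, not any manufacturer's implementation:

```python
def vote(commands, tolerance=1.0):
    """Triple-modular-redundancy majority vote over three channel
    commands (e.g. steering angles in degrees). Returns the agreed
    command plus the index of any channel voted off-line (or None).
    Returns (None, None) when no majority exists: driver, your car."""
    a, b, c = commands
    if abs(a - b) <= tolerance:
        # a and b agree; c is the odd man out if it disagrees with them
        odd = 2 if abs(a - c) > tolerance else None
        return (a + b) / 2, odd
    if abs(a - c) <= tolerance:
        return (a + c) / 2, 1   # channel b voted off-line
    if abs(b - c) <= tolerance:
        return (b + c) / 2, 0   # channel a voted off-line
    return None, None           # no two channels agree

print(vote([2.0, 2.5, 35.0]))  # -> (2.25, 2): channel 2 voted off-line
```

Once a channel has been voted out, the system is down to two and can only detect, not out-vote, the next fault — which is exactly the point at which the driver would have to get back in the loop.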
 

·
Registered
Joined
·
1,009 Posts
Oh, and that leads us down the rabbit hole of what happens when too much automation degrades the driver's basic skillset, much like Air France 447 and Asiana 214. In the former, the pitot tubes iced over and the computers handed control of the plane back to the pilots; the PF raised the nose, got the aircraft into a stall, held full aft stick thereafter, and kept it in a full stall all the way down to the ocean. And on Asiana 214, the pilots had engaged FLCH mode in the autopilot and played with the throttles just enough that autothrottle didn't kick back in when airspeed got too slow, and they plowed into the sea wall at SFO.

That's gonna be fun . . . .
 

·
Registered
2021 XC60 Inscription Denim Blue, T5, Prancing Moose, AWD, Climate,
Joined
·
1,843 Posts
I guess I hadn't fully read this, but when looking at Twitter this morning with Volvo getting ready for NAIAS, #concept26 came up with some pics.

https://twitter.com/VolvoCarsCyprus/status/685483814076977152 is the tweet referenced with these pics...








(This link doesn't embed in the forum so you have to click it to see the dash in action)

I was going to NAIAS (probably next week late) to see the new S90 but now I'm really excited about this Concept 26 as well.

:thumbup::thumbup::thumbup: :D:D:D
 

·
Registered
2021 XC60 Inscription Denim Blue, T5, Prancing Moose, AWD, Climate,
Joined
·
1,843 Posts
To be truly 'safe' one has to have at least 3 complete independent systems. That's 3 complete sets of sensors, control computers, and actuators. They talk to each other and compare the actions they're about to take. If all 3 agree, the action is taken. If two of the three agree and if there's one 'odd-man-out,' then the two that agree vote the miscreant 'off-line' and ignore the miscreant's proposed solution. At that point the driver will have to get 'back in the loop' because if any sensor, computer, or actuator fails on the remaining two systems then the computers have no idea which one is right.

The Space Shuttle actually had 4 primary computers all voting against each other (they had to be two-fault tolerant. So they could have two computers/sensors/actuators fail and still be able to fly safely. The three-computer system I describe above is only single-fault tolerant) Then there was a fifth computer that ran a completely different software build developed by a completely independent team (in case the fault was software-derived that would affect all 4 primary computers simultaneously) that would vote all 4 of the primaries off-line and take over. That system worked fine, obviously the Shuttle had hardware issues in the zero-fault tolerance category that killed it twice.
In a car, you don't need 3 complete independent systems. Aerospace does this because you can't park on the side of the road to fix it.

Expecting automobiles to be designed like aeronautic vehicles is a bit on the preposterous side.

You can design and over-design, and then throw in aerospace features with a military budget, so that the final outcome is a $2.6M automobile.

If that's what you're looking for, don't expect it to be mass produced.
 

·
Registered
Joined
·
134 Posts
Talk about elegance epitomized. Both the car and the video.

I had actually seen that video some time ago, but forgot the overall tone.

You were very nice to try to help after my early morning mini meltdown the other day (facepalm), which I am pretty mortified about. It was rambly and just dumb in parts, and I actually do know how to spell the word interference, but if I went back to edit that, I would have just deleted the whole thing, and I don't want to be that person. At the least, it was heartfelt, so I will own it.

Thanks for posting this. :)
 

·
Registered
Joined
·
1,009 Posts
In a car, you don't need 3 complete independent systems. Aerospace does this because you can't park on the side of the road to fix it.
No, aerospace does this because it must fail operational, i.e., maintain correct operation after a fault.
Expecting automobiles to be designed like aeronautic vehicles is a bit on the preposterous side.
Oh, really? Let's discuss faults, then. In a single-string system (only one sensor, one computer, one actuator) a fault could occur where, say, the steering actuator is commanded to, or fails in a way that dials in, full steering-wheel deflection in either direction. If one is cruising down the road per the picture above, where the driver is not actively monitoring the automation, the car will swerve off the road in less than two seconds, long before the driver can 1) become aware that a fault occurred, 2) countermand the fault correctly, and 3) disengage the faulty autodrive.

A common scenario I envision, encouraged by the picture above, is not that the driver will be reading the paper or watching a movie; he'll be taking a nap. He'd need 20 to 30 seconds to recover from a fault, and there's no way that can save his ass. If the fault occurs on an undivided highway and steers the car into oncoming traffic, you'll see the equivalent of a 120 mph head-on collision. You won't survive that, even in a Volvo.

But you can counter with, "Oh, let's put in a software algorithm that checks for that and turns the wheel the other way if that happens." Fine, but what happens if THAT algorithm gets triggered when it's not supposed to, yields the exact same result only in the opposite direction, and the car plows into a tree on the side of the road?

You have to be careful with this stuff. Think Toyota unintended acceleration, only uncommanded steering inputs that will take you into oncoming traffic and leave you little room to correct for it.
You can design and over design and then you can throw in aerospace features with a military budget so that the final outcome is a $2.6M automobile.

If that's what you're looking for, don't expect it to be mass produced.
If you can't make it even single-fault tolerant, then it shouldn't be made AT ALL. This stuff scares me if it's not done properly.
 

·
Registered
2021 XC60 Inscription Denim Blue, T5, Prancing Moose, AWD, Climate,
Joined
·
1,843 Posts
No, aerospace does this because they must fail operational, i.e., maintain correct operation after a fault

Oh, really? Let's discuss faults, then. In a single-string system (only one sensor, one computer, one actuator) a fault could occur where, say, the steering actuator is commanded or fails in a way that it dials in full steering wheel deflection in either direction. If one is cruising down the road per the above picture where the driver is not actively monitoring the automation, the car will swerve off the road in less than two seconds long before the driver can actively 1) become aware that a fault occurred and 2) countermand the fault correctly, and 3) disengage the faulty autodrive. A common scenario I envision that is encouraged from the picture above is not that the driver will be reading the paper or watching a movie, he'll be taking a nap. He'd need 20 to 30 seconds to recover from a fault, and there's no way that can save his ass. If the fault occurs on an undivided highway and the fault steers the car into oncoming traffic you'll see a head-on collision equivalent to a car plowing into another at 120 mph head on. You won't survive that, even in a Volvo.

But you can counter with "oh, let's put in a software algorithm that checks for that and turn the wheel the other way in case that happens?" Fine, but what happens if THAT algorithm gets triggered when it's not supposed to and yields the exact same result only in the opposite direction and the car plows into a tree on the side of the road?

You have to be careful with this stuff. Think Toyota unintended acceleration, only uncommanded steering inputs that will take you into oncoming traffic and leave you little room to correct for it.


If you can't make it even single-fault tolerant, then it shouldn't be made AT ALL. This stuff scares me if it's not done properly.
Let's start with the basic concept you're espousing. Name a car that does this now. Are there any single-fault-tolerant systems in production cars now? Triple-redundant systems?

If not, why are you using this thread as a launching pad for a personal vendetta against the industry?

How much are you willing to pay for such a car?
 

·
Registered
Joined
·
1,419 Posts
If you can't make it even single-fault tolerant, then it shouldn't be made AT ALL. This stuff scares me if it's not done properly.
I concur; a robust design is a necessity if you're taking control of people's lives (the passengers of ALL vehicles involved).
 

·
Registered
Joined
·
87 Posts
No, aerospace does this because they must fail operational, i.e., maintain correct operation after a fault

Oh, really? Let's discuss faults, then. In a single-string system (only one sensor, one computer, one actuator) a fault could occur where, say, the steering actuator is commanded or fails in a way that it dials in full steering wheel deflection in either direction. If one is cruising down the road per the above picture where the driver is not actively monitoring the automation, the car will swerve off the road in less than two seconds long before the driver can actively 1) become aware that a fault occurred and 2) countermand the fault correctly, and 3) disengage the faulty autodrive. A common scenario I envision that is encouraged from the picture above is not that the driver will be reading the paper or watching a movie, he'll be taking a nap. He'd need 20 to 30 seconds to recover from a fault, and there's no way that can save his ass. If the fault occurs on an undivided highway and the fault steers the car into oncoming traffic you'll see a head-on collision equivalent to a car plowing into another at 120 mph head on. You won't survive that, even in a Volvo.

But you can counter with "oh, let's put in a software algorithm that checks for that and turn the wheel the other way in case that happens?" Fine, but what happens if THAT algorithm gets triggered when it's not supposed to and yields the exact same result only in the opposite direction and the car plows into a tree on the side of the road?

You have to be careful with this stuff. Think Toyota unintended acceleration, only uncommanded steering inputs that will take you into oncoming traffic and leave you little room to correct for it.


If you can't make it even single-fault tolerant, then it shouldn't be made AT ALL. This stuff scares me if it's not done properly.
JPL Guy, I'm impressed with your understanding of control systems and automation, and agree that proper implementation is critical to autonomous driving being safe. I suspect your explanations and examples are correct, but I think you're building a straw man. The argument I am inferring from your posts is "autonomous systems have failed in the past and people died, so they should not be attempted in cars." I disagree, and believe autonomous vehicles can lead to safer roads and increase productivity by giving people their commutes. To be done properly, autonomous driving must first be attempted. Volvo's approach is noteworthy because they have worked with the community to create a controlled testbed and will accept liability for negative outcomes.

Your argument is also based on a fair amount of speculation leading to the implication that Volvo, a company with some aircraft history, will not have the necessary redundancy. In this I agree with tarrbot: the consequences of a failed system are less for a car than for an aircraft, which are less than for a spacecraft. That means a car system can be designed to a different standard. Additionally, when more autonomous vehicles are on the road, they will be able to react to the failed vehicle.

Your examples are based on the assumption that a human driver can compensate for a system (mechanical or computer) error like you described better than an algorithm-based countermeasure can. I don't think that's the case for many drivers, possibly including me.
 

·
Registered
Joined
·
1,419 Posts
JPL Guy, I'm impressed ....
A few comments :

I disagree, and believe autonomous vehicles can lead to safer roads and increase productivity by giving people their commutes. To be done properly, autonomous driving must first be attempted.
Only if implemented properly.

I agree with tarrbot, that the consequences of a failed system for a car is less than for an aircraft, which is less than a spacecraft.
I don't. There are several orders of magnitude more cars than aircraft in operation. Statistically, a huge opportunity for problems.

Additionally, when more autonomous vehicles are on the road, they will be able to react to the failed vehicle.
Not necessarily.

I'm not saying we shouldn't have autonomous vehicles; I just think we should all understand what's required for the actual results to live up to our expected fantasies.
 

·
Registered
Joined
·
1,009 Posts
JPL Guy, I'm impressed with your understanding of control systems and automation, and agree that proper implementation is critical to autonomous driving being safe. I suspect your explanations and examples are correct, but I think you're building a straw man. The argument I am inferring from your posts is "autonomous systems have failed in the past and people died, so they should not be attempted in cars."
Nope, I'm saying they need to be single-fault tolerant sufficient to preclude the car veering into opposing traffic or a tree before the driver is able to take corrective action against the fault.
I disagree, and believe autonomous vehicles can lead to safer roads and increase productivity by giving people their commutes. To be done properly, autonomous driving must first be attempted. Volvo's approach is noteworthy because they have worked with the community to create a controlled testbed and will accept liability for negative outcomes.
There's a little Canadian company called XeMODeX that built its business strategy on repairing Volvo electronics. Note that there are 17 product lines for Volvo module repairs; the next-highest manufacturer is Audi, with only 6. This doesn't include the TFT retrofit, which is an aftermarket nice-to-have. They are very much in business, thanks to Volvo's design philosophy and implementation (read: flaky). Are you sure you want to trust the lives of your kids to single-string Volvo electronics?
Your argument is also based on a fair amount of speculation leading to the implication that Volvo, a company with some aircraft history, will not have the necessary redundancy. In this, I agree with tarrbot, that the consequences of a failed system for a car is less than for an aircraft, which is less than a spacecraft. That means that a car system can be designed to a different standard. Additionally, when more autonomous vehicles are on the road, they will be able to react to the failed vehicle.
Again, not if the time constant from onset of failure to deadly crash is under 2 seconds if the car is commanded by fault to run into oncoming traffic or into a tree.
Your examples are based on the assumption that a human driver can better compensate for a system (mechanical or computer) error like you described than an algorithm-based countermeasure. I don't think that's the case for many drivers, possibly including me.
If the processor that runs the algorithm fails and commands the deviation into oncoming traffic, then, no, the algorithm is not better. I remember the story of one of the guys who started working with me when I first went to work after college. He had been in charge of search and rescue at Edwards AFB. During his tenure, the FB-111 was testing its terrain-following radar as part of the process of certifying the aircraft for the Air Force. They lost 4 aircraft where the bird was doing fine, following the terrain only a few hundred feet off the ground at 500 mph, when the plane just put itself into a full nose-down dive. There wasn't much left of the plane; the co-worker said all they could find of the pilot, at most, was a head in the helmet or a foot in a boot.

Even though the terrain-following radar was redundant, the designers hadn't counted on the radar feed horns on the antennas going into a 'common-mode' failure where all the antennas resonated. Even though the electronics were working perfectly, this set up a condition in the sensor system where all the antennas thought the aircraft was higher than it was and commanded full nose-down pitch when the bird was only 200 feet off the deck.

So all I'm saying is that, if you're not careful and make things single-string (not fault tolerant), this can be the same as the GM ignition switch. And nowadays microcontrollers, sensors, and actuators are cheap. Hell, my watch has more processing power than the Shuttle computers did back in 1978.

But single-string auto-drive while the driver takes a nap? No way.
 

·
Registered
2021 XC60 Inscription Denim Blue, T5, Prancing Moose, AWD, Climate,
Joined
·
1,843 Posts
You were very nice to try to help after my early morning mini meltdown the other day (facepalm), which I am pretty mortified about. It was rambly and just dumb in parts, and I actually do know how to spell the word interference, but if I went back to edit that, I would have just deleted the whole thing, and I don't want to be that person. At the least, it was heartfelt, so I will own it.

Thanks for posting this. :)
I saw nothing wrong. Saw it as a small rant and nothing more. We all rant and ramble sometimes. :)

Nope, I'm saying they need to be single-fault tolerant sufficient to preclude the car veering into opposing traffic or a tree before the driver is able to take corrective action against the fault.

If the processor that runs the algorithm fails and commands the deviation into oncoming traffic, then, no, the algorithm is not better. I remember the story of one of the guys who started working with me when I first started going to work after college. He was in charge of search and rescue at Edwards AFB. During his tenure the FB111 was testing its terrain-following radar as part of the process of getting the aircraft certified for the Air Force. They had lost 4 aircraft where the bird was doing fine, following the terrain only a few hundred feet off the ground at 500 mph when the plane just put itself into a full nose-down dive. Wasn't much left of the plane, the co-worker said all they could find of the pilot at most was a head in the helmet or a foot in a boot. Even though the terrain-following radar was redundant, the designers hadn't counted on the radar feed horns on the antennas going into a 'common mode' failure where all the antennas resonated, and even though the electronics was working perfectly it set up a condition in the sensor system where all the antennas thought the aircraft was higher than it was and commanded full nose-down pitch when the bird was only 200 feet off the deck.

So all I'm saying is that, if you're not careful and make things single-string (not fault tolerant), this can be the same as the GM ignition switch. And nowadays microcontrollers, sensors, and actuators are cheap. Hell, my watch has more processing power than the Shuttle computers did back in 1978.

But single-string auto-drive while the driver takes a nap? No way.
But can you tell me an autonomous system now that has this redundancy and how much you'd be willing to pay for it?

I understand your point and in an ideal world, this is the way it should be.

Did you watch the video and listen to them discuss this? Phrases like, "And then in other ways when you go into delegated driving of making sure you stay in control of the vehicle".

Are you making an assumption about an imaginary product that isn't on the market yet?

I'd say you are.

This may be something you know, but I'm betting you aren't the only person on the planet that knows about autonomous systems.

Have you looked at the definition levels of what makes an autonomous car?

NHTSA lists them as:

Level 0: The driver completely controls the vehicle at all times.
Level 1: Individual vehicle controls are automated, such as electronic stability control or automatic braking.
Level 2: At least two controls can be automated in unison, such as adaptive cruise control in combination with lane keeping.
Level 3: The driver can fully cede control of all safety-critical functions in certain conditions. The car senses when conditions require the driver to retake control and provides a "sufficiently comfortable transition time" for the driver to do so.
Level 4: The vehicle performs all safety-critical functions for the entire trip, with the driver not expected to control the vehicle at any time. As this vehicle would control all functions from start to stop, including all parking functions, it could include unoccupied cars.

We are currently only at Level 2 autonomous vehicles. There's a long way to go before we see Level 4, which is what this is talking about.

There's a LOT to do before we get to that level. It may be a decade before we truly see it working properly. We still need to get to Level 3 in the industry, and I honestly do not expect that to happen before 2020.

I feel like you're jumping the gun a bit early on this one, JPL Guy.
 

·
Registered
Joined
·
1,009 Posts
I saw nothing wrong. Saw it as a small rant and nothing more. We all rant and ramble sometimes. :)


But can you tell me an autonomous system now that has this redundancy and how much you'd be willing to pay for it?

I understand your point and in an ideal world, this is the way it should be.

Did you watch the video and listen to them discuss this? Phrases like, "And then in other ways when you go into delegated driving of making sure you stay in control of the vehicle".

Are you making an assumption on an imaginary product that isn't to market yet?

I'd say you were.

This may be something you know, but I'm betting you aren't the only person on the planet that knows about autonomous systems.

Have you looked at the definition levels of what makes an autonomous car?

NHTSA lists them as:

Level 0: The driver completely controls the vehicle at all times.
Level 1: Individual vehicle controls are automated, such as electronic stability control or automatic braking.
Level 2: At least two controls can be automated in unison, such as adaptive cruise control in combination with lane keeping.
Level 3: The driver can fully cede control of all safety-critical functions in certain conditions. The car senses when conditions require the driver to retake control and provides a "sufficiently comfortable transition time" for the driver to do so.
Level 4: The vehicle performs all safety-critical functions for the entire trip, with the driver not expected to control the vehicle at any time. As this vehicle would control all functions from start to stop, including all parking functions, it could include unoccupied cars.

We are currently only at Level 2 autonomous vehicles. There's a long way to go before we see Level 4, which is what this is talking about.

There's a LOT to work through before we get to that level. It may be a decade before we truly see it working properly. The industry still needs to get to Level 3, and I honestly do not expect that to happen before 2020.

I feel like you're jumping the gun a bit on this one, JPL Guy.
If you actually go to the PDF on the very link you reference, http://www.nhtsa.gov/About+NHTSA/Pr...eases+Policy+on+Automated+Vehicle+Development you'll see a very interesting paragraph:

NHTSA said:
NHTSA expects to be in a position to determine the need for standards for these safety-critical electronic control systems. This work will complement and support the agency research to develop appropriate safety performance requirements for automated vehicles.
Within the areas of safe reliability and cybersecurity of control systems, the following topics will need to be addressed:
Safe Reliability
• Functional safety - Defining functional safety requirements for electronic control systems
• Failure modes – Evaluating failure modes and associated severities
• Failure probability – Evaluating the likelihood of a failure to occur
• Diagnostics/prognostics – Evaluating the need and feasibility of enhanced capabilities that can self-detect or predict failures and investigating how to communicate potential system degradation to the driver
• Redundancy – Investigating what additional hardware, software, data communications, infrastructure, etc. may be needed to ensure the safety of highly automated vehicles
• Availability (of the automated system) – Ability to perform even at a degraded level in case of failure
• Certification – Requirements and processes to validate that the system is safe at deployment and remains safe in operation, including vehicle software
Cybersecurity
• Security – Capability of system to resist cyber attacks
• Risks – Potential gaps in the system that can be compromised by cyber attacks
• Performance – Effectiveness of security systems
• Unintended consequences – Impact of cybersecurity on performance of the system
• Certification – Method to assure that critical vehicle subsystems such as communications are secure
So NHTSA is saying the same thing I am.

Note their last paragraph:

NHTSA said:
NHTSA does not recommend that states authorize the operation of self-driving vehicles for purposes other than testing at this time. We believe there are a number of technological issues as well as human performance issues that must be addressed before self-driving vehicles can be made widely available. Self-driving vehicle technology is not yet at the stage of sophistication or demonstrated safety capability that it should be authorized for use by members of the public for general driving purposes. Should a state nevertheless decide to permit such non-testing operation of self-driving vehicles, at a minimum the state should require that a properly licensed driver (i.e., one licensed to drive self-driving vehicles) be seated in the driver’s seat and be available at all times in order to operate the vehicle in situations in which the automated technology is not able to safely control the vehicle.
Welcome to the land of FMECAs, dude. (Failure Mode, Effects, and Criticality Analysis)
 

Registered · 2021 XC60 Inscription Denim Blue, T5, Prancing Moose, AWD, Climate · 1,843 Posts
If you actually go to the PDF on the very link you reference, http://www.nhtsa.gov/About+NHTSA/Pr...eases+Policy+on+Automated+Vehicle+Development you'll see a very interesting paragraph:

So NHTSA is saying the same thing I am.

Note their last paragraph:

Welcome to the land of FMECAs, dude. (Failure Mode, Effects, and Criticality Analysis)
I see you completely missed what I said... :facepalm:

I said, in case you missed it:

We are currently only at Level 2 autonomous vehicles. There's a long way to go before we see Level 4, which is what this is talking about.

There's a LOT to work through before we get to that level. It may be a decade before we truly see it working properly. The industry still needs to get to Level 3, and I honestly do not expect that to happen before 2020.

I feel like you're jumping the gun a bit on this one, JPL Guy.
My point is that this thread was about discussing the new technology, not being a Debbie Downer and crapping all over the hurdles that need to be overcome.

This is good forward-thinking technology. Is it ready for primetime? Hell no it's not.

You saying it isn't is pretty much a Captain Obvious statement if there ever was one.
 

Registered · 1,009 Posts
I see you completely missed what I said... :facepalm:

I said, in case you missed it:



My point is that this thread was about discussing the new technology, not being a Debbie Downer and crapping all over the hurdles that need to be overcome.

This is good forward-thinking technology. Is it ready for primetime? Hell no it's not.

You saying it isn't is pretty much a Captain Obvious statement if there ever was one.
Huh? In case you missed it, the very picture that Chris posted at the start of this thread was of a driver completely disengaged from the task of driving, enjoying free time while the car is in "fully autonomous mode." We've been talking Level 4 from the start.
 