Uber

muckles:
As well as our many qualities as a species, we have failings: we evolved for a very different existence, and our brain doesn’t process everything our eyes see,

Neither, evidently, does the computer process everything it sees!

What humans do have is a teleological comprehension and an intimate appreciation of human failings - most of us have operated as pedestrians as well as drivers, and use experience in one context to inform our behaviour in the other.

Humans also learn spontaneously and from the experience of others (through communication), whereas the effect of this crash will not have taught the computer anything (either about the general principles or the particular hazard) and it will be left to humans to review the circumstances and determine how the computer ought to have behaved or assayed the circumstances differently.

Indeed, driving is actually not a particularly “different existence” from the one we have evolved to cope with - we are extraordinarily good at it, not only as operators but as those who have devised it entirely, including the machinery used, the environmental context, and the rules that govern it.

(better than most dogs: they’re red/green colour blind, which is why you don’t see many with driving licences. :laughing: ) we also have problems concentrating on low-stimulus activities, of which driving …

It can be low stimulus, but humans tend to adjust the throttle accordingly, to maintain an appropriate rate of stimulus and an efficient balance between concentration and risk.

I can give some illustration of the challenges with monitoring and intervention side of things that would relate directly to this. I’m wondering how I can scribble it down without turning into a bloody essay or load of boring ■■■■■ :laughing: :laughing: . I’ll have to think on it :laughing:

The BA Flight 38 unpowered crash landing at Heathrow, US Airways Flight 1549 which ended up unpowered in the Hudson river, and Qantas Flight 32 which suffered a catastrophic engine failure and systems shutdown all illustrate that there is no substitute for a human in an emergency. IIRC the Qantas crew had to override computer-programmed systems in order to land safely.

These may be exceptional instances of human ability and contrast with other occasions when the wrong decisions have been made, but the chance of a computer being able to deliver the same outcome in these incidents is nil.

linkedin.com/pulse/what-air … bus-hughes

I notice over here the liberal news channels report that the woman has a criminal record, didn’t cross at a proper spot so was jaywalking, etc… what the KF has having a criminal record (drugs, petty theft) got to do wae getting run over?! Rant over!! :open_mouth: :open_mouth: New news: it is being said that the Uber driver is an ex-criminal with a federal record. Questions are being asked why Uber employed a convicted armed robber as a driver/observer.

There was someone on the Radio 4 Today programme this am, who pointed out that the less you gave a driver to do, the worse they did what they were left to do.

There was a direct correlation between workload and performance, she said: higher workload = better performance.

She also said that being the ‘attendant observer’ in a vehicle that was driving itself was far more stressful than driving the thing.

In the Hero on the Hudson book, Captain Chesley Sullenberger devotes a whole chapter to the ‘automated cockpit’… he says that the automated systems decrease pilot workload at points when it is already low, and increase it when workload is high.

The example he cites is letting the plane do an automated landing…when there’s a sudden and unexpected runway change on approach. It takes time for the pilot to reorientate himself before he can take back control of the aircraft.

That said, I can see the advantage in automated emergency braking when a vehicle suddenly encounters a fog-bank or whatever. Had it been in use at the time, it might have saved some lives when there was that dreadful Bonfire night pileup on the M5 a few years back.

On a wider note…with all the fuss about Facebook etc, people seem to be falling out of love with ‘new tech.’

Freight Dog:

Wiretwister:
Another thing the guy said was that there were about 100 people killed by vehicles with drivers at the controls, which I thought was a pretty pointless statistic to offer, as he didn’t say if that was state-wide or national.

That one’s always going to be said. To kick that argument back at them:

I read that in the US, 1.16 people are killed for every 100 million miles driven. Driverless cars have figuratively just driven around the block, and they’ve already totalled one person.

Sub-human ability to monitor and intervene, paired with sub-par performing technology.
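To put a rough number on that comparison: at the quoted human-driver rate, how many deaths would you "expect" over the driverless fleet's mileage so far? (The ~4 million test miles figure is an assumption taken from later in the thread, not an official number.)

```python
# Back-of-envelope expected-value check using the quoted figures.
# All inputs are rough numbers from the thread, not official statistics.

human_rate = 1.16 / 100e6    # deaths per mile (quoted US figure)
driverless_miles = 4e6       # rough autonomous test mileage (assumption)

expected_deaths = human_rate * driverless_miles
print(f"Expected deaths at the human rate: {expected_deaths:.3f}")  # -> 0.046
```

One actual death against an expectation of roughly 0.05 looks bad, but a sample of one is far too small to be conclusive either way.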

They also killed the dude who had his Tesla on autopilot and drove it under a truck, which neither he nor the car noticed.

They found the car up a tree, two fields from the crash site, wheels still spinning and DVD still playing on the ‘infotainment’ screen. The car didn’t realise that it had crashed, but the dude had had his head torn off!

JIMBO47:
I notice over here the liberal news channels report that the woman has a criminal record, didn’t cross at a proper spot so was jaywalking, etc… what the KF has having a criminal record got to do wae getting run over?! Rant over!! :open_mouth: :open_mouth:

A common occurrence on UK roads, and one which most human drivers deal with without issue.

In a perfect world, self-driving cars would be, well, perfect. But it isn’t a perfect world; it’s full of imperfect humans, imperfect humans who react on gut instinct.

Automation is perfect in a controlled environment; our roads aren’t controlled. Until they are, autonomous vehicles will continue to kill and injure people, with the blame being passed onto the human.

I can see how they might make full automation work in California, where the weather is fine and the roads are wide and straight.

But the UK?

How would it deal with imperfect road surfaces - snow, mud, leaves, potholes, standing water?

As for the countryside, I just can’t see a fully automated car coping with a narrow muddy country lane, popping a couple of wheels on the verge and folding in the mirrors to squeeze past another vehicle, reversing into a field gateway, swerving a pothole, passing a horse or a cyclist safely.

How would it make that judgement we all make about giving way to an oncoming car at a narrow bridge, for example? We work it out, but imagine two automated cars arriving at the narrow bridge at the same time: they’d either both go for it, or neither would.

I really can’t see it working, ever.

Juddian:
I can state absolutely for the record that they can stick this garbage where the sun doesn’t shine. I shall never own a car that is programmed to interfere with the steering, nor one that has any control of the brakes. I’ll stick with my old trustworthy Japanese stuff as long as I can find them; if necessary I’ll grey-import an older Japanese model when the ones here get too rusty and have to be scrapped.

I don’t know what car(s) you currently own/drive, but you may already have one that has some control over the brakes. Various “electronic stability” systems have been in use for the past couple of decades and many of them have access to the brakes…

JIMBO47:
I notice over here the liberal news channels report that the woman has a criminal record, didn’t cross at a proper spot so was jaywalking, etc… what the KF has having a criminal record got to do wae getting run over?! Rant over!! :open_mouth: :open_mouth:

Not overly surprised by that. As with the motorcycle crash in California, or the Tesla crash, much was made of the other vehicle/person being in the wrong, but that really misses the point: unless you are going to operate autonomous vehicles in a closed and controlled system, free from human interactions, you are going to have to develop systems that can deal with human irrationality.
But then no doubt the people who write for these media outlets have fully bought into the autonomous future and believe it is beyond criticism.
The other worrying thing was the Tesla crash where it drove into the back of a stationary fire engine, basically because it didn’t recognise it as it was stationary; and as with the fatal Tesla crash, the drivers (who have bought into the cult of Tesla) seem to believe the cars are truly autonomous, and have found ways of overriding the safety systems which check they are keeping their hands on the steering wheel regularly while in Autopilot mode.
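On the stationary fire engine point: one commonly cited explanation (not any manufacturer's confirmed code, and the names below are purely illustrative) is that radar-based cruise systems discard returns with near-zero ground speed to avoid false alarms from roadside clutter, which also discards a stopped vehicle dead ahead. A minimal sketch of that filtering logic:

```python
# Illustrative sketch only: why filtering "stationary" radar returns as
# clutter (signs, barriers) can also filter out a stopped fire engine.

def relevant_targets(radar_returns, own_speed, clutter_threshold=2.0):
    """Keep only returns that appear to be moving relative to the ground."""
    targets = []
    for r in radar_returns:
        ground_speed = own_speed + r["relative_speed"]  # m/s
        if abs(ground_speed) > clutter_threshold:
            targets.append(r)
        # else: discarded as presumed stationary roadside clutter
    return targets

# A stopped vehicle ahead closes at our own speed, so its ground speed
# is ~0 and it gets filtered out along with the scenery.
returns = [
    {"id": "car ahead", "relative_speed": -5.0},    # moving, slower than us
    {"id": "fire engine", "relative_speed": -30.0}, # stationary (we do 30 m/s)
]
print(relevant_targets(returns, own_speed=30.0))
# -> only the "car ahead" return survives the filter
```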

Rjan:

muckles:
As well as our many qualities as a species, we have failings: we evolved for a very different existence, and our brain doesn’t process everything our eyes see,

Neither, evidently, does the computer process everything it sees!

What humans do have is a teleological comprehension and an intimate appreciation of human failings - most of us have operated as pedestrians as well as drivers, and use experience in one context to inform our behaviour in the other.

Humans also learn spontaneously and from the experience of others (through communication), whereas the effect of this crash will not have taught the computer anything (either about the general principles or the particular hazard) and it will be left to humans to review the circumstances and determine how the computer ought to have behaved or assayed the circumstances differently.

Indeed, driving is actually not a particularly “different existence” from the one we have evolved to cope with - we are extraordinarily good at it, not only as operators but as those who have devised it entirely, including the machinery used, the environmental context, and the rules that govern it.

(better than most dogs: they’re red/green colour blind, which is why you don’t see many with driving licences. :laughing: ) we also have problems concentrating on low-stimulus activities, of which driving …

It can be low stimulus, but humans tend to adjust the throttle accordingly, to maintain an appropriate rate of stimulus and an efficient balance between concentration and risk.

Firstly, you have cut my post, which has left it out of context as a reply to Cav551’s post.

Secondly, research shows that many people don’t learn from their mistakes and repeat them, especially those who don’t take responsibility and believe somebody else is to blame.

Thirdly, I didn’t say or imply that the systems on an autonomous car see all.

Fourthly, I can only assume you drive in some sort of nirvana, or with your head up your arse, if you believe people counteract the low stimulus by using the throttle; I see them doing all sorts of other things because their attention has wandered from the task of driving.

cav551:
The BA Flight 38 unpowered crash landing at Heathrow, US Airways Flight 1549 which ended up unpowered in the Hudson river, and Qantas Flight 32 which suffered a catastrophic engine failure and systems shutdown all illustrate that there is no substitute for a human in an emergency. IIRC the Qantas crew had to override computer-programmed systems in order to land safely.

These may be exceptional instances of human ability and contrast with other occasions when the wrong decisions have been made, but the chance of a computer being able to deliver the same outcome in these incidents is nil.

linkedin.com/pulse/what-air … bus-hughes

I would say these examples show what good training and selection can achieve. I think Freight Dog might like to explain what a commercial pilot has to go through to get and keep his licence to fly commercially.

However, the powers that be don’t really want highly trained drivers, especially commercial ones; it would cost too much (and this drive for automation is driven by commercial factors, not some high ideal to make life better for the ordinary person). Far better to have computerised systems and dumb it all down: it costs less, and it’s far easier to replace the workers if they get uppity and try to hold the management to ransom.

This is what’s been done since the Industrial Revolution, where once-skilled jobs were reduced to unskilled processes, enabling high-volume, low-cost production using a low-paid, easily replaced workforce.

dailymail.co.uk/news/article … felon.html

A bit more info. Looks like they’re blaming the pedestrian.

muckles:

cav551:
The BA Flight 38 unpowered crash landing at Heathrow, US Airways Flight 1549 which ended up unpowered in the Hudson river, and Qantas Flight 32 which suffered a catastrophic engine failure and systems shutdown all illustrate that there is no substitute for a human in an emergency. IIRC the Qantas crew had to override computer-programmed systems in order to land safely.

These may be exceptional instances of human ability and contrast with other occasions when the wrong decisions have been made, but the chance of a computer being able to deliver the same outcome in these incidents is nil.

linkedin.com/pulse/what-air … bus-hughes

I would say these examples show what good training and selection can achieve. I think Freight Dog might like to explain what a commercial pilot has to go through to get and keep his licence to fly commercially.

However, the powers that be don’t really want highly trained drivers, especially commercial ones; it would cost too much (and this drive for automation is driven by commercial factors, not some high ideal to make life better for the ordinary person). Far better to have computerised systems and dumb it all down: it costs less, and it’s far easier to replace the workers if they get uppity and try to hold the management to ransom.

This is what’s been done since the Industrial Revolution, where once-skilled jobs were reduced to unskilled processes, enabling high-volume, low-cost production using a low-paid, easily replaced workforce.

I would agree that those are examples of good training and selection procedures. However, there is one inescapable truth which cannot be denied as long as man’s fundamental orifice points downwards: there is no maybe in the operation of a computer. It is a binary system - either on or off. It has to make a yes/no decision. Man can make a decision based on maybe, and choose from more than two options.

muckles:
I think Freight Dog might like to explain what a commercial pilot has to go through to get and keep his licence to fly commercially.

I’ll try my best :astonished: :laughing:

I won’t harp on about the course to obtain the basic Airline Transport Pilot’s Licence. The training is hard, thorough and exacting. There’s statutory classroom learning, with 14 exams you sit at a Civil Aviation Authority adjudicated exam centre. Once you’ve got all that, the licence allows you to fly for a living, but of course you can’t actually fly anything useful, as the largest thing you’d have flown would’ve been a light aircraft with two piston engines.

After bagging your new licence you need to pass selection with an airline and find a job! Once you join an airline they will put you through a “type rating”. Each aircraft design is called “a type”, so you’re being rated on that aircraft - a Boeing 737-800 is a type.

Airline selection typically consists of various tests and interviews designed to fathom out your technical skills, your “non-technical skills” and various other aspects required of an effective crew member - and of course the traits that company wishes to see in its employees :laughing: . A typical airline selection might run over 3 or 4 separate days, with things such as:

Day 1 - written tests in verbal reasoning and comprehension, mathematical reasoning, and a series of tests using computer models to assess spatial awareness, workload management, prioritisation, leadership, communication, ability to cope with pressure, hand-eye coordination etc.

Day 2 - interview. Typically what they call “performance based”. A series of questions asking “give us an example of a time when yadda yadda…”. Each question designed to explore various aspects. Up to you to have enough jackanories you can demonstrate all these qualities :laughing:

Day 3 - simulator assessment. Typically putting into practice some of the key skills they’re looking for against a backdrop of pressure in the environment whilst assessed on raw handling skills

Day 4- perhaps another final interview and group exercises to see how effective you are at utilising your “human factors” skills that are relevant to the job :laughing: . Or not :laughing:

Once you’ve plumped for a job, you’ll be sent on a type rating. This takes about 3-4 months end to end. It’s a very intensive course designed to train you on all aspects of flying a particular type within an operation. First phase is ground school.

So typically aircraft-specific ground school covering the technical systems, performance calculations, load and balance calcs, operational procedural knowledge, and emergencies. Then more wide-ranging operational ground school - for instance, area-specific operations such as the North Atlantic track system, Russian ops, dangerous goods, security, emergency equipment on board, RNAV operations (modern nav systems). The list is bloody endless :laughing: . Then there’s a stack of exams you sit.

Next bit on the type rating is simulator training. Typically 9 sessions of 4 hours each, each session exploring various technical and emergency aspects within the framework of standard operating procedures: engine failures, fires, hydraulics, emergency descents, stalling, upset recovery, electrics, gear, terrain, wind shear, all-weather ops. It’s a big list anyway. Then you sit a test over 2 days with a CAA examiner watching you and your colleague sweating it out in the sim. Each flight test in the simulator is 4 hours long - the most stressful and intensive 8-hour driving test imaginable, where everything that possibly can go wrong with an aircraft seems to go wrong :laughing: .

Then after that it’s several weeks of “line training” - flying for real on real flights with a trainer. Essentially learning how to fly the line and utilise all the training against the backdrop of everyday operations. Basically learning the job :laughing: .

Finally there’s a line check. An examiner sits along for the ride and observes you and your colleague on a standard flight. Once you’ve passed this, you’re done!..

…Until about 5 months later anyway :laughing: . And then it’s another round of technical exam papers to sit on various aspects and a 2 day simulator test again. Then there’s 2 days of groundschool on various aspects with more tests (spotting a theme here? :laughing: ).

So you have two 2-day sim tests to do a year. These are by CAA-approved examiners, and essentially they re-sign your licence. Plus a line check every 12 months, ground school every 11 months, and a yearly medical requirement. In addition you’re expected to complete training and tests on endless aspects that have changed or been fed down via the industry authorities or manufacturers. You have to attend ongoing training on all human factors skills, threat and error management, etc.

In the middle of that you’re flying the line, ■■■■■■ out of your face with jet lag, keeping abreast of the voluminous and many manuals for the operation, and swigging coffee like it’s going out of fashion :laughing: . You get used to the word “debrief”, and it becomes ingrained that everything you do will be picked apart, analysed and soul-searched for performance improvement. No one ever, ever, ever has nothing on a debrief, even after a standard day out, including examiners themselves. The job isn’t for anyone who can’t take constructive criticism :laughing: .

muckles:

Rjan:

muckles:
As well as our many qualities as a species, we have failings: we evolved for a very different existence, and our brain doesn’t process everything our eyes see,

Neither, evidently, does the computer process everything it sees!

What humans do have is a teleological comprehension and an intimate appreciation of human failings - most of us have operated as pedestrians as well as drivers, and use experience in one context to inform our behaviour in the other.

Humans also learn spontaneously and from the experience of others (through communication), whereas the effect of this crash will not have taught the computer anything (either about the general principles or the particular hazard) and it will be left to humans to review the circumstances and determine how the computer ought to have behaved or assayed the circumstances differently.

Indeed, driving is actually not a particularly “different existence” from the one we have evolved to cope with - we are extraordinarily good at it, not only as operators but as those who have devised it entirely, including the machinery used, the environmental context, and the rules that govern it.

(better than most dogs: they’re red/green colour blind, which is why you don’t see many with driving licences. :laughing: ) we also have problems concentrating on low-stimulus activities, of which driving …

It can be low stimulus, but humans tend to adjust the throttle accordingly, to maintain an appropriate rate of stimulus and an efficient balance between concentration and risk.

Firstly, you have cut my post, which has left it out of context as a reply to Cav551’s post.

Secondly, research shows that many people don’t learn from their mistakes and repeat them, especially those who don’t take responsibility and believe somebody else is to blame.

Indeed, people do not have to learn - in the sense they may review the circumstances and decide they would change nothing about their behaviour - but they are still capable of learning, and they do undertake a post-mortem review (so to speak, unless they suffer from the kind of mental deficits that would preclude them from driving and most other responsibilities).

A computer does not even reason about responsibility or form a view upon blame. And indeed, it is not unusual that those responsible for programming computers, also decide that there is nothing to learn, and that the programming will not be changed, unless very significant penalties are threatened or inflicted upon them.

Thirdly, I didn’t say or imply that the systems on an autonomous car see all.

I think that was the general tenor, when Cav referred to things “not being picked up by radar or video”, and you countered by arguing that we do not necessarily process everything we see - as well as providing some nonsense about our evolution as a species, which is a pet hate of mine I must say.

Fourthly, I can only assume you drive in some sort of nirvana, or with your head up your arse, if you believe people counteract the low stimulus by using the throttle; I see them doing all sorts of other things because their attention has wandered from the task of driving.

It depends on the context. I take your point to an extent, but the vast majority of drivers in urban or rural areas are not in a state of understimulation unless there is congestion - in fact, sometimes the problem can be too much workload.

One of the positive differences between humans and computers is that the latter can be specified to cope with higher workloads than a human (in the domains in which it excels), can sustain them indefinitely, and will not ration or reapply its attention when workload is lower.

cav551:

muckles:
[…]

I would agree that those are examples of good training and selection procedures. However, there is one inescapable truth which cannot be denied as long as man’s fundamental orifice points downwards: there is no maybe in the operation of a computer. It is a binary system - either on or off. It has to make a yes/no decision. Man can make a decision based on maybe, and choose from more than two options.

It is more than that. Computers can make complex decisions - that’s what led to the “expert systems” craze in the 1980s.

But man can decide whether to decide, or whether to desist. He can postpone decisions or explore the implications of multiple choices. He can decide what weight to give to criteria, and when. He often has an innate sense of his confidence in a decision, and proceeds more cautiously if it is not high. He may easily notice circumstantial or contextual oddities without being pre-primed for them, having only been primed for the usual case. He is free to decide that an exception to the normal rules applies, and to provide a justification. Two men may differ in their decisions, and they usually have some insight into the nature of the difference, so the one who in hindsight usually gets them right will be given more responsibility and his reasoning communicated. There are so many ways in which humans differ from computers.
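On the "binary yes/no" point: software decisions need not reduce to a bare on/off either. A decision rule can carry a confidence value and a middle "maybe" outcome, rather as the expert systems mentioned above did. A minimal sketch (the names and thresholds below are purely illustrative, not any real system's values):

```python
# Illustrative three-way decision: act, hold off, or hedge.
# The "maybe" branch is the software equivalent of proceeding cautiously.

def decide(brake_confidence, act_above=0.8, defer_below=0.4):
    """Map a confidence score to one of three actions, not just yes/no."""
    if brake_confidence >= act_above:
        return "brake"
    if brake_confidence <= defer_below:
        return "continue"
    return "slow and reassess"   # neither yes nor no: hedge

print(decide(0.9))   # -> brake
print(decide(0.2))   # -> continue
print(decide(0.6))   # -> slow and reassess
```

Whether the thresholds are set sensibly is, of course, exactly the kind of judgement that still falls to the humans who program it.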

dailymail.co.uk/news/article … r-car.html

Not pleasant viewing so be aware. From what we’re being told, it should have been well within the capabilities of the auto car to avoid the collision.

Captain Caveman 76:
Video shows moment an Uber driverless car kills a woman in Arizona | Daily Mail Online

Not pleasant viewing so be aware. From what we’re being told, it should have been well within the capabilities of the auto car to avoid the collision.

Didn’t help that the operator was looking down several times at something - the system monitor?

Anyway, by the time he looked up, the car was literally hitting her!

Doesn’t look good for him, as he’ll be the scapegoat.

Doesn’t take away from the overwhelming fact that autonomous cars are nowhere near ready.


Just to put some stats on it:
State of Arizona, 2014:
population = 7 million
total vehicle miles = 62.6 billion
pedestrian deaths = 142
So we have one death every ~440 million miles in Arizona.

Autonomous vehicles have now covered 4 million miles and killed one person, so we have one death every 4 million miles - over 100x more dangerous.

Far too early to make a statistical conclusion, as autonomous may do the next 500 million miles without incident, but this clearly is not a good start.

Just comparing Arizona to the UK: in the UK we have one pedestrian fatality every 800 million vehicle miles, which makes the UK a much safer place to be a pedestrian.

Curiously, the good people of Arizona average 9,000 vehicle miles per person, as opposed to 5,300 per person in the UK. Presumably more people drive instead of walking in the US, which suggests their driving is even more dangerous.

And clearly their “jaywalking” laws certainly don’t seem to make it very safe for pedestrians. I wonder if drivers feel they have some sort of right to mow down anyone on their roads?
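For anyone who wants to check the arithmetic in the post above, it reproduces in a few lines (all inputs are the rough figures quoted in the thread, not official statistics):

```python
# Miles-per-pedestrian-death comparison using the figures quoted above.

def miles_per_death(miles, deaths):
    """How many vehicle miles are driven per pedestrian death."""
    return miles / deaths

az_miles = 62.6e9     # Arizona vehicle miles, 2014 (quoted)
az_deaths = 142       # Arizona pedestrian deaths, 2014 (quoted)
av_miles = 4e6        # autonomous miles covered so far (quoted)
av_deaths = 1

az_rate = miles_per_death(az_miles, az_deaths)   # ~441 million miles/death
av_rate = miles_per_death(av_miles, av_deaths)   # 4 million miles/death

print(f"Arizona: one pedestrian death per {az_rate / 1e6:.0f} million miles")
print(f"Autonomous: one death per {av_rate / 1e6:.0f} million miles")
print(f"Autonomous cars look roughly {az_rate / av_rate:.0f}x worse so far")
```

The exact ratio on these numbers is ~110x, which is where the "over 100x" above comes from; but with only one autonomous fatality in the sample, the uncertainty on that ratio is enormous.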