July 2, 2015

"The 22-year-old was part of a team that was setting up the stationary robot when it grabbed and crushed him against a metal plate."

"[I]nitial conclusions indicate that human error was to blame, rather than a problem with the robot, which can be programmed to perform various tasks in the assembly process. He said it normally operates within a confined area at the plant, grabbing auto parts and manipulating them."

29 comments:

Peter said...

But Isaac Asimov said that couldn't happen:


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Ann Althouse said...

"The original laws have been altered and elaborated on by Asimov and other authors. Asimov himself made slight modifications to the first three in various books and short stories to further develop how robots would interact with humans and each other. In later fiction where robots had taken responsibility for government of whole planets and human civilizations, Asimov also added a fourth, or zeroth law, to precede the others:

"0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Peter said...

Then again, most potentially dangerous machines one sees in a factory prominently display a brightly colored "emergency stop" button. Is there something like this on this robot and, if not, shouldn't there be?

Laslo Spatula said...

If it was a stationery robot it would have enveloped him.

I am Laslo.

sinz52 said...

Obviously, Isaac Asimov didn't design this robot.

To implement Asimov's laws, you need to first design a robot that knows the difference between a human being and a piece of machinery. This robot didn't know that what it was manipulating was a human being because it wasn't programmed that way.

So I would add a fifth law to Asimov's laws:

Law number -1 (negative one): A robot must be capable of distinguishing between a living human being and all other objects.

Meade said...

"grabbed and crushed him"

Like a gorilla.

Bob Boyd said...

"... Zuckerberg expects technology to evolve to a point that we can share whole thoughts and full “sensory and emotional” experiences, telepathically..."

I'd hate to have been mind-melded to this poor guy when the robot got him.

Etienne said...

I know what the kid's last words were: "What does this button do?"

Tank said...

VW in Germany.

Wonder if the guy was Jewish?

Hmmmmm.

Bob Boyd said...

Industrial Park Rangers were called in to put the robot down.

Chris N said...

Think of all the social work needed for wayward robots.

*recaptcha asked me to prove I was not a robot.

Bob Boyd said...

"Danger Will Robinson!"

Jim Howard said...

I had a chance to write some software that drove a smaller fixed industrial robot once. It was about the size of a small person with one really long arm.

It was supposed to slowly wave a wand over the head of a mannequin that was talking on a cellphone, to measure the RF energy around the head.

One slip in the code and it could knock the head clean off the mannequin and send it flying across the room.

As far as Asimov's three laws go, software that could be that discerning that quickly isn't even on the horizon yet. Nobody reading this blog will live to see a reliable implementation of Asimov's laws.
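For illustration only, here is a minimal sketch, in Python, of the kind of bound-checking that keeps a slow wand sweep like the one described above inside a safe envelope. The limit values, step size, and function names are all made up for this example, not taken from any real controller; the point is just how one missing check can turn a gentle pass over a mannequin's head into a fast, destructive move.

```python
# Hypothetical sketch: clamp every commanded wand position to a safe
# envelope around the mannequin's head before it is sent to the arm.
# All limits below are invented for illustration.

SAFE_X = (-0.20, 0.20)   # metres, side to side across the head
SAFE_Y = (-0.15, 0.15)   # metres, front to back
SAFE_Z = (0.05, 0.30)    # metres, always above the scalp
MAX_STEP = 0.005         # metres per command; keeps the sweep slow

def clamp(value, low, high):
    """Force a commanded coordinate back inside its safe range."""
    return max(low, min(high, value))

def next_waypoint(current, target):
    """Move at most MAX_STEP toward the target, staying inside the envelope."""
    safe = []
    for cur, tgt, (low, high) in zip(current, target, (SAFE_X, SAFE_Y, SAFE_Z)):
        step = max(-MAX_STEP, min(MAX_STEP, tgt - cur))
        safe.append(clamp(cur + step, low, high))
    return tuple(safe)

pos = (0.0, 0.0, 0.10)
bad_target = (0.0, 0.0, -1.0)          # the "one slip in the code"
print(next_waypoint(pos, bad_target))  # z moves only 5 mm and never leaves the envelope
```

Without the step limit and the clamp, that one bad target value would be handed to the arm at full speed, which is the failure mode the comment is describing.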

CWJ said...

"German news agency DPA reported that prosecutors were considering whether to bring charges, and if so, against whom."

Or should it be "against what"?

JackWayne said...

I'm waiting for the first robot designed to murder a particular human. That's real progress.

MayBee said...

Imagine how awful that would be to see.

MadisonMan said...

Did an enemy of the deceased program the robot? Murder Mystery Plot Alert!

rhhardin said...

Murder isn't the only robot crime to watch out for. There's robbery.

mccullough said...

Thankfully the man killed wasn't named John Connor.

Rusty said...

Industrial robots are just multi-axis CNC machines. There are safety protocols for working around them. One of them is to stay outside the machine's working envelope while it is under power.

Fernandinande said...

Blogger sinz52 said...
Law number -1 (negative one): A robot must be capable of distinguishing between a living human being and all other objects.

This thing was a mechanical arm, not a robot in the clickbaity Asimov sense, which is why they didn't show a picture of it.

Unknown said...

You know you are in trouble when the robot tricks you into thinking it was human error that caused the "malfunction."

I Callahan said...

Skynet...

Julie C said...

Just watched the first episode of an AMC show called "Humans" about our future robot overlords. Very creepy and well-acted, particularly by the gal who plays the main robot character.

Rusty said...

Ferget it Fernandinande. They're on a roll.

Freeman Hunt said...

I'll come closer to believing robots are going to rule us when Facebook starts serving up relevant ads and Google can tell the difference between people and gorillas.

Computers can do very simple things very, very well if you tell them exactly what to do. Otherwise, they're morons, much bigger morons than morons of the people type.

Rusty said...

That's right, Freeman. They're just machines. They can only do what they are programmed to do. And they will do that over and over and over again until they are programmed to stop. They're just dumb machines.

John henry said...

Rusty's right, they are just stupid machines. Very smart stupid machines: they will do exactly what they are told, very precisely, time after time. But like any machine, they will only do what they are told.

I work with robots and have some questions about this story. German standards may be different but in the US:

1) The robot would be caged. If the door to the cage is opened, the robot will stop dead.

2) There will be an emergency stop button that, when pushed, will stay in until manually reset. When pushed, the robot stops dead in its tracks. It does not cycle back to home position.

3) The worker would not enter the cage without disconnecting power (electric, air, hydraulic) and padlocking the switch, etc., in the off position. S/he will have the only key. If more than one worker is on the job, each will have their own padlock so the robot (or any machine, generally) cannot be energized until all workers have removed their locks.

If this was in the US, I would say that it is a failure of safety procedures. The robot sounds like it did what robots occasionally do: it acted unpredictably and erratically. That is why safety procedures exist.
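A minimal sketch of the logic behind those three rules, assuming nothing about any real controller or safety PLC: the class and method names below are invented for illustration, but they show how a cage-door interlock, a latching e-stop, and per-worker lockout padlocks each independently block power until every one of them is cleared.

```python
# Illustrative model of the interlock/lockout rules described above.
# Not any vendor's API; just the logic of the safety checks.

class RobotCell:
    def __init__(self):
        self.cage_door_open = False
        self.estop_latched = False       # stays latched until manually reset
        self.lockout_padlocks = set()    # one padlock per worker inside the cage

    def press_estop(self):
        self.estop_latched = True        # robot stops dead; no cycling back to home

    def reset_estop(self):
        self.estop_latched = False       # requires a deliberate manual reset

    def apply_lock(self, worker):
        self.lockout_padlocks.add(worker)

    def remove_lock(self, worker):
        self.lockout_padlocks.discard(worker)

    def may_energize(self):
        """Power is allowed only if every safety condition is clear."""
        return (not self.cage_door_open
                and not self.estop_latched
                and not self.lockout_padlocks)

cell = RobotCell()
cell.apply_lock("worker_1")
cell.apply_lock("worker_2")
print(cell.may_energize())   # False: two padlocks still on the disconnect
cell.remove_lock("worker_1")
print(cell.may_energize())   # False: worker_2's lock still prevents power
cell.remove_lock("worker_2")
print(cell.may_energize())   # True: cage closed, e-stop clear, no locks
```

The design point is that the conditions are combined so that any single one of them is enough to keep the machine de-energized, which is why no one worker removing a lock can expose another.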

I have seen some pretty lax safety processes in other countries. A soap mixing tank where a worker would enter for cleaning with no more protection than turning the switch off, for example. One of the reasons I have grey hair.

John Henry

John henry said...

The article mentioned that this was not a "collaborative robot." Two years ago, this term didn't exist. A year ago few, even in the industry, knew what it meant. Now it is getting into general news usage.

Good!

This is the most exciting development in robots in many years. A collaborative robot is one that is inherently safe and can work alongside humans, shoulder to shoulder, with no guarding.

I'd not thought about it before, but I have long been familiar with Asimov's laws, and these collaborative robots (sort of) approach compliance with them.

If anyone is interested in learning more, I wrote an article in Packaging Digest in May as an intro to cobots: what they are, what they do, and what they aren't.

http://www.packagingdigest.com/robotics/what-are-collaborative-robots-and-why-should-you-care1505

My email is in the article if anyone wants to talk about them more. Or just swap robot stories.

John Henry