July 2, 2015

Google's Photos app kept tagging black people as "gorillas."

"On June 28th, computer programmer Jacky Alciné found that the feature kept tagging pictures of him and his girlfriend as 'gorillas.' He tweeted at Google asking what kind of sample images the company had used that would allow such a terrible mistake to happen."

Google didn't explain. It apologized... and removed "gorilla" as an option for the machine to misapply.

Meanwhile, at Facebook, Mark Zuckerberg enthuses about how amazingly helpful the computers of the future are going to be:
People will... be wearing augmented-reality glasses to assist them on an everyday basis....

... Zuckerberg expects technology to evolve to a point that we can share whole thoughts and full “sensory and emotional” experiences, telepathically....

Zuckerberg was... curious about whether there is a fundamental mathematical law underlying human social relationships that governs the balance of who and what we care about. “I bet there is,” he said....

For vision, [computers] may be able to recognize everything in an image or video, from people to objects and scenes. They’ll be able to understand the “content of the images and videos,” he said....
Amazingly helpful, but helpful to whom? To the people whose needs are getting anticipated and shaped... not necessarily yours. That the automatic tagging didn't work for black people is horrible — or did you laugh? — but just a little warning, a signpost on the path to the future.

73 comments:

Rae said...

So they didn't actually fix the problem, just hid it and apologized. A metaphor for the times.

tim maguire said...

For now, taking gorillas out as an option is probably the right thing to do. Until the software gets better at photo recognition, they probably shouldn't give it insulting choices.

Next crisis: Google labels photo of gorillas "African Americans."

Emil Blatz said...

But then there will be no end of Gorillas being pissed for being tagged as " ". You'll see them blogging about it until you are just sick of it.

Meade said...

Google is a robot troll.

Bob Ellison said...

tim maguire is correct: a quick apology and disabling of that bug was right for Google.

I continue to disbelieve in the promise of the various forms of creeping AI. OCR is still not perfect; Siri is not perfect; Dragon is not perfect; handwriting recognition is still not perfect; face detection in cameras is still not perfect.

The engineers behind these extremely challenging technologies always think some percentage of accuracy, like 98% or whatever, is "good enough". For Siri, that's somewhat true. I use it pretty frequently.

But in general, AI will require 100% accuracy. That has to be the goal.

This is why we won't have self-driving cars in the near future.

rhhardin said...

Probably it has a similar Karhunen–Loève expansion.

rhhardin said...

We need an upbeat term meaning gorilla or black.

AI has its needs.

Matt Sablan said...

"This is why we won't have self-driving cars in the near future."

-- All self-driving cars have to be is a noticeable % better than human-driving cars, as opposed to something like Siri or face recognition where the point of the tech is to have 100% success rate.

Bob Ellison said...

Matthew Sablan, women have better driving records than men, but we continue to license both genders.

But once GM says "we'll drive the car; you just sit back", the liability flips to the car company (and probably the municipal authorities), and all hell breaks loose, legally.

pm317 said...

They’ll be able to understand the “content of the images and videos,” he said...

Dude, people have been trying to do this since before you were born. We are, however, at a point where YOU can put a disclaimer that some content may be mislabeled to the point of being offensive to some people.

Matt Sablan said...

Yeah. Self-driving cars are probably 15-30 years out, but I think we'll get them eventually.

rhhardin said...

if (opposable_thumb()) gorilla(); else cat();

lemondog said...

How does Google tag itself? Is there a "stupid" tag?

Matt Sablan said...

As for Google's solution: Seems right to me. Your software has a bug that's making your company look stupid?

Squash the bug, fix the bug, make it so the bug can't happen, move forward. Spending too much time focusing on the error leads to the Dukes of Hazzard being banned from TV. There is NO benefit to anything more in depth than what Google did, and loads of downsides.

Ignorance is Bliss said...

Bob Ellison said...

But in general, AI will require 100% accuracy. That has to be the goal.

This is why we won't have self-driving cars in the near future.


If AI required 100% accuracy then it would never happen. But it doesn't. It requires approximately human accuracy (although it varies by application; some will require greater than human accuracy, some less than human). It will also require a bit of humility (on the part of the developers) to be more willing to not return an answer rather than return a wrong answer.

We won't have (general purpose) self-driving cars in the near future because of the juries*. A self-driving car could be 10 times safer than a human driven car, but if the company producing the car has 100 times deeper pockets the liability will put them out of business.


*A lot of people blame lawyers, but lawyers don't award the settlements.

rhhardin said...

"Vulnerable minority" might cover the AI ambiguity without offending anybody.

n.n said...

The gorillas will not be invited to the next family reunion.

jr565 said...

Even the apps are racist.

Glenn Howes said...

AI doesn't have to be perfect, it just has to be better than a human, or comparable to a human and much faster. As it is, I would bet that a robot truck is now comparable in safety to a human driver at the end of the human's 11 hour maximum.

David said...

I have to admit that I laughed. It's usually funny when technology fails in embarrassing but not disastrous ways. My funny bone made no exception for political correctness. It has a mind of its own. I do not think that my funny bone is racist but I will have a talk with it. But I know what it will say. "The fact that I laughed does not mean that I think black people actually resemble gorillas in any way."

Tank said...

Systemic gorillaism.

Anonymous said...

Google didn't explain. It apologized... and removed "gorilla" as an option for the machine to misapply.

When gorillas were put thru Google's photo app, they were tagged as black. Google removed "black" as an option...

Matt Sablan said...

Relevant?

Jonathan Card said...

I don't think I should laugh. My specialty in college was AI and I built a system that would pick out people by picking skin color out from the background. None of my team members were dark-skinned of any sort, and it didn't occur to us to test it thoroughly. It wasn't until my advisor asked the question that we realized we'd built a racist computer.
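The failure mode Jonathan Card describes is easy to reproduce. Here is a hypothetical sketch (not his actual code) of the kind of hand-tuned RGB skin-pixel rule that shows up in student projects and older literature; the thresholds are illustrative, and they are implicitly tuned to light skin:

```python
def looks_like_skin(r, g, b):
    """Classic daylight skin-pixel heuristic: absolute RGB cutoffs
    plus a red-dominance check. Works for light skin tones; the
    hard-coded R > 95 floor quietly excludes darker skin."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15
            and r > g and r > b)

# A light skin tone passes every test...
print(looks_like_skin(220, 180, 150))  # True
# ...but a darker skin tone fails the very first cutoff (R > 95),
# which is exactly the kind of untested bias described above.
print(looks_like_skin(85, 55, 40))     # False
```

Because the cutoffs are absolute brightness thresholds rather than anything structural, a detector built on top of this silently stops seeing darker-skinned people unless someone thinks to test for it.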

Fernandinande said...

Tank said...
Systemic gorillaism.


Therefore a trivial, harmless error in some very complicated software is national news.

Matt Sablan said...

Wouldn't it be better to have an AI computer thing look for eyeballs/face structure instead of pigmentation?
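For what it's worth, that is roughly what the Viola–Jones detectors used in cameras do: they score rectangular contrast features ("Haar-like" features), such as a dark eye band above a brighter cheek band. A toy sketch (hypothetical numbers, pure Python) of why a contrast feature is less sensitive to pigmentation than a color threshold:

```python
def haar_eye_feature(gray, eye_rows, cheek_rows):
    """Mean brightness of the eye band minus the cheek band;
    a negative value means the eyes are darker, as on a face."""
    eye = [v for row in gray[eye_rows[0]:eye_rows[1]] for v in row]
    cheek = [v for row in gray[cheek_rows[0]:cheek_rows[1]] for v in row]
    return sum(eye) / len(eye) - sum(cheek) / len(cheek)

# Toy 4x4 grayscale "faces": rows 0-1 are the eye band, rows 2-3 the cheeks.
light_face = [[60, 60, 60, 60], [60, 60, 60, 60],
              [200, 200, 200, 200], [200, 200, 200, 200]]
dark_face = [[20, 20, 20, 20], [20, 20, 20, 20],
             [90, 90, 90, 90], [90, 90, 90, 90]]

# Both faces give the same sign of feature (eyes darker than cheeks),
# regardless of absolute pigmentation.
print(haar_eye_feature(light_face, (0, 2), (2, 4)))  # -140.0
print(haar_eye_feature(dark_face, (0, 2), (2, 4)))   # -70.0
```

The absolute brightness largely cancels in the subtraction, which is why structural features degrade more gracefully across skin tones than color thresholds; real detectors combine thousands of such features.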

Ann Althouse said...

@Matthew Sablan Thanks. I put up another post for that.

Big Mike said...

Zuckerberg's a coder, that's all. If you want to see what the future will look like go read David Gelernter (a Unabomber survivor) or Ray Kurzweil.

Peter said...

I'll bet there are a few test engineers at Google who are feeling some intense pressure right about now. How could they not catch such a gross flaw?

Fernandinande said...

Jonathan Card said...
My specialty in college was AI and I built a system that would pick out people by picking skin color out from the background.


Maybe the facial/feature recognition software written by blacks ID's white people as Japanese macaques. Maybe we'll never know.

sinz52 said...

Folks here are incorrect about self-driving cars.

For years, we've had autopilots on jetliners that can land the plane automatically at properly equipped airports. They are sufficiently reliable that the threat of lawsuits hasn't stopped them.

That's because there is still a human pilot there who can override or take manual control in extremely bad weather.

And as long as self-driving cars don't entirely eliminate the human driver but just give him a break (like cruise control does now), I don't think it will be long off.

What may be longer off is totally automated self-driving cars which you would trust your toddler or infant to ride in by himself without you at the wheel.

dbp said...

"Google didn't explain. It apologized... and removed "gorilla" as an option for the machine to misapply."

This is the right thing to do: Getting into a debate with a user over how your software works is a rathole. Fix the problem and move on.

Big Mike said...

@Jonathan Card, that's a common error. A bunch of years ago I was a judge at a science fair at one of the local high schools. There was a similar project by some very, very bright students but their training set was composed of their friends and used an identical background and lighting so their histograms were skewed when it came to skin color.

(Did you go to Thomas Jefferson in Annandale, VA? Were you one of those students?)

Bob Ellison said...

Toyota paid more than a billion dollars to settle "unintended acceleration" claims caused by a few people who were obviously stepping on the gas, not the brake. That class action was a steaming pile of crap.

So tell me again that we'll have self-driving cars.

damikesc said...

-- All self-driving cars have to be is a noticeable % better than human-driving cars, as opposed to something like Siri or face recognition where the point of the tech is to have 100% success rate.

The liability concerns alone, I think, will kill self-driving cars dead.

Bobby said...

Virtually everyone I talk to working in the industry believes that while the technology will be mature within the next decade, the US will be among the last countries to implement self-driving cars, and it could be a century away -- all of them cite precisely what Bob Ellison and damikesc are referencing, which is the legal liability. Note that this is not a failure of the technology -- they believe the tech will be superior to the human -- but rather of our legal system, specifically the lack of tort reform.

However, countries with more stringent limitations on suing for liability and damages- for example, Singapore or South Korea- will fully implement the technology and we'll be left wondering how these countries could leave us behind. But we already know why.

CStanley said...

It is horrible, and I did laugh....and I guess it's horrible that I laughed. In my defense though, I was thinking of the "Oh shit!!" moment that the google guys must have had when this was brought to their attention. That's what makes it funny.

Matt Sablan said...

Also: You'd think with how often this happens [face recognition software not working on people with darker complexions/skin], that Google would have tested it on a range of people to make sure it didn't do anything like this.

Original Mike said...

Am I the only one who likes to drive?

Original Mike said...

Of course I laughed. It was harmlessly amusing.

Bobby said...

Original Mike,

There's very little that I won't do to avoid driving, including passing on promotions because the public transportation and commuter options aren't there for me -- I truly hate the experience that much (and I'm a lousy driver, so you all shouldn't want me on the road!). However, my father is like you- he loves driving, thought little of driving three hours a day for work and makes up excuses in retirement for why he needs to drive somewhere. My brother-in-law is the same way -- when we went to Hawaii a few years back, he circled the island on 5 of the 7 days because he had to get in his driving "fix" ... As we were on the north shore and the waves touched our lanai, I opted for the relaxation of the surf, sand and margaritas, but we all relax differently and there's nothing wrong with that.

Original Mike said...

Maybe I like to drive because I don't have to do that much of it. I dread the day when we aren't allowed to drive anymore, though I don't think I'll live that long.

Charlie Currie said...

"... Zuckerberg expects technology to evolve to a point that we can share whole thoughts and full “sensory and emotional” experiences, telepathically..."

So, will "mind rape" be the next campus rape epidemic?

Bob Ellison said...

Bobby, maybe one way around the liability problem would be for governments to get behind the thing. Kinda like Cash for Clunkers. They could put a whole bunch of money and/or guarantees and/or liability waivers behind self-driving cars in the interests of efficiency and/or mobility for disabled people and/or health care.

I'm just spit-balling here. Obama could kick this off in a few months and put his idiots at the Transit and Energy to work! That's two new jobs right there!

Mary Beth said...

It tagged the photo next to it as "graduation" even though there were black people in it, so I don't think it's a skin color thing. I think it had more to do with the angle of her face to the camera.

Sydney said...

"But in general, AI will require 100% accuracy. That has to be the goal."

What if we are someone's AI project? Have we reached 100% accuracy?

Original Mike said...

"Zuckerberg was... curious about whether there is a fundamental mathematical law underlying human social relationships that governs the balance of who and what we care about. “I bet there is,” he said...."

Ridiculous.

Beach Brutus said...

AI is the perpetual motion machine of our time. The "I" for intelligence means inspiration and imagination. As processors get faster and faster and memory gets cheaper and cheaper, computers will be programmed to do more and more complex things, faster. But in the end they will still only do what they are programmed to do.

Bobby said...

Bob,

And that's basically what Google and others have been doing at the state and local level with, for example, Nevada and California in order to create the legal space they need to be able to experiment (without which the technology would need exponentially more time to mature). I think it would be more difficult to do at the federal level, and especially once the tech starts maturing and the trial lawyers see what its impact could be (as with healthcare "reform," my guess is that the trial lawyers can embrace driver-less cars as long as they still have free license to profit insanely from anyone's mistakes.... you know, reform for everyone, but them).

I'd be terrified to see what would happen if this administration sided with the self-driving car tech companies. It would probably set the effort back decades, if not kill it outright.

Original Mike said...

How does a "driver" tell a self-driving car where he wants to go?

Bobby said...

I should note that some governments (of countries that are less favorable to trial lawyers than the USA) will get behind the self-driving cars and that's where the concept will get demonstrated and proofed. But human drivers make way too much money for trial lawyers in the USA- they will fight with everything they have (and remember, they are the single largest special interest group donor) to keep their streams coming in.

Bobby said...

Mike,

In most of the systems, the humans just say the destination aloud to the car, it brings up the destination on the screen, the human confirms it, and then it plots the route and goes. You can also Bluetooth or NFC it directly from your phone or tablet, so if you found it on Yelp or something, it would do the work for you. Or you can type it in to the computer screen. (Technically, as far as the car is concerned, it's all the same thing since the only change is the user interface, right?)

The engineers know they need to be able to get the tech to where it can take on-command directions from the rider (i.e., "take the next left") like we currently do with taxi drivers.

Original Mike said...

We had a GPS thingie in a rental car in New Zealand which continuously told us to "turn left" on a mountain road. "Left" would have taken us over a 1000 foot cliff. Sure glad it didn't have control of the steering wheel.

Lewis Wetzel said...

"Zuckerberg was... curious about whether there is a fundamental mathematical law underlying human social relationships that governs the balance of who and what we care about. “I bet there is,” he said...."
It has to be a mathematical law for Zuckerberg because digital computers are deterministic. If people were deterministic, however, the history of the last century would have worked out very differently. And if people do make decisions deterministically, the incentive to find this "fundamental mathematical law underlying human social relationships" is incredible: if you can determine tomorrow's market price of a stock even a tiny bit more reliably than other people, you will eventually own everything in the world.
I've heard Zuckerberg and Bill Gates called coders, and I suppose that is true in some sense, but they did not make their billions by coding. Coding is a talent you can hire for $. What both Zuckerberg and Gates did to become wealthy beyond the dreams of Croesus was to correctly gauge the market and to know the ins and outs of capitalism, especially IP law.

Expat(ish) said...

On the self driving side, while liability is important, you guys have missed the interesting argument.

Driverless cars will be peer-networked to provide the highest level of traffic flow. This means that Car X will be delayed at a light so that cars A, B, C, and D can flow through. That's cool and we're used to that with lights.

But what happens when Car A is routed into a tree because that saves cars X and Y from the same disaster?

The software is gonna have to make hard calculations and go with "most benefit for most" - which sounds great unless you're the car swerving into a tree.

-XC

Original Mike said...

"The software is gonna have to make hard calculations and go with "most benefit for most" - which sounds great unless you're the car swerving into a tree."

Politicians will have a secret "save me" code that they enter into their computer.

Tank said...

Original Mike said...

We had a GPS thingie in a rental car in New Zealand which continuously told us to "turn left" on a mountain road. "Left" would have taken us over a 1000 foot cliff. Sure glad it didn't have control of the steering wheel.


I had the same in Sicily. Kept telling me to continue straight, where straight was over a cliff (the road ended abruptly). Also told me to turn left down a street which turned out to be a pedestrian mall in Taormina. Embarrassing. Also took me the wrong way on a one-way street. Really, it's amazing I never got arrested in Sicily. Must be because the locals are worse drivers than I was.

Virgil Hilts said...

One way around the legal issues Bob Ellison raises is for the law simply to make the driver responsible (in terms of liability) for whatever his self-driving car does. If we believe that cars will be statistically safer when driven in automatic mode, the insurance companies should be willing to insure them at comparable rates. The law will have to be significantly changed, however, to avoid the stupid Toyota bogus acceleration issue that Bob mentions. I am not confident that we will make the right choices. Our broken legal system and the trial lawyers' stranglehold on the Democratic Party and tort reform will continue to be a drain on America's ability to modernize and compete.

Original Mike said...

I don't think you're going to get people to agree to be held "responsible" for a fatality caused by their self-driving car. Would I be sent to prison?

Char Char Binks, Esq. said...

We're all apes, and the sooner we stop driving cars, the better.

Lewis Wetzel said...

What does "statistically safer" mean? It is unlikely that self-driving cars will have the same kinds of accidents that human-driven cars have. Maybe automatic cars will avoid intersection accidents better than humans, but have a worse time handling accidents caused by poor road conditions. Maybe they will kill pedestrians to save the driver more frequently than people do.
Anyhow, "driverless cars" will initially mean driverless commercial trucks. That's where the automation money is at.
A large part of the reason why SpaceX wants to land its RLVs on barges at sea is because of liability issues (that and the FAA). The difference between a multi-ton rocket returning to the Earth's surface and a multi-ton meteor impact is that the entry of the rocket is planned and under human control. Theoretically.

Virgil Hilts said...

Original Mike said: I don't think you're going to get people to agree to be held "responsible" for a fatality .. Would I be sent to prison?

I meant responsible (in terms of liability), thinking more of which insurance company pays. I do not think you could be criminally liable unless you did something truly reckless with the self-driving car that was akin to driving at high speed on completely bald tires or at night with two broken headlights.

pm317 said...

@Original Mike, yes, ridiculous. But his worldview is Facebook, where you can generate a connectivity graph of your FB minions and plop a number onto each arc.

Lewis Wetzel said...

At least some driving accident avoidance involves values. Suppose you are forced into a situation where you have to plunge off of a bridge or a cliff to your doom, or hit a mother pushing a baby carriage? What if you are hauling three or four kids to school at the time? Maybe you go out boozing in your autonomous car and (like some drunks) decide you want to take the wheel for yourself? What if you are inebriated and need to take the wheel to prevent an accident?
All hypotheticals, but situations like this will need to be reduced to code or purposely omitted from the software.

Original Mike said...

"...unless you did something truly reckless with the self-driving car that was akin to driving at high speed on completely bald tires or at night with two broken headlights."

What? I still have to change the tires? The car gets to do the fun stuff and I'm still stuck with the maintenance?

Freeman Hunt said...

I would never want a self-driving car if I still had to sit behind the wheel and watch the road. How would a person stay awake and alert while so disengaged? The whole point is that you could lounge around in there doing something else besides driving.

richard mcenroe said...

Guys like Zuckerberg have to realize that Samuel L. Jackson in Kingsman is not a how-to...

Laslo Spatula said...

Freeman Hunt said...
"....The whole point is that you could lounge around in there doing something else besides driving..."

Lounge around?

Lounge?

C'mon Freeman, we know better than that?

Don't we?


I am Laslo.

wildswan said...

I bet it will turn out that all sorts of accidents now being averted by alert drivers will happen with AI. For instance, I can remember avoiding a terrific crash in Seattle simply because I saw that a car ahead of me on the left had an Oregon licence plate and was in the wrong lane for the turn to Oregon. I watched, and sure enough the car suddenly came swerving across trying to get to the Oregon lane, then braked and spun sideways directly in front of me because a car was illegally in the Oregon lane. (Cars illegally in the Oregon lane were pretty common.) Knowing and acting on what often happened at that intersection, I slowed down as soon as the car started sideways and got the brakes on when I saw the other car; but knowing and acting on what should happen (the AI way) would have caused a crash. The AI would have been in the right, but being right would have led to a crash. I bet that kind of crash will increase, but crashes caused by drunken driving will decrease.

Bleach Drinkers Curing Coronavirus Together said...

Zuckerberg was... curious about whether there is a fundamental mathematical law underlying human social relationships that governs the balance of who and what we care about.

Um, no. There's not.

At some point it's just more useful to describe things in words and adjectives and verbal descriptions. Autistic psychopaths don't get that - no matter how fancy a tool they can design.

Zuck: Yeah so if you ever need info about anyone at Harvard
Zuck: Just ask
Zuck: I have over 4,000 emails, pictures, addresses, SNS
[Redacted Friend's Name]: What? How'd you manage that one?
Zuck: People just submitted it.
Zuck: I don't know why.
Zuck: They "trust me"
Zuck: Dumb fucks

Bobby said...

wildswan,

Not necessarily -- for starters, when the technology is fully mature, the "driver" wouldn't suddenly realize that it's about to miss the turn for Oregon and have to swerve across lanes (note that your whole problem was created by a human error on the part of the Oregon driver). In a self-driving car, the "driver" would have known its route well in advance. Moreover, assuming everything is networked, the cars will be continuously "talking" to one another on the road -- that is, the Oregon car would have been silently (and invisibly to humans) transmitting, "hey, I'm the 2026 Dodge Charger, I'm crossing two lanes and making the left to the Oregon Trail," and the other cars would be receiving and adjusting speeds or changing lanes as necessary to maintain the flow of traffic. This part of the technology is essentially already mature (you see it with the Internet of Things appliances, for example); they're just not applying it much in cars for the obvious reason that it wouldn't add any value yet.

Rosalyn C. said...

Out of curiosity I looked up Jacky Alciné to see what he looks like. He has a broad flat forehead which slopes back and very wide nostrils, in addition to dark pigmentation. ( I am writing as an artist who has studied and done realistic portrait sculpture and I have nothing against Jacky Alciné or gorillas.) The AI mistake probably wasn't just based on color or race. Google should come up with an app called, "What animal do you most resemble?"