View source for Amazon Can’t Fix Facial Recognition
From Critiques Of Libertarianism
<!-- you can have any number of categories here -->
[[Category:Cathy O'Neil]] [[Category:Algorithmic Prison]]
<!-- 1 URL must be followed by >= 0 Other URL and Old URL and 1 End URL.-->
{{URL | url = https://www.bloomberg.com/opinion/articles/2019-01-24/amazon-can-t-fix-facial-recognition}}
<!-- {{Other URL | url = }} -->
<!-- {{Old URL | url = }} -->
{{End URL}}
{{DES | des = "The whole ecosystem of artificial intelligence is optimized for a lack of accountability. Neither the builders nor the users need think too much about the potential consequences of its application, or of mistakes in the code. This is particularly troubling in the realm of facial recognition, which can easily cross the line between useful and creepy." | show=}}
<!-- insert wiki page text here -->
<!-- DPL has problems with categories that have a single quote in them. Use these explicit workarounds. -->
<!-- otherwise, we would use {{Links}} and {{Quotes}} -->
{{List|title=Amazon Can’t Fix Facial Recognition|links=true}}
{{Quotations|title=Amazon Can’t Fix Facial Recognition|quotes=true}}
{{Text |
A group of Amazon.com shareholders has added a new twist to the concept of corporate social responsibility, asking the company to stop selling its facial recognition service for purposes that might violate people’s civil rights. In doing so, they have raised an important question: Could this be the way to curb the creepy use of new algorithms? By appealing to the enlightened self-interest of their makers? Sadly, I think not.

Relying on companies is a flawed approach, because they typically don’t know — and don’t want to know — how the technology really works. Like most algorithms being deployed these days, facial recognition is largely a black box. Based on vast databases of faces and its own experience of the most relevant features, a computer identifies a person as, say, your aunt Freda, a suspected criminal, or a target for a drone strike. Users rarely know exactly how it does this — licensing agreements often stipulate that they don’t have access to the source code. Vendors also prefer to remain in the dark. They’re focused on profits, and cluelessness insulates them from responsibility for anything unethical, illegal, or otherwise bad.

In other words, the whole ecosystem of artificial intelligence is optimized for a lack of accountability. Neither the builders nor the users need think too much about the potential consequences of its application, or of mistakes in the code. This is particularly troubling in the realm of facial recognition, which can easily cross the line between useful and creepy. Airlines can use it to identify frequent flyers or members of terrorist watch lists, retailers for favored customers or known shoplifters, casinos to help gambling addicts or to nab card counters, schools to save time on taking attendance or to monitor students’ whereabouts. It plays an integral role in China’s social credit system.

The creepiness is highly context-dependent. I might like getting offered an upgrade at the airline counter. I wouldn’t enjoy being identified as a shoplifter — particularly if I’d done my time, transgressed as a child, or been mistaken for my twin sister. The consequences can be particularly dire for certain groups of people: One recent MIT study of publicly available facial recognition systems found the error rate for dark-skinned women to be many times higher than for white men.

Even if accuracy improves, issues will remain. Black women tend to live closer to urban centers with a lot of cameras, so they’re more likely to be tagged. Blacks are also more likely to have been arrested, and thereby have their mugshot in the police database. In other words, even if the technology can be made “fair” across groups, that doesn’t guarantee it will be applied fairly. This is a tricky business, and far more responsibility than a company such as Amazon is equipped to take on.

My guess is that if shareholders apply enough pressure, the company will sooner exit the market than police its clients’ use of the software. That’s no solution, because other companies — probably with smaller public profiles — will take its place.

What to do? Most likely, the government will have to step in with targeted, context-specific regulation. An initiative called the Safe Face Pledge, started by MIT researcher Joy Buolamwini, has begun to sketch out what that might look like. For example, it calls for banning drone strikes based on facial recognition. Similarly, any algorithms that play a role in high-stakes decisions — such as criminal convictions — should be held to a very high standard. We’ll probably have to go through some iterations to get it right, but we have to start somewhere. Ignorance is certainly no solution.
}}