
We read the paper that forced Timnit Gebru out of Google. Here’s what it says


The company’s star ethics researcher highlighted the risks of large language models, which are key to Google’s business.

On the evening of Wednesday, December 2, Timnit Gebru, the co-lead of Google’s ethical AI team, announced via Twitter that the company had forced her out. Gebru, a widely respected leader in AI ethics research, is known for coauthoring a groundbreaking paper that showed facial recognition to be less accurate at identifying women and people of color, which means its use can end up discriminating against them. She also cofounded the Black in AI affinity group, and champions diversity in the tech industry. The team she helped build at Google is one of the most diverse in AI, and includes many leading experts in their own right. Peers in the field envied it for producing critical work that often challenged mainstream AI practices.

A series of tweets, leaked emails, and media articles showed that Gebru’s exit was the culmination of a conflict over another paper she co-authored. Jeff Dean, the head of Google AI, told colleagues in an internal email (which he has since put online) that the paper “didn’t meet our bar for publication” and that Gebru had said she would resign unless Google met a number of conditions, which it was unwilling to meet. Gebru tweeted that she had asked to negotiate “a last date” for her employment after she got back from vacation. She was cut off from her corporate email account before her return.

Online, many other leaders in the field of AI ethics are arguing that the company pushed her out because of the inconvenient truths that she was uncovering about a core line of its research—and perhaps its bottom line. More than 1,400 Google staff and 1,900 other supporters have also signed a letter of protest.

Many details of the exact sequence of events that led up to Gebru’s departure are not yet clear; both she and Google have declined to comment beyond their posts on social media. But MIT Technology Review obtained a copy of the research paper from one of the co-authors, Emily M. Bender, a professor of computational linguistics at the University of Washington. Though Bender asked us not to publish the paper itself because the authors didn’t want such an early draft circulating online, it gives some insight into the questions Gebru and her colleagues were raising about AI that might be causing Google concern.

Titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” the paper lays out the risks of large language models—AIs trained on staggering amounts of text data. These have grown increasingly popular—and increasingly large—in the last three years. They are now extraordinarily good, under the right conditions, at producing what looks like convincing, meaningful new text—and sometimes at estimating meaning from language. But, says the introduction to the paper, “we ask whether enough thought has been put into the potential risks associated with developing them and strategies to mitigate these risks.”

The paper

The paper, which builds off the work of other researchers, presents the history of natural-language processing, an overview of four main risks of large language models, and suggestions for further research. Since the conflict with Google seems to be over the risks, we’ve focused on summarizing those here.

Environmental and financial costs…

