In December 2020, Google made headlines when it dismissed the renowned AI ethicist and researcher Timnit Gebru. Gebru had refused to withdraw a research paper that criticized large language models, including Google’s own, and their potential dangers to society and the environment. Her dismissal sparked protests from the AI community, Google employees, and shareholders, who accused the company of censorship, discrimination, and retaliation. The incident shed light on the lack of diversity, transparency, and accountability in Google’s AI research and development.
In February 2021, Google fired another AI ethicist and researcher, Margaret Mitchell, who had co-led the Ethical AI team with Gebru. Mitchell was dismissed after using an automated script to search her Google email account for evidence of discrimination against Gebru. These actions further strained the relationship between Google and its AI workers, as well as the wider AI community.
To address the criticism and pressure caused by the AI ethics scandal, Google announced changes to its AI research and review policies in April 2021. The company said it would establish a new AI research review board, an AI research integrity office, and an AI research publication portal to improve the quality, oversight, and transparency of its research. Many observers, however, remained unsatisfied and questioned the sincerity and effectiveness of the reforms.
In response to the controversy, some former and current members of Google’s ethical AI team, along with other AI experts and activists, established an initiative called Whistleblower AI. This initiative aims to expose and challenge unethical or harmful practices in the field of AI and promote better ethical standards and practices.
The Whistleblowing Against Google’s Self-Driving Car Project
Google’s self-driving car project, Waymo, has recently faced controversy and public outrage. Journalist Levi Tian’s book, "The AI Race: How America Can Win the Global Competition for the Future of Transportation," made disturbing claims about the methods behind Waymo’s success.
Tian’s book alleges that Google has been deceptive about the capabilities, safety, and readiness of its self-driving cars, and that the company manipulated data, media coverage, and regulations to gain an unfair advantage over its competitors. The book also claims that Google endangered its human safety drivers, who were hired to monitor the vehicles and intervene in emergencies or errors, by forcing them to work long hours under stressful and hazardous conditions, and that it fired or harassed those who raised concerns or reported issues.
The publication of Tian’s book has triggered criticism and condemnation of Google and Waymo. It has raised questions about the reliability of the company and its technology, prompting investigations and lawsuits from regulators and other stakeholders demanding greater transparency and accountability.
Google’s AI Innovations and Concerns
In May 2021, Google showcased its newest AI products and services at its annual developer conference, I/O. The company presented a range of AI features and applications that highlighted its power and influence in the field, including LaMDA, a conversational language model; MUM (Multitask Unified Model), a multimodal understanding system; and MusicLM, a music generation system. Google claimed that its AI technologies could enhance user experience, productivity, and creativity, and help address social and environmental challenges.
However, the new AI features and applications also raised concerns. Critics questioned the reliability, accuracy, and ethics of systems like LaMDA and MusicLM, along with their potential negative impacts on society and culture. Existing AI products, such as Google Photos and Google Maps, were accused of violating user privacy, security, and rights. Google’s dominance in the AI industry also raised concerns about monopoly power, reduced diversity and competition, insufficient regulation, and potential threats to human autonomy, dignity, and well-being.
Other AI News Stories
While Google has been a significant contributor to the AI field, other companies have also made headlines with their AI-related news:
Geoffrey Hinton, an AI pioneer, recently quit his job at Google so that he could speak freely about the potential dangers of AI technology. Hinton expressed concerns about the possibility of creating super-intelligent machines that could outsmart humans, and about the ethical issues of using AI for military purposes.
Meta (formerly known as Facebook) announced an AI sandbox for advertisers called Meta Creative Studio. The platform lets advertisers design, test, and optimize their ads across Meta apps and services using AI features such as alternative ad copy and background generation.
OpenAI released GPT-3, a large-scale language model whose largest version has 175 billion parameters and was trained on roughly 45 terabytes of text data, making it one of the most powerful language models of its time. OpenAI claims it can generate high-quality and diverse text for a wide range of domains and tasks.
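The "diverse text" claim comes down to sampled decoding: at each step a language model produces scores (logits) over its vocabulary, and the next token is drawn at random, with a temperature parameter trading off variety against predictability. A minimal sketch of that mechanism, using a toy vocabulary and made-up logits (not OpenAI's actual code):

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample a token index from raw logits via temperature-scaled softmax,
    the basic mechanism language models use to produce varied text."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Toy vocabulary and hypothetical logits for the next token.
vocab = ["the", "cat", "sat", "mat"]
logits = [2.0, 1.0, 0.5, 0.1]
rng = np.random.default_rng(0)
print(vocab[sample_token(logits, temperature=0.7, rng=rng)])
```

Lower temperatures concentrate probability on the highest-scoring token, yielding more repetitive output; higher temperatures flatten the distribution and increase diversity.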
IBM developed an AI system called ATA (Alzheimer’s disease identification from conversational audio) that screens for Alzheimer’s disease from speech. By analyzing speech patterns and linguistic features, ATA predicts cognitive status and Alzheimer’s risk, offering a cost-effective, non-invasive method for screening and monitoring.
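Screening systems of this kind typically begin by extracting simple linguistic markers from a speech transcript, such as vocabulary richness and filler-word rate, which then feed a downstream classifier. A toy illustration of such feature extraction (hypothetical feature set; not IBM's actual pipeline):

```python
import re

def linguistic_features(transcript: str) -> dict:
    """Extract toy linguistic markers of the kind speech-based screening
    studies examine (illustrative feature set only)."""
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    fillers = {"um", "uh", "er", "like"}
    n = max(len(words), 1)  # guard against empty transcripts
    return {
        "word_count": len(words),
        "type_token_ratio": len(set(words)) / n,  # vocabulary richness
        "filler_rate": sum(w in fillers for w in words) / n,
        "mean_word_length": sum(map(len, words)) / n,
    }

feats = linguistic_features("Um, I went to the, uh, the place with the, um, the things")
print(feats)
```

In a real system these hand-crafted features would be combined with acoustic measures (pauses, pitch, speech rate) and fed to a trained statistical model rather than inspected directly.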
Amazon introduced StyleSnap, an AI service that helps customers find and purchase clothes matching their style and preferences. StyleSnap uses computer vision and generative AI to analyze images and suggest similar or matching items available on Amazon.
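Visual search services of this kind generally work by mapping each product image to an embedding vector and returning the catalog items whose embeddings lie closest to the query's. A minimal sketch using random vectors in place of real image embeddings (illustrative only; not Amazon's implementation):

```python
import numpy as np

def cosine_similarity(query: np.ndarray, items: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and each row of `items`."""
    q = query / np.linalg.norm(query)
    m = items / np.linalg.norm(items, axis=1, keepdims=True)
    return m @ q

def top_k_matches(query_emb: np.ndarray, catalog_embs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k catalog items most similar to the query."""
    sims = cosine_similarity(query_emb, catalog_embs)
    return np.argsort(sims)[::-1][:k]

# Toy data: random 4-d vectors standing in for real image embeddings.
rng = np.random.default_rng(0)
catalog = rng.normal(size=(100, 4))
query = catalog[42] + rng.normal(scale=0.01, size=4)  # near-duplicate of item 42
print(top_k_matches(query, catalog))  # item 42 should rank first
```

At catalog scale, exact search is usually replaced by an approximate nearest-neighbor index, but the underlying similarity computation is the same idea.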
These AI-related news stories demonstrate the advancements, challenges, and concerns surrounding AI technology, as well as the need for oversight, accountability, and ethical standards in its development and deployment.