AI's 'unsettling' rollout is exposing its flaws. How concerned should we be?

The ChatGPT website displayed on a tablet in Madrid, Spain. (Image credit: Europa Press News / Contributor)

The CEO of Google and Alphabet is warning that society needs to move quickly to adapt to the rapid expansion of artificial intelligence (AI). 

"This is going to impact every product across every company," Sundar Pichai said April 16 in an interview with "60 Minutes (opens in new tab)." Last month, Google released its chatbot, Bard — a competitor of ChatGPT, the widely known chatbot produced by OpenAI — despite scathing reviews in internal testing, according to The Byte (opens in new tab)

Programs like ChatGPT and Bard can produce confident-sounding text in response to user queries, and they're already finding a foothold in some tasks, such as coding, said Ernest Davis, a computer scientist at New York University. However, they often flub basic facts and "hallucinate," meaning they make up information. In one recent example, ChatGPT invented a sexual harassment scandal and named a real law professor as the perpetrator, complete with citations of nonexistent newspaper articles about the case.

The power of these programs, combined with their imperfections, has experts concerned about the rapid rollout of AI. While a "Terminator"-style Skynet scenario is a long way off, AI programs have the capacity to amplify human bias, make it harder to discern true information from false, and disrupt employment, experts told Live Science.

Related: DeepMind AI has discovered the structure of nearly every protein known to science 

Benefit or bias?

During the "60 Minutes" discussion, interviewer Scott Pelley called the Bard chatbot's capabilities "unsettling" and said "Bard appears to be thinking." 

However, large language models such as Bard are not sentient, said Sara Goudarzi, associate editor of disruptive technologies for the Bulletin of the Atomic Scientists. "I think that really needs to be clear," Goudarzi said.

These AI chatbots produce human-sounding writing by making statistical inferences about what words are likely to come next in a sentence, after being trained on huge amounts of preexisting text. This method means that while AI may sound confident about whatever it's saying, it doesn't really understand it, said Damien Williams, an assistant professor in the School of Data Science at the University of North Carolina who studies technology and society.
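As a rough illustration of that next-word guessing, here is a minimal, purely hypothetical sketch in Python: a tiny bigram model that counts which word follows which in a toy corpus, then generates text by sampling a statistically likely next word. Real systems like Bard and ChatGPT use vastly larger neural networks, but the underlying idea of predicting a plausible next word is the same.

```python
import random
from collections import Counter, defaultdict

# Toy illustration (not how Bard or ChatGPT are actually built): learn which
# word tends to follow which by counting pairs in a tiny made-up corpus.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count how often `nxt` follows `prev`

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    words, counts = zip(*follows[prev].items())
    return random.choices(words, weights=counts)[0]

# Generate a short continuation: each step is just a statistical guess,
# with no notion of whether the resulting sentence is true.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Because the sketch only tracks word statistics, its output can read fluently while having no grounding in fact, the same gap that produces the "hallucinations" described above.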

These AI chatbots are "not trying to give you right answers; they're trying to give you an answer you like," Williams told Live Science. He gave an example of a recent AI panel he attended: The introductory speaker asked ChatGPT to produce a bio for Shannon Vallor (opens in new tab), an AI ethicist at The University of Edinburgh in the U.K. The program tried to give Vallor a more prestigious educational background than she actually had, because it simply wasn't statistically likely that someone of her stature in the field went to community college and a public university. 

It's easy for AI not only to copy but also to amplify any human biases that exist in its training data. For example, in 2018, Amazon dropped an AI résumé-sorting tool that showed persistent bias against women. The AI ranked résumés with female-sounding names as less qualified than those with male-sounding names, Williams said.

"That's because the data it had been trained on was the résumé sorting of human beings," Williams said. 

AI programs like ChatGPT are programmed to try to avoid racist, sexist or otherwise undesirable responses. But the truth is that there is no such thing as an "objective" AI, Williams said. AI will always include human values and human biases, because it's built by humans. 

"One way or another, it's going to have some kind of perspective that undergirds how it gets built," Williams said. "The question is, do we want to let that happen accidentally as we have been doing … or do we want to be intentional about it?" 

Building AI safeguards 

Pichai warned that AI could increase the scale of disinformation. Already, AI-generated videos dubbed "deepfakes" are becoming more convincing and harder to discern from reality. Want to animate the "Mona Lisa" or bring Marie Curie back to life? Deepfake tech can already do a convincing job. 

Pichai said societies need to develop regulation and pen treaties to ensure that AI is used responsibly. 

"It's not for a company to decide," Pichai told "60 Minutes." "This is why I think the development of this needs to include not just engineers but social scientists, ethicists, philosophers and so on."

So far, regulations around AI largely fall under laws designed to cover older technologies, Williams said. But there have been attempts at a more comprehensive regulatory structure. In 2022, the White House Office of Science and Technology Policy (OSTP) published the "AI Bill of Rights," a blueprint meant to promote ethical, human-centered AI development. The document covers issues of equity and possible harm, Williams said, but it leaves out some concerning problems, such as the development and deployment of AI by law enforcement and the military.

Increasingly, Williams said, political nominees for federal agencies and departments are being drawn from people who have a sense of the costs and benefits of AI. Alvaro Bedoya, the current commissioner of the Federal Trade Commission, was the founding director of the Georgetown Law Center for Privacy and Technology and has expertise in technology and ethics, Williams said, while Alondra Nelson, former interim director of the OSTP, has had a long career studying science, technology and inequalities. But there is a long way to go to build technological literacy among politicians and policymakers, Williams said. 

"We are still in the space of letting various large corporations direct the development and distribution of what could be very powerful technologies, but technologies which are opaque and which are being embedded in our day-to-day lives in ways over which we have no control," he said.

Stephanie Pappas
Live Science Contributor

Stephanie Pappas is a contributing writer for Live Science, covering topics ranging from geoscience to archaeology to the human brain and behavior. She was previously a senior writer for Live Science but is now a freelancer based in Denver, Colorado, and regularly contributes to Scientific American and The Monitor, the monthly magazine of the American Psychological Association. Stephanie received a bachelor's degree in psychology from the University of South Carolina and a graduate certificate in science communication from the University of California, Santa Cruz.