Understanding AI’s limits helps fight dangerous myths

Shortly after Darragh Worland shared a news story with a scary headline about a potentially sentient AI chatbot, she regretted it.

Worland, who hosts the podcast “Is That a Fact?” from the News Literacy Project, has made a career out of helping people assess the information they see online. Once she researched natural language processing, the type of artificial intelligence that powers well-known models like ChatGPT, she felt less spooked. Separating fact from emotion took some extra work, she said.

“AI literacy is starting to become a whole new realm of news literacy,” Worland said, adding that her organization is creating resources to help people navigate confusing and conflicting claims about AI.

From chess engines to Google Translate, artificial intelligence has existed in some form since the mid-20th century. But these days, the technology is developing faster than most people can make sense of it, misinformation experts caution. That leaves regular people vulnerable to misleading claims about what AI tools can do and who's responsible for their impact.

With the arrival of ChatGPT, an advanced chatbot from developer OpenAI, people started interacting directly with large language models, a type of AI system most often used to power auto-reply in email, improve search results or moderate content on social media. Chatbots let people ask questions or prompt the system to write everything from poems to programs. As image-generation engines such as Dall-E also gain popularity, businesses are scrambling to add AI tools and teachers are fretting over how to detect AI-authored assignments.

The flood of new information and conjecture around AI raises a variety of risks. Companies may overstate what their AI models can do and what they can be used for. Proponents may push science-fiction storylines that draw attention away from more immediate threats. And the models themselves may regurgitate incorrect information. Basic knowledge of how the models work — as well as common myths about AI — will be necessary for navigating the era ahead.

“We have to get smarter about what this technology can and cannot do, because we live in adversarial times where information, unfortunately, is being weaponized,” said Claire Wardle, co-director of the Information Futures Lab at Brown University, which studies misinformation and its spread.

There are plenty of ways to misrepresent AI, but some red flags pop up repeatedly. Here are some common traps to avoid, according to AI and information literacy experts.

Don’t project human qualities

It’s easy to project human qualities onto nonhumans. (I bought my cat a holiday stocking so he wouldn’t feel left out.)

That tendency, called anthropomorphism, causes problems in discussions about AI, said Margaret Mitchell, a machine learning researcher and chief ethics scientist at AI company Hugging Face, and it’s been going on for a while.

In 1966, an MIT computer scientist named Joseph Weizenbaum developed a chatbot named ELIZA, which responded to users' messages by following a script or rephrasing their questions. Weizenbaum found that people ascribed emotions and intent to ELIZA even when they knew how the model worked.
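
To see how little machinery that takes, here is a minimal Python sketch in the spirit of ELIZA. The rules and wording below are illustrative, not Weizenbaum's original script: the program simply matches a few patterns and echoes the user's own words back as questions.

import re

# Illustrative rules only; the real ELIZA script was far more elaborate.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def respond(message: str) -> str:
    # No understanding, memory or intent: just pattern matching and rephrasing.
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default reply when nothing matches

print(respond("I feel anxious about my job"))  # -> Why do you feel anxious about my job?

Even a toy like this can feel responsive in conversation, which helps explain why people read emotion and intent into it.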

As more chatbots simulate friends, therapists, lovers and assistants, debates about when a brain-like computer network becomes “conscious” will distract from pressing problems, Mitchell said. Companies could dodge responsibility for problematic AI by suggesting the system went rogue. People could develop unhealthy relationships with systems that mimic humans. Organizations could allow an AI system dangerous leeway to make mistakes if they view it as just another “member of the workforce,” said Yacine Jernite, machine learning and society lead at Hugging Face.

Humanizing AI systems also stokes our fears, and scared people are more likely to believe and spread false information, said Wardle of Brown University. Thanks to science-fiction authors, our brains are brimming with worst-case scenarios, she noted. Stories such as “Blade Runner” or “The Terminator” present a future where AI systems become conscious and turn on their human creators. Since many people are more familiar with sci-fi movies than the nuances of machine-learning systems, we tend to let our imaginations fill in the blanks. By noticing anthropomorphism when it happens, Wardle said, we can guard against AI myths.

Don’t view AI as a monolith

AI isn’t one big thing — it’s a collection of different technologies developed by researchers, companies and online communities. Sweeping statements about AI tend to gloss over important questions, said Jernite. Which AI model are we talking about? Who built it? Who’s reaping the benefits and who’s paying the costs?

AI systems can do only what their creators allow, Jernite said, so it’s important to hold companies accountable for how their models function. For example, companies will have different rules, priorities and values that affect how their products operate in the real world. AI doesn’t guide missiles or create biased hiring processes. Companies do those things with the help of AI tools, Jernite and Mitchell said.

“Some companies have a stake in presenting [AI models] as these magical beings or magical systems that do things you can’t even explain,” said Jernite. “They lean into that to encourage less careful testing of this stuff.”

For people at home, that means raising an eyebrow when it’s unclear where a system’s information is coming from or how the system formulated its answer.

Meanwhile, efforts to regulate AI are underway. As of April 2022, about one-third of U.S. states had proposed or enacted at least one law to protect consumers from AI-related harm or overreach.

Don’t put too much trust in AI

If a human strings together a coherent sentence, we’re usually not impressed. But if a chatbot does it, our confidence in the bot’s capabilities may skyrocket.

That’s called automation bias, and it often leads us to put too much trust in AI systems, Mitchell said. We may do something the system suggests even if it’s wrong, or fail to do something because the system didn’t recommend it. For instance, a 1999 study found that doctors using an AI system to help diagnose patients would ignore their correct assessments in favor of the system’s wrong suggestions 6 percent of the time.

In short: Just because an AI model can do something doesn’t mean it can do it consistently and correctly.

As tempting as it is to rely on a single source, such as a search-engine bot that serves up digestible answers, these models don’t consistently cite their sources and have even made up fake studies. Use the same media literacy skills you would apply to a Wikipedia article or a Google search, said Worland of the News Literacy Project. If you query an AI search engine or chatbot, check the AI-generated answers against other reliable sources, such as newspapers, government or university websites, or academic journals.
