
ChatGPT Can Corrupt a Person’s Moral Judgment, New Study Finds

ChatGPT, OpenAI’s artificial intelligence chatbot, can “corrupt” users’ moral judgments, according to new research published in the journal Scientific Reports on Thursday. 

The European researchers asked study participants about a classic moral dilemma: whether it is right to sacrifice one person’s life to save five others. Before the participants responded, they were shown ChatGPT’s answer to the dilemma, framed either as the chatbot’s answer or as that of a moral advisor.

The researchers found that the study’s participants, 767 U.S. subjects with an average age of 39, were indeed influenced by the chatbot’s answer. ChatGPT gave answers both for and against the sacrifice depending on when it was asked (it indicated to the researchers that it does not favor a particular moral stance), yet participants tended to sway toward whichever line of thought the bot presented, even when they knew the advice came from an artificial intelligence.

The researchers concluded that ChatGPT’s influence on users’ morals can be damaging. 

“[ChatGPT] does influence users’ moral judgment…even if they know they are advised by a chatting bot, and they underestimate how much they are influenced,” the study reads. “Thus, ChatGPT corrupts rather than improves its users’ moral judgment.”

OpenAI did not immediately respond to TheWrap’s request for comment. 

In other AI news, Meta announced Wednesday its new Segment Anything Model is capable of cutting out individual objects in images and other media. Meta says it is “a step towards the first foundation model for image segmentation.”
