The article from the Columbia Journalism Review examines the challenges posed by ChatGPT’s handling of publisher content, revealing significant problems with misattribution and inaccurate source citations. Although OpenAI claims to have improved its engagement with news organizations by allowing publishers to control their content’s visibility through a “robots.txt” file, a study by the Tow Center found that ChatGPT often fails to accurately identify the sources of quotes from various publications. The research highlighted instances where the chatbot provided incorrect citations or fabricated sources, raising concerns about the reliability of AI-generated information and its potential impact on the reputation of news publishers.
Editor’s Note: To be fair, OpenAI marketed ChatGPT as a tool to help individuals generate ideas and create content efficiently. It never promised to be accurate. The problem is that people who knew nothing about how Large Language Models (LLMs) work believed the tool merely integrates information and therefore trusted it to synthesize the vast amount of information found online.
We now know that there is something seriously wrong with the way ChatGPT was built. [Also read Beware: AI Is Lying]. Unfortunately, it doesn’t look like OpenAI will be able to correct the problem anytime soon.
The good thing is that this AI is only text-based. Can you imagine how massive its impact would be if it were a decision-making AI involved in warfare? [See AI and the Future of Warfare]. To think that some people already place unquestioning trust in AI, believing that it can be fair! [See Why Do People Prefer AI Over Humans for Decision Making? Also read AI Ushers New Era of Gender Apartheid, AI Bias and Its Impact on Recruitment, Companies Create AI Girlfriends for Lonely Men: What Could Go Wrong?, and How AI works is often a mystery — that’s a problem].