A recent report from the Pew Research Center reveals that the use of ChatGPT among U.S. teens for schoolwork has surged, doubling from 13% in 2023 to 26% in 2024. While a majority of teens (54%) find it acceptable to use the AI chatbot for research, support drops significantly for more complex tasks, with only 29% approving its use for math problems and just 18% for writing essays. The rising trend highlights a nuanced relationship between students and AI, suggesting that while some may rely on it as a homework helper, there are concerns about fostering critical thinking and independent problem-solving skills.
Editor’s Note: This is a concerning development, as ChatGPT and other Large Language Models (LLMs) have been found to generate fabricated content, raising questions about the accuracy of the assistance they provide. [Also see Generative AI Is Not The Useful Tool Its Developers Had Us Believe].
Research also indicates that reliance on AI for academic tasks may inhibit students’ ability to think critically and solve problems independently. [Read ChatGPT: Convenience At The Expense of Critical Thinking, ChatGPT: Confidently Inaccurate, Beware: AI Is Lying].
If teens are already using biased AI like ChatGPT before they learn to do proper research, how can we expect them to discover true but censored information about the world?