The New York Times has issued a stern warning to freelance writers after several AI-related editorial errors raised concerns about the newsroom's accuracy and reliability. Reports indicate that one piece included a fabricated quote produced by AI-generated summarization, while another freelancer reportedly used AI in a book review that later showed signs of plagiarism. The publication's revised guidelines name tools such as ChatGPT, Gemini, Claude, Perplexity, DALL-E, and Midjourney. The move underscores the growing pressure on media organizations worldwide to balance AI efficiency against journalistic standards and public trust.
Transcript
00:00What happens when AI writes the news and nobody checks it?
00:03The New York Times is now warning freelancers to stay away from AI tools.
00:08This came after several embarrassing mistakes.
00:11One article reportedly included a quote that was not real.
00:14AI had created a summary, and it was published as if someone actually said it.
00:19In another case, a freelancer admitted using AI for a book review.
00:24That review was later found to contain plagiarism.
00:27Now, the Times says freelancers cannot use AI to draft, edit, improve, or rephrase their work.
00:34Tools like ChatGPT, Gemini, Claude, Perplexity, DALL-E, and Midjourney were all named in the warning.
00:42The message is clear.
00:44AI can help with ideas, but it cannot replace accuracy, honesty, and human checking.
00:50Because in journalism, one fake quote can damage trust.
00:53And once trust is gone, it is very hard to win back.