Disagree, this is very much needed. The problem with AI-generated content is that it looks superficially worthwhile and plausible, but in fact it often says things that are not correct.
> The problem with AI-generated content is that it looks superficially worthwhile and plausible, but in fact it often says things that are not correct.
This is also a problem with human-generated content.
Sort of. But if I read a five-page article by a human, and everything on the first couple of pages checks out as correct, I expect the rest of the article to be at least reasonable. If it’s written by AI, it’s entirely possible that it’ll veer into something that doesn’t even make sense. Then I’ll realize that I can’t even trust the part that seemed correct.
But that is also true of lots of content not generated by AI! Fact checking always needs to be done, AI-generated or not. So does it really matter that it was generated by AI?
What I find fascinating is that this aspect of AI-generated content actually mirrors what humans do all the time: confidently making incorrect statements, "bullshitting" with filler text that only pretends to be meaningful, making illogical statements that contradict what was said earlier, etc.