I'm sure lots of "readers" of such articles fed them to another AI model to summarize, completely bypassing the usual human experience of writing and then carefully (and critically) reading and parsing the article text. I weep for the future.

abixb|4 months ago
Also, this reminds me of a cartoon from March 2023. [0]

[0] https://marketoonist.com/2023/03/ai-written-ai-read.html

array_key_first|4 months ago
Are people doing this, or is this just what, like, Apple or someone is telling us people are doing? Because I've never seen anyone actually use a summarizing AI willingly, and especially not for blogs and other discretionary activities.

That's like getting the remote from the hit blockbuster "Click" starring Adam Sandler (2006) and then using it to skip sex. It just doesn't make any sense.

trthomps|4 months ago
I'm curious whether the people who use AI to summarize articles are the same people who would have read more than the headline to begin with. It feels to me like the sort of person who would have read the article and applied critical thinking to it won't use an AI summary to bypass that, since they wouldn't be satisfied with it.

thw_9a83c|4 months ago
> If they can’t be bothered to write it, why should I be bothered to read it?

Isn't that the same with AI-generated source code? If lazy programmers didn't bother writing it, why should I bother reading it? I'll just ask the AI to understand it and make the necessary changes. Now repeat this process over and over; I wonder what the state of such code would be over time. We are clearly walking down this path.

Many of those who can't be bothered to write what they publish probably can't be bothered to read it themselves, either. Not written by humans, and certainly not for humans.

Because the author has something to say and needs help saying it?

dist-epoch|4 months ago
But you are saying that is wrong; you should judge the messenger, not the message.

bryanlarsen|4 months ago
Pre-AI, scientists would publish papers and then journalists would write summaries that were usually misleading and often wrong. An AI operating on its own would likely be no better than the journalist, but an AI supervised by the original scientist might well do a better job.

I agree. I think there is such a thing as AI overuse, but I would rather someone use AI to make their points more succinctly than write something I can't understand.