
vages | 29 days ago

I think that this summary is oversimplifying: the rest of the blog post elaborates on how the author and Masley have completely different interpretations of that bullet-point list. The rest of the text is not just examples; it explains the thought processes that led him to his conclusions. I found the nuanced treatment of the two opposing interpretations, not the conclusion, the most enjoyable part of the post.

(This comment could also be shortened to “that’s oversimplifying”. I think my longer version is both more convincing and enjoyable.)


sebasv_ | 29 days ago

I feel like your comment is in itself a great analogy for the "beware of using LLMs in human communication" argument. LLMs are, in the end, statistical models that regress to the mean, so by design they flatten out our communication, much like a reductionist summary does. I care about the nuance we lose when communicating through "LLM filters", but apparently others don't.

That makes for a tough discussion, unfortunately. I see a lot of value lost by having LLMs in email clients, and I don't observe the benefit; LLMs are a net time sink because I have to rewrite their output myself anyway. Proponents seem to not see any value loss, and they do observe an efficiency gain.

I am curious to see how the free market will value LLM communication. Will the lower quality and higher quantity be a net positive for job seekers sending applications or sales teams nurturing leads? The way I see it, either we end up in a world where e.g. job matching is almost completely automated, or we find a good enough AI spam filter and are effectively back to square one. I hope it will be the latter, because agents negotiating job positions is bound to create more inequality, with all jobs getting filled by the applicants who hire the most expensive agents.

Either way, so much compute and human capital will go to waste.

fragmede | 28 days ago

> Proponents seem to not see any value loss, and they do observe an efficiency gain.

You get to start by dumping your raw, unfiltered emotions into the text box and then have the AI clean them up for you.

If you're in customer support and have to deal with dumbasses all day long who are too stupid to read the fucking instructions, I imagine being able to type that out and then have the AI strip the profanity and soften the insults would be rather cathartic. Then substitute "read the manual" for something that's actually complicated to explain.