
pbw | 6 months ago

Yes, GPT-5's response above was not mere shortening, because there was nothing in the OP about Plato's Cave. I agree that the Plato's Cave analogy was confusing here. Here's a better one from GPT-5, which is deeply ironic:

A New Yorker book review often does the opposite of mere shortening. The reviewer:

* Places the book in a broader cultural, historical, or intellectual context.

* Brings in other works—sometimes reviewing two or three books together.

* Builds a thesis that connects them, so the review becomes a commentary on a whole idea-space, not just the book’s pages.

This is exactly the kind of externalized, integrative thinking Jenson says LLMs lack. The New Yorker style uses the book as a jumping-off point for an argument; an LLM “shortening” is more like reading only the blurbs and rephrasing them. In Jenson’s framing, a human summary—like a rich, multi-book New Yorker review—operates on multiple layers: it compresses, but also expands meaning by bringing in outside information and weaving a narrative. The LLM’s output is more like a stripped-down plot synopsis—it can sound polished, but it isn’t about anything beyond what’s already in the text.


pbw | 6 months ago

Essentially, Jenson's complaint is: "When I ask an LLM to 'summarize,' it interprets that differently from how I think of the word 'summarize,' and I shouldn't have to give it more than a one-word prompt because it should infer what I'm asking for."

qingcharles | 6 months ago

I think exactly this. When someone is given the task of writing a book review for the New Yorker, there is a (probably unstated) agreement that they won't simply summarize the contents, but will weave them into an essay in the way the LLM proposed. You could definitely get a similar result from an LLM by giving a more suitable and verbose prompt, such as "review these 3 titles together, and discuss their shared themes and concepts in a way that is relevant to a contemporary audience," etc.

gowld | 6 months ago

Specifically, Jenson's complaint is that the LLM understands what the word "summarize" means, instead of misunderstanding it in the same way Jenson does.

pitpatagain | 6 months ago

Ah ok, you meant the second thing.

I don't think the Plato's Cave analogy is confusing; I think it's completely wrong. It's "not in the article" in the sense that it is literally not what the article is conceptually about, and it's not really what Plato's Cave is about either — it takes superficial bits of the allegory and slots things into them, making it doubly wrong.

pbw | 6 months ago

And you think the comparison to book reviews is equally bad? Both are from GPT-5.