top | item 44799902

stillpointlab | 6 months ago

I'm usually the one defending AIs in the comments, but this article hits home for me. I find myself zoning out when I read long tracts written by AI. I absolutely hate filler and many LLMs just fill up space with text.

In my own usage, that means that even when I use LLMs to help with prose, I write the text myself and use the LLM to review and provide feedback. In some cases I will copy a sentence if the LLM's version is better, but generally I just ask for opinions. I explicitly request that the AI _not_ write. When it rewrites entire paragraphs of my prose I experience a deep cringe.

amanaplanacanal | 6 months ago

They seem to be good at generating a lot of boilerplate, which works for some people because our processes require a lot of boilerplate. We'd be better served by fixing our processes to not require all this useless text, but I don't see this happening.

jaredklewis | 6 months ago

Over time I’ve come to appreciate boilerplate more.

Early in my career I really appreciated very DRY code with minimal repetition. Over time, though, I've noticed that such code tends to introduce more abstractions, whereas more verbose code can often rely on fewer. I think fewer is better, because we each have a sort of "abstraction budget" to stay within; exceed it and our brains, metaphorically, stop reading from memory and start reading from disk (consulting docs, jumping to function definitions, etc.).

I feel the ideal code base would rely on a small number of powerful abstractions.

In practice I think this usually means relying mostly on the abstractions built into the language, standard library, and framework, then maybe sprinkling in a couple of app- or domain-specific abstractions or powerful libraries that bring their own.

So in my experience, reducing boilerplate can often make code more difficult to understand.
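A minimal sketch of the trade-off described above (all names here are hypothetical, invented for illustration): the DRY version is shorter per validator but forces the reader to jump to the factory's definition, while the "boilerplate" version repeats itself but reads top to bottom with no indirection.

```python
# DRY version: one abstraction (a validator factory). Compact, but a reader
# hitting validate_age must jump to make_validator to see what it does.
def make_validator(field, predicate, message):
    def validate(record):
        if not predicate(record.get(field)):
            raise ValueError(f"{field}: {message}")
    return validate

validate_age = make_validator(
    "age",
    lambda v: isinstance(v, int) and v >= 0,
    "must be a non-negative integer",
)

# Verbose version: repetitive if you have many fields, but every check is
# spelled out in place and relies only on the language itself.
def validate_age_verbose(record):
    age = record.get("age")
    if not isinstance(age, int) or age < 0:
        raise ValueError("age: must be a non-negative integer")
```

Both behave the same; the difference is only where the reader's attention has to go, which is the "abstraction budget" point above.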