
lesser23 | 9 months ago

“AI” can’t use X, so we have to dumb it down to the point that a next-token predictor can figure it out. Every day it seems like we are using spicy autocomplete as a measure of understandability, which seems entirely silly to me. My own employer has ascribed some sort of spiritual status to prompts. The difference between prompting an LLM and a seance with psychedelics is getting smaller and smaller.

The next AI winter is going to be brutal and highly profitable for actual skilled devs.


nativeit | 9 months ago

Your description of your employer struck a chord that's been resonating in me for the last several months. I'm legitimately concerned about the knowledge gap with regard to how LLMs work, and about a new generation of cults using them as quasi-deities (in both good and bad faith, as it were).

syklemil | 9 months ago

> My own employer has ascribed some sort of spiritual status to prompts.

You may want to get them away from the prompts asap. They might be headed down the route to heavy spiritual delusions: https://www.rollingstone.com/culture/culture-features/ai-spi...

It seems that just like how some people predisposed to psychosis should stay away from certain recreational drugs, some people should stay away from LLMs.

kurthr | 9 months ago

I don't know, man. I also find AI slop uncanny and strangely repulsive, but the flip side is that if there isn't enough information in your API and documentation for an LLM, there's also a clever intern who will get it exactly wrong.

The challenge with making things idiot-proof is the ingenuity of idiots. Remember, 50% of people are below the median.

john-h-k | 9 months ago

> a next token predictor can figure it out

Describing LLMs as "next token predictors" is disingenuous and wrong.

poly2it | 9 months ago

How so? Autoregressive LLMs are quite literally "next token predictors", just very sophisticated ones.
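For concreteness, here is a minimal sketch of what "autoregressive next-token prediction" means mechanically. This toy bigram model is nothing like a real transformer (no neural network, no attention, greedy decoding only), but the generation loop itself — predict a distribution over the next token, append a choice, repeat — is the same shape used by autoregressive LLMs:

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count token -> next-token transitions in a token list."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def generate(counts, start, max_tokens=5):
    """Greedy autoregressive decoding: repeatedly pick the most
    likely next token given the last token, and append it."""
    out = [start]
    for _ in range(max_tokens):
        nexts = counts.get(out[-1])
        if not nexts:  # no known continuation; stop generating
            break
        out.append(nexts.most_common(1)[0][0])
    return out

corpus = "the cat sat on the mat".split()
model = train_bigram(corpus)
print(generate(model, "the"))
# -> ['the', 'cat', 'sat', 'on', 'the', 'cat']
```

The point of the sketch is that "next token predictor" describes the interface, not the sophistication: swap the bigram table for a trillion-parameter network and the outer loop is unchanged.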