top | item 44115616


clutter55561 | 9 months ago

A lot of the discourse around AI, and LLMs specifically, suffers terribly from FOMO and cognitive biases such as confirmation bias and anthropomorphism. The fact that AI/LLMs are commercial concerns makes it even more difficult to distinguish reality from bullshit.

I’m not an LLM user myself, but I’m slowly incorporating (forcing myself, really) AI into my workflow. I can see how AI as a tool might add value; not very different from, say, learning to touch-type or becoming proficient in Vim.

What is clear to me is that powerful tools lower entry barriers. Think Python vs C++. How many more people can be productive in the former vs the latter? It is also true that powerful tools lend themselves to potentially shitty products. C++ that is really shitty tends to break early, if it compiles at all, whereas Python is very forgiving. Intellisense is another such technology that lowers barriers.
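A toy sketch of that forgiveness point (my own illustration, not from the comment): Python happily runs code whose bug only surfaces on one particular path, where a statically typed language like C++ would reject the equivalent program at compile time.

```python
# Illustration: Python defers type errors to runtime, and only on the
# path that actually hits them, whereas C++ would refuse to compile.

def total(items):
    # No declared types: any iterable of "number-ish" things is accepted.
    s = 0
    for x in items:
        s = s + x  # breaks only if some x is actually incompatible
    return s

print(total([1, 2, 3]))   # works fine: prints 6
# total([1, "2", 3])      # would raise TypeError, but only at runtime
```

That lowers the entry barrier (shaky code still mostly works) and, by the same token, makes it easier to ship something shaky.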

Python itself is a good example of what LLMs can become. Python went from a super powerful tool in a jack-of-all-trades-master-of-none sort of way, to a rich data DSL driven by NumPy, SciPy, Pandas, scikit-learn, Jupyter, Torch, Matplotlib and many others; then it experienced another growth spurt with the advent of Rust tooling, and it is still improving with type checkers, free threading and even more stuff written in Rust - but towards correctness, not more power.
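A minimal sketch of that "towards correctness" point (my example, not the commenter's): type hints change nothing at runtime, but a checker such as mypy can flag the bad call before the code ever runs.

```python
# Illustration: annotations are inert at runtime, but a static type
# checker (e.g. mypy) would reject the commented-out call below
# without executing anything.

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

print(mean([1.0, 2.0, 3.0]))   # prints 2.0
# mean("abc")                  # a type checker flags this statically
```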

I really do hope that we can move past the current FOMO/marketing/bullshit stage at some point, and focus on real and reproducible productivity gains.
