Last5Digits | 1 year ago

Why do you feel the need to arbitrarily ascribe some ideology to random people based on one comment? I'm not "techno-utopian" in any sense of the word; I believe that current AI development is highly risky and that we need to take careful measures so that society at large is prepared for the changes it may bring.

The "wide range of opinions" I see on HN are largely misinformed: they either lack the necessary technical understanding of current LLMs or are attempting to spin up some crackpot philosophical distinctions lacking in any rigor or consistency. I've never claimed that LLMs are perfect and I'd love to discuss their flaws! Believe it or not, that's why I continue to read these threads - to find genuinely informed takes contradicting my own.

Most people outside of tech have no bias for or against LLMs, which gives them a leg up in forming consistent opinions about their capabilities. They tend to inform themselves with an open mind, which lets them put things into context. Tech people start with an immediate negative bias, because any system that can write even a single line of code is an immediate intellectual threat. As a result, everything gets interpreted as negatively as possible.

For example, all of the talking points you mentioned are completely irrelevant unless interpreted with maximal negative bias:

- Text prediction is a general problem: being good at it requires understanding, reasoning, and whatever other intellectual faculty you believe to be unique to humans.

- Every single system in existence is highly dependent on the data it uses to model the world; humans are no exception.

- The enormous amount of data required by any modern LLM is dwarfed by the amount of data it took evolution and human civilization to get to this point.

- The energy requirements of modern LLMs are environmentally irrelevant compared to literally any industry in manufacturing, transportation, or entertainment. We justify immensely more environmental damage for far less utility every single day.

After the giant media carousel last year, most people know what an LLM is, and the intuitive understanding they built from that reporting is way more accurate than what I have seen here. I have asked relatives and even casual acquaintances about exactly this. And as I stated in my comment, their understanding is vastly better than HN's.

Jensson | 1 year ago

> After the giant media carousel last year, most people know what an LLM is, and the intuitive understanding they built from that reporting is way more accurate than what I have seen here.

Or your own understanding is a lot less accurate than you think and you could learn something from listening a bit more.

For example, the fact that text prediction is a problem you would need general intelligence to solve perfectly doesn't mean that training a model on text prediction will lead to general intelligence; that is a massive misunderstanding that many pro-LLM people seem to hold. These models are trained on text prediction, and that creates a large number of limitations: there is a big difference between a model trained to be general and a model trained to be a text predictor.
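To make that concrete, here is a toy sketch of what "trained on text prediction" literally means. This is just my own illustration in PyTorch; the TinyPredictor model, the sizes and the random data are made up and far smaller than anything real, but the shape of the objective is the point: the only signal the model ever gets is a cross-entropy loss on the next token.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    vocab_size, dim = 100, 32                        # made-up toy sizes

    class TinyPredictor(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.out = nn.Linear(dim, vocab_size)

        def forward(self, tokens):                   # tokens: (batch, seq)
            h = self.embed(tokens)                   # (batch, seq, dim)
            return self.out(h)                       # logits for the token at each next position

    model = TinyPredictor()
    opt = torch.optim.Adam(model.parameters())

    tokens = torch.randint(0, vocab_size, (4, 16))   # stand-in for a tokenized corpus
    logits = model(tokens[:, :-1])                   # predict from every prefix
    loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                           tokens[:, 1:].reshape(-1))  # target is simply the next token
    loss.backward()
    opt.step()

Nothing in that loss says "be generally intelligent" or "model the world correctly"; whatever reasoning the model ends up with is only whatever happened to help it predict the next token. That is exactly where the limitations people here point out come from.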

Similarly, a computer is Turing-complete, so in principle it can compute anything computable, but that doesn't mean computers can solve every problem with the programs we have today, or even that they will be able to at any point in our lifetimes.

So here you obviously got stuck on the meme argument that "text prediction requires general intelligence" without thinking further, and made exactly the error you accuse other people of. People on HN pointing out the limitations that come from being trained to predict text doesn't make them ignorant; it makes them smart. LLMs are trained to predict text, which makes them capable at a lot of things but also bad at a lot of things, and understanding that makes you a lot better at using them.

chipotle_coyote | 1 year ago

Yeah, it's true. I asked my grocery checker how an LLM works, and she rolled her eyes and said, "Come on, Chipotle, they're lexical analysis systems that operate by performing vector math operations on points in a vast multidimensional space that represent tokenized subwords; everyone can immediately intuit that based on the sixty-second cheerleading news clips they saw on CBS. What do you think I am, a Hacker News reader?" Then she threw a papaya at me.

Last5Digits | 1 year ago

Understanding comes in many forms. My uncle will never be able to model the fuel flow in his car's engine using the Navier–Stokes equations, yet he can still drive better than I can. When it comes to LLMs, an understanding of the transformer architecture is wholly unnecessary for developing a good model of their capabilities and pitfalls. HN commenters tend to lack both a technical and an abstract understanding of LLMs, while non-tech people tend to lack only the former.

hackable_sand | 1 year ago

" > you are a clerk at a local grocery store... "