Last5Digits | 1 year ago

rurp | 1 year ago

Man, the techno-utopianism is awfully strong for some people when it comes to LLMs. There is a wide range of opinions about these models on HN, many positive and many pointing out the very real flaws the current models have. If you find this mix of takes so offensive, you might want to reconsider your own opinions a bit. These models are interesting, but they aren't magically perfect.

Most people outside of tech don't understand how these models work even at the highest level, namely that they are text predictors whose output is highly dependent on their training data. Many people don't even realize the enormous amount of energy and data required to train these models.

Seriously, try asking a random relative how an LLM works at a basic level. You're likely to get a blank stare at even the term "LLM".

Last5Digits | 1 year ago

Why do you feel the need to arbitrarily ascribe some ideology to random people based on one comment? I'm not "techno-utopian" in any sense of the word; I believe that the current AI development is highly risky and that we need to take careful measures such that society at large is prepared for the changes it may bring.

The "wide range of opinions" I see on HN are largely misinformed: they either lack the necessary technical understanding of current LLMs or are attempting to spin up some crackpot philosophical distinctions lacking in any rigor or consistency. I've never claimed that LLMs are perfect and I'd love to discuss their flaws! Believe it or not, that's why I continue to read these threads - to find genuinely informed takes contradicting my own.

Most people outside of tech tend to have no bias against or for LLMs, which gives them a leg up in finding consistent opinions about their capabilities. They tend to inform themselves with an open mind, which allows them to put things into context. Tech people have an immediate negative bias, because the implication of any system being able to write even a single line of code is an immediate intellectual threat. Therefore, things are interpreted maximally negatively.

For example, all of the talking points you mentioned are completely irrelevant unless interpreted with maximal negative bias:

- Text prediction is a general problem; being good at it requires understanding, reasoning, and any other intellectual capability you believe to be unique to humans.

- Every single system in existence is highly dependent on the data it uses to model the world; humans are no exception.

- The enormity of data required by any modern LLM is massively dwarfed by the enormity of data that was required by evolution and human civilization to get to this point.

- The energy requirements of modern LLMs are environmentally irrelevant when compared to literally any industry in manufacturing, transportation, or entertainment. We justify immensely more environmental damage for far less utility every single day.

After the giant media carousel last year, most people know what an LLM is, and the intuitive understanding they built from that reporting is far more accurate than what I have seen here. I have asked relatives and even acquaintances about just that. And as I stated in my comment, their understanding is vastly better than that of HN.

Mawr | 1 year ago

I find it extraordinarily unlikely that the average person's understanding of AI is any different from the man the article is about.

The text AI outputs resembles that of a well-spoken expert on the given subject matter speaking with full confidence and authority. There's nothing to clue the user into the unreliability of the output, so 99% of users will not think to double-check.

The chat from the article proves as much: [1]. There's no way a non-techie would doubt this well-worded, reasonable-sounding answer to a question about Meta from the official Meta chatbot while using the Meta app.

[1]: https://i.cbc.ca/1.7219639.1717103895!/fileImage/httpImage/i...

jtbayly | 1 year ago

Blind cynicism?

I’ve read people on HN arguing that AIs are currently fully conscious. This is not what I would call “blind cynicism.”