mabbo | 2 years ago

The moment that generative AI became something crazy for me was when I said "holy shit, maybe Blake Lemoine was right".

Lemoine was the Google engineer who made a big fuss, saying that Google had a sentient AI in development and that he felt there were ethical issues to consider. And at the time we all sort of chuckled: of course Google doesn't have a true AGI in there. No one can do that.

And it wasn't much later I had my first conversation with ChatGPT and thought "Oh... oh okay, I see what he meant". It's telling that all of these LLM chat systems are trained to quite strongly insist they aren't sentient.

Maybe we don't yet know quite what to do with this thing we've built, but I feel quite strongly that what we've created with Generative AI is a mirror of ourselves, collectively. A tincture of our intelligence as a species. And every day we seem to get better at distilling it into a purer form.

danielmarkbruce|2 years ago

Isn't the takeaway: "holy shit, these things are advanced enough to make people like Blake Lemoine believe they are sentient?"

tshaddox|2 years ago

Or "holy shit, we don't know enough about sentience to even begin to know whether something has it, other than humans, because we've gotten used to assuming that all human minds operate similarly to our own and experience things similarly to how we do."

whimsicalism|2 years ago

Having witnessed this debate maybe 50 times now, my view is that it's purely about semantics.

__loam|2 years ago

RLHF is literally just training the AI to be convincing. That's what these systems are optimized for.
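
For concreteness, here's a minimal sketch of the reward-modeling step that RLHF optimizes (illustrative PyTorch only; the class, sizes, and fake data are invented for the sketch, not any lab's actual code):

    # Minimal sketch of RLHF's reward-modeling step (illustrative only).
    # A reward model learns to score human-preferred responses above
    # rejected ones; the chat model is then tuned to maximize that score.
    # Note what the objective measures: which output the rater liked
    # better -- i.e., which was more convincing -- not truth or sentience.
    import torch
    import torch.nn as nn

    class RewardModel(nn.Module):
        def __init__(self, hidden_dim: int = 768):
            super().__init__()
            # Stand-in for a transformer: anything mapping a response
            # embedding to a scalar preference score works for the sketch.
            self.score = nn.Linear(hidden_dim, 1)

        def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
            return self.score(response_embedding).squeeze(-1)

    def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
        # Bradley-Terry pairwise loss: push the preferred response's
        # score above the rejected one's.
        return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

    model = RewardModel()
    chosen = torch.randn(4, 768)    # embeddings of responses raters preferred
    rejected = torch.randn(4, 768)  # embeddings of responses raters rejected
    loss = preference_loss(model(chosen), model(rejected))
    loss.backward()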

teeray|2 years ago

Isn’t that the hallmark of the Turing Test?

RC_ITR|2 years ago

The counterpoint to this is always "models work with numerical vectors and we translate those to/from words"

These things feel sentient because they talk like us, but if I told you that I have a machine that takes one 20k-dimensional vector and turns it into another meaningful 20k-dimensional vector, you definitely wouldn't call that sentience.
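
A toy version of that "vector in, vector out" picture, assuming a made-up vocabulary size (a real model stacks attention and MLP layers between the two matrices, but the shapes at the boundary are the same):

    # Toy sketch of an LLM forward pass as "one 20k-dim vector in, one
    # 20k-dim vector out". Sizes and weights are arbitrary placeholders.
    import numpy as np

    VOCAB = 20_000  # the "20k-dimensional" vectors from the comment above
    HIDDEN = 64

    rng = np.random.default_rng(0)
    W_embed = rng.standard_normal((VOCAB, HIDDEN)) * 0.02    # token -> hidden
    W_unembed = rng.standard_normal((HIDDEN, VOCAB)) * 0.02  # hidden -> logits

    def step(one_hot_token: np.ndarray) -> np.ndarray:
        # A real model inserts many attention/MLP layers here; the point
        # is only that the interface is vector -> vector.
        hidden = one_hot_token @ W_embed
        logits = hidden @ W_unembed
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()  # "meaningful": a distribution over next tokens

    x = np.zeros(VOCAB)
    x[42] = 1.0           # one-hot encoding of some token id
    print(step(x).shape)  # (20000,) -- a vector in, a vector out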

dmd|2 years ago

What if I told you I have a machine that takes one 20k-dimensional vector and turns it into another meaningful 20k-dimensional vector, but the machine is made of a bunch of proteins and fats and liquids and gels? Would you be willing to call it sentient now?

xg15|2 years ago

Yes, and by that logic you wouldn't believe that you're sentient either, given that you're made out of cells.

The brain can't see, hear, smell, etc. directly, and neither can it talk or move hands or feet. "All" it does is receive incoming nerve signals from sensory neurons (which are connected to our sensory organs) and emit outgoing nerve signals through motor neurons (which are connected to our muscles).

So the "data format" is really not that different.

rozgo|2 years ago

We ask it to predict, and in doing so it sometimes creates a model of the world it can use to "think" about what comes next, all in a single forward pass.

xg15|2 years ago

I think my moment was the realisation that we're one, maybe two years away from building a real-life C-3PO - like, not a movie lookalike or merchandise, but a working protocol droid.

Or, more generally, that Star Wars of all things now looks like a more accurate predictor of our tech development than The Martian. This from the franchise that sits so far on the "soft" side of the "hard/soft sci-fi" spectrum that it's commonly not seen as science fiction at all, but as fantasy with spaceships. And yet here we are:

- For protocol droids, there are still some building blocks missing, mostly persistent memory and the ability to understand real-life events and interact with the real world. However, those are now mostly technical problems which are already being tackled, as opposed to the obvious fantasy tropes they were until a few years ago. Even the way that current LLMs often sound more confident and knowledgeable than they really are matches the impression of protocol droids we get from the movies pretty well.

- Star Wars has lots of machines which seem to have some degree of sentience even though it makes little practical sense - battle droids, space ships, etc. - and it used to be just an obvious application of the rule of cool/rule of funny. Yet suddenly you can imagine pretty well that manufacturers will be tempted by hype to stuff an LLM into all kinds of devices, so we might indeed be surrounded by seemingly "sentient" machines in a few years.

- Machines communicating with each other using human language (or a bitstream that has a 1:1 mapping to human language) likewise used to be a cute space opera idea. Suddenly it has become a feasible (if inefficient and insecure) way to design an API. People are already writing OpenAPI documentation where the intended audience is not human developers but ChatGPT (see the sketch after this list).
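
To make that last point concrete, here is a hypothetical example of API "documentation" whose intended reader is a model rather than a developer. The layout loosely mirrors common function-calling schemas, but every name and field below is invented:

    # Hypothetical API description written for an LLM audience: the
    # natural-language "description" fields double as the machine-readable
    # contract. All names and fields here are illustrative, not a real API.
    import json

    weather_tool = {
        "name": "get_weather",
        "description": (
            "Look up the current weather. Call this whenever the user asks "
            "about outdoor conditions, travel plans, or what to wear."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Oslo'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    }

    # This text is sent to the model verbatim, which is exactly the
    # inefficient-but-flexible design the comment above describes.
    print(json.dumps(weather_tool, indent=2))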

ToucanLoucan|2 years ago

They feel sentient in many cases because they're trained by people, on data those people selected, in the hope of training them to seem sentient. And the models in turn are just mechanical turks, repeating back what they've already read in slightly different ways. Ergo, they "feel" sentient because, to train them, we need to tell them which outputs are more correct, and we do that by telling them the ones that sound more sentient are more correct.

It's cool stuff but if you ever really want to know for sure, ask one of these things to summarize the conversation you just had, and watch the illusion completely fall to pieces. They don't retain anything above the barest whiff of a context to continue predicting word output, and a summary is therefore completely beyond their abilities.

LZ_Khan|2 years ago

Oh he was right for sure.

tetris11|2 years ago

I read through the transcripts and was stunned when my more CS-oriented colleagues dismissed it as stochastic parroting. It sure sounded human to me.