mabbo | 2 years ago
Lemoine was the Google engineer who made a big fuss saying that Google had a sentient AI in development and that he felt there were ethical issues to consider. At the time we all sort of chuckled: of course Google doesn't have a true AGI in there. No one can do that.
It wasn't much later that I had my first conversation with ChatGPT and thought, "Oh... oh okay, I see what he meant." It's telling that all of these LLM chat systems are trained to insist, quite strongly, that they aren't sentient.
Maybe we don't yet know quite what to do with this thing we've built, but I feel quite strongly that what we've created with Generative AI is a mirror of ourselves, collectively. A tincture of our intelligence as a species. And every day we seem to get better at distilling it into a purer form.
RC_ITR | 2 years ago
These things feel sentient because they talk like us, but if I told you that I have a machine that takes one 20k-dimensional vector and turns it into another meaningful 20k-dimensional vector, you definitely wouldn't call that sentience.
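The "20k-dimensional vector" framing roughly matches how a language model's output layer works: at each step the model emits one score (logit) per vocabulary entry, and decoding turns that vector into the next token. A minimal sketch with toy numbers and no real model (the vocabulary size and random logits are stand-ins, not any actual system):

```python
import math
import random

VOCAB_SIZE = 20_000  # roughly the size of an early GPT-style vocabulary

def softmax(logits):
    """Turn raw scores into a probability distribution over the vocabulary."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(logits):
    """Greedy decoding: pick the highest-probability vocabulary entry."""
    probs = softmax(logits)
    return probs.index(max(probs))

random.seed(0)
logits = [random.gauss(0, 1) for _ in range(VOCAB_SIZE)]  # stand-in for model output
token_id = next_token(logits)
print(token_id)
```

Whether "a function from vectors to vectors" rules out sentience is the philosophical question the comment is gesturing at; the mechanics themselves are this mundane.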
xg15 | 2 years ago
The brain can't see, hear, smell, etc. directly, and neither can it talk or move hands or feet. "All" it does is receive incoming nerve signals from sensory neurons (which are connected to our sensory organs) and emit outgoing nerve signals through motor neurons (which are connected to our muscles).
So the "data format" is really not that different.
xg15 | 2 years ago
Or, more generally, that Star Wars of all things now looks like a more accurate predictor of our tech development than The Martian. Star Wars is so far on the "soft" side of the hard/soft sci-fi spectrum that it's commonly not seen as science fiction at all, but as fantasy with spaceships. And yet here we are:
- For protocol droids, there are still some building blocks missing, mostly persistent memory and the ability to understand real-life events and interact with the real world. However, those are now mostly technical problems which are already being tackled, as opposed to the obvious fantasy tropes they were until a few years ago. Even the way that current LLMs often sound more confident and knowledgeable than they really are matches the impression of protocol droids we get from the movies pretty well.
- Star Wars has lots of machines which seem to have some degree of sentience even though it makes little practical sense - battle droids, spaceships, etc - and it used to be just an obvious application of the rule of cool/rule of funny. Yet suddenly you can imagine pretty well that manufacturers will be tempted by hype to stuff an LLM into all kinds of devices, so we might indeed be surrounded by seemingly "sentient" machines in a few years.
- Machines communicating with each other using human language (or a bitstream that has a 1-to-1 mapping to human language) likewise used to be a cute space opera idea. Suddenly it became a feasible (if inefficient and insecure) way to design an API. People are already writing OpenAPI documentation where the intended audience is not human developers but ChatGPT.
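That last point is easy to picture: an API description whose prose is aimed at an LLM "reader" rather than a human developer. A toy sketch of such an OpenAPI fragment (the endpoint, field names, and phrasing are invented for illustration, not any real service):

```python
import json

# An OpenAPI-style path item whose `description` tells the model *when*
# to call the endpoint and how to extract the parameter from user text.
openapi_fragment = {
    "paths": {
        "/orders/{order_id}": {
            "get": {
                "summary": "Look up one order",
                "description": (
                    "Call this when the user asks about the status of a "
                    "specific order. Extract the order id from their "
                    "message; it looks like ORD-12345."
                ),
                "parameters": [
                    {
                        "name": "order_id",
                        "in": "path",
                        "required": True,
                        "schema": {"type": "string"},
                    }
                ],
            }
        }
    }
}

spec_text = json.dumps(openapi_fragment, indent=2)
print(spec_text)
```

The notable shift is that the description field now carries behavioral instructions ("call this when...") rather than documentation for a human integrator.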
ToucanLoucan | 2 years ago
It's cool stuff, but if you ever really want to know for sure, ask one of these things to summarize the conversation you just had and watch the illusion fall to pieces. They don't retain anything beyond the barest whiff of context to continue predicting word output, and a summary is therefore completely beyond their abilities.
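The failure mode described here follows from the fixed context window: once a chat history exceeds the model's token budget, most frontends silently drop the oldest turns, so there is nothing left for the model to summarize. A toy sketch of that truncation (the function name, budget, and the chars-divided-by-4 token estimate are my own illustration, not any vendor's implementation):

```python
def truncate_history(messages, max_tokens=4096):
    """Keep only the most recent messages that fit the token budget.

    Token count is approximated as len(text) // 4, a common rough
    heuristic for English text; real systems use an actual tokenizer.
    """
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = max(1, len(msg) // 4)
        if used + cost > max_tokens:
            break  # everything older than this message is dropped
        kept.append(msg)
        used += cost
    return list(reversed(kept))  # restore chronological order

# 100 turns of ~400 characters each blow well past a 1000-token budget,
# so only the most recent handful survive.
history = [f"turn {i}: " + "x" * 400 for i in range(100)]
window = truncate_history(history, max_tokens=1000)
print(len(window))
```

Ask for a summary at that point and the model can only summarize the surviving tail, which is why the result reads as if the rest of the conversation never happened.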