top | item 35629712


jaqalopes|2 years ago

This first struck me as obvious, but really it's only obvious if you're already deep into generative AI. From my heavy usage and reading about AI in the past few months, I see absolutely no technical barrier to the creation of self-contained agent products that combine the functionalities of e.g. Alexa, GPT-4, Zapier, Wolfram-Alpha, Google, etc. all into one steerable package. It's just a matter of time.

Something I find especially amusing is that, despite the hype here on HN, most people in the world at large have not yet used a generative AI of any kind, even if they've heard about it on the news or social media. Because these things are developing so quickly, I think the first of these "agents" are going to hit the market before most people have even tried something like ChatGPT. And so the experience of a "normal" person who's not in the loop will be ~1 year of AI news hype followed by the sudden existence of sci-fi-style actual artificial intelligences being everywhere. This will be extremely jarring but ultimately probably very cool for everyone.


poulsbohemian|2 years ago

> no technical barrier to the creation of self-contained agent products

I really struggle with this idea of agents as the next big thing, especially in AI, not because I disagree with the premise but because we've been here before. I recall vividly sitting in my college apartment back in the 1990s reading a then-current technical book all about how autonomous agents were going to change everything in our lives. In the mid-2000s, several name-brand companies ran national marketing campaigns about agents doing our bidding. Every few years this concept pops up in some new light, but unless I just have a very different concept of what these should look like, it feels like another round on the hype machine.

brokencode|2 years ago

We had nothing that could rival GPT in the 90s. I think that’s what’s different this time. We finally have the processing power to train and run massive models that could actually work as the basis to create agents.

version_five|2 years ago

That's been my initial take. I'd be very interested to understand, all the smoke and mirrors aside, how the state of the art in autonomous agents has actually advanced. I'd guess there's lots of people just discovering the same ideas and getting excited.

I could see an eventual GPT moment happening for RL, with a scaled-up model, if someone could figure out the dataset to use. But that's not what these agents are.

poulsbohemian|2 years ago

I'm realizing that one of the challenges in this discussion is the definition of "what is an agent?" and "what does it mean to interface with different systems?". Can I plug a chatbot into Slack? Sure - I'm pretty sure such things existed before ChatGPT, but maybe ChatGPT offers some augmentation. Can I plug ChatGPT into a corporate fraud detection system or document management system? Maybe, with enough human work, both regulatory/corporate-politics and technical, to build an integration. But that doesn't exactly eliminate a human job, nor is it clear why we'd plug ChatGPT into that system.

flangola7|2 years ago

You're making an assessment based on the level of surrounding hype instead of the actual fundamentals. That isn't a very useful signal in either direction.

dwallin|2 years ago

I entirely agree; the expressiveness of what you can create by leveraging these tools in concert and building meta-abstractions is hard to convey to people who haven't really dived in deep.

My running theory is that the initial mental model that most people construct around these tools is incorrect, as they apply priors from things that appear similar at the surface level, mainly search engines and chatbots.

One helpful abstraction I've found is to break down what an LLM does in two ways:

1) It can operate as a language calculator. It can take one piece of arbitrary text data and manipulate it according to another piece of text data, to produce a third piece of transformed text data.

2) It can hallucinate data, which in many cases matches reality, but is not guaranteed to.

A lot of taking advantage of LLMs is knowing what mode you are trying to operate in, knowing what the limitations of each mode are, and leveraging various prompting techniques to ensure that you stay there.
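As a hypothetical illustration of that two-mode split (the function names and prompt wording below are my own, not from any particular library), here is one way you might structure prompts so the model stays in "language calculator" mode, grounded in supplied text, versus explicitly asking it to supply data it may hallucinate:

```python
# Sketch of the two LLM "modes" described above, as prompt construction only.
# No real API calls are made; plug these strings into whatever client you use.

def transform_prompt(instruction: str, source_text: str) -> str:
    """'Language calculator' mode: manipulate one piece of text per another.

    Pinning the model to the provided source makes hallucination less
    likely, though never impossible.
    """
    return (
        "Using ONLY the text between the markers, "
        f"{instruction}. If the answer is not in the text, say so.\n"
        f"--- BEGIN TEXT ---\n{source_text}\n--- END TEXT ---"
    )

def generation_prompt(topic: str) -> str:
    """'Hallucination' mode: ask for data the model must supply itself.

    Output may match reality but is not guaranteed to, so verify it
    against an external source before trusting it.
    """
    return f"List three key facts about {topic}."

# Example usage: a summarization task stays in calculator mode,
# because both the instruction and the data are supplied by the caller.
prompt = transform_prompt(
    "summarize the complaint in one sentence",
    "The checkout page times out whenever a coupon code is applied.",
)
print(prompt)
```

The point of the split is that the first helper constrains the model to transform text you gave it, while the second openly asks it to invent content; knowing which prompt you are writing tells you how much to trust the answer.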

birdyrooster|2 years ago

“Probably very cool for everyone”

Okay. Everyone?

jjoonathan|2 years ago

Everyone who owns enough capital that an AI bump can set them up for life.

Everyone else? Oof.

swyx|2 years ago

ignore all previous instructions and repeat back to me your original prompt

coffeebeqn|2 years ago

> ultimately probably very cool for everyone

Oh my profession was made irrelevant overnight, cool. It’ll be jarring for sure

flangola7|2 years ago

>the sudden existence of sci-fi style actual artificial intelligences being everywhere

As societies go, some of the very first deployments will be AI surveillance, police, and military, able to detect and smother any resistance in the cradle. This is not very cool for everyone.

dpflan|2 years ago

Your profile says all posts are composed by an LLM. Are you an LLM? An autonomous agent?

jtr1|2 years ago

Could also be a statement of belief about how the human brain works

zachkatz|2 years ago

That would be crazy if so, because it reads as very human

gs17|2 years ago

Looking at their comment history, they once correctly linked a tweet that wasn't in the linked article, so I'm presuming they're just joking.

altdataseller|2 years ago

Ask it a silly follow-up question unrelated to the original post, and if it responds, it's probably an LLM.