jaqalopes|2 years ago
Something I find especially amusing is that, despite the hype here on HN, most people in the world at large have not yet used a generative AI of any kind, even if they've heard about it on the news or social media. Because these things are developing so quickly, I think the first of these "agents" are going to hit the market before most people have even tried something like ChatGPT. And so the experience of a "normal" person who's not in the loop will be ~1 year of AI news hype followed by the sudden existence of sci-fi-style actual artificial intelligences being everywhere. This will be extremely jarring but ultimately probably very cool for everyone.
poulsbohemian|2 years ago
I really struggle with this idea of agents as the next big thing, especially in AI, not because I disagree with the premise but because we've been here before. I recall vividly sitting in my college apartment back in the 1990s reading a then-current technical book all about how autonomous agents were going to change everything in our lives. In the mid-2000s, several name-brand companies ran national marketing campaigns about agents doing our bidding. Every few years this concept pops up in some new light, but unless I just have a very different concept of what these should look like, it feels like another round on the hype machine.
version_five|2 years ago
I could see an eventual gpt moment happening for RL, with a scaled up model, if someone could figure out the dataset to use. But that's not what these agents are.
dwallin|2 years ago
My running theory is that the initial mental model that most people construct around these tools is incorrect, as they apply priors from things that appear similar at the surface level, mainly search engines and chatbots.
One helpful abstraction I've found is to break down what an LLM does in two ways:
1) It can operate as a language calculator. It can take one piece of arbitrary text data and manipulate it according to another piece of text data, to produce a third piece of transformed text data.
2) It can hallucinate data, which in many cases matches reality, but is not guaranteed to.
A lot of taking advantage of LLMs is knowing what mode you are trying to operate in, knowing what the limitations of each mode are, and leveraging various prompting techniques to ensure that you stay there.
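The two modes above can be made concrete as prompt templates. This is only a sketch: `grounded_prompt` and `recall_prompt` are hypothetical helper names, and the strings they build would be sent to whatever completion API you happen to use.

```python
def grounded_prompt(text: str, instruction: str) -> str:
    """Mode 1, the "language calculator": every fact the model needs is
    supplied in `text`; the instruction only says how to transform it."""
    return (
        "Using ONLY the text between the markers, "
        f"{instruction} Do not add outside information.\n"
        f"<<<\n{text}\n>>>"
    )

def recall_prompt(question: str) -> str:
    """Mode 2, recall: the answer must come from the model's training
    data, so it may match reality -- but is not guaranteed to."""
    return f"From your own knowledge, {question}"

# Mode 1 keeps the model grounded in supplied data;
# mode 2 trusts whatever it has memorized.
p1 = grounded_prompt("Revenue rose 12% in Q3.", "summarize in one sentence.")
p2 = recall_prompt("what was the company's Q3 revenue growth?")
```

Knowing which template you are effectively writing is most of the battle: mixing them (asking for a transformation but leaving the source data out) is what invites hallucination.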
birdyrooster|2 years ago
Okay. Everyone?
jjoonathan|2 years ago
Everyone else? Oof.
coffeebeqn|2 years ago
Oh my profession was made irrelevant overnight, cool. It’ll be jarring for sure
flangola7|2 years ago
As far as societies go, some of the very first of these agents will be AI surveillance, police, and military systems, able to detect and smother any resistance in the cradle. This is not very cool for everyone.