
deepsharp | 9 months ago

1. Why do we still tolerate AI systems that stop learning the moment they’re deployed? “Today’s AI systems go through two distinct phases: training and inference… After training is complete, the AI model’s weights become static… it does not learn from new data.”

In any dynamic environment—robotics, autonomous agents, healthcare—this rigidity seems like a fundamental flaw.

2. Is fine-tuning doing more harm than good in real-world AI? “Fine-tuning a model is less resource-intensive than pretraining it from scratch, but it is still complex, time-consuming and expensive, making it impractical to do too frequently.”

Worse, it's not just a compute problem. Repeated fine-tuning doesn't just overwrite old knowledge (catastrophic forgetting); it can also erode the model's capacity to learn anything new at all, a failure mode known as loss of plasticity.
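
You can watch catastrophic forgetting happen in a few lines. Here's a minimal sketch, assuming PyTorch; the toy tasks and the numbers in the comments are mine, not from the quoted article. Train a small net on one function, fine-tune it on a conflicting one, and the first task is gone:

    # Toy illustration of catastrophic forgetting (not from the article).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    x = torch.linspace(-3, 3, 256).unsqueeze(1)
    ya = torch.sin(x)    # task A target
    yb = -torch.sin(x)   # task B target, deliberately conflicting

    model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
    loss_fn = nn.MSELoss()

    def train(y, steps=2000, lr=1e-2):
        # Plain fine-tuning: no rehearsal of anything seen before.
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

    train(ya)  # "pretraining" on task A
    print("A loss after A:", loss_fn(model(x), ya).item())  # tiny

    train(yb)  # fine-tune on task B only
    print("A loss after B:", loss_fn(model(x), ya).item())  # roughly 2: task A is gone
    print("B loss after B:", loss_fn(model(x), yb).item())  # tiny again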

3. What would it take to build AI that actually sharpens itself as it learns about you?

"As you work with a model day in and day out, the model becomes more tailored to your context, your use cases, your preferences, your environment. Imagine how much more compelling a personal AI agent would be if it reliably adapted to your particular needs and idiosyncrasies in real-time… it could create durable moats for the next generation of AI applications...This will make AI products sticky in a way that they have never been before."

Sounds great in theory. But how, exactly? No one really knows. Repeated fine-tuning isn't just impractical; each round degrades the model further and can eventually render it useless. Maybe it's time to admit we need something new: something fundamental is missing from today's AI architecture.
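
For completeness: the closest thing to a standard answer today is experience replay, i.e. keep a buffer of old examples and mix them into every update. A sketch in the same toy PyTorch setup as above (the tasks and names are illustrative, not the article's method):

    # Experience replay as a partial fix (toy illustration, my assumptions).
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Disjoint "domains": task A lives on [-3, 0], task B on [0, 3].
    xa = torch.linspace(-3, 0, 128).unsqueeze(1)
    xb = torch.linspace(0, 3, 128).unsqueeze(1)
    ya, yb = torch.sin(xa), torch.sin(3 * xb)

    model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
    loss_fn = nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)

    # Phase 1: learn task A.
    for _ in range(2000):
        opt.zero_grad()
        loss_fn(model(xa), ya).backward()
        opt.step()

    # Keep a small buffer of task-A examples (every 8th point).
    buf_x, buf_y = xa[::8], ya[::8]

    # Phase 2: fine-tune on task B, replaying the buffer in every step.
    for _ in range(2000):
        opt.zero_grad()
        (loss_fn(model(xb), yb) + loss_fn(model(buf_x), buf_y)).backward()
        opt.step()

    print("A loss with replay:", loss_fn(model(xa), ya).item())  # stays small
    print("B loss with replay:", loss_fn(model(xb), yb).item())  # small
    # Drop the replay term above and the A loss typically climbs,
    # as in the previous sketch.

Note what replay actually buys you: it works precisely because you keep the old data around and keep re-training on it, which is the part that doesn't scale to a personal agent accumulating context for years, and it does nothing for loss of plasticity. Which is rather the point.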
