zauberberg's comments
zauberberg | 1 month ago | on: "Human in the loop" sounds hopeful but more likely is a grace period
A few points of broad agreement, with one nuance:
Humans are in the loop today because accountability is clear. You can coach, discipline, replace, or escalate a person. You can’t meaningfully “hold an API responsible” in the same way.
But companies don’t always solve reliability with a person reviewing everything. Over time they often shift to process-based controls: stronger testing, monitoring, audits, fallback procedures, and contractual guarantees. That’s how they already manage critical dependencies they also can’t “fire” overnight (cloud services, core software vendors, etc.).
Vendor lock-in is real—but it’s also a choice firms can mitigate. Multi-vendor options, portability clauses, and keeping an exit path in the architecture are basically the equivalent of being able to replace a bad supplier.
High fault-tolerance domains will keep humans involved longer. The likely change is not “no humans,” but fewer humans overseeing more automated work, with people focused on exceptions, risk ownership, and sign-off in the most sensitive areas.
So yes: we need humans where the downside is serious and someone has to own the risk. My claim is just that as reliability and controls improve, organisations will try to shrink the amount of human review, because that review starts to look like the expensive part of the system.
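The "exit path in the architecture" point above can be sketched in code. This is a minimal, hypothetical illustration (the `Provider` type and vendor names are made up, not any real API): if every vendor sits behind one narrow interface, failing over or replacing a supplier becomes a configuration change rather than a rewrite.

```python
# Hypothetical sketch of a multi-vendor exit path. Provider names and the
# complete() signature are illustrative assumptions, not a real vendor API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # the narrow seam any vendor must fit behind


def with_fallback(providers: List[Provider], prompt: str) -> str:
    """Try each vendor in order; swapping or dropping one is a list edit."""
    errors = []
    for p in providers:
        try:
            return p.complete(prompt)
        except Exception as exc:  # in practice: narrower exception types
            errors.append((p.name, exc))
    raise RuntimeError(f"all providers failed: {errors}")


def flaky(prompt: str) -> str:
    # Stub standing in for a vendor outage.
    raise TimeoutError("vendor_a is down")


primary = Provider("vendor_a", flaky)
backup = Provider("vendor_b", lambda s: f"ok: {s}")

print(with_fallback([primary, backup], "hello"))  # → ok: hello
```

The design choice mirrors the prose: the portability clause lives in the `Provider` seam, so the firm's ability to "replace a bad supplier" is preserved in the code itself.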
zauberberg | 3 months ago | on: The Future Belongs to the Machines. The Irrational Belongs to Us
What I was trying to explore is how the locus of uncertainty is shifting. Instead of confronting the unknown in the world, we now confront it inside the systems we build—systems we struggle to fully inspect or explain. The unknown hasn’t vanished; it has moved closer to us, and become harder to hold.
This is where football and religion feel relevant. They are places where uncertainty is not a problem to be solved, but an experience to be shared—ritual, suspense, allegiance, the goal nobody expected. They give form to the unpredictable in a way models never quite can.
So yes, AI multiplies uncertainty. The challenge is not to abolish it, but to decide how we live with it—and where we gather around it.
zauberberg | 4 months ago | on: The Future Belongs to the Machines. The Irrational Belongs to Us
zauberberg | 1 year ago | on: Automatic For The People – Seven predictions for the future of knowledge work
zauberberg | 2 years ago | on: Maximizing the Potential of LLMs: A Guide to Prompt Engineering
zauberberg | 3 years ago | on: Placing #1 in Advent of Code with GPT-3
Furthermore, it's important to remember that the development and deployment of AI is not inevitable. It's up to us as a society to decide how we want to use this technology, and to ensure that its benefits are widely distributed. By working together, we can use AI to improve people's lives and create a better future for everyone.
zauberberg | 4 years ago | on: Ask HN: What does your shutdown and boot up process looks like?
zauberberg | 4 years ago | on: Show HN: An agent-based model for social teamwork on Streamlit
zauberberg | 5 years ago | on: Automatic for the People – Seven predictions for the future of knowledge work
zauberberg | 5 years ago | on: Automatic for the People