waldopat | 19 days ago
In product management (my domain), decisions are made under conflicting constraints: a big customer or account manager pushing hard, a CEO/board priority, tech debt, team capacity, reputational risk, and market opportunity. PMs have tried, with varying success, to make decisions more transparent through scoring matrices and OKRs, but at some point someone has to make an imperfect judgment call that isn't reducible to a single metric. It's only defensible through narrative, and that narrative includes data.
Also, progressive elaboration, iteration, and build-measure-learn loops are inherently fuzzy. Reinertsen compared this to maximizing the value of an option; maybe in modern terms a prediction market is a better metaphor. That's what we're doing in sprints: maximizing our ability to deliver value in short increments.
I do get nervous about pushing agentic systems into roadmap planning, ticket writing, or KPI-driven execution loops. Once you collapse a messy web of tradeoffs into a single success signal, you’ve already lost a lot of the context.
There’s a parallel here for development too. LLMs are strongest at greenfield generation and weakest at surgical edits and refactoring. Early-stage startups survive by iterative design and feedback. Automating that with agents hooked into web analytics may compound errors and produce adverse outcomes.
So even if you strip out “ethics” and replace it with any pair of competing objectives, the failure mode remains.
nradov | 19 days ago
https://balancedscorecard.org/
gamma-interface | 19 days ago
The uncomfortable answer is that the most valuable use cases resist single-metric optimization. The best results come from people who use AI as a thinking partner with judgment, not as an execution engine pointed at a number.
Goodhart's Law + AI agents is basically automating the failure mode at machine speed.
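To make the failure mode concrete, here's a toy sketch of Goodhart's Law in code: an agent hill-climbs a proxy metric (say, click-through rate) while the true objective (say, long-term retention) quietly degrades. The functions and numbers are invented purely for illustration, not drawn from any real system.

```python
# Toy sketch of Goodhart's Law: an optimizer chases a proxy metric
# while the true objective peaks and then collapses. All functions
# and numbers are hypothetical, chosen only to illustrate the shape
# of the failure.

def true_value(clickbait_level: float) -> float:
    # Hypothetical "real" objective (e.g. long-term retention):
    # improves a little at first, then falls as clickbait increases.
    return clickbait_level * (2.0 - clickbait_level)

def proxy_metric(clickbait_level: float) -> float:
    # Hypothetical proxy (e.g. click-through rate): monotonically
    # rewards more clickbait, so the optimizer never stops pushing.
    return clickbait_level

def greedy_optimize(steps: int, lr: float = 0.5) -> list[tuple[float, float, float]]:
    """Hill-climb the proxy; record (level, proxy, true value) per step."""
    x = 0.0
    history = []
    for _ in range(steps):
        x += lr  # the proxy's gradient is always positive, so keep going
        history.append((x, proxy_metric(x), true_value(x)))
    return history

history = greedy_optimize(steps=8)
# The proxy climbs monotonically; the true value peaks at x = 1.0 and
# then goes negative: the metric got better while the product got worse.
```

An agent wired directly to the proxy would report success at every step. The point upthread is that "machine speed" just traverses this curve faster.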
waldopat | 19 days ago