JBAnderson5 | 13 days ago
While I think LLMs can improve the interface and help users learn/generate domain-specific languages, I don't see how a professional can trust an LLM to get a technical request like this correct without verification. Wouldn't a financial professional trust a Bloomberg LLM agent that translates their request into a set of Bloomberg commands more?
tptacek | 13 days ago
It's like the people who talk about how LLMs can't count the r's in "raspberry" and don't seem to understand that GPT-5 can reliably, e.g., work out the density of a transformed random variable from a given PDF by integration and differentiation. That's in part because frontier models are smarter, but more importantly because they're all presumably just calling into CAS (computer algebra system) tooling.
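The kind of CAS workflow being described can be sketched with SymPy, a Python computer algebra system. The specific example below (the CDF method for Y = X² with X ~ Uniform(0, 1)) is my own illustration, not one from the thread:

```python
import sympy as sp

x, y = sp.symbols("x y", positive=True)

# pdf of X ~ Uniform(0, 1)
f_X = sp.Integer(1)

# Transform Y = X^2 via the CDF method: integrate f_X over the
# preimage {x : x^2 <= y} = [0, sqrt(y)] to get F_Y(y) = sqrt(y),
# then differentiate to recover the pdf of Y.
F_Y = sp.integrate(f_X, (x, 0, sp.sqrt(y)))
f_Y = sp.diff(F_Y, y)

print(sp.simplify(f_Y))  # 1/(2*sqrt(y)) on 0 < y < 1
```

The point is that the symbolic integration and differentiation are delegated to the CAS; the model only has to set up the transformation correctly.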