wppick|4 months ago
One issue is the accuracy of these AI models: you can't -really- trust them to complete a task, which makes it hard to fully automate things with them. The other is cost. Anyone using these models to do something at scale is paying maybe 100X over what it would cost in compute to run deterministic code that does the same thing. So where you can write deterministic code, or build a UI for a user to do it themselves, that still seems to be the best way. Once AI gets to the point where you can fully trust some model, we've probably already hit AGI, and at that point we're probably all in pods with a cable in our brainstems, so who cares...
frankc|4 months ago
I think it's a good example of the kind of internal tools the article is talking about. I would not have spent the time to build this without Claude making it much faster to build stand-alone projects, and I would not have the agent to do the English -> policy output without LLMs.
enraged_camel|4 months ago
Nailed it. And the thing is, you can (and should) still have deterministic guard rails around AI! Things like normalization, data mapping, validations etc. protect against hallucinations and help ensure AI’s output follows your business rules.
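A minimal sketch of what such a guard rail might look like, assuming the model returns a structured dict; the field names, allowed values, and limits here are invented for illustration, not anything from the thread:

```python
# Hypothetical guard rails around an LLM's structured output.
# ALLOWED_CATEGORIES, "refund_amount", and the 0-500 limit are
# made-up business rules for illustration only.

ALLOWED_CATEGORIES = {"billing", "shipping", "returns"}

def validate_llm_output(raw: dict) -> dict:
    """Normalize and validate model output before it touches anything real."""
    # Normalization: trim whitespace, lowercase, coerce to str.
    category = str(raw.get("category", "")).strip().lower()
    # Validation: reject anything outside the known set (hallucination guard).
    if category not in ALLOWED_CATEGORIES:
        raise ValueError(f"unexpected category: {category!r}")
    amount = raw.get("refund_amount", 0)
    # Business rule: refunds must be a number in a sane range.
    if not isinstance(amount, (int, float)) or not (0 <= amount <= 500):
        raise ValueError(f"refund amount outside business rules: {amount!r}")
    return {"category": category, "refund_amount": float(amount)}
```

The deterministic layer is cheap to run and easy to test, so the model's unreliability is contained to the one step it actually performs.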
asdff|4 months ago
In my mind you are trading a function that always evaluates the same for a given x for one that might not evaluate the same, and therefore requires oversight.
teleforce|4 months ago
This is the best use case for AI. It's not very different from a level 3 autonomous car with a driver in the loop, as opposed to a fully autonomous level 5 vehicle, which probably requires AGI-level AI.
The same applies to medicine, where a limited number of specialists (radiologists/cardiologists/oncologists/etc.) in the loop are assisted by AI on tasks that would take experts too long to review manually, especially non-obvious early symptom detection (X-ray/ECG/MRI) in modern evidence-based practice.
bsder|4 months ago
That's fine if the person wouldn't be able to write the code otherwise.
There are lots and lots of people in positions that are "programming adjacent". They use computers as their primary tool and are good at something (like CAD), but can't necessarily sling code. So a task like "We're about to release these drawings to an external client. Please write a script to check that all the drawings have an author, project, and contract number matching what they should be for this client, and flag any that don't" is good AI bait. So is "Please shovel this data from X, Y, and Z into an Excel spreadsheet".
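The drawing-check task above might come out looking something like this; a rough sketch assuming the metadata has already been extracted into dicts, with EXPECTED and all field names invented for illustration:

```python
# Hypothetical sketch of the drawing-metadata check described above.
# Assumes each drawing's title-block fields are already available as a
# dict; EXPECTED and the field/drawing names are made up.

EXPECTED = {"author": "J. Smith", "project": "Plant-7", "contract": "C-1042"}

def flag_mismatches(drawings: list[dict]) -> list[str]:
    """Return names of drawings whose metadata doesn't match EXPECTED."""
    flagged = []
    for d in drawings:
        # Any missing or mismatched field flags the drawing.
        if any(d.get(field) != value for field, value in EXPECTED.items()):
            flagged.append(d.get("name", "<unnamed>"))
    return flagged
```

For the "programming adjacent" reader, the point stands: a dozen lines like this are far easier to read and sanity-check than to write from a blank file.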
Programmers underestimate how difficult it is to synthesize code from thin air. It is much easier to read a small script than to construct it.