JoeCortopassi | 1 year ago
gpt-4, gpt-4-turbo, and gpt-4o are not the same models. They are mostly close enough when you have a human in the loop and loose constraints. But if you are building systems on top of (already fragile) prompt-based output, you will have to go through a very manual process of re-tuning your prompts to get the same or similar output from the new model. It will break in weird ways that make you feel like you are trying to nail Jello to a tree.
There are software tools/services that help with this, and many more that merely promise to, but most of the tooling around LLMs these days gives the illusion of a reliable tool rather than the results of one. It's still the early days of the gold rush, and everyone wants to be seen as one of the first.
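The breakage described above usually shows up downstream, where a parser that worked against one model's output chokes on another's. A minimal sketch of a prompt regression check, with stub callables standing in for pinned model versions (the stubs, their outputs, and the `required_keys` schema are all hypothetical; in practice each callable would wrap a real API call pinned to an explicit model name):

```python
import json

def old_model(prompt: str) -> str:
    # Stub standing in for the pinned production model's output.
    return '{"sentiment": "positive", "score": 0.9}'

def new_model(prompt: str) -> str:
    # Stub standing in for a candidate model; note the schema drift:
    # chatty preamble and a dropped field, both common failure modes.
    return 'Sure! Here is the JSON: {"sentiment": "positive"}'

def parses_as_expected(raw: str, required_keys: set) -> bool:
    """True only if raw is valid JSON containing every required key."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return required_keys <= set(data)

def regression_check(prompts, model, required_keys):
    """Run each prompt through the model; return the ones that break."""
    return [p for p in prompts if not parses_as_expected(model(p), required_keys)]

prompts = ["Classify: 'great product'"]
keys = {"sentiment", "score"}

assert regression_check(prompts, old_model, keys) == []       # pinned model passes
assert regression_check(prompts, new_model, keys) == prompts  # candidate breaks
```

Running a suite like this against every model swap at least turns "weird breakage" into a failing test before it reaches production.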
tmpz22 | 1 year ago
[1]: https://medium.com/artivatic/use-of-chatgpt-4-in-health-insu... (please disregard; it was a terrible initial source pulled off Google)
[2]: https://insurtechdigital.com/articles/chatgpt-the-risks-and-...
benreesman | 1 year ago
https://youtu.be/4JF1V2hzGKE
SkyPuncher | 1 year ago
If you rely on third-party packages of any kind, you have dependencies that can rapidly and unexpectedly break with an update. Semantic versioning is supposed to guard against this, but it doesn't always.
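The failure mode is that a "compatible" range still pulls in new code automatically. A minimal sketch of caret-style range matching (hand-rolled parser for illustration only; real package managers use full resolvers with prerelease and build-metadata rules):

```python
def parse(version: str):
    """Split 'MAJOR.MINOR.PATCH' into a comparable tuple of ints."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

def satisfies_caret(version: str, base: str) -> bool:
    """True if `version` matches a caret range ^base:
    same major version, and at least as new as base."""
    v, b = parse(version), parse(base)
    return v[0] == b[0] and v >= b

# A minor release that silently changes behavior still matches the range,
# so it lands in your next build without any action on your part:
assert satisfies_caret("1.5.0", "1.4.0")      # pulled in automatically
assert not satisfies_caret("2.0.0", "1.4.0")  # only major bumps are excluded
```

This is why semver helps only as far as maintainers correctly judge what counts as a breaking change; the range syntax itself can't tell.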
djohnston | 1 year ago
Probably the best description of working with LLM agents I've read.