corry|4 months ago
I'm sure OpenAI and Anthropic look at the data, and I'm sure it says that for new / unsophisticated users who don't know how to prompt, this is a handy crutch (even if it's bad here and there) to make sure they get SOMETHING usable.
But for the HN crowd in particular, I think most of us feel that making the black box even more black -- i.e. even more inscrutable in terms of how it operates and what inputs it's using -- isn't something to celebrate or want.
brookst|4 months ago
For instance, I can ask "what windshield wipers should I buy" and Claude (and ChatGPT and others) will remember where I live, what winter's like, the make, model, and year of my car, and give me a part number.
Sure, there's more control in re-typing those details every single time. But there is also value in not having to.
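(Presumably the mechanism is something like the sketch below: remembered facts get prepended to each request before the model sees it. Every name, the storage format, and the injection point here are my assumptions, not Anthropic's or OpenAI's actual implementation.)

    # Hypothetical sketch of a chat "memory" layer: saved user facts are
    # injected into the request so the user doesn't have to retype them.
    user_memory = {
        "location": "Minneapolis, MN (cold, snowy winters)",  # assumed example
        "car": "2019 Subaru Outback",                          # assumed example
    }

    def build_messages(question: str) -> list[dict]:
        # Flatten remembered facts into a system preamble the model can use.
        facts = "\n".join(f"- {k}: {v}" for k, v in user_memory.items())
        return [
            {"role": "system", "content": f"Known user context:\n{facts}"},
            {"role": "user", "content": question},
        ]

    print(build_messages("What windshield wipers should I buy?"))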
hereonout2|4 months ago
Not only does the model (ChatGPT) know about my job, tech interests, etc. and tie chats together using that info.
But I've also noticed the "tone" of the conversation seems to mimic my own style somewhat - in a slightly OTT way. For example, ChatGPT will now often call me "mate" or reply with terms like "Yes mate!".
This is not far off how my own close friends might talk to me; it definitely feels like it's adapted to my conversational style.
tom_m|4 months ago
LLMs are, very simply, text in and text out. Unless the providers begin to expand into other areas, there's only so much they can do other than focus on training better models.
In fact, if they begin to slow down or stop training new models and put their focus elsewhere, it could be a sign that their models are plateauing. They will reach that point someday, after all.
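To make "text in, text out" concrete: if you model the LLM as a string-to-string function, features like memory and personalization are just wrappers that edit the text on the way in or out. A toy illustration (my own sketch, not any vendor's code):

    from typing import Callable

    # The core model is just a string-to-string function: text in, text out.
    LLM = Callable[[str], str]

    def with_memory(model: LLM, memory: list[str]) -> LLM:
        # Memory is a wrapper: prepend stored context to the prompt,
        # record the exchange, and pass the model's reply through unchanged.
        def wrapped(prompt: str) -> str:
            context = "\n".join(memory)
            reply = model(f"{context}\n\n{prompt}")
            memory.append(f"User said: {prompt}")
            return reply
        return wrapped

    # Usage with a stand-in "model" that just echoes what it was given:
    chat = with_memory(lambda text: f"[model saw]: {text}", memory=[])
    print(chat("What wipers fit my car?"))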
awesome_dude|4 months ago
BUT I do like that Claude builds on previous discussions; more than once the built-up context has allowed Claude to improve its responses (e.g. [actual response] "Because you have previously expressed a preference for SOLID and Hexagonal programming I would suggest that you do X", which was exactly what I wanted).
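(The write side of that is presumably something like the sketch below: the system decides a stated preference is worth keeping and persists it for injection into later chats. The file name, trigger heuristic, and storage format are all my inventions.)

    import json
    from pathlib import Path

    PREFS_FILE = Path("preferences.json")  # hypothetical storage location

    def remember_preference(message: str) -> None:
        # Naive write-back: persist anything that looks like a stated
        # preference. A real system presumably uses the model itself
        # to decide what is worth keeping.
        if "prefer" not in message.lower():
            return
        prefs = json.loads(PREFS_FILE.read_text()) if PREFS_FILE.exists() else []
        prefs.append(message)
        PREFS_FILE.write_text(json.dumps(prefs, indent=2))

    remember_preference("I prefer SOLID principles and Hexagonal architecture.")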
awesome_dude|4 months ago
Apparently they know better, even though:
1. They didn't issue the prompt, so they... knew what I meant by the phrase (obviously they didn't).
2. The LLM/AI took my prompt, interpreted it exactly how I meant it, and behaved exactly how I desired.
3. They then claim that it's about "knowing exactly what's going on"... even though they didn't, and they got it wrong.
This is the advantage of an LLM: if it gets something wrong, you can tell it. It might persist with an erroneous assumption, but you can tell it to start over (I proved that).
These "humans", however, are convinced that only they can be right, despite overwhelming evidence of their stupidity (and that's why they're only JUNIORS in their fields).
mbesto|4 months ago
If you already know what a good answer is, why use an LLM? If the answer is "it'll just write the same thing quicker than I would have", then why not just use it as an autocomplete feature?
Nition|4 months ago
Once I get into stuff I haven't worked out how to do yet, the LLM often doesn't really know either unless I can work it out myself and explain it first.
svachalek|4 months ago
For myself as well, that prompt is very short. I don't keep a large stable of reusable prompts because, I agree, every unnecessary word is a distraction that does more harm than good.
brookst|4 months ago
Why should I have to mention the city I live in when asking for a restaurant recommendation? Yes, I know a good answer is one that's in my city, and a bad answer is one on another continent.