Ask HN: What do you dislike about ChatGPT and what needs improving?
33 points | zyruh | 7 months ago
What aspects of the experience do you find lacking, confusing, or outright irritating? Which improvements do you think are most urgent or would make the biggest difference?
Fade_Dance|7 months ago|reply
You can say things like "you are a robot, you have no emotions, don't try to act human", but the output doesn't seem to be particularly well calibrated. I feel like when I modify the default response style, I'm probably losing something, considering that the defaults are what go through extensive testing.
quatonion|7 months ago|reply
It used to be a lot better before glazegate. It never did quite seem to recover.
I don't mind us having fun of course, but it needs to pick up on emotional cues a lot better and know when to be serious.
quatonion|7 months ago|reply
Copy/Pasting sections of the chat on mobile is laborious
That it still gets manic and starts glazing
That it can remember some things and keeps bringing them up, but forgets other, more pertinent things
If you switch away from it while it is in the middle of generating an image it often cancels the image generation
Image editing accuracy seems to have declined significantly relative to the stated intent.
You can't turn a temporary chat into a permanent one. Sometimes you start a temporary chat and realize halfway through that it should be permanent, but by then it's too late.
The em dashes need to go
And so do the "it's not this, it's that!"
Is it really necessary to make so many lists all the time?
Canvas needs a bunch of work
nubela|7 months ago|reply
This, to me, is a sign that intelligence/rationalization is not present yet. That said, it does seem like something that can be "trained" away.
8bitsrule|7 months ago|reply
A simple 'I don't know, I haven't got access to the answer' would be a great start. People who don't know better are going to swallow those crap answers. For this we need to produce much more electricity?
nebben64|7 months ago|reply
Better memory management: I have memories that get overlooked or forgotten (even though I can see them in the archive), and when I try to remind ChatGPT, it creates a new memory; updating a memory also often just creates a new one. I can tell that Chat is trying hard to reference past memories, so I try not to have too many, and I make each memory contain only precise information.
Some way to branch off a conversation (and come back to the original thread when I'm done; it happens often when I'm learning that I want to go off and explore a side topic I need to understand).
throwawaylaptop|7 months ago|reply
It just guessed, but didn't tell me it had no idea which columns, or where, I was really talking about. So not only did it guess wrongly, it didn't even mention that it had to guess. Obviously the code failed.
Why can't it tell me there's a problem with what I'm asking?
decide1000|7 months ago|reply
On the LLM: it's too positive. I don't always want it to follow my ideas, and I don't want to hear how much my feedback is appreciated. Act like a machine. Also, the safety controls are sometimes too sensitive. Really annoying, because there is no way to continue the conversation. I like GPT-4.5 because I can edit the canvas; I would like to have that with all models.
Also, some stats like sentiment and fact-checking would be nice. Because it gives nuanced answers, I want the stats to show how far from the truth, or how biased, I am.
And the writing: exaggeration, too many words, spelling mistakes in European languages.
krpovmu|7 months ago|reply
2- The fact that it always tries to answer and sometimes doesn't ask for clarification on what the user is asking; it just wants to answer and that's it.
jondwillis|7 months ago|reply
- Opaque training data (and provenance thereof… where’s my cut of the profits for my share of the data?)
- Closed source frontier models, profit-motive to build moat and pull up ladders (e.g. reasoning tokens being hidden so they can’t be used as training data)
- Opaque alignment (see above)
- Overfitting to in-context examples, e.g. syntax and structure are often copied from examples even with contrary prompting
- Cloud models (seemingly) changing behavior even on pinned versions
- Over-dependence: “oops! I didn’t have to learn so I didn’t. My internet is out so now I feel the lack.”
mradek|7 months ago|reply
Also, I wish it were possible for the models to leverage the local machine to increase/augment their context.
Also, one observation is that Claude.ai (the web UI) gets REALLY slow as the conversation gets longer. I'm on an M1 Pro 32 GB MacBook Pro, and it lags as I type.
I really enjoy using LLMs and would love to contribute any feedback as I use them heavily every day :)
barrell|7 months ago|reply
If I have an emotionless natural-language database that burns a tree for every question, I do not want to have to make small talk before getting an answer.
hotgeart|7 months ago|reply
I want it to tell me if my process is bad or if I'm heading in the wrong direction, and not to sugarcoat things just to make me feel good. I mostly use it for code reviews.
y-curious|7 months ago|reply
This tone grates on me constantly.
jondwillis|7 months ago|reply
Where X is an exaggeration of what it actually is and Y is some saccharine marketing proclamation of what it definitely is not but the prompter wishes it was.
Infomercial slop.
egberts1|7 months ago|reply
ChatGPT got the basic Vimscript terminology correct, such as group name, regex, match, and region, and it maintained a top-level, first-encounter sorted list of 'contains=' group names correctly, from the largest static pattern down to the wildest regex pattern.
It also got the S-notation of operators in the correct nested order.
AND it got Bison's semantic actions (state transitions) and lexical tokens .... It can make EBNF from Bison (although Bison does it better).
But it often fails through brevity; an expert (like me) has to prod ChatGPT occasionally about omissions.
It assumes some keywords have invalid value ranges and invalid syntax arrangements, and it provides incorrect terminators.
So, I consider ChatGPT to be more of an intermediate editor's README that requires occasional consultation of EBNF notations and the Vimscript man page, and more often Bison's parser source (parser_bison.y) file as the final arbiter.
Does it learn? Constant 'nft' command outputs set ChatGPT straight. But there is slippage when starting a new ChatGPT session, which leads me to believe that it won't learn for others (or for me).
EDIT: saying "no glazing" cuts down on filler words, nicely.
NuclearPM|7 months ago|reply
What can you do?
“Good question! I can do x, y, z…”
Do that.
“…”
“…”
“…”
“Sorry I can’t do this for you because blah blah blah”
divan|7 months ago|reply
I use projects for research purposes for articles/scripts/etc., and I would love to use ChatGPT in voice mode to talk about the article I'm writing. Like: "hey, read the last paragraph of the article... let's elaborate on topic X... here's what I would love to write - x, y, z - please improve the style and read it back to me... nice, add it as the next paragraph."