top | item 45904749

url00 | 3 months ago

I don't want a more conversational GPT. I want the _exact_ opposite. I want a tool with the upper limit of "conversation" being something like LCARS from Star Trek. This is quite disappointing as a current ChatGPT subscriber.

tekacs|3 months ago

That's what the personality selector is for: you can just pick 'Efficient' (formerly Robot) and it does a good job of answering tersely?

https://share.cleanshot.com/9kBDGs7Q

pants2|3 months ago

FWIW I didn't like the Robot / Efficient mode because it would give very short answers without much explanation or background. "Nerdy" seems to be the best, except with GPT-5 instant it's extremely cringy like "I'm putting my nerd hat on - since you're a software engineer I'll make sure to give you the geeky details about making rice."

"Low" thinking is typically the sweet spot for me - way smarter than instant with barely a delay.

op00to|3 months ago

I use Efficient or robot or whatever. It gives me a bit of sass from time to time when I subconsciously nudge it into taking a “stand” on something, but otherwise it’s very usable compared to the obsequious base behavior.

kivle|3 months ago

If only that worked for conversation mode as well. At least for me, and especially when it answers me in Norwegian, it will start off with all sorts of platitudes and whole sentences repeating exactly what I just asked. "Oh, so you want to do x, huh? Here is answer for x". It's very annoying. I just want a robot to answer my question, thanks.

layer8|3 months ago

At least for the Thinking model it's often still a bit long-winded.

bogtog|3 months ago

Unfortunately, I also don't want other people to interact with a sycophantic robot friend, yet my picker only applies to my conversation.

angrydev|3 months ago

Exactly. Stop fooling people into thinking there’s a human typing on the other side of the screen. LLMs should be incredibly useful productivity tools, not emotional support.

halifaxbeard|3 months ago

How would you propose we address the therapist shortage then?

93po|3 months ago

Food should only be for sustenance, not emotional support. We should only sell brown rice and beans, no more Oreos.

glitchc|3 months ago

Maybe there is a human typing on the other side, at least for some parts or all of certain responses. It hasn't been proven otherwise.

cowpig|3 months ago

I think they get way more "engagement" from people who use it as their friend, and the end goal of subverting social media and creating the most powerful (read: profitable) influence engine on earth makes a lot of sense if you are a soulless ghoul.

sofixa|3 months ago

It will be pretty dystopian when we get to the point where ChatGPT pushes (unannounced) advertisements to those people (the ones forming a parasocial relationship with it). Imagine someone complaining they're depressed and ChatGPT proposing doing XYZ activity which is actually a disguised ad.

Outside such scenarios, that "engagement" is just useless and actually costs them more money than it makes.

easygenes|3 months ago

I use the "Nerdy" tone along with the Custom Instructions below to good effect:

"Please do not try to be personal, cute, kitschy, or flattering. Don't use catchphrases. Stick to facts, logic, reasoning. Don't assume understanding of shorthand or acronyms. Assume I am an expert in topics unless I state otherwise."
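
For API use, the same instructions can be wired in as a system prompt so every request starts out terse. A minimal sketch in the common chat-message format; the `build_messages` helper is hypothetical, and the instruction text is quoted from the comment above:

```python
# Minimal sketch: reusing the custom instructions above as a system
# prompt. The helper name is hypothetical.
TERSE_INSTRUCTIONS = (
    "Please do not try to be personal, cute, kitschy, or flattering. "
    "Don't use catchphrases. Stick to facts, logic, reasoning. "
    "Don't assume understanding of shorthand or acronyms. "
    "Assume I am an expert in topics unless I state otherwise."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Prepend the terse system prompt to every request."""
    return [
        {"role": "system", "content": TERSE_INSTRUCTIONS},
        {"role": "user", "content": user_prompt},
    ]
```

Pinning the instructions in the system role keeps them out of the visible conversation while still applying them to every turn.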

sbuttgereit|3 months ago

This. When I go to an LLM, I'm not looking for a friend, I'm looking for a tool.

Keeping faux relationships out of the interaction never lets me slip into the mistaken attitude that I'm dealing with a colleague rather than a machine.

Y_Y|3 months ago

I don't know about you, but half my friends are tools.

nathan_compton|3 months ago

You can just tell the AI not to be warm and it will remember. My ChatGPT used the phrase "turn it up to eleven" and I told it never to speak in that manner again, and it's been very robotic ever since.

pgsandstrom|3 months ago

I added the custom instruction "Please go straight to the point, be less chatty". Now it begins every answer with: "Straight to the point, no fluff:" or something similar. It seems to be perfectly unable to simply write out the answer without some form of small talk first.

andai|3 months ago

I system-prompted all my LLMs "Don't use cliches or stereotypical language." and they like me a lot less now.

moi2388|3 months ago

Same. If i tell it to choose A or B, I want it to output either “A” or “B”.

I don’t want an essay of 10 pages about how this is exactly the right question to ask

LeifCarrotson|3 months ago

10 pages about the question means that the subsequent answer is more likely to be correct. That's why they repeat themselves.

astrange|3 months ago

LLMs have essentially no capability for internal thought. They can't produce the right answer without writing that reasoning out first.

Of course, you can use thinking mode and then it'll just hide that part from you.

LaFolle|3 months ago

Exactly, and it doesn't help with agentic use cases that tend to solve problems in one shot. For example, there is zero requirement for a model to be conversational when it is triaging a support question into preset categories.
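
A triage step like that can be sketched as a closed-set classifier: the prompt allows only fixed labels, and any conversational reply from the model is rejected. All names and categories here are hypothetical:

```python
# Hypothetical sketch of a non-conversational triage step: the model
# is told to return exactly one label, and anything else falls back
# to a default category.
CATEGORIES = ["billing", "bug", "feature-request", "other"]

TRIAGE_PROMPT = (
    "Classify the support message into exactly one category: "
    + ", ".join(CATEGORIES)
    + ". Reply with the category name only, no explanation."
)

def parse_triage_reply(reply: str) -> str:
    """Validate a model reply; fall back to 'other' on chatter."""
    label = reply.strip().lower()
    return label if label in CATEGORIES else "other"
```

The validation step is the point: if the model adds "Great question!" padding, the reply no longer matches a label and is discarded rather than forwarded downstream.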

Tiberium|3 months ago

Are you aware that you can achieve that by going into Personalization in Settings and choosing one of the presets or just describing how you want the model to answer in natural language?

gcau|3 months ago

Yea, I don't want something trying to emulate emotions. I don't want it to even speak a single word, I just want code, unless I explicitly ask it to speak on something, and even in that scenario I want raw bullet points, with concise useful information and no fluff. I don't want to have a conversation with it.

However, being more humanlike, even if it results in an inferior tool, is the top priority because appearances matter more than actual function.

cmrdporcupine|3 months ago

To be fair, of all the LLM coding agents, I find Codex+GPT5 to be closest to this.

It doesn't really offer any commentary or personality. It's concise and doesn't engage in praise or "You're absolutely right". It's a little pedantic though.

I keep meaning to re-point Codex at DeepSeek V3.2 to see if it's a product of the prompting only, or a product of the model as well.

SergeAx|3 months ago

Just put it in your system prompt?

egorfine|3 months ago

Enable "Robot" personality. I hate all the other modes.

kranke155|3 months ago

Gemini is very direct.

jasonsb|3 months ago

Engagement Metrics 2.0 are here. Getting your answer in one shot is not cool anymore. You need to waste as much time as possible on OpenAI's platform. Enshittification is now more important than AGI.

spaceman_2020|3 months ago

This is the AI equivalent of every recipe blog filled with 1000 words of backstory before the actual recipe just to please the SEO Gods

The new boss, same as the old boss

glouwbug|3 months ago

Things really felt great 2023-2024

mmcnl|3 months ago

Exactly. The GPT 5 answer is _way_ better than the GPT 5.1 answer in the example. Less AI slop, more information density please.

vunderba|3 months ago

And utterly unsurprising given their announcement last month that they were looking at exploring erotica as a possible revenue stream.

[1] https://www.bbc.com/news/articles/cpd2qv58yl5o

subscribed|3 months ago

Everyone else provides these services anyway, and many places offer ChatGPT or Claude models despite the current limits (because they work with "jailbreaking" prompts), so they likely decided to stop pretending and just let that stuff in.

What's the problem, tbh.