
Powdering7082 | 7 months ago

Really concerning that what appears to be the top model is in the family of models that inadvertently started calling itself MechaHitler


jm4|7 months ago

I don't know why anyone would bother with Grok when there are other good models from companies that don't have the same baggage as xAI. So what if they release a model that beats older models in a benchmark? It will only be the top model until someone else releases another one next week. Personally, I like the Anthropic models for daily use. Even Google, with their baggage and lack of privacy, is a far cry from xAI and offers similar performance.

tonymet|7 months ago

I like Grok because I don't hit the obvious ML-fairness / political-correctness safeguards that other models have.

I understand the intent behind implementing those, but they also reduce perceived trust and utility. It's a tradeoff.

Let's say I'm using Gemini. I can tell by the latency or the redraw that I asked an "inappropriate" query.

togetheragainor|7 months ago

Some people think it’s a feature that when you prompt a computer system to do something, it does that thing, rather than censoring the result or giving you a lecture.

Perhaps you feel that other people shouldn’t be trusted with that much freedom, but as a user, why would you want to shackle yourself to a censored language model?

stri8ed|7 months ago

It's a result of the system prompt, not the base model itself. Arguably, this just demonstrates that the model is very steerable, which is a good thing.

anthonybsd|7 months ago

It wasn't a result of the system prompt. When you fine-tune a model on a large corpus of right-leaning text, don't be surprised when neo-Nazi tendencies inevitably emerge.

riversflow|7 months ago

Is it good that a model is steerable? Odd word choice. A highly steerable model seems like a dangerous and potent tool for misinformation. Kinda evil really, the opposite of good.

Herring|7 months ago

Who cares exactly how they did it? The point is they did it, and there's zero trust they won't do it again.

> Actually it's a good thing that the model can be easily Nazified

This is not the flex you think it is.

DonHopkins|7 months ago

[deleted]

api|7 months ago

Isn't this kind of stuff something that happens when the model is connected to X, which is basically 4chan /pol now?

Connect Claude or Llama3 to X and it'll probably get talked into LARPing Hitler.

archagon|7 months ago

Great, so xAI gave their model brain damage.