top | item 42719105


IronWolve | 1 year ago

The political training in ChatGPT has gotten in the way of asking basic funding and policy questions.

I gave it a budget and asked it to identify the programs/departments/etc. that have little return on value, overruns, possible fraud, and other problems, so I could outsource or combine departments and save money.

It went on a long lecture that cutting funding was a horrible thing, and that I was horrible for asking, and refused to answer.

Really?

I'm asking basic auditing/restructuring/spending questions, and it was trained to ignore my request, lecture me against providing help, and refuse to give results.

smh, this isn't helpful.

jamiek88 | 1 year ago

Wow.

I don’t blame it.

How could a fucking LLM make decisions about closing departments? What kind of person would even think using an LLM for that is a good idea?

This comment gave me the ick, as the kids say.

Asking a word predictor which people to lay off. I’ve seen it all now and really question your morality. Just like GPT did.

IronWolve | 1 year ago

It was just a task to find issues and didn't even mention layoffs; it just assumed, just like you did.

You do know asking ChatGPT a question doesn't actually do anything in real life, right?

comte7092 | 1 year ago

How would ChatGPT be able to spot overruns or fraud, or make a value assessment, off of a budget alone? Unless you provided significantly more information than your post implies, e.g. actual spend, the entire exercise was pretty absurd.
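
To make the point concrete: spotting an overrun is a comparison between budgeted and actual spend, so a budget alone can't do it. A minimal sketch, with entirely made-up department names and figures:

```python
# Hypothetical data: an overrun is only visible when you have BOTH
# the budgeted amount and the actual spend. All figures are invented.
departments = [
    {"name": "IT",         "budgeted": 500_000, "actual": 720_000},
    {"name": "Facilities", "budgeted": 300_000, "actual": 290_000},
    {"name": "Legal",      "budgeted": 200_000, "actual": 260_000},
]

def flag_overruns(rows, threshold=0.10):
    """Return (name, variance) for rows whose actual spend exceeds
    the budgeted amount by more than `threshold` (fractional)."""
    flagged = []
    for r in rows:
        variance = (r["actual"] - r["budgeted"]) / r["budgeted"]
        if variance > threshold:
            flagged.append((r["name"], round(variance, 2)))
    return flagged

print(flag_overruns(departments))
# → [('IT', 0.44), ('Legal', 0.3)]
```

Hand an LLM only the "budgeted" column and there is nothing for this calculation to work with, which is the objection being made here.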

This is the biggest danger of LLMs: people assuming that they have some sort of magical super intelligence.

Grimblewald | 1 year ago

Even if you did provide the data, if it's tabular you can forget ChatGPT understanding it properly, unless it is a very small table or it writes code to summarise things. And if it writes code, there's a significant chance it still messes things up unless what you're asking is incredibly routine.
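
The "writes code to summarise things" route would look something like the sketch below: collapse a large table into per-department totals that fit in context, rather than asking the model to read every row. The rows and names are hypothetical.

```python
from collections import defaultdict

# A sketch of the summarisation code an LLM would need to generate
# for a table too large to read directly. Rows are invented:
# (department, category, amount).
rows = [
    ("IT",    "software",        120_000),
    ("IT",    "hardware",         80_000),
    ("Legal", "outside counsel", 150_000),
    ("Legal", "software",         30_000),
]

def spend_by_department(rows):
    """Aggregate line items into one total per department."""
    totals = defaultdict(int)
    for dept, _category, amount in rows:
        totals[dept] += amount
    return dict(totals)

print(spend_by_department(rows))
# → {'IT': 200000, 'Legal': 180000}
```

Even this trivial aggregation is a place where a generated script can silently go wrong (wrong column, wrong grouping key), which is the risk the comment is pointing at.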

Grimblewald | 1 year ago

Sounds to me like you got the right answer.