hereonout2 | 23 days ago

I was playing about with ChatGPT the other day, uploading screenshots of sheet music and asking it to convert them to ABC notation so I could make a MIDI file.

The results seemed impressive until I noticed some of the "Thinking" statements in the UI.

One made it apparent the model / agent / whatever had read the title from the screenshot and was off searching for existing ABC transcripts of the piece Ode to Joy.

So the whole thing was far less impressive after that; it wasn't reading the score at all, just reading the title and using the internet to answer my query.
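
For context, ABC notation is just plain text, which is what makes it a convenient target for this kind of conversion. The opening phrase of Ode to Joy would look something like this (a rough sketch from memory, not an exact transcription):

    X:1
    T:Ode to Joy (opening, approximate)
    M:4/4
    L:1/4
    K:D
    F F G A | A G F E | D D E F | F3/2 E/2 E2 |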

nobodywillobsrv|23 days ago

Yes, I have found that Grok, for example, suddenly becomes quite sane when you tell it to stop querying the internet and just rethink the conversation data and answer the question.

It's weird; it's like many agents are now in a phase of constantly getting more information and never just thinking with what they've got.

Szpadel|23 days ago

But isn't that what we wanted? We complained so much that LLMs used deprecated or outdated APIs instead of the current versions because they relied so much on what they remembered.

HappMacDonald|22 days ago

2010's: Google Search is making humans who constantly rely on it dumber

2020's: LLMs are making humans who constantly rely on them dumber

2026: Google Search is making LLMs who constantly rely on it dumber

bestham|23 days ago

Touché, that is what we humans are doing to some degree as well.

anomaly_|23 days ago

Sounds pretty human-like! Always searching for a shortcut.

lpcvoid|23 days ago

It sounds like it's lying and making stuff up, something everybody seems to be okay with when using LLMs.

kouunji|23 days ago

For structured outputs like that, wouldn't it be better to get the LLM to create a script that does the translation repeatably?
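
Something in that direction, as a rough sketch: the image-to-ABC step is the hard part (that's optical music recognition), but the ABC-to-MIDI step is completely deterministic. Assuming a library with ABC support such as music21, a one-off script could look something like this (file names are placeholders):

    # Sketch of a repeatable ABC -> MIDI conversion, assuming the music21
    # library and its ABC format support; file names are placeholders.
    from music21 import converter

    def abc_to_midi(abc_path: str, midi_path: str) -> None:
        """Parse an ABC file and write it back out as a standard MIDI file."""
        score = converter.parse(abc_path, format="abc")
        score.write("midi", fp=midi_path)

    if __name__ == "__main__":
        abc_to_midi("ode_to_joy.abc", "ode_to_joy.mid")

Then the model's only job is producing the ABC text, and the rendering step never varies from run to run.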