item 46492767

threeducks|1 month ago

> But if you are insinuating AI made all this up on its own, I have to disappoint you.

No worries, I am not a native English speaker myself. I was genuinely interested in whether commercial LLMs would use "bad" words without some convincing.

fristovic|1 month ago

Oh, it was a hassle for sure! It kept rewriting the sentences I fed it, trying to style them "properly", throwing out words, and flattening the rebellious tone I wanted for the book. It was worth it for some pieces, which really became punchier and more to the point, but for others, looking back, I could have saved the time and published them as-is. So it's a medium success for me.

threeducks|1 month ago

That was my experience as well. Sometimes, LLMs were a big help, but other times, my efforts would have been better spent writing things myself. I always tell myself that experience will make me choose correctly next time, but then a new model is released and things are different yet again.

dizhn|1 month ago

Try some Made In PRC models. They do not give a shit.

threeducks|1 month ago

I have tried a few Qwen-2.5 and 3.0 models (<=30B), even abliterated ones, but it seems that some words have been completely wiped from their pretraining dataset. No amount of prompting can bring back what has never been there.

For comparison, I have also tried the smaller Mistral models, which have a much more complete vocabulary, but their writing sometimes lacks continuity.

I have not tried the larger models due to lack of VRAM.