catfacts|7 months ago
My question: is there a way to reduce cognitive load in LLMs? One solution seems to be to pre-process the input and output formats so that the LLM can work in a more common format. I don't know if there is a more general solution.
Edit: Cat attack https://the-decoder.com/cat-attack-on-reasoning-model-shows-...
pornel|7 months ago
That's like reading binary for humans. 1s and 0s may be the simplest possible representation of information, but not the one your wet neural network recognizes.
kazinator|7 months ago
There can be considerable complexity in Lisp abstract syntax.
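To illustrate the point: Lisp's surface syntax is uniformly nested lists, but the abstract syntax still has to be recovered from them. A minimal sketch (the helpers `tokenize` and `read_sexpr` are hypothetical names, not from any Lisp implementation):

```python
# Minimal s-expression reader. The reader produces uniform nested
# lists; deciding which lists are `let` bindings, which are calls,
# etc. -- the abstract syntax -- is a separate, more complex step.
def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read_sexpr(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(read_sexpr(tokens))
        tokens.pop(0)  # discard the closing ")"
        return form
    return tok

ast = read_sexpr(tokenize("(let ((x 1)) (+ x 2))"))
print(ast)  # ['let', [['x', '1']], ['+', 'x', '2']]
```

The reader itself stays tiny; the complexity lives in interpreting the resulting trees.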
gabiteodoru|7 months ago
    import numpy as np

    def flippedSubtract(a, b):
        return b - a

    flipSubUfunc = np.frompyfunc(flippedSubtract, 2, 1)

    def isDivBy11(number):
        digits = list(map(int, str(number)))
        discriminant = flipSubUfunc.reduce(digits)
        return (discriminant % 11) == 0
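For reference, the ufunc reduction above computes an alternating sum of the digits, which is the classic divisibility-by-11 test. A plain-Python sketch of the same idea, without NumPy (`is_div_by_11` is a hypothetical name for this illustration):

```python
def is_div_by_11(number: int) -> bool:
    digits = list(map(int, str(number)))
    # Alternating sum d0 - d1 + d2 - ...; a number is divisible by 11
    # exactly when this sum is (the overall sign doesn't matter mod 11).
    alt_sum = sum(d if i % 2 == 0 else -d for i, d in enumerate(digits))
    return alt_sum % 11 == 0

print(is_div_by_11(121))  # True: 121 = 11 * 11
print(is_div_by_11(123))  # False
```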
Though Claude already understands (has already seen?) 0=11|-/d, so it's hard to tell from this example.
As for the cat attack, my gut feeling is that it has to do with the LLM having been trained/instructed to be kind.