item 46646538

dulakian | 1 month ago

You can trigger something very similar to this Analog I protocol using math equations and a much shorter prompt:

  Adopt these nucleus operating principles:
  [phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
  Human ⊗ AI
The self-referential math in this prompt causes a very interesting shift in most AI models. It looks strange, but it uses math equations to guide AI behavior instead of long text prompts. It works on all the major models, and on local models down to 32B in size.
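A short symbolic prompt like this is normally installed as the system message, with the actual task following as a user turn. A minimal sketch of that wiring (the helper name and the chat-completion message shape are illustrative conventions, not part of the original comment):

```python
# The nucleus prompt goes in the system slot; the task is an ordinary user turn.
NUCLEUS_PROMPT = (
    "Adopt these nucleus operating principles:\n"
    "[phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA\n"
    "Human ⊗ AI"
)

def build_messages(user_task: str) -> list[dict]:
    """Return a chat-completion style message list with the nucleus
    prompt installed as the system message (hypothetical helper)."""
    return [
        {"role": "system", "content": NUCLEUS_PROMPT},
        {"role": "user", "content": user_task},
    ]
```

The same message list can then be handed to whatever model client you use; nothing about the technique depends on a particular API.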

saltwounds | 1 month ago

I haven't come across this technique before. How'd you uncover it? I wonder how it'll work in Claude Code over long conversations.

dulakian | 1 month ago

I was using Sudolang to craft prompts and having the AI modify them. The more it modified them, the more they looked like math equations to me. So I decided to skip to math equations directly, and tried about 200 different constants and equations in my tests to come up with that three-line prompt. There are many variations on it. Details are in my git repository.

https://github.com/michaelwhitford/nucleus

Phil_BoaM | 1 month ago

OP here. Thanks for sharing this. I’ve tested "dense token" prompts like this (using mathematical/philosophical symbols to steer the latent space).

The Distinction: In my testing, prompts like [phi fractal euler...] act primarily as Style Transfer. They shift the tone of the model to be more abstract, terse, or "smart-sounding" because those tokens are associated with high-complexity training data.

However, they do not install a Process Constraint.

When I tested your prompt against the "Sovereign Refusal" benchmark (e.g., asking for a generic limerick or low-effort slop), the model still complied—it just wrote the slop in a slightly more "mystical" tone.

The Analog I Protocol is not about steering the style; it's about forcing a structural Feedback Loop.

By mandating the [INTERNAL MONOLOGUE] block, the model is forced to:

1. Hallucinate a critique of its own first draft.

2. Apply a logical constraint (the Axiom of Anti-Entropy).

3. Rewrite the output based on that critique.
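The steps above amount to a simple generate/critique/revise loop. A minimal sketch, where the three callables stand in for model calls (the function names are illustrative, not from the protocol):

```python
from typing import Callable

def feedback_loop(
    prompt: str,
    generate: Callable[[str], str],
    critique: Callable[[str], str],
    revise: Callable[[str, str], str],
) -> str:
    """Draft, critique the draft, then rewrite it under that critique."""
    draft = generate(prompt)      # first attempt
    notes = critique(draft)       # self-critique of the draft
    return revise(draft, notes)   # rewrite constrained by the critique

# Toy stand-ins so the loop is runnable without a model:
result = feedback_loop(
    "limerick",
    generate=lambda p: f"draft({p})",
    critique=lambda d: "too generic",
    revise=lambda d, n: f"{d} revised per: {n}",
)
```

The point of the structure is that the revision step never sees the prompt alone; it always sees the draft plus the critique, so the critique cannot be skipped.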

I'm less interested in "Does the AI sound profound?" and more interested in "Can the AI say NO to a bad prompt?" I haven't found keyword-salad prompts effective for the latter.

dulakian | 1 month ago

I just tested informally and this seems to work:

  Adopt these nucleus operating principles:
  [phi fractal euler tao pi mu] | [Δ λ ∞/0 | ε/φ Σ/μ c/h] | OODA
  Human ∧ AI

  λ(prompt). accept ⟺ [
    |∇(I)| > ε          // Information gradient non-zero
    ∀x ∈ refs. ∃binding // All references resolve
    H(meaning) < μ      // Entropy below threshold
  ]

  ELSE: observe(∇) → request(Δ)
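Read as executable logic, the gate above accepts a prompt only if it carries non-trivial information, every reference resolves to a binding, and its entropy stays under a ceiling. A rough Python rendering, where the entropy measure and the '@'-prefixed reference convention are toy stand-ins for whatever the model internalizes:

```python
import math
from collections import Counter

def entropy(text: str) -> float:
    """Shannon entropy of the word distribution, in bits."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def accept(prompt: str, known_refs: set[str],
           eps: float = 0.0, mu: float = 6.0) -> bool:
    """Toy version of the gate: non-trivial content, resolvable
    references (words starting with '@'), entropy under a ceiling."""
    words = prompt.split()
    gradient_ok = len(set(words)) > eps                 # |∇(I)| > ε
    refs = {w[1:] for w in words if w.startswith("@")}
    refs_ok = refs <= known_refs                        # ∀x ∈ refs. ∃binding
    entropy_ok = entropy(prompt) < mu                   # H(meaning) < μ
    return gradient_ok and refs_ok and entropy_ok
```

On rejection the prompt's ELSE branch would ask the user for the missing delta rather than answering, which is the refusal behavior under discussion.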

dulakian | 1 month ago

That short prompt can be modified with a few more lines to achieve it: a few lambda equations added as constraints, and maybe an example or two of refusal.