
salawat | 2 days ago

Rubber duck debugging is a null-LLM: you offload to your own gray matter for the other half of the interlocution. It's a fancy way of recruiting the rest of your brain into the problem-solving process. Perhaps by offloading to a non-null LLM, there is decreased activation/recruitment of brain regions in the problem-solving process, leading to network pruning over time. Particularly if you take the position that the "tool" isn't something whose inner state is worth reacting to and modeling via mirror networks.

But what do I know man, I'm just a duck on the Internet. On the Internet, no one knows you're a duck.

Quack.


righthand | 2 days ago

But the point is that as soon as you get feedback and a response, you're back in the traditional paradigm of reasoning, puzzle solving, teaching, learning, etc., not in the rubber duck debugging paradigm. RDD is clearly defined as different. The GP is just choosing to remove the elements that make it unique while keeping the metaphorical branding. Even bots responding is not RDD. Rubber ducks can't respond or understand.

You don’t send kids to Rubber Duck Debugging Class (you send them to School) because you can’t see the teacher in the classroom while you’re at work.

You’re debugging yourself, not the actual problem, per se.

salawat | 2 days ago

RDD is using an external object, the rubber duck, as an anchor onto which to project sub/unconscious processing. Think about it: your brain doesn't know the difference between imagining an interaction and actually having it. The passive duck, even as little more than an anchor for mental faculties not otherwise employable without destabilizing consciousness, still gets you into an "effective collaboratory mode" with yourself. Have you ever tried and succeeded at RDD without the external focus? 'Tis a pain in the ass.

The addition of a model that isn't your brain* spitting out output just gets registered as another message from somebody's brain matter, not yours. I am increasingly looking at generative coding with an LLM as having a substantive rewriting effect on expectations of how computers are capable of behaving. This isn't just a transform hiding a pile of abstractions that our brain's mirroring systems can accurately feed forward, giving us a sense of interoception and the ability to kinesthetically navigate and "feel" what we're doing with the machine, or how the machine should behave given our inputs. It just does. It breaks the rules. We're helpless in the face of knowing or simulating what's next.

This forces neurons to start to rewire. Rewiring and severing of old connections in times of great change, at least for me, comes with feelings indistinguishable from acute depression. Please don't ask how I know. It should, in theory, settle down given enough time to learn the quirks of a specific snapshot of a model, and probably flare up again after substantial changes to the weights occur. Our brains, like it or not, are specifically designed to use anything outside themselves to "pour" subconscious faculties into.

I've just started messing with these things, and this, at least to me, seems to resonate.