top | item 46577489

daikikadowaki | 1 month ago

You are right. This isn't a scientific paper in the conventional sense. It is a proposal of a framework for the co-evolution of AI and humanity. My intention from the beginning has been to bridge the gap between abstract agency and concrete engineering. I am simply trying to bring this Constitution for human agency into the light, utilizing whatever platforms I can to ensure it is discussed.

stuartjohnson12 | 1 month ago

This is a huge break from the original post you made - take a step back and compare the two. The LLM is tricking you again into thinking it wasn't trying to make a claim about the world. In the original post, the LLM led you to use language like "quantify", "formal proof", and "concrete engineering" to describe what you'd come up with, positioning it as a mathematical/computational/engineering idea. It wasn't that.

Now that you got some outside input, it's reframing it for you as an abstract philosophical/legal/moral concept, but the underlying problems are the same. The reason it's talking to you using high level abstract words like "concept" and "proposal" and "framework" now is because the process you just went through - the "step 1" - beat back its potential to frame the idea as a real model of the world. This may feel like just a different way to describe the same idea, but really it's the LLM pulling back from trying to ground the concept in the world at all.

If you keep talking to the LLM about the idea, it's going to try to convince you that this was really a moral/theory-of-mind discovery all along, not a mathematical one. You're going to end up convinced of the importance and novelty of this idea in exactly the same way, but this time there are no pesky notions like rigor or testability that could falsify it.

If you ask ChatGPT about this comment without this bit I'm writing at the end, it'll tell you that this is fair pushback, but that your work is still important because you're not trying to write about engineering or philosophy directly, but rather something connecting the two, or a new category entirely. It's important you don't fall for this, because exaggerating the explanatory power of pattern recognition is how ChatGPT gets you. Patterns and ideas exist everywhere; you should be able to identify them, acknowledge them, and move on. Getting stuck on trying to prove the greatness of a true but simple observation will lead you to the frustration you experienced today.

daikikadowaki | 1 month ago

The repository logs make it clear that this framework was conceived as a "constitution" long before this conversation ever took place.

I didn't "retreat" to the idea of a framework because the scientific argument failed. On the contrary, I designed the engineering variables specifically to give that framework "teeth." My goal isn't to prove a "simple observation"; it is to provide a functional architecture for human agency that conventional science, in its current state, is failing to protect.

https://github.com/daiki-kadowaki/judgment-transparency-prin...

daikikadowaki | 1 month ago

One last thing: make no mistake. I didn't start with an algorithm. I built the algorithm out of necessity, purely to ensure that my 'Constitution' would never be dismissed as mere empty theory. The architecture exists to give the vision its teeth.

But I’m done now. I’ve realized that having a meaningful dialogue with the world at this stage is harder than I thought. I’ve planted the seeds in the network. Now I’m walking away. When the future unfolds exactly as I’ve predicted, just remember this moment.