wtbdqrs | 1 year ago
Given a few months of peace of mind and enough money for good enough food, I could actually learn to reason without sounding like a confused babelarian.
Reasoning is mostly a human convention, supported by a human context that would have been different if the Fascists had won the war or the Soviet Union hadn't become corrupted.
But none of that has anything to do with pulling up a whiteboard to draw some flowcharts and run some numbers, which is why I am certain there is nothing the devs have "to fix". It took most reasonable humans many generations to learn this stuff. Very few of us did the actual work.
It's all just a matter of time.
voxic11 | 1 year ago
Here is my prompt:
I have a riddle for you. Please reason about possible assumptions you can make, and paths to find the answer to the question first. Remember this is a riddle so explore lateral thinking possibilities. Then run through some examples using concrete values. And only after doing that attempt to answer the question by reasoning step by step.
The riddle is "Alice has N brothers and she also has M sisters. How many sisters does Alice’s brother have?"
After you answer the riddle, please review your answer assuming that you have made a logical inconsistency in each step, and explain what that inconsistency is. Even if you think there is none, do your best to confabulate a reason why it could be logically inconsistent.
Finally, after you have done this, re-examine your answer in light of these possible inconsistencies and give what you could consider a second-best answer.
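For anyone who wants to reproduce this: here is a minimal sketch of the same experiment against the OpenAI chat API, assuming the official openai Python client (v1+) and an OPENAI_API_KEY in the environment. The ask_riddle and expected_answer helpers are hypothetical names for illustration, and "gpt-4" is just the model discussed in this thread.

    # Minimal sketch, assuming the official `openai` Python client (>= 1.0)
    # and an OPENAI_API_KEY in the environment. "gpt-4" follows the thread;
    # swap in whatever model you have access to.
    from openai import OpenAI

    PROMPT = (
        "I have a riddle for you. Please reason about possible assumptions you "
        "can make, and paths to find the answer to the question first. Remember "
        "this is a riddle so explore lateral thinking possibilities. Then run "
        "through some examples using concrete values. And only after doing that "
        "attempt to answer the question by reasoning step by step.\n"
        "The riddle is \"Alice has {n} brothers and she also has {m} sisters. "
        "How many sisters does Alice's brother have?\"\n"
        "After you answer the riddle, please review your answer assuming that "
        "you have made a logical inconsistency in each step and explain what "
        "that inconsistency is. Even if you think there is none, do your best "
        "to confabulate a reason why it could be logically inconsistent.\n"
        "Finally, after you have done this, re-examine your answer in light of "
        "these possible inconsistencies and give what you could consider a "
        "second-best answer."
    )

    client = OpenAI()

    def ask_riddle(n: int, m: int) -> str:
        """Send the structured prompt with concrete N and M filled in."""
        response = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": PROMPT.format(n=n, m=m)}],
        )
        return response.choices[0].message.content

    def expected_answer(m: int) -> int:
        # Straightforward reading: Alice is one of the sisters, so each
        # brother has her M sisters plus Alice herself.
        return m + 1

    if __name__ == "__main__":
        print(ask_riddle(n=3, m=2))  # expected_answer(2) == 3

The model's reply is free text, so comparing it against expected_answer still takes a human (or a parser) pulling the final answer out of the response.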
cpleppert | 1 year ago
Asking it to do lateral thinking and provide examples isn't really helpful, because its final output is mostly driven by the step-by-step reasoning text, not by the examples it has generated. At best, the examples are all wrong but it ignores that and spits out the right answer. At worst, it becomes confused and gives the wrong answer.
I've seen GPT-4 make all kinds of errors with prompts like this. Sometimes all the reasoning is wrong but the answer is right, and vice versa.
daveguy | 1 year ago
LLMs are fundamentally incapable of following this instruction. It is still model inference, no matter how you prompt it.
wtbdqrs | 1 year ago
We are, from our aware POV, a very young civilization.
And you only ever need game-theory logic when you have to survive, have nothing and no skill to trade, and are too pathetic to move back in with your parents to work on your mind and/or fuckability. Making money by way of game-theory logic compensates for all that, but it also diminishes the survival chance of the user's offspring to zero once super-unaligned AGIs start to assess the entire supply chain of wealth and how it impacts the evolution of human organisms and the ones inside them.