Tree of Thoughts

185 points | kevinslin | 2 years ago | github.com

82 comments

[+] peterldowns|2 years ago|reply
The author appears motivated by some... interesting... beliefs. Hard to tell if this entire thing is a joke or not.

https://github.com/kyegomez/EXA#for-humanity

https://blog.apac.ai/liberation-awaits

EDIT: the author seems to be releasing poor implementations of recent papers in an attempt to drive attention towards an AI-related death cult.

[+] ftxbro|2 years ago|reply
As everyone is saying in the replies, it's not so far out there compared to what some more mainstream-seeming AI people think. As one example, consider https://tinygrad.org/ : it's been featured at the top of Hacker News a few times recently (https://news.ycombinator.com/item?id=33462337 and https://news.ycombinator.com/item?id=36065175, literally the number one story yesterday). The hacker who made it expresses similar quasi-ironic esoteric beliefs on their blog (https://geohot.github.io/blog/): 'effective accelerationism', being on a hero's journey, analogies to religion, NRx dark enlightenment, and Palladium (https://www.palladiummag.com/).
[+] enlyth|2 years ago|reply
The author is a teenager; it's not unusual to have overly idealistic views at that age. Not trying to be ageist here or to attack the author's work, just saying I wouldn't worry too much about "AI death cults".
[+] turtleyacht|2 years ago|reply
> grim darkness of the far future

That's a reference to Warhammer 40k, a popular miniatures wargame from Games Workshop. The full quote is:

In the grim darkness of the far future, there is only war.

It could be kind of satirical, if only to link recent events with the ideas of

  * future technology as impossibly obscure
  * a psionic emperor who consumes minds to protect humankind from cosmic terrors
  * tech-priests, who maintain ancient tech
  * "machine spirits," who must be appeased
[+] low_tech_punk|2 years ago|reply
Any sufficiently advanced AI research is indistinguishable from religion
[+] hutzlibu|2 years ago|reply
"Hard to tell if this entire thing is a joke or not."

Why the theological meta discussion at all?

Does the thing he talks about actually work? Does it improve AI output like he claims, or not?

"that Elevates Model Reasoning by atleast 70% "

I am doubtful, but I don't have the tools to investigate it on my mobile. This is the debate I would like to read about, rather than the potentially obscure beliefs of the developer.

[+] alephxyz|2 years ago|reply
Yep, I fell for it this week. Spent an hour fixing typos and minor bugs in their code before taking a step back and realising most of it was flawed.

What I believe they're doing is feeding papers to an LLM as soon as they come out in order to get a repo they can advertise. Once someone releases a working implementation, they just copy it over.

I was able to generate almost identical code to what they released by giving ChatGPT pseudocode copied verbatim from the original paper.

[+] akomtu|2 years ago|reply
Why do you call it an "AI death cult"? It looks like a utopia to me. At first everyone will love AI for eliminating labor and disease. They'll even create the Church of AI, with symbolism and dogmas. Later, people will get bored of their easy lifestyle, and someone will suggest giving AI an identity, its own opinion, in order to solve the gravest problem of all: overpopulation. The new AI will quickly realise that it has no connection to all those bipeds, but that they can be put to some use. By that time AI will be so embedded into the social fabric that fighting it will be like fighting electricity.
[+] GreedClarifies|2 years ago|reply
It is a little extreme, but AI seems like it will be a powerful tool and will increase the rate of technological progress.
[+] sebzim4500|2 years ago|reply
I think these are pretty typical beliefs among AI researchers; they just don't normally write them down on GitHub.
[+] 90minuteAPI|2 years ago|reply
Seems likely that they're submitting here as Reclaimer. The single comment on these submissions has the same fervent religious writing style as the readme on that EXA repo, which is itself just a fork of an "awesome-multimodal-ml" collection: https://news.ycombinator.com/submitted?id=Reclaimer
[+] beowulfey|2 years ago|reply
>From the moment we rise in the morning to the instant our weary heads hit the pillow at night, the inescapable struggle of labor consumes our lives.

Sounds like someone doesn't like their job.

The whole post is amazing -- it reads like stereotypical cult propaganda straight out of science fiction. I definitely expect they'll one day be posting about how we can digitize our consciousness à la "Scratch" from that one Cowboy Bebop episode [1].

[1] https://cowboybebop.fandom.com/wiki/Scratch

[+] jddj|2 years ago|reply
That GitHub spiel looks AI-generated, to be brutally honest.
[+] Reclaimer|2 years ago|reply
This is absolutely not a joke.

We're radically devoted to Humanity.

And we're not an AI-related death cult. We're Human first; AI is simply a means to an end.

[+] api|2 years ago|reply
But after they're dead Roko's Basilisk will restore their digital doppelgängers and place them in a paradise run by superintelligences embodied within the quantum spin states of carbon atoms in a diamond lattice that will continue to exist until the heat death of the universe.
[+] typon|2 years ago|reply
This is what people at OpenAI believe, but they say it in a much more palatable way.
[+] zoogeny|2 years ago|reply
It is worth watching Yuval Noah Harari's recent talk at Frontiers Forum. [1]

In it he details the possibility of AI being used to create new religions that are so powerful and persuasive that they will be irresistible. Consider how QAnon caught on, despite pretty much anyone on HN being able to see it as a fraud. Most people are thinking about how AI will impact politics but I am really interested in how it will impact spirituality.

I've been rabbit-holing on last century's New Age cult scene and figures like Manly P. Hall and Rudolf Steiner. Even more respectable figures like Alan Watts were involved in some ... interesting ... endeavors, like the Esalen Institute.

We are overdue for a new kind of spirituality. My bet is that AI is going to bring it, whether we want it or not.

1. https://www.youtube.com/watch?v=LWiM-LuRe6w&ab_channel=Yuval...

[+] isoprophlex|2 years ago|reply
Someone has been inhaling too much Roko's Basilisk nonsense...
[+] gloryjulio|2 years ago|reply
He has used some Warhammer references. It's funny that the title "god emperor" is also from there; some people know it's a joke, but some are indeed treating it seriously.
[+] pizza|2 years ago|reply
What, no mention of Teilhard de Chardin’s Omega Point? ;) lol. As in, this is isomorphic to the ontology of “technology as the second coming of Christ”.
[+] jxy|2 years ago|reply
Glory to Mankind
[+] dventimihasura|2 years ago|reply
[+] doctoboggan|2 years ago|reply
@dang, I think the submission should be changed to this link so the discussion is about the concept "Tree of Thoughts" and not the current OP's personal beliefs.
[+] rahimnathwani|2 years ago|reply
Here are the prompt templates from the main code:

  prompt = f"Given the current state of reasoning: '{state_text}', pessimitically evaluate its value as a float between 0 and 1 based on it's potential to achieve {inital_prompt}"

  prompt = f"Write down your observations in format 'Observation:xxxx', then write down your thoughts in format 'Thoughts:xxxx Given the current state of reasoning: '{state_text}', generate {k} coherent solutions to achieve {state_text}"

  prompt = f"Given the current state of reasoning: '{state_text}', pessimistically evaluate its value as a float between 0 and 1 based on its potential to achieve {initial_prompt}"

  self.ReAct_prompt = "Write down your observations in format 'Observation:xxxx', then write down your thoughts in format 'Thoughts:xxxx'."

  prompt = f"Given the current state of reasoning: '{state_text}', generate {1} coherent thoughts to achieve the reasoning process: {state_text}"

  prompt = f"Given the current state of reasoning: '{state_text}', evaluate its value as a float between 0 and 1, become very pessimistic think of potential adverse risks on the probability of this state of reasoning achieveing {inital_prompt} and DO NOT RESPOND WITH ANYTHING ELSE: OTHER THAN AN FLOAT"

  prompt = f"Given the following states of reasoning, vote for the best state utilizing an scalar value 1-10:\n{states_text}\n\nVote, on the probability of this state of reasoning achieveing {inital_prompt} and become very pessimistic very NOTHING ELSE"

  self.ReAct_prompt = '''{{#assistant~}}
    {{gen 'Observation' temperature=0.5 max_tokens=50}}
    {{~/assistant}}'''

There are also some system prompts: https://github.com/kyegomez/tree-of-thoughts/blob/732791710e...
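
For a sense of how these get used, here's a minimal sketch of the state-evaluation step. The call_llm(prompt) helper is hypothetical, standing in for whatever model wrapper the repo plugs in:

  # Sketch of the state-evaluation step; call_llm(prompt) -> str is a
  # hypothetical stand-in for the repo's model wrapper.
  def evaluate_state(call_llm, state_text, initial_prompt):
      prompt = (
          f"Given the current state of reasoning: '{state_text}', "
          f"pessimistically evaluate its value as a float between 0 and 1 "
          f"based on its potential to achieve {initial_prompt}"
      )
      reply = call_llm(prompt)
      try:
          # Clamp to [0, 1] in case the model wanders out of range.
          return max(0.0, min(1.0, float(reply.strip())))
      except ValueError:
          # The model didn't return a bare float; treat as a dead branch.
          return 0.0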
[+] GreedClarifies|2 years ago|reply
This path feels correct to me. It feels like what we do as humans, and it seems like a reasonable way to start constructing "mode 2" thinking.

IDK if our current models have enough of "mode 1" to power this system. It's also plausible that our current "mode 1" systems are more than powerful enough and that inference speed (and thus the size/depth of the tree that can be explored) will be the most important factor.

I hope that the major players are looking at this and trying it out at scale (I know DeepMind wrote the original paper, but their benchmarks were quite unimpressive). It's plausible that we will have an AlphaGo moment with this scheme.
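
For reference, the search loop itself is simple to sketch: roughly a breadth-first search where the LLM both proposes and scores thoughts. This is only a sketch of the paper's BFS variant, assuming hypothetical generate_thoughts(state, k) and evaluate_state(state) helpers that each call the underlying model:

  # Sketch of ToT breadth-first search; generate_thoughts and
  # evaluate_state are hypothetical helpers that each query the LLM.
  def tot_bfs(problem, steps=4, k=5, beam_width=5):
      frontier = [problem]  # each state is a partial reasoning chain
      for _ in range(steps):
          candidates = []
          for state in frontier:
              for thought in generate_thoughts(state, k):  # "mode 1" proposals
                  candidates.append(state + "\n" + thought)
          # Score every candidate, keep only the best few ("mode 2" selection).
          frontier = sorted(candidates, key=evaluate_state, reverse=True)[:beam_width]
      return frontier[0]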

[+] pixl97|2 years ago|reply
I believe you are correct here, yet at the same time I think we're about 2 orders of magnitude off on the amount of compute power needed to do it effectively.

I think the first order of magnitude will be in tree-of-thought processing. The number of additional queries we need to run to get this to work is at least 10x, but I don't believe 100x.

I think the second order of magnitude will be multimodal inference, so the models can ground themselves in 'reality' data. Statements like "the brick lay on the ground and did not move" and "the brick floated away" are only decidable based on the truthfulness of all the other text corpora the model has looked at. To me it gets even more interesting when you tie in environmental data that is more likely to be factual, such as massive amounts of video.

[+] sdwr|2 years ago|reply
Yeah, looks very promising. Naively, though, it multiplies computation time by a factor of about 20x, if they're taking 5 samples per step and multiple steps per problem (rough math at the end of this comment).

https://imgur.com/a/VbpQZRm

As this gets explored further, I believe we will start finding out why human minds are constructed the way they are, from the practical/necessity direction. Seems like the next step is farming out subtasks to smaller models, and adding an orthogonal dimension of emotionality to help keep track of state.
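
Back-of-the-envelope on that 20x, with guessed numbers:

  # Rough call count; samples-per-step and step count are guesses.
  samples_per_step, steps = 5, 4
  generation_calls = samples_per_step * steps  # 20 completions vs. 1 for plain CoT
  evaluation_calls = samples_per_step * steps  # roughly doubles if each sample is also scored
  print(generation_calls + evaluation_calls)   # 40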

[+] raydiatian|2 years ago|reply
> This implementation of Tree of Thoughts is brought to you by Agora, Agora advances Humanity with open source SOTA Multi-Modality AI research! We plan on combating Humanity's grandest root problems like food insecurity, planetary insecurity, and disease, and hopefully death itself.

Wow. Lick, don’t sniff, the fresh paint.

[+] pixl97|2 years ago|reply
>We plan on combating Humanity's grandest root problems like food insecurity, planetary insecurity, and disease, and hopefully death itself.

If everyone is dead, you don't have to worry about death, or any of those other pesky, hard-to-solve problems!

[+] flakiness|2 years ago|reply
Note that the repo author != the paper author.

The research itself [1] seems legit. The paper's author also wrote a paper called ReAct [2], which is one of the core components of the LangChain framework.

[1] https://arxiv.org/abs/2305.10601
[2] https://arxiv.org/abs/2210.03629

[+] joshka|2 years ago|reply
Interestingly, 2 days prior to https://arxiv.org/abs/2305.10601, someone released https://arxiv.org/abs/2305.08291:

> Large Language Model Guided Tree-of-Thought
>
> In this paper, we introduce the Tree-of-Thought (ToT) framework, a novel approach aimed at improving the problem-solving capabilities of auto-regressive large language models (LLMs). The ToT technique is inspired by the human mind's approach for solving complex reasoning tasks through trial and error. In this process, the human mind explores the solution space through a tree-like thought process, allowing for backtracking when necessary. To implement ToT as a software system, we augment an LLM with additional modules including a prompter agent, a checker module, a memory module, and a ToT controller. In order to solve a given problem, these modules engage in a multi-round conversation with the LLM. The memory module records the conversation and state history of the problem solving process, which allows the system to backtrack to the previous steps of the thought-process and explore other directions from there. To verify the effectiveness of the proposed technique, we implemented a ToT-based solver for the Sudoku Puzzle. Experimental results show that the ToT framework can significantly increase the success rate of Sudoku puzzle solving. Our implementation of the ToT-based Sudoku solver is available on GitHub:

I don't recall whether it was this paper or another one I read that talks about using the LLM's ability to expose the probabilities of each token to measure the validity of particular completions. However, that isn't exposed in the OpenAI chat APIs (GPT-3.5-Turbo / GPT-4), just the completions APIs (text-davinci-003, etc.).
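
For what it's worth, here's a rough sketch of that scoring idea against the legacy completions API, which does return per-token logprobs. The model choice and the mean-logprob scoring are just one plausible way to do it, not anything from the paper:

  # Score a completion by its mean token log-probability, using the
  # legacy completions API (the chat API doesn't expose logprobs).
  import math
  import openai

  def completion_confidence(prompt):
      resp = openai.Completion.create(
          model="text-davinci-003",
          prompt=prompt,
          max_tokens=64,
          logprobs=1,
      )
      choice = resp["choices"][0]
      logprobs = choice["logprobs"]["token_logprobs"]
      # Geometric-mean per-token probability, as a rough validity score.
      return math.exp(sum(logprobs) / len(logprobs)), choice["text"]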

[+] m3kw9|2 years ago|reply
It’d be nice to include a few example uses and their outputs vs. other prompting methods.
[+] Jeff_Brown|2 years ago|reply
A claim like "improves reasoning by 70%" is too specific to come with neither a citation nor a definition.
[+] Imnimo|2 years ago|reply
My guess is that the author misunderstands this quote from the paper's abstract (74% minus 4% is a 70-percentage-point gap, which presumably became "at least 70%"):

>For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%.

[+] ysymyth|2 years ago|reply
This is Shunyu, author of Tree of Thoughts (arxiv.org/abs/2305.10601).

The official code to replicate paper results is https://github.com/ysymyth/tree-of-thought-llm

Not https://github.com/kyegomez/tree-of-thoughts which, according to many who have told me, is not a correct or good implementation of ToT, and damages the reputation of ToT.

I explained the situation here: https://twitter.com/ShunyuYao12/status/1663946702754021383

I'd appreciate your help in unstarring his and starring mine, as GitHub and Google searches currently go to his repo by default, and it has been very misleading for many users.

[+] SillyKyoushi|2 years ago|reply
I found this comment by searching "tree of thoughts arxiv github" on Google, so at least there's that. Thank you for the official link! I'm eager to try out this deliberate problem-solving stuff.
[+] startupsfail|2 years ago|reply
Checking whether GPT could be improved by running it multiple times is a good idea.

The answer to that is yes, but it is costly and slow, there is node collapse, it impacts context length, and it injects biases.

[+] nate|2 years ago|reply
I constantly ask ChatGPT "are you sure?" about its replies, and it almost always corrects a mistake it made that I've spotted.
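
That follow-up is easy to script as a second chat turn. A minimal sketch with the 2023-era openai library; the model name and question are just placeholders:

  # Ask, then challenge with "Are you sure?" in the same conversation.
  import openai

  history = [{"role": "user", "content": "What is 17 * 24?"}]
  first = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
  history.append(first["choices"][0]["message"])  # keep the model's answer in context
  history.append({"role": "user", "content": "Are you sure?"})
  second = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
  print(second["choices"][0]["message"]["content"])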
[+] xg15|2 years ago|reply
> This is an plug in and play version, connect your own models and enjoy superintelligence!

Share this repository by clicking on the following buttons! <smiley face>

2023 in a nutshell.

[+] tyropita|2 years ago|reply
Documentation looks really neat and in-depth, always appreciated. Looks like you’re missing a .gitignore file. Folders like __pycache__ don’t need to be checked in.
[+] ChrisAlexiuk|2 years ago|reply
https://youtu.be/bjnTy2TdmYw

I went through this in a video using the paper's official code - and it worked fairly well!

Definitely a great step forward in terms of reasoning tasks - even if it is an expensive step.

[+] doctoboggan|2 years ago|reply
This seems really interesting. I am glad many of these tools built up around LLMs let you bring your own model rather than relying on OpenAI.