nickwatson | 10 months ago
Almost every subject has been learned this way, whether at school from a teacher or text-book, or reading papers.
The Oxford dictionary definition says the same: "to study a subject in detail". This is what AI is doing. I see it as a "power suit" for distilling information much faster, without the cognitive biases that many of us carry.
Learning is an important part of research, and it must come with discernment about the credibility of existing research, including identifying where the gaps are. That kind of critical thinking allows for another level: experiments, surveys, and so on, to uncover things even further.
If you were to study the language of dolphins today, where would you start? Would you jump into the ocean and start trying to talk with them, or would you look up what has already been discovered? Would you study their behaviors, patterns, and so on?
What drove me to do this project is exactly the example you mentioned: the flat-earther type who looks up an article on some free hosting website, or Sandra from accounts' social media page, and takes it as the be-all and end-all of knowledge, without bias recognition or critical thinking. This is where I'm hoping to level the playing field and ensure unbiased, balanced information is uncovered.
latexr | 10 months ago
It is naive and incorrect to believe LLMs do not have biases. Of course they do: they are all trained on biased content. There are plenty of articles on the subject.
> Would you jump into the ocean and start trying to talk with them, or would you look up what is already discovered?
Why resort to straw-man arguments? Of course anyone would start by looking up what has already been discovered; that doesn't immediately mean reaching for and blindly trusting a random LLM. The first thing you should do, in fact, is figure out which prior research is important and reliable. There are too many studies out there that are obviously subpar or outright lies.
nickwatson | 10 months ago
I agree with first figuring out which research is most important and reliable. There is a planning stage to consider the sources and which ones hold credibility.
In addition, the user has full control over the sources the tool uses, and can even add their own (via MCP tools).
Also, being open source, you have full control over the flow, prompts, source methods, and so on, and as a result can tune this yourself and even contribute improvements to ensure it benefits research as a whole.
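To make the "add your own sources" idea concrete, here is a minimal sketch of what a pluggable source could look like. The names `Document`, `ArxivSource`, and `register_source` are purely illustrative assumptions, not the tool's actual API; a real source would call out to an external service (for example through an MCP tool) instead of returning canned results.

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    url: str
    text: str

class ArxivSource:
    """Hypothetical custom source: answers queries with matching documents."""
    name = "arxiv"

    def search(self, query: str, limit: int = 5) -> list[Document]:
        # A real implementation would query an API (e.g. via an MCP tool);
        # canned results keep this sketch self-contained and runnable.
        corpus = [
            Document("Attention Is All You Need",
                     "https://arxiv.org/abs/1706.03762",
                     "transformer architecture for sequence transduction"),
            Document("Deep Residual Learning",
                     "https://arxiv.org/abs/1512.03385",
                     "residual networks for image recognition"),
        ]
        return [d for d in corpus if query.lower() in d.text][:limit]

# A simple registry lets the research flow discover user-supplied sources.
sources = {}

def register_source(src):
    sources[src.name] = src

register_source(ArxivSource())
results = sources["arxiv"].search("transformer")
print(results[0].title)  # → Attention Is All You Need
```

The point of the registry pattern is that the research pipeline only depends on the `search` interface, so credibility weighting or filtering can be applied uniformly regardless of where a source comes from.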
I welcome your feedback, and any code changes you propose to improve the tool. You clearly understand what makes good research, and your contributions would be highly valued by all of us.
vidarh | 10 months ago
I think you're right to describe it as research in the headline, because a lot of people will relate more to that term. But perhaps describe it as conducting a literature review further down.
nickwatson | 10 months ago
I didn't give the wording too much thought, in all honesty. I was just excited to share.
Where would you suggest putting the literature review text? The Readme.md?
What about something like "synthesized findings from sources across the internet"?
When I see the word literature, I immediately think of books.
sReinwald | 10 months ago
First, AI systems absolutely embody cognitive biases; they're just different from human ones. These systems inherit biases from the content they're trained on and from the design decisions of their creators. An AI doesn't independently evaluate source credibility or apply domain expertise; it synthesizes patterns from its training data according to its programming.
Second, you frame AI as a "power suit" for distilling information faster. While speed has its place, a core value of doing research isn't just arriving at a final summary. It's the process of engaging with a vast, often messy, diversity of information, facts, opinions, and even flawed arguments. Grappling with that breadth, identifying conflicting viewpoints, and synthesizing them _yourself_ is where deep understanding and critical thinking are truly built.
Skipping straight to the "distilled information", as useful as it might be for some tasks, feels like reading an incredibly abridged version of Lord of the Rings: a small man finds a powerful ring once owned by an evil god, makes some friends, and ends up destroying the ring in a volcano. The end. You miss all the nuance, context, and struggle that create real meaning and understanding.
Following on from that, you suggest that this AI-driven distillation then "allows for another level, experiments, surveys, etc to uncover things even further." I'd argue the opposite is more likely. These tools are bypassing the very cognitive effort that develops critical thinking in the first place. The essential practice for building those skills involves precisely the tasks these tools aim to automate: navigating contradictory information, assessing source reliability, weighing arguments, and constructing a reasoned conclusion yourself. By offloading this fundamental intellectual work, we remove the necessary exercise. We're unfortunately already seeing glimpses of this, with people resorting to shortcuts like asking "@Grok is this true???" on Twitter instead of engaging critically with the information presented to them.
Tools like this might offer helpful starting points or quick summaries, but they can't replicate the cognitive and critical thinking benefits of the research journey itself. They aren't a substitute for the human mind actively wrestling with information to achieve genuine understanding, which is the foundation required before one can effectively design meaningful experiments or surveys.
nickwatson | 10 months ago
As humans, we align with our experiences and values, all of which are diverse and nuanced. Reminds me of a friend who loves any medical conspiracy theory, whose dad was a bit of an ass to him and, of course, a scientist!
Without our cognitive biases, are we truly human? Our values and our desired outcomes are inherently part of what shapes us, and of course the sources we choose to trust reinforce this.
It's this that makes me think AGI can never be achieved, nor a human-like ability for AI to think, because we are all biased, like it or not. Collectively, and through challenging each other, this is what makes society thrive.
I feel there is no true path towards a single source of truth, but collaboratively we can at least work towards getting as close as possible.