nikhilsimha | 1 year ago
i do think ip infringement is not cool in general - but it doesnt seem right that geo research is private property.
roenxi | 1 year ago
I have a friend who applies research to businesses as a consultant. One of his biggest challenges is how to index all the papers and work out what is relevant to a particular topic. I don't know if the current generation of bots are up to the challenge, but sooner or later ProfessorGPT will be perfect for that niche. Then journals that force humans to manually search through large numbers of papers will be massive albatrosses that hamper scientific progress.
threeseed | 1 year ago
This is debatable.
I've seen countless "AI on knowledge base" projects, and on the whole they haven't been much better than just using ElasticSearch. Some aspects are better, e.g. discovery, but some are worse, e.g. accuracy and speed when you are looking for something specific.
I would argue that simply having a knowledge graph in front that can provide related papers for a topic would accomplish the goals better.
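The "knowledge graph in front" idea can be sketched minimally: papers as nodes, topic tags as edges, and a query that walks from a topic to its related papers. Everything here (class name, paper titles, topics) is hypothetical, just to illustrate the shape of the approach.

```python
from collections import defaultdict

class PaperGraph:
    """Toy topic-to-paper index; a stand-in for a real knowledge graph."""

    def __init__(self):
        # Each topic maps to the set of paper titles tagged with it.
        self.topic_to_papers = defaultdict(set)

    def add_paper(self, title, topics):
        # Tag the paper under each of its topics (case-insensitive).
        for topic in topics:
            self.topic_to_papers[topic.lower()].add(title)

    def related(self, topic):
        # Return all papers tagged with the given topic, sorted for stable output.
        return sorted(self.topic_to_papers.get(topic.lower(), set()))

# Hypothetical usage: index two papers and look up a topic.
g = PaperGraph()
g.add_paper("Presolving in linear programming", ["simplex", "presolve"])
g.add_paper("A fast dual simplex implementation", ["simplex"])
print(g.related("simplex"))
```

A production system would of course need richer edges (citations, shared authors, semantic similarity), but the core lookup stays this simple.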
jcranmer | 1 year ago
I have a hard time seeing this. If you're an academic or an industrial researcher, the hard part of the literature review isn't finding the relevant papers, it's digesting them--and in some fields (e.g., chemistry), replicating their results. If you're more an industry person trying to apply academic research, then in general you probably want a good textbook synthesis of the field rather than trying to understand stuff from research papers.
From your second paragraph, it seems to me that you're thinking AI will help with the textbook-synthesis step, but this is the sort of thing that, as far as I can tell, current LLMs are just fundamentally bad at. To use a concrete example: I have been off-and-on poking at research into simplex presolving, and one of the things you quickly find is that just about everybody has their own definition of the "standard model", so to mix and match different papers, you have to start by recasting everything into a single model. And capturing the nuance of "these papers use the same symbols to mean completely different things" isn't a strong point of LLMs.