roykishony | 1 year ago
You suggested some directions for more complex analysis that could be done on this data. I would be so curious to see what you get if you take the time to run data-to-paper as a co-pilot on your own - you can then give it directions and feedback on where to go. It will be fascinating to see where you take it!
We also must look ahead: complexity and novelty will rapidly increase as ChatGPT-5, ChatGPT-6, etc. are rolled out. The key with data-to-paper is building a platform that harnesses these tools in a structured way to create transparent, well-traceable papers. Your ability to read, understand, and follow all the analysis in these manuscripts so quickly speaks to your talent, of course, but also to the way these papers are structured. Speaking from experience, it is much harder to review human-created papers at such speed and accuracy...
As for your comment that "it's certainly not close to something I could submit to a journal" - please take a look at the examples where we show reproducing peer-reviewed publications (published in PLOS One, a perfectly reasonable Q1 journal). See this original paper by Saint-Fleur et al: https://journals.plos.org/plosone/article?id=10.1371/journal...
and here are 10 different independent data-to-paper runs in which we gave it the raw data and the research goal of the original publication and asked it to do the analysis, reach conclusions, and write the paper: https://github.com/rkishony/data-to-paper-supplementary/tree... (look for the 10 manuscripts designated "manuscriptC1.pdf" - "manuscriptC10.pdf")
See our own analysis of these manuscripts and their reliability in our arXiv preprint: https://arxiv.org/abs/2404.17605
Note that the original paper was published after the training horizon of the LLM we used, and that we programmatically removed the original paper from the results of data-to-paper's literature search, so the model could not see it during the search.
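To make that exclusion step concrete, here is a minimal sketch of how filtering a known paper out of literature-search results can work - the function name, the result-dictionary fields, and the DOI are all illustrative assumptions, not data-to-paper's actual API:

```python
# Hypothetical sketch: drop a specific publication from literature-search
# results by DOI, so downstream steps never see it.
# Field names ("doi", "title") and the example DOI are made up for illustration.

def exclude_paper(search_results, excluded_doi):
    """Return search results with the excluded DOI removed (case-insensitive)."""
    excluded = excluded_doi.strip().lower()
    return [
        r for r in search_results
        if r.get("doi", "").strip().lower() != excluded
    ]

results = [
    {"title": "Some related study", "doi": "10.1234/example.111"},
    {"title": "The original paper", "doi": "10.1234/example.999"},
]
filtered = exclude_paper(results, "10.1234/EXAMPLE.999")
print([r["title"] for r in filtered])  # the original paper is gone
```

Matching on a normalized DOI is a deliberate choice here: titles can vary across search engines, while a DOI is a stable identifier for the published record.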
Thanks so much again and good luck for the exam tomorrow!