nijuashi | 7 months ago
In that sense, you’d give the LLM the purpose of the paper, the field you’re writing in, and the relevant data from your lab notebook. Personally, I never enjoyed writing manuscripts — most of the time goes into citing every claim and formatting everything correctly, which often feels more like clerical work than communicating discovery.
I don’t mind if LLMs help write these papers. I don’t think learning to mimic this stylistic form necessarily adds to the process of discovery. Scientists should absolutely be rigorous and clear, but I’d welcome offloading the unnecessary tedium of stylized writing to automation.
pcrh | 7 months ago
I have yet to be convinced that the tasks you propose for an LLM contribute any more to the process of writing a paper than dictating to a typist did in the 1950s. It's impressive for a machine, but not particularly productivity-boosting. Tedious tasks such as correctly formatting references belong to the copy-editing stage (i.e. the very last stage of writing a paper), where indeed I have seen journals adopt "AI" approaches. But these processes are not a bottleneck in the scientist's workflow.
I certainly don't think the performance of LLMs that I'm familiar with would be any use at all in compiling the original data into scientifically accurate figures and text, and providing meaningful interpretations. Most likely they would simply throw out random "hallucinations" in grammatically correct prose.
knappa | 7 months ago
If there is any use for LLMs in paper writing, I would think that it is for tedious but not well-defined tasks. For example, asking if an already written paper conforms to a journal's guidelines and style. I don't know about you, but I spend a meaningful amount of time [2] getting my papers into journal page limits. That involves rephrasing to trim overhangs, etc. "Rephrase the following paragraph to reduce the number of words by at least 2" is the kind of thing that LLMs really do seem to be able to do reliably.
1: As usual, the input data can be wrong, but that would be a problem for LLMs too.
2: I don't actually know how much time. It probably isn't all that long, but it's tedious and sure does feel like a long time while I'm doing it.
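The pre-flight part of the task knappa describes, i.e. spotting which paragraphs overrun a budget before handing them to an LLM with a "rephrase to reduce the number of words" prompt, is easy to script. A minimal sketch (the function name and budget are hypothetical, not from any real tool):

```python
# Hypothetical helper: report which paragraphs exceed a word budget,
# so only those get sent to an LLM for trimming.
def over_budget(paragraphs, word_budget):
    """Return (index, word_count) for each paragraph exceeding word_budget."""
    return [
        (i, len(p.split()))
        for i, p in enumerate(paragraphs)
        if len(p.split()) > word_budget
    ]

draft = [
    "Short intro paragraph.",
    "This methods paragraph rambles on well past the limit set by the journal style guide.",
]
print(over_budget(draft, 10))  # -> [(1, 15)]
```

The point is the division of labor: the mechanical check is deterministic and cheap, while the actual rephrasing is the part an LLM does reliably.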