top | item 39876447

Benchmarking LLMs against human expert-curated biomedical knowledge graphs

41 points | Al0neStar | 1 year ago | sciencedirect.com

5 comments


CraftingLinks|1 year ago

Academic writing 101: The abstract is NOT meant to be written as a cliff-hanger!

serialdev|1 year ago

You will not believe what it is all you need!

nyrikki|1 year ago

Due to the cliffhanger abstract, here is a passage from the discussion that may help:

> In our case, the manual curation of a proportion of triples revealed that Sherpa was able to extract more triples categorized as correct or partially correct. However, when compared to the manually curated gold standard, the performance of all automated tools remains subpar.
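Comparing automated extractions against a manually curated gold standard, as the quoted passage describes, usually boils down to set overlap of (subject, relation, object) triples. A minimal sketch, with hypothetical example triples (not from the paper):

```python
def triple_metrics(extracted, gold):
    """Precision/recall/F1 of extracted triples against a gold standard."""
    extracted, gold = set(extracted), set(gold)
    tp = len(extracted & gold)  # triples the tool got exactly right
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Hypothetical gold standard and tool output, for illustration only.
gold = {("aspirin", "treats", "headache"),
        ("metformin", "treats", "type 2 diabetes")}
extracted = {("aspirin", "treats", "headache"),
             ("aspirin", "causes", "headache")}
p, r, f1 = triple_metrics(extracted, gold)  # 0.5, 0.5, 0.5
```

Exact string matching is the strictest variant; the "partially correct" category in the paper implies a fuzzier matching scheme than this sketch uses.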

jmugan|1 year ago

I didn't see UMLS in the paper, but I've tried some of their human-created biomedical knowledge graphs, and they were too full of errors to be used. I imagine different ones have different levels of accuracy.

egberts1|1 year ago

I was right; LLMs need two major components added before we can swan-dive into the humanistic aspects of medicine/psychology/politics using a form of LLM:

1) a weighting of each statement for its probability of correctness, and

2) a citation for each source.
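The two components above amount to attaching a confidence score and a provenance reference to every extracted statement. A minimal sketch of such a record; the field names and the example values are hypothetical, not from the paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WeightedStatement:
    """A knowledge-graph triple carrying the two extra components:
    an estimated probability of correctness and a source citation."""
    subject: str
    relation: str
    obj: str
    confidence: float  # component 1: estimated probability of correctness
    citation: str      # component 2: source identifier (e.g. a PMID or URL)

# Illustrative instance with made-up values.
stmt = WeightedStatement(
    subject="metformin",
    relation="treats",
    obj="type 2 diabetes",
    confidence=0.92,
    citation="PMID:00000000",
)
```

Making the record frozen keeps statements hashable, so downstream deduplication against a gold standard can treat them as set elements.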