top | item 35316576

abbefaria27 | 2 years ago

The author just needed to get a second opinion. I showed this to a veterinarian I know, and she said this is pretty obvious from a quick glance at the bloodwork. In fact, she's skeptical that the first vet missed it, and thinks there may have been some miscommunication instead. You can look up anemia in a reference and it lists the same differentials. Though according to her, ChatGPT didn't present them in the most likely order (e.g. babesiosis should be lower), and there are many other factors to consider.

This is still very impressive for a computer program, but not as mind-blowing as I first thought when reading the thread. ChatGPT didn't find some obscure disease like in a medical TV show. Rather, it correctly read the low blood cell count, and pulled up the differentials for anemia from a reference book.

On a side note, considering how often ChatGPT will lie with full confidence, I personally can't imagine using it for anything medically related.

Tolaire | 2 years ago

And yet, a dog is alive.

When benchmarking AI vs. humans, it's important to take into account how garbage humans can be.