themanmaran|1 year ago
Update: Just ran our benchmark on the Mistral model, and the results are... surprisingly bad?

Mistral OCR:
- 72.2% accuracy
- $1/1000 pages
- 5.42s/page

Which is a pretty far cry from the 95% accuracy they were advertising from their private benchmark. The biggest thing I noticed is how it skips anything it classifies as an image/figure. Charts, infographics, some tables, etc. all get lifted out and returned as [image](image_002), whereas the other VLMs are able to interpret those images into a text representation.

https://github.com/getomni-ai/benchmark
https://huggingface.co/datasets/getomni-ai/ocr-benchmark
https://getomni.ai/ocr-benchmark

kergonath|1 year ago

cdolan|1 year ago

WhitneyLand|1 year ago
It's interesting that none of the existing models can decode a Scrabble board screenshot and give an accurate grid of characters.

I realize it's not a common business case; I came across it while testing how well LLMs can solve simple games. On a side note: if you bypass OCR and give the models a text layout of the board, standard LLMs cannot solve Scrabble boards, but the thinking models usually can.
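The "text layout of a board" approach mentioned above can be sketched quickly. A minimal illustration in Python — the helper name, the list-of-strings board representation, and the `.`-for-empty-square convention are my own assumptions, not details from the thread:

```python
# Hypothetical helper: render a Scrabble-style board (equal-length strings,
# '.' marking empty squares) as a coordinate-labelled text grid, so an LLM
# prompt can reference cells like "B2" instead of parsing a screenshot.
def board_to_text(rows: list[str]) -> str:
    width = len(rows[0])
    # Column letters across the top: A, B, C, ...
    header = "   " + " ".join(chr(ord("A") + i) for i in range(width))
    lines = [header]
    # Row numbers down the side, cells space-separated to align with header.
    for n, row in enumerate(rows, start=1):
        lines.append(f"{n:>2} " + " ".join(row))
    return "\n".join(lines)

board = [
    ".C..",
    ".A..",
    ".TO.",
    "....",
]
print(board_to_text(board))
```

Feeding a grid like this to a model sidesteps the OCR step entirely, which is the comparison the comment describes.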
xnx|1 year ago
ChemSpider|1 year ago
resource_waste|1 year ago
I'll give it a try, but I'm not holding my breath. I'm a huge AI enthusiast, and I've yet to be impressed with anything they've put out.
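The earlier observation that Mistral OCR returns figure regions as `[image](image_002)` placeholders can be checked mechanically by scanning its markdown output. A minimal sketch — it assumes placeholders follow the exact format quoted in that comment, which real output may not:

```python
import re

# Pattern for placeholder links like "[image](image_002)" that, per the
# comment above, appear where charts/infographics/tables were lifted out.
# The exact format is assumed from the single quoted example.
PLACEHOLDER = re.compile(r"\[image\]\(image_\d+\)")

def count_dropped_figures(markdown: str) -> int:
    """Count figure regions returned as placeholders instead of text."""
    return len(PLACEHOLDER.findall(markdown))

page = "Revenue grew 12% YoY.\n\n[image](image_002)\n\nSee chart above."
print(count_dropped_figures(page))  # prints 1
```

Counting placeholders per page gives a rough measure of how much content the model skipped rather than transcribed.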