
Replace OCR with Vision Language Models

292 points| EarlyOom | 1 year ago |github.com

125 comments


rafram|1 year ago

It’s an interesting idea, but still way too unreliable to use in production IMO. When a traditional OCR model can’t read the text, it’ll output gibberish with low confidence; when a VLM can’t read the text, it’ll output something confidently made up, and it has no way to report confidence. (You can ask it to, but the number will itself be made up.)

I tried using a VLM to recognize handwritten text in genealogical sources, and it made up names and dates that sort of fit the vibe of the document when it couldn’t read the text! They sounded right for the ethnicity and time period but were entirely fake. There’s no way to ground the model using the source text when the model is your OCR.

themanmaran|1 year ago

Thing is, the majority of OCR errors aren't character issues but layout issues: things like complex tables with cells returned under the wrong header. If the numbers in an income statement are one column off, that creates a pretty big risk.

Confidence intervals are a red herring. And only as good as the code interpreting them. If the OCR model gives you back 500 words all ranging from 0.70 to 0.95 confidence, what do you do? Reject the entire document if there's a single value below 0.90?

If so, you'd be passing every single document to human review, and might as well not run the OCR at all. But if you're not rejecting based on confidence, then you're exposed to just as much risk as using an LLM.
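The threshold dilemma is easy to see in code. This is a minimal sketch (the function name and numbers are made up for illustration): with per-word confidences spread over a typical range, a strict cutoff flags nearly every document for human review, while a loose one flags nothing and lets low-confidence words through unchecked.

```python
# Sketch: routing OCR output to human review based on per-word confidence.
# A strict cutoff rejects almost every real document; a loose one passes
# low-confidence words through silently.

def needs_review(word_confidences, cutoff):
    """Flag a document if any word falls below the cutoff."""
    return any(c < cutoff for c in word_confidences)

# 500 words spread between 0.70 and 0.95, as in the example above.
confs = [0.70 + 0.25 * i / 499 for i in range(500)]

print(needs_review(confs, 0.90))  # strict: document is flagged
print(needs_review(confs, 0.65))  # loose: nothing flagged, errors pass silently
```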

constantinum|1 year ago

The primary issue with LLMs is hallucination, which can lead to incorrect data and flawed business decisions.

For example, Llamaparse(https://docs.llamaindex.ai/en/stable/llama_cloud/llama_parse...) uses LLMs for PDF text extraction but faces hallucination problems. See this issue for more details: https://github.com/run-llama/llama_parse/issues/420.

For those interested, try LLMWhisperer(https://unstract.com/llmwhisperer/) for OCR. It avoids LLMs, eliminates hallucination issues, and preserves the input document layout for better context.

Examples of extracting complex layout:

https://imgur.com/a/YQMkLpA

https://imgur.com/a/NlZOrtX

https://imgur.com/a/htIm6cf

EarlyOom|1 year ago

This is the main focus of VLM Run and typed extraction more generally. If you provide proper type constraints (e.g. with Pydantic) you can dramatically reduce the surface area for hallucination. Then there's actually fine-tuning on your dataset (we're working on this) to push accuracy beyond what you get from an unspecialized frontier model.
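A minimal sketch of the typed-extraction idea with Pydantic (the schema and field names here are invented for illustration; the dict stands in for whatever the VLM returns). Type constraints turn a hallucinated or malformed value into a validation error instead of silently accepted output, and an optional field gives the model a legal way to say "not present":

```python
# Sketch: constraining VLM output with a Pydantic schema so type errors
# (malformed dates, non-numeric amounts) are caught rather than kept.
from datetime import date
from typing import Optional
from pydantic import BaseModel, ValidationError

class InvoiceFields(BaseModel):
    vendor: str
    issued: date                     # a made-up date string fails validation here
    total: float
    po_number: Optional[str] = None  # null is allowed, so the model isn't
                                     # pushed to invent a value

good = InvoiceFields.model_validate(
    {"vendor": "Acme", "issued": "2024-03-01", "total": "129.50", "po_number": None}
)

try:
    InvoiceFields.model_validate({"vendor": "Acme", "issued": "not a date", "total": "?"})
except ValidationError:
    print("rejected instead of hallucinated")
```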

KoolKat23|1 year ago

I've been using Gemini 2 Flash to extract financial data. Within my sample, which is perhaps small (probably 1,000 entries so far), I've had one single error, so roughly a 99.9% success rate.

(There are slightly more errors if I ask it to add numbers, but that isn't OCR and is a bit more of a reach, although it's very good at this too regardless.)

Many hallucinations can be avoided by telling it to use null if there is no number present.

cratermoon|1 year ago

Agree wholeheartedly. Modern OCR is astonishingly good, and more importantly it's deterministically so. Its failure modes, when it's unable to read the text, are recognizably failures.

Results for VLM accuracy & precision are not good. https://arxiv.org/html/2406.04470v1#S4

delichon|1 year ago

How about calculating confidence in terms of which output regions are stable across multiple tries on the same input? Expensive, but hallucinations should vary more between runs than high-confidence regions do on average.
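That idea can be sketched as per-field agreement across repeated runs. Everything below is illustrative: the `runs` list stands in for N extractions by a real VLM on the same document, and stable fields score high while wobbling ones (likely hallucinations) score low:

```python
# Sketch: estimate confidence as agreement across repeated extractions.
from collections import Counter

def field_confidence(samples):
    """samples: list of dicts, one per run.
    Returns {field: (majority_value, agreement_fraction)}."""
    out = {}
    for field in samples[0]:
        votes = Counter(s[field] for s in samples)
        value, count = votes.most_common(1)[0]
        out[field] = (value, count / len(samples))
    return out

# Stand-in for three VLM runs over the same genealogical record.
runs = [
    {"name": "Maria Kovacs", "year": "1887"},
    {"name": "Maria Kovacs", "year": "1881"},
    {"name": "Maria Kovacs", "year": "1887"},
]
result = field_confidence(runs)
# "name" is stable (agreement 1.0); "year" wobbles, so it deserves review
```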

staticman2|1 year ago

I think it would be pretty reliable in controlled circumstances. If I take a picture of a book with my cell phone- google Gemini pro is much better at recognizing the text than Samsung's built in OCR.

the8472|1 year ago

Shouldn't confidence be available at the sampler level and also be conditional on the vision input, not just the next-token prediction?

j_bum|1 year ago

This is naive, but can you ask the model to provide a confidence rating for sections of the document?

themanmaran|1 year ago

We recently published an open source benchmark [1] specifically for evaluating VLM vs OCR. And generally the VLMs did much better than the traditional OCR models.

VLM highlights:

- Handwriting. Being contextually aware helps here. i.e. they read the document like a human would, interpreting the whole word/sentence instead of character by character

- Charts/Infographics. VLMs can actually interpret charts or flow diagrams into a text format. Including things like color coded lines.

Traditional OCR highlights:

- Standardized documents (e.g. US tax forms that they've been trained on)

- Dense text. Imagine textbooks and multi-column research papers. This is the easiest OCR use case, but VLMs really struggle as the number of output tokens increases.

- Bounding boxes. There still isn't really a model that gives super precise bounding boxes. Supposedly Gemini and Qwen were trained for it, but they don't perform as well as traditional models.

There's still a ton of room for improvement, but especially with models like Gemini the accuracy/cost is really competitive.

[1] https://github.com/getomni-ai/benchmark

fzysingularity|1 year ago

Saw your benchmark, looks great. We'll run our models against that benchmark and share some of our learnings.

As you mentioned there are a few caveats to VLMs that folks are typically unaware of (not at all exhaustive, but the ones you highlighted):

1. Long-form text (dense): Output token limits of 4K/8K mean that dense pages can exceed what the LLM can emit. This requires some careful work to make VLMs work as seamlessly as OCR.

2. Visual grounding, a.k.a. bounding boxes, is definitely one of those things that VLMs aren't natively good at (partly because the cross-entropy losses used aren't really geared for bounding-box regression). We're definitely making some strides here [1] to improve that, so you're going to get an experience that is almost as good as native bounding-box regression (all within the same VLM).

[1] https://colab.research.google.com/github/vlm-run/vlmrun-cook...
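The long-form-text caveat in point 1 can be worked around by chunking. This is a rough sketch (the budget numbers and the ~4-characters-per-token heuristic are illustrative, not anyone's actual pipeline): split a dense page's lines into batches so that no single call has to emit more tokens than the model's output limit.

```python
# Sketch: split dense-page lines into chunks that each fit an
# output-token budget, so no single VLM call overruns its limit.

def chunk_lines(lines, max_output_tokens=4000, chars_per_token=4):
    budget = max_output_tokens * chars_per_token
    chunks, current, size = [], [], 0
    for line in lines:
        if current and size + len(line) > budget:
            chunks.append(current)
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append(current)
    return chunks

page = ["x" * 100] * 500  # ~50k characters of dense text
chunks = chunk_lines(page)  # several smaller calls instead of one oversized one
```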

ekidd|1 year ago

I've been experimenting with vlm-run (plus custom form definitions), and it works surprisingly well with Gemini 2.0 Flash. Costs, as I understand, are also quite low for Gemini. You'll have best results with simple to medium-complexity forms, roughly the same ones you could ask a human to process with less than 10 minutes of training.

If you need something like this, it's definitely good enough that you should consider kicking the tires.

fzysingularity|1 year ago

Very cool! If you have more examples / schemas you'd be interested in sharing, feel free to add to the `contrib` section.

rendaw|1 year ago

Why do all these OCR services only show examples with flawless screenshots of digital documents? Are there that many people trying to OCR digital data? Why not just copy the HTML?

If it's not intended for digital documents, where are the screenshots with fold marks, slipping lines, lighting gradients, thumbs, etc etc.

orliesaurus|1 year ago

I think OCR tools are good at what they say on the box, recognizing characters on a piece of paper etc. If I understand this right, the advantage of using a vision language model is the added logic that you can say things like: "Clearly this is a string, but does it look like a timestamp or something else?"

EarlyOom|1 year ago

VLMs are able to take context into account when filling in fields, following either a global or field specific prompt. This is great for e.g. unlabeled axes, checking a legend for units to be suffixed after a number, etc. Also, you catch lots of really simple errors with type hints (e.g. dates, addresses, country codes etc.).

raxxorraxor|1 year ago

This has always been part of the complete OCR package as far as I know. The raw result of an OCR constantly fails to differentiate 1 l I i | or other similar symbols/letters.

Maybe this necessary step can be improved and altered with a VLM. There is also the preprocessing where the image get its perspective corrected. Not sure how well a VLM performs here.

As you said, I think combining these techniques will be the most efficient way forward.

vintermann|1 year ago

You can also use it for robustness. Looking at e.g. historical censuses, it's amazing how many ways people found to not follow the written instructions for filling them out. Often the information you want is still there, but woe to you if you look at the columns one by one and assume the information in them to be accurate and neatly within its bounding box.

BrannonKing|1 year ago

What I want: take a scan/photo of a document (including a full book), pass it to the language model, and then get out a LaTeX document that matches the original document exactly (minus the copier/camera glitches and angles). I feel like some kind of reinforcement learning model would be possible for this. It should be able to learn to generate LaTeX that reproduces the exact image, pixel for pixel (learning which pixels are just noise).

NoMoreNicksLeft|1 year ago

A big difficulty there is typeface detection, some of these were never digital fonts. But, even if it could detect them, you likely don't have those fonts on your computer to be able to put it back together as a digital typesetting for any but the most trivial fonts.

sva_|1 year ago

Did you try mathpix? Not sure about full pages, but it is pretty good at eqn

erulabs|1 year ago

You sort of have to use both. OCR and LLM and then correlate the two results. They are bad at very different things, but a subsequent call to a 2nd LLM to pair together the results does improve quality significantly, plus you get both document understanding and context as well as bounding boxes, etc.
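A sketch of what that second reconciliation call might look like, just the prompt assembly (everything here, including the function name, instruction wording, and data shapes, is made up for illustration, not a real product's pipeline):

```python
# Sketch: build the second-pass prompt that asks an LLM to reconcile
# traditional OCR output (with confidences and bounding boxes) against
# a VLM's free-form transcription of the same page.

def reconciliation_prompt(ocr_words, vlm_text):
    """ocr_words: list of (text, confidence, bbox) from the OCR engine.
    vlm_text: the VLM's transcription of the same page."""
    ocr_dump = "\n".join(
        f"{text}\tconf={conf:.2f}\tbox={bbox}" for text, conf, bbox in ocr_words
    )
    return (
        "Two systems read the same page. Merge them into one transcription.\n"
        "Prefer the OCR text where its confidence is high; prefer the VLM text\n"
        "where the OCR is low-confidence or garbled. Keep reading order.\n\n"
        f"OCR output:\n{ocr_dump}\n\nVLM output:\n{vlm_text}\n"
    )

prompt = reconciliation_prompt(
    [("lnvoice", 0.61, (10, 10, 80, 30)), ("#1042", 0.97, (90, 10, 140, 30))],
    "Invoice #1042",
)
```

You keep the OCR engine's bounding boxes for grounding while letting the LLM fix its character-level mistakes (here, "lnvoice").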

I'm building a "never fill out paperwork again" app, if anyone is interested, would be happy to chat!

fzysingularity|1 year ago

We think VLMs will outperform most OCR+LLM solutions in due time. I get that there's a need for these hybrid solutions today, but we're comparing 20+ years of mature tech against something that's roughly 1.5 years old.

Also, VLMs are end-to-end trainable, unlike OCR+LLM solutions (that are trained separately), so it’s clear that these approaches scale much better for domain-specific use cases or verticals.

cpursley|1 year ago

Any tips on how to prompt that second pairing step? And what sort of things to ask the llm to extract in step 1?

K0balt|1 year ago

A VLM that invokes ocr tool use is a compelling idea that could result in pretty good results, I would expect.

serjester|1 year ago

Good to see more work being done here, but I don't understand why this is tied to someone's proprietary API. Swapping model providers and adding some basic logging is not remotely painful enough to justify onboarding yet another vendor. Especially one that's handling something as sensitive as LLM prompts.

iLemming|1 year ago

What's the fastest and most accurate CLI OCR tool? My use case is simple - I want to be able to grab a piece of screen (Flameshot is great for that), and OCR it. I need this for note-taking during pair-programming over Zoom.

Currently I'm using tesseract - it works, it's fast, but it also makes mistakes; it would be also great if it could discern tabular data and put them in ascii or markdown tables. I've tried docling, but it feels like a bit of an overkill. It seems to be slower - remember, I need to be able to grab the text from the screenshot very quickly. I have only tried default settings, maybe tweaking it would improve things.

Can anyone share some thoughts on this? Thanks!
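One way to get closer to tabular output from Tesseract without a heavier tool is its TSV mode (`tesseract image.png stdout tsv`), which emits word coordinates. Below is a sketch of grouping words into rows by their `top` coordinate; the sample TSV is hand-made and abridged to just the columns used, and the tolerance value is a guess you'd tune:

```python
# Sketch: parse Tesseract TSV output and rebuild rows by vertical position,
# which is often enough to line up simple tables for a markdown-ish dump.
import csv, io

def rows_from_tsv(tsv_text, row_tolerance=10):
    words = []
    for rec in csv.DictReader(io.StringIO(tsv_text), delimiter="\t"):
        if rec["text"].strip():
            words.append((int(rec["top"]), int(rec["left"]), rec["text"]))
    words.sort()  # top-to-bottom, then left-to-right
    rows, last_top = [], None
    for top, left, text in words:
        if last_top is None or top - last_top > row_tolerance:
            rows.append([])  # vertical gap: start a new row
        rows[-1].append(text)
        last_top = top
    return [" | ".join(r) for r in rows]

# Hand-made sample in Tesseract's TSV column layout (abridged).
sample = "top\tleft\ttext\n12\t10\tIncome\n14\t200\t500\n52\t10\tExpenses\n53\t200\t120\n"
print(rows_from_tsv(sample))  # → ['Income | 500', 'Expenses | 120']
```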

ANighRaisin|1 year ago

The AI OCR built into the Snipping Tool on Windows is better than Tesseract, albeit more inconvenient than something like PowerToys or Capture2Text, which use a quick shortcut.

leecarraher|1 year ago

Maybe it was my prompt, but there seems to be far too much interpretation after the image embedding. In my examples it implicitly started to summarize parts of the text, unfortunately incorrectly. On an invoice with typed lettering it summarized that payments submitted would not post for 2-3 business days, when in reality the text said that if you submitted after 2pm on a Friday, the payment would not post until the following Monday. Which is significantly different. I'd be curious if you could ablate those layers in some way, because the one-shot structured text detection/recognition was much better than vanilla OCR.

gfiorav|1 year ago

I wonder what the speed of this approach is vs. traditional OCR techniques. Also curious whether this could be used for text detection (finding a bounding box containing text within an image).

vunderba|1 year ago

Was just coming here to say this: there does not yet exist a multimodal vision LLM approach that is capable of identifying bounding boxes of where the text occurs. I suppose you could manually cut the image up and send each part separately to the LLM, but that feels like a kludge and it's still inexact.

temp0826|1 year ago

I've been looking for a solution to translate a dictionary for me. It is a Shipibo-Conibo (indigenous Peruvian language) to Spanish dictionary - I'd like to translate the Spanish to English (and leave the Shipibo intact). Curious for any thoughts here. I have the dictionary as a PDF (already searchable, so I don't think it would need to be re-OCR'd... though that's possible too; it's not the clearest scan).

wrs|1 year ago

I wouldn’t be surprised to find that Claude/ChatGPT/etc. can just…do that. With the prompt you just gave.

The output could be in Markdown, which is easily turned into a PDF. You would have to break up the input PDF into pages to avoid running out of output window.

zzleeper|1 year ago

By any chance, would it be possible to share the PDF? I haven't heard shipibo language in a long while, and am quite curious about it.

fl0under|1 year ago

Looks cool!

You may also be interested in Allen AI's OCR tool olmOCR, which they just released [1][2]. They say "convert a million PDF pages for only $190 USD".

[1] https://github.com/allenai/olmocr [2] https://arxiv.org/abs/2502.18443

TZubiri|1 year ago

The issue with that promise is that anyone can convert PDFs; the question is whether the conversions are correct, or whether you have

Income Expenses 200 100

On one document, and

Income Expenses 20 0100

On others.

There's no shortage of products that tried to solve this problem from scratch (or by piggybacking on other projects) and called it a day without worrying about the huge problem that is quality and parseability.

The most robust players just give you the coordinates of a glyph and you are on your own: Textract, PDFBox.
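"On your own" means post-processing like this. A sketch of the simplest column assignment from raw word coordinates, Textract-style (the function, data, and column boundaries are all hypothetical), which is exactly where the "one column off" failures come from when the boundaries are wrong:

```python
# Sketch: assign words to table columns by x-coordinate, the kind of
# post-processing you're left with when the extractor only gives boxes.

def assign_columns(words, column_edges):
    """words: list of (text, x). column_edges: sorted left edges of the columns.
    Returns one list of cell texts per column."""
    columns = [[] for _ in column_edges]
    for text, x in words:
        # pick the last column whose left edge is at or before x
        idx = max(i for i, edge in enumerate(column_edges) if edge <= x)
        columns[idx].append(text)
    return columns

# "Income 200" / "Expenses 100" with column boundaries at x=0 and x=300.
words = [("Income", 20), ("200", 320), ("Expenses", 25), ("100", 310)]
cols = assign_columns(words, [0, 300])
# → [['Income', 'Expenses'], ['200', '100']]
```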

rasz|1 year ago

I'd rather see machine learning used to help OCR by

- recognizing/recreating the exact font used

- helping align/rotate the source

Not to hallucinate gibberish when the source lacks enough data.

intalentive|1 year ago

What's the value-add here? The schemas?

fzysingularity|1 year ago

We've seen so many different schemas and ways of prompting the VLMs. We're just standardizing it here, and making it dead-simple to try it out across model providers.

vlmrunadmin007|1 year ago

Basically, there is no standard model/schema combination. If you go ahead and prompt an open-source model with a schema, it doesn't produce results in the expected format. The main contribution is making these models conform to your specific needs and output a structured format.

Inviz|1 year ago

Service doesn't inspire confidence. The OpenAI-compatible API doesn't work (expects `content.str` in the message to be a string - ???). Getting 500s on the non-OpenAI-compatible endpoint - seems like timeouts(?). When it did work, it missed a lot, and hallucinated a lot too on custom documents/schemas.

cyp0633|1 year ago

Existing solutions like Tesseract can already embed text into the image, but I'm wondering if there's a way to combine an LLM with Tesseract, so that the LLM can help correct results and find unidentified text, and finally still embed the text back into the image.

syntaxing|1 year ago

Maybe I’m being greedy, but is it possible to have a VLM detect when a portion is an image? I want to convert some handwritten notes into markdown, but some portions are diagrams. I want the VLM to extract the diagrams to embed into the markdown output.

vlmrunadmin007|1 year ago

We have successfully tested the model with vLLM and plan to release it across multiple inference server frameworks, including vLLM and Ollama.

TZubiri|1 year ago

Wow thanks!

There's a client who had a startup idea that involved analyzing pdfs, I used textract, but it was too cumbersome and unreliable.

Maybe I can reach out to see if he wants to give it another go with this!

fzysingularity|1 year ago

Let us know, I think >70% of OCR tasks today can be done with VLMs with a little bit of guidance ;). Ping us at contact "at" vlm.run

rasguanabana|1 year ago

Wouldn’t VLM be susceptible to prompt injection?

egorfine|1 year ago

I had a need to scan serial numbers from Apple's product boxes out of pictures taken by a clueless person on their phone. All OCR tools failed.

Vision model did the trick so well it's not even funny to discuss anything further.

"This is a picture of Apple product box. Find and return only the serial number of the product as found on a label. Return 'none' if no serial number can be found".

ptx|1 year ago

Did you check if all the numbers were correct?

LeoPanthera|1 year ago

What's the characters-per-Wh of an LLM compared to traditional OCR?

fzysingularity|1 year ago

That's a tough one to answer right now, but to be perfectly honest, we're off by 2-3 orders of magnitude in terms of chars/Wh.

That said, VLMs are extremely powerful visual learners with LLM-like reasoning capabilities making them more versatile than OCR for practically all imaging domains.

In a matter of a few years, I think we'll essentially see models that are more cost-performant via distillation, quantization and the multitude of tricks you can do to reduce the inference overhead.

mlyle|1 year ago

A lot worse. But, higher quality OCR will reduce the amount of human post-processing needed, and, in turn will allow us to reduce the number of humans. Since humans are relatively expensive in energy use, this can be expected to save a lot of energy.

ambicapter|1 year ago

People really only started talking about the cost of running things when LLMs came out. Most everything before that was too cheap to be a serious consideration.

submeta|1 year ago

Can I use this to convert flowcharts to yaml representations?

EarlyOom|1 year ago

We convert to a JSON schema, but it would be trivial to convert this to yaml. There are some minor differences in e.g. tokens required to output JSON vs yaml which is why we've opted for our strategy.

htrp|1 year ago

VLMs can't replace OCR one-to-one... most hosted multimodal models seem to have a classical OCR (Tesseract-based) step in their inference loop.

gunian|1 year ago

replaced it with real humans -> nano tech in their brain -> transmit to server getting almost 99% accuracy

skbjml|1 year ago

This is awesome!

tgtweak|1 year ago

Not really interested until this can run locally without API keys :\

mmusson|1 year ago

Lol. The resume includes expert in Mia Khalifa easter egg.