Very interesting! I had never heard of Apache PDFBox before; I must give it a try. I have a similar program that parses horse racing PDFs from sites such as www.racehorserunner.com. Those are in a much simpler format, but they cause endless problems for me when the PDFs have layout defects, e.g. one column running too long and overlapping with another, as in the last race in http://www.racehorserunner.com/Archives/ELP/ELP170702.pdf
All PDF parsers that I have tried cope very badly with these kinds of situations, and often try to be 'too clever' in that they value the final layout of the text over and above the individual strings.
Have you experienced similar problems with PDFBox, or does it handle formatting and layout fairly reliably?
PDFBox committer here. If you want even lower-level access to the page content stream, without anything 'clever' at all, check out the PDFGraphicsStreamEngine class, which is a superclass of the text extraction and rendering classes. It gives you access to the raw glyphs. You can also override PageRenderer for visual debugging, e.g. rendering glyph bounding boxes. We have an interactive Swing PDFDebugger which does just that.
https://github.com/apache/pdfbox/blob/6f18d7c4bef4d23a22d/ex...
Yes, I encountered similar issues, but I was able to solve many of them.
With PDFBox I was able to work with the content at a very low level (on a per-character basis). For instance, when building a String, I would insert a pipe character whenever the distance between adjacent characters was greater than the width of the space character, and then detect that delimiter when translating to a particular field.
See the convertToText() method for an example: https://github.com/robinhowlett/chart-parser/blob/master/src...
and https://github.com/robinhowlett/chart-parser/blob/f8d651e9a1... for when I used this technique
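As a rough, self-contained sketch of that gap-to-delimiter idea (the `Glyph` type and both thresholds here are made up for illustration; in actual PDFBox code the same measurements would come from `TextPosition` objects inside a `PDFTextStripper` subclass):

```java
import java.util.List;

public class GapDelimiter {
    // Stand-in for PDFBox's TextPosition: a glyph's text, its left edge,
    // and its advance width, all in the same page units.
    record Glyph(String text, float x, float width) {}

    // Rebuild a line of text, inserting '|' wherever the horizontal gap
    // between adjacent glyphs exceeds the width of a space character,
    // and a plain space for ordinary word gaps (half-space threshold is
    // an illustrative guess).
    static String joinWithDelimiters(List<Glyph> glyphs, float spaceWidth) {
        StringBuilder sb = new StringBuilder();
        Glyph prev = null;
        for (Glyph g : glyphs) {
            if (prev != null) {
                float gap = g.x() - (prev.x() + prev.width());
                if (gap > spaceWidth) {
                    sb.append('|');   // likely column/field boundary
                } else if (gap > spaceWidth / 2) {
                    sb.append(' ');   // ordinary word gap
                }
            }
            sb.append(g.text());
            prev = g;
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<Glyph> line = List.of(
                new Glyph("1", 10, 5), new Glyph("st", 15, 8), // adjacent
                new Glyph("Foo", 60, 20));                     // far away
        System.out.println(joinWithDelimiters(line, 4f)); // prints: 1st|Foo
    }
}
```

The pipe then acts as an unambiguous field separator when the stripped text is later mapped onto the chart's columns.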
Huh, interesting. I was looking around for PDF libraries previously and PDFBox didn't show up in the Google results; pdftk was the only one that showed up anywhere useful.
Edit: Looks like it's on the second page of results and I never made it that far, heh. Goes to show how much influence the first page of results has.
I still don't understand how PDF could become one of the standards for publishing documents. Well structured content gets converted into PDF which loses most of that structure. And then a lot of work is done to guess that structure from PDF and convert it back to a better file format. It just shows that successful solutions don't have to be technically good.
The keyword is "publishing" --- as in, producing human-readable physical copies, not electronic ones. It just so happens that the format was relatively suitable for the latter too (because it actually looks like a printed document rendered on the screen --- unlike HTML or other formats around at the time), which is why that use-case became popular. PDF is basically a descendant of PostScript, which was designed to control printers.
(Its PostScript origins may also explain the bizarre mix of text and binary that constitutes the file format. For example, page contents are in a relatively free-form PostScript-ish RPN-like textual language, but are found in "content streams" which may be compressed or encoded into a binary format. Data "object" structures include things like '<<'-delimited dictionaries, '[' arrays ']', textual "/Names", and even provisions for comments (!?).
Then there are things like the cross-reference table of all objects in the file, which is an array of fixed-width textual numbers representing file offsets, e.g. "0000001056 00000 n" refers to something 1056 bytes from the start of the file. Reactions of WTF!? from those working with the format for the first time are not uncommon.)
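Decoding one of those fixed-width xref entries really is as simple as it sounds; a minimal sketch (class and method names here are made up for illustration):

```java
// Decodes a single cross-reference table entry of the fixed-width form
// described above: a 10-digit byte offset, a 5-digit generation number,
// and a keyword 'n' (in use) or 'f' (free), all as ASCII text.
public class XrefEntry {
    final long offset;      // byte offset of the object from file start
    final int generation;   // object generation number
    final boolean inUse;    // 'n' = in use, 'f' = free

    XrefEntry(long offset, int generation, boolean inUse) {
        this.offset = offset;
        this.generation = generation;
        this.inUse = inUse;
    }

    static XrefEntry parse(String line) {
        long offset = Long.parseLong(line.substring(0, 10));   // cols 0-9
        int generation = Integer.parseInt(line.substring(11, 16)); // cols 11-15
        boolean inUse = line.charAt(17) == 'n';                // col 17
        return new XrefEntry(offset, generation, inUse);
    }

    public static void main(String[] args) {
        XrefEntry e = parse("0000001056 00000 n");
        System.out.println(e.offset + " " + e.generation + " " + e.inUse);
        // prints: 1056 0 true
    }
}
```

The simplicity is the point: the table was designed so a reader can seek straight to entry N without parsing anything else.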
PDF has a feature called Tagged PDF, which allows the document to be annotated with a semantic structure. Almost nobody bothers to generate such PDFs, but the support is there!
Very neat, and gets me curious about PDFBox, but every time I see something that converts a consistent-layout PDF back to structured data, I just bemoan the fact that this would all be trivial with an API for these kinds of things.
I was just looking at collecting race information and historical results data a month or two ago and was struck by the lack of available structured data. Heck, I couldn't easily find any for pay options either.
Firstly, what an interesting library. Secondly, this is among the best TL;DR readmes I've ever seen! I lack exposure to this area, so I'm actually quite impressed with the complexity of it. Keep up the great work.
As a Python programmer, I found R's pdftools to be indispensable for messy text-based PDFs. I couldn't find a Python lib that worked as consistently across such varied formats.
I came across https://github.com/pdfminer/pdfminer.six recently and was impressed with what it could get done. The documentation can be challenging to parse, so I relied on a code sample from a StackOverflow answer. Have you had a chance to try it out? Curious about how/if it works well across platforms.
Impressive! Seems like you can't just use PDFBox out of the box (no pun intended); you need to write some custom code specific to the PDF itself, per the chart-parser commits [1].
[1] https://github.com/robinhowlett/chart-parser/tree/master/src...
Author here; well, PDFBox is good for simple text stripping. If I wanted to print all the text in the PDF, that would be very straightforward and not much code. However, the PDF chart here is in essence a representation of structured data. I wanted to get the content in that form so that I could both serialize it to JSON and have an SDK to boot.