The PDF specification is wild. My current favourite trivia is that it supports all of Photoshop's layer blend modes for rendering overlapping elements.[1] My second-favourite is that it supports appended content that modifies earlier content, so one should always look for forensic evidence in all distinct versions represented in a given file.[2]
It's also a fun example of the futility of DRM. The spec includes password-based encryption, and allows for different "owner" and "user" passwords. There's a bitfield with options for things like "prevent printing", "prevent copying text", and so forth,[3] but because reading the document necessarily involves decrypting it, one can use the "user" password to open an encrypted PDF in a non-compliant tool,[4] then save the unencrypted version to get an editable equivalent.
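For anyone curious what footnote [4] alludes to, here is a minimal sketch using the pypdf library; the file names and password are placeholders, and error handling is omitted:

    from pypdf import PdfReader, PdfWriter

    reader = PdfReader("restricted.pdf")
    if reader.is_encrypted:
        # The "user" password is enough to decrypt the content;
        # the permission bits are merely advisory from here on.
        reader.decrypt("user-password")

    writer = PdfWriter()
    for page in reader.pages:
        writer.add_page(page)

    # The copy is written without encryption, so the "prevent printing"
    # and "prevent copying" flags no longer apply to it.
    with open("unrestricted.pdf", "wb") as f:
        writer.write(f)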
In the context of a format that was originally proprietary and not widely available, and conceived in an era when encryption was strongly controlled by export law, that sort of security-by-obscurity was very common. Incidentally, a popular cracking tutorial back then was to de-DRM the official reader by patching the function that checks those permissions.
Aren't the supported blend modes just the Porter-Duff compositing modes? You might think that's overkill, but it maps well onto what other rendering pipelines offer, and it can really help reduce the work needed to produce a PDF.
The permission field can also lead you down the rabbit hole of discovering PDF writers that don't comply with its specification, and the workarounds for them that may or may not be present in different PDF readers and libraries.
To be fair, if you wanted to stop text from being copied, the easiest approach would just be to drop the ToUnicode mapping from the fonts; recreating it is then a manual process.
I used to get rich selling part of this stuff: FrameMaker, which originally came from Frame Technologies and used to be $5K US per copy. [Hi Steve Kirsch, I see you're still rich.] The PDF specification is wild. So right you are. At the time, many, including yours truly, said it was rude capitalism. So, you got it. People did not talk enough about DRM. PS: I left Adobe's embrace courtesy of my then wife, and of myself. I hate DRM as a user and as a (former) salesman. Hola
This topic comes up periodically as most people think PDFs are some impenetrable binary format, but they’re really not.
They are a graph of objects of different types. The types themselves are well described in the official spec (I’m a sadist, I read it for fun).
My advice is always to convert the PDF to a version without compressed data, like the author here has. My tool of choice is mutool (mutool clean -d in.pdf out.pdf). Then just have a rummage. You'll be surprised by how much you can follow.
In the article the author missed a step where you look at the page object to see the resources. That's where the mapping from the font name used in the content stream to the underlying font object is made.
There's also another important bit missing: most fonts are subset into the PDF, i.e. only the glyphs that are actually needed are kept in the embedded font. I think that's often where the re-encoding happens. ToUnicode is maintained to allow you to copy text (or search in a PDF). It's a nice-to-have for users (in my experience it's normally there and correct, though).
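A rough sketch of that lookup path (page object → /Resources → /Font → /ToUnicode), here using the pypdf library purely for illustration; treat the details as an assumption rather than a recipe:

    from pypdf import PdfReader

    reader = PdfReader("example.pdf")
    page = reader.pages[0]

    # /Resources maps the names used in the content stream (e.g. /F1)
    # to the actual font objects.
    fonts = page["/Resources"]["/Font"]
    for name, ref in fonts.items():
        font = ref.get_object()  # resolve indirect references
        print(name, font.get("/Subtype"), "/ToUnicode" in font)

If /ToUnicode is missing for a subset font, copying or searching that text becomes exactly the manual reconstruction job described above.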
If you find pleasure in something that gives you pain, you're a masochist. A sadist likes inflicting pain on others. Since you seem to like helping people, I'd say it's more likely you're the former. I appreciate the mutool advice!
pdf2ps a.pdf # convert to postscript "a.ps"
vim a.ps # edit postscript by hand
ps2pdf a.ps # convert back to pdf
Some complex PDFs (with embedded JavaScript, animations, etc.) fail to work correctly after this back and forth. Yet for "plain" documents this works alright. You can easily remove watermarks, change some words and numbers, and so on. Spacing is harder to modify. Of course, you need to know some PostScript.
This is essentially how we used to do it for some "print-ready" jobs at a mailhouse I worked at. Usually we'd use proper tools to produce documents ready to print, but sometimes clients thought they knew better and would send us PDFs. It was more effort to work with those usually, and had a higher chance of errors.
Even if the output was correct, we still needed to re-order pages, apply barcodes for mail machine processing and postal sorting, and produce reports - which usually involved text scraping off the page to get an address via perl and other tools. Much easier in PS than PDF usually, but sometimes very unreliable when e.g. their PDFs were 'secure' and didn't have correct glyph mappings.
In the worst cases, they would supply single document PDFs, and merging those would cause an explosion of subset fonts in the output which would fill the printer's memory and crash it. When I stopped working in the area, I think there still wasn't a useful tool to consolidate and merge subset fonts known to come from the same parent font - it would have been a very useful tool and should be possible but I didn't have the time or knowledge to look into it.
If you can put JavaScript and animations in a PDF, what's stopping you from making a frontend in it? I wonder what the frontiers are of things you can do in PDF.
Honestly, it seems like only malware authors benefit from the complexity of pdfs.
This seems to be missing an important point: at the end of a PDF there is a table (the "cross-reference" table) that stores the BYTE OFFSET of each object in the file.
If you modify things within the file, typically these offsets will change and the file will be corrupt. It looks like in this article, maybe they were only interested in changing one number to another, so none of the positions change.
But, generally, adding/removing/modifying things in the middle of the file requires recomputing the xref table, so it becomes much easier to use a library than to edit the text directly.
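One way to see that these really are literal byte positions is to read the trailer by hand. A hypothetical sketch in plain Python, assuming a classic xref table rather than a cross-reference stream:

    with open("example.pdf", "rb") as f:
        f.seek(0, 2)                    # jump to the end of the file
        size = f.tell()
        f.seek(max(0, size - 2048))     # the trailer lives near the end
        tail = f.read()

    # "startxref" is followed by the byte offset of the xref table.
    marker = tail.rfind(b"startxref")
    offset = int(tail[marker + len(b"startxref"):].split()[0])

    with open("example.pdf", "rb") as f:
        f.seek(offset)
        print(f.read(20))               # typically begins with b"xref"

Edit anything earlier in the file and that offset (and the per-object offsets in the table itself) go stale, which is exactly why the libraries rewrite the table for you.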
That's why they decode it with qpdf and re-encode it afterwards, so qpdf takes care of that. qpdf reconstructs the original PDF structure, and I think it even tries to keep the object numbers the same, but the offsets are recalculated completely.
That's the weirdest part of the PDF spec IMHO. It's a mix of both binary and text, with text-specified byte offsets. It would be very interesting to read about why the format became like that, if its authors would ever talk about it. My guess is that it was meant to be completely textual at first (but then requiring the xref table to have fixed-length entries is odd), and then they decided binary would be more efficient.
In my experience it’s easiest just to break the xref table and run something like “mutool clean” to fix it again. It can be completely derived from the content so it’s safe to do.
You can modify binaries all you want as long as you preserve the length of everything.
Some piece of software we had authenticated against a server, but everything was done on the client. The client executed SQL against the server directly, etc. Basically, the server checked to see if this client would put you over the number of licenses you purchased and that's it.
I ran it through a disassembler, found the part where it performed the check, and was able to change it to a straight JMP and then pad the rest of the space with NOPs.
This seems to be missing an important step in the use of qpdf's --qdf mode: after you've finished editing, you need to run the file through the fix-qdf utility to recalculate all the object offsets and rebuild the cross-reference table that lives at the end of the file (unless you only change bytes in place rather than adding or removing bytes).
My top 3 fun PDF facts:
1) Although PDF documents are typically 8-bit binary files, you can make one that is valid UTF-8 “plain text”, even including images, through the use of the ASCII85 filter.[0]
2) PDF allows an incredible variety of freaky features (3D objects, JavaScript, movies in an embedded flash object, invisible annotations…). PDF/A is a much saner, safer subset.
3) The PDF spec allows you to write widgets (e.g. form controls) using “rich text”, which is a subset of XHTML and CSS - but this feature is very sparsely supported outside the official Adobe Reader.
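On point 1: Python's standard library can produce the ASCII85 body, so a rough sketch of feeding binary image data through that filter is below; the surrounding stream object and dictionary (/Filter /ASCII85Decode and so on) are omitted, and the file name is made up:

    import base64

    with open("photo.jpg", "rb") as f:
        raw = f.read()

    # PDF's ASCII85Decode filter expects the data to end with the
    # "~>" end-of-data marker; a85encode produces the body.
    encoded = base64.a85encode(raw) + b"~>"

    # The result is pure ASCII, so the image bytes can sit inside an
    # otherwise text-only PDF.
    print(encoded[:60])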
After you've finished editing, just run it through qpdf without parameters, as explained at the beginning of the article, and it will recompress the data and recreate the xref table. No need for yet another tool.
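Putting the round trip together, a hypothetical sketch that drives qpdf from Python; the flags are standard qpdf options, but the file names are placeholders:

    import subprocess

    # 1. Expand into editable QDF form: uncompressed streams and a
    #    human-readable object layout.
    subprocess.run(["qpdf", "--qdf", "--object-streams=disable",
                    "original.pdf", "editable.pdf"], check=True)

    # 2. ...edit editable.pdf by hand in a text editor...

    # 3. Rewrite it; qpdf recomputes stream lengths, offsets and the xref table.
    subprocess.run(["qpdf", "editable.pdf", "final.pdf"], check=True)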
What people often miss about PDF is that in some ways it's closer to an image format than to a Word document. Word documents, PDFs and images are to document editing what DAW projects, MIDI files and MP3s are to music, and what Java source code, JVM bytecode and pure x86 machine code are to software.
The primary purpose of a PDF file is to tell you what to display (or print), with perfect clarity, in much fewer bytes than an actual image would take. It exploits the fact that the document creator knows about patterns in the document structure that, if expressed properly, make the document much more compressible than anything that an actual image compression algorithm could accomplish. For example, if you have access to the actual font, it's better to say "put these characters at these coordinates with that much spacing between them" than to include every occurrence of every character as a part of the image, hoping that the compression algorithm notices and compresses away the repetitions. Things like what character is part of what word, or even what unicode codepoint is mapped to which font glyph are basically unimportant if all you're after is efficiently transferring the image of a document.
If you have an editable document, you care a lot more about the semantics of the content, not just about its presentation. It matters to you whether a particular break in the text is supposed to be multiple spaces, the next column in a table or just a weird page layout caused by an image being present. If you have some text at the bottom of each page, you care whether that text was put there by the document author multiple times, or whether it was entered once and set as a footer. If you add a new paragraph and have to change page layout, it matters to you that the last paragraph on this page is a footnote and should not be moved to the next one. If a section heading moves to another page, you care about the fact that the table of contents should update automatically and isn't just some text that the author has manually entered. If you're a printer or screen, you care about none of these things, you just print or display whatever you're told to print or display. For a PDF, footnotes, section headings, footers or tables of contents don't have to be special, they can just be text with some meaningless formatting applied to it. This is why making PDF work for any purpose which isn't displaying or printing is never going to be 100% accurate. Of course, there are efforts to remedy this, and a PDF-creating program is free to include any metadata it sees fit, but it's by no means required to do so.
This isn't necessarily the mental model that the PDF authors had in mind, but it's a useful way to look at PDF and understand why it is the way it is.
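To make the "put these characters at these coordinates" idea concrete, here is an illustrative fragment of the text-drawing operators a page's content stream contains, wrapped in a Python bytes literal purely so the pieces can be commented; the font name /F1 and the coordinates are invented:

    content = (
        b"BT\n"               # begin a text object
        b"/F1 24 Tf\n"        # select the font the page's /Resources calls /F1, at 24 pt
        b"72 720 Td\n"        # move the text position to x=72, y=720 (from the bottom-left)
        b"(Hello, PDF) Tj\n"  # paint the string's glyphs at that position
        b"ET\n"               # end the text object
    )

Nothing in that fragment says what word the glyphs form or whether they are a heading, a footnote or body text; that information only exists if the producer chose to add it.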
The original goal of PDF was to have a portable print fidelity copy; WYSIWYG for real. You could take a PDF file and print it on a laser printer, a linotype, or a screen and it would look the same.
If you printed it on a postscript printer it would look exactly the same (or better, if you used type 1 fonts).
Anybody trying to do this is missing the point of PDF: It’s a page-description format and therefore only represents the marks on a page, not document structure.
One should not attempt to edit a PDF, one should edit the document from which the PDF is generated.
PDF does support incorporating information about the logical document structure, aka Tagged PDF. It’s optional, but recommended for accessibility (e.g. PDF/UA). See chapters 14.7–14.8 in [1].
One challenge is marking up corrections that need to go back to the source document. I get proofs from a typesetter, and I need to mark it up for them to fix. I can't change the pdf text, because the typesetter won't see that. Acrobat's markup tools aren't terrible, but they aren't quite what I could do in the days of paper and red pencils. Unless I use the 'pencil' tool in Acrobat. I'd like to see that improved.
20 years ago, I worked as the plate person at a newspaper. We had two million dollar Kodak plate "printers" -- printer is the wrong word, but the emulsion on the plates could be hit by UV light and dissolved in a chemical bath, IIRC. Regularly, the Kodaks would fail, and my boss would go into the PostScript (or maybe EPS) files manually, change a header or some other malformed bit that came from the layout software that sent us the files, and all would be well again. (Our giant German offset web press ran Linux, btw.)
I think his name was Bill. He took me, a 17 year old, to a Sigur Ros concert. Great dude. Wow two stories that don't involve pdfs!
Great post. I've spent a lot of time reading through the PDF specification over the last ~5 years while building DocSpring [1], and I still feel like I've barely scratched the surface. qpdf is a great tool. One of my other favorites is RUPS [2], which really lets you dig into the structure of a PDF.
I’m interested in extracting the contents of a pdf form — many individual text boxes. You’re saying libre office would likely be able to parse that pdf into a usable format?
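As an aside, one way to pull those values out programmatically without a cloud provider is the pypdf library; a minimal sketch, where the field names are whatever the form happens to define:

    from pypdf import PdfReader

    reader = PdfReader("form.pdf")

    # Returns {field name: entered text} for the form's text fields.
    fields = reader.get_form_text_fields()
    for name, value in fields.items():
        print(f"{name}: {value}")

This only works if the boxes are real AcroForm fields rather than flattened text.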
A government form that needed to be filled out didn't have editable fields, and editing the PDF was impossible (password protection). This was my solution.
> I didn't see an obvious open-source tool that lets you dig into PDF internals
That's a matter of the toolset. I program in C#, and I have good experience with this open source library: https://www.nuget.org/packages/iTextSharp-LGPL/ It's a decade old by now, but PDF ain't exactly a new format. That library is not terribly bad for many practical use cases. It's particularly good when you only need to create documents as opposed to editing them, because for that use case you'd want to use an old version of the format anyway, for optimal compatibility.
PDF is such a weird format. Not so long ago I had to write some Java code for manipulating PDFs: find a string, remove it and place an image at the former string position. I should have known better as I thought "Well, how hard can that be?”
What followed was a deep dive down the rabbit hole, a lot of fiddling with the same tools the author of this gist is using trying to make sense of it all.
The final solution worked better than I thought while at the same time felt incredibly wrong.
I'm very thankful for all the (probably painful) work that went into those open source PDF tools.
This brings back horrible memories of working with large complex maps back in the 2000s. Having various CAD and GIS applications generate messy, inefficient spaghetti-coded PDF outputs - then bouncing those PDFs around the Adobe apps of the time, to add effects and other prettifications not available in the mapping apps.
It would reach the point where things would start to break, and... "good times were had by all".
My understanding is that the PDF syntax essentially imitates physical printing, in that it instructs the reader software to place something at a given coordinate on a defined medium using the supplied resources. Thus it's easily portable but barely mutable.
Some small PDF files are saved as uncompressed text. Invoices are a typical example.
This means that we can open those files, read them as one single string and match the expected text in unit tests. I've got a few projects doing that and it was fine.
If the text is compressed, pipe its content to qpdf first.
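A hypothetical sketch of that kind of assertion, with a qpdf fallback for files whose streams are compressed; the qpdf invocation and file names are assumptions, and text split across drawing operators can still defeat a plain substring match:

    import subprocess

    def pdf_contains(path: str, expected: str) -> bool:
        data = open(path, "rb").read()
        if expected.encode() in data:
            return True
        # Streams are compressed: rewrite them uncompressed and retry.
        out = subprocess.run(
            ["qpdf", "--qdf", "--object-streams=disable", path, "-"],
            capture_output=True, check=True,
        ).stdout
        return expected.encode() in out

    assert pdf_contains("invoice.pdf", "Total: 123.45")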
1. The PDF format is wildly overcapable compared to the majority of actual use (view text, tables and images).
2. The number of user devices with unpatched PDF readers is likely large.
3. The system of paywalled scientific knowledge drives millions of students and researchers to get their science PDFs from scihub and libgen pirate sites hosted in former Soviet countries, sometimes over http (not https).
These three facts combine into a huge vulnerability space.
On the flipside a sane and open PDF replacement format that also offered reduced file size could gain many users quickly by convincing scihub and libgen to convert and offer their files in the new format to cut costs and shorten download time, with reduced vuln as a positive externality.
For me it always seems to change the font from whatever was built into the PDF (which rendered just fine in any PDF reader) to a random system font, which completely breaks the spacing, making different parts of the document overflow into each other.
It turns a PDF (typically everything in it is compressed binary blobs) into a mixed binary/ASCII file (which itself is a PDF) that can be edited with vim.
[1] "More than just transparency" section of https://blog.adobe.com/en/publish/2022/01/31/20-years-of-tra...
[2] https://blog.didierstevens.com/2008/05/07/solving-a-little-p...
[3] Page 61 of https://opensource.adobe.com/dc-acrobat-sdk-docs/pdfstandard...
[4] For example, a script that uses the pypdf library.
I think this is called a masochist. Now, if you had participated in writing the spec, or were making others read it...
Their design philosophy of creating a read-only format was flawed to begin with. What's the first feature people are going to ask for??
Do you have any other insights on how to do a good job at that natively, i.e. without a cloud provider? Especially when dealing with tables.
[0] For example: https://lab6.com/2
Somehow it became "unprofessional" to just send meant-to-be-editable documents around for everyone to enjoy, so this is where we end up...
[1] https://opensource.adobe.com/dc-acrobat-sdk-docs/pdfstandard...
Maybe they should have called it ‘Page Description Format’ then? Instead of ‘Portable Document Format’
[1] https://docspring.com
[2] https://github.com/itext/i7j-rups
https://github.com/whyboris/PDF-to-JPG-to-PDF
Converting to JPG unnecessarily rasterizes text and introduces ugly compression artefacts.
https://pdfbox.apache.org/
It's relatively performant and it's a mature and supported codebase that can accomplish most pdf tasks.
https://pdfbox.apache.org/2.0/migration.html#why-was-the-rep...
> ReplaceText example has been removed as it gave the incorrect illusion that text can be replaced easily
Very hard?
I worked on a tool that generated PDFs based on API responses. The tool added charts from the API data.
Those PDFs were reports with some hardcoded text.
Yeesh, what a fun ride that was.
https://qpdf.readthedocs.io/en/stable/overview.html
> To view the compressed data, you can use a command line tool called qpdf.