top | item 37368148

So you want to modify the text of a PDF by hand (2020)

325 points | mutant_glofish | 2 years ago | gist.github.com

101 comments


blincoln|2 years ago

The PDF specification is wild. My current favourite trivia is that it supports all of Photoshop's layer blend modes for rendering overlapping elements.[1] My second-favourite is that it supports appended content that modifies earlier content, so one should always look for forensic evidence in all distinct versions represented in a given file.[2]

It's also a fun example of the futility of DRM. The spec includes password-based encryption, and allows for different "owner" and "user" passwords. There's a bitfield with options for things like "prevent printing", "prevent copying text", and so forth,[3] but because reading the document necessarily involves decrypting it, one can use the "user" password to open an encrypted PDF in a non-compliant tool,[4] then save the unencrypted version to get an editable equivalent.

[1] "More than just transparency" section of https://blog.adobe.com/en/publish/2022/01/31/20-years-of-tra...

[2] https://blog.didierstevens.com/2008/05/07/solving-a-little-p...

[3] Page 61 of https://opensource.adobe.com/dc-acrobat-sdk-docs/pdfstandard...

[4] For example, a script that uses the pypdf library.

userbinator|2 years ago

In the context of a format that was originally proprietary and not widely available to everyone, and conceived in an era where encryption was strongly controlled by export law, that sort of security-by-obscurity was very common. Incidentally, a popular cracking tutorial back then was to de-DRM the official reader by patching the function that checks those permissions.

aardvark179|2 years ago

Aren’t the supported blend modes just the Porter-Duff compositing modes? You might think that’s overkill, but it maps really well onto what other rendering pipelines offer, and it can really help reduce the work needed to produce a PDF.

pph|2 years ago

The permission field can also lead you down a rabbit hole: some PDF writers don’t comply with its specification, and workarounds for that may or may not be present in different PDF readers/libraries.

aidos|2 years ago

To be fair, if you wanted to stop copying of text, the easiest thing would be to drop the ToUnicode mapping from the fonts; then recreating the text becomes a manual process.

johnalbertearle|2 years ago

I used to get rich selling part of this stuff: FrameMaker, which came originally from Frame Technologies and used to be $5K US per copy. [Hi Steve Kirsch, I see you're still rich.] The PDF specification is wild, so right you are. At the time, many people, including yours truly, said it was rude capitalism. So, you got it. People did not talk enough about DRM. PS: I left Adobe's embrace courtesy of my then wife, and me myself. I hate DRM as a user and as a (former) salesman. Hola

aidos|2 years ago

This topic comes up periodically as most people think PDFs are some impenetrable binary format, but they’re really not.

They are a graph of objects of different types. The types themselves are well described in the official spec (I’m a sadist, I read it for fun).

My advice is always to convert the pdf to a version without compressed data like the author here has. My tool of choice is mutool (mutool clean -d in.pdf out.pdf). Then just have a rummage. You’ll be surprised by how much you can follow.

In the article the author missed a step where you look at the page object to see the resources. That’s where the mapping from the font name used in the content stream to the underlying font object is made.

There’s also another important bit missing: most fonts are subset into the PDF, i.e. only the glyphs that are needed are kept in the font. I think that’s often where the re-encoding happens. ToUnicode is maintained to allow you to copy text (or search in a PDF). It’s a nice-to-have for users (in my experience it’s normally there and correct, though).
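For the curious, a ToUnicode CMap is itself a small PostScript-flavoured stream mapping glyph codes to Unicode code points; a simplified fragment (glyph codes invented for illustration) might look like:

```
/CIDInit /ProcSet findresource begin
12 dict begin
begincmap
1 begincodespacerange
<0000> <FFFF>
endcodespacerange
2 beginbfchar
<0001> <0048>    % glyph code 1 -> U+0048 'H'
<0002> <0069>    % glyph code 2 -> U+0069 'i'
endbfchar
endcmap
```

Drop this stream (or scramble the mappings) and the page still renders, but copy/paste and search return garbage.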

azangru|2 years ago

> I’m a sadist, I read it for fun.

I think this is called masochist. Now, if you participated in writing the spec or were making others read it...

esafak|2 years ago

It is a shame Adobe designed a format so hard to work with that people are amazed when someone accomplishes what should be a basic task with it.

Their design philosophy of creating a read-only format was flawed to begin with. What's the first feature people are going to ask for??

gobdovan|2 years ago

If you find pleasure in something that gives you pain, you're a masochist. A sadist likes inflicting pain onto others. Since you seem that you like helping people I'd say it's more likely you're the former. I appreciate the mutool advice!

haolez|2 years ago

That's awesome. I'm relying a lot on Amazon Textract for my PDF parsing needs.

Do you have any other insights on how to do a good job at that natively, i.e. without a cloud provider? Especially when dealing with tables.

enriquto|2 years ago

You can do this:

    pdf2ps a.pdf    # convert to postscript "a.ps"
    vim a.ps        # edit postscript by hand
    ps2pdf a.ps     # convert back to pdf
Some complex PDFs (with embedded JavaScript, animations, etc.) fail to work correctly after this back and forth. But for "plain" documents it works alright. You can easily remove watermarks, change some words and numbers, etc. Spacing is harder to modify. Of course you need to know some PostScript.

hnick|2 years ago

This is essentially how we used to do it for some "print-ready" jobs at a mailhouse I worked at. Usually we'd use proper tools to produce documents ready to print, but sometimes clients thought they knew better and would send us PDFs. It was more effort to work with those usually, and had a higher chance of errors.

Even if the output was correct, we still needed to re-order pages, apply barcodes for mail machine processing and postal sorting, and produce reports - which usually involved text scraping off the page to get an address via perl and other tools. Much easier in PS than PDF usually, but sometimes very unreliable when e.g. their PDFs were 'secure' and didn't have correct glyph mappings.

In the worst cases, they would supply single document PDFs, and merging those would cause an explosion of subset fonts in the output which would fill the printer's memory and crash it. When I stopped working in the area, I think there still wasn't a useful tool to consolidate and merge subset fonts known to come from the same parent font - it would have been a very useful tool and should be possible but I didn't have the time or knowledge to look into it.

indeedmug|2 years ago

If you can put JavaScript and animations in a PDF, what's stopping you from making a frontend in it? I wonder what the frontiers are of things you can do in PDF.

Honestly, it seems like only malware authors benefit from the complexity of pdfs.

ks2048|2 years ago

This seems to be missing an important point: at the end of a PDF is a table (the "cross-reference" table) that stores the BYTE-OFFSET of each object in the file.

If you modify things within the file, typically these offsets will change and the file will be corrupt. It looks like in this article, maybe they were only interested in changing one number to another, so none of the positions change.

But, generally, adding/removing/modifying things in the middle of the file requires recomputing the xref table, so it becomes much easier to use a library than to edit the text directly.
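The offset mechanics are easy to demonstrate in a few lines of Python; this is a hypothetical toy generator (no fonts or content streams, just the object skeleton), not tied to any library:

```python
# A minimal PDF built by hand, to make the byte-offset point concrete:
# the xref table at the end records where each object starts, so editing
# anything earlier in the file shifts those offsets and corrupts it.

def build_minimal_pdf() -> bytes:
    header = b"%PDF-1.4\n"
    objects = [
        b"1 0 obj\n<< /Type /Catalog /Pages 2 0 R >>\nendobj\n",
        b"2 0 obj\n<< /Type /Pages /Kids [3 0 R] /Count 1 >>\nendobj\n",
        b"3 0 obj\n<< /Type /Page /Parent 2 0 R /MediaBox [0 0 612 792] >>\nendobj\n",
    ]
    out = bytearray(header)
    offsets = []
    for obj in objects:
        offsets.append(len(out))          # byte offset of this object
        out += obj
    xref_pos = len(out)                   # where the xref table itself starts
    out += b"xref\n0 4\n0000000000 65535 f \n"
    for off in offsets:
        out += b"%010d 00000 n \n" % off  # fixed-width 20-byte entries
    out += b"trailer\n<< /Size 4 /Root 1 0 R >>\n"
    out += b"startxref\n%d\n%%%%EOF\n" % xref_pos
    return bytes(out)

pdf = build_minimal_pdf()
```

Insert a single byte before `1 0 obj` and every recorded offset is wrong, which is why hand edits must either preserve lengths exactly or be followed by an xref rebuild.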

gpvos|2 years ago

That's why they decode it with qpdf and re-encode it again afterwards, so qpdf takes care of that. qpdf reconstructs the original PDF structure, and I think it even tries to keep the object numbers the same, but the offsets are recalculated completely.

userbinator|2 years ago

That's the weirdest part of the PDF spec IMHO. It's a mix of both binary and text, with text-specified byte offsets. It would be very interesting to read about why the format became like that, if its authors would ever talk about it. My guess is that it was meant to be completely textual at first (but then requiring the xref table to have fixed-length entries is odd), and then they decided binary would be more efficient.

aidos|2 years ago

In my experience it’s easiest just to break the xref table and run something like “mutool clean” to fix it again. It can be completely derived from the content so it’s safe to do.

bena|2 years ago

Ah. So it's a lot like editing compiled binaries.

You can modify binaries all you want as long as you preserve the length of everything.

Some piece of software we had authenticated against a server, but everything was done on the client. The client executed SQL against the server directly, etc. Basically, the server checked to see if this client would put you over the number of licenses you purchased and that's it.

I had run it against a disassembler, found the part where it performed the check, and was able to change it to a straight JMP and then pad the rest of the space with NOPs.
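The length-preserving trick is the same one that makes in-place PDF edits safe. A toy sketch in Python, with a made-up byte sequence standing in for real machine code:

```python
# Patch a conditional jump (JE, opcode 0x74) into an unconditional one
# (JMP rel8, opcode 0xEB). Both encodings are two bytes, so the patch
# preserves the file length and no other offsets move.
code = bytearray(b"\x55\x48\x89\xe5\x74\x0a\x90\x90")  # invented instruction bytes
je_at = code.index(0x74)   # locate the JE opcode
code[je_at] = 0xEB         # same-size replacement: JMP rel8
```

If the replacement were a different length you'd have to pad with NOPs (0x90), exactly as described above.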

jl6|2 years ago

This seems to be missing an important step in the use of qpdf’s --qdf mode: after you’ve finished editing, you need to run the file through the fix-qdf utility to recalculate all the object offsets and rebuild the cross-reference table that lives at the end of the file (unless you only change bytes in-place rather than adding or removing bytes).

My top 3 fun PDF facts:

1) Although PDF documents are typically 8-bit binary files, you can make one that is valid UTF-8 “plain text”, even including images, through the use of the ASCII85 filter.[0]

2) PDF allows an incredible variety of freaky features (3D objects, JavaScript, movies in an embedded flash object, invisible annotations…). PDF/A is a much saner, safer subset.

3) The PDF spec allows you to write widgets (e.g. form controls) using “rich text”, which is a subset of XHTML and CSS - but this feature is very sparsely supported outside the official Adobe Reader.

[0] For example: https://lab6.com/2
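Fun fact 1 relies on the /ASCII85Decode filter; Python's standard library happens to implement the Adobe flavour of ASCII85, which makes it easy to see how arbitrary binary data (say, image bytes) survives as plain text:

```python
import base64

# Encode arbitrary binary data as ASCII85 text. adobe=True adds the
# <~ ~> framing of Adobe's variant; PDF streams use the trailing ~>
# as the end-of-data marker for /ASCII85Decode.
raw = bytes(range(16))                      # stand-in for image bytes
encoded = base64.a85encode(raw, adobe=True)
decoded = base64.a85decode(encoded, adobe=True)
```

Every encoded byte is printable ASCII, so a file built this way can be valid UTF-8 "plain text" end to end.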

gpvos|2 years ago

After you've finished editing, just run it through qpdf without parameters, as explained in the beginning of the article, and it will recompress the data and recreate the xref table. No need for yet another tool.

miki123211|2 years ago

What people often miss about PDF is that it's closer to an image format in some ways than to a Word document. Word documents, PDFs and images are to document editing what DAW projects, MIDIs and MP3 files are to music, and what Java source code, JVM bytecode and pure x86 machine code are to software.

The primary purpose of a PDF file is to tell you what to display (or print), with perfect clarity, in much fewer bytes than an actual image would take. It exploits the fact that the document creator knows about patterns in the document structure that, if expressed properly, make the document much more compressible than anything that an actual image compression algorithm could accomplish. For example, if you have access to the actual font, it's better to say "put these characters at these coordinates with that much spacing between them" than to include every occurrence of every character as a part of the image, hoping that the compression algorithm notices and compresses away the repetitions. Things like what character is part of what word, or even what unicode codepoint is mapped to which font glyph are basically unimportant if all you're after is efficiently transferring the image of a document.
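That "characters at coordinates" model is visible directly in an uncompressed content stream. A minimal, hand-written fragment (font name and coordinates made up for illustration):

```
BT                   % begin a text object
/F1 12 Tf            % select the font named F1 in the page's resources, 12 pt
72 720 Td            % move the text cursor to (72, 720), origin at bottom-left
(Hello, world) Tj    % paint the string's glyphs at the cursor
ET                   % end the text object
```

Nothing here says what word the glyphs form or where it sits in a sentence; the viewer just paints what it's told.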

If you have an editable document, you care a lot more about the semantics of the content, not just about its presentation. It matters to you whether a particular break in the text is supposed to be multiple spaces, the next column in a table or just a weird page layout caused by an image being present.

If you have some text at the bottom of each page, you care whether that text was put there by the document author multiple times, or whether it was entered once and set as a footer. If you add a new paragraph and have to change page layout, it matters to you that the last paragraph on this page is a footnote and should not be moved to the next one. If a section heading moves to another page, you care about the fact that the table of contents should update automatically and isn't just some text that the author has manually entered. If you're a printer or screen, you care about none of these things, you just print or display whatever you're told to print or display.

For a PDF, footnotes, section headings, footers or tables of contents don't have to be special, they can just be text with some meaningless formatting applied to it. This is why making PDF work for any purpose which isn't displaying or printing is never going to be 100% accurate. Of course, there are efforts to remedy this, and a PDF-creating program is free to include any metadata it sees fit, but it's by no means required to do so.

This isn't necessarily the mental model that the PDF authors had in mind, but it's a useful way to look at PDF and understand why it is the way it is.

mannyv|2 years ago

The original goal of PDF was to have a portable print fidelity copy; WYSIWYG for real. You could take a PDF file and print it on a laser printer, a linotype, or a screen and it would look the same.

If you printed it on a postscript printer it would look exactly the same (or better, if you used type 1 fonts).

eschaton|2 years ago

Anybody trying to do this is missing the point of PDF: It’s a page-description format and therefore only represents the marks on a page, not document structure.

One should not attempt to edit a PDF, one should edit the document from which the PDF is generated.

lucb1e|2 years ago

I'll stop trying to edit PDFs when people stop sending me PDFs that I want to edit.

Somehow it became "unprofessional" to just send meant-to-be-editable documents around for everyone to enjoy, so this is where we end up...

o1y32|2 years ago

"should not" is meaningless here, because in the real world there are tons of situations where people want you to edit PDF, one way or another

Finnucane|2 years ago

One challenge is marking up corrections that need to go back to the source document. I get proofs from a typesetter, and I need to mark it up for them to fix. I can't change the pdf text, because the typesetter won't see that. Acrobat's markup tools aren't terrible, but they aren't quite what I could do in the days of paper and red pencils. Unless I use the 'pencil' tool in Acrobat. I'd like to see that improved.

louthy|2 years ago

> It’s a page-description format and therefore only represents the marks on a page, not document structure

Maybe they should have called it ‘Page Description Format’ then? Instead of ‘Portable Document Format’

jaystraw|2 years ago

20 years ago, I worked as the plate person at a newspaper: we had two million-dollar Kodak plate "printers" -- printer is the wrong word, but the emulsion on the plates could be hit by UV light and dissolved in a chemical bath, IIRC. Regularly, the Kodaks would fail, and my boss would go into the PostScript (or maybe EPS) files manually, change a header or some other malformed bit that came from the layout software that sent us the files, and all would be well again. (Our giant German offset web press ran Linux, btw.)

I think his name was Bill. He took me, a 17 year old, to a Sigur Ros concert. Great dude. Wow two stories that don't involve pdfs!

nathan_f77|2 years ago

Great post. I've spent a lot of time reading through the PDF specification over the last ~5 years while building DocSpring [1], and I still feel like I've barely scratched the surface. qpdf is a great tool. One of my other favorites is RUPS [2], which really lets you dig into the structure of a PDF.

[1] https://docspring.com

[2] https://github.com/itext/i7j-rups

seszett|2 years ago

Although this is an interesting dive into the PDF format, just opening the PDF in Libreoffice or Inkscape usually works fine to modify its text.

gcanyon|2 years ago

I’m interested in extracting the contents of a PDF form — many individual text boxes. You’re saying LibreOffice would likely be able to parse that PDF into a usable format?

totetsu|2 years ago

Pdfmaster is a good tool for this too but the free version leaves a watermark

LispSporks22|2 years ago

As I recall, words aren’t even necessarily made up of contiguous characters. Especially true in OCRed documents in PDF.

yboris|2 years ago

Semi-related(?) - I created a repository to convert PDF to JPG and back to PDF:

https://github.com/whyboris/PDF-to-JPG-to-PDF

A government form didn't have editable fields that needed to be filled out. And editing the PDF was impossible (password protection). This was my solution.

kccqzy|2 years ago

On macOS using Preview you can add textual comments on otherwise uneditable PDFs. Then you can simply print the commented PDF as a new PDF.

Converting to JPG unnecessarily rasterizes text and introduces ugly compression artefacts.

Const-me|2 years ago

> I didn't see an obvious open-source tool that lets you dig into PDF internals

That’s a matter of the toolset. I program C#, and I have good experience with this open-source library: https://www.nuget.org/packages/iTextSharp-LGPL/ It’s a decade old by now, but PDF ain’t exactly a new format. That library is not terribly bad for many practical use cases. It’s particularly good when you only need to create documents, as opposed to editing them, because for that use case you’d want to use an old version of the format anyway, for optimal compatibility.

schlowmo|2 years ago

PDF is such a weird format. Not so long ago I had to write some Java code for manipulating PDFs: find a string, remove it and place an image at the former string position. I should have known better as I thought "Well, how hard can that be?”

What followed was a deep dive down the rabbit hole, a lot of fiddling with the same tools the author of this gist is using trying to make sense of it all.

The final solution worked better than I thought while at the same time felt incredibly wrong.

I'm very thankful for all the (probably painful) work that went into those open source PDF tools.

mx_02|2 years ago

> "Well, how hard can that be?”

Very hard?

I worked on a tool that generated PDFs based on API responses. The tool added charts from the api data.

Those PDFs were reports with some hardcoded text.

Yeesh, what a fun ride that was.

crtified|2 years ago

This brings back horrible memories of working with large complex maps back in the 2000s. Having various CAD and GIS applications generate messy, inefficient spaghetti-coded PDF outputs - then bouncing those PDFs around the Adobe apps of the time, to add effects and other prettifications not available in the mapping apps.

It would reach the point where things would start to break, and .... "good times were had, by all".

lucascacho|2 years ago

Every time I read about the hardships of interacting with the PDF format, I gain more respect for Photopea, which has full PDF editing support.

firexcy|2 years ago

My understanding is that the PDF syntax essentially imitates physical printing in that it instructs the reader software to leave something at a given coordinate on a defined media with supplied resources. Thus it's easily portable but barely mutable.

pmontra|2 years ago

Some small PDF files are saved as uncompressed text. Invoices are a typical example.

This means that we can open those files, read them as one single string and match the expected text in unit tests. I've got a few projects doing that and it was fine.

If the text is compressed, pipe its content to qpdf first.
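A hedged sketch of that kind of unit test, with a stand-in "uncompressed invoice" written to a temp file (a real compressed file would first go through qpdf, as noted above):

```python
import os
import tempfile

def pdf_contains(path: str, needle: bytes) -> bool:
    # Naive substring check: only meaningful when content streams are
    # uncompressed, since compressed streams won't contain literal text.
    with open(path, "rb") as f:
        return needle in f.read()

# Stand-in for a small uncompressed invoice PDF.
fd, path = tempfile.mkstemp(suffix=".pdf")
with os.fdopen(fd, "wb") as f:
    f.write(b"%PDF-1.4\nstream\nBT (Total: 42.00) Tj ET\nendstream\n%%EOF\n")
```

Crude, but for generated invoices with stable layout it catches regressions cheaply.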

mondaymusings|2 years ago

1. The PDF format is wildly overcapable compared to the majority of actual use (view text, tables and images).

2. The number of user devices with unpatched PDF readers is likely large.

3. The system of paywalled scientific knowledge drives millions of students and researchers to get their science PDFs from scihub and libgen pirate sites hosted in former Soviet countries, sometimes over http (not https).

These three facts combine into a huge vulnerability space.

On the flipside a sane and open PDF replacement format that also offered reduced file size could gain many users quickly by convincing scihub and libgen to convert and offer their files in the new format to cut costs and shorten download time, with reduced vuln as a positive externality.

tomalbrc|2 years ago

I had been using Apple's Preview.app to open "encrypted" or protected PDFs for quite a while, until it stopped working (Big Sur).

herbst|2 years ago

Just a heads up: you can edit PDFs in GIMP. AFAIK it just embeds a huge image in the end, but it's easy to add a signature or something.

rogeliodh|2 years ago

LibreOffice can open and edit PDFs. Last time I tried it was really good. Not sure what limitations are there.

lucb1e|2 years ago

For me it always seems to change the font from whatever was built into the PDF (rendered just fine in any PDF reader) to a random system font which completely breaks the spacing, making different parts of the document overflow into each other

Alifatisk|2 years ago

Is there any tool that competes with Adobe Acrobat? Like, the redaction tool is rarely found anywhere else.

maxerickson|2 years ago

PDF-XChange Editor. I haven't really used it much (I have Acrobat for work, and I've checked that some things work as far as viewing goes).

yair99dd|2 years ago

Inkscape 1.2+'s multipage support is great for editing graphics and text in PDFs.

dustypotato|2 years ago

PSA, if you want to sign a PDF, firefox does it easily. Works like magic.

elyobo|2 years ago

"want" is probably a misleading term here

aleden|2 years ago

I'm surprised no one has mentioned qpdf.

https://qpdf.readthedocs.io/en/stable/overview.html

It turns a PDF (typically everything in it is compressed binary blobs) into a mixed binary/ASCII file (which itself is a PDF) that can be edited with vim.

chrnola|2 years ago

The linked article literally mentions qpdf within the first few paragraphs.

gpvos|2 years ago

I'm not sure what you were reading, but the fine article is centred around using qpdf.

rhaway84773|2 years ago

It’s mentioned in the gist

> To view the compressed data, you can use a command line tool called qpdf.