
A Python tool to parse PDF statements from Poste Italiane

58 points | genbs | 7 months ago | github.com

27 comments


pietroppeter|7 months ago

For the curious on how it works (not mentioned in the readme): it uses pymupdf and a precise mapping of every piece of information to area coordinates, so the document layout is hard-coded.

When the layout changes this breaks, but layout changes in this sort of document do not happen often (I think). Also, the code is very clean and it seems straightforward to fix.
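To illustrate the fixed-coordinate approach described above, here is a minimal sketch in plain Python. The field names and rectangles are hypothetical, not the project's actual layout map; the real tool works with pymupdf word boxes, which have the same (x0, y0, x1, y1, text) shape as the tuples used here.

```python
# Hypothetical field map: each field gets a rectangle in page coordinates.
FIELD_MAP = {
    "iban":    (50, 100, 300, 115),
    "balance": (400, 100, 550, 115),
}

def extract_fields(words, field_map=FIELD_MAP):
    """words: iterable of (x0, y0, x1, y1, text) tuples, the shape
    pymupdf's page.get_text("words") returns."""
    out = {name: [] for name in field_map}
    for x0, y0, x1, y1, text in words:
        for name, (fx0, fy0, fx1, fy1) in field_map.items():
            # keep the word if its bounding box lies inside the field's rectangle
            if fx0 <= x0 and fy0 <= y0 and x1 <= fx1 and y1 <= fy1:
                out[name].append(text)
    return {name: " ".join(parts) for name, parts in out.items()}

# Fake word boxes standing in for a parsed PDF page:
words = [
    (55, 101, 200, 114, "IT60X0542811101000000123456"),
    (410, 101, 480, 114, "1.234,56"),
]
print(extract_fields(words))
```

This is also why a layout change breaks the parser: the rectangles no longer line up with the words, and the fields come back empty rather than wrong, which makes the breakage easy to detect.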

This kind of code is maybe something that can be generated by an LLM/agent? (It would be easy to write checks.)

Besides the practical value for those who might need it, I think it is possibly interesting for others to look at this approach.

Neat project, thanks for sharing!

genbs|7 months ago

Thanks for reporting and for the feedback

liendolucas|7 months ago

Offtopic: Stay away from Poste Italiane at all costs. The worst bank I have ever dealt with in my entire life. I'm glad that I don't have to deal with them anymore. Terrible service and endless waits at their branches. They are extremely incompetent.

sebtron|7 months ago

Not to mention it is also pretty terrible as a postal service.

nkjoep|7 months ago

In fact, it's not even a bank.

genbs|7 months ago

it was a mistake in my youth

denysvitali|7 months ago

I would love to have something more generic (and tried to build it already), but parsing tables and bank statements even from digital PDFs (as in, those that really have tables and not a picture) is still very difficult. Especially when the bank changes layouts from one month to another.

I would love to be proven wrong, but everything I have tried so far is... subpar.

Nowadays there's probably a solution based on LLMs, but I don't trust them with this kind of data.

dimitri-vs|7 months ago

Have you tried datalab-to/marker with the "Use LLM" option? They have a playground you can test it out on https://www.datalab.to/playground but I use their local CLI option: https://github.com/datalab-to/marker

I just tried it on a fairly ugly TD Bank statement PDF I have and the markdown of the whole PDF (tables and all) is very accurate. Here is the config I use:

marker_single --format_lines --use_llm --llm_service marker.services.gemini.GoogleGeminiService --gemini_model_name gemini-2.5-flash --disable_image_extraction --output_format markdown --output_dir "$OutDir" "$In"

You might be able to tell the LLM to output the data directly in CSV format (granted, it will still be in a .md file) using `--block_correction_prompt`, which apparently is "useful for custom formatting or logic that you want to apply to the output".
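Even without a custom prompt, converting marker's markdown tables to CSV is a small post-processing step. A rough sketch, assuming a simple pipe-delimited table with no escaped pipes (the example table is made up, not from an actual statement):

```python
import csv
import io

def md_table_to_csv(md: str) -> str:
    """Convert a simple pipe-delimited markdown table to CSV.
    Assumes no escaped pipes and a standard |---|---| separator row."""
    rows = []
    for line in md.strip().splitlines():
        line = line.strip()
        if not line.startswith("|"):
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        # skip the header separator row (cells made only of -, :, spaces)
        if all(set(c) <= set("-: ") for c in cells):
            continue
        rows.append(cells)
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

table = """
| Date | Description | Amount |
|------|-------------|--------|
| 01/02 | ATM withdrawal | -50.00 |
"""
print(md_table_to_csv(table))
```

For messier real-world tables (merged cells, multi-line descriptions) a markdown parser would be more robust than this line-by-line split.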

jgalt212|7 months ago

> Nowadays there's probably a solution based on LLMs, but I don't trust them with this kind of data

In practice, the flow from my perspective looks like LLM parser -> normalizer -> validator. So you only save one step (the parser), and given the stochastic nature of LLM output, the normalizer and validator can be trickier to write than ones built for an old-fashioned rules-based parser. But each situation is different; YMMV.

simonebrunozzi|7 months ago

Subtitled: "agli sventurati che hanno un conto postale" ("(dedicated to) the unlucky ones that have a bank account with the Italian postal service")

The usual, amazing irony of us Italians. Love it.

Tox46|7 months ago

Loving the subtitle in the md

brightbeige|7 months ago

> agli sventurati che hanno un conto postale

translates to:

“to the unfortunate ones who have a postal account”

rcastellotti|7 months ago

POPOPO POPOPO POPO POPOOOO come on fellow italians, sing with me!