binarin|1 month ago
- Everything is managed by a build system that can track dependencies
- Inputs from financial institutions are kept in the repo as-is
- Those inputs are converted by your scripts into .csv files that the PTA import engine can read
- Rules files describe how to convert .csv lines into PTA entries (a sketch follows after this list)
- Generated files are included from per-year PTA journals (and you can also put manual transactions in there)
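A minimal sketch of what one of those rules files could look like, assuming hledger as the PTA import engine (the file name, accounts, and matcher below are made up):

    # checking.csv.rules: map the converted .csv columns to journal entries
    skip 1
    fields date, description, amount
    currency $
    account1 assets:bank:checking

    # classification rules; add more as new merchants show up
    if AMAZON
        account2 expenses:shopping

A per-year journal then pulls the generated output in with an include directive, alongside any hand-written transactions.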
The benefit is that you can change any part of this pipeline and just regenerate the changed parts:
- improve the program that converts to .csv, and the raw data immediately gets better across the whole repo
- add or customize import rules, and better classification is immediately applied to all of the past data
And with this approach you can start small (say, a single month of data from your primary bank) and refine things in steps: adding more historical data, or adding more data sources (not only bank statements, but even things like itemized Amazon orders and PayPal receipts).
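A hedged sketch of the build part, assuming GNU Make and a hypothetical scripts/qfx2csv.py converter (the recipe line starts with a tab): touching either a raw export or the converter script makes the corresponding targets stale, so only the affected .csv files get rebuilt:

    # Convert every raw QFX export under inputs/ into a .csv the importer reads
    CSVS := $(patsubst inputs/%.qfx,generated/%.csv,$(wildcard inputs/*.qfx))

    all: $(CSVS)

    # Remade whenever the raw input or the converter script changes
    generated/%.csv: inputs/%.qfx scripts/qfx2csv.py
    	python scripts/qfx2csv.py $< > $@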
zellyn|1 month ago
I just downloaded a bunch of QFX and CSV files and got Claude Code to do this. It took an hour to create the whole system from nothing. Then of course I stayed up until 2am getting CC to create Python rules to categorize things better, and trying to figure out what BEENVERI on my bank statement means.
(If you do this, make Claude generate fingerprints for each transaction so it’s easy to add individual overrides…)
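One way to do that fingerprinting, as a sketch rather than zellyn's actual code (the field names and the override entry are assumptions):

    import hashlib

    def txn_fingerprint(date: str, amount: str, description: str) -> str:
        # Hash only the stable fields, so re-importing the same statement
        # always yields the same id for the same transaction
        key = f"{date}|{amount}|{description}".encode("utf-8")
        return hashlib.sha256(key).hexdigest()[:12]

    # Hypothetical: manual overrides keyed by fingerprint beat the generic rules
    OVERRIDES = {"a3f1c9d2e4b5": "expenses:groceries"}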
Getting Claude to write a FastAPI backend to serve up a “Subscriptions” dashboard took about three minutes, plus another minute or two to add an SVG bar graph and change the ordering.
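An endpoint like that can be genuinely tiny; a sketch assuming the recurring charges were already detected upstream (the data below is invented):

    from fastapi import FastAPI

    app = FastAPI()

    # In the real pipeline this would come from the imported journal
    SUBSCRIPTIONS = [
        {"merchant": "Netflix", "monthly": 15.49},
        {"merchant": "Spotify", "monthly": 10.99},
    ]

    @app.get("/subscriptions")
    def list_subscriptions():
        # Most expensive first; the ordering tweak mentioned above
        return sorted(SUBSCRIPTIONS, key=lambda s: -s["monthly"])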
Crazy times.
ekjhgkejhgk|1 month ago
You have a balance that accrues monthly interest. Separately, you're told that you're owed a monthly payment. If the monthly payment > interest, then the difference is subtracted from the balance. No need for Haskell.
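That update rule fits in a few lines of Python (the rate, payment, and starting balance below are made up for illustration):

    def step(balance: float, monthly_rate: float, payment: float) -> float:
        # Interest accrues first; whatever the payment covers beyond the
        # interest comes off the balance
        interest = balance * monthly_rate
        return balance + interest - payment

    balance = 10_000.0
    for month in range(12):
        balance = step(balance, 0.01, 200.0)  # 1%/month, 200/month payment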
binarin|1 month ago