item 46602324

Show HN: Self-host Reddit – 2.38B posts, works offline, yours forever

286 points | 19-84 | 1 month ago | github.com

Reddit's API is effectively dead for archival. Third-party apps are gone. Reddit has threatened to cut off access to the Pushshift dataset multiple times. But 3.28TB of Reddit history exists as a torrent right now, and I built a tool to turn it into something you can browse on your own hardware.

The key point: This doesn't touch Reddit's servers. Ever. Download the Pushshift dataset, run my tool locally, get a fully browsable archive. Works on an air-gapped machine. Works on a Raspberry Pi serving your LAN. Works on a USB drive you hand to someone.

What it does: Takes compressed data dumps from Reddit (.zst), Voat (SQL), and Ruqqus (.7z) and generates static HTML. No JavaScript, no external requests, no tracking. Open index.html and browse. Want search? Run the optional Docker stack with PostgreSQL – still entirely on your machine.
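The dump-to-static-HTML step can be sketched in a few lines. This is an illustration only, not the project's actual code: the real tool streams NDJSON out of .zst archives and renders with Jinja2, while this sketch takes already-decoded lines and uses plain string templates so it stays stdlib-only.

```python
import json
from html import escape

# Each Pushshift dump line is one JSON object per post (NDJSON).
# The real tool streams these out of .zst archives; here we take an
# iterable of decoded lines for illustration.
PAGE = "<!doctype html><html><body><h1>{sub}</h1>{items}</body></html>"
ITEM = '<div class="post"><h2>{title}</h2><p>score: {score}</p></div>'

def render_subreddit(lines, subreddit):
    """Render one static page for a subreddit from NDJSON lines."""
    items = []
    for line in lines:
        post = json.loads(line)
        if post.get("subreddit") != subreddit:
            continue
        items.append(ITEM.format(title=escape(post.get("title", "")),
                                 score=post.get("score", 0)))
    return PAGE.format(sub=escape(subreddit), items="".join(items))

sample = [
    '{"subreddit": "programming", "title": "Show HN clone", "score": 42}',
    '{"subreddit": "pics", "title": "ignored", "score": 1}',
]
html = render_subreddit(sample, "programming")
```

The output needs no server at all: write the string to `index.html` and open it in a browser.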

API & AI Integration: Full REST API with 30+ endpoints – posts, comments, users, subreddits, full-text search, aggregations. Also ships with an MCP server (29 tools) so you can query your archive directly from AI tools.
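Since everything runs locally, querying the API is just HTTP against your own machine. The endpoint and parameter names below are hypothetical, chosen for illustration; check the project's API documentation for the real routes.

```python
from urllib.parse import urlencode

# Hypothetical base URL and route names for illustration only --
# the real instance's routes may differ.
BASE = "http://localhost:8080/api"

def search_url(query, subreddit=None, limit=25):
    """Build a full-text search request URL against a local instance."""
    params = {"q": query, "limit": limit}
    if subreddit:
        params["subreddit"] = subreddit
    return f"{BASE}/search?{urlencode(params)}"

url = search_url("printer driver", subreddit="techsupport")
```

Any HTTP client (or an MCP-aware AI tool pointed at the bundled server) can then fetch that URL; no external service is involved.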

Self-hosting options:

- USB drive / local folder (just open the HTML files)

- Home server on your LAN

- Tor hidden service (2 commands, no port forwarding needed)

- VPS with HTTPS

- GitHub Pages for small archives
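The Tor option typically amounts to a short hidden-service stanza in torrc, pointing at whatever local web server hosts the generated HTML. Paths and the port below are illustrative, not taken from the project:

```
# /etc/tor/torrc -- illustrative hidden-service stanza
HiddenServiceDir /var/lib/tor/redd-archive/
HiddenServicePort 80 127.0.0.1:8080
```

After restarting Tor, the generated hostname file in HiddenServiceDir gives the .onion address; no port forwarding or public IP is required because Tor makes only outbound connections.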

Why this matters: Once you have the data, you own it. No API keys, no rate limits, no ToS changes can take it away.

Scale: Tens of millions of posts per instance. PostgreSQL backend keeps memory constant regardless of dataset size. For the full 2.38B post dataset, run multiple instances by topic.
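The "constant memory" property comes from streaming: rows are read and inserted in fixed-size batches, so RAM usage depends on the batch size, not the dataset. A minimal sketch of that shape (the real tool's batching logic may differ; with a real driver the loop body would be an `executemany` insert):

```python
from itertools import islice

def batches(iterable, size=5000):
    """Yield fixed-size chunks so memory stays flat no matter how big
    the dump is -- the whole dataset is never held in RAM at once."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

# With a real driver this would be cur.executemany("INSERT ...", chunk)
# inside each loop iteration; here we just count rows for illustration.
total = 0
for chunk in batches(range(12_500), size=5000):
    total += len(chunk)
```

Because each chunk is discarded before the next is read, a 100GB dump and a 100MB dump peak at the same memory footprint.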

How I built it: Python, PostgreSQL, Jinja2 templates, Docker. Used Claude Code throughout as an experiment in AI-assisted development. Learned that the workflow is "trust but verify" – it accelerates the boring parts but you still own the architecture.

Live demo: https://online-archives.github.io/redd-archiver-example/

GitHub: https://github.com/19-84/redd-archiver (Public Domain)

Pushshift torrent: https://academictorrents.com/details/1614740ac8c94505e4ecb9d...

64 comments


Aurornis|1 month ago

Cool way to self-host archives.

What I'd really like is a plugin that automatically pulls from archives somewhere and replaces deleted comments and those bot-overwritten comments with the original context.

Reddit is becoming maddening to use because half the old links I click have comments overwritten with garbage out of protest for something. Ironically the original content is available in these archives (which are used for AI training) but now missing for actual users like me just trying to figure out how someone fixed their printer driver 2 years ago.

anonymous908213|1 month ago

That would only really be ironic if people were overwriting their comments in protest of LLM training. But the main reason, the one behind by far the biggest wave of deletions, was Reddit locking down its API. If the result of the protest is that the site is less useful for you, the user, then it served its purpose: the entire point was to boycott Reddit, i.e. get people to stop using it, by removing the user contributions that give the site its only value in the first place.

accrual|1 month ago

Just offering another perspective, because I see those missing comments too. The author decided they didn't want to participate in public discourse anymore and their comment is gone. So be it. I don't search archives or use tools to undermine their effort. I move on to the next thing.

I read "it's maddening because ... they decided to use their autonomy and..." and I stop there. So be it.

NickNaraghi|1 month ago

Data is available via torrent in this section: https://github.com/19-84/redd-archiver?tab=readme-ov-file#-g...

m463|1 month ago

I wonder if you could use this to "Seed" a new distributed social media thing and just take over from there.

Sort of like forking a project.

feconroses|1 month ago

Very cool project! Quick question: is the underlying Pushshift dataset updated with new Reddit data on any regular cadence (daily/weekly/monthly), or is this essentially a fixed historical snapshot up to a certain date? Just want to understand if self-hosters would need to periodically re-download for fresh content or if it's archival-only.

19-84|1 month ago

the data for 2025-12 has been released already. it is usually released every month; it just needs to be split and reprocessed for 2025 by watchful1. i will probably eventually add support for importing data from the monthly arctic shift dumps so that archives can be updated monthly.

https://github.com/ArthurHeitmann/arctic_shift/releases

Arctic Shift https://academictorrents.com/browse.php?search=RaiderBDev

Watchful1 https://academictorrents.com/browse.php?search=Watchful1

alcroito|1 month ago

I tried spinning up the local approach with docker compose, but it fails.

There's no `.env.example` file to copy from. And even if the env vars are set manually, there are issues with the mentioned volumes not existing locally.

Seems like this needs more polish.

elSidCampeador|1 month ago

I wonder if this can be hooked up with the now-dead Apollo app in some way, to get back a slice of time that is forever lost now?

19-84|1 month ago

the API should allow for a lot of different integrations

twobitshifter|1 month ago

If reddit was a squeaky clean place, or if I could pick certain subs, maybe I would be interested, but I really wouldn't want ALL of reddit on my machine even temporarily.

19-84|1 month ago

the torrent has data for the top 40,000 subs on reddit. thanks to watchful1 splitting the data by subreddit, you can download only the subreddit you want from the torrent

nick007x|1 month ago

Hey, I’m working on a similar project and have uploaded Pushshift Reddit data to Hugging Face Datasets. If anyone wants to download specific files when torrents aren’t seeding well, you can use:

https://huggingface.co/datasets/nick007x/pushshift-reddit

It’s handy for grabbing individual months or subreddit slices without needing to pull the full torrent. Might be useful for smaller-scale archiving or testing.

justsomehnguy|1 month ago

Appreciated.

EDIT: Is there any cheap way to search? I have an MS TechNet archive which is useless without search, so I really want to know a way to have cheap local search w/o grepping everything.

19-84|1 month ago

redd-archiver uses postgres full text search. for static search you could use lunr.js
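Either way the trick is the same: build the index once at generation time, then query it cheaply. A toy inverted index in Python shows the idea; lunr.js does essentially this client-side, loading a prebuilt JSON index into the browser so static pages stay searchable offline. (This sketch is an illustration of the concept, not code from either project.)

```python
import re
from collections import defaultdict

def build_index(docs):
    """Map each token to the set of document ids containing it.
    Built once at generation time and serialized alongside the HTML,
    this is the same idea lunr.js uses for static client-side search."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in re.findall(r"[a-z0-9]+", text.lower()):
            index[token].add(doc_id)
    return index

def search(index, query):
    """AND-search: return ids of documents containing every query token."""
    tokens = re.findall(r"[a-z0-9]+", query.lower())
    if not tokens:
        return set()
    result = index.get(tokens[0], set()).copy()
    for tok in tokens[1:]:
        result &= index.get(tok, set())
    return result

docs = {1: "fix printer driver on linux", 2: "printer jam", 3: "linux kernel"}
idx = build_index(docs)
```

Postgres full-text search adds stemming, ranking, and language handling on top of this basic shape, which is why redd-archiver reserves it for the optional Docker stack.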

vivzkestrel|1 month ago

- slightly offtopic here but does anyone have a similar data set of all youtube channels out there?

- details probably include the 400 million youtube accounts, channel id, name, creator url, etc

blks|1 month ago

Does it also contain countless NSFW content?

blks|1 month ago

Opened the live demo, went into the programming subreddit, felt like I was showered with liquid shit. I tend to forget what kind of edgelord hellhole Reddit was (and still is, sometimes).

dvngnt_|1 month ago

I want to do the same thing for tiktok. I have 5k videos starting from the pandemic downloaded. want to find a way to use AI to tag and categorize the videos to scroll locally.

drob518|1 month ago

This is a great way to participate in arguments you missed three years ago.

tetrisgm|1 month ago

Is there a docker compose?

Jordan-117|1 month ago

[deleted]

apstls|1 month ago

There are certainly things to be learned from analysis of the dataset. Keep your friends close but your enemies as JSON, or something...

devilsdata|1 month ago

Might be good for researchers to be able to perform studies on.

metaPushkin|1 month ago

It seems you have no understanding of the term neo-fascism, and yes, it's not what your propaganda talks about.

diggyhole|1 month ago

Wat?

kylehotchkiss|1 month ago

_Hacker News collectively grabs the dataset to train their models on how to become effective reddit trolls_

layer8|1 month ago

Don’t we have enough of those already? ;)

19-84|1 month ago

the API and MCP server is very powerful ;)

syngrog66|1 month ago

Did you pay all the people who created its content?

nullandvoid|1 month ago

Did anyone ever comment on reddit with an expectation of pay?

It's an open forum, similar to here: whatever I post is in the public forum, and therefore I expect it to be used / remixed however anyone wants.

devilsdata|1 month ago

I have no problem with this being downloaded for personal use, in fact that's a good thing. But of course we both know it'll be used to train AI.

antisthenes|1 month ago

Reddit didn't pay me for posting either. Not that I posted in the last decade.