item 27799146


plett | 4 years ago

I'm a mapping noob, but I've got a half-planned project which will need web-based maps, and tilemaker looks great.

The GitHub README lists "You want the entire planet" as a reason not to use tilemaker. Why is that? Presumably it's excessive RAM/CPU usage during PBF conversion, or when serving tiles from the MBTiles SQLite file.

But how excessive are we talking? How big a machine would be needed to process a planet file? What tools work better with huge input files?
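For context, the conversion step in question is a single tilemaker invocation along these lines (the file names here are placeholders, not from this thread):

```shell
# Illustrative tilemaker run: read an OSM PBF extract and write an MBTiles
# file of vector tiles. tilemaker also reads a JSON config and a Lua
# processing script; example ones are bundled in its repository.
tilemaker --input region.osm.pbf --output region.mbtiles
```

The whole-planet question is essentially about how this step scales when `region.osm.pbf` is the ~58GB planet file.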


Doctor_Fegg | 4 years ago

It might be doable with 256GB. I've tried with my 144GB machine and it's too slow to be feasible. But ultimately I think 128GB will be achievable... I've got a few ideas that could potentially reduce memory usage.

For whole-planet scale, the traditional approach is to load the OSM data into a Postgres database, and then serve vector tiles from there.
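That pipeline looks roughly like this (a sketch, not an exact recipe; database name, cache size, and choice of tile server are illustrative):

```shell
# Import the planet into Postgres/PostGIS. --slim keeps node locations in
# on-disk tables rather than RAM, trading import speed for much lower
# memory use; --cache sets the node cache size in MB.
osm2pgsql --create --slim -d gis --cache 20000 planet-latest.osm.pbf

# A vector tile server (e.g. Martin, pg_tileserv, or Tegola) then renders
# tiles from the database on demand, so no planet-sized MBTiles file is
# ever built up front.
```

The trade-off versus tilemaker is latency and operational complexity instead of a one-shot memory-hungry conversion.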

Cyykratahk | 4 years ago

The submitter of this issue [1] reported that 64GB of RAM was insufficient to load an 18GB PBF. Considering the planet PBF is 58GB, you're going to need a lot of RAM (and time).

I tried the example tilemaker config (4 layers, zoom 11-14) on the 519MB Australia PBF. My 24-thread PC took a little under 20 minutes to generate 773,576 tiles, using about 10GB of RAM.
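A run like that corresponds roughly to the following command (the paths to the bundled example config and processing script are how the tilemaker repository lays them out and may differ between releases):

```shell
# Illustrative benchmark run on the Australia extract using tilemaker's
# bundled example layer config and Lua processing script.
tilemaker --input australia-latest.osm.pbf \
          --output australia.mbtiles \
          --config resources/config-example.json \
          --process resources/process-example.lua
```

Scaling from a 519MB extract at ~10GB of RAM gives a feel for why the 58GB planet file is a different class of problem.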

1. https://github.com/systemed/tilemaker/issues/238