PMTiles and COG Killed My Tile Server
HTTP range requests on static files replace dedicated tile infrastructure. PMTiles and COG serve maps directly from S3/R2 with no backend, no cache layer, and 90% lower cost.
What previously required PostgreSQL + PostGIS, GeoServer, a load balancer, and Redis caching now runs from a single file on S3 at $0.02/GB/month with zero compute costs during idle periods. PMTiles packages all vector tiles into one Hilbert-curve-indexed file. COG structures raster imagery for HTTP Range Requests. Neither requires a running server. Tippecanoe achieves 92% size reduction from raw GeoJSON, translating to 6x faster loads on mobile networks.
Tile Servers: High Ops Cost, Poor Idle Economics
Traditional tile servers accept requests, query a database, generate tiles dynamically, and serve them. This requires managing database connections, caching strategies, horizontal scaling during traffic spikes, and paying for compute during idle periods. For government agencies and research institutions with limited DevOps capacity, the overhead often means geospatial data stays locked in desktop GIS.
COG: Sub-Second Previews of Multi-Gigabyte Satellite Scenes
A Cloud Optimized GeoTIFF is a standard GeoTIFF reorganized with internal tiling (256x256 or 512x512 blocks) and pre-computed overviews. This structure enables HTTP Range Requests -- clients fetch only the bytes they need. Previewing a multi-gigabyte satellite scene takes under a second. No tile server. Just an S3 bucket and CloudFront.
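You can watch this happen with plain GDAL. A minimal sketch, assuming a hypothetical bucket URL: pointing gdalinfo at a remote COG through /vsicurl/ reads only the header and tile index, not the full scene.
# Inspect a multi-gigabyte remote COG; only the header bytes are fetched (URL is hypothetical)
gdalinfo /vsicurl/https://example-bucket.s3.amazonaws.com/scene.tif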
COG Technical Properties
- Internal tiling — Divides the image into 256x256 or 512x512 pixel blocks
- Overviews — Pre-computed lower-resolution versions for a fast "astronaut's-eye view"
- Range Request support — Fetch only the bytes needed over HTTP
- Backwards compatible — Still a valid GeoTIFF that works with existing tools
- Compression — Supports LZW, DEFLATE, ZSTD, and JPEG for reduced storage
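Producing a COG is a single command with GDAL 3.1+, whose dedicated COG driver handles the internal tiling and overview generation in one pass. A minimal sketch with placeholder file names:
# Convert a plain GeoTIFF into a COG with DEFLATE compression and 512-pixel blocks
gdal_translate input.tif output_cog.tif \
  -of COG \
  -co COMPRESS=DEFLATE \
  -co BLOCKSIZE=512 \
  -co OVERVIEW_RESAMPLING=AVERAGE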
PMTiles: One File Replaces Your Entire Tile Server
PMTiles solves the same problem for vector data. Developed by Brandon Liu at Protomaps, it packages all vector tiles into a single file served from static storage -- S3, R2, Azure Blob, or a CDN. No database, no tile generation, no caching layer.
Hilbert curve ordering keeps spatially adjacent tiles physically adjacent in the file, so any viewport resolves to a few efficient range requests. A world-scale basemap PMTiles file runs 50-100GB, but rendering a city view fetches just kilobytes of it.
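The official go-pmtiles CLI takes advantage of this layout: pmtiles extract pulls a bounding box out of a remote archive using nothing but range requests. A sketch; the archive URL and bbox values are placeholders:
# Extract one city from a remote world basemap via range requests
pmtiles extract https://example.com/world.pmtiles paris.pmtiles \
  --bbox=2.25,48.81,2.42,48.91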
GeoParquet: Columnar Analytics with DuckDB, Spark, and BigQuery
COG and PMTiles optimize for visualization. GeoParquet targets analytics -- spatial joins, aggregations, ML feature engineering. It inherits Apache Parquet's columnar compression, predicate pushdown, and integration with DuckDB, Spark, BigQuery, and Snowflake. Geometry is stored as WKB, compatible with virtually every GIS tool.
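As a sketch of the analytics side, the DuckDB CLI with its spatial extension can aggregate a GeoParquet file directly; the file and column names are placeholders following GeoParquet conventions:
# Count features and sum their areas straight from a GeoParquet file
duckdb -c "
  INSTALL spatial; LOAD spatial;
  SELECT count(*) AS features,
         sum(ST_Area(ST_GeomFromWKB(geometry))) AS total_area
  FROM read_parquet('buildings.parquet');
"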
Which Format for Which Job
- COG — Raster visualization and processing (satellite imagery, DEMs, aerial photos)
- PMTiles — Vector visualization (basemaps, reference layers, thematic maps)
- GeoParquet — Vector analytics (spatial joins, aggregations, ML feature engineering)
- FlatGeobuf — Simple vector streaming with bbox filtering
- Zarr — Multi-dimensional arrays (climate data, time series rasters)
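Moving data between these formats is mostly a GDAL one-liner. A sketch, assuming a GDAL build with the Parquet driver (3.5+) and placeholder file names:
# Convert a GeoJSON source to GeoParquet and FlatGeobuf
ogr2ogr -f Parquet buildings.parquet buildings.geojson
ogr2ogr -f FlatGeobuf buildings.fgb buildings.geojson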
Tippecanoe: 92% Smaller Than Raw GeoJSON
Tippecanoe, originally developed at Mapbox, is the gold standard for PMTiles generation. It drops features at low zoom levels that would render as single pixels and simplifies geometries to the visual fidelity each zoom level can actually display. The result: 92% size reduction compared to raw GeoJSON, translating to 6x faster loads on mobile networks.
# Generate PMTiles from GeoJSON
tippecanoe -o output.pmtiles \
  --minimum-zoom=0 \
  --maximum-zoom=14 \
  --drop-densest-as-needed \
  --extend-zooms-if-still-dropping \
  input.geojson

Production Stack: Five Components, Zero Servers
The Serverless Geospatial Stack
- Cloudflare R2 or AWS S3 — Object storage for COG, PMTiles, and GeoParquet files
- CloudFront or Cloudflare CDN — Edge caching with Range Request support
- MapLibre GL JS or deck.gl — Client-side rendering via the pmtiles protocol
- TiTiler (optional) — Serverless COG processing via Lambda
- DuckDB WASM — Browser-based GeoParquet analytics
Cost comparison: multiple EC2 instances + RDS + Redis cache versus commodity storage at $0.02/GB/month with zero compute costs during idle periods.
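One practical check when wiring up the CDN layer: the edge must pass Range headers through to the origin. A quick probe (the URL is a placeholder); a 206 response means partial content works, while a 200 means the CDN is returning whole files:
# Expect HTTP 206 Partial Content if Range Requests survive the CDN
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Range: bytes=0-16383" \
  https://cdn.example.com/tiles/world.pmtiles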
The One Caveat: Transactional Editing Still Needs a Database
These formats optimize for visualization and analysis, not transactional editing. Frequent spatial data updates still require a database. But even then, PostGIS can export to cloud-native formats on a schedule for read-heavy workloads. The migration path for organizations still running tile servers: convert to COG/PMTiles, upload to object storage, update client code, decommission servers.
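A sketch of that scheduled export, assuming a PostGIS table named roads and GDAL's GeoJSONSeq driver streaming newline-delimited features into Tippecanoe:
# Nightly job: PostGIS -> PMTiles -> object storage (all names are placeholders)
ogr2ogr -f GeoJSONSeq /vsistdout/ PG:"dbname=gis" roads \
  | tippecanoe -o roads.pmtiles --force --maximum-zoom=12
aws s3 cp roads.pmtiles s3://your-bucket/tiles/roads.pmtiles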
References & Further Reading
Cloud Native Geospatial Formats Guide
Comprehensive guide to COG, PMTiles, GeoParquet, and related formats
https://guide.cloudnativegeo.org/
PMTiles GitHub Repository
Official PMTiles specification and tooling
https://github.com/protomaps/pmtiles
Cloud Native Geospatial Formats Explained
Matt Forrest's practical comparison of formats
https://forrest.nyc/cloud-native-geospatial-formats-geoparquet-zarr-cog-and-pmtiles-explained/
Where is COG for Vector?
Cloud Native Geo Foundation's analysis of vector format evolution
https://cloudnativegeo.org/blog/2023/10/where-is-cog-for-vector/
Tippecanoe Vector Tiles Optimization
Practical guide to Tippecanoe for production PMTiles
https://johal.in/tippecanoe-vector-tiles-python-geojson-optimize-2025/