Show HN: ServerlessMaps – Host your own maps in the cloud (github.com/serverlessmaps)
62 points by tobilg 7 months ago | 20 comments
Have a look at the website with an example map, https://www.serverlessmaps.com/, or read the accompanying blog post https://tobilg.com/serverless-maps-for-fun-and-profit



https://protomaps.com/ is even simpler: HTTP Range requests on a single static file.


I'm a little annoyed by the fact that the map shows South America and avoids showing the Argentina label because it overflows, but doesn't give a fuck about Chile and its 2 mm of width.


And shows Buenos Aires before it shows Argentina.



It says it uses PMTiles right on the front page, right?


Yep, it's a format they designed (Protomaps Tiles):

https://github.com/protomaps/PMTiles


It uses PMTiles… Range requests are only supported on S3 directly. If you want to use a custom domain, edge-level caching, etc., you'll need to use a CDN like CloudFront.


Yeah, Protomaps is awesome, I’ve been very happy with them. Dirt cheap and simple. After getting hit with a couple hundred dollars of Google Maps charges (yes, not much, but it was a side project), I went looking for alternatives and found Protomaps. I haven’t looked back.


The architecture diagram in the repo is a little overwhelming for a Friday night — conceivably you could just toss a public pmtiles file on s3, and access it directly from the maplibre front-end, right? (As long as you’re not worried about someone coming along and downloading your whole tileset)
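For what it's worth, the direct route can be sketched in a few lines. The bucket URL and empty layer list below are placeholders, and in the browser you'd first register the pmtiles protocol handler from the PMTiles package before creating the map:

```javascript
// Sketch: point MapLibre straight at a public .pmtiles file on S3.
// In the browser, register the protocol first:
//   import { Protocol } from "pmtiles";
//   maplibregl.addProtocol("pmtiles", new Protocol().tile);

// Build a minimal MapLibre style object for a pmtiles:// source.
function pmtilesStyle(archiveUrl) {
  return {
    version: 8,
    sources: {
      protomaps: { type: "vector", url: "pmtiles://" + archiveUrl },
    },
    layers: [], // real styles list fill/line/symbol layers here
  };
}

const style = pmtilesStyle(
  "https://example-bucket.s3.amazonaws.com/planet.pmtiles"
);
// then: new maplibregl.Map({ container: "map", style });
```

And yes, with a public bucket anyone can pull the whole tileset with one GET.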


The architecture is mostly just putting things in S3 and serving them through a CDN. The diagram only goes into detail because little things like permissions and response-header settings exist as separate but related "Policy" resources at the AWS level.

The only thing there other than S3/CloudFront is a Lambda function endpoint that determines which tiles to serve.


Right, the diagram is generated from the IaC of the project itself, so it contains all AWS resources involved.

You could use the Lambda function to verify access tokens etc. before returning the tile data, if that is a concern.

CloudFront as the CDN enables edge caching, meaning that repeat requests are served much faster, as S3 is always region-based.
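A rough sketch of what such a token check could look like. This is not the project's actual function: the event shape loosely follows a Lambda@Edge viewer-request event, the static-string comparison is a stand-in for real validation (e.g. a signed token), and real handlers would be async.

```javascript
// Hypothetical viewer-request handler: reject tile requests without a
// valid token, otherwise pass the request through to the S3 origin.
// EXPECTED_TOKEN is a demo placeholder, not a real secret.
const EXPECTED_TOKEN = "demo-token";

function handler(event) {
  const request = event.Records[0].cf.request;
  const params = new URLSearchParams(request.querystring || "");
  if (params.get("token") !== EXPECTED_TOKEN) {
    return { status: "403", statusDescription: "Forbidden" };
  }
  return request; // unchanged: CloudFront forwards it to the origin
}
```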


This is essentially the Protomaps getting-started guide wrapped in some shell scripts and some infrastructure as code. See https://docs.protomaps.com/guide/getting-started.

I see how this adds some value via the setup scripts. However, I do wish the author more clearly acknowledged that this work, including much of the wording of the blog post, is largely borrowed from the Protomaps/PMTiles docs.


That’s not entirely correct. Yes, this project wraps Protomaps/PMTiles, but it also describes the process and delivers Infrastructure as Code to deploy the whole stack to AWS, which the Protomaps website and code repos don’t provide.

Not sure what you mean by “borrowed from the docs”, as I have a hard time imagining a way of delivering the current project without using or referencing specific things from the libraries/projects used.


Also, “your own maps” seems to mean OpenStreetMap in the example.


Yes, it uses OpenStreetMap data, which you then host in your own AWS account.


This looks great!

I’ve done something similar with tippecanoe and mapshaper from GIS files. That allowed me to use mapbox.js with my own hosted custom maps, as flat files. Very fast, but I still needed to run a server (tileserver-gl-light). This could remove that need, very cool!


It does. I recently migrated a project from a tile server to PMTiles, and now tile generation is just part of CI in the kart repo where the shapefiles are edited. CI passes and uploads the pmtiles file to the web server. Eliminated the tile server altogether.


Nice, thanks for sharing!

https://github.com/headwaymaps/headway also worth checking out.


> Please be aware that this will run for several hours depending on your machine, and will generate a PMTiles file around 45GB. This file will take some time to upload to S3 in the next step as well.

Spin up a tiny EC2 instance with 100 GB volume if you want this to go much faster, assuming your upload is as bad as mine is.


Exactly! I ran this on an r7gd.4xlarge EC2 instance, which took under 3 hours. Then I used the much better upload speed from EC2 to S3, as you described.

Let me know if you’re interested in access to the IaC repo for this.



