Have they fixed all the keyboard bugs introduced in iOS 26.0 yet? I'm not sure how much longer I can put up with issues like this. I might need to switch back to Android if they don't fix these soon.
Seriously, how hard is it to measure the keyboard height correctly and avoid rendering important UI elements, such as submit buttons, underneath it so you can't tap "Send"? It's getting close to unusable.
So many bugs in this version of iOS; I've never seen anything like it. The UI on so many websites is mildly broken or misaligned now, the keyboard randomly has noticeable lag, audio doesn't return to normal volume after a background app makes a noise for a moment, and many more. Really awful. I've never wanted to downgrade iOS back to the previous version until now.
I am quite surprised that most languages do not have an ORM and migrations as powerful as Django's. I get that it's Python's dynamic metaprogramming that makes it such a clean API, but I am still surprised that there isn't much that comes close.
I've used PMTiles to self-host a "find your nearest store" map, which only needed to cover Australia. We created two sources: (1) a low-detail worldwide map to fill out the view (about 50 MB), and (2) a medium-to-high-detail source for Australia only, up to zoom level 15 (about 900 MB). In this case, there's no need for up-to-date maps, so we were able to upload these two files to S3 and forget about them. Works great!
It's very cool! If you want to get higher cache hit rates from a CDN or Redis etc. and lower the number of S3 reads, you can set up a proxy that converts `/{z}/{x}/{y}.mvt` requests into byte-range requests: https://docs.protomaps.com/deploy/
That page has some example code from Brandon that you can lift and drop into a Cloudflare Worker or other platforms.
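The core of that proxy can be sketched in a few lines: look up the tile's byte span in the archive's directory, then forward a Range request for exactly that span. This is a hedged sketch, not the Protomaps code; `tile_span` is a made-up stand-in for the real PMTiles directory lookup (which a real proxy would do by parsing the header and directory, or via a library).

```python
# Sketch of the proxy idea: translate a /{z}/{x}/{y}.mvt request into an
# HTTP byte-range request against the single PMTiles archive on S3.
# The directory lookup is stubbed out; a real proxy would resolve each
# tile's (offset, length) from the PMTiles directory.

def tile_span(z: int, x: int, y: int) -> tuple[int, int]:
    """Hypothetical stand-in for a PMTiles directory lookup.
    Returns (offset, length) of the tile's bytes within the archive."""
    fake_directory = {(0, 0, 0): (127, 1024)}  # made-up numbers
    return fake_directory[(z, x, y)]

def range_header(z: int, x: int, y: int) -> str:
    offset, length = tile_span(z, x, y)
    # HTTP Range is inclusive on both ends: bytes=first-last
    return f"bytes={offset}-{offset + length - 1}"

print(range_header(0, 0, 0))  # bytes=127-1150
```

Because the cache key is then the tile path rather than a byte range, a CDN in front of the proxy can cache individual tiles normally.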
Thank you.
I'm going to try this on a different project that we have. Our current deployment is designed to work directly through S3/API Gateway, which reduces the number of moving parts.
We update the tiles frequently, so the setup has been amazing for us.
I am still using an LG UltraFine 5K from launch. I experienced flickering in the first month and had the monitor replaced by the supplier, and it's been amazing ever since! Also, this DPI is perfect for having both crisp text and correctly sized elements on screen (in macOS).
Yes — PMTiles is exactly that: a production-ready, single-file, static container for vector tiles built around HTTP range requests.
I’ve used it in production to self-host Australia-only maps on S3. We generated a single ~900 MB PMTiles file from OpenStreetMap (Australia only, up to Z14) and uploaded it to S3. Clients then fetch just the required byte ranges for each vector tile via HTTP range requests.
It’s fast, scales well, and bandwidth costs are negligible because clients only download the exact data they need.
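That access pattern can be demonstrated end-to-end with Python's standard library. Here a toy server that honors Range headers stands in for S3 (the stdlib's `SimpleHTTPRequestHandler` doesn't serve ranges, so a minimal handler is written by hand); the byte values and filename are illustrative only.

```python
# Self-contained demo of the access pattern PMTiles relies on: a server
# that honors Range headers and a client that pulls only the bytes it
# needs, getting back 206 Partial Content.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

DATA = bytes(range(256))  # pretend this is a .pmtiles archive

class RangeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Parse "bytes=first-last" (inclusive on both ends)
        first, last = self.headers["Range"].removeprefix("bytes=").split("-")
        chunk = DATA[int(first): int(last) + 1]
        self.send_response(206)  # Partial Content
        self.send_header("Content-Length", str(len(chunk)))
        self.end_headers()
        self.wfile.write(chunk)

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), RangeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/tiles.pmtiles",
    headers={"Range": "bytes=10-13"},
)
with urllib.request.urlopen(req) as resp:
    assert resp.status == 206
    body = resp.read()  # only the 4 requested bytes arrive
server.shutdown()
```

S3, most object stores, and plain web servers support exactly this, which is why a single static file is enough.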
Hadn't seen PMTiles before, but that matches the mental model exactly! I chose physical file sharding over range requests on a single DB because it felt safer for 'dumb' static hosts like CF: less risk of a single 22 GB file getting stuck or cached weirdly. Maybe it would work, though.
My only gripe is that the tile metadata is stored as JSON. I get that it's for compatibility with existing software, but it means that for, say, a simple C program to implement the full spec, you have to ship a JSON parser on top of the PMTiles parser itself.
At that point you're just I/O bound, no? I can parse JSON at multiple GB/s on commodity hardware, but I'm going to have a much harder time actually delivering that much data to parse.
Look into using duckdb with remote http/s3 parquet files. The parquet files are organized as columnar vectors, grouped into chunks of rows. Each row group stores metadata about the set it contains that can be used to prune out data that doesn’t need to be scanned by the query engine. https://duckdb.org/docs/stable/guides/performance/indexing
LanceDB has a similar mechanism for operating on remote vector embeddings/text search.
> Look into using duckdb with remote http/s3 parquet files. The parquet files are organized as columnar vectors, grouped into chunks of rows. Each row group stores metadata about the set it contains that can be used to prune out data that doesn’t need to be scanned by the query engine. https://duckdb.org/docs/stable/guides/performance/indexing
But when using this on the frontend, are portions of the files fetched specifically with HTTP range requests? I tried to search for it but couldn't find details.
Yes, you should be able to see the byte-range requests and 206 responses from an S3-compatible bucket or any HTTP server that supports those access patterns.
I love the simplicity! Does this store state in the browser?
Have you considered adding an export/import data option? I was actually expecting "Copy link" to have my month's worth of event data encoded in the URL after the # (so it would never be sent to the server, but would let me share the month with a friend). Just an idea.
Thanks! Yes, everything is stored locally in the browser — no backend at all.
And that’s a great idea.
I’ve been thinking about adding an export/import option, and encoding the data into the URL hash actually fits the “offline + privacy-first” vibe really well.
I’ll explore it — would be super useful for sharing or backup without requiring any server.
A small suggestion that immediately came to mind: why not serialize the data as JSON and base64-encode it, just like a JWT? That way it can be shared and loaded easily.
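That suggestion is only a few lines in practice. A sketch in Python for illustration (the app itself would do this in JavaScript); the event fields and URL are made up, and the encoding is the unpadded base64url alphabet that JWTs use, which is safe to put in a URL fragment so it never reaches the server:

```python
# Sketch: serialize events to JSON, base64url-encode, carry in the URL
# fragment (after '#'), and decode on load. Field names are hypothetical.
import base64
import json

def encode_fragment(events: list[dict]) -> str:
    raw = json.dumps(events, separators=(",", ":")).encode()
    # Strip '=' padding, as JWTs do, since '=' is awkward in URLs
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

def decode_fragment(fragment: str) -> list[dict]:
    padded = fragment + "=" * (-len(fragment) % 4)  # restore padding
    return json.loads(base64.urlsafe_b64decode(padded))

events = [{"date": "2025-03-01", "title": "Gym"}]
link = "https://example.app/#" + encode_fragment(events)
assert decode_fragment(link.split("#", 1)[1]) == events
```

One caveat worth checking: browsers limit URL length (tens of thousands of characters in practice), so a very dense month might need compression before encoding.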