Can someone explain why you would want a database larger than 16 TiB? Imagine there's database corruption, or you need to change the schema or something... why would you want to deal with that at an insane scale like hundreds of TiB? If you split your data up into smaller databases or shard, there's less chance of a problem affecting all of your data, or of the data being so gigantic that it takes forever to fix.
You want high-density storage to minimize the number of servers required when data models get large. If you limited yourself to 16 TiB per server, it would not be uncommon to require thousands of servers for large data models. If your database supports high-density storage, you could fit the same data in a single rack; for somewhat smaller data models, on a single server. It doesn't just save a lot of money; some edge environments -- essentially single servers -- are already PiB+. Storage bandwidth is typically greater than network bandwidth these days, so many workloads scale well this way.
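To make the "thousands of servers" claim concrete, here is a rough back-of-envelope sketch. The data-model size and per-server densities are illustrative assumptions, not figures from any particular vendor or deployment:

```python
# Back-of-envelope: how many servers a large data model needs at
# different per-server storage densities. All sizes are in TiB and
# the chosen figures are illustrative assumptions.

TIB = 1
PIB = 1024 * TIB

def servers_needed(data_tib: int, per_server_tib: int) -> int:
    # Ceiling division: a partially filled server still counts.
    return -(-data_tib // per_server_tib)

data = 100 * PIB  # a hypothetical 100 PiB data model

print(servers_needed(data, 16))    # capped at 16 TiB/server -> 6400 servers
print(servers_needed(data, 1024))  # 1 PiB/server -> 100 servers
```

At 16 TiB per server you are indeed into the thousands; with PiB-class density the same data fits in a couple of rows of racks, and a 10 PiB model fits in one.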
Quantity has a quality all its own. Fragmenting your database over thousands of servers introduces new classes of failure and greatly increases the risks of other types of failures that you don't have to deal with when you only have dozens of servers or a single server.
High-density storage does require strategies for recovery that reflect the bandwidth-to-storage ratios. Designers are usually aware of the implications and architect appropriately.
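The bandwidth-to-storage point can be quantified: recovery time is roughly capacity divided by the bandwidth available to refill it. A hedged sketch, with an assumed 25 Gb/s network link and an assumed 20 GB/s aggregate local NVMe read bandwidth (both illustrative, not measured figures):

```python
# Recovery time scales with the storage-to-bandwidth ratio: restoring a
# high-density node over the network can take days, which is why the
# recovery strategy has to be designed around these ratios up front.

PIB_BYTES = 2 ** 50  # 1 PiB in bytes

def rebuild_hours(capacity_bytes: float, bandwidth_bytes_per_s: float) -> float:
    # Idealized: assumes the pipe is saturated the whole time.
    return capacity_bytes / bandwidth_bytes_per_s / 3600

# 1 PiB node restored over an assumed 25 Gb/s link (~3.1 GB/s)
# vs. from assumed local NVMe at 20 GB/s aggregate.
print(round(rebuild_hours(PIB_BYTES, 25e9 / 8), 1))  # ~100.1 hours (>4 days)
print(round(rebuild_hours(PIB_BYTES, 20e9), 1))      # ~15.6 hours
```

The gap between those two numbers is why high-density designs favor local-first recovery (scrub/rebuild from surviving local media) and reserve full over-the-network restores for true disasters.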
16 TiB is not a problem. I have admin'd postgres databases larger than that. If you use good hardware (ECC RAM) and an error-correcting filesystem (ZFS), you are generally good to go.
Typically for postgres, DB corruption issues are isolated to single tables/indices, not the entire database. SQLite's architecture might be a bit less "resilient" in this sense (I'm not sure, just guessing), but it also might have a lower chance of introducing corruption in the first place.