Well, if you really need the autoscale feature to handle sudden and ludicrous traffic spikes, you will most likely also use a database that autoscales too, e.g. DynamoDB.
For most applications, it's really not needed though.
Absolutely agree. I tried it out for a short personal project and was disappointed by the poor documentation and the even worse libraries I found. I'm assuming large companies have their own internal libraries/ORMs for this, and that that's how it's intended to be used.
As for the technology itself, the design is very interesting, though, as the poster above me mentioned, it comes with a ton of caveats to the promises it makes.
It's one of the very few technologies where I need pen, paper, and some quiet time just to decide on a table schema. And most of the time I have to redo it after new query requirements come up.
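To make the pain concrete: in DynamoDB you pick keys for your known access patterns up front, which is why a new query requirement can force a schema redo. A minimal sketch of the single-table style (all names here, the order/customer entities and the GSI, are made up for illustration, not from the thread):

```python
# Hypothetical single-table design: the partition/sort keys are derived
# from the access patterns, not from the entities themselves.
def order_item(customer_id: str, order_id: str, placed_at: str) -> dict:
    """Build an item whose keys support two planned access patterns:
    1) all orders for a customer, sorted by time (query on PK, range on SK)
    2) direct lookup of one order (via a GSI keyed on GSI1PK)"""
    return {
        "PK": f"CUSTOMER#{customer_id}",
        "SK": f"ORDER#{placed_at}#{order_id}",  # time-sortable within customer
        "GSI1PK": f"ORDER#{order_id}",          # direct order lookup
        "customer_id": customer_id,
        "order_id": order_id,
        "placed_at": placed_at,
    }

item = order_item("c42", "o1001", "2024-05-01T12:00:00Z")
print(item["PK"])  # CUSTOMER#c42
print(item["SK"])  # ORDER#2024-05-01T12:00:00Z#o1001
```

If a third access pattern shows up later ("all orders in a date range across customers", say), none of these keys support it, and you're back to pen and paper.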
"DynamoDB uses synchronous replication across multiple data centers for high durability and availability"
This does not seem scalable for OLTP-type load on a busy store. Again, I think you'll be way better off money-wise hosting your own database on real hardware, either on-prem or colocated.
You're forgetting the specifics here and the amount of hardware thrown at it. As I've already said, I'm not discussing FB/Google/Amazon-scale stuff here. That's their problem, and it isn't shared by more typical deployments.
is there any transactional storage solution that actually handles a traffic spike well?
my understanding is that most storage solutions scale well when planned for, but scaling during a spike only adds stress to the existing replicas/partitions for however long it takes to boot new instances and shift data toward them, and the new instances don't help carry the load until that process is complete.
so they work if you can predict a spike, but they can't handle a sudden one, especially if the data to replicate/partition is sizeable.
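The dynamic described above can be sketched with a toy model (my own simplification, not any vendor's actual implementation; the 20% copy overhead is an arbitrary assumption): while data is being copied to new partitions, the old partitions absorb both the spike and the copy traffic, so per-partition load is worse than before scaling, and only drops once the copy finishes.

```python
def load_per_partition(rps: float, old: int, new: int, copy_done: bool) -> float:
    """Requests/sec each serving partition absorbs.

    Until the copy completes, only the old partitions serve traffic,
    and they also pay an assumed 20% overhead for streaming data out.
    """
    serving = old + (new if copy_done else 0)
    copy_overhead = 0.0 if copy_done else rps * 0.2  # assumed copy cost
    return (rps + copy_overhead) / serving

during = load_per_partition(10_000, old=4, new=4, copy_done=False)
after = load_per_partition(10_000, old=4, new=4, copy_done=True)
print(during)  # 3000.0  -- worse than the pre-scaling 2500.0
print(after)   # 1250.0  -- relief only after migration completes
```

Which is exactly why scheduled/predictive scaling (warming up before the spike) works, and reactive scaling during the spike often makes things worse first.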