Nice read, but I wish they had included the location (and AZs) that are in use. I've used Oregon, California and Virginia with different results.
The comment around Ubuntu is interesting and I wish there was more detail there.
We use mdadm to run RAID across multiple EBS volumes. mdadm is great, but has a kink: by default it drops to a recovery console at boot if the array is "degraded" (i.e. any member has failed), even if the array is still completely viable thanks to redundancy. On EC2 this is obviously very bad, as you've got no way of accessing that console. It's an unfortunate way to completely hose an instance.
It's an easy one to miss, as you rarely test the boot process with a degraded volume. When it does happen, though, it hurts a lot.
(If you'd like to guard against this, make sure you have "BOOT_DEGRADED=yes" in /etc/initramfs-tools/conf.d/mdadm.)
We are primarily in us-east-1, as I mentioned in the post, with a skeleton set of DB slaves sitting in us-west as an emergency recovery if all of east-1 goes down.
In terms of AZs, we are distributed roughly evenly across all AZs in east-1.