There have been several AWS outages where EBS and EC2 instance creation was outright down, and you could not create new resources for a period of time. AWS is not "always accessible" unless you're in their marketing department.
Sure, you know you’re down, you simply can’t do anything about it. Not so with your own hardware, which is why many orgs continue to run their own gear.
> Sure, you know you’re down, you simply can’t do anything about it. Not so with your own hardware
I don't buy it. Everywhere I've worked that colo'd or owned its DC had wider-reaching outages (fewer than cloud, but each affecting more systems). They were usually down to power delivery (it turns out multi-tiered backup power is hard to get right) or networking misconfiguration (crash carts and lights-out tools are great, but not when 400 systems need manual attention because of one horrible misconfiguration).
I think folks underestimate how many non-core competencies running a data center demands. Also often underestimated is the value of running in an environment designed to treat tenants as potential attackers; unlike AWS's fault isolation, when running your own gear it's really easy to accidentally make the whole system so interconnected as to be vulnerable, even if you make only good decisions while setting it up.