
As opposed to cloud ops suckers who accept 24/7 on-call and just sit around panicking when cloud providers are down and the status page is full of lies?


Colo providers have massive outages too, so it's the exact same thing in that regard.

If we're talking about routine hardware failures, like a RAID controller or a power supply going bad: AWS is always accessible, so you can see that something is broken and create a new volume or instance in five minutes. With dedicated hardware, you might be toast, with no remote access and/or no spare parts.
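For example, replacing a failed volume is just a couple of control-plane API calls. A minimal sketch with boto3; the region, availability zone, instance ID, and device name here are all hypothetical placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # hypothetical region

    # Create a replacement volume in the same AZ as the affected instance.
    vol = ec2.create_volume(
        AvailabilityZone="us-east-1a",  # hypothetical AZ
        Size=100,                       # GiB
        VolumeType="gp3",
    )
    ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

    # Attach it to the still-running instance under a new device name.
    ec2.attach_volume(
        VolumeId=vol["VolumeId"],
        InstanceId="i-0123456789abcdef0",  # hypothetical instance ID
        Device="/dev/sdf",
    )

No crash cart, no spare-parts inventory: the broken hardware is Amazon's problem, assuming the control plane itself is up, which is exactly the assumption challenged below.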


There have been several AWS outages where EBS and EC2 instantiation was outright down, and you could not create new resources for a period of time. AWS is not “always accessible” unless you’re in their marketing department.

Sure, you know you’re down; you simply can’t do anything about it. Not so with your own hardware, which is why many orgs continue to run their own gear.


> Sure, you know you’re down; you simply can’t do anything about it. Not so with your own hardware

I don't buy it. Every place I've worked that colo'd or owned its DC had wider-reaching outages (fewer than in the cloud, but affecting more systems). They were usually down to power delivery (it turns out multi-tiered backup power systems are hard to get right) or networking misconfiguration (crash carts and lights-out tools are great, but not when 400 systems need manual attention due to one horrible misconfiguration).

I think folks underestimate how many non-core competencies running a data center demands. Also often underestimated is the value of running in an environment designed to treat tenants as potential attackers: unlike with AWS's fault isolation, when running your own gear it's really easy to accidentally make the whole system so interconnected as to be vulnerable, even if you make only good decisions while setting it up.



