
1. Just because you don't mind if people have RO access doesn't mean you should use default passwords. Privilege escalation is a thing, and is often far easier than getting a foothold. The number of developers who "don't bother with" disciplined input validation in areas that are supposed to be accessible only by trusted users is staggering.

2. Don't expose entire hosts to the internet. Punch only necessary holes in the firewall. That way the device at least needs to phone home in order to cause a problem like this.

3. Do you have a similar policy with other hosts on your network? I.e., do you figure "well, that's inside the firewall, so we don't need to worry about encryption/timely application of security updates/resetting default passwords/etc."? If you're not 100% sure (or if the answer is "no"), you now have a lot of cleanup work to do.
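Point 2 above can be sketched concretely. Here's a minimal default-deny nftables ruleset, assuming a Linux host; the interface, subnet, and port numbers are illustrative placeholders, not details from the thread:

```
# /etc/nftables.conf -- default-deny inbound; punch only the holes you need.
# The admin subnet (10.0.10.0/24) and RTSP port (554) are assumptions.
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept   # allow replies to outbound traffic
    iifname "lo" accept                   # loopback
    ip saddr 10.0.10.0/24 tcp dport 554 accept  # camera stream, admin VLAN only
  }
}
```

Everything not explicitly allowed is dropped, so the device can't be reached from the wider Internet even if it's misconfigured.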



Not the OP, but I was just in a similar conversation: "Oh, it's behind a firewall, so let's just disable the extra security (passwords, HTTPS, etc)." Defense in depth is very important, but sadly, many people seem to think one security layer is enough.


The network is always compromised.


Ever since the smart phone, this is the only acceptable perspective. Assume bad people are already on your network.

Remember, smart phones are literal bridges from one network to another.


Given the amount of effort I had to go to in order to "literally" bridge a smartphone to my network to give myself Internet access when my fibre connection went down, do you think this is slightly incorrect?

I'm assuming what you mean is that smartphones may be connecting to your internal network and bringing malware with them.

That said, the corporate networks I've seen have a separate network for phones/laptops and you need to VPN in if you want other access.


You are correct that activating both networks at the same time is hard, but what you have is a device that travels between untrusted and trusted networks. Assuming a compromised device, anything is possible.


Sysadmins at places I've worked have used "defense in depth" as an excuse to create layer upon layer of frustrating hoops to jump through in order to get any work done. I'm pretty sick of it. One perfect layer is vastly preferable.


There are sysadmins who use complexity to maintain draconian control, hide laziness, or mask a lack of knowledge, but don't throw the baby out with the bathwater. No matter what, these people will find some way to obstruct you or maintain control. Even if their hearts are in the right place and they are following security best practices, it sounds like they weren't doing a good job of automating the complexity and processes. Complex doesn't have to mean complicated.

A security design that takes advantage of multiple layers and compartmentalization is your ally against attackers. They love networks with hard shells and squishy insides. Once they are in via a service, no matter how innocuous, they can move laterally to the real targets with impunity.

But ultimately this kind of stuff is a culture issue. Culture issues are hard to fix, but they're usually the root cause of bad blood between operations and development. It generally needs to be addressed on both sides, though. It's really easy to think it's just a bunch of grumpy and possessive ops people, but those behaviors are often rooted in how the dev teams interact with them. Things like punting releases over a wall and calling it a day, not participating in on-call duties despite causing many outages, and a disparity between how credit (for releases) and blame (for outages) are assigned are often cited as issues that create what devs think of as irrational BOFHs.


"One perfect layer" does not exist. Overdoing defence in depth is of course not a good thing, and making people do a lot of hoop-jumping isn't helpful either. But, say, using a smartcard and an OTP isn't all that hard, and is vastly more secure than just a username and a password, to name a random option someone might implement.


There's always a balance, but I'll echo the other comments: One layer is not enough. Do you actually think that, if a DVR is behind a firewall, it shouldn't need a password for admin access?

Strong passwords, two-factor auth for privileged services, access control policies (ACLs/firewalls), access logging, etc. are all requirements of any secure network. And that was just "off the top of my head on a Friday" kind of stuff.


>One perfect layer is vastly preferable.

Ah, hello, every manager that has ever made a decision causing the problems people further up are grousing about.

The point of defense in depth is there IS no "one perfect layer."


There's no such thing as a perfect layer.


Do you mind telling me what this one perfect layer is? If you'd like to turn it into a business, I'll fund your seed round.


Do you have to open a firewall rule request for every src:dst host/port/protocol pair? Even for 3rd-party applications that you don't think you should have to understand, and that should "just work"? Do you have the least privilege necessary at any given point in time?

If not, you have relatively little to complain about.

And, I'll add, if you're a developer, we'd all prefer you just crank out perfect code. That way we'd never have deployment issues, never get paged for outages, and never have to work around poor architecture or assumptions that don't scale or aren't load tested. Thanks!


>One perfect layer

That's the problem.


Well, you need at least one layer to protect from the outside world, and another for insider attacks. Much of the time they can be invisible to the user. For example, many places have a policy that all internal services must be Internet-hardened, as though they were exposed on the broader 'net (even though they're behind a firewall).
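As a hedged sketch of that "Internet-hardened even behind the firewall" policy, an internal-only nginx vhost can still insist on TLS and authentication. The hostname, certificate paths, htpasswd file, and backend port below are all assumptions for illustration:

```
# Internal-only service, configured as if it were public-facing.
server {
    listen 443 ssl;
    server_name dashboard.internal.example;

    ssl_certificate     /etc/ssl/internal/dashboard.crt;
    ssl_certificate_key /etc/ssl/internal/dashboard.key;

    auth_basic           "internal";            # auth even inside the firewall
    auth_basic_user_file /etc/nginx/htpasswd;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

# Plain HTTP only redirects; it never serves content.
server {
    listen 80;
    server_name dashboard.internal.example;
    return 301 https://$host$request_uri;
}
```

If an attacker does land on the internal network, they still face the same auth and encryption a stranger on the Internet would.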



