
If I'm a dude in my workshop making children's toys, I'm on the hook for strict product liability for the rest of my life if I made dangerous toys.

If I'm a one-man contractor building a house, there are certain categories of mistakes I'm on the hook for for the rest of my life.

I don't think it would be unprecedented for IoT device makers to be on the hook forever for certain categories of security flaws. It is not like security flaws grow spontaneously like rust on a car. They are all in there from the beginning, whether they are known or not. Most of the IoT security flaws that I have heard about could have been easily prevented with security-conscious design and development practices. If we want to have secure IoT devices, then we need to hold people accountable for making insecure ones.



>It is not like security flaws grow spontaneously like rust on a car.

They absolutely can. It's well known that cryptography algorithms "decay" over time as computing power increases. There was a time when encrypting with DES was secure; DES is no longer secure due to increases in computing power. In 50 years I doubt we'll be using many of today's algorithms. It's exactly like a car slowly rusting over time.

That's even assuming support is possible. I may have a stroke so I no longer have the ability to support my product.


"for certain categories".

If a device was made in 99, I wouldn't blame it for having DES. If a web appliance was built in 2005, I wouldn't blame the maker for unsalted MD5 passwords.

If a device or critical app were made in 2017 and stored its passwords with 777 permissions in the clear, I would blame the maker.
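That last failure mode is trivially detectable from the permission bits alone. As a minimal sketch in Python (the file, its contents, and the `world_accessible` helper are all made up for this example):

```python
import os
import stat
import tempfile

# Hypothetical check: flag a credentials file whose permission bits
# grant access to everyone on the system (e.g. mode 777).
def world_accessible(path):
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return bool(mode & stat.S_IRWXO)  # any "other" permission bit set

# Demonstration with a throwaway file holding plaintext credentials.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"admin:hunter2\n")  # storing this in the clear is the real sin
    path = f.name

os.chmod(path, 0o777)
print(world_accessible(path))  # True: readable/writable by everyone

os.chmod(path, 0o600)
print(world_accessible(path))  # False: owner-only
os.unlink(path)
```

Shipping the file world-readable just compounds the plaintext-storage mistake; either one alone would already be below 2017 norms.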


I think the correct tradeoff is to judge the malfeasance of the product based on what security precautions were reasonable at the time the product was created.

Granting that "reasonability" is a very fuzzy standard, it seems obvious that a product with 30-year-old crypto should not be subject to lawsuits because someone solved integer factorization on real hardware.


Modern crypto algorithms are decay-free. They may still get weaker because of improved math, or quantum computers, but never because of increased computing power. That was an artifact of computers being too slow at the beginning of computing history.

Modern algorithms are replaced mostly because the alternatives are easier to use, faster, or more flexible in some ways. With some huge emphasis on "easier to use", because that means "more secure" in practice.


> They may still get weaker because of improved math

But aren't we (and the NSA) discovering new attacks all the time? Hence their de facto security decays.


Yes, that's improved math.

All the time is a bit of an exaggeration. Some algorithms are broken very fast, others slowly accumulate partial attacks until people don't trust them anymore. Those last ones don't normally get completely broken¹. I think the only exceptions were the shorter key lengths of RSA².

1 - But you will find many examples of algorithms with less than the modern strength parameters that were broken by the mix of faster computers and partial attacks.

2 - But by the time those were abandoned people were mostly using even shorter keys that wouldn't suffice by today's standards even without any known attack.


>They absolutely can. It's well known that cryptography algorithms "decay" over time as computing power increases

This is bullshit.

No reasonably modern crypto algorithm has ever been broken. If you use any crypto algorithm in your less-than-10-year-old product that gets broken, it's because you shipped a product with sub-standard crypto.

There was a time when people thought DES was secure, but that time was in the 80s and early 90s. Nobody will blame you for bad crypto if you released software at that time.


And a new car today also probably won't be rusted in less than 10 years time, what exactly is your point?

>Nobody will blame you for bad crypto if you released software in [the 80s and early 90s].

So you are 100% agreeing with me.

Security expert Bruce Schneier expanded upon what I said back in 1998:

>Cryptographic algorithms have a way of degrading over time. It's a situation that most techies aren't used to: Compression algorithms don't compress less as the years go by, and sorting algorithms don't sort slower. But encryption algorithms get easier to break; something that sufficed three years ago might not today.

>Several things are going on. First, there's Moore's law. Computers are getting faster, better networked, and more plentiful... Cryptographic algorithms are all vulnerable to brute force--trying every possible encryption key, systematically searching for hash-function collisions, factoring the large composite number, and so forth--and brute force gets easier with time. A 56-bit key was long enough in the mid-1970s; today that can be pitifully small. In 1977, Martin Gardner wrote that 129-digit numbers would never be factored; in 1994, one was.

>Aside from brute force, cryptographic algorithms can be attacked with more subtle (and more powerful) techniques. In the early 1990s, the academic community discovered differential and linear cryptanalysis, and many symmetric encryption algorithms were broken. Similarly, the factoring community discovered the number-field sieve, which affected the security of public-key cryptosystems.

https://www.schneier.com/essays/archives/1998/05/the_crypto_...

The ironic thing is this article also said "I recommend SHA-1"... SHA-1 was broken 7 years later.
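Schneier's Moore's-law point can be put into rough numbers. A back-of-envelope sketch of exhausting the 56-bit DES keyspace (the search rates below are illustrative assumptions, not measurements):

```python
# Expected time to exhaust a 56-bit DES keyspace at various
# brute-force search rates. The rates are illustrative assumptions.
KEYSPACE = 2 ** 56  # number of possible DES keys

def years_to_search(keys_per_second):
    seconds = KEYSPACE / keys_per_second
    return seconds / (60 * 60 * 24 * 365)

# At a (hypothetical) 1 million keys/sec -- millennia:
print(round(years_to_search(1e6)))      # 2285

# At 100 billion keys/sec, roughly the scale of late-1990s
# dedicated cracking hardware -- about 8 days:
print(round(years_to_search(1e11), 3))  # 0.023
```

The algorithm doesn't change at all; the same fixed keyspace goes from "effectively unsearchable" to "a long weekend" purely because the attacker's hardware improves.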


The trouble with your analogy is that state-sponsored teams of children aren't working day and night to find new ways to make your toy dangerous that you could never have thought of at the time.

I get that D-Link sucks, and their security practices right out of the box were "bad", but what is "bad"? In the eyes of the FTC and US law, what constitutes "good" and "bad" practice? How do I ensure that I am holding myself to those practices when a product ships? What is my responsibility for vulnerabilities found after the product shipped?

Let's say for example I ship a widget with an IP address and something like Heartbleed shows up 4 years later and my device is vulnerable. Am I on the hook for patching all the systems in the field? What are my obligations here?


No, they're not state-sponsored teams... they're children, they're far more pernicious than black hat teams, and there's a ton more of them. ;)

The questions you're asking are good ones, and they're ones that we as a society need to start answering. 20 years ago, the idea of always-connected devices littering our homes would've seemed like sci-fi magic, but now it's actually becoming commonplace.

But this isn't the first time we've had this sort of thing happen. Like, how did electronics gadgets become safe enough that we never think twice about the fire hazard of plugging in a mystery wall wart?

Or any kind of product. What we've come up with is, generally speaking, that companies have to try to uphold a standard; they can be held liable, and can be put out of business for making unsafe products.

Why it should be any different just because it's a "now with Internet!" product, I don't really know.


But IMO this isn't a "safety" issue with the devices themselves that could hurt the users unintentionally, this is a 3rd party "weaponizing" the system for their own use.

I'm struggling to think of a good analogy, but it seems more like suing Ford because thieves can easily steal their cars for use in robberies.

In essence, the device is being used by an unauthorized 3rd party to harm a 4th party. The device owner in some cases is never harmed or even inconvenienced, and neither is the manufacturer.

It's a shitty situation, and I don't personally know where the line should be drawn, but IMO it should be drawn clearly.


I'd compare it to freon and freon-using devices, which had to be properly handled or the ozone layer got hurt; only now substitute freon for 'product with ability to connect to the internet that doesn't get security updates anymore'. Such devices should be disconnected from any non-air-gapped network, or they're a considerable risk for their environment (the internet).


The "EPA" is actually a perfect "analogy".

We need an IPA (perhaps a different name...). We need someone that will set "standards" for a minimum baseline of "security" to ensure the health of the internet, and dole out fines based on violations.

However they need to be VERY careful. With something like freon it's a physical "thing" that can be regulated. We don't want to regulate "ideas" or even code, that to me seems like a very dangerous thing.

But you are right, we need something that will protect the "health of the internet" like we protect the health of our environment.


How do you add any regulation without regulating the code? Code is our environment.


I meant more that I don't want it to be a crime to write a TCP stream handler without SSL. Or to need a license to write crypto code.

To me it gets dangerously close to regulating ideas.

I'd want it to be based more on consequences. If your product or code is used in an attack, you get fined. No need to dictate the code or software solutions allowed.


Yeah comparing this to toy safety is quite a leap. Actual internet connected toys? Sure. But what I see is a network gateway device in a slew of such devices in a still nascent industry which really hasn't figured out how to even create fully secure network protocols yet, much less hardware. This will have a chilling effect on innovation in the field more than it will improve security in the large.


I think the difference is protecting something against mistakes vs malice.

In other words, a better safety analogy might be the prompt when you do "rm -rf".

I'm not opposed to introducing standards here; I am just saying that it's a different problem.


"Let's have certifications!" is a common answer, but the mechanics are non-trivial. Flip the question around from its usual perspective... suppose you are a certification authority. Put your business hat on. What would it take for you to be willing to certify a non-trivial product as being secure? Bear in mind that the act of certification puts your skin in the game; if you certify things that subsequently have flaws in them, you will at the very least suffer a reputation hit, and it's not out of the question you'll get caught in the lawsuit and monetary damages crossfire. So... what would it take for you to be willing to sign off on an IP camera?

Being honest with yourself, would you have certified something as secure if it used OpenSSL correctly (assuming such a unicorn could exist), before you knew about Heartbleed? If your answer is no, what would it take for you to be willing to certify something using SSL? (I assume "not using SSL" is an obvious certification black mark.)

What I mean by "put your business hat on" is that I am not trying to make the point that this is impossible. I don't think it is. What I mean is, think business, think risk, think risk management, think about how your business is basically made out of black swan events, think about what it would take for you to put yourself out like that, and put some real numbers on the money and at least mentally come up with a sketch of what it would take.

Speaking for myself, it doesn't take me long before I notice that my certification standards would simply annihilate the entire IoT industry as it now stands, on even basic stuff like "Since you're using C, are you using best-practices static analysis? Can you update your firmwares? How secure is your firmware update process?" Those three questions alone and I've probably tipped nearly the entire industry into a negative cost/benefits analysis. Does that solve the problem? Again, I mean that question more honestly than it probably sounds on the Internet; a case can be made that an industry that is currently basically only able to survive by extensive offloading of what become negative externalities really shouldn't exist, even in a bootstrap phase. Perhaps nuking the industry as it now stands is the best thing we could do in the next couple of years. Pour encourager les autres if for no other reason. Let the industry come up with some best practices, form some "sell shovels rather than dig for gold" companies around building more secure IoT platforms, come back at the problem after that.

A real certification process that really solves this problem is probably unsustainably expensive for the industry as we now know it, xor the certification will be a useless rubber stamp that doesn't solve the problem.


See but I think that's great stuff to consider. If we can't actually make things that are both cost-effective and safe to the public, I'd say those things should not be made and sold to the public.

Maybe it slows down the progress of the industry, but if that progress comes at a price all of us are paying (through the currently-unaddressed externality of shitty code enabling DDOSes around the world), I think that is progress that should be slowed down.

I certainly don't want to wake up one day and find that my employer's sites are gone, and their business (and my livelihood, and my home and family's security) threatened because rando manufacturer X's IoT cameras have taken out a data center for lulz. So... regulate? Bring it on.


>The trouble with your analogy is that state-sponsored teams of children aren't working day and night to find new ways to make your toy dangerous that you could never have thought of at the time.

Human society is continually striving to build a better idiot. Manufacturers often aren't held liable for the first instances of idiocy, but eventually they are.

For example, it was easy for engineers to believe that no one would be dumb enough to stick their hand in a running lawnmower. And for a time if you stuck your hand in a running lawnmower you would not be able to win a lawsuit over it.

But once you know about the new flavor of idiocy, it is your civic responsibility to mitigate it with safety features if possible. If you don't agree, the civil suit you lose will convince you.

If you build products that are vulnerable in well-known ways, you are neglecting your civic responsibility as a manufacturer.


I see you using the phrase "civic responsibility", but I don't actually see an argument for it.


>If you don't agree, the civil suit you lose will convince you.

There you go. This is how you deal with the tragedy of the commons: punish the overgrazers. Industries that neglect civic responsibility get regulated. Particularly bad actors get punished.


I apologize if this comes across as combative: does one need an argument that mitigating well-known safety or security hazards in a product you're making is a civic responsibility?


Yes. I don't take it as a given that a manufacturer is responsible for the product they manufacture. If the manufacturer promises certain things about that product, then yes, they should be held to those promises.

Otherwise, what makes it so obvious that the manufacturer must handle anything at all?


>Am I on the hook for patching all the systems in the field? What are my obligations here?

This complaint seems to have nothing to do with patching. Did you check out any of the actual text? Their complaints are about disregarding security norms from 2007 (backdoors, injection flaws), using hard-coded passwords, and posting their private signing key publicly.
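On the hard-coded password point: credentials baked into a device are just literal bytes in the firmware image, recoverable with a trivial `strings`-style scan. A toy sketch (the blob and the credentials in it are invented for this example):

```python
import re

# Toy illustration: a hard-coded credential embedded in a firmware
# blob is just bytes, recoverable like the Unix `strings` tool does.
firmware = (
    b"\x7fELF\x00\x00"                        # fake header bytes
    + b"login=admin\x00pass=letmein123\x00"   # the baked-in secret
    + b"\x90" * 16                            # fake padding/code
)

def printable_strings(blob, min_len=4):
    # Pull out runs of min_len+ printable ASCII characters.
    return [m.decode() for m in re.findall(rb"[ -~]{%d,}" % min_len, blob)]

print(printable_strings(firmware))  # ['login=admin', 'pass=letmein123']
```

Anyone who downloads the firmware update file gets the same password every unit ships with, which is why hard-coded credentials were already considered indefensible in 2007.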


Did you read the article?

> the FTC says the company failed to protect its routers and cameras from widely known and reasonably foreseeable risks.

It sounds like they shipped the devices with known security flaws. This is not at all related to not updating your software when new security flaws are found.


Just a heads up, from HN guidelines:

> Please don't insinuate that someone hasn't read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."

I agree here. Your first sentence can be entirely removed and your point will still stand. Except now the receiver won't have to be in a defensive position about their reading comprehension.


> state-sponsored teams of children aren't working day and night to find new ways to make your toy dangerous that you could never have thought of at the time.

I'm not so sure that's true anymore. Lots of kid's toys are internet-connected these days...


> Am I on the hook for patching all the systems in the field? What are my obligations here?

If you sell even a single device to Europe, you have, depending on the category, 2 or 5 years during which you must patch every single flaw that appears.

In the first 6 months, if a user reports a flaw, you have to patch it anyway, no matter what it is, unless you can prove that this problem was caused by the user.

After that, you are liable for the rest of your life (and this pierces the corporate veil) for any and all damage your products do, or can be abused for.


If I'm a dude in my workshop making children's toys, I'm on the hook for strict product liability for the rest of my life if I made dangerous toys.

I'm not sure this is strictly true. Standards change over time and old toys might not meet the new standards. Sometimes product recalls occur; other times the change is so broad and insignificant that an advisory may be issued, or no advisory is necessary.

Point being it isn't always so clear-cut.

With software and hardware it is possible for new vulnerabilities to be discovered as new attack methods are developed.

Also, forever could be too strong. No one is going to complain about vulnerabilities in token-ring network protocols.


> They are all in there from the beginning, whether they are known or not

Some things only exist because of the environment.

If I went back in time to 2001 and tried to get people to take CSRF attacks seriously, they would lose me when I started talking about opening multiple tabs.

For me personally, this kind of product liability is like giving me a big bag of money, because of my profession. But this is a huge can of worms opening up, and we don't even know if it's going to lead to better security.


One slight difference is that people actively try to do bad on a large scale with software while toys, cars and houses can be used nefariously but short of ramming your truck into a Christmas market, the scope for large scale damage is usually more limited.


Children are continuously trying to come up with new ways to hurt themselves with toys, and if they do you can expect to get in serious trouble.


Children and the odd litigious adult maybe, but not nation states.


As others have pointed out, children are always trying to do crazy stuff with their toys. But, even if this weren't true, so what? Just because software gets picked on, it should be given a pass? Um, no.


Not a pass, no. However, software attracts a level of attack that is pretty much out of the realms of any other industry. A fair few countries seem to weaponise software in peacetime, and use what they create.


Consumer product safety litigation is huge. I would imagine there are plenty of people out there trying to find ways to hurt themselves in every way imaginable.


The alternative to strict product liability is to make sure the IoT device you provide has a clearly defined and enforced end of life.


Another example would be automobiles. Manufacturers do recalls and provide replacements for many, many years.

Ford isn't on the hook for the Model T, but it does do recalls for decades-old Saturns.


Although it is irrelevant to the point you're making, I feel I should point out that Saturn was a General Motors subsidiary, so it is not Ford that is making Saturn recalls.


Dammit, you're right. Gah.

Well, yeah, in any case, even a defunct subsidiary (of GM, not Ford) is getting recalls decades later.

High-tech has different standards than the rest of the world.


> If I'm a one man contractor building a house, certain categories of mistakes I'm on the hook for for the rest of my life.

Should contractors that built houses in the 60s be held liable for using things like asbestos and lead paint?

Should car manufacturers from the 70s be held liable for using non-tempered glass and other unsafe features? What about CO2 emission standards?

The topic of product safety cannot be divorced from the historic timeframe in which it is considered.


The contractor building a house has specific warranties that vary state by state (in the US).

http://real-estate-law.freeadvice.com/real-estate-law/constr...


This is a ridiculous comparison - there are standards for these things, toys and buildings, provided by governments and insurance companies that you can simply meet and then disclaim almost all liability in the future. Nothing like this exists for security.

No one wants to insure your product is secure, even if they've fully audited it themselves - it's too easy to miss something and make a mistake, especially so in the C-centric world. Software security is a minefield much more so than standard building codes, child safety laws or meeting the best of standards insurance companies may request of those things.

The only alternative here is that we go all-in. Everyone who develops software is individually responsible for it, we all pay insurance for our ability to develop software. Because just about any piece of software can be a huge security liability.

Sounds like a scary world to me, one in which I would have never gotten involved in software development.


Not really. I can think of 2 examples that have decades-long involvement and action.

Lead and asbestos.

And it appears that the FTC is forming the basis of liability in software, which nearly every company doing software doesn't warranty.


Were people who made things from lead and asbestos decades earlier held liable for not knowing they would be found bad decades later?

No (or at least, not to my knowledge), instead people just had to buy new things. Standards change. New security vulnerabilities are discovered. Liability doesn't stand in these cases.

If you can't possibly find every security vulnerability in your product, you shouldn't be held liable for the inability to do so. You have to disclaim that, as I'm sure D-Link does.


> Were people who made things from lead and asbestos decades earlier held liable for not knowing they would be found bad decades later?

My knowledge of this is really fuzzy now (I had to learn about it for a college ethics course), but I believe that for asbestos, manufacturers knew about the health hazards for years and covered it up.


Yep. In a lot of these cases, companies knew that X chemicals were really bad for people. But since they're not some academic arm, they most certainly aren't running studies that open them up to liability. It would be the understood 'we know this is deadly, who cares' kind of stuff coming from workers in the organizations.


This analogy is stupid. There is a difference between these things breaking under normal use and being actively attacked.

You're not on the hook as a contractor for a house if it's vulnerable to missile attacks.


I would argue that a device connected to the internet being actively scanned for exploits is normal use. Even if you have a disclaimer that your product should never be connected to the internet, you could still be on the hook.

Blitz went out of business because it could no longer afford liability insurance. Blitz made those ubiquitous red plastic gas containers you see on every landscaping trailer. They were constantly being sued because their gas can could explode if you poured gasoline directly from it onto a fire. They even put warnings and disclaimers directly on the cans against pouring gasoline on a fire.


> Blitz went out of business because it could no longer afford liability insurance. Blitz made those ubiquitous red plastic gas containers you see on every landscaping trailer. They were constantly being sued because their gas can could explode if you poured gasoline directly from it onto a fire. They even put warnings and disclaimers directly on the cans against pouring gasoline on a fire.

This is a very one-sided read of the situation.

The typical Blitz can lawsuit went something like this:

A 3-year old toddler knocked over a blitz can in a basement[1]. Vapours from the can reached the water-heater, which then flashed-back into the can, causing the can to explode, severely burning the child. This would not have happened had the can's nozzle been built with an industry-standard 10 cent flame arrestor, which federal regulators STRONGLY advise all gas can manufacturers to include, but which Blitz had for years refused to take the simple precaution of adding to their product.

It's the "ignoring simple, industry-standard safety precautions" that will get your ass nailed to the wall by a liability judge. Engineers who had worked for the company testified at trial that they were ordered to destroy documentation showing that Blitz was aware of the problem, had done internal testing, and had designed flame-arrestors for their nozzles, and that management killed the project after a change-of-ownership.

[1] http://www.recordonline.com/article/20030919/News/309199995


So would ignoring industry-standard security best practices be the equivalent in this case?


Generally, yeah.

Like, if you built a product today, and (pulling an example out of the air) used bcrypt for password hashing, you wouldn't be liable for that choice down the road -- you used what's generally considered a recommended best practice for protecting users' passwords at the time you released the product.

But if, in 2017, you used an unsalted md5, a lawyer could make the argument that you by now should sure as hell have known better, and that the problems arising from that were easily foreseeable (since most of the industry was aware of the problem and in fact had been writing about it for years).

In this case the FTC is essentially alleging that D-Link's practices were so bone-headed and obviously counter to industry best practices that they have no real excuse.
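The unsalted-MD5 problem is easy to demonstrate. A minimal sketch using only the standard library (PBKDF2 stands in for bcrypt here since it ships with Python; the password is made up):

```python
import hashlib
import os

password = b"correct horse battery staple"

# Unsalted MD5: every user with this password gets the same digest,
# so one precomputed rainbow-table lookup cracks all of them at once.
print(hashlib.md5(password).hexdigest() ==
      hashlib.md5(password).hexdigest())  # True

# Salted + iterated (PBKDF2 here, a stdlib stand-in for bcrypt/scrypt):
# a fresh random salt makes each stored hash unique, and the high
# iteration count slows down brute-force guessing.
def hash_password(pw):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", pw, salt, 200_000)
    return salt, digest

salt_a, hash_a = hash_password(password)
salt_b, hash_b = hash_password(password)
print(hash_a == hash_b)  # False: same password, different stored hashes
```

To verify a login you store the salt alongside the digest and re-run the derivation with the stored salt; the point of the sketch is just that identical passwords no longer yield identical database entries.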


The active attacks are the equivalent of "weather" on the Internet. It's nothing like protecting against violent crimes. If I bought a new house and the roof leaked after only 5 years of regular weather I would certainly expect the contractor to fix it, and file a construction defect lawsuit if they didn't.


Only if you attach the thing directly to the Internet. Would you drive a regular car through a war zone?


Do you frequently buy home routers for the purpose of not attaching them to the internet?


I don't attach cameras directly to the Internet.


I don't either, but many products are designed to do exactly that. It's called the Internet of Things (not the VPN of things) for a reason! :-)


You are if you advertise the house as invulnerable to missile attacks. (D-Link advertised its routers as secure.)


I bet you'd be on the hook for installing a door without locks though.


No, but you might be on the hook if it is vulnerable to earthquakes.



