FWIW, I found that you can either deterministically deduce a cell or prove that the choice is irrelevant for at least two of the conditions. In this case, you can fill it with the solution that gives you the most flexibility for the third.
> but I'm at a loss as to why some 'hacker' would go to the effort to sift through the type of content that is typically stored on the average NAS box. Like you said, family photos, birthday videos...
It's not 'some hacker' going through your stuff, it's an automated attack scheme. Your adversary may choose to do something CryptoLocker-like or more stealthy stuff that makes your NAS part of a botnet. Neither option is good.
As others pointed out, it is highly likely that the ownCloud instance ends up publicly accessible, because that's the primary way to access files from the outside.
Thank you. Out of curiosity, can you point me at a significant case of cryptolockering of NAS data? I've read about Windows boxes being hit.
I have nothing on my WD MyCloud that isn't duplicated somewhere else (either it's a backup of google photos/videos, or dupe'd to a USB drive)
Again, a lock is the same as an HD failure to me. Both could happen with this setup. If the information is too valuable to fall prey to either circumstance, then the system as implemented is the wrong setup.
Now, a botnet is different. I assume one could not run off of the WD box and the Raspi/Owncloud base is too small to target. Unless you can point me at content that indicates otherwise?
> I have nothing on my WD MyCloud that isn't duplicated somewhere else (either it's a backup of google photos/videos, or dupe'd to a USB drive)
It sounds like it doesn't apply to your case, but a potential issue with Dropbox/ownCloud/Google Drive is that the master server can instruct all copy-holders to delete their copies. You should have off-site backups, but I suspect many people don't.
> I assume one could not run off of the WD box and the Raspi/Owncloud base is too small to target.
This is not really an issue - there are multiple router-based botnets as well. You can't really mine bitcoins, but there is plenty of other stuff you can do; DDoS, for example, is usually not constrained by processing power.
- ownCloud is a PHP application with quite a few third-party modules of varying quality. Looking at the security history of Wordpress, it's not hard to imagine what's going to happen.
- The maximum bug bounty for ownCloud is 500 USD. I think my data easily exceeds that.
- From what I've heard, security fixes are provided to enterprise customers first, so if you're lucky your adversary is one of them and knows about vulnerabilities way ahead of you.
To their credit, ownCloud openly publishes security advisories for every vulnerability, but I still think it's architecturally designed to fail.
Exposing this to the internet is probably a bad idea. If you just need storage, you probably should just use dumb storage. If you need project management stuff and care about privacy, maybe look at https://protonet.info/ or something along those lines. Also https://www.boxcryptor.com is really nice - at least the Dropbox desktop client does proper cert pinning (ownCloud's doesn't).
Other than that, storage connected to a raspi via USB will probably yield rather bad transfer speeds?
So an open source product with active and transparent security patching with a bug bounty isn't good enough?
Yet you want to offer two closed-source alternatives. I am not defending Owncloud's record, I'm attacking your logic. A low-effort all-in-one groupware and private doc cloud (a Google Apps replacement) is an awesome thing - if Owncloud could make deploying an email server as simple as the rest of its toolset, they'd have hit a home run.
And sandstorm.io - still waiting on internal user stores.
The entire premise of self-hosting is SELF hosting. Internal network, no operational, day-to-day necessity for Internet connectivity post-install.
If I install such a product, it's because I want total control over my data and the terms by which I access it. Being forced to use Google/Github/email as auth goes against that.
Especially considering, unlike those proprietary products, our open source product and the engineers we pay to work on it are under constant and public scrutiny thanks to availability of the source.
Honestly, I think you should not even consider trusting a proprietary product with your most important private data. There's no guarantee it isn't full of back doors and you can't audit the code or pay somebody to do it - some companies would even sue you if you try (see the Oracle debacle some weeks ago).
I don't agree with him, I just understand his point. Actually, I wouldn't really trust either product at this point - there have been too many vulnerabilities in OwnCloud for my taste, but buying a proprietary application is not an alternative either for my personal use.
- extensibility - you can adapt other apps, open source or commercial, to run on sandstorm. As far as I'm aware you can even upload them to the hosted version.
- security seems to be taken good care of by pragmatic and experienced people.
- the fact that I can pay a small amount monthly for it depending on my storage and computing needs, making sure incentives are aligned (although last time I checked, I think Sandstorm hadn't even started the billing machinery.)
Furthermore, it seems to be completely real, free software - I ran the open source version on a machine at home for months before Oasis became available to me, and I haven't noticed any juicy parts missing or filed under "Enterprise only". The only difference between the versions seems to be the storage and compute resources available. (OK, the billing system doesn't seem to be in the Open Source version, but that is OK to me. : )
Kenton, thanks for all of the work you and your contributors do on sandstorm.io; I have referred this project to many people wanting to get their feet wet "running a server" as a way to do something without instantly cutting yourself. Have you considered writing a how-to article on replicating ownCloud-like functionality within Sandstorm using apps? It might be a good first step, and I believe that it has the possibility to help people move onto (what I personally believe is) a superior platform.
The biggest things I see ownCloud still having a huge advantage on is contacts and calendars, nobody's ported or written good apps for Sandstorm to do that yet.
There's an awesome file storage app called Davros though, that's actually compatible with the ownCloud desktop client for file syncing!
AFAIK Sandstorm self-hosting only works on 64-bit x86 machines, and there are certain files I don't want hosted outside my own walls. The various ARMs are great for that use case; any future plans on supporting ARM?
Otherwise sandstorm looks pretty great overall. I always appreciate a focus on security :)
The primary problem with supporting ARM is that Sandstorm apps run native binaries. So in order to support ARM, every single Sandstorm app would also need to be recompiled for ARM, or you end up fragmenting the ecosystem.
I'm also not sure how well single-board PCs like a Raspberry Pi would handle Sandstorm performance-wise, though I do think there are some 64-bit boards you can get now to try it on.
Curious, why the scripts to install a simple series of binaries? If you packaged it natively, then all the stuff you're doing with both GPG and install.sh simplifies dramatically from 2k lines of bash. With the added benefit that pushing out security updates or releases becomes pretty simple.
- There is sadly no universal package manager on Linux.
- A lot of that 2k-line bash script is implementing an interactive setup that configures your server, optionally claims a hostname and obtains SSL certificates, etc. A package manager wouldn't replace any of that.
- Sandstorm's auto-updater will automatically update your server within 24 hours of any release. That's actually pretty hard to achieve with package managers. Most are not designed to auto-run in a cron job. Worse, many distros have long release cycles (6 months, 2 years, etc.) during which they only accept bugfixes.
- Most package managers don't verify PGP signatures back to the upstream author, but rather to the distro maintainer (which in Debian's case is any one of thousands of people). It's debatable which is preferable, but note in any case that it's a very different property from what our installer implements.
- Sandstorm self-containerizes in its own corner of the filesystem, basically avoiding any dependency on the rest of your system other than the kernel. This strategy works well for us -- it relieves us from having to test on every distro separately, and it avoids messing up the user's system -- but it probably wouldn't meet the guidelines required to get a package into a distro. So we'd still have to distribute our packages direct from our own server, or do a _lot_ more work.
With all that said, when Sandstorm stabilizes more we do plan to figure out a way to let people "apt-get install sandstorm", since a lot of people are more comfortable with this.
Unfortunately probably not any time soon, for the same reason as ARM: Sandstorm app packages include binaries built for x86-64. We'll need a lot of tooling to make it easy for developers to package for multiple architectures. :/
Sandstorm works decently on mobile browsers; they do test it. But how well different Sandstorm apps work on mobile depends on those apps.
It's also possible to use native apps that sync to Sandstorm. For instance, I can access my Tiny Tiny RSS instance on Sandstorm through a native Android app.
At present, apps can implement HTTP and WebDAV APIs but not SFTP nor SMB. In the future we plan to generalize things so that apps could potentially implement any protocol, but we want to be careful to do it in a way that lets us keep our strong security guarantees.
"Please look at this commit so you know how you can hack us", sounds certainly like a much better idea ;-)
> Security history at Wordpress
When was there a single very grave vulnerability within the core of Wordpress? Mostly plugins are the root of all evil there.
(besides the nasty XSS one in Jetpack, which was caused by a static HTML file)
> - From what I've heard, security fixes are provided to enterprise customers first, so if you're lucky your adversary is one of them and knows about vulnerabilities way ahead of you.
This is wrong. Until now there has not been a single moment where customers received patches in advance. The only difference is that they see the advisories earlier, but at that moment patches are already available for all.
> maximum bug bounty is $500
For the record, we have so far received 340 reports from over 150 individuals, and until now only 1 vulnerability within the server has been pointed out.
(Full Path Disclosure of the ownCloud root folder such as "/var/www")
> If you need project management stuff and care about privacy, maybe look at https://protonet.info/ or something along those lines
> "Please look at this commit so you know how you can hack us", sounds certainly like a much better idea ;-)
I think that'd be better than a deceptive commit message, yes. ;-)
IMO, security-related changes should be clearly marked as such - if you don't want to have them public, you can keep them on a private branch for the time being.
> When was there a single very grave vulnerability within the core of Wordpress? Mostly plugins are the root of all evil there.
The same (plugins) probably applies to ownCloud, but that doesn't make it better. I personally think that embracing PHP's low entry barrier [1] is the wrong approach and I'd rather see a security-driven design.
> This is wrong. Until now there has not been a single moment where customers received patches in advance. The only difference is that they see the advisories earlier, but at that moment patches are already available for all.
Thanks for the clarification - very sorry for the FUD. I got this info at a conference from one of your enterprise customers' not-so-technical management guys, who is apparently misinformed.
> For the record, we have so far received 340 reports from over 150 individuals, and until now only 1 vulnerability within the server has been pointed out. (Full Path Disclosure of the ownCloud root folder such as "/var/www")
My argument was that the market price of a vulnerability is more or less a metric for security strength [2], and 500 USD doesn't seem to be much. If we presume that the value of a critical ownCloud exploit exceeds 500 USD, your bounty program provides very little incentive to search for or report critical vulnerabilities, and you'd only get low-quality reports (which seems to be the case).
> What makes you think they are more secure?
I think that ownCloud has a big problem with automated vulnerability scanning and the security properties of managed appliances are generally superior. I unfortunately can't edit my original post anymore, but I should have added that running ownCloud behind a VPN is a very good idea as well.
So, if we ignore all lower- and medium-severity ones, we're basically only left with CVE-2015-2213, which requires authentication. Also, XSS is barely something one can blame PHP for. That's a pretty low number.
For the record: ownCloud protects you against XSS using Content-Security-Policy.
This is arbitrary wishful thinking. I'm not even sure why you're debating Wordpress vulnerabilities -- if your point is that PHP is a secure application development environment, then even if WP were riddled with 9.0-severity exploits, it shouldn't matter. It seems to me that correlating your product's security with WP's, solely because they are both PHP apps, is conceding the point.
The one column at the link that has "medium" and "low" values is "complexity", which means CVSS's "access complexity". So having many rows like this means there are many vulnerabilities that are easy to exploit!
Also CVE-2015-2213 is marked as NOT requiring authentication (along with about 7 other straight remote code execution CVEs).
> The one column at the link that has "medium" and "low" values is "complexity", which means CVSS's "access complexity". So it means there are many vulnerabilities that are easy to exploit.
I'm aware of that, I have a ton of CVE entries filed myself. I was referring to the score (https://nvd.nist.gov/cvss.cfm), anything below 7.0 is not deemed "high".
> Also CVE-2015-2213 is marked as NOT requiring authentication (along with about 7 other straight remote code execution CVEs).
CVE entries are often done terribly wrong if they are not provided by the vendor (which is what ownCloud does).
See https://core.trac.wordpress.org/changeset/33555 for the patch for CVE-2015-2213. As you can see this is within the function "wp_untrash_post_comments" which is called by "wp_untrash_post" which only accepts user-input from the Wordpress admin panel.
> I don't really get why one wants to trust ownCloud with private files
Because
1. You get to host and control the data, and have 100% access to the code managing that data. You don't have to trust anyone else for anything.
2. The chances of somebody attacking you (a small target) vs. somebody attacking a big centralized service are fairly small.
I'm not saying I believe Owncloud to be 100% secure (it being a semi-shoddy PHP application and all), but there are reasons someone may want to trust it over centralized, US-hosted and NSA-friendly online services.
No. 2 is not right. For a fingerprintable service with known exploits, dragnet-type attacks are very common. If the GP is right about OwnCloud having a poorly written code base, then you have a very high chance of getting hacked unless you can stay on top of updates, which is unlikely for most people.
If your data is important enough that it needs to stay on a self-hosted machine, you should look at commercial solutions. Otherwise use dropbox/gdrive/s3 with self-encrypted files.
We're a large project (often an order of magnitude larger than others trying something similar) with a company behind it and many large enterprise customers, which of course explains why we have good, transparent processes and dedicated security people.
None of that has to lead to good code as a rule, I admit that. And there sure is lots of less than perfect code in ownCloud. But I don't think it is fair to just claim it is any more shoddy than any competitor without any evidence of that.
I am not claiming that owncloud is shoddy, I am just refuting the claim that somehow hosting your own server makes you a smaller target and somehow safer.
Every code base eventually has security problems, sometimes as big as Heartbleed. If you are Amazon, you get preferential disclosure and patches before it is publicly revealed. If you are John Doe, you'd better hope that you read the CVE as soon as it's published and that you can patch the server right then.
That is why we publish updates with fixes 2 weeks before we publish CVEs. If a would-be hacker follows CVEs, all users who updated in the last 2 weeks are safe.
On top of that, while we prepare updates mostly in public on GitHub, we only release the security-related fixes the moment we release the update.
So a would-be-hacker would have to look through the source code of the update to identify security fixes, and then he/she can hack ownCloud instances. (Lukas should check this, btw, I'm only 75% sure about this)
There is nothing we, or anybody working on any product, can do about users not updating, though we do give warnings, offer packages which make updating easier, and do all we can to use security hardening to limit the damage security problems can do.
It is true that hosting your own server doesn't make you safer from targeted attacks. If you follow our security recommendations, you'll be quite OK, though, and there are tricks like using a special port and port knocking and what-not to improve security even more.
But this is no different to any other self-hosting tech.
Yeah, a public cloud can do better - they don't publish any source. They also have, almost by default, a back door to the NSA so that's like saying "let's give up on trying to build a roof because if you do, it could have a leak".
BTW if your ownCloud just presents a login screen to others, I mean, how often can somebody break in through that with automated means? Not 'never' I suppose but it should be rare...
But, for many users, security is complicated, and making it easy to run ownCloud includes making security easy. You won't find many competitors with such extensive documentation, nor automatic security setup tips and warnings in the ownCloud admin interface.
Second, this is a matter of focus. For home and small server users, ease of use trumps perfect security, that is a simple risk model assumption: your security has to be good enough, not perfect. Better than others and all that.
For enterprise users, however, security IS paramount and ownCloud lends itself for that. We get security audits by the financial institutions and others which run ownCloud and have extensive security hardening and best practices in place. Of course, these enterprise users don't use the many 'random' community apps, which is where the vast majority of security issues can be expected. I think that, for enterprise usage, you'll find that ownCloud security practice belongs to the best. And that is in no small part thanks to the awesome that is Lukas Reschke.
> For home and small server users, ease of use trumps perfect security, that is a simple risk model assumption: your security has to be good enough, not perfect. Better than others and all that.
As someone else points out in a neighbouring thread, OwnCloud is generally less secure than any of the large services because of automated vulnerability scanning. If an OwnCloud user updates their server days or even hours too late, it can be game over and your data is on the street. It does not matter if the attacked service is OwnCloud or some other service with enough privileges.
This does not mean that open source and/or decentralized services are at a disadvantage, but you have to make the right security choices. The storage service[1] should never see unencrypted data - encryption at rest is not good enough. For instance, Bittorrent Sync provides this with their encrypted read-only keys. A cloud peer with such a key never sees unencrypted folder data. The only thing you lose when a cloud peer is hacked is a node in the swarm, but it'll never result in visibility of plain-text (unless you subvert AES-128). One SyncThing developer is currently also working on similar functionality for SyncThing.
For this reason, I would never recommend OwnCloud to anyone outside a large company that has the capacity to do continuous security auditing and monitoring, unless you apply client-side encryption (but then you could use Dropbox et al. as well if privacy is the primary consideration).
[1] I know that OwnCloud does more than just storage.
Self-hosting does have this issue in general, yes. It is a bit harder to get at security vulnerabilities in ownCloud than was initially portrayed in the thread you mention (we publish CVEs 2 weeks after updates have hit the net, and these updates contain unmarked security fixes).
Client side encryption is a great solution but you lose out on most of the benefits of the cloud.
Honestly, I don't know. I haven't seen any of such attacks but of course, with about 3 million users, ownCloud isn't a HUGE target. I just don't like the idea of giving up on self hosting ;-)
I also wonder how successful such automated scanning attacks are against a simple login screen. Especially compared with the fact that on the big services people routinely call the helpdesk and manage to get passwords reset and all that so they get into accounts. That won't happen with your private ownCloud...
mitmproxy keeps all requests in memory, so that you can browse them quickly. If you want better performance, you can just swap "mitmproxy [args]" with "mitmdump [args]" and get all features in a headless mode which scales well and keeps a low constant memory profile.
We finally ended up picking http://dgrid.io/ over SlickGrid and jqGrid. As with every table component, there are some quirks, but we're really happy with it. Feels like the most modern table component to us.
I wonder why Backbone is getting so much attention.
For small tasks, jQuery is completely sufficient (as in this example as others have noted). If you're going to develop applications on a larger scale, Dojo is a way better alternative in most cases IMO. It comes with modularization, build tools, i18n etc...
Backbone is somewhere in the middle, neither high-level nor low-level JS. I really don't see a spot where Backbone significantly outperforms either Dojo or jQuery. Can anyone tell me what's so special about it?
Thank you for the FAQ Jeremy - helpful to this Backbone newb.
I won't ask you directly for the answers to this since it wouldn't be polite for you to answer (or would it?), but I feel like a lot of that FAQ is specifically addressing different frameworks' way of doing things, which is totally fine.
Would someone mind elaborating a bit and letting me know which frameworks / features he's referring to in each point? I assume the 2-way data binding and "nifty demos" piece is about Meteor and Derby, and the "stuffing application logic into your HTML" refers to Knockout, but I'm a bit lost on the others.
Just trying to grow my awareness of the landscape....
The focus is on supplying you with helpful methods to manipulate and query your data, not on HTML widgets (angular) or reinventing the JavaScript object model (Ember).
Backbone does not force you to use a single template engine (Ember). Views can bind to HTML constructed in your favorite way.
It's smaller. There's fewer kilobytes for your browser or phone to download, and less conceptual surface area. You can read and understand the source in an afternoon (Ember and Angular).
It doesn't depend on stuffing application logic into your HTML (angular / knockout). There's no embedded JavaScript, template logic, or binding hookup code in data- or ng- attributes, and no need to invent your own HTML tags (angular).
Synchronous events are used as the fundamental building block, not a difficult-to-reason-about run loop (Ember), or constant polling and traversal of your data structures to hunt for changes (Angular). And if you want a specific event to be asynchronous and aggregated, no problem.
Backbone scales well, from embedded widgets to massive apps.
Backbone is a library, not a framework, and plays well with others. You can embed Backbone widgets in Dojo apps without trouble, or use Backbone models as the data backing for D3 visualizations (to pick two entirely random examples).
"Two way data-binding" is avoided. While it certainly makes for a nifty demo (Angular & Knockout), and works for the most basic CRUD, it doesn't tend to be terribly useful in your real-world app. Sometimes you want to update on every keypress, sometimes on blur, sometimes when the panel is closed, and sometimes when the "save" button is clicked. In almost all cases, simply serializing the form to JSON is faster and easier. All that aside, if your heart is set, go for it.
There's no built-in performance penalty for choosing to structure your code with Backbone. And if you do want to optimize further, thin models and templates with flexible granularity make it easy to squeeze every last drop of potential performance out of, say, IE8.
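The synchronous-events point above can be sketched in plain JavaScript. This is a toy emitter for illustration, not Backbone's actual implementation:

```javascript
// Toy event hub: handlers run immediately inside trigger(), so the call
// stack at any moment shows exactly what caused what -- no run loop to
// flush, no dirty-checking pass over your data structures.
class Events {
  constructor() { this.handlers = {}; }
  on(name, fn) { (this.handlers[name] = this.handlers[name] || []).push(fn); }
  trigger(name, ...args) { (this.handlers[name] || []).forEach(fn => fn(...args)); }
}

const model = new Events();
const seen = [];
model.on('change', v => seen.push(v));
model.trigger('change', 42); // the handler has already run when this line returns
// seen is now [42]
```

If you do want one particular event deferred or aggregated, you wrap just that handler (a debounce, say) instead of paying for an asynchronous dispatch model everywhere.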
I have marked the individual points he made with the relevant MVC framework. I might have missed some, but it looks like (at least to me!) jashkenas feels the greatest threat to Backbone currently is Angular. And I think he is right about it. Backbone currently rules the throne of front-end MVC frameworks, but it looks like its rule is ending soonish...
It's completely true that Backbone is somewhere in the middle. It's a light framework to make things more modularized and maintainable in a somewhat large javascript application.
Of course you can make things even better for very large javascript applications. That's not the point.
Backbone is best at handling applications that are neither 20 LOC (for which jQuery is enough) nor 200K LOC. That's why it's pretty popular. It corresponds to a need.