Totally down with the potential conflict of interest or implications of GitHub, but it’s not GitHub specifically that’s zealous about DMCA. Any server and any host of that server is going to be subject to it. The only appeal of a smaller or private server is less visibility, but legally it’s the same. DMCA isn’t going anywhere.
Well, GitHub/Microsoft could go on a PR campaign and say "we're not going to honor this RIAA DMCA since we know yt-dl isn't violating the DMCA!" but then GitHub/Microsoft would be opening themselves up to a lawsuit from (basically) the entire music industry. The amount of goodwill MSFT loses over this (hopefully isolated) incident has to be worth a few orders of magnitude less than the tens of millions of dollars that would be burned to actually fight the RIAA.
Smaller hosts could get away with not honoring DMCAs since the RIAA likely isn't going to waste resources actually filing a lawsuit, but this yt-dl situation seems like the perfect setup for the RIAA to set a precedent outlawing video/music downloaders if someone were to actually fight them on it (and until someone does, they can keep taking down video/music downloaders unchallenged).
They don't want to get rid of copyright; it's just about creating a less punitive system. I don't think there's much downside for them if they really cared.
Does github actually have to honor every request that comes in? I thought the youtubes of the world did that because of the volume, but hypothetically, couldn't they do some due diligence and push back on requests they don't think are valid instead of just taking them down and requiring the repository owner to appeal? I'm sure it would be more expensive for them, but it's still a choice.
It is my understanding that DMCA requests have to be honored immediately unless the host wants to expose themselves to large legal risk. The uploader of the removed content can file a counter-notice, upon which the copyright holder either withdraws or takes things to court. But if the host doesn't honor the request, they lose their safe-harbor protection and can be sued for copyright infringement themselves.
It may not be the same legally, but it is practically. Almost all major content-hosting companies are headquartered in the USA and thus bound by the DMCA: Facebook/Twitter (social networks), Google/Amazon/Microsoft (clouds), Github/Gitlab/Sourceforge (code repositories), StackOverflow, Automattic (Wordpress), Akamai/Cloudflare/Fastly (CDNs), Wikimedia/Fandom (wiki hosting). The only major exception is Atlassian, who are headquartered in Australia.
Even if your content is legal under e.g. European law (right to repair, right to interoperability, right to reverse engineer), you are going to have a hard time hosting it. And even if you do get it hosted at a European provider (remember, we don't have anything that competes with any of the three US cloud giants in terms of functionality!), you will have trouble accepting donations easily - Paypal, Stripe and all the credit card networks are under US regulation.
And it's not just theoretical; just look at what happened to Kim Dotcom/Megaupload (or, tangentially related, Julian Assange). If the US deems you a danger to their business interests, you are going to get hunted down, no matter where in the world you are or whether what you are doing is legal under that country's jurisdiction.
I partially agree with you. This seems like a very good argument to start competing with the US harder.
As a counterexample, I'd like to offer you sci-hub, which doesn't seem to have significant hosting problems. Remember, we are not trying to replace an entire industry and all possible use cases at once. We're simply discussing the hosting of a few git repositories which some US entity might consider unsavoury due to a borked and unfair law.
> As a counterexample, I'd like to offer you sci-hub, which doesn't seem to have significant hosting problems.
They have to change domains every so often because the copyright mafia has a "blanket" seizure grant (https://torrentfreak.com/publisher-gets-carte-blanche-to-sei...), and Cloudflare won't touch them as a result. Their founder has (at least!) one court judgement of 15 million US$ from Elsevier in New York and another of 4.8M$ from ACS against them, and I bet there is some sort of sealed indictment floating around that gets unsealed the moment Elbakyan ever travels out of Russia, so an extradition warrant can be put out. The relatively unique advantage they have is that their founder is possibly linked to the Russian intelligence service GRU: https://www.washingtonpost.com/national-security/justice-dep...
Effectively, Elbakyan's right to free movement is restricted to those nations that don't extradite to the US and have friendly relations with Russia. And from what we learned from the Snowden and Assange cases, it's safe to assume that even flying over a country that has an extradition agreement with the US on a passenger flight is grounds enough for intervention.
- a lot of big tech companies are based in the US
- a lot of companies want to do business in the US
- DMCA can become part of a trade agreement with the US; I don't know if the E.U. will save us at this point.
No, you can't. Let's not weaken words in order to make a presumed point.
I do appreciate that these factors make the DMCA more relevant than it would be otherwise. But your last point is not even a current fact, just a hypothetical future.
I always invite people not to be defeatist but proactive in materializing the future they want to see. Citing unfortunate potential futures is not the right way to solve our problems.
The entire lingo of this “release” is mind-numbingly vapid: “made improvements in...”, “partnered with ${vague}”, “take steps to...” Many words saying nothing.
IMO Apple has really lost its way in terms of genuinely good UX. The UI might be “pretty” but in terms of actual usability it’s gotten just frustrating at this point. The amount of complexity they require from their users now, memorizing swipe gestures, complex keyboard shortcuts, and strange incantations, is just sad. Good design is simple!
These things have been in Apple ever since the original Macintosh. In that case it was 'one mouse button, so simple'. Except that to make the system as usable as other systems with multi-button mice, they added a whole load of keyboard shortcuts that mimicked how the multi-button mice worked, which ended up being far more complicated than just adding another button to the mouse.
I’m fairly sure they got rid of all those other buttons because for the majority of customers they created more confusion than power back then.
Also, speaking as a power user who’s also a southpaw, I’ve got more than a few choice words for many of the multibutton mice I’ve had to use over the years. Apple’s late ADB mice were lovely in the hand and completely unprejudiced.
None of that is required! In fact this article begins with people who are happily using MacOS despite not knowing all the features.
Truly simple design would not have these features at all; that’s where Mac started. The mouse had one button, period. There was no contextual click option at all. And MacOS can still be used productively that way today. Adding power features without disturbing the original usability is strictly positive IMO.
It’s incredibly easy to pick a power feature and demonstrate that some users don’t know it. The more capable a system is, the more likely this becomes. Might as well write an article demonstrating that some users don’t know how to use Terminal, so obviously Mac must have slipped in usability since the original Mac did not have a CLI.
Having visible scrollbars is not a power feature. Crippling a system because you don't want to ruin your "minimalist" aesthetic is always a bad choice.
I understand the natural frustrations articulated here, especially given OP’s experience working with files, but it seems to dismiss what is actually a core strength of current operating systems: they work. Given a program supporting 16 bit address spaces from the 1970s, you can load it into a modern x86 OS today and it works. This is an incredible feat and one that deserves a lot more recognition than offered here! Throughout an exponential explosion of complexity in computing systems since the 70s, every rational effort has been made to preserve compatibility.
The system outlined here seems to purposefully avoid it! Some sort of ACID compliant database analogy to a filesystem sounds nice until 20 years down the line when ACIDXYZABC2.4 is released and you have to bend over backwards to remain compatible. Or until Windows has a bug in their OS-native YAML parser (as suggested here) so now your program doesn’t work on Windows until they patch it. But when they do, oh no you can’t just tell your users to download a new binary. Now they have to upgrade their whole OS! Absolute chaos. And if you’re betting on the longevity of YAML/JSON over binary data, well just look at XML.
Want to admire your fancy After Dark Win 3.1 screensaver? Just emulate the whole environment! We don't want to keep supporting the broken architectures and leaky abstractions of the past; they drag us down. Microsoft's dedication to backwards compatibility is admirable but IMO misguided and unsustainable in the long run. The IT industry has a huge problem with complexity. We need to simplify the whole computing stack in the interest of reliability, security and future innovation.
The proposed improvement, as I understood it, would be future-proof. It seems trivial to build a rock-solid YAML/XML/JSON/EDN parser at the OS level, and since it would be such a crucial part of the OS, mistakes would be caught and fixed quickly. It shouldn't even matter if the structured data syntax is replaced or extended in the future, as long as it is versioned and never removed. Rich Hickey's talk "Spec-ulation" has much wisdom about future-proofing data structures.
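To make that concrete, here's a minimal sketch (Python, purely illustrative) of what a versioned, accrete-only parser registry could look like from an application's side. os_parse and the registry are invented names, not any real OS API:

    # Hypothetical sketch: formats and versions are only ever added,
    # never changed or removed, so old documents keep parsing forever.
    import json

    _PARSERS = {
        ("json", 1): json.loads,
        # ("json", 2): some_future_parser,  # later additions accrete here
    }

    def os_parse(fmt, version, text):
        """Parse `text` with the exact parser version the document declares."""
        try:
            parser = _PARSERS[(fmt, version)]
        except KeyError:
            raise ValueError(f"unsupported format {fmt} v{version}")
        return parser(text)

    config = os_parse("json", 1, '{"theme": "dark", "autosave": true}')
    print(config["theme"])  # -> dark

The point is that a document written against ("json", 1) keeps working no matter what gets added later.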
> The IT industry has a huge problem with complexity. We need to simplify the whole computing stack in the interest of reliability, security and future innovation.
Yes! I really hope I keep hearing more of this sentiment and that eventually we collectively take action. What would be the first practical step? There's a lot of effort duplicating the same functionality across different languages and frameworks. Is reducing this duplication a good first goal? Should we start at the bottom and convince ARM/x86/AMD64 to use the same instruction set? After that, should we reduce the number of programming languages? It seems there's still a lot of innovation going on, would it be worth stifling that?
The actual non-snarky first step would be to admit that we are out of our depth and can no longer deliver software that is reliable, secure and maintainable. We can only guarantee that our software works for at least some users, on current versions of the OS/browser, and is hopefully secure against some of the less capable attackers.
Countless variants of programming languages and instruction sets are not the issue. The problem is the lack of well-defined, non-leaky interfaces at the boundaries of abstraction layers.
This is too big a topic to reliably cover in a comment (or ten), but standardising on strongly and strictly typed data formats like ASN.1 and EDN, and practically forgoing everything else (JSON, YAML, TOML, INI, XML) for configuration, might be a good first step.
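As a rough illustration of what "strictly typed" buys you over permissive parsing (the schema below is made up for the example, it is not ASN.1 or EDN):

    import json

    # Invented schema: every key has a declared type, nothing else is allowed.
    SCHEMA = {"port": int, "host": str, "debug": bool}

    def load_config(text):
        raw = json.loads(text)
        unknown = set(raw) - set(SCHEMA)
        if unknown:
            raise ValueError(f"unknown keys: {unknown}")
        for key, expected in SCHEMA.items():
            if key not in raw:
                raise ValueError(f"missing key: {key}")
            if type(raw[key]) is not expected:   # no silent coercion
                raise TypeError(f"{key}: expected {expected.__name__}")
        return raw

    print(load_config('{"port": 8080, "host": "localhost", "debug": false}'))

A strict parser fails loudly on anything the schema doesn't spell out, instead of guessing.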
You cannot innovate if you keep insisting on eternal backwards compatibility. That's just the facts of life. At some point a backwards compatibility breaking move must be made. It's absolutely unavoidable and we'll see such moves in the near future.
> Is reducing this duplication a good first goal? Should we start at the bottom and convince ARM/x86/AMD64 to use the same instruction set?
Not sure about the CPU architectures; it seems they have been stuck in a local maximum for decades and only in the last few years have people finally started asking if there are better ways to do things.
But as for some of the author's points: you could bake certain services directly into the OS (say, use SQL for accessing "files" and "directories" instead of having a filesystem), standardise that, and then just make sure you have a good FFI (native interface) to those OS services no matter the programming language you use -- akin to how everybody is able to offload to C libraries, you know?
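A toy sketch of that idea, using SQLite as a stand-in for whatever OS-level service this would actually be (the table layout and names are invented):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""
        CREATE TABLE files (
            path     TEXT PRIMARY KEY,   -- '/etc/app/config'
            parent   TEXT,               -- '/etc/app'
            contents BLOB
        )
    """)

    # "Writing a file" becomes an INSERT, "reading" a SELECT.
    db.execute("INSERT INTO files VALUES (?, ?, ?)",
               ("/etc/app/config", "/etc/app", b'{"theme": "dark"}'))

    # "Listing a directory" is an ordinary query instead of readdir().
    for (path,) in db.execute("SELECT path FROM files WHERE parent = ?",
                              ("/etc/app",)):
        print(path)

Any language with a C FFI could call into such a service the same way it calls into libc today.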
> After that, should we reduce the number of programming languages?
We absolutely should, even if that leads to street riots. We have too many of them. And practically 90% of all popular languages are held together by spit, duct tape and nostalgia -- let's not kid ourselves at least.
It cannot be that damned hard to identify several desirable traits, identify the languages that possess them, combine that with knowledge of which runtimes/compilers do the best work (benchmarking the resulting machine code is a very good first step there), and then finally combine that with desirable runtime properties (like the fault tolerance and transparent parallelism of Erlang's BEAM VM). Yes, it sounds complicated. And yes, it's worth it.
> It seems there's still a lot of innovation going on, would it be worth stifling that?
Yes. Not all innovation should see production usage. I can think of at least 10 languages right now that should have remained hobby projects but became huge commercial hits due to misguided notions like "easy to use". And nowadays we no longer want easy to use -- we want guarantees after the program compiles, not the ability to spit out half-working code in 10 minutes (I definitely can't speak for all of IT here, of course, but this is a sentiment/trend that seems to get stronger with time).
Many languages and frameworks aren't much better than weekend garage tinkering projects and should have stayed that way -- Javascript is the prime example.
Most operating systems ship a general-purpose parser for a structured binary serialization format as an OS component: ASN.1. There have, over the years, been a number of security-critical bugs in there, and everybody hates ASN.1 anyway.
ASN.1 has an amazing idea and an awful implementation. :(
I'd say standardise a subset of ASN.1's binary and text representations and introduce a completely different schema syntax -- a LISP-style s-expression syntax seems like a sane choice -- and just stop there.
ASN.1 suffers from the same problem many other technologies suffer from: way too many things accumulated on top of one another. Somebody has to put their foot down and say: "NO! Floating-point numbers in these binary streams can be 32-bit, 64-bit or arbitrary precision up to 1024 bits, but no more! I don't care what you need, there's the rest of the world to consider, deal with it." And people will find a way (maybe introduce a composite type that holds 2x 1024-bit floats).
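To illustrate what such a trimmed-down, LISP-style schema could look like (the syntax and the limits below are invented to match the example above; nothing here is real ASN.1):

    # Hypothetical s-expression schema for a constrained ASN.1-like subset.
    # Only the listed float widths are legal; anything wider is rejected
    # at encode time instead of growing the spec to accommodate it.
    SCHEMA = """
    (define-type Measurement
      (sequence
        (id    (integer :min 0 :max 4294967295))
        (value (float   :bits (32 64 1024)))
        (label (utf8-string :max-length 256))))
    """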
We need standards committees with a bit more courage and less corporate influence.
> Given a program supporting 16 bit address spaces from the 1970s, you can load it into a modern x86 OS today and it works.
Actually, it doesn't. It is extremely hard to properly return to 16-bit userspace code from a 64-bit kernel, so Windows removed support for it entirely, and it's not enabled by default on Linux.
Well, I don't want to say anything about the utility, longevity or appeal of yaml/json, but I somehow think a user is going to upgrade their entire operating system before they upgrade my little app.
And if they're inclined to upgrade my app, well, nothing stops me from using a third-party library to parse yaml. It sounds like we're talking about an app from three operating systems and 20 years ago, so it's likely I'm doing that anyway - maybe not in the current Windows version, but in a recent enough version of some other operating system.