I would say it's actually fairly simple from a technology perspective: good, well-documented APIs, as many SaaS apps as reasonable, and CTOs in government who get it.
I think the first two are, indeed, fairly simple from a technology perspective. The third -- CTOs in government (or whatever other role is responsible for making the data in question available) who get it -- isn't simple, and isn't really a technology problem.
From my perspective -- both working with groups mandated to make data available and researchers consuming public datasets -- those responsible for making the data available DON'T get it in the vast majority of instances. It's tough to tell if it's obtuseness, incompetence, or... call it what you will, but if you are mandated to make information available that can directly assess your performance or the performance of the organization you lead, you might not have the right incentives.
A recent experience: the federal (US) government makes data regarding clinical trials conducted by drug companies and universities available for download, in a format that they basically made up. OK, no problem, I've written lots of parsers. Ingest the data from the source files -- but wait! There's no data dictionary, or even a vague description of the relationships between the contents of the (many) files they publish. You can make pretty good guesses, but it definitely doesn't amount to a well-documented API (or schema, whatever). Just a recent gripe that's stuck in my craw, but it's not an isolated case in my experience. I have come across a few agencies that are very good and follow the best practices you note, but most I've worked with do not. I would guess that the former have your third characteristic; the latter likely do not.
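For what it's worth, the "pretty good guesses" step usually gets automated. Here's a rough sketch of one heuristic -- scan every file's header row and flag columns that appear in more than one file as candidate join keys. The file extension, delimiter, and column names are assumptions for illustration, not the actual clinical-trials release:

```python
import csv
from collections import defaultdict
from pathlib import Path

def guess_join_keys(data_dir):
    """Scan pipe-delimited files in data_dir and report columns that
    appear in more than one file -- likely (but unverified) keys
    relating the files to each other."""
    columns_to_files = defaultdict(set)
    for path in Path(data_dir).glob("*.txt"):
        with open(path, newline="") as f:
            # Only the header row is needed to compare schemas.
            header = next(csv.reader(f, delimiter="|"), [])
        for col in header:
            columns_to_files[col.strip().lower()].add(path.name)
    # Columns shared across files are candidate relationships.
    return {col: sorted(files)
            for col, files in columns_to_files.items()
            if len(files) > 1}
```

A shared column name is only a hint, of course -- you still have to verify that the values actually line up, which is exactly the work a data dictionary would have saved.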
If you have a bunch of tiny apps, how do their data models relate? Who runs them, and who owns the data? There are a lot of assumptions baked into the status quo described in the article, assumptions that the technology has been developed around. The sweet spot for building this stuff might be a little different.