I think the thing that's missing from much of the discussion is that all released information is subject to clear contractual agreements and released only for specific purposes. These agreements limit the ability to link supplied data with anything else, and the contracts and the use of the data are subject to privacy-group oversight managed by the NHS.
The intended use is not that insurance companies can link your medical data against you and then charge you more (or any variant on that). Instead, the intended use is that companies with clear information controls can perform useful research more cheaply, and stop guessing at cause and effect. I personally support that intent, and am interested to see what comes out of it.
What's to stop the companies just doing whatever seems to get them the most money? In my opinion, it'd be the fact that failing to stick to the agreement would pose an existential risk to the company. I think that courts, government, the NHS, and UK society at large would come down VERY heavily on any company contravening their contracts. Companies are going to spend significant effort ensuring they don't disappear overnight in a storm of lawsuits with their directors in jail.
Companies wouldn't do this for the same reasons that Seagate doesn't sell the data off RMA'd hard drives on the open market.
I trust the relevant public bodies in the UK to protect my interests here. You may not, of course.
Let's be clear, the intention is for the UK Government to make money off your medical data.
If Seagate wanted to make money off your RMA'd hard drive and they thought the data on it would do the trick, you can bet it would be for sale on the open market.
If the law says that is illegal, Seagate does not have the option to change it. However, the Government can simply change the law to make whatever they want to do 'legal' and their problem is solved. That's essentially what they've done here.
Large 'healthcare' companies interested in this data are more than just health providers; they have multiple divisions with multiple competing and tangential aims and targets. A piece of paper saying the data can only be used in one way is not going to stop the re-use (and leaking) of that data.
Remember that UK bankers totally screwed the country and were rewarded with massive bail-outs; I don't recall any jail time for their bad behaviour [in the UK], quite the reverse. Any social science student will be able to cite many examples of companies shielding individuals from the consequences of their bad behaviour; it's a whole subject area.
The UK government sets up QUANGOs specifically to shift liability and risk and so avoid consequences. A Scottish care home where elderly people were burned to death escaped prosecution because the legal entity was simply shut down and dissolved before the court case started [this did bring about legislative changes to close that avenue in Scotland: http://www.bbc.co.uk/news/uk-scotland-17740645]. There are dozens of ways to abuse the data and walk away free; if you're going to make a lot of money, you can afford good lawyers to help you prepare well ahead.
Sorry, but I don't agree with the premise and the first line of your comment. Much of the rest I do agree with.
The NHS (not "the government", which is an emotionally charged noun in this sort of circumstance) is selling the data. They are in financial difficulty, yes, but they are also responsible for broad social health in the UK.
The NHS is in an almost unique position world-wide, in that they have access to high quality data that can dramatically improve health at an international level. They aren't, however, a research group. Companies just do research better than government departments, and finding a balanced way to improve access to the data and improve social health is critical to the NHS's future as the population ages.
This is why they are selling health data, imho.
I think there's a balance to be struck. The global and NHS specific improvements in health need to be balanced against individual privacy.
Unfortunately, the only way to do this is through "pieces of paper" (again, an emotive term).
It's also worth mentioning that many of these pieces of paper have already been in place for years, covering the sharing of hospital data. So to some degree this extends an existing structure that is already working. It's just more emotive to many people since it involves a centralised location and their local GPs.
I'd rather have a centralised location with oversight fighting down a multinational, than my local GP trying to manage legal contracts with them.
It's difficult to respond to your specific examples. Some are completely valid. Some are (imho) not. "Mistakes were made" and mistakes will be made in the future.
It's complicated, and it's a balancing act. Personally, I think it's the right balance.
"I think that courts, government, the NHS, and UK society at large would come down VERY heavily on any company contravening their contracts."
Supposing a leak happened, what makes you think you'd be able to tie it down to a single company? The data could be leaked anonymously, and the risk of such a leak grows the longer this care.data scheme carries on.
Because the same data set isn't being given to loads of different companies; each release is unique. The whole concept is that you apply for specific data for a reason, get vetted, and then receive the data.
At the scale we are talking about here, the differences in both fields and individual records between releases are going to be so great as to identify the source of almost any leak.
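The claim that differences between releases can identify a leak's source is essentially dataset fingerprinting: if each recipient gets a slightly different, deterministic subset of records, a leaked copy can be matched back to the release it most resembles. A minimal sketch, with all names and the selection scheme hypothetical:

```python
import hashlib

def release_for(recipient, record_ids, keep_ratio=0.9):
    """Deterministically pick a recipient-specific subset of record IDs.

    Each (recipient, record) pair is hashed, so every recipient receives
    a different ~90% slice of the full data set.
    """
    kept = set()
    for rid in record_ids:
        h = hashlib.sha256(f"{recipient}:{rid}".encode()).digest()
        if h[0] / 255 < keep_ratio:
            kept.add(rid)
    return kept

def likely_source(leaked_ids, releases):
    """Return the recipient whose release best matches the leaked IDs.

    Score rewards records the release contains and penalises leaked
    records the release never held.
    """
    def score(recipient):
        rel = releases[recipient]
        return len(leaked_ids & rel) - len(leaked_ids - rel)
    return max(releases, key=score)
```

In this toy model, a leak of recipient B's copy scores highest against B's release, because other recipients are missing a different random slice of the records. Real schemes would fingerprint field values or inject canary records rather than just vary the record set, but the tracing principle is the same.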
You apply for specific data for a reason, but the whole point of releasing the data in the first place is to do statistical analysis on it, and any statistical analysis worth talking about needs large data sets, which are bound to overlap with other data requests.
Let's think of a hypothetical situation. Companies that develop drugs to help people deal with mental illness are likely to want to research what mental illnesses are most prevalent so they know where the greatest ROI for R&D is going to be. To do this they're going to be exploring the data set, pulling patient data for people with mental illnesses of various kinds. However, there isn't just one company developing such drugs, and what seems like a good use of the data to one company is likely to seem like a good idea to another. Now imagine there's a leak of every bipolar person in the UK. Who would have access to such data? There's likely to be multiple interested parties.