I think the real problem that Facebook leadership sees isn't that they're amplifying overly partisan posters, but that the popularity of these posters demonstrates that their userbase is increasingly low-rent and heading toward an AOL/Yahoo-style long goodbye.
No, their problem is simply greed. Zuckerberg, Sandberg, et al. want money, status, and power, and as long as they can distract from losing (or slowing the gains of) any of those by blaming problems on those bad users (or anything else), they're fat and happy.
The leadership is the problem. From a systemic perspective, it's incentives (born of societally mediated values). You don't change systems by treating symptoms; you change the incentives.
"Leadership" (ie officers) serve at the pleasure of the board, who are elected by shareholders.
Facebook intentionally focuses attention on Zuckerberg. "By all means, pump out memes about Android Zuck. Pay no attention to the inherent evil in our business model that cannot be fixed without destroying the company, or the shareholders behind the curtain who will keep right on pushing that business model because it makes them piles of cash."
That's where the attention belongs; Zuckerberg has a controlling interest[1]:
> According to an estimate from CNBC last year, that means Zuckerberg and insiders control about 70 percent of Facebook’s voting shares, with Zuckerberg controlling about 60 percent. So whatever shareholders are voting on, Zuckerberg and those closest to him get to have the final say.
So the rest of the "leadership" are effectively Zuckerberg's proxies. Absolutely nothing Facebook does is beyond his control.
If you have more people trying to sell your stock than wanting to buy it, it doesn't matter how much controlling interest there is. The board has to keep the market happy.
The market doesn't care about user privacy or ethics. The market only cares about making more money than was made before. They only care if the source of the data (us) gets so fed up that we stop giving them our data.
With Zuckerberg controlling 60% of the votes (and thus able to remove any of them unilaterally), the board has exactly one person they need to keep happy.
Zuckerberg could decide tomorrow that Facebook should be converted into a Christian singles dating site, share price be damned, and the board would comply or be replaced.
I think there is a bit of a disconnect recently between how people think companies "should" be governed and how they actually are these days.
The tech boom has, at various points, put incredible power in founders' hands to dictate the terms of corporate governance. In the case of Facebook, Zuckerberg has a controlling interest in the company. He can overrule shareholders and the board via his special class of shares; a toy calculation of that math follows below.
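To make the mechanics concrete, here is a minimal sketch of how a dual-class structure (one-vote Class A, ten-vote Class B) converts a minority economic stake into majority voting control. The share counts are hypothetical round numbers, not Facebook's actual cap table.

```python
# Toy dual-class voting arithmetic (share counts are invented round
# numbers, NOT Facebook's actual cap table; the ten-votes-per-Class-B
# structure matches what the thread describes).

def founder_voting_power(founder_b, other_b, class_a):
    """Fraction of total votes held by the founder."""
    total_votes = class_a * 1 + (founder_b + other_b) * 10
    return founder_b * 10 / total_votes

# Hypothetical: 2.4B one-vote Class A shares outstanding; the founder
# holds 360M of 440M ten-vote Class B shares.
print(f"{founder_voting_power(360e6, 80e6, 2.4e9):.0%}")  # -> 53%
```

With those assumed numbers, the founder holds under 13% of the total equity but still controls a majority of the votes.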
An even more egregious example is Snap (NYSE: SNAP): shareholders who purchase common shares of Snap don't have voting rights at all. Between them, Evan and Bobby control 90% of the votes. Even if you buy 100% of the available shares, you cannot vote on company governance, let alone exercise any control over the business.
These stock arrangements are pretty surreal. Buying shares in SNAP gives you zero say in the operation of the company, but they make up for this by also giving you zero dividends. I'm not sure what the point of owning the shares is at all; it feels like the only value is that you might find another sucker willing to pay even more for them in the future.
It's frustrating that the discussion around technology's effects on society is still stuck between the extremes of:
- "tech has a causal effect on society"
- "society has a causal effect on tech, tech is a mirror"
At this point it seems patently obvious that the reality is both: technology is subject to pre-existing flaws in society, but it also has the capability to amplify and worsen them.
> the popularity of these posters demonstrates that their userbase is increasingly low-rent
It's worth considering whether exposure to FB is what is making the userbase increasingly "low-rent". Yeah, FB didn't invent partisanship, misinformation, or authoritarianism, but I think it is likely worsening them.
> Yeah, FB didn't invent partisanship, misinformation, or authoritarianism, but I think it is likely worsening them.
Absolutely, Zuckerberg is basically a younger, less human version of Rupert Murdoch. He's going where the money takes him, and he's happy to exploit a profitable niche. My point though is that it's increasingly clear that Facebook's audience is becoming just that, an aging niche, and that's not a great place to be for a social media platform. We know what happens to those.
Just like Rupert Murdoch's various tabloids, exposure does indeed affect the consumers of that media, and it becomes an ouroboros of tabloid-type hysteria. I don't think that's what bothers Facebook though, just the fact that this type of niche audience is a loser in the long run.
"My point though is that it's increasingly clear that Facebook's audience is becoming just that, an aging niche, and that's not a great place to be for a social media platform"
A few counterpoints can be made here.
Yours is a very US-centric view. As one example, in several Central and South American countries, Facebook is everything. It's where everybody is, and this weird Western idea that mingling with old people is "not cool" isn't a thing there.
Facebook is nearing 3 billion ACTIVE users. Astonishing to call that a niche. It isn't shrinking or stagnating, it keeps growing.
Last, the obsession with non-teenage users somehow being the death of a social network is ridiculous. Most people in this world are not teenagers. Adults have all the money, and all the political power.
Who cares what teenagers think is cool? Teenagers are idiots and they don't have any money. They aren't going to grow up on your service; they'll dump it within 2 years.
Facebook has largely the same dominance in the Philippines. However, this situation indicts the nations in question more than it credits Facebook. Facebook is like cigarettes or sugared soda: yes, lots of people like it; no, it isn't good for them.
There is no reason social networks have to be run as unethically as Facebook's is run. Please don't bore us with any "the market is always right" arguments. The people in "Western" nations who have decided to use Facebook less (and perhaps other social networks more) have better taste and more agency than the people you're talking about. Those people should also acquire better taste and more agency.
That's once again a US perspective. Here in the Netherlands, Facebook is used to look at pictures of a friend who visited a zoo. It's not used as a public debate platform, and there's very little political activity on it at all.
In Myanmar, Facebook is used to facilitate genocide. [0] One suspects that your impression of Facebook use in the Netherlands is not accurate either.
It's not an impression. I'm a Dutch native and have used Facebook in the Netherlands since it launched. I don't see what Myanmar has to do with what I said. If anything, it only demonstrates the point that Facebook is used in wildly different ways.
At least we've moved from the USA being the special snowflake where Facebook is harmful to the more rational position in which Holland is the special snowflake where Facebook isn't harmful. A yet more rational, informed position is still available...
>It's frustrating that the discussion around technology's effects on society is still stuck between the extremes of:
It is not just tech; it is pretty much everything else on the internet and in the media. No one tells me the pros and cons anymore. After I read the pros, I have to go and dig up my own list of cons, or vice versa. No one finishes the sentence with "having said that" and rounds off with some counterargument. Most are so consumed by their own ideology that they fail to see anything else.
>It's worth considering whether exposure to FB is what is making the userbase increasingly "low-rent". Yeah, FB didn't invent partisanship, misinformation, or authoritarianism, but I think it is likely worsening them.
I would argue that had any other social network risen instead of Facebook during the Web 2.0 era, the effect would have been the same. I once tried to build a smaller social network that forced an opposing view into people's feeds to try to balance things. Holy mother of god, the backlash was insane. My conclusion was that this sort of extremism is a function of human nature. Before social media, people would buy the newspaper they preferred, the one that fit their own ideology.
"The only way to understand the Press is to remember that they pander to their readers' prejudices."
- Sir Humphrey Appleby
Internet "press" is no different to press in the 80s. Only at a much greater scale and fully automatic.
The VR stuff has the same energy as "The Titanic's crew has announced the installation of an amazing collection of trendy new Louis Vuitton deck chairs with an exciting new arrangement, and insists that the minor iceberg collision will be resolved soon."
Their solution to an aging userbase is buying out competitors (Instagram, WhatsApp) to prevent them from overtaking FB.
FB bought Oculus because they're beholden to Apple and Google in the mobile space; if VR is the next major platform (Zuck thought it was 10 years away in 2015) and Oculus is the market leader, it weakens Apple's and Google's control over them.
Notice how everybody seems to have accepted that non-mainstream positions, sources, and interest-groups must be banned, and now they're simply arguing about who should be doing it and how?
“non-mainstream positions” is covering a lot in that statement. “provably false information that endangers the public” is a more accurate rephrasing.
In any case, the article doesn’t even come close to saying that. It details an internal discussion at Facebook about whether they should make data about the most popular posts public. Where is the word “ban” even mentioned?
Notice how whenever someone talks about holding Facebook accountable for its algorithm spreading and promoting outright lies and falsehoods, e.g. active antivax propaganda, people start talking abstractly about “non-mainstream positions”?
Twitter suspended Jason Whitlock for tweeting "Black Lives Matter founder buys $1.4 million home in Topanga, which has a black population of 1.4%. She's with her people!"
They suspended the Chinese virologist Li-Meng Yan for supporting the COVID lab leak theory.
They blocked sharing the Hunter Biden story (for which they later apologized, after it blew up despite their efforts to suppress it).
So while the implication you're making is obvious, it doesn't stand up to scrutiny. The evidence shows that social media companies censor people for political reasons.
I didn't say it was the case with every single twitter ban. Twitter has lots and lots of horrible bans.
I was responding to the idea of "people start talking abstractly about “non-mainstream positions”". Whenever anyone gets that generic about what the problem was, be suspicious.
There are multiple misconceptions and straw man arguments in your post.
First of all, algorithmic censorship is real and has been exposed quite a few times by whistleblowers, by undercover journalists, and by platforms banning people for things mainstream media organizations do routinely (like filming someone's house). The bar for repercussions is set lower for anti-establishment and "right-wing" content creators, although it is beginning to affect radical leftists too, and they are beginning to wake up to this fact (https://www.jacobinmag.com/2020/06/anti-corbyn-blair-censors...)
And your post only mentions one specific aspect, the libertarian/fiscal side of conservatism, while conservatism is much broader; due to the polarization of society, other unrelated issues (most of which used to be populist left-wing causes) have sorted themselves along partisan lines: vaccines, masks, lockdowns, Israel, free speech, 2nd Amendment rights, anti-establishment stances, Section 230, big pharma, etc. And even for fiscal conservatism, the more benign and popular aspect, it is hard to find anything related to it on the front pages and default news feeds of the major platforms.
Talking about cutting social programs, which is the counterpart of cutting taxes, will draw criticism, especially if it affects the favored minority of the moment in some way, and will get you demonetized and demonized, which usually means lynch mobs, mass reporting, and worse.
Perhaps, yet the person who gets the most FB page views is apparently Ben Shapiro.
> And your post only mentions one specific aspect, the libertarian/fiscal side of conservatism, while conservatism is much broader; due to the polarization of society, other unrelated issues (most of which used to be populist left-wing causes) have sorted themselves along partisan lines: vaccines, masks, lockdowns, Israel, free speech, 2nd Amendment rights, anti-establishment stances, Section 230, big pharma, etc. And even for fiscal conservatism, the more benign and popular aspect, it is hard to find anything related to it on the front pages and default news feeds of the major platforms.
I see all those topics in my FB feed so at least from my perspective I have no idea what you're complaining about.
> Perhaps, yet the person who gets the most FB page views is apparently Ben Shapiro.
Shapiro, even if he is at the top, represents only a very small fraction of the overall attention space compared to everything else on Facebook. This is a cherry-picked example at best. Also, Shapiro is not anti-establishment by any definition of the word; see his stance on BlackRock buying up real estate as a recent example. And outside of debating naive socialist college students, he's not really an intellectual powerhouse beyond variations of "it's a free market, they can do what they want and it will turn out fine; here are my cherry-picked facts", which is a narrative Facebook has a direct interest in promoting. Anyway, I digress.
> I see all those topics in my FB feed so at least from my perspective I have no idea what you're complaining about.
Your personal anecdotes are irrelevant. Also, I never said these topics were outright banned; I said they were algorithmically manipulated. For instance, Facebook is actively suppressing topics that discuss vaccines in a negative light, such as discussion of long-term effects, which we obviously can't know yet since the vaccines just came out.
Hardly -- look at Roose's posts about CrowdTangle.* He is in the top 10 in every report, usually multiple times. Mostly the top 10 is Shapiro, Bongino, Fox, etc.
* A good collection is at https://www.nytimes.com/2021/07/14/technology/facebook-data.... But if you don't like the NYT, you can simply read the daily reports on Twitter yourself (up to the date when FB removed access to the tool). They report the opposite of what you claim.
You are conflating being at the top with having a majority of the attention volume.
And even if this kind of content had the majority of the volume, popularity doesn't negate efforts at suppression. If anything, it shows that Facebook is more popular with Boomers than with younger audiences, which is also the kind of demographic these talking heads attract.
The story at least shows that FB sees a need to suppress certain content and has failed to do so, at least in the eyes of leftist public opinion.
The promotion of false things is very profitable; by contrast, true things are far less engaging. Worse, it is more expensive to dispel a mistruth than it is to create and spread misinformation.
How do we address this systems design challenge without censorship?
James Madison figured it was better to leave a few “noxious branches to their luxuriant growth” than to risk “[injuring] the vigor of those yielding the proper fruits.” What makes you think now, 250 years into the most prosperous society in human history, is the right time to revisit this position?
For one, mistruths can now be made much more quickly than truths - hello, GPT-3 - and can be delivered far more convincingly than at any other time. I sometimes have a hard time telling AI-generated video from the real thing, and the fakes will only improve as time goes on.
Nor did there exist platforms that not only enable but encourage the creation of echo chambers that reinforce preexisting views and actively compete for our attention. There is no practical limit to the mistruth "content" that can be made, whereas proper information takes time and effort; I think we're nearing a point where genuine information will simply be crowded out and buried.
Bullshit. If anything, widespread misinformed beliefs were even more common and deeply rooted prior to the modern digital age. Today many people do indeed pick up stupid concepts and begin to believe them, but the very same system of rapid digital communication of ideas, and evidence, that you seem to think worthy of censorship also rapidly allows people to convince others of how false their more foolish ideas are. Truthful information may indeed travel more slowly than misinformation (though I'd love to see a citation for this constantly repeated claim), but good, evidence based information is also now easier than ever to find. This is a direct result of open, free expression of ideas in the digital sphere, with fewer restrictions than at any time in history. I can't think of an easier way to crush such a good basic thing than some idiotic, misguided, and emotionally driven crusade against "misinformation" or fake news.
> but good, evidence based information is also now easier than ever to find.
This assumption does not really hold when "the echo chamber" of algorithmic curation, optimized for attention, does not expose you to ideas outside of your bubble. It's also true that access to evidence-based research, say in open journals, is now easier than ever; that doesn't mean I have enough knowledge to properly understand its implications. That significantly limits the permeability of good, solid information compared to garbage.
It's not at all related to the topic at hand, but Japan recently liberalized its power market so that consumers can choose who generates their power. A study on Japanese public perception of energy liberalization[0] found that over 30% of those who never considered switching plans, and over 20% of those who seriously considered switching but decided not to go through with it, cited not being able to find enough information or having a hard time understanding it. It's just a single data point, but I think the actual hurdle to absorbing the information, as opposed to accessing it, is a lot higher.
Things like COVID safety or the shape of the Earth are far more personal and complex (once you get down to quality, evidence-based information) than power plans, and they lack the financial incentive that switching power plans has. What makes you think that these people in a bubble would venture out to find quality information, or listen to an outsider pointing out just how wrong they are?
Unlike the GP, I'm not personally in favor of censorship; as you say, the freedom of expression and information has far too much benefit to restrict mindlessly. Rather, I want to go back to the pre-"algorithm" days when quality content spread via word of mouth and not through some engagement metric.
[0] 10.1016/j.erss.2017.09.026 - It's just something tangentially related that I've read recently.
One doesn’t need AI generated content for this to be true, just AI generated suggestions. The founding fathers almost certainly never anticipated that “newspapers” would have the capability and incentive to make every human being feel like the local conspiracy theorists down at the pub were actually representative of all of society. But here we are.
To be fair, James Madison didn't live in a time where someone could easily spread their beliefs to millions of people instantaneously. Historical comparisons have their place, but this probably isn't it.
Sure, but filtering spam is different from the censorship being done by social media platforms. I can easily opt in to calls and messages from a person or group by adding them to my contacts. However, if my contact is deplatformed, that removes my ability to opt in. So the locus of agency shifts from me to some corporate or governmental authority, which is what people actually have a problem with.
You're confusing a protocol (email) with a platform (e.g., Twitter).
You can still pretty easily "opt in" to receiving their communications. You just can't force someone else to be the middleman. If they set up their own blog or forum you're completely free to visit that.
This is similar to how you can't force bob@example.org to forward you your friend's messages, but your friend can send them to you from their own personal email.
The locus of agency is on your friend to put in the effort themselves if they can't find anyone willing to help them spread their speech.
Maybe bob@example.org says something Google doesn't like and gets blocked from sending to gmail.com or any other email server hosted by Google, which is a huge swath of people similar in scale to the YouTube audience.
Through either the accumulation of capital (e.g. computational power) or consolidation of political power, central authorities can police traffic to censor information on a massive scale, even on decentralized protocols like email.
So yes, at that point, you could argue that the locus of agency falls on Bob to start his own tech empire or whatever, but that becomes absurd. We’ve already seen censorship at the infrastructure level and even the domain registrar level, which is extreme.
There's always a middleman on the Internet, or in any peer-to-peer system with more than two edges for that matter. The problem arises when that middleman grows into a giant leviathan hell-bent on manipulating the conversations between the parties it connects.
> Maybe bob@example.org says something Google doesn't like and gets blocked from sending to gmail.com or any other email server hosted by Google, which is a huge swath of people similar in scale to the YouTube audience.
If these people continue to use Gmail rather than more open alternatives, does that not indicate that they don't want to hear what Bob has to say?
> We’ve already seen censorship at the infrastructure level and even the domain registrar level, which is extreme.
There is absolutely zero good-faith use of the internet that can result in you getting irreversibly banned by a significant number of domain registrars; therefore this example is completely unrelated to the original quote (replicated below for convenience):
> it is better to leave a few “noxious branches to their luxuriant growth” than to risk “[injuring] the vigor of those yielding the proper fruits.”
There is a lot of validity to the quote. I largely agree with it. But the second part is important - it cannot be interpreted as "we'd better make it illegal for Twitter to ban people calling for genocide".
> There is absolutely zero good-faith use of the internet that can result in you getting irreversibly banned by a significant number of domain registrars; therefore this example is completely unrelated to the original quote
How is it unrelated? A noxious branch is exactly something that many or most people would consider abhorrent and perceive as not in good faith.
There will always be some moral veil put forth by censors to justify their actions. A good example is banning discussion on Ivermectin as a treatment for COVID. Some consider that a noxious branch — quackery endangering public health — but if it’s immediately pruned we won’t be able to see if it yields proper fruits.
It sounds like you’re misreading the quote. He is not saying noxious branches should be pruned to protect non-noxious ones. He’s saying do not prune them, instead let them fight it out and see which one bears the best fruit.
This quote is from The Report of 1800 [1], where he goes on to say:
> Had “Sedition acts,” forbidding every publication that might bring the constituted agents into contempt or disrepute, or that might excite the hatred of the people against the authors of unjust or pernicious measures, been uniformly enforced against the press; might not the United States have been languishing at this day, under the infirmities of a sickly confederation? Might they not possibly be miserable colonies, groaning under a foreign yoke?
He's crediting the formation of the United States to speech that was "noxious" to either the British parliament or the confederation government in power at the time. This noxious speech was absolutely a risk, to the extent that sedition acts were being threatened. He's advocating freedom of the press even when its speech may excite hatred.
Yes, logically: If it were becoming harder to spread harmful misinformation, then the need to restrict it would be less; if it's becoming easier, then the need to restrict it becomes more acute.
One would think that's fairly obvious. What did you think: that there would be more of a need to restrict it if it were becoming harder?
I'm certainly not advocating for restricting free spread of information. My point is just that opinions are informed by the historical context they came from, so the comparison isn't really relevant given the drastic differences between communication abilities in the relevant time periods. Making the point that we shouldn't revisit that opinion because we're in prosperous times without considering the differences in the context informing the opinion seems disingenuous to me.
In his time, to send a letter from the US to the UK took 6-12 weeks. "Fruit" was relatively rare, whether noxious or proper. Too much of anything can be harmful.
Your appeal to authority could also be applied to suffrage or the abolition of slavery. "What makes you think that now, xxx years into the most prosperous society in human history, is the right time to revisit the institution of slavery?" Times change.
That said, there's nothing in particular to stop a political party in the US from being completely anti-democratic and bringing down the entire system of government, elections, and the constitution. It just hadn't really happened between the Civil War and now.
What makes you think that everyone who wrote the Constitution and relevant documents 250 years ago was infallible?
The founding fathers got plenty of things wrong, knowing they were designing an imperfect system.
For example, first past the post voting. Washington cautioned against the rise of political parties knowing full well that they were an inevitable consequence of the system he was complicit in engendering.
The noxious branches have mutated into kudzu: invasive, ever-spreading, and damaging to the vigor of those yielding the proper fruits. Madison's analogy imagines a world that held true until 20-some years ago, when most people believed in a common view of the world, with different philosophies about that world.
Today, a significant percentage of the population believes that the election was stolen from Donald Trump through massive election fraud. Another, similarly significant percentage believes that vaccines are at best dangerous and at worst implanting microchips and turning people magnetic. Those people are not operating from the same basis of facts as everyone else.
I am following some misinformation Telegram groups for fun. And one thing I don't understand is that these people actively go out of their way to blind themselves whenever someone says something that doesn't fit their narrative.
There is not one channel I am part of that does not constantly post contradictory information, and yet the same people sit there and agree.
The admins only care about their herbal affiliate links; they rarely engage in discussions.
What I am saying is, it's the people that are to blame.
Anyone who is willing and able to do their own research, and who does not have some kind of brain damage hindering them, should be perfectly able to fact-check these claims and realize what these groups actually are.
This is an over-generalization so I don't think your premise completely stands. Some truths are very engaging, and support large communities of people working together to explore them.
I would put forth that the systems design challenge has to start with the human mind. People need to be taught proper grammar, logic, and rhetoric to reason properly and be able to express and debate what is true and what is false. Getting to the truth is often difficult, but a critical thinker can quickly sort through bad information, because it's full of contradictions and fallacies.
However, the "system" doesn't seem to actually want too many critical thinkers, because that would make it more difficult to wield power over its population. Cognitive ability only needs to be trained up to a certain point where individuals become useful corporate inputs, so there's an incentive for conformity to outweigh critical thinking and even creativity. Therefore, I don't think a top-down approach e.g. federal education reform, etc. would actually work.
What might work is organizing a more bottom-up approach of building peer education systems. If you'll permit me to use Star Wars parlance, imagine 1,000 Jedi academies forming and coordinating at higher levels where possible. These academies could take many shapes and sizes, and some current examples include Khan Academy, Ad Astra Academy, Acton Academy, or even homeschool cooperatives.
This has always been the case, and false rumors have a long history of spreading like wildfire. They're nothing new to social discourse, not even in the era of digital social media. Yet somehow, many of the most successful and humanitarian societies have managed to avoid forcing a uniform "misinformation-free truth" down people's throats through censorship of contrary opinions without collapsing in ruin. If the freedom to claim whatever you like is profitable to many, the freedom to repress what one doesn't like is even more profitable to a small group of powerful interests and government officials.
> Yet somehow, many of the most successful and humanitarian societies have managed to avoid forcing a uniform "misinformation-free truth" down people's throats through censorship of contrary opinions without collapsing in ruin.
1) Yeah... And some didn't.
2) The mis- and disinformation-spreading machinery is getting ever more effective and efficient.
3) Removing the worst of the mis- and disinformation is not the same as "forcing a uniform 'misinformation-free truth' down people's throats through censorship".
Make dispelling “mistruths” monetizable? Not sure how that could be done.
I see the root problem as the same as general device addiction: seeking instathrills. So, teach mindfulness and attention-strength training. General mental health.
By letting people make their own decisions. You've no right to make any claims to better discernment of truth than anyone else, and no one else does either. Your claims to truth have no priority above anyone else's.
I don't think most people have issues with non-mainstream positions or interest groups, or at least to the degree of their being banned. What's at stake here is Facebook's role in amplifying content that is at best misleading and at worst outright falsehoods. We've got a situation in this country where half the population believes the election was fraudulently stolen from Donald Trump and a significant portion believes that COVID vaccines are dangerous.
FWIW, I think it's perfectly fine for people to make those claims and put them in the town square, but Facebook doesn't just provide the venue, they amplify and drive users to that content, encourage them to engage with it, and then reshare it to their own networks, putting a veneer of truth and acceptability on it.
EDIT: I said "half the country", I should have said "half of the republican party" - apologies for the mistake.
I don't believe it's anywhere close to half who believe the election was fraudulent. It's widely reported and talked about just like other fringe ideas, but it's surely not half of Americans who think that.
I believe current polling is that about 50% of Republicans believe it. So if you extrapolate to the American population, you are looking at roughly 50 million Americans who believe the election was stolen; the back-of-the-envelope arithmetic is sketched below.
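One way to arrive at a figure in that ballpark; every input here is a rough assumption for illustration, not polling data:

```python
# Back-of-the-envelope version of the extrapolation above. All
# inputs are rough assumptions, not measured figures.
us_adults = 255e6       # approximate US adult population
rep_share = 0.40        # assumed share identifying as or leaning Republican
belief_rate = 0.50      # the "50% of Republicans" figure from the thread

believers = us_adults * rep_share * belief_rate
print(f"~{believers / 1e6:.0f} million")  # -> ~51 million
```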
I'm not 100% sure I understand your point, but I take it to mean that you're saying that Facebook's role is just an excuse to ban content I (or some groups) don't like?
If there were a true marketplace of ideas, everyone would have an equal playing field to build their own audiences, sway people to their positions, debate, and discuss. But that's not how Facebook works: engagement drives eyeballs, eyeballs drive revenue, and so Facebook optimizes for engagement, which itself happens to optimize for the most extreme, dramatic, and horrifying positions. "Were the COVID vaccines hurried through approvals too fast?" doesn't drive as much engagement as "COVID vaccines are killing thousands, data shows".
Yeah, sorry, I wasn't super clear there. I meant to emphasize the characterization of the content: "at best misleading and at worst outright falsehoods".
Calling it that makes it easy to say it should be banned - but remember that the Wuhan lab leak hypothesis was characterized exactly the same way. And that turned out to be acknowledged as plausible and even likely. If you're pro-censorship here, then you're in favor of banning speech about things that could be true and important.
Oh, sure, I think the problem in these kinds of discussions is that each reader has their own mental picture of the type of content being described, which colors the discussion. I think the Wuhan lab leak hypothesis is a good example - I don't think anyone serious or in a position of power was arguing that the statement, "It is possible that COVID-19 was the result of a lab leak" should be banned or restricted. Some people said that was hasty or lacking evidence; that's different, that's people's opinion about an opinion.
But there were also people saying "China intentionally released COVID to kill Americans" and calling it "the China flu" and encouraging aggression toward China. That's obviously much more inflammatory.
I'm not even arguing THAT should necessarily qualify as an outright falsehood (though I don't think it's a helpful way to have a conversation about a complex subject). I'm talking about "COVID vaccines are killing thousands, so you shouldn't take one", or "the election was stolen from Donald Trump".
I'm also not really arguing for censorship here - I'm just arguing that Facebook is a net negative in that it algorithmically optimizes for the most inflammatory, most aggressive content, with the side effect that it amplifies the least true content.
> I don't think anyone serious or in a position of power was arguing that the statement, "It is possible that COVID-19 was the result of a lab leak" should be banned or restricted.
> "The social media giant banned any content that asserted COVID-19 was man-made or manufactured in February but told Politico the decision has now been reversed."
There was no distinction made between 'lab leak' and 'intentional leak' when censoring the conversation. To me, it's a great example of why censorship of these things should be nipped in the bud. There's plenty more examples if you ddg "lab leak theory".
I suspect that if they provided telemetry into the Facebook Zeitgeist, it would show its userbase to be terrifyingly stupid and superstitious folk, on par with the late Carl Sagan's predictions in _The Demon-Haunted World_.
IMO the Google Zeitgeist was bad enough 10 years ago, in a somewhat simpler time when we could still laugh at stupidity and ludicrous conspiracies, but stupidity is a major influence on social media now that it has been monetized and drives revenue. I don't see a solution to that any time soon.
How could it not? What you click on, what you look at, what you read and for how long, what you write, where you go, what you buy, how you feel, what you like and dislike, who you know, who you talk to, what sites you visit... Facebook knows many of its users (and non-users) better than they know themselves.
You are (perhaps intentionally) confusing correlation and causation. You're also trying to transform a conversation about predicting stupidity, gullibility, and superstition into a conversation about whether some people are more valuable than others because the latter argument is easier to win.
What's causal? Can you predict "stupidity," for example, based on telemetry data?
Perhaps ______ skin-color, ______ descendent property owners who own 5-6 acres southwest of the city in the _____ zone, a couple of cows, read the _____, and are hesitant allies of the revolution are "stupid"?
Are you aware that efforts to characterize the thoughts and minds of individuals based on their "telemetry data" are associated with the most horrific, unspeakable crimes against humanity that have occurred repeatedly in human history?
The largest crime against humanity in human history (the Cultural Revolution) is associated with the anti-intellectual position you're advocating. So I don't think that's any kind of convincing argument to stop doing our best to figure out the facts and follow where they lead.
Refusing this line of inquiry is itself a principled intellectual position. The structure of such questions invariably confirms the biases of the interrogators. The “evidence” is then typically levied against groups without respect to the individual, as it was in my understanding of the CR. There is only false intellectualism in this manner of social pseudoscience.
Refusing all inquiries into group statistics might be a principled intellectual position. Collecting data about some metrics at the group level (salaries, college admissions, ...) and then refusing to inquire about other attributes that might reasonably affect those metrics certainly isn't, for all the reasons you've just given.
The answer to your question can be found by researching the Cambridge Analytica MyPersonality tool scandals.
It is a tool that I personally used and tested. I wouldn't call it a scandal; I'd describe it as open to researchers and the general public, enormously powerful, and accurate. It only became a scandal after Steve Bannon's team of smart political researchers used it with a high degree of effectiveness to get Trump into the White House - and the Democrats, who were still using weak user data like race and ethnicity, said hey, that's not fair! Essentially, some researchers at Cambridge had a large number of participants take a personality test and click a button to share their FB account data. The personality test measured 5 traits:
1) Openness
2) Conscientiousness
3) Extroversion
4) Agreeableness
5) Neuroticism
Now, based on that large sample of personality test takers, you can infer that the Facebook user you're studying has a given mix of those 5 personality traits, and that they probably have the following oddly specific likes, e.g.: Anime, Lil Wayne, popping bubble wrap, Frosted Mini Wheats, AK47s, anal sex, cheap beer, and the sweet smell of air right before it rains.
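A minimal sketch of that trait-to-likes mechanism on synthetic data; the real MyPersonality work used millions of actual profiles and far richer models, so this only illustrates the correlation idea:

```python
# Synthetic demonstration of correlating a Big Five trait with a
# binary "like". Data is invented; only the mechanism is real.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
openness = rng.normal(0, 1, n)                      # trait score per user
# Synthetic "likes anime" flag, loosely driven by openness:
likes_anime = (openness + rng.normal(0, 1, n)) > 0.8

r = np.corrcoef(openness, likes_anime.astype(float))[0, 1]
print(f"openness vs. anime-like correlation: r = {r:.2f}")

# Run the inference in reverse: given the like, estimate the trait.
print("mean openness, anime-likers:  ", round(openness[likes_anime].mean(), 2))
print("mean openness, everyone else: ", round(openness[~likes_anime].mean(), 2))
```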
You certainly didn't answer what's causal, nor capture concepts such as "terrifyingly stupid."
Given substantial experience in data analytics, I also don't believe in the slightest that Bannon's crack team identified all the open-minded mini-wheaters and rode that analysis to victory. That's absurd on its face but probably sounds amazing to clueless people.
Yes, of course your moral value is based, at least in large part, on what you read. How could it not be?!?
If you read nothing but bullshit, and furthermore not just read but "like" and share and amplify and spread it on, you not just "might as well be [a] second-class citizen", but absolutely reveal yourself to be a second-class human being.
This kind of thinking completely discards the concept of intent. People read things for all kinds of reasons. I have a family member who was an avid reader of the "Weekly World News", yet he did not believe in aliens, "Bat Boy", or the illuminati. I used to regularly read conservative blogs and listen to Fox News to keep up with what people I disagree with are doing/saying.
So the problem is that you can make some statistical predictions about what someone's views are based on their reading habits. But you can't morally judge them on that basis because you don't (and can't) know why they read something and what they thought about it. This is similar to how BMI is commonly misused. BMI is a fine indicator of population health, but is not always indicative of individual health.
This is a common mistake people make when thinking about marketing data analysis. It's similar to bits of entropy in browser tracking: the resolution of your screen and the fonts you have installed don't identify you by themselves, but when they're combined, their effectiveness scales much better than human intuition estimates (a quick worked example follows below).
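For intuition, the arithmetic: entropies of (roughly) independent attributes add, and about 33 bits are enough to single out one person on Earth. The per-attribute values below are illustrative guesses, not measured figures.

```python
# Worked example of how fingerprinting bits combine. The per-attribute
# entropies are illustrative assumptions, not measured values.
import math

world_pop = 8e9
print(f"bits to single out one person: {math.log2(world_pop):.1f}")  # ~33

# Entropies of (assumed) independent attributes simply add:
attrs = {"screen resolution": 4.8, "installed fonts": 13.9,
         "timezone": 3.0, "user agent": 10.0}
print(f"combined fingerprint: {sum(attrs.values()):.1f} bits")  # 31.7
```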
So you read conservative blogs and listen to Fox News. If Facebook is predicting, for instance, your voting habits, its algorithm won't look only at that. It'll look at where you live, what your job is, the political stances of the people you most associate with, which groups you're a part of and how their members typically lean, what stores you shop at, what search terms you use, and far more metrics than I can list here. Can you honestly say that all of those will point to incorrect conclusions? If you somehow live a life entirely contrary to your internal beliefs, you're in a minority so small as to be irrelevant.
You're using outliers to invalidate far more accurate predictions of the ensemble. You're right about people like yourself. IMO they are not the norm. Most "Weekly World News" readers are, I suspect, far more gullible than you. And far more Fox News viewers agree with their talking points and agenda than disagree or they wouldn't be watching, also IMO.
Right, but FB can figure out your intent much of the time as well. Simply clicking on a conservative news story might not indicate you're conservative, but if those are the only kind of stories you click on, that's a signal. And if 5 minutes after clicking that story you reshare it, that's another signal. And if sentiment analysis on whatever you wrote alongside the reshare suggests a positive, agreeable reaction, that's yet another signal. The aggregation of these signals, especially if you do this often enough, can often correctly identify your political views.
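A toy version of that signal aggregation, with invented weights and inputs; a real system would learn these from data rather than hand-tune them:

```python
# Toy aggregation of weak behavioral signals into one inference.
# Weights and observations are invented for illustration only.
import math

weights = {
    "clicked_conservative_story": 0.8,
    "reshared_within_5_min": 1.2,
    "positive_sentiment_on_reshare": 1.5,
    "clicked_liberal_story": -1.0,
}
observed = {"clicked_conservative_story": 1, "reshared_within_5_min": 1,
            "positive_sentiment_on_reshare": 1, "clicked_liberal_story": 0}

score = sum(weights[k] * v for k, v in observed.items())
confidence = 1 / (1 + math.exp(-score))   # logistic squash to (0, 1)
print(f"P(leans conservative) ~ {confidence:.2f}")  # -> 0.97
```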
> So the problem is that you can make some statistical predictions about what someone's views are based on their reading habits. But you can't morally judge them on that basis
Companies are making these kinds of flawed assumptions about you and every one of us every single day. They often then sell that info to data brokers who happily sell that data to others who will then start with incorrect assumptions about you as an individual and let that influence how they interpret the rest of the data they collect about you.
It's a real problem because those flawed data sets you aren't allowed to see, contest, or update are increasingly being used to meaningfully impact your everyday life in ways that you'll never be aware of.
The bottom line is that companies don't care. If they can make more money by being right most of the time, that's what they are going to do. You might be the fittest, healthiest person on Earth, but if your health insurance company sees that people in your area code have started buying fast food more often, it can decide to raise your rates. They won't tell you why they did it. They'll just do it. You could be the most financially responsible person on Earth, but if you live in the wrong zip code, don't be surprised when you get denied certain services, or told that a company's policies are one thing when they would have told you they were something else if you lived on the other side of town.
That said, it's probably a whole lot easier to make accurate predictions about people than you think. Sure, you read Fox News, but most of your time is probably not spent on right-wing sites, you probably aren't leaving comments espousing right-wing talking points, and you probably aren't donating money to right-wing causes.
With enough data it's not that hard to figure out if you're regularly hanging out at stormfront because you're working for the Anti-Defamation League or because you're a racist.
Algorithms can detect (and exploit) conditions like bipolar disorder and Alzheimer's; companies can certainly detect "stupid" well enough for their own needs.
You need to define a confidence interval for 'know'. We as a society judge and condemn people without 100% certainty as a matter of course. Both at a personal level and systematically, e.g. the judicial system.
To your example of BMI: it would be a perfectly reasonable public health policy[0] to use machine learning to probabilistically identify persons with an excess BMI and send them pamphlets and resources for weight loss / exercise. Will the odd fit person get a letter from the surgeon general telling them "being a fatass is bad for their health"[1]? Of course. I don't see why that is a terminal problem.
[0] If a bit creepy for the privacy aspects, which are out of scope here.
It would not be a perfectly reasonable public health policy to do what you suggest; it would be a waste of time and money. Everybody knows that being fat is unhealthy. The people who are fat and not doing anything about it just don't care, or have other priorities.
Or they have a story far more complicated than what you have reduced them to. Just attempting to manage the pandemic in the US has demonstrated how complicated and contradictory people turn out to be. But I do agree that pamphlets about BMI aren't likely to change their outlook. They each need their own personalized moment of clarity about their path, and it's not at all a sure thing they ever have it.
But also, a few people are obese because they are on steroids or have some genetic issue independent of their choices in life. They are not the norm, but they'll get lumped in with the norm, and that's offensive and improper. That said, if FB found out its userbase is unusually obese compared to the rest of a country's citizens, that tells them they have a potential moral hazard on their hands.
But if you're deeply offended by such telemetry, maybe consider not using Facebook. You'll be fine without it. I personally make sure to post ridiculous and contradictory responses to the insipid and horribly targeted ads they push into my feed. Some of them have gotten me suspended for violating their "community standards", which IMO would not get me suspended anywhere else.
My politics are idiosyncratic and lean far enough to the left that I have a hard time explaining them to my centrist liberal friends.
At the same time, I read a lot of far-right sources; I do not find them at all compelling, but in addition to the fact that I like to know what the people who think I am a literal baby-blood-drinking demon might be up to, I have an MA in rhetoric and find the discourse to be as engaging as stuff like William Burroughs or The Illuminatus Trilogy...
And I am not alone in that. You simply can't divine much of anything based on such a person's search history and reading list, and that's made even more difficult by the compartmentalization that some of us use in-browser to work against tracking.
>Do you think what a person reads has no predictive value on their 'moral value'? (An admittedly vague term)
Yes, 100%. How can it, if we agree all human life is sacred? Are some people more sacred than others? Holier than thou? (getting downvoted for this)
All men are created equal. Ashes to ashes, dust to dust. Ephesians 6:9 "Slave owners, you must treat your slaves with this same respect. Don't threaten them. They have the same Master in heaven that you do, and he doesn't have favorites."
There are many different ways to be stupid and many to be superstitious. Not one of those attaches to a prerogative to ban reading material or assign cognitive attributes to swaths of humanity.
I introduced morality to demonstrate, with argument left to the reader, how the question you are trying to compel is misguided. Adjacent threads provide context for doing so.
If you like and share enough quotes from and links to Mein Kampf, it doesn't take Sherlock Holmes to figure out you're a Nazi. And if you don't think that says something about your "moral values" you're sadly mistaken.
(My prerogative to assign cognitive attributes to you derives not from your choice of reading material, but from your apparent inability to realise the above.)
Everyone is superstitious; some people are just in denial. It wasn't an accident that progressive ideology went down the eugenics path, or that countries like China engaged in things like the one-child policy. Atheist utilitarianism freed of superstition can justify a lot. As against it, you can repackage the concept of “the inherent dignity of every human life in the eyes of god” into a variety of secular packagings, but that doesn't make it any less superstitious.
Please explain how 'pro-life' isn't a moral position. Isn't the position fairly summarized as: 'it is immoral to abort a pregnancy under <insert circumstances here>'? Emphasis mine.
It's easy to see how being pro-abortion can be a decision based on money, so it might be considered a practical decision.
The opposite position could be practical in certain societies. A Mexican parent would encourage having the baby. In other areas with more traditional values it might be more practical.
Whatever position you have could be moral or practical or cynical or spiritual.
Pray tell, which group of people do you think are stupid? Please also subsequently elaborate on how your judgment is objective, and detached from an appraisal that might be interpreted as mere goodness and badness.
I didn't have any particular groups in mind here, because those weren't my comments. And even if I pick a group, it's irrelevant to your "where do you propose this intersects" question because the intersection of those two phrases in this comment thread was set up by you. You tell me where they intersect, if the intent was something more complex than treating them as synonyms.
You are making a straw man argument by assuming that GP claims they can infer individual-level attributes from telemetry while all they claim is to infer population-level attributes.
Facebook at least claims to be a leader in scanning for and detecting child sexual abuse material (CSAM) on its platform. As a result of their diligence and size, Facebook files the vast majority (~90+%) of all CSAM reports[1]. That fact suggests that Facebook really is better than most when it comes to detecting CSAM, because there is nothing inherent to Facebook that would make it a more appealing platform for CSAM than pornography websites (e.g. MindGeek), 4chan, or Snapchat.
When the media gets hold of a fact like "90% of CSAM reports come from Facebook", they jump to the conclusion that Facebook is a "hotbed" of child pornography - as in this example[2]. Facebook's efforts and openness about the problem become fuel for clickbait criticism of Facebook.
Naturally, this has consequences. Facebook, in part because of the negative feedback, I think, wants to move toward end-to-end encryption in its messenger platforms. This will make it impossible for its automated systems to detect CSAM in messages, and so its CSAM report numbers will fall. Everyone wins. Well, except for the children who are now exploited behind the protection of end-to-end encrypted messages.
This article seems similar, in part, to what I've described above. Facebook is unusually transparent to media companies by providing their CrowdTangle tool to media and academics. This level of transparency is not offered by competitors and so Facebook, with some data available, looks worse than competitors who have no available data to compare to.
I write "in part" because here the content isn't inherently bad the way CSAM is. Instead, it's just political messaging that some media groups do not like. Neither the article here nor in the New York Times even try to address whether Facebook is doing something to cause this, or whether Facebook's audience simply likes Ben Shapiro and Dan Bongino. Is Facebook giving preferential treatment to these people, or do Facebook users just like the "wrong" pundits?
Your post seems to imply that end to end encryption is bad and that we need a surveillance state run either by corporations or government in order to protect children.
We need to not head down this path.
Protecting children has always historically been led by their own parents and we should empower that. Improvements should always be about parental education (teaching parents to recognize the signs of abuse/grooming before abuse) and cultural education (teaching about the harms of sexual abuse so that potential offenders don't act). Contrary to popular belief, most pedophiles are not actually psychopaths and actually have consciences and if they see the damage that their actions could cause, they're less likely to act and more likely to find non-damaging outlets for their urges (simulated or drawn material, as two examples).
Facebook reports ~20 million instances of child sexual abuse material a year. My understanding is that this material is largely transmitted by message (i.e. people aren't posting it on their walls, but sharing it in chat). Regardless of your ideological convictions, the fact remains that one trade-off of end-to-end encryption is a reduced ability to detect, track, and report CSAM.
My intuition is that our current trajectory will lead us to a "worst of all worlds" case where governments continue to spy on and suppress people by subverting end to end encryption at some level, but pedophiles will trade their material with impunity. Ideally, I would like to find the opposite compromise, where we could detect and catch criminals while maintaining privacy. This has been attempted by law, due process, warrants, and such - but that attempt is subverted by the government not following the law.
They don't actually describe what kind of content this is, nor whether there are false positives in those reports, to my knowledge. I wonder how many parents posting "baby's first bath" type photos get accidentally labeled.
Then there's the question whether they're labeling adult anime photos as child abuse imagery which could also greatly inflate the numbers.
20 million sounds like a ridiculously, unrealistically high number. There just isn't that much of that type of content out there. A reasonable number would be in the tens of thousands, I would think.
FB's move to e2e encryption explicitly discusses how they will still support content reporting, so users can report CSAM and send the plaintext back to FB.
Your framing of their motives here is both cynical and wrong.
Novel CSAM gets caught primarily because users report it - not because of outside detection (for known material there are digital fingerprinting methods to block content; a sketch follows below). They already operate WhatsApp, which is e2e encrypted, and have to apply a lot of these strategies there anyway.
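For the curious, a sketch of what fingerprint matching against a known-hash database looks like, using the open-source imagehash library as a stand-in for proprietary systems like PhotoDNA; the hash value and filename below are invented:

```python
# Sketch of known-material fingerprint matching using a perceptual
# hash (pip install pillow imagehash). This is NOT PhotoDNA, just an
# open-source analogue that illustrates the idea.
from PIL import Image
import imagehash

# Perceptual hashes of known-bad images (in reality maintained by
# clearinghouses such as NCMEC; this value is made up):
known_hashes = {imagehash.hex_to_hash("d1c48c2e3a195b07")}

def matches_known(path, max_distance=5):
    h = imagehash.phash(Image.open(path))
    # Hamming distance; small distances survive resizing/re-encoding
    return any(h - known <= max_distance for known in known_hashes)

if matches_known("upload.jpg"):
    print("flag for human review and reporting")
```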
I don't think my framing is cynical (or wrong). Facebook has a clear incentive to reduce CSAM reports, and end-to-end encryption will help by preventing the automated systems, which do most of the detection, from seeing CSAM in messages. I don't think it's Facebook's only motive for implementing end-to-end encryption, but I doubt they haven't considered it.
It's good to give users the option to report such material. As I mentioned, though, the majority of CSAM detection is done by automated systems. I might guess that if two pedophiles are trading CSAM via Facebook Messenger, both will be unlikely to report the other.
> As a result of their diligence and size, Facebook files the vast majority (~90+%) of all CSAM reports[1]. That fact suggests that Facebook really is better than most when it comes to detecting CSAM, because there is nothing inherent to Facebook that would make it a more appealing platform for CSAM than pornography websites (e.g. MindGeek), 4chan, or Snapchat.
Interesting numbers. Thanks for finding them. I think they support my intuition. MindGeek, which owns Pornhub as well as RedTube, YouPorn, and others, reports only 13k instances to Facebook's 20 million. By traffic, MindGeek has, we can call it, a tenth of Facebook's traffic, but they file only about 0.065 percent of Facebook's reports (the normalization is sketched below).
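The per-traffic normalization behind that comparison, using the report counts from the thread and the rough traffic guess above:

```python
# Report counts come from the thread; the traffic ratio is the
# commenter's rough guess, not a measured figure.
fb_reports, mg_reports = 20_000_000, 13_000
traffic_ratio = 0.1                       # MindGeek ~ 1/10 of FB's traffic

report_ratio = mg_reports / fb_reports    # ~0.00065
normalized = report_ratio / traffic_ratio
print(f"MindGeek reports per unit of traffic: {normalized:.2%} of FB's")
# -> 0.65%, i.e. over two orders of magnitude below parity
```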
Of course, maybe there is a difference in the nature of the companies. Most visitors to MindGeek properties, presumably, are there to consume content rather than share it. Maybe Facebook's model enables users to connect to one another, and that facilitates the exchange of CSAM.
The first time I came across Facebook, it was because it was so well organized for specialized porn. You'd find groups for every kink and fetish, similar to NSFW subreddits today.
Only a year or so later, people here started using it as a social network, and I was confused.
Not saying it's still that way, but given how Facebook works (closed groups where you have to ask for access) and how slow and ineffective its moderation still is, I'm sure it remains a preferable platform.
It seems possible that the sort of broken brains who send (and possess in the first place) such material are more likely to use the Facebook platform than they are to use competing social network platforms.
What exactly is the evil that Facebook is supposedly trying to not see? And how would making reach data available make it easier for Facebook to see evil?
Or if Facebook is trying to hide the evil, what evil is it they are trying to hide? What will the reach data reveal that is so evil? And if the evil is manifest without reach data, why does it matter?
Understanding the argument that 'Facebook is evil' requires more context than what is explicitly stated in this article. It's more of a follow-up to, or continuation of, a long-running thread.
Basically, there is a long thread about Facebook being a haven for the worst qualities of human discussion: insular, xenophobic, and reactionary. This started to get really big during the Cambridge Analytica scandal but has continued from there.
In short, the argument is that Facebook knowingly allows the aforementioned culture to manifest. In many parts of the world, this has resulted in real-world consequences and deaths. That they refuse to divulge the most popular articles is, in this author's mind, a sort of cover-up.
Personally, I don't fully agree with this assessment. As Facebook has become more and more of 'the web', replacing the distributed forums and chatrooms that once dominated, it's also become a reflection of us. Facebook is in a bind no matter what it does--it's either guilty of censorship or of misinformation.
> Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that."
There's a piece by Cory Doctorow about this [1], which I found interesting for its expansion into highlighting FB's apparent beef with Ad Observer [2]. I also like Doctorow's piece because he used the word "flensed", which I don't think I've seen for years but which, for some reason, I like.
How would you solve this problem? Presumably the engagement algorithms look at posts people share more frequently, reshare, and share to large numbers of people. Maybe there is a rate (virality/time) metric involved as well. This gives them some priority in the news feed which causes more sharing in a positive viral feedback loop. That would have to be filtered by NLP or image recognition and possibly a source reputation score.
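To make the virality metric concrete, here's a toy version of what such a score might look like (every field name and weight is hypothetical; I have no idea what FB's actual ranking features are):

    from dataclasses import dataclass

    @dataclass
    class PostStats:
        shares: int          # direct shares
        reshares: int        # shares of shares
        unique_sharers: int  # distinct accounts sharing
        age_hours: float     # time since posting

    def virality_score(p: PostStats) -> float:
        # Toy engagement score: weighted reach per unit time.
        reach = p.shares + 2.0 * p.reshares + 0.5 * p.unique_sharers
        return reach / max(p.age_hours, 1.0)

A post that racks up reshares quickly outranks a slow burner, gets more feed placement, and so gets shared more: that's the positive feedback loop.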
I imagine there are a host of bots that instantly reshare things..
It's not so much that the key people at the top of Facebook "see no evil," but that they truly, sincerely, earnestly believe there is no evil to be seen, or if there is any evil, they truly, sincerely, earnestly believe they can subdue it with great engineering and great management.
They fail to see what every outsider, and many insiders, clearly see. Paraphrasing Upton Sinclair:
It is difficult to get people to see something if their wealth, status, and self-worth depend upon not seeing it.
> They fail to see what every outsider, and many insiders, clearly see.
The problem is that half of those outsiders clearly see the evil as Facebook censoring too much. And the other half clearly see the evil as Facebook censoring too little.
Surely we can all agree that Facebook should do what outsiders tell it to do, regardless of whether different groups of outsiders want contradictory things.
That sounds like another way of saying "never attribute to malice that which is adequately explained by stupidity".
Ignoring so many people asking them to examine their bias and human fallibility is definitely a form of stupidity. That is unless they believe the people are more flawed than they are and that they are better placed to judge what the people need. In that case, it becomes more malice than stupidity.
It's what is called "ideology" in critical theory & Marxism. Thought shaped by way of class position; aka people "think with their stomachs." Not always, but on the whole, the majority of people will adopt the world view that keeps them fed/wealthy/powerful/happy.
> they truly, sincerely, earnestly believe there is no evil to be seen
Truly? Sincerely? Earnestly? All that?
I must be a lot more cynical than you. I think they absolutely know the score. I think they sincerely would rather not be painted with the brush of accountability, would prefer deniability.
> they truly, sincerely, earnestly believe there is no evil to be seen, or if there is any evil, they truly, sincerely, earnestly believe they can subdue it with great engineering and great management.
That's hard to believe when they've responded to international catastrophes that they had a role in like this[1], and admitted this themselves[2]:
> Facebook admits it was used to 'incite offline violence' in Myanmar
My perception is that they think they can "fix" these horrifying problems with cleverer software, better policies, and new procedures without negatively altering or impacting the giant gush of money that flows into the company every day.
I'm "country folk" and have never experienced this kind of attitude from "city folk" on the basis of my address.
Then again, I do truly, sincerely, earnestly believe the enablers like Fox News, the GOP, et cetera are evil. The people who listen to them aren't evil, just deceived.
Indeed. So-called "country folk" are a serious minority, and if the right in the US had to rely on rural voters exclusively for support they'd never win an election.
For sure, there's a right-wing voting bloc that might see itself as "salt of the earth" or "country folk" or "common sense"; but it's just an ideological package held together with shoestring and old gum: old-school nativism, pro-life stuff, and an appeal to a sense of lost opportunity.
And I grew up in rural Alberta, Canada, oil country heartland of "country folk" in Canada, the Texas of Canada, and I live rural now, too. It's never been the case that all "country folk" vote right or hold right wing opinions, even if a sizable chunk do.
I do think "city folk" hold some pretty stereotyped views of rural life though.
It's weird to think of Facebook as the reasonable people in the room.
The left wants to own online political discussion in the US, and they are very bothered that the right finds a ready audience when they are allowed to compete.
That's a very odd definition of "left" floating around down there in the US. What is being called "left" in that context is nothing but neo-liberal centrist politics, and it has wanted to own political discussion in the US in some form or another for over a century. It's right of centre and frankly conservative on fundamental economics issues [i.e. where all the power is held], and could only be called "left" in the domain of cultural issues...
Actual socialist politics are not permitted in US discourse, just stuff around the margins which is not threatening to corporate power (identity politics, maybe some health care reform).
I remain flabbergasted by the increasing number of people who can somehow in the same breath complain about "radical socialists" and "cultural marxists" while at the same time somehow equating those people with "corporate elites" and "silicon valley" -- the two are the enemy of the other.
EDIT: as a person with actual radical socialist politics, I can assure you that both Facebook and the NYT want nothing to do with my views.
>> The left wants to own online political discussion in the US, and they are very bothered that the right finds a ready audience when they are allowed to compete.
> That's a very odd definition of "left" floating around down there in the US. What is being called "left" in that context is nothing but neo-liberal centrist politics, and it has wanted to own political discussion in the US in some form or another for over a century.
I don't think that's the "left" the GP was referring to. I think they were most likely talking about the "culture war" left.
> I remain flabbergasted by the increasing number of people who can somehow in the same breath complain about "radical socialists" and "cultural marxists" while at the same time somehow equating those people with "corporate elites" and "silicon valley" -- the two are the enemy of the other.
It's because we don't always get to control definitions, even ones we care a lot about (ask me about "crypto" sometime). IMHO, those are both fashionable (in some circles) new terms for the "culture war" left, somewhat inflected by plutocratic interests that harness opposition to it to further their own agenda.
Edit: IMHO, I think a flag-waving socially-conservative socialism could be surprisingly successful in America, if someone could get it off the ground.
I think I agree with you (and actually prefer not to use the term "left" myself in general for this reason), but I still think it's worth underscoring the points about the incoherence of the use of these terms. Someone on a hobby group I am on the other day started ranting about how rising fire insurance rates for farmers were "Just another step to push out the middle class and independent owners to make way for big corporate ownership." [ok fine, whatever] and then suffixed it with "The United Socialist States of America" [W the actual F? Makes zero sense].
I see this kind of talk from people with Q & Trump-inflected politics all the time. It's bizarre.
Traditional left/progressive values would include things like affordable healthcare, worker protection, progressive taxation, livable wages, the like. Importantly, for all.
The Democrats don't seem to deliver on any of these basics long achieved in many other western countries, therefore I agree that they are neither left nor progressive.
By comparison, not even our main right wing party (VVD) would be as conservative as the Democrats on the matters above. So locally, we would see the "left" Democrats as near far-right. That's one huge gap.
(as a weird complexity, over here "liberal" means right-wing. In the US it means left-wing. yet since US left-wing is in fact right-wing, I guess it does add up)
The second type of left in the US, I do consider truly left. It's hard to put your finger on it, but it includes identity politics, the "woke", down to even marxists.
Clearly they are on the rise, at least in media and institutes. Yet they are now in an unhappy marriage with the core of the Democrats, which as we established is right-wing. Good luck with that.
For the record, here in the Netherlands we largely reject that type of left.
So I agree with most of what you said, except for the Silicon Valley part. You're going to be super surprised how the biggest supporters of extreme left policy are in fact rich comfortable people.
I'll refer to one of the most mind blowing tweets ever produced (now deleted). A co-founder of Twitter took issue with the founder of Coinbase disallowing political discussion in the workplace, and tweeted:
"When the revolution comes, me-only capitalists like X will be the first to be put against the wall. I'll be happy to provide video narration."
The extremity and cruelty is impressive, but the truly shocking part is that the person tweeting it has a net worth of 300M.
Democratic socialism isn't anywhere near the same as the Cuba/Venezuela/USSR type of socialism. Are any politicians of the latter stripe currently in Congress?
Like I get that there's a popular meme that taxes and government services are "socialism". But that's a wildly inaccurate boogeyman conjured up by people who really, really don't want to pay taxes. (No one wants to pay taxes, but most of us accept they're the cost of having a society)
If taxes that fund government services are "socialist" then having a police force is "socialist" too. Do you agree with that? It sounds ridiculous to me but that's where that logic leads.
No one is talking about censoring arguments about what constitutes reasonable levels of immigration, or whether the government is spending too much, or what rights states should have vs federal gov. IE conservative policy positions.
The divide is on topics like anti-vax, nonexistent election fraud, et cetera. IE, actual lies.
This is a weird argument to make when the lab leak “conspiracy” was censored the same way.
Also, there was election fraud. There always is. Was it enough to turn the election? Who knows. To say that people shouldn’t be able to discuss it is mind-boggling to me. The only way we can have trust in our election process is if we can ask questions.
> To say that people shouldn’t be able to discuss it is mind-boggling to me.
This is a wild misrepresentation of the opposing perspective. Nobody is arguing that we can't discuss election fraud.
The argument—of which I'm sure you are actually aware—is that there needs to be some level of credibility to the idea that a) fraud occurred, and b) it happened in meaningful quantities, before we spend significant time, cost, and effort investigating claims.
Simply having lost is not a credible claim to investigate widespread fraud. Finding one or two isolated cases in elections with margins of thousands or more votes is not a credible claim to investigate widespread fraud.
Further, fraud cannot simply be a claim that is made and then perpetually reinvestigated by decreasingly-reputable third parties until you are able to invalidate an election whose outcome you disagree with.
Stopping people talking about election fraud because you don't feel a certain credibility has been granted is censorship.
Whichever gatekeeping rules you agree or don't agree with shouldn't matter. The gatekeeping is the problem. Being afraid of ideas and shutting down anyone who speaks outside approved topics is the issue, not whether your gatekeeping rules have been met.
> Stopping people talking about election fraud because you don't feel a certain credibility has been granted is censorship.
Zero people are being stopped from talking about election fraud. You and I are sitting here discussing election fraud right now. The only thing that has been stopped is investigation of claims of widespread fraud for which there is virtually zero evidence.
This is precisely the kind of wild misrepresentation that people—including myself—are tired of fighting. If you need to misrepresent your opponent in order to defeat them, maybe you should reflect: are we the baddies?
> This is a wild misrepresentation of the opposing perspective.
I think that's currently how the game is played. You can try to be better than that, but then the other side wins because they are still happy to play dirty.
Well, they're the position of the most vocal and nutty segment of the right wing. Trying to paint the entire right with it is dishonest (whether accidentally or deliberately).
I will admit that the nutty ones get all the media attention at the moment. (You can decide whether or not you believe that's the media trying to paint the entire right that way...)
They're not only getting all the media attention at the moment but they are holding all the power in the GOP at the moment. Especially in places like Florida.
Incidents of voter fraud, side effects from vaccines, etc. are quantifiable, and therefore not subject to opinion. Interpretation, yes, but interpretation must be supported by evidence.
Yes, there we go: This reaction, when calling it obvious and blatant lies. Every time. Believing in lies is now normal and expected of right-wingers, and calling it out is called attacking political opinions.
Even if by some crazy implausibility it was the "greatest threat to our democracy since the Civil War", that's like saying, "Man, I just got a paper cut and that is sure the worst injury I've had since that accident where I was paralyzed from the waist down"
And are you kidding me? What if the mob had caught AOC or Pelosi? That would have turned from a "paper cut" to a political murder during an insurrection.
To be fair, that's not who they were chanting about hanging on the gallows they built. And, frankly, changing the outcome the way they wanted made uncooperative Republican targets more valuable.
I see your point. But using "right-wingers" is a tell of sorts. It makes you appear dismissive of the opposing viewpoint.
But otherwise I agree: there is such a thing as truth and there are out and out lies.
I hope that my views are based on truths but I recognize that there are areas that while true are nuanced enough that someone else can come to a different conclusion than me. For this reason I don't claim that everyone with a different view is backing their view with lies.
I don't think it's my labels that are polarising anyone. If you believe this nonsense, you are fully polarised already. I didn't do that. "Leftists" didn't do that.
Are you talking about the same "recent very secure elections" where the processes in place worked as intended? Those test votes were found and were taken out, specifically because there are processes in place to check and double-check and triple-check the count.
America does not keep trying to convince you that there is "substantial election fraud". It is clearly one political party and their friends at Fox News that are trying to convince you of this. Reality is the antidote to their poison.
So liberals get to pick the political arguments humans get to talk about? You may support that type of social conversation, but don't call it democracy, because it's not.
> The divide is on topics like anti-vax, nonexistent election fraud, et cetera. IE, actual lies.
It's funny that the party of "my body my choice" is so against people wanting a say over what goes into their bodies. I am personally vaccinated, but I think it's reasonable to let individuals make that choice.
Also, regarding election fraud: when on election night you see charts like [1] with enormous one-party spikes, it is entirely natural for people to be suspicious. Those people then asked for audits and were told to go to hell. If you want to undermine trust in the election system, that is exactly how to accomplish it.
None of this is "you aren't allowed to lie"; it's "you aren't allowed to ask questions".
Again, I don't believe there was widespread fraud, but people who refuse to discuss what happened and refuse inspections sure aren't helping us become more confident.
Last I checked, pregnancy isn't contagious. Not exactly an apples to apples comparison.
Who was told to "go to hell"? There were plenty of recounts in all of the close states. Even the recent farcical commission checking fraud in Arizona didn't find anything.
As to the "spike" in that picture, a simple Google search of "Michigan spike voting" produces plenty of resources showing how the "spike" was not fraud. And if you're so worried about the spike in Biden votes later in the process, why are you not also worried about the spike in Trump votes at Nov 3 21:00 (on the graph on the right)?
You're being downvoted because these arguments are so bad as to almost clearly be in bad faith.
> You're being downvoted because these arguments are so bad as to almost clearly be in bad faith.
I'm being downvoted because some subset of people here view down vote as "I disagree". I used to be bothered by it. I don't think much about it anymore.
Edit:
I'm also fairly certain I've got some followers who take it upon themselves to go through my comment history and start downvoting other posts of mine just for good measure. You know, really sticking it to the man or whatever.
I downvoted you. Not because I disagree, but because I too believe your arguments are in bad faith and/or misrepresenting the positions of those you disagree with.
> It's funny that the party of "my body my choice" is so against people wanting a say over what goes into their bodies.
Unlike anti-abortion laws which force women to take pregnancies to term against their will, I am aware of zero proposed legislation that aims to force people into vaccination against their will. The one potential exception to this is for entry to public schooling, for which religious exemptions are (generally but not always) easy to come by.
If not bad faith or misrepresentation, then what?
> Also, regarding election fraud: when on election night you see charts like [1] with enormous one-party spikes, it is entirely natural for people to be suspicious. Those people then asked for audits and were told to go to hell.
It is reasonable for people to be suspicious. But far from being told to go to hell, people have been given repeated and convincing evidence for why these spikes occur (blue votes tending to cluster in high-density, high-population districts). There was even ample discussion in advance of the election about how, where, and when we expected these spikes to occur, why they're expected, and demonstrating their historical precedent.
Some people still demanded investigations of fraud. Most of those claims were dismissed through official processes due to lack of evidence. Being denied an investigation into claims that have been repeatedly debunked is not being told to go to hell. In fact some of those claims were investigated, but essentially zero systemic fraud has been found to date.
> Unlike anti-abortion laws which force women to take pregnancies to term against their will, I am aware of zero proposed legislation that aims to force people into vaccination against their will.
Just this week, Biden was talking about having people go door-to-door to push the unvaccinated to get the shot. Arizona publicly told him to get bent - they weren't going to do that in their state.
So, that's not "forcing" people, but it's too close for my taste. I'm going to presume that you wouldn't be fine with the state sending people door to door to push those who were pregnant to carry to term.
> Just this week, Biden was talking about having people go door-to-door to push the unvaccinated to get the shot... So, that's not "forcing" people, but it's too close for my taste.
Can you acknowledge that—even taking this completely at face value—going door-to-door encouraging the use of a vaccine has absolutely nothing in common with legally forcing women to take unwanted pregnancies to term, regardless of which side of either policy you care to take?
This is exactly what I'm talking about. Trying to draw parallels between these two situations is absurd to the point of bad faith or willful misrepresentation.
> I'm going to presume that you wouldn't be fine with the state sending people door to door to push those who were pregnant to carry to term.
For reasons completely independent of "my body, my choice" which was the original goalpost.
This is an issue of public health for which we had to globally shut down international travel and social gatherings for a year and a half, and which had incalculable economic impact on billions. Can you also acknowledge that such consequences might perhaps clear a higher bar than that of a choice whose impact is fundamentally limited in scope?
Recognizing that difference in impact is why we've spent $20bn on vaccine development and who knows how much on the actual vaccine rollout.
> Can you also acknowledge that such consequences might perhaps clear a higher bar than that of a choice whose impact is fundamentally limited in scope?
"Fundamentally limited"? Given that a fetus is genetically human, and genetically different from the woman who carries it, it's clearly both human and not part of her body. There are plenty of completely reasonable people who see those two facts as putting abortion as being perilously close to murder, at best.
First, given that it's genetically a different individual, "my body, my choice" seems willfully blind to the rest of what's involved in abortion. Second, though, if you do regard abortion as murder, the death count per year is of the same order of magnitude as from Covid. So "fundamentally limited in scope" is assuming the answer to something that is, at best, very much still in debate.
I'm gonna guess that you're unaware of the states/large regions in which military recruiters go door to door, constantly send mail, and come to public schools in an effort to recruit kids.
Why is there no uproar about this after decades of it...?
For people who are genuinely interested in this, I'd highly recommend Steven Levy's book "Facebook: The Inside Story".
It's a fair account without a political bent (really hard to find on this topic) and he had a lot of access to FB leadership while writing it. It also gives a lot of historical context.
WRT the specific outreach metrics mentioned in the article - I'd bet the internal discussion is more nuanced. Making that public could make it harder for them to control abuse by making it easier for people to determine how to maximize reach with spammier hacks. This is already a problem without access to the data.
My personal opinion is this is a hard problem at scale and that Zuckerberg genuinely cares about it [0]. That there is some incentive mismatch given the issues around engagement driving ad revenue, but that the speech issues are more complicated and Zuck cares about how they leverage their power around issues of speech. My take on the Trump ban was less that he changed his position and more that the US no longer met the requirements for a hands-off approach after the insurrection.
Rather than rehash the stuff I wrote in the post, here's the relevant bit:
> "If there’s to be policy around political speech and social media, it should not be the responsibility of private companies to determine when to censor or not censor the speech from democratically elected politicians, operating in countries with rule of law and a free press.
> "There are a lot of conditions on that statement, but it’s because the conditions are relevant and important. The same standard cannot be automatically held for politicians in non-democratic countries, countries without rule of law, countries that suppress speech themselves, or countries without a free press."
...
> "It doesn’t mean they’re not doing anything wrong because they’ve long ago abandoned the chronological news feed in favor of an algorithmically sorted news feed focused on engagement. This is decidedly not neutral. Abuse of these engagement algorithms are what allowed spammers to leverage the viral nature of misinformation to enrich themselves. It’s also what gave credibility and reach to the Russian political interference. If Facebook wants to act as a neutral platform for speech, then they should be neutral. If they are elevating certain content algorithmically then they are acting as a publisher. This weakens their argument about being a neutral platform and makes them more responsible for what they choose to elevate on Facebook."
There's probably reasonable policy here - but it's not trivial.
> The company, blamed for everything from election interference to vaccine hesitancy, badly wants to rebuild trust with a skeptical public....
> These people, most of whom would speak only anonymously because they were not authorized to discuss internal conversations, said Facebook’s executives were more worried about fixing the perception that Facebook was amplifying harmful content than figuring out whether it actually was amplifying harmful content. Transparency, they said, ultimately took a back seat to image management.
I can think of few approaches that are worse than that at rebuilding trust. "Building trust" by ignoring the real issue and focusing on perceptions can only work (sometimes) if you can dominate someone's perceptions, and Facebook doesn't have the power to do that, especially since it has lost the public's trust and invited more scrutiny.
And that leads me to the conclusion that Facebook's executives may not even understand some basic things about trust, at least while they're in a sociopathic corporate setting.
The problem is defining harmful content. For some it's the China lab-leak theory; for others, CRT/anti-racist propaganda or the stolen election. Whatever FB censors, half the country will be mad at it. FB can't win here.
Personally, I think the "harmful content" concept is extremely dangerous when applied to social networks. Five years ago it was unquestionable that the USA had more press and thought freedom than India, Russia, or China; now the situation is much more nuanced and really depends on the topic being discussed. What it means for democracy in the US long term is a big question.
What a lot of people mean by "Harmful content" isn't opinions and discourse on whether something is good or bad. Harmful content is disinformation about what something is.
It's a concept that can be abused for certain, and I suppose that makes it dangerous.
Unfortunately, there are also dangers in not intervening.
Every space in which valuable discourse takes place has to take some form of order including restraint seriously, otherwise the bad drives out the good. And if you look at the places we take discourse most seriously -- institutions of learning, courtrooms, legislative bodies -- because valued outcomes rely on how robust the discourse is, more order and restraint requirements seem to become the rule. And yet there's also lots of thought put into how to allow as much input as possible, even outright adversarial input.
It's possible that at a certain scale, social media systems have to move from a laissez faire approach to some sort of similar balance. It'd probably be best if each took responsibility to decide what that is. Twitter's approach doesn't have to be the same as FBs, but everybody has some responsibility to try and make discourse as good as they can (at least, if the reason they value freedom of speech is because of the value of discourse, rather than as a personal indulgence).
> Whatever FB censors, half the country will be mad at it. FB can't win here.
People across the political spectrum are already mad at FB, and some have decided on a posture that lets them push the drama that FB is biased against them no matter what FB's actual policy is. FB has nothing to lose by making its best faith effort. And I'd like to think that no matter what someone's general political sensibilities are they can find a way to advocate for them under conditions where order and some degree of accountability for truthfulness is required of them.
> ... and some have decided on a posture that lets them push the drama that FB is biased against them no matter what FB's actual policy is.
"The only way to find the limits of what's possible is to push past them to the impossible." I forget who said it, but some people will push what they can get away with as far as possible, and then try to go farther. No matter where FB sets the line, they'll try to go past it, because they care about winning, not about civil discourse or reasonable standards or fairness or anything like that. (Rules are for the other side.) The fact that they can then play the martyr that FB is oppressing is a side benefit.
> Harmful content is disinformation about what something is.
This is the crux of the issue. Legacy institutions are used to determining what is true and therefore what is inside the bounds of allowable opinion.
This was their domain for decades but it can now be seriously challenged by relatively few people due to the amplification in sharing of human thought through the internet.
Their degradation was the only outcome likely from networking so many individuals together.
'Disinformation', even if you are inclined to trust legacy institutions, should be viewed with a healthy degree of skepticism, even if we grant that legacy institutions always had the closest approximation of reality over the past several decades (and this is generous).
Everyone should now be able to recognize these players for what they are: self-interested actors bending public perception of reality toward conclusions convenient for their own benefit. Given this, it is not surprising that people are more motivated than ever before to uncover past warpings of the narrative. They are searching for their historical moment.
Now, we can grant that it may be best to have these legacy institutions looking out for our interests rather than some unknowable future state of governance. But their past conduct is a textbook littered with dark chapters, now being read with the understanding that these are the rosier sections. Not a good look. As we move forward in time, the failings of existing governments will only become more apparent, not less, and it will continue to make less and less sense to believe in them.
The alternative to this would be to either restrict information flow to some pre-digital age or to perpetuate totalitarian regimes in charge of protecting their own existence by manipulating the masses into passivity.
And here is the second crux, it is only the legacy institutions from the 20th century who are perpetuating the conflict. Were the doubters given license to control their own territory, we have every indication that they would leave the past well enough alone and forge ahead. But the 20th century just cannot let go its dreams of total domination of thought and reality.
> And I'd like to think that no matter what someone's general political sensibilities are they can find a way to advocate for them under conditions where order and some degree of accountability for truthfulness is required of them.
The problem is that the sides don't even agree on the same facts anymore. For example, some think there was election fraud, and some don't. Some believe there is actual evidence of election fraud, and some don't. And neither side is without some evidence. For example, there have been election audits and those seem to show there was no fraud, so that's evidence. But at the same time, there are other auditors that show very convincing evidence that there was at least some additional voter fraud beyond what was detected in the audits.
If I recall correctly, there were some science experiments in the past that sort of "wiggled around" the truth for a period of decades as they got closer to the truth. With current events, it seems like it's a lot more like that than to just look outside the window and say it's raining or not.
For example, look at how certain views were banned on YouTube... until the CDC/FDA/etc. itself reversed its position and now those are the mainstream or at least acceptable views.
Isn't it usually the case that a difference in opinion ultimately is the result of a difference of belief about facts? And if that's the case, if a biased moderator of any platform can decide what the facts are, that is effectively the same as deciding what opinions are valid. And although there are some clear cases of ridiculous beliefs that we can think about (flat earth, etc.), very quickly we get into murky territory.
I mean, just imagine if Facebook was established in the deep south, and suppressed all non-Christian opinions as counter-factual? "The evidence is clear" they would say, "here in the Holy Bible".
> The problem is that the sides don't even agree on the same facts anymore.
That's certainly going to be the case at times, but that's also no reason to give up. Where substantial outcomes rely on facts, good institutions build ways of addressing contention about the facts themselves into the order they impose on discourse. It can't guarantee a correct outcome -- as you say, sometimes you have to wiggle around the truth for a while first -- but it's better than the nihilism of failing to engage the problem altogether.
> some think there was election fraud, and some don't.
The institutions where questions of election fraud were mediated were accessible to both sides in equal measure -- arguably biased toward the side that lost, given how the privilege of selecting judicial appointments has shaken out over the last two decades and who held the resources/power available to federal and various state executive authorities last fall. And they seem to have determined that evidence of systemic outcome-changing fraud was thin indeed.
I suppose it's possible to imagine a different outcome from a similarly robust process over time, but if there are "other auditors that show very convincing evidence" of outcome-changing fraud, it would be interesting to hear what that evidence specifically is, and why it didn't make its way into the venues that actually mattered at a moment when the outgoing administration had considerable advantages.
> But at the same time, there are other auditors that show very convincing evidence that there was at least some additional voter fraud beyond what was detected in the audits.
What evidence, exactly? IIRC, there were some professional-looking mathy analyses that claimed to find it, but they all had glaring methodological errors.
I think there are cases where "the sides don't even agree on the same facts anymore" and both have some claim to truth, but it's not on election fraud or the election results.
> Personally, I think the "harmful content" concept is extremely dangerous when applied to social networks.
It's a kludge. The real problem is broadcast technology becoming too easy to access (in the form of social media). It's reduced transmission friction too much, which has seriously undermined the ability of the "marketplace of ideas" to filter the good from the bad. There's a sweet spot between monopolization and total democratization of broadcast technology, and I don't think we're there anymore.
> Five years ago it was unquestionable that the USA had more press and thought freedom than India, Russia, or China; now the situation is much more nuanced and really depends on the topic being discussed.
Come on. You could only say such a thing if you're almost totally ignorant of the "press and thought freedom" situation in China. About the only thing you can say China is more permissive of than the US is outright racism, and that's only at the social level (you can still legally be a flaming racist in the US; it's just that many people won't want to associate with you, and many will remind you you're full of shit). I'm less familiar with Russia and India, but I highly doubt the situations there will salvage your statement.
This is actually a really insightful way to look at it. I gather you are saying Facebook should not be the one to do the "filtering," but also that it's not ideal for each individual to have to do it themselves. What's the solution?
The easiest and least problematic is to add friction, which in my fantasies would be to ban sites like Facebook and Twitter outright that provide too-easy access to ready-made audiences of millions. They're basically like handing out loaded guns to a class of first graders playing on a playground. Guns are fine and shouldn't be monopolized by one group or another, but they should also not be in the hands of untrained first graders on playgrounds.
A somewhat less radical form would be to kneecap the ability to share widely on social media sites: no public posts, sharing limited to direct connections (and a cap the number of those to 1000 or something), no features to easily re-share a post outside of its initial audience, get rid of groups, kick out organizations, etc. Bring back classic web forums for communities, and push people who want to build an audience back to blogs and personal websites.
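All of those limits are mechanically enforceable, which is part of the appeal. A toy sketch of the delivery check (the cap and every name here are mine, purely illustrative):

    MAX_CONNECTIONS = 1000  # hypothetical cap on direct connections

    def may_deliver(post_is_public, is_reshare, sender_connections, recipient):
        # Toy enforcement of the friction rules proposed above.
        if post_is_public:
            return False  # no public posts
        if is_reshare:
            return False  # no one-click re-sharing beyond the original audience
        if len(sender_connections) > MAX_CONNECTIONS:
            return False  # audience size is capped
        return recipient in sender_connections  # direct connections only

Note this is content-neutral: nothing in it looks at what the post actually says.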
>The easiest and least problematic is to add friction, which in my fantasies would be to ban sites like Facebook and Twitter outright that provide too-easy access to ready-made audiences of millions. They're basically like handing out loaded guns to a class of first graders playing on a playground. Guns are fine and shouldn't be monopolized by one group or another, but they should also not be in the hands of untrained first graders on playgrounds.
Friction, or training? In the example of guns, there are SCREENING (not exactly the same as friction) and education/training that many seem to agree are a good solution. Could the same be applied to social media?
>A somewhat less radical form would be to kneecap the ability to share widely on social media sites: no public posts, sharing limited to direct connections (and a cap the number of those to 1000 or something), no features to easily re-share a post outside of its initial audience, get rid of groups, kick out organizations, etc.
This is again just letting Facebook dictate the solution - not that much different than the unilateral censorship they do.
>Bring back classic web forums for communities, and push people who want to build an audience back to blogs and personal websites.
Seems like this is happening today, and might be a good approach.
> In the example of guns, there are SCREENING (not exactly the same as friction) and education/training that many seem to agree are a good solution. Could the same be applied to social media?
Not really. The analogy between guns and social media is not perfect. Training as a solution has special problems when applied to media access that it doesn't have with guns. Specifically, it would probably amount to some kind of indoctrination program, whereas gun training is a technical topic, like learning how to drive a car safely.
>> A somewhat less radical form would be to kneecap the ability to share widely on social media sites: no public posts, sharing limited to direct connections (and a cap the number of those to 1000 or something), no features to easily re-share a post outside of its initial audience, get rid of groups, kick out organizations, etc.
> This is again just letting Facebook dictate the solution - not that much different than the unilateral censorship they do.
Actually, that was me dictating a solution to Facebook that stopped short of shutting them down. I'm pretty sure they'd hate to follow it.
And the important difference is that everything I suggested is content neutral, so it can't accurately be called "censorship."
> Seems like this is happening today, and might be a good approach.
Yes, but if it's happening today, I'd bet money it's mostly people who can handle social media relatively well. The problematic people who probably shouldn't have access to a broadcast megaphone are likely still on Facebook and Twitter and are unlikely to leave.
Adding friction helps slow down the honest but gullible. But if you have an organized, professional disinformation campaign, they're willing to do the work to push against the friction.
I suppose they count on their gullible followers to multiply their efforts, and hindering that will have an effect, even against organized campaigns, and that's good. But I wonder whether it will make organized campaigns relatively more powerful. If so, that might not be a net win.
> "These people, most of whom would speak only anonymously because they were not authorized to discuss internal conversations, said Facebook’s executives were more worried about fixing the perception that Facebook was amplifying harmful content than figuring out whether it actually was amplifying harmful content."
I don't work at FB, but I suspect this is bullshit.
"Facebook is not a giant right-wing echo chamber.
But it does contain a giant right-wing echo chamber — a kind of AM talk radio built into the heart of Facebook’s news ecosystem, with a hyper-engaged audience of loyal partisans who love liking, sharing and clicking on posts from right-wing pages, many of which have gotten good at serving up Facebook-optimized outrage bait at a consistent clip."
And then there is the advertising profit that these enthusiasts of the absurd generate for the corporation: what former US President DJT called "the golden goose".
why is it anyone else’s business to see that data? why would they be entitled to? so they can whinge and ask fb to censor more content?
i could see an argument for saying they should provide users more data about their own posts. but not aggregate data about their platform that will just be used against them
What is also interesting is the motivation of the people wanting to use CrowdTangle, etc. to see the list of top posts.
My guess is that it's less about learning what the top post is, but more about using that information as a way to try to pressure Facebook to censor content they don't like.
> ...but more about using that information as a way to try to pressure Facebook to censor content they don't like.
That's far too simplistic of a take. Facebook and social media amplifying partisan shit-stirrers and misinformation over more sober voices should be seen as a serious issue, regardless of your ideological commitments.
In my town there is a schizophrenic lady with a lot of weird signs, some unreadable and some outright dangerous should anyone believe them. When she is around, she is definitely the most noticeable person.
Yet we don't need to censor her; people can decide for themselves how much they read into it.
The idea of mass-censoring mentally ill people likely only increases their hatred, confirms their beliefs, and, even worse, hides them from possible help.
It's not the misinformation that is the issue, IMO, but the people who just follow it without fact-checking.
More fundamentally than asking for censorship is holding Facebook accountable for what content they are promoting. They've been caught multiple times allowing Ben Shapiro's page to break the rules that other pages have to follow [0]. They also keep claiming that they don't favor right-leaning content. Maybe this should be regulated or not, but at least we can call them out for lying about it.