All the social media sites did the research and learned that filter bubbles and outrage drive engagement more than anything else. If you tune your systems to maximize engagement and ignore the side effects, the side effects still happen, whether it's deliberate or not.
This is also true of TV, print, and other non-social media, and that seems under-discussed on HN.
TV has existed for longer and, I believe, started this downhill trajectory; perhaps it provided the 'tinder', if you will.
They too do it for the engagement/ratings.
Rupert Murdoch's empire is a prime example: it created an alternate reality solely because doing so made his media the #1 most-watched news network, print outlet, etc.
They've even argued in court themselves that any 'normal' person wouldn't take their 'news' seriously. Take Tucker Carlson's early videos as another example of this gross negligence; he's an actor saying whatever gets views, no matter what his actual beliefs are.
And making that distinction helps highlight that there are factors in how information is shaped and transmitted through networks of people that result in these outcomes.
Or, hear me out, requiring certain levels of transparency from content systems that have started to augment such fundamental constructs as human-to-human communication and information sharing.
There's nothing inherently wrong with automating content discovery; it's the cost function being optimized that I think we would almost all take issue with.
Agreed. Much of the problem I see is that people can fall down a rabbit hole of polarization without realizing it; no matter how far into the fringes the recommendation algorithm gets, it'll always feel like "oh everyone's saying this" to you as a viewer.
"What happens when you take a creature with a strong confirmation bias and feed it content specifically chosen for congruence with its particular bias?"
Or rather, they knew the answer, but knew that it was the best way to maximize engagement and thus profit.
There still needs to be scope for personal responsibility though. Blaming your own behaviours on the recommendation algorithm of YouTube etc. is just a cop-out.
It's a copout on an individual level, but the question of who's responsible is a lot less important than the question of what we're going to do about the problem. In the absence of a plan to make millions upon millions of partisans more individually responsible, we've gotta do something about the recommendation algorithms.
A cop sees a guy crawling around under a streetlight and asks him, "Sir, what are you doing?"
The man replies, "Well, officer, I dropped my car keys and I'm trying to find them."
The policeman offers to help, and they search fruitlessly for ten minutes.
Finally, the officer says, "Are you sure this is where you dropped them?"
"Oh, no, it's not. I dropped them way over there in the parking lot."
Dumbfounded, the cop says, "WHY are you looking over HERE?"
"Well, the light is better over here."
If it is well known human behaviour, is it their personal responsibility? Especially combined with the deliberate exploitation of said behaviour by corporations?
Rabbit holes are good for most kinds of content. I do want to go down a rabbit hole of restoration gardening videos. It's only politics and news which become a problem.
Reminds me of how, after the GFC, some of the rules that were brought in were around restricting automated real-time trading systems. This was, in some similar ways, a recognition that algorithmic automation can have extremely harmful side effects even when it successfully executes the goals of its owner. For the stock market: "once a certain threshold is breached, get me out of the market as quickly as possible" is exactly what I want as an individual, but for the overall market it is a disaster if everyone does it suddenly.
Indeed. And beyond blaming the algorithm: as we start using AI, the problem is that the algorithm is often no longer even understandable.
Tech companies should be able to explain and demonstrate the logic their systems use. These algorithms should probably be public. And any system which cannot be transparently explained should be shut down.
> These algorithms should probably be public. And any system which cannot be transparently explained should be shut down.
But what would even count as an explanation, and what does a "public" algorithm reveal?
There is no line of code that says "if (video.content == extreme) { show_to(EVERYONE) }".
A lot of the dangers are possible from a simple algorithm which merely performs A/B testing on whether a certain (randomly chosen, at first) video increases the amount of time a user spends on a site.
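To make that concrete, here is a minimal sketch of such a system (all names and numbers hypothetical): an epsilon-greedy A/B loop that promotes whichever candidate video most increases session time, with no notion of what any video actually contains.

```python
import random

def ab_test_recommender(candidates, measure_session_time, rounds=1000, epsilon=0.1):
    """Promote whichever video maximizes observed session time.
    Note: content is never inspected -- only engagement is measured."""
    avg_time = {v: 0.0 for v in candidates}
    shown = {v: 0 for v in candidates}
    for _ in range(rounds):
        if random.random() < epsilon or not any(shown.values()):
            video = random.choice(candidates)                    # explore: random pick
        else:
            video = max(candidates, key=lambda v: avg_time[v])   # exploit: current best
        t = measure_session_time(video)                          # observed minutes on site
        shown[video] += 1
        avg_time[video] += (t - avg_time[video]) / shown[video]  # running mean
    return max(candidates, key=lambda v: avg_time[v])
```

If outrage-bait happens to keep people on the site longer, this loop converges on outrage-bait even though no line of code ever mentions "extreme".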
You could pass a law against A/B testing, or require companies to provide deliberately bad suggestions to make their users frustrated, but I'm not sure if that is a proportionate legislative response.
That sort of gets to the heart of it - saying "it's impossible to explain my algorithm because it is so complicated" no longer cuts it. It's the same as if I said to the FDA "This drug works in our testing for the indicated purpose but we have no idea if there are side effects" - they aren't going to say "well, OK then, you can sell it". They'll say "come back when you have tested for all possible side effects and you can precisely define its behavior".
Currently Facebook, Google et al, will measure a narrow set of metrics to define the success of their algorithms (primarily relating to $$$) and then declare victory based on empirical optimisation of those metrics. They have to show that not only does the algorithm work for its stated purpose, it doesn't have undesirable side effects. They could do massive testing to demonstrate a huge range of side effects are not present. That would be similar to and probably as expensive as drug development. But an easier way to do that is to transparently explain what the algorithms do so that side effects can be predicted. If that is actually impossible for a given algorithm - well, maybe that algorithm shouldn't be used on the public at all.
If a company can't explain why a given user was shown a given video, at a given time, that system is faulty and should be prohibited. Which is to say, there's nothing particularly wrong with complicated rules and scoring systems making recommendations, but things like black box neural networks should not be able to be used for this stuff.
Let me give you an example of a good system: Spam filters not operated by Google. Algorithms decide whether or not mail gets through my corporate mail filter. It determines it based on a score, which is tallied from a set of rules. And you can drill down and see the score a spam email received, and then you can see the rules and factors that created that score. As a user, you can even generally see this, because the results are included in the message's headers in your inbox. And if you're the admin, you can then adjust those rules to fix errant behavior.
That's how a recommendation system should work. A system which can't be analyzed in that manner should not exist, and the rules and scoring applied to such systems should be disclosed in some sort of header.
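For illustration, a SpamAssassin-style scoring filter like the one described above could look roughly like this (rule names, points, and threshold are made up):

```python
# Hypothetical rule set: each rule is (name, points, predicate)
RULES = [
    ("ALL_CAPS_SUBJECT", 2.5, lambda m: m["subject"].isupper()),
    ("CONTAINS_UNSUBSCRIBE", -0.5, lambda m: "unsubscribe" in m["body"].lower()),
    ("MENTIONS_WIRE_TRANSFER", 3.0, lambda m: "wire transfer" in m["body"].lower()),
]
THRESHOLD = 4.0

def score_message(msg):
    """Return (total score, per-rule breakdown) so every decision is auditable."""
    hits = [(name, pts) for name, pts, pred in RULES if pred(msg)]
    return sum(p for _, p in hits), hits

def filter_message(msg):
    total, hits = score_message(msg)
    # Expose the reasoning in headers, so users and admins can see *why*
    msg["x-spam-report"] = ", ".join(f"{n}={p:+.1f}" for n, p in hits)
    msg["x-spam-flag"] = "YES" if total >= THRESHOLD else "NO"
    return msg
```

Every verdict carries its own explanation, and an admin can tune individual rule weights to fix errant behavior - exactly the transparency property being argued for.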
The spam filter works because only you can see the scores of the emails. If the rules of Google search and Youtube were more transparent, spammers would always be one step ahead of spam detection, so everyone would lose in the end.
This is false. Spammers can and do test their own emails against spam filters. And the less transparent ones, like Gmail's, often suffer longer failures to detect new strategies spammers are using.
There are enough unique filters that a spammer cannot hope to account for them all in a timely manner. Getting .001% of spam emails past Gmail's filters is much more efficient than getting 100% of emails past my personal Thunderbird spam filter.
That's a valid argument for not using Gmail, but that in the case of Youtube, almost everyone wants a single place where they can watch all the user-generated videos.
> almost everyone wants a single place where they can watch all the user-generated videos
I have my doubts this is true, so much as people are used to the relatively common controls/experience that comes with clicking a YouTube link. If I gave you a link to ocdtrekkietube.com/s8fsj2 (not a real link), and it worked as well as YouTube, I'm not sure any user would be particularly upset about it.
The biggest reason YouTube is as powerful as it is is that video hosting is expensive and few companies can eat that kind of bandwidth and storage without being an ad giant.
We're talking about Youtube's recommendation algorithms here, not their video hosting. Imagine if Youtube hosted the same videos, but provided no homepage, recommendations, or search engine. People would still gravitate towards a single source where they can watch all the content they want. There may be some more competition, such as a liberal recommendation engine, a conservative one, and one for teenagers. However, these websites/apps would likely face all the same problems that Youtube does today.
My iphone indexes photos and labels all the ones with dogs in it. I don't care how this system works and likely the answer is some black box NN. Why should this system be shut down when it causes no harm and works pretty well?
> Why should this system be shut down when it causes no harm and works pretty well?
You believe it causes no harm, but that doesn't make for a good assumption, considering image recognition NNs have been repeatedly demonstrated to be racist. So you may find its classification of dogs works really well, but it might label other people's coworkers or family members as dogs.
I still don't see why I should care, since it's only for my personal use. I agree that in other areas like law enforcement and news curation this is an issue, but for my personal photo search all I care about is that it works pretty well, which it does. Some mistakes are fine here.
Which is my main point, why should this apply to all tech when in some cases unexplainable mistakes are fine?
In many cases making a neural network explainable is literally impossible. Logic-based AI failed in the 90s, and only through neural nets have we gotten to human and superhuman level advances. If you want explainability, it's going to require giving up a lot of innovation that could help people. Especially when we ourselves can't explain our own behavior psychologically.
> In many cases making a neural network explainable is literally impossible.
Then it should be illegal to use on consumer websites. It's that simple.
> If you want explainability it's going to require giving up a lot of innovation that could help people.
Four people died in the US Capitol because of this "innovation". It's time to stop pretending technology doesn't cause more harm than good in these cases.
I don't like when people try to stop or slow down technological innovation. Technologies are just tools, a nuclear bomb could just as easily have been made into a nuclear power plant. But to fully restrict them shows a lack of foresight. Yes, recommender systems can be used for bad, as yesterday, but they can also be good.
It's likely unrestricted technological innovation will inevitably cause the end of the human species, and your own example demonstrates why: In our rush to create the nuclear bomb, several times we came close to obliterating all life on earth due to simple mistakes. Several times, the hesitation of one man was the only thing that prevented our planet's complete annihilation.
The nuclear bomb is exactly why we need to block "innovation" that causes more harm than good.
And yet nuclear energy is one of the most energy-dense sources we have, orders of magnitude above solar, wind, coal and so on. If we blocked progress, we'd have never discovered this.
The question is, how do you know something is bad before developing it? You oftentimes can't put the genie back in the bottle.
I would so love to have sliders on Youtube to be able to adjust bias filtering, and watch the suggested videos switch sides in real time.
This would probably get me to pay for the subscription.
Both FB and YouTube's top shared posts and videos are super right-leaning, though (YouTube has other massively popular videos, but they are about gaming or influencer stuff). What would the equivalent "liberal" viral posts be, and why aren't they nearly as popular as the right's?
My bet is that Facebook’s user demographic skews old and white, and that explains their posting. If you go somewhere like Reddit or TikTok you’ll see an entirely different story.
Elsewhere in this thread the point is made that the opposite of a viral conservative video isn’t a viral liberal video. It’s either a moderate video, or no video at all.
(This might be the very point you are making, rhetorically?)
Doesn't matter, assuming you allow people to pick who they follow. Try to look at the Twitter feed of somebody who disagrees with you on a topic you find important - most of the posts will be insults towards those who disagree with that person's POV.
Why can't there be a Giant Lever on the side that inputs a variable into the Magic Algorithms that you can adjust. Up down, left or right, whatever you want to call it. Why would you do this - even if people never touch it: because at least people KNOW there's a decision-tree at play here.
Also there MUST be some randomization or more generalized content introduced. And I'm not talking Trending Now or Hit Songs That Just Came Out. I'm looking at you, YouTube and Facebook. Sometimes I actually want new stuff and it keeps dragging me back into a virtual rabbit hole.
Probably worth distinguishing between Algorithms targeting individuals and Algorithms running an entire site. Yes HN is algorithmic but you and I see the same things.
We blame algorithms and optimization a lot here but that analysis always glosses over the fact that people pick their own sources and form their own bubbles.
Everyone’s Youtube subscriptions are an echo chamber of views they mostly agree with because people only click subscribe on such channels.
A recommendation engine working perfectly is going to show you lateral channels that might be more or less extreme of the last one. But that’s not the root of the problem.
You are being naive. I wanted to see what the impact of a 50 caliber looks like on iron plates. This is just a childish curiosity for things that go bang. Immediately I was bombarded with recommendations of barely concealed white supremacists, and bizarre conspiracy theorists. I know that if I clicked on even one of them, then my recommendations will be full of that crap. Similarly, watch one motivational video, and Youtube decides "oh! I see!! so that's what you really want. Here's a deluge of 'similar' crap."
What I want to know about is more coherent explanations of abstract math and some scientific experiments which I never got to see in school, in addition to comedy. That I watch something else once in a while is not really an indication of my identity or my Jungian shadow self.
> I was bombarded with recommendations of barely concealed white supremacists, and bizarre conspiracy theorists. I know that if I clicked on even one of them, then my recommendations will be full of that crap.
You were offered the recommendations, but you declined. It's still a matter of self-selection and self-exposure. How many recommendations would it take to turn you into a white supremacist? How many video viewings? If your answer is that no amount would, then the algorithms don't seem to be the real problem.
I did decline. But I have seen many of my parents' generation getting increasingly radical (I am not American, BTW. I am talking about India). They don't have the tools to understand what is going on. Their entire life was training to "read between the lines" of newspaper articles and journals. They are used to subtle lies and distortions, and they can catch that. Blatant outright virulent rhetoric - with the tone "this is some truth that must be told" - is ok in 1 or 2 videos. But the recommendation engine enmeshes them in that mire.
My mother is a PhD. I asked her whether she will accept a paper without sound academic bibliography. She said no. Then why would she believe in extremist videos with shady citations and outright lies? But she does. I think it is because of the recommendation engines which keep feeding similar videos. Just like Goebbels' maxim of repeating a lie often enough and loudly enough to make it the truth.
We can keep saying that our algorithms are fine. But if what we see today has never been seen in history, I think it is high time that we re-evaluate our current beliefs about the impact and the correctness of our work.
> I asked her whether she will accept a paper without sound academic bibliography. She said no. Then why would she believe in extremist videos with shady citations and outright lies?
This is very common. I would even say the norm. No person is fully rational. Rather, people may think rationally about certain things — their "speciality", perhaps — but not about others. Everyone is irrational in some ways.
> But if what we see today has never been seen in history
Which part do you think is historically unprecedented? The technology of course is unprecedented, but I don't see anything about the beliefs of people today that's unprecedented.
There was once a time when newspapers were never before seen in history. Just as your parents' generation learnt how to detect misinformation in newspapers, they and new generations will learn to detect misinformation in new forms of media. People adapt, we always have. I'm sure when your parents were young or during the early days of journalism, there were problems and misunderstandings when reading newspapers. It didn't really warrant rushing out to end freedom of the press or calling to halt newspapers.
Just because you share more things in common with those you hate doesn't mean the recommendation is broken.
If a large percentage of people who watch gun-blowing-up videos are also interested in the videos you mention, then why wouldn't they show them (I assume if they were breaking guidelines they would be removed)?
The recommendation should be based on what a large fraction of my time is spent on (averaged over my entire watch history), rather than on what a large fraction of people who watched my last video went on to watch. Why does one video entirely screw up my historical record? I can't remember being recommended a single 3Blue1Brown video in the last year, despite being subscribed to him. It's bizarre.
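The weighting scheme being proposed here could be sketched as something like the following (topic names and the half-life are hypothetical): profile the user by total decayed watch time per topic, so one stray video barely moves the profile.

```python
from collections import defaultdict

def topic_weights(history, half_life_days=30):
    """Weight each topic by total minutes watched, with exponential decay
    over recency, so a single recent video can't dominate a long history.
    history: iterable of (days_ago, topic, minutes_watched) tuples."""
    weights = defaultdict(float)
    for days_ago, topic, minutes in history:
        weights[topic] += minutes * 0.5 ** (days_ago / half_life_days)
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}
```

Under this scheme, a year of math videos swamps one curiosity click on a gun video, which is the opposite of the last-video-dominates behavior being complained about.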
Pretty much. I watch some pretty left-wing political commentators on YouTube along with math/cs/physics videos and some geostrategy videos, and all it takes is for me to watch one gun video out of curiosity, or one Joe Rogan video, and leave YouTube playing, to come back to some extremely questionable content. There's a sinkhole effect somewhere here.
IMO autoplay for anything but music is one of the worst decisions ever made on online streaming services.
Kind of reminds me how repeatedly clicking the first link on Wikipedia articles inevitably leads to the article about Philosophy, but diverging instead of converging.
I think algorithms have more of an effect than you give them credit. Anecdotally on the occasion I've been linked to or suggested a Jordan Peterson/Moon Landing conspiracy videos on Youtube, after watching just one my feed is absolutely packed with that type of content for up to a week.
It's fairly easy to see how this can lead people who did not already hold those viewpoints down a rabbit hole where they end up far more radicalised than they would have otherwise been, especially since I've not once seen Youtube put any sort of "rebuttal" videos in amongst the "illuminati aliens are controlling your government" ones.
My (logged out) YouTube feed has mostly cooking shows, programming videos, stuff about crafts and watchmaking etc, because those are mostly what I like to watch.
I’m also interested in guns. The moment I watch a gun video, I immediately get shown Ben Shapiro, Jordan Peterson, and The Blaze instead of all the cooking videos etc.
And yet none of the gun videos I watch are remotely political. They are exclusively about sports and history, and don’t even talk about gun politics, let alone politics in general.
I watch lots of videos on guns, cars, metalworking and history with the former two mostly geared toward history and manufacturing.
I never see (amateur) political talking heads recommended. It's all trash pop-history talking heads, semi-trash documentaries and low brow entertainment related to cars and guns (e.g. demo ranch and whistlin diesel).
I recently (like yesterday) watched a semi-political talking head discuss the economics of OnlyFans after a friend linked to that particular person's analysis so it'll be interesting to see if the algo tries to drag my content toward more talking heads.
Interesting. I imagine there is more to it than just the videos.
E.g. if you live in a generally pro-gun area, I think the algorithm would probably be less likely to assume that it's worth showing you political content.
I guess it's obvious that the algo must work for a large enough number of people and for its optimization function.
You may simply be that % of the false positive/false negative that is a cost of doing business.
I mean - the alternative to this is having Google employ all the out of work journalists and editors and manually curate lists for people. I think the cost of losing a few people is vastly lower than hiring that many people, in each culture and country around the world.
I've had mostly the same experience with youtube. I wanted to get into woodworking during COVID lockdowns, so I was watching a lot of popular woodworking channels that had nothing to do with politics, and was frequently shown Trump ads. Then once I realised how expensive woodworking would be, I started watching videos about game development. After ~1 week, all the Trump ads disappeared, and I started getting Biden ads.
> The moment I watch a gun video, I immediately get shown Ben Shapiro, Jordan Peterson, and The Blaze instead of all the cooking videos etc.
I know recommendations are based on many factors that differ between us (such as location), but I can't reproduce that behavior. If I watch hickok45 or Paul Harrell in an incognito window, I get recommendations for more of their videos along with a few from Demolition Ranch, Forgotten Weapons, and other gun channels. I see no political videos in the recommendations.
Interesting - well, those are the kinds of people I watch.
I’m using AppleTV, but not logged in. It’s possible that it takes some time for that to happen.
Also, I watched a Jerry Miculek Video yesterday, and got no political stuff, so it’s also possible that the algorithm has been improved or they have specifically acted to break this association.
> And yet none of the gun videos I watch are remotely political.
Sure, but the assumption that you _probably_ hold conservative beliefs due to having an interest in guns is right way more often than not. The same is true in the opposite direction if you watched some video on abortion rights.
“to assume that you _probably_ hold conservative beliefs due to having an interest in guns is way more often the case than not“
Only in the crudest statistical sense. For example the fastest growing demographic of gun owners is black women, who are overwhelmingly Democratic voters.
And that’s exactly what the complaint about these algorithms is.
That these algorithms are crude statistical approximators of hidden latent biases in the training data? That's their entire method of functioning! That's the entire process of a neural network. It seems you're wanting something that can predict the future, which is currently outside of our matrix multiplication skills.
During elections, before I stopped using facebook, I used to try to follow everybody I could from every side of the political spectrum to try to get a more balanced feed. My goodness, my feed was immediately full of insane conspiracy theories, white nationalist group posts, communist posts, nothing but stories of subjugation and oppression of everybody from every side. The choice I made was to try to broaden my bubble, but what I got was insane
Its worth noting that the companies that decided to not do this are not huge megacorps, so there is an argument to be made that anybody who isn't aggressively chasing engagement just can't get a seat at the table anymore. Am I mistaken about that? This sort of makes it a systemic problem, not necessarily a problem that a company leader can solve. Eg, not even Google+ with all its resources was able to dislodge Facebook.
Exactly. It's why we outlawed underage drinking and hard drugs. Once you get a taste, you can't say no even if you want to. The problem is not humans and human nature. The problem is that social media exploits human nature in a new way. It's not going away without regulation, in the same way we have regulated countless other human ills.
Most media companies under capitalism end up acting like a paperclip maximizing AI[1]. They will eat the entire planet to get a few more eyeballs because no one taught them not to.
In communism anyone who opposes the state is disappeared.
In capitalism anyone who opposes the corporation is disappeared.
You end up needing a mix of both. An independent news media with an explicit obligation to the truth, one that they can get in trouble for violating. We had that obligation for a long time through the middle of the 20th century. News organizations were well regarded; even if they didn't always make the right calls, they tried their best.
But then Rupert Murdoch realized that you could simply pretend to be one of those respected parties and lie to the viewers constantly, and there would be no consequences. The obligation to the truth turned out to be a gentleman's agreement, and there were no truth police breaking down your door when you told lies. That's when we discovered that the media is like the Prisoner's Dilemma. Fox News discovered that as long as everybody else was beholden to the facts, they could lie repeatedly and constantly win the game. They've only fallen from the very tippy top of the ratings in recent years as other news organizations like OAN have discovered that the bigger the lie, the bigger the ratings.
Fast forward to today and respect for the independent media (the all important 4th branch of government) is at an all time low and we have completely indoctrinated delusional people storming the Capitol building.
Well said. You answered that sibling comment that I refused to answer.
I am not a fan of the specifics of the fairness doctrine, but I believe the current state in which media companies can freely and knowingly lie as long as they don't stray too far into defamation is not tenable.
If you study political philosophy (e.g. Plato, Hobbes, Locke etc..) and read stuff like Orwell's Animal Farm, I think it becomes pretty evident that it ultimately won't matter how you structure society, whether that's a mix (of communism/capitalism), or having all of one or the other. Just using a simple thought experiment, you can show that the sheer existence of two or more people, can potentially give rise to situations where inequality becomes inevitable. In which case, leads to Hobbes's arguments about the state of nature.
None, as you well know. But it has been the result when self-proclaimed communists succeed in taking control of a state.
The well-known phrase "it wasn't real communism" comes to mind because it applies and is true, since of course these results have never followed to the letter Marx's doctrine and intentions. But given the pattern of authoritarian states that follow every attempt at communism it is logical to conclude that the plan as stated simply does not survive in any desirable fashion once it starts being followed by real people to organize real people.
Capitalism and Republicanism (and no, for some people in the US that need the clarification: I certainly don't mean the party) as perfect plans also fail allowing a lot of evil to flourish, but their failure modes have performed much better in the long run than everything else so far. You can pinpoint any flaws you want, but you can't argue the results as there is no real counterexample with universally better ones.
Constitutional monarchies, especially the more successful ones, are in practice mostly indistinguishable from a republic. The powerless diplomatic figure that is the monarch does not impact the overall decision making in any meaningful way.
Sweden and Canada fit that definition as well, and are not socialist countries according to what Marx meant by socialism. They're very much countries with a foundation of ownership rights and private enterprise, but taxed to a certain extent as to fund extensive government welfare programs. Of course, I'm not unaware that socialism is a word used loosely these days but within the context of discussing communism it's important to drive home the difference.
And of course, as someone that doesn't belong to any of the mentioned countries, I would say that the sentiment that a government like Sweden's or Canada's is better to live in than (pardon if I'm reaching here) the country that I believe you had in mind when making the comparison, is not only not universally accepted but very heavily debated across the world.
Constitutional monarchies tend to have a symbolic figurehead for the public to emotionally invest themselves in, but who is restrained from exploiting that public affection for political gain. I think this is valuable.
It is a major difference though, since they could do something if there's some large issue.
In a republic, the person with that power is also the person usually making the decisions, so when those decisions are bad, there isn't another party with the power to stop them.
In Canada at least(Canadian here) the Governor general does step in at times to prorogue or dismiss the government.
Socialism lost all proper meaning in the McCarthy era, but Canada and Sweden are specifically not capitalist in the way that capitalists say free markets are always better. The US is more capitalist than both those countries, and is worse off for it.
Certainly, if you consider it hotly debated, you can't say that capitalist republicanism is the least bad, since the alternatives are equivalent.
I think Capitalism is the failure of society. Where I live, I have to pay someone to be able to sleep inside, to eat, to travel, to communicate, to access information. And the only way to make those payments is to obey one person or another who gets power that trickled down from the top.
Socialism is nothing but the attempt to fix these things by organizing like minded good people to be social. It might not always work, and nothing can last forever, but to the extent it does, I think that is the measure of good over evil in society.
A capitalist republic is the organization of like minded authoritarians who maintain a class of people like me to use as labor and manage via economics and trade.
As I see it socialism is the only sincere attempt at a fair society and republican capitalism is just authoritarianism with a delegation-style of management. IE: federal bank pays a corporation that bids on a job. corporation pays producers. producers pay servants. and on down the pyramid. And all the way down, there is no accountability to any good or fairness. In fact, the opposite is protected, because the way that an authority wants to treat their subordinates is considered human rights in capitalism. they call it privacy. corruption is the system. Pursuit of happiness (a euphemism for greed) is good.
I think fair access to natural resources without needing permission is a human right. I believe that is called socialism. I don't demand service or subordination from anyone, just fairness, by which I mean equal, or at least a minimum basic, access to land or material resources to live. Capitalists demand service from me and respect for their perspective that I don't own resources and they do, since they have exploited and conquered more people than I have, which they call fairness.
The state = The status. It doesn't disappear people. Capitalism and crime disappear people. This just means the state is poor/capitalist.
An elected body is not the state. It is just an organizing scheme for people to communicate and participate in the kind of economy where 1 person = 1 vote and no one is allowed to gain more or less influence than that except by true social means aka popularity.
Your argument reminds me of people arguing against the scientific method because most or all scientists make mistakes. It just means you need better science. Science isn't a guarantee, but it is an ideal method of determining objective truth. Capitalism is the false utopia, with invisible hand and other superstitions. Socialism makes no such promise. It doesn't promise equality of outcomes. It is just a method for fairness. It is democracy that has not been corrupted by respect for unaccountable accumulation of private control or power.
It doesn't sound like you want to engage in this discussion in good faith if you are treating it as a binary choice between capitalism and communism. Any radical extreme is going to bad.
EDIT: My views largely align with what jandrese said here[1].