
> People should be held accountable for what they say on media platforms, and lies shouldn't be allowed.. but i can't see that even happening.

I've suggested to @dang that he try implementing a kind of experimental discussion mode like that right here on HN, where he could enable this mode for certain types of topics and see what effect it would have on the human mind.

I don't know all the particulars of what changes should be included in such a mode (that would be a good topic for a discussion), but the main one I would include is an additional guideline, something along the lines of: "Please exert some effort in restricting your statements to the discussion of reasonably conclusive true facts about physical reality."

This way, when people inevitably succumb to mistaking the virtual reality in their mind (where one has supernatural powers like omniscience, the ability to read minds at scale, predict the future with precise accuracy, completely understand infinitely complex indeterminate/chaotic systems, etc.) for physical reality (where we do not have these powers), such comments could be flagged and reviewed a few days later, when cooler heads prevail, in a group Post Incident Review process of some sort (maybe a Zoom meeting). There we could examine our behavior from a more metaphysical perspective, the goal being to increase awareness that inaccurate beliefs about reality are not something only members of our personal outgroups suffer from, but rather something we all suffer from. It is simply a consequence of the same base software we all run in our minds.
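
For concreteness, here's a rough Python sketch of what such a flag-and-review queue might look like. Every name and the cooldown period are invented for illustration - this is a sketch of the idea, not a description of anything HN actually has:

    # Hypothetical sketch of the flag-and-review flow described above.
    # All names (FlaggedComment, ReviewQueue, COOLDOWN_DAYS) are invented.
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta

    COOLDOWN_DAYS = 3  # assumed "cooler heads prevail" waiting period

    @dataclass
    class FlaggedComment:
        comment_id: int
        text: str
        flagged_at: datetime
        reason: str  # e.g. "asserts knowledge of others' motives"

    @dataclass
    class ReviewQueue:
        pending: list = field(default_factory=list)

        def flag(self, comment_id: int, text: str, reason: str) -> None:
            # Flagging is cheap and non-punitive; review comes later.
            self.pending.append(
                FlaggedComment(comment_id, text, datetime.utcnow(), reason))

        def due_for_review(self):
            # Only surface comments once the cooldown has elapsed, so the
            # group Post Incident Review happens after tempers settle.
            cutoff = datetime.utcnow() - timedelta(days=COOLDOWN_DAYS)
            return [c for c in self.pending if c.flagged_at <= cutoff]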

Unfortunately, this idea seems to be rather unpopular (shocking!) - so the beatings will continue until morale improves (or some variation of that), or until this never-ending process comes to its natural conclusion. Mother Nature is a cruel mistress.

https://humorinamerica.wordpress.com/2014/05/19/the-morpholo...

https://www.youtube.com/watch?v=3MHExbJGIQs

https://youtu.be/smX2UtdJFq8 (not recommended for filthy casuals)



Your proposed system seems mostly unaware that bias pervades all people and therefore all systems.

I don't think there's a way to system-design ourselves out of this situation.


That's the very point of such a system: to learn to recognize the symptoms of this problem, and how to rectify them. Also, note that bias is not a binary; it is a continuum, and our defaults can be overcome (see the third video linked above for an example).

> I don't think there's a way to system-design ourselves out of this situation.

I personally prefer to try and fail before declaring defeat. After all, how would one know that something isn't possible if no one ever tried it? If our ancestors had this attitude, we'd probably be enjoying a pleasant ride in our horses and buggies rather than having pointless arguments with people we don't know who live halfway around the world.


If our ancestors had this attitude, we might not have rage-builders being used daily by a third of living humans.

I don't believe all problems are unsolvable, but I've seen many systems fail because they do not take human nature into account.

Facebook and Twitter are probably the biggest such disasters I know about.

I realize your proposal's intent is to help people realize they have this problem.

I don't think it would do so very well.


> If our ancestors had this attitude, we might not have rage-builders being used daily by a third of living humans.

I agree. But we've got them now, so what will we do about it?

> I don't believe all problems are unsolvable, but I've seen many systems fail because they do not take human nature into account.

Me too. Let's say we did this and it didn't work - would this leave us in a worse state than before (excluding the development time involved)?

> I realize your proposal's intent is to help people realize they have this problem. I don't think it would do so very well.

You may be right! But how would we find out for sure?

Hypothetically: if the idea went to a vote (to try such an experiment here on HN), would you vote for or against, and why?


I'm not sure whether I'd vote for or against this proposal.

I would probably just ignore it entirely if it were experimental and opt-in.

If it were required, I would probably stop using HN.

Despite trying to be a careful systems thinker and programmer, in my heart of hearts I am a mystic and an artist.

I suspect those parts of my worldview would be largely dismissed as drivel by people who are confident that they know the truth, and that ideas which diverge from theirs, as mine do, aren't worth considering.


It's unpopular because it's naive. Nobody has the ability to prove everything to themselves and everyone else from first principles. Everyone takes shortcuts by repeating things they were told by others. Occasionally "lying" is the five-year-old type of lying, where someone makes something up on the spot and tries to avoid looking guilty, but those are not the meaningful or common kinds of lies. In reality, "lies" refers to the personal threshold someone has for edges in a sort of transitive PageRank over the graph of all claims made by all people.
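
(To make that metaphor concrete, a toy sketch in Python, assuming the networkx library and an invented claim graph; the threshold stands in for the "personal threshold" in question:)

    # Toy model of "lies" as a threshold over a transitive PageRank of
    # claims. The graph, edge meanings, and threshold are all invented.
    import networkx as nx

    claims = nx.DiGraph()
    # An edge A -> B means "claim A cites/endorses claim B".
    claims.add_edges_from([
        ("blog_post", "press_release"),
        ("news_article", "press_release"),
        ("press_release", "study"),
        ("fact_check", "study"),
    ])

    scores = nx.pagerank(claims)
    THRESHOLD = 0.2  # one person's personal credibility cutoff

    # Two people with different thresholds "believe" different claims,
    # without either of them making anything up.
    believed = {c for c, s in scores.items() if s >= THRESHOLD}
    print(believed)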

A good way to see this is by looking at what news organisations call "fact checks". Almost always, these are simply repeating the claims of some random academics or government institutions, which are taken to be true by default. A lot of people really do assign a very high prior to "member of the establishment saying something makes it true", but a whole lot of other people do not. If the latter dig in, discover the underlying claim seems false, and start saying so, then the news orgs will call it "disinformation" and the others will call them the "lying media" - and yet nobody is literally making things up maliciously, whether the claim is objectively true or false.


This comment seems like a fine example of the very phenomenon I am trying to draw people's attention to.

I will try to explain...

> It's unpopular because it's naive.

You have no way of knowing whether this is naive.

> Nobody has the ability to prove everything to themselves and everyone else from first principles.

By no means did I state this as a goal or requirement.

> Everyone takes shortcuts by repeating things they were told by others

Agreed. But my point is that people do not realize (in real time) that they do this - and this can have severe consequences.

> thus "lying" is occasionally the 5-year old type of lying where someone just makes something up on the spot and tries to avoid looking guilty, but these are not the meaningful or common types of lies.

I am not discussing lies (lying requires conscious intent) - I am discussing the mind's default inability to reliably distinguish between virtual reality and physical reality. This phenomenon can be observed in very high quantities on HN, but only in certain types of threads (culture-war topics) - on other topics (computing, physics, etc.), one will find very little flawed logic and few unfounded assertions (relative to other social media sites).

Even if one doesn't think this is worth worrying about, it should at least be considered potentially interesting, given the consequences of this phenomenon if it exists at scale. Take the climate change debate, for example, or 'masks for covid' as a simpler scenario: what are the ACTUAL reason(s) our societies can't sort this shit out?

Should we care about such things, or should we not? I'm getting very mixed messages on this from every social media site and organization I belong to. There seems to be a general consensus that we should care (as a boolean) - but on the degree to which we should care, it seems like some people are opposed to caring too much, at least when it extends into thinking deeply and precisely, based on first principles (free of axioms and premises) and sound epistemology.

If you are debugging a complex system, would you use skills like precise observation, logic, and systems thinking, or would you just make a few wild guesses and throw up your hands in defeat when your guesses turn out to be ineffective?

> A good way to see this is by looking at what news organisations call "fact checks". Almost always, these are simply repeating the claims of some random academics or government institutions, which are taken to be true by default.

Agreed. This is an example of how 'that which is not true' in physical reality can become "true" in virtual reality.

> A lot of people really do assign a very high prior to "member of the establishment saying something makes it true", but a whole lot of other people do not.

Subconscious Bayesian reasoning is often converted to binary when passed to the conscious mind (or so it seems).
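
(A toy numerical example of that conversion, with all numbers invented: a Bayesian update on "the claim is true" given "an establishment source asserted it", then collapsed to a yes/no.)

    # Toy numbers only - the prior and likelihoods are assumptions.
    prior = 0.5                # initial credence that the claim is true
    p_assert_if_true = 0.9     # sources usually assert true claims
    p_assert_if_false = 0.3    # but sometimes assert false ones

    # Bayes' rule: P(true | asserted)
    posterior = (p_assert_if_true * prior) / (
        p_assert_if_true * prior + p_assert_if_false * (1 - prior))
    print(round(posterior, 2))  # 0.75 - a graded, continuous credence

    # What (arguably) reaches conscious awareness: a hard binary.
    print(posterior > 0.5)      # True - the nuance is gone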

> If the latter dig in, discover the underlying claim seems false, and start saying so, then the news orgs will call it "disinformation" and the others will call them the "lying media" - and yet nobody is literally making things up maliciously, whether the claim is objectively true or false.

Mostly agree, except for the "nobody is literally making things up maliciously" part - sometimes people actually do make things up with "malicious" intent.



