Hacker News

That deplatforming deprives the person being deplatformed of a platform is obvious, to the point that it's effectively a tautology. However, concluding that this means deplatforming is effective is extremely naive. When tech companies engage in acts of censorship like deplatforming, it causes many to lose trust in the impartiality of these platforms. So while individual people who get censored may see their audiences diminish, support for the views they espouse and distrust of the authority carrying out the censorship often increase. In case it wasn't clear, the lack of efficacy in deplatforming I referred to in my previous comment was in reference to attempts to curb ideas and political movements - not individuals within them.

Again, deplatforming gained traction in the early to mid 2010s. It coincides more or less directly with the rise of the Alt Right. Increases in deplatforming are correlated with support for the far right, not against it.



> That deplatforming deprives the person being deplatformed of a platform is obvious, to the point that it's effectively a tautology. However, concluding that this means deplatforming is effective is extremely naive. When tech companies engage in acts of censorship like deplatforming, it causes many to lose trust in the impartiality of these platforms. So while individual people who get censored may see their audiences diminish, support for the views they espouse and distrust of the authority carrying out the censorship often increase. In case it wasn't clear, the lack of efficacy in deplatforming I referred to in my previous comment was in reference to attempts to curb ideas and political movements - not individuals within them.

It didn't just impact the individuals, it reduced the behavior site-wide.

> Again, deplatforming gained traction in the early to mid 2010s. It coincides more or less directly with the rise of the Alt Right. Increases in deplatforming are correlated with support for the far right, not against it.

This is a pretty clear correlation/causation confusion. If factor A is on the rise and triggers reaction B, you cannot use the rise of B to prove that B caused A. Now, your narrative may be correct, but the story you've provided does not demonstrate it.


> It didn't just impact the individuals, it reduced the behavior site-wide.

And? Even saying it reduced the behavior site-wide is not an effective measurement from which to conclude that deplatforming works in reducing the prevalence of those views in society. Again, "Deplatforming X views from platform Y resulted in less of X on platform Y" is effectively stating the obvious. To demonstrate the effectiveness of deplatforming, one would have to determine whether deplatforming actually results in fewer people believing in the views that are being deplatformed. I have not encountered any instance of this occurring. Ask yourself this: when you are banned from a forum for views you believe in, or you witness someone banned for views you agree with, do you tend to turn around and agree with the censor? Or do you become more enthusiastic about that view and lose respect for the censor?

> This is a pretty clear correlation/causation confusion. If factor A is on the rise and triggers reaction B, you cannot use the rise of B to prove that B caused A. Now, your narrative may be correct, but the story you've provided does not demonstrate it.

We're seeing trust in media and tech companies plummet. While the fact that a rise in extremist views is correlated with increases in deplatforming is not hard evidence of causation, it's extremely difficult to claim that deplatforming works to reduce said views in the face of that positive correlation between the two. That's trying to claim a causal relationship in the face of evidence of the opposite correlation.

In other words, if we see A rise alongside B it is indeed jumping the gun to say that A certainly causes B. But it's even more dubious to say that A reduces B in the face of that correlation.


> I have not encountered any instance of this occurring.

How could you witness such a thing occurring? This seems like an unreasonable evidentiary standard.

> We're seeing trust in media and tech companies plummet. While the fact that a rise in extremist views is correlated with increases in deplatforming is not hard evidence of causation, it's extremely difficult to claim that deplatforming works to reduce said views in the face of that positive correlation between the two. That's trying to claim a causal relationship in the face of evidence of the opposite correlation.

Do HIV drugs cause HIV? Do civil rights movements cause racism? De-platforming is a treatment. Of course it's going to co-occur with the thing it's attempting to treat. This is evidence of nothing at all. What you need to do, and what the studies I reference did do, is examine individual communities pre and post treatment. That is how you start to get at causality. The analysis is imperfect, to be sure, but it's a lot better than looking at simple correlation.


> Do HIV drugs cause HIV?

How is immunodeficiency treatment at all related to deplatforming? Viruses aren't thinking human beings.

> Do civil rights movements cause racism?

The Civil Rights movement did not engage in deplatforming. Many of its members explicitly acknowledged that their opponents also deserved the ability to speak. It was often the Civil Rights movement itself that was subject to deplatforming.

> Of course it's going to co-occur with the thing it's attempting to treat. This is evidence of nothing at all.

The rise of deplatforming preceded the rise of the Alt Right by about a year or two. They didn't always co-occur; one preceded the other. Their continued co-occurrence suggests that the implementation of deplatforming either 1. has no effect on that brand of extremism, or 2. maybe even causes it.

This is not consistent with treating a disease. Usually, a disease is present sometime before treatment is administered. Then as treatment is administered the symptoms are reduced, if the treatment is successful. This is not what we are witnessing with the relationship between deplatforming and the brand of right wing extremism we've been seeing lately.

> What you need to do, and what the studies I reference did do, is examine individual communities pre and post treatment.

Limiting measurement to individual communities is not a good way to measure its overall effect. Again, pointing out that when a community deplatforms a certain view, that view is no longer present there is pointing out an obvious consequence. Of course the platform sees a reduction in the view that was deplatformed. That's basically just restating the definition of deplatforming: kicking a person or group off the platform.

If you want to measure the effect of deplatforming on society, then the analysis has to be society-wide. Otherwise one is effectively just building a bubble of the communities that do engage in deplatforming, and burying one's head in the sand with respect to its impact on the rest of society.


> How is immunodeficiency treatment at all related to deplatforming? Viruses aren't thinking human beings.

> The Civil Rights movement did not engage in deplatforming. Many of its members explicitly acknowledged that their opponents also deserved the ability to speak. It was often the Civil Rights movement itself that was subject to deplatforming.

My point is that the type of reasoning you used here would lead you to draw both of those conclusions.

> The rise of deplatforming preceded the rise of the Alt Right by about a year or two. They didn't always co-occur; one preceded the other. Their continued co-occurrence suggests that the implementation of deplatforming either 1. has no effect on that brand of extremism, or 2. maybe even causes it.

That's a claim with no factual basis. When did the alt right "rise"? Was it when Mencius Moldbug started writing Unqualified Reservations in 2007? When Richard Spencer joined the National Policy Institute in 2011? During Gamergate in 2014? Similarly, when did de-platforming "start"? Was it when people first started protesting The Bell Curve when it was published in 1994? Was it when the British National Union of Students adopted a no-platform policy?

The point is, neither of these phenomena has a well-defined starting point, so any claim of one preceding the other is silly and has no basis in fact.

> Limiting measurement to individual communities is not a good way to measure its overall effect. Again, pointing out that when a community deplatforms a certain view, that view is no longer present there is pointing out an obvious consequence. Of course the platform sees a reduction in the view that was deplatformed. That's basically just restating the definition of deplatforming: kicking a person or group off the platform.

Your objection suggests a specific causal model, though. You're right that kicking users off of the platform will tautologically reduce the content. However, what if you didn't kick people off the platform? What if instead, as in the example I linked, you banned the sub-communities dedicated to advocacy of the proscribed topics? The people stay, the community goes. Then, you look at the level of the material in other sub-communities on the site. That is what those studies did, and that is why they demonstrate causality.
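The study design described above can be sketched in a few lines. This is a hedged illustration, not the actual analysis: the community names and all counts below are invented, and the real studies used matched user cohorts and more careful controls. The point it demonstrates is that the unit of measurement is each *surviving* sub-community, compared before and after the ban, so the result is not tautological.

```python
# Sketch of a pre/post community-level comparison (hypothetical data).
# The non-tautological claim is that proscribed content declines in the
# communities that were NOT banned, after the dedicated community is removed.

def rate(flagged, total):
    """Fraction of posts matching the proscribed-content filter."""
    return flagged / total if total else 0.0

# (community, flagged_pre, total_pre, flagged_post, total_post) - invented numbers
communities = [
    ("sub_a", 120, 10_000, 40, 9_800),
    ("sub_b", 300, 25_000, 150, 26_000),
    ("sub_c", 15, 5_000, 18, 5_100),
]

def pre_post_deltas(rows):
    """Per-community change in content rate after the intervention."""
    return {
        name: rate(f_post, t_post) - rate(f_pre, t_pre)
        for name, f_pre, t_pre, f_post, t_post in rows
    }

deltas = pre_post_deltas(communities)
# A site-wide effect shows up as mostly negative deltas among the
# communities that were never banned.
print(sum(d < 0 for d in deltas.values()), "of", len(deltas), "declined")
```

This is exactly the shape of the objection and the rebuttal: a drop in the banned community itself proves nothing, but a drop in the untouched communities is evidence of a causal spillover on that site.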


One can argue about when these terms were initially coined. But we do have hard data on when they became prevalent in the public mind. Look at the Google Trends results for "deplatforming"[1], "no platforming"[2], and "alt right"[3]. "No platforming" had some blips starting in the late 2000s, but begins rising significantly in 2015, "deplatforming" in January of 2016, and "alt right" in August of 2016. There is evidence for the claim that deplatforming (or at least, widespread interest in deplatforming or "no platforming") preceded widespread interest in the alt-right.

> The people stay, the community goes. Then, you look at the level of the material in other sub-communities on the site. That is what those studies did, and that is why they demonstrate causality.

Yes, but as I have stated multiple times now, the key limitation here is that they only looked at the material on the same site. Site X bans Y (whether in full or only in some subforums). You observe a reduction of Y on the site. That's not evidence that this action reduced Y in society as a whole. There is a causal relationship between deplatforming and the reduction of the deplatformed view on said platform. Nobody is disagreeing with that - most people would likely read such a statement and think "no kidding, Sherlock".

For example, pointing to the fact that when Reddit banned racist subreddits, racist content on other subreddits was reduced is proof that banning racist subreddits reduced racist content on Reddit. This is not at all surprising, and is something most would call obvious. But to portray this as proof that banning racist subreddits reduces racist content in society as a whole is a very large misrepresentation. The study did not examine the impact on society as a whole - only the forum that carried out the deplatforming.

And again, I do not claim that the correlation between the rise of deplatforming and the rise of the alt right is irrefutable proof that the former causes the latter. But claiming that the former helps prevent the latter is not backed up by the evidence we do have.

1. https://trends.google.com/trends/explore?date=all&q=deplatfo...

2. https://trends.google.com/trends/explore?date=all&q=no%20pla...

3. https://trends.google.com/trends/explore?date=all&q=alt%20ri...


> One can argue about when these terms were initially coined. But we do have hard data on when they became prevalent in the public mind. Look at the Google Trends results for "deplatforming"[1], "no platforming"[2], and "alt right"[3]. "No platforming" had some blips starting in the late 2000s, but begins rising significantly in 2015, "deplatforming" in January of 2016, and "alt right" in August of 2016. There is evidence for the claim that deplatforming (or at least, widespread interest in deplatforming or "no platforming") preceded widespread interest in the alt-right.

The terms themselves don't seem particularly relevant. The idea of deplatforming people has been around and practiced for a while. The alt-right dates back to at least Gamergate, and its roots in neoreaction, TRP, MGTOW, /pol/, etc can be traced back much further. I don't think Google trends really proves much here.

> Yes, but as I have stated multiple times now, the key limitation here is that they only looked at the material on the same site. Site X bans Y (whether in full or only in some subforums). You observe a reduction of Y on the site. That's not evidence that this action reduced Y in society as a whole. There is a causal relationship between deplatforming and the reduction of the deplatformed view on said platform. Nobody is disagreeing with that - most people would likely read such a statement and think "no kidding, Sherlock".

It's not tautological that that should happen. Remember, they're looking at the prevalence of that view elsewhere. It's not at all obvious that it should be the case that when you ban the 'Fat People Hate' subreddit, fat-shaming content elsewhere on reddit decreases.

It would be very hard to prove this effect on general social sentiment even for a site as big as reddit, because society is so much larger. Facebook might be big enough to have a measurable effect on society writ large, but their policing mechanism, and the internal organizational structure of Facebook doesn't really lend itself to these sorts of experiments.

> For example, pointing to the fact that when Reddit banned racist subreddits, racist content on other subreddits was reduced is proof that banning racist subreddits reduced racist content on Reddit. This is not at all surprising, and is something most would call obvious. But to portray this as proof that banning racist subreddits reduces racist content in society as a whole is a very large misrepresentation.

It didn't just reduce the aggregate racist content on reddit. It reduced the aggregate racist content above and beyond the literal content that was removed. In other words, when they banned r/CoonTown, r/politics got less racist. That is not at all an obvious consequence.


Sure, it reduced toxic content "elsewhere", but that "elsewhere" is limited to the same space administered by the same authority. Banning /r/coontown may have made posters in /r/politics less toxic, likely because they witnessed the shift in moderation policies, and because racist users likely stopped using the service for posting racist content. But you're acting as though this means the content wasn't posted at all. For all we know, it was just displaced to 4chan, Gab, or somewhere else.

Again, I agree that banning racist subreddits led to a reduction of racist content across the board on Reddit. But you're treating this as proof that said bans reduced racist content in society as a whole, which is a baseless claim even with the aforementioned analysis of the impact on other subreddits.


I agree that it isn't absolute proof. It is possible that an effect like the one you described took place. But it isn't the only evidence. I'd direct you again to people like Alex Jones. I think it's extremely hard to argue that Alex Jones and his toxic brand of disinformation didn't benefit enormously from access to platforms like Facebook, Twitter, and Youtube. I think you'd be extremely hard pressed to argue that his reach has increased as a result of being de-platformed. You may be able to make the case that it has retrenched the support of his hardcore followers, but that is not the same thing as signal boosting his message in society at large.



