What We Lose When We Eliminate Controversial Content

No, you're trying to ignore that deplatforming works, and the information goes directly against the idea that they will "always find another vector". You talk about the ethics of deplatforming, but there's plenty of ethical conundrums about platforming bad actors. I prefer to keep my platform relatively safe and welcoming, which would mean more moderation so that the most vulnerable feel welcome. You may differ and that's your choice. But the ethical questions cut both ways, and I feel pretty comfortable about where I stand on it.
On the contrary, I'm not ignoring that deplatforming gets results in terms of silencing people and controlling the flow of information; that's how it "works." There are many unethical ways to prevent people from speaking and stop people from finding out things that you don't want them to know about, at least for a time. But we're talking about virtue, insofar as what the right thing to do is, and how to achieve a "better" (i.e. more just, more inclusive, more diverse, etc.) society, and by extension, world. In that regard, what you're proposing is counterproductive, as it asserts that you should try and stifle dissent to create a more open society.

While there may be certain ethical conundrums with allowing bad actors to say their piece, those conundrums themselves speak to the correctness of allowing people to talk, since that's how you seek out answers for them. Otherwise, you're just saying, "ignore the conundrums, and never listen to anyone who doesn't agree with the party line." That's not a stance that's historically been adopted by people I'd want to be associated with.
What do you think we're talking about? Because I'm talking about bigotry and such. When you're talking about engaging Nazis, I don't see a reason to platform their ideas and views. It's counterproductive and giving them big outlets to espouse those views is generally way more harmful because you give them a bullhorn to do so. Again, imagine if every time we had to talk about racial justice we had to debate a Nazi on-air. That's not constructive, that's actively destructive to the discussion because we are ceding easily-drawn boundaries as to what is acceptable and what isn't.
Again, this treats odious ideas as though they were a contagion, rather than something which can be engaged with and demonstrated to be lacking in virtue. Bigotry (which while often unambiguous nevertheless has many gray areas where reasonable people can disagree what falls under that label and what doesn't) is fairly easy to knock down when you confront it and show how it lacks virtue. The idea that it needs to be hidden away lest people find out about it serves only to empower it, creating enclaves which reinforce the people in them, and serve as alternative outlets for people who, in the course of seeking answers, look in alternative venues.

The idea that "if people hear their message, they'll be swayed" gives those ideas more power than they have, and certainly more than they deserve.
I mean, you can. We don't need to have a debate why slurs are unacceptable every time someone decides to use them, just as we don't need to engage them in a good faith discussion to convince them they are wrong. In fact, we know this doesn't work, which is why we have something called the backfire effect.
I don't know about "need," but doing so is the surest route to showing them why they shouldn't do that, as opposed to attacking them and making them harden their position.

As for the backfire effect, if you follow the link that you posted, it says the following:

However, subsequent research has since failed to replicate findings supporting the backfire effect.

Which goes to show that this idea really is a phantom menace, one even more awful than the movie of the same name. :p
Again, this is not a "discussion" tactic. It's a "deprogramming" tactic. These are very different and require very different approaches. The latter is not a workable approach to content and platform moderation, and really needs to come from outside those discussions first.
No, I think you're making a distinction where there isn't one; the example I posted earlier was very much with regard to a person who, by their own admission, approached people in settings where they felt comfortable, got to know them, and debated beliefs without it being acrimonious. People can and do change their minds, if they don't feel like they're being attacked, talked down to, etc. If the goal is to wipe out odious beliefs, then that's the way to go, since it works better than anything.
Also it doesn't increase the reach of odious people, as we have dozens of cases against this: removing people from platforms decreases their reach almost every time. It's only when you replatform them that they can regain their reach. Alex Jones and the other examples in the post I link go directly against what you say and you can't provide any evidence to counter.
You're talking about individuals, not beliefs. Taking away a single person's platform only serves to galvanize those who see someone being punished for having a dissenting opinion, effectively turning them into a martyr. That's the wrong way to go about it, since it only serves to reiterate a siege mentality. And believe it or not, there is evidence to the contrary even where Alex Jones is concerned:


As that article notes: "Initially, a round of media coverage touted flagging traffic to Jones' websites as evidence that "deplatforming works." However, revelations from Jones' defamation trials may point to the existence of a rarified class of extreme internet personalities who are better shielded from efforts to stem the reach of their content."
I mean, that's not what "engagement" is. Again, deprogramming is very different than "engagement" and "discussion". Deplatforming works, and I've provided data that shows that it cuts down on toxicity. I can continue to show data in this regard, but at this point I think you're not really engaging with the premise.
You do realize that the subtitle to the link you posted says "The research suggests that banning certain users on Twitter and Reddit does help cut down hateful content – but it raises other concerns." That's after that same outlet previously pointed out that deplatforming wouldn't stop that same group: "Even if the social network's new policies work perfectly, Q followers can still camouflage their activity or move to other platforms."
It does. It really, really does. Like right now we are in a discussion, and I'm engaging you. I'm not trying to deprogram you. That is what you think engagement is, but it is not: deprogramming someone is not just talking about a subject, but a long process of pulling someone away from the edge. Your view of how people change their minds does not really reflect what we know about how people react to their beliefs being challenged, and it even ignores the article you posted yourself, where the person did not debate them on topics immediately but found inroads to form a relationship to bring down their belief system. That is not something that you do discussing a topic on a messageboard; in fact, the lack of personal investment makes it almost completely alien in that regard.
Which is why this isn't deprogramming, and talking to people with beliefs that you disagree with isn't deprogramming, and what Daryl Davis did isn't deprogramming. You introduced that element into the conversation, and so pointing out how it's not relevant to what's happening here doesn't serve to advance any particular point in that regard. Talking to people is how you start to bring them back from the edge. That's what all of the articles I've posted have said, and what even some of the articles you've posted have said, such as when you posted a link to the backfire effect, only for it to say that the effect isn't supported by research.

Talking to people, without it being acrimonious or confrontational, is how you get them to change their minds about something. It's not the whole of it, and it isn't immediate, but it's the most surefire way to get them to stop subscribing to an odious belief. Deplatforming just reinforces it, which is what you don't want to do, no matter how good it might feel at the time.
Sure, but Lincoln didn't win the war with just words, and similarly Lincoln did not compromise on the 13th Amendment. Hell, Lincoln suspended habeas corpus in some areas. The ideal and the reality are very different things.
Not in this case, they aren't. If someone is actually shooting at you, then yes, you shoot back. But we're talking about the merit of debating ideas, which are won with words.
And we also don't need to brook bad ideas in our discussions. We don't need to debate Nazis, racists, xenophobes, climate denialists, etc, every time they show their faces. It would grind every discussion we have to a halt. Having a common, established ground for discourse allows us to have fruitful discussions instead of endlessly litigating meaningless tangents.
They're not "our" discussions. They belong to everyone, and others have just as much right to decide what they'll brook as we do. Talking down to someone else about how they're not welcome to share what they believe when everyone else is doesn't serve to change their minds, and runs the risk of making them look sympathetic, even if you then label them as a bigot or some other pejorative. Having an established ground for discourse only works if everyone agrees to the establishment, and if people don't they shouldn't be shown the door for questioning something; that way lies orthodoxy.
 

There is finite time. Every second spent doing what you propose is a second not spent feeding the hungry, working with youth, counseling the people who, under your plan, can't be on social media without being victims of bigotry, etc... or living one's life.


I'm glad that some people choose to try and convince the bigots and the like of the error of their ways by engaging them in conversation. I sometimes wish they'd spend more time doing that and less time trying to convince us we should put up with it literally everywhere while we argue back. But then again, all of us need to unwind by discussing things we care about. ;-)
 

It's not just bigotry, but bad ideas (i.e. ideas that, if allowed to grow and proliferate unchecked, end up harming society; or at least that's how I define "bad ideas") in general. I'm of the opinion that deplatforming and otherwise trying to quarantine bad ideas and the people who believe in them is itself a bad idea, one that has good intentions but ends up strengthening that which it seeks to suppress, and that the evidence supports my view. Hence, I feel a moral impetus to speak out against it when I see it put forward unchallenged.

But otherwise, I agree with most of what you said. It's up to everyone to decide whether participating in a discussion is a worthwhile endeavor or not, and no one should begrudge them their choices in that regard.
 

On the contrary, I'm not ignoring that deplatforming gets results in terms of silencing people and controlling the flow of information; that's how it "works." There are many unethical ways to prevent people from speaking and stop people from finding out things that you don't want them to know about, at least for a time. But we're talking about virtue, insofar as what the right thing to do is, and how to achieve a "better" (i.e. more just, more inclusive, more diverse, etc.) society, and by extension, world. In that regard, what you're proposing is counterproductive, as it asserts that you should try and stifle dissent to create a more open society.

Is it unethical to make your platform safer for vulnerable people? To make it less racist, less toxic, less radical?

My version isn't counterproductive: it's the version that works. Moderation does that.

While there may be certain ethical conundrums with allowing bad actors to say their piece, those conundrums themselves speak to the correctness of allowing people to talk, since that's how you seek out answers for them. Otherwise, you're just saying, "ignore the conundrums, and never listen to anyone who doesn't agree with the party line." That's not a stance that's historically been adopted by people I'd want to be associated with.

No, and if you're going to try and strawman my argument I'll just step out. It's pretty damn dishonest to act like I'm saying "ignore the conundrums". Rather, I think you are far more concerned with having debates with racists, while I'm more concerned with having an environment where the vulnerable don't constantly have to be confronted with the idea that they don't matter or shouldn't exist.

Again, this treats odious ideas as though they were a contagion, rather than something which can be engaged with and demonstrated to be lacking in virtue. Bigotry (which while often unambiguous nevertheless has many gray areas where reasonable people can disagree what falls under that label and what doesn't) is fairly easy to knock down when you confront it and show how it lacks virtue. The idea that it needs to be hidden away lest people find out about it serves only to empower it, creating enclaves which reinforce the people in them, and serve as alternative outlets for people who, in the course of seeking answers, look in alternative venues.

The idea that "if people hear their message, they'll be swayed" gives those ideas more power than they have, and certainly more than they deserve.

Those ideas do have power, often power beyond facts themselves. If it were only up to facts, then so many of our problems and disagreements wouldn't exist. The problem is that people are often swayed by bad-faith arguments, particularly ones that they are conditioned to hear. The idea that misinformation can simply be combated through engagement is just laughable on its face, even more so given the time in which we live.

I don't know about "need," but doing so is the surest route to showing them why they shouldn't do that, as opposed to attacking them and making them harden their position.

They can harden it even without it. Again, you're arguing from a standpoint that we don't need moderation, it'll all take care of itself. But we know what that looks like: it's just the Chans. If you want that, fine, more power to you. But that's what it is.

As for the backfire effect, if you follow the link that you posted, it says the following:

However, subsequent research has since failed to replicate findings supporting the backfire effect.

Which goes to show that this idea really is a phantom menace, one even more awful than the movie of the same name. :p

Yes, but I'll point out that these people aren't arguing for platforming people, either.

No, I think you're making a distinction where there isn't one; the example I posted earlier was very much with regard to a person who, by their own admission, approached people in settings where they felt comfortable, got to know them, and debated beliefs without it being acrimonious. People can and do change their minds, if they don't feel like they're being attacked, talked down to, etc. If the goal is to wipe out odious beliefs, then that's the way to go, since it works better than anything.

Yes, but that's an absolutely moronic way to run a platform and a completely impossible way to moderate content. You can't just become friends with and slowly change a racist person every time you try to have a discussion with them on a topic. This idea that we can talk it out misses that this is not a concept that can be applied to something like a message board.

You're talking about individuals, not beliefs. Taking away a single person's platform only serves to galvanize those who see someone being punished for having a dissenting opinion, effectively turning them into a martyr. That's the wrong way to go about it, since it only serves to reiterate a siege mentality. And believe it or not, there is evidence to the contrary even where Alex Jones is concerned:


As that article notes: "Initially, a round of media coverage touted flagging traffic to Jones' websites as evidence that "deplatforming works." However, revelations from Jones' defamation trials may point to the existence of a rarified class of extreme internet personalities who are better shielded from efforts to stem the reach of their content."

For the love of God, read your own article:

"Jones' resilience is more of an exception than the rule, says Squire. She points to the case of Andrew Anglin, founder of the neo-Nazi website The Daily Stormer. Following the violent 2017 Unite the Right rally in Charlottesville, Va., he lost his web domain and has had to cycle through 14 more, losing traffic each time. Squire says Anglin is on the run from various lawsuits, which include a ruling that he owes $14 million in damages for terrorizing a Jewish woman and her family."

Jones being deplatformed really affected his reach, even though others have given him something of a platform (Joe Rogan springs to mind). You can look at the other ones from my own article to see how well it works. It's pretty easy. Deplatforming does work; it's just not a panacea.

You do realize that the subtitle to the link you posted says "The research suggests that banning certain users on Twitter and Reddit does help cut down hateful content – but it raises other concerns." That's after that same outlet previously pointed out that deplatforming wouldn't stop that same group: "Even if the social network's new policies work perfectly, Q followers can still camouflage their activity or move to other platforms."

It won't stop it, but it doesn't need to. There's no way to just stop a belief, but you can absolutely limit it from spreading. If you want to discuss QAnon, you have to discuss politics and how their beliefs are mainstreamed and validated by certain people, which allows them to spread: they have a different vector that is largely related to a different media environment that runs parallel to our own. The failure there is that QAnon's beliefs were allowed to spread with the serial numbers filed off, creating a pipeline to the extremism that was promoted by dozens of actors.


Which is why this isn't deprogramming, and talking to people with beliefs that you disagree with isn't deprogramming, and what Daryl Davis did isn't deprogramming. You introduced that element into the conversation, and so pointing out how it's not relevant to what's happening here doesn't serve to advance any particular point in that regard. Talking to people is how you start to bring them back from the edge. That's what all of the articles I've posted have said, and what even some of the articles you've posted have said, such as when you posted a link to the backfire effect, only for it to say that the effect isn't supported by research.

Talking to people, without it being acrimonious or confrontational, is how you get them to change their minds about something. It's not the whole of it, and it isn't immediate, but it's the most surefire way to get them to stop subscribing to an odious belief. Deplatforming just reinforces it, which is what you don't want to do, no matter how good it might feel at the time.

That is absolutely deprogramming. What Davis did was deprogramming, where he entered into their system and slowly but surely dismantled it. It's less extreme than having to get someone out of a cult, but it's exactly the same, and it isn't a way of actually moderating discussion.

What you are asking is to engage deeply with people beyond a discussion and try to subvert them over a long period of time by challenging their beliefs. While that is important and admirable, it is not a feasible method for content moderation on social media platforms.

Not in this case, they aren't. If someone is actually shooting at you, then yes, you shoot back. But we're talking about the merit of debating ideas, which are won with words.

No, they aren't. In fact, the Civil War is a great example of how they aren't won with words, how sometimes words just don't work.

They're not "our" discussions. They belong to everyone, and others have just as much right to decide what they'll brook as we do. Talking down to someone else about how they're not welcome to share what they believe when everyone else is doesn't serve to change their minds, and runs the risk of making them look sympathetic, even if you then label them as a bigot or some other pejorative. Having an established ground for discourse only works if everyone agrees to the establishment, and if people don't they shouldn't be shown the door for questioning something; that way lies orthodoxy.

But they are. We create the meeting room, we set the rules for the discussion. It's not "talking down to someone", but not letting them demean others, because allowing a Nazi to debate their views on how Jews don't deserve to exist would make Jewish forumgoers feel understandably unwelcome, just as allowing racists to talk about the inherent inferiority of black people would likely make this place hard to stand. You seem to have endless empathy for those who have bad beliefs, but very little for those whom those beliefs affect, the people who are being dehumanized by them and who are forced to live through that stuff every day.

If you want to go and have free debate with racists, feel free to make a missionary trip to the Chans. I prefer not to waste my time having to debate the basic humanity of my fellow posters. Maybe that just comes with age and not wanting to waste time with that sort of stuff. But I just don't have time to really sit down and have weekly talks with an internet racist so that they can in the future maybe come around to my view.
 

Most people are not really equipped to be Matthew Stevenson or Deeyah Khan, unfortunately. I certainly think deplatforming is more effective in the case of out-and-out demagogues.

Anyway: super tangential, but in some ways it is weird that we still use the term "deprogram" in 2023. The plain meaning of the word is rather dehumanizing, as computers are still objects, not people (no matter what ChatGPT wants us to think).
 

Is it unethical to make your platform safer for vulnerable people? To make it less racist, less toxic, less radical?

My version isn't counterproductive: it's the version that works. Moderation does that.
It doesn't work if your goal is to end up with fewer and fewer people who hold toxic beliefs, which is what makes society in general (and any given platform in particular) less toxic, etc. Trading in a brief victory at the cost of hardening opposition and adhering more strongly to odious beliefs isn't a good long-term plan for dealing with those ideas.
No, and if you're going to try and strawman my argument
I disagree that what I said was a strawman; the end result of curating content via ostracizing people is exactly what I laid out previously. It's why deplatforming et al. doesn't work over the long term.
Those ideas do have power, often power beyond facts themselves. If it were only up to facts, then so many of our problems and disagreements wouldn't exist. The problem is that people are often swayed by bad-faith arguments, particularly ones that they are conditioned to hear. The idea that misinformation can simply be combated through engagement is just laughable on its face, even more so given the time in which we live.
Whatever "power" they have doesn't come from any virtue in-and-of those beliefs themselves; bad ideas are bad because virtue is what they lack. That means that a great deal of their power comes from them being propped up as dangerous things which will inevitably win out if they're allowed to be given space, as though they're so cogent that anyone who hears them will be swayed by them. Nothing could be further from the truth; most people, if you present the arguments in a clear and rational manner, without attacking them or belittling them, and giving them time to think things over, will realize that those bad ideas are just that: bad ideas. Trying to curate content on people's behalf only pushes them toward the very thing you're trying to keep them away from.

They can harden it even without it. Again, you're arguing from a standpoint that we don't need moderation, it'll all take care of itself. But we know what that looks like: it's just the Chans. If you want that, fine, more power to you. But that's what it is.
No, without it they tend to be more open to alternative ideas, which is how people end up changing their minds and walking away from odious belief systems. You're not arguing in favor of "moderation," you're arguing in favor of deplatforming and leveling socioeconomic punishments at people for dissenting beliefs, which has a long history of empowering those beliefs rather than stamping them out. You can't negative-reinforce people's way to tolerance, diversity, and inclusion. You have to demonstrate why that's a better way.
Yes, but I'll point out that these people aren't arguing for platforming people, either.
The important takeaway is that the "backfire effect" isn't real, which further supports the idea that engagement works.
Yes, but that's an absolutely moronic way to run a platform and a completely impossible way to moderate content. You can't just become friends with and slowly change a racist person every time you try to have a discussion with them on a topic. This idea that we can talk it out misses that this is not a concept that can be applied to something like a message board.
Why not? I've formed lasting friendships with people online, and while I won't go so far as to say you could do that "every time," it's worth leaving that potential open during any given interaction. Even then, there's still merit to the idea, because it works with regard to the invisible audience who reads what we post online without our knowledge. So that's actually more reason to engage, rather than simply dismiss. It lets us all be the change we'd like to see.
For the love of God, read your own article:
This seems funny, given that you posted a link that debunked the backfire effect as evidence in support of the backfire effect. ;)
"Jones' resilience is more of an exception than the rule, says Squire. She points to the case of Andrew Anglin, founder of the neo-Nazi website The Daily Stormer. Following the violent 2017 Unite the Right rally in Charlottesville, Va., he lost his web domain and has had to cycle through 14 more, losing traffic each time. Squire says Anglin is on the run from various lawsuits, which include a ruling that he owes $14 million in damages for terrorizing a Jewish woman and her family."

Jones being deplatformed really affected his reach, even though others have given him something of a platform (Joe Rogan springs to mind). You can look at the other ones from my own article to see how well it works. It's pretty easy. Deplatforming does work; it's just not a panacea.
The takeaway here, besides that being the assertion of a single person rather than a rock-solid conclusion, is that even when you try to deplatform someone, it can often fail, and in doing so make them more noticeable. Likewise, that doesn't even begin to speak to the issue of this being something applied to individual people and not the ideas they espouse.
It won't stop it, but it doesn't need to. There's no way to just stop a belief, but you can absolutely limit it from spreading. If you want to discuss QAnon, you have to discuss politics and how their beliefs are mainstreamed and validated by certain people, which allows them to spread: they have a different vector that is largely related to a different media environment that runs parallel to our own. The failure there is that QAnon's beliefs were allowed to spread with the serial numbers filed off, creating a pipeline to the extremism that was promoted by dozens of actors.
No, you can't limit an idea from spreading. That's the takeaway here: ideas are not contagions which can be quarantined. Any such attempts will have only limited effect at best, because while you can slow their spread, you can't stop it completely, nor will you be able to prevent people who go looking for them from finding them. At that point, if you haven't given them a series of arguments as to why those ideas are bad, that's when you leave them most vulnerable to being swayed by them, because simply saying "they're bad" isn't enough.

Deplatforming and similar techniques don't work; they slow the spread of ideas at the cost of strengthening them. You have to engage with those ideas and show why they're bad in order to rob them of their appeal.
That is absolutely deprogramming. What Davis did was deprogramming, where he entered into their system and slowly but surely dismantled it. It's less extreme than having to get someone out of a cult, but it's exactly the same, and it isn't a way of actually moderating discussion.
That only works if you define all communities which support bad beliefs as a cult, which isn't really a useful definition because every community is defined by (among other things) a shared set of beliefs by (the majority of) its members. Saying that talking to white supremacists in a non-confrontational manner is deprogramming, as opposed to engagement, is at best a six-of-one-half-dozen-of-another comparison, which introduces no salient differences between the two. Engagement is engagement, and it changes hearts and minds. 'nuff said.
What you are asking is to engage deeply with people beyond a discussion and try to subvert them over a long period of time by challenging their beliefs. While that is important and admirable, it is not a feasible method for content moderation on social media platforms.
I'm glad we can agree that what I'm proposing is important and admirable. That said, if you want to talk specifically with regard to the challenges of balancing that with moderation on social media platforms, then I feel like that's a different discussion, since what we're talking about is the ethics of engagement versus deplatforming.
No, they aren't. In fact, the Civil War is a great example of how they aren't won with words, how sometimes words just don't work.
Again, I said that when people start shooting at you, you need to shoot back. But we're talking about engaging with people in a debate, so that's kind of a pointless truism. If the other person is willing to talk to you, talking back is the way to go as a general rule.
But they are. We create the meeting room, we set the rules for the discussion. It's not "talking down to someone", but not letting them demean others, because allowing a Nazi to debate their views on how Jews don't deserve to exist would make Jewish forumgoers feel understandably unwelcome, just as allowing racists to talk about the inherent inferiority of black people would likely make this place hard to stand. You seem to have endless empathy for those who have bad beliefs, but very little for those whom those beliefs affect, the people who are being dehumanized by them and who are forced to live through that stuff every day.
Again, I don't know who "we" are in your statement, let alone what specific "meeting room" you're referring to. Within the context of an open society, someone saying what they believe is something they can do because "they" are a part of "we," and the society itself is their meeting room. If someone espouses an odious belief, then the way to make people who are uncomfortable with that belief feel better is for the person saying it to realize that they were wrong, and to abandon said odious belief. That, more than anything else, is how you secure the safety and personhood of others. Changing hearts and minds, rather than ostracizing people, generates more security over a longer period of time than trying to label people as evil and pushing them into enclaves.
If you want to go and have free debate with racists, feel free to make a missionary trip to the Chans. I prefer not to waste my time having to debate the basic humanity of my fellow posters. Maybe that just comes with age and not wanting to waste time with that sort of stuff. But I just don't have time to really sit down and have weekly talks with an internet racist so that they can in the future maybe come around to my view.
There are bad ideas beyond racism that I believe also deserve to be called out, as I noted in my previous post. I see the lionization of deplatforming as a means of ending social ills to be a path that ultimately strengthens that which it seeks to defeat, and so feel that there's virtue in pointing out how this isn't the case. Creating a better society for everyone in it isn't something that's quick or easy, all the more so when quick and easy ideas are held up as ideal answers.
 


Can I just point out that both sides here really REALLY need to take a step back and adhere to the no politics policy of the board.

Not taking any side here but folks, you are all WAY out of line.
That's a fair assessment. Given that this seems to be necessarily edging toward breaking forum rules, I'd like to ask @Justice and Rule to PM me his response to my previous post (which I have no doubt he's writing right now), and we can take this discussion private.
 

It doesn't work if your goal is to end up with fewer and fewer people who hold toxic beliefs, which is what makes society in general (and any given platform in particular) less toxic, etc. Trading in a brief victory at the cost of hardening opposition and adhering more strongly to odious beliefs isn't a good long-term plan for dealing with those ideas.

I disagree? Like, I think there is a big difference between not confronting something and not platforming it. Trying to discuss these topics always in good faith misses that most of the time they aren't being discussed in good faith.

Let me ask you: for every racial justice story, should we have someone have to debate a Nazi? Is that always necessary? Or do we have certain baseline beliefs that we can simply use, so that we don't have to have Holocaust deniers on the news, giving reach to their methods?

I disagree that what I said was a strawman; the end result of curating content via ostracizing people is exactly what I laid out previously. It's why deplatforming et al. doesn't work over the long term.

No, it can, but again it isn't a panacea. There are multiple steps, the problem is when you are opposed on certain steps. The QAnon example is very good for showing how difficult it is when one side has a massive social presence already and a large part of the population is being driven towards radicalism in the pursuit of power.

But also, in moderating social media, I think we should prioritize the vulnerable over the hateful. I don't think we should handle racists with kid gloves, because doing so is disrespectful to the people they are dehumanizing.

Whatever "power" they have doesn't come from any virtue in-and-of those beliefs themselves; bad ideas are bad because virtue is what they lack. That means that a great deal of their power comes from them being propped up as dangerous things which will inevitably win out if they're allowed to be given space, as though they're so cogent that anyone who hears them will be swayed by them. Nothing could be further from the truth; most people, if you present the arguments in a clear and rational manner, without attacking them or belittling them, and giving them time to think things over, will realize that those bad ideas are just that: bad ideas. Trying to curate content on people's behalf only pushes them toward the very thing you're trying to keep them away from.

No, again, you completely discount the effects of things like moderation and deplatforming and how they work. So much of what you've said basically ignores the actual effects of it that it is hard to take this argument seriously.

No, without it they tend to be more open to alternative ideas, which is how people end up changing their minds and walking away from odious belief systems. You're not arguing in favor of "moderation," you're arguing in favor of deplatforming and leveling socioeconomic punishments at people for dissenting beliefs, which has a long history of empowering those beliefs rather than stamping them out. You can't negative-reinforce people's way to tolerance, diversity, and inclusion. You have to demonstrate why that's a better way.

No, that's how platform moderation works. If you want Alex Jones using his platform to harass the parents of dead kids, that's on you. I listen to a podcast which reviews him every week, I've listened to his depositions, watched his trial, seen his interviews, and I really think you're out of your element trying to talk about what you think does and does not work with him.

Also, you talk about "demonstrating a better way", but you miss that in doing so, you are alienating the people who are actually affected by it. It's great that you're okay with platforming hate to try and change their minds, but you are making it less and less safe and palatable for the people who have to take that hate to actually stay on those platforms.

The important takeaway is that the "backfire effect" isn't real, which further supports the idea that engagement works.

But it doesn't support the idea that engagement works. One of the studies you're citing shows that they couldn't replicate the backfire effect, but also found no change in belief. The backfire effect is only about hardening beliefs, and just because they don't harden their beliefs doesn't mean they change them.

And again, you're talking about engagement that only works on a deep, personal level. That doesn't work as a moderating principle on a social media platform.

Why not? I've formed lasting friendships with people online, and while I won't go so far as to say you could do that "every time," it's worth leaving that potential open during any given interaction. Even then, there's still merit to the idea, because it works with regard to the invisible audience who reads what we post online without our knowledge. So that's actually more reason to engage, rather than simply dismiss. It lets us all be the change we'd like to see.

I've formed lasting relationships with people I've met online, too, but I also didn't do so to try and change their minds. That is not the same as what you're talking about: engaging deeply with people you disagree with to actually change what they believe. It's possible, but not as a principle for a social media company.

This seems funny, given that you posted a link that debunked the backfire effect as evidence in support of the backfire effect. ;)

It didn't debunk it, though. There are plenty of studies out there looking at the backfire effect, and the most common thing is that there needs to be better, more replicable standards.

The takeaway here, besides that being the assertion of a single person rather than a rock-solid conclusion, is that even when you try to deplatform someone, it can often fail, and in doing so make them more noticeable. Likewise, that doesn't even begin to speak to the issue of this being something applied to individual people and not the ideas they espouse.

It's not an "assertion", you can look up Andrew Anglin if you like. You clearly don't know much about the figures being discussed and how badly their audiences have floundered because of this. Look at Kiwifarms or 8chan for broader examples, but you can find plenty of individual examples.

No, you can't limit an idea from spreading. That's the takeaway here: ideas are not contagions which can be quarantined. Any such attempts will have only limited effect at best, because while you can slow their spread, you can't stop it completely, nor will you be able to prevent people who go looking for them from finding them. At that point, if you haven't given them a series of arguments as to why those ideas are bad, that's when you leave them most vulnerable to being swayed by them, because simply saying "they're bad" isn't enough.

This misses that ideas have reach and power, and that through self-policing and cultural pressure we can diminish these things so that they are less acceptable. Again, your idea would basically be to allow any and all sorts of hate through and try to debate it, and there is just no evidence that that works on an impersonal level, especially on the internet.

Deplatforming and similar techniques don't work; they slow the spread of ideas at the cost of strengthening them. You have to engage with those ideas and show why they're bad in order to rob them of their appeal.

You keep saying this, but there has clearly been real success with deplatforming. You just keep declaring that it doesn't work, as if just saying the words will suddenly make it true.

That only works if you define all communities which support bad beliefs as a cult, which isn't really a useful definition because every community is defined by (among other things) a shared set of beliefs by (the majority of) its members. Saying that talking to white supremacists in a non-confrontational manner is deprogramming, as opposed to engagement, is at best a six-of-one-half-dozen-of-another comparison, which introduces no salient differences between the two. Engagement is engagement, and it changes hearts and minds. 'nuff said.

"Cult" is an arbitrary measure of what is acceptable and isn't, just like beliefs.

Also you are completely ignoring that Davis wasn't just "talking in a non-confrontational manner" to white supremacists, but building up a personal bond before actually challenging them on their beliefs. That's deprogramming, that's how that works, where you slowly build to the idea of challenging their previous belief system. You can try to force it all you want, but deprogramming anyone and everyone you are debating

I'm glad we can agree that what I'm proposing is important and admirable. That said, if you want to talk specifically with regard to the challenges of balancing that with moderation on social media platforms, then I feel like that's a different discussion, since what we're talking about is the ethics of engagement versus deplatforming.

No, talking about the ethics of engagement versus deplatforming absolutely deals with those conundrums, and if you try to ignore them you're just being dishonest. You can't talk about how we need to engage racists in discussion without talking about the flipside of that and how damaging it is to people who are being affected by that.

Again, I said that when people start shooting at you, you need to shoot back. But we're talking about engaging with people in a debate, so that's kind of a pointless truism. If the other person is willing to talk to you, talking back is the way to go as a general rule.

You're missing the point: people had tried to discuss the issue with words, facts, discussion, but it did not change minds. Engagement is nice, but it misses how hard belief can be, especially worldviews about things like race. Further, it misses how broader discussions make the use of personal engagement more difficult.

Again, I don't know who "we" are in your statement, let alone what specific "meeting room" you're referring to. Within the context of an open society, someone saying what they believe is something they can do because "they" are a part of "we," and the society itself is their meeting room. If someone espouses an odious belief, then the way to make people who are uncomfortable with that belief feel better is for the person saying it to realize that they were wrong, and to abandon said odious belief. That, more than anything else, is how you secure the safety and personhood of others. Changing hearts and minds, rather than ostracizing people, generates more security over a longer period of time than trying to label people as evil and pushing them into enclaves.

The "we" is the general "we", and the "meeting room" is the place we discuss: message board, social media platforms, etc. This is getting obtuse.

If someone expresses an odious belief, trying to debate them can have merit, but often merely engaging and discussing doesn't work. Again, your only example did not work through engagement in discussion, but through forming a personal bond over time. Meanwhile, in allowing this discussion to be had, you are ignoring the people whose humanity is being put up for debate.

There are bad ideas beyond racism that I believe also deserve to be called out, as I noted in my previous post.

Sure, but how many of those are actually in danger of being "deplatformed"? If we're talking "odious personal beliefs", stuff like deplatforming is largely exclusive to racism, antisemitism, homophobia, etc. It's not like we're deplatforming people on the basis of whether they are in favor of tax cuts or not.

I see the lionization of deplatforming as a means of ending social ills to be a path that ultimately strengthens that which it seeks to defeat, and so feel that there's virtue in pointing out how this isn't the case. Creating a better society for everyone in it isn't something that's quick or easy, all the more so when quick and easy ideas are held up as ideal answers.

No one has said it will end social ills, simply that it is a tool to help make things more manageable, to eliminate bad-faith actors and to make things safer for the more vulnerable members of our community.
 

