How can AWS cancel Parler's account but not MeWe's? Aren't they almost identical in terms of content?

  • 0
They are not the same. MeWe was created in 2016 in response to a market the founder saw for a social network that wasn’t selling out users’ privacy for ads. Instead, they sell a membership for advanced features and in-network digital swag. There is a wide variety of communities, including every political stripe. A healthy LGBTQ community exists there as well.

    Don’t you want there to be a diversity of social networking? Or just the ones who tell you what to think and sell you to the highest bidder?
  • 0
@stackodev People advocating political violence have been referencing MeWe as their new platform. It seems to have quite a reputation as a discriminatory hate space. I'm not anti-diversity; I'm just against violence and hate speech. I appreciate your feedback.

  • 2
Parler failed to remove content quickly enough to keep up with AWS's demands and the monster spike in content flooding in.

As far as I can determine, anyway; there are a lot of mixed articles on this one.
  • 1
    Don't worry big brother will purge all of them soon enough.

    Telegram is next
  • 0
    @Stuxnet where is Telegram hosted? Aren’t they now owned by Zoom?
  • 3
The funny thing about freedom of speech is that only the first part gets the attention. The whole accountability part is an afterthought, if present at all. The problem is that holding anonymous users accountable is impossible.
I think it's wrong to just forcibly remove a whole platform. But I also think that content clearly inciting violence should be removed.
Not political views, protest organising, or virtual violence (computer games and such), but actual statements like "let's beat the crap out of ... tomorrow at lunch on ... street."
  • 1
@hjk101 I agree that political discourse is healthy, but how do you filter your own platform for specific violent statements when your user base grows so rapidly?
  • 0
@devphobe The challenge with Parler was that they didn't want to use AI to moderate content; this is an ideological stance. They object to content being monitored a priori.
  • 0
    @forcepushfixall does MeWe? With no advertising, could their revenue be high enough to implement a good AI in today’s society?
  • 0
@devphobe I don't know what MeWe is, but from a personal stance I would say that AI-based a priori content policy enforcement is undesirable in any case. Content should be reviewed only when it is actually reported by a user (just as the police come when a crime is reported). We would never accept some kind of predictive policing with AI outside of the digital realm.
  • 1
@devphobe Let users do a lot of the work for you with a flagging option.
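The flagging approach suggested above can be sketched in a few lines. This is a hypothetical minimal example (the class and parameter names are my own, not from any real platform): posts that are flagged by enough distinct users get hidden automatically and queued for human review, so moderators only look at what the community surfaces.

```python
from collections import defaultdict

class FlagQueue:
    """Hypothetical sketch of user-driven moderation: auto-hide a post once
    enough distinct users flag it, then queue it for human review."""

    def __init__(self, hide_threshold=3):
        self.hide_threshold = hide_threshold
        self.flags = defaultdict(set)  # post_id -> set of reporting user ids
        self.hidden = set()            # posts hidden pending review
        self.review_queue = []         # posts awaiting a human moderator

    def flag(self, post_id, user_id):
        # A set ensures repeated flags from the same user count only once.
        self.flags[post_id].add(user_id)
        if post_id not in self.hidden and len(self.flags[post_id]) >= self.hide_threshold:
            self.hidden.add(post_id)           # hide immediately
            self.review_queue.append(post_id)  # a human makes the final call
```

Counting distinct reporters rather than raw reports is the key design choice here; it blunts one user spam-flagging content they dislike, though it doesn't stop coordinated brigading on its own.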
  • 1
    Twitter has been chock full of calls for rioting and killing for a while. Since June, it only got worse. I reported multiple violations to Twitter and all I got back was that they reviewed it and found that it did not violate their policies. Same for Facebook. They were seemingly happy to be the place where certain people of a certain opinion came to organize and rally people to The Cause.

And then, in the past week or so, some articles were written highlighting the hypocrisy of allowing so many calls to open violence for The Cause to remain on their platform for months, including from one particularly murderous head of state. Immediately those tweets got removed. What changed? Nothing about the AI or the content monitoring. They got caught.
  • 1
    I was on Parler, out of a professional interest, lurking mostly and posting here and there to experiment with the new platform and to study how people interacted. People there were pissed. I don’t begrudge them that. But this idea that the ONLY people on it were insurrectionists and that ALL they were doing was colluding to commit violence is totally false. Like, really false. People act like everywhere you looked it was apparent. Nah. Just a bunch of people shitposting like on Twitter. It was more like Twitter in ways the press will never tell you because the press knows they need Big Tech to survive and don’t want to be canceled themselves.
  • 1
    @devphobe That argument is silly on its face. Just because a subset of morons advocates something, a whole platform has to be wiped out to prevent them from using it? Really? That’s where this should go?
  • 1
    Amazon's story:

    - Amazon saw content that violated its TOS
    - Amazon asked Parler to take it down
    - Parler agreed but failed to do so in a timely manner
    - Amazon suspended their accounts

    Really cut and dry, if you believe Amazon fully. I still don't know what to think about all this. The entire battleground is sad, no matter where you look.
  • 0
    @junon I personally don’t believe Amazon. But that’s just me not trusting big tech power.
  • 0
    I can’t hear major social media company outrage over the roar of their constant hypocrisy and favoritism.
  • 1
@junon I’ve been on the receiving end of an Amazon terms of service violation. (Somebody filed a copyright infringement claim against content one of our users uploaded.) They don’t play around: we had 24 hours to acknowledge and delete the content, from Saturday to Sunday.
  • 0
@stackodev I’m sure Parler had multiple user bases, and even if only a small subset engaged in the violence, the real problem is how the platform handles bad actors. Did Parler have a “report post” feature? Did you ever try it? I commend you for trying the platform; I wish we could get coffee or a beer and talk more about it.

In America, we provide legal protections to our telecom giants (Verizon, AT&T, Comcast). In that world, we’d call it wiretapping. In the social media world, we apply rules like Section 230, or the HIPAA conduit exception for email, FTP, and fax providers.

The key argument here is the difference between legality and ethics. When your platform condones violence by not censuring violent users, you’ve crossed the ethical boundary (in my opinion). I imagine the percentage of users Facebook has blocked is way higher than the percentage of Parler users who were TOS’d out of the platform. That’s just an assumption, though.
  • 0
    @devphobe There was a reporting feature. I never saw any content that merited its use. Nothing calling openly for specific violent acts in specific places and such. Did people express themselves strongly and in ways that might make a Ph.D. in Llama Gender Studies at UC Berkeley piss himself? Quite often. But rough speech is generally not something we ought to be criminalizing, even/especially if one doesn’t agree with it.