Why YSK: Beehaw defederated from Lemmy.World and Sh.itjust.works, effectively shadowbanning anyone from those instances. If you are on one of them, you will not be able to interact with Beehaw users or posts.

Edit: A lot of people are asking why Beehaw did this. I want to keep this post informational and not color it with my personal opinion. I am adding a link to the Beehaw announcement; if you are interested in reading it, you can form your own views. https://beehaw.org/post/567170

  • arayvenn@lemmy.world · 1 year ago

    This is a bit of a bummer since I’m interested in a lot of the beehaw communities. Should users just make separate accounts to interact with beehaw communities?

    • eee@lemm.ee · 1 year ago

      A lot of the Beehaw communities have alternatives in the rest of the lemmyverse. While I can participate in Beehaw communities, I personally found it more useful to just block all Beehaw communities (so I don’t accidentally post there) and participate in the non-Beehaw ones, so I’m interacting with the majority of the fediverse.

      Most of the other instances are low-drama and don’t have issues with defederating/shadowbanning like Beehaw does!

      • t0e@lemmy.world · 1 year ago

        There’s no reason you can’t have it both ways: block Beehaw here and join Beehaw there, if you feel any fear of missing out.

        I think that’s one of the best things about federation. You get to taste both options. If they’re right about high-admission instances leading to lower-quality content, the only way to know is to be in both places. Thankfully, we can.

    • Da_Boom@iusearchlinux.fyi · 1 year ago

      If you want to interact with both Beehaw and Lemmy.world, you need to find an instance that federates with both.

      My instance (iusearchlinux.fyi) does, but so do a lot of other instances.

    • rimlogger@lemmy.world · 1 year ago

      I use a separate account for Beehaw, but ever since they defederated I haven’t seen as much activity there.

      • thegiddystitcher@lemm.ee · 1 year ago

        Makes sense. The problem with their approach was that it basically punished people on Beehaw rather than people on the instances they deem problematic. I see what they’re trying to do and have absolutely no problem with it, but I and most people I’d gotten to know there just moved to another instance so we could interact outside the walled garden. Meanwhile my account on .world was unaffected, because there are alternatives to all the Beehaw stuff elsewhere.

      • t0e@lemmy.world · 1 year ago

        Sounds like a double-edged sword. A high admittance rate may mean spammier as well as trollier content, but low admissions and blacklisting other instances mean people will move their stuff to where they expect more people to see it.

    • SpaceCowboy@lemmy.ca · 1 year ago

      Yeah, lemmy.world has a more open sign-up, which is a double-edged sword. It’s good in that it’s easier to set up an account and start talking.

      But the other side of it is that it’s also easier for shitty people to sign up. The kind of people that will say shitty things to the LGBTQ+ communities on beehaw.

      So yeah, you might want to consider signing up for an account on an instance that’s a little more selective. You’ll probably have to write up a few paragraphs introducing yourself, and it might take a little time for it to be reviewed.

      • t0e@lemmy.world · 1 year ago

        AI is going to mess with that process so fast I’d be surprised if it hasn’t happened already. While that seems unavoidable, it’s still probably a good idea to have the personal-question text box for now. But it seems like only a stopgap. We’ll need something better.

        But how do you proceduralize moderation? Even though it will raise operating costs, it might be necessary to host our own AI on the back end of each opted-in instance, and provide the tools to train it on content that the admins of that instance find objectionable.

        There would be growing pains, of course, where some of our comments are held for review by participating moderators, who are themselves selected by an AI trained on content the admins of the instance find to be exceptional. And it would help to label and share the tensors we mine from this, so a new instance could gain access to a common model and quickly select a few things they don’t want in their instance, even giving them the ability to automatically generate a set of rules based on the options they selected when building the AI for their instance.
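
        To make that concrete, here’s a minimal sketch of what “train it on content the admins find objectionable” could look like at its simplest, assuming nothing more than a pile of comments the local admins have labeled. Everything in it is hypothetical (the example comments, the labels, the file name); it’s a generic text classifier, not anything Lemmy actually ships:

        ```python
        # Hypothetical sketch: a per-instance moderation classifier trained on
        # comments the local admins have flagged (1) or approved (0).
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline
        import joblib

        # Made-up training data an instance might accumulate over time.
        comments = [
            "great write-up, thanks for sharing",
            "you people don't belong here",
            "has anyone tried the new release?",
            "get out of our community or else",
        ]
        labels = [0, 1, 0, 1]  # 1 = objectionable per this instance's admins

        # TF-IDF + logistic regression: crude, but trainable on a single server.
        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
        model.fit(comments, labels)

        # Score a new comment; the threshold would be per-instance policy.
        score = model.predict_proba(["you don't belong here"])[0][1]
        print(f"objectionable probability: {score:.2f}")

        # "Sharing the model" could be as simple as publishing the fitted
        # pipeline so a new instance can start from it before it has labels.
        joblib.dump(model, "moderation-model.joblib")
        ```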

        It would take some time for all the instances to figure out which groups they do and don’t want to connect with, both in terms of internal users and external instances. I think you’d end up with two distinct clumps of communities that openly communicate within their clump, with a bigger, blurrier clump of centrists between them, with whom most communities communicate. But on either side there would almost certainly be tiny factions clumped together who don’t communicate with most of the centrist groups, on the basis that they communicate with the other side. And there will always be private groups as well, some of which may choose their privacy on the basis that they refuse to communicate with any group that communicates with the centrist cloud.

        And in most of our minds, the two groups in question are probably political, but I think a similar pattern will play out in any sufficiently large network of loosely federated instances, even if the spectrum is what side of a sports rivalry you’re on. If we get to the point where there’s an instance or more in almost every household, we may be able to see these kinds of networks form in realtime.

        But the question I can’t seem to answer: Is it good? Or rather, is it good enough?

        People always think of what they would do if they had a time machine and could go back and “change things.” But in terms of federated social media, we already are back, almost at the start. So, if we’re going to think of a better way, now would be a good time.

        If we start to see a high degree of polarization among the instances of lemmy, what is the right thing to do about that? To all turn our backs, take our content and go home, make sure they have to have accounts on our side to see it, and if they ever make a subversive comment on our side of the fence, it’s removed before a human can ever see it, only spot-checked occasionally to make sure the bot is not being too harsh? Because that is one way of doing it, and maybe it’s the right way. If we train the AI well enough. Which depends on many of us doing that well enough across many instances. Maybe that is how you defeat Nazis, to make sure they can only talk about Nazi things in a boring wasteland of their own design.

        But I worry. Once instances are better networked, becoming more about quantity than size, and billionaires are able to set up “instance farms” where AI bots try to influence the rest of the fediverse en masse, will we be ready to head it off? Or, similar to how we can’t see the Nazis crawling out from their wasteland to get higher-quality memes, will we end up palling around with the bots designed to make our society trend toward slavery while their energy consumption raises the cost of the electricity we have to work for? Of course, if the bots do end up more convincingly human than humans can ever be, who am I to say they don’t deserve a larger cut of our power?

        • SpaceCowboy@lemmy.ca · 1 year ago

          But how do you proceduralize moderation?

          You don’t. That’s something you need a person to do.

          All the big corporations have been spending ridiculous amounts of money on algorithms to solve these problems and what have they come up with? Does it feel like the algorithms on the corporate social media sites have been working well?

          You can’t come up with an algorithm that can solve human interaction. People will just constantly probe any algorithm to discover its weaknesses and exploit them. They’ll come up with systems of code words, stochastic terrorism, and implied threats of violence that an algorithm won’t notice but the recipient of the message will understand.

          One of the effects of social media has been that it’s convinced everyone that people shouldn’t be trusted. That may be true, but it seems we can’t trust algorithms either. We just have to accept that no system that humans are involved in can ever be perfect. The best we can do is try to identify people who are intelligent, responsible, and exercise good judgment to do the job of moderation. Sure, people will make mistakes, but so do algorithms. Unlike algorithms, though, people are capable of empathy. There are certainly bad people out there, but there are more good people than bad people. And the bad people will exploit an algorithm more easily than they can manipulate an intelligent person with good judgment.

          Is it good? Or rather, is it good enough?

          I think good enough is all that’s possible in any system that involves humans. And social media is going to involve humans, no way around that. But that’s fine isn’t it? It’s good enough.

          If we start to see a high degree of polarization among the instances of lemmy, what is the right thing to do about that?

          Well, everyone has a right to say what they want. But everyone else has the right to ignore people they aren’t interested in listening to. I don’t see things like defederation as a bug; it’s a feature. I think it can be improved: make it clear to the users what’s happening. Maybe there should be an in-between state where instances aren’t completely defederated, but the admin can indicate that some servers have questionable content the users on their server have to opt in to see.

          The key here is to get away from the idea of controlling content and controlling the users. Maximize choice. People choose their server. The admin can choose to ban them. The user can then choose another server (or even set up their own). The users choose the server based on its moderation policies and which servers it’s federated with. Admins choose which servers to federate with. Users can choose not to view content from certain servers. Mods choose which server their communities are hosted on and can also choose to ban users. Users choose communities.

          Yup. It’s all one big mess. But any system with humans making choices is always going to be a mess.

          We’ve tried the corporate model with algorithmic control over everything. It was a failure. So let’s get messy!

          Of course, if the bots do end up more convincingly human than humans can ever be, who am I to say they don’t deserve a larger cut of our power?

          I’m a fan of Philip K. Dick’s work. Also Robocop. What’s the difference between a human mind and an algorithm? Turing was wrong about it being intelligence, because humans are dumb as fuck. It’s empathy. That’s the difference.

          The corporations didn’t just take away Alex Murphy’s humanity, they were taking away everyone’s humanity. Very few people in Robocop have any empathy for anyone else.

          Why would you flip over a tortoise in a desert? You wouldn’t. Because you’re a human and you have empathy.

          The only way an AI would be indistinguishable from a human is if it had empathy. But if the AI has empathy, it would be on our side, not on the side of an evil corporation.

          Anyway I’m tired, not sure if this makes sense.

          Good night!

          • t0e@lemmy.world · 1 year ago

            I think that makes a lot of sense and it’s exactly the kind of stuff we should be considering at this stage. I also agree that humans are the ideal source of empathy and the best way to get around systems of secret code words and other methods that are used to circumvent algorithmic control.

            But I also think AI-generated algorithms have their place. By design, content moderation is an unpaid task. Many volunteers are very good at moderation, but the work takes up a lot of their time, and some of the best minds may decide to step away from moderation if it becomes too burdensome. On Reddit, I saw a lot of examples of moderators who, as flawed humans, made choices that were not empathetic but rather driven by a desire for power and control. Of course, if we make mistakes during the algorithm training process and allow our AI to be trained on the lowest common denominator of moderators, the algorithm may end up being just as power hungry, or even worse, considering that bots do not ever tire or log off.

            But I do think there are ways to get past that, if we’re careful about how we implement such systems. While bots may not be capable of empathy, depending on your definition, some conversations with AI chatbots make me think AI can be trained to very closely simulate it. And as you mentioned about secret messages, bots will likely always be behind the curve when it comes to recognizing dog whistles and otherwise obfuscated hate speech. But as long as we always have dedicated, empathetic humans taking part, the AI should be able to catch up quickly whenever a new pattern emerges. We may even be able to tackle these issues by sending our own bots into enemy territory and learning the dog whistles as they’re being developed, though there could be negative side effects to that strategy as well.

            I think my primary concern when pushing for these kinds of algorithms is to make sure we don’t overburden moderation teams. I’ve worked too long in jobs where too much was expected for too little pay, and all the best and brightest left for greener pastures. I think the best way to make moderation rewarding is to automate the most obvious choices. If someone is blasting hate speech, a bot can be very certain that the comment should be hidden and a moderator can review the bot’s decision at a later time if they wish. I just want to get the most boring repetitive tasks off of moderators’ plates so they can focus on decisions that actually require nuance.
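
            As a rough sketch of what I mean by taking the obvious calls off moderators’ plates, something like the flow below would do it. The threshold, names, and queue structure are all invented for illustration, not any existing Lemmy feature:

            ```python
            # Hypothetical auto-hide flow: only very confident calls are acted on
            # automatically; everything else is left for a human moderator.
            from dataclasses import dataclass, field

            AUTO_HIDE_THRESHOLD = 0.95  # made-up policy knob, set per instance

            @dataclass
            class ModQueue:
                review_later: list = field(default_factory=list)  # auto-hidden, spot-check later
                needs_human: list = field(default_factory=list)   # ambiguous, human decides

            def triage(comment_id: str, text: str, score: float, queue: ModQueue) -> bool:
                """Return True if the comment should be hidden right away."""
                if score >= AUTO_HIDE_THRESHOLD:
                    queue.review_later.append((comment_id, text, score))
                    return True   # obvious case: hide now, moderator reviews whenever
                if score >= 0.5:
                    queue.needs_human.append((comment_id, text, score))
                return False      # borderline or clean: stays visible for now

            queue = ModQueue()
            print(triage("c1", "blatant slur goes here", 0.99, queue))  # True, auto-hidden
            print(triage("c2", "mildly heated argument", 0.62, queue))  # False, queued for a human
            ```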

            Something I really like about what you said was the idea of promoting choice. I was on a different social media platform lately, one which has a significant userbase of minors and therefore needs fast over-tuned moderation to limit liabilities (Campfire, the communication tool for Pokémon Go). I was chatting with a friend and a comment I thought was mundane got automatically blocked because it contained the word “trash.” Now, I think this indicates they are using a low quality AI, because context clues would have shown a better AI that the comment was fine. In any case, I was immediately frustrated because I thought my friend would get the impression that I said something really bad, because my comment was blocked.

            Except I soon found out that you can choose to see hidden comments by clicking on them. Without the choice of seeing the comment, I felt hate towards the algorithm. But when presented with the choice of seeing censored comments, my opinion immediately flipped and I actually appreciated the algorithm because it provides a safe platform where distasteful comments are immediately blocked so the young and impressionable can’t see them, but adults are able to remove the block to see the comments if they desire.

            I think we can take this a step further and have automatically blocked comments show categories of reasons why they were blocked. For example, I might never want to click on comments that were blocked due to containing racial slurs. But when I see comments blocked because of spoilers, maybe I do want to take a peek at select comments. And maybe for general curse words, I want to remove the filter entirely so that on my device, those comments are never hidden from me in the first place. This would allow for some curating of the user experience before moderators even have a chance to arrive on the scene.
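
            In code, that curation could be as simple as a per-user preference table keyed by the reason a comment was blocked. The category names and defaults below are made up, just to show the idea:

            ```python
            # Hypothetical per-user filter preferences keyed by the reason a comment
            # was auto-blocked. "show" = never hide, "click" = hidden behind a click,
            # "hide" = never show me these at all.
            preferences = {
                "slur": "hide",
                "spoiler": "click",
                "profanity": "show",
            }

            def render(comment_text: str, block_reason: str | None) -> str:
                if block_reason is None:
                    return comment_text
                action = preferences.get(block_reason, "click")  # default: collapsed but viewable
                if action == "show":
                    return comment_text
                if action == "click":
                    return f"[hidden: {block_reason} - click to view]"
                return "[removed by your filter settings]"

            print(render("that ending was trash", "profanity"))   # shown as-is
            print(render("the hero dies at the end", "spoiler"))  # collapsed behind a click
            ```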

            On the whole, I agree with you that humans are the ideal. But I am fearful of a future where bots are so advanced, we have no way to tell what is a human account and what is not. Whether we like it or not, moderators may eventually be bots - not because the system is designed that way but because many accounts will be bots and admins picking their moderation staff won’t be able to reliably tell the difference.

            The most worrisome aspect of this future, in my mind, will be the idea of voting. A message may be hidden because of identified hate speech, and we may eventually have an option for users to vote whether the comment was correctly hidden or if the block should be removed. But if a majority of users are bots, a bad actor could have their bot swarm vote on removing blocks from comments that were correctly hidden due to containing hate speech. Whether it happens at the user level or at the moderator level, this is a risk. So, in my mind, one of the most important tasks we will need AI to perform is identifying other AI. At first, humans will be able to identify AI by the way they talk. But chatbots will become so realistic that eventually, we will need to rely on clues that humans are bad at detecting, such as when a swarm of bots perform similar actions in tandem, coordinating in a way that humans do not.
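
            One sketch of the kind of tandem-action signal software can catch but humans can’t: pairs of accounts whose voting histories overlap almost completely. The accounts, votes, and threshold below are invented; real detection would need timing data and far more history:

            ```python
            # Hypothetical coordination check: flag pairs of accounts whose vote
            # histories are nearly identical (Jaccard similarity of the posts they
            # upvoted).
            from itertools import combinations

            votes = {  # account -> set of post ids it upvoted (made-up sample)
                "alice": {"p1", "p4", "p7", "p9"},
                "bot_a": {"p2", "p3", "p5", "p6", "p8"},
                "bot_b": {"p2", "p3", "p5", "p6", "p8"},
                "bot_c": {"p2", "p3", "p5", "p8"},
            }

            def jaccard(a: set, b: set) -> float:
                return len(a & b) / len(a | b) if a | b else 0.0

            SUSPICIOUS = 0.8  # made-up threshold
            for u1, u2 in combinations(votes, 2):
                sim = jaccard(votes[u1], votes[u2])
                if sim >= SUSPICIOUS:
                    print(f"possible coordinated pair: {u1} / {u2} (similarity {sim:.2f})")
            # -> flags bot_a/bot_b (1.00) plus bot_a/bot_c and bot_b/bot_c (0.80)
            ```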

            And I think it’s important we start this work now, because if the bots controlled by the opposition get good enough before we are able to reliably detect them, our detection abilities will always be behind the curve. In a worst-case scenario, we would have a bot that thinks the most realistic swarms of bots are all human and the most fake-sounding groups of humans are all bots. That is the future I’m most concerned about heading off. I know the scenario is not palatable, and at this stage it may feel better to avoid AI entirely, but I think bots taking over this platform is a very real possibility, and we should do our best to prevent it.