It wouldn't turn into the first part, because users banned from one instance could go create their own instance run under their own moderation rules.
> It would attract types of information that most of us would agree that we do not want around (child porn, terrorism planning, etc) and with any sort of traction, the government would step in to shut it down.
If some instance of a federated system is allowing child porn or other criminal content to flourish then they should be shut down. Every instance would need to be held to a baseline standard for dealing with such content.
> If some instance of a federated system is allowing child porn or other criminal content to flourish then they should be shut down. Every instance would need to be held to a baseline standard for dealing with such content.
Yes, moderation. The ideal isn't a system that is impossible to moderate (for the reasons listed, among others); it's a system where individuals can choose their moderators based on the differing goals and preferences of online communities, within some reasonable bounds/process for limiting rights-violating content like child porn, criminal incitement, etc.