Apologies for the late response. You’ve managed to guess most of what we have documented about this issue in the spec, but it’s buried deep enough in the docs right now that not many people will find these details. Let me summarise what we have, and I’d love to hear if you see any holes in the thinking.
The issue: not everyone is subject to US jurisdiction. That is less of a problem than it appears, though not in the way you’d think: US law on free speech is among the freest, if not the freest, in the world, so it is rare for some speech to be legal outside the US but illegal inside it. In other words, it’s very rare for the US to demand action on a piece of writing where other countries do not, but very common for other countries to demand action where the US does not. That’s a good thing.
So, it is very possible and very common for some speech to be legal in the US and illegal in, say, Germany. I’m picking Germany in particular because a) the laws on free speech in Germany are visibly different from those of the US, and b) it is a ‘respectable’ jurisdiction, in that most people would not dispute that its laws were put in place through due process and thus reflect the wishes of the citizens of the Bundesrepublik Deutschland.
So, how do we deal with this? My thinking is structured in three parts.
First: if we applied US law to Germans, a) we’d be cultural imperialists, and b) it would not be enough. Somewhat understandably, German law specifies that most speech regarding, and in support of, actual Nazism be curtailed. As far as my understanding goes, this is not merely a case of such speech being unacceptable in civil society (as it is in most of the civilised world); to my best reading of the matter, it is illegal.
Second: most reasonable people would agree that, given Germany’s historical context, making this illegal is likely a good, practical idea, even though I would personally object to it on free-speech grounds. Even so, if this law is useful in real life in preventing a rebirth of Nazism in Germany, I am going to go with that over an absolutist free-speech approach.
Third and last: this is a decision made by Germans, for Germans, because, as mentioned above, Germany is a jurisdiction whose laws are built upon collective consent through a functioning democracy. Germans vote on and consent to these laws themselves. Even if those laws mean censorship of what Germans can see, it is not up to us to question their decisions, so long as those decisions are made collectively and the people to whom this censorship applies have given their consent.
The first part establishes that a country has made a decision relevant to free speech in that country which diverges from the established US notion of free speech. The second part establishes that this limitation is justified and has real-world benefits. The third part establishes that this limitation was put in place with the consent of the governed. In most cases, the third part overrides the second: if a country has chosen to deviate in some specific way from the baseline Western idea of free speech, and that change was approved and put in place through means that indicate popular consent, then for our purposes the limitation is valid, and we should try to find a way to respect their wishes, even if none of us has anything to do with that jurisdiction.
Given that we have established the existence of at least one country where this is the case (Germany), we need a system through which this can work.
There is a system in Aether that allows for this, based on the consent of the user. As you know, we have moderators, who tend to their communities. What we also have, and have not granted to any country yet, is a special type of moderator called a censor. [1]
These censors are moderators of the whole network, across all communities (in other words, a censor is a global moderator). The censor keys given for this purpose represent not individual people, per se, but a particular organisation, intended to be a nonprofit or an official branch of a government, and they allow that organisation to moderate all content for the people who consent to that moderation.
When, say, a German user joins the network, we make a guess. In the onboarding process, we say something of the form: “Hey, you seem to be from Germany. This is the moderation provider for German law, the Bundesministerium für Verkehr und digitale Infrastruktur. To remain compliant with German law, if you’re in a German jurisdiction, you should enable this.” And then, in case we made a mistake, we offer a list of other censors, likely from other countries, or even volunteer individuals or organisations that do this work as a public good.
The user chooses whether he or she wants to go with this. Even a German user is given the chance to decline and take the risk; that is between him/her and Germany. If the law truly has popular consent, ensuring compliance should not be a problem for Germany.
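To make the onboarding flow above concrete, here is a minimal sketch of how the default-censor suggestion could work. Everything here (the `Censor` type, the `CENSORS_BY_COUNTRY` registry, the function name) is hypothetical illustration, not Aether’s actual code; the point is that the guess is only a default, and the user can always pick a different censor or none at all.

```python
from dataclasses import dataclass

@dataclass
class Censor:
    name: str
    country: str  # ISO country code the censor key is scoped to (illustrative)

# Hypothetical registry of country-granted censor keys.
CENSORS_BY_COUNTRY = {
    "DE": Censor(name="German law moderation provider", country="DE"),
}

def suggest_censors(guessed_country):
    """Return (suggested default, alternatives) for the onboarding screen.

    The country guess can be wrong, so every other censor remains
    selectable, and the user is always free to enable none at all.
    """
    suggested = CENSORS_BY_COUNTRY.get(guessed_country)
    alternatives = [c for c in CENSORS_BY_COUNTRY.values() if c is not suggested]
    return suggested, alternatives
```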
Nominally, these censors would be granted the authority to exist by Aether’s primary root key [2], but enforcement happens only through the user selecting and activating them (or not editing the default selection), so their power has to come from the consent of that end user.
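The two-factor nature of censor power described above (authority granted by the root key, enforcement only through user consent) can be sketched as a single check. The fingerprint constant and function name below are illustrative placeholders, not Aether’s real implementation:

```python
# Placeholder for the fingerprint of Aether's primary root key.
ROOT_KEY_FINGERPRINT = "aether-root-key-fp"

def censor_is_effective(grant_issuer_fp, user_has_enabled):
    """A censor key has two independent requirements (sketch):

    1. authority: the key must have been granted by the primary root key;
    2. consent: the end user must have selected/left the censor enabled.

    Without #2, the grant is inert; the power comes from the user.
    """
    return grant_issuer_fp == ROOT_KEY_FINGERPRINT and user_has_enabled
```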
It is also possible to be elected into the position of a global censor (a global mod). However, I do realise this is very hard, because the rule for Aether elections requires that at least 5% of the population has voted in the last 6 months, and that 50+% of those votes are in favour. At the network scale, that is a lot of people.
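As a rough sketch, the election rule reads like this (I am assuming “50+%” means strictly more than half, and the 6-month vote-window bookkeeping is out of scope here):

```python
def passes_election(votes_cast, votes_in_favour, population):
    """Check the election rule described above (sketch):

    - turnout: at least 5% of the population voted in the last 6 months
      (votes_cast is assumed to already be windowed to 6 months);
    - majority: strictly more than 50% of those votes are in favour.
    """
    turnout_ok = votes_cast >= 0.05 * population
    majority_ok = votes_in_favour > 0.5 * votes_cast
    return turnout_ok and majority_ok
```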
However, if you get elected as a censor by the whole network, your work applies to the whole network by default unless people disable it explicitly. That is something that never happens with the censor keys Aether grants to individual countries’ executive bodies, should they want one: those are always opt-in. So moderation at the global level very closely mirrors moderation at the community level.
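The opt-out versus opt-in distinction above can be expressed as a small decision function. Again, this is an illustrative sketch, not Aether’s actual logic:

```python
def censor_applies(censor_elected_globally, user_enabled):
    """Decide whether a censor's mod actions apply for a given user.

    user_enabled is None when the user has never touched the setting.
    Elected global censors apply by default (opt-out); country-granted
    censor keys apply only when explicitly enabled (opt-in).
    """
    if user_enabled is not None:
        return user_enabled          # an explicit user choice always wins
    return censor_elected_globally   # default: on if elected, off if granted
```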
The core principle behind this is that moderation has to be based on consent, and it has to be transparent. We ensure transparency ourselves (you can see every mod action today in the mod actions tab of each community), and consent has to be gained either by users choosing you as their censor/moderator, or by enough other users voting for you that you get elected and become applied by default. These principles apply to individual moderators the same way they apply to countries’ official censors, with one difference: a country’s official censor cannot get elected and become a global censor [3], while a person or nonprofit org can.
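On the transparency side, a public mod-actions log could be modelled roughly like this. The field names here are my own guesses for illustration, not Aether’s actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModAction:
    """One entry in a community's public mod-actions log (illustrative).

    Every action carries who did it and why, so users can audit a
    moderator or censor before (and after) consenting to them.
    """
    moderator_fingerprint: str
    community: str
    target_post: str
    action: str    # e.g. "delete", "approve"
    reason: str
    timestamp: float

def visible_log(actions, community):
    """Everything is public: filter by community, newest first."""
    return sorted((a for a in actions if a.community == community),
                  key=lambda a: a.timestamp, reverse=True)
```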
There is one carve-out reserved for the United States, because our small company is a US company. This means we do have to apply US law regarding free speech, which thankfully is pretty much the freest out there; in essence, we sort of lucked out on censorship because of the First Amendment. We do have to apply the DMCA globally, but that is both off-protocol and not part of the free speech conversation: if you’re doing something that gets you DMCA’d as a user, you are probably violating someone’s intellectual property, and that is not covered under freedom of speech in any country I know of.
In summary, since we don’t speak German, we cannot be effective mods for Germany. To alleviate that, we’d love to have Germans volunteer and help us, or, if the German government wants, they can get a censor key from us and do their own enforcement. I think the latter is where all the major online platforms will end up in a decade, and we’re already built for it.
[1] The name ‘censor’ was chosen very deliberately, to make sure that no person or group of people appointed as one can call themselves something like the Ministry of Truth.
[2] This root private key is also where the trust for things like the orange names we used to have for Patreon supporters comes from. It can be disabled by modifying the app, but at that point it’s a different app, and then it’s your risk.
[3] This avoids the case where the majority of people in the network are from one country and, by proxy, that country’s censor gets elected as the global censor.