Aether as an app for reading reddit (and Aether)

Since Aether’s forum structure is very similar to reddit’s, and since reddit is a very large, popular network with a lot of activity, and since Aether might be useful to communities that no longer find reddit hospitable, I wonder if Aether might also work well as a way to read reddit.

That is, what if the Aether app let you read and interact with reddit through an embedded browser, but also let you read and interact with p2p-delivered Aether discussions in the same app?

This would make it easier to market to existing reddit users, and it would make it easier for subreddits to transition their users from reddit to Aether if reddit stops being hospitable to them.

Just a thought!

Hey, that’s actually not a half bad idea.

One thing we could do is allow you to add subreddits as Aether communities, so that your list would be a combined list of both.

Not sure if the Reddit API or its license allows this — this could be interesting if it does. As you pointed out, they are compatible to some extent. We would have to remove some features on Reddit subreddits (since they don’t have mod actions for everyone, etc.) but it really could work. I’ll take a look this weekend to see what we can do with it.

Glad to be helpful!

Also, just to be clear, this is a bit speculative on my part, since I’m not currently an active reddit or Aether user.

I just like the idea of transitioning people to p2p discussion spaces, and this idea cropped up. I think redditors would be really into it in the abstract, and reddit can be an amazing place to reach an audience if you have an idea redditors are into!

I wonder if there’s some way to validate this with redditors before building it?

You could try this good idea with Notabug too:

https://notabug.io

Also with Voat:

Some ideas on how to code this, and a look at how something similar has been implemented, can be found in Invidious, which gives users the option to see either YouTube or Reddit comments on videos hosted there. For example, look at these:

https://yewtu.be/watch?v=7AHFqqw1zGM

Gab also has a way of commenting on pages on other websites. It works through a browser extension and through Dissenter, their fork of Brave:

Perhaps there could be an Aether extension for Chrome-based browsers, much like Gab’s.

That’s an incredible idea. Dissenter is an amazing concept, but it censors things, which makes it unappealing. So combining the Dissenter idea with Aether would allow you to comment on any website without censorship, which would be super cool.

I don’t think it would be too hard to create an extension which communicates with the Aether app… I might even create something like that.

There are a few things here. Aether as a browser extension is only viable when we have an Aether server running somewhere, since Aether is not an app that can run in a browser by itself. This topic has been discussed many times before, so feel free to search on the app or here. I wish it were so, but unfortunately, it is not.

The second thing is about censorship. It’s important to understand that Aether is still subject to US laws and US law’s definition of free speech. We cannot allow any illegal content, and if it comes to that, there will be ‘censorship’ in the way that we will have to find ways to block such content to remain legal. In this context, my opinion does not matter, no one’s opinion matters, this is the law, and it will happen.

While I’m sympathetic to free speech, and I might, under very specific and rare circumstances, agree that free speech is curtailed in the Western world (the East is a whole other matter), it remains the case that the idea of free speech is often used to demand that society provide a free conduit for ideas that the same society finds reprehensible. It is your right to go out on the streets and shout at people all you want. On the other hand, you have no right to demand that a newspaper give you an opinion column.

Another unfortunate result of this dynamic is that any platform that ends up arguing that it is a place for free speech ends up as a cesspool very quickly because it quickly brings all the people who have been booted from all other platforms for being too awful into the same place, which makes all the more mainstream people leave the platform, and thus begins the spiralling descent into the mountains of madness.

As a result, the design of Aether is such that it attempts to reflect this duality of our environment where we want to both allow for as much free speech as possible, ideally up to the limit of the US law (which is extremely wide in its protections for free speech, more than any other country), but we also need to maintain a majority of the mainstream people, and offer them something genuinely interesting, where one’s opinion can be challenged, and where they can learn something, but still very much within the boundaries of civil society.

Our reasons for maintaining this duality are twofold: one relates to the mainstream users, the other to the ‘edge’ users.

The first reason relates to the mainstream users. Without them, a platform, and thus its users, lose all legitimacy, and for very good reasons. Nobody wants to be in a place that is full of Nazis — I know this because I myself am a mainstream user of Aether, and I don’t want to see such things. There are simply too many more useful things I could do with my life than scroll through what would essentially be an awful place to be.

On the other hand… if this were the be-all and end-all of the topic, then, well, this would also be called the tyranny of the majority. It’s important to realise that our societies are a direct result of thousands of years of social evolution, and if we ended up with societies where free speech is protected in law, there must be very good reasons for it. The argument above for mainstream users does not address the need for free speech that those societies saw, and that should give us pause. If all we needed was to see the posts that were agreeable to us, there would be no need for free speech at all. But that’s not how our society works — it explicitly tries to create a ‘marketplace of ideas’ rather than a monoculture. So it is our job to figure out why this is the case and apply what we learn to Aether, so that we can build something better than the corporate-owned fiefdoms of bubble-wrapped saccharine ideas that we have today. The marketplace of ideas needs to be a battlefield, a fair battlefield, not a Garden of Eden, but we also cannot allow weapons of mass destruction onto that battlefield, because they destroy not only the combatants but also the onlookers, since no one wants to be associated with, or even know of, a place where most of the opinion within is distasteful.

Which brings us to the second point: the one that relates to the edge users. Without them, a platform is basically an echo chamber, with no one challenging the mainstream point of view. To create a stronger society, both in our daily lives and within Aether, this simply cannot be: a society without dissenters is a body whose immune system has atrophied, one that is ready to be devoured by the next challenger to the throne. To be fair, most edge ideas are terrible. That is the nature of the edge. Once in a while, however, somebody on the edge discovers something truly wonderful, and boom: whichever society is capable of giving space to that idea, a sufficiently pluralistic society, benefits enormously, and then humanity as a whole. Ideas like: hey, we should take a shot at making swords out of iron instead of bronze (resulting in the Golden Age of Ancient Greece); or, I have this explody thing from China for fireworks and I decided to make it explode in a tube (resulting in the Ottoman Empire and the European Renaissance); or, everyone we give a crown to seems to go mad after 15 years, so we should cap this at 10 years and then elect the next one (resulting in the French Revolution); or, sous les pavés, la plage (‘beneath the paving stones, the beach’, resulting in women joining the workforce en masse, making us all richer and more productive); and so on. I want to emphasise how hilariously dumb these ideas sound at first, yet these very ideas are now the foundations of every non-awful country that exists today. There aren’t infinite humans around; we cannot afford to exclude the very few brilliant thinkers within the edge even if most of the edge is a cesspool, because this has always been the case. For all we know, somebody might post on Sci-Hub a working paper for supersymmetry, and with it a plan for an implementable Alcubierre drive, just next month — and by the next decade we’d be settling off-world colonies in Alpha Centauri.

But we cannot let the edge take over entirely either. The edge also needs a mainstream majority to be there, because for an edge idea to find purchase in the mainstream, the mainstream needs to be there in the first place. That means, for us to thread this needle in the best way possible, Aether needs to be friendly to the mainstream — if for nothing else, then to keep the mainstream there, so that whatever diamonds rise out of the cesspool of the edge can find a mainstream audience. So even if you are a complete edgelord disgusted by all the civility and sanity of the modern world, your interests still align with having a mainstream audience in Aether, simply because it’s no fun trolling the trolls, since they know all your tricks. :slight_smile:

So the compromise I’ve arrived at turned out to be very similar to how the US constitution works with regard to free speech. The speech itself is protected to the max (unless it incites violence), but the conduits are not, and those conduits have to be won as spoils in the eternal battlefield that is the marketplace of ideas. Aether works the same way. To protect the majority mainstream audience we have an SFWlist, which is where most people live, but we also have an edge outside it, which is not normally surfaced directly to the users who live in the SFWlisted communities. However, the edge is there, something that is unique to Aether and not true of the corporate fiefdoms we have today. This means the mainstream users can, in fact, visit the edge, and if the edge people can make a good argument, well, they’ve just won a supporter and increased their influence. The mainstream wins because they get exposed to edge ideas in a way that is not possible elsewhere, but they do it on their own terms, at an amount and speed they control — if they want to see none of it, they can see none of it. The edge wins because they get to spread their ideas in a way that is absolutely not possible in any other place — but they have to do it on civil terms, by convincing people, not by verbal violence and force-occupying mainstream communities. If they don’t, Aether gives a lot of power to the individual user, mainstream or edge, to block, to prevent access, and most importantly to refuse being a conduit, so that mainstream users can very effectively defend themselves.

In essence, as a result of this thought process, we do in fact have censorship in Aether. I’m straight up using the term ‘censorship’ instead of ‘controlling access’ because I don’t feel the need to get into that argument, but ‘blocking access’ is what it means, so please understand it that way. So Aether has several kinds of censorship. Here’s the rundown:

  1. If you are posting anything that is illegal, SFWlisted or not, we have to block this. That should be pretty obvious. Aether is subject to US jurisdiction. This censorship would come from Aether itself.

  2. We have the SFWlist. This list is permeable and can be disabled by the user. It’s not a jail, because the list is not compulsory: every user can choose to disable it. It comes enabled by default because we want a majority mainstream audience. However, even without disabling the list, there are ways within the app for users to see or interact with non-SFWlisted communities. For example, if you search for anything, the search will give you the option to include non-SFWlisted content in the results. Or if you subscribe to a non-SFWlisted community, that community becomes SFWlisted for you, and so on. The list is provided by Aether, but the decision on whether to apply it, either wholesale or in part, rests entirely with the user. So if you’re an edge user, you cannot force your way into broadcasting your message through other people’s computers for free, and you’d perhaps call this censorship. On the other hand, what you call censorship, I would call ‘access control’ dictated by the user, which is completely fine.

    This SFWlist will eventually become one of many lists, some of which are whitelists and others blacklists. I do realise that at this point the list is a point of subjective control, but the intent is that I eventually have no personal control over the SFWlist once the community grows to a certain size and other people create competing SFWlists, which can also be subscribed to or unsubscribed from at will, just like the SFWlist. Once this happens, I will stop updating the SFWlist and let those lists take over. Personally, I consider managing the SFWlist one of my biggest chores, so I would rather get to this point sooner rather than later — I have no interest in, nor time for, being the arbiter of what people consider SFW or not.

  3. The individual communities have moderators, who are elected. These mods can choose to censor your posts if you are not following the rules. Anybody who accepts those people as mods (most people accept the default mod list of any community) will not see your posts; anybody who doesn’t, will. Mods can be elected and the elections are always ongoing, so under the worst possible interpretation this could be called censorship, but it is in fact simply the result of an election. That said, if you consider yourself the only person entitled to choose who sees your messages, the supreme ruler regardless of what people vote for, then you could call it censorship. In reality, though, it’s just the result of an election, a consequence of the concept of the consent of the governed.

In short, Aether has censorship — but it largely comes not from the app itself, but from its users, in the form of the users choosing what they want to see and what they want to broadcast. Aether without censorship does not make much sense, because what you call censorship is what Aether calls consent, and without the consent of the users, you have no conduit. Without a conduit, you have no network, and Aether becomes pointless. Only by working with other people and getting them to accept your content can you use Aether, because otherwise they will simply not accept your messages onto their computers, much less broadcast them to others.
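The rundown above can be sketched as a small filter. This is purely illustrative: none of these names come from the actual Aether codebase, and the point is only that the SFWlist is supplied by Aether while the decision to apply it rests with the user.

```python
# Hypothetical sketch of the SFWlist behaviour described above. The list is
# provided centrally, but whether it is applied, wholesale or in part, is
# decided entirely by the user's own preferences.

def visible_communities(all_communities, sfw_list, user_prefs):
    """Return the communities a user sees, honouring their own choices."""
    result = []
    for community in all_communities:
        if community in user_prefs.get("subscribed", set()):
            # Subscribing to a non-SFWlisted community effectively
            # SFWlists it for that user.
            result.append(community)
        elif not user_prefs.get("sfw_list_enabled", True):
            # The list is not compulsory; the user may disable it outright.
            result.append(community)
        elif community in sfw_list:
            result.append(community)
    return result
```

For instance, a user with the list enabled but subscribed to one edge community would see that community plus everything SFWlisted, while a user who disabled the list would see everything.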


Thanks for clarifying that, and I think the SFWlist may well work. I agree about the Edge (either edge) not just pushing out the majority.

The general guidelines are not too bad on being kind to one another, as personal attacks and discrimination do fall foul of them. We also saw just last week that words can cause harm and even physical violence.

Certainly incitements to violence are not acceptable, and neither is “illegal behaviour”. Every “freedom” has its limits: here in South Africa I have freedom of opinion and speech too, but it is limited by not libelling another person or discriminating against them by race, religion, gender, age, etc. The US has something similar, covered, I think, under freedom of association, which applies to public places. I’d like to have the freedom to drive on the other side of the road, but for community safety and legal reasons I don’t have that freedom. It’s how a community gets to live together.

Certainly we don’t have to agree on ideas and opinions, but then I should be kind and not insult you. It’s about attacking the idea and not the person as already stated.

I can see that a non-SFWlisted forum needs to exist, and as long as those communities set their own rules and manage themselves, that is fine. Folks just need to realise that the general forums are a “public place” in terms of US (and most countries’) law. “Illegal” also covers a bit more than just threats of physical violence.

I really like Aether’s community moderation model; it feels to me both beautiful and simple, anarchist and democratic in the appropriate proportions. But I think your approach to illegal content is a bit US-centric. I have a suggestion for decentralizing this without causing legal problems.

The second thing is about censorship. It’s important to understand that Aether is still subject to US laws and US law’s definition of free speech. We cannot allow any illegal content, and if it comes to that, there will be ‘censorship’ in the way that we will have to find ways to block such content to remain legal. In this context, my opinion does not matter, no one’s opinion matters, this is the law, and it will happen.

  1. If you are posting anything that is illegal, SFWlisted or not, we have to block this. That should be pretty obvious. Aether is subject to US jurisdiction. This censorship would come from Aether itself.

It makes sense that content hosted on Aether servers should be subject to US jurisdiction, but when a peer floods the network with a message, that message is probably in actuality subject to every jurisdiction in which it is rebroadcast. I suppose this isn’t a problem for @b or me, considering that we’re American, and so we don’t really have to worry about being held responsible for others’ edgy posts under Malaysian (or German, etc.) law, but this may not be the case for Malaysian users (IANAL). Moreover, this sort of sucks for folks in jurisdictions in which certain works are public domain but may not be under US law, which allows works to be removed from the public domain by copyright law.

So I propose this: when you launch Aether for the first time, it guesses your legal jurisdiction based on your locale (which should be manually editable/correctable by the user). We then treat each jurisdiction as a sort of community layered over every other community — there are mods for that jurisdiction responsible for blocking illegal content, but they don’t affect people outside that jurisdiction, and our “understanding” of US law is governed democratically in the same way as the rest of Aether. This way:

  1. no user has to download or host any content that is illegal in their jurisdiction, so non-US users don’t accidentally break the law by rehosting stuff that’s legal in the US but illegal in their country
  2. no user is subject to the laws of a jurisdiction that is not their own
  3. the moderation duties are passed from Aether to the community mods. We’d probably still want @b to start as the US mod so that things start sane, but this means that we can eventually stop wasting his time with Section 230 requests (if we scale up enough for that to be a significant time investment).
  4. IANAL, but I believe that this shifts legal responsibility for hosting illegal content from “the network” to users that opt out of their jurisdiction. Moreover, I think the US at least has section-230-style provisions for users that are acting in good faith.
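This jurisdiction-layer idea could be sketched roughly as below. Everything here is an assumption on my part: Aether has no such API, the function names are invented, and a real locale-to-jurisdiction mapping would need actual legal input.

```python
# Rough sketch of the proposal: guess the user's jurisdiction from the
# system locale (user-correctable), and apply only that jurisdiction's
# blocklist to that user. All names and structures are hypothetical.

import locale

def guess_jurisdiction(loc=None):
    """Guess a jurisdiction code like 'DE' from a locale string like 'de_DE'."""
    loc = loc or (locale.getlocale()[0] or "en_US")
    parts = loc.split("_")
    # Fall back to US if the locale carries no country code.
    return parts[1][:2].upper() if len(parts) > 1 else "US"

def is_blocked(content_hash, user_jurisdiction, blocklists):
    """Only the user's own jurisdiction's blocklist applies to them."""
    return content_hash in blocklists.get(user_jurisdiction, set())
```

This captures points 1 and 2 above: a German user never hosts what is on the German list, and is never affected by what is only on some other country’s list.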

The Six-Hive Transit System welcomes you to America … for a list of local regulations not included in your customary law code, select ‘law.’

If you are posting anything that is illegal, SFWlisted or not, we have to block this. That should be pretty obvious. Aether is subject to US jurisdiction. This censorship would come from Aether itself

Can you elaborate on how this censorship would happen? Would the Aether client run some kind of blocklist that prevented its users from hosting specific pieces of content? Won’t there eventually be other Mim clients in the network that would evade this? There’s no way to remove anything from the graph entirely, right?


The most you could do, IMO, is provide tools for the authorities to handle it. Seeing the hardcore abuse that happens with YouTube, I’d rather keep that separate from this system. We’re p2p; we could do external filtering with, say, some special built-in “users” that can send out removal orders for anyone, because anyone can see everything. If the law comes in, then we can say that we are prepared. And not just for US jurisdiction; that is not the only one this reaches.
I kinda romanticise the idea of forcing large companies to police their content themselves, and then pushing the blame onto them if their bot (either their own or one they pay for) catches content that is lawful.
That would also double as a “delete” feature for normal users.

Can you elaborate on how this censorship would happen?

We do have a list called the badlist, which the app checks for hashes of content that is illegal. You can find the requisite code by searching for badlist in the Github repository. It is separate and isolated from the rest of the source code and is not part of the protocol. This was a legal requirement because of the DMCA: if somebody sends us a DMCA request, we need to legally be able to block the content. No one has sent us one so far, and the badlist is currently empty. You can find the exact URL of the badlist in the repository and check for yourself that it is empty.
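A hash-based check of this kind could look something like the sketch below. This mirrors the idea described, not Aether’s actual implementation; the hashing scheme and function names are assumptions of mine.

```python
# Illustrative sketch of badlist checking: only content fingerprints are
# compared, and posts whose fingerprint appears on the (normally empty)
# badlist are dropped. Not the real Aether code.

import hashlib

def content_fingerprint(body: str) -> str:
    """Hash the post body; only the hash is ever compared or published."""
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

def filter_badlisted(posts, badlist_hashes):
    """Drop posts whose fingerprints appear on the badlist."""
    return [p for p in posts if content_fingerprint(p) not in badlist_hashes]
```

One nice property of publishing hashes rather than content: anyone can verify what the list blocks without the list itself ever containing the blocked material.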

The presence of this list, while a legal requirement, is beneficial for the users as well, since you share everything on the network from your own computer, too — so you are invested in it being legal content as we all are. What you share from your computer is still your own legal responsibility, but thankfully Aether only distributes text, nothing else, and it’s very hard for text to be illegal in the United States.

Won’t there eventually be other Mim clients in the network that would evade this?

Yes, but those would be other apps, so whoever created them would also be the responsible party. You could also grab the codebase and build it without the badlist detection, and that would be a different app as well. Or, even simpler, you could just block the badlist URL. But you do want that list to exist and remain functional, because it’s in everyone’s benefit to make sure no one shares illegal content, especially since the list is public and you can check it to make sure we’re not ‘secretly’ hiding content from you.

Mind that this list is only used for illegal content, not for moderation — moderation is handled at the mod level on a per-community basis.

We do have a list called badlist that the app checks for hashes of content that are illegal.

How does this work in terms of the content graph? Does it just not display the offending item in the client? What happens to everything down the tree from that item?

Also, I’m wondering why your client is responsible for removing DMCA’d content, as you are not the host of that content. Wouldn’t that be like requiring mail hosts to filter or remove any email that contained a particular copyrighted MP3, or uTorrent to refuse torrents for files with hashes known to be copyrighted material?

How does this work in terms of the content graph? Does it just not display the offending item in the client?

It hides the content from the user’s view, and uniquely to this list, it stops transmission of the content from that backend.

What happens to everything down the tree from that item?

It would act like a piece of missing content and be handled as such. Right now, the whole downstream disappears (not because it is blocked; it disappears because the parent is missing). In the case where the content is actually missing (i.e. network update delay), this is proper behaviour, since the remainder of the content will pop up once the missing piece arrives. For badlisted content, though, we’ll eventually have to find a way to handle it a little better, like marking the content’s spot and letting the children show. Since we have never had to use the badlist yet, this is pretty low on the feature requests list; when the time comes, we need to make sure the children, who aren’t themselves blocked, can still show and not be affected by their parent being badlisted.
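The tree behaviour just described can be modelled with a toy function: today a badlisted (or simply not-yet-arrived) parent makes its whole subtree disappear, while the suggested future “tombstone” behaviour keeps the children visible. This is a model of the description, not the real implementation.

```python
# Toy model of the thread tree: a node is hidden if it is badlisted, or if
# any ancestor is badlisted and tombstone placeholders are not in use.

def visible_nodes(nodes, parent_of, badlisted, tombstone=False):
    """Return node ids visible given badlisted ancestors."""
    visible = []
    for node in nodes:
        ancestor, blocked = parent_of.get(node), False
        while ancestor is not None:
            if ancestor in badlisted:
                blocked = True
                break
            ancestor = parent_of.get(ancestor)
        if node in badlisted:
            continue  # the badlisted item itself is always hidden
        if blocked and not tombstone:
            continue  # current behaviour: the whole subtree disappears
        visible.append(node)  # tombstone mode: children still show
    return visible
```

With a chain a → b → c and only a badlisted, current behaviour shows nothing, while tombstone mode would still show b and c.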

Wouldn’t that be like requiring Mailhosts to filter/remove any email that contained a particular copyrighted MP3 or uTorrent to refuse to allow torrents for files with hashes known to be copyrighted material?

Not exactly — there’s a difference between a mail host, which is a private place for your email, and a public forum of discussion. But even in your example, Dropbox has been forced by law to do exactly that: scan for hashes of copyrighted files and remove them from people’s Dropboxes. So it can happen. The saving grace of the mail servers seems to be that they have limits on how big an email attachment can be, so sharing that kind of content via mail is really inconvenient — not that they are somehow exempt from copyright laws.

On the other hand, this is a thorny and legally unknown area, because we are dealing with multiple jurisdictions. Ideally speaking what we are doing does qualify under safe harbour and we would not be responsible, however, there are so many qualifiers to that statement that this is the kind of conversation that buys your lawyer a summer house in South of France.

In short — it’s unclear. A compelling case could be made for us not being responsible, and we do make it very clear that it is completely the user’s responsibility. However, our responsibility under the DMCA (to which the badlist owes its existence) is, to my limited legal understanding, pretty clear, because any US company is responsible for it; I think that has no relation to hosting content or other minutiae of technology.

Apologies for the late response. You’ve managed to guess most of what we have documented of this issue in the spec, but I think it’s deep enough in the docs right now so that not many people can find these details. Let me summarise what we have — and I’d love to hear if you see any holes in the thinking.

The issue: not everyone is subject to US jurisdiction. That is less of an issue than it appears, though not in the way you’d think, because US laws on free speech are among the freest, if not the freest, in the world. So there is seldom a case where some speech is legal outside the US but not inside it. It’s very rare for the US to demand action over a piece of writing where other countries do not, but it’s very common for other countries to demand action where the US does not. That’s a good thing.

So, it is very possible and very common for some speech to be legal in the US and not legal in, say, Germany. I’m picking Germany in particular because, a) the laws on free speech in Germany are visibly different than the US, and b) it is a ‘respectable’ jurisdiction in that most people would not dispute its laws were put in place through due process, and thus reflect the wishes of the citizens of Bundesrepublik Deutschland.

So, how do we deal with this? My thinking is structured in three parts.

First: if we apply US law to Germans, a) we’d be cultural imperialists, and b) it would not be enough. Somewhat understandably, German law specifies that most conversation regarding, and in support of, actual nazism be curtailed, as far as my understanding goes — this is not merely a case of it being unacceptable in civil society (as it is in most of the civilised world); to my best reading of the matter, it is illegal.

Second: most reasonable people would agree that, given the historical context of Germany, making this illegal is likely a good, practical idea, even though I would personally object to it on free speech grounds. Even so, if this law is useful in real life for preventing a rebirth of nazism in Germany, I am going to go with that over an absolutist free-speech approach.

Third and last: this is a decision made by Germans, for Germans, because, as mentioned above, Germany is a jurisdiction whose laws are built upon collective consent through a functioning democracy. Germans vote on and consent to these laws themselves. Even if those laws mean censorship of what Germans can see, it is not up to us to question their decisions, as long as those decisions are made collectively and the people to whom this censorship would apply have given their consent.

The first part establishes that there is a decision in a country, relevant to free speech in that country, which diverges from the established US notion of free speech. The second part establishes that this limitation is justified and has real-world benefits. The third part establishes that this limitation is put in place through the consent of the governed. In most cases, the third part overrides the second: if a country has chosen to do something specific that deviates from the baseline Western idea of free speech, and that change has been approved and put in place through means that indicate popular consent, then for our purposes, this limitation is valid, and we should try to find a way to respect their wishes, even if none of us has anything to do with that jurisdiction.

Given that we have established the existence of at least one country where this is the case (Germany), this means we need to have a system through which this could work.

There is a system in Aether that would allow for this, based on the consent of the user. As you know, we have moderators, who tend to their communities. What we also have, and have not granted to any country yet, is a special type of moderator called a censor. [1]

These censors are moderators of the whole network, all communities (i.e. another name for a censor is a global moderator). The censor keys given for this purpose represent not individual people, per se, but a particular organisation, intended to be a nonprofit or an official branch of a government, which moderates all content for the people who consent to that moderation.

And when, say, a German user joins the network, we make a guess. In the onboarding process, we say something in the form of: “Hey, you seem to be from Germany. This is the moderation provider for German law, Bundesministerium für Verkehr und digitale Infrastruktur. To remain compliant with German law while in German jurisdiction, you should enable this.” And then, just in case we made a mistake, we offer a list of other censors, likely from other countries, or even volunteer individuals or organisations that do this work as a public good.

The user chooses whether he or she wants to go along with this. Even a German user is given the chance to decline and take the risk; that is between him/her and Germany. If the law does have popular consent, then ensuring compliance should not be a problem for Germany.

Nominally, these censors would be granted authority to exist by Aether’s primary root key [2], but the enforcement of a censor happens only through the user selecting and activating them (or not editing the default selection) — so their power has to come from the consent of that end user.

It is also possible to be elected into the position of a global censor (a global mod). However, I do realise this is very hard, because the rule for Aether elections is that at least 5% of the population must have voted in the last 6 months, with 50+% of those votes in favour. At the network scale, that is a lot of people.

However, if you get elected as a censor by the whole network, then your work applies to the whole network by default unless people explicitly disable it; that is something that never happens for censors granted by Aether to individual countries’ executive bodies, should they want one. So moderation at the global level very closely mirrors moderation at the community level.
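The election rule stated above (at least 5% of the population voting in the last six months, with 50+% of those votes in favour) is just arithmetic; a sketch follows. The thresholds come from the post, the function itself is my illustration.

```python
# Election quorum and majority check per the stated rule: quorum is 5% of
# the population voting, and the candidate needs a strict majority (50+%)
# of the votes actually cast.

def election_passes(population, votes_for, votes_against):
    turnout = votes_for + votes_against
    if population == 0 or turnout < 0.05 * population:
        return False                    # quorum not met
    return votes_for > 0.5 * turnout    # strict majority of cast votes
```

So in a network of 1,000 users, at least 50 must vote, and more than half of those votes must be in favour.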

The core principle behind this is that moderation has to be based on consent, and it has to be transparent. We ensure transparency ourselves (you can see every mod action today in the mod actions tab of each community), and consent has to be gained either by users choosing you as their censor/moderator, or by enough other users voting for you that you get elected and your moderation is applied by default. These principles apply to individual moderators the same way they apply to countries’ official censors, with one difference: a country’s official censor cannot get elected and become a global censor [3], while a person or nonprofit org can.

There is one carve-out reserved for the United States, because our small company is a US company. This means we do have to apply US law regarding free speech, which is thankfully pretty much the freest out there; in essence, we sort of lucked out on censorship because of the First Amendment. We do have to apply the DMCA globally, but that is both off-protocol and not part of the free-speech conversation: if you’re doing something that gets you DMCA’d as a user, you are probably violating someone’s intellectual property, and that is not covered under freedom of speech in any country that I know of.

In summary, since we don’t speak German, we cannot be effective mods for Germany. To alleviate that, we’d love to have Germans volunteer and help us do it, or, if the German government wants, they can get a censor key from us and do their own enforcement. I think the latter is where all the major online platforms are going to end up in a decade, and we’re already built for that.

[1] The name ‘censor’ is deliberately chosen to make sure that no person or group of people appointed as one can call themselves something like a Ministry of Truth.

[2] This root private key is also where the trust for things like the orange names we used to have for Patreon supporters comes from. It can be disabled by modifying the app, but at that point it’s a different app, and then it’s your risk.

[3] This avoids the case where the majority of people in the network are from one country and, by proxy, that country’s censor gets elected as the global censor.


Thanks for the long reply! My apologies for not checking the docs; I’m realizing now that this is covered in the developer docs (which I hadn’t yet gotten around to reading) rather than the surface docs. That said, I think your post does an excellent job of condensing all of that information.

There is one carve-out reserved for the United States because our small company is a US company. This means we do have to apply US law regarding free speech, which thankfully is pretty much the freest out there — in essence, it means we sort of lucked out on that regarding censorship because of the First Amendment. We do have to apply DMCA globally, but that is both off-protocol and also not a part of free speech conversation: if you’re doing something that gets you DMCA’d as a user, you are probably violating someone’s intellectual property and that’s not something covered under freedom of speech in any country that I know of.

Well, the US is fairly permissive when it comes to political speech[1], but this is not necessarily the case for technical speech: other countries are known to have friendlier regulatory regimes for sharing security research.

The DMCA itself is actually not the worst example of this: it includes anti-circumvention provisions that make it illegal to discuss the nature of certain DRM schemes. Mind you, the European Copyright Directive includes similar, though not identical, terms. But it is up to each EU member state to implement the directive (which they all do slightly differently), so there is a specific German implementation of the Copyright Directive (written in German, obviously) that corresponds to something like the DMCA, but the two are not exactly the same (nor could they be; it’s not even in English!). As such, it could very well be the case that the US and Germany disagree on who a certain piece of intellectual property belongs to, whether it is public domain, which terms of its license are enforceable, and what counts as “fair use”. For example, IIRC, in Germany any clip under 20 seconds counts as legally protected “de minimis” use. So even if you have no worries about the DMCA on freedom-of-speech grounds[2], there are legalistic and cultural-imperialism concerns!

On the other hand, this may be as simple as saying “German users who wish to view content that is illegal in the US must use this separate root key”, if it is in fact legally the case that the key serves as an effective liability shield; IANAL. Personally, I would hope that the user checking “I am German, not subject to US law, so I will download this from someone else who says they are German” would be enough.


[1] Well, apart from ag-gag laws and, depending on what state you’re in, commentary on the Middle East. Some of these have already been ruled unconstitutional violations of the First Amendment, but others remain on the books and are enforced.
[2] Assuming you have some answer to the question of how to deal with bad-faith DMCA requests used to chill speech.