I’ve spent the past two years of my life trying to stand up a platform where people can share data. Our team is frustrated by the current state of the internet, where data lives in silos and user activities are strictly locked to a single platform. We’re trying to build an alternative where core concepts like a user’s social graph and content feed can follow them from platform to platform, and any developer can build new infrastructure that leverages all of a user’s existing data.

And while that journey is going well overall, we’ve hit some eye-opening barricades when it comes to managing the content on our platform. Censorship on the Internet happens at every layer, including layers most people don’t even realize exist. Our website has been censored by our hosting providers, by platforms like Twitter, by search engines like Google, and we’ve even been censored by the Chrome browser itself.

My goal today is to walk you through all of the layers of censorship that exist on the web. Most people don’t realize just how many different actors are involved in keeping a website available to the public, nor do they realize just how many of these actors can independently decide that a particular website should no longer exist.

Gone with a Bang


Our story starts in May 2020, when one day our website suddenly vanished.

Our first round of debugging came up completely empty. Cloudflare was configured correctly, community members on numerous continents confirmed the outage, the server was up and responding, and all services were locally reporting healthy. But most importantly, the outage wasn’t some sort of 504 or ‘server is busy’ message. The browser was acting like nobody had registered that domain at all.

We contacted Namecheap and discovered that they had forcibly unregistered our domain because we had been accused of phishing fraud. And upon further inspection, we realized that one of our own partners had issued the abuse complaint that led to our removal.

Without going too deeply into the weeds, we had been working with Uniswap to host an authentic clone of their application on our website. Uniswap’s anti-phishing team had found our clone and mistaken it for a common phishing scam, and thus had asked Namecheap to take our website offline.

We talked to both Namecheap and Uniswap, worked through the mistake, and got our website back online. Our first brush with censorship had been brutally user-facing, but appeared to be an honest mistake. We set up better abuse reporting flows, wrote some spicy Twitter threads, and went back to business as usual. We figured that since we now had better communication with our domain registrar, censorship wasn’t likely to happen again.

US-East is Dead


Things were okay for about a month, at which point we received another round of reports that our website was down. This time, instead of being completely gone we were just getting a standard ‘server not found’ message, and it was only happening in our US-East region. We tried to ssh into our US-East servers and found that they were… not there.

It turns out that our US hosting provider — IONOS — had frozen our servers and our account for abuse. The complaint was once again for phishing, and this time the offending content appeared to be a genuine phishing site, one that was phishing users for their… IONOS credentials. Unsurprisingly, IONOS was not amused. And unlike Namecheap, they didn’t give our servers back after we had blocked the bad guys and disabled the phishing website.

We stabilized by re-routing US-East to the EU region. This had a latency penalty, but at least ensured uptime for our US customers. We later found a new US hosting provider and migrated everything over. And while the overall impact for our users was small, the impact for our team was more pronounced.

This was the first point where we started to wonder if we were in over our heads. We had been running our website for all of four months and were now dealing with abuse complaints and de-platforming on multiple fronts. And unlike our previous adventure with censorship, the abuse this time appeared to be legitimate. It seemed like things would be getting worse before they would be getting better.

Freedom of Speech


Our website is just infrastructure. If you are unfamiliar with what we do, we build a peer-to-peer network that allows people to share files and data with each other. And when we say ‘files and data’, we mean much more than Linux ISOs and photo albums. We mean “data” in the big tech sense — a user’s social graph, the feed of comments on a video, the algorithms that help build a user’s content feed. We’re trying to rewrite the foundation of the Internet.

Our website is a convenient portal to access this data. When you load an application like Uniswap on our website, our server isn’t hosting the application code. The application code is being delivered from the Sia network. When we “block” something on our website, we aren’t taking it offline, we’re just refusing to fetch that data from the Sia network.
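
To make that concrete, here is a minimal sketch of what a portal-level block looks like. The identifier format, the in-memory blocklist, and the fetchFromNetwork helper are illustrative assumptions rather than our actual portal code; the point is that a block only changes what this one server is willing to relay.

```go
// Minimal sketch of a portal-level block, under assumed names. The
// blocklist, hashID, and fetchFromNetwork are illustrative, not the
// real portal code.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"net/http"
	"strings"
	"sync"
)

var (
	mu        sync.RWMutex
	blocklist = map[string]bool{} // hash of a blocked identifier -> blocked
)

// hashID hashes an identifier so the blocklist never stores the
// blocked links themselves.
func hashID(id string) string {
	sum := sha256.Sum256([]byte(id))
	return hex.EncodeToString(sum[:])
}

// fetchFromNetwork is a stand-in for retrieving content from the
// peer-to-peer network; the portal never stores this data itself.
func fetchFromNetwork(id string) ([]byte, error) {
	return []byte("content for " + id), nil
}

// handleFetch refuses to relay blocked content, but it deletes nothing
// from the underlying network.
func handleFetch(w http.ResponseWriter, r *http.Request) {
	id := strings.TrimPrefix(r.URL.Path, "/")

	mu.RLock()
	blocked := blocklist[hashID(id)]
	mu.RUnlock()

	if blocked {
		http.Error(w, "this content has been blocked on this portal", http.StatusUnavailableForLegalReasons)
		return
	}

	data, err := fetchFromNetwork(id)
	if err != nil {
		http.Error(w, "could not retrieve content", http.StatusBadGateway)
		return
	}
	w.Write(data)
}

func main() {
	http.HandleFunc("/", handleFetch)
	http.ListenAndServe(":8080", nil)
}
```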

This makes content management a lot more difficult for us than for a platform like Imgur or Github. When a user requests a file or application, there’s a good chance that none of our servers have ever seen that data before — we don’t know if it’s malicious. And because our website can be used to load entire applications, we also have no built-in mechanism that can be used to report abusive content; we don’t control the UI that is presented to our users.

On the whole, we believe that this is a good thing for freedom of speech. We believe that anyone should be able to publish a website to the Internet. We believe that anybody should be able to share files with friends and that you shouldn’t need to identify yourself to a corporation or have your data scanned and reviewed before you are allowed to put it out into the world.

For those reasons, we decided after the IONOS event that we were only going to block content from our website (sometimes called a portal) if people sent us abuse reports.

Chrome

I don’t think we even made it a full month before this policy got thrown back in our face. This time the problem wasn’t phishing, it was malware. And the catalyst wasn’t one of our infrastructure providers, it was Google Chrome.

In a single update, 2 billion users worldwide were simultaneously blocked from accessing our website. When we went to Google for help, they explained that we had malware on our website. They provided a sample set of links that contained malware, then stated that this was just a sample and that more malware existed. They told us that they would not unblock our website until we had found and removed all of the malware ourselves.

From the lens of censorship and freedom of speech, this action is much more concerning than our previous confrontations. With Namecheap and IONOS, we were facing challenges from a set of providers that we had formally engaged with. We had agreed to Terms of Service and also had alternative infrastructure providers we could switch to in the event of conflict.

With Chrome, we never agreed to a specific Terms of Service. We don’t have any business relationship or agreement with them. Stated bluntly: If Google decides they don’t like you, then for 65% of the world you simply stop existing. You have no recourse.

The terrifying thing about this is that Google is not an elected entity. Google has turned themselves into unelected regulators of the Internet, and they are held accountable only to their own share price. Today the ban is on malware, but over time this may grow to cover any type of content that Google does not see as favorable. Google can kick websites off of the Internet as a whole just as easily as they can kick content creators off of YouTube.

It took us over a year to get all of the pieces right, but we managed to dispel the malware warning by installing a malware scanner on our servers that watches every download request and runs a scan once the download completes. If the scanner (we use the open source ClamAV) detects malware, we ban the file for all future users. In practice, this has been enough to appease Google.
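
As a rough sketch (not our production pipeline), the core of that scan step looks something like the following, assuming the clamscan command-line tool from ClamAV is installed and that the ban callback writes to whatever banlist the portal consults on future requests:

```go
// Simplified sketch of the post-download malware scan. Assumes the
// clamscan CLI from ClamAV is installed; the paths and the banlist
// are illustrative, not our production pipeline.
package main

import (
	"fmt"
	"os/exec"
)

// scanAndMaybeBan scans a file a user just finished downloading and,
// if ClamAV flags it, bans the file's identifier for all future users.
func scanAndMaybeBan(path, id string, ban func(string)) error {
	out, err := exec.Command("clamscan", "--no-summary", path).Output()
	if err == nil {
		return nil // exit code 0: no malware found
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		// exit code 1: ClamAV detected malware
		fmt.Printf("malware detected in %s:\n%s", id, out)
		ban(id)
		return nil
	}
	// exit code 2 or another failure: the scan itself went wrong
	return fmt.Errorf("scan of %s failed: %w", id, err)
}

func main() {
	banlist := map[string]bool{}
	if err := scanAndMaybeBan("/tmp/example-download", "example-id", func(id string) {
		banlist[id] = true
	}); err != nil {
		fmt.Println(err)
	}
	fmt.Println("banned:", banlist)
}
```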

It’s a pretty neat solution, but it’s happening in the wrong place. We are a 16-person startup with $3 million in VC funding; Google is a $1.5 trillion company with over 100,000 employees. Google decided to solve a malware problem for its users by acting as an unelected regulator for everybody else, pushing the cost of operating a website (and of competing with Google itself!) onto smaller businesses, a move that conveniently also reduces competition for Google.

I Have no Mouth and I Must Scream

Somewhere around mid-2021 we experienced a sudden spike in the number of infrastructure providers that were de-platforming us. Entities that had previously been pretty chill became hostile and started pulling our servers off of the Internet.

We checked our emails, we checked our banlist, we checked our abuse management tooling. Everything was indicating that we were still processing reports in a timely fashion and keeping our website relatively clean. If anything, the number of abuse reports we were receiving had gone down.

And yet, our hosting providers were acting like we were ignoring their abuse complaints. After a moderate amount of frustration and some disjointed communication, we were able to figure out what was happening.

Emails that contained URLs to our website were being silently dropped. All indications to the sender suggested that the emails were sent successfully, and all indications to the receiver suggested that the emails had never existed at all. They weren’t just going to spam, they were being banished to hell.

Chrome had been bad enough. We were now being censored at the email layer and it was interfering with our ability to operate as a business. We couldn’t remove abusive content because we couldn’t receive reports. Email is not even part of our user-facing stack!

We migrated our emails to a new domain and asked our hosting providers to strip our domain name out of the email body when filing abuse reports. Most of our hosting providers were happy to oblige, but we had to drop a few that were unable or unwilling to take that extra step.

And we learned yet another important lesson about censorship. There are powers that be who can decide that you aren’t allowed to use email anymore, and if they don’t want you to exist your business is likely going to fail. It’s not clear who these powers are or what holds them accountable, and it’s not clear how to appeal a mistake if these powers turn against you.

Inter-connectivity and the Need for Neutrality


Every modern business is crucially dependent on a large number of independent services. The political climate of the world is such that every service has started to feel pressure to ensure that all of its customers are acting in a morally upstanding manner. This becomes a challenge when your business depends on services based in dozens of jurisdictions with cultural backgrounds that span hundreds of conflicting moral opinions.

As our economy and services become more deeply intertwined, an increasing number of players gain the influence and ability to de-platform a greater number of businesses and users. And these requirements compound against each other. If one service provider is particularly opinionated and quick to de-platform, everybody else is forced to give them a large amount of breathing room and become more oppressive towards their users to avoid potential conflict.

This does not scale. The end result will be a global monoculture where everybody is afraid to take risks or break the status quo, because nobody can afford to upset even one of the hundreds of services they depend on. Our culture ends up established and defined by giants like Facebook and Google rather than by users and creators, because only Facebook and Google have the resources to bully everyone else into allowing changes to happen.

The only way to avoid this endgame is to demand infrastructure that remains neutral. At the scale of today’s Internet and global economy, infrastructure that does not remain neutral will inevitably turn on its users and coerce them into a set of moral standards that are both arbitrary and enforced without consent.

Meltdown

Up until this point we had dealt with three major types of abuse. Phishing content was being policed at the DNS layer, malware was being policed by the web browser and at the email layer, and we had occasional brushes with law enforcement regarding terrorist propaganda.

Then one day we started seeing child porn (also called CSAM, for child sexual abuse material). While most other types of abusive content are policed selectively, CSAM is actively and aggressively policed by everyone everywhere all the time. Understandably so. When we started seeing CSAM, our infrastructure problems more or less immediately got out of hand.

Our otherwise favorite hosting provider, Hetzner, declared to us that any CSAM abuse complaints had to be handled within one hour (instead of the typical 24) or they would pull our servers offline. Initially, they gave us 30 days to get the issue under control, but 7 days later they changed their mind and said they were immediately terminating our service, and that we had 48 hours to migrate everything to another provider. Did I mention that Hetzner at this point was nearly half of our fleet? Did I mention that these “48 hours” were December 24 and December 25? Merry Christmas indeed.

Mevspace didn’t even bother sending us an abuse report. The first time they received word that CSAM was accessible through our website they pulled our servers off the shelf and immediately mailed our hard drives to the police. We never got our data back.

Between the months of November and January we lost something like 80% of our servers. And then we got hit by this beauty:

Around the world (though mostly in Europe), ISPs had flagged our website as a child abuse website and would no longer let their customers visit. Instead, their customers were shown a full-page warning that more or less translates to “STOP, this is a child porn website. Call this hotline and seek help!”

We have millions of non-pedophile users, many of whom were now being greeted by a message accusing them of pedophilia and encouraging them to seek help. As an incident, this is far worse than merely being knocked offline for a few days. And we now get to add ‘local ISPs’ to the list of infrastructure providers that have taken it upon themselves to ensure that we can’t exist.

The climax of our experience was a phone call from my landlord. “Hey uh… the child porn police visited you and left a business card. Is there something you need to tell me…?” This was followed by a conversation with a lawyer that ended with “You should stay in Denver for now because we don’t know if they are going to arrest you when you arrive at the airport in Boston.” My Valentine’s Day was more stressful than yours.

It was too much. We made an executive decision that staying alive was beyond our capabilities, and we let the website collapse.

Building Back Better

The most common suggestion we received regarding our CSAM problem was to connect our website to Facebook and send them every single file that our users upload, so that Facebook can scan each file and tell us whether it should be allowed on our website. A similar service is offered by Microsoft, and I believe Apple and Google each have something as well. These services are all free, so it doesn’t seem like that bad of a deal.

It’s a very bad deal. These services are only going to be free until the entire web has become comfortable with the idea that they should be mandatory in the name of protecting children. At that point, the big four will hold an effective monopoly, and they essentially get to set any price they want for the right to operate a website on the Internet.

There’s also no guarantee that these services will remain restricted to the moderation of child pornography. These services are managed by large corporations with profit agendas. If you can’t live without the CSAM filtering, and they decide that you can’t have the CSAM filter without also agreeing to filter out copyright-infringing files, your website is forced into whatever copyright monitoring scheme they’ve implemented.

There are also more nefarious things that could happen. Would Facebook refuse an $800 billion deal with China to censor political content? I don’t know, and I don’t want to be in a position where we get to find out.

These considerations ignore the more basic privacy concerns of sending every file we have to Facebook. Many of our users like us precisely because we aren’t Facebook, and if we’re sending literally all of our data to Facebook we’re also putting ourselves at a steep competitive disadvantage. And as we’ve seen with Amazon, large corporations are more than happy to use an unfair advantage to take over your business.

So the Facebook solution is out. We will not be sending our users’ files and data to a third-party service in the name of protecting children.

We’ve now spent more than six months building alternatives. We’ve increased the number of ways that someone can report abuse. We’ve created forms that people can fill out to mass-ban links. We’ve added significantly more robust automated processing of incoming emails. We’ve greatly increased the number of times that we say “this content is not and was never stored on our servers”.
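
As one small illustration of that email tooling, pulling reported links out of an incoming message can be as simple as the sketch below. The portal domain and the queueing step are placeholders rather than our actual pipeline:

```go
// Rough sketch of automated abuse-email processing: pull every portal
// link out of an incoming message and queue it for review. The domain
// and the queueing step are placeholders, not our production tooling.
package main

import (
	"fmt"
	"regexp"
)

// portalLink matches links to a hypothetical portal domain; a real
// pattern also has to cover subdomain-style and shortened links.
var portalLink = regexp.MustCompile(`https?://[a-zA-Z0-9.-]*example-portal\.net/[^\s">]+`)

// extractReportedLinks returns every portal link found in an email body.
func extractReportedLinks(emailBody string) []string {
	return portalLink.FindAllString(emailBody, -1)
}

func main() {
	body := `Abuse report: the following URLs host phishing content:
https://example-portal.net/AAAA1234 and https://example-portal.net/BBBB5678`
	for _, link := range extractReportedLinks(body) {
		fmt.Println("queued for review:", link) // the real pipeline enqueues these for blocking
	}
}
```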

We’ve directly integrated with the API of NCMEC — the National Center for Missing and Exploited Children. When someone reports CSAM, we immediately forward that file to NCMEC along with metadata like IP addresses, timestamps, and (if we have it) information like email addresses and credit card details.
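
For illustration only, a report forwarder might look roughly like the sketch below. The endpoint URL, field names, and missing authentication are placeholders; NCMEC’s real reporting API has its own schema and registration process, and our production integration also uploads the reported file itself:

```go
// Illustrative only: roughly what forwarding a report with metadata
// looks like. The endpoint, fields, and lack of authentication are
// placeholders, not NCMEC's actual reporting API.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

type abuseReport struct {
	FileHash   string    `json:"file_hash"`
	UploaderIP string    `json:"uploader_ip"`
	Timestamp  time.Time `json:"timestamp"`
	Email      string    `json:"email,omitempty"` // only included if we have it
}

// forwardReport serializes the report and submits it to a placeholder
// reporting endpoint.
func forwardReport(r abuseReport) error {
	payload, err := json.Marshal(r)
	if err != nil {
		return err
	}
	resp, err := http.Post("https://reporting.example.org/api/report", "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("report submitted, status:", resp.Status)
	return nil
}

func main() {
	_ = forwardReport(abuseReport{
		FileHash:   "abc123",
		UploaderIP: "203.0.113.7",
		Timestamp:  time.Now().UTC(),
	})
}
```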

We’ve improved our blocking practices. We’ve banned more IPs and VPNs than ever, and we’re more consistent about keeping Tor blocked (it’s actually not that trivial to fully block Tor). We’ve started blocking entire IP subnets if there are too many offenders within the subnet.
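
A minimal sketch of that subnet logic, assuming IPv4 addresses grouped into /24 networks and an illustrative threshold:

```go
// Minimal sketch of subnet-level blocking: count abusive IPs per /24
// and block the whole subnet once too many offenses come from it.
// The threshold and in-memory storage are illustrative.
package main

import (
	"fmt"
	"net"
)

const subnetThreshold = 5 // offenses from one /24 before the whole subnet is blocked

var (
	offensesPerSubnet = map[string]int{}
	blockedSubnets    = map[string]bool{}
)

// subnetOf reduces an IPv4 address to its /24 network,
// e.g. 203.0.113.42 -> 203.0.113.0/24.
func subnetOf(ipStr string) (string, bool) {
	ip := net.ParseIP(ipStr).To4()
	if ip == nil {
		return "", false
	}
	masked := ip.Mask(net.CIDRMask(24, 32))
	return masked.String() + "/24", true
}

// recordOffender counts an abusive IP and blocks its subnet once the
// threshold is reached.
func recordOffender(ipStr string) {
	subnet, ok := subnetOf(ipStr)
	if !ok {
		return
	}
	offensesPerSubnet[subnet]++
	if offensesPerSubnet[subnet] >= subnetThreshold && !blockedSubnets[subnet] {
		blockedSubnets[subnet] = true
		fmt.Println("blocking subnet:", subnet)
	}
}

func main() {
	for i := 0; i < subnetThreshold; i++ {
		recordOffender(fmt.Sprintf("203.0.113.%d", 10+i))
	}
	fmt.Println("blocked:", blockedSubnets)
}
```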

Over time, we got our website back online. We got the police off our backs. We got the FBI off our backs. We got Chrome off our backs. Emails containing links to our website appear to send successfully again. We got two alternate websites online in case the main one goes down again. The first is signup-only and has done a good job of filtering out abusers, who tend to prefer not to sign up for anything. The second requires a monthly credit card payment.

We’re still being censored by platforms like Twitter. I’m actually not going to link to our original website because I’m worried that Medium and/or Google will respond by reducing this post’s search score, which is actually a perfect example of the chilling effect that happens when censorship is arbitrary and opaque.

We’re still dealing with foreign police. We don’t even know where to start with repairing relationships with the ISPs that are calling our users pedophiles. My landlord and neighbors don’t trust me anymore and I don’t think there’s much I can do about that besides move. But we’re alive again and we’re growing.

The Path Forward

Over the next few years, I expect things will get worse rather than better. Based on current comments and conversations coming out of the Biden administration, I do expect Facebook’s child abuse API will become a mandatory part of running a website in America. I think it’s something we need to be actively fighting against, both technologically and politically.

I also think that blocking pretty much anything that gets reported is only a short-term solution. As we’ve seen with the copyright system on YouTube, if you don’t have good controls in place around who is allowed to file abuse claims, you end up de-platforming a lot of legitimate content at the hands of malicious actors. We’ve already had a number of our users get their legitimate content blocked over incorrect abuse complaints.

We have to be careful and proactive about our rights when we pick solutions for content management. Especially when CSAM is involved, people tend to lose their minds and reach for the “burn it all down immediately” solution. And because people get so worked up, CSAM ends up being very useful for pushing malicious corporate and political agendas. When someone claims that a concession or rights infringement is “to protect children”, you need to be extra careful.

Which means there’s a lot to build. Content moderation is an important problem, not just for malware, phishing, and CSAM, but also for more mundane problems like spam. If we don’t tackle the problem head-on, centralized corporations like Facebook and Google will solve the problem for us, and they will do it by taking even greater control of our lives.

I have a lot of thoughts and ideas about things we can be doing to build a future that both protects the freedoms of users and also inhibits content like spam and CSAM. But many of those ideas are still early and are outside the scope of this blog post today. If you’d like to learn more or be involved yourself, check out our company website at https://skynetlabs.com or visit our discord at https://discord.gg/skynetlabs.