Long overdue, but I’m happy to present the latest update for Skynet’s roadmap. Our recent chaos has caused us to reconsider what technology decisions are acceptable, and what it takes to build a robust platform that meets user expectations for decentralization.
Fluid Skynet Portals
It has become clear to us that we cannot use portals as a stable piece of infrastructure. The original idea behind portals was to make them trusted, centralized infrastructure, but to allow users to easily switch between portals in the event that a portal went offline. Users would still have to find a trustworthy portal, but would face no vendor lock-in.
This original model was (and still is) several steps ahead of most web3 providers. Major infrastructure providers like Infura and OpenSea are both closed source and difficult to operate. Skynet portals, on the other hand, are open source and can readily be swapped for one another.
The challenges with the original model were nonetheless significant. Users on different portals had difficulty sharing files with each other, because files uploaded to one portal had a completely different URL than files uploaded to another portal. Retrieving files from a different portal required editing the URL, which is not difficult but is inconvenient.
The UX annoyances surrounding Skynet portals became more pronounced when we started experiencing regular deplatformings. Siasky became unstable, and even users who were on a common portal had to change URLs frequently.
To address this, we’ve kicked off a project called the Skynet Kernel, a webapp plugin that automatically switches the user to their favorite portal, and that can automatically fail over if the user’s preferred portal is offline or otherwise unavailable. Thanks to the Kernel, users can expect a reliable and consistent experience regardless of which portals they use, which portals their friends use, and which portals are currently offline.
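The failover behavior described above can be sketched as a simple preference-ordered selection. This is an illustrative sketch, not the Kernel's actual API: the portal URLs and the `isPortalUp` health probe are invented for the example.

```typescript
// Hypothetical sketch of portal failover: walk the user's
// preference-ordered portal list and pick the first healthy one.

type Portal = string;

function selectPortal(
  preferred: Portal[],
  isPortalUp: (portal: Portal) => boolean
): Portal | null {
  for (const portal of preferred) {
    if (isPortalUp(portal)) {
      return portal;
    }
  }
  // Every portal on the list is unreachable.
  return null;
}

// Example: the favorite portal is offline, so the user is
// transparently failed over to the next portal on the list.
const up = new Set(["https://portal-b.example"]);
const chosen = selectPortal(
  ["https://portal-a.example", "https://portal-b.example"],
  (p) => up.has(p)
);
```

The real Kernel performs this selection automatically in the background, which is why users never need to edit URLs by hand.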
The Skynet Kernel also means that less user education is required to get people using Skynet, because Skynet overall feels significantly more like the traditional web despite being decentralized.
Skynet Portal APIs
Portals on Skynet were originally, like most other web3 infrastructure, fully trusted. Users were expected to find a reliable and trustworthy portal operator and then hope that the operator did not inject malicious code while they were using Skynet.
Though this is how pretty much all of web3 works, we lost faith in the model. Especially when users are using multiple infrastructure providers and failing over between them, users need to have confidence that they are protected against all forms of malice.
We have since updated the Skynet portal APIs so that all major functions have fully verifiable endpoints. To the best of my knowledge, Skynet is the only web3 platform today that has accomplished this. Whether you are uploading files to Skynet (files of any size!), downloading files from Skynet (again, files of any size!), or using dynamic content, the portals offer cryptographic assurance that the request is being completed correctly.
The one major caveat with this upgrade is that modern web browsers do not have the ability to verify these cryptographic proofs natively. To get full assurance that the portal is not acting maliciously, users need to install the Skynet Kernel browser extension. Even without the extension, we are able to significantly reduce user exposure to malice, but a user is only fully safe with the extension installed.
While the API supports all types of interactions trustlessly, the Skynet Kernel currently only supports registry actions and small file actions (both uploads and downloads). The browser extension also does not yet support trustless webapp loading. All of these features are in development, and we will get Skynet to the point of being fully trustless before too long.
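The core idea behind a "verifiable endpoint" is that the client checks a cryptographic proof itself instead of trusting the portal's word. The sketch below illustrates the principle with an ed25519-signed entry, in the spirit of Skynet's registry; the entry encoding here is invented for the example and is not the real wire format.

```typescript
// Illustrative sketch: the client verifies a signed entry locally,
// so a malicious portal cannot substitute different data.
// The encodeEntry layout is hypothetical, not Skynet's actual format.
import { generateKeyPairSync, sign, verify } from "node:crypto";

function encodeEntry(data: string, revision: number): Buffer {
  return Buffer.from(`${revision}:${data}`);
}

// The publisher signs the entry with their private key.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const entry = encodeEntry("hello, skynet", 1);
const signature = sign(null, entry, privateKey);

// The client verifies the portal's response against the publisher's
// public key. A tampered entry fails the check even if the portal
// claims it is valid.
const ok = verify(null, entry, publicKey, signature);
const tampered = verify(null, encodeEntry("evil", 1), publicKey, signature);
```

The browser extension exists precisely to run checks like this on the client, where the portal cannot interfere with them.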
Skynet Portal Setup
One critical focus for the Skynet team has been the ability for third parties to easily set up their own portals. A few months back we even held a workshop for people to set up their own portals. The workshop was nearly a success, but a few hiccups made a complete start-to-finish setup difficult for participants.
The portal stack has since gotten a bit more complicated, mostly owing to processes we’ve had to add in response to the turbulence of being continuously de-platformed. The portal now has more robust processes for file repair and for MongoDB, and some of the health checks have been hardened as well.
As a result, we still don’t have a simple workshop that can take a portal operator from zero to fully established in under four hours, but it has become a top priority for us and we hope to have the workshop ready within a month.
Two key upgrades since our previous workshop include switching from LastPass to HashiCorp and enabling support for text file configurations. Both upgrades should make it significantly easier for individuals to set up their own portals, and the removal of LastPass means we no longer have closed-source products as part of our portal stack.
We will reach out to the community soon with details about our next workshop — stay tuned!
Improved Renter Stability
Between the chaos of continuous de-platforming, the influx of new hosts, and the price turbulence on the network, the upload, download, and repair systems of the renter were put under stress. At one point, we lost user files due to our servers being fully unplugged without us having time to react.
Several bugs and instabilities have been fixed related to these systems. Downloads are now more stable and consistently fast. Uploads are also more consistently fast. Files more consistently stay in good health, and the total amount of repair required to keep a set of files online has been reduced.
One key change that we’ve made is that file repair now happens across servers as well as within a server. Prior to this enhancement, if a server got randomly ripped offline (say by an over-eager IT admin who is antsy about some phishing reports), the files that had been on that server would live for about a month before falling off the network, with no easy way to recover.
We now have a cross-server repair system that allows one server to tell when another server is no longer able to repair files. In that case, it assumes responsibility and brings the files back up to good health. So long as a portal cluster has at least one server with operating capacity, files will not be lost from the cluster even if many of the servers are being brutally ripped offline.
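The takeover logic described above can be sketched as a small reassignment step. Everything here is hypothetical (server names, file records, the single-fallback policy); the real system involves health probes and redundancy math, but the core idea is that any healthy server can adopt an offline peer's repair duties.

```typescript
// Illustrative sketch of cross-server repair duty takeover.
// Files whose responsible server is unreachable get reassigned
// to a server that still has operating capacity.

interface FileRecord {
  name: string;
  responsibleServer: string;
}

function reassignRepairDuty(
  files: FileRecord[],
  aliveServers: string[]
): FileRecord[] {
  const alive = new Set(aliveServers);
  if (alive.size === 0) return files; // no capacity anywhere in the cluster
  const fallback = aliveServers[0];
  return files.map((f) =>
    alive.has(f.responsibleServer) ? f : { ...f, responsibleServer: fallback }
  );
}

// Example: server-1 has been ripped offline, so server-2 adopts
// its file; files already on a healthy server are left alone.
const result = reassignRepairDuty(
  [
    { name: "a.txt", responsibleServer: "server-1" },
    { name: "b.txt", responsibleServer: "server-2" },
  ],
  ["server-2"]
);
```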
As a part of the transition to a more fully decentralized stack, we’re beginning to move the developer ecosystem from skynet-js to the libkmodule libraries, which are part of the Skynet Kernel toolkit. At this point, the libkmodule libraries are still in alpha and are not recommended for fully deployed production applications. We expect to transition to beta within a few weeks, and then to v1.0 a few weeks after that. If all goes well, libkmodule should be the default recommendation starting in October.
If you are familiar with the Skynet ecosystem, you know that Skynet development is broken into two major categories: webapps and Data Access Controllers (DACs). Previously, both types of development were done using skynet-js, which was effective but didn’t give DACs all of the features they needed to be maximally useful. Furthermore, the standard for DACs was to use iframes, which had performance limitations significant enough that they couldn’t really be used in a composable way.
The new ecosystem tooling addresses this by moving DACs from iframes to web workers, where they are 10–30x faster in the naive case, and as much as 1000x faster in the optimized case. These performance boosts make composability tractable for real-time user applications. Further, DACs now have their own dedicated library, libkmodule, which removes almost all of the boilerplate involved in writing DACs and allows developers to focus on the fun parts.
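Worker-based modules communicate by message passing, and the boilerplate libkmodule removes is essentially a message router. The sketch below is hypothetical: the envelope fields and method names are invented for illustration, and real modules should use libkmodule's own helpers rather than hand-rolling this.

```typescript
// Illustrative sketch of the kernel-module message pattern:
// a module registers handlers by method name, and a tiny router
// dispatches each incoming message to the right handler.

interface ModuleMessage {
  method: string;
  data: unknown;
}

type Handler = (data: unknown) => unknown;

function makeRouter(handlers: Record<string, Handler>) {
  return (msg: ModuleMessage): unknown => {
    const handler = handlers[msg.method];
    if (!handler) {
      throw new Error(`unknown method: ${msg.method}`);
    }
    return handler(msg.data);
  };
}

// Example module exposing a single method. In a real web worker,
// the router would be wired to onmessage/postMessage instead of
// being called directly.
const route = makeRouter({
  greet: (data) => `hello, ${String(data)}`,
});
const reply = route({ method: "greet", data: "skynet" });
```

Because workers exchange plain structured messages rather than sharing an iframe's DOM machinery, dispatch like this stays cheap even when many modules are composed together.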
We’ve also updated the naming a bit. You will often now hear DACs referred to as “Kernel modules”, because Kernel modules provide substantially more functionality than the original DACs.
The Skynet Kernel
When computing transitioned from most applications being on the desktop to most applications being in the web browser, the user gained an enormous amount of convenience in the form of all of their data and experiences being available from every device, and all of their data surviving adventures like getting a new laptop.
This convenience came at a steep price: corporations now control all of the user’s data, and the user is pushed into a feudal arrangement where they rent the tools they use for work, rent the music they listen to and the movies they watch, and have to live with the fact that key parts of their lives can be taken away at any time.
The Skynet Kernel is a new programming paradigm that gives users all of the convenience of apps in the web browser with all of the liberty and control of running an open source desktop like Linux. It effectively allows webapps to run entirely in the client, while simultaneously having access to APIs that act like a cloud.
The Kernel is a fully trustless application environment. Every piece of code that runs is verified in full by the user, and at the user’s discretion can be swapped out with different code. All interactions with Skynet are verified cryptographically, ensuring the user can’t be lied to by the cloud. And because all of the programs run on the client, this also means that apps can’t be upgraded without the user’s permission — features that were once free can’t suddenly start charging money, because the user has the full autonomy to downgrade.
The Kernel can be used across the entire web3 ecosystem. The programming paradigm is fully generalized, which means you can use it to create trustless relays from webapps to things like the Ethereum blockchain. You can even use the Skynet Kernel to inject trustless relays into centralized applications like OpenSea, allowing users to see the full list of assets even if the application itself is engaging in censorship.
The Skynet Token Framework
The Kernel is even powerful enough to run entire blockchains from the web browser. One of the biggest struggles of decentralized blockchains to date has been figuring out how to convince everyday users to run full nodes. This problem is so difficult that most projects have entirely given up on it, and instead encourage users to depend on relays and trusted infrastructure.
We believe this problem can be solved by placing the blockchain full node directly into the browser via the Skynet Kernel. Users don’t even have to be aware that the full node is there, working in the background to verify all of their transactions. And this is exactly how the Skynet token will work. Every user will be using the blockchain from a full node.
However, the full Skynet token framework is much more than a blockchain in the browser. It’s a means for Skynet applications to monetize without disrupting the user experience. The Skynet token is a micropayments platform that allows applications to charge users hundredths of a cent per page view without needing ads or consent paywalls. In one sense it’s like ERC20 on steroids, and in another sense it’s like Patreon on steroids.
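To put the micropayment rate in perspective, here is a back-of-the-envelope calculation. The per-view rate ("hundredths of a cent") comes from the text above; the monthly view count is an invented example, not real data.

```typescript
// Hypothetical revenue math for the micropayment model.
const dollarsPerView = 0.01 / 100; // one hundredth of a cent per page view
const monthlyViews = 1_000_000;    // invented example audience
const monthlyRevenue = dollarsPerView * monthlyViews; // roughly $100/month
```

At these rates an application needs real traffic to earn meaningful revenue, but each individual charge is small enough that no ad or paywall interruption is required.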
In the full design, every single user gets their own token. And every single application gets its own token. And every single piece of content released on Skynet gets its own token. That’s because on the web, everyone contributes. On Skynet, Reddit Karma isn’t just imaginary points; it’s monetization that pays users back for contributing meaningfully to the Internet.
The Skynet token blockchain features a large number of novel technologies, including Skynet-based block distribution, Utreexo-based state management, a proof-of-work algorithm with low power consumption, and a token framework similar to ERC20 that has no MEV. Everything has been optimized to run in the browser, ensuring that users who access Skynet exclusively from their phones are still able to enjoy the full benefits of a trustless blockchain.
Development progress on the Skynet token is already well underway. We’re just a few weeks from having a fully functional in-browser blockchain featuring empty blocks, including support for miners, mining pools, and a peer-to-peer network. Once the empty-block chain is alive, we’ll work on a wallet with support for the SKT asset (Skynet Token). And after that, we’ll implement and release the full token ecosystem.
Focused on Fundamentals
Over the past year, we’ve been quietly and persistently upgrading Skynet. We are sharply focused on two key objectives:
- Uncompromising decentralization
- A user experience that is comparable to web2
Achieving both of those things at the same time has required a substantial amount of innovation, but we are fortunate enough to have a team that is more than capable of consistently churning out new updates that push the boundaries of our industry.