The feeling so far is that there is a super low-level protocol for transferring data between servers. That part is mostly done but not really stress tested.
That low-level protocol basically carries data, either strings or byte streams, up to the next level: the stream command layer. This part is mostly hypothetical. Stream commands are CBOR-encoded hash tables that create and manage streams. Servers start up with nothing, but the first person to create a stream claims the server, since a server can have only one root stream.
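Since this layer is still hypothetical, here is only a rough sketch of what a create-stream command might look like on the wire. The field names ("cmd", "stream", "auth") and the use of Jackson's CBORMapper are placeholders of mine, not decisions that have actually been made:

```java
import com.fasterxml.jackson.dataformat.cbor.databind.CBORMapper;

import java.util.Map;

public class StreamCommandSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical command shape -- the keys "cmd", "stream" and "auth"
        // are placeholders, not part of the actual protocol.
        Map<String, Object> createRoot = Map.of(
                "cmd", "create-stream",
                "stream", "root",
                "auth", "some-claim-token"
        );

        CBORMapper mapper = new CBORMapper();
        byte[] wireBytes = mapper.writeValueAsBytes(createRoot);    // encode to CBOR for the wire
        Map<?, ?> decoded = mapper.readValue(wireBytes, Map.class); // what the server would decode

        System.out.println(decoded);
    }
}
```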
The root stream owns the server customizations (custom code), the permissions governing how, if at all, substreams can come into existence, and the server automation parameters. Fetching data from other servers, applying aggregations, and so on all happen there.
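Again just a sketch: if the root stream's server-side configuration were modeled as a plain Java value, it might look something like this. Every field name is a placeholder of mine:

```java
import java.util.List;

/** Hypothetical shape of a root stream's configuration.
 *  All field names here are placeholders, not the real protocol. */
public record RootStreamConfig(
        String ownerKey,              // whoever claimed the server
        boolean substreamsAllowed,    // permission: can substreams exist at all?
        List<String> followedServers, // which remote servers to fetch data from
        long pollIntervalMillis       // automation parameter: how often to fetch
) {
    /** Default config for a freshly claimed server: no substreams, nothing followed. */
    public static RootStreamConfig claimedBy(String ownerKey) {
        return new RootStreamConfig(ownerKey, false, List.of(), 60_000L);
    }
}
```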
That’s about as far as I have imagined. I think I will start writing some tests for this new layer now! They will likely start with setting up a bunch of root streams that follow each other in various configurations, and dealing with how replies happen or don’t depending on permissions. That will give rise to our first aggregator: reactions. Then our second, tag counts, which will require some mapping logic to parse out hashtags!
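The tag-count aggregator will need that hashtag-parsing step somewhere. A rough sketch of the mapping logic, with a regex and method names of my own choosing, could be:

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TagCountSketch {
    // Very loose hashtag pattern: '#' followed by letters, digits or underscores.
    private static final Pattern HASHTAG = Pattern.compile("#(\\w+)");

    /** Map a post's text to its hashtags and fold them into running counts. */
    public static void countTags(String postText, Map<String, Integer> counts) {
        Matcher m = HASHTAG.matcher(postText);
        while (m.find()) {
            counts.merge(m.group(1).toLowerCase(), 1, Integer::sum);
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = new TreeMap<>();
        countTags("Trying out the new #stream layer, no #HTTP in sight", counts);
        countTags("Still no #http bloat", counts);
        System.out.println(counts); // {http=2, stream=1}
    }
}
```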
Then I think we will have built a much more privacy-centered Twitter clone. Fully decentralized, since a server can’t create substreams yet. Quite inefficient though, since it runs on a Java stack, so we have to make good use of that! My initial tests suggest the server can handle at least 100k concurrent connections without breaking a sweat (the low-level protocol was built on Netty), so federation shouldn’t be too hard at all!
When I first built the thing, it used a naive server-socket implementation with one thread per client connection. My 32-core computer croaked under the load.
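The actual server code isn’t shown here, but the move away from thread-per-connection is roughly what a minimal Netty bootstrap buys you: a handful of event-loop threads multiplexing all the sockets. The port number and the no-op handler below are placeholders:

```java
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

/** Minimal Netty bootstrap: a few event-loop threads handle all connections,
 *  instead of one blocking thread per client. */
public class LowLevelServerSketch {
    public static void main(String[] args) throws InterruptedException {
        EventLoopGroup boss = new NioEventLoopGroup(1);   // accepts new connections
        EventLoopGroup workers = new NioEventLoopGroup(); // handles I/O for all clients
        try {
            ServerBootstrap b = new ServerBootstrap()
                    .group(boss, workers)
                    .channel(NioServerSocketChannel.class)
                    .childHandler(new ChannelInitializer<SocketChannel>() {
                        @Override
                        protected void initChannel(SocketChannel ch) {
                            // The real pipeline would decode frames and CBOR commands here;
                            // this stand-in just discards whatever arrives.
                            ch.pipeline().addLast(new SimpleChannelInboundHandler<Object>() {
                                @Override
                                protected void channelRead0(ChannelHandlerContext ctx, Object msg) {
                                    // drop the message
                                }
                            });
                        }
                    });
            ChannelFuture f = b.bind(9000).sync(); // placeholder port
            f.channel().closeFuture().sync();
        } finally {
            boss.shutdownGracefully();
            workers.shutdownGracefully();
        }
    }
}
```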
So I feel good about layer 1 at least! None of the bloat of HTTP.