To actually tackle this (on the off chance you're serious; I'm assuming not): this doesn't work.
The payload that implements your crypto cannot be delivered over plain HTTP, because any intermediate party can simply modify your implementation and trivially compromise it.
If you don't trust TLS, you have to pre-share something. In the case of TLS and modern browser security, the "pre-shared" part is the crypto implementation running in the browser, and the default trusted store of root CAs (which lives in the browser or OS, depending).
If you want to avoid trusting that, you've got to distribute your algorithm through an alternative channel you do trust.
You're right that presharing is a requirement, unless you hash the keys used to encrypt the secret into the secret itself, but that can only be proven later, on a channel where the same MITM is not present.
Work in progress. That said, presharing solves (and solved) enough for the world to dump DNS and HTTPS in a bin and light them on fire, because nobody has the power to implement all the MITM attacks needed if everyone "makes their own crypto" on top of already-shared secrets!
Yep, my distributed JSON over HTTP database uses the ext4 binary tree for indexing: http://root.rupy.se
It can only handle 3-way cross references for now, by using 2 folders and a file (meta), and it's very verbose on disk (needs type=small, otherwise inodes run out before disk space)... but it's incredibly fast and practically unstoppable in read uptime!
Also, the simplicity of using text and the file system more or less guarantees longevity and stability, even if most people prefer the monolithic garbled mess that is relational databases' binary table formats...
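A hypothetical sketch (not the actual rupy layout; the fan-out scheme and names here are made up for illustration) of the file-per-record idea: each JSON value lives at a path derived from its key, and the file system's own directory index does the lookup work.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical file-system-backed key/value store: each key becomes a
// path, the value is the file's contents, and the directory structure
// (ext4's on-disk index) serves as the "database index".
public class FsStore {
    private final Path root;

    public FsStore(Path root) throws IOException {
        this.root = Files.createDirectories(root);
    }

    // Fan out on the first two characters of the key so no single
    // directory grows huge (the "2 folders and a file" pattern).
    // Assumes keys are at least 2 characters long.
    private Path pathFor(String key) {
        return root.resolve(key.substring(0, 1))
                   .resolve(key.substring(1, 2))
                   .resolve(key);
    }

    public void put(String key, String json) throws IOException {
        Path p = pathFor(key);
        Files.createDirectories(p.getParent());
        Files.writeString(p, json);
    }

    public String get(String key) throws IOException {
        return Files.readString(pathFor(key));
    }
}
```

Reads and writes are plain file operations, which is what makes the read path so hard to take down: any tool that can read a file can read the database.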
Most good projects end up solving a problem permanently, and if there is no salary to protect with bogus new features, shouldn't they then be considered final?
And that means making it illegal for your ISP to block port 53 on your home fiber... good luck!
Also need to open port 25 outgoing on that same fiber.
And we all need routers that run dd-wrt; the problem is that the routers which support dd-wrt are sold out, because support always arrives late in the production cycle.
It's hard work to self host, but it's the only work worth doing.
> Use HTTP (secure is not the way to decentralize).
This doesn't seem like useful advice. If you're going to use HTTP at all there is essentially zero practical advantage in not using Let's Encrypt.
The better alternative would be to use new protocols that support alternative methods of key distribution (e.g. QR codes, trust on first use) instead of none.
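Trust on first use can be sketched as: remember the peer's public-key fingerprint the first time you connect, and refuse later connections if it changes (the model SSH uses). This is a hypothetical helper, not a real protocol implementation; assumes Java 17+ for HexFormat.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical trust-on-first-use (TOFU) pin store: the first key
// fingerprint seen for a host is remembered; any later mismatch is
// treated as a possible MITM.
public class TofuPins {
    private final Map<String, String> pins = new ConcurrentHashMap<>();

    static String fingerprint(byte[] publicKey) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256").digest(publicKey);
            return HexFormat.of().formatHex(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    // Returns true if the key is trusted: first contact, or unchanged.
    public boolean check(String host, byte[] publicKey) {
        String fp = fingerprint(publicKey);
        // merge keeps the existing pin if one is present, stores fp otherwise
        return pins.merge(host, fp, (oldPin, newPin) -> oldPin).equals(fp);
    }
}
```

The weakness, as with SSH, is that the first contact itself is unauthenticated; the QR-code variant closes that gap by moving the first fingerprint exchange onto a channel the attacker can't touch.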
> Selfhost DNS server (hard to scale in practice).
If your DNS port is closed by your ISP, you can't have people use your DNS server from the outside, and then you need Google or Amazon, which are not decentralized.
Also, to be self-hosted you can't just forward what the root DNS servers say; you need to store all domains and their IPs in a huge database.
The root certificates are pretty decentralized. There isn't just one and you can use whichever one you like for your certificate. The browsers or other clients then themselves choose which roots to trust.
The main thing that isn't very decentralized here is Google/Chrome being the one to de facto choose who gets to be root CA for the web, but then it seems like your beef should be with people using Chrome rather than people using Let's Encrypt.
> If your DNS port is closed by your ISP, you can't have people use your DNS server from the outside, and then you need Google or Amazon, which are not decentralized.
It's pretty uncommon for ISPs to close the DNS port and even if they did, you could then use any VPS on any hosting provider.
> Also, to be self-hosted you can't just forward what the root DNS servers say; you need to store all domains and their IPs in a huge database.
I suspect you're not familiar with how DNS works.
Authoritative DNS servers are only required to have a database of their own domains. If your personal domain is example.com then you only need to store the DNS records for example.com. Even if you were hosting a thousand personal domains, the database would generally be measured in megabytes.
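For a sense of scale: a complete authoritative zone for one personal domain can be a handful of lines. A minimal BIND-style sketch (hypothetical names, documentation-range IPs):

```
$TTL 86400
@       IN SOA  ns1.example.com. admin.example.com. (
                2024010101 ; serial
                3600       ; refresh
                900        ; retry
                604800     ; expire
                86400 )    ; negative-cache TTL
        IN NS   ns1.example.com.
ns1     IN A    203.0.113.10
@       IN A    203.0.113.10
www     IN A    203.0.113.10
```

That entire "database" is a few hundred bytes on disk.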
Recursive DNS servers (like 1.1.1.1 or 8.8.8.8) aren't strictly required to store anything except the root hints file, which is tiny. In practice they cache responses for the TTL (typically up to a day) so they can answer queries from the cache instead of making another recursive query for each client request, but they aren't required to cache any specific number of records.

A lot of DNS caches are designed with a fixed-size cache that LRU-evicts records when it gets full. A recursive DNS server with a 1GB cache will have reasonable performance even under high load, because the most commonly accessed records will be in it, and the least commonly accessed records are likely to have expired before they're requested again anyway. A much larger cache gets you only a small performance improvement.
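The fixed-size LRU eviction described above is easy to build; in Java, LinkedHashMap in access order gives it almost for free. This is a sketch of the eviction policy only; a real resolver cache would also track per-record TTLs.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Fixed-capacity LRU cache: with accessOrder=true, each get() moves the
// entry to the back of the iteration order, and removeEldestEntry
// evicts the least recently used entry from the front.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(16, 0.75f, true); // true = access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}
```

A resolver would key such a cache by (owner name, record type) and store the cached RRset as the value.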
DNS records are small so storing a very large number of them can be done on a machine with few resources. A DNS RRset is usually going to be under 100 bytes. You can fit tens of millions of them in RAM on a 4GB Raspberry Pi.
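The back-of-the-envelope math checks out, assuming roughly 100 bytes per cached RRset and ignoring per-entry bookkeeping overhead:

```java
// Rough capacity estimate: how many ~100-byte records fit in 4 GiB.
public class DnsCapacity {
    public static long recordsIn(long totalBytes, long bytesPerRecord) {
        return totalBytes / bytesPerRecord;
    }

    public static void main(String[] args) {
        long fourGiB = 4L * 1024 * 1024 * 1024;
        System.out.println(recordsIn(fourGiB, 100)); // ~43 million records
    }
}
```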
Coroutines generally imply some sort of magic to me.
I would just go straight to tbb and concurrent_unordered_map!
The challenge of parallelism does not come from how to make things parallel, but from how you share memory:
how you avoid cache misses, make sure threads don't trample each other, and design the higher-level abstraction so that all layers can benefit from the performance without suffering turnaround problems.
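One concrete example of threads trampling each other is a single hot counter: every increment bounces the same cache line between cores. java.util.concurrent's LongAdder stripes the count across per-thread cells precisely to cut that contention (a sketch, with hypothetical names):

```java
import java.util.concurrent.atomic.LongAdder;

// Striped counter: under contention, each thread increments its own
// cell instead of fighting over one cache line; the cells are only
// folded together when someone reads the total.
public class HitCounter {
    private final LongAdder hits = new LongAdder();

    public void record() { hits.increment(); } // cheap even under contention
    public long total()  { return hits.sum(); } // sums all cells on demand
}
```

The same design appears one level up: shard the data structure, let each thread own its shard, and only merge at the boundaries.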
My challenge right now is how to make the JVM fast on native memory:
1) Rewrite my own JVM.
2) Use the buffer-and-offset structures Oracle still ships but has deprecated and is encouraging people not to use.
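I'm assuming option 2 refers to off-heap access via buffers or sun.misc.Unsafe (whose memory-access methods Oracle has deprecated). The still-supported route is a direct ByteBuffer, sketched below; its designated successor is the java.lang.foreign MemorySegment API, finalized in JDK 22.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Off-heap access through a direct ByteBuffer: the memory lives outside
// the GC heap and is addressed by explicit byte offsets.
public class NativeBlock {
    private final ByteBuffer buf;

    public NativeBlock(int bytes) {
        buf = ByteBuffer.allocateDirect(bytes).order(ByteOrder.nativeOrder());
    }

    public void putLong(int offset, long value) { buf.putLong(offset, value); }
    public long getLong(int offset)             { return buf.getLong(offset); }
}
```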
We need a Java/C# (C# already has this, but is terrible to write native/VM code for?) where the bottlenecks run at native performance, and one way or another somebody is going to have to write it?
I wrote a Win32 app 20 years ago, but the limits on how it handled memory made things confusing when you loaded large amounts of data into the GUI.
I would say Java Swing is still the peak of GUI development. It works flawlessly at close to native speeds (GPU acceleration and all) on all platforms, including RISC-V, which did not exist when Swing was developed!
So it might be too late as 3688 will be too hot...
Just like routers get dd-wrt when sold out!