
Some 3588 CMs are sold out.

So it might be too late as 3688 will be too hot...

Just like routers get dd-wrt when sold out!


Meanwhile HTTP keeps working just fine and is decentralized.

Just "add your own crypto" on top, which is the ONLY thing a sane person would do.

3... 2... 1... banned?


To actually tackle this (on the off chance you're serious; I'm assuming not): this doesn't work.

The payload that implements your crypto cannot be delivered over http, because any intermediate party can just modify your implementation and trivially compromise it.

If you don't trust TLS, you have to pre-share something. In the case of TLS and modern browser security, the "pre-shared" part is the crypto implementation running in the browser, and the default trusted store of root CAs (which lives in the browser or OS, depending).

If you want to avoid trusting that, you've got to distribute your algorithm through an alternative channel you do trust.
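
For what it's worth, that pre-shared trust store can be inspected directly. A minimal Java sketch (my illustration, assuming a stock JVM) that lists the root CAs the runtime ships with, and when each expires:

    import java.security.KeyStore;
    import java.security.cert.X509Certificate;
    import javax.net.ssl.TrustManagerFactory;
    import javax.net.ssl.X509TrustManager;

    public class ListRoots {
        public static void main(String[] args) throws Exception {
            TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init((KeyStore) null); // null = the JVM's default cacerts store
            X509TrustManager tm = (X509TrustManager) tmf.getTrustManagers()[0];
            for (X509Certificate root : tm.getAcceptedIssuers()) {
                System.out.println(root.getSubjectX500Principal() + " expires " + root.getNotAfter());
            }
        }
    }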


> default trusted store of root CAs (which lives in the browser or OS, depending).

speaking of that, is there any way to verify that stored certificates are actually valid?


You are right, pre-sharing is a requirement, unless you hash the keys used to encrypt the secret into the secret itself, but that can only be proven later on a channel where the same MITM is not present.

Work in progress. That said, pre-sharing solves (and solved) enough for the world to dump DNS and HTTPS in a bin and light them on fire now, because nobody has the power to implement all the MITM attacks needed if everyone "makes their own crypto" on top of already shared secrets!

Circular arguments, wishful thinking and all...


Did you self-ban?


XD Nope, more like self destruct! ;)


Meanwhile CVS just keeps working fine...


Yep, my distributed JSON over HTTP database uses the ext4 binary tree for indexing: http://root.rupy.se

It can only handle 3-way cross references by using 2 folders and a file (meta) for now, and it's very verbose on disk (needs type=small, otherwise inodes run out before disk space)... but it's incredibly fast and practically unstoppable in read uptime!

Also, the simplicity of using text and the file system sort of guarantees longevity and stability, even if most people prefer the monolithic garbled mess that is relational databases' binary table formats...
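
To give a flavor of the idea, a simplified sketch (not the actual rupy code; class name and layout are made up): one JSON record per file, with the path derived from the key, so the filesystem's own directory index does the lookup:

    import java.nio.file.Files;
    import java.nio.file.Path;

    // One JSON record per file; the directory tree is the index.
    class FileStore {
        private final Path root;
        FileStore(Path root) { this.root = root; }

        void put(String key, String json) throws Exception {
            Path file = root.resolve(key.substring(0, 2)).resolve(key); // shard by key prefix
            Files.createDirectories(file.getParent());
            Files.writeString(file, json);
        }

        String get(String key) throws Exception {
            return Files.readString(root.resolve(key.substring(0, 2)).resolve(key));
        }
    }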


Dying or stabilizing?

Most good projects end up solving a problem permanently, and if there is no salary to protect with bogus new features, shouldn't it then be considered final?


What is the GPU? Some sort of FPGA?

Edit: Also, a picture of the keyboard is missing from the page:

https://codeberg.org/TechPaula/LT6502b/src/branch/main/Image...

Edit2: Found a few suspects in BOM: ATF1508AS-7AX100, ATmega88PA-AU, ATmega644P-20A

Would love to know what each will do!?

The previous one hints a bit: https://codeberg.org/TechPaula/LT6502

A blog entry: https://www.maddox.pro/?p=414


Yes, but we need to decentralize DNS first!

And that means making it illegal to close port 53 on your home fiber... good luck!

Also need to open port 25 outgoing on that same fiber.

And we all need routers that run dd-wrt, the problem is all routers that support dd-wrt are sold out because support always comes late in the production cycle.

It's hard work to self host, but it's the only work worth doing.


I would say:

1) Use HTTP (secure is not the way to decentralize).

2) Selfhost DNS server (hard to scale in practice).

3) Selfhost SMTP server (also tricky).

4) Know and backup your router (dd-wrt or iptables).

JSON over HTTP is the way (minimal sketch below).

XML is not bad for certain things either, even if I understand its legacy of abuse.
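
A minimal sketch of point 1 (my illustration, using only the JDK's built-in server; port and payload are placeholders):

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class JsonOverHttp {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/status", exchange -> {
                byte[] body = "{\"ok\":true}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().set("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) { out.write(body); }
            });
            server.start(); // self-hosted JSON over HTTP on port 8080
        }
    }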


> Use HTTP (secure is not the way to decentralize).

This doesn't seem like useful advice. If you're going to use HTTP at all there is essentially zero practical advantage in not using Let's Encrypt.

The better alternative would be to use new protocols that support alternative methods of key distribution (e.g. QR codes, trust on first use) instead of none.
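
A minimal sketch of the trust-on-first-use idea (my own illustration, not a particular protocol; host and pin file are hypothetical): remember the server certificate's fingerprint on first contact and refuse if it later changes. A real TOFU client would also swap in a TrustManager that defers to the pin instead of the CA store.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.security.MessageDigest;
    import java.util.HexFormat;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;

    public class Tofu {
        public static void main(String[] args) throws Exception {
            try (SSLSocket s = (SSLSocket) SSLSocketFactory.getDefault()
                    .createSocket("example.com", 443)) {
                s.startHandshake();
                byte[] der = s.getSession().getPeerCertificates()[0].getEncoded();
                String fp = HexFormat.of().formatHex(
                    MessageDigest.getInstance("SHA-256").digest(der));
                Path pin = Path.of("example.com.pin");
                if (Files.exists(pin)) {
                    if (!fp.equals(Files.readString(pin)))
                        throw new SecurityException("server key changed");
                } else {
                    Files.writeString(pin, fp); // first use: trust and remember
                }
            }
        }
    }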

> Selfhost DNS server (hard to scale in practice).

This is actually very easy to do.


Let's Encrypt is not one of our friends here.

DNS is easy to host for yourself, but if you host it for others (1000+ people) and it needs to have all domains in the world, then it becomes a struggle.


Let's Encrypt is a non-profit that defeated the certificate cartel. The main thing you get from using HTTP without it is bad security.

DNS can answer thousands of queries per second on a Raspberry Pi and crazy numbers on a single piece of old server hardware that costs less than $500.


No root certificate is decentralized.

If your DNS port is closed by your ISP, you can't have people use your DNS server from the outside and then you need Google or Amazon which are not decentralized.

Also to be selfhosted you can't just forward what root DNS servers say, you need to store all domains and their IPs in a huge database.


> No root certificate is decentralized.

The root certificates are pretty decentralized. There isn't just one and you can use whichever one you like for your certificate. The browsers or other clients then themselves choose which roots to trust.

The main thing that isn't very decentralized here is Google/Chrome being the one to de facto choose who gets to be root CA for the web, but then it seems like your beef should be with people using Chrome rather than people using Let's Encrypt.

> If your DNS port is closed by your ISP, you can't have people use your DNS server from the outside and then you need Google or Amazon which are not decentralized.

It's pretty uncommon for ISPs to close the DNS port and even if they did, you could then use any VPS on any hosting provider.

> Also to be selfhosted you can't just forward what root DNS servers say, you need to store all domains and their IPs in a huge database.

I suspect you're not familiar with how DNS works.

Authoritative DNS servers are only required to have a database of their own domains. If your personal domain is example.com then you only need to store the DNS records for example.com. Even if you were hosting a thousand personal domains, the database would generally be measured in megabytes.

Recursive DNS servers (like 1.1.1.1 or 8.8.8.8) aren't strictly required to store anything except for the root hints file, which is tiny. In practice they will cache responses to queries for the TTL (typically up to a day) so they can answer queries from the cache instead of needing to make another recursive query for each client request, but they aren't required to cache any specific number of records.

A lot of DNS caches are designed to have a fixed-size cache and LRU evict records when it gets full. A recursive DNS server with a 1GB cache will have reasonable performance even under high load because the most commonly accessed records will be in it and the least commonly accessed records are likely to have expired before they're requested again anyway. A much larger cache gets you only a small performance improvement.

DNS records are small so storing a very large number of them can be done on a machine with few resources. A DNS RRset is usually going to be under 100 bytes. You can fit tens of millions of them in RAM on a 4GB Raspberry Pi.
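
To make the fixed-size cache point concrete, a minimal sketch (my own, not any particular resolver's code) of an LRU-evicting map for cached answers:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Fixed-size cache: the least-recently-used entry falls out when it fills up.
    class DnsCache<K, V> extends LinkedHashMap<K, V> {
        private final int maxEntries;

        DnsCache(int maxEntries) {
            super(16, 0.75f, true); // access-order iteration = LRU behavior
            this.maxEntries = maxEntries;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxEntries;
        }
    }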


There are bridges between Matrix (JSON) and ActivityPub (JSON-LD), one in Elixir: https://github.com/technostructures/kazarma/


1) So how do you validate that the HTTP the client receives is the HTTP you sent?


Validate it yourself with hashing and PKI. Yes, it needs bootstrapping, just like centralized HTTPS needs bootstrapping.
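
A minimal sketch of what that looks like (my illustration; the pre-shared key and the signature still have to arrive over a channel you already trust, which is the bootstrapping part):

    import java.security.PublicKey;
    import java.security.Signature;

    class SelfValidate {
        // Verify a detached signature over a body fetched via plain HTTP
        // against a public key that was shared out of band.
        static boolean verify(byte[] body, byte[] sig, PublicKey presharedKey) throws Exception {
            Signature v = Signature.getInstance("SHA256withRSA");
            v.initVerify(presharedKey);
            v.update(body);
            return v.verify(sig);
        }
    }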


Wow, thanks!

Also if people need more food for (decentralized) thought:

https://datatracker.ietf.org/doc/html/rfc2289


Coroutines generally imply some sort of magic to me.

I would just go straight to tbb and concurrent_unordered_map!

The challenge of parallelism does not come from how to make things parallel, but from how you share memory:

How you avoid cache misses, make sure threads don't trample each other, and design the higher-level abstraction so that all layers can benefit from the performance without suffering turnaround problems.

My challenge right now is how to make the JVM fast on native memory:

1) Rewrite my own JVM, or 2) use the buffer-and-offset structure Oracle still ships but has deprecated and is encouraging people not to use.

We need Java/C# (already has it but is terrible to write native/VM code for?) with bottlenecks at native performance and one way or the other somebody is going to have to write it?
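
For what it's worth, a rough sketch of what option 2 looks like with the plain NIO API (my illustration, not the deprecated mechanism itself; the record layout is made up): fixed-size records addressed by offset in a direct buffer outside the GC heap.

    import java.nio.ByteBuffer;

    class OffHeapRecords {
        private static final int RECORD = 16;                              // 8-byte id + 8-byte value
        private final ByteBuffer mem = ByteBuffer.allocateDirect(1 << 20); // 1 MiB of native memory

        void put(int slot, long id, long value) {
            mem.putLong(slot * RECORD, id);
            mem.putLong(slot * RECORD + 8, value);
        }

        long value(int slot) {
            return mem.getLong(slot * RECORD + 8);
        }
    }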


> C# (already has it but is terrible to write native/VM code for?)

What do you mean here? Do you mean hand-writing MSIL or native interop (pinvoke) or something else?


No, I meant this, but for C# it is a whole lot more complex:

http://move.rupy.se/file/jvm.txt


> some sort of magic to me.

Your stack is on the heap and it contains an instruction pointer to jump to for resume.
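
A tiny hand-rolled sketch of that idea (my illustration, not how any particular runtime implements it): the fields are the "stack on the heap" and state is the saved resume point.

    class Counter {
        private int state = 0; // where to resume next time
        private int i;         // a "local variable" that survives suspension

        Integer resume() {
            switch (state) {
                case 0: i = 0; state = 1; return i; // first yield
                case 1: i++;              return i; // every later yield
                default: return null;
            }
        }
    }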


I wrote a Win32 app 20 years ago, but the limits on how it handles memory made things confusing when you loaded large amounts of data into the GUI.

I would say Java Swing is still the peak of GUI development. It works flawlessly at close to native speeds (GPU acceleration and all) on all platforms, including RISC-V, which did not exist when Swing was developed!

The JVM is the emulator!

