Hacker News | empiricus's comments

But I see all the "QR codes" have hexagonal symmetry? So basically you could use only one corner (1/6) to represent a node? Why do they keep the entire hexagon?

And then you try to actually build a GPS network, and ask yourself: what kind of antennas should we use? What should the frequency be? How much power? How will the receiver detect the precise nanosecond when it receives an incredibly weak signal? (In current GPS the signal is below the thermal noise floor.)

You can go down a really deep rabbit hole thinking about those questions. This page explains the design decisions behind the LunaNet AFS navigation signal: https://insidegnss.com/the-augmented-forward-signal-afs-defi...

Some considerations:

- They don't use GPS frequencies because there is already a receiver on the moon that receives GPS L1 signals (LuGRE, and potentially more in the future)

- Make it easy to acquire for low-complexity hardware

- Use a 5G forward error correction code to reuse existing hardware implementations

- Design the signal so that the user can easily find the start of a data frame

And those are just the RF-level considerations... there will be more considerations for the data transmitted over those navigation signals, which the receivers need in order to determine navigation satellite positions, since lunar orbits are much more complicated than Earth orbits.
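The "easy to find the start of a data frame" point can be sketched as a sync-word search. This is only an illustration with a made-up pattern; the real AFS sync word and frame layout are defined by the LunaNet specification, not here.

```python
# Hypothetical sketch: a receiver locates frame boundaries by scanning
# the demodulated bit stream for a known sync word.
SYNC = [1, 0, 1, 1, 0, 1, 0, 0]  # made-up 8-bit sync word, not the real one

def find_frame_start(bits, sync=SYNC):
    """Return the index of the first occurrence of the sync word, or -1."""
    n = len(sync)
    for i in range(len(bits) - n + 1):
        if bits[i:i + n] == sync:
            return i
    return -1

stream = [0, 1, 1] + SYNC + [1, 0, 0, 1]  # sync word buried at offset 3
print(find_frame_start(stream))  # 3
```

A real receiver also has to tolerate bit errors and check that the sync repeats at the expected frame period, but the basic idea is this kind of pattern match.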


(Also, you receive the signal from all satellites at the same time, on the same frequency, plus some random reflections. Then you need to extract an independent stream of bits for each satellite, each with its own nanosecond timestamp for receive time.)
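How satellites sharing one frequency can be separated is worth a toy sketch: each satellite spreads its data with its own code, and correlating the received sum with a given satellite's code recovers that satellite's bits. This uses random +/-1 chips as stand-in codes; real GPS uses carefully chosen 1023-chip Gold codes with low cross-correlation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spreading codes (+/-1 chips); real GPS C/A codes are Gold codes.
N = 1023
codes = {sv: rng.choice([-1.0, 1.0], size=N) for sv in ("SV1", "SV2", "SV3")}

# Each satellite transmits one data bit spread by its own code.
# The receiver sees the sum of all of them plus noise, on one frequency.
bits = {"SV1": +1, "SV2": -1, "SV3": +1}
received = sum(bits[sv] * codes[sv] for sv in codes) + rng.normal(0, 2.0, N)

# Correlating with each satellite's code despreads that satellite's bit;
# the other satellites and the noise average out over the code length.
for sv in codes:
    corr = np.dot(received, codes[sv]) / N
    print(sv, round(corr, 2))  # near +1 or -1, matching the sent bit
```

Note the noise here is twice the per-satellite signal amplitude, yet the correlation still cleanly recovers each bit: that is the processing gain that lets GPS work below the thermal noise floor.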

The last time I built my own GPS network, I used a sextant, a watch set to UTC and nautical tables to determine where the orbiting bodies were.

Harrison timepieces were indeed the cutting-edge GPS technology of their day. That and moon watching.

Nowadays we have GPS and Pulsar / Quasar watching.

https://en.wikipedia.org/wiki/Very-long-baseline_interferome...


The hardware implementation of xor is simpler than sub, so it should consume slightly less energy. Wondering how much energy was saved in the whole world by using xor instead of sub.

I doubt any of that is measurable, since all ALU operations are usually implemented with the same logic (e.g. see https://www.righto.com/2013/09/the-z-80-has-4-bit-alu-heres-...)

For a 32 bit number you're looking at going from using 256 to ~1800 transistors in the operation itself. A modern core will have roughly 1,000,000,000 transistors. Some of those are for vector operations that aren't involved in a xor or sub, but most of them are for allowing the core to extract more parallelism from the instruction stream. It's really just a dust mote compared to the power reduction you could get by, e.g., targeting a 10 MHz lower clock rate.
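The "dust mote" claim is easy to check as back-of-envelope arithmetic, using the rough transistor counts from the comment above:

```python
# Rough figures from the comment: ~256 transistors for a 32-bit XOR,
# ~1800 for a subtractor, ~1e9 transistors in a modern core.
xor_transistors = 256
sub_transistors = 1800
core_transistors = 1_000_000_000

extra = sub_transistors - xor_transistors
print(extra / core_transistors)  # ~1.5e-06, i.e. about 0.00015% of the core
```

Even if every one of those extra transistors toggled on every cycle, the difference would be lost in measurement noise next to a small clock-rate change.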

I guess everything that was saved was burned by the first useless AI-generated image.

Maybe it's just me, but the visualizations do not help me at all.

I am always puzzled by such articles: it's actually very well made, the drawings are good, the little interactive pipeline animations are fine. But in order to follow it you must already know and understand what it's written about, and if you don't, the content is just noise for you.

The article does say what it expects you to know before reading. However, it has a dead link to the knowledge it wants you to know.

Author here: thanks for flagging the dead link! Unfortunately, I had to remove it. I couldn't find the original slides.

Hi, I'm the author! Thanks for saying it's well made :).

I actually agree with you, the intended audience isn't someone who has never heard of CPUs before.

I tend to either write for myself (you know the saying: you don't understand something until you try to explain it), or for the self-studying person who is looking for that one explanation where everything finally clicks. I always get a lot out of those types of posts myself, so I like to create them for others too.


You could use colors in the step-by-step simulation to show dependencies. Also show some tooltips/comments when things happen (the ones you described above). Ideally one should be able to press next, next, next in the simulation and understand what happens better than from the paragraph description above.

Well, rhetorical trick or not, it is worth thinking about the fact that the dynamics of the thing are already outside anyone's control. I mean, everyone is racing and you cannot stop.

I found the way the GPS electronics work much more interesting. What do you mean, you need to know the exact moment you receive a message from a satellite with nanosecond precision? When the message itself is several seconds long.
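The nanosecond requirement is easier to feel with numbers: a pseudorange error is just the timing error multiplied by the speed of light, so every nanosecond of clock error costs about 30 cm of position.

```python
# Range error as a function of receive-time error: err = c * dt.
c = 299_792_458.0  # speed of light, m/s

for dt_ns in (1, 10, 100):
    err_m = c * dt_ns * 1e-9
    print(f"{dt_ns:>3} ns timing error -> {err_m:.2f} m range error")
```

This is why the receiver tracks the code phase continuously rather than waiting for the slow data message: the message tells you *which* code period you are in, while the correlator pins down *where inside it* you are, to a small fraction of a chip.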

A long time ago (pre-internet) I heard a normal person can learn to juggle in 1 day. It took me 2 days, but I learned to juggle 3 balls. But soon I realized what you said: the need for a consistent toss. Not sure of the reason, but I always make some errors with physical movements, they are never perfect. Even with typing, no matter how much I practice, I cannot get below ~3% errors. Wondering if this is some kind of genetic effect, and how many people have similar issues.

I haven’t tried juggling for decades but I did manage to teach myself basic three-ball juggling when I was at university (any excuse to avoid revising!)

I think it took me a couple of weeks though. I’m a bit malcoordinated for that sort of thing in general. I think you’re right that there’s some sort of natural aptitude that not everybody has. Fortunately basic juggling is just about easy enough that almost any idiot can do it.


If you are on the spectrum at all, you have a high-to-very-high chance of being more clumsy than average due to differences in sensory processing.

I, too, make unpreventable physical errors all the friggin time.

For instance, I attempted to upvote your comment but initially downvoted it. Sigh.


This made me laugh. The number of times I’ve Admiral Ackbar fat-fingered the flag button when I just wanted to hide a post on HN is almost too many to count at this point.

looks interesting, but has the classic "40 patients".


What would be enough? 400 patients, 4000, 40k, 400k, 4M?


Well, reading the study, I'm not sure more patients could rescue it from methodological bias. They basically assumed the premise -- we should find a biomarker, which is kind of what this thread is discussing. Then they went trawling for a biomarker in a sea of millions of biomarkers. They did this by training a model that produced the desired result, using a grid search for hyperparameters that expanded the available degrees of freedom even further, beyond what they had from the biology. No pre-registration; there are millions of places where the researchers could have made a different decision -- would they still have gotten a publishable result? Oh, plus the authors mostly work for the company whose data they use, which is hoping to sell a diagnostic test.
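The "trawling in a sea of biomarkers" problem can be shown with a toy simulation: with 40 patients, random labels, and enough candidate features that are pure noise, the best one still looks impressively predictive. All the numbers and names here are illustrative, not from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# 40 "patients" with randomly assigned labels, and 100k candidate
# "biomarkers" that are pure noise: there is no real signal at all.
n_patients, n_biomarkers = 40, 100_000
labels = rng.choice([0, 1], size=n_patients)
biomarkers = rng.normal(size=(n_patients, n_biomarkers))

# Pearson correlation of each noise biomarker with the labels.
y = labels - labels.mean()
X = biomarkers - biomarkers.mean(axis=0)
corr = (X.T @ y) / (np.linalg.norm(X, axis=0) * np.linalg.norm(y))

# The best of 100k pure-noise features correlates strongly by chance,
# typically well above 0.5 despite there being zero real signal.
print(round(float(np.abs(corr).max()), 2))
```

Add model training and a hyperparameter grid search on top of this selection step and the effective number of comparisons grows even larger, which is exactly why pre-registration and held-out validation matter at this sample size.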

I'm giving you a thorough response because I'm detecting a cavalier anti-scientism which I think is sadly becoming more common. This stuff is hard; are you sure you understand it well enough to have an informed opinion?


Even this OpenAI course is from 2020? Are there no useful recent updates on the subject, especially now with everyone working on and using RL?


You put the dog in the crate with a COPY of your documents.


Your dog has now ordered a hitman to kill you, assume your identity, and live vicariously as a simple bartender at Cheers.


It's a dog-eat-dog world and I'm wearin' Milk-Bone underwear


Sam!


Step 2 -

You put the dog in the crate with a COW of your documents.


Make it two copies!

