Hacker News

Somewhat relevant: Threadripper CPUs, which are aimed at the high-end consumer market, are NUMA with two memory domains.

This makes the overall scheduling problem much harder, to the point that they were built with a special "Disable half the cores" mode and supporting hardware to give both memory banks same-speed access to the remaining ones.



Honestly, I believe the only reason they did this is so that when review sites run benchmarks the Threadripper won't look abnormally slow compared to the other chips out there.

I own one, and in both work and play I have had zero issues. If I drop a frame here and there in a game due to some memory latency? Eh, I couldn't care less. If you can afford a Threadripper, you can afford a 1080 Ti and a G-Sync monitor to smooth out any issues you might run into.


You also can probably afford enough memory that the kernel can schedule your game entirely on one half of the CPU, but I don't know if that sort of scheduling (and defragmentation) is commonly used yet.


It’s going to be on the application to be NUMA aware, regardless of how much memory you have. Games have never really had to deal with this due to the absolutely minuscule number of people who played games on server-grade dual-socket Xeons. It’ll be interesting to see if any of the big names (Unity, Unreal, Crytek/Lumberyard) ever care enough to make a patch for proper NUMA support.
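For what it's worth, the first step of NUMA awareness — discovering the topology — is cheap. A minimal sketch on Linux, reading the standard sysfs node hierarchy directly (no libnuma needed); it just returns an empty dict on kernels that expose no NUMA information:

```python
import glob
import os

def numa_nodes():
    """Map each NUMA node id to its CPU list string, e.g. {0: "0-7", 1: "8-15"}.

    Reads the standard Linux sysfs hierarchy under /sys/devices/system/node;
    returns {} if the kernel exposes no NUMA information there.
    """
    nodes = {}
    for path in glob.glob("/sys/devices/system/node/node[0-9]*"):
        node_id = int(os.path.basename(path)[len("node"):])
        with open(os.path.join(path, "cpulist")) as f:
            nodes[node_id] = f.read().strip()
    return nodes

print(numa_nodes())  # a single-node machine prints something like {0: "0-3"}
```

An engine could use a map like this to size one worker pool per node and keep each pool's working set local.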


It's entirely possible to do this at the OS level. It makes the scheduling problem much harder, yes, but a user can—for example—force their game to run only in one domain using CPU affinity, then somehow trigger the kernel to migrate all its memory to that domain. I know how to do the former; I haven't tried the latter.
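The affinity half is nearly a one-liner on Linux. A sketch, assuming a two-node machine where node 0 owns the first half of the allowed CPU ids (the real mapping should come from sysfs or libnuma); migrating already-resident pages afterwards would take migratepages(8) or the move_pages(2) syscall, which the standard library doesn't wrap:

```python
import os

def confine_to_cpus(pid, cpus):
    """Restrict `pid` (0 = the calling process) to the given CPU set.

    Future scheduling and first-touch allocations then stay on those
    CPUs' node, but pages already resident on another node are NOT
    moved -- that needs migratepages(8) or move_pages(2).
    """
    os.sched_setaffinity(pid, cpus)
    return os.sched_getaffinity(pid)

# Assumed layout: node 0 = first half of the currently allowed CPUs.
allowed = sorted(os.sched_getaffinity(0))
node0 = set(allowed[: max(1, len(allowed) // 2)])
print(confine_to_cpus(0, node0))
```

The same effect from the shell is `taskset -c <cpus> <cmd>`, or `numactl --cpunodebind=0 --membind=0 <cmd>` to also constrain allocations to node 0 from the start.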

It would be more difficult to do it automatically, but if NUMA systems become more common then I see no reason why it shouldn't be tried.



