Advent of Code progresses to the point where simple brute-force solutions stop working and you have to reach for better data structures and algorithms.
Other performance work tends to have dependencies (a database, networked services, and the like) that aren't so bite-sized and shareable.
My early descent into programming was a constant battle with performance, getting a 1.79 MHz Atari 8-bit to do interesting things in 1/30 of a second. Maybe retrocomputing, writing software for vintage machines or for new machines recreated to work like them, would be a good challenge.
PICO-8 is a good competitive ground for performance coding: it's Lua, but with intentional limits on size, performance, and source token count. Writing good PICO-8 code generally means code golfing, and since the limiters are designed so that small code is also fast code, optimizations tend to be whole-program.
Real-world performance work on modern hardware is often a lot duller: find the inner loops, parallelize what you can, organize the data to be cache-friendly, then make the loop fast at the instruction level. You don't learn the same lessons from that, because the majority of the codebase is "dark" and runs too infrequently to show up as a bottleneck.
Is there a tool that can tell you how many instructions your program, given a fixed input, needed to execute the task?
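On Linux, at least two existing tools can do roughly this; a sketch, assuming a hypothetical binary ./median and a shared input.txt:

    # Hardware counter: retired user-space instructions for the whole run.
    perf stat -e instructions:u ./median < input.txt

    # Simulated, fully deterministic instruction count via Valgrind.
    valgrind --tool=callgrind --callgrind-out-file=cg.out ./median < input.txt
    callgrind_annotate cg.out

callgrind is much slower but deterministic across machines, which matters for a competition; perf counts real retired instructions and will vary slightly from run to run.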
Imagine this: given an x86 C/C++/Rust compiler we all agree on, and a set of inputs we also share, everyone writes a program that computes, say, the median length of the given arrays. Then we run our programs under a profiler of sorts that tells us "this program completed the task in 1,104,687 cumulative instructions", and we compete on that metric.
It wouldn't really matter what processor you use because the target is the same and so is the compiler.
Is what I'm suggesting possible? It'd be a nice compromise between platform independence and speed measurement.
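It looks possible on Linux via the perf_event_open syscall, which can count retired user-space instructions around just the code under test. A minimal sketch of what the "referee" harness could look like (the loop is a hypothetical stand-in for a contestant's solution):

    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdio.h>
    #include <stdint.h>

    /* Thin wrapper: glibc provides no perf_event_open() function. */
    static long perf_open(struct perf_event_attr *attr) {
        return syscall(SYS_perf_event_open, attr, 0, -1, -1, 0);
    }

    int main(void) {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof attr;
        attr.config = PERF_COUNT_HW_INSTRUCTIONS;
        attr.disabled = 1;        /* start paused; enable explicitly below */
        attr.exclude_kernel = 1;  /* count user-space instructions only */
        attr.exclude_hv = 1;

        int fd = perf_open(&attr);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        /* --- contestant's code under test goes here --- */
        volatile long sum = 0;
        for (int i = 0; i < 1000; i++) sum += i;
        /* ----------------------------------------------- */

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);
        uint64_t count = 0;
        read(fd, &count, sizeof count);
        printf("instructions retired: %llu\n", (unsigned long long)count);
        close(fd);
        return 0;
    }

Hardware counters jitter slightly between runs (interrupts, page faults), so a serious referee would probably prefer a simulator like callgrind for exact reproducibility.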
Doesn't GDB allow you to step through code instruction by instruction?
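It does: stepi (si) executes exactly one machine instruction, and nexti steps over calls. A quick session, again with a hypothetical ./median binary:

    $ gdb ./median
    (gdb) starti          # stop before the first instruction (GDB 8.1+)
    (gdb) stepi           # execute exactly one machine instruction
    (gdb) x/5i $pc        # disassemble the next five instructions
    (gdb) info registers

Counting a whole program that way would be painfully slow, though, which is why the counter/simulator approaches above seem more practical.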
I imagine Ghidra or IDA Pro could be souped up to "referee" pretty reasonably.
I would imagine inlined assembly could circumvent any kind of compiler restriction (just compile with a better-optimizing compiler and inline the relevant assembly it produces).
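For instance, in GCC/Clang extended-asm syntax on x86-64, a hand-written instruction sequence gets emitted as-is, regardless of what the agreed-upon compiler would have generated (function name and use case are hypothetical; requires a CPU with POPCNT):

    /* Hand-picked instruction the compiler must emit verbatim. */
    static inline unsigned long popcount_asm(unsigned long x) {
        unsigned long r;
        __asm__ ("popcnt %1, %0" : "=r"(r) : "r"(x) : "cc");
        return r;
    }

So the rules would probably have to ban inline asm outright, or the contest just becomes assembly golf with extra steps.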
I know it's nonsensical for real-world applications, but golfing isn't real-world programming, so why not eliminate some of the variance ourselves and measure what's left?
Don't just laugh; try to tell me why this wouldn't be possible. This "performance golf" seems like an interesting idea.