They probably are the most brainwashed and obsessed with the issue, and possibly believe they are scoring points in some imaginary and self-deprecating game.
Sane people of different hues are probably doing something more interesting - reading Ruby books, writing Ruby code, or just having a life.
It's funny that he uses Akira as an example, a movie that famously went over budget because it needed a much larger colour palette than usual (for the night scenes).
If you start with a video you can do this. You'd transform an X x Y x T dataset (X x Y frames shown over a period 0-T) into a T x Y x X dataset (T x Y frames shown over a period 0-X).
For a long video you'd get a very wide image that varies over a pretty short duration. A 1920x1080 clip that's 80 seconds long @24 fps comes out exactly right: 80 s x 24 fps = 1920 frames, which matches the 1920-pixel width, so it's still 1920x1080 and still 80 seconds long after transformation.
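The axis swap above is just a tensor transpose. A minimal NumPy sketch (with a tiny stand-in array, since the real 1920x1080x1920 clip would be several gigabytes; the dimensions here are hypothetical but keep T == X like the 80-second example):

```python
import numpy as np

fps = 24
# Toy stand-in for the 1920x1080, 80 s clip: same structure, tiny sizes.
T, Y, X = 8, 4, 8  # frames, height, width -- note T == X, like 1920 == 1920
video = np.arange(T * Y * X, dtype=np.uint8).reshape(T, Y, X)

# Swap the time and width axes: (T, Y, X) -> (X, Y, T).
# Each output "frame" is one vertical column of pixels swept through time.
transposed = video.transpose(2, 1, 0)
assert transposed.shape == (X, Y, T)

# Full-size arithmetic from the comment: 80 s at 24 fps is 1920 frames,
# which equals the 1920-pixel width, so the transposed clip is again
# 1920 pixels wide and again 1920 frames (80 seconds) long.
frames = 80 * fps
print(frames)  # 1920
```

Any clip whose frame count doesn't equal its width would change shape under this swap; the 80 s / 24 fps / 1920-wide case is the special one where it round-trips.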