So I’m working on a crazy little project. It’s something like GIF on steroids: a codec tailored for pixel art. My first step takes an arbitrary set of frames and turns it into a colour cycling style of animation. This turns out to be much simpler than it sounds.
All you really need to do is treat the whole animation as a single frame, where each pixel’s “color” has a lot of components: R,G,B,R,G,B,R,G,B <— repeat for however many frames you need. This works perfectly… unless the number of unique “colors” goes over my palette limit.
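As a minimal sketch of that flattening step (assuming same-sized frames; `flatten_animation` and `count_unique_colors` are hypothetical names, not my actual code):

```python
import numpy as np

def flatten_animation(frames):
    # frames: list of (H, W, 3) uint8 arrays, all the same size.
    # Stack along a new "time" axis, then fold time into the channel
    # axis so each pixel becomes one (N*3)-component "color".
    stacked = np.stack(frames, axis=2)   # (H, W, N, 3)
    h, w = stacked.shape[:2]
    return stacked.reshape(h, w, -1)     # (H, W, N*3)

def count_unique_colors(flat):
    # The number of unique rows is what has to stay under the palette limit.
    pixels = flat.reshape(-1, flat.shape[-1])
    return len(np.unique(pixels, axis=0))
```

Once the animation is in this shape, ordinary single-image palette quantisation quantises the whole animation at once.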
I’ve been running the lovely pixel art video for “Move Your Feet” through this; it makes a good test case. Here’s what a few scenes look like if you map the palette indexes to grey tones.
With a couple of tweaks, I was able to raise the limit on palette indexes to 65536 “colors”. Yes, it absolutely is cheating. Pictured below are a few of the original frames, the palette cycling animation, and the leftmost portion of the palette data. The Y axis is time, the X axis is index.
I raise the limit by using all the colour channels: G and B for the high and low bytes, and R as the xor of the two (for error correction/checking). The goal is to eventually run a median cut algorithm on the “color” set, merging similar colors. I wanna know what the glitches will look like.
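The packing scheme can be sketched like this (hypothetical helper names; assumes 16-bit indexes):

```python
def encode_index(i):
    # Pack a 16-bit palette index into an RGB triple:
    # G = high byte, B = low byte, R = G xor B as a cheap check byte.
    hi, lo = (i >> 8) & 0xFF, i & 0xFF
    return (hi ^ lo, hi, lo)

def decode_index(rgb):
    r, g, b = rgb
    if r != (g ^ b):
        # e.g. a lossy re-encode has mangled the channels
        raise ValueError("check byte mismatch - corrupted index")
    return (g << 8) | b
```

The xor byte can’t correct errors on its own, but it flags any single corrupted channel.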
I’m pretty into the idea of coming up with novel image and animation compression techniques, not for performance gains, but to see what the compression artefacts look like. Previously: a slit-scan photography inspired compression technique.
Dreaming about median cut algorithms. It turns out there’s more than one way to cut a median, so there are a bunch of different variations on this algorithm.
One thing that always bothered me about it: why average all the colors per bin at the end? Why not pick the median there too?
Success! Median cut palette reduction on cycling colors is… strange. The artefacts feel a bit like frosted glass. What happens is that every colour boundary creates a new indexed region, and then, to reduce the palette, regions get merged and averaged together, a bit like blurring.
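For reference, a bare-bones median cut looks roughly like this (a sketch, not my actual implementation): repeatedly split the bin whose widest channel spans the greatest range, then average each bin, which is exactly the averaging step I was questioning above.

```python
import numpy as np

def median_cut(colors, n_bins):
    # colors: (N, C) float array; C can be 3 * n_frames for cycled colors.
    bins = [colors]
    while len(bins) < n_bins:
        splittable = [i for i, b in enumerate(bins) if len(b) > 1]
        if not splittable:
            break
        # pick the bin whose widest channel spans the greatest range
        idx = max(splittable, key=lambda i: np.ptp(bins[i], axis=0).max())
        b = bins.pop(idx)
        ch = np.ptp(b, axis=0).argmax()    # widest channel
        order = b[:, ch].argsort()
        mid = len(b) // 2                  # the "median cut"
        bins += [b[order[:mid]], b[order[mid:]]]
    # classic final step: average the colors in each bin
    return [b.mean(axis=0) for b in bins]
```

Every choice in there (which bin, which channel, where to cut, average vs. median at the end) is a knob, which is why the variations behave so differently.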
Appropriately tuned, and without overloading it with too many frames, the artefacts can be barely visible. Plenty of bugs here… but I do believe I have the basic code for a nearly generalised palette cycling animation automator.
So the question now is: how can I design this into a convenient format for a player app?
I found this perfect gif to test out some dynamic lighting theories. I reckon we could have had a lot of performant dynamically lit sprites back in the DOS era, using just indexed palettes.
What’s really rad is that this “lighting sweep” animation compresses into a single indexed color image, and then you can add all kinds of crazy lighting effects just by changing the palette.
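The playback side is tiny. A sketch (hypothetical names; the indexed pixels never change, only the palette does):

```python
import numpy as np

def apply_palette(indexed, palette):
    # indexed: (H, W) array of palette indices
    # palette: (n_colors, 3) RGB rows; fancy indexing does the lookup
    return palette[indexed]

def brighten(palette, factor):
    # one crude "lighting effect": scale every palette entry
    return np.clip(palette.astype(float) * factor, 0, 255).astype(np.uint8)
```

Per frame of lighting you only touch `n_colors` palette entries instead of `H*W` pixels, which is what would have made this cheap on DOS-era hardware.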
original skull artwork by
Here’s what median cut is able to do to reduce the indexed image to 16 palette entries. It… kinda looks like it’s supposed to!
So this actually required an adjustment to my median cut algorithm. Initially it was biased toward blending together the end of the animation while keeping the start of it crisp.
Ultimately I traced this to a part of the algorithm that, in essence, picks the frame with the greatest range of colors to split on. Since this is, uh, a BINARY black-and-white image, every frame ties for the greatest range, so it always just picked the first one.
So that’ll need work.
The way I changed it is to randomly pick either the first or the last tied frame, but this isn’t very balanced either.
After fixing this and many other little issues in my median cut algorithm, I have a… different result.
Median cut has a lot of little things that you can adjust that drastically affect the outcome. I had no idea.
The new settings give a much clearer result on this sequence.
It’s not originally motion blurred; the motion blur now seems to happen as a natural result of quantising the motion colors over time.
One thing missing from all this is that these aren’t true palette “cycles”, really just a sequence of different palettes. A true palette cycle would appear in the palette window as a sequence of perfect diagonals: a section of the palette that is just rotating a step at a time.
I wonder if there’s a way to brute-force find these diagonals inside a generated animated palette as part of the median cut algorithm. My thought is to split an animated color into its set of colors over time, normalised, plus a time offset, so cycles match each other.
What function should I use for that? Take an arbitrary sequence of colors, plus an offset, and find an f where these equations hold true for all x: f(seq) = (seq′, offset), g(seq′, offset) = seq, f(g(seq′, x)) = (seq′, x).
well, that almost suggests itself. would just taking the derivative of the sequence work?
It’s not enough for the equations to hold; similar but slightly different sequences should line up so I can merge them more easily. I think a one-level FFT should do.
Or maybe just use the signal’s phase/angle as a sort dimension.
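A sketch of what I mean, assuming “one-level FFT” means taking just the fundamental bin of one channel’s signal over the frames (hypothetical function name):

```python
import numpy as np

def cycle_signature(seq):
    # seq: (T,) one color channel's value over T frames.
    # Two cycles that are the same motion at different time offsets
    # share the fundamental's amplitude but differ in its phase,
    # so phase works as an alignment/sort dimension.
    spectrum = np.fft.rfft(seq - np.mean(seq))
    fundamental = spectrum[1]   # first non-DC bin
    return np.abs(fundamental), np.angle(fundamental)
```

Subtracting the mean first plays the same role as the derivative idea: it discards the constant part so only the cycling motion is compared.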
So I tried to implement this phase detection and it… sorta worked, except in the first instance I treated the full RGBARGBARGBA vector as a single signal, and the median cut turned the colors into mud.
Forcing it to treat the colors separately only improved matters a little.
I might need to actually abandon median cut altogether and start over with a strategy closer to what I was doing with my slit-scan experiments: for each cycled color, detect its phase AND amplitude AND frequency, and do reductions based on similarities in those attributes.
Or, to start there, transform the RGBARGBARGBA vectors into a single RGBA, with each component composed of frequency, phase, and amplitude, and perform median cut on that reduced vector space.
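That reduction could look something like this sketch (all names hypothetical; assumes the dominant FFT bin stands in for “frequency”, and T frames of interleaved RGBA):

```python
import numpy as np

def channel_features(seq):
    # Collapse one channel's T-frame signal into (frequency, phase, amplitude)
    # using its dominant non-DC FFT bin.
    spectrum = np.fft.rfft(seq - np.mean(seq))
    k = np.abs(spectrum[1:]).argmax() + 1
    return float(k), float(np.angle(spectrum[k])), float(np.abs(spectrum[k]))

def cycle_to_features(cycled, channels=4):
    # cycled: (T*channels,) interleaved RGBA over T frames.
    # Result: a fixed channels*3 feature vector, regardless of T,
    # which a standard median cut can then partition.
    sig = np.asarray(cycled, dtype=float).reshape(-1, channels)
    return np.concatenate([channel_features(sig[:, c]) for c in range(channels)])
```

The win would be that the feature space has fixed dimensionality however many frames are in the cycle, and that similar cycles land near each other even when time-shifted.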
I need more sleep. Ideally, I’ll find something that is able to find, and potentially reduce, arbitrary animation frames to those “true” cycles where possible.
I’m wondering if all the new features I’m trying to implement are performance regressions. Oh well. It still kinda works!
I just realised that if I can get a full phase/frequency/amplitude analysis for each color cycle index, sorted by phase and frequency, it’s very nearly an optical flow algorithm.
pixels with similar and neighboring phases and frequencies represent regions of an image where light is flowing from one region to another.
So the trick is: can I translate one region’s light flowing into another region into a block-level optical flow? What would an optimisation algorithm for this look like?