Pixel Analytics deconstructs an image into its constituent pixels and rearranges them based on each pixel's saturation, hue, and brightness values.
The saturation, hue, and brightness levels of an image are extracted from a sample of the pixels that make up the whole. A grid of blocks is then constructed and colored according to the hue values of those sampled pixels. Applying filters then rearranges the blocks, grouping together those with similar saturation and brightness values.
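The sampling step can be sketched as follows. This is a minimal illustration, not the app's actual implementation: the `pixels` list and the `step` parameter are assumptions standing in for pixels read from a loaded image.

```python
import colorsys

# Hypothetical stand-in for pixels read from a loaded image:
# a list of (R, G, B) tuples in the 0-255 range.
pixels = [(255, 0, 0), (0, 128, 255), (40, 200, 40), (250, 250, 10)]

def sample_hsb(pixels, step=2):
    """Extract (hue, saturation, brightness) from every `step`-th pixel.

    colorsys works on 0-1 floats, so RGB values are normalized first.
    """
    sampled = []
    for r, g, b in pixels[::step]:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        sampled.append((h, s, v))
    return sampled

hsb = sample_hsb(pixels)
```

Sampling every n-th pixel rather than all of them keeps the extraction cheap while still capturing the image's overall color distribution.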
Once an image is loaded into the system, a grid of blocks is constructed from the user-specified number of rows and columns. Each block is colored using the RGB values of the sampled pixels. The user can then toggle through a set of filters that reposition the blocks by mapping their saturation and brightness values to the x- and y-axes.
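One such filter might map saturation to horizontal position and brightness to vertical position. The sketch below is an assumed mapping for illustration; the axis orientation and the `filter_position` name are not taken from the project itself.

```python
def filter_position(saturation, brightness, width, height):
    """Map a block's saturation to x and brightness to y.

    Assumes saturation and brightness are 0-1 floats and a
    screen-style coordinate system with the origin at the
    top-left, so brighter blocks land higher on the canvas.
    """
    x = saturation * (width - 1)
    y = (1 - brightness) * (height - 1)
    return x, y

# A fully bright, fully saturated block goes to the top-right corner.
pos = filter_position(1.0, 1.0, 800, 600)
```

Because each filter is just a function from (saturation, brightness) to (x, y), toggling between filters amounts to recomputing every block's position with a different mapping function.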