Analyzing images using color difference algorithms
My project team, the Courageous Trapeze (artists), inherited the Dvv.js codebase from the Flabbergasted Diplomats, a team of three. Dvv.js is an "out-of-the-box" distributed computing network that lets developers define a problem on the server side, partition it into individual chunks, and outsource those chunks to distributed clients for processing. When all clients have finished solving their chunks, the server compiles the results together and outputs them for viewing.
We decided to apply this distributed computing network to the processing of large images, around 3000x5000 pixels each. Here is how it works:
- Each uploaded image is converted into an array of RGB pixels, where each pixel is an array [ r, g, b ].
- We iterate over the array and check whether each pixel's RGB value is similar to that of a designated color. In our example, we chose the RGB value (255, 0, 0), the brightest pure red, but the value could be anything.
- We retain all pixels that match the designated "red" and convert every other pixel to grayscale. The resulting array of pixels is returned to our server, which outputs the modified photo to the clients. (A sketch of this per-pixel pass follows the list.)
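To make the list concrete, here is a minimal sketch of the per-chunk pass a client might run. The pixel format ([ r, g, b ] arrays) comes from the list above; `processChunk`, `TARGET_RGB`, and the `isSimilarToTarget()` predicate (sketched at the end of this post) are illustrative names, not part of Dvv.js, and the grayscale weights are just one common weighted-average formula.

```js
var TARGET_RGB = [255, 0, 0]; // the designated "red" from the example above

function processChunk(pixels) {
  return pixels.map(function (pixel) {
    if (isSimilarToTarget(pixel, TARGET_RGB)) {
      return pixel; // keep pixels close to the designated color untouched
    }
    // Convert everything else to grayscale with a weighted average of R, G, B
    var gray = Math.round(0.21 * pixel[0] + 0.72 * pixel[1] + 0.07 * pixel[2]);
    return [gray, gray, gray];
  });
}
```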
Step 2 above was my challenge for yesterday. My initial approach was to review RGB color tables and build a series of ranges: if a pixel fell within a given range, we would deem it "red". However, this approach lacked a systematic, mathematical rationale for calling two colors similar. I wanted to find something more substantial.
While reading a bunch of Stack Overflow posts, I stumbled upon two quality npm modules:
1. color-space (https://www.npmjs.com/package/color-space)
2. delta-e (https://www.npmjs.com/package/delta-e)
CIE-76 and CIE-94 are two popular algorithms for quantifying the difference between two colors (http://en.wikipedia.org/wiki/Color_difference). Both operate on colors expressed in the L*a*b* color space rather than RGB, which is why the color-space conversion module is needed alongside delta-e.
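For reference, CIE-76 is simply the Euclidean distance between two colors in L*a*b* space (CIE-94 refines it with weighting terms for lightness, chroma, and hue):

$$\Delta E_{76}^{*} = \sqrt{(L_2^{*} - L_1^{*})^{2} + (a_2^{*} - a_1^{*})^{2} + (b_2^{*} - b_1^{*})^{2}}$$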
Using these tools, I was able to implement a working color difference algorithm.
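Roughly, the two modules fit together as sketched below. This assumes the APIs I recall from their READMEs: color-space's `space.rgb.lab()` converter and delta-e's `DeltaE.getDeltaE76()`, which takes colors as `{L, A, B}` objects. The `isSimilarToTarget` name and the `THRESHOLD` value are illustrative and would need tuning against real images.

```js
var space = require('color-space'); // rgb -> lab conversion
var DeltaE = require('delta-e');    // CIE color difference formulas

var THRESHOLD = 25; // arbitrary cutoff for "similar enough"

function isSimilarToTarget(pixel, target) {
  // The CIE formulas compare colors in L*a*b*, so convert both RGB triplets first
  var lab1 = space.rgb.lab(pixel);
  var lab2 = space.rgb.lab(target);

  // delta-e expects colors as {L, A, B} objects
  var difference = DeltaE.getDeltaE76(
    { L: lab1[0], A: lab1[1], B: lab1[2] },
    { L: lab2[0], A: lab2[1], B: lab2[2] }
  );

  return difference <= THRESHOLD;
}
```

A lower threshold keeps only pixels very close to pure red; raising it pulls in oranges and pinks, so the right value depends on how strict the "red" match needs to be.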