Antialiasing: potential improvement #120
Comments
That would be cheaper, but I think the effect is more like a blur than subsampling. You wind up computing another grid offset by half a pixel from the pixel centers, then averaging 4 of those points to calculate each pixel center. It will look smoother than not doing any averaging, but the current approach will provide more detail. But feel free to try it.
I see... I guess I should look for areas of the image with more "entropy" to find out how it really works compared with the current approach.
I wouldn't claim the current approach is based on very rigorous theory. Essentially we calculate at a higher resolution and average the results. But a different arrangement of samples could provide a better speed/quality trade-off. In particular, some random jitter on the subsampled points could reduce moiré effects. I do think the 'fast' antialiasing option is pretty useful. The results are indistinguishable from 'best' and much faster. The only difference is we guess 'well, this pixel is the same as its neighbors, so the subsampled points are probably the same too' and just skip that pixel.
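To make the jitter idea concrete, here is a tiny sketch of what perturbing the subsample positions could look like. The 0.25-pixel base offsets and the `jitter` parameter are my own assumptions for illustration, not anything taken from the codebase:

```python
import random

def subsample_offsets_jittered(jitter=0.1):
    """Return 4 subpixel offsets (in pixel units) around a pixel center.

    The base offsets form a regular 2x2 grid; `jitter` is a hypothetical
    parameter controlling how far each sample may wander from its slot.
    Randomizing the positions trades regular moiré patterns for noise.
    """
    base = [(-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25)]
    return [(dx + random.uniform(-jitter, jitter),
             dy + random.uniform(-jitter, jitter)) for dx, dy in base]
```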
I'm dropping here a couple of interesting entries from Wikipedia: the 1st one took me to the 2nd, in which you can see different subsampling patterns. I understand we're currently using the ...
Here's what I understand of how antialiasing works:
For every pixel (which is an area, not a point, within the complex plane) we take the center value (in the complex plane) to calculate its fate, index, and color. This would be the 1st pass, with no antialiasing yet.
In the 2nd pass, for every pixel (remember, it's an area), we divide it into 4 areas, called subpixels, and calculate the corresponding color for the center of each. Then we assign the whole pixel the average color of the 4 subpixels (sum each color channel and divide by 4).
Assuming no other improvements are in place, this is 4x the cost of calculating the original image, but some extra space is also needed: the image class, which holds the pixel color buffer, also holds some subpixel information (fates and indexes)... I haven't found where it reuses this information, although it seems that is the intention.
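To spell out the two passes, here is a minimal, self-contained sketch under my own assumptions (this is not the project's actual code): `color_at(re, im)` is a hypothetical stand-in for whatever computes a point's fate/index/color, and colors are plain channel tuples.

```python
def render(width, height, x0, y0, pixel_size, color_at, antialias=True):
    image = [[None] * width for _ in range(height)]
    for py in range(height):
        for px in range(width):
            # 1st pass: one sample at the pixel center.
            cx = x0 + (px + 0.5) * pixel_size
            cy = y0 + (py + 0.5) * pixel_size
            image[py][px] = color_at(cx, cy)
    if antialias:
        # 2nd pass: 4 subpixel samples per pixel, averaged.
        # This is the extra x*y*4 evaluations mentioned above.
        offsets = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]
        for py in range(height):
            for px in range(width):
                samples = [color_at(x0 + (px + ox) * pixel_size,
                                    y0 + (py + oy) * pixel_size)
                           for ox, oy in offsets]
                image[py][px] = tuple(sum(ch) / 4 for ch in zip(*samples))
    return image
```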
I have no background in image processing, so maybe I'm missing something, but given that this is not a traditional antialiasing algorithm (it's more like smoothing), I'm wondering if the following improvement is possible (which would, by the way, make use of the subpixel information buffer):
When you divide the pixel into 4 subpixels, instead of calculating the color corresponding to the center of each, calculate it at the outer vertex of each. This would mean that adjacent pixels share common subpixels, reducing the total number of calculations ((x+1)*(y+1) to be precise, which is far less than the current x*y*4).
I'm not sure how this would affect the final result, but since the subpixels would be farther from the center... I hope it's smoother.
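Here is a rough sketch of the corner-sharing idea under the same assumptions as the earlier sketch (`color_at` remains a hypothetical stand-in for the real per-point calculation):

```python
def render_corner_shared(width, height, x0, y0, pixel_size, color_at):
    # One sample per grid vertex: (width+1) * (height+1) evaluations in
    # total, versus width * height * 4 for the current subpixel scheme.
    corners = [[color_at(x0 + px * pixel_size, y0 + py * pixel_size)
                for px in range(width + 1)]
               for py in range(height + 1)]
    image = [[None] * width for _ in range(height)]
    for py in range(height):
        for px in range(width):
            # Each pixel averages the samples at its 4 corners, which it
            # shares with its neighbours.
            samples = (corners[py][px], corners[py][px + 1],
                       corners[py + 1][px], corners[py + 1][px + 1])
            image[py][px] = tuple(sum(ch) / 4 for ch in zip(*samples))
    return image
```

For a 1000x1000 image that would be 1001*1001 ≈ 1.0 million corner evaluations instead of 4 million subpixel evaluations, roughly a 4x saving on the antialiasing pass, at the cost of sampling farther from each pixel's center.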
I'm only considering the `best` antialiasing mode in this explanation. There's another mode called `fast` which skips part of the calculations based on adjacent pixels' likeness.
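For completeness, here is a minimal sketch of the `fast` skip heuristic as I understand it from the description above (only subsample pixels that differ from an already-computed neighbour); it is an illustration, not the project's implementation:

```python
def needs_antialias(image, px, py):
    """Return True if pixel (px, py) should get the extra subpixel samples."""
    here = image[py][px]
    for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nx, ny = px + dx, py + dy
        if 0 <= ny < len(image) and 0 <= nx < len(image[0]):
            if image[ny][nx] != here:
                # A neighbour differs, so the guess "the subsampled points
                # are probably the same too" does not hold; do the work.
                return True
    return False
```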