In section 1.1, I show the effect of convolving the finite difference operators Dx = [1, -1] and Dy = [1, -1]^T with the image of the cameraman. To combine the partial derivatives into a gradient magnitude image, I computed sqrt(dx**2 + dy**2). Lastly, I binarized the gradient magnitude with a threshold of 75 to create an edge image. The order of images presented from left-to-right is dx, dy, gradient magnitude, and binarized.
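Below is a minimal sketch of this pipeline. Loading the cameraman from skimage.data is my assumption; the operators and the threshold of 75 come from the steps above.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import data

img = data.camera().astype(np.float64)  # assumed source for the cameraman image

D_x = np.array([[1, -1]])    # finite difference in x
D_y = np.array([[1], [-1]])  # finite difference in y (transpose of D_x)

dx = convolve2d(img, D_x, mode='same')
dy = convolve2d(img, D_y, mode='same')

grad_mag = np.sqrt(dx**2 + dy**2)          # gradient magnitude
edges = (grad_mag > 75).astype(np.uint8)   # binarize with threshold 75
```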
To denoise the results from section 1.1, I first apply a Gaussian filter to smooth the image of the cameraman and then repeat the steps from 1.1. For the Gaussian filter, I used a kernel size of 15 with a standard deviation of 2.5. For the binarization, I used a lower threshold of 22, since smoothing reduces the gradient magnitudes. The order of images presented from left-to-right is Gaussian dx, Gaussian dy, Gaussian gradient magnitude, and Gaussian binarized.
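A sketch of the smoothing step, continuing from the snippet above. The kernel size of 15 and sigma of 2.5 come from the text; building the 2D kernel as an outer product of OpenCV's 1D kernel is my choice.

```python
import cv2

g1d = cv2.getGaussianKernel(ksize=15, sigma=2.5)  # 15x1 column vector
gaussian = g1d @ g1d.T                            # separable -> 15x15 2D kernel

smoothed = convolve2d(img, gaussian, mode='same', boundary='symm')
# ...then repeat the dx/dy/gradient-magnitude steps from 1.1 on `smoothed`,
# binarizing with the lower threshold of 22.
```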
To test the derivative theorem of convolution, I first convolve the Gaussian filter with the finite difference operators Dx and Dy, and then convolve the resulting derivative-of-Gaussian (DoG) filters with the image. According to the theorem, the final results should match those from the two-step approach above. I found that to make this work in practice, I had to set boundary='symm' when using scipy.signal.convolve2d. I use the same binarization threshold and Gaussian parameters as above. The order of images presented from left-to-right is DoG dx, DoG dy, DoG gradient magnitude, and DoG binarized.
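A sketch of the single-pass version, reusing the names defined above:

```python
# Fold the finite difference into the Gaussian first...
dog_x = convolve2d(gaussian, D_x)  # default mode='full' keeps the whole kernel
dog_y = convolve2d(gaussian, D_y)

# ...then convolve the image once per direction.
dx = convolve2d(img, dog_x, mode='same', boundary='symm')
dy = convolve2d(img, dog_y, mode='same', boundary='symm')
# Up to boundary handling (hence boundary='symm'), this matches blurring
# first and differentiating second.
```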
To sharpen an image, I first convolve the image with a Gaussian filter and then subtract this result from the original image to extract the high-frequency details. Then, I add the high-frequency image, multiplied by an alpha parameter that controls the amount of sharpening, back to the original image. The sharpened image is calculated with the unsharp mask filter: img + alpha * (img - blurred).
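A minimal sketch of the unsharp mask, assuming the [0, 255] float image and the gaussian kernel from above; the function name and the clipping are my own choices.

```python
def sharpen(img, gaussian, alpha):
    """Unsharp mask: add back alpha times the high-frequency residual."""
    blurred = convolve2d(img, gaussian, mode='same', boundary='symm')
    return np.clip(img + alpha * (img - blurred), 0, 255)
```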
Below I have a sharp image of a waterfall. To test whether sharpening can fully recover the high-frequency details of an image, I first blurred the waterfall using a Gaussian filter and then sharpened it using the unsharp mask filter. While the sharpened image (right) does appear sharper than the blurred image (middle), upon closer inspection it still fails to recover the full range of high-frequency signal found in the original image (left). This is expected: blurring discards high frequencies, and the unsharp mask can only amplify whatever high frequencies remain.
Given two images, I create a hybrid image by extracting the high-frequency components of one image and the low-frequency components of the other and averaging the two results together. To extract the components, I blur each image with a Gaussian filter: the blurred image itself provides the low frequencies, and subtracting the blur from the original provides the high frequencies.
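A sketch under those definitions; the function name and the use of two separate Gaussian kernels (one per cutoff frequency) are my assumptions.

```python
def hybrid(im_low, im_high, g_low, g_high):
    """Low frequencies of one image plus high frequencies of the other."""
    low = convolve2d(im_low, g_low, mode='same', boundary='symm')
    high = im_high - convolve2d(im_high, g_high, mode='same', boundary='symm')
    return (low + high) / 2  # the averaging described above
```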
After tweaking the parameters a bunch, this was the best result I could get, but the high-frequency details of the Campanile are too prominent and can still be seen from afar.
The first and second levels contain the aligned images, the third and fourth levels are the high- and low-pass filtered images, and the fifth level is the hybrid image. The Fourier transform of each image is shown to its right.
A Gaussian stack iteratively applies a Gaussian filter to an image, so each level is increasingly blurred (unlike a pyramid, there is no downsampling). A Laplacian stack computes the difference between adjacent levels of the Gaussian stack to isolate the high-frequency details at each scale. The last level of the Laplacian stack is a copy of the last level of the Gaussian stack so that when we collapse the stack we are able to reconstruct the original image. Note: images in the Laplacian stack have been normalized for visualization.
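A hedged sketch of both stacks, written for a single channel (the color version below simply applies this per channel); the function names are mine.

```python
def gaussian_stack(img, gaussian, levels):
    """Repeatedly blur without downsampling (a stack, not a pyramid)."""
    stack = [img]
    for _ in range(levels - 1):
        stack.append(convolve2d(stack[-1], gaussian, mode='same', boundary='symm'))
    return stack

def laplacian_stack(g_stack):
    """Differences of adjacent Gaussian levels; the last level copies the
    last Gaussian level so summing the stack reconstructs the image."""
    return [g_stack[i] - g_stack[i + 1] for i in range(len(g_stack) - 1)] + [g_stack[-1]]
```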
To blend two images together, we perform the blending at each level of the stacks and collapse the result at the end. At each level i, I compute (img1_laplacian[i] * mask_gaussian[i]) + (img2_laplacian[i] * (1 - mask_gaussian[i])), where mask_gaussian is a Gaussian stack of the mask.
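Putting this together with the stack sketches above (the clip range assumes a [0, 255] image):

```python
def blend(im1, im2, mask, gaussian, levels):
    """Multiresolution blending: blend per level with a blurred mask,
    then collapse by summing the blended levels."""
    l1 = laplacian_stack(gaussian_stack(im1, gaussian, levels))
    l2 = laplacian_stack(gaussian_stack(im2, gaussian, levels))
    gm = gaussian_stack(mask, gaussian, levels)  # progressively softer mask
    blended = [l1[i] * gm[i] + l2[i] * (1 - gm[i]) for i in range(levels)]
    return np.clip(sum(blended), 0, 255)
```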
I use a centered vertical mask for this result, similar to the mask used for the apple and orange.
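For reference, a sketch of how such a step mask can be constructed (my own construction; the Gaussian stack of the mask is what softens the seam during blending):

```python
mask = np.zeros_like(img)
mask[:, mask.shape[1] // 2:] = 1.0  # one image on the left half, the other on the right
```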
Bells & Whistles: All implementations of Gaussian and Laplacian stacks and multiresolution blending are done in color.
Takeaways: My biggest takeaway from this project is how powerful the transformations we can apply to images can be with just a basic understanding of high-pass and low-pass filters and of image characteristics at different frequencies. If I were to show these results to someone, they would probably guess they were the byproduct of a creative prompt + Stable Diffusion, but instead they only required simple image processing techniques!
Acknowledgements: I used the following prompt with ChatGPT to create a base template for the website: https://chatgpt.com/share/66f23112-a398-8007-8292-c12dcebf2828