Diagnosing The Sony "Star Eater"

Introduction

The original Sony "Star Eater" spatial filtering algorithm is discussed here: Sony Star Eater. The later version, found in the Sony A7RII firmware v4.0, is discussed here: Sony Star Eater v2.

The purpose of this page is to show how to recognise it and how the algorithm was reverse engineered.


How to recognise spatial filtering

Contrary to popular belief, the best way to recognise spatial filtering at work is not to take an image of stars. Instead, the best way is to take a well-exposed flat frame (e.g. an out-of-focus image of a white sheet) at a fairly high ISO so the image contains lots of noise. Alternatively, a long-exposure dark frame taken with the lens cap in place is also quite effective. Here is an example of two raw files from a Sony A7RII. These are actually night sky images but the background areas are still pretty random - the first one has no spatial filtering and the second suffers from the "Star Eater". The images are scaled up so the individual pixels can be easily seen:

[Image: background noise with no spatial filtering]

[Image: background noise with spatial filtering (the "Star Eater")]

The noise in the background of the first image is fairly random but the second image has a very obvious noise structure. In the first image it is easy to find single pixels that are brighter than the surrounding ones and single pixels that are darker than the surrounding ones. Technically speaking, you are looking for pixels that are brighter or darker than their neighbours of the same colour. In the second image it is impossible to find single pixels that are either brighter or darker than their surroundings; instead it is only possible to find a neighbouring pair of pixels that is brighter or darker than its surroundings. I call this the "pixel pairing" effect. It is the distinctive feature of this type of spatial filtering.
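
As a rough sketch of how to check for this in software, the following Python snippet counts the pixels in one colour plane that are strictly brighter or darker than all of their same-colour neighbours. It assumes the raw file has been decoded with the rawpy library and that the sensor uses an RGGB Bayer layout - both assumptions, not something taken from the camera itself. An unfiltered frame typically contains many such isolated extrema; a frame processed by the "Star Eater" contains essentially none.

    import numpy as np
    import rawpy  # assumed available for decoding Sony ARW files

    def count_isolated_extrema(plane):
        # Count pixels strictly brighter (or darker) than all eight of their
        # neighbours within a single Bayer colour plane.  Unfiltered frames
        # contain many such pixels; "Star Eater" frames contain almost none.
        core = plane[1:-1, 1:-1]
        shifts = [plane[1 + dy:plane.shape[0] - 1 + dy,
                        1 + dx:plane.shape[1] - 1 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0)]
        nmax = np.max(shifts, axis=0)
        nmin = np.min(shifts, axis=0)
        return int((core > nmax).sum()), int((core < nmin).sum())

    # Illustrative usage - the file name and the RGGB layout are assumptions.
    raw = rawpy.imread("dark_frame.ARW")
    bayer = raw.raw_image_visible.astype(np.int32)
    red_plane = bayer[0::2, 0::2]      # red pixels only (assuming RGGB)
    print(count_isolated_extrema(red_plane))

The red or blue plane is the simplest place to look; the green pixels sit on a quincunx rather than a simple grid, so extracting and checking them takes a little more care.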

Here is another real example of a Bulb mode long exposure dark frame taken with my Sony A7S. The pairs of bright pixels are quite obvious:

[Image: Bulb mode long exposure dark frame from a Sony A7S showing pairs of bright pixels]


What is "pixel pairing"?

In the following image I have taken the earlier example and drawn links between most of the paired pixels in a small area. The colour of the link indicates the colour of the linked pixels:

[Image: paired pixels linked by coloured lines]

To make it easier to see, here is the same image but the individual pixels of the colour filter array are coloured so they can be identified as red, green or blue:

[Image: the same area with the red, green and blue pixels of the colour filter array coloured]

Given the noisiness of the data (i.e. the large range of pixel values represented), the probability of such a high density of paired values occurring purely by chance is vanishingly small.

The astute reader will recognise that this pattern of pairing comes from the Sony A7RII firmware v4.0 algorithm, because many of the green pixels are paired with their immediate diagonal neighbour instead of the next diagonal pixel but one. There are also a couple of obvious green diagonal pairs on the right-hand side which are stars. It appears that the star's red and blue pixels, which would have been recorded by the sensor, have been "eaten", leaving just a green diagonal pair. This anomalous survival of green pixels is the effect that turns small stars green in the Sony A7RII firmware v4.0 algorithm.


Reverse Engineering the Algorithm

Once the existence of pixel pairing has been established it is possible to guess the algorithm that might have caused such a pattern. The simplest algorithm that fits the bill is one where bright pixels have their value reduced to match one of their same-colour neighbours and dim pixels have their value increased. It is this matching of values that creates a pair. Once a candidate algorithm has been suggested, it defines a testable "rule" for the pixel values in the filtered image, and it is then possible to check the whole image to see if any pixel values break the postulated rule.
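
Here is a minimal Python sketch of that candidate algorithm - my reconstruction, not Sony's actual firmware code - working on a single extracted colour plane. Each pixel is clipped into the range spanned by its eight same-colour neighbours, and a companion function counts the pixels that still violate the postulated rule; on a genuinely filtered raw file the violation count should be zero.

    import numpy as np

    def neighbour_min_max(plane):
        # Minimum and maximum over the eight surrounding pixels.  The plane is
        # a single extracted Bayer colour channel, so these are the
        # same-colour neighbours in the full mosaic.
        p = np.pad(plane, 1, mode="edge")
        shifts = [p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0)]
        return np.min(shifts, axis=0), np.max(shifts, axis=0)

    def candidate_filter(plane):
        # The postulated filter (a reconstruction, not Sony's actual code):
        # clip each pixel into the range spanned by its same-colour
        # neighbours, so a bright outlier is reduced to match its brightest
        # neighbour and a dark outlier is raised to match its darkest one.
        nmin, nmax = neighbour_min_max(plane)
        return np.clip(plane, nmin, nmax)

    def rule_violations(plane):
        # Test the postulated rule over the whole plane: after filtering, no
        # pixel should remain strictly outside its neighbours' range.
        nmin, nmax = neighbour_min_max(plane)
        return int((plane > nmax).sum() + (plane < nmin).sum())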

In addition it is possible to take carefully controlled shots of artificial stars (i.e. points of light at a distance) and examine the difference with and without spatial filtering. Here are two actual examples of images of a single artificial star taken with the Sony A7S, with and without spatial filtering:

Example 1:

[Image: artificial star recorded with and without spatial filtering (example 1)]

Example 2:

[Image: artificial star recorded with and without spatial filtering (example 2)]


Detection of Spatial Filtering: Reduced Noise

Another method of detecting noise reduction algorithms or spatial filtering is simply to measure the noise (i.e. the standard deviation of pixel values) in a small area of the image. This works well when it is possible to take otherwise identical images with and without noise reduction, e.g. Bulb mode on and Bulb mode off.

This will certainly show the existence of filtering but it doesn't give any information on how the algorithm itself works.
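
A short Python sketch of the measurement, again assuming rawpy for decoding, an RGGB layout and purely illustrative file names and patch coordinates:

    import numpy as np
    import rawpy  # assumed available for decoding the raw files

    def patch_noise(path, y=1000, x=1000, size=200):
        # Standard deviation of the raw values in a small patch of one Bayer
        # colour plane (red plane of an assumed RGGB layout; the coordinates
        # are purely illustrative).
        raw = rawpy.imread(path)
        plane = raw.raw_image_visible.astype(np.float64)[0::2, 0::2]
        return plane[y:y + size, x:x + size].std()

    # Two otherwise identical dark frames (hypothetical file names).  A markedly
    # lower standard deviation in the second frame reveals the filtering.
    print("Bulb off:", patch_noise("dark_bulb_off.ARW"))
    print("Bulb on :", patch_noise("dark_bulb_on.ARW"))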


Detection of Spatial Filtering: Fast Fourier Transforms

For the technically minded, another method commonly used to detect the existence of any kind of noise reduction or spatial filtering is the Fourier Transform. Noise reduction will attenuate high frequency information in the image leading to a loss of fine detail. This loss of detail is another "Star Eater" effect sometimes complained about - not just by astro-photographers.

The loss of high frequency information (i.e. fine image detail) can be seen in the 2-dimensional FFT (Fast Fourier Transform):

[Image: smoothed 2-D FFT of a spatially filtered image, showing attenuation of the high frequencies]

Smoothing has been applied to the above FFT to make the overall structure (the brighter and darker areas) more obvious. If the data in the original image were random noise then the 2-D FFT would be uniformly bright from the centre (the DC and low frequency components) out to the edges (the high frequency components). However, in the above 2-D FFT the dark areas near the edges indicate the attenuation of certain frequencies - in this case the high frequencies, i.e. the fine image detail. The FFT would normally be performed on a single colour channel extracted directly from the raw file.
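
As a rough sketch, the following Python snippet produces this kind of smoothed 2-D FFT display from one colour plane of a raw file. It assumes rawpy, scipy and matplotlib are available, an RGGB layout, and an illustrative file name:

    import numpy as np
    import rawpy                              # assumed available for raw decoding
    from scipy.ndimage import uniform_filter  # simple box smoothing
    import matplotlib.pyplot as plt

    raw = rawpy.imread("dark_frame.ARW")      # illustrative file name
    plane = raw.raw_image_visible.astype(np.float64)[0::2, 0::2]  # one colour plane (RGGB assumed)
    plane -= plane.mean()                     # remove the mean so the single DC spike does not dominate

    # 2-D FFT magnitude, shifted so the low frequencies sit at the centre.
    mag = np.abs(np.fft.fftshift(np.fft.fft2(plane)))

    # Smooth and log-scale the magnitude to bring out the overall structure.
    # Unfiltered noise gives a roughly uniform image; spatial filtering
    # darkens the high-frequency regions towards the edges.
    plt.imshow(np.log1p(uniform_filter(mag, size=15)), cmap="gray")
    plt.show()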

Again this technique is excellent for discovering the existence of filtering but doesn't give any information on how the algorithm itself works.
Fourier transform techniques are used with great effect by Jim Kasson, for instance:
      Sony a7RII long exposure spatial filtering
      Sony a7RII FW 4.0 star-eating


Conclusion

Although we can never be 100% certain of exactly what is going on inside the camera's firmware, careful examination of the raw data can be surprisingly effective in determining how the algorithm might be working. Once the algorithm is understood then its effect on stars in the image can be precisely predicted. For instance, what is the effect on a single-pixel star? What is the effect on a star enclosed in a 2x2 block? What is the effect on a star that has spread beyond a 2x2 block into a 3x3 block or a 4x4 block, etc.?
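
By way of illustration, here is a toy simulation using the candidate filter sketched earlier (so it carries the same assumptions and is not a statement about Sony's actual code). It shows why a star confined to a single same-colour pixel is erased completely, while a star spread over two adjacent same-colour pixels survives intact:

    import numpy as np

    # A toy illustration of the postulated filter, applied to a single colour
    # plane containing a synthetic "star" on a flat background.  All the
    # values here are made up purely for illustration.
    def candidate_filter(plane):
        p = np.pad(plane, 1, mode="edge")
        shifts = [p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0)]
        return np.clip(plane, np.min(shifts, axis=0), np.max(shifts, axis=0))

    background, star = 100, 4000
    single = np.full((7, 7), background)
    single[3, 3] = star                    # star confined to one same-colour pixel
    pair = np.full((7, 7), background)
    pair[3, 3] = pair[3, 4] = star         # star spread over two adjacent same-colour pixels

    print(candidate_filter(single).max())  # 100  - the single pixel is "eaten"
    print(candidate_filter(pair).max())    # 4000 - each pixel keeps a bright neighbour, so the pair survives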