Currently, missing pixels are excluded from the integration and their value is approximated by the average value of the other 'good' pixels.
This seems to be a better approximation than effectively setting the value of those pixels to zero.
I asked an AI for a better, but still simple, solution. One good idea would be to perform a local interpolation:
```python
import numpy as np
from numba import njit

@njit(inline='always')
def interpolate_pixel(image, mask, i, j):
    """Estimate the value of masked pixel (i, j) from its valid neighbors.

    `mask` is boolean, True marking bad pixels.
    """
    h, w = image.shape
    # 4-connected neighbors
    s = 0.0
    c = 0
    if i > 0 and not mask[i-1, j]:
        s += image[i-1, j]; c += 1
    if i < h-1 and not mask[i+1, j]:
        s += image[i+1, j]; c += 1
    if j > 0 and not mask[i, j-1]:
        s += image[i, j-1]; c += 1
    if j < w-1 and not mask[i, j+1]:
        s += image[i, j+1]; c += 1
    if c > 0:
        return s / c
    # fallback: 8-connected neighbors
    s = 0.0
    c = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            ni = i + di
            nj = j + dj
            if 0 <= ni < h and 0 <= nj < w:
                if not mask[ni, nj]:
                    s += image[ni, nj]
                    c += 1
    if c > 0:
        return s / c
    return np.nan
```
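To illustrate how such a per-pixel interpolation could be driven over a whole image, here is a minimal pure-NumPy sketch (without numba, and with the 4-neighbour average inlined in place of `interpolate_pixel`). It assumes `mask` is a boolean array with `True` marking bad pixels, and writes results into a copy so that already-filled pixels do not feed back into their neighbours' averages:

```python
import numpy as np

def fill_masked_pixels(image, mask):
    """Replace masked pixels by the mean of their valid 4-neighbours.

    `mask` is boolean, True marking bad pixels. Values are written into a
    copy so that interpolated pixels do not influence each other.
    """
    h, w = image.shape
    filled = image.copy()
    for i, j in zip(*np.nonzero(mask)):
        s = 0.0
        c = 0
        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj]:
                s += image[ni, nj]
                c += 1
        if c > 0:
            filled[i, j] = s / c
    return filled

# Tiny example: one bad pixel surrounded by good ones.
img = np.array([[1.0, 2.0, 3.0],
                [4.0, 0.0, 6.0],
                [7.0, 8.0, 9.0]])
bad = np.zeros_like(img, dtype=bool)
bad[1, 1] = True
print(fill_masked_pixels(img, bad)[1, 1])  # mean of 2, 4, 6, 8 = 5.0
```

In the real code the inner loop would be replaced by a call to `interpolate_pixel`, so the 8-connected fallback also applies.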
This would be a better fix for single missing pixels or small clusters.
However, the elephant in the room, detector gaps, is not solved by this.