L209 of `tpu/models/official/efficientnet/autoaugment.py` (lines 205 to 209 at c3186a4):

```python
# Compute the grayscale histogram, then compute the mean pixel value,
# and create a constant image size of that value. Use that as the
# blending degenerate target of the original image.
hist = tf.histogram_fixed_width(degenerate, [0, 255], nbins=256)
mean = tf.reduce_sum(tf.cast(hist, tf.float32)) / 256.0
```

and L278 of `tpu/models/official/detection/utils/autoaugment_utils.py` (lines 274 to 278 at c3186a4), in the implementation of `contrast()`:

```python
# Compute the grayscale histogram, then compute the mean pixel value,
# and create a constant image size of that value. Use that as the
# blending degenerate target of the original image.
hist = tf.histogram_fixed_width(degenerate, [0, 255], nbins=256)
mean = tf.reduce_sum(tf.cast(hist, tf.float32)) / 256.0
```

seem wrong.
`mean` is supposed to be the mean pixel value, but as written it just sums the histogram counts (which equals `height * width`) and divides that by 256.
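The difference is easy to demonstrate. Below is a minimal sketch using NumPy in place of the TF ops (the array and values are made up for illustration): the buggy expression divides the pixel count by 256, whereas the correct mean weights each histogram bin by its pixel value and divides by the number of pixels.

```python
import numpy as np

# Hypothetical 2x2 "grayscale" image standing in for `degenerate`.
img = np.array([[10, 200], [30, 40]], dtype=np.uint8)

# NumPy analogue of tf.histogram_fixed_width(degenerate, [0, 255], nbins=256).
hist, _ = np.histogram(img, bins=256, range=(0, 256))

# Buggy version: sums the counts (== height * width), then divides by 256.
buggy_mean = hist.sum() / 256.0  # 4 / 256 = 0.015625, not a pixel mean

# Corrected version: weight each bin by its pixel value, divide by pixel count.
fixed_mean = (hist * np.arange(256)).sum() / hist.sum()  # == img.mean() == 70.0
```

A sketch of one possible fix in the TF code itself, again just an assumption of how it could look, would be `tf.reduce_sum(tf.cast(hist, tf.float32) * tf.range(256, dtype=tf.float32)) / tf.reduce_sum(tf.cast(hist, tf.float32))`.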
google-research/big_vision and tensorflow/models have the same bug, so ideally all three should be fixed in the same way. See google-research/big_vision#109 for details, including the manual tests I did.