Why do you normalize each image's pixel values to -1.6~1.6 during training and testing? #8
This operation normalizes the image from [0, 255] to [-1.6, 1.6]. You can normalize the image to another range (preferably zero-centered); [-1, 1] or [-1.2, 1.2] are fine :)
Thanks for your reply~ Is 1.6 a particular value that works best in your case? And why don't you divide by a per-channel std? People usually set the mean and std at the same time. Haha~
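For context, the "mean and std at the same time" convention mentioned above usually looks like the sketch below. The ImageNet channel statistics are the commonly used illustrative values, not anything taken from this repo:

```python
import numpy as np

# Commonly used ImageNet per-channel statistics (assumption for illustration,
# not from D-LinkNet).
mean = np.array([0.485, 0.456, 0.406], np.float32).reshape(3, 1, 1)
std = np.array([0.229, 0.224, 0.225], np.float32).reshape(3, 1, 1)

# Dummy HWC uint8 array standing in for a real RGB image.
img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

# HWC -> CHW, scale to [0, 1], then subtract mean and divide by std per channel.
x = np.array(img, np.float32).transpose(2, 0, 1) / 255.0
x = (x - mean) / std  # zero-centered, roughly unit variance per channel
```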
In D-LinkNet, each conv layer is followed by a batchnorm (conv-bn-relu), so in fact there is no difference between [-1.6, 1.6] and [-1, 1] (or even [0, 255]) in this case.
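The invariance claimed here can be checked numerically. The sketch below (illustrative, not code from the repo) applies a per-channel batchnorm with unit gamma and zero beta to the same data scaled three different ways; the outputs are nearly identical because batchnorm removes any linear rescaling of its input:

```python
import numpy as np

def batchnorm(x, eps=1e-5):
    # Per-channel batch normalization over (N, H, W), with gamma=1, beta=0.
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
raw = rng.integers(0, 256, size=(2, 3, 4, 4)).astype(np.float32)  # NCHW, [0, 255]

a = batchnorm(raw / 255.0 * 3.2 - 1.6)  # input scaled to [-1.6, 1.6]
b = batchnorm(raw / 255.0 * 2.0 - 1.0)  # input scaled to [-1, 1]
c = batchnorm(raw)                      # input left in [0, 255]

# All three agree up to the tiny effect of eps on the variance term.
print(np.allclose(a, b, atol=1e-4) and np.allclose(a, c, atol=1e-4))
```

The only residual difference comes from the fixed `eps` inside the square root, which is negligible relative to the data variance at any of these scales.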
Alright, thanks again. Happy New Year btw~ Lol |
Happy New Year~ www |
Hi, thanks for your inspiring work! Just one point I don't understand: why do you set
img = np.array(img, np.float32).transpose(2,0,1)/255.0 * 3.2 - 1.6
when training? Are there any advantages to this operation?
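For reference, a minimal sketch of what that line computes, run on a dummy array (the random input is a stand-in for a real HWC RGB image):

```python
import numpy as np

# Dummy HWC uint8 "image" (assumption: 3-channel RGB input).
img = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)

# HWC -> CHW, then map [0, 255] linearly onto [-1.6, 1.6]:
# x / 255 * 3.2 - 1.6 sends 0 -> -1.6 and 255 -> +1.6.
x = np.array(img, np.float32).transpose(2, 0, 1) / 255.0 * 3.2 - 1.6

print(x.shape)  # (3, 4, 4)
```

So the line is just a channels-first transpose plus an affine rescale of the pixel range to a zero-centered interval.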