Padding in ConvTranspose #3023
Unanswered
GoktugGuvercin
asked this question in Q&A
Replies: 1 comment
Hey @GoktugGuvercin, not entirely sure exactly how `SAME` is implemented; my guess is that it internally results in something like setting

```python
layer = nn.ConvTranspose(32, kernel_size=(3, 3), strides=(2, 2), padding=((1, 2), (1, 2)))
variables = layer.init(jax.random.PRNGKey(0), input)
y = layer.apply(variables, input)
print(y.shape)  # (4, 16, 16, 32)
```

The intuition I have is that `SAME` follows this formula: $H_{new} = s \cdot H$
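One way to sanity-check this guess without running Flax: as far as I know, `nn.ConvTranspose` is built on `jax.lax.conv_transpose`, where explicit padding pairs are *added* around the stride-dilated input before a VALID convolution, giving an output size of $s \cdot (H - 1) + 1 + p_{lo} + p_{hi} - d \cdot (f - 1)$. A quick arithmetic sketch of that reading (the formula is my interpretation of the JAX semantics, not taken from the source):

```python
def conv_transpose_out(h, stride, kernel, pad_lo, pad_hi, dilation=1):
    """Hypothesized Flax/JAX transposed-conv output size: explicit padding
    is added around the stride-dilated input, then a VALID conv is taken."""
    dilated = stride * (h - 1) + 1            # zeros inserted between input elements
    receptive = dilation * (kernel - 1) + 1   # effective kernel extent
    return dilated + pad_lo + pad_hi - receptive + 1

# 8x8 input, 3x3 kernel, stride 2 -- the setup implied by the shapes above
print(conv_transpose_out(8, 2, 3, 1, 1))  # 15 -> matches padding=1
print(conv_transpose_out(8, 2, 3, 1, 2))  # 16 -> matches the ((1, 2), (1, 2)) guess
```

Under this reading, `SAME` just has to pick `(pad_lo, pad_hi)` so that the output comes out to exactly $s \cdot H = 16$, and the asymmetric pair `(1, 2)` is one choice that does.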
-
While building a model with Flax, I noticed a small detail about the padding option: setting the parameter `padding` to a numerical value such as $1$ and to a string value like `"SAME"` are not the same; they seem to use different formulas to compute the output shape.

When I execute the code script above, the output shape is `(4, 15, 15, 32)`. In this case, the following formula is used, so the height and the width are both equal to 15:

$$H_{new} = s \cdot (H - 1) - 2p + d \cdot (f - 1) + 1$$

However, if we replace the value 1 in the `padding` argument of the transpose convolution with `"SAME"` and re-execute the code, the output shape is `(4, 16, 16, 32)`. What is the reason behind this? Is additional output padding added to obtain the same shape?
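For concreteness, both reported shapes can be reproduced numerically from the formula above; the input size of $8 \times 8$ is my inference from the reported outputs, not stated in the question:

```python
def out_size_numeric(h, s, p, f, d=1):
    # The formula the question cites for a numerical padding p
    return s * (h - 1) - 2 * p + d * (f - 1) + 1

def out_size_same(h, s):
    # With padding="SAME", a transposed conv upsamples by exactly the stride
    return s * h

h, s, p, f = 8, 2, 1, 3   # inferred: 8x8 input, stride 2, padding 1, 3x3 kernel
print(out_size_numeric(h, s, p, f))  # 15 -> (4, 15, 15, 32)
print(out_size_same(h, s))           # 16 -> (4, 16, 16, 32)
```

Note that in this configuration $2p = d \cdot (f - 1) = 2$, so this single example cannot distinguish whether the padding term is subtracted (as in the formula above) or added (as JAX's explicit-padding convention would have it); both give 15 here.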