Hi, why is the relation embedding computed from the predictions of layer i-1 and layer i? Is there any extra benefit to this? What would happen if the relation were computed only from pairwise predictions within layer i?
Hi @runfeng-q. We use the $Box$ predictions from layer i-1 and layer i to compute the relation, inspired by iterative bounding box refinement. In the decoder, the network predicts the box refinement $\Delta_i = Box_i - Box_{i-1}$. Since the network's output is the refinement between two adjacent layers' boxes, we correspondingly encode the positional relation using the adjacent layers' boxes $Box_i$ and $Box_{i-1}$, which should be more helpful for predicting the next layer's output $Box_{i+1} - Box_i$.
If the relation is computed only from layer $i$, the advantage is that one more relation layer can be added, for 6 relation layers in total. As I recall, we ran an ablation on this once: 6 relation layers using only layer i performed about the same as 5 relation layers using layer i-1/layer i. Since the latter saves one layer of relation computation, we made layer i-1 and layer i the default setting.
If you're interested, feel free to run the comparison yourself; it's also very easy to implement in code~
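For readers who want to try the comparison, here is a minimal sketch of the idea being discussed. It is not the repository's implementation; the function name, box format `(cx, cy, w, h)`, and the log-scale geometry features (in the style of Relation Networks / iterative box refinement) are all assumptions. The only point it illustrates is the cross-layer pairing: each layer-i box is related to every layer-(i-1) box, and swapping `boxes_prev` for `boxes_curr` gives the "layer i only" variant from the ablation above.

```python
import math

def box_relation_embedding(boxes_prev, boxes_curr, eps=1e-6):
    """Pairwise geometric relation between boxes of two decoder layers.

    boxes_prev: list of (cx, cy, w, h) boxes from layer i-1
    boxes_curr: list of (cx, cy, w, h) boxes from layer i
    Returns an N_curr x N_prev grid of 4-d log-scale relation features,
    which would then be mapped to an embedding (e.g. by a small MLP).
    """
    rel = []
    for (cx2, cy2, w2, h2) in boxes_curr:
        row = []
        for (cx1, cy1, w1, h1) in boxes_prev:
            # center offsets, normalized by the previous box size
            dx = math.log(abs(cx2 - cx1) / (w1 + eps) + eps)
            dy = math.log(abs(cy2 - cy1) / (h1 + eps) + eps)
            # scale ratios between the two layers' boxes
            dw = math.log(w2 / (w1 + eps))
            dh = math.log(h2 / (h1 + eps))
            row.append((dx, dy, dw, dh))
        rel.append(row)
    return rel

# cross-layer relation (layer i-1 vs layer i), as in the default setting:
prev = [(0.5, 0.5, 0.2, 0.2)]
curr = [(0.5, 0.5, 0.4, 0.2)]
features = box_relation_embedding(prev, curr)

# "layer i only" variant: pair layer i's boxes with themselves instead:
same_layer = box_relation_embedding(curr, curr)
```

The width ratio in `features[0][0]` is `log(0.4 / 0.2) ≈ 0.693`, reflecting that the box doubled in width between the two layers; in the self-pairing variant the diagonal entries are near zero.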
Thanks!!