Why?
PyTorch's ONNX exporter unfolds L2* normalization into multiple operators instead of emitting ONNX's single LpNormalization operator.
What?
I feel that folding multiple operators into a single one is within the scope of this package, no?
Relevance
Some hardware backends, e.g. Qualcomm QNN/SNPE, support LpNormalization natively. In such cases, model performance can suffer from PyTorch's arbitrary choice of decomposition.
Kindly provide an example of how to accomplish such a task.
*Possibly L1 and others
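As a starting point, here is a minimal sketch (not this package's built-in API) of one way to do the folding with the plain `onnx` Python API. It assumes the exporter produced a `ReduceL2 -> (Clip) -> (Expand) -> Div` subgraph; the exact decomposition depends on the PyTorch version and opset, so the pattern match may need adjusting. The `axis` argument is also an assumption and should come from the `ReduceL2` axes in a real implementation.

```python
import onnx
from onnx import helper

def fold_l2_normalization(model: onnx.ModelProto, axis: int = 1) -> onnx.ModelProto:
    """Replace ReduceL2 -> (Clip) -> (Expand) -> Div subgraphs with LpNormalization.

    Sketch only: assumes this particular decomposition and a fixed axis.
    """
    graph = model.graph
    # Index nodes by output name for upstream lookups.
    by_output = {out: n for n in graph.node for out in n.output}

    drop = set()        # ids of nodes folded away
    replace_with = {}   # id of the Div node -> new LpNormalization node

    for div in graph.node:
        if div.op_type != "Div":
            continue
        x, denom = div.input[0], div.input[1]
        # Walk the denominator branch back through optional Expand/Clip nodes.
        chain, cur = [], by_output.get(denom)
        while cur is not None and cur.op_type in ("Expand", "Clip"):
            chain.append(cur)
            cur = by_output.get(cur.input[0])
        # The branch must end in a ReduceL2 over the same tensor being divided.
        if cur is None or cur.op_type != "ReduceL2" or cur.input[0] != x:
            continue
        chain.append(cur)

        # One LpNormalization node takes the place of the whole subgraph.
        replace_with[id(div)] = helper.make_node(
            "LpNormalization", inputs=[x], outputs=[div.output[0]],
            axis=axis, p=2,
        )
        drop.update(id(n) for n in chain)

    # Rebuild the node list in place, preserving topological order.
    # Leftover constants feeding Clip/Expand may remain and can be pruned separately.
    new_nodes = [replace_with.get(id(n), n) for n in graph.node if id(n) not in drop]
    graph.ClearField("node")
    graph.node.extend(new_nodes)
    return model

# Usage (hypothetical file names):
# model = onnx.load("model.onnx")
# onnx.save(fold_l2_normalization(model), "model_folded.onnx")
```

Whether the pattern matcher belongs here or in the exporter itself is exactly the question this issue raises; the sketch only shows that the fold is mechanically straightforward once the decomposition is known.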