The method kaolin.ops.spc.Conv3d is great for handling batches of SPC features. However, when I want to write a self-attention layer, I need to use nn.Linear. That means I have to transfer the SPC features of shape (L, C) to (B, l_max, C) with padding. Is there an elegant way to handle this without padding and converting the features to (B, l_max, C)?
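For what it's worth, here is a sketch of one possible workaround in plain PyTorch (the lengths and feature sizes below are hypothetical, not from kaolin). Pointwise layers like nn.Linear act on the last dimension only, so they can be applied to the packed (L, C) tensor directly with no padding. Only the attention step itself needs a (B, l_max, C) view, and that can be confined to the attention call with a key_padding_mask, then repacked:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence

# Hypothetical packed batch: 3 samples with lengths 4, 2, 3 and C = 8 channels.
lengths = [4, 2, 3]
feats = torch.randn(sum(lengths), 8)  # packed shape (L, C), L = 9

# Pointwise layers need no padding: nn.Linear maps each row independently.
linear = nn.Linear(8, 8)
out = linear(feats)  # still (L, C)

# For self-attention, pad only around the attention call and mask the
# padded slots (True in key_padding_mask means "ignore this position").
per_sample = torch.split(feats, lengths)             # tuple of (l_i, C)
padded = pad_sequence(per_sample, batch_first=True)  # (B, l_max, C)
mask = torch.arange(padded.size(1))[None, :] >= torch.tensor(lengths)[:, None]

attn = nn.MultiheadAttention(embed_dim=8, num_heads=2, batch_first=True)
attended, _ = attn(padded, padded, padded, key_padding_mask=mask)

# Repack to (L, C) by dropping the padded slots again.
repacked = torch.cat([attended[i, :l] for i, l in enumerate(lengths)])
```

This still pads internally, but only transiently inside the attention layer, so the rest of the network stays in the packed (L, C) layout; the padded memory is proportional to B * l_max only for that one op.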