torch.prims.convert_element_type to linalg bf16 to f16 fail #3962
The buggy code is here: https://github.com/llvm/torch-mlir/blob/09af3b6030d8d0c0ee8a80840734224d5c4b82a3/lib/Conversion/Utils/Utils.cpp#L337C1-L342C6
When the input scalar float is bf16 and the target dtype is f16, their widths are the same, and the arith::extf op does not support this kind of cast. I also tried arith::truncf, and it failed as well. I couldn't find any other arith op for float-type casts. Does this mean the arith dialect does not support converting bf16 to f16?
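For illustration, a minimal IR sketch of what the lowering currently tries to emit (the value name %x is a placeholder, not from the reproducer). Both ops are rejected by the verifier because neither strictly widens nor strictly narrows the type:

```mlir
// extf requires the result type to be wider than the operand type,
// but bf16 and f16 are both 16 bits wide, so this fails to verify:
// 'arith.extf' op operand type 'bf16' and result type 'f16' are cast incompatible
%0 = arith.extf %x : bf16 to f16

// truncf fails for the symmetric reason: the result must be strictly
// narrower than the operand, and the widths are equal.
%1 = arith.truncf %x : bf16 to f16
```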
arith.bitcast might work when the widths are equal.
I believe the canonical way to convert between f16 and bf16 is to first upcast to f32. It's not really a general thing; it's specific to those two types.
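A sketch of that canonical two-step path (again with a placeholder value %x), plus a note on why arith.bitcast, although it verifies for equal widths, is not a correct fix:

```mlir
// bf16 -> f16 via the wider f32 type: each step is legal because it
// strictly widens (extf) or strictly narrows (truncf) the float type.
%wide = arith.extf %x : bf16 to f32
%res  = arith.truncf %wide : f32 to f16

// For comparison, arith.bitcast passes the verifier when the widths
// match, but it only reinterprets the raw 16 bits; it performs no
// numeric conversion, so it would produce wrong values here.
%bits = arith.bitcast %x : bf16 to f16
```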
To fix issue #3962: 'arith.extf' op operand type 'bf16' and result type 'f16' are cast incompatible
This error comes from the llama3_8b_fp8 model.
Small reproducer input IR: convert.torch.mlir
To reproduce:
torch-mlir-opt --torch-decompose-complex-ops --cse --canonicalize convert.torch.mlir > todtype.torch.mlir
torch-mlir-opt --convert-torch-to-linalg todtype.torch.mlir