The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior.
#15
I get the following warning:

"The attention mask is not set and cannot be inferred from input because pad token is same as eos token. As a consequence, you may observe unexpected behavior. Please pass your input's attention_mask to obtain reliable results.
You are not running the flash-attention implementation, expect numerical differences."

I already tried "model.generation_config.pad_token_id = tokenizer.pad_token_id" and "pad_token_id=generator.tokenizer.eos_token_id", but the warning persists.
I'm just running the QuickStart script.
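The warning fires because `generate` received only the token IDs: when the pad token ID equals the EOS token ID, the library cannot tell whether a trailing token is padding (to be masked out) or a genuine end-of-sequence token, so it asks you to supply the attention mask explicitly. A minimal plain-Python sketch of the mask in question (the `build_attention_mask` helper is hypothetical, for illustration only):

```python
def build_attention_mask(input_ids, pad_token_id):
    """Return an attention mask: 1 for real tokens, 0 for padding.

    This naive version marks every occurrence of pad_token_id as
    padding. That is exactly the ambiguity behind the warning: when
    pad_token_id == eos_token_id, a real trailing EOS token would be
    masked out too, so the mask cannot be inferred reliably and must
    come from the tokenizer instead.
    """
    return [[0 if tok == pad_token_id else 1 for tok in seq]
            for seq in input_ids]

# Hypothetical padded batch; 50256 is both the pad and EOS id
# in GPT-2-style vocabularies.
batch = [
    [15496, 11, 995, 50256],   # shorter sequence, padded (or EOS?)
    [464, 2068, 7586, 21831],  # full-length sequence
]
print(build_attention_mask(batch, 50256))
# [[1, 1, 1, 0], [1, 1, 1, 1]]
```

In practice the fix is usually not to build the mask by hand but to let the tokenizer produce it and forward it to generation: calling `inputs = tokenizer(prompt, return_tensors="pt")` returns both `input_ids` and `attention_mask`, so `model.generate(**inputs, pad_token_id=tokenizer.eos_token_id)` silences the warning. Pipelines that pass only raw token IDs to `generate` trigger it.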