Generate does not take into account config.decoder.eos_token_id #14905
Comments
Hmm, yeah I think I'm fine with adding some
Do you want to open a PR for it? :-)
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
I will try to fix it.
I opened a pull request (#15403), but the CI failed. By analysing the CI failure log, I found that there is some hidden logic: if you don't pass an eos_token_id, the model is expected to generate until max_length. That is what the code does.
So simply adding self.config.decoder.eos_token_id is not enough. cc @NielsRogge @patrickvonplaten
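The hidden logic described above can be sketched in plain Python. This is a simplified stand-in for the real generation loop, not the actual transformers implementation; `greedy_generate` and `sample_next_token` are hypothetical names used only for illustration:

```python
def greedy_generate(sample_next_token, eos_token_id=None, max_length=10):
    """Simplified generation loop: stop at eos_token_id if one is given,
    otherwise always run until max_length."""
    tokens = []
    while len(tokens) < max_length:
        token = sample_next_token(tokens)
        tokens.append(token)
        # The hidden logic: when eos_token_id is None, this check never
        # fires, so generation intentionally continues to max_length.
        if eos_token_id is not None and token == eos_token_id:
            break
    return tokens

# A deterministic "sampler" that emits 0, 1, 2, ... in order.
counter = iter(range(100))
print(greedy_generate(lambda toks: next(counter), eos_token_id=3))  # [0, 1, 2, 3]
counter = iter(range(100))
print(len(greedy_generate(lambda toks: next(counter))))             # 10
```

This is why a patch cannot just read `self.config.decoder.eos_token_id` unconditionally: when no eos id is set anywhere, the `None` case must be preserved so the loop still runs to max_length.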
Hi, the above pull request mentioned in passing that this had been fixed. Is that the case, or is this still open as indicated?
Seems like this got fixed, closing the issue. |
As reported by some people (see NielsRogge/Transformers-Tutorials#53 and on the forum), the `generate()` method currently does not take into account `config.decoder.eos_token_id`, only `config.eos_token_id`, to properly stop generation. Hence, models that are made using `EncoderDecoderModel`/`VisionEncoderDecoderModel`/`SpeechEncoderDecoderModel` will not properly stop generation if `config.eos_token_id` is not set.

cc @patrickvonplaten @patil-suraj
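The requested fallback behaviour can be sketched as a small resolver: prefer the top-level eos id, and only fall back to the decoder's. The classes below are minimal stand-ins for illustration, not the real transformers config classes:

```python
class DecoderConfig:
    """Stand-in for a decoder sub-config carrying its own eos_token_id."""
    def __init__(self, eos_token_id=None):
        self.eos_token_id = eos_token_id

class EncoderDecoderConfig:
    """Stand-in for a composite encoder-decoder config."""
    def __init__(self, eos_token_id=None, decoder=None):
        self.eos_token_id = eos_token_id
        self.decoder = decoder

def resolve_eos_token_id(config):
    """Prefer config.eos_token_id; fall back to config.decoder.eos_token_id,
    mirroring the behaviour requested in this issue."""
    if config.eos_token_id is not None:
        return config.eos_token_id
    if config.decoder is not None:
        return config.decoder.eos_token_id
    return None

cfg = EncoderDecoderConfig(decoder=DecoderConfig(eos_token_id=2))
print(resolve_eos_token_id(cfg))  # 2
```

In practice, affected users could also work around the bug by passing `eos_token_id` explicitly when calling `generate()`, which bypasses the config lookup entirely.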