Background
Our current learning rate scheduler has limitations that impact training efficiency and flexibility, especially for large datasets: it only updates the learning rate at epoch boundaries and offers no cyclic schedules.
Describe the solution you'd like
I propose the following enhancements:
1. Cyclic Learning Rate (CLR) Support: add a CLR option to the `train_options/lr_scheduler` field (see the first sketch after this list).
2. Step-wise Learning Rate Updates: add an option to the `train_options/optimizer` field to switch between epoch-wise and step-wise updates (see the second sketch after this list).
3. Unit tests covering the new scheduler options.
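For item 1, here is a minimal sketch of what a cyclic schedule could look like, using PyTorch's built-in `torch.optim.lr_scheduler.CyclicLR`. The model and all hyperparameter values below are illustrative assumptions, not settings from this project's `train_options` schema:

```python
import torch

# Minimal sketch of a cyclic learning rate via PyTorch's CyclicLR.
# The model and every hyperparameter value here are illustrative
# assumptions, not values from this project's config.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CyclicLR(
    optimizer,
    base_lr=1e-4,      # lower bound of the cycle (assumed)
    max_lr=1e-2,       # upper bound of the cycle (assumed)
    step_size_up=500,  # optimizer steps from base_lr up to max_lr (assumed)
    mode="triangular", # constant cycle amplitude
)
```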
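For item 2, continuing the same sketch, a step-wise scheme calls `scheduler.step()` once per optimizer step rather than once per epoch; `CyclicLR` in particular is designed to be stepped per batch. The toy data and loop below are, again, assumptions:

```python
# Toy data standing in for a real data loader (assumed shapes).
data = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(100)]

for epoch in range(3):
    for x, y in data:
        loss = torch.nn.functional.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()  # step-wise: advance the LR every batch;
                          # an epoch-wise scheme would call this once
                          # per epoch, outside the inner loop
```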
Additional Context
I will open a PR as soon as possible!