parameter question #6
I think it is a little tougher to analyze because there are many more vectors than in an embedding. I can't say which metric is best or whether I'm effectively getting the proper values. For example, I rarely see the strength go above 0.01x. Magnitude seems to be closer to the embedding values (a larger magnitude might make it less flexible). I haven't done enough analysis of these numbers to make any estimates, though.
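As a rough illustration of the two statistics being compared, here is a minimal sketch assuming the conventions of the referenced embedding-inspection script: magnitude as the L2 norm of each vector, and strength as its mean absolute value. These definitions are an assumption, not confirmed by the thread.

```python
import numpy as np

def vector_stats(vectors):
    """Per-vector magnitude (L2 norm) and strength (mean absolute value).

    `vectors` is an (n, dim) array, e.g. the rows of an embedding or a
    flattened LoRA weight matrix. Both metric definitions are assumed.
    """
    vectors = np.asarray(vectors, dtype=np.float64)
    magnitude = np.linalg.norm(vectors, axis=1)   # L2 norm per row
    strength = np.abs(vectors).mean(axis=1)       # mean |weight| per row
    return magnitude, strength

# Example with two small vectors:
mags, strengths = vector_stats([[0.3, -0.4], [0.01, 0.02]])
# mags[0] is 0.5; strengths[0] is 0.35
```

With many LoRA weight vectors instead of one embedding, you would typically look at the distribution of these values (mean, max) rather than individual rows.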
OK, what is UNet vs Text Encoder in the output?
@rockerBOO |
Referencing this repo is fine. Regarding your UNet vs Text Encoder question: the model is made up of different neural networks that work together to turn text into images (they should create similar embeddings). The UNet handles the pixel/image part of generation, and the Text Encoder produces the embeddings to draw with. You can train only the UNet, only the Text Encoder, or both. I separated them so it would be clear, and also to show any cases where the Text Encoder or UNet has much higher magnitudes.
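Separating the two groups can be sketched by bucketing parameter names by prefix. The `lora_unet_` / `lora_te` prefixes below follow the common kohya-style LoRA naming convention; this is an assumption about the file layout, and the example keys are hypothetical.

```python
def split_lora_keys(state_dict_keys):
    """Group LoRA parameter names into UNet and Text Encoder buckets,
    assuming kohya-style prefixes ("lora_unet_" / "lora_te")."""
    groups = {"unet": [], "text_encoder": [], "other": []}
    for key in state_dict_keys:
        if key.startswith("lora_unet_"):
            groups["unet"].append(key)
        elif key.startswith("lora_te"):
            groups["text_encoder"].append(key)
        else:
            groups["other"].append(key)
    return groups

# Hypothetical keys, just to show the grouping:
keys = ["lora_unet_down_blocks_0.lora_down.weight",
        "lora_te_text_model_encoder_layers_0.lora_up.weight"]
groups = split_lora_keys(keys)
```

Once split, the magnitude/strength statistics can be reported per group, which is what makes an unusually "hot" Text Encoder or UNet visible.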
Thank you for your response. |
Since you referenced Zyin055's "Inspect-Embedding-Training" repo: which output parameter shows me that the strength is too high (overtrained / inflexible)?
I use his script for embeddings with great success.
Forgive me if I sound like a noob, but which value am I looking for? On what parameter?
For the referenced embedding script, we look for the strength to stay under 0.2.
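That rule of thumb can be expressed as a small check. This is a sketch under the same assumption as above, that strength is the mean absolute value of the embedding weights; the 0.2 cutoff is a heuristic from the thread, not a hard rule.

```python
import numpy as np

def strength(embedding):
    """Strength as the mean absolute value of all weights (assumed metric)."""
    return float(np.abs(np.asarray(embedding, dtype=np.float64)).mean())

def looks_overtrained(embedding, threshold=0.2):
    """Flag an embedding whose strength exceeds the ~0.2 rule of thumb."""
    return strength(embedding) > threshold

# Example: mean |weight| here is 0.15, so this would not be flagged.
print(looks_overtrained([[0.1, -0.15], [0.05, 0.3]]))
```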