
parameter question #6

Open
robertJene opened this issue Jun 29, 2023 · 5 comments

Comments

@robertJene

Since you referenced Zyin055's repo, "Inspect-Embedding-Training"

Which output parameter shows me that the strength is too high (overtrained / inflexible)?

I use his script for embeddings with great success.

Forgive me if I sound like a noob, but which value am I looking for? On what parameter?

For the referenced embedding script, we watch for the strength not to go above 0.2.

@rockerBOO
Owner

I think it is a little tougher to analyze because a LoRA has many more vectors than an embedding does. I can't say which is best, or whether I am effectively getting the proper number.

For example, I rarely see the strength above 0.01x. Magnitude seems to be closer to the embedding values (larger magnitude might make it less flexible). I haven't done enough analysis of these numbers to make any estimates, though.
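To make the two numbers being compared here concrete, this is a minimal sketch of how per-layer "magnitude" and "strength" are commonly defined for weight inspection: magnitude as the Frobenius norm of the layer's weights and strength as their mean absolute value. The exact definitions in the script may differ; this is an assumption for illustration, using a toy flattened weight vector.

```python
import math

def magnitude(weights):
    # Frobenius norm: square root of the sum of squared elements.
    return math.sqrt(sum(w * w for w in weights))

def strength(weights):
    # Mean absolute value of the elements.
    return sum(abs(w) for w in weights) / len(weights)

# Toy flattened weight vector standing in for one LoRA layer.
layer = [0.004, -0.012, 0.007, -0.003, 0.009]
print(f"magnitude: {magnitude(layer):.4f}")
print(f"strength:  {strength(layer):.4f}")
```

On these definitions, a layer can have a small strength (many tiny values averaging out) while still having a sizable magnitude, which is consistent with the strength staying near 0.01 while magnitude tracks closer to embedding-like values.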

@robertJene
Author

robertJene commented Jul 3, 2023

OK, what is UNet vs Text Encoder in the output?
EDIT: I am working on a YouTube video and I will be referencing your script because it's the only one so far

@robertJene
Author

@rockerBOO
I'm going to be doing a video about LoRAs and will be using your Python script.
Do you have a twitter/instagram/facebook/YouTube for me to plug, or do you just want me to use this repo?

@rockerBOO
Owner

Referencing this repo is fine.

Regarding your previous question about UNet vs text encoder: the model is made up of different neural networks that are used together to turn text into images (they should create similar embeddings). The UNet handles the pixel/image part of the generation, and the text encoder produces the embeddings to draw with.

You can train only the UNet and only the Text Encoder, or both. I separated them, so it would be clear and also to show any cases where the Text Encoder or UNet has much higher magnitudes.
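The separation described above can be done by grouping a LoRA file's layer keys by prefix. This sketch assumes the common kohya-ss naming convention, where UNet keys start with `lora_unet_` and text-encoder keys with `lora_te_`; adjust the prefixes if your files differ.

```python
def split_by_network(keys):
    # Group LoRA state-dict keys into UNet vs Text Encoder buckets
    # based on the (assumed) kohya-ss key prefixes.
    groups = {"unet": [], "text_encoder": [], "other": []}
    for key in keys:
        if key.startswith("lora_unet_"):
            groups["unet"].append(key)
        elif key.startswith("lora_te_"):
            groups["text_encoder"].append(key)
        else:
            groups["other"].append(key)
    return groups

keys = [
    "lora_unet_down_blocks_0_attentions_0.lora_down.weight",
    "lora_te_text_model_encoder_layers_0_mlp.lora_up.weight",
]
print(split_by_network(keys))
```

Once split, any per-layer statistic (such as magnitude) can be summarized per group, which is what makes a text encoder with unusually high magnitudes stand out.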

@robertJene
Author

Thank you for your response.
What you say makes sense, because this week I was studying LoHa and LoCon (LyCORIS), which train both the UNet and the text encoder.
