Out of Memory Error #1
@chrischen Hi, thank you for your interest in our work.
Do you know the minimum memory requirement? Would a 32GB RTX 5090 work? I am testing on consumer-level setups, so an A100 is difficult to access.
On my machine, style transfer at a resolution of 1024 requires at least 30.67 GB of GPU memory.
Lowering the resolution didn't help much. Even going down to 64px still went over 24 GB.
@chrischen Well done. Thank you for sharing your experience with saving GPU memory.
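For anyone hitting the same limit, here is a minimal sketch of the standard diffusers memory-saving options (fp16 weights, model CPU offload, VAE slicing/tiling). Whether this repo's custom pipeline tolerates these toggles is an assumption, and the checkpoint name below is just the stock SDXL refiner used as a placeholder:

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline

# Load weights in half precision to roughly halve the model's footprint.
# The checkpoint here is the stock SDXL refiner, used only as a stand-in.
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
)

# Keep only the submodule that is currently executing on the GPU;
# everything else is parked in CPU RAM between steps.
pipe.enable_model_cpu_offload()

# Decode latents in slices/tiles so the VAE never needs one huge allocation.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()
```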
I ran it with a 512x512 style and content image and always get an out-of-memory error on the GPU, regardless of resolution.
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 MiB. GPU 0 has a total capacity of 23.68 GiB of which 4.94 MiB is free. Including non-PyTorch memory, this process has 23.67 GiB memory in use. Of the allocated memory 22.71 GiB is allocated by PyTorch, and 662.65 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
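The allocator hint in that traceback can be set before PyTorch initializes CUDA; it only reduces fragmentation (the "reserved but unallocated" portion), not the absolute amount the model needs, but it is cheap to try:

```python
import os

# Must be set before torch touches CUDA; mitigates fragmentation,
# does not lower the model's actual memory requirement.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch  # noqa: E402
```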
It seems to use about 18 GB before the StableDiffusionXLImg2ImgPipeline stage.
I have an RTX 3090 with 24 GB of memory and can run projects like InstantStyle just fine.
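To confirm where that ~18 GB is going before the Img2Img stage, peak allocations can be printed around each step. This is plain PyTorch instrumentation, not something the repo necessarily exposes, and the call sites in the comments are hypothetical:

```python
import torch

def report(tag: str) -> None:
    # Current and peak CUDA allocations in GiB, as tracked by PyTorch.
    allocated = torch.cuda.memory_allocated() / 1024**3
    peak = torch.cuda.max_memory_allocated() / 1024**3
    print(f"[{tag}] allocated={allocated:.2f} GiB, peak={peak:.2f} GiB")

# report("after feature extraction")
# ... run the StableDiffusionXLImg2ImgPipeline step ...
# report("after img2img")
```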