I'm attempting to use flux-webui but can't get past the error below, and some of the files also don't seem to download all the way. Any tips would be appreciated, or if it's a bug I'd love to know. I've tried twice with the same result.
The default interactive shell is now zsh. To update your account to use zsh, please run `chsh -s /bin/zsh`.
For more details, please visit https://support.apple.com/kb/HT208050.
<<PINOKIO_SHELL>>eval "$(conda shell.bash hook)" && conda deactivate && conda deactivate && conda deactivate && conda activate base && source /Users/ghost/pinokio/api/flux-webui.git/env/bin/activate /Users/ghost/pinokio/api/flux-webui.git/env && python app.py
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling transformers.utils.move_cache().
0it [00:00, ?it/s]
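(Side note: if that one-time cache migration ever gets interrupted, the message above says it can be resumed by hand; a minimal sketch:)

```python
# Resume the Transformers cache migration manually,
# i.e. the call named in the log message above.
from transformers.utils import move_cache

move_cache()
```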
/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/gradio/utils.py:1002: UserWarning: Expected 2 arguments for function <function update_slider at 0x150bdf9a0>, received 1.
warnings.warn(
/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/gradio/utils.py:1006: UserWarning: Expected at least 2 arguments for function <function update_slider at 0x150bdf9a0>, received 1.
warnings.warn(
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
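(That public-link hint refers to Gradio's launch() parameter; a minimal sketch, assuming app.py builds a Blocks object named demo, which is a guess on my part:)

```python
# Hypothetical: expose the Gradio UI through a public share link.
# `demo` stands in for whatever Blocks/Interface object app.py creates.
demo.launch(share=True)
```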
config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 434/434 [00:00<00:00, 3.29MB/s]
.gitattributes: 100%|██████████████████████████████████████████████████████████████████████████████████| 1.52k/1.52k [00:00<00:00, 12.4MB/s]
README.md: 100%|██████████████████████████████████████████████████████████████████████████████████████████| 31.0/31.0 [00:00<00:00, 196kB/s]
quanto_qmap.json: 100%|█████████████████████████████████████████████████████████████████████████████████| 57.8k/57.8k [00:00<00:00, 704kB/s]
(…)ion_pytorch_model.safetensors.index.json: 100%|███████████████████████████████████████████████████████| 294k/294k [00:00<00:00, 2.13MB/s]
(…)ion_pytorch_model.safetensors.index.json: 100%|███████████████████████████████████████████████████████| 294k/294k [00:00<00:00, 2.14MB/s]
(…)pytorch_model-00002-of-00002.safetensors: 100%|█████████████████████████████████████████████████████| 1.94G/1.94G [06:08<00:00, 5.25MB/s]
(…)pytorch_model-00001-of-00002.safetensors: 100%|█████████████████████████████████████████████████████| 9.96G/9.96G [18:10<00:00, 9.14MB/s]
moving device to mps
initializing pipeline...
text_encoder/config.json: 100%|█████████████████████████████████████████████████████████████████████████████| 613/613 [00:00<00:00, 458kB/s]
scheduler/scheduler_config.json: 100%|█████████████████████████████████████████████████████████████████████| 141/141 [00:00<00:00, 1.03MB/s]
text_encoder_2/config.json: 100%|██████████████████████████████████████████████████████████████████████████| 782/782 [00:00<00:00, 9.88MB/s]
(…)t_encoder_2/model.safetensors.index.json: 100%|█████████████████████████████████████████████████████| 19.9k/19.9k [00:00<00:00, 6.98MB/s]
tokenizer/tokenizer_config.json: 100%|██████████████████████████████████████████████████████████████████████| 705/705 [00:00<00:00, 582kB/s]
tokenizer/special_tokens_map.json: 100%|███████████████████████████████████████████████████████████████████| 588/588 [00:00<00:00, 6.93MB/s]
tokenizer_2/special_tokens_map.json: 100%|█████████████████████████████████████████████████████████████| 2.54k/2.54k [00:00<00:00, 25.2MB/s]
tokenizer/merges.txt: 100%|██████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 1.27MB/s]
tokenizer_2/tokenizer_config.json: 100%|███████████████████████████████████████████████████████████████| 20.8k/20.8k [00:00<00:00, 37.9MB/s]
vae/config.json: 100%|█████████████████████████████████████████████████████████████████████████████████████| 774/774 [00:00<00:00, 2.21MB/s]
tokenizer/vocab.json: 100%|█████████████████████████████████████████████████████████████████████████████| 1.06M/1.06M [00:01<00:00, 794kB/s]
spiece.model: 100%|███████████████████████████████████████████████████████████████████████████████████████| 792k/792k [00:01<00:00, 681kB/s]
tokenizer_2/tokenizer.json: 100%|███████████████████████████████████████████████████████████████████████| 2.42M/2.42M [00:02<00:00, 935kB/s]
model.safetensors: 100%|█████████████████████████████████████████████████████████████████████████████████| 246M/246M [01:23<00:00, 2.95MB/s]
diffusion_pytorch_model.safetensors: 100%|███████████████████████████████████████████████████████████████| 168M/168M [01:32<00:00, 1.81MB/s]
model-00001-of-00002.safetensors: 4%|██▍ | 189M/4.99G [01:19<28:42, 2.79MB/s]
model-00001-of-00002.safetensors: 5%|███ | 231M/4.99G [01:32<23:19, 3.40MB/s]
model-00002-of-00002.safetensors: 100%|████████████████████████████████████████████████████████████████| 4.53G/4.53G [16:30<00:00, 4.57MB/s]
model-00001-of-00002.safetensors: 100%|████████████████████████████████████████████████████████████████| 4.99G/4.99G [18:01<00:00, 4.62MB/s]
You set `add_prefix_space`. The tokenizer needs to be converted from the slow tokenizers
initialized!
Started the inference. Wait...
Traceback (most recent call last):
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/gradio/queueing.py", line 536, in process_events
response = await route_utils.call_process_api(
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/gradio/route_utils.py", line 322, in call_process_api
output = await app.get_blocks().process_api(
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/gradio/blocks.py", line 1935, in process_api
result = await self.call_function(
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/gradio/blocks.py", line 1520, in call_function
prediction = await anyio.to_thread.run_sync( # type: ignore
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
result = context.run(func, *args)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/gradio/utils.py", line 826, in wrapper
response = f(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/app.py", line 87, in infer
images = pipe(
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux.py", line 635, in call
) = self.encode_prompt(
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux.py", line 349, in encode_prompt
prompt_embeds = self._get_t5_prompt_embeds(
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/diffusers/pipelines/flux/pipeline_flux.py", line 234, in _get_t5_prompt_embeds
prompt_embeds = self.text_encoder_2(text_input_ids.to(device), output_hidden_states=False)[0]
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1971, in forward
encoder_outputs = self.encoder(
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 1106, in forward
layer_outputs = layer_module(
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 686, in forward
self_attention_outputs = self.layer[0](
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 592, in forward
normed_hidden_states = self.layer_norm(hidden_states)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/ghost/pinokio/api/flux-webui.git/env/lib/python3.10/site-packages/transformers/models/t5/modeling_t5.py", line 245, in forward
variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
RuntimeError: MPS backend out of memory (MPS allocated: 20.40 GB, other allocations: 2.75 MB, max allowed: 20.40 GB). Tried to allocate 256 bytes on shared pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
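In case it's useful to anyone hitting the same wall: the RuntimeError itself names the PYTORCH_MPS_HIGH_WATERMARK_RATIO escape hatch, and diffusers ships generic memory-saving toggles. A minimal sketch of both, assuming the app loads diffusers' Flux pipeline (the model id below is a stand-in; app.py may load a different, e.g. quanto-quantized, checkpoint):

```python
import os

# Per the RuntimeError above: lift the MPS high-watermark cap.
# Must be set before torch initializes MPS; the error message warns
# this can destabilize the whole system, so treat it as a last resort.
os.environ["PYTORCH_MPS_HIGH_WATERMARK_RATIO"] = "0.0"

import torch
from diffusers import FluxPipeline

# Hypothetical load; swap in whatever checkpoint app.py actually uses.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    torch_dtype=torch.bfloat16,
)
pipe.to("mps")

# Generic diffusers memory saver: computes attention in slices,
# trading some speed for a lower peak allocation.
pipe.enable_attention_slicing()
```

Given that the weights downloaded here total well over the ~20 GB MPS budget, offloading parts of the pipeline to the CPU (diffusers' enable_model_cpu_offload() or enable_sequential_cpu_offload()) may be the more robust route if the watermark override alone isn't enough.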