"Accelerate with OpenVINO" option is not present in dropdown menu.
Looking at the first startup log I can see the error (see below for full listing):
*** Error loading script: openvino_accelerate.py
Traceback (most recent call last):
File "/home/mcon/prove/LLaMa/stable-diffusion-webui/modules/scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/home/mcon/prove/LLaMa/stable-diffusion-webui/modules/script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/mcon/prove/LLaMa/stable-diffusion-webui/scripts/openvino_accelerate.py", line 34, in <module>
from openvino.frontend.pytorch.torchdynamo import backend # noqa: F401
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/openvino/frontend/pytorch/torchdynamo/backend.py", line 15, in <module>
from torch._inductor.compile_fx import compile_fx
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 21, in <module>
from . import config, metrics, overrides, pattern_matcher
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 18, in <module>
from . import config, ir
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/ir.py", line 29, in <module>
from . import config, dependencies
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/dependencies.py", line 10, in <module>
from .codegen.common import index_prevent_reordering
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/codegen/common.py", line 13, in <module>
from ..utils import (
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/utils.py", line 32, in <module>
from triton.testing import do_bench
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/__init__.py", line 20, in <module>
from .runtime import (
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/runtime/__init__.py", line 1, in <module>
from .autotuner import Config, Heuristics, autotune, heuristics
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 7, in <module>
from ..compiler import OutOfResources
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/compiler.py", line 1888, in <module>
@static_vars(amdgcn_bitcode_paths = _get_amdgcn_bitcode_paths())
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/compiler.py", line 1867, in _get_amdgcn_bitcode_paths
gfx_arch = _get_amdgcn_bitcode_paths.discovered_gfx_arch_fulldetails[1]
TypeError: 'NoneType' object is not subscriptable
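If I read the traceback correctly, the failure happens while triton is being imported: a module-level decorator runs a helper at import time, and that helper subscripts a value that is None on machines where no AMD gfx architecture was discovered. The following is only a minimal sketch of that pattern (the names mirror triton/compiler.py, but this is paraphrased, not the real source):

```python
# Paraphrased sketch of the failing pattern from the traceback above.
# NOT the actual triton source; names are copied from the log for clarity.

def static_vars(**kwargs):
    # Attach attributes to the decorated function, as triton's decorator does.
    def wrapper(fn):
        for key, value in kwargs.items():
            setattr(fn, key, value)
        return fn
    return wrapper

def _get_amdgcn_bitcode_paths():
    # On a box with no usable ROCm GPU, the arch probe yields None ...
    discovered_gfx_arch_fulldetails = None
    # ... and subscripting None is exactly the TypeError in the log:
    gfx_arch = discovered_gfx_arch_fulldetails[1]
    return gfx_arch

try:
    # The decorator argument is evaluated at import time, so the whole
    # module import blows up before any GPU code is ever run.
    @static_vars(amdgcn_bitcode_paths=_get_amdgcn_bitcode_paths())
    def compile_stub():
        pass
except TypeError as exc:
    error_message = str(exc)

print(error_message)  # 'NoneType' object is not subscriptable
```

So the error is raised before OpenVINO itself gets a chance to enumerate any devices; it comes from the ROCm build of triton that the torch wheel pulled in.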
I gather from this that OpenVINO does not recognize my CPU/integrated GPU, but I have no idea how to help it.
I should see the "Accelerate with OpenVINO" option.
This is the full log to date; the error is the same as above.
I started `webui.sh`, played a bit with the interface, generated a test image (which was done on the CPU), and then restarted the GUI to see whether anything had changed.
mcon@cinderella:~/prove/LLaMa$ . sd_venv/bin/activate
(sd_venv) mcon@cinderella:~/prove/LLaMa$ cd stable-diffusion-webui/
(sd_venv) mcon@cinderella:~/prove/LLaMa/stable-diffusion-webui$ echo $PYTORCH_TRACING_MODE
(sd_venv) mcon@cinderella:~/prove/LLaMa/stable-diffusion-webui$ export PYTORCH_TRACING_MODE=TORCHFX
(sd_venv) mcon@cinderella:~/prove/LLaMa/stable-diffusion-webui$ export COMMANDLINE_ARGS="--skip-torch-cuda-test --precision full --no-half"
(sd_venv) mcon@cinderella:~/prove/LLaMa/stable-diffusion-webui$ ./webui.sh
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################
################################################################
Running on mcon user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
python venv already activate or run without venv: /home/mcon/prove/LLaMa/sd_venv
################################################################
################################################################
Launching launch.py...
################################################################
Cannot locate TCMalloc (improves CPU memory usage)
fatal: No names found, cannot describe anything.
Python 3.10.6 (main, Feb 1 2025, 19:14:22) [GCC 14.2.0]
Version: 1.6.0
Commit hash: e5a634da06c62d72dbdc764b16c65ef3408aa588
Installing torch and torchvision
Looking in indexes: https://download.pytorch.org/whl/rocm5.4.2
Collecting torch==2.0.1+rocm5.4.2
Downloading https://download.pytorch.org/whl/rocm5.4.2/torch-2.0.1%2Brocm5.4.2-cp310-cp310-linux_x86_64.whl (1536.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.5/1.5 GB 698.6 kB/s eta 0:00:00
Collecting torchvision==0.15.2+rocm5.4.2
Downloading https://download.pytorch.org/whl/rocm5.4.2/torchvision-0.15.2%2Brocm5.4.2-cp310-cp310-linux_x86_64.whl (62.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.4/62.4 MB 17.3 MB/s eta 0:00:00
Collecting typing-extensions
Downloading https://download.pytorch.org/whl/typing_extensions-4.12.2-py3-none-any.whl (37 kB)
Collecting pytorch-triton-rocm<2.1,>=2.0.0
Downloading https://download.pytorch.org/whl/pytorch_triton_rocm-2.0.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (78.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 78.4/78.4 MB 16.3 MB/s eta 0:00:00
Collecting networkx
Downloading https://download.pytorch.org/whl/networkx-3.3-py3-none-any.whl (1.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.7/1.7 MB 19.3 MB/s eta 0:00:00
Collecting filelock
Downloading https://download.pytorch.org/whl/filelock-3.13.1-py3-none-any.whl (11 kB)
Collecting sympy
Downloading https://download.pytorch.org/whl/sympy-1.13.1-py3-none-any.whl (6.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.2/6.2 MB 26.4 MB/s eta 0:00:00
Collecting jinja2
Downloading https://download.pytorch.org/whl/Jinja2-3.1.4-py3-none-any.whl (133 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 133.3/133.3 kB 3.3 MB/s eta 0:00:00
Collecting pillow!=8.3.*,>=5.3.0
Downloading https://download.pytorch.org/whl/pillow-11.0.0-cp310-cp310-manylinux_2_28_x86_64.whl (4.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.4/4.4 MB 27.1 MB/s eta 0:00:00
Collecting numpy
Downloading https://download.pytorch.org/whl/numpy-2.1.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.3/16.3 MB 19.7 MB/s eta 0:00:00
Collecting requests
Downloading https://download.pytorch.org/whl/requests-2.28.1-py3-none-any.whl (62 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 62.8/62.8 kB 2.1 MB/s eta 0:00:00
Collecting cmake
Downloading https://download.pytorch.org/whl/cmake-3.25.0-py2.py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.7 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.7/23.7 MB 32.1 MB/s eta 0:00:00
Collecting lit
Downloading https://download.pytorch.org/whl/lit-15.0.7.tar.gz (132 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 132.3/132.3 kB 726.1 kB/s eta 0:00:00
Preparing metadata (setup.py) ... done
Collecting MarkupSafe>=2.0
Downloading https://download.pytorch.org/whl/MarkupSafe-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Collecting certifi>=2017.4.17
Downloading https://download.pytorch.org/whl/certifi-2022.12.7-py3-none-any.whl (155 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 155.3/155.3 kB 3.7 MB/s eta 0:00:00
Collecting urllib3<1.27,>=1.21.1
Downloading https://download.pytorch.org/whl/urllib3-1.26.13-py2.py3-none-any.whl (140 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 140.6/140.6 kB 2.7 MB/s eta 0:00:00
Collecting idna<4,>=2.5
Downloading https://download.pytorch.org/whl/idna-3.4-py3-none-any.whl (61 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 61.5/61.5 kB 1.7 MB/s eta 0:00:00
Collecting charset-normalizer<3,>=2
Downloading https://download.pytorch.org/whl/charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting mpmath<1.4,>=1.1.0
Downloading https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 536.2/536.2 kB 9.7 MB/s eta 0:00:00
Using legacy 'setup.py install' for lit, since package 'wheel' is not installed.
Installing collected packages: mpmath, lit, cmake, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, filelock, charset-normalizer, certifi, requests, jinja2, pytorch-triton-rocm, torch, torchvision
Running setup.py install for lit ... done
Successfully installed MarkupSafe-2.1.5 certifi-2022.12.7 charset-normalizer-2.1.1 cmake-3.25.0 filelock-3.13.1 idna-3.4 jinja2-3.1.4 lit-15.0.7 mpmath-1.3.0 networkx-3.3 numpy-2.1.2 pillow-11.0.0 pytorch-triton-rocm-2.0.1 requests-2.28.1 sympy-1.13.1 torch-2.0.1+rocm5.4.2 torchvision-0.15.2+rocm5.4.2 typing-extensions-4.12.2 urllib3-1.26.13
WARNING: There was an error checking the latest version of pip.
Installing clip
Installing open_clip
Cloning Stable Diffusion into /home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/stable-diffusion-stability-ai...
Cloning into '/home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/stable-diffusion-stability-ai'...
remote: Enumerating objects: 580, done.
remote: Counting objects: 100% (2/2), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 580 (delta 0), reused 0 (delta 0), pack-reused 578 (from 2)
Receiving objects: 100% (580/580), 73.44 MiB | 39.85 MiB/s, done.
Resolving deltas: 100% (283/283), done.
Cloning Stable Diffusion XL into /home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/generative-models...
Cloning into '/home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/generative-models'...
remote: Enumerating objects: 1064, done.
remote: Counting objects: 100% (477/477), done.
remote: Compressing objects: 100% (124/124), done.
remote: Total 1064 (delta 376), reused 353 (delta 353), pack-reused 587 (from 1)
Receiving objects: 100% (1064/1064), 53.60 MiB | 38.22 MiB/s, done.
Resolving deltas: 100% (562/562), done.
Cloning K-diffusion into /home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/k-diffusion...
Cloning into '/home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/k-diffusion'...
remote: Enumerating objects: 1350, done.
remote: Counting objects: 100% (1350/1350), done.
remote: Compressing objects: 100% (444/444), done.
remote: Total 1350 (delta 951), reused 1254 (delta 899), pack-reused 0 (from 0)
Receiving objects: 100% (1350/1350), 233.36 KiB | 1.91 MiB/s, done.
Resolving deltas: 100% (951/951), done.
Cloning CodeFormer into /home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/CodeFormer...
Cloning into '/home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/CodeFormer'...
remote: Enumerating objects: 614, done.
remote: Counting objects: 100% (297/297), done.
remote: Compressing objects: 100% (114/114), done.
remote: Total 614 (delta 208), reused 183 (delta 183), pack-reused 317 (from 3)
Receiving objects: 100% (614/614), 17.31 MiB | 23.30 MiB/s, done.
Resolving deltas: 100% (296/296), done.
Cloning BLIP into /home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/BLIP...
Cloning into '/home/mcon/prove/LLaMa/stable-diffusion-webui/repositories/BLIP'...
remote: Enumerating objects: 277, done.
remote: Counting objects: 100% (183/183), done.
remote: Compressing objects: 100% (46/46), done.
remote: Total 277 (delta 145), reused 137 (delta 137), pack-reused 94 (from 1)
Receiving objects: 100% (277/277), 7.04 MiB | 18.66 MiB/s, done.
Resolving deltas: 100% (152/152), done.
Installing requirements for CodeFormer
Installing requirements
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'No HIP GPUs are available', memory monitor disabled
Downloading: "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" to /home/mcon/prove/LLaMa/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 3.97G/3.97G [00:40<00:00, 105MB/s]
*** Error loading script: openvino_accelerate.py
Traceback (most recent call last):
File "/home/mcon/prove/LLaMa/stable-diffusion-webui/modules/scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/home/mcon/prove/LLaMa/stable-diffusion-webui/modules/script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/mcon/prove/LLaMa/stable-diffusion-webui/scripts/openvino_accelerate.py", line 34, in <module>
from openvino.frontend.pytorch.torchdynamo import backend # noqa: F401
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/openvino/frontend/pytorch/torchdynamo/backend.py", line 15, in <module>
from torch._inductor.compile_fx import compile_fx
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 21, in <module>
from . import config, metrics, overrides, pattern_matcher
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 18, in <module>
from . import config, ir
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/ir.py", line 29, in <module>
from . import config, dependencies
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/dependencies.py", line 10, in <module>
from .codegen.common import index_prevent_reordering
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/codegen/common.py", line 13, in <module>
from ..utils import (
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/utils.py", line 32, in <module>
from triton.testing import do_bench
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/__init__.py", line 20, in <module>
from .runtime import (
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/runtime/__init__.py", line 1, in <module>
from .autotuner import Config, Heuristics, autotune, heuristics
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 7, in <module>
from ..compiler import OutOfResources
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/compiler.py", line 1888, in <module>
@static_vars(amdgcn_bitcode_paths = _get_amdgcn_bitcode_paths())
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/compiler.py", line 1867, in _get_amdgcn_bitcode_paths
gfx_arch = _get_amdgcn_bitcode_paths.discovered_gfx_arch_fulldetails[1]
TypeError: 'NoneType' object is not subscriptable
---
Calculating sha256 for /home/mcon/prove/LLaMa/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors: Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 346.9s (prepare environment: 294.6s, import torch: 5.6s, import gradio: 0.6s, setup paths: 0.6s, other imports: 0.5s, list SD models: 41.3s, load scripts: 1.7s, create ui: 0.9s, gradio launch: 0.9s).
6ce0161689b3853acaa03779ec93eafe75a02f4ced659bee03f50797806fa2fa
Loading weights [6ce0161689] from /home/mcon/prove/LLaMa/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /home/mcon/prove/LLaMa/stable-diffusion-webui/configs/v1-inference.yaml
/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/huggingface_hub/file_download.py:795: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
warnings.warn(
vocab.json: 100%|██████████████████████████████████████████████████████████████████████████████████████| 961k/961k [00:00<00:00, 3.05MB/s]
merges.txt: 100%|██████████████████████████████████████████████████████████████████████████████████████| 525k/525k [00:00<00:00, 37.6MB/s]
special_tokens_map.json: 100%|███████████████████████████████████████████████████████████████████████████| 389/389 [00:00<00:00, 5.18MB/s]
tokenizer_config.json: 100%|█████████████████████████████████████████████████████████████████████████████| 905/905 [00:00<00:00, 3.19MB/s]
config.json: 100%|███████████████████████████████████████████████████████████████████████████████████| 4.52k/4.52k [00:00<00:00, 10.6MB/s]
Applying attention optimization: InvokeAI... done.
Model loaded in 23.9s (calculate hash: 10.6s, load weights from disk: 0.2s, create model: 2.9s, apply weights to model: 10.1s).
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [02:28<00:00, 7.45s/it]
Total progress: 100%|█████████████████████████████████████████████████████████████████████████████████████| 20/20 [02:26<00:00, 7.34s/it]
Restarting UI...100%|█████████████████████████████████████████████████████████████████████████████████████| 20/20 [02:26<00:00, 7.02s/it]
Closing server running on port: 7860
*** Error loading script: openvino_accelerate.py
Traceback (most recent call last):
File "/home/mcon/prove/LLaMa/stable-diffusion-webui/modules/scripts.py", line 382, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/home/mcon/prove/LLaMa/stable-diffusion-webui/modules/script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/mcon/prove/LLaMa/stable-diffusion-webui/scripts/openvino_accelerate.py", line 34, in <module>
from openvino.frontend.pytorch.torchdynamo import backend # noqa: F401
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/openvino/frontend/pytorch/torchdynamo/backend.py", line 15, in <module>
from torch._inductor.compile_fx import compile_fx
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/compile_fx.py", line 21, in <module>
from . import config, metrics, overrides, pattern_matcher
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/pattern_matcher.py", line 18, in <module>
from . import config, ir
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/ir.py", line 29, in <module>
from . import config, dependencies
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/dependencies.py", line 10, in <module>
from .codegen.common import index_prevent_reordering
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/codegen/common.py", line 13, in <module>
from ..utils import (
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/torch/_inductor/utils.py", line 32, in <module>
from triton.testing import do_bench
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/__init__.py", line 20, in <module>
from .runtime import (
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/runtime/__init__.py", line 1, in <module>
from .autotuner import Config, Heuristics, autotune, heuristics
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/runtime/autotuner.py", line 7, in <module>
from ..compiler import OutOfResources
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/compiler.py", line 1888, in <module>
@static_vars(amdgcn_bitcode_paths = _get_amdgcn_bitcode_paths())
File "/home/mcon/prove/LLaMa/sd_venv/lib/python3.10/site-packages/triton/compiler.py", line 1867, in _get_amdgcn_bitcode_paths
gfx_arch = _get_amdgcn_bitcode_paths.discovered_gfx_arch_fulldetails[1]
TypeError: 'NoneType' object is not subscriptable
---
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 0.8s (load scripts: 0.2s, create ui: 0.5s).
Is there an existing issue for this?

What happened?
The "Accelerate with OpenVINO" option is not present in the dropdown menu; the first startup log shows the error (see the full listing above). I gather OpenVINO does not recognize my CPU/integrated GPU, but I have no idea how to help it.

Steps to reproduce the problem
1. Start the `webui.sh` script.
2. Look for "Accelerate with OpenVINO" in the Script combo box.

What should have happened?
I should see the "Accelerate with OpenVINO" option in the combo box.
Sysinfo
sysinfo-2025-02-01-20-27.txt
What browsers do you use to access the UI ?
Mozilla Firefox
Console logs
See the full log above.

Additional information
On top of what is shown by sysinfo, I also have a discrete GPU (it is actually an RX 580X with 8 GB VRAM, if it matters). Could the presence of the discrete GPU be "confusing" OpenVINO?
sysinfo also seems to have problems detecting my CPU type; this is what my system reports: