
ChatTTS fails to run after downloading, while all other models start fine #2780

Open
1 of 3 tasks
cq134cq opened this issue Jan 23, 2025 · 4 comments

cq134cq commented Jan 23, 2025

System Info / 系統信息

C:\soft\xxx>conda env config vars set XINFERENCE_HOME=c:\soft\models

C:\soft\xxx>conda activate c:\soft\xxx

(c:\soft\xxx) C:\soft\xxx>xinference-local --host 10.0.100.57 --port 9997

Running Xinference with Docker?

  • docker
  • pip install
  • installation from source

Version info / 版本信息

1.2.0

The command used to start Xinference / 用以启动 xinference 的命令

c:\soft\xxx>conda env config vars set XINFERENCE_HOME=c:\soft\models

C:\soft\xxx>conda activate c:\soft\xxx

(c:\soft\xxx) C:\soft\xxx>xinference-local --host 10.0.100.57 --port 9997
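One thing worth verifying before launching: `conda env config vars set` only takes effect after the environment is (re)activated, so it helps to confirm that `XINFERENCE_HOME` is actually visible to the process. A minimal cross-platform check (the variable name comes from the commands above; the fallback string is just illustrative):

```python
import os

# Confirm XINFERENCE_HOME reached the current process; xinference derives
# its cache directory (e.g. c:\soft\models\cache) from this variable.
home = os.environ.get("XINFERENCE_HOME")
print(home if home else "<not set>")
```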

Reproduction / 复现过程

(c:\soft\xxx) C:\soft\xxx>xinference-local --host 10.0.100.57 --port 9997
2025-01-23 17:11:12,381 xinference.core.supervisor 6008 INFO Xinference supervisor 10.0.100.57:17503 started
2025-01-23 17:11:12,403 xinference.core.worker 6008 INFO Starting metrics export server at 10.0.100.57:None
2025-01-23 17:11:12,404 xinference.core.worker 6008 INFO Checking metrics export server...
2025-01-23 17:11:14,001 xinference.core.worker 6008 INFO Metrics server is started at: http://10.0.100.57:58705
2025-01-23 17:11:14,001 xinference.core.worker 6008 INFO Purge cache directory: c:\soft\models\cache
2025-01-23 17:11:14,003 xinference.core.worker 6008 INFO Connected to supervisor as a fresh worker
2025-01-23 17:11:14,023 xinference.core.worker 6008 INFO Xinference worker 10.0.100.57:17503 started
2025-01-23 17:11:15,021 xinference.api.restful_api 11396 INFO Starting Xinference at endpoint: http://10.0.100.57:9997
2025-01-23 17:11:15,110 uvicorn.error 11396 INFO Uvicorn running on http://10.0.100.57:9997 (Press CTRL+C to quit)
2025-01-23 17:11:27,671 xinference.core.worker 6008 INFO [request 06dd7fe1-d96a-11ef-9c9f-047c1678e4a5] Enter launch_builtin_model, args: <xinference.core.worker.WorkerActor object at 0x00000249D7BA44F0>, kwargs: model_uid=ChatTTS-0,model_name=ChatTTS,model_size_in_billions=None,model_format=None,quantization=None,model_engine=None,model_type=audio,n_gpu=auto,request_limits=None,peft_model_config=None,gpu_idx=None,download_hub=None,model_path=None,xavier_config=None
2025-01-23 17:11:27,998 xinference.model.utils 6008 INFO Use model cache from a different hub.
2025-01-23 17:11:31,485 xinference.core.model 5616 INFO Start requests handler.
2025-01-23 17:11:32,296 xinference.model.audio.chattts 5616 INFO Load ChatTTS model with kwargs: {}
check models in custom path C:\soft\models\cache\ChatTTS failed.
2025-01-23 17:11:32,298 xinference.core.worker 6008 ERROR Failed to load model ChatTTS-0
Traceback (most recent call last):
File "c:\soft\xxx\lib\site-packages\xinference\core\worker.py", line 908, in launch_builtin_model
await model_ref.load()
File "c:\soft\xxx\lib\site-packages\xoscar\backends\context.py", line 231, in send
return self._process_result_message(result)
File "c:\soft\xxx\lib\site-packages\xoscar\backends\context.py", line 102, in _process_result_message
raise message.as_instanceof_cause()
File "c:\soft\xxx\lib\site-packages\xoscar\backends\pool.py", line 667, in send
result = await self._run_coro(message.message_id, coro)
File "c:\soft\xxx\lib\site-packages\xoscar\backends\pool.py", line 370, in _run_coro
return await coro
File "c:\soft\xxx\lib\site-packages\xoscar\api.py", line 384, in on_receive
return await super().on_receive(message) # type: ignore
File "xoscar\core.pyx", line 558, in on_receive
raise ex
File "xoscar\core.pyx", line 520, in xoscar.core._BaseActor.on_receive
async with self._lock:
File "xoscar\core.pyx", line 521, in xoscar.core._BaseActor.on_receive
with debug_async_timeout('actor_lock_timeout',
File "xoscar\core.pyx", line 526, in xoscar.core._BaseActor.on_receive
result = await result
File "c:\soft\xxx\lib\site-packages\xinference\core\model.py", line 455, in load
self._model.load()
File "c:\soft\xxx\lib\site-packages\xinference\model\audio\chattts.py", line 61, in load
raise Exception(f"The ChatTTS model is not correct: {self._model_path}")
Exception: [address=10.0.100.57:58719, pid=5616] The ChatTTS model is not correct: C:\soft\models\cache\ChatTTS
2025-01-23 17:11:32,344 xinference.core.worker 6008 ERROR [request 06dd7fe1-d96a-11ef-9c9f-047c1678e4a5] Leave launch_builtin_model, error: [address=10.0.100.57:58719, pid=5616] The ChatTTS model is not correct: C:\soft\models\cache\ChatTTS, elapsed time: 4 s
Traceback (most recent call last):
File "c:\soft\xxx\lib\site-packages\xinference\core\utils.py", line 94, in wrapped
ret = await func(*args, **kwargs)
File "c:\soft\xxx\lib\site-packages\xinference\core\worker.py", line 908, in launch_builtin_model
await model_ref.load()
File "c:\soft\xxx\lib\site-packages\xoscar\backends\context.py", line 231, in send
return self._process_result_message(result)
File "c:\soft\xxx\lib\site-packages\xoscar\backends\context.py", line 102, in _process_result_message
raise message.as_instanceof_cause()
File "c:\soft\xxx\lib\site-packages\xoscar\backends\pool.py", line 667, in send
result = await self._run_coro(message.message_id, coro)
File "c:\soft\xxx\lib\site-packages\xoscar\backends\pool.py", line 370, in _run_coro
return await coro
File "c:\soft\xxx\lib\site-packages\xoscar\api.py", line 384, in on_receive
return await super().on_receive(message) # type: ignore
File "xoscar\core.pyx", line 558, in on_receive
raise ex
File "xoscar\core.pyx", line 520, in xoscar.core._BaseActor.on_receive
async with self._lock:
File "xoscar\core.pyx", line 521, in xoscar.core._BaseActor.on_receive
with debug_async_timeout('actor_lock_timeout',
File "xoscar\core.pyx", line 526, in xoscar.core._BaseActor.on_receive
result = await result
File "c:\soft\xxx\lib\site-packages\xinference\core\model.py", line 455, in load
self._model.load()
File "c:\soft\xxx\lib\site-packages\xinference\model\audio\chattts.py", line 61, in load
raise Exception(f"The ChatTTS model is not correct: {self._model_path}")
Exception: [address=10.0.100.57:58719, pid=5616] The ChatTTS model is not correct: C:\soft\models\cache\ChatTTS
2025-01-23 17:11:32,347 xinference.api.restful_api 11396 ERROR [address=10.0.100.57:58719, pid=5616] The ChatTTS model is not correct: C:\soft\models\cache\ChatTTS
Traceback (most recent call last):
File "c:\soft\xxx\lib\site-packages\xinference\api\restful_api.py", line 1002, in launch_model
model_uid = await (await self._get_supervisor_ref()).launch_builtin_model(
File "c:\soft\xxx\lib\site-packages\xoscar\backends\context.py", line 231, in send
return self._process_result_message(result)
File "c:\soft\xxx\lib\site-packages\xoscar\backends\context.py", line 102, in _process_result_message
raise message.as_instanceof_cause()
File "c:\soft\xxx\lib\site-packages\xoscar\backends\pool.py", line 667, in send
result = await self._run_coro(message.message_id, coro)
File "c:\soft\xxx\lib\site-packages\xoscar\backends\pool.py", line 370, in _run_coro
return await coro
File "c:\soft\xxx\lib\site-packages\xoscar\api.py", line 384, in on_receive
return await super().on_receive(message) # type: ignore
File "xoscar\core.pyx", line 558, in on_receive
raise ex
File "xoscar\core.pyx", line 520, in xoscar.core._BaseActor.on_receive
async with self._lock:
File "xoscar\core.pyx", line 521, in xoscar.core._BaseActor.on_receive
with debug_async_timeout('actor_lock_timeout',
File "xoscar\core.pyx", line 526, in xoscar.core._BaseActor.on_receive
result = await result
File "c:\soft\xxx\lib\site-packages\xinference\core\supervisor.py", line 1103, in launch_builtin_model
await _launch_model()
File "c:\soft\xxx\lib\site-packages\xinference\core\supervisor.py", line 1046, in _launch_model
subpool_address = await _launch_one_model(
File "c:\soft\xxx\lib\site-packages\xinference\core\supervisor.py", line 1003, in _launch_one_model
subpool_address = await worker_ref.launch_builtin_model(
File "xoscar\core.pyx", line 284, in __pyx_actor_method_wrapper
async with lock:
File "xoscar\core.pyx", line 287, in xoscar.core.__pyx_actor_method_wrapper
result = await result
File "c:\soft\xxx\lib\site-packages\xinference\core\utils.py", line 94, in wrapped
ret = await func(*args, **kwargs)
File "c:\soft\xxx\lib\site-packages\xinference\core\worker.py", line 908, in launch_builtin_model
await model_ref.load()
File "c:\soft\xxx\lib\site-packages\xoscar\backends\context.py", line 231, in send
return self._process_result_message(result)
File "c:\soft\xxx\lib\site-packages\xoscar\backends\context.py", line 102, in _process_result_message
raise message.as_instanceof_cause()
File "c:\soft\xxx\lib\site-packages\xoscar\backends\pool.py", line 667, in send
result = await self._run_coro(message.message_id, coro)
File "c:\soft\xxx\lib\site-packages\xoscar\backends\pool.py", line 370, in _run_coro
return await coro
File "c:\soft\xxx\lib\site-packages\xoscar\api.py", line 384, in on_receive
return await super().on_receive(message) # type: ignore
File "xoscar\core.pyx", line 558, in on_receive
raise ex
File "xoscar\core.pyx", line 520, in xoscar.core._BaseActor.on_receive
async with self._lock:
File "xoscar\core.pyx", line 521, in xoscar.core._BaseActor.on_receive
with debug_async_timeout('actor_lock_timeout',
File "xoscar\core.pyx", line 526, in xoscar.core._BaseActor.on_receive
result = await result
File "c:\soft\xxx\lib\site-packages\xinference\core\model.py", line 455, in load
self._model.load()
File "c:\soft\xxx\lib\site-packages\xinference\model\audio\chattts.py", line 61, in load
raise Exception(f"The ChatTTS model is not correct: {self._model_path}")
Exception: [address=10.0.100.57:58719, pid=5616] The ChatTTS model is not correct: C:\soft\models\cache\ChatTTS
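The failure originates in xinference's validation of the cached model directory (`The ChatTTS model is not correct: C:\soft\models\cache\ChatTTS`, preceded by `check models in custom path ... failed`), which typically points to an empty or partially downloaded cache rather than a launch problem. A rough sanity check, assuming only that a usable cache directory must exist and contain files (the exact file list xinference validates is not shown in the log, and the helper name here is hypothetical):

```python
import os

def cache_looks_complete(model_dir: str) -> bool:
    """Heuristic: a usable model cache must exist and not be empty.
    This only catches the common failure mode (empty or partial download);
    it does not replicate xinference's exact validation logic."""
    return os.path.isdir(model_dir) and bool(os.listdir(model_dir))

# Path taken from the error message above:
# cache_looks_complete(r"C:\soft\models\cache\ChatTTS")
```

If the check fails, deleting the cache directory and letting xinference re-download the model (or pointing it at a known-good local copy) is a reasonable next step.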

Expected behavior / 期待表现

The ChatTTS model should load and run normally.

@XprobeBot XprobeBot added the gpu label Jan 23, 2025
@XprobeBot XprobeBot added this to the v1.x milestone Jan 23, 2025
cq134cq (Author) commented Jan 23, 2025

(screenshot attached)

cq134cq (Author) commented Jan 23, 2025

(two screenshots attached)

cq134cq changed the title from "chhtts fails to run after downloading, while all other models start fine" to "ChatTTS fails to run after downloading, while all other models start fine" Jan 23, 2025
qinxuye (Contributor) commented Jan 24, 2025

@codingl2k1 can you help?

cq134cq (Author) commented Jan 24, 2025

can you help?

Or tell me where the problem lies.
