Hi, hello. How do I skip API validation and go straight to my locally deployed Qwen2-VL model? #381
Comments
Hello @Mrguanglei, I don't understand the question. Could you rephrase it and maybe provide an example of what you are trying to do?
Hello, I don't want to use OpenAI's model; I want to use a locally deployed Qwen-VL model instead. How can I use it in this project? Could you provide a demo for reference?
@dillonalaird Here you use Ollama to call other models, but I want to use vLLM to serve the local Qwen-VL-72B model, and your config does not seem to cover that. I used vLLM to expose the local model as an API interface; the model was able to reply, but there were many errors while the code was running:

(vision-agent) ubuntu@ubuntu-SYS-4028GR-TR:/apps/llms/vision-agent$ python generate.py
----- stderr -----
----- Error -----
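For context, a minimal client-side sketch of a direct call to a vLLM-served, OpenAI-compatible endpoint (assuming it runs at http://0.0.0.0:8000/v1 and the served model name is qwen-vl, both assumptions) can help separate server-side errors from Vision Agent issues:

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server does not validate the API key by default,
# so a placeholder value is enough to satisfy the client.
client = OpenAI(base_url="http://0.0.0.0:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="qwen-vl",  # must match the name the server was started with
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```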
@dillonalaird This is the OpenAILMM class in the LMM module that I modified:

class OpenAILMM(LMM):
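For reference, a minimal sketch of the kind of modification being described, assuming the class keeps its OpenAI client in self.client (consistent with the base_url debug tip below); the class name, constructor arguments, and model name here are illustrative assumptions, not the project's actual API:

```python
from openai import OpenAI


class LocalQwenVLLMM:  # hypothetical stand-in for a modified OpenAILMM
    def __init__(
        self,
        model_name: str = "qwen-vl",
        base_url: str = "http://0.0.0.0:8000/v1",
    ):
        # Redirect the OpenAI-compatible client to the local vLLM server
        # instead of api.openai.com; the key is a placeholder.
        self.client = OpenAI(base_url=base_url, api_key="EMPTY")
        self.model_name = model_name
```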
Here are some debugging steps and possible solutions for using Qwen-VL with Vision Agent.

Step 1: Confirm Qwen-VL API is Running

Before troubleshooting Vision Agent, verify that Qwen-VL is correctly served. Run:

```bash
curl -X POST "http://0.0.0.0:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen-vl", "messages": [{"role": "user", "content": "Hello!"}]}'
```
Step 2: Test Text-Only Requests in Vision Agent

If text-based queries work but images fail, the issue might be how images are processed.

```python
response = client.chat.completions.create(
    model="qwen-vl",
    messages=[{"role": "user", "content": "Describe this image."}]
)
print(response)
```
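If the text-only request above works, try an image request using the OpenAI-style vision message format, which vLLM's OpenAI-compatible server generally accepts for vision-language models. This reuses a client pointed at the local endpoint, and the file path is a placeholder:

```python
import base64

# Read and base64-encode a local image (placeholder path).
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="qwen-vl",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```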
Step 3: Fix the NumPy AttributeError

The error suggests the images are being passed as a raw NumPy array where a dictionary is expected:

```python
if isinstance(images, np.ndarray):
    images = {"image": images}  # Wrap array in a dictionary
```

Also, print the type and content of images before the request is sent:

```python
print(f"DEBUG: Type of images -> {type(images)}")
print(f"DEBUG: Content of images -> {images}")
```
print(f"DEBUG: Content of images -> {images}") Step 4: Ensure Vision Agent Uses the Correct APIVerify that Vision Agent is actually calling Qwen-VL rather than OpenAI: print(f"DEBUG: Using API base URL -> {self.client.base_url}") Step 5: Debug Execution Errors in Vision Agent
Step 5: Debug Execution Errors in Vision Agent

If Vision Agent fails while executing generated code, upgrade the execution dependencies:

```bash
pip install --upgrade nbclient jupyter-core asyncio
```

and add a debug print where code actions are run:

```python
print(f"DEBUG: Running code action -> {prompt}")
```

Next Steps

Follow these debugging steps and let us know where you get stuck. If you have logs or outputs, sharing them would help in troubleshooting further!