How to use images #448
I tried putting the image's base64 string into the message list, the way the REST API does, but the model could not recognize the image; instead it replied that I had sent it a string of symbols or a formula. I also tried putting the image's file path directly into the message list, but the model's answer was still wrong. So far I have only gotten the model to recognize the image correctly from the command line, so I suspect the ollama-python library doesn't support this yet.
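For reference, here is a minimal sketch of the message-list route through ollama-python. The model name, prompt, and image path below are placeholders, not from this thread; current ollama-python accepts file paths, raw bytes, or base64 strings in a message's `images` field:

```python
import base64
from pathlib import Path

def build_image_message(image_path: str, prompt: str) -> dict:
    """Build a chat message whose 'images' field carries the picture as base64."""
    raw = Path(image_path).read_bytes()
    return {
        "role": "user",
        "content": prompt,
        "images": [base64.b64encode(raw).decode("ascii")],
    }

# Sending it requires a running Ollama server with a vision model pulled
# (model name here is a placeholder):
# import ollama
# reply = ollama.chat(
#     model="llava:latest",
#     messages=[build_image_message("demo4.png", "Describe this image.")],
# )
# print(reply["message"]["content"])
```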
A Python REST API request that passes an image to the model; I tested it and it works:

```python
import base64
import requests

# Request URL
url = "http://localhost:11434/api/generate"

# Image path
image_path = r'D:\用户\智学伴\demo4.png'

# Convert the image to a base64 string
def image_to_base64(image_path):
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")

# Get the base64 encoding of the image
base64_string = image_to_base64(image_path)
print(base64_string)

# Request payload (fields per the Ollama /api/generate API;
# the model name and prompt here are placeholders, not from the original post)
data = {
    "model": "llava",
    "prompt": "Describe this image.",
    "images": [base64_string],
    "stream": False,
}

# Send the POST request
response = requests.post(url, json=data)
```
Thanks!
I hadn't converted to base64 before; now that I have, it still returns an empty response, so maybe this model can't be used this way. Calling the model directly, without Ollama, works.
I tried several images; some are recognized and some are not, so it does come down to the model.
How do I use images for Q&A with the moondream model?
```python
import ollama

data = ollama.generate(
    model="moondream:latest",
    images=[image_data],  # image_data: the image's bytes or base64 string
    prompt="What is the error message shown in the code?",
)
print(data.response)  # prints ''
```
I call it like this, but get no response. Where is the configuration problem?
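When `generate` comes back empty, one thing worth checking is what actually lands in `images`: it should be the image's raw bytes or a base64 string, never a path string pasted into the prompt. A small sketch of the raw-bytes route (the file name is a placeholder, and the call itself needs a running Ollama server with `moondream` pulled):

```python
def load_image_for_ollama(path: str) -> bytes:
    """Read the image as raw bytes; ollama-python also accepts a base64 string."""
    with open(path, "rb") as f:
        data = f.read()
    if not data:
        raise ValueError(f"{path} is empty")
    return data

# Usage (requires a running Ollama server; file name is a placeholder):
# import ollama
# result = ollama.generate(
#     model="moondream:latest",
#     prompt="What is the error message shown in the code?",
#     images=[load_image_for_ollama("screenshot.png")],
# )
# print(result["response"])
```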