Merge pull request #639 from roboflow/bring-back-transformers-extras
Bring back transformers extras
PawelPeczek-Roboflow authored Sep 10, 2024
2 parents 3f4a264 + 46dd3c9 commit 17aa174
Showing 13 changed files with 38 additions and 7 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/integration_tests_inference_models.yml
@@ -30,7 +30,7 @@ jobs:
 run: |
 python -m pip install --upgrade pip
 pip install --upgrade setuptools
-pip install --extra-index-url https://download.pytorch.org/whl/cpu -r requirements/_requirements.txt -r requirements/requirements.cpu.txt -r requirements/requirements.sdk.http.txt -r requirements/requirements.test.unit.txt -r requirements/requirements.http.txt -r requirements/requirements.yolo_world.txt -r requirements/requirements.sam.txt
+pip install --extra-index-url https://download.pytorch.org/whl/cpu -r requirements/_requirements.txt -r requirements/requirements.cpu.txt -r requirements/requirements.sdk.http.txt -r requirements/requirements.test.unit.txt -r requirements/requirements.http.txt -r requirements/requirements.yolo_world.txt -r requirements/requirements.sam.txt -r requirements/requirements.transformers.txt
 - name: 🧪 Integration Tests of Inference models
 timeout-minutes: 45
 run: MAX_BATCH_SIZE=6 python -m pytest tests/inference/models_predictions_tests
@@ -37,7 +37,7 @@ jobs:
 - name: 📦 Installing `inference` package...
 run: |
 wheel_name=`ls ./dist/inference_gpu-*-py3-none-any.whl | head -n 1`
-pip install "${wheel_name}[clip,gaze,grounding-dino,sam,waf,yolo-world,http,hosted]"
+pip install "${wheel_name}[clip,gaze,grounding-dino,sam,waf,yolo-world,http,hosted,transformers]"
 - name: 🧪 Testing package installation
 working-directory: "/"
 run: |
@@ -47,3 +47,4 @@ jobs:
 python -c "from inference.models.sam import SegmentAnything"
 python -c "from inference.models.grounding_dino import GroundingDINO"
 python -c "from inference.models.yolo_world import YOLOWorld"
+python -c "from inference.models.florence2 import Florence2, LoRAFlorence2"
@@ -37,7 +37,7 @@ jobs:
 - name: 📦 Installing `inference` package...
 run: |
 wheel_name=`ls ./dist/inference-*-py3-none-any.whl | head -n 1`
-pip install "${wheel_name}[clip,gaze,grounding-dino,sam,waf,yolo-world,http,hosted]"
+pip install "${wheel_name}[clip,gaze,grounding-dino,sam,waf,yolo-world,http,hosted,transformers]"
 - name: 🧪 Testing package installation
 working-directory: "/"
 run: |
@@ -47,3 +47,4 @@ jobs:
 python -c "from inference.models.sam import SegmentAnything"
 python -c "from inference.models.grounding_dino import GroundingDINO"
 python -c "from inference.models.yolo_world import YOLOWorld"
+python -c "from inference.models.florence2 import Florence2, LoRAFlorence2"
1 change: 1 addition & 0 deletions .release/pypi/inference.core.setup.py
@@ -67,6 +67,7 @@ def read_requirements(path):
 "sam": read_requirements("requirements/requirements.sam.txt"),
 "waf": read_requirements("requirements/requirements.waf.txt"),
 "yolo-world": read_requirements("requirements/requirements.yolo_world.txt"),
+"transformers": read_requirements("requirements/requirements.transformers.txt"),
 },
 classifiers=[
 "Development Status :: 5 - Production/Stable",
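Each setup variant maps the new `transformers` extra to a requirements file through a `read_requirements` helper. A minimal sketch of what such a helper typically does (hypothetical; the real implementation lives in the setup scripts above) is:

```python
import os
import tempfile


def read_requirements(path):
    """Parse a pip requirements file into a list of specifiers,
    skipping blank lines and comment lines (a sketch of the helper
    referenced in the setup.py extras_require entries above)."""
    with open(path) as f:
        return [
            line.strip()
            for line in f
            if line.strip() and not line.strip().startswith("#")
        ]


if __name__ == "__main__":
    # Demonstrate on a throwaway file shaped like requirements.transformers.txt
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
        tmp.write("# transformers extra\ntransformers>=4.41.0\ntimm~=1.0.0\n\n")
        path = tmp.name
    print(read_requirements(path))  # -> ['transformers>=4.41.0', 'timm~=1.0.0']
    os.remove(path)
```

With a helper like this, each `extras_require` entry is simply the parsed contents of one requirements file, so adding the `transformers` extra is a one-line change per setup script.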
1 change: 1 addition & 0 deletions .release/pypi/inference.cpu.setup.py
@@ -66,6 +66,7 @@ def read_requirements(path):
 "sam": read_requirements("requirements/requirements.sam.txt"),
 "waf": read_requirements("requirements/requirements.waf.txt"),
 "yolo-world": read_requirements("requirements/requirements.yolo_world.txt"),
+"transformers": read_requirements("requirements/requirements.transformers.txt"),
 },
 classifiers=[
 "Development Status :: 5 - Production/Stable",
1 change: 1 addition & 0 deletions .release/pypi/inference.gpu.setup.py
@@ -66,6 +66,7 @@ def read_requirements(path):
 "sam": read_requirements("requirements/requirements.sam.txt"),
 "waf": read_requirements("requirements/requirements.waf.txt"),
 "yolo-world": read_requirements("requirements/requirements.yolo_world.txt"),
+"transformers": read_requirements("requirements/requirements.transformers.txt"),
 },
 classifiers=[
 "Development Status :: 5 - Production/Stable",
1 change: 1 addition & 0 deletions .release/pypi/inference.setup.py
@@ -66,6 +66,7 @@ def read_requirements(path):
 "sam": read_requirements("requirements/requirements.sam.txt"),
 "waf": read_requirements("requirements/requirements.waf.txt"),
 "yolo-world": read_requirements("requirements/requirements.yolo_world.txt"),
+"transformers": read_requirements("requirements/requirements.transformers.txt"),
 },
 classifiers=[
 "Development Status :: 5 - Production/Stable",
2 changes: 2 additions & 0 deletions docker/dockerfiles/Dockerfile.onnx.cpu
@@ -23,6 +23,7 @@ COPY requirements/requirements.sam.txt \
 requirements/requirements.doctr.txt \
 requirements/requirements.groundingdino.txt \
 requirements/requirements.yolo_world.txt \
+requirements/requirements.transformers.txt \
 requirements/_requirements.txt \
 ./

@@ -37,6 +38,7 @@ RUN pip3 install --upgrade pip && pip3 install \
 -r requirements.doctr.txt \
 -r requirements.groundingdino.txt \
 -r requirements.yolo_world.txt \
+-r requirements.transformers.txt \
 jupyterlab \
 wheel>=0.38.0 \
 --upgrade \
4 changes: 2 additions & 2 deletions docker/dockerfiles/Dockerfile.onnx.gpu
@@ -24,7 +24,7 @@ COPY requirements/requirements.sam.txt \
 requirements/requirements.cogvlm.txt \
 requirements/requirements.yolo_world.txt \
 requirements/_requirements.txt \
-requirements/requirements.pali.server.txt \
+requirements/requirements.transformers.txt \
 requirements/requirements.pali.flash_attn.txt \
 ./

@@ -41,7 +41,7 @@ RUN python3 -m pip install --extra-index-url https://download.pytorch.org/whl/cu
 -r requirements.doctr.txt \
 -r requirements.cogvlm.txt \
 -r requirements.yolo_world.txt \
--r requirements.pali.server.txt \
+-r requirements.transformers.txt \
 jupyterlab \
 --upgrade \
 && rm -rf ~/.cache/pip
12 changes: 11 additions & 1 deletion docs/foundation/paligemma.md
@@ -8,6 +8,16 @@ You can use PaliGemma to:
 
 You can deploy PaliGemma object detection models with Inference, and use PaliGemma for object detection.
 
+### Installation
+
+To install inference with the extra dependencies necessary to run PaliGemma, run
+
+```pip install inference[transformers]```
+
+or
+
+```pip install inference-gpu[transformers]```
+
 ### How to Use PaliGemma (VQA)
 
 Create a new Python file called `app.py` and add the following code:
@@ -17,7 +27,7 @@ import inference
 
 from inference.models.paligemma.paligemma import PaliGemma
 
-pg = PaliGemma(api_key="YOUR ROBOFLOW API KEY")
+pg = PaliGemma("paligemma-3b-mix-224", api_key="YOUR ROBOFLOW API KEY")
 
 from PIL import Image
 
1 change: 1 addition & 0 deletions docs/models.md
@@ -98,6 +98,7 @@ Some functionality requires extra dependencies. These can be installed by specif
 | `http` | Ability to run the http interface |
 | `sam` | Ability to run the core `Segment Anything` model (by Meta AI) |
 | `doctr` | Ability to use the core `doctr` model (by <a href="https://github.com/mindee/doctr" target="_blank">Mindee</a>) |
+| `transformers` | Ability to use transformers-based multi-modal models such as `Florence2` and `PaliGemma`. To use Florence2 you will need to manually install <a href="https://github.com/Dao-AILab/flash-attention/" target="_blank">flash_attn</a> |
 
 **_Note:_** Both CLIP and Segment Anything require PyTorch to run. These are included in their respective dependencies, however PyTorch installs can be highly environment dependent. See the <a href="https://pytorch.org/get-started/locally/" target="_blank">official PyTorch install page</a> for instructions specific to your environment.
 
@@ -5,4 +5,4 @@ timm~=1.0.0
 accelerate>=0.25.0,<=0.32.1
 xformers>=0.0.22
 einops>=0.7.0,<=0.8.0
-peft~=0.11.1
\ No newline at end of file
+peft~=0.11.1
12 changes: 12 additions & 0 deletions tests/inference/models_predictions_tests/test_florence2.py
@@ -0,0 +1,12 @@
+import pytest
+from inference.models.florence2 import Florence2
+import numpy as np
+
+
+@pytest.mark.slow
+def test_florence2_caption(
+    example_image: np.ndarray,
+) -> None:
+    model = Florence2("florence-pretrains/1")
+    response = model.infer(example_image, prompt="<CAPTION>")[0].response
+    assert response == "a close up of a dog looking over a fence"
