# Text Embeddings Inference on Habana Gaudi

## Table of contents

- [Get started](#get-started)
- [Supported Models](#supported-models)
  - [Text Embeddings](#text-embeddings)
  - [Sequence Classification and Re-Ranking](#sequence-classification-and-re-ranking)
- [How to Use](#how-to-use)
  - [Using Re-ranker models](#using-re-ranker-models)
  - [Using Sequence Classification models](#using-sequence-classification-models)
  - [Using SPLADE pooling](#using-splade-pooling)

## Get started

To use [🤗 text-embeddings-inference](https://github.com/huggingface/text-embeddings-inference) on Habana Gaudi/Gaudi2, follow these steps:

1. Pull the official Docker image with:
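A minimal sketch of this and the follow-up steps (launching a server and sending a first request) is shown below. The image tag and `docker run` flags are copied from the *How to Use* examples later in this README, while `BAAI/bge-large-en-v1.5` and the `/embed` request body are illustrative placeholders rather than required values:

```shell
# Pull the image (same tag as the run commands later in this README)
docker pull ghcr.io/huggingface/tei-gaudi:latest

# Launch a server on one Gaudi card
model=BAAI/bge-large-en-v1.5  # example model; any embedding model from the table below works
volume=$PWD/data              # share a volume with the Docker container to avoid downloading weights every run

docker run -p 8080:80 -v $volume:/data --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none -e MAX_WARMUP_SEQUENCE_LENGTH=512 --cap-add=sys_nice --ipc=host ghcr.io/huggingface/tei-gaudi:latest --model-id $model

# Request embeddings once the server is up
curl 127.0.0.1:8080/embed \
    -X POST \
    -d '{"inputs":"What is Deep Learning?"}' \
    -H 'Content-Type: application/json'
```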
For more information and documentation about Text Embeddings Inference, check out [the README](https://github.com/huggingface/text-embeddings-inference#text-embeddings-inference) of the original repo.

## Supported Models

### Text Embeddings

`tei-gaudi` currently supports Nomic, BERT, CamemBERT, and XLM-RoBERTa models with absolute positions, the JinaBERT model with ALiBi positions, and Mistral, Alibaba GTE, and Qwen2 models with RoPE positions.
Below are some examples of our validated models:

| Architecture | Pooling | Models |
|--------------|---------|--------|
| BERT | Cls/Mean/Last token | • [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)<br>• [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2)<br>• [sentence-transformers/all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2)<br>• [sentence-transformers/multi-qa-MiniLM-L6-cos-v1](https://huggingface.co/sentence-transformers/multi-qa-MiniLM-L6-cos-v1)<br>• [sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2)<br>• [sentence-transformers/paraphrase-MiniLM-L3-v2](https://huggingface.co/sentence-transformers/paraphrase-MiniLM-L3-v2) |
| BERT | Splade | • [naver/efficient-splade-VI-BT-large-query](https://huggingface.co/naver/efficient-splade-VI-BT-large-query) |
| MPNet | Cls/Mean/Last token | • [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)<br>• [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2)<br>• [sentence-transformers/multi-qa-mpnet-base-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1) |
| ALBERT | Cls/Mean/Last token | • [sentence-transformers/paraphrase-albert-small-v2](https://huggingface.co/sentence-transformers/paraphrase-albert-small-v2) |
| Mistral | Cls/Mean/Last token | • [intfloat/e5-mistral-7b-instruct](https://huggingface.co/intfloat/e5-mistral-7b-instruct)<br>• [Salesforce/SFR-Embedding-2_R](https://huggingface.co/Salesforce/SFR-Embedding-2_R) |
| GTE | Cls/Mean/Last token | • [Alibaba-NLP/gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) |
| JinaBERT | Cls/Mean/Last token | • [jinaai/jina-embeddings-v2-base-en](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) |
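The pooling strategies listed above can also be forced at launch time with the same `--pooling` flag that the SPLADE example below uses. A minimal sketch, assuming the flag accepts `cls`, `mean`, and `last-token` values as in upstream TEI, and using `sentence-transformers/all-mpnet-base-v2` from the table purely as an example:

```shell
model=sentence-transformers/all-mpnet-base-v2
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

# --pooling selects the pooling strategy explicitly (assumed values: cls, mean, last-token, splade)
docker run -p 8080:80 -v $volume:/data --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none -e MAX_WARMUP_SEQUENCE_LENGTH=512 --cap-add=sys_nice --ipc=host ghcr.io/huggingface/tei-gaudi:latest --model-id $model --pooling mean
```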
### Sequence Classification and Re-Ranking

`tei-gaudi` currently supports CamemBERT and XLM-RoBERTa Sequence Classification models with absolute positions.

Below are some examples of the currently supported models:

| Task               | Model Type  | Model ID                                                                                     |
|--------------------|-------------|----------------------------------------------------------------------------------------------|
| Re-Ranking         | XLM-RoBERTa | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large)                    |
| Re-Ranking         | XLM-RoBERTa | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base)                      |
| Sentiment Analysis | RoBERTa     | [SamLowe/roberta-base-go_emotions](https://huggingface.co/SamLowe/roberta-base-go_emotions)  |
## How to Use

### Using Re-ranker models

```shell
model=BAAI/bge-reranker-large
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run -p 8080:80 -v $volume:/data --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none -e MAX_WARMUP_SEQUENCE_LENGTH=512 --cap-add=sys_nice --ipc=host ghcr.io/huggingface/tei-gaudi:latest --model-id $model
```

And then you can rank the similarity between a query and a list of texts with:

```bash
curl 127.0.0.1:8080/rerank \
    -X POST \
    -d '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' \
    -H 'Content-Type: application/json'
```
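To eyeball the ranking, you can pipe the response through `jq`. This is a sketch that assumes `jq` is installed and that the response is a JSON array of objects with `index` and `score` fields; check the upstream TEI API documentation for the exact schema:

```bash
curl -s 127.0.0.1:8080/rerank \
    -X POST \
    -d '{"query":"What is Deep Learning?", "texts": ["Deep Learning is not...", "Deep learning is..."]}' \
    -H 'Content-Type: application/json' \
    | jq 'sort_by(-.score)'   # highest-scoring text index first (assumes index/score fields)
```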
### Using Sequence Classification models

You can also use classic Sequence Classification models like `SamLowe/roberta-base-go_emotions`:

```shell
model=SamLowe/roberta-base-go_emotions
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run -p 8080:80 -v $volume:/data --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none -e MAX_WARMUP_SEQUENCE_LENGTH=512 --cap-add=sys_nice --ipc=host ghcr.io/huggingface/tei-gaudi:latest --model-id $model
```

Once you have deployed the model, you can use the `/predict` endpoint to get the emotions most associated with an input:

```bash
curl 127.0.0.1:8080/predict \
    -X POST \
    -d '{"inputs":"I like you."}' \
    -H 'Content-Type: application/json'
```
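To pull out only the strongest emotion, the same `jq` approach works here. A sketch, assuming the response is a JSON array of `label`/`score` objects (verify against the actual server response):

```bash
curl -s 127.0.0.1:8080/predict \
    -X POST \
    -d '{"inputs":"I like you."}' \
    -H 'Content-Type: application/json' \
    | jq 'max_by(.score)'   # highest-scoring label (assumes label/score fields)
```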
### Using SPLADE pooling

You can choose to activate SPLADE pooling for BERT and DistilBERT MaskedLM architectures:

```shell
docker build -f Dockerfile-hpu -t tei_gaudi .
model=naver/efficient-splade-VI-BT-large-query
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run

docker run -p 8080:80 -v $volume:/data --runtime=habana -e HABANA_VISIBLE_DEVICES=all -e OMPI_MCA_btl_vader_single_copy_mechanism=none -e MAX_WARMUP_SEQUENCE_LENGTH=512 --cap-add=sys_nice --ipc=host tei_gaudi --model-id $model --pooling splade
```

Once you have deployed the model, you can use the `/embed_sparse` endpoint to get the sparse embedding:

```bash
curl 127.0.0.1:8080/embed_sparse \
    -X POST \
    -d '{"inputs":"I like you."}' \
    -H 'Content-Type: application/json'
```
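SPLADE produces sparse vectors, so the response typically lists only the non-zero terms. A quick sanity check with `jq`, assuming the response is a JSON array with one array of term objects per input (again, confirm the schema against the upstream TEI documentation):

```bash
curl -s 127.0.0.1:8080/embed_sparse \
    -X POST \
    -d '{"inputs":"I like you."}' \
    -H 'Content-Type: application/json' \
    | jq '.[0] | length'   # number of non-zero terms for the first input (assumed schema)
```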
> The license to use TEI on Habana Gaudi is the same as the TEI license: https://github.com/huggingface/text-embeddings-inference/blob/main/LICENSE
>
> Please reach out to api-enterprise@huggingface.co if you have any questions.