Sionic AI Embedding API v2
About Sionic AI
Sionic AI delivers more accessible and cost-effective AI technology that addresses a variety of needs to boost productivity and drive innovation.
Large Language Models (LLMs) are not only for research and experimentation: we offer solutions that leverage LLMs to add value to your business, so that anyone can easily train and control AI.
How to get embeddings
We currently offer a beta version of the embedding API. To get embeddings, send your text to the API endpoint; you can send either a single sentence or multiple sentences, and the embeddings corresponding to the inputs will be returned.
API Endpoint: https://api.sionic.ai/v2/embedding
Command line Example
Request:
curl https://api.sionic.ai/v2/embedding \
-H "Content-Type: application/json" \
-d '{
"inputs": ["first query", "second query", "third query"]
}'
Response:
{
  "embedding": [
    [0.5567971, -1.1578958, -0.7148851, -0.2326297, 0.4394867, ...],
    [0.5049863, -0.8253384, -1.0041373, -0.6503708, 0.5007141, ...],
    [0.6059823, -1.0369557, -0.6705063, -0.4467056, 0.8618057, ...]
  ]
}
Python code Example
Get embeddings by directly calling the embedding API.
from typing import List

import numpy as np
import requests


def get_embedding(queries: List[str], url):
    response = requests.post(url=url, json={'inputs': queries})
    return np.asarray(response.json()['embedding'], dtype=np.float32)


url = "https://api.sionic.ai/v2/embedding"
inputs1 = ["first query", "second query"]
inputs2 = ["third query", "fourth query"]
embedding1 = get_embedding(inputs1, url=url)
embedding2 = get_embedding(inputs2, url=url)
# Row-wise cosine similarity between the two batches of embeddings.
cos_similarity = (embedding1 / np.linalg.norm(embedding1, axis=1, keepdims=True)) @ (embedding2 / np.linalg.norm(embedding2, axis=1, keepdims=True)).T
print(cos_similarity)
Use the pre-defined SionicEmbeddingModel to obtain embeddings.
from model_api import SionicEmbeddingModel
import numpy as np

inputs1 = ["first query", "second query"]
inputs2 = ["third query", "fourth query"]
model = SionicEmbeddingModel(url="https://api.sionic.ai/v2/embedding",
                             dimension=3072)
embedding1 = model.encode(inputs1)
embedding2 = model.encode(inputs2)
# Row-wise cosine similarity between the two batches of embeddings.
cos_similarity = (embedding1 / np.linalg.norm(embedding1, axis=1, keepdims=True)) @ (embedding2 / np.linalg.norm(embedding2, axis=1, keepdims=True)).T
print(cos_similarity)
We apply an instruction when encoding short queries for retrieval tasks. By using encode_queries(), the instruction is prefixed to each query, as in the following example. The recommended instruction for both the v1 and v2 models is "query: ".
from model_api import SionicEmbeddingModel
import numpy as np

query = ["first query", "second query"]
passage = ["This is a passage related to the first query", "This is a passage related to the second query"]
model = SionicEmbeddingModel(url="https://api.sionic.ai/v2/embedding",
                             instruction="query: ",
                             dimension=3072)
query_embedding = model.encode_queries(query)
passage_embedding = model.encode_corpus(passage)
# Row-wise cosine similarity between each query and each passage.
cos_similarity = (query_embedding / np.linalg.norm(query_embedding, axis=1, keepdims=True)) @ (passage_embedding / np.linalg.norm(passage_embedding, axis=1, keepdims=True)).T
print(cos_similarity)
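For context, encode_queries() is described above as prefixing the instruction to each query. Assuming the prefix is applied verbatim, the call is roughly equivalent to the following sketch against the raw endpoint (the manual prefixing shown here is illustrative, not the official client code):
from typing import List

import numpy as np
import requests

url = "https://api.sionic.ai/v2/embedding"
instruction = "query: "
query = ["first query", "second query"]
passage = ["This is a passage related to the first query", "This is a passage related to the second query"]


def get_embedding(queries: List[str], url: str) -> np.ndarray:
    response = requests.post(url=url, json={'inputs': queries})
    return np.asarray(response.json()['embedding'], dtype=np.float32)


# Prefix the instruction to each query; passages are sent unchanged.
query_embedding = get_embedding([instruction + q for q in query], url=url)
passage_embedding = get_embedding(passage, url=url)

# Row-wise cosine similarity between each query and each passage.
query_embedding /= np.linalg.norm(query_embedding, axis=1, keepdims=True)
passage_embedding /= np.linalg.norm(passage_embedding, axis=1, keepdims=True)
print(query_embedding @ passage_embedding.T)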
Massive Text Embedding Benchmark (MTEB) Evaluation
Both versions of Sionic AI's embedding model show state-of-the-art performance on the MTEB! You can find the code to evaluate MTEB datasets with the Sionic embedding APIs here.
| Model Name | Dimension | Sequence Length | Average (56) |
|---|---|---|---|
| sionic-ai/sionic-ai-v2 | 3072 | 512 | 65.23 |
| sionic-ai/sionic-ai-v1 | 2048 | 512 | 64.92 |
| bge-large-en-v1.5 | 1024 | 512 | 64.23 |
| gte-large-en | 1024 | 512 | 63.13 |
| text-embedding-ada-002 | 1536 | 8191 | 60.99 |
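The linked evaluation code is not reproduced in this card, but as a minimal sketch, the API can be plugged into the mteb package by wrapping it in an object that exposes an encode() method. The task choice, batch size, and output folder below are illustrative only, not the official evaluation setup:
from typing import List

import numpy as np
import requests
from mteb import MTEB

API_URL = "https://api.sionic.ai/v2/embedding"


class SionicAPIModel:
    """Minimal MTEB-compatible wrapper around the embedding API."""

    def encode(self, sentences: List[str], batch_size: int = 32, **kwargs) -> np.ndarray:
        # MTEB only requires an encode() method that maps a list of texts
        # to an (n, dim) array; requests are batched to keep payloads small.
        embeddings = []
        for i in range(0, len(sentences), batch_size):
            batch = sentences[i:i + batch_size]
            response = requests.post(API_URL, json={"inputs": batch})
            embeddings.extend(response.json()["embedding"])
        return np.asarray(embeddings, dtype=np.float32)


# Run a single illustrative task; the average reported above covers 56 MTEB tasks.
evaluation = MTEB(tasks=["Banking77Classification"])
evaluation.run(SionicAPIModel(), output_folder="results/sionic-ai-v2")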
Evaluation results
All scores are self-reported on MTEB test sets.

| Dataset | Metric | Score |
|---|---|---|
| MTEB AmazonCounterfactualClassification (en) | accuracy | 76.567 |
| MTEB AmazonCounterfactualClassification (en) | ap | 39.865 |
| MTEB AmazonCounterfactualClassification (en) | f1 | 70.603 |
| MTEB AmazonPolarityClassification | accuracy | 93.464 |
| MTEB AmazonPolarityClassification | ap | 90.363 |
| MTEB AmazonPolarityClassification | f1 | 93.453 |
| MTEB AmazonReviewsClassification (en) | accuracy | 48.772 |
| MTEB AmazonReviewsClassification (en) | f1 | 48.167 |
| MTEB ArguAna | map_at_1 | 40.185 |
| MTEB ArguAna | map_at_10 | 56.114 |