all-mpnet-base-v2 | HuggingFace

METRIC: cosine, dot product, euclidean

DIMENSION: 768

MAX INPUT TOKENS: 384

TASK: embedding

Overview

all-mpnet-base-v2 is a sentence and short-paragraph encoder. Given an input text, it outputs a vector that captures the text's semantic meaning. The sentence vector can be used for information retrieval, clustering, or sentence similarity tasks. all-mpnet-base-v2 is a fine-tuned model that uses the pretrained microsoft/mpnet-base model under the hood, and it has the best quality of the sbert all-* family of models. Input longer than 384 word pieces is truncated by default.
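
As a quick illustration of the sentence-similarity use case, independent of the Pinecone workflow below, the following minimal sketch encodes two sentences with the same checkpoint and compares them with cosine similarity. The example sentences are illustrative only.


from sentence_transformers import SentenceTransformer, util

# Load the checkpoint from the Hugging Face Hub
model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2')

# Encode two sentences into 768-dimensional vectors
emb = model.encode([
    "A man is eating food.",
    "A man is eating a piece of bread.",
])

# Cosine similarity between the two embeddings (a value close to 1 means very similar)
print(util.cos_sim(emb[0], emb[1]))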

Using the model

Installation:


!pip install -U sentence-transformers pinecone

Create Index


from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="API_KEY")

index_name = "all-mpnet-base-v2"

if not pc.has_index(index_name):
    pc.create_index(
        name=index_name,
        dimension=768,
        metric="cosine",
        spec=ServerlessSpec(
            cloud='aws',
            region='us-east-1'
        )
    )

index = pc.Index(index_name)
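
If you want to confirm the index was created with the expected dimension and metric before writing to it, an optional check like the one below should work; this step is not part of the original walkthrough.


# Optional sanity check: inspect the index configuration
print(pc.describe_index(index_name))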

Embed & Upsert


from sentence_transformers import SentenceTransformer
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'

model = SentenceTransformer('sentence-transformers/all-mpnet-base-v2').to(device)

data = [
    {"id": "vec1", "text": "Apple is a popular fruit known for its sweetness and crisp texture."},
    {"id": "vec2", "text": "The tech company Apple is known for its innovative products like the iPhone."},
    {"id": "vec3", "text": "Many people enjoy eating apples as a healthy snack."},
    {"id": "vec4", "text": "Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces."},
    {"id": "vec5", "text": "An apple a day keeps the doctor away, as the saying goes."},
]
# Encode the example texts into 768-dimensional embeddings
sentences = [x["text"] for x in data]
embeddings = model.encode(sentences)

# Build Pinecone vector records, converting each embedding to a plain list of floats
vectors = []
for d, e in zip(data, embeddings):
    vectors.append({
        "id": d['id'],
        "values": e.tolist(),
        "metadata": {'text': d['text']}
    })

index.upsert(
    vectors=vectors,
    namespace="ns1"
)
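
To verify that the records landed in the namespace before querying, you can check the index stats. This step is not in the original walkthrough; note that freshly upserted vectors can take a moment to appear in the counts.


# Optional: verify the upsert by checking vector counts per namespace
print(index.describe_index_stats())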

Query


query = "Tell me about the tech company known as Apple"

query_embedding = model.encode(query).tolist()
print(query_embedding)

results = index.query(
    namespace="ns1",
    vector=query_embedding,
    top_k=3,
    include_values=False,
    include_metadata=True
)

print(results)
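
The response contains the top matches with their similarity scores and the metadata stored at upsert time. A small sketch for iterating over them, using the standard fields of the Pinecone query response:


# Print each match's id, similarity score, and original text
for match in results['matches']:
    print(match['id'], round(match['score'], 3), match['metadata']['text'])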
