Overview
all-MiniLM-L12-v2 is a sentence and short-paragraph encoder. Given an input text, it outputs a 384-dimensional vector that captures the text's semantic meaning. These sentence vectors can be used for information retrieval, clustering, or sentence-similarity tasks.
all-MiniLM-L12-v2 is a fine-tuned model built on the pretrained microsoft/MiniLM-L12-H384-uncased model.
This model is 5x faster than all-mpnet-base-v2 while still offering good quality, and it comes from the SBERT "all" family of models.
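As a quick illustration of the similarity use case, the snippet below encodes two sentences and scores them with cosine similarity (a minimal sketch using sentence_transformers.util.cos_sim; the example sentences are arbitrary):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2")

# Encode two related sentences into 384-dim vectors
emb = model.encode([
    "A man is eating food.",
    "A man is eating a piece of bread.",
])

# A cosine similarity close to 1.0 means the sentences are semantically close
print(util.cos_sim(emb[0], emb[1]))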
Using the model
Installation:
!pip install -U sentence-transformers pinecone
Create Index
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="API_KEY")

# Create a serverless index; its dimension must match the model's 384-dim output
index_name = "all-minilm-l12-v2"

if not pc.has_index(index_name):
    pc.create_index(
        name=index_name,
        dimension=384,
        metric="cosine",
        spec=ServerlessSpec(
            cloud="aws",
            region="us-east-1"
        )
    )

index = pc.Index(index_name)
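Newly created indexes can take a few seconds to come online. An optional readiness wait, using the status field returned by describe_index, looks like this:

import time

# Block until the index reports ready before upserting
while not pc.describe_index(index_name).status["ready"]:
    time.sleep(1)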
Embed & Upsert
from sentence_transformers import SentenceTransformer
import torch

# Use a GPU if one is available
device = "cuda" if torch.cuda.is_available() else "cpu"
model = SentenceTransformer("sentence-transformers/all-MiniLM-L12-v2", device=device)

data = [
    {"id": "vec1", "text": "Apple is a popular fruit known for its sweetness and crisp texture."},
    {"id": "vec2", "text": "The tech company Apple is known for its innovative products like the iPhone."},
    {"id": "vec3", "text": "Many people enjoy eating apples as a healthy snack."},
    {"id": "vec4", "text": "Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces."},
    {"id": "vec5", "text": "An apple a day keeps the doctor away, as the saying goes."},
]

sentences = [x["text"] for x in data]
embeddings = model.encode(sentences)
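# Optional sanity check (assumes encode returns a NumPy array, its default):
# each embedding should be 384-dim, matching the index dimension above
assert embeddings.shape == (len(sentences), 384)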
vectors = []
for d, e in zip(data, embeddings):
    vectors.append({
        "id": d["id"],
        "values": e.tolist(),  # convert the NumPy vector to a plain list of floats
        "metadata": {"text": d["text"]}
    })

index.upsert(
    vectors=vectors,
    namespace="ns1"
)
import time
time.sleep(10)  # give the index a moment to make the new vectors queryable
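Upserts are eventually consistent, so it can be worth confirming the vectors landed before querying (an optional check):

print(index.describe_index_stats())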
Query
# Embed the query with the same model used for the documents
query = "Tell me about the tech company known as Apple"
query_embedding = model.encode(query).tolist()

results = index.query(
    namespace="ns1",
    vector=query_embedding,
    top_k=3,
    include_values=False,
    include_metadata=True
)
print(results)
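Each match in the response carries the vector id, a cosine-similarity score, and the stored metadata. A short loop like this (a sketch, using the attribute access the Pinecone client exposes) prints the matched texts:

for match in results.matches:
    print(f"{match.score:.3f}  {match.metadata['text']}")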