Overview
The model is optimized for precision in RAG reranking tasks. It assigns a relevance score from 0 to 1 for each query-document pair, with higher scores indicating a stronger match. To maintain accuracy, we've set the model's maximum context length to 512 tokens, an optimal limit for preserving ranking quality in reranking tasks.
Installation
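The rerank example below uses the Pinecone Python SDK, which can be installed from PyPI. This assumes the current package name, `pinecone` (older guides may reference the deprecated `pinecone-client` name):

```shell
pip install pinecone
```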
Rerank
from pinecone import Pinecone

# Initialize the client with your API key
pc = Pinecone(api_key="API-KEY")

query = "Tell me about Apple's products"

# Rerank the candidate documents against the query
results = pc.inference.rerank(
    model="pinecone-rerank-v0",
    query=query,
    documents=[
        "Apple is a popular fruit known for its sweetness and crisp texture.",
        "Apple is known for its innovative products like the iPhone.",
        "Many people enjoy eating apples as a healthy snack.",
        "Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces.",
        "An apple a day keeps the doctor away, as the saying goes.",
    ],
    top_n=3,                # return only the three highest-scoring documents
    return_documents=True,  # include the document text in each result
    parameters={"truncate": "END"},  # truncate inputs longer than 512 tokens at the end
)

print(query)
for r in results.data:
    print(r.score, r.document.text)