pinecone-sparse-english-v0

Inference API
METRIC: dot product
MAX INPUT TOKENS: 512
TASK: embedding
PRICE: $0.08 / 1M tokens

Built on the DeepImpact architecture, the model estimates the lexical importance of tokens directly from their context, unlike traditional retrieval models such as BM25, which rely solely on term frequency. The model outperforms BM25 by up to 44% (23% on average) NDCG@10 on the Text REtrieval Conference (TREC) Deep Learning Tracks and by up to 24% (8% on average) on BEIR. For more information, see our blog post on cascading retrieval.

You must specify the input_type parameter as either query or passage. You can optionally return the string tokens by setting "return_tokens": True.

Installation

pip install pinecone pinecone-plugin-inference

Upsert

from pinecone import Pinecone

pc = Pinecone(api_key="API-KEY")

# Embed a passage for indexing
embeddings = pc.inference.embed(
    model="pinecone-sparse-english-v0",
    inputs=["The share price of NVIDIA is $10"],
    parameters={
        "input_type": "passage",  # use "query" when embedding search queries
        "return_tokens": True,
    },
)
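
To complete the upsert flow, the sparse indices and values from the response can be written to an index. The snippet below is a minimal sketch rather than part of the example above: it assumes you have already created a sparse-enabled index named example-index, and that each returned embedding exposes sparse_indices and sparse_values fields (check the response object in your SDK version).

# Assumed follow-up: upsert the sparse embedding into an existing sparse index.
# "example-index" and the record id "doc-1" are placeholder names.
index = pc.Index("example-index")

emb = embeddings[0]  # embedding for the single passage above
index.upsert(
    vectors=[
        {
            "id": "doc-1",
            "sparse_values": {
                "indices": emb["sparse_indices"],
                "values": emb["sparse_values"],
            },
        }
    ]
)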

Query


# Embed a search query (reuses the client from above)
query_embeddings = pc.inference.embed(
    model="pinecone-sparse-english-v0",
    inputs=["what is NVIDIA's share price"],
    parameters={
        "input_type": "query",  # use "passage" when embedding documents
        "return_tokens": True,
    },
)
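
The query-side embedding can then be passed to the index as a sparse vector. This is again a sketch against the hypothetical example-index from the previous section, assuming the same sparse_indices and sparse_values fields on the response.

# Assumed follow-up: search the sparse index with the query embedding.
q = query_embeddings[0]
results = index.query(
    top_k=3,
    sparse_vector={
        "indices": q["sparse_indices"],
        "values": q["sparse_values"],
    },
    include_metadata=True,
)
print(results)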

