This page shows you how to search a sparse index for the records that best match the words or phrases in a query. This is often called lexical search or keyword search.

Lexical search uses sparse vectors, which have a very large number of dimensions, of which only a small proportion are non-zero. Each dimension represents a word from a dictionary, and each value represents the importance of that word in the document. Each query word is scored against a record independently, and the scores are summed; the most similar records score highest.
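For intuition, this scoring amounts to a dot product between the query's sparse vector and each record's sparse vector: only dimensions present in both contribute. The following sketch illustrates the idea with made-up word IDs and weights; it is not Pinecone code.

# Illustration only: lexical scoring with sparse vectors.
# Each vector maps dimension indices (word IDs) to importance weights.
query = {101: 0.8, 2045: 1.2, 30991: 0.5}   # hypothetical word IDs and weights
record = {101: 0.6, 777: 0.9, 30991: 1.1}

# A record's score is the sum of weight products over shared indices;
# dimensions that appear in only one vector contribute nothing.
score = sum(weight * record[i] for i, weight in query.items() if i in record)
print(score)  # 0.8*0.6 + 0.5*1.1 = 1.03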

This feature is in public preview.

Search with text

Searching with text is supported only for indexes with integrated embedding.
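If you need an index like this, you can create a sparse index with an integrated embedding model before searching. The snippet below is a minimal sketch: the index name, cloud, and region are placeholders, and it assumes the pinecone-sparse-english-v0 model with record text stored in a chunk_text field.

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Create a sparse index that embeds text automatically at upsert and query time.
pc.create_index_for_model(
    name="example-index",
    cloud="aws",
    region="us-east-1",
    embed={
        "model": "pinecone-sparse-english-v0",
        "field_map": {"text": "chunk_text"}
    }
)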

To search a sparse index with a text query, use the search_records operation with the following parameters:

  • The namespace to query. To use the default namespace, set the namespace to an empty string ("").
  • The query.inputs.text parameter with the query text. Pinecone uses the embedding model integrated with the index to convert the text to a sparse vector automatically.
  • The query.top_k parameter with the number of similar records to return.
  • Optionally, you can specify the fields to return in the response. If not specified, the response will include all fields.

For example, the following code converts the query “What is AAPL’s outlook, considering both product launches and market conditions?” to a sparse vector and then searches for the 3 most similar vectors in the example-namespace namespace:

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

results = index.search(
    namespace="example-namespace", 
    query={
        "inputs": {"text": "What is AAPL's outlook, considering both product launches and market conditions?"}, 
        "top_k": 3
    },
    fields=["chunk_text", "quarter"]
)

print(results)

The results will look as follows. The most similar records are scored highest.

{'result': {'hits': [{'_id': 'vec2',
                      '_score': 10.77734375,
                      'fields': {'chunk_text': "Analysts suggest that AAPL'''s "
                                               'upcoming Q4 product launch '
                                               'event might solidify its '
                                               'position in the premium '
                                               'smartphone market.',
                                 'quarter': 'Q4'}},
                     {'_id': 'vec3',
                      '_score': 6.49066162109375,
                      'fields': {'chunk_text': "AAPL'''s strategic Q3 "
                                               'partnerships with '
                                               'semiconductor suppliers could '
                                               'mitigate component risks and '
                                               'stabilize iPhone production.',
                                 'quarter': 'Q3'}},
                     {'_id': 'vec1',
                      '_score': 5.3671875,
                      'fields': {'chunk_text': 'AAPL reported a year-over-year '
                                               'revenue increase, expecting '
                                               'stronger Q3 demand for its '
                                               'flagship phones.',
                                 'quarter': 'Q3'}}]},
 'usage': {'embed_total_tokens': 18, 'read_units': 1}}
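
To work with the hits programmatically instead of printing the whole response, you can iterate over them. A brief sketch, assuming the response supports the dictionary-style access shown in the printed output above:

# Print the ID, score, and a selected field for each hit.
for hit in results["result"]["hits"]:
    print(hit["_id"], round(hit["_score"], 2), hit["fields"]["quarter"])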

Search with a sparse vector

To search a sparse index with a sparse vector representation of a query, use the query operation with the following parameters:

  • The namespace to query. To use the default namespace, set the namespace to an empty string ("").
  • The sparse_vector parameter with the sparse vector values and indices.
  • The top_k parameter with the number of results to return.
  • Optionally, you can set include_values and/or include_metadata to true to include the vector values and/or metadata of the matching records in the response. However, when querying with top_k over 1000, avoid returning vector data or metadata for optimal performance.

For example, the following code uses a sparse vector representation of the query “What is AAPL’s outlook, considering both product launches and market conditions?” to search for the 3 most similar vectors in the example-namespace namespace:

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

results = index.query(
    namespace="example-namespace",
    sparse_vector={
      "values": [1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0],
      "indices": [767227209, 1640781426, 1690623792, 2021799277, 2152645940, 2295025838, 2443437770, 2779594451, 2956155693, 3476647774, 3818127854, 4283091697]
    }, 
    top_k=3,
    include_metadata=True,
    include_values=False
)

print(results)

The results will look as follows. The most similar records are scored highest.

{'matches': [{'id': 'vec2',
              'metadata': {'category': 'technology',
                           'quarter': 'Q4',
                           'chunk_text': "Analysts suggest that AAPL'''s "
                                          'upcoming Q4 product launch event '
                                          'might solidify its position in the '
                                          'premium smartphone market.'},
              'score': 10.9042969,
              'values': []},
             {'id': 'vec3',
              'metadata': {'category': 'technology',
                           'quarter': 'Q3',
                           'chunk_text': "AAPL'''s strategic Q3 partnerships "
                                          'with semiconductor suppliers could '
                                          'mitigate component risks and '
                                          'stabilize iPhone production'},
              'score': 6.48010254,
              'values': []},
             {'id': 'vec1',
              'metadata': {'category': 'technology',
                           'quarter': 'Q3',
                           'chunk_text': 'AAPL reported a year-over-year '
                                          'revenue increase, expecting '
                                          'stronger Q3 demand for its flagship '
                                          'phones.'},
              'score': 5.3671875,
              'values': []}],
 'namespace': 'example-namespace',
 'usage': {'read_units': 1}}
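
As before, you can iterate over the matches rather than print the full response. A brief sketch, assuming attribute access on the response object:

# Print the ID, score, and a metadata field for each match.
for match in results.matches:
    print(match.id, round(match.score, 2), match.metadata["quarter"])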

Search with a record ID

When you search with a record ID, Pinecone uses the sparse vector associated with the record as the query. To search a sparse index with a record ID, use the query operation with the following parameters:

  • The namespace to query. To use the default namespace, set the namespace to an empty string ("").
  • The id parameter with the unique record ID containing the sparse vector to use as the query.
  • The top_k parameter with the number of results to return.
  • Optionally, you can set include_values and/or include_metadata to true to include the vector values and/or metadata of the matching records in the response. However, when querying with top_k over 1000, avoid returning vector data or metadata for optimal performance.

For example, the following code uses an ID to search for the 3 records in the example-namespace namespace that best match the sparse vector in the record:

from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

results = index.query(
    namespace="example-namespace",
    id="vec2",
    top_k=3,
    include_metadata=True,
    include_values=False
)

print(results)