

A Pinecone index can hold any combination of the following:
  • Documents are the unit of data in an index with a document schema — JSON records whose ranking fields are indexed according to a schema you declare at index creation. An index with a document schema can mix dense_vector, sparse_vector, and FTS-enabled string ranking fields in the same record, alongside any number of metadata fields (auto-indexed at upsert time). Use documents for full-text search (BM25 ranking on string fields with full_text_search enabled), and to combine multiple scoring methods on the same data via score_by.
  • Dense vectors are numerical representations of the meaning and relationships of text, images, or other data. Indexes of dense vectors are used for semantic search, or together with sparse vectors for hybrid search.
  • Sparse vectors are high-dimensional vectors with mostly zero values, produced by a sparse embedding model such as pinecone-sparse-english-v0. Indexes of sparse vectors are used for sparse-vector lexical search, or together with dense vectors for hybrid search.
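As a quick illustration of the two vector representations (the values here are made up; real vectors come from an embedding model):

```python
# A dense vector stores a value at every position; real indexes use
# hundreds or thousands of dimensions.
dense = [0.12, -0.45, 0.33, 0.08]

# A sparse vector is mostly zeros, so only the nonzero positions are kept.
# Pinecone represents this as parallel "indices" and "values" lists.
sparse = {"indices": [10, 16, 45], "values": [0.5, 0.2, 0.5]}

def sparse_dot(a: dict, b: dict) -> float:
    """Dot product of two sparse vectors: only shared indices contribute."""
    b_map = dict(zip(b["indices"], b["values"]))
    return sum(v * b_map.get(i, 0.0) for i, v in zip(a["indices"], a["values"]))
```

Lexical search over sparse vectors reduces to dot products like this: only dimensions (terms) shared by the query and the document contribute to the score.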
You can create an index using the Pinecone console.
An index with a document schema stores typed JSON documents. The schema declares how each ranking field is indexed: as a string field with full_text_search enabled for BM25 ranking, a dense_vector for ANN similarity, or a sparse_vector. A single index can mix all three ranking field types; at query time, pick the ranking signal with score_by. Metadata fields (anything else you upsert) are not declared in the schema — they’re auto-indexed for filtering at upsert time.
Full-text search is not integrated embedding. A string field with full_text_search is indexed for BM25 ranking and Lucene queries. It does not call an embedding model. Integrated embedding remains available for vector API indexes.
Indexes with document schemas are in public preview and use API version 2026-01.alpha. The preview supports REST and the Python SDK; for other languages, call the REST endpoint directly.

Minimal: BM25 on a single text field

The example below creates an articles index whose body field is indexed for BM25 ranking. Other fields included at upsert time are stored on each document and auto-indexed for filtering as metadata.
curl
curl -X POST "https://api.pinecone.io/indexes" \
  -H "Api-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Pinecone-Api-Version: 2026-01.alpha" \
  -d '{
    "name": "articles",
    "deployment": {
      "deployment_type": "managed",
      "cloud": "aws",
      "region": "us-east-1"
    },
    "schema": {
      "fields": {
        "body": {
          "type": "string",
          "full_text_search": {}
        }
      }
    }
  }'
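For intuition about what BM25 ranking on body computes, here is a minimal, self-contained sketch of the classic BM25 formula (illustrative only; Pinecone's analyzer, parameter defaults, and exact scoring may differ):

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Score one document (a token list) against a query with classic BM25."""
    n = len(corpus)
    avgdl = sum(len(d) for d in corpus) / n
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)          # document frequency
        idf = math.log(1 + (n - df + 0.5) / (df + 0.5))   # inverse document frequency
        tf = doc_terms.count(term)                        # term frequency in this doc
        norm = tf + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * tf * (k1 + 1) / norm
    return score

corpus = [
    ["vector", "database", "search"],
    ["full", "text", "search", "with", "bm25"],
    ["metadata", "filtering"],
]
# The document containing both query terms should rank highest.
scores = [bm25_score(["bm25", "search"], doc, corpus) for doc in corpus]
```

Rarer terms get higher IDF weight, and term frequency saturates as it grows, which is what makes BM25 a stronger baseline than raw term counts.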

Multi-field schema: BM25 + dense vector

A single index with a document schema can hold FTS-enabled string and dense_vector ranking fields together; the same schema can also include a sparse_vector field. Each search request ranks by one scoring type. Multi-field BM25 is supported (multiple text clauses on different fields, or one query_string clause spanning fields), and any scoring method can be combined with metadata filters, including text-match filters ($match_phrase, $match_all, $match_any) on FTS-enabled string fields.
curl
curl -X POST "https://api.pinecone.io/indexes" \
  -H "Api-Key: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -H "X-Pinecone-Api-Version: 2026-01.alpha" \
  -d '{
    "name": "articles-multi",
    "deployment": {
      "deployment_type": "managed",
      "cloud": "aws",
      "region": "us-east-1"
    },
    "schema": {
      "fields": {
        "title":    { "type": "string", "full_text_search": {} },
        "body":     { "type": "string", "full_text_search": {} },
        "embedding":{ "type": "dense_vector", "dimension": 1536, "metric": "cosine" }
      }
    }
  }'
You can include additional fields (for example, category or year) at upsert time. All metadata fields are automatically indexed for filtering — they don’t need to be declared in the schema. The schema is for ranking fields only; declaring a metadata-only field (string without full_text_search, string_list, float, or boolean) is rejected at index creation. For the full schema reference (all field types, language and analyzer options, dedicated read capacity, and Python SDK examples), see Full-text search.
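To make the text-match filter operators concrete, here is a rough client-side approximation of the semantics their names suggest (whitespace tokenization only; the real operators run server-side against the FTS index using the field's analyzer):

```python
def match_any(text: str, terms: list[str]) -> bool:
    """Rough analogue of $match_any: at least one term appears."""
    tokens = text.lower().split()
    return any(t in tokens for t in terms)

def match_all(text: str, terms: list[str]) -> bool:
    """Rough analogue of $match_all: every term appears."""
    tokens = text.lower().split()
    return all(t in tokens for t in terms)

def match_phrase(text: str, phrase: str) -> bool:
    """Rough analogue of $match_phrase: terms appear adjacently, in order."""
    tokens = text.lower().split()
    words = phrase.lower().split()
    return any(tokens[i:i + len(words)] == words
               for i in range(len(tokens) - len(words) + 1))
```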
Schema migration is not yet supported. Once an index with a document schema is created, you cannot add, remove, or modify fields. Plan your schema carefully — if you need to change a schema, delete the index and create a new one.

Create an index for dense vectors

You can create an index that stores dense vectors with integrated vector embedding, or one that stores vectors generated with an external embedding model.

Integrated embedding

Indexes with integrated embedding do not support updating or importing with text.
If you want to upsert and search with source text and have Pinecone convert it to dense vectors automatically, create an index with integrated embedding as follows:
  • Provide a name for the index.
  • Set cloud and region to the cloud and region where the index should be deployed.
  • Set embed.model to one of Pinecone’s hosted embedding models.
  • Set embed.field_map to the name of the field in your source document that contains the data for embedding.
Other parameters are optional. See the API reference for details.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

index_name = "integrated-dense-py"

if not pc.has_index(index_name):
    pc.create_index_for_model(
        name=index_name,
        cloud="aws",
        region="us-east-1",
        embed={
            "model":"llama-text-embed-v2",
            "field_map":{"text": "chunk_text"}
        }
    )

Bring your own vectors

If you use an external embedding model to convert your data to dense vectors, create an index as follows:
  • Provide a name for the index.
  • Set the vector_type to dense.
  • Specify the dimension and similarity metric of the vectors you’ll store in the index. This should match the dimension and metric supported by your embedding model.
  • Set spec.cloud and spec.region to the cloud and region where the index should be deployed. For Python, you also need to import the ServerlessSpec class.
Other parameters are optional. See the API reference for details.
from pinecone.grpc import PineconeGRPC as Pinecone
from pinecone import ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

index_name = "standard-dense-py"

if not pc.has_index(index_name):
    pc.create_index(
        name=index_name,
        vector_type="dense",
        dimension=1536,
        metric="cosine",
        spec=ServerlessSpec(
            cloud="aws",
            region="us-east-1"
        ),
        deletion_protection="disabled",
        tags={
            "environment": "development"
        }
    )
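Because dimension is fixed at index creation, a cheap client-side guard before upserting externally generated vectors can catch mismatches early. This is a hypothetical helper, not part of the SDK:

```python
def check_dimension(vector: list[float], expected: int = 1536) -> list[float]:
    """Raise early if a vector doesn't match the index's declared dimension."""
    if len(vector) != expected:
        raise ValueError(f"expected a {expected}-dim vector, got {len(vector)}")
    return vector
```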

Create an index for sparse vectors

You can create an index that stores sparse vectors with integrated vector embedding, or one that stores vectors generated with an external embedding model.

Integrated embedding

If you want to upsert and search with source text and have Pinecone convert it to sparse vectors automatically, create an index with integrated embedding as follows:
  • Provide a name for the index.
  • Set cloud and region to the cloud and region where the index should be deployed.
  • Set embed.model to one of Pinecone’s hosted sparse embedding models.
  • Set embed.field_map to the name of the field in your source document that contains the text for embedding.
  • If needed, embed.read_parameters and embed.write_parameters can be used to override the default model embedding behavior.
Other parameters are optional. See the API reference for details.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

index_name = "integrated-sparse-py"

if not pc.has_index(index_name):
    pc.create_index_for_model(
        name=index_name,
        cloud="aws",
        region="us-east-1",
        embed={
            "model":"pinecone-sparse-english-v0",
            "field_map":{"text": "chunk_text"}
        }
    )

Bring your own vectors

If you use an external embedding model to convert your data to sparse vectors, create an index as follows:
  • Provide a name for the index.
  • Set the vector_type to sparse.
  • Set the distance metric to dotproduct. Indexes that store sparse vectors do not support other distance metrics.
  • Set spec.cloud and spec.region to the cloud and region where the index should be deployed.
Other parameters are optional. See the API reference for details.
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

index_name = "standard-sparse-py"

if not pc.has_index(index_name):
    pc.create_index(
        name=index_name,
        vector_type="sparse",
        metric="dotproduct",
        spec=ServerlessSpec(cloud="aws", region="us-east-1")
    )

Create an index from a backup

You can restore an index from a backup, regardless of whether it stores dense or sparse vectors. For more details, see Restore an index.

Metadata indexing

This feature is in early access and available only on the 2025-10 version of the API. The CLI does not yet support this feature.
Pinecone indexes all metadata fields by default. However, large amounts of metadata can cause slower index building as well as slower query execution, particularly when data is not cached in a query executor’s memory and local SSD and must be fetched from object storage. To prevent performance issues due to excessive metadata, you can limit metadata indexing to the fields that you plan to use for query filtering.
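As an illustration of why only the fields you filter on need indexing, here is a toy evaluator for a small subset of Pinecone's metadata filter operators ($eq, $in, $gte); the real evaluation happens server-side against the metadata index:

```python
def matches(metadata: dict, flt: dict) -> bool:
    """Toy evaluator for a subset of Pinecone-style metadata filters."""
    for field, cond in flt.items():
        if not isinstance(cond, dict):
            cond = {"$eq": cond}  # bare value is shorthand for $eq
        value = metadata.get(field)
        for op, target in cond.items():
            if op == "$eq" and value != target:
                return False
            if op == "$in" and value not in target:
                return False
            if op == "$gte" and not (value is not None and value >= target):
                return False
    return True

chunk = {"document_id": "doc-1", "chunk_number": 7, "year": 2024}
```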

Set metadata indexing

You can set metadata indexing during index creation or namespace creation:
  • Index-level metadata indexing rules apply to all namespaces that don’t have explicit metadata indexing rules.
  • Namespace-level metadata indexing rules override index-level metadata indexing rules.
For example, let’s say you want to store records that represent chunks of a document, with each record containing many metadata fields. Since you plan to use only a few of the metadata fields to filter queries, you would specify the metadata fields to index as follows.
Metadata indexing cannot be changed after index or namespace creation.
PINECONE_API_KEY="YOUR_API_KEY"

curl "https://api.pinecone.io/indexes" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "Api-Key: $PINECONE_API_KEY" \
  -H "X-Pinecone-Api-Version: 2025-10" \
  -d '{
        "name": "example-index-metadata",
        "vector_type": "dense",
        "dimension": 1536,
        "metric": "cosine",
        "spec": {
          "serverless": {
            "cloud": "aws",
            "region": "us-east-1",
            "schema": {
              "fields": {
                "document_id": {
                  "filterable": true
                },
                "document_title": {
                  "filterable": true
                },
                "chunk_number": {
                  "filterable": true
                },
                "document_url": {
                  "filterable": true
                },
                "created_at": {
                  "filterable": true
                }
              }
            }
          }
        },
        "deletion_protection": "disabled"
      }'

Check metadata indexing

To check which metadata fields are indexed, you can describe the index or namespace:
PINECONE_API_KEY="YOUR_API_KEY"

curl -X GET "https://api.pinecone.io/indexes/example-index-metadata" \
     -H "Api-Key: $PINECONE_API_KEY" \
     -H "X-Pinecone-Api-Version: 2025-10"
The response includes the schema object with the names of the metadata fields explicitly indexed during index or namespace creation.
The response does not include unindexed metadata fields or metadata fields indexed by default.
{
  "id": "751ab850-6e61-4f92-bd23-fa129803d207",
  "vector_type": "dense",
  "name": "example-index",
  "metric": "cosine",
  "dimension": 1536,
  "status": {
    "ready": false,
    "state": "Initializing"
  },
  "host": "example-index-fa77d8e.svc.aped-4627-b74a.pinecone.io",
  "spec": {
    "serverless": {
      "region": "us-east-1",
      "cloud": "aws",
      "read_capacity": {
        "mode": "OnDemand",
        "status": "Ready"
      },
      "schema": {
        "fields": {
          "document_id": {
            "filterable": true
          },
          "document_title": {
            "filterable": true
          },
          "created_at": {
            "filterable": true
          },
          "chunk_number": {
            "filterable": true
          },
          "document_url": {
            "filterable": true
          }
        }
      }
    }
  },
  "deletion_protection": "disabled",
  "tags": null
}

Index options

Cloud regions

When creating an index, you must choose the cloud and region where you want the index to be hosted. The following table lists the available public clouds and regions and the plans that support them:
Cloud | Region | Supported plans | Availability phase
aws | us-east-1 (Virginia) | Starter, Builder, Standard, Enterprise | General availability
aws | us-west-2 (Oregon) | Standard, Enterprise | General availability
aws | eu-west-1 (Ireland) | Standard, Enterprise | General availability
aws | eu-central-1 (Frankfurt) | Standard, Enterprise | General availability
aws | ap-southeast-1 (Singapore) | Standard, Enterprise | General availability
gcp | us-central1 (Iowa) | Standard, Enterprise | General availability
gcp | europe-west4 (Netherlands) | Standard, Enterprise | General availability
azure | eastus2 (Virginia) | Standard, Enterprise | General availability
The cloud and region cannot be changed after a serverless index is created.
On the Starter and Builder plans, you can create serverless indexes in the us-east-1 region of AWS only. To create indexes in other regions, upgrade to the Standard or Enterprise plan.

Similarity metrics

When creating an index that stores dense vectors, you can choose from the following similarity metrics. For the most accurate results, choose the similarity metric used to train the embedding model for your vectors. For more information, see Vector Similarity Explained.
Indexes that store sparse vectors must use the dotproduct metric.
  • euclidean: Querying indexes with this metric returns a similarity score equal to the squared Euclidean distance between the result and query vectors. This metric calculates the square of the distance between two data points in a plane and is one of the most commonly used distance metrics. When you use metric='euclidean', the most similar results are those with the lowest similarity score. For an example, see our IT threat detection example.
  • cosine: This metric is often used to measure similarity between documents. Its advantage is that scores are normalized to the [-1, 1] range. For an example, see our generative question answering example.
  • dotproduct: This metric multiplies two vectors; the more positive the result, the closer the two vectors are in direction. For an example, see our semantic search example.
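The three metrics are straightforward to compute directly; a small sketch that mirrors the descriptions above:

```python
import math

def dot(a, b):
    """Dot product: higher means more aligned directions."""
    return sum(x * y for x, y in zip(a, b))

def squared_euclidean(a, b):
    """What metric='euclidean' returns: lower scores mean more similar."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cosine(a, b):
    """Normalized to [-1, 1]: higher scores mean more similar."""
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

a, b = [1.0, 0.0], [0.0, 1.0]  # orthogonal unit vectors
```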

Embedding models

Dense vectors and sparse vectors are the basic units of data in Pinecone and what Pinecone was specially designed to store and work with. Dense vectors represent the semantics of data such as text, images, and audio recordings, while sparse vectors represent documents or queries in a way that captures keyword information. To transform data into vector format, you use an embedding model.

Pinecone hosts several embedding models, so it's easy to manage your vector storage and search process on a single platform. You can use a hosted model to embed your data as an integrated part of upserting and querying, or as a standalone operation. The following embedding models are hosted by Pinecone.
To understand how cost is calculated for embedding, see Embedding cost. To get model details via the API, see List models and Describe a model.

multilingual-e5-large

multilingual-e5-large is an efficient dense embedding model trained on a mixture of multilingual datasets. It works well on messy data and short queries expected to return medium-length passages of text (1-2 paragraphs).

Details
  • Vector type: Dense
  • Modality: Text
  • Dimension: 1024
  • Recommended similarity metric: Cosine
  • Max sequence length: 507 tokens
  • Max batch size: 96 sequences
For rate limits, see Embedding tokens per minute and Embedding tokens per month.

Parameters

The multilingual-e5-large model supports the following parameters:
Parameter | Type | Required/Optional | Description | Default
input_type | string | Required | The type of input data. Accepted values: query or passage. |
truncate | string | Optional | How to handle inputs longer than those supported by the model. Accepted values: END or NONE. END truncates the input sequence at the input token limit; NONE returns an error when the input exceeds the input token limit. | END
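The two truncate modes behave roughly like this sketch (the limit of 507 is taken from the model's max sequence length; tokenization itself is elided):

```python
def apply_truncate(tokens: list[str], limit: int = 507, mode: str = "END") -> list[str]:
    """END clips the input at the token limit; NONE raises an error instead."""
    if len(tokens) <= limit:
        return tokens
    if mode == "END":
        return tokens[:limit]
    raise ValueError(f"input of {len(tokens)} tokens exceeds the {limit}-token limit")
```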

llama-text-embed-v2

llama-text-embed-v2 is a high-performance dense embedding model optimized for text retrieval and ranking tasks. It is trained on a diverse range of text corpora and provides strong performance on longer passages and structured documents.

Details
  • Vector type: Dense
  • Modality: Text
  • Dimension: 1024 (default), 2048, 768, 512, 384
  • Recommended similarity metric: Cosine
  • Max sequence length: 2048 tokens
  • Max batch size: 96 sequences
For rate limits, see Embedding tokens per minute and Embedding tokens per month.

Parameters

The llama-text-embed-v2 model supports the following parameters:
Parameter | Type | Required/Optional | Description | Default
input_type | string | Required | The type of input data. Accepted values: query or passage. |
truncate | string | Optional | How to handle inputs longer than those supported by the model. Accepted values: END or NONE. END truncates the input sequence at the input token limit; NONE returns an error when the input exceeds the input token limit. | END
dimension | integer | Optional | Dimension of the vector to return. | 1024

pinecone-sparse-english-v0

pinecone-sparse-english-v0 is a sparse embedding model for converting text to sparse vectors for sparse-vector lexical search or hybrid search. Built on the innovations of the DeepImpact architecture, the model directly estimates the lexical importance of tokens by leveraging their context, unlike traditional retrieval models like BM25, which rely solely on term frequency.

Details
  • Vector type: Sparse
  • Modality: Text
  • Recommended similarity metric: Dotproduct
  • Max sequence length: 512 or 2048
  • Max batch size: 96 sequences
For rate limits, see Embedding tokens per minute and Embedding tokens per month.

Parameters

The pinecone-sparse-english-v0 model supports the following parameters:
Parameter | Type | Required/Optional | Description | Default
input_type | string | Required | The type of input data. Accepted values: query or passage. |
max_tokens_per_sequence | integer | Optional | Maximum number of tokens to embed. Accepted values: 512 or 2048. | 512
truncate | string | Optional | How to handle inputs longer than those supported by the model. Accepted values: END or NONE. END truncates the input sequence at the max_tokens_per_sequence limit; NONE returns an error when the input exceeds the max_tokens_per_sequence limit. | END
return_tokens | boolean | Optional | Whether to return the string tokens. | false