This page shows you how to create a dense or sparse serverless index.
  • Dense indexes store dense vectors, which are numerical representations of the meaning and relationships of text, images, or other types of data. You use dense indexes for semantic search or in combination with sparse indexes for hybrid search.
  • Sparse indexes store sparse vectors, which are numerical representations of the words or phrases in a document. You use sparse indexes for lexical search, or in combination with dense indexes for hybrid search.
You can also create an index using the Pinecone console.

Create a dense index

You can create a dense index with integrated vector embedding or a dense index for storing vectors generated with an external embedding model.

Integrated embedding

Indexes with integrated embedding do not support updating or importing with text.
If you want to upsert and search with source text and have Pinecone convert it to dense vectors automatically, create a dense index with integrated embedding as follows:
  • Provide a name for the index.
  • Set cloud and region to the cloud and region where the index should be deployed.
  • Set embed.model to one of Pinecone’s hosted embedding models.
  • Set embed.field_map to the name of the field in your source document that contains the data for embedding.
Other parameters are optional. See the API reference for details.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

index_name = "integrated-dense-py"

if not pc.has_index(index_name):
    pc.create_index_for_model(
        name=index_name,
        cloud="aws",
        region="us-east-1",
        embed={
            "model":"llama-text-embed-v2",
            "field_map":{"text": "chunk_text"}
        }
    )
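
Once the index is ready, you can upsert and search with source text directly. The following is a minimal sketch; the namespace name and record contents are illustrative, and the record field (chunk_text) must match the index's field_map:
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Target the integrated index created above
index = pc.Index("integrated-dense-py")

# Upsert source text; Pinecone embeds the "chunk_text" field automatically
# per the index's field_map. Namespace and records here are examples.
index.upsert_records(
    "example-namespace",
    [
        {"_id": "rec1", "chunk_text": "Apples are a great source of fiber."},
        {"_id": "rec2", "chunk_text": "Bananas are rich in potassium."},
    ],
)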

Bring your own vectors

If you use an external embedding model to convert your data to dense vectors, create a dense index as follows:
  • Provide a name for the index.
  • Set the vector_type to dense.
  • Specify the dimension and similarity metric of the vectors you’ll store in the index. These should match the dimension and metric supported by your embedding model.
  • Set spec.cloud and spec.region to the cloud and region where the index should be deployed. For Python, you also need to import the ServerlessSpec class.
Other parameters are optional. See the API reference for details.
from pinecone.grpc import PineconeGRPC as Pinecone
from pinecone import ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

index_name = "standard-dense-py"

if not pc.has_index(index_name):
    pc.create_index(
        name=index_name,
        vector_type="dense",
        dimension=1536,
        metric="cosine",
        spec=ServerlessSpec(
            cloud="aws",
            region="us-east-1"
        ),
        deletion_protection="disabled",
        tags={
            "environment": "development"
        }
    )
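
Once the index is ready, you upsert the dense vectors produced by your external embedding model. A minimal sketch; the IDs, values, and metadata are placeholders, and each vector's length must equal the index dimension (1536 here):
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

index = pc.Index("standard-dense-py")

# Placeholder vectors; in practice these come from your embedding model
# and must match the index dimension (1536).
index.upsert(
    vectors=[
        {"id": "vec1", "values": [0.1] * 1536, "metadata": {"genre": "drama"}},
        {"id": "vec2", "values": [0.2] * 1536, "metadata": {"genre": "action"}},
    ],
    namespace="example-namespace",
)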

Create a sparse index

You can create a sparse index with integrated vector embedding or a sparse index for storing vectors generated with an external embedding model.

Integrated embedding

If you want to upsert and search with source text and have Pinecone convert it to sparse vectors automatically, create a sparse index with integrated embedding as follows:
  • Provide a name for the index.
  • Set cloud and region to the cloud and region where the index should be deployed.
  • Set embed.model to one of Pinecone’s hosted sparse embedding models.
  • Set embed.field_map to the name of the field in your source document that contains the text for embedding.
  • If needed, set embed.read_parameters and embed.write_parameters to override the model’s default embedding behavior (see the second example below).
Other parameters are optional. See the API reference for details.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

index_name = "integrated-sparse-py"

if not pc.has_index(index_name):
    pc.create_index_for_model(
        name=index_name,
        cloud="aws",
        region="us-east-1",
        embed={
            "model":"pinecone-sparse-english-v0",
            "field_map":{"text": "chunk_text"}
        }
    )
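
As an example of the read and write overrides mentioned above, here is a sketch that passes embed.read_parameters and embed.write_parameters at creation time; the parameter values are illustrative (see the pinecone-sparse-english-v0 parameters later on this page):
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Same call as above, with illustrative overrides that embed longer
# sequences on both write (upsert) and read (query).
pc.create_index_for_model(
    name="integrated-sparse-custom-py",
    cloud="aws",
    region="us-east-1",
    embed={
        "model": "pinecone-sparse-english-v0",
        "field_map": {"text": "chunk_text"},
        "write_parameters": {"max_tokens_per_sequence": 2048},
        "read_parameters": {"max_tokens_per_sequence": 2048},
    },
)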

Bring your own vectors

If you use an external embedding model to convert your data to sparse vectors, create a sparse index as follows:
  • Provide a name for the index.
  • Set the vector_type to sparse.
  • Set the distance metric to dotproduct. Sparse indexes do not support other distance metrics.
  • Set spec.cloud and spec.region to the cloud and region where the index should be deployed.
Other parameters are optional. See the API reference for details.
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

index_name = "standard-sparse-py"

if not pc.has_index(index_name):
    pc.create_index(
        name=index_name,
        vector_type="sparse",
        metric="dotproduct",
        spec=ServerlessSpec(cloud="aws", region="us-east-1")
    )
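
Once the index is ready, you upsert sparse vectors as parallel lists of token indices and weights. A minimal sketch with placeholder data:
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

index = pc.Index("standard-sparse-py")

# Placeholder sparse vector; the indices and values come from your
# external sparse embedding model.
index.upsert(
    vectors=[
        {
            "id": "vec1",
            "sparse_values": {"indices": [10, 45, 16], "values": [0.5, 0.5, 0.2]},
        }
    ],
    namespace="example-namespace",
)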

Create an index from a backup

You can create a dense or sparse index from a backup. For more details, see Restore an index.
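
For reference, a minimal Python sketch, assuming the SDK exposes a create_index_from_backup method and using a placeholder backup ID:
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Assumption: the SDK's create_index_from_backup method; the backup ID
# below is a placeholder for one returned when you created the backup.
pc.create_index_from_backup(
    name="restored-index",
    backup_id="YOUR_BACKUP_ID",
)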

Metadata indexing

This feature is in early access and available only on the 2025-10 version of the API.
Pinecone indexes all metadata fields by default. However, large amounts of metadata can cause slower index building as well as slower query execution, particularly when data is not cached in a query executor’s memory and local SSD and must be fetched from object storage. To prevent performance issues due to excessive metadata, you can limit metadata indexing to the fields that you plan to use for query filtering.

Set metadata indexing

You can set metadata indexing during index creation or namespace creation:
  • Index-level metadata indexing rules apply to all namespaces that don’t have explicit metadata indexing rules.
  • Namespace-level metadata indexing rules override index-level metadata indexing rules.
For example, let’s say you want to store records that represent chunks of a document, with each record containing many metadata fields. Since you plan to use only a few of the metadata fields to filter queries, you would specify the metadata fields to index as follows.
Metadata indexing cannot be changed after index or namespace creation.
PINECONE_API_KEY="YOUR_API_KEY"

curl "https://api.pinecone.io/indexes" \
  -H "Accept: application/json" \
  -H "Content-Type: application/json" \
  -H "Api-Key: $PINECONE_API_KEY" \
  -H "X-Pinecone-API-Version: 2025-10" \
  -d '{
        "name": "example-index-metadata",
        "vector_type": "dense",
        "dimension": 1536,
        "metric": "cosine",
        "spec": {
            "serverless": {
                "cloud": "aws",
                "region": "us-east-1",
                "schema": {
                    "fields": { 
                        "document_id": {"filterable": true},
                        "document_title": {"filterable": true},
                        "chunk_number": {"filterable": true},
                        "document_url": {"filterable": true},
                        "created_at": {"filterable": true}
                    }
                }
            }
        },
        "deletion_protection": "disabled"
      }'

Check metadata indexing

To check which metadata fields are indexed, you can describe the index or namespace:
PINECONE_API_KEY="YOUR_API_KEY"

curl "https://api.pinecone.io/indexes/example-index-metadata" \
    -H "Api-Key: $PINECONE_API_KEY" \
    -H "X-Pinecone-API-Version: 2025-10"
The response includes the schema object with the names of the metadata fields explicitly indexed during index or namespace creation.
The response does not include unindexed metadata fields or metadata fields indexed by default.
{
    "id": "294a122f-44e7-4a95-8d77-2d2d04200aa4",
    "vector_type": "dense",
    "name": "example-index",
    "metric": "cosine",
    "dimension": 1536,
    "status": {
        "ready": false,
        "state": "Initializing"
    },
    "host": "example-index-metadata-govk0nt.svc.aped-4627-b74a.pinecone.io",
    "spec": {
        "serverless": {
            "region": "us-east-1",
            "cloud": "aws",
            "read_capacity": {
                "mode": "OnDemand",
                "status": "Ready"
            },
            "schema": {
                "fields": {
                    "document_id": {
                        "filterable": true
                    },
                    "document_title": {
                        "filterable": true
                    },
                    "created_at": {
                        "filterable": true
                    },
                    "chunk_number": {
                        "filterable": true
                    },
                    "document_url": {
                        "filterable": true
                    }
                }
            }
        }
    },
    "deletion_protection": "disabled",
    "tags": null
}

Index options

Cloud regions

When creating an index, you must choose the cloud and region where you want the index to be hosted. The following table lists the available public clouds and regions and the plans that support them:
| Cloud | Region | Supported plans | Availability phase |
|-------|--------|-----------------|--------------------|
| aws | us-east-1 (Virginia) | Starter, Standard, Enterprise | General availability |
| aws | us-west-2 (Oregon) | Standard, Enterprise | General availability |
| aws | eu-west-1 (Ireland) | Standard, Enterprise | General availability |
| gcp | us-central1 (Iowa) | Standard, Enterprise | General availability |
| gcp | europe-west4 (Netherlands) | Standard, Enterprise | General availability |
| azure | eastus2 (Virginia) | Standard, Enterprise | General availability |
The cloud and region cannot be changed after a serverless index is created.
On the free Starter plan, you can create serverless indexes in the us-east-1 region of AWS only. To create indexes in other regions, upgrade your plan.

Similarity metrics

When creating a dense index, you can choose from the following similarity metrics. For the most accurate results, choose the similarity metric used to train the embedding model for your vectors. For more information, see Vector Similarity Explained.
  • cosine
  • euclidean
  • dotproduct
Sparse indexes must use the dotproduct metric.

Embedding models

Dense vectors and sparse vectors are the basic units of data in Pinecone and what Pinecone was specially designed to store and work with. Dense vectors represent the semantics of data such as text, images, and audio recordings, while sparse vectors represent documents or queries in a way that captures keyword information. To transform data into vector format, you use an embedding model.

Pinecone hosts several embedding models, so it’s easy to manage your vector storage and search process on a single platform. You can use a hosted model to embed your data as an integrated part of upserting and querying, or you can use a hosted model to embed your data as a standalone operation. The following embedding models are hosted by Pinecone.
To understand how cost is calculated for embedding, see Embedding cost. To get model details via the API, see List models and Describe a model.
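
For example, a minimal sketch of embedding text as a standalone operation with a hosted model (the input text is illustrative):
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Embed text outside of upsert or query using a hosted model.
embeddings = pc.inference.embed(
    model="llama-text-embed-v2",
    inputs=["The quick brown fox jumps over the lazy dog."],
    parameters={"input_type": "passage", "truncate": "END"},
)

print(embeddings[0])  # a dense embedding with a 1024-dimensional values list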

multilingual-e5-large

multilingual-e5-large is an efficient dense embedding model trained on a mixture of multilingual datasets. It works well on messy data and short queries expected to return medium-length passages of text (1-2 paragraphs).

Details
  • Vector type: Dense
  • Modality: Text
  • Dimension: 1024
  • Recommended similarity metric: Cosine
  • Max sequence length: 507 tokens
  • Max batch size: 96 sequences
For rate limits, see Embedding tokens per minute and Embedding tokens per month.

Parameters

The multilingual-e5-large model supports the following parameters:
| Parameter | Type | Required/Optional | Description | Default |
|-----------|------|-------------------|-------------|---------|
| input_type | string | Required | The type of input data. Accepted values: query or passage. | |
| truncate | string | Optional | How to handle inputs longer than those supported by the model. Accepted values: END or NONE. END truncates the input sequence at the input token limit; NONE returns an error when the input exceeds the input token limit. | END |

llama-text-embed-v2

llama-text-embed-v2 is a high-performance dense embedding model optimized for text retrieval and ranking tasks. It is trained on a diverse range of text corpora and provides strong performance on longer passages and structured documents.

Details
  • Vector type: Dense
  • Modality: Text
  • Dimension: 1024 (default), 2048, 768, 512, 384
  • Recommended similarity metric: Cosine
  • Max sequence length: 2048 tokens
  • Max batch size: 96 sequences
For rate limits, see Embedding tokens per minute and Embedding tokens per month.

Parameters

The llama-text-embed-v2 model supports the following parameters:
| Parameter | Type | Required/Optional | Description | Default |
|-----------|------|-------------------|-------------|---------|
| input_type | string | Required | The type of input data. Accepted values: query or passage. | |
| truncate | string | Optional | How to handle inputs longer than those supported by the model. Accepted values: END or NONE. END truncates the input sequence at the input token limit; NONE returns an error when the input exceeds the input token limit. | END |
| dimension | integer | Optional | Dimension of the vector to return. | 1024 |
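
For example, a sketch of requesting lower-dimensional vectors via the dimension parameter (the input text is illustrative):
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Request 512-dimensional vectors instead of the 1024 default.
embeddings = pc.inference.embed(
    model="llama-text-embed-v2",
    inputs=["Example passage to embed."],
    parameters={"input_type": "passage", "dimension": 512},
)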

pinecone-sparse-english-v0

pinecone-sparse-english-v0 is a sparse embedding model for converting text to sparse vectors for keyword or hybrid semantic/keyword search. Built on the innovations of the DeepImpact architecture, the model directly estimates the lexical importance of tokens by leveraging their context, unlike traditional retrieval models like BM25, which rely solely on term frequency.

Details
  • Vector type: Sparse
  • Modality: Text
  • Recommended similarity metric: Dotproduct
  • Max sequence length: 512 or 2048
  • Max batch size: 96 sequences
For rate limits, see Embedding tokens per minute and Embedding tokens per month.

Parameters

The pinecone-sparse-english-v0 model supports the following parameters:
| Parameter | Type | Required/Optional | Description | Default |
|-----------|------|-------------------|-------------|---------|
| input_type | string | Required | The type of input data. Accepted values: query or passage. | |
| max_tokens_per_sequence | integer | Optional | Maximum number of tokens to embed. Accepted values: 512 or 2048. | 512 |
| truncate | string | Optional | How to handle inputs longer than those supported by the model. Accepted values: END or NONE. END truncates the input sequence at the max_tokens_per_sequence limit; NONE returns an error when the input exceeds the max_tokens_per_sequence limit. | END |
| return_tokens | boolean | Optional | Whether to return the string tokens. | false |
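
For example, a sketch of embedding a query with this model and returning the string tokens (the input is illustrative):
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Sparse embedding for a query; return_tokens also returns the string
# tokens alongside the token indices and weights.
embeddings = pc.inference.embed(
    model="pinecone-sparse-english-v0",
    inputs=["What is the capital of France?"],
    parameters={"input_type": "query", "return_tokens": True},
)

print(embeddings[0])  # sparse indices, values, and tokens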