Create a serverless index
This page shows you how to create a dense or sparse serverless index.
- Dense indexes store dense vectors, which are numerical representations of the meaning and relationships of text, images, or other types of data. You use dense indexes for semantic search or in combination with sparse indexes for hybrid search.
- Sparse indexes store sparse vectors, which are numerical representations of the words or phrases in a document. You use sparse indexes for lexical search, or in combination with dense indexes for hybrid search.
You can create an index using the Pinecone console.
Create a dense index
You can create a dense index with integrated vector embedding or a dense index for storing vectors generated with an external embedding model.
Integrated embedding
If you want to upsert and search with source text and have Pinecone convert it to dense vectors automatically, create a dense index with integrated embedding as follows:
- Provide a `name` for the index.
- Set `cloud` and `region` to the cloud and region where the index should be deployed.
- Set `embed.model` to one of Pinecone’s hosted embedding models.
- Set `embed.field_map` to the name of the field in your source document that contains the data for embedding.
Other parameters are optional. See the API reference for details.
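As a minimal sketch using the Python SDK (the index name `dense-integrated-index` and the source field `chunk_text` are placeholder assumptions):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Create a dense index with integrated embedding: Pinecone converts the text
# in the mapped field to dense vectors automatically at upsert and query time.
pc.create_index_for_model(
    name="dense-integrated-index",  # placeholder name
    cloud="aws",
    region="us-east-1",
    embed={
        "model": "llama-text-embed-v2",       # one of Pinecone's hosted dense models
        "field_map": {"text": "chunk_text"},  # "chunk_text" is an assumed field name
    },
)
```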
Bring your own vectors
If you use an external embedding model to convert your data to dense vectors, create a dense index as follows:
- Provide a `name` for the index.
- Set the `vector_type` to `dense`.
- Specify the `dimension` and similarity `metric` of the vectors you’ll store in the index. These should match the dimension and metric supported by your embedding model.
- Set `spec.cloud` and `spec.region` to the cloud and region where the index should be deployed. For Python, you also need to import the `ServerlessSpec` class.
Other parameters are optional. See the API reference for details.
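A minimal sketch with the Python SDK, assuming a 1024-dimensional external model trained with cosine similarity:

```python
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

# Create a dense index for vectors produced by an external embedding model.
pc.create_index(
    name="dense-byov-index",  # placeholder name
    vector_type="dense",
    dimension=1024,           # must match your embedding model's output dimension
    metric="cosine",          # must match the metric your model was trained with
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)
```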
Create a sparse index
You can create a sparse index with integrated vector embedding or a sparse index for storing vectors generated with an external embedding model.
Integrated embedding
If you want to upsert and search with source text and have Pinecone convert it to sparse vectors automatically, create a sparse index with integrated embedding as follows:
- Provide a `name` for the index.
- Set `cloud` and `region` to the cloud and region where the index should be deployed.
- Set `embed.model` to one of Pinecone’s hosted sparse embedding models.
- Set `embed.field_map` to the name of the field in your source document that contains the text for embedding.
Other parameters are optional. See the API reference for details.
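A minimal sketch with the Python SDK (the index name and source field are placeholder assumptions):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Create a sparse index with integrated embedding: Pinecone converts the text
# in the mapped field to sparse vectors automatically.
pc.create_index_for_model(
    name="sparse-integrated-index",  # placeholder name
    cloud="aws",
    region="us-east-1",
    embed={
        "model": "pinecone-sparse-english-v0",  # Pinecone's hosted sparse model
        "field_map": {"text": "chunk_text"},    # "chunk_text" is an assumed field name
    },
)
```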
Bring your own vectors
If you use an external embedding model to convert your data to sparse vectors, create a sparse index as follows:
- Provide a `name` for the index.
- Set the `vector_type` to `sparse`.
- Set the distance `metric` to `dotproduct`. Sparse indexes do not support other distance metrics.
- Set `spec.cloud` and `spec.region` to the cloud and region where the index should be deployed.
Other parameters are optional. See the API reference for details.
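A minimal sketch with the Python SDK; note that sparse indexes take no `dimension` and must use `dotproduct`:

```python
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

# Create a sparse index for vectors produced by an external sparse model.
# No dimension is specified for sparse indexes.
pc.create_index(
    name="sparse-byov-index",  # placeholder name
    vector_type="sparse",
    metric="dotproduct",       # the only metric supported by sparse indexes
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)
```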
Create an index from a backup
You can create a dense or sparse index from a backup. For more details, see Restore an index.
Index options
Cloud regions
When creating an index, you must choose the cloud and region where you want the index to be hosted. The following table lists the available public clouds and regions and the plans that support them:
Cloud | Region | Supported plans | Availability phase |
---|---|---|---|
aws | us-east-1 (Virginia) | Starter, Standard, Enterprise | General availability |
aws | us-west-2 (Oregon) | Standard, Enterprise | General availability |
aws | eu-west-1 (Ireland) | Standard, Enterprise | General availability |
gcp | us-central1 (Iowa) | Standard, Enterprise | General availability |
gcp | europe-west4 (Netherlands) | Standard, Enterprise | General availability |
azure | eastus2 (Virginia) | Standard, Enterprise | General availability |
The cloud and region cannot be changed after a serverless index is created.
On the free Starter plan, you can create serverless indexes in the `us-east-1` region of AWS only. To create indexes in other regions, upgrade your plan.
Similarity metrics
When creating a dense index, you can choose from the following similarity metrics. For the most accurate results, choose the similarity metric used to train the embedding model for your vectors. For more information, see Vector Similarity Explained.
Sparse indexes can only use the `dotproduct` metric.
Euclidean
Querying indexes with this metric returns a similarity score equal to the squared Euclidean distance between the result and query vectors.
This metric calculates the square of the distance between two data points in a plane. It is one of the most commonly used distance metrics. For an example, see our IT threat detection example.
When you use `metric='euclidean'`, the most similar results are those with the lowest similarity score.
Cosine
This metric is often used to find similarities between different documents. The advantage is that the scores are normalized to the [-1,1] range. For an example, see our generative question answering example.
Dotproduct
This metric multiplies two vectors and measures how similar they are. The more positive the result, the closer the two vectors are in terms of direction. For an example, see our semantic search example.
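To make the three metrics concrete, here is an illustrative NumPy sketch (not part of the Pinecone SDK) computing each score for two small vectors:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 3.0, 4.0])

# Squared Euclidean distance: lower scores mean more similar vectors.
euclidean = np.sum((a - b) ** 2)                                 # 3.0

# Cosine similarity: normalized to [-1, 1]; higher means more similar.
cosine = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))  # ~0.993

# Dot product: the more positive, the closer the directions.
dotproduct = np.dot(a, b)                                        # 20.0
```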
Embedding models
Dense vectors and sparse vectors are the basic units of data in Pinecone and what Pinecone was specially designed to store and work with. Dense vectors represent the semantics of data such as text, images, and audio recordings, while sparse vectors represent documents or queries in a way that captures keyword information.
To transform data into vector format, you use an embedding model. Pinecone hosts several embedding models, so it’s easy to manage your vector storage and search process on a single platform. You can use a hosted model to embed your data as an integrated part of upserting and querying, or you can use a hosted model to embed your data as a standalone operation.
The following embedding models are hosted by Pinecone.
To understand how cost is calculated for embedding, see Embedding cost. To get model details via the API, see List models and Describe a model.
multilingual-e5-large
`multilingual-e5-large` is an efficient dense embedding model trained on a mixture of multilingual datasets. It works well on messy data and short queries expected to return medium-length passages of text (1-2 paragraphs).
Details
- Vector type: Dense
- Modality: Text
- Dimension: 1024
- Recommended similarity metric: Cosine
- Max sequence length: 507 tokens
- Max batch size: 96 sequences
For rate limits, see Embedding tokens per minute and Embedding tokens per month.
Parameters
The `multilingual-e5-large` model supports the following parameters:
Parameter | Type | Required/Optional | Description | Default |
---|---|---|---|---|
input_type | string | Required | The type of input data. Accepted values: `query` or `passage`. | |
truncate | string | Optional | How to handle inputs longer than those supported by the model. Accepted values: `END` or `NONE`. `END` truncates the input sequence at the input token limit; `NONE` returns an error when the input exceeds the input token limit. | `END` |
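As a hedged sketch of standalone embedding with the Python SDK’s `inference.embed` operation (the input text is a placeholder):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Embed source passages as a standalone operation.
embeddings = pc.inference.embed(
    model="multilingual-e5-large",
    inputs=["Apples are a popular fruit known for their sweetness."],
    parameters={"input_type": "passage", "truncate": "END"},
)

print(len(embeddings[0].values))  # 1024-dimensional dense vector
```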
llama-text-embed-v2
`llama-text-embed-v2` is a high-performance dense embedding model optimized for text retrieval and ranking tasks. It is trained on a diverse range of text corpora and provides strong performance on longer passages and structured documents.
Details
- Vector type: Dense
- Modality: Text
- Dimension: 1024 (default), 2048, 768, 512, 384
- Recommended similarity metric: Cosine
- Max sequence length: 2048 tokens
- Max batch size: 96 sequences
For rate limits, see Embedding tokens per minute and Embedding tokens per month.
Parameters
The `llama-text-embed-v2` model supports the following parameters:
Parameter | Type | Required/Optional | Description | Default |
---|---|---|---|---|
input_type | string | Required | The type of input data. Accepted values: `query` or `passage`. | |
truncate | string | Optional | How to handle inputs longer than those supported by the model. Accepted values: `END` or `NONE`. `END` truncates the input sequence at the input token limit; `NONE` returns an error when the input exceeds the input token limit. | `END` |
dimension | integer | Optional | Dimension of the vector to return. | 1024 |
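Because this model accepts a `dimension` parameter, a sketch requesting lower-dimensional query vectors might look like this (same assumptions as the previous example):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Request 512-dimensional vectors instead of the 1024-dimensional default.
embeddings = pc.inference.embed(
    model="llama-text-embed-v2",
    inputs=["What is the capital of France?"],
    parameters={"input_type": "query", "dimension": 512},
)

print(len(embeddings[0].values))  # 512
```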
pinecone-sparse-english-v0
`pinecone-sparse-english-v0` is a sparse embedding model for converting text to sparse vectors for keyword or hybrid semantic/keyword search. Built on the innovations of the DeepImpact architecture, the model directly estimates the lexical importance of tokens by leveraging their context, unlike traditional retrieval models like BM25, which rely solely on term frequency.
Details
- Vector type: Sparse
- Modality: Text
- Recommended similarity metric: Dotproduct
- Max sequence length: 512 tokens
- Max batch size: 96 sequences
For rate limits, see Embedding tokens per minute and Embedding tokens per month.
Parameters
The `pinecone-sparse-english-v0` model supports the following parameters:
Parameter | Type | Required/Optional | Description | Default |
---|---|---|---|---|
input_type | string | Required | The type of input data. Accepted values: `query` or `passage`. | |
truncate | string | Optional | How to handle inputs longer than those supported by the model. Accepted values: `END` or `NONE`. `END` truncates the input sequence at the input token limit; `NONE` returns an error when the input exceeds the input token limit. | `END` |
return_tokens | boolean | Optional | Whether to return the string tokens. | `False` |
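A hedged sketch of sparse embedding with the Python SDK; the `sparse_indices` and `sparse_values` accessors are assumptions based on the shape of sparse embedding responses:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Embed a query as a sparse vector; return_tokens=True also returns
# the string tokens alongside the indices and weights.
embeddings = pc.inference.embed(
    model="pinecone-sparse-english-v0",
    inputs=["What is the latest news on the stock market?"],
    parameters={"input_type": "query", "return_tokens": True},
)

print(embeddings[0].sparse_indices)  # token IDs with nonzero weights (assumed field)
print(embeddings[0].sparse_values)   # the corresponding weights (assumed field)
```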