Upsert data
This page shows you how to upsert records into a namespace in an index. Namespaces let you partition records within an index and are essential for implementing multitenancy when you need to isolate the data of each customer/user.
If a record ID already exists, upserting overwrites the entire record. To update only part of a record, use the `update` operation instead.
To control costs when ingesting very large datasets (10,000,000+ records), use `import` instead of `upsert`.
Pinecone is eventually consistent, so there can be a slight delay before new or changed records are visible to queries. See Understanding data freshness to learn about data freshness in Pinecone and how to check the freshness of your data.
Upsert limits
| Metric | Limit |
|---|---|
| Max upsert size | 2 MB or 1,000 records |
| Max metadata size per record | 40 KB |
| Max length for a record ID | 512 characters |
| Max dimensionality for dense vectors | 20,000 |
| Max non-zero values for sparse vectors | 1,000 |
| Max dimensionality for sparse vectors | 4.2 billion |
When upserting larger amounts of data, upsert records in large batches. A batch should be as large as possible (up to 1,000 records) without exceeding the maximum request size of 2 MB. To estimate the number of records that fit into one batch based on vector dimension and metadata size, see the following table:
| Dimension | Metadata (bytes) | Max batch size |
|---|---|---|
| 386 | 0 | 1,000 |
| 768 | 500 | 559 |
| 1,536 | 2,000 | 245 |
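The figures above can be approximated with a quick back-of-the-envelope calculation: each dense vector value is a 4-byte float, so a record is roughly `4 × dimension + metadata bytes`. The sketch below is an illustration, not an official formula; it ignores per-request overhead such as record IDs and JSON encoding, which is why the documented limits are somewhat lower than the estimates.

```python
def estimate_batch_size(dimension: int, metadata_bytes: int = 0,
                        max_request_bytes: int = 2 * 1024 * 1024,
                        max_records: int = 1000) -> int:
    """Rough upper bound on records per upsert batch.

    Assumes 4 bytes per dense vector value and ignores request
    overhead (record IDs, JSON encoding), so treat the result as
    an optimistic estimate.
    """
    per_record = 4 * dimension + metadata_bytes
    return min(max_records, max_request_bytes // per_record)

print(estimate_batch_size(386))          # capped at the 1,000-record limit
print(estimate_batch_size(1536, 2000))   # slightly above the documented 245
```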
Upsert vectors
To upsert vectors into an index, use the `upsert` operation as follows:
- Specify the `namespace` to upsert into. If the namespace doesn’t exist, it is created. To use the default namespace, set the namespace to an empty string (`""`).
- Format your input data as records, each with the following:
  - An `id` field with a unique record identifier for the index namespace.
  - A `values` field with the dense vector values.
  - Optionally, a `metadata` field with key-value pairs to store additional information or context. When you query the index, you can then filter by metadata to ensure only relevant records are scanned. For more information, see Metadata Filtering.
  - Optionally, a `sparse_values` field with sparse vector values. This allows you to perform hybrid search, or semantic and keyword search, in one query for more relevant results. For more information, see Upsert sparse-dense vectors.
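The steps above can be sketched as follows with the Python SDK; the API key, index name, namespace, and record contents are placeholders, and running it requires a live Pinecone index:

```python
def make_record(rec_id, values, metadata=None, sparse_values=None):
    """Build one record dict in the shape the upsert operation expects."""
    record = {"id": rec_id, "values": values}
    if metadata is not None:
        record["metadata"] = metadata
    if sparse_values is not None:
        record["sparse_values"] = sparse_values
    return record

def upsert_example():
    # Placeholder credentials and names; requires a live Pinecone index.
    from pinecone import Pinecone
    pc = Pinecone(api_key="YOUR_API_KEY")
    index = pc.Index("example-index")
    index.upsert(
        namespace="example-namespace",  # created if it doesn't exist
        vectors=[
            make_record("rec1", [0.1, 0.2, 0.3],
                        metadata={"category": "digestive system"}),
            make_record("rec2", [0.4, 0.5, 0.6]),
        ],
    )
```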
Upsert text
Upserting text is supported only for indexes with integrated embedding.
To upsert source text into an index, use the `upsert_records` operation. Pinecone converts the text to vectors automatically using the hosted embedding model associated with the index.
- Specify the `namespace` to upsert into. If the namespace doesn’t exist, it is created. To use the default namespace, set the namespace to an empty string (`""`).
- Format your input data as records, each with the following:
  - An `_id` field with a unique record identifier for the index namespace. `id` can be used as an alias for `_id`.
  - A field with the source text to convert to a vector. This field must match the `field_map` specified in the index.
  - Additional fields, which are stored as record metadata and can be returned in search results or used to filter search results.
For example, the following code converts the sentences in the `source_text` fields to sparse vectors and then upserts them into `example-namespace` in `example-index`. The additional `category` field is stored as metadata.
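A sketch of that example might look like the following; the index and namespace names, API key, and record contents are placeholders, and the index is assumed to be configured with integrated embedding and a `field_map` pointing at `source_text`:

```python
# Records in the shape upsert_records expects: `_id`, the mapped text
# field, and any extra fields, which become metadata.
RECORDS = [
    {
        "_id": "rec1",
        "source_text": "Apples are a great source of dietary fiber.",
        "category": "digestive system",   # stored as metadata
    },
    {
        "_id": "rec2",
        "source_text": "Apples originated in Central Asia.",
        "category": "cultivation",
    },
]

def upsert_text_example():
    # Requires a live index created with integrated embedding.
    from pinecone import Pinecone
    pc = Pinecone(api_key="YOUR_API_KEY")
    index = pc.Index("example-index")
    # Pinecone embeds each source_text value with the index's hosted model.
    index.upsert_records("example-namespace", RECORDS)
```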
Upsert in batches
When upserting larger amounts of data, upsert records in batches as large as possible (up to 1,000 records) without exceeding the maximum request size of 2 MB. To estimate the number of records that fit into one batch, see the Upsert limits section.
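A simple batching loop can be sketched with a generic chunking helper; the helper is illustrative, and the index and namespace names are placeholders:

```python
import itertools

def chunks(iterable, batch_size=200):
    """Yield successive fixed-size batches from any iterable."""
    it = iter(iterable)
    batch = list(itertools.islice(it, batch_size))
    while batch:
        yield batch
        batch = list(itertools.islice(it, batch_size))

def upsert_in_batches(vectors, batch_size=200):
    # Placeholder setup; requires a live Pinecone index.
    from pinecone import Pinecone
    pc = Pinecone(api_key="YOUR_API_KEY")
    index = pc.Index("example-index")
    for batch in chunks(vectors, batch_size):
        index.upsert(vectors=batch, namespace="example-namespace")
```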
To control costs when ingesting very large datasets (10,000,000+ records), use `import` instead of `upsert`.
Upsert in parallel
Python SDK v6.0.0 and later provide `async` methods for use with asyncio. Asyncio support makes it possible to use Pinecone with modern async web frameworks such as FastAPI, Quart, and Sanic. For more details, see Asyncio support.
Send multiple upserts in parallel to increase throughput. By default, vector operations block until the response is received, but they can be issued asynchronously as follows:
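With the standard Python SDK, a common pattern is to create the index client with a thread pool (for example, `pc.Index("example-index", pool_threads=30)`) and pass `async_req=True` to each upsert, which returns a result object you wait on after issuing all requests. The sketch below takes the index as a parameter; names and pool size are illustrative:

```python
def parallel_upsert(index, batches, namespace="example-namespace"):
    """Issue several upserts concurrently, then wait for all of them.

    `index` is assumed to be created with a thread pool, e.g.:
        pc = Pinecone(api_key="YOUR_API_KEY")
        index = pc.Index("example-index", pool_threads=30)
    """
    # async_req=True returns immediately with a future-like result object.
    async_results = [
        index.upsert(vectors=batch, namespace=namespace, async_req=True)
        for batch in batches
    ]
    # Block until every request has completed.
    return [result.get() for result in async_results]
```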
Python SDK with gRPC
Using the Python SDK with gRPC extras can provide higher upsert speeds. Through multiplexing, gRPC handles large numbers of requests in parallel without head-of-line (HoL) blocking, unlike REST. Moreover, you can pass various retry strategies to the gRPC SDK, including exponential backoff.
To install the gRPC version of the SDK:
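The gRPC extras can typically be installed with pip; the package name below follows the current Pinecone Python SDK (older releases used `pinecone-client[grpc]`):

```shell
pip install "pinecone[grpc]"
```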
To use the gRPC SDK, import the `pinecone.grpc` subpackage and target an index as usual:
To launch multiple read and write requests in parallel, pass `async_req` to the `upsert` operation:
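A sketch of the pattern with the gRPC client follows; names are placeholders, and the index is assumed to come from `PineconeGRPC(api_key=...).Index("example-index")`. With the gRPC SDK, `async_req=True` returns a future whose `result()` blocks until the request completes:

```python
def grpc_parallel_upsert(index, batches, namespace="example-namespace"):
    """Launch several gRPC upserts in parallel, then wait for all of them.

    `index` is assumed to be created via the gRPC subpackage, e.g.:
        from pinecone.grpc import PineconeGRPC
        pc = PineconeGRPC(api_key="YOUR_API_KEY")
        index = pc.Index("example-index")
    """
    # Each call returns immediately with a future.
    futures = [
        index.upsert(vectors=batch, namespace=namespace, async_req=True)
        for batch in batches
    ]
    # Wait for every request to finish before returning.
    return [f.result() for f in futures]
```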
You may get write-throttled sooner when upserting with the gRPC SDK. If this happens often, we recommend using a backoff algorithm (e.g., exponential backoff) while upserting.
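One illustrative shape for such a backoff is the generic retry wrapper below; this is a sketch, not a Pinecone API, and the attempt counts and delays are arbitrary examples:

```python
import random
import time

def with_backoff(operation, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry a zero-argument callable with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the last error
            # Delay doubles each attempt, capped, with jitter to avoid
            # synchronized retries from many workers.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay * random.uniform(0.5, 1.0))
```

For example, an upsert call could be wrapped as `with_backoff(lambda: index.upsert(vectors=batch, namespace="example-namespace"))`.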
The syntax for upsert, query, fetch, and delete with the gRPC SDK is the same as with the standard SDK.
Upsert a dataset as a dataframe
To quickly ingest data when using the Python SDK, use the `upsert_from_dataframe` method. The method includes retry logic and a `batch_size` parameter, and performs especially well with Parquet file data sets.
The following example upserts the `quora_all-MiniLM-L6-bm25` dataset as a dataframe.
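A sketch of that flow using the `pinecone_datasets` helper package to load the dataset follows; treat the exact loading call and the `documents` attribute as assumptions about that package, and the API key and index name as placeholders:

```python
def upsert_dataset():
    # Requires a live index plus the pinecone and pinecone_datasets packages.
    from pinecone import Pinecone
    from pinecone_datasets import load_dataset

    # Load the public dataset as a dataframe-backed object.
    dataset = load_dataset("quora_all-MiniLM-L6-bm25")

    pc = Pinecone(api_key="YOUR_API_KEY")
    index = pc.Index("example-index")
    # upsert_from_dataframe batches the rows and retries failed requests.
    index.upsert_from_dataframe(dataset.documents, batch_size=100)
```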