Upsert data
This page shows you how to use the upsert operation to write records into an index namespace. If a record ID already exists, upsert
overwrites the entire record. To update only part of a record, use the update operation instead.
You can also upsert data using the Pinecone console.
Pinecone is eventually consistent, so there can be a slight delay before new or changed records are visible to queries. See Understanding data freshness to learn about data freshness in Pinecone and how to check the freshness of your data.
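For example, one rough way to check freshness from the Python SDK is to poll describe_index_stats until the reported record count matches what you expect (a minimal sketch; the index name and expected count are placeholders, and the stats themselves can lag slightly):

```python
import time

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # placeholder index name

# Poll index stats until the record count reflects the writes.
expected_count = 1000  # illustrative; the total you expect after upserting
while index.describe_index_stats().total_vector_count < expected_count:
    time.sleep(1)
```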
Upsert limits
The max upsert size is 2MB or 1000 records, whichever is reached first.
When upserting larger amounts of data, upsert records in batches. Each batch should be as large as possible (up to 1000 records) without exceeding the maximum request size of 2MB.
To understand the number of records you can fit into one batch based on the vector dimensions and metadata size, see the following table:
| Dimension | Metadata (bytes) | Max batch size |
|---|---|---|
| 386 | 0 | 1000 |
| 768 | 500 | 559 |
| 1536 | 2000 | 245 |
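The table follows from simple arithmetic: each dense float32 dimension takes 4 bytes, plus the metadata size, divided into the 2MB (here treated as 2,000,000-byte) request limit. A back-of-the-envelope sketch:

```python
def max_batch_size(dimension, metadata_bytes=0):
    """Approximate records per batch: 4 bytes per float32 dimension plus
    metadata, capped at 1000 records and a 2MB (2,000,000-byte) request."""
    record_bytes = dimension * 4 + metadata_bytes
    return min(1000, 2_000_000 // record_bytes)

print(max_batch_size(768, 500))    # 559
print(max_batch_size(1536, 2000))  # 245
```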
Upsert records into the default namespace
When upserting records without specifying a namespace, the records are added to the default namespace ("").
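For example, a minimal sketch with the Python SDK (the index name and 4-dimensional vectors are placeholders):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # placeholder; assumes a 4-dimensional index

# No namespace argument, so these records land in the default namespace ("").
index.upsert(
    vectors=[
        {"id": "vec1", "values": [0.1, 0.2, 0.3, 0.4]},
        {"id": "vec2", "values": [0.2, 0.3, 0.4, 0.5]},
    ]
)
```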
Upsert records into non-default namespaces
Namespaces let you partition vectors within a single index. They are a best practice for speeding up queries, which can be filtered by namespace, and they are essential for implementing multitenancy when you need to isolate the data of each customer/user.
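For example, passing a namespace argument writes records into that namespace, creating it on first write (a sketch with placeholder names):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # placeholder; assumes a 4-dimensional index

# The namespace is created on first write; queries can later target it directly.
index.upsert(
    vectors=[{"id": "vec1", "values": [0.1, 0.2, 0.3, 0.4]}],
    namespace="customer-123",  # placeholder per-tenant namespace
)
```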
Upsert records with metadata
You can attach metadata key-value pairs to records. This lets you then filter queries to retrieve only records that match the metadata filter. For more information, see Metadata Filtering.
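For example, a sketch attaching a metadata dictionary to a record (placeholder index and values):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # placeholder; assumes a 4-dimensional index

# Each record carries a metadata dictionary that queries can filter on later.
index.upsert(
    vectors=[
        {
            "id": "vec1",
            "values": [0.1, 0.2, 0.3, 0.4],
            "metadata": {"genre": "drama", "year": 2020},
        }
    ]
)
```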
Upsert records with sparse values
You can upsert records with sparse vector values alongside dense vector values. This allows you to perform hybrid search, combining semantic and keyword search in one query, for more relevant results.
See Upsert sparse-dense vectors.
This feature is in public preview.
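As a sketch of the request shape (placeholder index and values; note that sparse-dense records are typically used with dotproduct-metric indexes):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # placeholder index name

# sparse_values lists the positions and values of the non-zero sparse dimensions.
index.upsert(
    vectors=[
        {
            "id": "vec1",
            "values": [0.1, 0.2, 0.3, 0.4],  # dense part
            "sparse_values": {
                "indices": [10, 45, 123],
                "values": [0.5, 0.5, 0.2],
            },
        }
    ]
)
```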
Upsert records in batches
When upserting larger amounts of data, upsert records in batches. Each batch should be as large as possible (up to 1000 records) without exceeding the maximum request size of 2MB. To understand how many records fit into one batch, see the Upsert limits section.
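For example, a minimal batching sketch (the chunks helper and sample data are illustrative):

```python
import itertools

from pinecone import Pinecone

def chunks(iterable, batch_size=200):
    """Break an iterable into tuples of up to batch_size records."""
    it = iter(iterable)
    chunk = tuple(itertools.islice(it, batch_size))
    while chunk:
        yield chunk
        chunk = tuple(itertools.islice(it, batch_size))

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("example-index")  # placeholder; assumes a 4-dimensional index

# Illustrative data: 1000 (id, values) records.
data = [(f"id-{i}", [0.1, 0.2, 0.3, 0.4]) for i in range(1000)]

for batch in chunks(data, batch_size=200):
    index.upsert(vectors=list(batch))
```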
Send upserts in parallel
Send multiple upserts in parallel to help increase throughput.
Standard Pinecone SDKs
With the standard Pinecone SDKs, all vector operations block until the response is received. However, you can issue them asynchronously, as in the sketch below.
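This sketch assumes the Python SDK, where upsert accepts async_req=True and the index client accepts a pool_threads setting; the names and data are placeholders:

```python
from pinecone import Pinecone

def chunker(seq, batch_size):
    return (seq[pos:pos + batch_size] for pos in range(0, len(seq), batch_size))

pc = Pinecone(api_key="YOUR_API_KEY")

# pool_threads controls how many requests can be in flight at once.
index = pc.Index("example-index", pool_threads=30)  # placeholder index name

# Illustrative data: 1000 (id, values) records for a 4-dimensional index.
data = [(f"id-{i}", [0.1, 0.2, 0.3, 0.4]) for i in range(1000)]

# async_req=True returns a handle immediately instead of blocking.
async_results = [
    index.upsert(vectors=chunk, async_req=True)
    for chunk in chunker(data, batch_size=200)
]

# Wait for completion; .get() re-raises any request errors.
[async_result.get() for async_result in async_results]
```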
gRPC Python SDK
The gRPC version of the Python SDK can provide higher upsert speeds than the standard SDK. Through multiplexing, gRPC can handle large numbers of requests in parallel without head-of-line (HoL) blocking slowing down the rest of the system, unlike REST. Moreover, you can pass various retry strategies to the gRPC SDK, including exponential backoff.
To install the gRPC version of the SDK:

```shell
pip install "pinecone[grpc]"
```
To use the gRPC SDK, import the pinecone.grpc subpackage and target an index as usual:
```python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key='YOUR_API_KEY')  # This is the gRPC client, aliased as "Pinecone"
index = pc.Index('example-index')
```
To launch multiple read and write requests in parallel, pass async_req to the upsert operation:
```python
def chunker(seq, batch_size):
    return (seq[pos:pos + batch_size] for pos in range(0, len(seq), batch_size))

async_results = [
    index.upsert(vectors=chunk, async_req=True)
    for chunk in chunker(data, batch_size=200)
]

# Wait for and retrieve responses (in case of error)
[async_result.result() for async_result in async_results]
```
It is possible to get write-throttled sooner when upserting using the gRPC SDK. If you see this often, we recommend using a backoff algorithm (e.g., exponential backoff) while upserting.
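As an illustrative sketch of such a strategy (a hypothetical helper, not part of the SDK):

```python
import random
import time

def upsert_with_backoff(index, vectors, max_retries=5):
    """Hypothetical helper: retry an upsert with jittered exponential backoff."""
    for attempt in range(max_retries):
        try:
            return index.upsert(vectors=vectors)
        except Exception:
            if attempt == max_retries - 1:
                raise
            # Sleep 1s, 2s, 4s, ... plus jitter before retrying.
            time.sleep(2 ** attempt + random.random())
```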
The syntax for upsert, query, fetch, and delete with the gRPC SDK remains the same as with the standard SDK.
Upsert a dataset as a dataframe
To quickly ingest data when using the Python SDK, use the upsert_from_dataframe method. The method includes retry logic and a batch_size parameter, and is especially performant with Parquet file datasets.
The following example upserts the quora_all-MiniLM-L6-bm25 dataset as a dataframe.
```python
from pinecone import Pinecone, ServerlessSpec
from pinecone_datasets import load_dataset

pc = Pinecone(api_key="API_KEY")

dataset = load_dataset("quora_all-MiniLM-L6-bm25")

pc.create_index(
    name="example-index",
    dimension=384,
    metric="cosine",
    spec=ServerlessSpec(
        cloud="aws",
        region="us-east-1"
    )
)

index = pc.Index("example-index")

index.upsert_from_dataframe(dataset.drop(columns=["blob"]))
```
Troubleshoot index fullness errors
Serverless indexes automatically scale as needed.
However, pod-based indexes can run out of capacity. When that happens, upserting new records will fail with the following error:
```text
Index is full, cannot accept data.
```
While a full pod-based index can still serve queries, you need to scale your index to accommodate more records.
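For example, one way to add capacity with the Python SDK is to move the index to a larger pod size via configure_index (a sketch; the pod sizes shown are example values):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Vertical scaling: move the index to a larger pod size of the same family,
# e.g. p1.x1 -> p1.x2 (example values; pick the size your plan supports).
pc.configure_index("example-index", pod_type="p1.x2")
```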