Manage pod-based indexes
This page shows you how to manage pod-based indexes.
For guidance on serverless indexes, see Manage serverless indexes.
Describe a pod-based index
Use the describe_index
endpoint to get a complete description of a specific index:
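For example, with the Python SDK (the index name and API key below are placeholders):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# "docs-example" is a placeholder index name
print(pc.describe_index("docs-example"))
```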
Delete a pod-based index
Use the delete_index
operation to delete a pod-based index and all of its associated resources.
You are billed for a pod-based index even when it is not in use.
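A minimal sketch with the Python SDK (the index name and API key are placeholders):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Deletes the index and all of its data
pc.delete_index("docs-example")
```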
If deletion protection is enabled on an index, requests to delete it will fail and return a 403 - FORBIDDEN
status with the following error:
Before you can delete such an index, you must first disable deletion protection.
You can delete an index using the Pinecone console. For the index you want to delete, click the three dots to the right of the index name, then click Delete.
Selective metadata indexing
For pod-based indexes, Pinecone indexes all metadata fields by default. When metadata fields contain many unique values, pod-based indexes consume significantly more memory, which can lead to performance issues, pod fullness, and a reduction in the number of vectors that fit per pod.
To keep memory utilization low, avoid indexing high-cardinality metadata that is not needed for filtering your queries: specify which metadata fields to index using the metadata_config
parameter.
Since high-cardinality metadata does not cause high memory utilization in serverless indexes, selective metadata indexing is not supported for serverless indexes.
The value for the metadata_config
parameter is a JSON object containing the names of the metadata fields to index.
Example
The following example creates a pod-based index that only indexes the genre
metadata field. Queries against this index that filter for the genre
metadata field may return results; queries that filter for other metadata fields behave as though those fields do not exist.
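A minimal sketch with the Python SDK, assuming a pod-based index; the index name, dimension, and environment are placeholders:

```python
from pinecone import Pinecone, PodSpec

pc = Pinecone(api_key="YOUR_API_KEY")

pc.create_index(
    name="docs-example",              # placeholder index name
    dimension=1536,                   # placeholder dimension
    metric="cosine",
    spec=PodSpec(
        environment="us-east-1-aws",  # placeholder environment
        pod_type="p1.x1",
        pods=1,
        # Only the "genre" metadata field is indexed; other fields are
        # stored with records but cannot be used in query filters.
        metadata_config={"indexed": ["genre"]}
    )
)
```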
Prevent index deletion
This feature requires Pinecone API version 2024-07, Python SDK v5.0.0, Node.js SDK v3.0.0, Java SDK v2.0.0, or Go SDK v1.0.0 or later.
You can prevent an index and its data from accidental deletion when creating a new index or when configuring an existing index. In both cases, you set the deletion_protection
parameter to enabled.
To enable deletion protection when creating a new index:
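A minimal sketch with the Python SDK (index name, dimension, and pod configuration are placeholders):

```python
from pinecone import Pinecone, PodSpec

pc = Pinecone(api_key="YOUR_API_KEY")

pc.create_index(
    name="docs-example",            # placeholder index name
    dimension=1536,                 # placeholder dimension
    metric="cosine",
    deletion_protection="enabled",  # block delete requests for this index
    spec=PodSpec(environment="us-east-1-aws", pod_type="p1.x1", pods=1)
)
```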
To enable deletion protection when configuring an existing index:
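With the Python SDK, assuming an existing index named docs-example:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Turn on deletion protection for an existing index
pc.configure_index("docs-example", deletion_protection="enabled")
```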
When deletion protection is enabled on an index, requests to delete the index fail and return a 403 - FORBIDDEN
status with the following error:
Disable deletion protection
Before you can delete an index with deletion protection enabled, you must first disable deletion protection as follows:
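A minimal sketch with the Python SDK (the index name is a placeholder):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Turn off deletion protection so the index can be deleted
pc.configure_index("docs-example", deletion_protection="disabled")
```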
Delete an entire namespace
In pod-based indexes, reads and writes share compute resources, so deleting an entire namespace with many records can increase the latency of read operations. In such cases, consider deleting records in batches.
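To delete every record in a namespace, a minimal sketch with the Python SDK (the index and namespace names are placeholders):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("docs-example")   # placeholder index name

# Deletes all records in the given namespace
index.delete(delete_all=True, namespace="example-namespace")
```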
Delete records in batches
In pod-based indexes, reads and writes share compute resources, so deleting an entire namespace or a large number of records can increase the latency of read operations. To avoid this, delete records in batches of up to 1000, with a brief sleep between requests. Consider using smaller batches if the index has active read traffic.
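A minimal sketch with the Python SDK, assuming you have already collected the record IDs to remove (the ID list, index name, namespace, and sleep interval are placeholders):

```python
import time

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("docs-example")          # placeholder index name

ids_to_delete = ["id-1", "id-2"]          # placeholder: IDs gathered beforehand
BATCH_SIZE = 1000                         # maximum recommended batch size

for start in range(0, len(ids_to_delete), BATCH_SIZE):
    batch = ids_to_delete[start:start + BATCH_SIZE]
    index.delete(ids=batch, namespace="example-namespace")
    time.sleep(0.5)                       # brief pause between delete requests
```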
Delete records by metadata
In pod-based indexes, if you are targeting a large number of records for deletion and the index has active read traffic, consider deleting records in batches.
To delete records based on their metadata, pass a metadata filter expression to the delete
operation. This deletes all vectors matching the metadata filter expression.
For example, to delete all vectors with genre “documentary” and year 2019 from an index, use the following code:
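A minimal sketch with the Python SDK (the index and namespace names are placeholders):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("docs-example")   # placeholder index name

# Deletes all records whose metadata matches both conditions
index.delete(
    filter={
        "genre": {"$eq": "documentary"},
        "year": 2019
    },
    namespace="example-namespace"
)
```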
Tag an index
When configuring an index, you can tag the index to help with index organization and management. For more details, see Tag an index.
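As a brief sketch with the Python SDK, assuming your SDK version supports the tags parameter on configure_index (the index name and tag key-value pair are placeholders):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Tags are arbitrary key-value pairs used for organization
pc.configure_index("docs-example", tags={"environment": "production"})
```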
Troubleshoot index fullness errors
Serverless indexes automatically scale as needed.
However, pod-based indexes can run out of capacity. When that happens, upserting new records will fail with the following error:
While a full pod-based index can still serve queries, you need to scale your index to accommodate more records.
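One option is to scale vertically to a larger pod size. A minimal sketch with the Python SDK, assuming an index currently on p1.x1 pods (the index name and target pod type are placeholders):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Move to a larger pod size to increase capacity
pc.configure_index("docs-example", pod_type="p1.x2")
```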