Deletes, like upserts and updates, are blocking operations in pod-based Pinecone indexes. This means that if you submit a delete operation, subsequent reads (queries or fetches) may suffer increased latency until the delete is complete.

To work around this issue, we recommend batching your deletes the same way you would upserts or updates. Deleting specific vector IDs in batches of 100 to 200 at a time allows the index to process the deletes in sequence and leaves room for other operations to run in between the batches, as shown in the sketch below.
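A minimal sketch of this pattern, assuming you already have the full list of vector IDs to remove (the helper name, batch size, and sleep interval are illustrative, not prescriptive):

import time

def delete_ids_in_batches(index, ids, namespace=None, batch=100):
    # Walk the ID list in fixed-size chunks and delete each chunk,
    # pausing briefly so other operations can interleave.
    for i in range(0, len(ids), batch):
        index.delete(ids=ids[i:i + batch], namespace=namespace)
        time.sleep(0.01)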

The challenge is that most customers issue deletes either by metadata filter or by deleting all vectors in a namespace. Both operations can impact performance across the entire index, including other namespaces.
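For reference, those broad delete operations typically look something like this in the Python client (the filter and namespace values here are purely illustrative):

# Delete everything matching a metadata filter (pod-based indexes only)
index.delete(filter={"genre": {"$eq": "documentary"}}, namespace="example-namespace")

# Delete every vector in a namespace
index.delete(delete_all=True, namespace="example-namespace")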

So what do you do when you need to batch your deletes but don't have the vector IDs, only the metadata you want to filter on? Query for the IDs first, then delete them in batches. Here is an example using our Python client:

def batch_deletes_by_metadata(index, filter, namespace=None, dimensions=1536, batch=100):
    # Import these here in case they weren't imported elsewhere.
    import numpy as np
    import time

    # Create a random vector to use as the query; we only care about the
    # metadata filter, not the similarity ranking.
    query_vector = np.random.uniform(-1, 1, size=dimensions).tolist()

    # Cycle through the index, deleting each batch of matching IDs, with a
    # slight sleep between batches so we don't overwhelm the index.
    deleted = 0
    results = index.query(vector=query_vector, filter=filter, namespace=namespace, top_k=batch)
    while len(results['matches']) > 0:
        ids = [match['id'] for match in results['matches']]
        index.delete(ids=ids, namespace=namespace)
        deleted += len(ids)
        time.sleep(0.01)
        results = index.query(vector=query_vector, filter=filter, namespace=namespace, top_k=batch)
    return deleted
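
For example, to remove every vector tagged with a particular genre in a given namespace (the filter and namespace values here are illustrative), you could call:

deleted_count = batch_deletes_by_metadata(
    index,
    filter={"genre": {"$eq": "documentary"}},
    namespace="example-namespace",
)
print(f"Deleted {deleted_count} vectors")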

Using this method gives you more control over how many deletes are performed at a time and helps avoid tying up other resources while the deletes are being processed.