This page provides recommendations and best practices for preparing your Pinecone indexes for production, anticipating production issues, and enabling reliability and growth.

For high-scale use cases, consider using the Pinecone AWS Reference Architecture as a starting point, and read up on code best practices.

Prepare your project structure

One of the first steps towards building a production-ready Pinecone index is configuring your project correctly.

  • Consider creating separate projects for your development and production indexes, so that you can test changes before deploying them to production.
  • Ensure that you have properly configured user access within your production environment, so that only those users who need to access the production index can do so.
  • Consider how best to manage the API key(s) associated with your production project; avoid hardcoding keys in source control, as in the sketch after this list.
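
For example, a minimal sketch of loading the key at startup, assuming the current pinecone Python client and an environment variable named PINECONE_API_KEY (stand-ins for whatever secret management you use):

```python
import os

from pinecone import Pinecone

# Read the key from the environment (or a secret manager) rather than
# hardcoding it in source control. PINECONE_API_KEY is an assumed name.
api_key = os.environ.get("PINECONE_API_KEY")
if api_key is None:
    raise RuntimeError("PINECONE_API_KEY is not set")

pc = Pinecone(api_key=api_key)
```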

Test your query results

Before you move your index to production, make sure that your index is returning accurate results in the context of your application by identifying the appropriate metrics for evaluating your results.
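
For example, if you have a small set of labeled queries with known relevant record IDs, you can measure recall@k directly against the index. A minimal sketch, assuming the pinecone Python client; the index name and labeled data are placeholders:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("my-index")  # placeholder index name

def recall_at_k(query_vector, relevant_ids, k=10):
    """Fraction of known-relevant IDs that appear in the top-k results."""
    results = index.query(vector=query_vector, top_k=k)
    returned = {match.id for match in results.matches}
    return len(returned & set(relevant_ids)) / len(relevant_ids)

# labeled_queries: a list of (vector, relevant_id_set) pairs you curate.
# avg = sum(recall_at_k(v, ids) for v, ids in labeled_queries) / len(labeled_queries)
```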

Load test your indexes

Before moving your project to production, test whether your index configuration can serve the query load you anticipate from your application. You can write load tests in Python from scratch or with a load-testing framework such as Locust.
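
As an illustration, a minimal Locust test that posts random query vectors to an index's data-plane REST endpoint; the dimension and key handling are assumptions to adapt to your setup:

```python
import os
import random

from locust import HttpUser, between, task

DIMENSION = 768  # match your index's dimension


class PineconeQueryUser(HttpUser):
    # Pass the index endpoint as the host, e.g.:
    #   locust -f locustfile.py --host https://<your-index-host>
    wait_time = between(0.1, 0.5)

    @task
    def query(self):
        vector = [random.random() for _ in range(DIMENSION)]
        self.client.post(
            "/query",
            json={"vector": vector, "topK": 10},
            headers={"Api-Key": os.environ["PINECONE_API_KEY"]},
        )
```

Ramp up simulated users until latency or error rates exceed your targets, and prefer realistic query vectors where possible, since random vectors may not exercise the index the way production traffic will.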

Back up your indexes

To enable long-term retention, compliance archiving, and deployment of new indexes, consider backing up your production indexes by creating collections.

Serverless indexes do not support collections.
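
For pod-based indexes, a backup is a collection created from the index. A sketch using the pinecone Python client, with placeholder names:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Snapshot a pod-based index into a static collection.
pc.create_collection(name="my-index-backup", source="my-index")

# To restore later, create a new index from the collection, e.g. by passing
# source_collection="my-index-backup" in the new index's pod spec.
```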

Tune for performance

Before serving production workloads, identify ways to improve latency by making changes to your deployment, project configuration, or client.
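
One common client-side change is to use the gRPC flavor of the Python client and reuse a single index handle across requests instead of re-creating it per call. A sketch, assuming the gRPC extra is installed (pip install "pinecone-client[grpc]"):

```python
from pinecone.grpc import PineconeGRPC

pc = PineconeGRPC(api_key="YOUR_API_KEY")
index = pc.Index("my-index")  # placeholder; create once and reuse

# Reusing one handle avoids re-establishing connections on every request.
results = index.query(vector=[0.1] * 768, top_k=10)
```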

Configure monitoring

Prepare to monitor the production performance and availability of your indexes.
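
At a minimum, you can sample index statistics on a schedule and forward them to your monitoring system. A sketch using describe_index_stats from the Python client, with printing standing in for a real metrics exporter:

```python
import time

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("my-index")  # placeholder index name

while True:
    stats = index.describe_index_stats()
    # index_fullness is meaningful for pod-based indexes.
    print(f"fullness={stats.index_fullness:.2f} "
          f"vectors={stats.total_vector_count}")
    time.sleep(60)
```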

Estimate your index size

This guidance applies to pod-based indexes only. With serverless indexes, you don’t configure any compute or storage resources, and you don’t manually manage those resources to meet demand, save on cost, or ensure high availability. Instead, serverless indexes scale automatically based on usage.

Depending on your data and the types of workloads you intend to run, your pod-based index may require a different number and size of pods and replicas. Factors to consider include the number of vectors, the dimensions per vector, the amount and cardinality of metadata, and the acceptable queries per second (QPS). Use the index fullness metric to see how much of your current capacity your indexes are using. To experiment, you can use collections to create indexes with different pod types and sizes.
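
As a rough lower bound before choosing pod sizes, dense float32 vectors alone occupy number of vectors × dimension × 4 bytes; metadata and index structures add overhead on top. A back-of-envelope sketch:

```python
def raw_vector_bytes(num_vectors: int, dimension: int) -> int:
    """Lower-bound storage for dense float32 vectors, ignoring metadata
    and index overhead (actual pod usage will be higher)."""
    return num_vectors * dimension * 4

# Example: 1M vectors at 768 dimensions is about 3 GB of raw vector data.
print(f"{raw_vector_bytes(1_000_000, 768) / 1e9:.1f} GB")
```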

Plan for scaling

This guidance applies to pod-based indexes only. With serverless indexes, you don’t configure any compute or storage resources, and you don’t manually manage those resources to meet demand, save on cost, or ensure high availability. Instead, serverless indexes scale automatically based on usage.

Before going to production, plan for how you will scale your indexes when the need arises. Identify metrics that may indicate the need to scale, such as index fullness and average request latency. Options include increasing the number of pods, changing to a more performant pod type, vertically scaling your pods, increasing the number of replicas, or increasing storage capacity with a storage-optimized pod type.
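
Several of these operations are a single configure_index call in the Python client. A sketch with placeholder names and values:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Vertically scale to a larger pod size within the same pod type family.
pc.configure_index("my-index", pod_type="p1.x2")

# Add replicas to increase throughput and availability.
pc.configure_index("my-index", replicas=3)
```

Note that vertical scaling changes the pod size within the same family (for example, p1.x1 to p1.x2), while moving to a different pod type requires creating a new index, for example from a collection.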

Know how to get support

If you need help, visit support.pinecone.io or talk to the Pinecone community. Ensure that your plan tier matches the support and availability SLAs you need; this may require upgrading to Enterprise.