Pinecone Documentation
Pinecone is the leading AI infrastructure for building accurate, secure, and scalable AI applications. Use Pinecone Database to store and search vector data at scale, or start with Pinecone Assistant to get a RAG application running in minutes.
Database quickstart
Set up a fully managed vector database for high-performance similarity search
Assistant quickstart
Create an AI assistant that answers complex questions about your proprietary data
Workflows
To store and search with automatic vector embedding and result reranking, use integrated inference.
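A minimal sketch of the integrated inference workflow, assuming a recent version of the Pinecone Python SDK that exposes `create_index_for_model`, `upsert_records`, and `search`; the index name, namespace, model, and field names below are illustrative only.

```python
# Integrated inference sketch: Pinecone embeds records and queries for you.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Create an index wired to a hosted embedding model (names are examples).
pc.create_index_for_model(
    name="docs-integrated",
    cloud="aws",
    region="us-east-1",
    embed={
        "model": "llama-text-embed-v2",       # hosted embedding model
        "field_map": {"text": "chunk_text"},  # record field to embed
    },
)

index = pc.Index("docs-integrated")

# Upsert raw text; Pinecone converts it to vectors automatically.
index.upsert_records(
    "example-namespace",
    [{"_id": "rec1", "chunk_text": "Pinecone stores and searches vectors."}],
)

# Search with raw text; the query is embedded with the same model.
results = index.search(
    namespace="example-namespace",
    query={"inputs": {"text": "How do I search my data?"}, "top_k": 3},
)
```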
Embed data
Use an embedding model to convert data into vector embeddings, the data format required for similarity search.
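A short sketch of this step using Pinecone's hosted Inference API, assuming `pc.inference.embed` and the `multilingual-e5-large` model (1024-dimensional output) are available to your project; you can equally use any external embedding model.

```python
# Convert text into vector embeddings with a hosted model.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

passages = [
    "Pinecone is a managed vector database.",
    "Vector embeddings represent the meaning of data as numbers.",
]

# "passage" tells the model these are documents to store, not queries.
embeddings = pc.inference.embed(
    model="multilingual-e5-large",
    inputs=passages,
    parameters={"input_type": "passage", "truncate": "END"},
)

# Each result holds the vector values (exact access pattern may vary by SDK version).
print(len(embeddings[0]["values"]))  # e.g. 1024
```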
Create an index
Create an index to store your vector embeddings. Specify the dimension and similarity metric of the embedding model you used.
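For example, a serverless index sized for the model above might be created as follows; the index name, cloud, and region are illustrative, and the dimension and metric must match your embedding model (1024 and cosine for `multilingual-e5-large`).

```python
# Create a serverless index that matches the embedding model's output.
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

pc.create_index(
    name="docs-example",
    dimension=1024,      # must equal the embedding model's vector size
    metric="cosine",     # similarity metric the model was trained for
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)
```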
Ingest data
Load vector embeddings and metadata into your index using Pinecone’s import or upsert feature. Use namespaces to partition data for faster queries and multitenant isolation between customers.
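A minimal upsert sketch, assuming the `docs-example` index from the previous step; the record ID, metadata fields, and namespace are illustrative, and the placeholder vector stands in for a real embedding.

```python
# Upsert an embedding plus metadata into a per-tenant namespace.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("docs-example")

index.upsert(
    vectors=[
        {
            "id": "doc1#chunk1",
            "values": [0.01] * 1024,  # replace with a real 1024-dim embedding
            "metadata": {"source": "handbook.pdf", "category": "hr"},
        },
    ],
    namespace="customer-a",  # one namespace per tenant keeps data isolated
)
```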
Search
Convert queries into vector embeddings and use them to search your index for vectors that are semantically similar.
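A query sketch against the same index and namespace; in practice the query text is embedded with the same model used for the stored data, and the placeholder vector below stands in for that query embedding.

```python
# Query the index for the records most similar to a query embedding.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("docs-example")

query_vector = [0.01] * 1024  # placeholder for a real query embedding

results = index.query(
    namespace="customer-a",
    vector=query_vector,
    top_k=5,                  # return the 5 most similar records
    include_metadata=True,
)

for match in results.matches:
    print(match.id, match.score, match.metadata)
```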
Optimize performance
Filter queries by metadata to limit the scope of your search, rerank results based on their relevance to the query, or use hybrid search to combine the strengths of semantic and keyword search.
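A sketch of two of these techniques, assuming the index and namespace from the previous steps: a metadata filter (using Pinecone's `$eq`-style filter syntax) to narrow the search, followed by a hosted reranking call, which assumes `pc.inference.rerank` and the `bge-reranker-v2-m3` model are available to your project.

```python
# Narrow the search with a metadata filter, then rerank candidate texts.
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("docs-example")

results = index.query(
    namespace="customer-a",
    vector=[0.01] * 1024,                # placeholder query embedding
    top_k=5,
    filter={"category": {"$eq": "hr"}},  # only search HR documents
    include_metadata=True,
)

# Optionally rerank the retrieved texts against the original query text.
reranked = pc.inference.rerank(
    model="bge-reranker-v2-m3",
    query="What is the vacation policy?",
    documents=["Vacation policy text...", "Expense policy text..."],
    top_n=1,
)
```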
Resources
Guides
Practical guides and best practices to get you up and running quickly.
Reference
Comprehensive details about the Pinecone APIs, SDKs, utilities, and architecture.
Examples
Hands-on notebooks and sample apps with common AI patterns and tools.
Models
Details and guidance on popular embedding and reranking models.
Integrations
Pinecone’s growing number of third-party integrations.
Troubleshooting
Resolve common Pinecone issues with our troubleshooting guide.
Releases
News about features and changes in Pinecone and related tools.