Pinecone Database
Pinecone is the leading vector database for building accurate and performant AI applications at scale in production.
Database quickstart
Set up a fully managed vector database for high-performance semantic search
Assistant quickstart
Create an AI assistant that answers complex questions about your proprietary data
Inference
Leading embedding and reranking models hosted by Pinecone. Explore all models.
llama-text-embed
State-of-the-art 1B-parameter text embedding model
cohere-rerank-3.5
State-of-the-art reranking model for search
pinecone-sparse-v0
Sparse vector model for keyword-style search
Database workflows
Use integrated embedding to upsert and search with raw text, letting Pinecone generate the vectors automatically.
Create an index
Create an index that is integrated with one of Pinecone’s hosted embedding models. Dense indexes and vectors enable semantic search, while sparse indexes and vectors enable lexical search.
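A minimal sketch using the Pinecone Python SDK. The index name, cloud, and region here are placeholder choices, not requirements; any hosted embedding model can be substituted:

```python
from pinecone import Pinecone  # pip install pinecone

pc = Pinecone(api_key="YOUR_API_KEY")  # placeholder key

# Create a dense index wired to a hosted embedding model.
# "field_map" tells Pinecone which record field holds the source text.
pc.create_index_for_model(
    name="docs-example",  # hypothetical index name
    cloud="aws",
    region="us-east-1",
    embed={
        "model": "llama-text-embed-v2",
        "field_map": {"text": "chunk_text"},
    },
)
```

Because the embedding model is bound to the index at creation time, all later upserts and queries against it use the same model without specifying it again.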
Upsert text
Upsert your source text and have Pinecone convert the text to vectors automatically. Use namespaces to partition data for faster queries and multitenant isolation between customers.
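Continuing the sketch above, records are plain dictionaries; the text field named in the index's `field_map` is embedded server-side. The index name, namespace, and record contents below are illustrative:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")  # placeholder key
index = pc.Index("docs-example")       # hypothetical index from the previous step

# Upsert raw text; Pinecone embeds the "chunk_text" field automatically.
# The namespace partitions data, e.g. one namespace per tenant.
index.upsert_records(
    namespace="example-namespace",
    records=[
        {"_id": "rec1", "chunk_text": "Apples are a great source of fiber.", "category": "nutrition"},
        {"_id": "rec2", "chunk_text": "Apple released a new phone this fall.", "category": "tech"},
    ],
)
```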
Search with text
Search the index with a query text. Again, Pinecone uses the index’s integrated model to convert the text to a vector automatically.
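A query is likewise plain text; the index's integrated model embeds it before the similarity search runs. Names below are the same placeholders as in the earlier sketches:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")  # placeholder key
index = pc.Index("docs-example")       # hypothetical index name

# The query text is converted to a vector by the index's integrated model.
results = index.search(
    namespace="example-namespace",
    query={
        "inputs": {"text": "Health benefits of fruit"},
        "top_k": 3,
    },
)

# Each hit carries an id and a similarity score.
for hit in results["result"]["hits"]:
    print(hit["_id"], hit["_score"])
```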
Improve relevance
Filter by metadata to limit the scope of your search, rerank results to increase search accuracy, or add lexical search to capture both semantic understanding and precise keyword matches.
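The three techniques can be combined in a single query. This sketch reuses the placeholder names above, filters on the `category` metadata field, and reranks the candidates with a hosted reranking model:

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")  # placeholder key
index = pc.Index("docs-example")       # hypothetical index name

results = index.search(
    namespace="example-namespace",
    query={
        "inputs": {"text": "Health benefits of fruit"},
        "top_k": 10,                           # retrieve a wider candidate set...
        "filter": {"category": "nutrition"},   # ...restricted by metadata
    },
    rerank={
        "model": "cohere-rerank-3.5",  # hosted reranker listed above
        "top_n": 3,                    # ...then keep the 3 best after reranking
        "rank_fields": ["chunk_text"],
    },
)
```

Retrieving more candidates than you need (`top_k` > `top_n`) gives the reranker room to reorder, which typically improves precision at the top of the results.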
Start building
API Reference
Comprehensive details about the Pinecone APIs, SDKs, utilities, and architecture.
Integrated Inference
Simplify vector search with integrated embedding & reranking.
Examples
Hands-on notebooks and sample apps with common AI patterns and tools.
Integrations
Pinecone’s growing number of third-party integrations.
Troubleshooting
Resolve common Pinecone issues with our troubleshooting guide.
Releases
News about features and changes in Pinecone and related tools.