Pinecone is the leading vector database for building accurate and performant AI applications at scale in production.
Set up a fully managed vector database for high-performance semantic search.
Create an AI assistant that answers complex questions about your proprietary data.
Use integrated embedding to upsert and search with text, letting Pinecone generate the vectors automatically.
Create an index
Create an index that is integrated with one of Pinecone’s hosted embedding models. Dense indexes and vectors enable semantic search, while sparse indexes and vectors enable lexical search.
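For example, with the Pinecone Python SDK (v6+), an integrated index can be created with create_index_for_model. The index name, cloud, region, model, and field name below are illustrative choices, not requirements:

```python
import os

# Index configuration: the hosted model embeds the "chunk_text" field of
# each record into a dense vector at upsert and query time.
index_config = {
    "name": "docs-example",                   # illustrative index name
    "cloud": "aws",
    "region": "us-east-1",
    "embed": {
        "model": "llama-text-embed-v2",       # one of Pinecone's hosted models
        "field_map": {"text": "chunk_text"},  # record field to embed
    },
}

# The API call runs only when credentials are available.
if os.environ.get("PINECONE_API_KEY"):
    from pinecone import Pinecone

    pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
    if not pc.has_index(index_config["name"]):
        pc.create_index_for_model(**index_config)
```

The field_map tells Pinecone which field of each upserted record to convert into a vector; all other fields are stored as metadata.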
Prepare data
Prepare your data for efficient ingestion, retrieval, and management in Pinecone.
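Preparation typically means splitting long documents into chunks that fit the embedding model's context window and carry enough standalone meaning. A minimal sketch of fixed-size chunking with overlap (the sizes and field names are illustrative):

```python
def chunk_text(text: str, size: int = 400, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

# Each chunk becomes one record, ready for upsert; "_id" encodes the
# source document and chunk position.
records = [
    {"_id": f"doc1#{n}", "chunk_text": chunk, "category": "manual"}
    for n, chunk in enumerate(chunk_text("..." * 300))
]
```

Overlapping windows are a simple default; sentence- or heading-aware splitting usually retrieves better for structured documents.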
Upsert text
Upsert your source text and have Pinecone convert the text to vectors automatically. Use namespaces to partition data for faster queries and multitenant isolation between customers.
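With an integrated index, records are upserted as plain dictionaries via upsert_records (Python SDK v6+); Pinecone embeds the mapped text field server-side. The record contents and namespace name here are illustrative:

```python
import os

# "_id" is required; "chunk_text" is the field mapped for embedding;
# any other fields become filterable metadata.
records = [
    {"_id": "rec1", "chunk_text": "Apples are rich in fiber.", "category": "nutrition"},
    {"_id": "rec2", "chunk_text": "The Eiffel Tower was completed in 1889.", "category": "history"},
]

if os.environ.get("PINECONE_API_KEY"):
    from pinecone import Pinecone

    index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("docs-example")
    # Upsert into a namespace to partition data per tenant or workload.
    index.upsert_records("example-namespace", records)
```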
Search with text
Search the index with a text query. Again, Pinecone uses the index's integrated model to convert the text to a vector automatically.
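A text search against an integrated index takes the query as an inputs payload (Python SDK v6+); the response shape sketched below reflects the records search API as I understand it, so treat the hit fields as an assumption:

```python
import os

# Query payload: Pinecone embeds the text with the index's integrated model.
query = {"inputs": {"text": "Which fruit helps digestion?"}, "top_k": 3}

if os.environ.get("PINECONE_API_KEY"):
    from pinecone import Pinecone

    index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("docs-example")
    results = index.search(namespace="example-namespace", query=query)
    for hit in results["result"]["hits"]:
        print(hit["_id"], hit["_score"])
```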
Improve relevance
Filter by metadata to limit the scope of your search, rerank results to increase search accuracy, or add lexical search to capture both semantic understanding and precise keyword matches.
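The first two techniques can be combined in one call: a metadata filter narrows the candidate set, then a hosted reranker reorders it. The filter values and reranker choice below are illustrative:

```python
import os

# Over-fetch candidates (top_k=10), restrict them by metadata, then let a
# hosted reranking model keep only the most relevant few.
query = {
    "inputs": {"text": "Which fruit helps digestion?"},
    "top_k": 10,
    "filter": {"category": "nutrition"},   # metadata filter
}
rerank = {
    "model": "bge-reranker-v2-m3",         # hosted reranking model
    "top_n": 3,                            # keep the 3 best after reranking
    "rank_fields": ["chunk_text"],         # field the reranker scores
}

if os.environ.get("PINECONE_API_KEY"):
    from pinecone import Pinecone

    index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("docs-example")
    results = index.search(namespace="example-namespace", query=query, rerank=rerank)
```

Over-fetching before reranking (top_k larger than top_n) gives the reranker enough candidates to be useful.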
If you use an external embedding model to generate vectors, you can upsert and search with vectors directly.
Generate vectors
Use an external embedding model to convert data into dense or sparse vectors.
Create an index
Create an index that matches the characteristics of your embedding model. Dense indexes and vectors enable semantic search, while sparse indexes and vectors enable lexical search.
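With the Python SDK this is create_index with an explicit dimension and metric; both must match your embedding model. The 1536/cosine pairing below is illustrative (it matches, for example, OpenAI's text-embedding-3-small):

```python
import os

# Dimension and metric must match the external embedding model.
name, dimension, metric = "docs-example-dense", 1536, "cosine"

if os.environ.get("PINECONE_API_KEY"):
    from pinecone import Pinecone, ServerlessSpec

    pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])
    if not pc.has_index(name):
        pc.create_index(
            name=name,
            dimension=dimension,
            metric=metric,
            spec=ServerlessSpec(cloud="aws", region="us-east-1"),
        )
```

A mismatched dimension is rejected at upsert time, and a mismatched metric silently degrades result quality, so check both against your model's documentation.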
Prepare data
Prepare your data for efficient ingestion, retrieval, and management in Pinecone.
Ingest vectors
Load your vectors and metadata into your index using Pinecone’s import or upsert feature. Use namespaces to partition data for faster queries and multitenant isolation between customers.
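A minimal upsert sketch; the placeholder vector values stand in for real embeddings from your external model, and the index and namespace names are illustrative:

```python
import os

# Each vector carries an id, its values, and optional filterable metadata.
vectors = [
    {"id": "v1", "values": [0.01] * 1536, "metadata": {"category": "nutrition"}},
    {"id": "v2", "values": [0.02] * 1536, "metadata": {"category": "history"}},
]

if os.environ.get("PINECONE_API_KEY"):
    from pinecone import Pinecone

    index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("docs-example-dense")
    index.upsert(vectors=vectors, namespace="example-namespace")
```

For very large one-time loads, Pinecone's import feature is the bulk alternative to batched upserts.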
Search with a vector
Use the same external embedding model to convert the query text to a vector, then search the index with that vector.
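Vector search uses the query method; the placeholder query vector below stands in for a real embedding produced by the same model used at ingestion:

```python
import os

# Query vector from your external embedding model (placeholder values here);
# it must have the same dimension as the index.
query_vector = [0.015] * 1536

if os.environ.get("PINECONE_API_KEY"):
    from pinecone import Pinecone

    index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("docs-example-dense")
    results = index.query(
        namespace="example-namespace",
        vector=query_vector,
        top_k=3,
        include_metadata=True,
    )
    for match in results["matches"]:
        print(match["id"], match["score"])
```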
Improve relevance
Filter by metadata to limit the scope of your search, rerank results to increase search accuracy, or add lexical search to capture both semantic understanding and precise keyword matches.
Comprehensive details about the Pinecone APIs, SDKs, utilities, and architecture.
Simplify vector search with integrated embedding and reranking.
Hands-on notebooks and sample apps with common AI patterns and tools.
Pinecone’s growing number of third-party integrations.
Resolve common Pinecone issues with our troubleshooting guide.
News about features and changes in Pinecone and related tools.