Pinecone is the leading AI infrastructure for building accurate, secure, and scalable AI applications. Use Pinecone Database to store and search vector data at scale, or start with Pinecone Assistant to get a RAG application running in minutes.

Workflows

To have Pinecone embed your data and rerank results automatically during storage and search, use integrated inference.

1. Embed data
   Use an embedding model to convert data into vector embeddings, the data format required for similarity search.
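In practice you would call a trained embedding model (for example, through an inference API). As a minimal stand-in, the toy "embedder" below only illustrates the core idea: mapping arbitrary text to a fixed-length, normalized vector. The hashing scheme is purely hypothetical and carries no semantic meaning.

```python
def embed(text: str, dimension: int = 8) -> list[float]:
    """Toy embedder: hash characters into a fixed-length vector.
    Real applications use a trained model; this only shows the shape
    of the output -- a fixed-dimension, L2-normalized vector."""
    vec = [0.0] * dimension
    for i, ch in enumerate(text.lower()):
        vec[(ord(ch) + i) % dimension] += 1.0
    # Normalize so that cosine similarity reduces to a dot product.
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]
```

Whatever model you use, every record in an index must have the same dimension, which is why the model choice is fixed before the index is created.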

2. Create an index
   Create an index to store your vector embeddings. Specify the dimension and similarity metric of the embedding model you used.
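The sketch below is not the Pinecone API; it is a minimal in-memory stand-in that illustrates why the dimension and similarity metric are declared up front: every vector stored later must match the dimension, and the metric determines how similarity scores are computed. The class name and metric strings are illustrative only.

```python
class VectorIndex:
    """Hypothetical in-memory index showing the role of the two
    parameters fixed at creation time: dimension and metric."""

    SUPPORTED_METRICS = ("cosine", "euclidean", "dotproduct")

    def __init__(self, dimension: int, metric: str = "cosine"):
        if metric not in self.SUPPORTED_METRICS:
            raise ValueError(f"unsupported metric: {metric}")
        self.dimension = dimension  # every stored vector must match this
        self.metric = metric        # how query/vector similarity is scored
        self.vectors: dict[str, list[float]] = {}
```

For example, an index for a 1536-dimension embedding model would be created with `VectorIndex(1536, "cosine")`; changing either parameter later would invalidate the stored vectors, which is why real vector databases fix them at index creation.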

3. Ingest data
   Load vector embeddings and metadata into your index using Pinecone's import or upsert feature. Use namespaces to partition data for faster queries and multitenant isolation between customers.
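The sketch below illustrates the two ideas in this step with a hypothetical in-memory index (not the Pinecone SDK): upsert is insert-or-update keyed by record ID, and namespaces are disjoint partitions, so a query scoped to one namespace never scans another tenant's data.

```python
from collections import defaultdict

class VectorIndex:
    """Toy index illustrating upsert semantics and namespace partitioning."""

    def __init__(self, dimension: int):
        self.dimension = dimension
        # namespace -> {record id -> (vector, metadata)}
        self.namespaces: dict[str, dict] = defaultdict(dict)

    def upsert(self, records, namespace: str = "__default__") -> int:
        """Insert or overwrite records; each record is (id, vector, metadata).
        Returns the number of records written."""
        for rec_id, vector, metadata in records:
            if len(vector) != self.dimension:
                raise ValueError("vector dimension mismatch")
            # Same id in the same namespace -> the record is replaced.
            self.namespaces[namespace][rec_id] = (vector, metadata)
        return len(records)
```

Because namespaces are separate dictionaries here, partitioning by tenant (e.g. one namespace per customer) both shrinks the search space and isolates customers from each other.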

4. Search
   Convert queries into vector embeddings and use them to search your index for vectors that are semantically similar.
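Under the hood, a semantic query scores stored vectors against the query vector and returns the closest matches. A minimal sketch of that scoring loop, using cosine similarity over a plain dictionary of vectors (Pinecone performs this server-side with approximate-nearest-neighbor structures rather than a linear scan):

```python
def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def query(records: dict[str, list[float]], query_vector: list[float],
          top_k: int = 3) -> list[tuple[str, float]]:
    """Score every record against the query and return the top_k matches,
    highest similarity first."""
    scored = [(rec_id, cosine_similarity(vec, query_vector))
              for rec_id, vec in records.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```

The query vector must come from the same embedding model as the stored records; otherwise the similarity scores are meaningless.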

5. Optimize performance
   Filter queries by metadata to limit the scope of your search, rerank results by their relevance to the query, or use hybrid search to combine the strengths of semantic and keyword search.
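Two of these optimizations can be sketched in a few lines, again as illustrative stand-ins rather than the Pinecone API: a metadata filter shrinks the candidate set before any similarity scoring happens, and a hybrid score blends a semantic score with a keyword score via a weighting parameter (here called `alpha`, a hypothetical name).

```python
def filter_records(records: dict, metadata_filter: dict) -> dict:
    """records: {id: (vector, metadata)}. Keep only records whose metadata
    exactly matches every key/value in metadata_filter; similarity scoring
    then runs over this smaller candidate set."""
    return {rec_id: (vec, meta)
            for rec_id, (vec, meta) in records.items()
            if all(meta.get(k) == v for k, v in metadata_filter.items())}

def hybrid_score(semantic_score: float, keyword_score: float,
                 alpha: float = 0.75) -> float:
    """Convex combination of the two signals: alpha=1.0 is purely
    semantic, alpha=0.0 is purely keyword-based."""
    return alpha * semantic_score + (1 - alpha) * keyword_score
```

Reranking, the third technique, typically sends the top candidates through a heavier relevance model after the initial search, trading a little latency for better ordering of the final results.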
