LangChain
This guide shows you how to integrate Pinecone, a high-performance vector database, with LangChain, a framework for building applications powered by large language models (LLMs).
Pinecone enables developers to build scalable, real-time recommendation and search systems based on vector similarity search. LangChain, on the other hand, provides modules for managing and optimizing the use of language models in applications. Its core philosophy is to facilitate data-aware applications where the language model interacts with other data sources and its environment.
By integrating Pinecone with LangChain, you can add knowledge to LLMs via Retrieval Augmented Generation (RAG), greatly enhancing their capabilities for autonomous agents, chatbots, question answering, and multi-agent systems.
This guide demonstrates only one of the many ways you can use LangChain and Pinecone together. For additional examples, see the LangChain documentation.
1. Set up your environment
Before you begin, install some necessary libraries and set environment variables for your Pinecone and OpenAI API keys:
pip install -qU \
pinecone-client==3.0.0 \
pinecone-datasets==0.7.0 \
langchain-pinecone==0.0.3 \
langchain-openai==0.0.7 \
langchain==0.1.9
# Set environment variables for API keys
export PINECONE_API_KEY=<your Pinecone API key available at app.pinecone.io>
export OPENAI_API_KEY=<your OpenAI API key, available at platform.openai.com/api-keys>
Then, in Python, read the API keys from the environment:
Python
import os

pinecone_api_key = os.environ.get('PINECONE_API_KEY')
openai_api_key = os.environ.get('OPENAI_API_KEY')
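Optionally, you can add a quick sanity check so the code fails early if either key is missing. The assertions below are just illustrative, not part of the Pinecone or LangChain setup:
Python
# optional: fail fast if either API key is missing from the environment
assert pinecone_api_key is not None, "PINECONE_API_KEY is not set"
assert openai_api_key is not None, "OPENAI_API_KEY is not set"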
2. Build the knowledge base
- Load a sample Pinecone dataset into memory:
Python
import pinecone_datasets

dataset = pinecone_datasets.load_dataset('wikipedia-simple-text-embedding-ada-002-100K')
len(dataset)
100000
- Reduce the dataset and format it for upserting into Pinecone:
Python
# drop the original metadata column and rename 'blob' to 'metadata' for upserting
dataset.documents.drop(['metadata'], axis=1, inplace=True)
dataset.documents.rename(columns={'blob': 'metadata'}, inplace=True)
# we will use rows of the dataset up to index 30_000
dataset.documents.drop(dataset.documents.index[30_000:], inplace=True)
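If you want to see what you are about to upsert, dataset.documents is a pandas DataFrame, so you can inspect it directly. This check is optional and assumes the reduction above has already run:
Python
# optional: inspect the reduced dataset before upserting
print(len(dataset.documents))     # expected: 30000 rows after the reduction
print(dataset.documents.columns)  # column names the upsert will send to Pinecone
dataset.documents.head()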
3. Index the data in Pinecone
- Decide whether to use a serverless or pod-based index. Pod-based indexes use the traditional Pinecone architecture and are available on Pinecone's free starter tier. Serverless is the newer Pinecone architecture, offering large cost savings, easier scaling, and more. There is no free tier for serverless yet, but you can get $100 in free credits when you sign up.
Python
import os

use_serverless = True
- Initialize your client connection to Pinecone and create an index. This step uses the Pinecone API key you set as an environment variable earlier.
Python
from pinecone import Pinecone, ServerlessSpec, PodSpec
import time

# configure client
pc = Pinecone(api_key=pinecone_api_key)

if use_serverless:
    spec = ServerlessSpec(cloud='aws', region='us-west-2')
else:
    # if not using a starter index, you should specify a pod_type too
    spec = PodSpec(environment='gcp-starter')

# check for and delete index if already exists
index_name = 'langchain-retrieval-augmentation-fast'
if index_name in pc.list_indexes().names():
    pc.delete_index(index_name)

# create a new index
pc.create_index(
    index_name,
    dimension=1536,  # dimensionality of text-embedding-ada-002
    metric='dotproduct',
    spec=spec
)

# wait for index to be initialized
while not pc.describe_index(index_name).status['ready']:
    time.sleep(1)
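If you choose the pod-based route outside the free starter environment, the spec takes a few more details. The environment and pod type below are placeholder values; substitute the ones configured for your project:
Python
# example pod-based spec (values are placeholders, adjust for your project)
spec = PodSpec(
    environment='us-west1-gcp',  # your project's pod environment
    pod_type='p1.x1',            # pod type and size
    pods=1                       # number of pods
)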
- Target the index and check its current stats:
Python
index = pc.Index(index_name)
index.describe_index_stats()
{'dimension': 1536, 'index_fullness': 0.0, 'namespaces': {}, 'total_vector_count': 0}
You’ll see that the index has a total_vector_count of 0, as you haven’t added any vectors yet.
- Now upsert the data to Pinecone:
Python
for batch in dataset.iter_documents(batch_size=100):
    index.upsert(batch)
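Upserting 30,000 records in batches of 100 can take a few minutes, so you may want a progress bar. A minimal variant using tqdm (installed separately, e.g. pip install tqdm) is sketched below:
Python
from tqdm.auto import tqdm

# same upsert loop as above, wrapped in a progress bar (300 batches of 100)
for batch in tqdm(dataset.iter_documents(batch_size=100), total=300):
    index.upsert(batch)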
- Once the data is indexed, check the index stats once again:
Python
index.describe_index_stats()
{'dimension': 1536, 'index_fullness': 0.0, 'namespaces': {}, 'total_vector_count': 30000}
4. Initialize a LangChain vector store
Now that you’ve built your Pinecone index, you need to initialize a LangChain vector store using the index. This step uses the OpenAI API key you set as an environment variable earlier. Note that OpenAI is a paid service and so running the remainder of this tutorial may incur some small cost.
- Initialize a LangChain embedding object:
Python
from langchain_openai import OpenAIEmbeddings

# get openai api key from platform.openai.com
model_name = 'text-embedding-ada-002'

embeddings = OpenAIEmbeddings(
    model=model_name,
    openai_api_key=openai_api_key
)
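As an optional sanity check, you can embed a test string and confirm that the vector length matches the 1536-dimension index you created earlier:
Python
# optional: embed a sample query and confirm it matches the index dimension
sample_vector = embeddings.embed_query("hello world")
print(len(sample_vector))  # expected: 1536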
- Initialize the LangChain vector store:
Python
from langchain_pinecone import PineconeVectorStore

text_field = "text"

vectorstore = PineconeVectorStore(
    index, embeddings, text_field
)
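The vector store can also write to the index for you. For example, add_texts embeds new passages with the same embedding object and upserts them; the texts below are made up for illustration and will sit alongside the Wikipedia data:
Python
# optional: add your own texts to the same index via the vector store
vectorstore.add_texts([
    "Pinecone is a vector database for similarity search.",
    "LangChain is a framework for building LLM-powered applications."
])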
- Now you can query the vector store directly using vectorstore.similarity_search:
Python
query = "who was Benito Mussolini?"

vectorstore.similarity_search(
    query,  # our search query
    k=3  # return 3 most relevant docs
)
[Document(page_content='Benito Amilcare Andrea Mussolini KSMOM GCTE (29 July 1883 – 28 April 1945) was an Italian politician and journalist...', metadata={'chunk': 0.0, 'source': 'https://simple.wikipedia.org/wiki/Benito%20Mussolini', 'title': 'Benito Mussolini', 'wiki-id': '6754'}), Document(page_content='Fascism as practiced by Mussolini\nMussolini\'s form of Fascism, "Italian Fascism"- unlike Nazism, the racist ideology...', metadata={'chunk': 1.0, 'source': 'https://simple.wikipedia.org/wiki/Benito%20Mussolini', 'title': 'Benito Mussolini', 'wiki-id': '6754'}), Document(page_content='Veneto was made part of Italy in 1866 after a war with Austria. Italian soldiers won Latium in 1870. That was when...', metadata={'chunk': 5.0, 'source': 'https://simple.wikipedia.org/wiki/Italy', 'title': 'Italy', 'wiki-id': '363'})]
All of these are good, relevant results. But what else can you do with this? There are many options, and one of the most interesting (and well supported by LangChain) is "Generative Question-Answering", or GQA.
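Before moving on, note that if you also want to see how relevant each hit is, the vector store exposes a scored variant of the same search:
Python
# return (document, score) pairs instead of documents only
results = vectorstore.similarity_search_with_score(query, k=3)
for doc, score in results:
    print(round(score, 3), doc.metadata.get('title'))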
5. Use Pinecone and LangChain for RAG
In RAG, you take the query as a question to be answered by an LLM, but the LLM must answer the question based on the information it retrieves from the vector store.
- To do this, initialize a RetrievalQA object like so:
Python
from langchain_openai import ChatOpenAI
from langchain.chains import RetrievalQA

# completion llm
llm = ChatOpenAI(
    openai_api_key=openai_api_key,
    model_name='gpt-3.5-turbo',
    temperature=0.0
)

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever()
)

qa.run(query)
Benito Mussolini was an Italian politician and journalist who served as the Prime Minister of Italy from 1922 until 1943. He was the leader of the National Fascist Party and played a significant role in the rise of fascism in Italy...
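By default the retriever decides how many chunks to pass to the LLM; if you want explicit control, you can pass search_kwargs when building the retriever. The value of 3 below is an arbitrary example:
Python
# limit the retriever to the 3 most similar chunks per question
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(search_kwargs={"k": 3})
)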
- You can also include the sources of information that the LLM is using to answer your question by using a slightly different version of RetrievalQA called RetrievalQAWithSourcesChain:
Python
from langchain.chains import RetrievalQAWithSourcesChain

qa_with_sources = RetrievalQAWithSourcesChain.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever()
)

qa_with_sources(query)
{'question': 'who was Benito Mussolini?', 'answer': "Benito Mussolini was an Italian politician and journalist who served as the Prime Minister of Italy from 1922 until 1943. He was the leader of the National Fascist Party and played a significant role in the rise of fascism in Italy...", 'sources': 'https://simple.wikipedia.org/wiki/Benito%20Mussolini'}
6. Clean up
When you no longer need the index, use the delete_index operation to delete it:
pc.delete_index(index_name)
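If this code might be re-run, it is safer to confirm that the index still exists before deleting it; a small guard like the following works with the same client:
Python
# delete the index only if it still exists
if index_name in pc.list_indexes().names():
    pc.delete_index(index_name)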