Extractive Question Answering


This notebook demonstrates how Pinecone helps you build an extractive question-answering application. To build an extractive question-answering system, we need three main components:

  • A vector index to store and run semantic search
  • A retriever model for embedding context passages
  • A reader model to extract answers

We will use the SQuAD dataset, which consists of questions and context paragraphs containing question answers. We generate embeddings for the context passages using the retriever, index them in the vector database, and query with semantic search to retrieve the top k most relevant contexts containing potential answers to our question. We then use the reader model to extract the answers from the returned contexts.

Install Dependencies

!pip install -qU datasets pinecone-client sentence-transformers torch

Load Dataset

Now let's load the SQuAD dataset from the HuggingFace Datasets Hub. We load the dataset into a pandas dataframe, keep only the title and context columns, and drop any duplicate context passages.

from datasets import load_dataset

# load the squad dataset into a pandas dataframe
df = load_dataset("squad", split="train").to_pandas()
# select only title and context column
df = df[["title", "context"]]
# drop rows containing duplicate context passages
df = df.drop_duplicates(subset="context")
df
title context
0 University_of_Notre_Dame Architecturally, the school has a Catholic cha...
5 University_of_Notre_Dame As at most other universities, Notre Dame's st...
10 University_of_Notre_Dame The university is the major seat of the Congre...
15 University_of_Notre_Dame The College of Engineering was established in ...
20 University_of_Notre_Dame All of Notre Dame's undergraduate students are...
... ... ...
87574 Kathmandu Institute of Medicine, the central college of ...
87579 Kathmandu Football and Cricket are the most popular spor...
87584 Kathmandu The total length of roads in Nepal is recorded...
87589 Kathmandu The main international airport serving Kathman...
87594 Kathmandu Kathmandu Metropolitan City (KMC), in order to...

18891 rows × 2 columns
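
Before moving on, it is worth a quick sanity check that the deduplication worked and the passages look reasonable. Here is a minimal sketch (the 200-character preview length is arbitrary):

# confirm the number of unique passages and preview one of them
print(f"{len(df)} unique context passages")
print(df["context"].iloc[0][:200])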

Initialize Pinecone Index

The Pinecone index stores vector representations of our context passages, which we can retrieve using another vector (the query vector). We first need to initialize our connection to Pinecone to create our vector index. For this, we need a free API key and an environment value. You can find your environment value in the Pinecone console under API Keys.

We initialize the connection like so:

import pinecone

# connect to pinecone environment
pinecone.init(
    api_key="YOUR_API_KEY",
    environment="YOUR_ENVIRONMENT"
)

Now we create a new index called "extractive-question-answering" (we can name the index anything we want). We specify the metric type as "cosine" and the dimension as 384 because the retriever we use to generate context embeddings is optimized for cosine similarity and outputs 384-dimensional vectors.

index_name = "extractive-question-answering"

# check if the extractive-question-answering index exists
if index_name not in pinecone.list_indexes():
    # create the index if it does not exist
    pinecone.create_index(
        index_name,
        dimension=384,
        metric="cosine"
    )

# connect to extractive-question-answering index we created
index = pinecone.Index(index_name)

Initialize Retriever

Next, we need to initialize our retriever. The retriever will mainly do two things:

  • Generate embeddings for all context passages (context vectors/embeddings)
  • Generate embeddings for our questions (query vector/embedding)

The retriever generates embeddings such that questions and the context passages containing their answers end up close together in the vector space. We can then use cosine similarity between the query embedding and the context embeddings to find the passages most likely to contain an answer to our question.

As our retriever, we will use the SentenceTransformer model multi-qa-MiniLM-L6-cos-v1, which is designed for semantic search and was trained on 215M (question, answer) pairs from diverse sources.

import torch
from sentence_transformers import SentenceTransformer

# set device to GPU if available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# load the retriever model from huggingface model hub
retriever = SentenceTransformer('multi-qa-MiniLM-L6-cos-v1', device=device)
retriever
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
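
To get an intuition for why this retriever works, we can compare a question against a relevant and an unrelated passage using sentence-transformers' util.cos_sim. This is a quick sketch with made-up sentences, not passages from the dataset:

from sentence_transformers import util

# the question should score noticeably higher against the relevant passage
q_emb = retriever.encode("How much oil is Egypt producing in a day?")
relevant = retriever.encode("Egypt was producing 691,000 bbl/d of oil in 2013.")
unrelated = retriever.encode("Football and cricket are popular sports in Kathmandu.")

print(util.cos_sim(q_emb, relevant))   # higher similarity
print(util.cos_sim(q_emb, unrelated))  # lower similarity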

Generate Embeddings and Upsert

Next, we need to generate embeddings for the context passages. We will do this in batches to help us more quickly generate embeddings and upload them to the Pinecone index. When passing the documents to Pinecone, we need an id (a unique value), context embedding, and metadata for each document representing context passages in the dataset. The metadata is a dictionary containing data relevant to our embeddings, such as the article title, context passage, etc.

from tqdm.auto import tqdm

# we will use batches of 64
batch_size = 64

for i in tqdm(range(0, len(df), batch_size)):
    # find end of batch
    i_end = min(i+batch_size, len(df))
    # extract batch
    batch = df.iloc[i:i_end]
    # generate embeddings for batch
    emb = retriever.encode(batch["context"].tolist()).tolist()
    # get metadata
    meta = batch.to_dict(orient="records")
    # create unique IDs
    ids = [f"{idx}" for idx in range(i, i_end)]
    # add all to upsert list
    to_upsert = list(zip(ids, emb, meta))
    # upsert/insert these records to pinecone
    _ = index.upsert(vectors=to_upsert)

# check that we have all vectors in index
index.describe_index_stats()
100%|██████████| 296/296 [02:57<00:00, 1.99it/s]

{'dimension': 384,
 'index_fullness': 0.0,
 'namespaces': {'': {'vector_count': 18891}},
 'total_vector_count': 18891}
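
As a quick spot check, we can fetch one of the vectors back by its id and confirm the metadata was stored alongside it. This is a minimal sketch assuming the same pinecone-client API used above:

# fetch a single vector by id and inspect its metadata
res = index.fetch(ids=["0"])
print(res["vectors"]["0"]["metadata"]["title"])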

Initialize Reader

We use the deepset/electra-base-squad2 model from the HuggingFace model hub as our reader model. We load this model into a "question-answering" pipeline from HuggingFace transformers and feed it our questions and context passages individually. The model gives a prediction for each context we pass through the pipeline.

from transformers import pipeline

model_name = "deepset/electra-base-squad2"
# load the reader model into a question-answering pipeline
reader = pipeline(tokenizer=model_name, model=model_name, task="question-answering", device=device)
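
Before wiring the reader into the retrieval pipeline, we can run it on a toy question and context to see the output format. This is a minimal smoke test with made-up text, not a passage from the dataset:

# the pipeline returns the answer span, its character offsets, and a score
sample = reader(
    question="What does the reader model do?",
    context="The reader model extracts an answer span from a retrieved context passage."
)
print(sample)
# -> {'score': ..., 'start': ..., 'end': ..., 'answer': ...}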

Now all the components we need are ready. Let's write some helper functions to run our queries. The get_context function retrieves the most relevant context passages for our question from the Pinecone index, and the extract_answer function extracts the answers from those passages.

# gets context passages from the pinecone index
def get_context(question, top_k):
    # generate embeddings for the question
    xq = retriever.encode([question]).tolist()
    # search pinecone index for context passage with the answer
    xc = index.query(xq, top_k=top_k, include_metadata=True)
    # extract the context passage from pinecone search result
    c = [x["metadata"]["context"] for x in xc["matches"]]
    return c

question = "How much oil is Egypt producing in a day?"
context = get_context(question, top_k=1)
context
['Egypt was producing 691,000 bbl/d of oil and 2,141.05 Tcf of natural gas (in 2013), which makes Egypt as the largest oil producer not member of the Organization of the Petroleum Exporting Countries (OPEC) and the second-largest dry natural gas producer in Africa. In 2013, Egypt was the largest consumer of oil and natural gas in Africa, as more than 20% of total oil consumption and more than 40% of total dry natural gas consumption in Africa. Also, Egypt possesses the largest oil refinery capacity in Africa 726,000 bbl/d (in 2012). Egypt is currently planning to build its first nuclear power plant in El Dabaa city, northern Egypt.']

As we can see, the retriever is working and returns the context passage that contains the answer to our question. Now let's use the reader to extract the exact answer from the context passage.

from pprint import pprint

# extracts answer from the context passage
def extract_answer(question, context):
    results = []
    for c in context:
        # feed the reader the question and contexts to extract answers
        answer = reader(question=question, context=c)
        # add the context to answer dict for printing both together
        answer["context"] = c
        results.append(answer)
    # sort the results based on the score from the reader model
    sorted_result = sorted(results, key=lambda x: x["score"], reverse=True)
    pprint(sorted_result)
    return sorted_result

extract_answer(question, context)
[{'answer': '691,000 bbl/d',
  'context': 'Egypt was producing 691,000 bbl/d of oil and 2,141.05 Tcf of '
             'natural gas (in 2013), which makes Egypt as the largest oil '
             'producer not member of the Organization of the Petroleum '
             'Exporting Countries (OPEC) and the second-largest dry natural '
             'gas producer in Africa. In 2013, Egypt was the largest consumer '
             'of oil and natural gas in Africa, as more than 20% of total oil '
             'consumption and more than 40% of total dry natural gas '
             'consumption in Africa. Also, Egypt possesses the largest oil '
             'refinery capacity in Africa 726,000 bbl/d (in 2012). Egypt is '
             'currently planning to build its first nuclear power plant in El '
             'Dabaa city, northern Egypt.',
  'end': 33,
  'score': 0.9999852180480957,
  'start': 20}]

The reader model extracted the correct answer, 691,000 bbl/d, from the context passage with a confidence score above 99%. Let's run a few more queries.

question = "What are the first names of the men that invented youtube?"
context = get_context(question, top_k=1)
extract_answer(question, context)
[{'answer': 'Hurley and Chen',
  'context': 'According to a story that has often been repeated in the media, '
             'Hurley and Chen developed the idea for YouTube during the early '
             'months of 2005, after they had experienced difficulty sharing '
             "videos that had been shot at a dinner party at Chen's apartment "
             'in San Francisco. Karim did not attend the party and denied that '
             'it had occurred, but Chen commented that the idea that YouTube '
             'was founded after a dinner party "was probably very strengthened '
             'by marketing ideas around creating a story that was very '
             'digestible".',
  'end': 79,
  'score': 0.9999276399612427,
  'start': 64}]
question = "What is Albert Eistein famous for?"
context = get_context(question, top_k=1)
extract_answer(question, context)
[{'answer': 'his theories of special relativity and general relativity',
  'context': 'Albert Einstein is known for his theories of special relativity '
             'and general relativity. He also made important contributions to '
             'statistical mechanics, especially his mathematical treatment of '
             'Brownian motion, his resolution of the paradox of specific '
             'heats, and his connection of fluctuations and dissipation. '
             'Despite his reservations about its interpretation, Einstein also '
             'made contributions to quantum mechanics and, indirectly, quantum '
             'field theory, primarily through his theoretical studies of the '
             'photon.',
  'end': 86,
  'score': 0.9500371217727661,
  'start': 29}]

Let's run another question, this time retrieving the top 3 context passages from the retriever.

question = "Who was the first person to step foot on the moon?"
context = get_context(question, top_k=3)
extract_answer(question, context)
[{'answer': 'Armstrong',
  'context': 'The trip to the Moon took just over three days. After achieving '
             'orbit, Armstrong and Aldrin transferred into the Lunar Module, '
             'named Eagle, and after a landing gear inspection by Collins '
             'remaining in the Command/Service Module Columbia, began their '
             'descent. After overcoming several computer overload alarms '
             'caused by an antenna switch left in the wrong position, and a '
             'slight downrange error, Armstrong took over manual flight '
             'control at about 180 meters (590 ft), and guided the Lunar '
             'Module to a safe landing spot at 20:18:04 UTC, July 20, 1969 '
             '(3:17:04 pm CDT). The first humans on the Moon would wait '
             'another six hours before they ventured out of their craft. At '
             '02:56 UTC, July 21 (9:56 pm CDT July 20), Armstrong became the '
             'first human to set foot on the Moon.',
  'end': 80,
  'score': 0.9998037815093994,
  'start': 71},
 {'answer': 'Aldrin',
  'context': 'The first step was witnessed by at least one-fifth of the '
             'population of Earth, or about 723 million people. His first '
             "words when he stepped off the LM's landing footpad were, "
             '"That\'s one small step for [a] man, one giant leap for '
             'mankind." Aldrin joined him on the surface almost 20 minutes '
             'later. Altogether, they spent just under two and one-quarter '
             'hours outside their craft. The next day, they performed the '
             'first launch from another celestial body, and rendezvoused back '
             'with Columbia.',
  'end': 246,
  'score': 0.6958656907081604,
  'start': 240},
 {'answer': 'Frank Borman',
  'context': 'On December 21, 1968, Frank Borman, James Lovell, and William '
             'Anders became the first humans to ride the Saturn V rocket into '
             'space on Apollo 8. They also became the first to leave low-Earth '
             'orbit and go to another celestial body, and entered lunar orbit '
             'on December 24. They made ten orbits in twenty hours, and '
             'transmitted one of the most watched TV broadcasts in history, '
             'with their Christmas Eve program from lunar orbit, that '
             'concluded with a reading from the biblical Book of Genesis. Two '
             'and a half hours after the broadcast, they fired their engine to '
             'perform the first trans-Earth injection to leave lunar orbit and '
             'return to the Earth. Apollo 8 safely landed in the Pacific ocean '
             "on December 27, in NASA's first dawn splashdown and recovery.",
  'end': 34,
  'score': 0.49247056245803833,
  'start': 22}]

The correct answer is returned first, followed by answers from closely related passages. These are great results.
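
Once you are done experimenting, you can delete the index to free up resources:

# delete the index when it is no longer needed
pinecone.delete_index(index_name)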

Example Application

To try out an application like this one, see this example application.