Table Question Answering
Table Question Answering (Table QA) is the task of extracting precise answers to a user's question from tables. Thanks to recent work on Table QA, it is now possible to answer natural language queries directly from tabular data. This notebook demonstrates how you can build a Table QA system that answers your natural language queries using the Pinecone vector database.
We need three main components to build the Table QA system:
- A vector index to store table embeddings
- A retriever model for embedding queries and tables
- A reader model to read the tables and extract answers
Install Dependencies
# torch-scatter may take a few minutes to install
!pip install datasets pinecone-client sentence_transformers torch-scatter
Load the Dataset
We will work with a subset of the Open Table-and-Text Question Answering (OTT-QA) dataset, consisting of texts and tables from Wikipedia. The subset contains 20,000 tables and can be loaded from the Hugging Face Datasets hub as follows:
from datasets import load_dataset
# load the dataset from huggingface datasets hub
data = load_dataset("ashraq/ott-qa-20k", split="train")
data
Dataset({
features: ['url', 'title', 'header', 'data', 'section_title', 'section_text', 'uid', 'intro'],
num_rows: 20000
})
data[2]
{'url': 'https://en.wikipedia.org/wiki/1976_New_York_Mets_season',
'title': '1976 New York Mets season',
'header': ['Level', 'Team', 'League', 'Manager'],
'data': [['AAA', 'Tidewater Tides', 'International League', 'Tom Burgess'],
['AA', 'Jackson Mets', 'Texas League', 'John Antonelli'],
['A', 'Lynchburg Mets', 'Carolina League', 'Jack Aker'],
['A', 'Wausau Mets', 'Midwest League', 'Bill Monbouquette'],
['Rookie', 'Marion Mets', 'Appalachian League', 'Al Jackson']],
'section_title': 'Farm system',
'section_text': 'See also : Minor League Baseball',
'uid': '1976_New_York_Mets_season_7',
'intro': 'The New York Mets season was the 15th regular season for the Mets, who played home games at Shea Stadium. Led by manager Joe Frazier, the team had an 86-76 record and finished in third place in the National League East.'}
As we can see, the dataset includes both textual and tabular data that are related to one another. Since we will only be using the tables in this example, let's extract them and convert them into pandas dataframes.
import pandas as pd
# store all tables in the tables list
tables = []
# loop through the dataset and convert tabular data to pandas dataframes
for doc in data:
table = pd.DataFrame(doc["data"], columns=doc["header"])
tables.append(table)
tables[2]
|   | Level  | Team            | League               | Manager           |
|---|--------|-----------------|----------------------|-------------------|
| 0 | AAA    | Tidewater Tides | International League | Tom Burgess       |
| 1 | AA     | Jackson Mets    | Texas League         | John Antonelli    |
| 2 | A      | Lynchburg Mets  | Carolina League      | Jack Aker         |
| 3 | A      | Wausau Mets     | Midwest League       | Bill Monbouquette |
| 4 | Rookie | Marion Mets     | Appalachian League   | Al Jackson        |
Initialize Retriever
The retriever transforms natural language queries and tabular data into embeddings/vectors. It generates these embeddings such that natural language questions and the tables containing their answers lie close together in the vector space.
We will use a SentenceTransformer model trained specifically for embedding tabular data for retrieval tasks. The model can be loaded from the Hugging Face Models hub as follows:
import torch
from sentence_transformers import SentenceTransformer
# set device to GPU if available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# load the table embedding model from huggingface models hub
retriever = SentenceTransformer("deepset/all-mpnet-base-v2-table", device=device)
retriever
SentenceTransformer(
(0): Transformer({'max_seq_length': 384, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
The retriever expects tables to be in a particular format. Let's write a function to convert the tables to this format.
def _preprocess_tables(tables: list):
    processed = []
    # loop through all tables
    for table in tables:
        # convert the table to CSV format
        processed_table = table.to_csv(index=False)
        # add the processed table to the processed list
        processed.append(processed_table)
    return processed
Notice that we are only using the tables here. However, if you want the retriever to take metadata into account when retrieving tables, you can prepend any metadata strings, such as title, section_title, etc., to the processed table, separated by newline characters. A sketch of this is shown below.
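As a minimal sketch, assuming the same data records and tables dataframes created earlier (the function name _preprocess_tables_with_metadata is ours, not part of the original pipeline):

def _preprocess_tables_with_metadata(docs, tables):
    processed = []
    for doc, table in zip(docs, tables):
        # prepend title and section_title, separated by newlines,
        # so the retriever can use them as additional context
        processed.append(
            "\n".join([doc["title"], doc["section_title"], table.to_csv(index=False)])
        )
    return processed

# e.g. processed_tables = _preprocess_tables_with_metadata(data, tables)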
Let's take a look at the formatted tables.
# format all the dataframes in the tables list
processed_tables = _preprocess_tables(tables)
# display the formatted table
processed_tables[2]
'Level,Team,League,Manager\nAAA,Tidewater Tides,International League,Tom Burgess\nAA,Jackson Mets,Texas League,John Antonelli\nA,Lynchburg Mets,Carolina League,Jack Aker\nA,Wausau Mets,Midwest League,Bill Monbouquette\nRookie,Marion Mets,Appalachian League,Al Jackson\n'
The formatted table may not make sense to us, but the embedding model is trained to understand it and generate accurate embeddings.
Initialize Pinecone Index
We will use the Pinecone vector database as our vector index. The Pinecone index stores vector representations of our tables, which we can retrieve using a natural language query (query vector). Pinecone does this by computing the similarity between the query vector and the embedded tables stored in the vector index.
To use Pinecone, we first need to initialize a connection. For this, we need a free API key; you can find your API key and environment in the Pinecone console under API Keys. We initialize the connection like so:
import pinecone
# connect to pinecone environment
pinecone.init(
    api_key="YOUR_API_KEY",
    environment="YOUR_ENVIRONMENT"
)
Now we create a new index. We specify the metric type as "cosine" and the dimension as 768 because the retriever we use to generate table embeddings outputs 768-dimensional vectors. Pinecone will use cosine similarity to compute the similarity between the query and table embeddings.
# you can choose any name for the index
index_name = "table-qa"
# check if the table-qa index exists
if index_name not in pinecone.list_indexes():
    # create the index if it does not exist
    pinecone.create_index(
        index_name,
        dimension=768,
        metric="cosine"
    )
# connect to the table-qa index we created
index = pinecone.Index(index_name)
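For intuition, the snippet below illustrates what cosine similarity computes; Pinecone does this server-side, so it is not part of the pipeline. Since the retriever's Normalize() layer L2-normalizes its outputs, the metric reduces to a simple dot product:

import numpy as np

# illustration only: Pinecone computes this similarity server-side
def cosine_similarity(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))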
Generate Embeddings and Upsert
Next, we need to generate the table embeddings and upsert them to the Pinecone index. We can do that as follows:
from tqdm.auto import tqdm
# we will use batches of 64
batch_size = 64
for i in tqdm(range(0, len(processed_tables), batch_size)):
    # find end of batch
    i_end = min(i + batch_size, len(processed_tables))
    # extract batch
    batch = processed_tables[i:i_end]
    # generate embeddings for batch
    emb = retriever.encode(batch).tolist()
    # create unique IDs ranging from zero to the total number of tables in the dataset
    ids = [f"{idx}" for idx in range(i, i_end)]
    # add all to upsert list
    to_upsert = list(zip(ids, emb))
    # upsert/insert these records to pinecone
    _ = index.upsert(vectors=to_upsert)
# check that we have all vectors in index
index.describe_index_stats()
100%|██████████| 313/313 [09:12<00:00, 1.49s/it]
{'dimension': 768,
'index_fullness': 0.0,
'namespaces': {'': {'vector_count': 20000}},
'total_vector_count': 20000}
Now the Pinecone index is ready for querying. Let's test to see if it returns tables relevant to our queries.
query = "which country has the highest GDP in 2020?"
# generate embedding for the query
xq = retriever.encode([query]).tolist()
# query pinecone index to find the table containing answer to the query
result = index.query(xq, top_k=1)
result
{'matches': [{'id': '19931', 'score': 0.822087, 'values': []}], 'namespace': ''}
The Pinecone index has returned the id of a table that should contain the answer to our query, with a cosine similarity score of 0.82. Let's see if this table actually contains the answer. We can use the returned id as an index to get the relevant pandas dataframe from the tables list.
id = int(result["matches"][0]["id"])
tables[id].head()
|   | Rank | Country   | GDP (PPP, Peak Year) millions of USD | Peak Year |
|---|------|-----------|--------------------------------------|-----------|
| 0 | 1    | China     | 27,804,953                           | 2020      |
| 1 | 2    | India     | 11,321,280                           | 2020      |
| 2 | 3    | Russia    | 4,389,960                            | 2019      |
| 3 | 4    | Indonesia | 3,778,134                            | 2020      |
| 4 | 5    | Brazil    | 3,596,841                            | 2020      |
The table returned by the Pinecone index indeed has the answer to our query. Now we need a model that can read this table and extract the precise answer.
Initialize Table Reader
As the reader, we will use a TAPAS model fine-tuned for the Table QA task. TAPAS is a BERT-like Transformer model pretrained in a self-supervised manner on a large corpus of English data from Wikipedia. We load the model and tokenizer from the Hugging Face model hub into a question-answering pipeline.
from transformers import pipeline, TapasTokenizer, TapasForQuestionAnswering
model_name = "google/tapas-base-finetuned-wtq"
# load the tokenizer and the model from huggingface model hub
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name, local_files_only=False)
# load the model and tokenizer into a question-answering pipeline
pipe = pipeline("table-question-answering", model=model, tokenizer=tokenizer, device=device)
Let's pass the table returned by the Pinecone index, along with the query we used before, to the question-answering pipeline to extract the answer.
pipe(table=tables[id], query=query)
{'answer': 'China',
'coordinates': [(0, 1)],
'cells': ['China'],
'aggregator': 'NONE'}
The model has precisely answered our query. Let's run some more queries.
Querying
First, we will define two functions: one to handle our queries and one to extract answers from tables.
def query_pinecone(query):
    # generate embedding for the query
    xq = retriever.encode([query]).tolist()
    # query the pinecone index to find the table containing the answer
    result = index.query(xq, top_k=1)
    # return the relevant table from the tables list
    return tables[int(result["matches"][0]["id"])]

def get_answer_from_table(table, query):
    # run the table and query through the question-answering pipeline
    answers = pipe(table=table, query=query)
    return answers
query = "which car manufacturers produce cars with a top speed of above 180 kph?"
table = query_pinecone(query)
table
|   | Manufacturer | Model           | Engine                          | Power Output | Max. Speed (kph) | Dry Weight (kg) |
|---|--------------|-----------------|---------------------------------|--------------|------------------|-----------------|
| 0 | Fiat         | 805-405         | FIAT 1979cc S6 supercharged     | 130 bhp      | 220              | 680             |
| 1 | Alfa Romeo   | GPR ( P1 )      | Alfa Romeo 1990cc S6            | 95 bhp       | 180              | 850             |
| 2 | Diatto       | Tipo 20 S       | Diatto 1997cc S4                | 75 bhp       | 155              | 700             |
| 3 | Bugatti      | Type 32         | Bugatti 1991cc S8               | 100 bhp      | 190              | 660             |
| 4 | Voisin       | C6 Laboratoire  | Voisin 1978cc S6                | 90 bhp       | 175              | 710             |
| 5 | Sunbeam      |                 | Sunbeam 1988cc S6               | 108 bhp      | 180              | 675             |
| 6 | Mercedes     | M7294           | Mercedes 1990cc S4 supercharged | 120 bhp      | 180              | 750             |
| 7 | Benz         | RH Tropfenwagen | Benz 1998cc S6                  | 95 bhp       | 185              | 745             |
| 8 | Miller       | 122             | Miller 1978cc S8                | 120 bhp      | 186              | 850             |
get_answer_from_table(table, query)
{'answer': 'Fiat, Bugatti, Benz, Miller',
'coordinates': [(0, 0), (3, 0), (7, 0), (8, 0)],
'cells': ['Fiat', 'Bugatti', 'Benz', 'Miller'],
'aggregator': 'NONE'}
query = "which scientist is known for improving the steam engine?"
table = query_pinecone(query)
table.head()
|   | Year | Name                 | Location                    | Rationale                                         |
|---|------|----------------------|-----------------------------|---------------------------------------------------|
| 0 | 1839 | Robert Hare          | Philadelphia , Pennsylvania | Inventor of the oxy-hydrogen blowpipe             |
| 1 | 1862 | John Ericsson        | New York , New York         | His work improved the field of heat management... |
| 2 | 1865 | Daniel Treadwell     | Cambridge , Massachusetts   | Heat management . He was awarded especially fo... |
| 3 | 1866 | Alvan Clark          | Cambridge , Massachusetts   | Improved refracting telescopes                    |
| 4 | 1869 | George Henry Corliss | Providence , Rhode Island   | For improving the steam engine                    |
get_answer_from_table(table, query)
{'answer': 'George Henry Corliss',
'coordinates': [(4, 1)],
'cells': ['George Henry Corliss'],
'aggregator': 'NONE'}
query = "What is the Maldivian island name for Oblu Select at Sangeli resort?"
table = query_pinecone(query)
table.head()
|   | Name       | Resort Name            | Geographic Atoll |
|---|------------|------------------------|------------------|
| 0 | Asdhoo     | Asdu Sun Island Resort | North Male Atoll |
| 1 | Akirifushi | Oblu Select at Sangeli | North Male Atoll |
| 2 | Baros      | Baros Island Resort    | North Male Atoll |
| 3 | Biyaadhoo  | Biyadhoo Island Resort | South Male Atoll |
| 4 | Bodubandos | Bandos Maldives Resort | North Male Atoll |
get_answer_from_table(table, query)
{'answer': 'Akirifushi',
'coordinates': [(1, 0)],
'cells': ['Akirifushi'],
'aggregator': 'NONE'}
As we can see, our Table QA system can retrieve the correct table from the Pinecone index and extract precise answers from it. The TAPAS model we use also supports more advanced queries: it has an aggregation head that indicates whether we need to count, sum, or average cells to answer a question. Let's run some queries that require aggregation.
query = "what was the total GDP of China and Indonesia in 2020?"
table = query_pinecone(query)
table.head()
|   | Rank | Country   | GDP (PPP, Peak Year) millions of USD | Peak Year |
|---|------|-----------|--------------------------------------|-----------|
| 0 | 1    | China     | 27,804,953                           | 2020      |
| 1 | 2    | India     | 11,321,280                           | 2020      |
| 2 | 3    | Russia    | 4,389,960                            | 2019      |
| 3 | 4    | Indonesia | 3,778,134                            | 2020      |
| 4 | 5    | Brazil    | 3,596,841                            | 2020      |
get_answer_from_table(table, query)
{'answer': 'SUM > 27,804,953, 3,778,134',
'coordinates': [(0, 2), (3, 2)],
'cells': ['27,804,953', '3,778,134'],
'aggregator': 'SUM'}
Here the QA system suggests the correct cells to add in order to get the total GDP of China and Indonesia in 2020.
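Note that the pipeline returns the cells and the aggregation operator but does not compute the final value itself. Here is a minimal sketch of applying the predicted aggregator, assuming numeric cells formatted with thousands separators as above (aggregate_answer is our own helper, not part of the transformers API):

def aggregate_answer(answers):
    cells = answers["cells"]
    aggregator = answers["aggregator"]
    if aggregator == "COUNT":
        return len(cells)
    if aggregator in ("SUM", "AVERAGE"):
        # strip thousands separators before converting to numbers
        values = [float(cell.replace(",", "")) for cell in cells]
        total = sum(values)
        return total if aggregator == "SUM" else total / len(values)
    # NONE: the selected cells are the answer
    return ", ".join(cells)

For the result above, this returns 27,804,953 + 3,778,134 = 31,583,087 (in millions of USD).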
query = "what is the average carbon emission of power stations in australia, canada and germany?"
table = query_pinecone(query)
table.head()
|   | CO2 intensity (kg/kWh) | Power station                                     | Country       |
|---|------------------------|---------------------------------------------------|---------------|
| 0 | 1.58                   | Hazelwood Power Station , Victoria closed 31 M... | Australia     |
| 1 | 1.56                   | Edwardsport IGCC , Edwardsport , Indiana , clo... | United States |
| 2 | 1.27                   | Frimmersdorf power plant , Grevenbroich           | Germany       |
| 3 | 1.25                   | HR Milner Generating Station , Grande Cache , ... | Canada        |
| 4 | 1.18                   | C. TG . Portes Gil , Río Bravo                    | Mexico        |
get_answer_from_table(table, query)
{'answer': 'AVERAGE > 1.58, 1.27, 1.25',
'coordinates': [(0, 0), (2, 0), (3, 0)],
'cells': ['1.58', '1.27', '1.25'],
'aggregator': 'AVERAGE'}
As we can see, the QA system correctly identified which cells to average to answer our question.
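Using the aggregate_answer sketch from earlier, this output can likewise be reduced to a single number:

aggregate_answer(get_answer_from_table(table, query))
# (1.58 + 1.27 + 1.25) / 3 ≈ 1.37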