Video Transcript Search
We will work through an example of indexing and querying a dataset of YouTube video transcriptions. The prerequisite packages can be installed with:
!pip install -U datasets sentence-transformers pinecone-client tqdm
We start by loading the dataset.
from datasets import load_dataset
ytt = load_dataset(
    "pinecone/yt-transcriptions",
    split="train",
    revision="926a45"
)
ytt
Dataset({
features: ['video_id', 'text', 'start_second', 'end_second', 'url', 'title', 'thumbnail'],
num_rows: 11298
})
Each sample includes video-level information (ID, title, url and thumbnail) and snippet-level information (text, start_second, end_second).
for x in ytt:
    print(x)
    break
{'video_id': 'ZPewmEu7644', 'text': " hi this is Jeff Dean welcome to applications of deep neural networks of Washington University in this video we're going to look at how we can use ganz to generate additional training data for the latest on my a I course and projects click subscribe in the bell next to it to be notified of every new video Dan's have a wide array of uses beyond just the face generation that you", 'start_second': 0, 'end_second': 20, 'url': 'https://www.youtube.com/watch?v=ZPewmEu7644&t=0s', 'title': 'GANS for Semi-Supervised Learning in Keras (7.4)', 'thumbnail': 'https://i.ytimg.com/vi/ZPewmEu7644/maxresdefault.jpg'}
Inserting Documents into a Pinecone Index
The next step is indexing this dataset in Pinecone. For this, we need a sentence transformer model to encode the text into embeddings and a Pinecone index.
We will initialize the sentence transformer first.
from sentence_transformers import SentenceTransformer
retriever = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_mpnet-base')
retriever
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
We can see the embedding dimension of 768 above. We will need this when creating our Pinecone index.
embed_dim = retriever.get_sentence_embedding_dimension()
embed_dim
768
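As a quick sanity check, we can encode an example string and confirm that the resulting vector length matches this dimension (the snippet below is purely illustrative; any string will do):

# encode an example string; the vector length should equal embed_dim
sample_vec = retriever.encode("what is deep learning?")
assert sample_vec.shape[0] == embed_dim  # 768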
Now we can initialize our index.
import pinecone
# get api key from app.pinecone.io
pinecone.init(
    api_key="<<YOUR_API_KEY>>",
    environment="<<YOUR_ENVIRONMENT>>"
)
# create index
pinecone.create_index(
    "youtube-search",
    dimension=embed_dim,
    metric="cosine"
)
# connect to new index
index = pinecone.Index("youtube-search")
You can find your environment in the Pinecone console under API Keys.
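Before inserting any data, we can optionally confirm that the index was created with the expected configuration (a quick check using the same client initialized above):

# the new index should show up in the list of indexes
pinecone.list_indexes()  # ['youtube-search', ...]
# and its dimension and metric should match what we specified
pinecone.describe_index("youtube-search")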
We will index our data in batches of 64. The data we insert into our index will be records (here, documents), each containing a unique document/snippet ID, an embedding, and metadata, in the following format:
(
    'doc-id',
    [0.0, 0.3, 0.1, ...],
    {'title': '???', 'start_second': 12, ...}
)
To create these documents and insert them to Pinecone, we run the following loop:
from tqdm.auto import tqdm

batch_size = 64

for i in tqdm(range(0, len(ytt), batch_size)):
    # find end of batch
    i_end = min(i+batch_size, len(ytt))
    # extract batch from YT transcriptions data
    batch = ytt[i:i_end]
    # encode batch of text
    embeds = retriever.encode(batch['text']).tolist()
    # each snippet needs a unique ID
    # we will merge video ID and start_second for this
    ids = [f"{x[0]}-{x[1]}" for x in zip(batch['video_id'], batch['start_second'])]
    # create metadata records
    meta = [{
        'video_id': x[0],
        'title': x[1],
        'text': x[2],
        'start_second': x[3],
        'end_second': x[4],
        'url': x[5],
        'thumbnail': x[6]
    } for x in zip(
        batch['video_id'],
        batch['title'],
        batch['text'],
        batch['start_second'],
        batch['end_second'],
        batch['url'],
        batch['thumbnail']
    )]
    # create list of (ID, vector, metadata) tuples to upsert
    to_upsert = list(zip(ids, embeds, meta))
    # add to Pinecone
    index.upsert(vectors=to_upsert)
index.describe_index_stats()
{'dimension': 768,
'index_fullness': 0.01,
'namespaces': {'': {'vector_count': 11298}}}
Using index.describe_index_stats() we can see that the index now contains 11,298 vectors, the full pinecone/yt-transcriptions dataset.
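To check this programmatically, the vector count reported by the index should match the number of rows in the dataset (a minimal sanity check, assuming the upsert loop above ran to completion):

# the number of indexed vectors should equal the dataset size
stats = index.describe_index_stats()
stats['namespaces']['']['vector_count'] == len(ytt)  # True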
Querying
When querying, we encode our query text with the same retriever model and pass the resulting vector to the Pinecone query endpoint.
query = "What is deep learning?"
xq = retriever.encode(query).tolist()
xc = index.query(xq, top_k=5,
include_metadata=True)
for context in xc['matches']:
print(context['metadata']['text'], end="\n---\n")
terms of optimization but what's the algorithm for updating the parameters or updating whatever the state of the network is and then the the last part is the the data set like how do you actually represent the world as it comes into your machine learning system so I think of deep learning as telling us something about what does the model look like and basically to qualify as deep I
---
any theoretical components any theoretical things that you need to understand about deep learning can be sick later for that link again just watched the word doc file again in that I mentioned the link also the second channel is my channel because deep learning might be complete deep learning playlist that I have created is completely in order okay to the other
---
under a rock for the last few years you have heard of the deep networks and how they have revolutionised computer vision and kind of the standard classic way of doing this is it's basically a classic supervised learning problem you are giving a network which you can think of as a big black box a pairs of input images and output labels XY pairs okay and this big black box essentially you
---
do the task at hand. Now deep learning is just a subset of machine learning which takes this idea even a step further and says how can we automatically extract the useful pieces of information needed to inform those future predictions or make a decision And that's what this class is all about teaching algorithms how to learn a task directly from raw data. We want to
---
algorithm and yelled at everybody in a good way that nobody was answering it correctly everybody knew what the alkyl it was graduate course everybody knew what an algorithm was but they weren't able to answer it well let me ask you in that same spirit what is deep learning I would say deep learning is any kind of machine learning that involves learning parameters of more than one consecutive
---
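Each match also carries the video metadata we stored during indexing, so we can format the results into something more readable, such as the title, similarity score, and a timestamped link. A minimal sketch (the exact formatting is just one option):

# show the title, similarity score, and timestamped link for each match
for match in xc['matches']:
    meta = match['metadata']
    print(f"{meta['title']} (score: {match['score']:.2f})")
    # numeric metadata comes back as floats, so cast the timestamp to int
    print(f"{meta['url']} (starts at {int(meta['start_second'])}s)")
    print('---')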
Example application
To try out an application like this one, see this example application.