When you upsert raw text for Pinecone to convert to vectors automatically, each record consists of the following:
ID: A unique string identifier for the record.
Text: The raw text for Pinecone to convert to a dense vector for semantic search or a sparse vector for lexical search, depending on the embedding model integrated with the index. This field name must match the embed.field_map defined in the index.
Metadata (optional): All additional fields are stored as record metadata. You can filter by metadata when searching or deleting records.
{ "_id": "document1#chunk1", "chunk_text": "First chunk of the document content...", // Text to convert to a vector. "document_id": "document1", // This and subsequent fields stored as metadata. "document_title": "Introduction to Vector Databases", "chunk_number": 1, "document_url": "https://example.com/docs/document1", "created_at": "2024-01-15", "document_type": "tutorial"}
When you upsert pre-generated vectors, each record consists of the following:
ID: A unique string identifier for the record.
Values: The dense vector representation of the record, with a length matching the dimension of the index.
Metadata (optional): A flat JSON document containing key-value pairs with additional information (nested objects are not supported). You can filter by metadata when searching or deleting records.
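For illustration, upserting such a record might look like the following. This is a minimal sketch: the vector values are placeholders and must match your index's dimension.
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index(host="INDEX_HOST")

# Each record carries an ID, the pre-generated vector values,
# and an optional flat metadata object.
index.upsert(
    vectors=[
        {
            "id": "document1#chunk1",
            "values": [0.0236, -0.0329, 0.0104],  # placeholder values
            "metadata": {
                "document_id": "document1",
                "chunk_number": 1
            }
        }
    ],
    namespace="example-namespace"
)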
When importing data from object storage, records must be in Parquet format. For more details, see Import data.
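As a rough illustration of producing such a file with pyarrow, the sketch below assumes a layout of an id string column, a values list-of-floats column, and a metadata column holding a JSON string; confirm the exact schema in Import data before relying on it.
Python
import json

import pyarrow as pa
import pyarrow.parquet as pq

# Assumed columns: id (string), values (list of floats),
# metadata (JSON-encoded string). See Import data for the exact schema.
table = pa.table({
    "id": ["document1#chunk1", "document1#chunk2"],
    "values": [[0.0236, -0.0329], [0.0104, 0.0086]],
    "metadata": [
        json.dumps({"document_id": "document1", "chunk_number": 1}),
        json.dumps({"document_id": "document1", "chunk_number": 2})
    ]
})
pq.write_table(table, "records.parquet")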
Link related chunks: Use fields like document_id and chunk_number to keep track of related records and enable efficient chunk deletion and document updates (see the upsert sketch after this list).
Link back to original data: Include chunk_text or document_url for traceability and user display.
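To make these practices concrete, the following is a minimal sketch of upserting two linked chunk records, assuming an index with integrated embedding whose embed.field_map points at chunk_text:
Python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index(host="INDEX_HOST")

# The shared "document1#" ID prefix groups the chunks, while document_id
# and chunk_number link them as filterable metadata.
index.upsert_records(
    "example-namespace",
    [
        {
            "_id": "document1#chunk1",
            "chunk_text": "First chunk of the document content...",
            "document_id": "document1",
            "chunk_number": 1,
            "document_url": "https://example.com/docs/document1"
        },
        {
            "_id": "document1#chunk2",
            "chunk_text": "Second chunk of the document content...",
            "document_id": "document1",
            "chunk_number": 2,
            "document_url": "https://example.com/docs/document1"
        }
    ]
)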
Metadata keys must be strings, and metadata values must be one of the following data types:
String
Number (integer or floating point, gets converted to a 64-bit floating point)
Boolean (true/false)
List of strings
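For example, a metadata object that uses only the supported value types:
Python
# A flat metadata object; every value is a string, number,
# boolean, or list of strings.
metadata = {
    "document_id": "document1",       # string
    "chunk_number": 1,                # number (stored as a 64-bit float)
    "is_published": True,             # boolean
    "tags": ["tutorial", "vectors"]   # list of strings
}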
This example demonstrates how to manage document chunks in Pinecone using structured IDs and comprehensive metadata. It covers the complete lifecycle of chunked documents: upserting, searching, fetching, updating, and deleting chunks, as well as updating an entire document.
To search within a specific document on an index with integrated embedding, pass a text query and filter by document_id:
Python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index,
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

filtered_results = index.search(
    namespace="example-namespace",
    query={
        "inputs": {"text": "What is a vector database?"},
        "top_k": 3,
        "filter": {"document_id": "document1"}
    },
    fields=["chunk_text"]
)

print(filtered_results)
If you manage your own vectors, query with a vector representation instead and apply the same metadata filter:
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index,
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

filtered_results = index.query(
    namespace="example-namespace",
    vector=[0.0236663818359375, -0.032989501953125, ..., -0.01041412353515625, 0.0086669921875],
    top_k=3,
    filter={
        "document_id": {"$eq": "document1"}
    },
    include_metadata=True,
    include_values=False
)

print(filtered_results)
To retrieve all chunks for a specific document, first list the record IDs using the document prefix, and then fetch the complete records:
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index,
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

# List all chunks for document1 using the ID prefix.
# index.list() yields pages (lists) of record IDs.
chunk_ids = []
for ids in index.list(prefix='document1#', namespace='example-namespace'):
    chunk_ids.extend(ids)

print(f"Found {len(chunk_ids)} chunks for document1")

# Fetch the complete records by ID
if chunk_ids:
    records = index.fetch(ids=chunk_ids, namespace='example-namespace')
    for record_id, record_data in records['vectors'].items():
        print(f"Chunk ID: {record_id}")
        print(f"Chunk text: {record_data['metadata']['chunk_text']}")
        # Process the vector values and metadata as needed
Pinecone is eventually consistent, so it’s possible that a write (upsert, update, or delete) followed immediately by a read (query, list, or fetch) may not return the latest version of the data. If your use case requires retrieving data immediately, consider implementing a small delay or retry logic after writes.
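If a short wait is acceptable, one possible pattern (a sketch, not an official API; wait_for_record is a hypothetical helper) is to poll fetch after a write until the record becomes visible:
Python
import time

def wait_for_record(index, record_id, namespace, retries=10, delay=0.5):
    # Poll fetch until the freshly written record becomes visible.
    for _ in range(retries):
        result = index.fetch(ids=[record_id], namespace=namespace)
        if record_id in result['vectors']:
            return result['vectors'][record_id]
        time.sleep(delay)
    raise TimeoutError(f"{record_id} not visible after {retries} attempts")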
To update specific chunks within a document, first list the chunk IDs, and then update individual records:
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index,
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

# List all chunks for document1.
# index.list() yields pages (lists) of record IDs.
chunk_ids = []
for ids in index.list(prefix='document1#', namespace='example-namespace'):
    chunk_ids.extend(ids)

# Update specific chunks (e.g., update chunk 2)
if 'document1#chunk2' in chunk_ids:
    index.update(
        id='document1#chunk2',
        values=[<new dense vector>],
        set_metadata={
            "document_id": "document1",
            "document_title": "Introduction to Vector Databases - Revised",
            "chunk_number": 2,
            "chunk_text": "Updated second chunk content...",
            "document_url": "https://example.com/docs/document1",
            "created_at": "2024-01-15",
            "updated_at": "2024-02-15",
            "document_type": "tutorial"
        },
        namespace='example-namespace'
    )
    print("Updated chunk 2 successfully")
To delete specific chunks or all chunks for a document, filter by metadata:
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index,
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

# Delete chunks 1 and 3
index.delete(
    namespace="example-namespace",
    filter={
        "document_id": {"$eq": "document1"},
        "chunk_number": {"$in": [1, 3]}
    }
)

# Delete all chunks for a document
index.delete(
    namespace="example-namespace",
    filter={
        "document_id": {"$eq": "document1"}
    }
)
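Note that deleting by metadata filter is not supported on every index type (serverless indexes in particular may reject it). An alternative, consistent with the listing examples above, is to collect IDs by prefix and delete by ID:
Python
# Alternative: delete a document's chunks by ID prefix.
# index.list() yields pages (lists) of record IDs.
for ids in index.list(prefix='document1#', namespace='example-namespace'):
    index.delete(ids=ids, namespace='example-namespace')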
Pinecone is eventually consistent, so it’s possible that a write (upsert, update, or delete) followed immediately by a read (query, list, or fetch) may not return the latest version of the data. If your use case requires retrieving data immediately, consider implementing a small delay or retry logic after writes.