Record format

When you upsert raw text for Pinecone to convert to vectors automatically, each record consists of the following:
  • ID: A unique string identifier for the record.
  • Text: The raw text for Pinecone to convert to a dense vector for semantic search or a sparse vector for lexical search, depending on the embedding model integrated with the index. This field name must match the embed.field_map defined in the index.
  • Metadata (optional): All additional fields are stored as record metadata. You can filter by metadata when searching or deleting records.
Upserting raw text is supported only for indexes with integrated embedding.
Example:
{
  "_id": "document1#chunk1", 
  "chunk_text": "First chunk of the document content...", // Text to convert to a vector. 
  "document_id": "document1", // This and subsequent fields stored as metadata. 
  "document_title": "Introduction to Vector Databases",
  "chunk_number": 1,
  "document_url": "https://example.com/docs/document1", 
  "created_at": "2024-01-15",
  "document_type": "tutorial"
}
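The chunk_text field above must match the embed.field_map configured on the index. For reference, here is a minimal sketch of creating an index with integrated embedding that maps the model's text input to chunk_text (the index name, embedding model, cloud, and region are assumptions):
Python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Create a dense index with integrated embedding.
# "chunk_text" must match the field name used in your records.
pc.create_index_for_model(
    name="example-index",        # assumed index name
    cloud="aws",                 # assumed cloud
    region="us-east-1",          # assumed region
    embed={
        "model": "llama-text-embed-v2",        # assumed embedding model
        "field_map": {"text": "chunk_text"}    # maps the model's text input to the record field
    }
)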

Use structured IDs

Use a structured, human-readable format for record IDs, including ID prefixes that reflect the type of data you’re storing, for example:
  • Document chunks: document_id#chunk_number
  • User data: user_id#data_type#item_id
  • Multi-tenant data: tenant_id#document_id#chunk_id
Choose a delimiter for your ID prefixes that won’t appear elsewhere in your IDs. Common patterns include:
  • document1#chunk1 - Using hash delimiter
  • document1_chunk1 - Using underscore delimiter
  • document1:chunk1 - Using colon delimiter
Structuring IDs in this way provides several advantages:
  • Efficiency: Applications can quickly identify which records they should operate on.
  • Clarity: Developers can easily understand what they’re looking at when examining records.
  • Flexibility: ID prefixes enable list operations for fetching and updating records.
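For illustration, chunk IDs in the document_id#chunk_number format can be generated when splitting a document (a minimal sketch; the helper name is hypothetical):
# Hypothetical helper that builds structured IDs like "document1#chunk1"
def make_chunk_id(document_id: str, chunk_number: int) -> str:
    return f"{document_id}#chunk{chunk_number}"

chunk_ids = [make_chunk_id("document1", n) for n in range(1, 4)]
print(chunk_ids)  # ['document1#chunk1', 'document1#chunk2', 'document1#chunk3']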

Include metadata

Include metadata key-value pairs that support your application's key operations, such as filtering searches or scoping deletions. Metadata keys must be strings, and metadata values must be one of the following data types:
  • String
  • Number (integer or floating point; stored as a 64-bit floating point value)
  • Boolean (true, false)
  • List of strings
Pinecone supports 40 KB of metadata per record.
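For example, a single record's metadata might combine these types as follows (field names are illustrative):
metadata = {
    "document_title": "Introduction to Vector Databases",  # string
    "chunk_number": 1,                                      # number (stored as 64-bit floating point)
    "is_published": True,                                   # boolean
    "tags": ["databases", "vectors", "tutorial"]            # list of strings
}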

Example

This example demonstrates how to manage document chunks in Pinecone using structured IDs and comprehensive metadata. It covers the complete lifecycle of chunked documents: upserting, searching, fetching, updating, and deleting chunks, and updating an entire document.

Upsert chunks

When upserting documents that have been split into chunks, combine structured IDs with comprehensive metadata:
Upserting raw text is supported only for indexes with integrated embedding.
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

index.upsert_records(
  "example-namespace",
  [
    {
      "_id": "document1#chunk1", 
      "chunk_text": "First chunk of the document content...",
      "document_id": "document1",
      "document_title": "Introduction to Vector Databases",
      "chunk_number": 1,
      "document_url": "https://example.com/docs/document1",
      "created_at": "2024-01-15",
      "document_type": "tutorial"
    },
    {
      "_id": "document1#chunk2", 
      "chunk_text": "Second chunk of the document content...",
      "document_id": "document1",
      "document_title": "Introduction to Vector Databases", 
      "chunk_number": 2,
      "document_url": "https://example.com/docs/document1",
      "created_at": "2024-01-15",
      "document_type": "tutorial"
    },
    {
      "_id": "document1#chunk3", 
      "chunk_text": "Third chunk of the document content...",
      "document_id": "document1",
      "document_title": "Introduction to Vector Databases",
      "chunk_number": 3, 
      "document_url": "https://example.com/docs/document1",
      "created_at": "2024-01-15",
      "document_type": "tutorial"
    },
  ]
)

Search chunks

To search the chunks of a document, use a metadata filter expression that limits the search appropriately:
Searching with text is supported only for indexes with integrated embedding.
Python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

filtered_results = index.search(
    namespace="example-namespace", 
    query={
        "inputs": {"text": "What is a vector database?"}, 
        "top_k": 3,
        "filter": {"document_id": "document1"}
    },
    fields=["chunk_text"]
)

print(filtered_results)

Fetch chunks

To retrieve all chunks for a specific document, first list the record IDs using the document prefix, and then fetch the complete records:
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

# List all chunks for document1 using ID prefix
chunk_ids = []
for ids in index.list(prefix='document1#', namespace='example-namespace'):
    # index.list() yields pages (lists) of IDs, so extend rather than append
    chunk_ids.extend(ids)

print(f"Found {len(chunk_ids)} chunks for document1")

# Fetch the complete records by ID
if chunk_ids:
    records = index.fetch(ids=chunk_ids, namespace='example-namespace')
    
    for record_id, record_data in records['vectors'].items():
        print(f"Chunk ID: {record_id}")
        print(f"Chunk text: {record_data['metadata']['chunk_text']}")
        # Process the vector values and metadata as needed
Pinecone is eventually consistent, so it’s possible that a write (upsert, update, or delete) followed immediately by a read (query, list, or fetch) may not return the latest version of the data. If your use case requires retrieving data immediately, consider implementing a small delay or retry logic after writes.

Update chunks

To update specific chunks within a document, first list the chunk IDs, and then update individual records:
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

# List all chunks for document1
chunk_ids = []
for ids in index.list(prefix='document1#', namespace='example-namespace'):
    chunk_ids.extend(ids)

# Update specific chunks (e.g., update chunk 2)
if 'document1#chunk2' in chunk_ids:
    index.update(
        id='document1#chunk2',
        values=[<new dense vector>],
        set_metadata={
            "document_id": "document1",
            "document_title": "Introduction to Vector Databases - Revised",
            "chunk_number": 2,
            "chunk_text": "Updated second chunk content...",
            "document_url": "https://example.com/docs/document1",
            "created_at": "2024-01-15",
            "updated_at": "2024-02-15",
            "document_type": "tutorial"
        },
        namespace='example-namespace'
    )
    print("Updated chunk 2 successfully")

Delete chunks

To delete chunks of a document, use a metadata filter expression that limits the deletion appropriately:
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

# Delete chunks 1 and 3
index.delete(
    namespace="example-namespace",
    filter={
        "document_id": {"$eq": "document1"},
        "chunk_number": {"$in": [1, 3]}
    }
)

# Delete all chunks for a document
index.delete(
    namespace="example-namespace",
    filter={
        "document_id": {"$eq": "document1"}
    }
)

Update an entire document

When the number or ordering of a document's chunks changes, the recommended approach is to first delete all existing chunks using a metadata filter and then upsert the new chunks:
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

# Step 1: Delete all existing chunks for the document
index.delete(
    namespace="example-namespace",
    filter={
        "document_id": {"$eq": "document1"}
    }
)

print("Deleted existing chunks for document1")

# Step 2: Upsert the updated document chunks
index.upsert(
  namespace="example-namespace", 
  vectors=[
    {
      "id": "document1#chunk1",
      "values": [<updated dense vector>],
      "metadata": {
        "document_id": "document1",
        "document_title": "Introduction to Vector Databases - Updated Edition",
        "chunk_number": 1,
        "chunk_text": "Updated first chunk with new content...",
        "document_url": "https://example.com/docs/document1",
        "created_at": "2024-02-15",
        "document_type": "tutorial",
        "version": "2.0"
      }
    },
    {
      "id": "document1#chunk2",
      "values": [<updated dense vector>],
      "metadata": {
        "document_id": "document1",
        "document_title": "Introduction to Vector Databases - Updated Edition",
        "chunk_number": 2,
        "chunk_text": "Updated second chunk with new content...",
        "document_url": "https://example.com/docs/document1",
        "created_at": "2024-02-15",
        "document_type": "tutorial",
        "version": "2.0"
      }
    }
    # Add more chunks as needed for the updated document
  ]
)

print("Successfully updated document1 with new chunks")

Data freshness

Pinecone is eventually consistent, so it’s possible that a write (upsert, update, or delete) followed immediately by a read (query, list, or fetch) may not return the latest version of the data. If your use case requires retrieving data immediately, consider implementing a small delay or retry logic after writes.
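If your application needs read-after-write behavior, one option is to poll until the newly written record is visible (a minimal sketch; the record ID, namespace, and retry limits are assumptions):
Python
import time

def wait_for_record(index, record_id, namespace, retries=5, delay=1.0):
    # Poll fetch until the freshly written record is readable, or give up.
    for _ in range(retries):
        response = index.fetch(ids=[record_id], namespace=namespace)
        if record_id in response['vectors']:
            return True
        time.sleep(delay)
    return False

# Example usage after an upsert:
# wait_for_record(index, "document1#chunk1", "example-namespace")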

Design for multi-tenancy

Many applications have a concept of tenants—users, organizations, projects, or other groups that should only access their own data. How you model this access control significantly impacts query performance and cost.

Use namespaces for tenant isolation

The most efficient way to implement multi-tenancy is to use namespaces to separate data by tenant. With this approach, each tenant has their own namespace, and queries only scan that tenant’s data—resulting in better performance and lower costs. For a complete implementation guide with examples across all SDKs, see Implement multitenancy.
When you use namespaces for multi-tenancy:
  • Lower query costs and faster performance: Query cost is based on namespace size. If you have 100 tenants with 1 GB each, querying one tenant’s namespace costs 1 RU and scans only 1 GB. With metadata filtering in a single namespace (100 GB total), the same query costs 100 RUs and scans all 100 GB, even though the filter narrows results.
  • Natural isolation: Reduces the risk of application bugs that could query the wrong tenant’s data (for example, by passing an incorrect filter value).
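For example, with one namespace per tenant, the tenant ID selects the namespace at query time, so no other tenant's data is scanned (a minimal sketch; the tenant ID is an assumption):
tenant_id = "tenant_123"  # assumed tenant identifier
query_vector = [0.1, 0.2, 0.3, ...]  # Your query vector

# The query runs only against this tenant's namespace.
results = index.query(
    namespace=tenant_id,
    vector=query_vector,
    top_k=10
)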

Avoid filtering by high-cardinality IDs

A common anti-pattern is storing all data in a single namespace and using metadata filters to scope queries to specific users:
# Anti-pattern: Filtering by many user IDs
query_vector = [0.1, 0.2, 0.3, ...]  # Your query vector
results = index.query(
    vector=query_vector,
    top_k=10,
    filter={
        "allowed_user_ids": {"$in": ["user_1", "user_2", ..., "user_10000"]}
    }
)
This approach has several drawbacks:
  • Performance degradation: Large $in filters increase network payload size and query latency.
  • Hard limits: Each $in or $nin operator is limited to 10,000 values. Exceeding this limit will cause the request to fail. See Metadata filter limits.

Use access control groups instead of individual IDs

If data must be shared across many tenants, design your access control using the smallest number of groups that describe a user’s access:
# Better: Filter by organization or role instead of individual users
query_vector = [0.1, 0.2, 0.3, ...]  # Your query vector
results = index.query(
    vector=query_vector,
    top_k=10,
    filter={
        "$or": [
            {"organization_id": {"$eq": "org_A"}},
            {"project_id": {"$eq": "project_B"}}
        ]
    }
)
Instead of passing thousands of user IDs, this filter uses only 2 group identifiers to achieve the same access control.

Multitenancy patterns

The following table provides general guidelines for choosing a multitenancy approach. Evaluate your specific use case, access patterns, and requirements to determine the best fit for your application.
| Data pattern | Recommended approach | Query cost | Performance |
| --- | --- | --- | --- |
| Each tenant's data is completely separate | One index, one namespace per tenant | Lowest (scans only tenant namespace) | Fastest |
| Large tenants with many sub-groups | One index per large tenant, namespaces for sub-groups | Low (scans only sub-group namespace) | Fast |
| Data shared across tenants | One index, shared namespace, filter by group IDs (org, project, role) | Higher (scans entire shared namespace) | Slower |
Avoid filtering by large lists of individual user IDs (for example, {"user_id": {"$in": ["user_1", "user_2", ..., "user_10000"]}}). This approach has the following drawbacks:
  • Hard limits: Each $in or $nin operator is limited to 10,000 values. Exceeding this limit will cause requests to fail.
  • Performance: Large filters increase query latency.
  • Higher costs: You pay for scanning the entire shared namespace, even though the filter narrows results.
Instead, consider these alternatives:
  • Use one namespace per tenant (see row 1 in the table above).
  • Filter by broader groups like organization, project, or role rather than individual user IDs (see row 3 in the table above).
  • Retrieve a larger top K without filtering (for example, top 1000), then filter the results client-side, as sketched below.
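A minimal sketch of that client-side filtering approach (the allowed-ID set, metadata field, and top K values are assumptions):
allowed_user_ids = {"user_1", "user_2", "user_3"}  # assumed access list for the caller
query_vector = [0.1, 0.2, 0.3, ...]  # Your query vector

# Over-fetch without a filter, then keep only matches the caller may see.
results = index.query(vector=query_vector, top_k=1000, include_metadata=True)
visible = [
    match for match in results.matches
    if match.metadata and match.metadata.get("user_id") in allowed_user_ids
][:10]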
For a complete step-by-step implementation guide, see Implement multitenancy.