Documents

A document is the unit of data in an index with a document schema — a JSON object with a required _id field plus any fields declared in the index’s schema. A single document can combine multiple field types (dense vector, sparse vector, full-text strings, and filterable metadata), and the schema, declared at index creation, tells Pinecone how to index each field. Field types:
  • dense_vector — indexed for ANN similarity search.
  • sparse_vector — indexed for sparse-vector lexical search.
  • string with a nested full_text_search config object ({} enables with all defaults; optional sub-fields: language, stemming, stop_words) — indexed for BM25 ranking and Lucene queries. Lowercasing and the token length cap are server-applied and cannot be overridden.
  • string, string_list, float, boolean with filterable: true — indexed for metadata filtering.
Document fields can hold structured values: a string_list field holds an array of strings; a dense_vector field holds an array of floats; and a sparse_vector field is an object with two parallel arrays — indices (token IDs) and values (token weights).

A schema can declare up to 100 string fields with full_text_search enabled, but at most one dense_vector field and at most one sparse_vector field per index. Fields not declared in the schema are stored on the document and returned via include_fields; today, they are also auto-indexed for filtering. Forward-looking: in a future release, only schema-declared fields with filterable: true will be indexed for filtering, so declare metadata fields in the schema now to be future-proof.

Example document for an index with title, body, embedding, and category fields:
{
  "_id": "document1#chunk1",
  "title": "Introduction to Vector Databases",
  "body": "First chunk of the document content...",
  "embedding": [0.0236, -0.0329, ..., -0.0104, 0.0086],
  "category": "tutorial"
}
Field-name rules:
  • Must be unique, non-empty strings.
  • Must not start with _ (reserved for system-managed fields like _id and _score) or $ (reserved for filter operators).
  • Limited to 64 bytes.
For the full schema reference (language and analyzer options, multi-field schemas, scoring methods), see Full-text search.
Chunking granularity. A document is the unit of retrieval — top_k and _score are computed per document, not per sub-section. In public preview, Pinecone does not split a single document into multiple in-document chunks at index time. If your source content is longer than what you want to retrieve as one hit (a long article, a PDF, a transcript), do the chunking in your application before upsert and store each chunk as its own document, with an ID like document1#chunk1, document1#chunk2, and a metadata field that ties chunks back to the parent document for grouping at query time.
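For illustration, here is a minimal application-side chunking sketch; the fixed-size word splitter and the parent_id field name are arbitrary choices for this example, not Pinecone requirements:
Python
# Split source text into chunks in your application, then store each chunk
# as its own document with a structured ID and a field tying it to the parent.

def chunk_words(text, size=200):
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

source_id = "document1"
source_text = "..."  # your long article, PDF text, or transcript

docs = [
    {
        "_id": f"{source_id}#chunk{n}",  # structured chunk ID
        "body": chunk,                   # FTS-enabled string field
        "parent_id": source_id,          # groups chunks back to the parent
    }
    for n, chunk in enumerate(chunk_words(source_text), start=1)
]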

Schema patterns

The same document model supports several common schema shapes. Pick the one that matches the signal you want to rank by, and plan your fields up front: in public preview, schema migration is not supported after index creation. Filters are deterministic per document and apply before scoring; choose your hard yes/no constraints (including text-match operators on FTS-enabled string fields) first, then pick a score_by method to rank whatever remains. See Filters vs. scoring.
The Python snippets in each accordion below assume an initialized client and the schema-builder import:
Python
from pinecone import Pinecone
from pinecone.preview import SchemaBuilder

pc = Pinecone(api_key="YOUR_API_KEY")
Each accordion shows the pattern-specific schema, an example document, and a search snippet. The control-plane (pc.preview.indexes.create(...)) and data-plane (index = pc.preview.index(name=...)) calls in the snippets reuse this pc.

Full-text search on one field

Use when you want BM25 keyword ranking on one piece of text per document (a review body, a support ticket, a product description) and you don’t have embeddings to manage.
Python
from pinecone.preview import SchemaBuilder

schema = (
    SchemaBuilder()
    .add_string_field("review_text", full_text_search={"language": "en"})
    .build()
)

pc.preview.indexes.create(name="book-reviews", schema=schema)
A document upserted into this index looks like:
{
  "_id": "review-1234",
  "review_text": "Beautifully written exploration of contact, communication, and civilization across cosmic distances. The pacing is uneven but the central premise carries you through."
}
Search with a single text clause ("text" here is the score_by clause type, not a field type); the clause runs BM25 ranking on the named string field:
Python
index = pc.preview.index(name="book-reviews")

index.documents.search(
    namespace="reviews",
    top_k=10,
    score_by=[{"type": "text", "field": "review_text", "query": "civilization"}],
)
See Full-text search.

Full-text search across multiple fields

Use when a document has two or more pieces of text that should each contribute to ranking — for example, a long review_text plus a short review_summary. Pinecone combines the per-field BM25 scores into one ranking per document.
Python
schema = (
    SchemaBuilder()
    .add_string_field("review_text", full_text_search={"language": "en"})
    .add_string_field("review_summary", full_text_search={"language": "en"})
    .add_string_field("category", filterable=True)
    .add_float_field("rating", filterable=True)
    .build()
)

pc.preview.indexes.create(name="book-reviews-multi", schema=schema)
A document upserted into this index looks like:
{
  "_id": "review-1234",
  "review_text": "Beautifully written exploration of contact, communication, and civilization across cosmic distances. The pacing is uneven but the central premise carries you through.",
  "review_summary": "Monumental science fiction with uneven pacing",
  "category": "science-fiction",
  "rating": 4.5
}
Pass two text clauses in score_by; the server combines them into one ranking, with each contributing field weighted equally in 2026-01.alpha.
Python
index = pc.preview.index(name="book-reviews-multi")

index.documents.search(
    namespace="reviews",
    top_k=5,
    score_by=[
        {"type": "text", "field": "review_text",    "query": "disappointing"},
        {"type": "text", "field": "review_summary", "query": "Disappointing"},
    ],
    include_fields=["*"],
)

Dense vector + full-text search

Most workloads that combine semantic ranking with keyword matching reach for this pattern: rank by dense (or sparse) similarity, restricted to documents that contain a specific term or phrase. Common examples include semantic search over patents, regulatory filings, internal knowledge bases, and other technical literature where the right answer must contain a specific term. A single schema can include one dense_vector field plus any number of FTS-enabled string fields:
Python
schema = (
    SchemaBuilder()
    .add_string_field("book_title", full_text_search={"language": "en"})
    .add_string_field("review_text", full_text_search={"language": "en"})
    .add_dense_vector_field("review_embedding", dimension=1024, metric="cosine")
    .build()
)

pc.preview.indexes.create(name="book-reviews-dense", schema=schema)
A document upserted into this index looks like:
{
  "_id": "review-1234",
  "book_title": "The Three-Body Problem",
  "review_text": "Beautifully written exploration of contact, communication, and civilization across cosmic distances.",
  "review_embedding": [0.012, -0.087, 0.153, ...]
}
review_embedding is a 1024-dim list of floats produced by your dense embedding model. Use the same model at query time so the query vector lives in the same space.

A single search request ranks by one scoring type, so with this schema you have two query options.

Option A — dense ranking restricted by a text-match filter (the most common hybrid pattern):
Python
index = pc.preview.index(name="book-reviews-dense")

# query_embedding is a 1024-dim list of floats from your embedding model.
query_embedding = embed("beautifully written, hard sci-fi")

index.documents.search(
    namespace="reviews",
    top_k=5,
    score_by=[
        {"type": "dense_vector", "field": "review_embedding", "values": query_embedding},
    ],
    filter={"review_text": {"$match_phrase": "beautifully written"}},
)
Option B — run BM25 and dense searches separately and merge client-side (when you want both signals to contribute to ranking, e.g. via reciprocal rank fusion):
Python
dense_hits = index.documents.search(
    namespace="reviews", top_k=50,
    score_by=[{"type": "dense_vector", "field": "review_embedding", "values": query_embedding}],
)
bm25_hits = index.documents.search(
    namespace="reviews", top_k=50,
    score_by=[{"type": "text", "field": "review_text", "query": "beautifully written"}],
)
# Merge dense_hits + bm25_hits in your application (e.g. RRF) to produce final ranking.
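As a concrete illustration, here is a minimal reciprocal rank fusion (RRF) merge. The hit accessors are assumptions — adjust the _id key and the hit-list attribute to the actual preview response shape:
Python
# Reciprocal rank fusion: each document scores 1/(k + rank) per list it
# appears in; summing across lists rewards documents ranked well by both.

def rrf(ranked_lists, k=60):
    scores = {}
    for hits in ranked_lists:
        for rank, hit in enumerate(hits, start=1):
            doc_id = hit["_id"]
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical accessor; inspect the actual response object for its hit list.
# final_ranking = rrf([dense_hits["hits"], bm25_hits["hits"]])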
See Hybrid search for a fuller discussion.
The dense_vector field’s source content is independent of the FTS-enabled string fields it sits alongside. You can embed images (e.g., with a multimodal model like Gemini Embedding 2 or a CLIP-style model) and pair them with FTS-enabled string fields holding captions, geography, or taxonomy — then query the image vector with a text description and restrict matches with FTS filters on those string fields. The schema doesn’t constrain what the dense vector represents; it just stores a vector of the declared dimension.
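A hedged sketch of that multimodal shape, assuming an index with an image_embedding dense field and a caption FTS field, plus a multimodal embed_text encoder (all placeholders, not part of the patterns above):
Python
# Query an image-vector field with a text-derived vector, restricted to
# documents whose caption contains a phrase. Field names are illustrative.
query_vec = embed_text("red lighthouse on a rocky coast")  # text -> shared space

index.documents.search(
    namespace="images",
    top_k=10,
    score_by=[{"type": "dense_vector", "field": "image_embedding", "values": query_vec}],
    filter={"caption": {"$match_phrase": "lighthouse"}},
)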

Dense + sparse + full-text search in one index

Use when a single document is best described by more than one ranking signal — for example, a video catalog where each item has frame embeddings (dense), auto-generated captions you’ve encoded as sparse vectors (sparse), and a transcript text field (BM25/Lucene). One schema declares all three; you pick the ranking signal per query with score_by. No second index, no cross-index linkage to maintain.
Python
schema = (
    SchemaBuilder()
    .add_dense_vector_field("frame_embedding", dimension=1024, metric="cosine")
    .add_sparse_vector_field("caption_sparse")
    .add_string_field("transcript", full_text_search={"language": "en"})
    .add_string_field("language", filterable=True)
    .build()
)

pc.preview.indexes.create(name="video-catalog", schema=schema)
A document upserted into this index looks like:
{
  "_id": "video-7890#scene-3",
  "frame_embedding": [0.012, -0.087, 0.153, ...],
  "caption_sparse": {
    "indices": [42, 1077, 9821],
    "values":  [0.41, 0.33, 0.18]
  },
  "transcript": "I think we should go now before it gets dark.",
  "language": "en"
}
frame_embedding is a 1024-dim list of floats from your dense vision model. caption_sparse is the output of your sparse encoder — an object with parallel indices (token IDs) and values (token weights) arrays.

The same index supports three different query shapes. All three assume:
Python
index = pc.preview.index(name="video-catalog")

# Replace with the outputs of your encoders.
query_embedding = embed_image(query_image)                 # 1024-dim list of floats
query_sparse    = sparse_encode("scene with a lighthouse") # {"indices": [...], "values": [...]}
Semantic frame search — rank by visual similarity:
Python
index.documents.search(
    namespace="videos",
    top_k=10,
    score_by=[{"type": "dense_vector", "field": "frame_embedding", "values": query_embedding}],
)
Caption lexical search — rank by sparse-vector lexical similarity over your encoded captions:
Python
index.documents.search(
    namespace="videos",
    top_k=10,
    score_by=[{"type": "sparse_vector", "field": "caption_sparse", "sparse_values": query_sparse}],
)
Semantic search restricted to spoken phrase — semantic frame ranking, narrowed to clips where the transcript contains a specific phrase:
Python
index.documents.search(
    namespace="videos",
    top_k=10,
    score_by=[{"type": "dense_vector", "field": "frame_embedding", "values": query_embedding}],
    filter={"transcript": {"$match_phrase": "I love you"}},
)
score_by selects one ranking signal per request, but every signal stays addressable on the same documents.

Hybrid records (vector API)

Use when you’re modeling data with the vector API (not the document API) and want to combine a sparse and dense vector in one record on a single index. For new document-centric projects with text data, prefer the Dense vector + full-text search pattern above. An example hybrid record:
{
  "id": "doc1#chunk1",
  "values": [0.0236, -0.0329, ..., -0.0104, 0.0086],
  "sparse_values": {
    "indices": [822745112, 1009084850, ...],
    "values":  [1.7958984, 0.41577148, ...]
  },
  "metadata": { "document_id": "doc1", "chunk_number": 1 }
}
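A minimal upsert sketch for this record shape, using the vector API and an index targeted by host; the vector values are placeholders from your encoders:
Python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index(host="INDEX_HOST")  # vector API data plane

dense_values = ...   # dense vector from your embedding model
sparse_values = ...  # {"indices": [...], "values": [...]} from your sparse encoder

index.upsert(
    namespace="example-namespace",
    vectors=[
        {
            "id": "doc1#chunk1",
            "values": dense_values,
            "sparse_values": sparse_values,
            "metadata": {"document_id": "doc1", "chunk_number": 1},
        }
    ],
)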
See Hybrid search.

Records

Records are how you model data for indexes with dense vectors and indexes with sparse vectors. Each record carries one vector (dense, sparse, or both for single-index hybrid) plus optional metadata, and you can upsert raw text in place of a vector when the index is integrated with an embedding model.
When you upsert pre-generated vectors, each record consists of the following:
  • ID: A unique string identifier for the record.
  • Vector: A dense vector for semantic search, a sparse vector for sparse-vector lexical search, or both for single-index hybrid search (vector API).
  • Metadata (optional): A flat JSON document containing key-value pairs with additional information (nested objects are not supported). You can filter by metadata when searching or deleting records.
When importing data from object storage, records must be in Parquet format. For more details, see Import data.
Example:
{
  "id": "document1#chunk1", 
  "values": [0.0236663818359375, -0.032989501953125, ..., -0.01041412353515625, 0.0086669921875], 
  "metadata": {
    "document_id": "document1",
    "document_title": "Introduction to Vector Databases",
    "chunk_number": 1,
    "chunk_text": "First chunk of the document content...",
    "document_url": "https://example.com/docs/document1",
    "created_at": "2024-01-15",
    "document_type": "tutorial"
  }
}

Use structured IDs

Use a structured, human-readable format for record IDs, including ID prefixes that reflect the type of data you’re storing, for example:
  • Document chunks: document_id#chunk_number
  • User data: user_id#data_type#item_id
  • Multi-tenant data: tenant_id#document_id#chunk_id
Choose a delimiter for your ID prefixes that won’t appear elsewhere in your IDs. Common patterns include:
  • document1#chunk1 - Using hash delimiter
  • document1_chunk1 - Using underscore delimiter
  • document1:chunk1 - Using colon delimiter
Structuring IDs in this way provides several advantages:
  • Efficiency: Applications can quickly identify which records they should operate on (see the ID helper sketch after this list).
  • Clarity: Developers can easily understand what they’re looking at when examining records.
  • Flexibility: ID prefixes enable list operations for fetching and updating records.
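A small sketch of building and parsing structured IDs; the helper names are illustrative:
Python
def make_chunk_id(document_id, chunk_number):
    # e.g. make_chunk_id("document1", 1) -> "document1#chunk1"
    return f"{document_id}#chunk{chunk_number}"

def parse_chunk_id(record_id):
    # e.g. parse_chunk_id("document1#chunk1") -> ("document1", "chunk1")
    document_id, _, chunk = record_id.partition("#")
    return document_id, chunk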

Include metadata

Include metadata key-value pairs that support your application’s key operations — for example, the document and chunk fields used for filtering in the examples below. Metadata keys must be strings, and metadata values must be one of the following data types:
  • String
  • Number (stored as a 64-bit floating point)
  • Boolean (true, false)
  • List of strings
Pinecone supports 40 KB of metadata per record.

Example

This example demonstrates how to manage document chunks in Pinecone using structured IDs and comprehensive metadata. It covers the complete lifecycle of chunked documents: upserting, searching, fetching, updating, and deleting chunks, and updating an entire document.

Upsert chunks

When upserting documents that have been split into chunks, combine structured IDs with comprehensive metadata:
Upserting raw text is supported only for indexes with integrated embedding.
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

index.upsert_records(
  "example-namespace",
  [
    {
      "_id": "document1#chunk1", 
      "chunk_text": "First chunk of the document content...",
      "document_id": "document1",
      "document_title": "Introduction to Vector Databases",
      "chunk_number": 1,
      "document_url": "https://example.com/docs/document1",
      "created_at": "2024-01-15",
      "document_type": "tutorial"
    },
    {
      "_id": "document1#chunk2", 
      "chunk_text": "Second chunk of the document content...",
      "document_id": "document1",
      "document_title": "Introduction to Vector Databases", 
      "chunk_number": 2,
      "document_url": "https://example.com/docs/document1",
      "created_at": "2024-01-15",
      "document_type": "tutorial"
    },
    {
      "_id": "document1#chunk3", 
      "chunk_text": "Third chunk of the document content...",
      "document_id": "document1",
      "document_title": "Introduction to Vector Databases",
      "chunk_number": 3, 
      "document_url": "https://example.com/docs/document1",
      "created_at": "2024-01-15",
      "document_type": "tutorial"
    },
  ]
)

Search chunks

To search the chunks of a document, use a metadata filter expression that limits the search appropriately:
Searching with text is supported only for indexes with integrated embedding.
Python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

filtered_results = index.search(
    namespace="example-namespace", 
    query={
        "inputs": {"text": "What is a vector database?"}, 
        "top_k": 3,
        "filter": {"document_id": "document1"}
    },
    fields=["chunk_text"]
)

print(filtered_results)

Fetch chunks

To retrieve all chunks for a specific document, first list the record IDs using the document prefix, and then fetch the complete records:
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

# List all chunks for document1 using the ID prefix.
# index.list() yields pages (lists) of record IDs, so extend rather than append.
chunk_ids = []
for ids in index.list(prefix='document1#', namespace='example-namespace'):
    chunk_ids.extend(ids)

print(f"Found {len(chunk_ids)} chunks for document1")

# Fetch the complete records by ID
if chunk_ids:
    records = index.fetch(ids=chunk_ids, namespace='example-namespace')
    
    for record_id, record_data in records['vectors'].items():
        print(f"Chunk ID: {record_id}")
        print(f"Chunk text: {record_data['metadata']['chunk_text']}")
        # Process the vector values and metadata as needed

Update chunks

To update specific chunks within a document, first list the chunk IDs, and then update individual records:
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

# List all chunks for document1 (index.list() yields pages of IDs)
chunk_ids = []
for ids in index.list(prefix='document1#', namespace='example-namespace'):
    chunk_ids.extend(ids)

# Update specific chunks (e.g., update chunk 2)
if 'document1#chunk2' in chunk_ids:
    new_vector = ...  # from your embedding model
    index.update(
        id='document1#chunk2',
        values=new_vector,
        set_metadata={
            "document_id": "document1",
            "document_title": "Introduction to Vector Databases - Revised",
            "chunk_number": 2,
            "chunk_text": "Updated second chunk content...",
            "document_url": "https://example.com/docs/document1",
            "created_at": "2024-01-15",
            "updated_at": "2024-02-15",
            "document_type": "tutorial"
        },
        namespace='example-namespace'
    )
    print("Updated chunk 2 successfully")

Delete chunks

To delete chunks of a document, use a metadata filter expression that limits the deletion appropriately:
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

# Delete chunks 1 and 3
index.delete(
    namespace="example-namespace",
    filter={
        "document_id": {"$eq": "document1"},
        "chunk_number": {"$in": [1, 3]}
    }
)

# Delete all chunks for a document
index.delete(
    namespace="example-namespace",
    filter={
        "document_id": {"$eq": "document1"}
    }
)

Update an entire document

When the number of chunks or the ordering of chunks for a document changes, the recommended approach is to first delete all chunks using a metadata filter, and then upsert the new chunks:
Python
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

# Step 1: Delete all existing chunks for the document
index.delete(
    namespace="example-namespace",
    filter={
        "document_id": {"$eq": "document1"}
    }
)

print("Deleted existing chunks for document1")

# Step 2: Upsert the updated document chunks
chunk1_vector = ...  # from your embedding model
chunk2_vector = ...
index.upsert(
  namespace="example-namespace", 
  vectors=[
    {
      "id": "document1#chunk1",
      "values": chunk1_vector,
      "metadata": {
        "document_id": "document1",
        "document_title": "Introduction to Vector Databases - Updated Edition",
        "chunk_number": 1,
        "chunk_text": "Updated first chunk with new content...",
        "document_url": "https://example.com/docs/document1",
        "created_at": "2024-02-15",
        "document_type": "tutorial",
        "version": "2.0"
      }
    },
    {
      "id": "document1#chunk2",
      "values": chunk2_vector,
      "metadata": {
        "document_id": "document1",
        "document_title": "Introduction to Vector Databases - Updated Edition",
        "chunk_number": 2,
        "chunk_text": "Updated second chunk with new content...",
        "document_url": "https://example.com/docs/document1",
        "created_at": "2024-02-15",
        "document_type": "tutorial",
        "version": "2.0"
      }
    }
    # Add more chunks as needed for the updated document
  ]
)

print("Successfully updated document1 with new chunks")

Data freshness

Pinecone is eventually consistent, so it’s possible that a write (upsert, update, or delete) followed immediately by a read (query, list, or fetch) may not return the latest version of the data. If your use case requires retrieving data immediately, consider implementing a small delay or retry logic after writes.
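A minimal retry sketch for read-after-write, using the vector API fetch shape shown above; the attempt count and delay are arbitrary:
Python
import time

def fetch_with_retry(index, ids, namespace, attempts=5, delay=0.5):
    # Retry until all freshly written IDs are visible, or attempts run out.
    for _ in range(attempts):
        result = index.fetch(ids=ids, namespace=namespace)
        if len(result['vectors']) == len(ids):
            return result
        time.sleep(delay)
    return result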

Design for multi-tenancy

Many applications have a concept of tenants—users, organizations, projects, or other groups that should only access their own data. How you model this access control significantly impacts query performance and cost.

Use namespaces for tenant isolation

The most efficient way to implement multi-tenancy is to use namespaces to separate data by tenant. With this approach, each tenant has their own namespace, and queries only scan that tenant’s data—resulting in better performance and lower costs. For a complete implementation guide with examples across all SDKs, see Implement multitenancy.
When you use namespaces for multi-tenancy:
  • Lower query costs and faster performance: Query cost is based on namespace size. If you have 100 tenants with 1 GB each, querying one tenant’s namespace costs 1 RU and scans only 1 GB. With metadata filtering in a single namespace (100 GB total), the same query costs 100 RUs and scans all 100 GB, even though the filter narrows results.
  • Natural isolation: Reduces the risk of application bugs that could query the wrong tenant’s data (for example, by passing an incorrect filter value).
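A minimal sketch of tenant-scoped querying with one namespace per tenant; the tenant ID and query vector are placeholders:
Python
tenant_id = "tenant_42"         # placeholder tenant identifier
query_vector = [0.1, 0.2, 0.3]  # your query embedding

# Scoping the query to the tenant's namespace scans only that tenant's data.
results = index.query(
    namespace=tenant_id,
    vector=query_vector,
    top_k=10,
)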

Avoid filtering by high-cardinality IDs

A common anti-pattern is storing all data in a single namespace and using metadata filters to scope queries to specific users:
Python
# Anti-pattern: Filtering by many user IDs
query_vector = [0.1, 0.2, 0.3, ...]  # Your query vector
results = index.query(
    vector=query_vector,
    top_k=10,
    filter={
        "allowed_user_ids": {"$in": ["user_1", "user_2", ..., "user_10000"]}
    }
)
This approach has several drawbacks:
  • Performance degradation: Large $in filters increase network payload size and query latency.
  • Hard limits: Each $in or $nin operator is limited to 10,000 values. Exceeding this limit will cause the request to fail. See Metadata filter limits.

Use access control groups instead of individual IDs

If data must be shared across many tenants, design your access control using the smallest number of groups that describe a user’s access:
Python
# Better: Filter by organization or role instead of individual users
query_vector = [0.1, 0.2, 0.3, ...]  # Your query vector
results = index.query(
    vector=query_vector,
    top_k=10,
    filter={
        "$or": [
            {"organization_id": {"$eq": "org_A"}},
            {"project_id": {"$eq": "project_B"}}
        ]
    }
)
Instead of passing thousands of user IDs, this filter uses only 2 group identifiers to achieve the same access control.

Multitenancy patterns

The following table provides general guidelines for choosing a multitenancy approach. Evaluate your specific use case, access patterns, and requirements to determine the best fit for your application.
Data pattern | Recommended approach | Query cost | Performance
Each tenant’s data is completely separate | One index, one namespace per tenant | Lowest (scans only tenant namespace) | Fastest
Large tenants with many sub-groups | One index per large tenant, namespaces for sub-groups | Low (scans only sub-group namespace) | Fast
Data shared across tenants | One index, shared namespace, filter by group IDs (org, project, role) | Higher (scans entire shared namespace) | Slower
Avoid filtering by large lists of individual user IDs (for example, {"user_id": {"$in": ["user_1", "user_2", ..., "user_10000"]}}). This approach has the following drawbacks:
  • Hard limits: Each $in or $nin operator is limited to 10,000 values. Exceeding this limit will cause requests to fail.
  • Performance: Large filters increase query latency.
  • Higher costs: You pay for scanning the entire shared namespace, even though the filter narrows results.
Instead, consider these alternatives:
  • Use one namespace per tenant (see row 1 in the table above).
  • Filter by broader groups like organization, project, or role rather than individual user IDs (see row 3 in the table above).
  • Retrieve a larger top K without filtering (for example, top 1000), then filter the results client-side (see the sketch after this list).
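A hedged sketch of the over-fetch-and-filter approach, using the vector API response shape; the allowed set and metadata field are placeholders:
Python
query_vector = ...  # your query embedding

# Over-fetch without a server-side filter, then enforce access client-side.
results = index.query(
    namespace="example-namespace",
    vector=query_vector,
    top_k=1000,
    include_metadata=True,
)

allowed = {"user_1", "user_2"}  # the current user's permitted IDs/groups
hits = [m for m in results["matches"]
        if m["metadata"].get("user_id") in allowed][:10]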
For a complete step-by-step implementation guide, see Implement multitenancy.