May 2026
Public preview: Pinecone Marketplace
Pinecone Marketplace is now in public preview. Marketplace lets you build, publish, and operate AI-powered knowledge applications on top of Pinecone, with a managed deployment lifecycle and end-user chat interface. Highlights:
- Templates and connectors — Start from pre-built templates, connect data sources (Google Drive, manual upload), and configure knowledge processing with the Knowledge Agent Toolkit (KAT).
- Multi-domain routing — Route end-user queries across multiple knowledge domains within a single deployment.
- Evaluations and analytics — Run evaluations against your deployment and monitor usage with event logs.
- Versioning and rollback — Publish deployment versions and roll back to previous configurations.
- End-user experience — Authenticated chat interface with citations, visual components, and feedback collection.
- Increased Starter plan limits — The Starter plan now offers 1M input tokens per month (up from 500K before the promotion) to help you explore Marketplace apps until June 30, 2026.
New Builder plan
The Builder plan is now available at a flat $20/month. Builder is designed for individual developers who need higher limits than Starter without committing to usage-based pricing. Key differences from Starter:
- 10 serverless indexes (up from 5)
- 10 GB storage per organization
- 100 namespaces per index (up from 20)
- Prometheus and Datadog monitoring
Public preview: Full-text search
Full-text search is now in public preview. Full-text search uses a typed document model: you upsert data as JSON documents, declare ranking fields in a schema, and Pinecone indexes them accordingly. Schema field types: string with full_text_search (indexed for BM25 ranking), dense_vector, and sparse_vector. Any other fields you upsert are stored as metadata and automatically indexed for filtering — no schema declaration required. Highlights:
- Four scoring methods via score_by: text (BM25), query_string (Lucene syntax, including cross-field boolean queries), dense_vector, and sparse_vector.
- New data plane endpoints under /namespaces/{namespace}/documents/: upsert, search, fetch, and delete.
- New filter operator $match_phrase for phrase matching against text fields, composable with any score_by method.
- Flexible deployment: on-demand read capacity (read_capacity.mode: "OnDemand") and dedicated read capacity (read_capacity.mode: "Dedicated") are both supported on managed (serverless) indexes.
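To make the moving parts concrete, the following sketch assembles a request body for the documents search endpoint, combining a score_by method with a $match_phrase filter. The field names (title, body) are hypothetical, and the exact body shape is an assumption based on the parameters described above, not a definitive schema.

```python
import json

# Hypothetical request body for POST /namespaces/{namespace}/documents/search.
# Field names ("title", "body") and the overall body shape are assumptions
# based on the parameters described in this announcement.
search_body = {
    "query": {
        "inputs": {"text": "vector database"},  # query text ranked by BM25
        "score_by": "text",                     # one of: text, query_string,
                                                # dense_vector, sparse_vector
        "filter": {
            # $match_phrase runs phrase matching against a text field and
            # composes with any score_by method
            "body": {"$match_phrase": "full-text search"}
        },
        "top_k": 10,
    }
}

payload = json.dumps(search_body)  # serialized JSON to send as the request body
```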
Use API version 2026-01.alpha to access the feature.
New AWS regions for serverless indexes
You can now deploy serverless indexes in two new AWS regions: eu-central-1 (Frankfurt) and ap-southeast-1 (Singapore). Both regions are available on Standard and Enterprise plans. For the full list of supported regions, see Cloud regions.
April 2026
General availability: Fetch by metadata
The Fetch by metadata operation is now generally available and recommended for production usage. Use a metadata filter expression to fetch matching records without knowing their IDs, and paginate with paginationToken to retrieve result sets larger than 10,000 records per response. For more information, see Fetch records by metadata.
Upsert files with custom IDs in Assistant
Pinecone Assistant now supports upserting files with user-provided file IDs, so you can create or replace a file by a stable custom identifier instead of relying on system-generated UUIDs. For more information, see File identifiers. As part of this update, upload, upsert, and delete operations now return an operation object that can be polled for status and progress. Assistant also includes new API endpoints to list and describe file operations. This update applies to the API only; SDK support is not yet available.
General availability: Dedicated Read Nodes
Dedicated Read Nodes are now generally available and recommended for production usage on Standard and Enterprise plans. You can provision read hardware for large, high-throughput indexes that need predictable, low latency using the console and API.
Pinecone Assistant usage-based pricing and monthly Starter limits
Pinecone Assistant pricing is now fully usage-based. The hourly per-assistant fee has been removed. On Standard and Enterprise, you pay for what you use: ingestion (file uploads and updates on an assistant), storage, and chat, context, and evaluation tokens — with no base charge per assistant.
For the Starter plan, Assistant included allowances are now monthly and reset each billing period (they are no longer all-time project totals). Starter includes 500,000 chat input tokens, 300,000 chat output tokens, 500,000 context retrieval tokens, and 1,000 ingestion units per month. When you upload files to an assistant, usage is measured in ingestion units (approximately one unit per chunk, ~400 tokens); multimodal PDF chunks account for more ingestion units per chunk than standard text on the same meter.
Per-assistant file count limits have been removed for all plans. Usage is governed by ingestion and storage allowances, file size and page limits, and rate limits instead of a cap on the number of stored documents.
For details, see Pricing and limits and Pinecone pricing.
March 2026
General availability: platform and operations features
The following capabilities are now generally available and recommended for production usage:- Namespace creation — Create namespace API
- Pinecone MCP server — Integrate AI agents with Pinecone
- Assistant MCP server — Use an Assistant MCP server
- Bulk metadata updates — Update metadata across multiple records
- Customer-managed encryption keys (CMEK) — Configure CMEK
- Data import from object storage (Amazon S3, Google Cloud Storage, Azure Blob Storage) — Import records
- Audit logs — Configure audit logs
- Admin API and service accounts (organization- and project-level) — Manage service accounts, Manage service accounts at the project level
- Backups and restore (serverless indexes) — Back up an index, Restore an index
- Pinecone Local (local development emulator) — Local development
- Automated testing with Pinecone Local — Automated testing
- Indexes with sparse vectors — Indexes with sparse vectors
- pinecone-sparse-english-v0 — Sparse English embedding model
- Prometheus monitoring (serverless indexes) — Monitor with Prometheus
- Evaluate answers (metrics_alignment) — Evaluate answers
- Manage storage integrations — Manage storage integrations
February 2026
BYOC now available on AWS, GCP, and Azure
Bring Your Own Cloud (BYOC) is now available in public preview on AWS, GCP, and Azure. BYOC lets you run Pinecone’s data plane inside your own cloud account with a zero-access operating model — Pinecone never needs SSH, VPN, or inbound network access to your infrastructure. Deploy using a self-serve Pulumi-based setup wizard, with pull-based operations that execute locally in your cluster. Your vectors, metadata, and queries never leave your environment.
HIPAA compliance add-on for Standard plan
HIPAA compliance is now available as an optional add-on for Standard plan customers. For $190 per month, you get HIPAA-ready infrastructure, encrypted data storage, audit logging, enhanced security controls, and BAA execution and compliance documentation support. Full HIPAA compliance remains included with the Enterprise plan. To enable the add-on on the Standard plan, contact sales or see Understanding cost — HIPAA compliance add-on.
January 2026
Claude model deprecation for Assistant
Anthropic has deprecated the Claude 3.5 Sonnet and Claude 3.7 Sonnet models. Pinecone Assistant automatically routes all chat requests that specify claude-3-5-sonnet or claude-3-7-sonnet to Claude Sonnet 4.5, which provides enhanced intelligence at the same price. No code changes are required. To update your code to explicitly use Claude Sonnet 4.5, set model: "claude-sonnet-4-5" in your chat requests. For more information, see Choose a model.
Pinecone Assistant node for n8n
The official Pinecone Assistant n8n node brings Assistant’s end-to-end RAG capabilities directly into n8n workflows, letting you connect any data source to AI-backed automation. For more information, see the Assistant quickstart for n8n.
Claude Sonnet 4.5 now available for Assistant chat
Pinecone Assistant now supports Anthropic’s Claude Sonnet 4.5 model. To use this model, set model: "claude-sonnet-4-5" in your chat requests. In the Pinecone console, Claude Sonnet 4.5 is also available as a selection in the Chat model dropdown menu in the playground for each assistant. For more information, see Choose a model.
Metadata filter limit: 10,000 values per $in/$nin operator
Pinecone now enforces a limit of 10,000 values per $in or $nin operator in metadata filter expressions. This limit helps ensure consistent query performance and protects shared infrastructure from excessive load caused by very large filters. Requests that exceed this limit will fail with a 400 - BAD_REQUEST error. If your application currently uses large $in filters (especially for access control), consider these approaches:
- Namespace-based isolation (recommended): Create separate namespaces for each tenant instead of filtering by thousands of tenant IDs. This can also reduce query costs (queries on a 1 GB namespace cost 1 RU instead of 100 RUs for a 100 GB namespace with filtering).
- Access control groups: Filter by organization, project, or role identifiers instead of individual user IDs.
- Post-filter client-side: Retrieve a larger top K without filtering, then filter results in your application.
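Since oversized filters now fail with 400 - BAD_REQUEST, it can be worth validating filter expressions client-side before sending a query. The following helper is a minimal sketch (the function name and traversal are illustrative, not part of Pinecone's SDK); it walks a filter expression and rejects any $in or $nin list over the limit.

```python
MAX_IN_VALUES = 10_000  # per-$in/$nin limit described above

def check_filter_limits(filter_expr):
    """Recursively verify no $in/$nin list exceeds the 10,000-value limit.

    Hypothetical client-side helper, not part of the Pinecone SDK.
    """
    if isinstance(filter_expr, dict):
        for key, value in filter_expr.items():
            if key in ("$in", "$nin") and isinstance(value, list):
                if len(value) > MAX_IN_VALUES:
                    raise ValueError(
                        f"{key} has {len(value)} values; limit is {MAX_IN_VALUES}"
                    )
            else:
                # descend into nested operators like $and / $or
                check_filter_limits(value)
    elif isinstance(filter_expr, list):
        for item in filter_expr:
            check_filter_limits(item)

check_filter_limits({"tenant_id": {"$in": ["t1", "t2"]}})  # small filter passes
```

A guard like this turns a server-side 400 into an early, descriptive client-side error, which is easier to catch in tests than a rejected request in production.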
Request-per-second limits for data plane operations
Pinecone now enforces request-per-second rate limits on data plane operations (query, upsert, delete, and update) at the namespace level. These limits are set to 100 requests per second per namespace for all plans and provide protection against excessive request rates. Request-per-second limits are enforced in addition to existing read unit and write unit limits. If you exceed a request-per-second limit, you’ll receive a 429 - TOO_MANY_REQUESTS error. For more information, see Database limits.
Pagination support for fetch by metadata
The Fetch by metadata operation now supports pagination, allowing you to fetch large result sets in multiple requests. Use the paginationToken parameter to retrieve the next page of results. When there are more results available, the response includes a pagination object with a next token. Pass this token as the paginationToken parameter in subsequent requests to fetch the next page. When there are no more results, the response does not include a pagination object. For more information, see Fetch records by metadata.
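The token-passing flow described above can be sketched as a simple loop. Here, fetch_page is a stand-in for whatever client call actually issues the request (it is not a Pinecone SDK function); the response shape — a pagination object carrying a next token while more results remain — follows the description above.

```python
def fetch_all(fetch_page, filter_expr):
    """Accumulate all matching records across pages.

    fetch_page(filter_expr, pagination_token) is a hypothetical stand-in
    for the real API call. It returns a dict with "records" and, while
    more results remain, a "pagination" object holding the next token.
    """
    records, token = [], None
    while True:
        response = fetch_page(filter_expr, token)
        records.extend(response.get("records", []))
        pagination = response.get("pagination")
        if not pagination:  # no pagination object means this was the last page
            return records
        token = pagination["next"]  # pass as paginationToken on the next request
```

The loop terminates on the absence of the pagination object rather than on an empty page, matching the behavior described above.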