April 2026
Pinecone Assistant usage-based pricing and monthly Starter limits
Pinecone Assistant pricing is now fully usage-based. The hourly per-assistant fee has been removed. On Standard and Enterprise, you pay for what you use: ingestion (file uploads and updates on an assistant), storage, and chat, context, and evaluation tokens — with no base charge per assistant.

For the Starter plan, Assistant included allowances are now monthly and reset each billing period (they are no longer all-time project totals). Starter includes 500,000 chat input tokens, 300,000 chat output tokens, 500,000 context retrieval tokens, and 1,000 ingestion units per month. When you upload files to an assistant, usage is measured in ingestion units (approximately one unit per chunk, ~400 tokens); multimodal PDF chunks account for more ingestion units per chunk than standard text on the same meter.

Per-assistant file count limits have been removed for all plans. Usage is governed by ingestion and storage allowances, file size and page limits, and rate limits instead of a cap on the number of stored documents.

For details, see Pricing and limits and Pinecone pricing.

General availability: Dedicated Read Nodes
Dedicated Read Nodes are now generally available and recommended for production usage on Standard and Enterprise plans. Using the console or API, you can provision dedicated read hardware for large, high-throughput indexes that need predictable, low latency.

Upsert files with custom IDs in Assistant
Pinecone Assistant now supports upserting files with user-provided file IDs, so you can create or replace a file by a stable custom identifier instead of relying on system-generated UUIDs. For more information, see File identifiers.

As part of this update, upload, upsert, and delete operations now return an operation object that can be polled for status and progress. Assistant also includes new API endpoints to list and describe file operations.

This update applies to the API only; SDK support is not yet available.

March 2026
General availability: platform and operations features
The following capabilities are now generally available and recommended for production usage:
- Namespace creation — Create namespace API
- Pinecone MCP server — Integrate AI agents with Pinecone
- Assistant MCP server — Use an Assistant MCP server
- Bulk metadata updates — Update metadata across multiple records
- Customer-managed encryption keys (CMEK) — Configure CMEK
- Data import from object storage (Amazon S3, Google Cloud Storage, Azure Blob Storage) — Import records
- Audit logs — Configure audit logs
- Admin API and service accounts (organization- and project-level) — Manage service accounts, Manage service accounts at the project level
- Backups and restore (serverless indexes) — Back up an index, Restore an index
- Pinecone Local (local development emulator) — Local development
- Automated testing with Pinecone Local — Automated testing
- Sparse-only indexes — Sparse indexes
- pinecone-sparse-english-v0 — Sparse English embedding model
- Prometheus monitoring (serverless indexes) — Monitor with Prometheus
- Evaluate answers (metrics_alignment) — Evaluate answers
- Manage storage integrations — Manage storage integrations
February 2026
BYOC now available on AWS, GCP, and Azure
Bring Your Own Cloud (BYOC) is now available in public preview on AWS, GCP, and Azure. BYOC lets you run Pinecone's data plane inside your own cloud account with a zero-access operating model — Pinecone never needs SSH, VPN, or inbound network access to your infrastructure.

Deploy using a self-serve Pulumi-based setup wizard, with pull-based operations that execute locally in your cluster. Your vectors, metadata, and queries never leave your environment.

HIPAA compliance add-on for Standard plan
HIPAA compliance is now available as an optional add-on for Standard plan customers. For $190 per month, you get HIPAA-ready infrastructure, encrypted data storage, audit logging, enhanced security controls, and BAA execution and compliance documentation support.

Full HIPAA compliance remains included with the Enterprise plan. To enable the add-on on the Standard plan, contact sales or see Understanding cost — HIPAA compliance add-on.

January 2026
Claude model deprecation for Assistant
Anthropic has deprecated the Claude 3.5 Sonnet and Claude 3.7 Sonnet models. Pinecone Assistant automatically routes all chat requests that specify claude-3-5-sonnet or claude-3-7-sonnet to Claude Sonnet 4.5, which provides enhanced intelligence at the same price. No code changes are required.

To update your code to explicitly use Claude Sonnet 4.5, set model: "claude-sonnet-4-5" in your chat requests. For more information, see Choose a model.

Pinecone Assistant node for n8n
The official Pinecone Assistant n8n node brings Assistant's end-to-end RAG capabilities directly into n8n workflows, letting you connect any data source to AI-backed automation. For more information, see the Assistant quickstart for n8n.

Claude Sonnet 4.5 now available for Assistant chat
Pinecone Assistant now supports Anthropic's Claude Sonnet 4.5 model. To use this model, set model: "claude-sonnet-4-5" in your chat requests. In the Pinecone console, Claude Sonnet 4.5 is also available as a selection in the Chat model dropdown menu in the playground for each assistant. For more information, see Choose a model.

Metadata filter limit: 10,000 values per $in/$nin operator
Pinecone now enforces a limit of 10,000 values per $in or $nin operator in metadata filter expressions. This limit helps ensure consistent query performance and protects shared infrastructure from excessive load caused by very large filters. Requests that exceed this limit will fail with a 400 - BAD_REQUEST error.

If your application currently uses large $in filters (especially for access control), consider these approaches:
- Namespace-based isolation (recommended): Create separate namespaces for each tenant instead of filtering by thousands of tenant IDs. This can also reduce query costs (queries on a 1 GB namespace cost 1 RU instead of 100 RUs for a 100 GB namespace with filtering).
- Access control groups: Filter by organization, project, or role identifiers instead of individual user IDs.
- Post-filter client-side: Retrieve a larger top K without filtering, then filter results in your application.
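If you need a stopgap while migrating to one of the approaches above, you can split an oversized value list into chunks under the limit and merge results across multiple queries. The following is a minimal sketch, not part of any Pinecone SDK: `query_fn` is a placeholder for your actual filtered query call.

```python
from typing import Callable, List

MAX_IN_VALUES = 10_000  # enforced limit per $in/$nin operator

def chunked_in_query(query_fn: Callable[[dict], list],
                     field: str,
                     values: List[str],
                     chunk_size: int = MAX_IN_VALUES) -> list:
    """Run one filtered query per chunk of at most `chunk_size` values.

    `query_fn` receives a metadata filter dict and returns a list of
    matches; results from all chunks are concatenated.
    """
    results = []
    for i in range(0, len(values), chunk_size):
        chunk = values[i:i + chunk_size]
        results.extend(query_fn({field: {"$in": chunk}}))
    return results
```

Keep in mind that N chunks means N billed queries, so namespace-based isolation remains the better long-term fix.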
Request-per-second limits for data plane operations
Pinecone now enforces request-per-second rate limits on data plane operations (query, upsert, delete, and update) at the namespace level. These limits are set to 100 requests per second per namespace for all plans and provide protection against excessive request rates.

Request-per-second limits are enforced in addition to existing read unit and write unit limits. If you exceed a request-per-second limit, you'll receive a 429 - TOO_MANY_REQUESTS error. For more information, see Database limits.

Pagination support for fetch by metadata
The Fetch by metadata operation now supports pagination, allowing you to fetch large result sets in multiple requests. Use the paginationToken parameter to retrieve the next page of results.

When there are more results available, the response includes a pagination object with a next token. Pass this token as the paginationToken parameter in subsequent requests to fetch the next page. When there are no more results, the response does not include a pagination object. For more information, see Fetch records by metadata.
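The token-passing loop described above can be sketched as follows. This is a schematic, not SDK code: `fetch_fn` is a placeholder for the actual fetch-by-metadata call, and the response field names are assumptions based on the description here.

```python
def fetch_all_pages(fetch_fn) -> list:
    """Drain a paginated fetch-by-metadata result set into a single list.

    `fetch_fn(token)` stands in for the real API call: it takes the
    pagination token (None for the first request) and returns a response
    dict with "records" and, while more pages remain, a "pagination"
    object containing the "next" token.
    """
    records, token = [], None
    while True:
        resp = fetch_fn(token)
        records.extend(resp.get("records", []))
        pagination = resp.get("pagination")
        if not pagination:  # no pagination object means this was the last page
            return records
        token = pagination["next"]  # pass as paginationToken on the next call
```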