Monitor your usage
This page shows you how to monitor the overall usage and costs for your Pinecone organization as well as usage and performance metrics for individual indexes.
Monitor organization-level usage
You must be the organization owner to view usage across your Pinecone organization. Also, this feature is available only to organizations on the Standard or Enterprise plans.
To see a calculation of usage and cost across your organization, you can use the Usage dashboard.
- Go to Settings > Usage in the Pinecone console.
- Select the time range to report on. This defaults to the last 30 days.
- Select the scope for your report:
- SKU: The usage and cost for each billable SKU, for example, read units per cloud region, storage size per cloud region, or tokens per embedding model.
- Project: The aggregated cost for each project in your organization.
- Service: The aggregated cost for each service your organization uses, for example, database (includes serverless backup and restore), assistants, inference (embedding and reranking), and collections.
- Choose the specific SKUs, projects, or services you want to report on. This defaults to all.
Dates are shown in UTC to match billing invoices, and cost data is delayed up to three days.
To download a usage report as a CSV file, click Download.
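Once downloaded, the report can be summarized with a short script. A minimal sketch, assuming hypothetical column names `project` and `cost_usd` — check the headers in your actual export and adjust accordingly:

```python
import csv
from collections import defaultdict

def cost_by_project(path):
    """Sum cost per project from a downloaded usage CSV.

    The column names "project" and "cost_usd" are assumptions for
    illustration; match them to the headers in your actual export.
    """
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["project"]] += float(row["cost_usd"])
    return dict(totals)
```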
Monitor index-level usage
You can monitor index-level usage metrics directly in the Pinecone console, or you can pull them into Prometheus. For more details, see Monitoring.
Monitor operation-level usage
Read units
Read operations like query and fetch return a usage parameter with the read unit consumption of each request. For example, a query to an example index might return this result and summary of read unit usage:
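Since an actual query requires a live index and API key, the sketch below uses a placeholder response whose fields mirror the documented shape; the match IDs, scores, and read unit count are made-up values, not real measurements:

```python
# Illustrative shape only: a real response comes from index.query(...)
# against a live Pinecone index. All values below are placeholders.
query_response = {
    "matches": [
        {"id": "vec-1", "score": 0.92},
        {"id": "vec-2", "score": 0.87},
    ],
    "usage": {"read_units": 5},  # read units consumed by this request
}

# The "usage" field is returned alongside the matches, so read cost
# can be inspected per request.
read_units = query_response["usage"]["read_units"]
print(f"This query consumed {read_units} read units")
```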
For a more in-depth demonstration of how to use read units to inspect read costs, see this notebook.
Embedding tokens
Requests to one of Pinecone's hosted embedding models, either directly via the embed operation or automatically when upserting or querying an index with integrated embedding, return a usage parameter with the total tokens generated.
For example, the following request uses the multilingual-e5-large model to generate embeddings for sentences related to the word "apple":
The returned object looks like this:
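Since the actual response block is not reproduced here, the sketch below shows the documented shape of the returned object; the vector values and token count are placeholders, not real output:

```python
# Illustrative shape only: a real response comes from pc.inference.embed(...).
# Vector values and the token count are made-up placeholders.
embed_response = {
    "model": "multilingual-e5-large",
    "data": [
        {"values": [0.01, -0.02, 0.03]},  # truncated embedding vector
        {"values": [0.04, 0.00, -0.01]},
    ],
    "usage": {"total_tokens": 84},  # total embedding tokens generated (placeholder)
}

# The "usage" field reports how many embedding tokens the request consumed.
print(embed_response["usage"]["total_tokens"])
```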