After uploading files to an assistant, you can chat with the assistant.
You can chat with an assistant using the Pinecone console. Select the assistant to chat with, and use the Assistant playground.
The standard chat interface can return responses in three different formats: a default chat response, a streaming chat response, and a JSON response.
This is the recommended way to chat with an assistant, as it offers more functionality and control over the assistant’s responses and references. However, if you need your assistant to be OpenAI-compatible or need inline citations, use the OpenAI-compatible chat interface.
The following example sends a message and requests a default response:
The content parameter in the request cannot be empty.
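For example, a minimal Python sketch using the requests library might look like the following. The data-plane host, the /assistant/chat/{assistant_name} path, and the Api-Key header are assumptions for illustration; replace them with the values for your own assistant.

```python
import os
import requests

ASSISTANT_HOST = "https://prod-1-data.ke.pinecone.io"  # assumed data-plane host; replace with yours
ASSISTANT_NAME = "example-assistant"                   # replace with your assistant's name

resp = requests.post(
    f"{ASSISTANT_HOST}/assistant/chat/{ASSISTANT_NAME}",
    headers={
        "Api-Key": os.environ["PINECONE_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "messages": [
            {"role": "user", "content": "What is the maximum height of a red pine?"}
        ]
    },
)
resp.raise_for_status()
print(resp.json())
```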
The example above returns a result like the following:
The following example sends a message and requests a streaming response:
The content parameter in the request cannot be empty.
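A sketch of the same request with streaming enabled, under the same assumptions about the endpoint and headers; exactly how the streamed chunks are framed on the wire may differ, so the sketch simply prints each non-empty line as it arrives.

```python
import os
import requests

ASSISTANT_HOST = "https://prod-1-data.ke.pinecone.io"  # assumed data-plane host
ASSISTANT_NAME = "example-assistant"

with requests.post(
    f"{ASSISTANT_HOST}/assistant/chat/{ASSISTANT_NAME}",
    headers={
        "Api-Key": os.environ["PINECONE_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "messages": [
            {"role": "user", "content": "What is the maximum height of a red pine?"}
        ],
        "stream": True,  # request a streamed response instead of a single JSON body
    },
    stream=True,  # let requests yield the response incrementally
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:
            print(line.decode("utf-8"))  # each non-empty line carries one chunk of the stream
```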
The example above returns a result like the following:
There are four types of messages in a streaming chat response:
A message start, which includes "role":"assistant" and indicates that the assistant is responding to the user’s message.
A content chunk, which includes a content field (e.g., "content":"The") that carries part of the assistant’s streamed response to the user’s message.
A citation, which contains references to the documents the assistant used to generate the response.
A message end, which includes "finish_reason":"stop" and indicates that the assistant has finished responding to the user’s message.
The following example uses the json_response parameter to instruct the assistant to return the response as JSON key-value pairs. This is useful if you need to parse the response programmatically.
JSON response cannot be used with the stream parameter.
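A sketch of a request with json_response enabled, again assuming the endpoint and headers shown in the earlier examples:

```python
import os
import requests

ASSISTANT_HOST = "https://prod-1-data.ke.pinecone.io"  # assumed data-plane host
ASSISTANT_NAME = "example-assistant"

resp = requests.post(
    f"{ASSISTANT_HOST}/assistant/chat/{ASSISTANT_NAME}",
    headers={
        "Api-Key": os.environ["PINECONE_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "messages": [
            {"role": "user", "content": "List the three tallest tree species mentioned in my documents."}
        ],
        "json_response": True,  # ask the assistant to answer with JSON key-value pairs
    },
)
resp.raise_for_status()
print(resp.json())
```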
The example above returns a result like the following:
In the assistant’s response, the message string is contained in the following JSON object:
message.content for the default chat response
delta.content for the streaming chat response
message.content for the JSON response
You can extract the message content and print it to the console:
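For example, for a default chat response, the content can be pulled out of the parsed JSON body. The raw string below is a hypothetical stand-in for an actual response:

```python
import json

# Hypothetical stand-in for the JSON body of a default (non-streaming) chat response.
raw = """
{
  "message": {
    "role": "assistant",
    "content": "A red pine can reach roughly 35 meters in height."
  },
  "finish_reason": "stop"
}
"""

response = json.loads(raw)
print(response["message"]["content"])  # prints the assistant's reply text
```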
This creates output like the following:
Pinecone Assistant supports the following models:
gpt-4o (default)
gpt-4.1
o4-mini
claude-3-5-sonnet
claude-3-7-sonnet
gemini-2.5-pro
To choose a non-default model for your assistant, set the model parameter in the request:
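For example, a sketch that selects claude-3-5-sonnet, under the same endpoint and header assumptions as the earlier examples:

```python
import os
import requests

ASSISTANT_HOST = "https://prod-1-data.ke.pinecone.io"  # assumed data-plane host
ASSISTANT_NAME = "example-assistant"

resp = requests.post(
    f"{ASSISTANT_HOST}/assistant/chat/{ASSISTANT_NAME}",
    headers={
        "Api-Key": os.environ["PINECONE_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "messages": [
            {"role": "user", "content": "What is the maximum height of a red pine?"}
        ],
        "model": "claude-3-5-sonnet",  # any supported model from the list above
    },
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```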
Models lack memory of previous requests, so any relevant messages from earlier in the conversation must be present in the messages object.
In the following example, the messages object includes prior messages that are necessary for interpreting the newest message.
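A sketch of such a multi-turn request, with the prior user and assistant turns included in messages; the conversation content here is made up for illustration, and the endpoint and headers are the same assumptions as before.

```python
import os
import requests

ASSISTANT_HOST = "https://prod-1-data.ke.pinecone.io"  # assumed data-plane host
ASSISTANT_NAME = "example-assistant"

# Prior turns are included so the model can resolve "How tall does it grow?"
messages = [
    {"role": "user", "content": "Tell me about the red pine."},
    {"role": "assistant", "content": "The red pine is a conifer native to North America."},
    {"role": "user", "content": "How tall does it grow?"},
]

resp = requests.post(
    f"{ASSISTANT_HOST}/assistant/chat/{ASSISTANT_NAME}",
    headers={
        "Api-Key": os.environ["PINECONE_API_KEY"],
        "Content-Type": "application/json",
    },
    json={"messages": messages},
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```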
The example returns a response like the following:
You can filter which documents to use for chat completions. The following example filters the responses to use only documents that include the metadata "resource": "encyclopedia".
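A sketch of a filtered request; the exact name and syntax of the filter field are assumptions here, shown as a simple equality filter on the resource metadata key.

```python
import os
import requests

ASSISTANT_HOST = "https://prod-1-data.ke.pinecone.io"  # assumed data-plane host
ASSISTANT_NAME = "example-assistant"

resp = requests.post(
    f"{ASSISTANT_HOST}/assistant/chat/{ASSISTANT_NAME}",
    headers={
        "Api-Key": os.environ["PINECONE_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "messages": [
            {"role": "user", "content": "What is the maximum height of a red pine?"}
        ],
        # Only documents whose metadata includes "resource": "encyclopedia" are used
        # (field name and filter syntax assumed for illustration).
        "filter": {"resource": "encyclopedia"},
    },
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```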
This is available in API versions 2025-04 and later.
To limit the number of input tokens used, you can control the context size by tuning top_k * snippet_size. These parameters can be adjusted by setting context_options in the request:
snippet_size: Controls the max size of a snippet (default is 2048 tokens). Note that snippet size can vary and, in rare cases, may be bigger than the set snippet_size. Snippet size controls the amount of context the model is given for each chunk of text.
top_k: Controls the max number of context snippets sent to the LLM (default is 16). top_k controls the diversity of information sent to the model.
While additional tokens will be used for other parameters (e.g., the system prompt, chat input), adjusting the top_k and snippet_size can help manage token consumption.
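For example, a sketch that sets top_k to 10 and snippet_size to 2500 via context_options; the X-Pinecone-API-Version header is an assumption for selecting the 2025-04 API version.

```python
import os
import requests

ASSISTANT_HOST = "https://prod-1-data.ke.pinecone.io"  # assumed data-plane host
ASSISTANT_NAME = "example-assistant"

resp = requests.post(
    f"{ASSISTANT_HOST}/assistant/chat/{ASSISTANT_NAME}",
    headers={
        "Api-Key": os.environ["PINECONE_API_KEY"],
        "Content-Type": "application/json",
        "X-Pinecone-API-Version": "2025-04",  # assumed header for version-gated features
    },
    json={
        "messages": [
            {"role": "user", "content": "What is the maximum height of a red pine?"}
        ],
        "context_options": {
            "top_k": 10,           # send at most 10 context snippets to the LLM
            "snippet_size": 2500,  # cap each snippet at roughly 2500 tokens
        },
    },
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```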
The example returns up to 10 snippets, each up to 2500 tokens in size.
To better understand the context retrieved using these parameters, you can retrieve context from an assistant.
This is available in API versions 2025-04 and later.
Temperature is a parameter that controls the randomness of a model’s predictions during text generation. Lower temperatures (~0.0) yield more consistent, predictable answers, while higher temperatures produce more varied output and are generally better suited to creative tasks.
To control the sampling temperature for a model, set the temperature parameter in the request. If a model does not support a temperature parameter, the parameter is ignored.
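A sketch that pins the temperature to 0.0 for more deterministic answers, under the same endpoint and header assumptions as the earlier examples:

```python
import os
import requests

ASSISTANT_HOST = "https://prod-1-data.ke.pinecone.io"  # assumed data-plane host
ASSISTANT_NAME = "example-assistant"

resp = requests.post(
    f"{ASSISTANT_HOST}/assistant/chat/{ASSISTANT_NAME}",
    headers={
        "Api-Key": os.environ["PINECONE_API_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "messages": [
            {"role": "user", "content": "Summarize the uploaded report in two sentences."}
        ],
        "temperature": 0.0,  # low temperature for consistent, predictable answers
    },
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```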
Citation highlights are available in the Pinecone console or API versions 2025-04 and later.
When using the standard chat interface, every response includes a citation object. The object includes a reference to the document that the assistant used to generate the response. Additionally, you can include highlights, which are the specific parts of the document that the assistant used to generate the response, by setting the include_highlights parameter to true in the request:
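A sketch of a request with include_highlights set to true; as with the context example, the X-Pinecone-API-Version header is an assumption for targeting the 2025-04 API version.

```python
import os
import requests

ASSISTANT_HOST = "https://prod-1-data.ke.pinecone.io"  # assumed data-plane host
ASSISTANT_NAME = "example-assistant"

resp = requests.post(
    f"{ASSISTANT_HOST}/assistant/chat/{ASSISTANT_NAME}",
    headers={
        "Api-Key": os.environ["PINECONE_API_KEY"],
        "Content-Type": "application/json",
        "X-Pinecone-API-Version": "2025-04",  # assumed header; highlights need 2025-04 or later
    },
    json={
        "messages": [
            {"role": "user", "content": "What is the maximum height of a red pine?"}
        ],
        "include_highlights": True,  # return the document passages used to ground the answer
    },
)
resp.raise_for_status()
print(resp.json())
```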
The example returns a response like the following:
Enabling highlights will increase token usage.