- **Default response**: The assistant returns a structured response and separate citation information.
- **Streaming response**: The assistant returns the response as a text stream.
- **JSON response**: The assistant returns the response as JSON key-value pairs.
This is the recommended way to chat with an assistant, as it offers more functionality and control over the assistant’s responses and references. However, if you need your assistant to be OpenAI-compatible or need inline citations, use the OpenAI-compatible chat interface.
The following example sends a message and requests a default response:
The content parameter in the request cannot be empty.
```python
# To use the Python SDK, install the plugin:
# pip install --upgrade pinecone pinecone-plugin-assistant

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key="YOUR_API_KEY")

assistant = pc.assistant.Assistant(assistant_name="example-assistant")

msg = Message(role="user", content="Who is the CFO of Netflix?")
resp = assistant.chat(messages=[msg])

# Alternatively, you can provide a dictionary as the message:
# msg = {"role": "user", "content": "Who is the CFO of Netflix?"}
# resp = assistant.chat(messages=[msg])

print(resp)
```
The example above returns a result like the following (illustrative; the exact content, id, and token counts depend on your assistant's documents):
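```json
{
  "finish_reason": "stop",
  "message": {
    "role": "assistant",
    "content": "The CFO of Netflix is Spencer Neumann."
  },
  "id": "...",
  "model": "gpt-4o-2024-05-13",
  "usage": {
    "prompt_tokens": ...,
    "completion_tokens": ...,
    "total_tokens": ...
  },
  "citations": [...]
}
```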
The following example sends a message and requests a streaming response:
The content parameter in the request cannot be empty.
```python
# To use the Python SDK, install the plugin:
# pip install --upgrade pinecone pinecone-plugin-assistant

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key="YOUR_API_KEY")

assistant = pc.assistant.Assistant(assistant_name="example-assistant")

msg = Message(role="user", content="What is the inciting incident of Pride and Prejudice?")
response = assistant.chat(messages=[msg], stream=True)

for data in response:
    if data:
        print(data)
Rather than a single result, the example above returns the response as a stream of message chunks, which the loop prints as they arrive.
The following example uses the json_response parameter to instruct the assistant to return the response as JSON key-value pairs. This is useful if you need to parse the response programmatically.
JSON response cannot be used with the stream parameter.
```python
# To use the Python SDK, install the plugin:
# pip install --upgrade pinecone pinecone-plugin-assistant

import json

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key="YOUR_API_KEY")

assistant = pc.assistant.Assistant(assistant_name="example-assistant")

msg = Message(role="user", content="Who is the CFO and CEO of Netflix?")
response = assistant.chat(messages=[msg], json_response=True)

print(json.loads(response.message.content))
```
The example above prints a parsed result like the following (illustrative; the exact keys and values depend on the question and your documents):
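```python
{'CEO': 'Ted Sarandos and Greg Peters', 'CFO': 'Spencer Neumann'}
```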
Models lack memory of previous requests, so any relevant messages from earlier in the conversation must be present in the messages object.
In the following example, the messages object includes prior messages that are necessary for interpreting the newest message.
```python
# To use the Python SDK, install the plugin:
# pip install --upgrade pinecone pinecone-plugin-assistant

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key="YOUR_API_KEY")

# Get your assistant.
assistant = pc.assistant.Assistant(
    assistant_name="example-assistant",
)

# Chat with the assistant, including prior turns so the model can
# interpret the newest message.
chat_context = [
    Message(content="What is the maximum height of a red pine?", role="user"),
    Message(content="The maximum height of a red pine (Pinus resinosa) is up to 25 meters.", role="assistant"),
    Message(content="What is its maximum diameter?", role="user")
]
response = assistant.chat(messages=chat_context)
```
The example returns a response like the following:
{ "finish_reason":"stop", "message":{ "role":"assistant", "content":"The maximum diameter of a red pine (Pinus resinosa) is up to 1 meter." }, "id":"0000000000000000236a24a17e55309a", "model":"gpt-4o-2024-05-13", "usage":{ "prompt_tokens":21377, "completion_tokens":20, "total_tokens":21397 }, "citations":[...]}
```python
# To use the Python SDK, install the plugin:
# pip install --upgrade pinecone pinecone-plugin-assistant

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key="YOUR_API_KEY")

# Get your assistant.
assistant = pc.assistant.Assistant(
    assistant_name="example-assistant",
)

# Chat with the assistant, retrieving only from documents whose
# "resource" metadata field is "encyclopedia".
chat_context = [Message(role="user", content="What is the maximum height of a red pine?")]
response = assistant.chat(messages=chat_context, stream=True, filter={"resource": "encyclopedia"})
```
Context options are available in API versions 2025-04 and later.
To limit the number of input tokens used, you can control the context size by tuning top_k and snippet_size: the context sent to the LLM is at most roughly top_k * snippet_size tokens. Adjust these parameters by setting context_options in the request:

- snippet_size: Controls the maximum size of each snippet (default: 2048 tokens). This determines how much context the model is given for each chunk of text. Snippet size can vary and, in rare cases, may exceed the set snippet_size.
- top_k: Controls the maximum number of context snippets sent to the LLM (default: 16). This determines the diversity of information sent to the model.

Additional tokens are used for other parts of the request (for example, the system prompt and chat input), but adjusting top_k and snippet_size can help manage token consumption.
```python
# To use the Python SDK, install the plugin:
# pip install --upgrade pinecone pinecone-plugin-assistant

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key="YOUR_API_KEY")

assistant = pc.assistant.Assistant(assistant_name="example-assistant")

msg = Message(role="user", content="Who is the CFO of Netflix?")
resp = assistant.chat(messages=[msg], context_options={"snippet_size": 2500, "top_k": 10})

print(resp)
```
This example sends up to 10 context snippets to the LLM, each up to 2,500 tokens in size.
Pinecone Assistant uses the gpt-4o model by default. Alternatively, you can use the claude-3-5-sonnet model. Select the LLM to use by setting the model parameter in the request:
```python
# To use the Python SDK, install the plugin:
# pip install --upgrade pinecone pinecone-plugin-assistant

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key="YOUR_API_KEY")

# Get your assistant.
assistant = pc.assistant.Assistant(
    assistant_name="example-assistant",
)

# Chat with the assistant using the claude-3-5-sonnet model.
chat_context = [Message(role="user", content="What is the maximum height of a red pine?")]
response = assistant.chat(messages=chat_context, stream=True, model="claude-3-5-sonnet")
```
Citation highlights are available in the Pinecone console or API versions 2025-04 and later.
When using the standard chat interface, every response includes a citation object that references the documents the assistant used to generate the response. To also return highlights, the specific parts of those documents the assistant relied on, set the include_highlights parameter to true in the request:
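The following is a minimal sketch, following the same Python SDK pattern as the earlier examples and assuming include_highlights is passed as a keyword argument to chat like the other request parameters:

```python
# To use the Python SDK, install the plugin:
# pip install --upgrade pinecone pinecone-plugin-assistant

from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key="YOUR_API_KEY")

assistant = pc.assistant.Assistant(assistant_name="example-assistant")

# Ask for highlights (the document passages behind each citation)
# in addition to the citation references themselves.
msg = Message(role="user", content="What is the maximum height of a red pine?")
resp = assistant.chat(messages=[msg], include_highlights=True)

print(resp)
```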
The assistant's response is returned in a JSON response object along with other information. The message string is contained in the following fields:

- choices[0].message.content for the default chat response
- choices[0].delta.content for the streaming chat response
You can extract the message content and print it to the console:
```python
# Print the assistant's response to the console.
print(str(response.choices[0].message.content))
```
This creates output like the following:
A red pine, scientifically known as *Pinus resinosa*, is a medium-sized tree that can grow up to 25 meters high and 75 centimeters in diameter. [1, pp. 1]
```python
# Stream the response and print each chunk's content to the console.
response = assistant.chat(messages=chat_context, stream=True)

for data in response:
    if data:
        print(str(data.choices[0].delta.content))
```
This creates output like the following:
The maximum height of a red pine (Pinus resinosa) is up to twenty-five meters [1, pp. 1].