POST /assistant/chat/{assistant_name}/chat/completions
from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key="YOUR_API_KEY")

# Get your assistant.
assistant = pc.assistant.describe_assistant("YOUR_ASSISTANT_NAME")

# Chat with the Assistant.
chat_context = [Message(content='What does 3M do?')]
response = assistant.chat_completions(messages=chat_context)
{
  "chat_completion": {
    "id": "<string>",
    "choices": [
      {
        "finish_reason": "Stop",
        "index": 123,
        "message": {
          "role": "<string>",
          "content": "<string>"
        }
      }
    ],
    "model": "<string>"
  }
}

This feature is in public beta and is not recommended for production usage. Join the beta waitlist and review the preview terms for more details.

Authorizations

Api-Key
string
header, required

Pinecone API Key

Path Parameters

assistant_name
string
required

The name of the assistant to chat with.

Body

application/json
messages
object[]
required

An array of objects that represent the messages in a conversation.

streaming
boolean
default: false

Whether to return a streaming response.
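The request body described above can be assembled as a plain JSON payload. A minimal sketch, assuming the role/content message shape shown in the response schema (the field names come from the Body parameters above; the question is taken from the example earlier on this page):

```python
import json

# Assemble the chat completions request body. "messages" and
# "streaming" are the two Body parameters documented above.
payload = {
    "messages": [
        {"role": "user", "content": "What does 3M do?"}
    ],
    "streaming": False,  # default: false
}

# Serialize for an application/json request.
body = json.dumps(payload)
```

When `streaming` is set to `true`, the endpoint returns a streaming response instead of a single JSON object.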

Response

200 - application/json
chat_completion
object

The ChatCompletionModel describes the response format of a chat request.
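A hedged sketch of reading the assistant's reply out of a 200 response: the dictionary below is a placeholder mirroring the schema shown above (the id, model, and content values are illustrative, not real API output):

```python
# Placeholder response following the 200 application/json schema above;
# all values are made up for illustration.
response_json = {
    "chat_completion": {
        "id": "example-id",
        "choices": [
            {
                "finish_reason": "Stop",
                "index": 0,
                "message": {
                    "role": "assistant",
                    "content": "3M is a diversified manufacturing company.",
                },
            }
        ],
        "model": "example-model",
    }
}

# The reply text lives in the first choice's message content.
reply = response_json["chat_completion"]["choices"][0]["message"]["content"]
```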