POST /embed
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# Generate an embedding for a single passage
embeddings = pc.inference.embed(
    model="multilingual-e5-large",
    inputs=["The quick brown fox jumps over the lazy dog"],
    parameters={
        "input_type": "passage",
        "truncate": "END"
    }
)
EmbeddingsList(
  model='multilingual-e5-large',
  data=[
    {'values': [0.02117919921875, -0.0093994140625, ..., -0.0384521484375, 0.016326904296875]}
  ],
  usage={'total_tokens': 16}
)

Authorizations

Api-Key
string
header
required

An API Key is required to call Pinecone APIs. Get yours from the console.
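The SDK example above sends this header for you. If you call the REST endpoint directly, the `Api-Key` header goes on the request yourself. A minimal sketch using only the standard library — the base URL `https://api.pinecone.io` and the `build_embed_request` helper are assumptions for illustration, not part of this reference:

```python
import json
import urllib.request

def build_embed_request(api_key, body, base_url="https://api.pinecone.io"):
    """Build a POST /embed request carrying the required Api-Key header.

    Hypothetical helper; the base_url default is an assumption -- check
    your environment's API host before sending anything.
    """
    return urllib.request.Request(
        url=f"{base_url}/embed",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Api-Key": api_key,  # required on every Pinecone API call
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_embed_request(
    "YOUR_API_KEY",
    {"model": "multilingual-e5-large", "inputs": [{"text": "hello"}]},
)
# req is ready to pass to urllib.request.urlopen(req)
```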

Body

application/json
Generate embeddings for inputs.
model
string
required

The model to use for embedding generation.

Example:

"multilingual-e5-large"

inputs
object[]
required

List of inputs to generate embeddings for.

parameters
object

Model-specific parameters.
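Tying the three body fields together, a request body matching this schema might look like the following. Wrapping each input string in an object with a `text` key, and the `input_type`/`truncate` parameter names, follow the Python example above; treat the exact parameter values accepted by other models as model-specific:

```python
# Sketch of a POST /embed request body for multilingual-e5-large.
body = {
    "model": "multilingual-e5-large",  # required
    "inputs": [                        # required: one object per input
        {"text": "The quick brown fox jumps over the lazy dog"},
    ],
    "parameters": {                    # optional, model-specific
        "input_type": "passage",       # "passage" for documents, "query" for search queries
        "truncate": "END",             # truncate over-length inputs at the end
    },
}
```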

Response

200
application/json
OK

Embeddings generated for the inputs.

model
string
required

The model used to generate the embeddings.

Example:

"multilingual-e5-large"

data
object[]
required

The embeddings generated for the inputs.

Embedding of a single input

usage
object
required

Usage statistics for the model inference.
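Putting the response fields together, a decoded JSON response can be read like this. The structure mirrors the schema above; the sample values are illustrative, copied from the example output earlier on this page:

```python
# A decoded /embed response (illustrative values, vectors truncated).
response = {
    "model": "multilingual-e5-large",
    "data": [
        {"values": [0.0212, -0.0094, -0.0385, 0.0163]},
    ],
    "usage": {"total_tokens": 16},
}

# One embedding per input, in the same order as the request's inputs.
vectors = [item["values"] for item in response["data"]]
# Token count consumed by this inference call.
tokens_used = response["usage"]["total_tokens"]
```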
