Voyage-2 is the base-size text embedding model from the second generation of Voyage AI models. Models are accessed through the Python API, and you must register for a Voyage API key to use them. Voyage-2 is optimized for a balance of latency and quality and offers a generous context length of 4K tokens.
```python
import voyageai

# Embed data
data = [
    {"id": "vec1", "text": "Apple is a popular fruit known for its sweetness and crisp texture."},
    {"id": "vec2", "text": "The tech company Apple is known for its innovative products like the iPhone."},
    {"id": "vec3", "text": "Many people enjoy eating apples as a healthy snack."},
    {"id": "vec4", "text": "Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces."},
    {"id": "vec5", "text": "An apple a day keeps the doctor away, as the saying goes."},
]

vo = voyageai.Client(api_key=VOYAGE_API_KEY)
model_id = "voyage-2"

def embed(docs: list[str], input_type: str) -> list[list[float]]:
    # Can embed up to 128 docs at once
    embeddings = vo.embed(docs, model=model_id, input_type=input_type).embeddings
    return embeddings

# Use "document" input type for documents
embeddings = embed([d["text"] for d in data], input_type="document")

vectors = []
for d, e in zip(data, embeddings):
    vectors.append({
        "id": d["id"],
        "values": e,
        "metadata": {"text": d["text"]}
    })

index.upsert(
    vectors=vectors,
    namespace="ns1"
)
```
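Because a single `embed` call accepts at most 128 documents, a larger corpus has to be split into batches before embedding. A minimal sketch of such batching (the `batched_texts` helper is our own illustration, not part of the `voyageai` SDK):

```python
# Sketch: split a list of texts into batches that respect a
# 128-documents-per-call limit. `batched_texts` is a hypothetical
# helper for illustration, not part of the voyageai SDK.
def batched_texts(texts: list[str], batch_size: int = 128) -> list[list[str]]:
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]

# Each batch would then be passed to embed(...) in turn and the
# resulting embedding lists concatenated.
```

For 300 documents this yields batches of 128, 128, and 44 texts, each small enough for one API call.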
```python
query = "Tell me about the tech company known as Apple"

# Use "query" input type for queries
x = embed([query], input_type="query")

results = index.query(
    namespace="ns1",
    vector=x[0],
    top_k=3,
    include_values=False,
    include_metadata=True
)

print(results)
```
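The ranking behind `index.query` happens server-side: conceptually, the index scores each stored vector against the query vector with a similarity measure (commonly cosine similarity) and returns the `top_k` best matches. A minimal local sketch of that idea, using toy helpers rather than Pinecone's actual implementation:

```python
import math

# Cosine similarity between two vectors: dot product divided by the
# product of their magnitudes. Higher means more semantically similar.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-in for index.query: score every stored record against the
# query vector and keep the k highest-scoring ids.
def local_top_k(query_vec: list[float], vectors: list[dict], k: int = 3):
    scored = [(v["id"], cosine_similarity(query_vec, v["values"])) for v in vectors]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]
```

Running this on the embeddings above would surface the tech-company sentences for a tech-company query, which is exactly the behavior the `input_type` hints ("document" vs. "query") are designed to sharpen.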