This page shows you how to import records from object storage into an index and interact with the import. Importing from object storage is the most efficient and cost-effective way to load large numbers of records into an index.

To run through this guide in your browser, see the Bulk import colab notebook.

This feature is in public preview and available only on Standard and Enterprise plans.

Before you import

Before you can import records, ensure you have a serverless index, a storage integration (if your data is in a secure bucket), and data formatted in Parquet files and uploaded to an Amazon S3 bucket.

Create an index

Create a serverless index for your data, keeping the following requirements in mind (a minimal creation example follows the list).

  • Import does not support integrated embedding, so make sure your index is not associated with an integrated embedding model.
  • Import only supports AWS S3 as a data source, so make sure your index is also on AWS.
  • You cannot import records into existing namespaces, so make sure your index does not have namespaces with the same name as the namespaces you want to import into.
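
With these requirements in mind, here is a minimal sketch of creating a dense serverless index on AWS with the Python SDK. The index name, dimension, metric, and region are illustrative, so match them to your data:

from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

# Illustrative values: match the dimension and metric to your embedding model,
# and pick the AWS region closest to your S3 bucket.
pc.create_index(
    name="example-index",
    dimension=1536,
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1")
)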

Add a storage integration

To import records from a secure data source, you must create an integration to allow Pinecone access to data in your object storage. For information on how to add, edit, and delete a storage integration, see Manage storage integrations.

To import records from a public data source, a storage integration is not required.

Prepare your data

For each namespace you want to import into, create a Parquet file and upload it to object storage.
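
Each namespace maps to its own subdirectory (prefix) under the import path. For example, an import covering two namespaces might use a layout like this (bucket, path, and namespace names are illustrative):

s3://BUCKET_NAME/PATH/TO/DIR/
  example_namespace/
    0.parquet
  example_namespace2/
    0.parquet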

Dense index

To import into a dense index, the Parquet file must contain the following columns:

Column name | Parquet type | Description
------------|--------------|--------------------------------------------------------------------------------------
id          | STRING       | Required. The unique identifier for each record.
values      | LIST<FLOAT>  | Required. A list of floating-point values that make up the dense vector embedding.
metadata    | STRING       | Optional. Additional metadata for each record. To omit from specific rows, use NULL.

The Parquet file cannot contain additional columns.

For example:

id | values                   | metadata
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1  | [ 3.82  2.48 -4.15 ... ] | {"year": 1984, "month": 6, "source": "source1", "title": "Example1", "text": "When ..."}
2  | [ 1.82  3.48 -2.15 ... ] | {"year": 1990, "month": 4, "source": "source2", "title": "Example2", "text": "Who ..."}
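
As a sketch of producing such a file with pyarrow (an assumed tooling choice; any Parquet writer that produces the schema above works), an explicit schema pins values to LIST<FLOAT> and metadata to a JSON-encoded STRING:

import json
import pyarrow as pa
import pyarrow.parquet as pq

# Explicit schema so the column types match what import expects.
schema = pa.schema([
    ("id", pa.string()),
    ("values", pa.list_(pa.float32())),
    ("metadata", pa.string())
])

table = pa.table({
    "id": ["1", "2"],
    "values": [[3.82, 2.48, -4.15], [1.82, 3.48, -2.15]],
    "metadata": [
        json.dumps({"year": 1984, "month": 6, "source": "source1", "title": "Example1"}),
        None  # NULL omits metadata for this row
    ]
}, schema=schema)

pq.write_table(table, "dense_records.parquet")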

Sparse index

To import into a sparse index, the Parquet file must contain the following columns:

Column name   | Parquet type              | Description
--------------|---------------------------|--------------------------------------------------------------------------------------
id            | STRING                    | Required. The unique identifier for each record.
sparse_values | LIST<INT> and LIST<FLOAT> | Required. A list of floating-point values (sparse values) and a list of integer values (sparse indices) that make up the sparse vector embedding.
metadata      | STRING                    | Optional. Additional metadata for each record. To omit from specific rows, use NULL.

The Parquet file cannot contain additional columns.

For example:

id | sparse_values                                                                                       | metadata
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
1  | {"indices": [ 822745112 1009084850 1221765879 ... ], "values": [1.7958984 0.41577148 2.828125 ...]} | {"year": 1984, "month": 6, "source": "source1", "title": "Example1", "text": "When ..."}
2  | {"indices": [ 504939989 1293001993 3201939490 ... ], "values": [1.4383747 0.72849722 1.384775 ...]} | {"year": 1990, "month": 4, "source": "source2", "title": "Example2", "text": "Who ..."}

Import records into an index

Review current limitations before starting an import.

Use the start_import operation to start an asynchronous import of vectors from object storage into an index.

To import from a private bucket, specify the ID (integration_id) of the Amazon S3 integration you created. You can find the ID on the Storage integrations page of the Pinecone console. An ID is not needed to import from a public bucket.

The operation returns an id that you can use to check the status of the import.

If you set the import to continue on error, the operation will skip records that fail to import and continue with the next record. The operation will complete, but there will not be any notification about which records, if any, failed to import. To see how many records were successfully imported, use the describe_import operation.

from pinecone import Pinecone, ImportErrorMode

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")
root = "s3://BUCKET_NAME/PATH/TO/DIR"

index.start_import(
    uri=root,
    error_mode=ImportErrorMode.CONTINUE, # or ImportErrorMode.ABORT
    integration_id="a12b3d4c-47d2-492c-a97a-dd98c8dbefde" # Optional for public buckets
)
Response
{
   "operation_id": "101"
}

Once all the data is loaded, the index builder indexes the records, which usually takes at least 10 minutes. During this indexing process, the expected status is InProgress, with percent_complete at 100.0. Once all the imported records are indexed and fully available for querying, the import status is set to Completed.
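
For example, here is a minimal polling sketch that waits for this lifecycle to finish, assuming the import ID returned by start_import and the status values described above:

import time

# Poll until the import reaches a terminal status (e.g., Completed or Failed).
while True:
    import_details = index.describe_import(id="101")
    print(f"status: {import_details.status}, {import_details.percent_complete}% complete")
    if import_details.status not in ("Pending", "InProgress"):
        break
    time.sleep(60)  # imports are long-running, so poll sparingly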

You can start a new import using the Pinecone console. Find the index you want to import into, and click the ellipsis (...) menu > Import data.

Manage imports

List imports

Use the list_imports operation to list all of the recent and ongoing imports. By default, the operation returns up to 100 imports per page. If the limit parameter is passed, the operation returns up to that number of imports per page instead. For example, if limit=3, up to 3 imports are returned per page. Whenever there are additional imports to return, the response includes a pagination_token for fetching the next page of imports.

When using the Python SDK, list_imports paginates automatically.

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

# List using a generator that handles pagination
for i in index.list_imports():
    print(f"id: {i.id} status: {i.status}")

# Or convert the generator to a list to fetch all results at once
operations = list(index.list_imports())
print(operations)
Response
{
  "data": [
    {
      "id": "1",
      "uri": "s3://BUCKET_NAME/PATH/TO/DIR",
      "status": "Pending",
      "started_at": "2024-08-19T20:49:00.754Z",
      "finished_at": "2024-08-19T20:49:00.754Z",
      "percent_complete": 42.2,
      "records_imported": 1000000
    }
  ],
  "pagination": {
    "next": "Tm90aGluZyB0byBzZWUgaGVyZQo="
  }
}
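
If you need manual control over pages, the SDK also exposes a paginated variant. The sketch below assumes list_imports_paginated and its parameters are available in your SDK version:

# Fetch one page of up to 3 imports, then request the next page explicitly.
results = index.list_imports_paginated(limit=3)
print(results.data)

if results.pagination:
    next_page = index.list_imports_paginated(
        limit=3,
        pagination_token=results.pagination.next
    )
    print(next_page.data)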

You can view the list of imports for an index in the Pinecone console. Select the index and navigate to the Imports tab.

Describe an import

Use the describe_import operation to get details about a specific import.

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

index.describe_import(id="101")
Response
{
  "id": "101",
  "uri": "s3://BUCKET_NAME/PATH/TO/DIR",
  "status": "Pending",
  "created_at": "2024-08-19T20:49:00.754Z",
  "finished_at": "2024-08-19T20:49:00.754Z",
  "percent_complete": 42.2,
  "records_imported": 1000000
}

You can view the details of your import using the Pinecone console.

Cancel an import

Use the cancel_import operation to cancel an import if it is not yet finished. Canceling has no effect if the import is already complete.

from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

index.cancel_import(id="101")
Response
{}

You can cancel your import using the Pinecone console. To cancel an ongoing import, select the index you are importing into and navigate to the Imports tab. Then, click the ellipsis (...) menu > Cancel.