This document explains how to create, upload, and list your dataset for use by other Pinecone users. This guide shows how to create your own dataset using your own storage; you cannot upload your own dataset to the Pinecone dataset directory.
The Pinecone datasets project uses Poetry for dependency management and supports Python versions 3.8 and above.
To install Poetry, run the following command from the project root directory:
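The command itself is not preserved in this page. A typical setup, assuming pip is available, installs Poetry and then the project's dependencies from the project root:

pip install poetry
poetry install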
To create a public dataset, you may need to generate dataset metadata.
Example
The following example creates a metadata object meta containing metadata for a dataset test_dataset.
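A minimal sketch of such an object, assuming the DatasetMetadata model exposed by pinecone_datasets.catalog; the exact set of required fields may differ between library versions, and the values below are illustrative only:

from pinecone_datasets.catalog import DatasetMetadata

# Illustrative values; adjust each field to describe your own dataset.
meta = DatasetMetadata(
    name="test_dataset",
    created_at="2023-02-17 14:17:01.481785",
    documents=2,
    queries=2,
    source="manual",
    bucket="LOCAL",
    task="unittests",
    dense_model={"name": "bert", "dimension": 3},
    sparse_model={"name": "bm25"},
)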
If you intend to list your dataset, you can save the dataset metadata using the following command. Write permission to the target location is required.
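The save command itself is not preserved here. As a rough sketch, assuming meta is a pydantic model as above, its JSON representation can be written to the metadata.json location shown in Figure 1 with fsspec (the bucket and dataset name below are placeholders):

import fsspec

# Placeholder path; use your own base_path and dataset_id.
with fsspec.open("s3://my-bucket/test_dataset/metadata.json", "w") as f:
    f.write(meta.json())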
To see the complete schema, run the following command:
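The command is not shown in the source. Assuming DatasetMetadata is a pydantic model, its schema can be inspected with:

meta.schema()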
To run tests locally, run the following command:
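The test command is not preserved here; since the project is managed with Poetry, a typical invocation would be:

poetry run pytest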
Pinecone Datasets can load a dataset from any storage bucket to which it has access, using the default access controls for S3, GCS, or local permissions.
Pinecone Datasets expects data to be uploaded with the following directory structure:
Figure 1: Expected directory structure for Pinecone datasets
├── base_path # path to where all datasets are stored
│ ├── dataset_id # name of dataset
│ │ ├── metadata.json # dataset metadata (optional, only for listed)
│ │ ├── documents # dataset documents
│ │ │ ├── file1.parquet
│ │ │ └── file2.parquet
│ │ ├── queries # dataset queries
│ │ │ ├── file1.parquet
│ │ │ └── file2.parquet
└── …
Pinecone Datasets scans the storage and lists every dataset that has a metadata file.
Example
The following shows the format of an example s3 bucket address for a dataset's metadata file:
s3://my-bucket/my-dataset/metadata.json
By default, the Pinecone SDK uses Pinecone’s public datasets bucket on GCS. You can use your own bucket by setting the PINECONE_DATASETS_ENDPOINT environment variable.
Example
The following export command changes the default dataset storage endpoint to gs://my-bucket. Calling list_datasets or load_dataset now scans that bucket and lists the datasets it contains.
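For example, assuming a bucket named my-bucket:

export PINECONE_DATASETS_ENDPOINT="gs://my-bucket"

A short sketch of how the functions would then be used (the dataset name is a placeholder):

from pinecone_datasets import list_datasets, load_dataset

list_datasets()                  # now lists the datasets found in gs://my-bucket
ds = load_dataset("my-dataset")  # loads a dataset stored in that bucket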
You can also use s3:// as a prefix to your bucket path to access an S3 bucket.
Pinecone Datasets supports GCS and S3 storage buckets, using default authentication as provided by the fsspec implementation: gcsfs for GCS and s3fs for AWS.
To authenticate to an AWS S3 bucket using the key/secret method, follow these steps:
1. Set the PINECONE_DATASETS_ENDPOINT environment variable to the S3 address for your bucket, for example s3://my-bucket.
2. Provide your key and secret to the list_datasets and load_dataset functions, as shown in the sketch after this list.
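A minimal sketch of step 2, assuming list_datasets and load_dataset forward the key and secret keyword arguments to the underlying s3fs filesystem; the environment variable names are placeholders for wherever you keep your credentials:

import os
from pinecone_datasets import list_datasets, load_dataset

# Placeholder credential variables; substitute your own.
datasets = list_datasets(
    key=os.environ.get("S3_ACCESS_KEY"),
    secret=os.environ.get("S3_SECRET"),
)
ds = load_dataset(
    "test_dataset",
    key=os.environ.get("S3_ACCESS_KEY"),
    secret=os.environ.get("S3_SECRET"),
)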
To access a non-listed dataset, load it directly using the Dataset constructor.
Example
The following loads the dataset non-listed-dataset.
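A minimal sketch, assuming the Dataset constructor accepts the dataset name and resolves it against the configured endpoint; the exact signature may vary between library versions:

from pinecone_datasets import Dataset

# Assumes PINECONE_DATASETS_ENDPOINT (or the default bucket) points at the
# storage location that contains "non-listed-dataset".
dataset = Dataset("non-listed-dataset")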