The official Pinecone Python SDK for building vector search applications with AI/ML.
Pinecone is a vector database that makes it easy to add vector search to production applications. Use Pinecone to store, search, and manage high-dimensional vectors for applications like semantic search, recommendation systems, and RAG (Retrieval-Augmented Generation).
- Vector Operations: Store, query, and manage high-dimensional vectors with metadata filtering
- Serverless & Pod Indexes: Choose between serverless (auto-scaling) or pod-based (dedicated) indexes
- Integrated Inference: Built-in embedding and reranking models for end-to-end search workflows
- Async Support: Full asyncio support with `PineconeAsyncio` for modern Python applications (a minimal sketch follows this list)
- GRPC Support: Optional GRPC transport for improved performance
- Type Safety: Full type hints and type checking support
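As a quick taste of the async support, here is a minimal sketch, assuming the `asyncio` extra is installed and `PINECONE_API_KEY` is set in your environment:

```python
# Requires the asyncio extra: pip install "pinecone[asyncio]"
import asyncio

from pinecone import PineconeAsyncio


async def main():
    # PineconeAsyncio is an async context manager; with no api_key argument
    # it reads PINECONE_API_KEY from the environment.
    async with PineconeAsyncio() as pc:
        # Control-plane methods are coroutines and must be awaited
        indexes = await pc.list_indexes()
        print(indexes.names())


asyncio.run(main())
```

Data-plane calls follow the same pattern through `pc.IndexAsyncio(host=...)`.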
- Documentation
- Prerequisites
- Installation
- Quickstart
- Pinecone Assistant
- More Information
- Issues & Bugs
- Contributing
> [!NOTE]
> The official SDK package was renamed from `pinecone-client` to `pinecone` beginning in version 5.1.0.
> Please remove `pinecone-client` from your project dependencies and add `pinecone` instead (for example,
> `pip uninstall pinecone-client` followed by `pip install pinecone`) to get the latest updates.
>
> For notes on changes between major versions, see Upgrading.
- The Pinecone Python SDK requires Python 3.10 or greater. It has been tested with CPython versions from 3.10 to 3.13.
- Before you can use the Pinecone SDK, you must sign up for an account and find your API key in the Pinecone console dashboard at https://app.pinecone.io.
The Pinecone Python SDK is distributed on PyPI using the package name pinecone. The base installation includes everything you need to get started with vector operations, but you can install optional extras to unlock additional functionality.
Base installation includes:
- Core Pinecone client (`Pinecone`)
- Vector operations (upsert, query, fetch, delete)
- Index management (create, list, describe, delete)
- Metadata filtering
- Pinecone Assistant plugin
Optional extras:
- `pinecone[asyncio]` - Adds the `aiohttp` dependency and enables `PineconeAsyncio` for async/await support. Use this if you're building applications with FastAPI, aiohttp, or other async frameworks.
- `pinecone[grpc]` - Adds `grpcio` and related libraries for GRPC transport. Provides modest performance improvements for data operations like `upsert` and `query`. See the guide on tuning performance, and the short sketch after the configuration note below.
Configuration: The SDK can read your API key from the `PINECONE_API_KEY` environment variable, or you can pass it directly when instantiating the client.
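To illustrate the GRPC extra, here is a minimal sketch; `PineconeGRPC` is intended as a drop-in replacement for `Pinecone`, and the index host below is a placeholder:

```python
# Requires the grpc extra: pip install "pinecone[grpc]"
from pinecone.grpc import PineconeGRPC

# Drop-in replacement for the Pinecone class; reads PINECONE_API_KEY
# from the environment if no api_key argument is passed.
pc = PineconeGRPC()

# "example-index-host" is a placeholder; use your own index host,
# e.g. from pc.describe_index("index-name").host
idx = pc.Index(host="example-index-host")

# Data operations such as upsert and query use the GRPC transport
idx.upsert(
    vectors=[("id1", [0.1, 0.2, 0.3])],
    namespace="example-namespace",
)
```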
Installing with pip:

```shell
# Install the latest version
pip3 install pinecone

# Install the latest version, with optional dependencies
pip3 install "pinecone[asyncio,grpc]"
```

Installing with uv:

uv is a modern package manager that runs 10-100x faster than pip and supports most pip syntax.
```shell
# Install the latest version
uv add pinecone

# Install the latest version, with optional dependencies
uv add "pinecone[asyncio,grpc]"
```

Installing with poetry:
```shell
# Install the latest version
poetry add pinecone

# Install the latest version, with optional dependencies
poetry add pinecone --extras asyncio --extras grpc
```

This example shows how to create an index, add vectors with embeddings you've generated, and query them. This approach gives you full control over your embedding model and vector generation process.

```python
from pinecone import (
    Pinecone,
    ServerlessSpec,
    CloudProvider,
    AwsRegion,
    VectorType
)

# 1. Instantiate the Pinecone client
# Option A: Pass API key directly
pc = Pinecone(api_key='YOUR_API_KEY')

# Option B: Use environment variable (PINECONE_API_KEY)
# pc = Pinecone()

# 2. Create an index
index_config = pc.create_index(
    name="index-name",
    dimension=1536,
    spec=ServerlessSpec(
        cloud=CloudProvider.AWS,
        region=AwsRegion.US_EAST_1
    ),
    vector_type=VectorType.DENSE
)

# 3. Instantiate an Index client
idx = pc.Index(host=index_config.host)

# 4. Upsert embeddings
idx.upsert(
    vectors=[
        ("id1", [0.1, 0.2, 0.3, 0.4, ...], {"metadata_key": "value1"}),
        ("id2", [0.2, 0.3, 0.4, 0.5, ...], {"metadata_key": "value2"}),
    ],
    namespace="example-namespace"
)

# 5. Query your index using an embedding
query_embedding = [...]  # list should have length == index dimension
idx.query(
    vector=query_embedding,
    top_k=10,
    include_metadata=True,
    filter={"metadata_key": {"$eq": "value1"}}
)
```

This example demonstrates using Pinecone's integrated inference capabilities. You provide raw text data, and Pinecone handles embedding generation and optional reranking automatically. This is ideal when you want to focus on your data and let Pinecone handle the ML complexity.

```python
from pinecone import (
    Pinecone,
    CloudProvider,
    AwsRegion,
    EmbedModel,
    IndexEmbed,
)

# 1. Instantiate the Pinecone client
# The API key can be passed directly or read from the PINECONE_API_KEY environment variable
pc = Pinecone(api_key='YOUR_API_KEY')

# 2. Create an index configured for use with a particular embedding model
# This sets up the index with the right dimensions and configuration for your chosen model
index_config = pc.create_index_for_model(
    name="my-model-index",
    cloud=CloudProvider.AWS,
    region=AwsRegion.US_EAST_1,
    embed=IndexEmbed(
        model=EmbedModel.Multilingual_E5_Large,
        field_map={"text": "my_text_field"}
    )
)

# 3. Instantiate an Index client for data operations
idx = pc.Index(host=index_config.host)

# 4. Upsert records with raw text data
# Pinecone will automatically generate embeddings using the configured model
idx.upsert_records(
    namespace="my-namespace",
    records=[
        {
            "_id": "test1",
            "my_text_field": "Apple is a popular fruit known for its sweetness and crisp texture.",
        },
        {
            "_id": "test2",
            "my_text_field": "The tech company Apple is known for its innovative products like the iPhone.",
        },
        {
            "_id": "test3",
            "my_text_field": "Many people enjoy eating apples as a healthy snack.",
        },
        {
            "_id": "test4",
            "my_text_field": "Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces.",
        },
        {
            "_id": "test5",
            "my_text_field": "An apple a day keeps the doctor away, as the saying goes.",
        },
        {
            "_id": "test6",
            "my_text_field": "Apple Computer Company was founded on April 1, 1976, by Steve Jobs, Steve Wozniak, and Ronald Wayne as a partnership.",
        },
    ],
)

# 5. Search for similar records using text queries
# Pinecone handles embedding the query and optionally reranking results
from pinecone import SearchQuery, SearchRerank, RerankModel

response = idx.search_records(
    namespace="my-namespace",
    query=SearchQuery(
        inputs={
            "text": "Apple corporation",
        },
        top_k=3
    ),
    rerank=SearchRerank(
        model=RerankModel.Bge_Reranker_V2_M3,
        rank_fields=["my_text_field"],
        top_n=3,
    ),
)
```

The `pinecone-plugin-assistant` package is now bundled by default when installing `pinecone`. It does not need to be installed separately in order to use Pinecone Assistant.
For more information on Pinecone Assistant, see the Pinecone Assistant documentation.
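As a rough, hedged sketch of the Assistant plugin in use (method names and the response shape follow the hosted Assistant docs and may differ across plugin versions; the assistant name and messages are placeholders):

```python
from pinecone import Pinecone
from pinecone_plugins.assistant.models.chat import Message

pc = Pinecone(api_key="YOUR_API_KEY")

# Create an assistant; "example-assistant" is just a placeholder name
assistant = pc.assistant.create_assistant(
    assistant_name="example-assistant",
    instructions="Answer questions using the uploaded documents.",
)

# Chat with the assistant
msg = Message(role="user", content="What is our refund policy?")
response = assistant.chat(messages=[msg])

# The response is dict-like per the Assistant docs; the exact structure
# may vary by version, so inspect it before relying on specific keys
print(response["message"]["content"])
```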
Detailed information on specific ways of using the SDK is covered in these guides:
Index Management:
- Serverless Indexes - Learn about auto-scaling serverless indexes that scale automatically with your workload
- Pod Indexes - Understand dedicated pod-based indexes for consistent performance
Data Operations:
- Working with vectors - Comprehensive guide to storing, querying, and managing vectors with metadata filtering (a short filter sketch follows this list)
Advanced Features:
- Inference API - Use Pinecone's integrated embedding and reranking models
- FAQ - Common questions and troubleshooting tips
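To supplement the `$eq` filter used in the quickstart, here is a hedged sketch of a compound metadata filter using Pinecone's `$and`, `$in`, and `$gte` operators; the index host, metadata fields, and vector values are illustrative placeholders:

```python
from pinecone import Pinecone

pc = Pinecone()  # reads PINECONE_API_KEY from the environment

# "example-index-host" is a placeholder; use your own index host
idx = pc.Index(host="example-index-host")

# Combine operators: match records in the given genres with year >= 2020
idx.query(
    vector=[0.1, 0.2, 0.3, 0.4],  # length must equal the index dimension
    top_k=5,
    include_metadata=True,
    filter={
        "$and": [
            {"genre": {"$in": ["fiction", "mystery"]}},
            {"year": {"$gte": 2020}},
        ]
    },
)
```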
If you notice bugs or have feedback, please file an issue.
You can also get help in the Pinecone Community Forum.
If you'd like to make a contribution, or get set up locally to develop the Pinecone Python SDK, please see our contributing guide.