Python Libraries Guide¶
Choose the perfect Python library for your LLM integration needs. Our expert analysis covers 15+ libraries across all categories, from simple HTTP clients to comprehensive AI frameworks.
Library Categories¶
-
:material-web:{ .lg .middle } HTTP Clients
Simple and flexible HTTP clients for direct API access with minimal dependencies.
-
:material-brain-2:{ .lg .middle } OpenAI-Compatible
Libraries optimized for OpenAI's API format with broad provider support.
-
:material-framework:{ .lg .middle } Full Frameworks
Comprehensive frameworks with agents, tools, and advanced orchestration.
-
:material-tools:{ .lg .middle } Specialized Tools
Purpose-built libraries for specific use cases like structured output and control.
Library Rankings¶
Our expert analysis rates libraries on a 7-star scale across multiple criteria:
Overall Top Picks¶
Rank | Library | Rating | Category | Best For |
---|---|---|---|---|
1 | httpx | ⭐⭐⭐⭐⭐⭐⭐ | HTTP Client | Modern async/sync HTTP |
2 | openai | ⭐⭐⭐⭐⭐⭐⭐ | OpenAI-Compatible | Official OpenAI integration |
3 | pydantic_ai | ⭐⭐⭐⭐⭐⭐ | Framework | Type-safe AI development |
4 | instructor | ⭐⭐⭐⭐⭐⭐ | OpenAI-Compatible | Structured output with Pydantic |
5 | outlines | ⭐⭐⭐⭐⭐⭐ | Specialized | Guaranteed structured generation |
By Category Leaders¶
- httpx ⭐⭐⭐⭐⭐⭐⭐ - Modern async/sync HTTP client
- aiohttp ⭐⭐⭐⭐⭐⭐ - High-performance async HTTP
- requests ⭐⭐⭐⭐⭐ - Simple synchronous HTTP
- openai ⭐⭐⭐⭐⭐⭐⭐ - Official OpenAI library
- instructor ⭐⭐⭐⭐⭐⭐ - Structured output with Pydantic
- litellm ⭐⭐⭐⭐⭐⭐ - Universal provider interface
- pydantic_ai ⭐⭐⭐⭐⭐⭐ - Type-safe AI framework
- langchain ⭐⭐⭐⭐ - Comprehensive ecosystem
- llamaindex ⭐⭐⭐⭐ - RAG and data integration
- outlines ⭐⭐⭐⭐⭐⭐ - Guaranteed structured output
- guidance ⭐⭐⭐⭐⭐ - Constrained generation
- portkey-ai ⭐⭐⭐⭐⭐⭐ - Enterprise AI gateway
Library Selection Guide¶
Choose by Use Case¶
Need: Basic HTTP requests to LLM APIs
Recommended: httpx
or requests
import httpx
async with httpx.AsyncClient() as client:
response = await client.post(
"https://api.openai.com/v1/chat/completions",
headers={"Authorization": f"Bearer {api_key}"},
json={
"model": "gpt-4o-mini",
"messages": [{"role": "user", "content": "Hello!"}]
}
)
return response.json()
Pros: Minimal dependencies, full control, fast Cons: More boilerplate, no provider abstraction
Need: Work with OpenAI and compatible providers
Recommended: openai
or litellm
from openai import AsyncOpenAI
client = AsyncOpenAI(api_key="your-key")
response = await client.chat.completions.create(
model="gpt-4o-mini",
messages=[{"role": "user", "content": "Hello!"}]
)
Pros: Official support, excellent docs, streaming Cons: OpenAI-focused, less flexibility
Need: Reliable JSON/Pydantic output from LLMs
Recommended: instructor
or outlines
import instructor
from openai import OpenAI
from pydantic import BaseModel
class User(BaseModel):
name: str
age: int
client = instructor.from_openai(OpenAI())
user = client.chat.completions.create(
model="gpt-4o-mini",
response_model=User,
messages=[{"role": "user", "content": "Extract: John is 25"}]
)
Pros: Type safety, validation, reliability Cons: Learning curve, provider limitations
Need: Switch between different LLM providers
Recommended: litellm
or custom httpx
from litellm import acompletion
# Works with any provider
response = await acompletion(
model="gpt-4o-mini", # or "claude-3-haiku", "gemini-pro"
messages=[{"role": "user", "content": "Hello!"}]
)
Pros: Provider flexibility, unified API Cons: Abstraction overhead, feature limitations
Need: Agents, tools, complex workflows
Recommended: pydantic_ai
or langchain
from pydantic_ai import Agent
from pydantic import BaseModel
class DatabaseQuery(BaseModel):
sql: str
agent = Agent(
'openai:gpt-4o-mini',
result_type=DatabaseQuery,
system_prompt="You are a SQL expert."
)
result = await agent.run("Get all users created today")
Pros: Full-featured, production-ready Cons: Complexity, larger dependencies
Choose by Project Size¶
Project Type | Library | Why |
---|---|---|
Prototype | requests + json |
Fast to implement, no dependencies |
Small App | openai or httpx |
Official support, good docs |
Medium App | instructor or litellm |
Type safety, multi-provider |
Large App | pydantic_ai |
Full framework, maintainable |
Enterprise | portkey-ai + custom |
Observability, control |
Feature Comparison Matrix¶
Library | Async | Streaming | Structured | Multi-Provider | Type Safe | Docs |
---|---|---|---|---|---|---|
httpx | ✅ | ✅ | ❌ | ✅ | ✅ | ⭐⭐⭐⭐⭐ |
openai | ✅ | ✅ | ❌ | ❌ | ✅ | ⭐⭐⭐⭐⭐⭐⭐ |
instructor | ✅ | ❌ | ✅ | ❌ | ✅ | ⭐⭐⭐⭐⭐ |
litellm | ✅ | ✅ | ❌ | ✅ | ❌ | ⭐⭐⭐⭐ |
pydantic_ai | ✅ | ✅ | ✅ | ✅ | ✅ | ⭐⭐⭐⭐ |
outlines | ❌ | ❌ | ✅ | ✅ | ✅ | ⭐⭐⭐ |
langchain | ✅ | ✅ | ✅ | ✅ | ❌ | ⭐⭐⭐⭐ |
Installation and Setup¶
Quick Installation¶
# HTTP clients
pip install httpx # Modern async/sync HTTP
pip install requests # Simple synchronous HTTP
pip install aiohttp # High-performance async
# OpenAI-compatible
pip install openai # Official OpenAI library
pip install instructor # Structured output
pip install litellm # Multi-provider support
# Full frameworks
pip install pydantic-ai # Type-safe AI framework
pip install langchain # Comprehensive ecosystem
pip install llama-index # RAG specialist
# Specialized tools
pip install outlines # Guaranteed structured output
pip install guidance # Constrained generation
pip install portkey-ai # Enterprise AI gateway
Environment Setup¶
Most libraries need API keys:
# Core providers
export OPENAI_API_KEY="your-openai-key"
export ANTHROPIC_API_KEY="your-anthropic-key"
export GOOGLE_API_KEY="your-google-key"
# For litellm and similar multi-provider libraries
export GROQ_API_KEY="your-groq-key"
export MISTRAL_API_KEY="your-mistral-key"
Performance Comparison¶
Benchmarks¶
We tested common scenarios across libraries:
Library | Simple Request | Structured Output | Multi-Provider | Bundle Size |
---|---|---|---|---|
httpx | 50ms | N/A | Manual | 2MB |
openai | 55ms | N/A | N/A | 5MB |
instructor | 65ms | 80ms | N/A | 8MB |
litellm | 70ms | N/A | 75ms | 12MB |
pydantic_ai | 75ms | 85ms | 80ms | 15MB |
langchain | 120ms | 140ms | 130ms | 50MB |
Benchmarks on simple chat completion requests, M2 MacBook Pro
Memory Usage¶
Library Category | RAM Usage | Description |
---|---|---|
HTTP Clients | 10-20MB | Minimal overhead |
OpenAI-Compatible | 30-50MB | Moderate overhead |
Frameworks | 100-200MB | Full-featured |
Specialized | 50-100MB | Feature-dependent |
Migration Guides¶
From Requests to httpx¶
# Old (requests)
import requests
response = requests.post(url, json=data, headers=headers)
# New (httpx)
import httpx
async with httpx.AsyncClient() as client:
response = await client.post(url, json=data, headers=headers)
From OpenAI to Instructor¶
# Old (openai)
response = client.chat.completions.create(
model="gpt-4",
messages=[{"role": "user", "content": "Extract user info"}]
)
# New (instructor)
from pydantic import BaseModel
class User(BaseModel):
name: str
age: int
user = client.chat.completions.create(
model="gpt-4",
response_model=User,
messages=[{"role": "user", "content": "Extract user info"}]
)
Getting Library Recommendations¶
Use Knollm's CLI to get personalized recommendations:
# Get recommendations by use case
knollm libraries recommend async
knollm libraries recommend structured_output
knollm libraries recommend multi_provider
# Compare libraries
knollm libraries compare httpx requests aiohttp
# Get detailed information
knollm libraries info instructor
What's Next?¶
Explore our comprehensive library analysis:
- Library Comparison → - Detailed feature comparison
- Recommendations → - Get personalized suggestions
- Integration Examples → - See libraries in action
- Best Practices → - Production deployment tips
🚀 Quick Decision Guide
- Just starting? Use
openai
library - Need structured output? Use
instructor
- Multiple providers? Use
litellm
orhttpx
- Building complex apps? Use
pydantic_ai
- Maximum control? Use
httpx
directly