Overview
CircuitNotionAI is a powerful Python client library for the CircuitNotion AI API. It provides simple and intuitive access to advanced AI models for content generation, conversational AI, and various natural language processing tasks.
AI Content Generation
Generate high-quality content using state-of-the-art language models with customizable parameters and fine-grained control.
Conversational AI
Build interactive conversational applications with context-aware multi-turn conversations and memory management.
Streaming Responses
Real-time streaming responses for enhanced user experience with immediate feedback and progressive content delivery.
Batch Processing
Efficient batch processing capabilities for handling multiple requests simultaneously with optimized resource usage.
Key Features
- Multiple AI model support (circuit-2-turbo and more)
- Customizable generation parameters
- Built-in error handling and retry logic
- Comprehensive response metadata
- Rate limiting and quota management
- Secure API key authentication
- Flexible configuration options
- Extensive documentation and examples
Installation
Requirements: Python 3.7+ with pip package manager
Install CircuitNotionAI
# Install using pip
pip install CircuitNotionAI
# Verify installation
python -c "import CircuitNotionAI; print('CircuitNotionAI installed successfully')"
Core Dependencies
- requests: HTTP API client
- pydantic: Data validation
- typing-extensions: Type hints
Optional Dependencies
- aiohttp: Async HTTP support
- websockets: Streaming support
- python-dotenv: Environment variables
Authentication
API Key Required: Get your API key from the CircuitNotion dashboard to access the AI models.
Direct API Key Configuration
from CircuitNotionAI import CNAI
# Initialize client with API key
client = CNAI.Client(api_key="your_api_key_here")
# Verify connection
try:
    models = client.models.list()
    print("✓ Successfully connected to CircuitNotion AI")
except Exception as e:
    print(f"✗ Connection failed: {e}")
Environment Variables (Recommended)
# Set environment variable
export CIRCUITNOTION_API_KEY="your_api_key_here"
# In your Python code
import os
from CircuitNotionAI import CNAI
client = CNAI.Client(api_key=os.getenv("CIRCUITNOTION_API_KEY"))
# Or use automatic environment detection
client = CNAI.Client() # Automatically looks for CIRCUITNOTION_API_KEY
Quick Start Guide
Basic Content Generation
from CircuitNotionAI import CNAI
# Initialize client with API key
client = CNAI.Client(api_key="your_api_key")
# Generate content using the default model
response = client.models.generate_content(
    model="circuit-2-turbo",
    contents="explain simply how to beat procrastination",
    temperature=0.7,
    max_tokens=200
)
print(response.txt)
Initialize Client
Create CNAI client with your API key for authentication.
Generate Content
Use the models API to generate content with custom parameters.
Process Response
Access generated text and metadata from the response object.
Content Generation
Advanced content generation with fine-grained control over AI model parameters and output formatting.
Advanced Generation Parameters
# Advanced content generation with custom parameters
response = client.models.generate_content(
    model="circuit-2-turbo",
    contents="Write a Python function to calculate fibonacci numbers",
    temperature=0.3,       # Lower temperature for more focused output
    max_tokens=500,
    top_p=0.9,             # Nucleus sampling
    frequency_penalty=0.1,
    presence_penalty=0.1
)
print("Generated Code:")
print(response.txt)
print(f"Tokens used: {response.usage.total_tokens}")
Generation Parameters
- temperature: Controls randomness (0.0-1.0)
- max_tokens: Maximum response length
- top_p: Nucleus sampling parameter
- frequency_penalty: Reduces repetition
- presence_penalty: Encourages new topics
Response Metadata
- usage.total_tokens: Token consumption
- usage.completion_tokens: Generated tokens
- model: Model used for generation
- finish_reason: Completion status
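The metadata fields above lend themselves to simple usage logging. The sketch below is library-independent: it uses stand-in dataclasses whose field names mirror the documented `Response` and `Usage` objects (the real library returns its own types), and derives the prompt-token count from the two documented counters.

```python
from dataclasses import dataclass

# Stand-ins mirroring the documented fields; the real library
# returns its own Response/Usage objects.
@dataclass
class Usage:
    total_tokens: int
    completion_tokens: int

@dataclass
class Response:
    txt: str
    usage: Usage
    model: str
    finish_reason: str

def summarize_usage(response: Response) -> str:
    """Build a one-line usage summary from response metadata."""
    prompt_tokens = response.usage.total_tokens - response.usage.completion_tokens
    return (f"{response.model}: {prompt_tokens} prompt + "
            f"{response.usage.completion_tokens} completion = "
            f"{response.usage.total_tokens} tokens ({response.finish_reason})")

demo = Response(txt="...",
                usage=Usage(total_tokens=120, completion_tokens=80),
                model="circuit-2-turbo", finish_reason="completed")
print(summarize_usage(demo))
```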
Conversational AI
Multi-turn Conversations
# Multi-turn conversation
conversation = client.conversations.create()

# First message
response1 = conversation.add_message(
    role="user",
    content="What are the main principles of clean code?"
)
print("AI:", response1.txt)

# Follow-up question
response2 = conversation.add_message(
    role="user",
    content="Can you give me a specific example of the single responsibility principle?"
)
print("AI:", response2.txt)

# Get conversation history
history = conversation.get_history()
for message in history:
    print(f"{message.role}: {message.content}")
Conversation Features
Context Management
- Maintains conversation history
- Context-aware responses
- Memory persistence across turns
- Configurable context window
Advanced Features
- Role-based message handling
- Conversation branching
- Export conversation history
- Custom system prompts
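The configurable context window above can also be approximated client-side. The sketch below is not part of the library: it trims the oldest turns from a plain list of message dicts to fit a token budget, using a naive whitespace word count as a stand-in tokenizer.

```python
def trim_history(messages, max_tokens,
                 count_tokens=lambda text: len(text.split())):
    """Keep the most recent messages whose combined cost fits the budget.

    messages: list of {"role": ..., "content": ...} dicts, oldest first.
    Always keeps at least the newest message, even if it exceeds the budget.
    """
    kept = []
    budget = max_tokens
    for message in reversed(messages):      # walk from newest to oldest
        cost = count_tokens(message["content"])
        if kept and cost > budget:
            break                           # oldest turns fall off first
        kept.append(message)
        budget -= cost
    return list(reversed(kept))             # restore oldest-first order

history = [
    {"role": "user", "content": "What are the main principles of clean code?"},
    {"role": "assistant", "content": "Readability, small functions, single responsibility."},
    {"role": "user", "content": "Give an example of single responsibility."},
]
print([m["role"] for m in trim_history(history, max_tokens=12)])
```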
Streaming Responses
Real-time Streaming Generation
# Streaming responses for real-time generation
def handle_stream_chunk(chunk):
    print(chunk.text, end='', flush=True)

# Stream content generation
client.models.generate_content_stream(
    model="circuit-2-turbo",
    contents="Write a detailed explanation of machine learning concepts",
    temperature=0.7,
    max_tokens=1000,
    on_chunk=handle_stream_chunk
)

# Or use iterator approach
stream = client.models.generate_content_stream(
    model="circuit-2-turbo",
    contents="Explain quantum computing basics",
    temperature=0.6
)

full_response = ""
for chunk in stream:
    full_response += chunk.text
    print(chunk.text, end='', flush=True)

print(f"\n\nComplete response length: {len(full_response)} characters")
⚡ Real-time Output
Stream responses as they're generated for immediate user feedback.
🔄 Progressive Delivery
Process chunks incrementally for better user experience.
⚙️ Custom Handlers
Implement custom chunk processing and display logic.
Batch Processing
Efficient Batch Operations
# Batch processing multiple prompts
prompts = [
    "Explain artificial intelligence in one paragraph",
    "What are the benefits of renewable energy?",
    "How does blockchain technology work?",
    "Describe the importance of cybersecurity"
]

# Process all prompts
responses = client.models.batch_generate(
    model="circuit-2-turbo",
    prompts=prompts,
    temperature=0.7,
    max_tokens=150
)

for i, response in enumerate(responses):
    print(f"Prompt {i+1}: {prompts[i]}")
    print(f"Response: {response.txt}")
    print(f"Tokens: {response.usage.total_tokens}")
    print("-" * 50)
Model Management
Working with Different AI Models
# Working with different models
available_models = client.models.list()
print("Available models:")
for model in available_models:
    print(f"- {model.name}: {model.description}")

# Use specific model for code generation
code_response = client.models.generate_content(
    model="circuit-code-expert",   # Specialized for code
    contents="Create a REST API using FastAPI with authentication",
    temperature=0.2,               # Lower temperature for precise code
    max_tokens=800
)

# Use creative model for storytelling
story_response = client.models.generate_content(
    model="circuit-creative-writer",  # Specialized for creative writing
    contents="Write a short science fiction story about AI and humanity",
    temperature=0.9,                  # Higher temperature for creativity
    max_tokens=1000
)
Available Models
circuit-2-turbo
General-purpose model for content generation, Q&A, and creative tasks.
circuit-code-expert
Specialized model optimized for code generation and programming tasks.
circuit-creative-writer
Creative writing focused model for stories, poetry, and artistic content.
circuit-analyst
Data analysis and reasoning model for complex problem solving.
Error Handling
Comprehensive Error Management
from CircuitNotionAI import CNAI
from CircuitNotionAI.exceptions import (
    APIError,
    AuthenticationError,
    RateLimitError,
    ModelNotFoundError
)

client = CNAI.Client(api_key="your_api_key")

try:
    response = client.models.generate_content(
        model="circuit-2-turbo",
        contents="Generate a creative story",
        temperature=0.8,
        max_tokens=500
    )
    print(response.txt)
except AuthenticationError:
    print("Error: Invalid API key. Please check your credentials.")
except RateLimitError as e:
    print(f"Rate limit exceeded. Retry after {e.retry_after} seconds.")
except ModelNotFoundError:
    print("Error: The specified model is not available.")
except APIError as e:
    print(f"API Error: {e.message} (Status: {e.status_code})")
except Exception as e:
    print(f"Unexpected error: {str(e)}")
Common Exceptions
- AuthenticationError: Invalid API key
- RateLimitError: API quota exceeded
- ModelNotFoundError: Invalid model name
- APIError: General API errors
Best Practices
- Always wrap API calls in try/except blocks
- Implement exponential backoff for rate limits
- Log errors with appropriate detail levels
- Provide fallback responses for critical paths
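The exponential-backoff practice above can be sketched generically. The helper below is not part of the library; it retries any callable and uses a plain stand-in exception where real code would catch the documented `RateLimitError`. The demo records delays instead of sleeping so the behavior is easy to inspect.

```python
import time

class RateLimited(Exception):
    """Stand-in for a rate-limit error; real code would catch RateLimitError."""

def with_backoff(func, max_retries=3, base_delay=1.0, sleep=time.sleep):
    """Wrap func so it retries with exponential backoff on RateLimited."""
    def wrapper(*args, **kwargs):
        for attempt in range(max_retries + 1):
            try:
                return func(*args, **kwargs)
            except RateLimited:
                if attempt == max_retries:
                    raise                       # out of retries: propagate
                sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return wrapper

# Demonstration: fail twice with RateLimited, then succeed.
delays = []
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimited()
    return "ok"

result = with_backoff(flaky, sleep=delays.append)()
print(result, delays)   # ok [1.0, 2.0]
```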
Configuration Management
Advanced Configuration Options
# Configuration management
config = CNAI.Config(
    api_key="your_api_key",
    base_url="https://api.circuitnotion.com",
    timeout=30,
    max_retries=3,
    retry_delay=1,
    default_model="circuit-2-turbo",
    default_temperature=0.7,
    default_max_tokens=500
)
client = CNAI.Client(config=config)

# Override default settings per request
response = client.models.generate_content(
    contents="Explain machine learning algorithms",
    temperature=0.3,  # Override default temperature
    max_tokens=1000   # Override default max_tokens
)

# Environment-based configuration
import os
client = CNAI.Client(
    api_key=os.getenv("CIRCUITNOTION_API_KEY"),
    base_url=os.getenv("CIRCUITNOTION_BASE_URL", "https://api.circuitnotion.com")
)
Configuration Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | string | None | CircuitNotion API authentication key |
| base_url | string | https://api.circuitnotion.com | API base URL |
| timeout | int | 30 | Request timeout in seconds |
| max_retries | int | 3 | Maximum retry attempts |
| default_model | string | circuit-2-turbo | Default model for generation |
API Reference
Client Class
CNAI.Client(api_key, base_url, timeout, max_retries, config)
Initialize CircuitNotionAI client with authentication and configuration
client.models.list() → List[Model]
Retrieve list of available AI models
client.models.get(model_name) → Model
Get detailed information about a specific model
Content Generation Methods
generate_content(model, contents, temperature, max_tokens, **kwargs) → Response
Generate content using specified model and parameters
generate_content_stream(model, contents, **kwargs) → Iterator[ChunkResponse]
Stream content generation with real-time chunks
batch_generate(model, prompts, **kwargs) → List[Response]
Process multiple prompts in a single batch request
Conversation Methods
conversations.create(system_prompt) → Conversation
Create new conversation with optional system prompt
conversation.add_message(role, content, **kwargs) → Response
Add message to conversation and get AI response
conversation.get_history() → List[Message]
Retrieve complete conversation history
Response Objects
Response.txt → str
Generated text content
Response.usage → Usage
Token usage information (total_tokens, completion_tokens)
Response.model → str
Model used for generation
Response.finish_reason → str
Completion status (completed, length, error)
Practical Examples
Code Generation Assistant
from CircuitNotionAI import CNAI
client = CNAI.Client(api_key="your_api_key")
def generate_code(description, language="python"):
    prompt = f"Generate {language} code for: {description}"
    response = client.models.generate_content(
        model="circuit-code-expert",
        contents=prompt,
        temperature=0.2,  # Low temperature for precise code
        max_tokens=800
    )
    return response.txt
# Examples
flask_api = generate_code("REST API with user authentication using Flask")
data_processor = generate_code("function to process CSV data with pandas")
algorithm = generate_code("implementation of binary search tree", "python")
print(flask_api)
Content Writing Assistant
def create_content(topic, content_type="blog", tone="professional"):
    prompts = {
        "blog": f"Write a comprehensive blog post about {topic}",
        "summary": f"Create a concise summary of {topic}",
        "tutorial": f"Write a step-by-step tutorial on {topic}",
        "email": f"Compose a {tone} email about {topic}"
    }
    response = client.models.generate_content(
        model="circuit-creative-writer",
        contents=prompts.get(content_type, prompts["blog"]),
        temperature=0.8,  # Higher temperature for creativity
        max_tokens=1000
    )
    return response.txt
# Generate various content types
blog_post = create_content("machine learning basics", "blog")
tutorial = create_content("Python web scraping", "tutorial")
summary = create_content("blockchain technology", "summary")
Interactive Chat Application
class ChatBot:
    def __init__(self, api_key, system_prompt=None):
        self.client = CNAI.Client(api_key=api_key)
        self.conversation = self.client.conversations.create(
            system_prompt=system_prompt or "You are a helpful AI assistant."
        )

    def chat(self, user_message):
        try:
            response = self.conversation.add_message(
                role="user",
                content=user_message,
                temperature=0.7
            )
            return response.txt
        except Exception as e:
            return f"Error: {str(e)}"

    def get_chat_history(self):
        return self.conversation.get_history()

# Usage
chatbot = ChatBot(
    api_key="your_api_key",
    system_prompt="You are a coding mentor helping students learn programming."
)

while True:
    user_input = input("You: ")
    if user_input.lower() in ['quit', 'exit']:
        break
    response = chatbot.chat(user_input)
    print(f"AI: {response}")
Use Cases
💻 Development Tools
Code generation, documentation writing, debugging assistance
📝 Content Creation
Blog posts, marketing copy, technical documentation
🤖 Chatbots & Assistants
Customer support, educational tutoring, personal assistants
📊 Data Analysis
Report generation, data insights, automated summaries
Support & Resources
License & Legal
MIT License
CircuitNotionAI is released under the MIT License. You're free to use, modify, and distribute the library in your projects.
API Terms
Usage is subject to CircuitNotion's API Terms of Service. Please review rate limits and usage policies.