Overview
CircuitNotion AI is fully compatible with the official OpenAI Python SDK. Point the SDK at our base URL and keep using the OpenAI features you already know, including chat completions, streaming, function calling, and tools, with CircuitNotion's AI models.
OpenAI SDK Compatible
Use the standard OpenAI Python SDK you already know. No need to learn a new API - same interface, CircuitNotion models.
Conversational AI
Build interactive conversational applications with context-aware multi-turn conversations and memory management.
Streaming Responses
Real-time streaming responses for enhanced user experience with immediate feedback and progressive content delivery.
Function Calling & Tools
Full support for OpenAI function calling and tools. Build AI agents that can interact with external APIs and databases.
Key Features
- 100% OpenAI SDK compatible interface
- Function calling and tools support
- Streaming responses with SSE
- Async/await support
- Multiple AI model support
- Built-in retry logic and error handling
- System and user message support
- Comprehensive response metadata
Installation
Requirements: Python 3.7.1+ and the pip package manager. The client library is the official OpenAI SDK, maintained by OpenAI.
Install the OpenAI SDK
# Install the official OpenAI Python SDK
pip install openai

# Verify installation
python -c "import openai; print('OpenAI SDK installed successfully')"
Main Package
- openai - Official OpenAI SDK
- httpx - Modern HTTP client
- pydantic - Data validation
Recommended
- python-dotenv - Environment variables
- asyncio - Async operations (built-in)
- typing - Type hints (built-in)
Authentication
API Key Required: Get your API key from the CircuitNotion dashboard to access the AI models.
Direct API Key Configuration
from openai import OpenAI

# Initialize client with CircuitNotion base URL
client = OpenAI(
    base_url="https://apis.circuitnotion.com/v1",
    api_key="your_api_key_here"
)

# Verify connection
try:
    response = client.chat.completions.create(
        model="circuit-2-turbo",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    print("✓ Successfully connected to CircuitNotion AI")
    print(response.choices[0].message.content)
except Exception as e:
    print(f"✗ Connection failed: {e}")
Environment Variables (Recommended)
# Set environment variable
export CIRCUITNOTION_API_KEY="your_api_key_here"

# In your Python code
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://apis.circuitnotion.com/v1",
    api_key=os.getenv("CIRCUITNOTION_API_KEY")
)

# Or use a .env file with python-dotenv
from dotenv import load_dotenv
load_dotenv()

client = OpenAI(
    base_url="https://apis.circuitnotion.com/v1",
    api_key=os.getenv("CIRCUITNOTION_API_KEY")
)
Quick Start Guide
Basic Content Generation
from openai import OpenAI

# Initialize client with CircuitNotion base URL and API key
client = OpenAI(
    base_url="https://apis.circuitnotion.com/v1",
    api_key="your_api_key_here"
)

# Generate content using the circuit-2-turbo model
response = client.chat.completions.create(
    model="circuit-2-turbo",
    messages=[
        {"role": "user", "content": "Explain simply how to beat procrastination"}
    ],
    temperature=0.7,
    max_tokens=200
)

print(response.choices[0].message.content)
Install OpenAI SDK
Install the official OpenAI Python package from PyPI.
Configure Base URL
Point SDK to CircuitNotion API endpoint with your key.
Start Generating
Use standard OpenAI SDK methods to generate content.
Content Generation
Advanced content generation with fine-grained control over AI model parameters and output formatting.
Advanced Generation Parameters
# Advanced content generation with custom parameters
response = client.chat.completions.create(
    model="circuit-2-turbo",
    messages=[
        {"role": "system", "content": "You are an expert Python programmer."},
        {"role": "user", "content": "Write a Python function to calculate fibonacci numbers"}
    ],
    temperature=0.3,        # Lower temperature for more focused output
    max_tokens=500,
    top_p=0.9,              # Nucleus sampling
    frequency_penalty=0.1,
    presence_penalty=0.1
)

print("Generated Code:")
print(response.choices[0].message.content)
print(f"Tokens used: {response.usage.total_tokens}")
print(f"Completion tokens: {response.usage.completion_tokens}")
Generation Parameters
- temperature: Controls randomness (0.0-1.0)
- max_tokens: Maximum response length
- top_p: Nucleus sampling parameter
- frequency_penalty: Reduces repetition
Response Metadata
- usage.total_tokens: Token consumption
- usage.completion_tokens: Generated tokens
- model: Model used for generation
- finish_reason: Completion status
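The metadata fields above can be collected into a single summary. This is a minimal sketch that uses a stub object shaped like the responses in the snippets above, so it runs without an API call; with a live client, pass the object returned by client.chat.completions.create directly.

```python
from types import SimpleNamespace

def summarize_response(response):
    """Collect the metadata fields listed above into one dict."""
    choice = response.choices[0]
    return {
        "model": response.model,
        "finish_reason": choice.finish_reason,
        "total_tokens": response.usage.total_tokens,
        "completion_tokens": response.usage.completion_tokens,
    }

# Stub shaped like an OpenAI chat completion (replace with a real response)
stub = SimpleNamespace(
    model="circuit-2-turbo",
    usage=SimpleNamespace(total_tokens=120, completion_tokens=80),
    choices=[SimpleNamespace(finish_reason="stop")],
)

print(summarize_response(stub))
```

Logging this dict per request is a simple way to track token spend across an application.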
Conversational AI
Multi-turn Conversations
# Multi-turn conversation with message history
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "What are the main principles of clean code?"}
]

# First message
response1 = client.chat.completions.create(
    model="circuit-2-turbo",
    messages=messages
)
assistant_reply = response1.choices[0].message.content
print("AI:", assistant_reply)

# Add assistant's response to history
messages.append({"role": "assistant", "content": assistant_reply})

# Follow-up question
messages.append({
    "role": "user",
    "content": "Can you give me a specific example of the single responsibility principle?"
})

response2 = client.chat.completions.create(
    model="circuit-2-turbo",
    messages=messages
)
print("AI:", response2.choices[0].message.content)

# Full conversation history is in the messages list
for msg in messages:
    print(f"{msg['role']}: {msg['content'][:50]}...")
Conversation Features
Context Management
- Maintains conversation history
- Context-aware responses
- Memory persistence across turns
- Configurable context window
Advanced Features
- Role-based message handling
- Conversation branching
- Export conversation history
- Custom system prompts
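One way to keep a configurable context window, as listed above, is to trim the history before each request. This sketch counts messages rather than tokens (a production version would budget tokens) and always preserves the system prompt:

```python
def trim_history(messages, max_messages=20):
    """Keep the system prompt (if any) plus only the most recent turns."""
    system = [m for m in messages if m["role"] == "system"]
    turns = [m for m in messages if m["role"] != "system"]
    return system + turns[-(max_messages - len(system)):]

# Example: a long conversation trimmed before the next request
history = [{"role": "system", "content": "You are a helpful coding assistant."}]
for i in range(30):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_history(history, max_messages=10)
print(len(trimmed))  # 10
```

Call trim_history on the messages list right before each client.chat.completions.create call to keep requests bounded.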
Streaming Responses
Real-time Streaming Generation
# Streaming responses for real-time generation
print("AI Response: ", end="", flush=True)

stream = client.chat.completions.create(
    model="circuit-2-turbo",
    messages=[
        {"role": "user", "content": "Write a detailed explanation of machine learning concepts"}
    ],
    temperature=0.7,
    max_tokens=1000,
    stream=True  # Enable streaming
)

full_response = ""
for chunk in stream:
    if chunk.choices[0].delta.content:
        content = chunk.choices[0].delta.content
        full_response += content
        print(content, end="", flush=True)

print(f"\n\nComplete response length: {len(full_response)} characters")

# Stream with error handling
try:
    for chunk in client.chat.completions.create(
        model="circuit-2-turbo",
        messages=[{"role": "user", "content": "Explain quantum computing"}],
        stream=True
    ):
        if chunk.choices[0].delta.content:
            print(chunk.choices[0].delta.content, end="", flush=True)
except Exception as e:
    print(f"\nStreaming error: {e}")
Real-time Output
Stream responses as they're generated for immediate user feedback.
Progressive Delivery
Process chunks incrementally for better user experience.
Custom Handlers
Implement custom chunk processing and display logic.
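A custom chunk handler can be factored out as a small function that takes the stream and a callback. The sketch below runs against stub chunks shaped like the streaming deltas above, so it executes without an API call; with a live client, pass the iterator returned by create(..., stream=True) instead.

```python
from types import SimpleNamespace

def consume_stream(chunks, on_chunk):
    """Feed each non-empty delta to on_chunk and return the assembled text."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:
            parts.append(delta)
            on_chunk(delta)
    return "".join(parts)

# Stub chunks shaped like streaming deltas (replace with a real stream)
fake_stream = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=c))])
    for c in ["Machine ", "learning ", "is...", None]
]

text = consume_stream(fake_stream, lambda d: print(d, end="", flush=True))
print(f"\n{len(text)} characters")
```

The callback is where display logic lives: print to a terminal, push over a websocket, or append to a UI buffer.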
Function Calling & Tools
CircuitNotion AI supports OpenAI's function calling and tools feature, enabling AI models to interact with external functions, APIs, and databases. Perfect for building AI agents and assistants.
Function Calling Example
# Function calling / Tools support
import json

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and country, e.g. London, UK"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["location"]
            }
        }
    }
]

response = client.chat.completions.create(
    model="circuit-2-turbo",
    messages=[
        {"role": "user", "content": "What's the weather like in Kigali?"}
    ],
    tools=tools,
    tool_choice="auto"  # Let the model decide when to use tools
)

# Check if the model wants to call a function
if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    function_name = tool_call.function.name
    function_args = json.loads(tool_call.function.arguments)
    print(f"Function to call: {function_name}")
    print(f"Arguments: {function_args}")
    # Call your actual function here and send the result back
    # function_result = get_weather(**function_args)
    # Then continue the conversation with the function result
Use Cases
- Call external APIs (weather, database queries)
- Execute business logic functions
- Retrieve real-time data
- Perform calculations and transformations
Key Benefits
- AI decides when to call functions
- Type-safe parameter schemas
- Multiple functions in one call
- Structured data extraction
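The step the example above leaves as a comment, sending the function result back, can be sketched as follows. Here get_weather is a hypothetical local stand-in for a real lookup, and the tool call is stubbed so the snippet runs without an API request; with a live client, use response.choices[0].message.tool_calls[0] instead.

```python
import json
from types import SimpleNamespace

def get_weather(location, unit="celsius"):
    # Hypothetical local implementation; swap in a real weather lookup
    return {"location": location, "temp": 21, "unit": unit}

LOCAL_TOOLS = {"get_weather": get_weather}

def tool_result_message(tool_call):
    """Run the requested function locally and wrap the result as a tool message."""
    args = json.loads(tool_call.function.arguments)
    result = LOCAL_TOOLS[tool_call.function.name](**args)
    return {
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": json.dumps(result),
    }

# Stub shaped like response.choices[0].message.tool_calls[0]
call = SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(name="get_weather",
                             arguments='{"location": "Kigali, Rwanda"}'),
)
print(tool_result_message(call))
```

With a live client, append the assistant message (including its tool_calls) and this tool message to the messages list, then call create() again so the model can phrase the final answer.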
Batch Processing
Efficient Batch Operations
# Process multiple prompts efficiently
import concurrent.futures

prompts = [
    "Explain artificial intelligence in one paragraph",
    "What are the benefits of renewable energy?",
    "How does blockchain technology work?",
    "Describe the importance of cybersecurity"
]

def generate_response(prompt):
    response = client.chat.completions.create(
        model="circuit-2-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,
        max_tokens=150
    )
    return {
        "prompt": prompt,
        "response": response.choices[0].message.content,
        "tokens": response.usage.total_tokens
    }

# Process in parallel using ThreadPoolExecutor
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(generate_response, prompts))

for i, result in enumerate(results):
    print(f"\nPrompt {i+1}: {result['prompt']}")
    print(f"Response: {result['response']}")
    print(f"Tokens: {result['tokens']}")
    print("-" * 50)
Model Management
Working with Different AI Models
# Working with different CircuitNotion models
# All models are accessed through the same OpenAI SDK interface

# General-purpose model
response = client.chat.completions.create(
    model="circuit-2-turbo",
    messages=[{"role": "user", "content": "Explain machine learning"}]
)

# For code generation (use lower temperature)
code_response = client.chat.completions.create(
    model="circuit-2-turbo",
    messages=[
        {"role": "system", "content": "You are an expert programmer."},
        {"role": "user", "content": "Create a REST API using FastAPI with authentication"}
    ],
    temperature=0.2,  # Lower temperature for precise code
    max_tokens=800
)

# For creative writing (use higher temperature)
story_response = client.chat.completions.create(
    model="circuit-2-turbo",
    messages=[
        {"role": "system", "content": "You are a creative science fiction writer."},
        {"role": "user", "content": "Write a short story about AI and humanity"}
    ],
    temperature=0.9,  # Higher temperature for creativity
    max_tokens=1000
)

# List available models
models = client.models.list()
for model in models.data:
    print(f"Model: {model.id}")
Available Models
circuit-2-turbo
General-purpose model for content generation, Q&A, and creative tasks.
circuit-code-expert
Specialized model optimized for code generation and programming tasks.
circuit-creative-writer
Creative writing focused model for stories, poetry, and artistic content.
circuit-analyst
Data analysis and reasoning model for complex problem solving.
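A small lookup table keeps model selection in one place. This sketch assumes the specialized models above accept the same chat-completions interface as circuit-2-turbo; verify against the list returned by client.models.list().

```python
# Task-to-model map built from the list above
MODEL_FOR_TASK = {
    "general": "circuit-2-turbo",
    "code": "circuit-code-expert",
    "creative": "circuit-creative-writer",
    "analysis": "circuit-analyst",
}

def pick_model(task):
    """Fall back to the general-purpose model for unknown tasks."""
    return MODEL_FOR_TASK.get(task, "circuit-2-turbo")

print(pick_model("code"))    # circuit-code-expert
print(pick_model("poetry"))  # circuit-2-turbo (fallback)

# Usage with a live client:
# client.chat.completions.create(model=pick_model("code"), messages=[...])
```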
Error Handling
Comprehensive Error Management
from openai import OpenAI, OpenAIError, APIError, RateLimitError, AuthenticationError

client = OpenAI(
    base_url="https://apis.circuitnotion.com/v1",
    api_key="your_api_key_here"
)

try:
    response = client.chat.completions.create(
        model="circuit-2-turbo",
        messages=[
            {"role": "user", "content": "Generate a creative story"}
        ],
        temperature=0.8,
        max_tokens=500
    )
    print(response.choices[0].message.content)
except AuthenticationError as e:
    print("Error: Invalid API key. Please check your credentials.")
    print(f"Details: {e}")
except RateLimitError as e:
    print("Rate limit exceeded. Please try again later.")
    print(f"Details: {e}")
except APIError as e:
    print(f"API Error: {e}")
    # Not every APIError carries a status code (e.g. connection errors)
    print(f"Status code: {getattr(e, 'status_code', 'unknown')}")
except OpenAIError as e:
    print(f"OpenAI SDK Error: {e}")
except Exception as e:
    print(f"Unexpected error: {e}")
Common Exceptions
- AuthenticationError - Invalid API key
- RateLimitError - API quota exceeded
- NotFoundError - Invalid model name
- APIError - General API errors
Best Practices
- Always wrap API calls in try/except blocks
- Implement exponential backoff for rate limits
- Log errors with appropriate detail levels
- Provide fallback responses for critical paths
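The exponential-backoff practice above can be sketched as a small retry wrapper. Note the SDK already retries some failures via max_retries; this generic helper catches all exceptions for brevity, and in real code you would narrow the except clause to RateLimitError.

```python
import random
import time

def with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry `call` with exponential backoff plus jitter between attempts."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:  # narrow to RateLimitError with the real SDK
            if attempt == max_attempts - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

# Usage with a live client:
# result = with_backoff(lambda: client.chat.completions.create(
#     model="circuit-2-turbo",
#     messages=[{"role": "user", "content": "Hello!"}],
# ))
```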
Configuration Management
Advanced Configuration Options
# Configuration management
import os
from openai import OpenAI

# Basic configuration
client = OpenAI(
    base_url="https://apis.circuitnotion.com/v1",
    api_key=os.getenv("CIRCUITNOTION_API_KEY"),  # From environment
    timeout=30.0,   # Request timeout in seconds
    max_retries=3   # Automatic retries
)

# Advanced configuration with custom headers
from openai import DefaultHttpxClient
import httpx

http_client = DefaultHttpxClient(
    timeout=60.0,
    limits=httpx.Limits(max_keepalive_connections=5, max_connections=10)
)

client = OpenAI(
    base_url="https://apis.circuitnotion.com/v1",
    api_key=os.getenv("CIRCUITNOTION_API_KEY"),
    http_client=http_client,
    default_headers={
        "X-Custom-Header": "value"
    }
)

# Async client for async operations
from openai import AsyncOpenAI

async_client = AsyncOpenAI(
    base_url="https://apis.circuitnotion.com/v1",
    api_key=os.getenv("CIRCUITNOTION_API_KEY")
)

# Use in an async function
async def generate_async():
    response = await async_client.chat.completions.create(
        model="circuit-2-turbo",
        messages=[{"role": "user", "content": "Hello!"}]
    )
    return response.choices[0].message.content
Configuration Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | string | None | CircuitNotion API authentication key |
| base_url | string | https://apis.circuitnotion.com/v1 | API base URL |
| timeout | float | 30.0 | Request timeout in seconds |
| max_retries | int | 3 | Maximum retry attempts |
| default_model | string | circuit-2-turbo | Default model for generation |
API Reference
Client Class
CNAI.Client(api_key, base_url, timeout, max_retries, config) - Initialize the CircuitNotionAI client with authentication and configuration
client.models.list() → List[Model] - Retrieve the list of available AI models
client.models.get(model_name) → Model - Get detailed information about a specific model
Content Generation Methods
generate_content(model, contents, temperature, max_tokens, **kwargs) → Response - Generate content using the specified model and parameters
generate_content_stream(model, contents, **kwargs) → Iterator[ChunkResponse] - Stream content generation with real-time chunks
batch_generate(model, prompts, **kwargs) → List[Response] - Process multiple prompts in a single batch request
Conversation Methods
conversations.create(system_prompt) → Conversation - Create a new conversation with an optional system prompt
conversation.add_message(role, content, **kwargs) → Response - Add a message to the conversation and get the AI response
conversation.get_history() → List[Message] - Retrieve the complete conversation history
Response Objects
Response.text → str - Generated text content
Response.usage → Usage - Token usage information (total_tokens, completion_tokens)
Response.model → str - Model used for generation
Response.finish_reason → str - Completion status (completed, length, error)
Practical Examples
Code Generation Assistant
from openai import OpenAI

client = OpenAI(
    base_url="https://apis.circuitnotion.com/v1",
    api_key="your_api_key"
)

def generate_code(description, language="python"):
    response = client.chat.completions.create(
        model="circuit-2-turbo",
        messages=[
            {"role": "system", "content": f"You are an expert {language} programmer."},
            {"role": "user", "content": f"Generate {language} code for: {description}"}
        ],
        temperature=0.2,  # Low temperature for precise code
        max_tokens=800
    )
    return response.choices[0].message.content

# Examples
flask_api = generate_code("REST API with user authentication using Flask")
data_processor = generate_code("function to process CSV data with pandas")
ts_component = generate_code("React component with TypeScript", "typescript")

print(flask_api)
Content Writing Assistant
def create_content(topic, content_type="blog", tone="professional"):
    prompts = {
        "blog": f"Write a comprehensive blog post about {topic}",
        "summary": f"Create a concise summary of {topic}",
        "tutorial": f"Write a step-by-step tutorial on {topic}",
        "email": f"Compose a {tone} email about {topic}"
    }
    response = client.chat.completions.create(
        model="circuit-2-turbo",
        messages=[
            {"role": "system", "content": "You are a professional content writer."},
            {"role": "user", "content": prompts.get(content_type, prompts["blog"])}
        ],
        temperature=0.8,  # Higher temperature for creativity
        max_tokens=1000
    )
    return response.choices[0].message.content

# Generate various content types
blog_post = create_content("machine learning basics", "blog")
tutorial = create_content("Python web scraping", "tutorial")
summary = create_content("blockchain technology", "summary")
Interactive Chat Application
class ChatBot:
    def __init__(self, api_key, system_prompt=None):
        self.client = OpenAI(
            base_url="https://apis.circuitnotion.com/v1",
            api_key=api_key
        )
        self.messages = []
        if system_prompt:
            self.messages.append({"role": "system", "content": system_prompt})

    def chat(self, user_message):
        try:
            self.messages.append({"role": "user", "content": user_message})
            response = self.client.chat.completions.create(
                model="circuit-2-turbo",
                messages=self.messages,
                temperature=0.7
            )
            assistant_message = response.choices[0].message.content
            self.messages.append({"role": "assistant", "content": assistant_message})
            return assistant_message
        except Exception as e:
            return f"Error: {e}"

    def get_chat_history(self):
        return self.messages

# Usage
chatbot = ChatBot(
    api_key="your_api_key",
    system_prompt="You are a coding mentor helping students learn programming."
)

while True:
    user_input = input("You: ")
    if user_input.lower() in ["quit", "exit"]:
        break
    response = chatbot.chat(user_input)
    print(f"AI: {response}")
Use Cases
Development Tools
Code generation, documentation writing, debugging assistance
Content Creation
Blog posts, marketing copy, technical documentation
Chatbots & Assistants
Customer support, educational tutoring, personal assistants
Data Analysis
Report generation, data insights, automated summaries
Support & Resources
Documentation & Code
License & Legal
MIT License
CircuitNotionAI is released under the MIT License. You're free to use, modify, and distribute the library in your projects.
API Terms
Usage is subject to CircuitNotion's API Terms of Service. Please review rate limits and usage policies.