```python
from openai import OpenAI
import json
from dotenv import load_dotenv
import os
from IPython.display import Markdown, display

load_dotenv()
client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY")
)
```
OpenAI SDK (`openai`) — a comprehensive guide covering setup, authentication, endpoints, examples, best practices, and advanced features including streaming, function calling, tool use, and billing queries.

Example:
```python
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    store=True,
    messages=[
        {"role": "user", "content": "what language model is this?"}
    ],
    max_tokens=150,
)

content = completion.choices[0].message.content
display(Markdown(content))
```
I am based on OpenAI’s GPT-3 model, which is a language model designed for natural language understanding and generation. If you have any questions or need assistance, feel free to ask!
📦 1. Installation
```shell
pip install openai
```
✅ This installs the official SDK.
🔐 2. Authentication
Options:
```python
from openai import OpenAI
import os

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
```
Other methods:
- Environment variable: `OPENAI_API_KEY`
- `.openai` config file
- Azure or custom base URL: `base_url="https://..."`
⚙️ 3. SDK Client Overview
```python
from openai import OpenAI

client = OpenAI()
```
Client supports: - client.chat.completions.create(...)
- client.images.generate(...)
- client.embeddings.create(...)
- client.audio.transcriptions.create(...)
- client.files.create(...)
- client.fine_tuning.jobs.create(...)
- and more…
💬 4. Chat Completions
Basic usage:
```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
Options:
- `max_tokens`, `temperature`, `top_p`, `presence_penalty`
- `tools`, `tool_choice`
- `function_call` (deprecated in favor of `tools`)
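As a sketch, the sampling options above can be combined in a single request; the values here are illustrative assumptions, not recommendations:

```python
# Illustrative parameter values (assumptions, not recommendations).
request_kwargs = dict(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a haiku about the sea."}],
    max_tokens=60,         # cap the completion length
    temperature=0.7,       # higher = more random sampling
    top_p=0.9,             # nucleus sampling: keep the top 90% probability mass
    presence_penalty=0.5,  # discourage repeating topics already mentioned
)
# response = client.chat.completions.create(**request_kwargs)
```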
🔁 5. Streaming Responses
```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Stream this response"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")
```
✅ Useful for real-time apps, CLI tools, or dashboards.
🧠 6. Tool Use / Function Calling
```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get weather by city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"},
                    },
                    "required": ["city"]
                }
            }
        }
    ],
    tool_choice="auto"
)
```
Check for `tool_calls` in the response and call your function accordingly.
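One way to wire this up is a small dispatch table mapping tool names to local functions; `get_weather` here is a hypothetical stand-in for your own implementation:

```python
import json

# Hypothetical local implementation of the tool declared above.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch_tool_call(name: str, arguments_json: str) -> str:
    """Route a model tool call to the matching local function.

    The model returns function arguments as a JSON string, so parse first.
    """
    args = json.loads(arguments_json)
    return TOOLS[name](**args)

# In a real app, name and arguments come from:
#   call = response.choices[0].message.tool_calls[0]
#   dispatch_tool_call(call.function.name, call.function.arguments)
print(dispatch_tool_call("get_weather", '{"city": "Paris"}'))
```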
🖼️ 7. Image Generation (DALL·E)
```python
image = client.images.generate(
    prompt="A futuristic city skyline at dusk",
    model="dall-e-3",
    n=1,
    size="1024x1024"
)
print(image.data[0].url)
```
🧬 8. Embeddings
```python
response = client.embeddings.create(
    input="Deep learning is powerful.",
    model="text-embedding-3-small"
)
vector = response.data[0].embedding
```
Use for semantic search, vector databases, etc.
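For example, semantic search ranks documents by cosine similarity between embedding vectors; a minimal pure-Python version (the toy 3-d vectors stand in for real embeddings) might look like:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Rank candidate embeddings against a query embedding (toy 3-d vectors):
query = [1.0, 0.0, 0.0]
docs = {"a": [0.9, 0.1, 0.0], "b": [0.0, 1.0, 0.0]}
best = max(docs, key=lambda k: cosine_similarity(query, docs[k]))
print(best)  # → a
```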
🔊 9. Audio Transcription (Whisper)
```python
with open("audio.mp3", "rb") as file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=file
    )
print(transcript.text)
```
📁 10. File Upload + Fine-tuning
Upload a file:
```python
client.files.create(file=open("data.jsonl", "rb"), purpose="fine-tune")
```
Fine-tuning:
```python
# `model` is required when creating a fine-tuning job
client.fine_tuning.jobs.create(training_file="file-abc123", model="gpt-3.5-turbo")
```
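Fine-tuning jobs run asynchronously, so you typically poll until a terminal status. A generic polling sketch (the job id below is hypothetical):

```python
import time

def wait_for_job(get_status, poll_interval=10.0,
                 terminal=("succeeded", "failed", "cancelled")):
    """Poll get_status() until it returns a terminal state, then return it."""
    while True:
        status = get_status()
        if status in terminal:
            return status
        time.sleep(poll_interval)

# With the SDK (hypothetical job id):
# final = wait_for_job(
#     lambda: client.fine_tuning.jobs.retrieve("ftjob-abc123").status
# )
```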
📊 11. Billing / Usage (manual requests)
The SDK does not expose billing endpoints, but you can query them:
```python
import requests

headers = {"Authorization": f"Bearer {api_key}"}
r = requests.get(
    "https://api.openai.com/v1/dashboard/billing/usage?start_date=2024-04-01&end_date=2024-04-30",
    headers=headers,
)
print(r.json())
```
📎 12. Error Handling
```python
from openai import OpenAI, APIError, RateLimitError

try:
    ...  # your API call
except RateLimitError:
    print("Rate limited.")
except APIError as e:
    print(f"OpenAI API error: {e}")
```
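Beyond catching errors, rate limits are usually handled with exponential backoff. A minimal retry wrapper (not part of the SDK, just a sketch) could look like:

```python
import time

def with_retries(fn, max_attempts=3, base_delay=1.0, retry_on=(Exception,)):
    """Call fn(), retrying with exponential backoff on the given exceptions."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # out of attempts: re-raise the last error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Usage with the SDK (sketch):
# response = with_retries(
#     lambda: client.chat.completions.create(model="gpt-4o", messages=[...]),
#     retry_on=(RateLimitError,),
# )
```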
✅ 13. Models Available (as of 2024-2025)
| Model | Type | Use Case |
|---|---|---|
| gpt-4o | Chat | Fast, cheap, vision, voice |
| gpt-4 | Chat | Highest reasoning |
| gpt-3.5-turbo | Chat | Cost-effective |
| dall-e-3 | Image | Prompt-to-image |
| whisper-1 | Audio | Speech to text |
| text-embedding-3 | Embeddings | Vector search |
🧰 14. Advanced Tips
- Use `response.usage.total_tokens` to track billing
- Use `store=True` to save conversations if supported
- Use `logprobs` for token-level probability info
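The usage counters can feed a simple cost estimate. The per-million-token prices below are placeholder assumptions; check the live pricing page before relying on them:

```python
# Placeholder per-1M-token prices (assumptions; check the live pricing page).
PRICES_PER_MTOK = {"gpt-4o-mini": {"input": 0.15, "output": 0.60}}

def estimate_cost(model, prompt_tokens, completion_tokens):
    """Rough USD cost of one call, from response.usage token counts."""
    p = PRICES_PER_MTOK[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000

# e.g. estimate_cost("gpt-4o-mini", response.usage.prompt_tokens,
#                    response.usage.completion_tokens)
```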
🧪 15. Testing in Notebooks
```python
from IPython.display import Markdown, display

res = client.chat.completions.create(...)
display(Markdown(res.choices[0].message.content))
```
🔚 Summary
| Task | SDK Support |
|---|---|
| Chat Completion | ✅ `.chat.completions.create()` |
| Image Generation | ✅ `.images.generate()` |
| Audio Transcribe | ✅ `.audio.transcriptions.create()` |
| Embeddings | ✅ `.embeddings.create()` |
| File Upload | ✅ `.files.create()` |
| Billing Info | ❌ Use `requests` manually |