ChatGPT API

The ChatGPT API, provided by OpenAI, allows developers to integrate ChatGPT’s conversational capabilities into their applications. It is a RESTful API that enables natural language understanding and generation at scale.
Author

Benedict Thekkel

Key Features

  • Natural Language Understanding (NLU): Understand user intent and context.
  • Dynamic Response Generation: Generate creative, informative, or contextually relevant replies.
  • Scalable: Suitable for small apps to enterprise-level integrations.
  • Customizable: Tailor interactions using system-level instructions.
  • Multi-Turn Conversations: Maintain conversational context over multiple requests.

Use Cases

  • Customer Support: Automate interactions with customers in chatbots or help desks.
  • Virtual Assistants: Build conversational agents for personal or business use.
  • Content Creation: Automate writing tasks like blogs, social media posts, and more.
  • Educational Tools: Offer tutoring, language translation, or explanation services.
  • Code Assistance: Provide coding advice, debugging help, or generate code snippets.

Getting Started

Prerequisites

  • Sign up for an OpenAI API key.
  • Familiarity with REST APIs and HTTP requests.

Install OpenAI Python Library (Optional)

For Python integration:

pip install openai

Making a Request

A typical API request sends a conversation history and receives a model-generated response.

HTTP Endpoint

POST https://api.openai.com/v1/chat/completions

Request Headers

Authorization: Bearer YOUR_API_KEY
Content-Type: application/json

Request Body

{
  "model": "gpt-4",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Tell me about the solar system."}
  ],
  "max_tokens": 150,
  "temperature": 0.7
}

Response

{
  "id": "chatcmpl-7aA9aB...",
  "object": "chat.completion",
  "created": 1689219200,
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The solar system consists of the Sun and all the objects that orbit it..."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 20,
    "completion_tokens": 30,
    "total_tokens": 50
  }
}
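
Putting the endpoint, headers, and body above together, here is a minimal sketch of the same request made from Python with the third-party requests package (an assumed dependency; YOUR_API_KEY is a placeholder):

import requests

API_KEY = "YOUR_API_KEY"  # placeholder: replace with your real OpenAI API key

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Tell me about the solar system."},
        ],
        "max_tokens": 150,
        "temperature": 0.7,
    },
    timeout=30,
)

response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])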

Key Parameters

| Parameter | Description | Default |
|---|---|---|
| model | Specifies the model (e.g., gpt-4, gpt-3.5-turbo). | Required |
| messages | List of conversation messages with roles (system, user, assistant). | Required |
| temperature | Controls response randomness (0 = deterministic, 1 = more random). | 1 |
| max_tokens | Limits the length of the response. | Model-specific (remaining context window if unset) |
| top_p | Controls diversity via nucleus sampling; lower values focus responses on high-probability words. | 1 |
| frequency_penalty | Penalizes repeated phrases (higher values reduce repetition). | 0 |
| presence_penalty | Encourages discussion of new topics (higher values make responses more varied). | 0 |
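
These parameters can be combined in a single call. A sketch using the openai Python package (v1 interface); the values shown are illustrative, not recommendations:

from openai import OpenAI

client = OpenAI(api_key="your-api-key")  # placeholder key

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Suggest three blog post titles about space."}],
    temperature=0.2,        # low randomness -> more deterministic output
    max_tokens=100,         # cap the reply length
    top_p=0.9,              # nucleus sampling
    frequency_penalty=0.5,  # discourage repeated phrases
    presence_penalty=0.3,   # nudge toward new topics
)
print(response.choices[0].message.content)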

Multi-Turn Conversations

Maintain context by appending past interactions to the messages array:

[
  {"role": "system", "content": "You are an AI tutor."},
  {"role": "user", "content": "What is the capital of France?"},
  {"role": "assistant", "content": "The capital of France is Paris."},
  {"role": "user", "content": "What about Germany?"}
]
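
A minimal sketch of maintaining that context in Python with the openai package (v1 interface); the ask() helper name and placeholder API key are illustrative:

from openai import OpenAI

client = OpenAI(api_key="your-api-key")
messages = [{"role": "system", "content": "You are an AI tutor."}]

def ask(question):
    # Append the user turn, call the API, then append the assistant turn
    # so the next request still sees the full conversation history.
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer

print(ask("What is the capital of France?"))
print(ask("What about Germany?"))  # "Germany" is resolved from the earlier context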

Token Usage and Pricing

  • Tokens: Represent text pieces (e.g., “ChatGPT is great!” = 6 tokens); a counting sketch follows this list.
  • Pricing: Billed per token based on the model:
    • gpt-4: Higher cost, more powerful.
    • gpt-3.5-turbo: Cheaper, suitable for many use cases.
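
To estimate token usage before sending a request, counts can be computed locally with OpenAI's tiktoken package (an extra dependency, assumed installed via pip install tiktoken):

import tiktoken

# Pick the tokenizer that matches the target model.
encoding = tiktoken.encoding_for_model("gpt-4")

text = "ChatGPT is great!"
tokens = encoding.encode(text)
print(len(tokens))  # number of tokens this text will consume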

Rate Limits

  • Rate limits depend on your OpenAI subscription tier.
  • Monitor usage via the OpenAI dashboard.

Error Handling

Common API errors:

  • 401 Unauthorized: Invalid or missing API key.
  • 429 Too Many Requests: Exceeded rate limit or quota.
  • 500 Server Error: Retry after some time.
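
A minimal sketch of handling these errors with the openai Python package (v1 interface); the retry count, backoff times, and chat_with_retry name are arbitrary choices, not library defaults:

import time
import openai
from openai import OpenAI

client = OpenAI(api_key="your-api-key")

def chat_with_retry(messages, retries=3):
    for attempt in range(retries):
        try:
            return client.chat.completions.create(model="gpt-4", messages=messages)
        except openai.AuthenticationError:
            # 401: the API key is invalid or missing - retrying will not help.
            raise
        except openai.RateLimitError:
            # 429: back off exponentially before retrying.
            time.sleep(2 ** attempt)
        except openai.APIStatusError as err:
            # 5xx and other server-side errors: wait briefly, then retry.
            if err.status_code >= 500:
                time.sleep(2 ** attempt)
            else:
                raise
    raise RuntimeError("Request failed after retries")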


Advanced Features

System Messages

Define assistant behavior:

{"role": "system", "content": "You are a friendly travel advisor."}

Fine-Tuning

Train custom models with domain-specific data.
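
A sketch of the two basic steps (upload training data, then start a job) with the openai Python package (v1 interface); the file name and base model are placeholders:

from openai import OpenAI

client = OpenAI(api_key="your-api-key")

# 1. Upload a JSONL file of chat-formatted training examples.
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job on a base model that supports fine-tuning.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)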

Streaming Responses

Stream results for real-time applications:

"stream": true

Function Calling (Experimental)

Define and call functions for structured outputs:

"functions": [
  {
    "name": "get_weather",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {"type": "string"},
        "unit": {"type": "string", "enum": ["C", "F"]}
      }
    }
  }
]
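
A sketch of passing that definition through the openai Python package (v1 interface) and reading the structured call back; the get_weather schema above is reused, and whether the model actually calls the function depends on the prompt:

import json
from openai import OpenAI

client = OpenAI(api_key="your-api-key")

functions = [
    {
        "name": "get_weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"},
                "unit": {"type": "string", "enum": ["C", "F"]},
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What's the weather in Sydney in Celsius?"}],
    functions=functions,
    function_call="auto",
)

message = response.choices[0].message
if message.function_call:
    # The model returns the function name and a JSON string of arguments;
    # the application is responsible for actually executing the function.
    args = json.loads(message.function_call.arguments)
    print(message.function_call.name, args)
else:
    print(message.content)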

Example Code Snippets

Python

from openai import OpenAI

# Requires the openai Python package, v1+ interface (pip install openai).
client = OpenAI(api_key="your-api-key")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather like in Sydney?"}
    ]
)

print(response.choices[0].message.content)

JavaScript (Node.js)

const OpenAI = require("openai");

// Requires the openai Node.js package, v4+ interface (npm install openai).
const openai = new OpenAI({ apiKey: "your-api-key" });

async function ask() {
  const response = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "What's the weather like in Sydney?" }
    ]
  });

  console.log(response.choices[0].message.content);
}

ask();
