
Overview

OpenAI provides the GPT family of models, including the latest GPT-5 series and specialized reasoning models. These models offer industry-leading performance across a wide range of tasks, from simple chat to complex reasoning.

Available Models

GPT-5 Series (Latest)

GPT-5

5 credits · Flagship model with exceptional reasoning and creativity
  • 200K context window
  • Best for: Complex tasks, creative writing, advanced reasoning
  • Speed: Medium · Cost: High

GPT-5 Mini

3 credits · Balanced performance and speed
  • 200K context window
  • Best for: Everyday tasks with strong reasoning
  • Speed: Fast · Cost: Medium

GPT-5 Nano

2 credits · Ultra-lightweight and fast
  • 128K context window
  • Best for: Simple tasks, high-volume applications
  • Speed: Very Fast · Cost: Low

GPT-5.1

4 credits · Enhanced reasoning model
  • 200K context window
  • Reasoning model with extended thinking
  • Speed: Medium · Cost: High

O-Series Reasoning Models

Reasoning models use explicit step-by-step thinking for complex problem-solving. They require higher minimum completion tokens (2000-3000) and are best for analytical tasks.
| Model   | Credits | Context | Min Tokens | Best For                               |
|---------|---------|---------|------------|----------------------------------------|
| O1      | 6       | 200K    | 3000       | Deep reasoning, complex problems       |
| O1 Mini | 3       | 128K    | 2000       | Faster reasoning with strong analytics |
| O3 Mini | 3       | 128K    | 2000       | Next-gen compact reasoning             |
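If you call these models directly with your own OpenAI API key (see Setup below), the higher completion-token floor shows up as an explicit budget on the request. The following is a minimal sketch using the official openai Python SDK; the model name, prompt, and 3000-token value are illustrative, not required settings.

```python
# Minimal sketch of a reasoning-model call with an explicit completion budget.
# Assumes the official openai Python SDK and OPENAI_API_KEY in the environment;
# the model, prompt, and 3000-token budget below are illustrative values.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": "Work through this problem step by step: ..."}],
    # Reasoning models spend tokens on internal thinking before answering,
    # so leave generous room for completion tokens (2000-3000 minimum).
    max_completion_tokens=3000,
)
print(response.choices[0].message.content)
```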

GPT-4 Series

| Model        | Credits | Context | Description                                          |
|--------------|---------|---------|------------------------------------------------------|
| GPT-4o       | 5       | 128K    | Specialized for specific tasks, excellent performance |
| GPT-4o Mini  | 1       | 128K    | Balanced for everyday tasks, efficient               |
| GPT-4.1 Nano | 0.5     | 64K     | Lightweight for speed and low cost                   |
| GPT-4.1 Mini | 1       | 128K    | Efficient with solid everyday performance            |

Setup

Using BoostGPT-Hosted API Keys

1. Select OpenAI Models

In your BoostGPT dashboard, simply select any OpenAI model when creating or configuring your bot. No API key needed!

2. Choose Your Model

Select based on your needs:
  • GPT-5: Complex reasoning, creative tasks
  • GPT-5 Mini: Balanced everyday use
  • O1/O3 Mini: Deep analytical work
  • GPT-4o Mini: Cost-effective simple tasks

Using Your Own OpenAI API Key

Want to use your own OpenAI API key? See the Bring Your Own Keys guide.
1. Navigate to Integrations

Go to app.boostgpt.co and select Integrations.

2. Select OpenAI

Find and click on the OpenAI provider.

3. Add API Key

Enter your OpenAI API key and select which agents will use this key. (A quick way to sanity-check the key first is sketched after these steps.)

4. Save Configuration

Click save to apply your custom API key.
Using your own API key can reduce costs for high-volume applications and gives you direct control over rate limits.
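Before adding a key, you can confirm it is valid and not revoked. Below is a small sketch using the official openai Python SDK; the placeholder key string is illustrative.

```python
# Quick sanity check for an OpenAI API key before adding it in BoostGPT.
# Assumes the official openai Python SDK; the placeholder key is illustrative.
from openai import OpenAI, AuthenticationError

client = OpenAI(api_key="sk-...")  # the key you plan to add

try:
    client.models.list()  # cheap call that fails fast on an invalid key
    print("Key looks valid.")
except AuthenticationError:
    print("Invalid or revoked API key.")
```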

Model Selection Guide

When to Use Each Model

GPT-5

Best for:
  • Creative writing and content generation
  • Complex reasoning and analysis
  • Multi-step problem solving
  • High-stakes customer interactions
Avoid for:
  • Simple FAQ responses
  • High-volume/cost-sensitive applications
Cost: 5 credits per request

GPT-5 Mini

Best for:
  • Customer support chatbots
  • General conversation
  • Content moderation
  • Most production use cases
Sweet spot: Balance of performance and cost
Cost: 3 credits per request

GPT-5 Nano

Best for:
  • High-volume applications
  • Simple queries and responses
  • Quick classifications
  • Prototyping and testing
Cost: 2 credits per request (most affordable GPT-5)

O-Series (O1, O1 Mini, O3 Mini)

Best for:
  • Mathematical problem solving
  • Code debugging and optimization
  • Scientific analysis
  • Strategic planning
Note: Higher minimum tokens (2000+), slower responses
Cost: 3-6 credits per request

GPT-4o Mini

Best for:
  • Development and testing
  • Simple chatbots
  • Low-budget projects
  • Learning and experimentation
Cost: 1 credit per request (cheapest option)

Troubleshooting

Rate limit errors

Cause: Too many requests to the OpenAI API
Solutions:
  • Implement request throttling
  • Use exponential backoff retries (a small sketch follows)
  • Upgrade your OpenAI API tier
  • Switch to GPT-4o Mini for high-volume use
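A minimal retry sketch, assuming the official openai Python SDK; the retry count, delay schedule, and model name are illustrative choices.

```python
# Retry with exponential backoff on rate-limit errors.
# Assumes the official openai Python SDK; retries, delays, and model are illustrative.
import time
from openai import OpenAI, RateLimitError

client = OpenAI()

def chat_with_backoff(messages, model="gpt-4o-mini", max_retries=5):
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except RateLimitError:
            time.sleep(2 ** attempt)  # wait 1s, 2s, 4s, ... before retrying
    raise RuntimeError("Still rate limited after all retries")
```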
Context length exceeded

Cause: Input + completion tokens exceed the model's context window
Solutions:
  • Truncate conversation history
  • Use models with larger context (GPT-5: 200K)
  • Implement a sliding window for message history (sketched below)
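A rough sliding-window sketch; the 20-message cap is an arbitrary illustrative threshold, and a production version would count tokens rather than messages.

```python
# Keep the system prompt plus only the most recent messages so the request
# stays under the model's context window. The 20-message cap is illustrative;
# counting tokens with a tokenizer is more precise than counting messages.
def trim_history(messages, max_messages=20):
    system = [m for m in messages if m["role"] == "system"]
    recent = [m for m in messages if m["role"] != "system"][-max_messages:]
    return system + recent
```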
Slow responses from reasoning models

Expected behavior: Reasoning models take longer
Solutions:
  • Set user expectations (show a "thinking…" indicator, sketched below)
  • Use O1 Mini instead of O1 for faster responses
  • Reserve reasoning models for truly complex tasks
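One rough way to show a "thinking…" indicator is to run the blocking API call in a background thread and print progress until it returns. This is an illustrative pattern, not a BoostGPT feature; the helper and the usage names are hypothetical.

```python
# Run a slow (blocking) model call in a background thread and print a simple
# "thinking..." indicator until it completes. Purely an illustrative pattern.
import threading
import time

def with_thinking_indicator(call):
    result = {}
    worker = threading.Thread(target=lambda: result.update(value=call()))
    worker.start()
    while worker.is_alive():
        print("thinking...", flush=True)
        time.sleep(2)
    worker.join()
    return result["value"]

# Hypothetical usage:
# answer = with_thinking_indicator(lambda: ask_reasoning_model(prompt))
```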
High costs

Solutions:
  • Use GPT-5 Mini or GPT-4o Mini for most tasks
  • Implement dynamic model selection (sketched below)
  • Cache common responses
  • Set max_tokens to limit completion length
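A rough cost-control sketch combining two of the ideas above, assuming direct use of the official openai Python SDK; the 200-character threshold, model names, and 300-token cap are illustrative assumptions.

```python
# Route short prompts to a cheaper model and cap completion length.
# Assumes the official openai Python SDK; the threshold, models, and
# token cap below are illustrative, not recommended values.
from openai import OpenAI

client = OpenAI()

def answer(prompt: str) -> str:
    model = "gpt-4o-mini" if len(prompt) < 200 else "gpt-4o"  # crude complexity heuristic
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=300,  # cap completion length to limit per-request cost
    )
    return response.choices[0].message.content
```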

Next Steps