
Overview

Google’s Gemini models offer industry-leading context windows (up to 2 million tokens), exceptional multimodal capabilities, and strong performance across text, code, and reasoning tasks.

Available Models

Gemini 3 Pro Preview (Latest)

Gemini 3 Pro Preview

5 credits • Cutting-edge preview model
  • 2,000,000 token context window (largest available)
  • Exceptional reasoning and contextual understanding
  • Speed: Medium • Cost: High
  • Best for: Research, massive documents, advanced R&D

Gemini 2.5 Series (Stable)

Gemini 2.5 Pro

3 credits • Most advanced stable model
  • 2,000,000 token context window
  • Exceptional reasoning and accuracy
  • Speed: Slow • Cost: High
  • Best for: Complex reasoning, long documents

Gemini 2.5 Flash

2 credits • Fast and capable
  • 1,000,000 token context window
  • Excellent reasoning with speed
  • Speed: Fast • Cost: Medium
  • Best for: Production applications

Gemini 2.5 Flash Lite

1 credit • Ultra-fast and efficient
  • 1,000,000 token context window
  • Good reasoning at lowest cost
  • Speed: Very Fast • Cost: Very Low
  • Best for: High-volume, simple tasks

Gemini 2.0 Flash Thinking

3 credits • Reasoning model
  • 1,000,000 token context window
  • Explicit thinking process for analysis
  • Speed: Medium • Cost: Medium
  • Best for: Multi-step reasoning and problem-solving
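
The context windows above are counted in tokens rather than characters. If you want a quick sanity check on whether an input is likely to fit before sending it, a rough characters-to-tokens estimate is usually enough. The sketch below assumes roughly 4 characters per token for English text; the model identifiers and reply budget are illustrative, and exact counts come from the model's tokenizer.

  # Rough fit check against the context windows listed above.
  CONTEXT_WINDOWS = {
      "gemini-2.5-pro": 2_000_000,
      "gemini-2.5-flash": 1_000_000,
      "gemini-2.5-flash-lite": 1_000_000,
  }

  def likely_fits(text: str, model: str, reply_budget: int = 8_192) -> bool:
      estimated_tokens = len(text) // 4   # ~4 chars per token, rough heuristic
      return estimated_tokens + reply_budget <= CONTEXT_WINDOWS[model]

  print(likely_fits("some very long document ...", "gemini-2.5-flash"))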

Setup

Using BoostGPT-Hosted API Keys

1. Select Gemini Model

In your BoostGPT dashboard, select any Gemini model when creating or configuring your bot.

2. Choose Your Model

  • Gemini 2.5 Flash: Best for most production use cases
  • Gemini 2.5 Pro: When you need the massive 2M context window
  • Gemini 2.5 Flash Lite: High-volume, cost-sensitive workloads
  • Gemini 2.0 Flash Thinking: Complex reasoning tasks

Using Your Own Google AI API Key

1. Navigate to Integrations

Go to app.boostgpt.co and select Integrations.

2. Select Google AI

Find and click on the Google provider.

3. Add API Key

Get your API key from Google AI Studio, then enter it and select which agents will use it (a quick way to verify the key is sketched below).

4. Save Configuration

Click save to apply your custom API key.
Google AI offers a generous free tier for testing and development!
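
Before wiring the key into BoostGPT, it can help to sanity-check it directly against the Gemini API. Below is a minimal sketch using Google's official google-genai Python SDK; it talks to Google directly rather than to BoostGPT, and the model name is just an example.

  # pip install google-genai
  from google import genai

  # Key created in Google AI Studio (https://aistudio.google.com)
  client = genai.Client(api_key="YOUR_GOOGLE_AI_STUDIO_KEY")

  response = client.models.generate_content(
      model="gemini-2.5-flash",                  # any model your key can access
      contents="Reply with the single word: ok",
  )
  print(response.text)                           # a reply confirms the key works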

Model Selection Guide

Gemini 2.5 Flash

Best for:
  • Production chatbots and customer support
  • General-purpose applications
  • Fast responses with strong reasoning
  • 1M context for long conversations
Sweet spot: Best balance of speed, cost, and capability
Cost: 2 credits per request

Gemini 2.5 Pro

Best for:
  • Analyzing entire codebases
  • Processing very long documents (books, research papers)
  • Multi-turn conversations with full history
  • Maximum context retention (2M tokens)
Standout feature: Largest context window available
Cost: 3 credits per request

Gemini 2.5 Flash Lite

Best for:
  • High-volume applications (thousands of requests)
  • Simple queries and responses
  • Cost-sensitive production
  • Quick classifications
Cost: 1 credit per request (most affordable)

Gemini 2.0 Flash Thinking

Best for:
  • Mathematical problem solving
  • Code analysis and debugging
  • Multi-step logical reasoning
  • Scientific tasks
Note: Reasoning model with explicit thinking
Cost: 3 credits per request

Gemini 3 Pro Preview

Best for:
  • Research and experimentation
  • Testing next-generation capabilities
  • Maximum context + latest features
Note: Preview model, may have breaking changes
Cost: 5 credits per request
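
If your application routes requests across several Gemini models, the guidance above can be captured as a small lookup table plus a routing rule. The sketch below is illustrative only: the model identifiers, routing conditions, and the pick_model helper are not BoostGPT settings.

  # Credit costs and use cases summarized from the selection guide above.
  GEMINI_MODELS = {
      "gemini-2.5-flash":          {"credits": 2, "best_for": "production chat, general purpose"},
      "gemini-2.5-pro":            {"credits": 3, "best_for": "long documents, 2M-token context"},
      "gemini-2.5-flash-lite":     {"credits": 1, "best_for": "high-volume, simple queries"},
      "gemini-2.0-flash-thinking": {"credits": 3, "best_for": "multi-step reasoning, debugging"},
      "gemini-3-pro-preview":      {"credits": 5, "best_for": "research, next-generation features"},
  }

  def pick_model(needs_reasoning: bool, needs_2m_context: bool, cost_sensitive: bool) -> str:
      """Very rough routing rule derived from the guide above."""
      if needs_2m_context:
          return "gemini-2.5-pro"
      if needs_reasoning:
          return "gemini-2.0-flash-thinking"
      if cost_sensitive:
          return "gemini-2.5-flash-lite"
      return "gemini-2.5-flash"

  print(pick_model(needs_reasoning=False, needs_2m_context=False, cost_sensitive=True))
  # -> gemini-2.5-flash-lite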

Troubleshooting

Slow responses with Gemini 2.5 Pro

Expected: Pro prioritizes accuracy over speed
Solutions:
  • Use Flash for faster responses
  • Reduce input length when possible
  • Add loading indicators

Context window exceeded

Rare: the 1M-2M context windows handle most cases
Solutions:
  • Use Pro for maximum 2M context
  • Implement message pruning for extreme cases
  • Split very large documents

High token and credit usage

Cause: Long contexts consume many tokens
Solutions:
  • Use Flash Lite for simple tasks (1 credit)
  • Implement context pruning (a minimal sketch follows below)
  • Set max_reply_tokens limits
  • Monitor token usage in dashboard
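
One way to implement the context pruning suggested above is to keep only the most recent messages that fit a token budget. The sketch below uses the rough 4-characters-per-token heuristic and an illustrative message structure; compare its estimates against the token usage shown in the dashboard.

  def estimate_tokens(text: str) -> int:
      return max(1, len(text) // 4)         # rough heuristic, not a real tokenizer

  def prune_history(messages: list[dict], budget_tokens: int = 900_000) -> list[dict]:
      """Keep the newest messages whose combined estimate fits the budget."""
      kept, used = [], 0
      for msg in reversed(messages):        # walk newest to oldest
          cost = estimate_tokens(msg["content"])
          if used + cost > budget_tokens:
              break
          kept.append(msg)
          used += cost
      return list(reversed(kept))           # restore chronological order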

Next Steps