
Chat2API

Chat2API Logo


δΈ­ζ–‡ | Official Website | Documentation

Multi-platform AI Service Unified Management Tool

Chat2API enables zero-cost access to leading AI models by leveraging official web UIs. It supports providers such as DeepSeek, GLM, Kimi, MiniMax, Qwen, and Z.ai, and seamlessly integrates with tools like OpenClaw, Cline, and Roo-Code β€” making any OpenAI-compatible client work out of the box.

✨ Features

  • OpenAI Compatible API: Provides standard OpenAI-compatible API endpoints for seamless integration
  • Multi-Provider Support: Connect DeepSeek, GLM, Kimi, MiniMax, Qwen, Z.ai and more
  • πŸ†• Multi-turn Conversation: Full support for multi-turn dialogue with session management and context retention
  • πŸ†• Function Calling Support: Universal tool calling capability for all models via prompt engineering, compatible with Cherry Studio, Kilo Code, and other clients
  • πŸ†• Model Mapping: Flexible model name mapping with wildcard support and preferred provider/account selection
  • πŸ†• Custom Parameters: Support for custom HTTP headers to enable web search, thinking mode, and deep research features
  • Dashboard Monitoring: Real-time request traffic, token usage, and success rates
  • API Key Management: Generate and manage keys for your local proxy
  • Model Management: View and manage available models from all providers
  • Request Logs: Detailed request logging for debugging and analysis
  • Proxy Configuration: Flexible proxy settings and routing strategies
  • System Tray Integration: Quick access to status from menu bar
  • Multilingual: English and Simplified Chinese support
  • Modern UI: Clean, responsive interface with dark/light theme support
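The prompt-engineering approach behind the function-calling feature can be sketched roughly as follows. This is an illustrative sketch, not Chat2API's actual implementation: the helper names (`inject_tools_prompt`, `parse_tool_call`) and the JSON reply format are hypothetical. The idea is to describe the tools in a system message, then parse a structured tool call out of the model's plain-text reply:

```python
import json

def inject_tools_prompt(messages, tools):
    """Prepend a system message describing the tools, so models without
    native function calling can still emit structured tool calls."""
    tool_desc = "\n".join(
        f"- {t['function']['name']}: {t['function'].get('description', '')}"
        for t in tools
    )
    system = {
        "role": "system",
        "content": (
            "You may call one of these tools by replying with JSON of the form "
            '{"tool": "<name>", "arguments": {...}}:\n' + tool_desc
        ),
    }
    return [system] + messages

def parse_tool_call(reply):
    """Return (name, arguments) if the reply is a JSON tool call, else None."""
    try:
        data = json.loads(reply.strip())
        return data["tool"], data.get("arguments", {})
    except (json.JSONDecodeError, KeyError, TypeError, AttributeError):
        return None
```

Because the tool schema travels inside an ordinary prompt, this technique works uniformly across providers whose upstream APIs have no native tool support.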

πŸ€– Supported Providers

| Provider | Auth Type | OAuth | Models |
| --- | --- | --- | --- |
| DeepSeek | User Token | Yes | DeepSeek-V3.2 |
| GLM | Refresh Token | Yes | GLM-5 |
| Kimi | JWT Token | Yes | kimi-k2.5 |
| MiniMax | JWT Token | Yes | MiniMax-M2.5 |
| Qwen (CN) | SSO Ticket | Yes | Qwen3.5-Plus, Qwen3-Max, Qwen3-Flash, Qwen3-Coder, qwen-max-latest |
| Qwen AI (Global) | JWT Token | Yes | Qwen3.5-Plus, Qwen3-Max, Qwen3-VL-Plus, Qwen3-Coder-Plus, Qwen-Plus, Qwen-Turbo |
| Z.ai | JWT Token | Yes | GLM-5, GLM-4.7, GLM-4.6V, GLM-4.6 |
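The model-mapping feature lets clients keep requesting familiar model names while the proxy rewrites them to one of the upstream models above. A minimal sketch of wildcard mapping β€” the patterns and the `resolve_model` helper are illustrative, not Chat2API's actual configuration format:

```python
from fnmatch import fnmatch

# Hypothetical mapping: wildcard patterns on the left, upstream models on the right.
MODEL_MAP = {
    "gpt-4*": "GLM-5",
    "claude-*": "DeepSeek-V3.2",
    "qwen*": "Qwen3-Max",
}

def resolve_model(requested, mapping=MODEL_MAP):
    """Return the mapped upstream model for a requested name.

    Exact keys win; otherwise the first matching wildcard pattern applies;
    unmapped names pass through unchanged."""
    if requested in mapping:
        return mapping[requested]
    for pattern, target in mapping.items():
        if fnmatch(requested, pattern):
            return target
    return requested
```

With this shape, `resolve_model("gpt-4o")` routes to `GLM-5`, while a name no pattern matches is forwarded as-is.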

πŸ“₯ Installation

Download

Download the latest release from GitHub Releases:

| Platform | Download |
| --- | --- |
| macOS (Apple Silicon) | Chat2API-x.x.x-arm64.dmg |
| macOS (Intel) | Chat2API-x.x.x-x64.dmg |
| Windows | Chat2API-x.x.x-x64-setup.exe |
| Linux | Chat2API-x.x.x-x64.AppImage or .deb |

Build from Source

Requirements:

  • Node.js 18+
  • npm
  • Git
# Clone the repository
git clone https://github.com/xiaoY233/Chat2API.git
cd Chat2API

# Install dependencies
npm install

# Start development server
npx electron-vite dev

Build for Production

npm run build              # Build the application
npm run build:mac          # Build for macOS (dmg, zip)
npm run build:win          # Build for Windows (nsis)
npm run build:linux        # Build for Linux (AppImage, deb)
npm run build:all          # Build for all platforms

πŸ“– Usage

Step 1: Launch the App

After installation, launch Chat2API. You'll see the main dashboard.

Step 2: Add a Provider

  1. Navigate to Providers from the sidebar
  2. Click Add Provider button
  3. Select a built-in provider (e.g., DeepSeek)
  4. Enter your authentication credentials

For example, to get a DeepSeek token:

  1. Visit DeepSeek Chat
  2. Start any conversation
  3. Press F12 to open Developer Tools
  4. Go to Application > Local Storage
  5. Find userToken and copy its value

Step 3: Configure Proxy

  1. Navigate to Proxy Settings from the sidebar
  2. Set the port (default: 8080)
  3. Choose a load balancing strategy:
    • Round Robin: Distributes requests evenly across accounts
    • Fill First: Uses one account until its limit is reached, then moves to the next
    • Failover: Automatically switches on failure
  4. Click Start Proxy
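The three strategies can be sketched as follows. The `AccountPool` class and its method names are hypothetical β€” the real proxy's internals may differ β€” but the selection logic each strategy implies is shown:

```python
import itertools

class AccountPool:
    """Illustrative sketch of the three routing strategies."""

    def __init__(self, accounts):
        self.accounts = accounts
        self._rr = itertools.cycle(accounts)  # round-robin cursor

    def round_robin(self):
        # Each call hands out the next account in rotation.
        return next(self._rr)

    def fill_first(self, is_exhausted):
        # Stick with the first account whose quota isn't exhausted yet.
        for acct in self.accounts:
            if not is_exhausted(acct):
                return acct
        raise RuntimeError("all accounts exhausted")

    def failover(self, try_request):
        # Try accounts in order until one request succeeds.
        for acct in self.accounts:
            try:
                return try_request(acct)
            except Exception:
                continue
        raise RuntimeError("all accounts failed")
```

Round Robin spreads load evenly, Fill First concentrates usage to drain one quota at a time, and Failover trades throughput for resilience.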

Step 4: Test the API

Using Python (OpenAI SDK):

from openai import OpenAI

client = OpenAI(
    api_key="your-api-key",
    base_url="http://localhost:8080/v1"
)

response = client.chat.completions.create(
    model="DeepSeek-V3.2",
    messages=[
        {"role": "user", "content": "Hello, who are you?"}
    ]
)

print(response.choices[0].message.content)

Step 5: Manage API Keys (Optional)

For security, you can enable API Key authentication:

  1. Go to API Keys page
  2. Click New API Key
  3. Enter a name and description
  4. Copy the generated key
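Once enabled, clients send the generated key as a standard Bearer token, exactly as the OpenAI SDK does with `api_key`. A minimal sketch building such a request by hand against the usual OpenAI-compatible `/v1/models` listing endpoint (the endpoint path is assumed from the OpenAI convention, not confirmed in this README):

```python
import urllib.request

def models_request(api_key, base_url="http://localhost:8080"):
    """Build an authenticated request for the /v1/models endpoint."""
    req = urllib.request.Request(base_url + "/v1/models")
    # API keys travel in the standard Authorization header.
    req.add_header("Authorization", f"Bearer {api_key}")
    return req

# To actually send it: urllib.request.urlopen(models_request("your-api-key"))
```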

πŸ“Έ Screenshots

Dashboard Β· Providers Β· Proxy Settings Β· API Keys Β· Models Β· Session

βš™οΈ Settings

  • Port: Change the proxy listening port (default: 8080)
  • Routing Strategy: Round Robin, Fill First, or Failover
  • Auto-start: Launch proxy automatically on app startup
  • Theme: Light, Dark, or System preference
  • Language: English or Simplified Chinese

πŸ—οΈ Architecture

Chat2API/
β”œβ”€β”€ src/
β”‚   β”œβ”€β”€ main/                    # Electron main process
β”‚   β”‚   β”œβ”€β”€ index.ts            # App entry point
β”‚   β”‚   β”œβ”€β”€ tray.ts             # System tray integration
β”‚   β”‚   β”œβ”€β”€ proxy/              # Proxy server management
β”‚   β”‚   β”œβ”€β”€ ipc/                # IPC handlers
β”‚   β”‚   └── utils/              # Utilities
β”‚   β”œβ”€β”€ preload/                # Context bridge
β”‚   └── renderer/               # React frontend
β”‚       β”œβ”€β”€ components/         # UI components
β”‚       β”œβ”€β”€ pages/              # Page components
β”‚       β”œβ”€β”€ stores/             # Zustand state
β”‚       └── hooks/              # Custom hooks
β”œβ”€β”€ build/                      # Build resources
└── scripts/                    # Build scripts

πŸ”§ Tech Stack

| Component | Technology |
| --- | --- |
| Framework | Electron 33+ |
| Frontend | React 18 + TypeScript |
| Styling | Tailwind CSS |
| State | Zustand |
| Build | Vite + electron-vite |
| Packaging | electron-builder |
| Server | Koa |

πŸ“ Data Storage

Application data is stored in the ~/.chat2api/ directory:

  • config.json - Application configuration
  • providers.json - Provider settings
  • accounts.json - Account credentials (encrypted)
  • logs/ - Request logs
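If you want to inspect the configuration programmatically, a defensive reader might look like this. The sketch assumes only the `~/.chat2api/config.json` path stated above; the file's schema is not documented here, so it is returned as a plain dict:

```python
import json
from pathlib import Path

def load_chat2api_config():
    """Read Chat2API's config.json if present; return None otherwise."""
    cfg = Path.home() / ".chat2api" / "config.json"
    if not cfg.exists():
        return None
    return json.loads(cfg.read_text(encoding="utf-8"))
```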

❓ FAQ

macOS: "App is damaged and can't be opened"

Due to macOS Gatekeeper, apps downloaded from outside the App Store may trigger this warning. Run the following command to clear the quarantine flag:

sudo xattr -rd com.apple.quarantine "/Applications/Chat2API.app"

How to update?

Check for updates in the About page, or download the latest version from GitHub Releases.

🀝 Contributing

  1. Fork the project
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit changes (git commit -m 'Add amazing feature')
  4. Push to branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

πŸ“„ License

GNU General Public License v3.0. See LICENSE for details.

This means:

  • βœ… Free to use, modify, and distribute
  • βœ… Derivative works must be open-sourced under the same license
  • βœ… Must preserve original copyright notices

πŸ™ Acknowledgments
