An open-source, multi-model AI chat platform with image generation and web search capabilities
Prompt Thing is a modern alternative to t3.chat that provides seamless access to multiple AI models, image generation, and advanced reasoning capabilities in a beautiful, intuitive interface.
- ✅ Chat with Various LLMs - Support for multiple language models and providers
- ✅ Authentication & Sync - User authentication with chat history synchronization
- ✅ Browser Friendly - Web-based interface accessible from any browser
- ✅ Easy to Try - Simple setup and deployment process
- ✅ Attachment Support - Upload and process files (images and PDFs)
- ✅ Image Generation Support - AI-powered image generation capabilities
- ✅ Syntax Highlighting - Beautiful code formatting and highlighting
- ✅ Resumable Streams - Continue generation after page refresh
- ✅ Chat Branching - Create alternative conversation paths
- ✅ Chat Sharing - Share conversations with others
- ✅ Web Search - Integrate real-time web search
- ✅ Bring Your Own Key - Use your own API keys
- Node.js 18+ or Bun
- A Convex account for database and real-time features
1. Clone the repository

   ```bash
   git clone https://github.com/alifarooq9/promptthing
   cd promptthing
   ```

2. Install dependencies

   ```bash
   npm install
   # or
   bun install
   ```

3. Set up environment variables

   Create a `.env.local` file in the root directory:

   ```bash
   # Convex (Required)
   NEXT_PUBLIC_CONVEX_URL=your_convex_url_here
   CONVEX_DEPLOY_KEY=your_convex_deploy_key_here

   # Redis (Optional - enables resumable streams)
   REDIS_URL=redis://localhost:6379

   # AI Providers (Add keys for models marked as "always" available)
   GOOGLE_GENERATIVE_AI_API_KEY=your_google_key_here

   # Image Generation
   RUNWARE_API_KEY=your_runware_key_here
   ```

4. Start the development server

   ```bash
   npm run dev
   # or
   bun dev
   ```

5. Start the Convex dev server

   ```bash
   npx convex dev
   ```

6. Open your browser

   Navigate to http://localhost:3000 to start chatting!
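Optionally, before starting the servers, you can sanity-check `.env.local` for the required Convex keys. This is a small shell sketch (not part of the project) that uses the variable names from the example above:

```shell
# Warn about any required Convex variables missing from .env.local.
# Run from the project root; optional keys are not checked.
ENV_FILE=".env.local"
for key in NEXT_PUBLIC_CONVEX_URL CONVEX_DEPLOY_KEY; do
  if ! grep -q "^${key}=" "$ENV_FILE" 2>/dev/null; then
    echo "Missing ${key} in ${ENV_FILE}" >&2
  fi
done
```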
| Provider | Models | Reasoning | BYOK Required |
|---|---|---|---|
| Google Gemini | Gemini 2.5 Pro Preview, 2.5 Flash Thinking, 2.5 Flash, 2.0 Flash Thinking, 2.0 Flash, 2.0 Flash Lite, 2.0 Pro | ✅ (Some) | Mixed |
| OpenAI | GPT-4.1 Nano, GPT-4.1, o3, o3 Mini | ✅ (o3 series) | ✅ |
| Anthropic | Claude 4 Sonnet, Claude 4 Thinking, Claude 4 Opus, Claude 3.7 Sonnet Thinking, Claude 3.7 Sonnet, Claude 3.5 Sonnet | ✅ (Thinking/Opus) | ✅ |
| OpenRouter | DeepSeek R1 | ✅ | ✅ |
| Provider | Models | Features |
|---|---|---|
| OpenAI | DALL-E 3, GPT Image Gen | High-quality, versatile |
| Runware | Runware 101@1, 100@1 | Fast generation, cost-effective |
🔑 BYOK (Bring Your Own Key) - Some models require you to provide your own API keys
🧠 Reasoning Models - Advanced models with enhanced problem-solving capabilities
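How the two availability modes might play out at runtime can be sketched as follows. This is an illustration, not the project's actual code: `resolveApiKey` and the `userKey` field are hypothetical, and only `availableWhen` and `apiKeyEnv` come from the model configuration fields described below.

```typescript
// Hypothetical sketch: resolve an API key depending on a model's
// availability mode. "always" models read a server-side env var;
// "byok" models require the user's own key.
interface KeySource {
  availableWhen: "always" | "byok";
  apiKeyEnv?: string; // server-side env var name, used when "always"
  userKey?: string;   // user-provided key, required when "byok"
}

function resolveApiKey(
  src: KeySource,
  env: Record<string, string | undefined>
): string {
  if (src.availableWhen === "always" && src.apiKeyEnv) {
    const key = env[src.apiKeyEnv];
    if (key) return key;
  }
  // Fall back to (or require) the user's own key.
  if (src.userKey) return src.userKey;
  throw new Error("No API key available: bring your own key for this model");
}
```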
1. Add the model configuration in `config/models.ts`:

   ```typescript
   const modelDefinitions: Record<string, ModelConfig> = {
     "your-model-id": {
       model: "actual-model-name",
       provider: "provider-name",
       modelName: "Display Name",
       canReason: true, // or false
       supportsWebSearch: true, // or false
       icon: "icon-name",
       availableWhen: "always" | "byok",
       canUseTools: true, // or false
       apiKeyEnv: "ENV_VAR_NAME", // only if availableWhen is "always"
     },
     // ...existing models
   };
   ```

2. For new providers, also update:

   - Add the provider to the `Provider` type in `config/models.ts`
   - Install the SDK: `npm install @ai-sdk/provider-name`
   - Add a case in the `createModel()` function in `lib/models.ts`
   - Update the `isProviderSupported()` array in `lib/models.ts`
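As a rough sketch of what the `createModel()` dispatch looks like conceptually (the factory table and return values here are stand-ins; the real function calls constructors from the installed `@ai-sdk` packages):

```typescript
// Hypothetical sketch of provider dispatch, in the spirit of createModel()
// in lib/models.ts. Stub factories stand in for @ai-sdk/* constructors.
type Provider = "google" | "openai" | "your-provider";

interface ModelRef {
  model: string;
  provider: Provider;
}

// Each entry would really wrap an @ai-sdk model constructor.
const factories: Record<Provider, (model: string) => string> = {
  google: (m) => `google:${m}`,
  openai: (m) => `openai:${m}`,
  "your-provider": (m) => `your-provider:${m}`,
};

function createModel(ref: ModelRef): string {
  const factory = factories[ref.provider];
  if (!factory) {
    throw new Error(`Unsupported provider: ${ref.provider}`);
  }
  return factory(ref.model);
}
```

Adding a provider then amounts to installing its SDK and adding one entry to the dispatch.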
1. Add the model configuration in `config/models.ts`:

   ```typescript
   const imageGenModelDefinitions: Record<string, ImageGenModelConfig> = {
     "model-id": {
       model: "actual-model-name",
       provider: "provider-name",
       modelName: "Display Name",
       imageToImage: true, // or false
       icon: "icon-name",
       availableWhen: "always" | "byok",
     },
     // ...existing models
   };
   ```

2. Add provider support in the `createImageGenModel()` function in `lib/models.ts`
- Frontend: Next.js 14 with App Router, Tailwind CSS, Radix UI
- Backend: Convex for database, real-time subscriptions, and serverless functions
- AI Integration: AI SDK for unified model access
- Styling: Tailwind CSS and ShadCN UI
- State Management: Zustand for client-side state
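As a dependency-free sketch of the Zustand pattern used for client-side state (the store shape and the `createStore` stand-in are hypothetical; real code would import `create` from `zustand`):

```typescript
// Minimal stand-in for Zustand's vanilla store, to show the pattern
// without the dependency: state plus actions live in one object,
// and actions update state through a set() function.
type SetState<T> = (partial: Partial<T>) => void;

function createStore<T>(init: (set: SetState<T>) => T) {
  let state: T;
  const set: SetState<T> = (partial) => {
    state = { ...state, ...partial };
  };
  state = init(set);
  return { getState: () => state };
}

// Hypothetical UI state: which model the user has selected.
interface ChatUIState {
  selectedModel: string;
  setSelectedModel: (id: string) => void;
}

const chatUIStore = createStore<ChatUIState>((set) => ({
  selectedModel: "gemini-2.0-flash",
  setSelectedModel: (id) => set({ selectedModel: id }),
}));

chatUIStore.getState().setSelectedModel("o3-mini");
```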
This project is licensed under the MIT License - see the LICENSE file for details.
Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
If you find this project helpful, please consider giving it a star on GitHub!