deskelleher/tav-backend


TAV Backend API

A Node.js backend API that chains the Google Gemini and OpenAI ChatGPT APIs: it receives a prompt, sends it to Gemini, prepends a configurable prompt to Gemini's response, and sends the combined text to ChatGPT for final processing.

Features

  • Bearer Token Authentication: Protected endpoints with secure token validation
  • Input Validation: Zod schema validation for all requests
  • Rate Limiting: Built-in rate limiting to prevent abuse
  • Security: Helmet.js for security headers and CORS support
  • Error Handling: Comprehensive error handling and logging
  • Environment Configuration: Flexible configuration via environment variables

API Flow

  1. Receive POST request with prompt (protected by bearer token)
  2. Send prompt to Google Gemini API
  3. Take Gemini's response and prepend a configurable prompt
  4. Send combined prompt/response to OpenAI ChatGPT API
  5. Return ChatGPT's final response to the client
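The combination step (steps 3-4) amounts to simple string concatenation. A minimal sketch, assuming a helper named buildCombinedPrompt (the name is illustrative, not part of the actual codebase):

```javascript
// Sketch of steps 3-4: combine the configurable prepend prompt with
// Gemini's response before forwarding the result to ChatGPT.
function buildCombinedPrompt(prependPrompt, geminiResponse) {
  // A blank line separates the instruction from the response so
  // ChatGPT sees them as distinct sections.
  return `${prependPrompt}\n\n${geminiResponse}`;
}
```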

Setup

Prerequisites

  • Node.js (v16 or higher) OR Docker
  • npm or yarn (if not using Docker)
  • Google Gemini API key
  • OpenAI API key

Installation

  1. Clone the repository:
git clone <repository-url>
cd tav-backend
  2. Install dependencies:
npm install
  3. Copy the environment example file:
cp env.example .env
  4. Configure your environment variables in .env:
# Server Configuration
PORT=3000
NODE_ENV=development

# Authentication
BEARER_TOKEN=your-secure-bearer-token-here

# Google Gemini API
GEMINI_API_KEY=your-gemini-api-key-here
GEMINI_MODEL=gemini-pro

# OpenAI ChatGPT API
OPENAI_API_KEY=your-openai-api-key-here
OPENAI_MODEL=gpt-3.5-turbo

# Configuration
PREPEND_PROMPT=Please analyze and improve the following response:

# API Configuration
OPENAI_MAX_TOKENS=2000
OPENAI_TEMPERATURE=0.7

Running the Application

Option 1: Using Node.js directly

Development mode:

npm run dev

Production mode:

npm start

Option 2: Using Docker (Recommended)

Production with Docker:

docker-compose up --build

Development with Docker (hot reloading):

docker-compose -f docker-compose.dev.yml up --build

The server will start on port 3000 (or the port specified in your .env file).

For detailed Docker instructions, see DOCKER.md.

For VPS deployment instructions, see VPS_DEPLOYMENT.md.

API Documentation

Health Check

GET /health

Returns server status.

Response:

{
  "status": "OK",
  "timestamp": "2024-01-01T12:00:00.000Z"
}
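The handler behind this endpoint can be as small as returning a status string and an ISO timestamp. A sketch with an illustrative function name (the actual route code may differ):

```javascript
// Sketch of a health-check payload builder: produces the same shape
// as the documented response. Accepts a Date for testability.
function healthPayload(now = new Date()) {
  return {
    status: 'OK',
    timestamp: now.toISOString(),
  };
}
```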

Process Prompt

POST /api/prompt

Process a prompt through Gemini and ChatGPT.

Headers:

Authorization: Bearer your-bearer-token
Content-Type: application/json

Request Body:

{
  "prompt": "Your prompt here"
}
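The body is validated with a Zod schema (see src/schemas/prompt.js). A plain-JavaScript approximation of the core check, for illustration only; the real schema may enforce additional constraints:

```javascript
// Plain-JS approximation of the validation: the body must be an object
// with a non-empty string "prompt". Returns an error message or null.
function validatePromptBody(body) {
  if (typeof body !== 'object' || body === null) {
    return 'Body must be a JSON object';
  }
  if (typeof body.prompt !== 'string' || body.prompt.trim() === '') {
    return 'prompt must be a non-empty string';
  }
  return null; // valid
}
```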

Response:

{
  "response": "ChatGPT's final response",
  "geminiResponse": "Gemini's original response",
  "processingTime": 1500
}

Error Responses:

  • 400 Bad Request: Invalid request body or validation errors
  • 401 Unauthorized: Missing or invalid bearer token
  • 429 Too Many Requests: Rate limit exceeded
  • 500 Internal Server Error: Server or API errors
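The mapping from error type to status code can be sketched as a small lookup. The error "kinds" below are illustrative labels, not actual error classes from the codebase:

```javascript
// Sketch of error-to-status mapping for the documented responses.
// Anything unrecognized falls through to 500 (server/API error).
function statusForError(kind) {
  const map = {
    validation: 400,
    auth: 401,
    rate_limit: 429,
  };
  return map[kind] ?? 500;
}
```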

Environment Variables

| Variable | Description | Required | Default |
|----------|-------------|----------|---------|
| PORT | Server port | No | 3000 |
| NODE_ENV | Environment (development/production) | No | development |
| BEARER_TOKEN | Authentication token | Yes | - |
| GEMINI_API_KEY | Google Gemini API key | Yes | - |
| GEMINI_MODEL | Gemini model to use | No | gemini-pro |
| OPENAI_API_KEY | OpenAI API key | Yes | - |
| OPENAI_MODEL | OpenAI model to use | No | gpt-3.5-turbo |
| OPENAI_MAX_TOKENS | Maximum tokens for OpenAI responses | No | 2000 |
| OPENAI_TEMPERATURE | Temperature for OpenAI responses (0-2) | No | 0.7 |
| PREPEND_PROMPT | Text to prepend to Gemini's response | No | "Please analyze and improve the following response:" |
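Applying these defaults at startup might look like the sketch below. This is illustrative; the actual logic lives in src/utils/config.js and may validate more strictly:

```javascript
// Sketch of config loading with the documented defaults: required
// variables throw when missing, optional ones fall back to defaults.
function loadConfig(env) {
  const required = ['BEARER_TOKEN', 'GEMINI_API_KEY', 'OPENAI_API_KEY'];
  for (const name of required) {
    if (!env[name]) throw new Error(`Missing required env var: ${name}`);
  }
  return {
    port: Number(env.PORT ?? 3000),
    nodeEnv: env.NODE_ENV ?? 'development',
    bearerToken: env.BEARER_TOKEN,
    geminiApiKey: env.GEMINI_API_KEY,
    geminiModel: env.GEMINI_MODEL ?? 'gemini-pro',
    openaiApiKey: env.OPENAI_API_KEY,
    openaiModel: env.OPENAI_MODEL ?? 'gpt-3.5-turbo',
    maxTokens: Number(env.OPENAI_MAX_TOKENS ?? 2000),
    temperature: Number(env.OPENAI_TEMPERATURE ?? 0.7),
  };
}
```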

Security Features

  • Bearer Token Authentication: All API endpoints are protected
  • Rate Limiting: 100 requests per 15 minutes per IP
  • Security Headers: Helmet.js for security headers
  • CORS: Configurable CORS support
  • Input Validation: Zod schema validation
  • Error Sanitization: Errors don't expose sensitive information in production
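The bearer-token check can be sketched as a pure function over the Authorization header. This is an illustration only; the real middleware is in src/middleware/auth.js:

```javascript
// Sketch of bearer-token validation: the header must be exactly
// "Bearer <token>" with the configured token. Returns a boolean.
function isAuthorized(authHeader, expectedToken) {
  if (typeof authHeader !== 'string') return false;
  const [scheme, token] = authHeader.split(' ');
  return scheme === 'Bearer' && typeof token === 'string' && token === expectedToken;
}
```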

Error Handling

The API includes comprehensive error handling:

  • Input validation errors (400)
  • Authentication errors (401)
  • Rate limiting (429)
  • API service errors (500)
  • Generic error responses with appropriate status codes

Development

Project Structure

src/
├── index.js          # Main application entry point
├── middleware/
│   └── auth.js       # Authentication middleware
├── routes/
│   └── prompt.js     # Prompt processing routes
├── schemas/
│   └── prompt.js     # Zod validation schemas
├── services/
│   ├── gemini.js     # Gemini API service
│   └── chatgpt.js    # ChatGPT API service
└── utils/
    └── config.js     # Configuration validation

# Docker files
Dockerfile            # Production Docker image
Dockerfile.dev        # Development Docker image
docker-compose.yml    # Production Docker Compose
docker-compose.dev.yml # Development Docker Compose
.dockerignore         # Docker ignore rules

Testing

Run tests:

npm test

Logs

The application logs important events:

  • API requests and responses
  • Error details
  • Processing times
  • Authentication attempts

Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Make your changes
  4. Add tests if applicable
  5. Submit a pull request

License

MIT License
