AI-powered tool for generating professional bios, project summaries, and learning reflections using Google's Gemini AI.
- Interactive Chat Interface: Natural conversation with AI to build your portfolio content
- Multiple Content Types: Generate bios, project descriptions, skills summaries, and more
- Context-Aware: Maintains conversation history for coherent, personalized responses
- Modern UI: Clean, responsive interface built with Streamlit
- Python 3.10 or higher
- Google AI API Key (get one from Google AI Studio)
- Git (optional)
```bash
git clone https://github.com/dev-api-org/ai-portfolio-assistant.git
cd ai-portfolio-assistant
```

Windows (PowerShell):

```powershell
py -m venv .venv
.\.venv\Scripts\Activate
```

macOS/Linux:

```bash
python3 -m venv .venv
source .venv/bin/activate
```

```bash
# Upgrade pip (recommended)
python -m pip install --upgrade pip

# Install all dependencies
pip install -r requirements.txt
```

```bash
# Copy the example file
cp .env.example .env

# Edit .env and add your Google API key
# GOOGLE_API_KEY=your_actual_api_key_here
```

```bash
streamlit run frontend/streamlit_chat_canvas.py
```

The app will open in your browser at http://localhost:8501.
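Before launching the app, it can help to confirm that your `.env` file actually contains a key. A minimal stdlib-only check like the following works; this is a sketch (the app itself may load the file differently, e.g. via a dotenv library):

```python
from pathlib import Path

def load_dotenv_minimal(path: str = ".env") -> dict:
    """Parse simple KEY=VALUE lines from a .env file (comments and blanks ignored)."""
    values = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Strip optional surrounding quotes around the value.
        values[key.strip()] = value.strip().strip('"').strip("'")
    return values

if __name__ == "__main__":
    if Path(".env").exists():
        env = load_dotenv_minimal()
        print("GOOGLE_API_KEY is set" if env.get("GOOGLE_API_KEY") else "GOOGLE_API_KEY is missing")
    else:
        print(".env not found - copy .env.example first")
```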
Ensure these files are in your repository:
- ✅ `requirements.txt` (unified dependencies)
- ✅ `backend/__init__.py` (makes backend a Python package)
- ✅ `.streamlit/config.toml` (Streamlit configuration)
- ✅ `.env.example` (template for local development)
- ✅ `.streamlit/secrets.toml.example` (template for cloud secrets)
- Sign in to Streamlit Cloud
- Click "New app"
- Configure your app:
  - Repository: `dev-api-org/ai-portfolio-assistant`
  - Branch: `main`
  - Main file path: `frontend/streamlit_chat_canvas.py`
- Click "Advanced settings"
- Set Python version: `3.10` or higher
In Streamlit Cloud, go to App Settings > Secrets and add:

```toml
GOOGLE_API_KEY = "your_google_api_key_here"
MODEL_NAME = "gemini-2.0-flash-exp"
MODEL_TEMPERATURE = "0.7"
```

Click "Deploy" and wait for the build to complete.
- ❌ `.env` file with real API keys
- ❌ `.streamlit/secrets.toml` with real secrets
- ✅ Use `.env.example` and `.streamlit/secrets.toml.example` as templates
The app uses the following environment variables:
| Variable | Required | Default | Description |
|---|---|---|---|
| `GOOGLE_API_KEY` | ✅ Yes | - | Your Google AI API key |
| `MODEL_NAME` | No | `gemini-2.0-flash-exp` | Gemini model to use |
| `MODEL_TEMPERATURE` | No | `0.7` | Model creativity (0.0-1.0) |
| `GLOBAL_SYSTEM_PROMPT` | No | From config | Custom system prompt |
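The fallback behavior the table describes can be sketched as follows. This is a hypothetical helper, not the actual contents of `backend/config.py`, which may structure its configuration differently:

```python
import os

# Defaults mirror the variable table above; GOOGLE_API_KEY has no
# default and must be provided via .env or Streamlit secrets.
DEFAULTS = {
    "MODEL_NAME": "gemini-2.0-flash-exp",
    "MODEL_TEMPERATURE": "0.7",
}

def get_setting(name: str) -> str:
    """Return an environment variable, falling back to the documented default."""
    value = os.environ.get(name, DEFAULTS.get(name))
    if value is None:
        raise RuntimeError(f"Required environment variable {name} is not set")
    return value
```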
```
ai-portfolio-assistant/
├── backend/
│   ├── __init__.py               # Makes backend a package
│   ├── chat_core.py              # Core chat logic
│   ├── config.py                 # Configuration management
│   ├── session_memory.py         # Session state management
│   ├── prompts.json              # Prompt templates
│   └── systemprompts.json        # System prompts
├── frontend/
│   ├── components/               # Reusable UI components
│   ├── img/                      # Images and assets
│   ├── pages/                    # Additional pages
│   ├── streamlit_chat_canvas.py  # Main app
│   └── utils.py                  # Utility functions
├── .streamlit/
│   ├── config.toml               # Streamlit configuration
│   └── secrets.toml.example      # Secrets template
├── requirements.txt              # Python dependencies
├── .env.example                  # Environment template
├── .gitignore                    # Git ignore rules
└── README.md                     # This file
```
To test the backend independently:
```bash
python backend/llm_service.py
```

This runs a terminal-based chat to verify your API connection.
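At its core, such a terminal check is just a read/respond loop. The sketch below stubs out the model call (the real backend calls the Gemini API where `respond` is injected here); `chat_loop`, `read_input`, and `respond` are illustrative names, not the project's actual API:

```python
from typing import Callable, List, Tuple

def chat_loop(
    read_input: Callable[[], str],
    respond: Callable[[str], str],
) -> List[Tuple[str, str]]:
    """Minimal REPL: read a line, print a reply, stop on 'quit' or 'exit'."""
    transcript = []
    while True:
        user = read_input().strip()
        if user.lower() in {"quit", "exit"}:
            break
        reply = respond(user)
        transcript.append((user, reply))
        print(f"AI: {reply}")
    return transcript
```

Injecting the input and response functions keeps the loop testable without a live API key.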
Solution: Ensure `backend/__init__.py` exists (it can be an empty file).
Solution:
- Verify your API key at Google AI Studio
- Check that `GOOGLE_API_KEY` is set in `.env` (local) or Streamlit secrets (cloud)
- Ensure no extra spaces or quotes in the key
Solution:
- Check that `requirements.txt` is in the repository root
- Verify all imports use proper package structure (`from backend import ...`)
- Review build logs in Streamlit Cloud dashboard
- The app maintains conversation history per session
- Session data is stored in memory (resets on restart)
- For production, consider adding persistent storage
- Rate limits apply based on your Google AI API tier
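The in-memory, per-session history described above can be modeled roughly like this. This is a sketch under assumptions, not the actual contents of `backend/session_memory.py`:

```python
class SessionMemory:
    """Per-session conversation history held in process memory only.

    Everything is lost when the process restarts; for production,
    swap in a database or file-backed store.
    """

    def __init__(self, max_turns: int = 50):
        self.max_turns = max_turns
        self._sessions: dict[str, list[dict]] = {}

    def append(self, session_id: str, role: str, text: str) -> None:
        history = self._sessions.setdefault(session_id, [])
        history.append({"role": role, "text": text})
        # Trim oldest turns so prompts stay within model context limits.
        del history[:-self.max_turns]

    def history(self, session_id: str) -> list:
        return self._sessions.get(session_id, [])
```

Capping `max_turns` also keeps token usage predictable under API rate limits.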
- Fork the repository
- Create a feature branch
- Make your changes
- Test locally
- Submit a pull request
This project is for educational and portfolio purposes.