Commit c7d1456

Merge pull request #8 from MPK2004/agentic-sceduler

Agentic sceduler

2 parents 5bef357 + 48584c8

10 files changed: 1,243 additions & 407 deletions

README.md

Lines changed: 27 additions & 8 deletions
```diff
@@ -6,24 +6,35 @@
 ![TailwindCSS](https://img.shields.io/badge/tailwindcss-%2338B2AC.svg?style=for-the-badge&logo=tailwind-css&logoColor=white)
 ![Supabase](https://img.shields.io/badge/Supabase-3ECF8E?style=for-the-badge&logo=supabase&logoColor=white)
 
-A modern, responsive, and dynamic Event Scheduling application built with React, Vite, TypeScript, and Supabase. The application allows users to manage their events seamlessly, featuring an intuitive calendar interface, categorical filtering, search functionality, and automated Telegram notifications.
+A modern, responsive, and dynamic Event Scheduling application built with React, Vite, TypeScript, and Supabase. The application allows users to manage their events seamlessly using AI-powered natural language processing, voice commands, and image recognition.
 
 ## Live Demo
 
 - **Web Application:** [https://smart-scheduling.vercel.app/](https://smart-scheduling.vercel.app/)
 - **Telegram Bot:** [@Maantisbot](https://t.me/Maantisbot) (Link: [https://t.me/Maantisbot](https://t.me/Maantisbot))
 
-
 ## Features
 
+- **Conversational AI Agent:** Interact with an intelligent chatbot interface (`ChatPanel`) that manages context, processes intents, and handles multi-turn scheduling entirely with natural language.
+- **AI Event Parsing:** Create structured events from raw text, voice, or images using LLMs.
+- **Voice-to-Event:** Record your voice and let the AI transcribe and schedule the event automatically.
+- **Image/OCR Scheduling:** Take a picture of a physical note or schedule and extract event details instantly.
 - **User Authentication:** Secure signup and login using Supabase Auth.
 - **Guest Mode:** Experience the app without creating an account (events are stored in-memory).
 - **Interactive Calendar:** Visual event calendar for easy date selection and event viewing.
-- **Event Management:** Add, edit, and delete events with details like title, description, category, and date.
 - **Telegram Notifications:** Get notified via Telegram bot when an event starts.
 - **Advanced Filtering & Search:** Search events by keyword and filter them by custom categories.
 - **Responsive Design:** Fully responsive UI built with Tailwind CSS and Shadcn UI components.
-- **Real-time Toasts:** Instant feedback on user actions using Sonner.
+
+## AI Capabilities
+
+The "Smart" in Smart Scheduling is powered by a robust AI pipeline:
+
+- **Natural Language Processing:** Powered by Groq (Llama 3.1 8B) to extract titles, dates, times, and categories from unstructured text.
+- **Voice Transcription:** Utilizes Groq Whisper (whisper-large-v3-turbo) for high-accuracy voice-to-text conversion.
+- **Date Intelligence:** Combines LLM analysis with Chrono-node for precise, deterministic date and recurrence calculations.
+- **OCR (Optical Character Recognition):** Integrated with Tesseract.js to process images and extract textual event data.
+
 
 ## Tech Stack
 
```
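The Date Intelligence bullet above pairs LLM output with Chrono-node for deterministic date math. The parser itself is not part of this diff, so the following is a purely illustrative sketch of the deterministic half; the function name and supported phrases are invented here, not taken from the repository:

```typescript
// Hypothetical sketch: resolve a few common relative date phrases
// deterministically against a reference date. In the real app this role is
// filled by chrono-node combined with LLM analysis.
function resolveRelativeDate(phrase: string, reference: Date): Date | null {
  const DAY_MS = 24 * 60 * 60 * 1000;
  switch (phrase.trim().toLowerCase()) {
    case "today":
      return new Date(reference.getTime());
    case "tomorrow":
      return new Date(reference.getTime() + DAY_MS);
    case "next week":
      return new Date(reference.getTime() + 7 * DAY_MS);
    default:
      return null; // unknown phrase: defer to chrono-node / the LLM
  }
}
```

Anything such a lookup cannot handle would fall through to the chrono-node and LLM layers described above.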

```diff
@@ -50,6 +61,8 @@ erDiagram
         bigint telegram_chat_id "Linked Telegram Chat ID"
         text link_code "Unique linking code for Telegram"
         uuid last_event_id FK "Reference to the last interacted event"
+        text last_bot_response "Text of the most recent agent response"
+        text conversation_history "JSON structure storing sequential user-agent chat history"
     }
 
     events {
```
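The new `conversation_history` column stores chat turns as JSON text. Its exact layout is not shown in this diff, so the shape below is an assumption for illustration; the ten-turn cap mirrors the `conversation_history.current.slice(-10)` call in `ChatPanel.tsx`:

```typescript
// Assumed shape for the conversation_history column; the actual JSON layout
// is not visible in the diff, so these field names are illustrative.
interface ChatTurn {
  role: "user" | "assistant";
  content: string;
}

// Keep at most the last `limit` turns when writing the text column,
// matching the client's slice(-10) behavior.
function serializeHistory(turns: ChatTurn[], limit = 10): string {
  return JSON.stringify(turns.slice(-limit));
}

function deserializeHistory(raw: string | null): ChatTurn[] {
  if (!raw) return [];
  try {
    return JSON.parse(raw) as ChatTurn[];
  } catch {
    return []; // treat corrupt history as empty rather than failing the agent
  }
}
```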
```diff
@@ -72,6 +85,8 @@ erDiagram
 
 The application leverages Supabase Edge Functions and `pg_cron` for background tasks:
 
+- **agent:** A state-aware conversational routing function that interprets advanced temporal logic, handles complex intents, and drives the chatbot UI.
+- **analyze-event:** An Edge Function that utilizes Groq Llama and Whisper models to parse events from text and audio inputs.
 - **send-notifications:** An Edge Function that scans for upcoming events and sends Telegram messages to users with linked accounts.
 - **pg_cron:** Managed via the `process-notifications-every-minute` job, which triggers the notification engine every minute.
 
```
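Since `pg_cron` fires the notification engine once per minute, `send-notifications` only needs to pick out events that start inside the next 60-second window. A hypothetical sketch of that selection logic (field names are invented; the real function queries the `events` table through Supabase):

```typescript
// Illustrative only: the interface and function below are not from the
// repository. They sketch the per-minute windowing a notification scan
// would perform.
interface UpcomingEvent {
  id: string;
  title: string;
  startsAt: Date;
  notified: boolean;
}

// Select events starting within [now, now + 60s) that have not yet been
// notified, so each cron tick handles exactly one minute of schedule.
function eventsDueForNotification(events: UpcomingEvent[], now: Date): UpcomingEvent[] {
  const windowEnd = new Date(now.getTime() + 60 * 1000);
  return events.filter(
    (e) => !e.notified && e.startsAt >= now && e.startsAt < windowEnd,
  );
}
```

Filtering half-open windows this way keeps consecutive cron ticks from double-notifying an event on the boundary.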
````diff
@@ -109,16 +124,20 @@ Ensure you have the following installed on your local machine:
 VITE_SUPABASE_ANON_KEY=your_supabase_anon_key
 ```
 
-### Telegram Bot Setup
+## Setup Secrets
 
-To enable notifications, you must configure a Telegram Bot:
+To enable full functionality, you must configure the following Supabase Secrets:
 
-1. Create a bot using [@BotFather](https://t.me/botfather) and obtain the `TELEGRAM_BOT_TOKEN`.
-2. Set the token in your Supabase project secrets:
+1. **Telegram Notifications:** Create a bot via [@BotFather](https://t.me/botfather) and set the token:
 ```bash
 supabase secrets set TELEGRAM_BOT_TOKEN=your_token
 ```
 
+2. **Groq AI Capabilities:** Obtain an API key from Groq Console and set it:
+```bash
+supabase secrets set GROQ_API_KEY=your_key
+```
+
 4. **Start the Development Server:**
 ```bash
 npm run dev
````
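At runtime, Supabase Edge Functions read these secrets through `Deno.env`. A minimal guard helper, sketched here over a plain record so the logic stands alone (the helper itself is hypothetical, not part of the repository):

```typescript
// Hypothetical helper: in an Edge Function the env source would be
// Deno.env.toObject(); a plain record is used here so the logic is
// self-contained and testable.
function requireSecret(env: Record<string, string | undefined>, name: string): string {
  const value = env[name];
  if (!value) {
    // Fail fast with a clear message instead of calling Groq/Telegram
    // with an undefined credential.
    throw new Error(`Missing required secret: ${name}`);
  }
  return value;
}
```

Inside an Edge Function this might be called as `requireSecret(Deno.env.toObject(), "GROQ_API_KEY")`.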

src/components/ChatPanel.tsx

Lines changed: 283 additions & 0 deletions
New file:

```tsx
import { useState, useRef, useEffect } from "react";
import { Dialog, DialogContent } from "@/components/ui/dialog";
import { Button } from "@/components/ui/button";
import { Input } from "@/components/ui/input";
import { ScrollArea } from "@/components/ui/scroll-area";
import { supabase } from "@/integrations/supabase/client";
import { toast } from "sonner";
import { Send, Mic, Image, Square, Loader2, Bot, User, Wrench } from "lucide-react";

interface Message {
  role: "user" | "assistant" | "status";
  content: string;
  toolCalls?: { tool: string; args: any }[];
}

interface ChatPanelProps {
  open: boolean;
  onOpenChange: (open: boolean) => void;
  onEventChanged?: () => void;
}

const ChatPanel = ({ open, onOpenChange, onEventChanged }: ChatPanelProps) => {
  const [messages, setMessages] = useState<Message[]>([
    { role: "assistant", content: "Hey! I'm Maantis, your AI scheduling assistant. I can create events, check your schedule, resolve conflicts, and more. What can I help with?" }
  ]);
  const [input, setInput] = useState("");
  const [isLoading, setIsLoading] = useState(false);
  const [isRecording, setIsRecording] = useState(false);
  const scrollRef = useRef<HTMLDivElement>(null);
  const fileInputRef = useRef<HTMLInputElement>(null);
  const mediaRecorder = useRef<MediaRecorder | null>(null);
  const audioChunks = useRef<Blob[]>([]);
  const conversationHistory = useRef<any[]>([]);

  useEffect(() => {
    if (scrollRef.current) {
      scrollRef.current.scrollTop = scrollRef.current.scrollHeight;
    }
  }, [messages]);

  const sendToAgent = async (userMessage: string, inputType: string = "text", fileData?: string) => {
    setIsLoading(true);
    setMessages(prev => [...prev, { role: "status", content: "Thinking..." }]);

    try {
      const { data: { user } } = await supabase.auth.getUser();
      if (!user) {
        toast.error("Please log in first");
        setIsLoading(false);
        return;
      }

      const { data, error } = await supabase.functions.invoke('agent', {
        body: {
          user_id: user.id,
          message: userMessage,
          input_type: inputType,
          file_data: fileData,
          conversation_history: conversationHistory.current.slice(-10),
        },
      });

      if (error) throw new Error(error.message);

      setMessages(prev => prev.filter(m => m.role !== "status"));

      if (data.transcription && inputType === "voice") {
        setMessages(prev => [...prev, { role: "status", content: `Heard: "${data.transcription}"` }]);
      }

      const assistantMsg: Message = {
        role: "assistant",
        content: data.response || "I processed your request.",
        toolCalls: data.tool_calls_made,
      };
      setMessages(prev => [...prev, assistantMsg]);

      conversationHistory.current.push({ role: "user", content: userMessage });
      conversationHistory.current.push({ role: "assistant", content: data.response });

      if (data.tool_calls_made?.some((t: any) => ["create_event", "update_event", "delete_event"].includes(t.tool))) {
        onEventChanged?.();
      }

    } catch (err: any) {
      setMessages(prev => prev.filter(m => m.role !== "status"));
      setMessages(prev => [...prev, { role: "assistant", content: `Error: ${err.message}` }]);
    } finally {
      setIsLoading(false);
    }
  };

  const handleSubmit = async (e: React.FormEvent) => {
    e.preventDefault();
    if (!input.trim() || isLoading) return;
    const msg = input.trim();
    setInput("");
    setMessages(prev => [...prev, { role: "user", content: msg }]);
    await sendToAgent(msg);
  };

  const startRecording = async () => {
    try {
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      mediaRecorder.current = new MediaRecorder(stream);
      audioChunks.current = [];

      mediaRecorder.current.ondataavailable = (e) => {
        if (e.data.size > 0) audioChunks.current.push(e.data);
      };

      mediaRecorder.current.onstop = async () => {
        const audioBlob = new Blob(audioChunks.current, { type: 'audio/webm' });
        stream.getTracks().forEach(track => track.stop());

        const reader = new FileReader();
        reader.onloadend = async () => {
          const base64 = (reader.result as string).split(',')[1];
          setMessages(prev => [...prev, { role: "user", content: "Voice message" }]);
          await sendToAgent("", "voice", base64);
        };
        reader.readAsDataURL(audioBlob);
      };

      mediaRecorder.current.start();
      setIsRecording(true);
    } catch (err) {
      toast.error("Could not access microphone.");
    }
  };

  const stopRecording = () => {
    if (mediaRecorder.current && isRecording) {
      mediaRecorder.current.stop();
      setIsRecording(false);
    }
  };

  const handleImageUpload = async (e: React.ChangeEvent<HTMLInputElement>) => {
    const file = e.target.files?.[0];
    if (!file) return;

    const reader = new FileReader();
    reader.onloadend = async () => {
      const base64 = (reader.result as string).split(',')[1];
      setMessages(prev => [...prev, { role: "user", content: `Image: ${file.name}` }]);
      await sendToAgent(input || "", "image", base64);
      setInput("");
    };
    reader.readAsDataURL(file);
    e.target.value = "";
  };

  return (
    <Dialog open={open} onOpenChange={onOpenChange}>
      <DialogContent className="sm:max-w-[500px] h-[600px] flex flex-col p-0 gap-0">
        <div className="flex items-center gap-3 p-4 border-b bg-primary/5">
          <div className="w-9 h-9 rounded-full bg-primary flex items-center justify-center">
            <Bot className="h-5 w-5 text-white" />
          </div>
          <div>
            <h3 className="font-semibold text-sm">Maantis Agent</h3>
            <p className="text-xs text-muted-foreground">AI Scheduling Assistant</p>
          </div>
        </div>

        <ScrollArea className="flex-1 p-4" ref={scrollRef}>
          <div className="space-y-4">
            {messages.map((msg, i) => (
              <div key={i} className={`flex gap-2 ${msg.role === "user" ? "justify-end" : "justify-start"}`}>
                {msg.role === "assistant" && (
                  <div className="w-7 h-7 rounded-full bg-primary/10 flex items-center justify-center shrink-0 mt-0.5">
                    <Bot className="h-4 w-4 text-primary" />
                  </div>
                )}
                <div className={`max-w-[80%] rounded-2xl px-4 py-2.5 text-sm leading-relaxed ${
                  msg.role === "user"
                    ? "bg-primary text-primary-foreground rounded-br-md"
                    : msg.role === "status"
                    ? "bg-muted text-muted-foreground italic text-xs py-1.5"
                    : "bg-muted rounded-bl-md"
                }`}>
                  <p className="whitespace-pre-wrap">{msg.content}</p>
                  {msg.toolCalls && msg.toolCalls.length > 0 && (
                    <div className="mt-2 pt-2 border-t border-border/50">
                      <p className="text-[10px] uppercase tracking-wider text-muted-foreground mb-1 flex items-center gap-1">
                        <Wrench className="h-3 w-3" /> Tools used
                      </p>
                      {msg.toolCalls.map((tc, j) => (
                        <span key={j} className="inline-block text-[11px] bg-background rounded px-1.5 py-0.5 mr-1 mb-0.5 font-mono">
                          {tc.tool}
                        </span>
                      ))}
                    </div>
                  )}
                </div>
                {msg.role === "user" && (
                  <div className="w-7 h-7 rounded-full bg-primary flex items-center justify-center shrink-0 mt-0.5">
                    <User className="h-4 w-4 text-white" />
                  </div>
                )}
              </div>
            ))}
            {isLoading && (
              <div className="flex gap-2">
                <div className="w-7 h-7 rounded-full bg-primary/10 flex items-center justify-center shrink-0">
                  <Bot className="h-4 w-4 text-primary animate-pulse" />
                </div>
                <div className="bg-muted rounded-2xl rounded-bl-md px-4 py-2.5">
                  <Loader2 className="h-4 w-4 animate-spin text-muted-foreground" />
                </div>
              </div>
            )}
          </div>
        </ScrollArea>

        <div className="p-3 border-t bg-background">
          <form onSubmit={handleSubmit} className="flex items-center gap-2">
            <input
              type="file"
              ref={fileInputRef}
              accept="image/*"
              className="hidden"
              onChange={handleImageUpload}
            />
            <Button
              type="button"
              variant="ghost"
              size="icon"
              className="shrink-0 h-9 w-9"
              onClick={() => fileInputRef.current?.click()}
              disabled={isLoading}
            >
              <Image className="h-4 w-4 text-muted-foreground" />
            </Button>

            {isRecording ? (
              <Button
                type="button"
                variant="destructive"
                size="icon"
                className="shrink-0 h-9 w-9 animate-pulse"
                onClick={stopRecording}
              >
                <Square className="h-4 w-4" />
              </Button>
            ) : (
              <Button
                type="button"
                variant="ghost"
                size="icon"
                className="shrink-0 h-9 w-9"
                onClick={startRecording}
                disabled={isLoading}
              >
                <Mic className="h-4 w-4 text-muted-foreground" />
              </Button>
            )}

            <Input
              value={input}
              onChange={(e) => setInput(e.target.value)}
              placeholder="Ask me anything..."
              disabled={isLoading || isRecording}
              className="h-9 text-sm"
            />

            <Button
              type="submit"
              size="icon"
              className="shrink-0 h-9 w-9"
              disabled={isLoading || !input.trim()}
            >
              <Send className="h-4 w-4" />
            </Button>
          </form>
        </div>
      </DialogContent>
    </Dialog>
  );
};

export default ChatPanel;
```
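The refresh logic in `sendToAgent` checks `data.tool_calls_made` inline. Extracted as a standalone predicate (a hypothetical refactor, shown only to make the rule explicit), it reads:

```typescript
// Refresh the calendar only when the agent reports a tool call that
// actually mutated events; read-only tools should not trigger a refetch.
const MUTATING_TOOLS = new Set(["create_event", "update_event", "delete_event"]);

function shouldRefreshEvents(toolCalls?: { tool: string }[]): boolean {
  return toolCalls?.some((t) => MUTATING_TOOLS.has(t.tool)) ?? false;
}
```

With this predicate, the component body would call `onEventChanged?.()` when `shouldRefreshEvents(data.tool_calls_made)` is true.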

src/components/EventManager.tsx

Lines changed: 1 addition & 0 deletions
```diff
@@ -111,5 +111,6 @@ export const useEventManager = () => {
     addEvent,
     updateEvent,
     deleteEvent,
+    fetchEvents,
   };
};
```
