
Conversation

@matt423 (Member) commented Dec 22, 2025

Description

AIT-209

Adds JavaScript and React examples for message-per-token response streaming. Each example mimics a basic prompt -> answer scenario, with a Disconnect button to simulate network loss and demonstrate token recovery using channel history and `untilAttach`.
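For reviewers, here is a minimal sketch of that recovery path, assuming the `token` message name, `responseId` header, and timestamp-based ordering used in these examples (the helper name is illustrative):

```ts
import * as Ably from 'ably';

// Illustrative helper: after a simulated disconnect, re-attach and replay
// everything published up to the attach point via untilAttach history.
async function recoverMissedTokens(
  channel: Ably.RealtimeChannel,
  currentResponseId: string,
): Promise<{ token: string; messageOrder: number }[]> {
  const missed: { token: string; messageOrder: number }[] = [];

  await channel.attach();
  let page: Ably.PaginatedResult<Ably.InboundMessage> | null = await channel.history({
    untilAttach: true,
  });

  while (page) {
    for (const message of page.items) {
      // Keep only tokens that belong to the in-flight response
      if (message.name === 'token' && message.extras?.headers?.responseId === currentResponseId) {
        missed.push({ token: message.data.token, messageOrder: message.timestamp ?? 0 });
      }
    }
    page = page.hasNext() ? await page.next() : null;
  }
  return missed;
}
```

Using `untilAttach` bounds the replay to messages published before the channel re-attached, so the history fetch and the live subscription don't overlap.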

Review App

Note: No image for the examples page yet

Checklist

Summary by CodeRabbit

  • New Features

    • Added AI Transport token streaming examples for JavaScript and React, demonstrating real-time token streaming with Ably.
    • Includes complete working demo applications with getting started guides and local development setup instructions.
  • Documentation

    • Added comprehensive README files with setup steps, command examples, and references to additional AI Transport resources.


@matt423 matt423 self-assigned this Dec 22, 2025
@coderabbitai bot commented Dec 22, 2025

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the `.coderabbit.yaml` file in this repository. To trigger a single review, invoke the `@coderabbitai review` command.

You can disable this status message by setting `reviews.review_status` to `false` in the CodeRabbit configuration file.

Walkthrough

This pull request introduces comprehensive examples for AI Transport token streaming in both JavaScript and React, demonstrating real-time LLM token consumption via Ably. It includes complete project scaffolding with documentation, UI components, a backend simulation service, and configuration.

Changes

| Cohort / File(s) | Summary |
|------------------|---------|
| **JavaScript Example - Documentation & Configuration**<br/>`examples/ai-transport-token-streaming/javascript/README.md`, `examples/ai-transport-token-streaming/javascript/package.json`, `examples/ai-transport-token-streaming/javascript/vite.config.ts`, `examples/ai-transport-token-streaming/javascript/tailwind.config.ts` | Adds documentation for the example, an npm manifest with the Vite/TypeScript/Tailwind setup, and build/style configurations extending shared base configs |
| **JavaScript Example - UI & Styling**<br/>`examples/ai-transport-token-streaming/javascript/index.html`, `examples/ai-transport-token-streaming/javascript/src/styles.css` | Introduces an HTML page with placeholder elements (prompt display, status indicator, response area, prompt buttons) and Tailwind CSS directives |
| **JavaScript Example - Client Logic**<br/>`examples/ai-transport-token-streaming/javascript/src/script.ts` | Implements client-side token streaming: channel subscription, message handling with `responseId` filtering, connection state monitoring, disconnect/reconnect with history rehydration, and UI state synchronization |
| **JavaScript Example - Backend & Config**<br/>`examples/ai-transport-token-streaming/javascript/src/BackendLLMService.ts`, `examples/ai-transport-token-streaming/javascript/src/config.ts` | Adds a simulated LLM service that tokenizes responses and publishes to Ably with randomized delays, plus Ably key configuration from the environment |
| **React Example - Documentation & Configuration**<br/>`examples/ai-transport-token-streaming/react/README.md`, `examples/ai-transport-token-streaming/react/package.json`, `examples/ai-transport-token-streaming/react/vite.config.ts`, `examples/ai-transport-token-streaming/react/tailwind.config.ts`, `examples/ai-transport-token-streaming/react/postcss.config.js` | Adds documentation, an npm manifest for the Vite/React/TypeScript setup, and build/style configurations extending shared base configs |
| **React Example - TypeScript Configuration**<br/>`examples/ai-transport-token-streaming/react/tsconfig.json`, `examples/ai-transport-token-streaming/react/tsconfig.node.json` | Adds TypeScript compiler options for React/ESNext targeting and a node config for build tools |
| **React Example - UI & Entry Points**<br/>`examples/ai-transport-token-streaming/react/index.html`, `examples/ai-transport-token-streaming/react/src/index.tsx`, `examples/ai-transport-token-streaming/react/src/styles/styles.css` | Introduces the React entry point, HTML page, and Tailwind CSS directives |
| **React Example - Component Logic**<br/>`examples/ai-transport-token-streaming/react/src/App.tsx` | Implements a React component with Ably channel subscription, token aggregation via the `useChannel` hook, connection state monitoring, prompt handling, and disconnect/reconnect with paginated history fetching and message deduplication |
| **React Example - Backend & Config**<br/>`examples/ai-transport-token-streaming/react/src/BackendLLMService.ts`, `examples/ai-transport-token-streaming/react/src/config.ts` | Adds an `LLMService` interface and `BackendLLMService` implementation that tokenizes and publishes responses to Ably, plus Ably key configuration |
| **Example Registry**<br/>`src/data/examples/index.ts` | Registers the new `ai-transport-token-streaming` example in the examples array, updates the Chat presence layout, and renames the product key from `aitransport` to `ai_transport` for consistency |
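The client-logic rows above share one pattern: subscribe once, then filter incoming messages by the `responseId` header. A minimal sketch of that pattern, assuming the channel name and placeholder key (both illustrative):

```ts
import * as Ably from 'ably';

interface ProcessedMessage {
  token: string;
  messageOrder: number;
}

async function main() {
  const client = new Ably.Realtime({ key: 'YOUR_ABLY_KEY' }); // placeholder, not a real key
  const channel = client.channels.get('ai-transport-token-streaming'); // assumed channel name

  const currentResponseId = `request-${crypto.randomUUID()}`;
  const messages: ProcessedMessage[] = [];

  await channel.subscribe((message) => {
    // Drop anything that belongs to a different response
    if (message.extras?.headers?.responseId !== currentResponseId) return;

    if (message.name === 'token') {
      // Server timestamps give a stable ordering key for later deduplication
      messages.push({ token: message.data.token, messageOrder: message.timestamp ?? 0 });
    } else if (message.name === 'stream-complete') {
      console.log(messages.map((m) => m.token).join(''));
    }
  });
}

main().catch(console.error);
```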

Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant Client as Client<br/>(JS/React)
    participant Ably as Ably Channel
    participant Backend as Backend LLM<br/>Service

    User->>Client: Select prompt
    activate Client
    Client->>Client: Generate responseId<br/>Update UI (processing)
    Client->>Backend: requestLLMProcessing<br/>(prompt, responseId)
    deactivate Client

    activate Backend
    Backend->>Backend: Tokenize response<br/>into chunks
    loop For each token
        Backend->>Backend: Randomized delay
        Backend->>Ably: Publish token message<br/>(token, responseId header)
    end
    Backend->>Ably: Publish stream-complete<br/>(responseId header)
    deactivate Backend

    activate Client
    Ably->>Client: Message event (token)
    Client->>Client: Filter by responseId<br/>Append to messages
    Client->>Client: Aggregate & render
    Ably->>Client: Message event (stream-complete)
    Client->>Client: Mark processing complete<br/>Update UI
    deactivate Client

    User->>Client: Disconnect
    activate Client
    Client->>Ably: Detach channel
    deactivate Client

    User->>Client: Reconnect
    activate Client
    Client->>Ably: Re-attach channel
    Ably->>Client: Paginated history<br/>(untilAttach)
    Client->>Client: Filter & deduplicate<br/>messages by responseId
    Client->>Client: Aggregate & render
    deactivate Client
```
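The backend half of the diagram corresponds to plain Ably publishes with `extras.headers`; a minimal sketch, assuming the `token`/`stream-complete` message names shown above (the tokenizer and delays are stand-ins for a real LLM):

```ts
import * as Ably from 'ably';

// Hypothetical stand-in for a real LLM call
const fakeResponseFor = (prompt: string) => `This is a simulated answer to: ${prompt}`;

async function streamTokens(
  prompt: string,
  responseId: string,
  ablyKey: string,
  channelName: string,
): Promise<void> {
  const client = new Ably.Realtime({ key: ablyKey });
  const channel = client.channels.get(channelName);

  // Naive whitespace-preserving tokenizer; a real service would stream model output chunks
  const tokens = fakeResponseFor(prompt).split(/(?<=\s)/);

  for (const token of tokens) {
    // Randomized delay to mimic generation pace
    await new Promise((resolve) => setTimeout(resolve, 50 + Math.random() * 100));
    await channel.publish({ name: 'token', data: { token }, extras: { headers: { responseId } } });
  }

  // Tell subscribers the response is finished
  await channel.publish({ name: 'stream-complete', data: {}, extras: { headers: { responseId } } });
  client.close();
}
```

Carrying `responseId` in `extras.headers` keeps the payload itself clean and lets clients filter live and historical messages the same way.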

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 Hops with joy at token streams,
Where AI dreams dance through Ably beams,
Two examples born, JS and React so fine,
Real-time chunks of wisdom align!
History rehydrates when reconnections bind,
A streaming rabbit's perfect find!

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)

| Check name | Status | Explanation | Resolution |
|------------|--------|-------------|------------|
| Linked Issues check | ❓ Inconclusive | The PR partially addresses AIT-148 by providing working example implementations for token streaming, but does not explicitly demonstrate investigation of Sandpack capabilities, design assessment of external dependencies, or documentation of LLM optimization requirements. | Clarify whether the Sandpack feasibility investigation and design documentation for supporting external dependencies were completed as separate deliverables or are expected within this PR's scope. |

✅ Passed checks (4 passed)

| Check name | Status | Explanation |
|------------|--------|-------------|
| Out of Scope Changes check | ✅ Passed | All changes directly support the example implementations: JavaScript and React projects with token streaming, documentation, configuration, and a minor product key rename aligning naming conventions (`aitransport` to `ai_transport`). |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%. |
| Title check | ✅ Passed | The PR title 'AIT-209 Message per token examples' directly corresponds to the main addition of two complete example implementations (JavaScript and React) for real-time AI/LLM token streaming via Ably AI Transport. |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |

Comment `@coderabbitai help` to get the list of available commands and usage tips.

@matt423 matt423 added the review-app Create a Heroku review app label Dec 22, 2025
@ably-ci ably-ci temporarily deployed to ably-docs-ait-148-messa-ryzjxo December 22, 2025 17:47 Inactive
@matt423 matt423 force-pushed the ait-148-message-per-token-examples branch from 53fafec to ea06c40 Compare December 22, 2025 17:49
@matt423 matt423 temporarily deployed to ably-docs-ait-148-messa-ryzjxo December 22, 2025 17:50 Inactive
@matt423 matt423 changed the base branch from main to AIT-129-AIT-Docs-release-branch December 22, 2025 17:50
@matt423 matt423 force-pushed the ait-148-message-per-token-examples branch from ea06c40 to 0ba3d4e Compare December 22, 2025 17:52
@matt423 matt423 temporarily deployed to ably-docs-ait-148-messa-ryzjxo December 22, 2025 17:53 Inactive
@matt423 matt423 force-pushed the ait-148-message-per-token-examples branch from 0ba3d4e to 1c06661 Compare December 23, 2025 10:40
@matt423 matt423 temporarily deployed to ably-docs-ait-148-messa-ryzjxo December 23, 2025 10:41 Inactive
@matt423 matt423 force-pushed the AIT-129-AIT-Docs-release-branch branch from 400eb09 to f8056cb Compare December 23, 2025 10:41
@matt423 matt423 force-pushed the ait-148-message-per-token-examples branch from 1c06661 to 73d9273 Compare December 23, 2025 10:45
@matt423 matt423 temporarily deployed to ably-docs-ait-148-messa-ryzjxo December 23, 2025 10:45 Inactive
@matt423 matt423 temporarily deployed to ably-docs-ait-148-messa-ryzjxo December 23, 2025 10:55 Inactive
@matt423 (Member, Author) commented Dec 23, 2025

@coderabbitai review

@coderabbitai bot commented Dec 23, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai bot left a comment

Actionable comments posted: 10

♻️ Duplicate comments (1)
examples/ai-transport-token-streaming/react/postcss.config.js (1)

1-6: Configuration is correct but depends on missing packages.

This PostCSS configuration is standard for Tailwind CSS integration. However, it references the `tailwindcss` and `autoprefixer` packages, which are currently missing from the `package.json` devDependencies. Once those dependencies are added (as flagged in the package.json review), this configuration will work correctly.
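For reference, the missing entries would look roughly like this in `package.json` (the version ranges here are assumptions, not taken from the PR):

```diff
 "devDependencies": {
+  "autoprefixer": "^10.4.0",
+  "tailwindcss": "^3.4.0",
   ...
 }
```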

🧹 Nitpick comments (12)
examples/ai-transport-token-streaming/javascript/README.md (1)

42-42: Reduce repetitive phrasing in sequential instructions.

Lines 42 and 60 use similar structures ("to be your" and "to use your") when describing how to update the VITE_ABLY_KEY variable. Varying the phrasing would improve readability across the getting started steps.

🔎 Suggested rewording

Line 42 can remain as-is, and line 60 could be revised for clarity:

```diff
- In CodeSandbox, rename the `.env.example` file to `.env.local` and update the value of your `VITE_ABLY_KEY` variable to use your Ably API key.
+ In CodeSandbox, rename the `.env.example` file to `.env.local` and set `VITE_ABLY_KEY` to your Ably API key.
```

Also applies to: 60-60

examples/ai-transport-token-streaming/react/tsconfig.json (1)

12-12: Consider aligning `moduleResolution` with Vite best practices.

The `moduleResolution` option is set to `"Node"` here, while `tsconfig.node.json` uses `"bundler"`. For consistency with modern Vite practice, consider using `"bundler"` in both configurations.

🔎 Proposed change
-    "moduleResolution": "Node",
+    "moduleResolution": "bundler",
examples/ai-transport-token-streaming/javascript/src/config.ts (1)

1-3: Consider standardizing the fallback key across examples.

The JavaScript example uses 'YOUR_ABLY_KEY_HERE' as the fallback, while the React example uses 'demo-key-for-examples:YOUR_ABLY_KEY_HERE' (which follows Ably's key format pattern). For consistency in reference implementations, both examples should use the same placeholder format.

🔎 Standardize with React example format
```diff
 export const config = {
-  ABLY_KEY: import.meta.env.VITE_ABLY_KEY || 'YOUR_ABLY_KEY_HERE',
+  ABLY_KEY: import.meta.env.VITE_ABLY_KEY || 'demo-key-for-examples:YOUR_ABLY_KEY_HERE',
 };
```
examples/ai-transport-token-streaming/javascript/src/BackendLLMService.ts (2)

10-21: Add disposal method for the Ably client.

The Ably Realtime client is instantiated but never closed. For production use, consider adding a `dispose()` method to properly close the connection and release resources.

🔎 Proposed disposal method
```diff
+  dispose(): void {
+    // Cancel all active streams
+    this.activeStreams.forEach((timeouts) => {
+      timeouts.forEach((timeout) => clearTimeout(timeout));
+    });
+    this.activeStreams.clear();
+
+    // Close the Ably connection
+    this.client.close();
+  }
```

81-86: Implement cancellation using the `activeStreams` map.

The `activeStreams` map stores timeouts, but they are never cancelled. Consider adding a `cancelStream(responseId)` method to clear pending timeouts, enabling proper cancellation of in-progress streams.

🔎 Proposed cancellation method
```diff
+  cancelStream(responseId: string): void {
+    const timeouts = this.activeStreams.get(responseId);
+    if (timeouts) {
+      timeouts.forEach((timeout) => clearTimeout(timeout));
+      this.activeStreams.delete(responseId);
+    }
+  }
```
examples/ai-transport-token-streaming/react/src/BackendLLMService.ts (2)

10-21: Add disposal method for the Ably client.

The Ably Realtime client is instantiated but never closed. For production use, consider adding a `dispose()` method to properly close the connection and release resources.

🔎 Proposed disposal method
```diff
+  dispose(): void {
+    // Cancel all active streams
+    this.activeStreams.forEach((timeouts) => {
+      timeouts.forEach((timeout) => clearTimeout(timeout));
+    });
+    this.activeStreams.clear();
+
+    // Close the Ably connection
+    this.client.close();
+  }
```

81-86: Implement cancellation using the `activeStreams` map.

The `activeStreams` map stores timeouts, but they are never cancelled. Consider adding a `cancelStream(responseId)` method to clear pending timeouts, enabling proper cancellation of in-progress streams.

🔎 Proposed cancellation method
```diff
+  cancelStream(responseId: string): void {
+    const timeouts = this.activeStreams.get(responseId);
+    if (timeouts) {
+      timeouts.forEach((timeout) => clearTimeout(timeout));
+      this.activeStreams.delete(responseId);
+    }
+  }
```
examples/ai-transport-token-streaming/react/src/App.tsx (3)

59-77: Consider providing user feedback on error.

The error handling in `handlePromptClick` silently fails and only resets the `isProcessing` flag. For a better user experience, consider displaying an error message in the UI when `requestLLMProcessing` fails.

🔎 Suggested enhancement
```diff
+ const [error, setError] = useState<string | null>(null);

  const handlePromptClick = async (selectedPrompt: string) => {
    if (isProcessing || connectionState !== 'connected' || isChannelDetached) {
      return;
    }

    setIsProcessing(true);
    setMessages([]);
    setCurrentResponse('');
    setPrompt(selectedPrompt);
+   setError(null);

    const responseId = `request-${crypto.randomUUID()}`;
    setCurrentResponseId(responseId);

    try {
      await requestLLMProcessing(selectedPrompt, responseId, config.ABLY_KEY, CHANNEL_NAME);
    } catch (error) {
+     console.error('Error requesting LLM processing:', error);
+     setError('Failed to process prompt. Please try again.');
      setIsProcessing(false);
    }
  };
```

Then display the error in the UI:

```diff
  <div className="p-4 border border-gray-300 rounded-lg bg-gray-50 h-48 overflow-y-auto whitespace-pre-wrap text-base leading-relaxed">
+   {error && <div className="text-red-500 mb-2">{error}</div>}
    {currentResponse || (isProcessing ? 'Thinking...' : 'Select a prompt below to get started')}
    {isProcessing && <span className="text-blue-600">▋</span>}
  </div>
```

84-117: Consider optimizing history rehydration for large message sets.

The current implementation performs deduplication and sorting inside the state setter for each historical message (lines 99-107). For large histories, this could result in O(n²) complexity. Consider collecting all missed messages first, then deduplicating and sorting once before updating state.

🔎 Proposed optimization
```diff
  const handleReconnect = async () => {
    setIsChannelDetached(false);
    await channel.attach();

    // Fetch missed messages for current response
    if (currentResponseId) {
+     const missedMessages: ProcessedMessage[] = [];
+     let streamCompleted = false;
      let page = await channel.history({ untilAttach: true });

      // Paginate backwards through history
      while (page) {
        for (const message of page.items) {
          const responseId = message.extras.headers.responseId;
          if (responseId === currentResponseId) {
            if (message.name === 'token') {
              const messageOrder = message.timestamp;
-             setMessages((prev) => {
-               // Only add if not already present
-               if (prev.find((m) => m.messageOrder === messageOrder)) {
-                 return prev;
-               }
-               return [...prev, { token: message.data.token, messageOrder }].sort(
-                 (a, b) => a.messageOrder - b.messageOrder,
-               );
-             });
+             missedMessages.push({ token: message.data.token, messageOrder });
            } else if (message.name === 'stream-complete') {
-             setIsProcessing(false);
+             streamCompleted = true;
            }
          }
        }
        // Move to next page if available
        page = page.hasNext() ? await page.next() : null;
      }
+
+     // Merge and deduplicate all messages at once
+     setMessages((prev) => {
+       const allMessages = [...prev, ...missedMessages];
+       const uniqueMessages = allMessages.filter(
+         (msg, index, self) => self.findIndex((m) => m.messageOrder === msg.messageOrder) === index,
+       );
+       return uniqueMessages.sort((a, b) => a.messageOrder - b.messageOrder);
+     });
+
+     if (streamCompleted) {
+       setIsProcessing(false);
+     }
    }
  };
```

29-46: Add defensive checks for message structure.

Line 30 accesses `message.extras.headers.responseId` without checking whether `extras` or `headers` exist. While this works for messages published by your `BackendLLMService`, it could throw if the channel receives unexpected message formats.

🔎 Proposed defensive check
```diff
  const { channel } = useChannel(CHANNEL_NAME, (message: RealtimeMessage) => {
-   const responseId = message.extras.headers.responseId;
+   const responseId = message.extras?.headers?.responseId;

    if (!currentResponseId || responseId !== currentResponseId) {
      return; // Ignore messages not for current response
    }

    if (message.name === 'token') {
      const newMessage: ProcessedMessage = {
        token: message.data.token,
        messageOrder: message.timestamp,
      };

      setMessages((prev) => [...prev, newMessage]);
    } else if (message.name === 'stream-complete') {
      setIsProcessing(false);
    }
  });
```
examples/ai-transport-token-streaming/javascript/src/script.ts (2)

185-205: Static analysis warning is a false positive, but consider optimization.

The static analysis tool flagged line 186 for XSS risk, but this is a false positive: you're setting `innerHTML` to an empty string, and the subsequent button text comes from a hardcoded array. However, recreating all the buttons on every UI update is inefficient. Consider rendering the buttons once on initialization and updating their `disabled` state instead.

🔎 Proposed optimization
```diff
+// Render prompt buttons once on initialization
+function renderPromptButtons() {
+  availablePrompts.forEach((promptText) => {
+    const button = document.createElement('button');
+    button.textContent = promptText;
+    button.onclick = () => handlePromptClick(promptText);
+    button.dataset.prompt = promptText;
+    promptButtons.appendChild(button);
+  });
+}

-// Update prompt buttons
+// Update prompt button states
 function updatePromptButtons() {
-  promptButtons.innerHTML = '';
-
-  availablePrompts.forEach((promptText) => {
-    const button = document.createElement('button');
-    button.textContent = promptText;
-    button.onclick = () => handlePromptClick(promptText);
-
-    const disabled = isProcessing || connectionState !== 'connected' || isChannelDetached;
-    button.disabled = disabled;
-
-    button.className = `px-3 py-2 text-sm border rounded-md transition-colors ${
-      disabled
-        ? 'bg-gray-100 text-gray-400 cursor-not-allowed border-gray-200'
-        : 'bg-white hover:bg-blue-50 border-gray-300 hover:border-blue-300 cursor-pointer'
-    }`;
-
-    promptButtons.appendChild(button);
+  const disabled = isProcessing || connectionState !== 'connected' || isChannelDetached;
+  const buttons = promptButtons.querySelectorAll('button');
+  buttons.forEach((button) => {
+    button.disabled = disabled;
+    button.className = `px-3 py-2 text-sm border rounded-md transition-colors ${
+      disabled
+        ? 'bg-gray-100 text-gray-400 cursor-not-allowed border-gray-200'
+        : 'bg-white hover:bg-blue-50 border-gray-300 hover:border-blue-300 cursor-pointer'
+    }`;
   });
 }

+// Render buttons on initialization
+renderPromptButtons();
+
 // Initial UI update
 updateUI();
```

111-145: Consider optimizing history rehydration for large message sets.

The current implementation performs deduplication and sorting for each historical message (lines 128-131). For large histories, this could result in O(n²) complexity. Consider collecting all missed messages first, then deduplicating and sorting once.

🔎 Proposed optimization
```diff
 async function handleReconnect() {
   isChannelDetached = false;
   await channel.attach();

   // Fetch missed messages for current response
   if (currentResponseId) {
+    const missedMessages: ProcessedMessage[] = [];
+    let streamCompleted = false;
     let page = await channel.history({ untilAttach: true });

     // Paginate backwards through history
     while (page) {
       for (const message of page.items) {
         const responseId = message.extras.headers.responseId;
         if (responseId === currentResponseId) {
           if (message.name === 'token') {
             const messageOrder = message.timestamp;
-            // Only add if not already present
-            if (!messages.find((m) => m.messageOrder === messageOrder)) {
-              messages.push({ token: message.data.token, messageOrder });
-              messages.sort((a, b) => a.messageOrder - b.messageOrder);
-            }
+            missedMessages.push({ token: message.data.token, messageOrder });
           } else if (message.name === 'stream-complete') {
-            isProcessing = false;
+            streamCompleted = true;
           }
         }
       }

       // Move to next page if available
       page = page.hasNext() ? await page.next() : null;
     }
+
+    // Merge and deduplicate all messages at once
+    const allMessages = [...messages, ...missedMessages];
+    const uniqueMessages = allMessages.filter(
+      (msg, index, self) => self.findIndex((m) => m.messageOrder === msg.messageOrder) === index,
+    );
+    messages = uniqueMessages.sort((a, b) => a.messageOrder - b.messageOrder);
+
+    if (streamCompleted) {
+      isProcessing = false;
+    }
     updateCurrentResponse();
   }

   updateUI();
 }
```
📜 Review details

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Jira integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between f8056cb and 2ee184a.

📒 Files selected for processing (23)
  • examples/ai-transport-token-streaming/javascript/README.md
  • examples/ai-transport-token-streaming/javascript/index.html
  • examples/ai-transport-token-streaming/javascript/package.json
  • examples/ai-transport-token-streaming/javascript/src/BackendLLMService.ts
  • examples/ai-transport-token-streaming/javascript/src/config.ts
  • examples/ai-transport-token-streaming/javascript/src/script.ts
  • examples/ai-transport-token-streaming/javascript/src/styles.css
  • examples/ai-transport-token-streaming/javascript/tailwind.config.ts
  • examples/ai-transport-token-streaming/javascript/vite.config.ts
  • examples/ai-transport-token-streaming/react/README.md
  • examples/ai-transport-token-streaming/react/index.html
  • examples/ai-transport-token-streaming/react/package.json
  • examples/ai-transport-token-streaming/react/postcss.config.js
  • examples/ai-transport-token-streaming/react/src/App.tsx
  • examples/ai-transport-token-streaming/react/src/BackendLLMService.ts
  • examples/ai-transport-token-streaming/react/src/config.ts
  • examples/ai-transport-token-streaming/react/src/index.tsx
  • examples/ai-transport-token-streaming/react/src/styles/styles.css
  • examples/ai-transport-token-streaming/react/tailwind.config.ts
  • examples/ai-transport-token-streaming/react/tsconfig.json
  • examples/ai-transport-token-streaming/react/tsconfig.node.json
  • examples/ai-transport-token-streaming/react/vite.config.ts
  • src/data/examples/index.ts
🧰 Additional context used
🧬 Code graph analysis (3)
examples/ai-transport-token-streaming/react/src/App.tsx (2)
examples/ai-transport-token-streaming/react/src/config.ts (1)
  • config (1-3)
examples/ai-transport-token-streaming/react/src/BackendLLMService.ts (1)
  • requestLLMProcessing (91-98)
examples/ai-transport-token-streaming/react/src/BackendLLMService.ts (1)
examples/pub-sub-message-annotations/javascript/src/config.ts (1)
  • channelName (5-5)
examples/ai-transport-token-streaming/javascript/src/BackendLLMService.ts (1)
examples/pub-sub-message-annotations/javascript/src/config.ts (1)
  • channelName (5-5)
🪛 ast-grep (0.40.3)
examples/ai-transport-token-streaming/javascript/src/script.ts

[warning] 186-186: Direct modification of innerHTML or outerHTML properties detected. Modifying these properties with unsanitized user input can lead to XSS vulnerabilities. Use safe alternatives or sanitize content first.
Context: promptButtons.innerHTML = ''
Note: [CWE-79] Improper Neutralization of Input During Web Page Generation [REFERENCES]
- https://owasp.org/www-community/xss-filter-evasion-cheatsheet
- https://cwe.mitre.org/data/definitions/79.html

(dom-content-modification)

🪛 LanguageTool
examples/ai-transport-token-streaming/javascript/README.md

[style] ~42-~42: This adverb was used twice in the sentence. Consider removing one of them or replacing them with a synonym.
Context: ...the value of VITE_ABLY_KEY to be your Ably API key. 5. Install dependencies: `...

(ADVERB_REPETITION_PREMIUM)


[style] ~60-~60: This adverb was used twice in the sentence. Consider removing one of them or replacing them with a synonym.
Context: ...ur VITE_ABLY_KEY variable to use your Ably API key.

(ADVERB_REPETITION_PREMIUM)

examples/ai-transport-token-streaming/react/README.md

[style] ~42-~42: This adverb was used twice in the sentence. Consider removing one of them or replacing them with a synonym.
Context: ...the value of VITE_ABLY_KEY to be your Ably API key. 5. Install dependencies: `...

(ADVERB_REPETITION_PREMIUM)


[style] ~60-~60: This adverb was used twice in the sentence. Consider removing one of them or replacing them with a synonym.
Context: ...ur VITE_ABLY_KEY variable to use your Ably API key.

(ADVERB_REPETITION_PREMIUM)

🔇 Additional comments (13)
examples/ai-transport-token-streaming/javascript/README.md (1)

1-60: Overall documentation structure is clear and complete.

The README provides a well-organized introduction to AI Transport token streaming with clear setup instructions, resource links, and alternative deployment options. The step-by-step getting started guide is easy to follow and the terminology is consistent throughout.

examples/ai-transport-token-streaming/react/tsconfig.node.json (1)

1-10: LGTM!

The TypeScript configuration for Vite tooling is correct and follows standard practices for Vite-based projects.

examples/ai-transport-token-streaming/react/index.html (1)

1-12: LGTM!

The HTML entry point is correctly structured with proper meta tags and module script configuration for the React application.

examples/ai-transport-token-streaming/react/tsconfig.json (1)

1-11: LGTM!

The TypeScript configuration is well-structured with appropriate strict mode settings, JSX configuration, and project references.

Also applies to: 13-20

examples/ai-transport-token-streaming/react/README.md (1)

1-60: LGTM!

The README is comprehensive and well-structured, providing clear instructions for setup and usage of the AI Transport token streaming example.

src/data/examples/index.ts (2)

16-25: LGTM!

The layout change for the chat-presence example from double-horizontal to single-large looks intentional and properly configured.


300-302: LGTM!

The product key rename from aitransport to ai_transport is consistent with the new example entry that references ai_transport at line 10.

examples/ai-transport-token-streaming/javascript/src/styles.css (1)

1-3: LGTM!

Standard Tailwind CSS configuration is correct.

examples/ai-transport-token-streaming/react/src/styles/styles.css (1)

1-3: LGTM!

Standard Tailwind CSS configuration is correct.

examples/ai-transport-token-streaming/react/src/index.tsx (1)

1-9: LGTM!

The React entry point follows best practices with StrictMode enabled and correct usage of React 18's createRoot API.

examples/ai-transport-token-streaming/javascript/vite.config.ts (1)

1-7: LGTM: Consistent configuration pattern.

The JavaScript example follows the same Vite configuration pattern as the React example, extending the shared base configuration. This consistency across examples is good for maintainability.

examples/ai-transport-token-streaming/react/src/config.ts (1)

1-3: Better fallback format than JavaScript example.

This configuration uses a fallback key format ('demo-key-for-examples:YOUR_ABLY_KEY_HERE') that follows Ably's key pattern, making it clearer to users what format is expected. This is preferable to the plain placeholder in the JavaScript example.

examples/ai-transport-token-streaming/javascript/package.json (1)

10-19: Dependency versions are current and valid for example code.

All specified versions exist and have no reported security vulnerabilities. The caret ranges allow automatic updates to newer patch and minor versions within their respective major versions (e.g., ably@^2.4.0 will use 2.16.0 when installed). While major version upgrades are available for vite (7.3.0) and tailwindcss (4.1.18), these would require code changes and testing. The current pinned versions are safe and appropriate for reference implementation code.

@matt423 matt423 force-pushed the ait-148-message-per-token-examples branch from 2ee184a to b157c70 Compare December 23, 2025 11:42
@matt423 matt423 temporarily deployed to ably-docs-ait-148-messa-ryzjxo December 23, 2025 11:43 Inactive
@matt423 matt423 changed the title [AIT-148] Message per token examples [AIT-209] Message per token examples Dec 23, 2025
@matt423 matt423 marked this pull request as ready for review December 23, 2025 11:55