The DLinRT.eu reviewer assignment system provides a comprehensive platform for:
- Multi-Dimensional Preferences - Categories, companies, and products
- Intelligent Assignment - Weighted scoring with workload balancing
- Manual Override - Full control over assignments with preview
- Assignment Tracking - Complete audit trail of all changes
- Automated Notifications - Email alerts for new assignments
Contents:
- Reviewer Preferences System
- Review Round Management
- Assignment Algorithm
- Assignment History & Audit Trail
- Email Notifications
- Best Practices
- Troubleshooting
- Technical Reference
Reviewers can now specify expertise across three dimensions:
- Task Categories (10 categories: Auto-Contouring, Treatment Planning, etc.)
- Companies (All companies in the database)
- Products (All products in the database)
Location: Profile Page → "Review Preferences" section
Purpose: General expertise areas
How to Use:
- Click on category buttons to select/deselect
- Set priority for each (1 = highest expertise, 10 = minimal)
- Categories include:
- Auto-Contouring
- Clinical Prediction
- Image Enhancement
- Image Synthesis
- Performance Monitor
- Platform
- Reconstruction
- Registration
- Tracking
- Treatment Planning
Priority Guide:
- 1-3: Primary expertise - Can review independently and provide deep insights
- 4-6: Secondary expertise - Familiar, can review with occasional consultation
- 7-10: Tertiary expertise - Basic knowledge, review only if needed
Purpose: Indicate familiarity with specific vendors
How to Use:
- Use search box to find companies
- Click "Add" to include company in preferences
- Set priority level (1-10 scale)
- Bulk Action: Click 📦+ icon to "Select all products from this company"
- Automatically adds all products from selected company to Product preferences
- Inherits same priority level
- Useful for vendors you're very familiar with
Use Cases:
- Worked with company's products clinically
- Attended training from vendor
- Familiar with company's development approach
- Used multiple products from same vendor
Purpose: Direct expertise with specific products
How to Use:
- Search by product name, company, or category
- Use filters to narrow results:
- Filter by company
- Filter by category
- Click "Add" to include product
- Set priority level (1-10 scale)
- Add notes (optional) - e.g., "Used clinically for 3 years"
Advanced Search:
- Type to search across product names
- Results show: Product Name | Company | Category
- Real-time filtering as you type
Purpose: Share preference templates or backup preferences
Export:
- Click "Export" button
- Downloads JSON file with all preferences
- Includes: categories, companies, products, priorities, notes
- Filename: `reviewer-preferences-[date].json`
Import:
- Click "Import" button
- Select previously exported JSON file
- Review preview showing what will be imported
- Existing preferences are preserved
- Only new items are added
- Click "Confirm Import"
Use Cases:
- Institutional templates (e.g., "Hospital physics team expertise")
- Backup before making changes
- Share with colleagues with similar expertise
- Onboard new reviewers with team's preferences
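The exact schema of the exported file isn't documented here; as a rough sketch, assuming field names mirror the three preference dimensions described above (the interface and key names below are illustrative assumptions, not the actual export format):

```typescript
// Hypothetical shape of an exported preferences file.
// Field names are illustrative assumptions, not the real schema.
interface PreferenceEntry {
  priority: number; // 1 (highest expertise) .. 10 (minimal)
  notes?: string;   // optional, e.g. "Used clinically for 3 years"
}

interface ExportedPreferences {
  categories: Record<string, PreferenceEntry>; // keyed by category name
  companies: Record<string, PreferenceEntry>;  // keyed by company id
  products: Record<string, PreferenceEntry>;   // keyed by product id
  exportedAt: string;                          // ISO date used in the filename
}

const example: ExportedPreferences = {
  categories: { "Auto-Contouring": { priority: 1 } },
  companies: { "limbus-ai": { priority: 2 } },
  products: {
    "limbus-contour": { priority: 1, notes: "Used clinically for 3 years" },
  },
  exportedAt: new Date().toISOString(),
};

console.log(Object.keys(example.products).length); // 1 product preference
```

On import, existing entries are preserved and only keys not already present would be merged in, matching the "only new items are added" behavior above.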
Purpose: Reset all preferences and start fresh
How to Use:
- Click "Clear All Preferences" button (red, bottom of page)
- Confirmation dialog shows: "X categories, Y companies, Z products will be removed"
- Click "Clear All Preferences" to confirm
- All `reviewer_expertise` entries are deleted
- Button is disabled if no preferences exist
Warning: This action cannot be undone. Consider exporting preferences first as backup.
Location: Admin Dashboard → Review Rounds
Step 1: Basic Information
- Click "Create New Round"
- Fill in:
- Round name
- Deadline
- Description
- Click "Create Round"
- Round appears with status "draft"
Step 2: Start Round (Enhanced Flow)
- Click "Start Round" button
- Reviewer Selection Dialog opens
Purpose: Choose which reviewers will participate in this round
Interface:
- Table showing all reviewers with:
- Name & Email
- Expertise counts (categories, companies, products)
- Current workload (pending + in-progress reviews)
- Estimated new assignments (calculated in real-time)
Actions:
- ✅ All reviewers selected by default
- Click checkboxes to select/deselect individual reviewers
- "Select All" / "Deselect All" toggle button
- Search to find specific reviewers
Summary Panel (updates dynamically):
- Total selected: X reviewers
- Products to assign: Y products
- Average per reviewer: ~Z products each
- Max variance: ±1 product (balanced distribution)
Example Workflow:
Initial state:
- 5 reviewers available
- 100 products to assign
- All selected by default
- Estimated: 20 products each
Admin notices Reviewer A has 50 pending reviews:
- Deselect Reviewer A
- Updated: 4 reviewers, ~25 products each
Admin clicks "Continue"
Purpose: Review and manually adjust proposed assignments before finalizing
Interface:
Summary Panel (sticky header):
- Total assignments: X
- Reviewers involved: Y
- Unassigned products: Z (if any)
- Workload distribution visual (horizontal bars showing min-max-avg)
- Match quality indicators
Assignment Table:
| Product | Company | Category | Assigned To | Match Score | Actions |
|---|---|---|---|---|---|
| Product A | Company X | Auto-Contouring | Reviewer 1 | ⭐⭐⭐⭐⭐ (95%) | Reassign |
| Product B | Company Y | Treatment Planning | Reviewer 2 | ⭐⭐⭐⭐ (80%) | Reassign |
Match Score Indicators:
- ⭐⭐⭐⭐⭐ (90-100%): Perfect match - Product + Company + Category expertise
- ⭐⭐⭐⭐ (70-89%): Excellent - Company + Category expertise
- ⭐⭐⭐ (50-69%): Good - Category expertise only
- ⭐⭐ (30-49%): Fair - Secondary expertise
- ⭐ (0-29%): Limited - No direct match
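The tier boundaries above can be sketched as a simple mapping; a minimal example, assuming the UI derives stars purely from the percentage (the real component may differ):

```typescript
// Map a match percentage to the star tiers listed above.
// Boundaries follow the documented ranges (90+, 70+, 50+, 30+, else 1 star).
function starsForMatch(percent: number): string {
  if (percent >= 90) return "⭐⭐⭐⭐⭐"; // Perfect match
  if (percent >= 70) return "⭐⭐⭐⭐";   // Excellent
  if (percent >= 50) return "⭐⭐⭐";    // Good
  if (percent >= 30) return "⭐⭐";     // Fair
  return "⭐";                          // Limited
}

console.log(starsForMatch(95)); // ⭐⭐⭐⭐⭐
console.log(starsForMatch(80)); // ⭐⭐⭐⭐
```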
Grouping Options:
- Group by reviewer (shows workload per reviewer)
- Group by category
- Group by company
Sort Options:
- By match score (highest/lowest first)
- By product name
- By reviewer workload
- By category
Individual Product:
- Click "Reassign" button on product row
- Dropdown shows all selected reviewers with:
- Name
- Current assignment count
- Expertise match: ✓ Expert | ~ Familiar | ✗ No match
- Color: 🟢 Below avg | 🟡 At avg | 🔴 Above avg
- Click reviewer name to reassign
- Table updates in real-time
- Summary panel recalculates
Bulk Actions:
- Select multiple products (checkboxes)
- Click "Bulk Actions" dropdown:
- "Reassign all to..." → Choose reviewer
- "Distribute evenly" → Auto-balance among selected reviewers
- "Assign by expertise only" → Ignore workload, maximize match
- "Remove from round" → Unassign selected products
Auto-Balance Button:
- Redistributes to minimize variance (max ±1 product)
- Maintains expertise matches where possible
- Shows before/after comparison
- Requires confirmation
Rerun Algorithm Button:
- Discards all manual changes
- Reruns automatic assignment
- Uses current reviewer selection
- Useful if you want to start over
Warnings (shown automatically):
⚠️ "Reviewer X has 15 assignments (50% above average)"
⚠️ "5 products assigned to reviewers with no matching expertise"
⚠️ "Workload variance: 8 products (rebalancing recommended)"
Footer Buttons:
- Cancel: Discard all changes, return to rounds list
- Save as Draft: Save assignments but keep round in draft status
- Confirm & Start Round:
- Inserts assignments into database
- Changes round status to "active"
- Records all assignments in history
- Triggers email notifications to all assigned reviewers
The algorithm uses weighted expertise scoring to match products with reviewers:
Scoring Formula:
Total Score = (Product Match Score) + (Company Match Score) + (Category Match Score)
Component Scores:
- Product Match (Highest Priority):
  - Base weight: 5x
  - Score: (11 - priority) × 5
  - Example: Priority 1 → 10 × 5 = 50 points
- Company Match (Medium Priority):
  - Base weight: 2x
  - Score: (11 - priority) × 2
  - Example: Priority 3 → 8 × 2 = 16 points
- Category Match (Base Priority):
  - Base weight: 3x
  - Score: (11 - priority) × 3
  - Example: Priority 2 → 9 × 3 = 27 points
Example Scoring:
Product: "Limbus Contour" (Limbus AI, Auto-Contouring)
Reviewer A preferences:
- Product: "Limbus Contour" (Priority 1) → 50 points
- Company: "Limbus AI" (Priority 2) → 18 points
- Category: "Auto-Contouring" (Priority 1) → 30 points
Total: 98 points (Excellent match!)
Reviewer B preferences:
- Category: "Auto-Contouring" (Priority 4) → 21 points
Total: 21 points (Fair match)
Assignment: Product goes to Reviewer A
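The scoring formula above can be sketched in a few lines; a minimal version, assuming the weights (product 5, company 2, category 3) and the `(11 - priority)` multiplier described above (function and type names are illustrative, not the actual implementation):

```typescript
// Weighted expertise scoring: score = (11 - priority) × weight per matching
// dimension, summed. Weights follow the documented values.
type Priorities = { product?: number; company?: number; category?: number };

const WEIGHTS = { product: 5, company: 2, category: 3 } as const;

function matchScore(p: Priorities): number {
  let total = 0;
  for (const dim of ["product", "company", "category"] as const) {
    const priority = p[dim];
    if (priority !== undefined) total += (11 - priority) * WEIGHTS[dim];
  }
  return total;
}

// Reviewer A: product 1, company 2, category 1 → 50 + 18 + 30
console.log(matchScore({ product: 1, company: 2, category: 1 })); // 98
// Reviewer B: category 4 only → 7 × 3
console.log(matchScore({ category: 4 })); // 21
```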
Balanced Distribution Algorithm:
- Calculate Target:

  Total products = 100
  Selected reviewers = 4
  Base allocation = floor(100/4) = 25 per reviewer
  Remainder = 100 % 4 = 0 (no bonus slots needed)

- Distribute with Max Variance of 1:
  - Some reviewers get N products
  - Some get N+1 products
  - Never N+2 or more difference
  - Example: 100 products, 3 reviewers → 33, 33, 34
- Consider Current Workload:
  - If a reviewer already has 20 pending reviews:
    - Reduce new assignments proportionally
    - Redistribute to reviewers with lighter load
    - Maintains team balance
- Real-Time Tracking:
  - Algorithm tracks assignments as it progresses
  - Adjusts remaining assignments to maintain balance
  - If a reviewer reaches their target, skips to the next best match
Example Distribution:
Round: 30 products to assign
Reviewers: 3 selected
Initial workload:
- Reviewer A: 5 pending reviews
- Reviewer B: 15 pending reviews
- Reviewer C: 10 pending reviews
Target for round: 10 products each (30/3)
Algorithm adjusts for current load:
- Reviewer B gets 8 products (has high existing load)
- Reviewer A gets 12 products (has low existing load)
- Reviewer C gets 10 products (has average load)
Result:
- A: 5+12 = 17 total (balanced)
- B: 15+8 = 23 total (protected from overload)
- C: 10+10 = 20 total (balanced)
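The base split (before workload adjustment) can be sketched as floor division plus one bonus slot per remainder unit; a minimal version, assuming which reviewers receive the bonus slots is an implementation detail (the real algorithm additionally rebalances for existing workload, which this sketch omits):

```typescript
// Balanced target calculation: base = floor(total / reviewers), and the
// first `remainder` reviewers receive one extra product, so per-reviewer
// counts never differ by more than 1.
function targets(totalProducts: number, reviewers: number): number[] {
  const base = Math.floor(totalProducts / reviewers);
  const remainder = totalProducts % reviewers;
  return Array.from({ length: reviewers }, (_, i) =>
    base + (i < remainder ? 1 : 0),
  );
}

console.log(targets(100, 3)); // [34, 33, 33]
console.log(targets(100, 4)); // [25, 25, 25, 25]
```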
Priority affects scoring (lower number = higher expertise):
- Priority 1-3: Gets 8-10 points multiplier
- Priority 4-6: Gets 5-7 points multiplier
- Priority 7-10: Gets 1-4 points multiplier
This ensures:
- Primary experts get relevant products first
- Secondary experts fill gaps
- Tertiary experts only receive if no better match exists
Every assignment change is tracked in the assignment_history table, creating a complete audit trail.
Change Types:
- `initial`: Product assigned when round started
- `reassign`: Product moved from one reviewer to another
- `remove`: Product unassigned from reviewer
Information Recorded:
- Review round ID
- Product ID
- Assigned to (current reviewer)
- Previous assignee (for reassignments)
- Changed by (admin who made the change)
- Change type
- Reason (optional notes)
- Timestamp
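A history record carrying these fields might be assembled as below; a sketch only, assuming column names match the `assignment_history` schema in the Technical Reference (the helper function itself is hypothetical):

```typescript
// Sketch of building an assignment_history record with the fields listed
// above. Column names follow the schema in the Technical Reference; the
// helper is illustrative, not the actual application code.
type ChangeType = "initial" | "reassign" | "remove";

interface AssignmentHistoryRecord {
  review_round_id: string;
  product_id: string;
  assigned_to: string | null;       // current reviewer (null for removals)
  previous_assignee: string | null; // set for reassignments
  changed_by: string;               // admin (or system) making the change
  change_type: ChangeType;
  reason?: string;                  // optional notes
  created_at: string;               // timestamp
}

function historyRecord(
  roundId: string,
  productId: string,
  changeType: ChangeType,
  assignedTo: string | null,
  previousAssignee: string | null,
  changedBy: string,
  reason?: string,
): AssignmentHistoryRecord {
  return {
    review_round_id: roundId,
    product_id: productId,
    assigned_to: assignedTo,
    previous_assignee: previousAssignee,
    changed_by: changedBy,
    change_type: changeType,
    reason,
    created_at: new Date().toISOString(),
  };
}

const rec = historyRecord(
  "round-1", "product-a", "reassign",
  "reviewer-2", "reviewer-1", "admin-1", "Workload balance",
);
console.log(rec.change_type); // reassign
```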
Location: Admin Dashboard → Review Rounds → Click round name → "Assignment History" tab
Interface:
Current Assignments Tab:
- Shows active assignments for the round
- Reviewer name, product list, status
- Links to product details
Assignment History Tab:
- Complete chronological log
- Filterable by:
- Change type (initial, reassign, remove)
- Reviewer
- Date range
- Product
- Admin who made change
History Table:
| Timestamp | Action | Product | From | To | Changed By | Notes |
|---|---|---|---|---|---|---|
| 2025-11-04 14:32 | Initial | Product A | - | Reviewer 1 | System | Auto-assigned |
| 2025-11-04 15:45 | Reassign | Product A | Reviewer 1 | Reviewer 2 | Admin Name | Workload balance |
| 2025-11-04 16:10 | Remove | Product B | Reviewer 3 | - | Admin Name | Product withdrawn |
Audit & Compliance:
- Track who changed what and when
- Demonstrate fair distribution process
- Review decision-making rationale
Process Improvement:
- Analyze manual override patterns
- Identify frequently reassigned products (may need better matching)
- Track admin intervention frequency
Conflict Resolution:
- Verify original assignments
- Understand reassignment decisions
- Document workload adjustments
Automatic email notifications are sent to reviewers when they're assigned products in a review round.
Prerequisites:
- Resend API Key: Must be configured in Supabase
- Location: Project Settings → Edge Functions → Secrets
- Secret name: `RESEND_API_KEY`
- Verified Domain: Domain must be verified in Resend dashboard
- Edge Function: `notify-reviewer-assignment` must be deployed
Triggered by:
- Starting a new review round (all assigned reviewers notified)
- Confirming assignments in preview dialog
Not triggered by:
- Saving assignments as draft (only when confirmed/started)
- Reassignments (future enhancement)
- Assignment removal
Subject: "New Review Assignments - [Round Name]"
Contains:
- Header: DLinRT.eu branding with gradient
- Greeting: Personalized with reviewer name
- Round Details:
- Round name
- Deadline (formatted as human-readable date)
- Description
- Assignment Summary:
- Number of products assigned
- List of product names (bulleted)
- Call-to-Action:
- "Go to Review Dashboard" button
- Direct link to reviewer dashboard
- Deadline Reminder:
- Highlighted deadline date
- Urgency indicator if deadline is soon
- Footer:
- Help/support contact
Example Email:
Subject: New Review Assignments - Q1 2025 Product Review
Hi John,
You have been assigned 5 products to review for the Q1 2025 Product Review round.
Deadline: January 31, 2025
Products assigned to you:
• Limbus Contour v2.0 (Limbus AI)
• MVision Contours+ (MVision AI)
• RayStation 12B (RaySearch)
• ProteusOne (ProteusOne)
• Pinnacle³ Auto-Planning (Philips)
[Go to Review Dashboard]
Please complete your reviews by the deadline.
Best regards,
The DLinRT.eu Team
Technical Details:
- Sent via Resend API
- HTML formatted with responsive design
- Plain text fallback included
Grouping Logic:
- One email per reviewer per round
- Multiple products grouped in single email
- Batch processing for efficiency
Error Handling:
- Failed emails logged to edge function logs
- Admin notified of delivery failures (future)
- Retry mechanism for transient failures (future)
Location: Supabase Dashboard → Edge Functions → notify-reviewer-assignment → Logs
Information Available:
- Email send attempts
- Success/failure status
- Error messages
- Reviewer email addresses
- Timestamp
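The grouping logic ("one email per reviewer per round") might look like the sketch below, which builds one payload per reviewer from a flat assignment list. The payload fields follow the inputs listed in the Technical Reference (`reviewerId`, `roundId`, `assignedProducts`); the grouping helper itself is an illustrative assumption, not the actual edge-function code:

```typescript
// Sketch of grouping assignments into one notification payload per
// reviewer, matching "multiple products grouped in single email".
interface NotifyPayload {
  reviewerId: string;
  roundId: string;
  assignedProducts: string[]; // product names shown in the email body
}

function buildNotifyPayloads(
  roundId: string,
  assignments: { reviewerId: string; productName: string }[],
): NotifyPayload[] {
  const byReviewer = new Map<string, string[]>();
  for (const a of assignments) {
    const list = byReviewer.get(a.reviewerId) ?? [];
    list.push(a.productName);
    byReviewer.set(a.reviewerId, list);
  }
  return [...byReviewer.entries()].map(([reviewerId, assignedProducts]) => ({
    reviewerId,
    roundId,
    assignedProducts,
  }));
}

const payloads = buildNotifyPayloads("round-1", [
  { reviewerId: "r1", productName: "Limbus Contour v2.0" },
  { reviewerId: "r1", productName: "RayStation 12B" },
  { reviewerId: "r2", productName: "MVision Contours+" },
]);
console.log(payloads.length); // 2 (one email per reviewer)
```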
Setting Up Preferences:
- Start with Categories: Set your general expertise areas first
- Add Companies: Include vendors you're familiar with
- Use Bulk Actions: For companies you know well, use "Select all products from this company"
- Be Honest with Priorities: Accurate priorities ensure you get appropriate assignments
- Keep Updated: Review and update preferences as you gain experience
- Export Regularly: Backup your preferences before major changes
Priority Guidelines:
- Don't set everything to Priority 1 (dilutes your expertise)
- Use full range 1-10 for better matching
- Priority 1-3: Products/areas you can review authoritatively
- Priority 4-6: Products/areas you're familiar with
- Priority 7-10: Products/areas you've had limited exposure to
Import/Export:
- Export before clearing preferences
- Share templates with team members with similar roles
- Update imported preferences with your own experience
Creating Rounds:
- Clear Naming: Use descriptive names (e.g., "Q1 2025 - New Products Review")
- Reasonable Deadlines: Allow 2-4 weeks for thorough reviews
- Balanced Selection: Include mix of experienced and newer reviewers
- Check Workload: Review current assignments before selecting reviewers
Reviewer Selection:
- Start with All: Begin with all reviewers selected, then refine
- Consider Workload: Deselect reviewers with >30 pending reviews
- Expertise Match: Ensure selected reviewers have relevant expertise
- Team Balance: Include diverse perspectives and backgrounds
Assignment Preview:
- Review Match Scores: Check for low-scoring assignments
- Manual Adjustments: Reassign products with poor matches
- Workload Check: Use auto-balance if variance >2 products
- Document Reasoning: Add notes for significant manual changes
After Launch:
- Monitor Progress: Check completion rates regularly
- Send Reminders: Contact reviewers nearing deadline
- Review History: Analyze assignment history for insights
- Adjust Future Rounds: Use data to improve next round's assignments
Institutional Templates:
- Create preference template for role (e.g., "Medical Physics Team")
- Export and share with team members
- Each person customizes to their specific experience
- Maintain team template repository
Onboarding New Reviewers:
- Provide relevant preference template
- Review and import template
- Customize based on individual experience
- Start with Priority 5-7 initially, adjust as they gain familiarity
Quality Assurance:
- Regular audits of assignment history
- Review manual override frequency
- Analyze reviewer feedback on assignments
- Adjust algorithm weights if needed (contact admin)
Problem: "Clear All Preferences" button is disabled
- Cause: No preferences exist in database
- Solution: Add at least one preference first
Problem: Can't find company in search
- Cause: Company not in database yet
- Solution: Company must have at least one product added first
Problem: Import shows "0 items to import"
- Cause: All items in file already exist in preferences
- Solution: This is normal - import only adds new items
Problem: Bulk "Select all products" adds too many
- Cause: Company has many products
- Solution: Review added products, remove unwanted ones individually
Problem: No reviewers shown in selection dialog
- Cause: No users with reviewer role exist
- Solution: Assign reviewer role to users first
Problem: All match scores are low
- Cause: Reviewers haven't set preferences for this product area
- Solution: Ask reviewers to update preferences before round
Problem: Workload is unbalanced despite auto-balance
- Cause: Odd number of products or existing workload differences
- Solution: ±1 variance is expected and acceptable
Problem: Assignment preview shows unassigned products
- Cause: No reviewers with expertise in that category
- Solution: Manually assign or recruit reviewers for that area
Problem: Emails not being sent
- Cause: RESEND_API_KEY not configured
- Solution: Add API key in Supabase → Project Settings → Edge Functions → Secrets
Problem: Emails going to spam
- Cause: Domain not verified in Resend
- Solution: Verify domain in Resend dashboard
Problem: Reviewer didn't receive email
- Cause: Email address incorrect or bounced
- Solution: Check edge function logs for delivery status
Problem: "assignment_history table doesn't exist"
- Cause: Migration not run
- Solution: Run migration `20251104225808_*.sql`
Problem: TypeScript errors for assignment_history
- Cause: Types not regenerated after migration
- Solution: Regenerate types with `supabase gen types typescript`
Problem: Preference search is slow
- Cause: Large product database
- Solution: Use filters to narrow search, add more specific keywords
Problem: Assignment preview loads slowly
- Cause: Many products/reviewers
- Solution: Normal for large rounds (>100 products), wait for initial load
reviewer_expertise:

```sql
CREATE TABLE reviewer_expertise (
  id UUID PRIMARY KEY,
  user_id UUID REFERENCES auth.users,
  preference_type TEXT CHECK (preference_type IN ('category', 'company', 'product')),
  category TEXT,
  company_id TEXT,
  product_id TEXT,
  priority INTEGER CHECK (priority BETWEEN 1 AND 10),
  notes TEXT,
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);
```

assignment_history:
```sql
CREATE TABLE assignment_history (
  id UUID PRIMARY KEY,
  review_round_id UUID REFERENCES review_rounds,
  product_id TEXT,
  assigned_to UUID REFERENCES auth.users,
  previous_assignee UUID REFERENCES auth.users,
  changed_by UUID REFERENCES auth.users,
  change_type TEXT CHECK (change_type IN ('initial', 'reassign', 'remove')),
  reason TEXT,
  created_at TIMESTAMP DEFAULT NOW()
);
```

notify-reviewer-assignment:
- Purpose: Send email notifications to reviewers
- Trigger: Called by the `bulkAssignProducts` function
- Inputs: reviewerId, roundId, assignedProducts
- Output: Email sent via Resend API
- Logs: Available in Supabase Dashboard
calculateProposedAssignments():
- Location: `src/utils/reviewRoundUtils.ts`
- Purpose: Generate balanced assignment proposals
- Inputs: products[], selectedReviewerIds[]
- Output: ProposedAssignment[]
- Algorithm: Weighted scoring + workload balancing
bulkAssignProducts():
- Location: `src/utils/reviewRoundUtils.ts`
- Purpose: Insert assignments and trigger notifications
- Inputs: roundId, proposedAssignments[]
- Side Effects: Database inserts, email sends, history logging
Planned improvements:
- Notification Preferences: Let reviewers choose email frequency
- Assignment Analytics: Dashboard showing match quality metrics
- Automated Reminders: Email reminders for approaching deadlines
- Reviewer Availability: Calendar integration for vacation periods
- Historical Performance: Track review quality and completion rates
- Machine Learning: Suggest optimal assignments based on past reviews
- Bulk Reassignment: Move multiple products between reviewers
- Assignment Templates: Save and reuse assignment patterns
- Reviewer Groups: Assign to teams rather than individuals
- Real-time Notifications: In-app notifications for new assignments
For questions, issues, or feature requests:
- Email: info@dlinrt.eu
- GitHub: Open an issue
- Documentation: https://github.com/DLinRT-eu/website/docs
- 2025-11-04: Complete overhaul with multi-dimensional preferences, balanced assignments, history tracking, and email notifications
- 2025-01-15: Initial reviewer expertise system