Official website for the LLMEval research series by Fudan NLP Lab — comprehensive evaluation frameworks for large language models.
| Paper | Venue | GitHub |
|---|---|---|
| LLMEval-Fair | ACL 2026 Main | LLMEval-Fair |
| LLMEval-Med | EMNLP 2025 Findings | LLMEval-Med |
| LLMEval | AAAI 2024 | LLMEval-1 · LLMEval-2 |
| LLMEval-Gaokao2024-Math | Technical Report | Llmeval-Gaokao2024-Math |
- Next.js 16 (App Router, Static Export)
- Tailwind CSS v4 + Geist Font
- TypeScript
- @tanstack/react-table — interactive leaderboard with sorting/filtering
- Framer Motion — smooth animations
- next-themes — dark/light mode
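The leaderboard that `@tanstack/react-table` renders is, at its core, an array of typed rows with a default numeric sort. A minimal sketch of such a row and ranking follows; the field names (`model`, `overall`, `scores`) are illustrative assumptions, not the actual schema in `src/data/leaderboard.ts`.

```typescript
// Illustrative row shape for the leaderboard table. The real schema
// lives in src/data/leaderboard.ts and may differ.
interface LeaderboardRow {
  model: string;
  overall: number;                 // average score across disciplines
  scores: Record<string, number>;  // per-discipline scores
}

const rows: LeaderboardRow[] = [
  { model: "Model A", overall: 71.2, scores: { math: 68.0 } },
  { model: "Model B", overall: 84.5, scores: { math: 90.1 } },
];

// Default ranking: highest overall score first, as a numeric
// column sort in the table would produce.
const ranked = [...rows].sort((a, b) => b.overall - a.overall);
```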
```bash
npm install
npm run dev
```

Open http://localhost:3000.
```
src/
  app/              # Pages (Home, Papers, Leaderboard, Blog)
  components/       # Reusable UI components
  data/
    papers.ts       # Paper metadata (title, authors, venues, links)
    leaderboard.ts  # LLMEval-Fair leaderboard (30 models, 10 disciplines)
    blog.ts         # Blog posts with paper summaries
```
- Papers — edit `src/data/papers.ts` and add entries to the `papers` array.
- Leaderboard — edit `src/data/leaderboard.ts` and update `leaderboardData`.
- Blog — edit `src/data/blog.ts` and add new post objects with Markdown content.
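As a sketch of what adding a paper entry could look like, the `Paper` interface and its fields below are assumptions for illustration; check the actual shape in `src/data/papers.ts` before editing.

```typescript
// Hypothetical entry shape for src/data/papers.ts; the real interface
// in the repository may use different field names.
interface Paper {
  title: string;
  venue: string;
  github?: string; // repository link, optional
}

const papers: Paper[] = [
  {
    title: "LLMEval-Med",
    venue: "EMNLP 2025 Findings",
    // placeholder link; substitute the real repository URL
    github: "https://github.com/<org>/LLMEval-Med",
  },
];
```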
The site auto-deploys to GitHub Pages on every push to `main` via `.github/workflows/deploy.yml`.
Manual build:
```bash
npm run build   # generates static site in out/
```

- Email: mingzhang23@m.fudan.edu.cn
- WeChat: zanyingluan
- Platform: llmeval.com