A modern lightweight web app to create, run, and export interview questionnaires with scoring, recruiter notes, domain-based evaluation, and CSV recap export.
Interview Scorecard helps recruiters, hiring managers, and tech leads structure interviews with a reusable questionnaire system.
It provides:
- a questionnaire editor
- a live interview scoring interface
- a domain-based summary
- a weak point overview
- a CSV export for HR or hiring follow-up
This project is built with plain HTML, CSS, and JavaScript, with no framework and no build step.
- Load questionnaires from JSON files
- Create and edit questionnaires directly in the browser
- Add recruiter notes during interviews
- Score each answer with a simple rating system
- View:
  - total score
  - completion rate
  - global level
  - levels by domain
  - weak points
- Export interview results to CSV
- Persist questionnaire state in localStorage
- Live preview in editor mode
- Clean dark UI
- Responsive layout
Run a questionnaire during a candidate interview and evaluate each answer in real time.
Create or update a questionnaire structure with:
- sections
- questions
- expected answers
- labels
- descriptions
Generate:
- a JSON questionnaire file
- a CSV interview summary ready to share with HR or managers
```
.
├── index.html   # Main application layout
├── styles.css   # UI styling and responsive layout
└── app.js       # Application logic, editor, scoring, export
```
- HTML5
- CSS3
- Vanilla JavaScript
No dependencies. No bundler. No framework. No backend required.
You can open the project directly in your browser:

```shell
open index.html
```

Or serve it locally with a lightweight HTTP server:

```shell
python3 -m http.server 8000
```

Then open http://localhost:8000 in your browser.
The app uses a JSON-based questionnaire structure.
```json
{
  "meta": {
    "title": "Linux Systems & DevOps Engineer Interview",
    "subtitle": "Evaluation questionnaire for recruiter or tech lead",
    "maxScorePerQuestion": 2
  },
  "grading": {
    "labels": {
      "2": "Correct",
      "1": "Partial",
      "0": "Incorrect"
    },
    "globalLevels": [
      { "min": 85, "label": "Very strong" },
      { "min": 70, "label": "Good" },
      { "min": 50, "label": "Average / needs improvement" },
      { "min": 0, "label": "Insufficient" }
    ]
  },
  "sections": [
    {
      "id": "section-linux",
      "title": "Linux",
      "description": "Core Linux administration questions",
      "questions": [
        {
          "id": "q-linux-1",
          "label": "Linux fundamentals",
          "text": "Explain the difference between a process and a thread.",
          "expectedAnswer": [
            "A process has its own memory space",
            "Threads share the same process memory",
            "Threads are lighter than processes"
          ]
        }
      ]
    }
  ]
}
```

You can either:
- import an existing JSON questionnaire
- create a new one from the built-in editor
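An imported questionnaire can be given a light shape check before it is used. A minimal sketch, assuming the JSON format shown above; `validateQuestionnaire` is an illustrative helper, not the app's actual API:

```javascript
// Minimal shape check for a questionnaire object (illustrative only;
// the app's real validation logic may differ).
function validateQuestionnaire(config) {
  const errors = [];
  if (!config || typeof config !== "object") return ["config must be an object"];
  if (!config.meta || typeof config.meta.title !== "string") {
    errors.push("meta.title is required");
  }
  if (!Array.isArray(config.sections) || config.sections.length === 0) {
    errors.push("sections must be a non-empty array");
  }
  for (const section of config.sections ?? []) {
    if (!section.id) errors.push("every section needs an id");
    for (const q of section.questions ?? []) {
      if (!q.id || !q.text) errors.push(`question in ${section.id} needs id and text`);
    }
  }
  return errors; // an empty array means the structure looks valid
}
```

Returning a list of errors (rather than throwing) makes it easy to show all problems at once in the editor.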
For each question, the interviewer can:
- assign a score
- add notes
- review expected answers
The app computes:
- total score
- completion ratio
- overall level
- levels by domain
- weak areas
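These summary values boil down to a few reductions over the per-question scores. A hedged sketch (the `summarize` helper and its signature are hypothetical, but the level matching follows the `globalLevels` format shown above):

```javascript
// Compute total score, completion ratio, and overall level.
// `scores` maps question id -> number (undefined if unanswered).
function summarize(scores, questionIds, maxScorePerQuestion, globalLevels) {
  const answered = questionIds.filter((id) => scores[id] !== undefined);
  const total = answered.reduce((sum, id) => sum + scores[id], 0);
  const maxTotal = questionIds.length * maxScorePerQuestion;
  const percent = maxTotal > 0 ? (total / maxTotal) * 100 : 0;
  // Match the highest `min` threshold first, as in the JSON format above.
  const level = [...globalLevels]
    .sort((a, b) => b.min - a.min)
    .find((l) => percent >= l.min);
  return {
    total,
    completion: answered.length / questionIds.length,
    percent,
    level: level ? level.label : "N/A",
  };
}
```

The same reduction can be run per section to produce the domain-level summaries.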
Export a CSV summary including:
- questionnaire metadata
- overall evaluation
- domain breakdown
- question-by-question notes
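Since recruiter notes can contain commas, quotes, or line breaks, the CSV fields need escaping. A minimal sketch of RFC 4180-style quoting (`toCsv` is an illustrative helper, not necessarily the app's export function):

```javascript
// Build a CSV string from rows of values, quoting any field that
// contains commas, quotes, or newlines (RFC 4180 style).
function toCsv(rows) {
  const escape = (value) => {
    const s = String(value ?? "");
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  return rows.map((row) => row.map(escape).join(",")).join("\n");
}
```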
The current questionnaire is automatically saved in the browser using localStorage.
Stored keys:

```
app.interview.config
app.interview.filename
```
This allows you to reopen the app and continue working from the latest saved state.
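The save/restore cycle could look like the sketch below. The storage keys match those listed above; the function names are illustrative, and the storage object is injected (it would be `localStorage` in the browser):

```javascript
// Save and restore questionnaire state under the app's storage keys.
// `storage` is localStorage in the browser; injected here for testability.
const CONFIG_KEY = "app.interview.config";
const FILENAME_KEY = "app.interview.filename";

function saveState(storage, config, filename) {
  storage.setItem(CONFIG_KEY, JSON.stringify(config));
  storage.setItem(FILENAME_KEY, filename);
}

function loadState(storage) {
  const raw = storage.getItem(CONFIG_KEY);
  return {
    config: raw ? JSON.parse(raw) : null,
    filename: storage.getItem(FILENAME_KEY),
  };
}
```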
The editor exports the full questionnaire as a formatted JSON file.
The interview view exports a CSV file containing:
- questionnaire title and subtitle
- export timestamp
- overall score
- overall level
- per-domain summary
- detailed recap of each question
- recruiter notes
- modern dark theme
- sticky summary sidebar
- clear score visualization
- section-level scoring
- weak point detection
- editor live preview
- responsive layout for smaller screens
This project is ideal for:
- technical interviews
- recruiter screening
- hiring scorecards
- structured candidate evaluations
- internal interview templates
Possible future improvements:

- candidate profile section
- interviewer name / interview metadata
- autosave indicator
- questionnaire validation
- duplicate section / question actions
- import/export history
- multi-language support
- PDF export
- separate interview session storage
- weighted scoring per section
- authentication and backend persistence
The project can be released under the MIT License to keep it open and reusable.
Contributions, improvements, and UI enhancements are welcome.
You can contribute by:
- improving the questionnaire editor
- enhancing exports
- adding validation
- refining the user experience
- extending the scoring model