SAQUET - Scalable Automatic Question Usability Evaluation Toolkit

Last Updated: 11-12-2025. SAQUET now uses gpt-5-mini with medium reasoning effort for evaluation and o4-mini for correction, mainly because of slightly improved performance and the 10 million free tokens per day that OpenAI provides.
If you're interested in contributing or want the source code, reach out and I'll be in touch!
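For context, here is a minimal sketch of what an evaluation call with a medium reasoning-effort setting could look like using the OpenAI Python SDK. The prompt text, client setup, and call structure are illustrative assumptions, not SAQUET's actual implementation.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical evaluation call: the model name and effort level match the note
# above; the prompt is an illustrative placeholder, not SAQUET's real prompt.
response = client.responses.create(
    model="gpt-5-mini",
    reasoning={"effort": "medium"},
    input="Evaluate this multiple-choice question for item-writing flaws: ...",
)
print(response.output_text)
```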

This tool analyzes the quality of your multiple-choice questions (MCQs) and provides feedback on potential errors.
Upload your questions as a formatted CSV file to get feedback on quality and potential flaws.
Once the quality analysis is done, click the 'Fix Questions' button to fix the identified flaws and download an updated CSV of your questions.

Note: The first analysis might take a few minutes if the server needs to start up; subsequent analyses should be much faster.
For instance, running 50 questions through analysis and fixing takes about 3 minutes.

Want to test it with some sample data? Download the demo CSV file here and try uploading it yourself.
These columns are expected in the CSV file: text, answer, a, b, (c-h if needed), hint, feedback; everything after b can be empty/null.
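As a quick illustration, here is a minimal Python sketch that writes a CSV with the expected column layout. The sample question, the filename questions.csv, and the assumption that the answer column holds the option letter are all hypothetical, not requirements taken from SAQUET.

```python
import csv

# Hypothetical example row using the column names listed above.
rows = [
    {
        "text": "Which planet is closest to the Sun?",
        "answer": "a",  # assumed to be the option letter; unconfirmed
        "a": "Mercury",
        "b": "Venus",
        "c": "Mars",  # c-h, hint, and feedback may be left empty
        "hint": "It is also the smallest planet.",
        "feedback": "Mercury orbits nearest to the Sun.",
    },
]

with open("questions.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(
        f, fieldnames=["text", "answer", "a", "b", "c", "hint", "feedback"]
    )
    writer.writeheader()
    writer.writerows(rows)
```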

Input Questions
Upload a CSV file (up to 10MB), paste a Google Doc link, or create questions manually.


Have feedback or suggestions? Let us know!

by Steven Moore