Towards better transparency

How This Works

1. Enter a secret code to retrieve an existing checklist, or generate a new code to start a new checklist.

2. Fill out the checklist and submit your responses.

3. After submission, download your report in HTML or PDF format.

EV Checklist Database

Browse all published EV research checklists. Use the search box to find specific studies, filter columns, and download reports.


Search & Filter


Column Visibility

Tip: Use the column filters below each header to refine results.

Admin Login

Admin Panel


Download Complete CSV

Filters

Report Template Preview

This preview shows how the final report will look, with all questions included (even those left empty).


LLM Extraction Configuration

Manage the prompts and settings used by the AI assistant to extract data from manuscripts.


Change Admin Password

AI Extraction Configuration

Configure settings for automatic manuscript extraction using Google Gemini AI.


API Configuration

Model Selection

Add New Model


Add new Gemini model IDs as they become available.

Extraction Parameters

Lower = more focused/deterministic, higher = more creative. Default: 0.05
Minimum confidence required to accept extracted fields. Default: 50%
Caches the manuscript in conversation history for subsequent batches. Recommended: ON
Sends batches in parallel using Gemini's Context Caching API (90% discount on cached tokens). Requires a manuscript of more than roughly 8,000 words. Falls back to sequential processing if caching fails. Note: implicit caching is disabled while this option is active.
Number of batches sent simultaneously. Lower this if the API stalls or gets rate-limited. Default: 3
Automatically applies all extracted fields above the confidence threshold, without manual review.
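The parameter descriptions above can be summarized in a minimal settings sketch. This is illustrative only: the field names, the model ID, and the helper function are assumptions, while the default values (temperature 0.05, confidence 50%, concurrency 3) come from the descriptions themselves.

```python
# Illustrative extraction settings; field names and model ID are assumptions,
# default values mirror those stated in the parameter descriptions above.
DEFAULT_EXTRACTION_SETTINGS = {
    "model_id": "gemini-1.5-pro",   # hypothetical ID; add new model IDs as released
    "temperature": 0.05,            # lower = more deterministic, higher = more creative
    "confidence_threshold": 0.50,   # minimum confidence to accept an extracted field
    "cache_manuscript": True,       # keep manuscript in conversation history between batches
    "parallel_batches": True,       # use context caching to send batches in parallel
    "max_concurrent_batches": 3,    # lower this if the API stalls or rate-limits
    "auto_apply": False,            # apply fields above the threshold without review
}

def accept_field(confidence: float,
                 settings: dict = DEFAULT_EXTRACTION_SETTINGS) -> bool:
    """Accept an extracted field only if it meets the confidence threshold."""
    return confidence >= settings["confidence_threshold"]
```

For example, a field extracted with 70% confidence would be accepted under the 50% default, while one at 40% would be held back for manual review.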

Current Settings