
A new AI code review for every solution you submit
Our new AI code review on Frontend Mentor scores your work across five categories with line-level findings, so you know what's working and where to focus next.
Today, we're rolling out a completely new AI code review on Frontend Mentor. It reads your code, scores it across five categories, and shows you specifically what's working and what to focus on next. It's the biggest upgrade we've made to code review since we first added AI feedback, and it's live on every solution you submit from this moment on.
Pro members get the AI code review on every submission, every time. If you're a free member, you get one AI review per month, and automated checks for HTML, CSS, JavaScript, and accessibility are always free.
Here's a look at what's new in our revamped AI code review.
What you get now
When you submit a solution, you get one of two reports.
Automated checks

An instant scan of your HTML, CSS, JavaScript, and accessibility using well-known linters (e.g., ESLint, Stylelint). It flags issues such as missing alt text, unused selectors, contrast problems, and broken markup. This is the same automated review we've always had, and it's what free members fall back on once they've used their monthly AI credit.
AI code review

The AI review reads your code and scores it across five categories:
- Best Practices: readability, naming, modern syntax, DRY principles, error handling
- File Organization: folder structure, file naming, framework conventions
- Architecture: component design, state management, separation of concerns
- Testing: assertion quality, coverage, test organization
- Accessibility: semantic HTML, ARIA, keyboard navigation, focus management, color contrast
These categories roll up into two top-level scores: Code Quality (Best Practices, File Organization, Architecture, and Testing) and Design & UX (Accessibility for now, with more on the way). Each category contributes a different weight to your overall score, with the most foundational ones carrying the most weight.
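To make the weighting concrete, here's a minimal sketch of how a weighted roll-up like this can work. The category names come from the article; the weight values and the `overallScore` function are illustrative assumptions, not Frontend Mentor's actual implementation.

```typescript
type Category =
  | "Best Practices"
  | "File Organization"
  | "Architecture"
  | "Testing"
  | "Accessibility";

// Hypothetical weights: more foundational categories carry more weight.
// These numbers are made up for the example.
const WEIGHTS: Record<Category, number> = {
  "Best Practices": 0.3,
  Accessibility: 0.3,
  "File Organization": 0.15,
  Architecture: 0.15,
  Testing: 0.1,
};

// Roll per-category scores (0-10) up into one overall 0-10 score,
// renormalizing over whichever categories were actually graded.
function overallScore(scores: Partial<Record<Category, number>>): number {
  let weighted = 0;
  let totalWeight = 0;
  for (const [category, score] of Object.entries(scores)) {
    const w = WEIGHTS[category as Category];
    weighted += w * (score as number);
    totalWeight += w;
  }
  return totalWeight === 0 ? 0 : weighted / totalWeight;
}

console.log(overallScore({ "Best Practices": 8, Accessibility: 6 })); // → 7
```

Renormalizing over the graded categories is what lets the same formula work at every difficulty level: ungraded categories simply contribute nothing.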
You get an overall score from 0 to 10, scores for each top-level group and each category, and a breakdown of your strengths, suggestions for improvement, and issues that need attention. Every finding cites the actual line of code in your submission. Findings are also tagged with the language or technology you used (HTML, React, Vue, Tailwind) and the underlying skill (Component Design, Semantic HTML, Error Handling, Color Contrast, and others).
The scoring is difficulty-aware. A Newbie challenge is graded only on the fundamentals: Best Practices and Accessibility. File Organization unlocks at Junior level, Architecture unlocks at Intermediate, and Testing unlocks at Advanced. You're never penalized for skipping topics that aren't expected at your difficulty level. As challenges get harder, more categories unlock, so you're prompted to learn new topics as your skill progresses.
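The unlock schedule above can be sketched as a simple lookup plus a filter. The category-to-difficulty mapping mirrors the article; the code itself and the `applicableScores` name are assumptions for illustration only.

```typescript
type Difficulty = "Newbie" | "Junior" | "Intermediate" | "Advanced";

// Categories graded at each difficulty level; higher tiers include
// everything unlocked below them, per the article's description.
const GRADED: Record<Difficulty, string[]> = {
  Newbie: ["Best Practices", "Accessibility"],
  Junior: ["Best Practices", "Accessibility", "File Organization"],
  Intermediate: [
    "Best Practices",
    "Accessibility",
    "File Organization",
    "Architecture",
  ],
  Advanced: [
    "Best Practices",
    "Accessibility",
    "File Organization",
    "Architecture",
    "Testing",
  ],
};

// Drop any category that isn't expected at this difficulty, so a
// Newbie submission is never penalized for skipping tests.
function applicableScores(
  difficulty: Difficulty,
  scores: Record<string, number>
): Record<string, number> {
  const graded = new Set(GRADED[difficulty]);
  return Object.fromEntries(
    Object.entries(scores).filter(([category]) => graded.has(category))
  );
}
```

Filtering before scoring (rather than assigning a zero) is the design choice that makes skipped topics neutral instead of punishing.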
The workflow keeps changing. The quality bar doesn't.
The way developers work has shifted dramatically over the past couple of years. But whether you're handwriting the whole thing to lock in the fundamentals, working alongside AI to move faster and ask questions, or learning to orchestrate AI agents, the quality of what you ship still matters. The new review tells you exactly where you stand. Strengths show you what to keep doing. Suggestions show you where you're close. Needs Attention findings point to the issues that matter most. Whatever workflow you use, the same five categories tell you the same thing: how good is the code you ended up with, and where should you focus next? This helps identify gaps in your understanding so you can dig deeper into relevant topics.
Still community-first
Frontend Mentor is community-led, and that hasn't changed. The AI code review guarantees specific feedback the moment you submit, even before another developer has had a chance to review your code. But the deeper learning still comes from the community: getting reviewed by other developers, reviewing their work back, and seeing how different people approach the same brief. AI gives you instant insight. The community gives you another developer's take and an opportunity to meet others and form connections. Both matter.
About credits
Free members used to get two AI reports a month. They now get one. The new AI review costs us more to run than the old one did. It's a deeper analysis, more categories, and more code being read. We'd rather give everyone access to a much better review once a month than a thinner one twice. Pro members get unlimited AI reviews on every submission, as before. We'll keep an eye on costs and revisit credit limits as we go.
About your old reports
If you submitted solutions before today and got an "AI-enhanced" report, those reports now show only the automated checks. We needed to do this to ensure a clear separation between the free automated checks and our new AI code review. To see the new AI code review on a previous submission, open it and hit regenerate report. Free members: regeneration uses your monthly credit. Pro members: regenerate freely.
What's next
This is the foundation of our new reports. More categories are coming (performance, commit quality, and a few others) along with ways to track your scores across challenges so you can see how you're progressing. The new review is also live on Frontend Mentor for Teams, so every team member now gets it on every submission and learns more from each piece of work they submit.
If you've got feedback after trying it, please feel free to drop it in our Discord. I read everything and would love to hear your thoughts on the revamped reports and any ideas you might have.