Solution
Submitted 29 days ago

Interactive Frontend Quiz with AI Chatbot. Vanilla JS & Vercel

fetch, node, pure-css, van-js
Jayco•470
@jayco01
A solution to the Frontend Quiz app challenge
View live site | View code

Solution retrospective


What are you most proud of, and what would you do differently next time?
  1. Getting the AI Chatbot Up and Running! Integrating the AI and learning to handle the API key safely was a good lesson, since AI is such a big thing these days. (A small sketch of how I keep the key off the client follows this answer.)

  2. My JavaScript Getting Cleaner: I'm also really proud of how much better I've gotten at refactoring my JavaScript. It's not perfect, but it gets a little better with every project.

Looking back, the main thing I'd want to improve is the chatbot's CSS styling.

  • Right now, it's pretty basic, just enough to work.
  • Next time, I'd spend more effort making it look a lot nicer. Maybe I'll check out some chatbot templates online for inspiration and see how I can make the chat window sleeker and more user-friendly.
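
For anyone curious about the API-key part: the browser never touches the key. The chatbot UI just POSTs the message to my Vercel function and lets the server talk to Gemini. Roughly like the sketch below; the request/response shape and the askChatbot name are placeholders rather than my exact code, though the /api/gemini-chat route matches the function's filename.

```js
// Browser-side sketch: the chat UI calls the serverless endpoint,
// so the Gemini API key stays on the server in a Vercel environment variable.
async function askChatbot(message, history) {
  const res = await fetch("/api/gemini-chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The body shape here is an assumption, not the exact contract my function uses.
    body: JSON.stringify({ message, history }),
  });

  if (!res.ok) {
    // Show a friendly message instead of letting the chat window crash.
    return "Sorry, the chatbot is having trouble right now.";
  }

  const data = await res.json();
  return data.reply;
}
```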
What challenges did you encounter, and how did you overcome them?
  1. Making the JavaScript Do All the Things:

    • Getting the quiz logic in main.js to run smoothly was a bit of a puzzle. Things like:
      • Pulling the right questions for HTML, CSS, etc., from the data.json file.
      • Keeping tabs on the score as you go.
      • Making sure the page actually showed the right question, updated the options, and moved the progress bar correctly – all that UI stuff had to sync up.
    • How I fixed it: console.log() was very useful, just to see what was going on. I also found it helped to break the big problems down into smaller chunks (a stripped-down sketch of that flow follows this list). If I got stuck, I’d search around for how other people solved similar problems or look up specific JavaScript tricks until things started working.
  2. Figuring Out the Google Gemini API:

    • Using Google's AI for the chatbot was totally new territory for me, so getting the @google/generative-ai library to play nice in my Vercel function (api/gemini-chat.js) took a bit of effort.
    • The tricky bits were stuff like:
      • Just getting the API client set up right.
      • Sending the chat history back and forth so the bot remembered what we were talking about.
      • Keeping my API key secret using environment variables on Vercel, and then handling the responses from the AI without the app crashing if something went sideways.
    • How I fixed it: I posted my issues on the Vercel community board, which was very helpful since that community is really active. (A sketch of the chat-history call also follows this list.)
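
To make the first challenge a bit more concrete, here's a stripped-down sketch of the kind of flow main.js handles: load the chosen topic's questions from data.json, render the current question, and keep the score in sync. The function names and the exact shape of data.json are illustrative assumptions, not the real code.

```js
// Simplified quiz flow (names and data shape are illustrative, not the actual main.js).
let questions = [];
let currentIndex = 0;
let score = 0;

async function loadQuiz(topic) {
  // Pull the right questions (HTML, CSS, ...) out of data.json.
  const res = await fetch("./data.json");
  const data = await res.json();
  questions = data.quizzes.find((quiz) => quiz.title === topic).questions;
  currentIndex = 0;
  score = 0;
  showQuestion();
}

function showQuestion() {
  const q = questions[currentIndex];
  document.querySelector("#question").textContent = q.question;
  // ...render q.options as answer buttons and move the progress bar here...
}

function submitAnswer(selected) {
  // Keep tabs on the score as the user goes.
  if (selected === questions[currentIndex].answer) score++;
  currentIndex++;
  if (currentIndex < questions.length) {
    showQuestion();
  } else {
    document.querySelector("#question").textContent =
      `You scored ${score} out of ${questions.length}!`;
  }
}
```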
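
And for the Gemini side, this is roughly what the serverless function ends up doing: read the key from a Vercel environment variable, replay the chat history so the bot remembers the conversation, and catch failures so a bad response doesn't crash the app. The model name and the history shape follow the @google/generative-ai docs, but api/gemini-chat.js may differ in the details.

```js
// api/gemini-chat.js (sketch, not the exact file): a Vercel serverless function.
import { GoogleGenerativeAI } from "@google/generative-ai";

export default async function handler(req, res) {
  try {
    // history is an array of past turns: [{ role: "user" | "model", parts: [{ text: "..." }] }]
    const { message, history = [] } = req.body;

    // The key comes from a Vercel environment variable, never from client code.
    const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
    const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" }); // model name is illustrative

    // startChat replays the history on every request, which is how the bot "remembers" the conversation.
    const chat = model.startChat({ history });
    const result = await chat.sendMessage(message);

    return res.status(200).json({ reply: result.response.text() });
  } catch (err) {
    // Don't let a failed AI call take the whole app down.
    return res.status(500).json({ error: "Something went wrong talking to Gemini." });
  }
}
```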
What specific areas of your project would you like help with?

The main thing I'm looking to get some feedback on is my JavaScript code in main.js.

Specifically, I'm trying to get better at:

Refactoring for Clarity and Efficiency:

  • I'd love any tips or suggestions on how I could have refactored my JavaScript functions to make them cleaner, easier to read, or maybe even a bit more efficient.



