
Exercise Interface

A large part of the CS Academy curriculum revolves around completing autograded exercises through our interface. Thus, it is essential that our exercise interface be easy to use and understand so that students can focus on learning to code instead of learning how to operate their "IDE".

Through teacher training and our support line, we found that many teachers were struggling to understand the purpose of our test cases and the exercise checklist. Observations from previous visits to high schools also showed the need to make notes and course material easier to reference from the exercise interface. Therefore, we set out to create interactions that better reflected the purpose of the test cases.

Tools + Skills

Figma, User testing

Duration

7 weeks, June - July 2020

Team

Monica Chang, V Shiau, Michelle Lee, Isabel Yi, Angela Chen

Team Role

Ideate interactions and layout, interview teachers, create surveys for teachers and students, user testing synthesis, refine and clean up final design, deliver and explain to development team

Goal

Design an abstracted test case and autograding experience

Process

Ideate, design, test, iterate, polish

Overview of exercise before coding begins

When first entering an exercise, students are prompted to read about the task in the checklist and to play around with the interactive program they are about to replicate, so that they understand the problem before they start coding.

Initial state with no test cases selected

Once students hit "Start Coding!", they can write code in our IDE. They can also reference the checklist again, along with the test cases tied to each checklist task.

Hovering on a test case

Hovering over a test case card shows an add icon and some descriptive text, telling students how they can interact with the test case.

Test case added to code

Clicking a test case adds its code and comment to the student's editor and runs the code, letting the student check whether their canvas matches the solution canvas for that test case.
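To make this concrete, here is a minimal sketch of the insert-and-run behavior, assuming the editor contents are a plain string; every name below (TestCase, insertTestCase, drawFlower) is a hypothetical illustration, not CS Academy's actual API.

```typescript
// A minimal sketch of the "click to add" behavior, assuming the
// editor contents are a plain string. All names are hypothetical
// illustrations, not CS Academy's actual implementation.
interface TestCase {
  comment: string; // human-readable description of what is tested
  code: string;    // snippet appended to the student's program
}

// Append the test case's comment and code to the editor contents.
function insertTestCase(editorText: string, tc: TestCase): string {
  return `${editorText}\n\n# ${tc.comment}\n${tc.code}`;
}

const studentCode = "def drawFlower(x, y, size):\n    pass  # student code";
const tc: TestCase = {
  comment: "Test case: flower centered at (200, 200) with size 100",
  code: "drawFlower(200, 200, 100)",
};

// After insertion, the program would be re-run so the student's
// canvas can be compared against the solution canvas.
console.log(insertTestCase(studentCode, tc));
```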

Checklist states shown: initial checklist and test cases; checklist with a test case selected; failing a test case when checking the autograder; passing all test cases under a checklist item.

Test case card states shown: initial inactive state; on card hover; on (+) hover; on click; active state; selected but not running (pause or edit); test case removed; failed a test case; on (x) hover; passed (successfully autograded).
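Taken together, these card states behave like a small state machine. Below is a hedged sketch of plausible transitions; the state names come from the captions above, but the transition logic and all identifiers are assumptions for illustration, not CS Academy's actual implementation (transient hover states are omitted).

```typescript
// Hypothetical model of the test-case card lifecycle described above.
// State names follow the captions; transitions are assumed for
// illustration and are not CS Academy's actual logic.
type CardState =
  | "inactive"  // test case not yet added to the editor
  | "active"    // added to the editor and running
  | "paused"    // selected but not running (pause or edit)
  | "failed"    // code did not match the solution canvas
  | "passed";   // successfully autograded

type CardEvent = "click" | "pause" | "run" | "pass" | "fail" | "remove";

function next(state: CardState, event: CardEvent): CardState {
  switch (event) {
    case "click":  return state === "inactive" ? "active" : state;
    case "pause":  return state === "active" ? "paused" : state;
    case "run":    return "active";   // re-run after pause or edit
    case "pass":   return "passed";
    case "fail":   return "failed";
    case "remove": return "inactive"; // card returns to its initial state
  }
}

// Example: clicking an inactive card activates it; a failing run
// then moves it to the failed state.
console.log(next(next("inactive", "click"), "fail")); // "failed"
```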

Process

The beginning of our design process was inspired by teacher comments we received during teacher training and through support tickets. These conversations with our users made it apparent that the current interface had a few problems: students were not reading the checklist for hints about the problem, and both students and teachers struggled to understand what the test cases were actually for.

Initial iterations

Our initial iterations focused on ways to highlight the checklist so that students would engage with it more meaningfully. We experimented with ideas such as making the checklist a draggable modal, separating checklist items (introduction and instructions) from test cases (run to check the code), and making other helpful pages more accessible from the coding editor screen.

From our initial iterations and from discussions with our program manager and the entire design team, we settled on two designs for user testing. We wanted to test the designs with both students and teachers, but since this work was done during the summer and most students were out of class, we were only able to talk to teachers. To get around this, we also created a user survey to collect both students' and teachers' opinions on the new designs as well as on our old interface. This way, we were able to reach a few students for their feedback.

Design 1 for user testing: checklist only

Design 1 for user testing: checklist and test cases

Design 2 for user testing: checklist only

Design 2 for user testing: checklist and test cases

After user testing and collecting data from our survey, we synthesized our findings in an affinity map. Some of our key takeaways were that teachers strongly preferred simple, familiar interfaces, while students welcomed new features as long as those features helped them work faster or reference key information more quickly.

Affinity mapping after user testing with teachers and students

With more insight into our designs from our users, we iterated on them further. From there, we showed the designs to the software development team to get their input on implementation. With their feedback, and with their help in developing an initial draft of the design, we were able to test our interactions more thoroughly and refine and expand upon them. This led to our final design.

Refinement after user testing: checklist only

Refinement after user testing: checklist and test cases

Since finishing and implementing this design, we have made a few tweaks based on ongoing user feedback. One such change was adding a caret to the right of each checklist item to better show the relationship between the checklist and its test cases.
