Company Overview
Applause is a test case and issue management quality assurance application designed to deliver authentic, real-world feedback on the quality of your digital experiences by leveraging a worldwide community of over 1 million digital experts.
My Role and Responsibilities
Senior Product Design Leader, UX Writer/Researcher
I was responsible for managing the product design of Applause's flagship Enterprise customer-facing product, the Applause Test Case Management application. My role in this project was to propose ways we could incorporate ChatGPT into our application and to design the high-fidelity mockups for the feature. I was also responsible for working with the Engineering teams to make sure the design was translated correctly, providing UX-level quality assurance both before and after rollout.
Platforms
Customer Web App
Tools
Figma, JIRA, Confluence, Github, Miro, Aha!
Collaborators
Product Managers, Product Designers, Back/Front‑end Developers, QA Engineers, Customer Operations
Problem Statement
Generative AI bursts onto the technology scene
When Generative Artificial Intelligence exploded onto the technology scene in 2023, we, like many SaaS companies, were encouraged to find a way to incorporate Gen AI into our product.
This feature release was the result of a team-wide initiative to introduce Generative AI into our Enterprise Platform. We met as a team to brainstorm different ways to incorporate ChatGPT into our app, and after an afternoon of collaboration and follow-up meetings with the Engineering teams and execs, we settled on a practical solution.
The goal was to create a useful application of Gen AI that benefited our customers by making their jobs easier. Our hope was that by simplifying test case creation we could help companies build their library of test cases, and get the results they wanted, much faster. We aimed to provide text-based improvement suggestions that users would perceive as valuable. We went with the name I suggested, "Smart Suggestion", as a way to hint at the AI aspect without being direct, and for the alliteration.
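For illustration, here is a minimal sketch, in TypeScript, of what requesting a suggested rewrite of a single test case step could look like. The function name, prompt wording, and use of the public ChatGPT chat completions endpoint are assumptions made for this sketch, not Applause's production integration.

```typescript
// Hypothetical helper: ask a ChatGPT model for a clearer rewrite of one
// test case step. The prompt wording and function name are illustrative.
async function suggestStepRewrite(originalStep: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [
        {
          role: "system",
          content:
            "Rewrite the following QA test case step so it is clear, concise, " +
            "and easy for a community tester to follow. Return only the rewritten step.",
        },
        { role: "user", content: originalStep },
      ],
    }),
  });

  const data = await response.json();
  // The suggestion is only surfaced in the UI alongside the original text;
  // it is never applied automatically.
  return data.choices[0].message.content.trim();
}
```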
Users and Audience
Applause's users are companies looking to test their products thoroughly by working with a community of testers from across the globe. These are frequently large tech companies with highly visible products such as software applications, streaming services, IoT devices, and more. Because these companies test so often, crafting well-articulated test case descriptions and instructions becomes tedious. Additionally, finding messaging that the testing community can interpret correctly can be hit or miss, depending on how easy a test case is to understand and accomplish. Generative AI could simplify this language in a way that improves both the number of responses and their quality.
Scope and Constraints
The primary constraint was that this new feature needed to use Generative AI. Beyond that, the only real constraint was time: because of the relevance and timeliness of ChatGPT, we wanted to release the feature as soon as possible. Doing so painted us in a favorable light with our investors and users, and kept our company on trend.
Process
A new feature enabled with Company-level editing permissions
Before we could introduce the smart suggestion feature, we first needed to figure out how it would be enabled. We didn't want it on by default, for fear of disrupting the normal workflow of our very active daily users, so we had to introduce it in a way that was highly visible yet easy to set up. After notifying our clients about the new feature via email and an in-app announcement, we placed a simple checkbox within the "Integrations" section of the app, under a new "Advanced Features" tab that would also give us a home for future feature enablements. We added an information callout linking to the same knowledge base article referenced in the introduction email and announcement.
"Smart Suggestion" enablement
To activate this new feature, users also needed to agree to legal terms and conditions. We opted to present these in a pop-up that appeared when the checkbox was selected, making it very clear what users needed to do before opting in.
Legal terms and conditions agreement
Upon returning to the page, we then showed the feature as enabled, along with who enabled it and when. We also surfaced this status within the "Product Setup" pages, both as an additional verification point and as a way for people without "Integrations" permissions to see it. The feature likewise appeared within the "Integrations" tab while setting up a new product for testing, another location visible to everyone regardless of role or permission. Here, users who could set up new products could enable or disable the feature per product, even if they did not have permission to enable smart suggestions on the "Integrations" page. Finally, to activate the feature itself, we placed a button labeled "Smart Suggestion beta" in the top right of the header, where we keep our most prominent page-level actions. The word "beta" made it clear what users should expect: results could vary depending on how ChatGPT processed their text.
Enablement active
"Smart Suggestion" shown as enabled within Product Setup after creation
"Smart Suggestion" shown as enabled within Product Setup during creation
"Smart Suggestion beta" button within test case management
Showing all Step Smart Suggestions at once was key
After multiple design reviews with our users, it became clear that they heavily favored seeing all options at once rather than only the text box they were focused on. My approach was to take our existing page and split it into two side-by-side halves without breaking the layout. To do this I worked within the constraints of the recent header design update and split the page just below the header, so that in a follow-up update the header stayed fixed as the user scrolled and context about where they were was never lost.
"Smart Suggestion" in use showing suggestion selection while editing test case steps
A test case using "Smart Suggestion" showing the fixed header
Robust Yet Simple
At the top of the page I reiterated that the view was our new "Smart Suggestion" and again linked back to our knowledge base article. I then made it very clear which side of the page represented which half by adding the labels "original text" and "smart suggestion text", and pinned those labels to the top of the page as the user scrolled. Making each option a radio button made it very easy to choose between the two options available. Additionally, the text boxes remained editable in case a user wanted to make edits to either side.
The left-side, original-text options were selected by default for ease of completion. I used our light blue background with a darker blue border as the hover style when the user moused over a selection. Once a selection was made, the box took on a slightly darker blue background and lost its border, making it clear it was the active choice. Finally, to speed up the selection process, I added an "Accept All" button to the pinned label bar that accepted every smart suggestion offered on the right side. We also applied smart suggestions beyond the test case steps to the settings page, suggesting better test case names and descriptions, which were often the first things testers read when deciding whether to take on a test request. A simplified sketch of this selection model follows the figure below.
"Smart Suggestion" as seen on the test case settings page
Alternative Designs Found To Be Less Effective
During the wireframing and brainstorming phase I developed a couple of other designs with similar effects. One approach aggregated all of the content suggestions, but this meant the left and right sides often did not line up: content like images appeared on the left with no corresponding suggestion on the right, so the suggestion column ran much shorter, and we found it much harder for users to cognitively process. Another approach offered suggestions only as the user selected each section on the left. This felt far more tedious and made it hard to see the big picture, especially when a smart suggestion broke one set of instructions into multiple steps. Ultimately we landed on the solution where users see all smart suggestions at once, kept vertically aligned with the left-side counterparts they were meant to replace.
Asking For Timely, Relevant Feedback
After users completed their selections on both the steps and settings tabs, they could save them. Once saved, a modal appeared asking about their experience. We wanted the ask to be timely and relevant, but we also didn't want users to feel obligated to respond. Presenting it this way increased the feedback we received on the feature while still letting users opt out if they wanted to stay focused and move on to creating the next test case with smart suggestions.
"Smart Suggestion" feedback modal
Outcomes and Lessons
By minimizing the time needed to create well-constructed test cases, customers were able to build out their test suites faster. And by using Generative AI to simplify and clarify language, test cases became easier to read and understand, so they could be completed faster. This in turn reduced the average time quality assurance testers needed to finish their work, letting them complete more tests and earn more as a result. Our customers received their results faster, leading to greater overall satisfaction and a higher CSAT score for our core product.