User Experience Research Team

As a maturing UX team, we were constantly looking for ways to expand our influence within the Product team. We knew that providing UI designs was valuable, but we all wanted to do more.

It became apparent that to expand our influence, we needed to become subject matter experts not only in our products but also in our users and how they were using those products. The issue was that we didn't have a formal process for accessing our users or a way to engage them when we wanted to conduct research.

This led to the formation of our problem statement: How might we create a system that allows us to connect with users of our products and conduct various types of research with them?

Discovery and research

To learn more about what was holding us back as a team, we had a meeting where we discussed some of our roadblocks when it came to connecting with users. From that meeting, we learned a few key points.

  • Time and Access to Users: Whenever we wanted to test with users, we didn't have a quick way to reach them. Reaching out while we had in-flight designs worked on occasion, but it wasn't scalable because connecting took too long: we'd have to go through account managers, who would then potentially connect us. This delayed production.
  • Cost of Running Tests: We had no budget for testing and no sense of how much running these tests would cost.
  • Fear and Uncertainty: No one on the team had formal training in conducting user research, so there was hesitancy about how well we could actually do it.

This discovery work surfaced the real problem and allowed us to focus our design on addressing these pain points.

Defining the problem

Our team needed a way to access our users during our quick design sprints. We couldn't afford the time it takes to find, recruit, and schedule participants, so we needed a repository of users we could reach out to when the time was right. We also needed a scheduled, recurring cadence for running tests so that testing didn't become an afterthought in our process. Finally, we needed a way to incentivize our users to participate.

Determining the process

When it came to understanding how to implement a process at our company, I researched how others had started user research teams at their companies. I found that we needed to do three things:

  • Create a way for users to sign up as participants in our research studies, so we could reach out to them when we were ready to conduct research.
  • Create a standardized process for planning the tests and determining which team members needed to be involved.
  • Recruit users to participate in our monthly tests.

By building this framework, we would be on a good path to establishing a system for our team. However, we ran into hurdles and roadblocks along the way that proved to be valuable lessons…

Sign up form for UX Research

Creating a sign-up form for potential participants

One of the main issues we had was that when the time came to test, we didn't have ready access to users. This meant every study became a time crunch to get hold of participants.

 

I determined that if we created a sign-up form, interested users could fill it out to indicate they wanted to participate in tests. I designed a sign-up page and worked with our Marketing department to implement it on our company website. Eventually we started sharing this page with our Account Managers, who in turn shared it with our customers and users. We also shared it on our various social media channels, which increased sign-up rates.

 

The goal was to build a list of people interested in user testing. These were the people we would call on when we were ready to test. This was the first step in our new process.

Planning our tests

When it came to planning our tests, we wanted a repeatable system that could run on a monthly cadence. To do this, we took an average month, four weeks, and broke it down by what we needed to do each week. Working backwards, we determined that we would conduct our tests during the last week of the month.

 

According to Nielsen Norman Group, after testing with roughly five users you'll have enough information to identify areas of improvement and recognize patterns in user behavior. With that in mind, we decided to schedule five tests per month.
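The "five users" guideline comes from Jakob Nielsen's model of usability problem discovery: the fraction of problems found by n users is estimated as 1 − (1 − L)^n, where L is the probability that a single user surfaces a given problem (Nielsen's average estimate is L ≈ 31%). A quick sketch of the math, just to illustrate why five users is a reasonable cutoff:

```python
# Nielsen's usability-problem discovery model:
#   found(n) = 1 - (1 - L)**n
# where L is the chance a single test user surfaces a given problem
# (Nielsen's commonly cited average is L ≈ 0.31).

def problems_found(n: int, L: float = 0.31) -> float:
    """Estimated fraction of usability problems uncovered by n test users."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10):
    print(f"{n:2d} users -> ~{problems_found(n):.0%} of problems found")
```

With L = 0.31, five users uncover roughly 85% of problems, and the curve flattens quickly after that, which is why running five tests per month (rather than ten) is usually the better use of a small team's time.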

 

Getting five tests scheduled meant having a dedicated member of the team reach out to participants and check their availability. To prevent a lot of back-and-forth, we created a team Calendly page that made it easy for prospective participants to choose a time that worked for them. This outreach happened during weeks 2 and 3 of each month.

Our Calendly page, where we would book our participants

Determining what to test

This brings us to the first week of the month. During the first week, we'd determine what needed to be tested, who needed to be involved, and what artifacts we'd need ready by the time of the tests.

 

To do this, we'd ask members of our product team if there was anything in particular they wanted to test in their product. We have a vast product line and many different Product Managers who manage it. This opened the floodgates, as everyone had something they wanted insights on. To corral all the requests, we created a simple Google Form for our team and prioritized the requests against current business goals.

 

Once we chose the product we wanted to learn more about, we set up a meeting with that product's designer and its Product Manager. During that meeting, we'd brainstorm to identify what needed to be tested and how we could conduct the tests. In most cases, we defined use cases to share with our testers and what we were looking to learn from the tests. Once we established this, the designer of the project would provide the necessary artifacts (prototypes, mocks, etc.) and we would write the script.

 

By doing this in the first week, we had about 2-3 weeks to get everything ready before the tests were conducted.

Conducting the tests

Since we had ample time to reach out, schedule participants, and prepare our tests, running the actual tests was fairly straightforward. Unfortunately, no one on our team had prior experience running tests, so the first few I conducted were a bit rocky. Over time, I gained confidence in running them, sticking to the script when needed and deviating when appropriate.

 

We were able to negotiate a budget for these tests and awarded our participants a $25 gift card per test. Advertising this incentive proved very helpful in attracting new prospective participants.

Conducting a user test via Zoom with one of our customers.

Analyzing our findings

After completing our week of testing, the User Research team would meet to go over what we learned during the tests and write a report compiling our findings. We would share this report with the broader Product team and work with the Product Managers to prioritize the enhancements to be made.

A portion of our Research Report. We'd discuss each issue, who ran into it, how often it occurred, and our suggestions for improvement.

Impact of the User Research team

By implementing this system of testing, we made a significant impact on the Product team. The test results provided valuable insight into how our products could be improved and what changes we needed to make. The process continues to run, our participant list is growing, and we keep gaining insights that, in turn, make the UX team more valuable to the Product team.

Visit the ServiceMax User Research Sign Up page