
User Research Frameworks

A UX research framework is a systematic approach that guides the research process. It keeps the work well-structured and organized, and brings together the tools, methods, and principles researchers use to better understand users and their needs.

Having an established UX research framework helps you create a structured approach, save time, align research with business goals, and foster collaboration.

A classic research framework has a few key components: research methods, tools, a research plan, analysis techniques, and ethical considerations.

A good UX research framework should provide valuable insight into the company’s research processes and how they’re structured.

To make your framework even more helpful and save time, include handy templates, scripts and cheat sheets that you often use in your research.


Here are all the things it can help with:

Creating a structured approach: UX research frameworks bring structure to your organization. Having one ensures that all your teams and researchers are on the same page, maintaining consistency across different projects.

Saving time: Following on from the previous point, a predefined, structured approach lets you stop reinventing the process for each project and focus on uncovering insights instead.

Aligning research and business goals: A research framework helps align your research activities with the broader goals of the organization, so that research consistently contributes to the company’s product.

Fostering collaboration and communication: above all, a clearly defined research framework helps to establish effective communication among teams.


Best practices for creating a UX research framework

Involve a diverse team: This will help to ensure that your framework includes a wide range of insights from people with different backgrounds and experiences, making it insightful and useful for everyone.

Introduce user-centered design principles: Design your framework with user-centered design principles in mind. It should clearly communicate why research matters and why we want to understand users.

Use clear, comprehensible language: Some tips to keep in mind:

  • Avoid jargon
  • Include a glossary to explain complex terms if necessary
  • Provide examples
  • Use visuals to aid comprehension

Make it shareable: Our last tip is to make sure your UX research framework is a shareable document that is easy to read and navigate. 

Components to include in a UX research framework 

1. Introduction: What is UX research?

2. User Research Plan

Describe its structure, format and key elements, such as: 

  • Research Scope, Goals
  • Stakeholders, Participants
  • Methods, Team and Roles
  • Timelines and Budget

3. Research methods and their use cases

4. Documentation methods 

5. Methods for processing and analyzing data 

6. Ways to communicate findings 

7. Ethical considerations
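The plan elements listed under component 2 can be sketched as a simple template. This is a hypothetical illustration: the field names and sample values are not prescribed by any standard, they just mirror the elements above.

```python
# Hypothetical research-plan template; all names and values are illustrative.
research_plan = {
    "scope": "Checkout flow redesign",
    "goals": ["Identify drop-off causes", "Validate the new flow"],
    "stakeholders": ["Product manager", "Design lead"],
    "participants": {"count": 8, "profile": "Recent purchasers"},
    "methods": ["usability testing", "interviews"],
    "team_and_roles": {"moderator": "UX researcher", "notetaker": "Designer"},
    "timeline": {"start": "week 1", "end": "week 3"},
    "budget": {"incentives": 400, "tools": 150},
}
```

Keeping the plan in a shared, structured format like this makes it easy to reuse across projects and to spot missing elements at a glance.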






A framework for Decision Driven Research 

In this framework, the idea is to map your research methods to the types of decisions you want to enable. There are four types of decisions:

Vision decisions establish a potential company, product, or service direction. To enable vision decisions, you should choose methods that give you clarity on participants’ big-picture beliefs, philosophies, and experiences.

Strategy decisions determine how you will achieve your vision. To enable strategic decisions, you should choose methods that give you detailed insights into participants’ big-picture beliefs, philosophies, and experiences.

Definition decisions determine whether or not you pursue a specific design direction. To enable these decisions, you should choose methods that allow you to get early feedback on potential design directions and to better understand how a participant may interact with a potential product or service.

Evaluation decisions concern iterations to existing products or services. To enable evaluation decisions, you should choose methods that allow you to continuously identify problems, bugs, or confusion that customers may encounter during use. 
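The mapping at the heart of this framework can be sketched as a small lookup table. The specific methods listed per decision type below are illustrative suggestions, not part of the framework itself:

```python
# Illustrative mapping of decision types to candidate research methods.
# The method lists are assumptions for demonstration, not a definitive pairing.
DECISION_METHODS = {
    "vision": ["field studies", "diary studies", "interviews"],
    "strategy": ["interviews", "surveys", "concept testing"],
    "definition": ["participatory design", "concept testing", "usability testing"],
    "evaluation": ["usability testing", "A/B testing", "analytics"],
}

def methods_for(decision_type: str) -> list:
    """Look up candidate methods for the decision you want to enable."""
    return DECISION_METHODS.get(decision_type.lower(), [])
```

Starting from the decision rather than the method keeps the research tied to what the team actually needs to learn.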


A 3-dimensional framework for UXR 

The Attitudinal vs. Behavioral Dimension 

This distinction can be summed up by contrasting "what people say" versus "what people do" (very often the two are quite different). The purpose of attitudinal research is usually to understand or measure people's stated beliefs, but it is limited by what people are aware of and willing to report.

Card sorting provides insights about users' mental model of an information space and can help determine the best information architecture for your product, application, or website. Surveys measure and categorize attitudes or collect self-reported data that can help track or discover important issues to address. Focus groups tend to be less useful for usability purposes, for a variety of reasons, but can provide a top-of-mind view of what people think about a brand or product concept in a group setting.

A/B testing presents changes to a site's design to random samples of site visitors but attempts to hold all else constant, in order to see the effect of different site-design choices on behavior, while eyetracking seeks to understand how users visually interact with a design or visual stimulus.
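Because A/B testing measures behavior quantitatively, its results are usually checked for statistical significance. Here is a minimal sketch of one common approach, a two-proportion z-test on conversion counts; the function name and sample counts are illustrative, and real studies often use a dedicated stats library instead.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversion rates of variants A and B."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative counts: 120/2400 conversions for A vs. 156/2400 for B.
z, p = two_proportion_z(120, 2400, 156, 2400)
```

With these made-up numbers the difference would be significant at the conventional 0.05 level, which is the kind of evidence an A/B test is designed to produce.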

The Qualitative vs. Quantitative Dimension 

Studies that are qualitative in nature generate data about behaviors or attitudes based on observing or hearing them directly, whereas in quantitative studies, the data about the behavior or attitudes in question are gathered indirectly, through a measurement or an instrument such as a survey or an analytics tool.

The kind of data collected in quantitative methods is predetermined — it could include task time, success, whether the user has clicked on a given UI element or whether they selected a certain answer to a multiple-choice question.

Qualitative methods are much better suited for answering questions about why or how to fix a problem, whereas quantitative methods do a much better job answering how many and how much types of questions.



The Context of Product Use 

The third distinction has to do with how and whether participants in the study are using the product or service in question. This can be described as:

  • Natural or near-natural use of the product
  • Scripted use of the product
  • Limited use, in which a limited form of the product is used to study a specific aspect of the user experience
  • Not using the product during the study (decontextualized)

When studying natural use of the product, the goal is to minimize interference from the study in order to understand behavior or attitudes as close to reality as possible. 

A scripted study of product usage is done in order to focus the insights on specific product areas, such as a newly redesigned flow.

Limited methods use a limited form of a product to study a specific or abstracted aspect of the experience. For example, participatory-design methods allow users to interact with and rearrange design elements that could be part of a product experience.

Studies where the product is not used are conducted to examine issues that are broader than usage and usability, such as a study of the brand or discovering the aesthetic attributes that participants associate with a specific design style.
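The three dimensions above can be treated as tags on each method, which makes it easy to filter for a method that fits a given study. The classifications below are illustrative assumptions for a handful of methods, not definitive labels:

```python
# Each method tagged along the three dimensions discussed above.
# The tags are illustrative assumptions, not authoritative classifications.
METHODS = [
    {"name": "usability testing", "data": "qualitative", "focus": "behavioral", "context": "scripted"},
    {"name": "surveys", "data": "quantitative", "focus": "attitudinal", "context": "decontextualized"},
    {"name": "field studies", "data": "qualitative", "focus": "behavioral", "context": "natural"},
    {"name": "A/B testing", "data": "quantitative", "focus": "behavioral", "context": "natural"},
    {"name": "card sorting", "data": "qualitative", "focus": "attitudinal", "context": "limited"},
]

def find_methods(**criteria):
    """Filter methods by any combination of the three dimensions."""
    return [m["name"] for m in METHODS
            if all(m.get(k) == v for k, v in criteria.items())]
```

For example, asking for a quantitative, behavioral method in natural use would surface A/B testing, matching the reasoning in the sections above.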

At the beginning of the product-development process, you are typically more interested in the strategic question of what direction to take the product, so methods at this stage are often generative in nature, because they help generate ideas and answers about which way to go. Once a direction is selected, the design phase begins, and methods in this stage are well described as formative, because they inform how you can improve the design. After a product has been developed enough to measure it, it can be assessed against earlier versions of itself or against competitors; methods that do this are called summative. The following table describes where many methods map to these stages in time:



20 UX Methods in Brief

Here’s a short description of the user research methods shown in the above chart:

Usability testing (aka usability-lab studies): Participants are brought into a lab, one-on-one with a researcher, and given a set of scenarios that lead to tasks and usage of specific interest within a product or service.

Field studies: Researchers study participants in their own environment (work or home), where they would most likely encounter the product or service being used in the most realistic or natural environment.

Contextual inquiry: Researchers and participants collaborate in the participant's own environment to inquire about and observe the nature of the tasks and work at hand. This method is very similar to a field study and was developed to study complex systems and in-depth processes.

Participatory design: Participants are given design elements or creative materials in order to construct their ideal experience in a concrete way that expresses what matters to them most and why.

Focus groups: Groups of 3–12 participants are led through a discussion about a set of topics, giving verbal and written feedback through discussion and exercises.

Interviews: A researcher meets with participants one-on-one to discuss in depth what the participant thinks about the topic in question.

Eyetracking: An eyetracking device is configured to precisely measure where participants look as they perform tasks or interact naturally with websites, applications, physical products, or environments.

Usability benchmarking: Tightly scripted usability studies are performed with a larger number of participants, using precise and predetermined measures of performance, usually with the goal of tracking usability improvements of a product over time or comparing it with competitors.

Remote moderated testing: Usability studies are conducted remotely, with the use of tools such as video conferencing, screen-sharing software, and remote-control capabilities.

Unmoderated testing: An automated method that can be used in both quantitative and qualitative studies and that uses a specialized research tool to capture participant behaviors and attitudes, usually by giving participants goals or scenarios to accomplish with a site, app, or prototype. The tool can record a video stream of each user session, and can gather usability metrics such as success rate, task time, and perceived ease of use.

Concept testing: A researcher shares an approximation of a product or service that captures the key essence (the value proposition) of a new concept or product in order to determine if it meets the needs of the target audience. It can be done one-on-one or with larger numbers of participants, and either in person or online.

Diary studies: Participants use a mechanism (e.g., paper or digital diary, camera, smartphone app) to record and describe aspects of their lives that are relevant to a product or service, or simply core to the target audience. Diary studies are typically longitudinal and work only for data that participants can easily record themselves.

Customer feedback: Open-ended and/or close-ended information is provided by a self-selected sample of users, often through a feedback link, button, form, or email.

Desirability studies: Participants are offered different visual-design alternatives and are expected to associate each alternative with a set of attributes selected from a closed list. These studies can be both qualitative and quantitative.

Card sorting: A quantitative or qualitative method that asks users to organize items into groups and assign categories to each group. This method helps create or refine the information architecture of a site by exposing users’ mental models.

Tree testing: A quantitative method of testing an information architecture to determine how easy it is to find items in the hierarchy. This method can be conducted on an existing information architecture to benchmark it and then again, after the information architecture is improved with card sorting, to demonstrate improvement.

Analytics: Analyzing data collected from user behavior like clicks, form filling, and other recorded interactions. It requires the site or application to be instrumented properly in advance.

Clickstream analytics:  A particular type of analytics that involves analyzing the sequence of pages that users visit as they use a site or software application.

A/B testing (aka multivariate testing, live testing, or bucket testing): A method of scientifically testing different designs on a site by randomly assigning groups of users to interact with each of the different designs and measuring the effect of these assignments on user behavior.

Surveys: A quantitative measure of attitudes through a series of questions, typically more closed-ended than open-ended. A survey that is triggered by user behavior during the use of a site or application is an intercept survey. More typically, participants are recruited from an email message or reached through some other channel such as social media.


