Web App

DIAL - Developing an Inclusion Assessment Library

Role

UX Designer & Researcher

Team

Stakeholder, Product Owner, Software Developer, UX Designer

Tools

Figma, Illustrator, DiceBear Library

Duration

10 Weeks



DIAL




Designing a Research Discovery Platform for Inclusive Social Science Instruments



Arizona State University | 2025




Overview



DIAL (Developing an Inclusion Assessment Library) is a responsive web application designed to help researchers and educators discover instruments used in the social sciences to study and implement inclusive practices.


Before DIAL, there wasn’t a centralized, structured way for faculty to find validated instruments. Researchers were piecing together tools from scattered sources, relying on informal recommendations, and spending unnecessary time verifying credibility.


I was involved in the project from its early stages, helping shape the product from concept through Alpha release — defining the experience, structuring the discovery system, and collaborating closely through implementation.




The Problem



Researchers needed:


  • A centralized repository of inclusion-focused instruments

  • A way to search by keyword

  • Structured filtering based on research criteria

  • Clear metadata to evaluate credibility quickly



But this wasn’t just about building a searchable database.


It was about designing a system that:


  • Felt intuitive to first-time users

  • Supported advanced filtering for experienced researchers

  • Scaled as more instruments were added

  • Followed ASU brand and accessibility standards

  • Could be delivered within a focused Alpha scope





My Role



UX Designer


I worked across:


  • Early requirement clarification and user flow definition

  • Information architecture and metadata modeling

  • Search and filtering UX

  • Wireframes and high-fidelity UI

  • Responsive design

  • Accessibility considerations (WCAG 2.1 AA)

  • Usability testing (SUS, NPS, qualitative feedback)

  • Ongoing collaboration with developers through implementation



This was a hands-on design role with systems thinking at its core.




Designing the Discovery Experience




Supporting Two Types of Users



Through stakeholder discussions and early exploration, two user behaviors became clear:


The Searcher

Someone who knows what they’re looking for and starts with keywords.


The Explorer

Someone who prefers browsing by construct, methodology, validation status, or year.


The experience had to support both without overwhelming either.




Landing Page Strategy



The homepage was intentionally layered:


  • A prominent search bar at the top

  • Category cards for point-and-click browsing

  • A collapsible “Advanced Search” panel



This structure allowed users to start simple and progressively refine their search.


The goal wasn’t visual complexity — it was clarity of entry.




Structuring the Metadata



One of the most important parts of this project was defining how instruments would be structured.


Each entry included:



Searchable Metadata



  • Keywords

  • Category values (used in filters and as tags)




Descriptive Metadata



  • Author(s)

  • Year

  • Description/Purpose

  • Validation notes

  • Usage context



This structure shaped both the UX and the backend logic. Filters needed to reflect how researchers actually evaluate instruments, not just arbitrary categories.


Getting this right early made the rest of the system coherent.
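To make the two-layer metadata model concrete, here is a hypothetical sketch of how a single entry could be typed. The field names are illustrative assumptions, not DIAL's actual schema:

```typescript
// Hypothetical model of one DIAL entry (field names are assumptions).
interface InstrumentEntry {
  id: string;
  title: string;
  // Searchable metadata
  keywords: string[];
  categories: string[]; // drive both the filter panel and the card tags
  // Descriptive metadata
  authors: string[];
  year: number;
  description: string;
  validationNotes?: string;
  usageContext?: string;
}

// Example entry, fabricated purely for illustration
const example: InstrumentEntry = {
  id: "demo-001",
  title: "Example Belonging Scale",
  keywords: ["belonging", "inclusion"],
  categories: ["Survey", "Validated"],
  authors: ["Doe, J."],
  year: 2024,
  description: "Illustrative placeholder entry.",
};
```

Separating searchable fields from descriptive ones keeps the filter logic simple while still giving researchers the credibility cues they need on the detail view.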




Iteration: Clarifying Instruments vs. Resources



As the design evolved, a structural challenge surfaced.


The repository included two related content types:


  • Instruments (measurement tools)

  • Resources (supporting materials, frameworks, references)



Initially, both appeared within a unified results flow.



What We Observed



In walkthroughs and early feedback:


  • Some users weren’t immediately clear on the difference

  • Mixed result lists increased scanning time

  • Filtering felt heavier than necessary



The issue wasn’t visual styling — it was mental model clarity.




Design Iterations




1. Unified Repository View



Left filter panel, mixed results, card-based layout.


Functionally correct, but cognitively dense.




2. Card System Exploration



I explored multiple card patterns:


  • With image vs without image

  • Tag-based metadata chips

  • Different label hierarchies

  • Compact vs expanded layouts



This improved scannability and consistency but didn’t fully solve content-type clarity.




3. Clear Structural Separation



The final direction introduced a clear separation between Resources and Instruments using a right-aligned tab switch.


This allowed:


  • Users to focus on one content type at a time

  • Filters to persist across switching

  • A lighter cognitive load

  • A cleaner mental model of the system



This decision significantly improved clarity without adding complexity.
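The filter-persistence behavior above can be sketched as a small state model. This is an assumption about how the logic might work, not DIAL's production code; the type and function names are hypothetical:

```typescript
// Minimal sketch of tab switching with persistent filters (illustrative).
type ContentType = "instruments" | "resources";

interface RepositoryState {
  activeTab: ContentType;
  filters: Record<string, string[]>; // e.g. { category: ["Belonging"] }
}

function switchTab(state: RepositoryState, tab: ContentType): RepositoryState {
  // Only the active tab changes; applied filters carry over unchanged.
  return { ...state, activeTab: tab };
}

const before: RepositoryState = {
  activeTab: "instruments",
  filters: { category: ["Belonging"] },
};
const after = switchTab(before, "resources");
```

Keeping filters outside the tab state is what lets users compare the two content types without re-entering their criteria.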




Repository Page



The final repository experience includes:


  • Persistent left-side filters

  • Search bar with visible applied filter chips

  • Structured result cards

  • 10 results per page with pagination

  • Clear “View” CTAs



Each card balances density and readability, showing just enough metadata to support quick comparison.


The design needed to feel academic but not overwhelming.
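The filter-then-paginate behavior of the results list can be sketched as follows. This is an assumed shape of the logic (the function and field names are hypothetical), matching the 10-results-per-page behavior described above:

```typescript
// Illustrative sketch of filtering + pagination for the repository page.
const PAGE_SIZE = 10;

interface ResultItem {
  title: string;
  categories: string[];
}

function filterAndPaginate(
  items: ResultItem[],
  activeCategories: string[],
  page: number // 1-indexed
): ResultItem[] {
  // An item matches only if it carries every applied category filter.
  const filtered = activeCategories.length
    ? items.filter((i) => activeCategories.every((c) => i.categories.includes(c)))
    : items;
  const start = (page - 1) * PAGE_SIZE;
  return filtered.slice(start, start + PAGE_SIZE);
}

// Example: 15 matching items split across two pages.
const demo: ResultItem[] = Array.from({ length: 15 }, (_, n) => ({
  title: `Item ${n + 1}`,
  categories: ["Survey"],
}));
const pageTwo = filterAndPaginate(demo, ["Survey"], 2);
```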




Detail Views



Clicking into an instrument reveals:


  • Full metadata

  • Validation information

  • Author and year

  • Download/View option

  • PDF preview

  • Related resources or instruments



Hierarchy was critical here — researchers need credibility cues immediately.




Accessibility & Responsiveness



The platform was designed with accessibility in mind:


  • WCAG 2.1 AA contrast considerations

  • Keyboard navigable filters

  • Clear semantic structure

  • Screen-reader compatibility



Responsiveness was considered throughout:


  • Filters stack vertically on mobile

  • Tabs remain accessible

  • Cards maintain hierarchy across breakpoints



The system adapts cleanly without losing clarity.




Usability Testing



After implementation, we conducted a usability evaluation.



Methods Used



  • System Usability Scale (SUS)

  • Net Promoter Score (NPS)

  • Task-based questions

  • Open-ended feedback



Participants evaluated:


  • Landing page clarity

  • Search effectiveness

  • Filter logic

  • Results presentation

  • Clarity between instruments and resources





The testing validated that the search and filtering structure worked well, and that separating content types improved understanding.
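For reference, SUS responses are scored with the standard formula (Brooke, 1996): odd-numbered items contribute (rating − 1), even-numbered items contribute (5 − rating), and the sum is multiplied by 2.5 to yield a 0–100 score. A minimal sketch:

```typescript
// Standard SUS scoring: 10 items, each rated 1–5.
function susScore(ratings: number[]): number {
  if (ratings.length !== 10) throw new Error("SUS requires exactly 10 ratings");
  const sum = ratings.reduce(
    // Index 0, 2, 4, ... are the odd-numbered items (1, 3, 5, ...).
    (acc, r, i) => acc + (i % 2 === 0 ? r - 1 : 5 - r),
    0
  );
  return sum * 2.5;
}
```

A respondent who strongly agrees with every positive item and strongly disagrees with every negative one scores 100; the conventional benchmark for "above average" usability is around 68.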




Scope & Tradeoffs



Because this was an Alpha release, several features were intentionally out of scope:


  • Admin dashboard

  • Saved searches

  • Export/download tools

  • Usage tracking

  • Semantic search



The focus was on building a strong foundation: reliable search, structured metadata, and a clear discovery experience.




Impact



By Fall 2025, the Alpha version of DIAL was deployed with:


  • Keyword and advanced search

  • Structured filtering

  • Instrument and resource separation

  • Responsive, accessible UI

  • Technical documentation and shared repository



Researchers now have a centralized, structured way to explore inclusion-focused instruments — reducing friction in early-stage research.




Reflection



This project reinforced how much clarity in UX comes from structure, not styling.


The biggest improvement wasn’t a visual change — it was clarifying how content was organized and how users moved between types of information.


It also deepened my experience in:


  • Designing metadata-driven systems

  • Balancing density with usability

  • Iterating based on feedback

  • Collaborating closely with developers

  • Delivering within scoped constraints



DIAL wasn’t about designing pages.


It was about designing a research tool that feels structured, intentional, and usable from day one.


