NATALIA GULBRANSEN-DIAZ

The future demands hope and imagination. I’m passionate about deploying critical, design-led approaches to complex social challenges, with a particular focus on how we can design with/and/for communities envisioning positive futures.

Through practice-led inquiry and real-world collaborations with Australian NPOs, I work to understand the conditions and relationships that enable design to be generative rather than extractive. Bridging theory and application, I aim to create responsive, situated engagements that contribute to ongoing conversations about design's role in community and public life.

My recently completed PhD, Design with/and/for Value, explored how design can support non-profit organisations in realising their collective ambitions beyond economic measures.

Email
CV
Publications [Google Scholar]

Research
  1. Design with/and/for Value
  2. Waste to Resilience: Sanitation against Stunting and Climate Vulnerability in Indonesian Informal Coastal Areas [Coming Soon] 
  3. Computational Creativity in the Classroom: Student-Led Co-Design of Generative AI Pedagogies in Design [Coming Soon]
  4. Sonic Street Technologies: Australia
  5. Broadening Horizons: Using Curiosity to Diversify Behaviour
  6. Usability Issues in Self-Service Technologies
  7. COVID-19 Smart IoT Screening System (Pilot) at Sydney Children’s Hospitals Network
  8. Introspect
Usability Issues in Self-Service Technologies

Research Assistant, 2021
The University of Sydney School of Architecture, Design and Planning

[usability evaluation] [self-service technologies]
Publications:
Henderson, H., Grace, K., Gulbransen-Diaz, N., Klaassens, B., Leong, T. W., & Tomitsch, M. (2024). From Parking Meters to Vending Machines: A Study of Usability Issues in Self-Service Technologies. International Journal of Human–Computer Interaction, 40(16), 4365–4379. https://doi.org/10.1080/10447318.2023.2212228 

Self-service technologies are everywhere — parking meters, ATMs, ticket machines, grocery checkouts — and they fail people with surprising regularity. This project set out to understand why, and to document what that failure actually looks like in practice. Working as the primary research assistant on Hamish Henderson's PhD research, I helped run a mixed-methods usability study across seven SSTs in Sydney, one of the more comprehensive evaluations of everyday self-service technology in the HCI literature.

The seven SSTs used in this study.


IN THE FIELD


The study involved 30 participants, each taken through all seven SSTs across a single 90-minute session. For each technology, I guided participants through task-based think-aloud protocols, recorded observations of their behaviour and body language, administered a System Usability Scale evaluation, and conducted a short semi-structured interview to capture their experience before moving on to the next one. Running sessions across Sydney — at the actual machines, in the actual conditions people use them — meant working with the noise, the time pressure, and the self-consciousness that comes with using these systems in public.

WHAT THE DATA SAID


The results were stark: 84% of user responses fell within the D or F grade ranges on the System Usability Scale, and not a single participant rated any SST as a genuinely good experience. Average scores ranged from 29.25 for the parking meter to 56.57 for the train ticket machine (for context, microwaves score around 86.9, and Google Search 93.4).
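For readers unfamiliar with how those numbers are produced, the standard SUS calculation (Brooke's original scoring) can be sketched in a few lines. This is a generic illustration, not the study's analysis code; the function name and the example responses are my own.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are multiplied by 2.5, giving a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A participant answering neutrally (all 3s) lands at exactly the midpoint:
print(sus_score([3] * 10))  # 50.0
```

Because the scale is non-linear in practice, raw scores are usually interpreted against published grade bands (such as those in Bangor et al. and Sauro and Lewis), which is how the D and F grades above were assigned.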

Through inductive thematic analysis — which I led, working collaboratively with the research team through independent coding, facilitated workshops, and iterative synthesis — nine global themes emerged across all seven SSTs: clarity of guidance, confidence and trust, interface cohesion, efficiency and legibility, feedback, recoverability, social pressure, assumed knowledge, and accessibility. Across all nine, the SSTs consistently placed the burden of failure on the user. When something went wrong, there was rarely a path back.

WHAT IT MEANS FOR DESIGN


The research pointed to three areas where SST design most urgently needs attention. First, cognitive load: SST interactions happen in conditions of limited attention — in public, often under time pressure — and most systems do nothing to account for this. Progressive disclosure, inline guidance, and clearer feedback loops would help significantly. Second, error recovery: users who made mistakes rarely had a clear path forward, and human assistance wasn't always available or welcome. Designing for recoverability, not just task completion, would change the experience substantially. Third, context sensitivity: the social dimension of using these systems in public — the awareness of queues, of being watched, of not wanting to appear incompetent — shaped behaviour in ways the interface design largely ignored.

BUILDING THE TOOLS FOR SYNTHESIS


With data coming from seven different SSTs across 30 participants, consistency in analysis mattered. I developed the frameworks and resources that structured the team's synthesis process, allowing findings from a parking meter session to sit alongside findings from a vending machine session in a way that was comparable and traceable. This made triangulation across thematic codes, field notes, and SUS data possible, and kept the nine themes grounded across the full dataset.

The Miro analysis template I developed for the study (left: blank structure, right: completed example for SST #1, the vending machine), capturing SST context, SUS scores, thematic analysis, field note comments, and design improvements.

SUS scores per SST, presented alongside rating scales from Lewis and Sauro (2018) and Bangor et al. (2008). SST usability was bad across the board, with several SSTs (the vending machine and parking meter) being exceptionally bad. Not a single rating from any user exceeded a score of 72, considered the bottom-end of “good”.

LISTENING TO THE SIGNAL


What I took from this project was a clearer sense of what rigorous fieldwork actually involves: not just running sessions, but staying attentive across all of them and building analysis processes robust enough that the findings could hold up to scrutiny. For a study making claims about everyday technologies that millions of people use, that rigour felt like the minimum requirement.



This research was partially funded through support from the Henry Halloran Trust.

©2026 Natalia Gulbransen-Diaz