
XRisis Platform Validated with Action Contre la Faim Emergency Roster in Paris

Eight emergency response professionals take part in a comprehensive three-pilot evaluation workshop, generating quantitative usability metrics and qualitative feedback to inform commercial SimExBuilder platform development.

Published by Nuwa Team
Funded by the European Union

This project has received funding from the European Union. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.

Grant agreement number: 101070192

Validation Workshop Execution

Nuwa conducted a comprehensive validation workshop in Paris on 14 May 2025 with eight participants drawn from Action Contre la Faim's internal emergency roster, representing roles typical of country office emergency response teams including programme leads, logistics specialists, and finance managers. Following a structured induction session, the workshop sequenced three pilot experiences over approximately 90 minutes: an individual arrival briefing with the AI mentor Maud introducing emergency management concepts, collaborative alert and response strategy development in four-person teams responding to a flooding scenario, and an individual implementation simulation requiring stakeholder negotiation with AI-powered characters addressing realistic operational challenges. Participants completed written evaluation surveys employing the System Usability Scale, component-specific added-value ratings, overall satisfaction metrics, and open-ended narrative questions, supplemented by a facilitated verbal group debrief exploring themes including perceived training value, usability challenges, comparison to conventional simulation exercises, and recommendations for future development.

Participant Profile and Evaluation Credibility

Participant selection engaged the genuine target user population rather than convenient volunteers or specially recruited test subjects, strengthening confidence that results would generalise to operational deployment contexts. Six of the eight participants had previously taken part in three or more conventional simulation exercises, providing an informed basis for assessing whether XR capabilities delivered genuine incremental benefits rather than merely a different delivery of equivalent learning experiences. Five of the eight possessed emergency deployment experience whilst three had not yet completed field assignments, creating demographic diversity that enabled evaluation of platform value across experience levels, from newly trained personnel through seasoned emergency responders. Ages spanned the late twenties through the late fifties, with varied prior gaming experience and general technology comfort, addressing concerns about whether technical literacy would create accessibility barriers for certain demographic groups. This composition ensured the validation engaged informed, representative assessors capable of providing actionable feedback grounded in operational understanding rather than superficial reactions to technological novelty.

Multi-Method Evaluation Approach

The evaluation methodology combined quantitative instruments providing measurable, comparable outcomes with qualitative methods capturing contextual nuances and explanatory insights that numbers alone cannot reveal. The System Usability Scale's standardised ten-item psychometric structure enabled direct comparison against benchmark data from thousands of previous studies, whilst its validated scoring algorithm reduced response bias through alternating positively and negatively framed statements. Component-specific added-value ratings disaggregated the platform assessment, enabling granular identification of which capabilities delivered value (implementation simulation) and which required reconsideration (theoretical briefing), avoiding aggregate impressions that would obscure important performance variations. Separate verbal debriefs with participants and with facilitators and project team members enabled candid discussion within each group, without cross-group concerns about social desirability or evaluation anxiety influencing feedback authenticity. This multi-method triangulation recognised that comprehensive evaluation requires both measurable metrics demonstrating quantified outcomes and rich narrative feedback explaining patterns, revealing contextual factors shaping user experiences, and generating actionable improvement recommendations.
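For readers unfamiliar with how the System Usability Scale yields its headline figure, the sketch below illustrates the standard scoring convention: odd-numbered (positively worded) items contribute their response minus one, even-numbered (negatively worded) items contribute five minus their response, and the summed contributions are scaled by 2.5 onto a 0-100 range. Function and variable names are illustrative only and are not drawn from the project's analysis tooling.

```python
from typing import Sequence

def sus_score(responses: Sequence[int]) -> float:
    """Compute a System Usability Scale score from ten Likert responses (1-5).

    Standard SUS scoring: odd items (positively worded) contribute
    (response - 1); even items (negatively worded) contribute (5 - response).
    The summed contributions (0-40) are multiplied by 2.5 to give 0-100.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")

    total = 0
    for item_number, response in enumerate(responses, start=1):
        if item_number % 2 == 1:          # odd, positively worded item
            total += response - 1
        else:                             # even, negatively worded item
            total += 5 - response
    return total * 2.5

# Hypothetical example: one participant's responses to the ten SUS items.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 3]))  # 80.0
```

Individual scores computed this way are then typically averaged across participants and compared against published benchmark values, which is what allows a single workshop cohort to be positioned against prior usability studies.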

Immediate Observations and Preliminary Themes

Facilitators observed generally successful scenario engagement, with participants navigating virtual environments, communicating through Rainbow CPaaS, interacting with AI avatars, and completing assigned tasks despite occasional interface confusion and technical hiccups that did not prevent exercise completion. Participants demonstrated clear enthusiasm for the implementation simulation scenarios featuring stakeholder negotiation, whilst expressing more modest engagement during the theoretical briefing and mixed reactions to the collaborative planning exercises; these preliminary patterns were subsequently confirmed by the quantitative ratings. Technical reliability proved generally acceptable, with isolated issues including AI speech recognition failures requiring scenario resets, momentary audio connection instabilities during periods of high participant activity, and navigation confusion when participants attempted interactions the interface did not support; none of these was severe enough to derail the workshop, yet they were sufficient to affect usability perceptions and satisfaction ratings. The validation event demonstrated platform operational capability with target users in a realistic deployment context, achieving Technology Readiness Level 7, whilst generating comprehensive evaluation evidence that will inform commercial platform development priorities addressing the identified improvement opportunities.

Next Steps and Analysis Phase

The team is conducting detailed analysis of written survey responses, debrief transcripts, and facilitator observations to produce a comprehensive evaluation report documenting quantitative outcomes, qualitative themes, strategic recommendations, and development priorities for the SimExBuilder commercial platform. Results will inform decisions about which capabilities merit continued investment and which require fundamental reconsideration, how far interface refinement can address usability barriers before architectural changes become necessary, and what market positioning emerges from the validation evidence about differential training value across emergency management applications. The outcomes will also directly shape commercial discussions with Action Contre la Faim regarding Service Level Agreement structures for ongoing platform access beyond the research funding period, the design of an early adopter programme for peer NGO engagement, and a go-to-market strategy that balances rapid market entry against adequate capability maturity, ensuring sustainable adoption rather than premature deployment creating negative first impressions that would impede subsequent market development.