The Human Readiness Framework: a scoring rubric for real-world robot deployment
Most robot deployments fail not because the robot breaks, but because the humans around it weren't ready. The machine works. The context doesn't. This is the gap the Human Readiness Framework is designed to close.
Developed through fieldwork across logistics, healthcare, and hospitality environments, the HRF is a scoring rubric that evaluates an organisation's readiness to deploy robotic systems along five dimensions: spatial legibility, staff orientation, process alignment, failure tolerance, and feedback loops.
Each dimension is scored from 1 to 5, producing a composite readiness score that teams can use before, during, and after deployment. A score below 12 indicates high risk of adoption failure — not technical failure. The robot will function. The people around it won't know what to do.
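The scoring mechanics above can be sketched in a few lines of Python. This is a hypothetical illustration, not the published tool: the five dimension names and the below-12 risk threshold come from the text, but the aggregation method is an assumption (a simple sum of the five 1–5 scores yields a 5–25 range, which is consistent with a threshold of 12), and all function names here are invented for the example.

```python
# The five HRF dimensions named in the article, each scored 1-5.
DIMENSIONS = (
    "spatial_legibility",
    "staff_orientation",
    "process_alignment",
    "failure_tolerance",
    "feedback_loops",
)

def composite_score(scores: dict[str, int]) -> int:
    """Sum the five dimension scores into a composite (range 5-25).

    Assumes simple summation; the article does not specify the
    aggregation method.
    """
    missing = set(DIMENSIONS) - scores.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    for name in DIMENSIONS:
        if not 1 <= scores[name] <= 5:
            raise ValueError(f"{name} must be scored 1-5, got {scores[name]}")
    return sum(scores[name] for name in DIMENSIONS)

def high_adoption_risk(scores: dict[str, int]) -> bool:
    """Per the framework, a composite below 12 flags high adoption risk."""
    return composite_score(scores) < 12

# Example assessment: a site with weak spatial cues and thin training.
scores = {
    "spatial_legibility": 2,
    "staff_orientation": 3,
    "process_alignment": 2,
    "failure_tolerance": 2,
    "feedback_loops": 2,
}
print(composite_score(scores), high_adoption_risk(scores))  # 11 True
```

The validation step matters in practice: a missing or out-of-range dimension score should fail loudly rather than silently skew the composite toward or away from the risk threshold.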
The framework is deliberately non-technical. It was designed to be administered by product managers, experience designers, and operations leads — not robotics engineers. The questions it asks are about people, not machines: Do staff know what to do when the robot stops? Do customers understand what the robot is for? Is there a clear path for reporting a bad interaction?
Early pilots using the HRF showed a 34% reduction in escalation events during the first 90 days of deployment. Teams that scored high on "feedback loops" — meaning they had structured ways to collect and act on frontline observations — outperformed low-scoring peers on nearly every adoption metric.
The HRF is available as part of the REP curriculum and will be published in full as an open research tool later this year.