


The First Thing People Meet Is the Service  

12/01/26 By Dan Newton, Northumbria University

Most service desks now have a virtual assistant involved in some way, shaping the user experience of the service desk from the very first interaction.

Sometimes it’s something you’ve chosen carefully and put real thought into. Sometimes it appears as part of a broader platform decision, and everyone (more or less) agrees to work with it.

Either way, the conversation usually starts in the same place. How many contacts can it handle? How much demand can it deflect? How quickly does it close conversations? How much money does it save? 

They’re fair questions. They’re also very easy ones to answer. 

What gets asked far less often is a simpler question: What is it actually like to use? 

From the User’s Point of View, This Is the Service 

From a user’s point of view, none of the internal debates really matter. They don’t care whether it’s called AI, automation, orchestration, or something else entirely. If it talks to them and there’s no obvious human involved, it is the service. And they judge its behaviour. 

Does it listen?
Does it remember what they’ve already said?
Does it feel like it’s trying to help, or trying to get them out of the way? 

A useful way to think about this is to stop talking about the technology and imagine a new starter joining your service desk. 

On paper, they’re brilliant. Always available. Never tired. Never sick. Happy to work any shift you need covering. From a resourcing and management point of view, they look like a dream. 

Then you sit down and listen to how they actually work. 

A user explains their issue and mentions what they’ve already tried. The new starter listens politely, then asks them to do the same things again. Another user raises a slightly awkward question and gets a confident answer to a simpler problem, stepping neatly around the messy part. A third user is asked to pick one of four categories so a ticket can be logged. They explain that none of them really fit. They’re told they must choose one to continue. They pick the least wrong option, already knowing they will be explaining it properly later.

When the conversation drifts off script, things get even more uncomfortable. Apologies appear. The tone changes. Eventually, it ends with “Sorry, I can’t help with that” or “Please rephrase your question”. It doesn’t feel like support. It feels like a closed door. You hover nearby, suddenly very aware of the distinctive cadence of someone typing a complaint that will definitely include the phrase “at no point during this interaction”. 

From a reporting point of view, everything looks good. Volumes are down. Handling times look healthy. Deflection targets are being hit. 

What those numbers don’t show is the effort being pushed back onto users. The time spent rephrasing questions. The context they have to carry into the next interaction. The hesitation before they think about getting in touch again, because they’re not sure it’s worth it. 

The work hasn’t gone away. It’s just been moved around. 

Efficiency Has a Cost, and Someone Always Pays It

For some organisations, this is a conscious and defensible choice. In certain internal service environments, the balance may genuinely favour containment over experience. Staff may be expected to self-serve, tolerate a degree of friction, or work within systems that prioritise efficiency over polish. In those contexts, a service that handles simple, predictable requests well and deliberately limits everything else may be acceptable and, in some cases, effective. 

The calculation changes once the service is external, customer-facing, or operating in an environment where users have a choice. At that point, experience stops being a nice-to-have and becomes part of the service itself. The question shifts from whether the service is efficient to whether users decide it is worth engaging with at all. What you optimise for matters, but so does what you’re willing to trade away. 

What This Looks Like at Scale 

These observations come from running a virtual assistant in a live, shared-service environment at scale. I work within Norman Managed Services, based at Northumbria University, supporting staff and students across more than forty universities, with a potential user base of well over a million people. It’s the largest shared service in higher education and has been operating for nearly two decades. 

Within that environment, Ember, our virtual assistant, has been live for around five months and already operates at a scale that leaves very little room for theory or idealised user journeys. It’s one of the largest operational deployments of generative AI in higher education professional services, which means it must cope with reality rather than tidy assumptions. 

Ember has been described by some users as the “best AI experience they’ve ever had”. That did not happen by accident. It’s the result of over a year of focused development, the use of frontier AI models, and extensive prompting and directives designed to shape tone, adaptability, and pacing. Much of this work was tested and refined in a simulation environment long before users ever saw it. Crucially, Ember is backed by our 24/7/365 human service desk, with handovers treated as good judgment, not failure. 

Users range from students logging in for the first time to senior academics who have been around longer than most ITSM frameworks. They arrive with problems shaped by their own context, pressures, and priorities, often at moments when something has already gone wrong. They’re not thinking in categories, workflows, or channels. They’re trying to explain what isn’t working in language that makes sense to them. 

Ember is a good example of conversational interaction and generative AI earning its place by handling the nuance and context people naturally bring to their help requests, in the same way we would expect from a capable colleague at the desk. 

More broadly, when designed with care, generative AI can cope with incomplete information, maintain context across a conversation, adjust tone, and recognise when human input is needed. As these approaches become more established, rigid behaviour increasingly reflects design choices rather than technological limitations. 

The Human Test 

There is a simple test that cuts through all of this, and it’s one we kept coming back to when building Ember. It also applies whether you’re building something yourself or buying it in. If this behaviour came from a human colleague at the desk, would we accept it simply because they were efficient on paper? Most service desk managers wouldn’t. 

We already know how to coach people to deal with ambiguity, recognise frustration before it turns into anger, and take responsibility for outcomes rather than simply closing interactions. Virtual assistants are increasingly the front door to our services. From the user’s point of view, they are the service, shaping trust, expectations, and whether people feel supported enough to engage again next time. 

If an interaction would prompt a quiet word with a colleague, it should prompt the same level of scrutiny when it comes from a machine. The technology may be new, but the expectation is not. 

This thinking underpins the session led by Gillian Hitchens and me from Norman Managed Services, drawing on our journey and the reality of running Ember at scale. We’ll talk candidly about design decisions and what it takes to put generative AI at the front door without lowering service standards. There will be no feature lists or grand claims, just practical experience!

Join Dan and Gillian on Day 1 in the AI & Automation stream, 14:50 to 15:20, for a grounded conversation about delivering generative AI in complex services at scale.

If nothing else, you’ll leave with a clearer answer to a simple question. If “AI” were a new starter on your service desk, would you be praising them, or booking a one-to-one? 

 
