

AI in Service Management: Why Data & Readiness Matter

26/02/26 By Darren Rose

Mind the Gap: Why Data and Organisational Readiness Determine AI Success in Service Management

By Darren Rose, Service Transformation Partner, FSP

AI is arriving in IT service management at speed; in truth, it is already here, and the pressure on teams to adopt it is real. Vendors promise transformation through their latest AI capabilities, leadership teams are asking difficult questions and pushing for efficiency gains, and service desk professionals face a landscape where the technology itself is unlikely to be the hardest part of the challenge.

In this article, I want to look at three areas that shape the successful adoption of AI: the quality of your data, the maturity of your processes, and the readiness of your organisation.

Data Quality

AI in service management is only as good as the data it learns from and operates on. For most service desks, that data is in a worse state than they realise. CMDBs drift out of date almost as soon as they are built. Incident records are inconsistently categorised, missing key information, or rely on free-text descriptions that mean different things to different people. Knowledge bases, where they exist, are a patchwork of articles – some accurate, many outdated, some never used at all.

This is the reality AI encounters when introduced into a typical service management environment.

Establishing data governance for AI should be seen as a foundational requirement that covers accuracy, quality, hygiene, and ownership. When these are absent, AI doesn’t just underperform – it actively erodes trust.

An AI tool that confidently suggests the wrong fix or surfaces a knowledge article that contradicts what service desk teams know to be true damages credibility in a way that is difficult to recover from. Poor data quality doesn’t produce poor AI recommendations quietly; it produces them confidently at scale, and at speed.

To prepare, organisations must make an honest assessment of the current state before any AI tool goes live. That means reviewing the CMDB for completeness and accuracy, reviewing incident categorisation for consistency, and evaluating whether the knowledge base reflects how issues are resolved today. Importantly, this should not be treated as a one-time clean-up exercise. Data governance needs to be a continuous, owned discipline – with defined responsibilities and regular review.
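The kind of baseline review described above can be partly automated. The sketch below is a minimal, illustrative example in Python, assuming incident and knowledge records have been exported as dictionaries; the field names (`category`, `description`, `last_reviewed`) are hypothetical and will differ by ITSM tool.

```python
from datetime import date, timedelta

def audit_service_data(incidents, kb_articles, stale_after_days=180):
    """Score basic data-quality signals before an AI rollout.

    `incidents` and `kb_articles` are lists of dicts with illustrative
    field names; real exports will vary by ITSM platform.
    """
    total = len(incidents)
    # Records with no category give an AI model little structure to learn from.
    uncategorised = sum(1 for i in incidents if not i.get("category"))
    free_text_only = sum(
        1 for i in incidents if not i.get("category") and i.get("description")
    )
    # Knowledge articles not reviewed within the window are flagged as stale.
    cutoff = date.today() - timedelta(days=stale_after_days)
    stale_articles = sum(
        1 for a in kb_articles if a.get("last_reviewed", date.min) < cutoff
    )
    return {
        "uncategorised_pct": round(100 * uncategorised / total, 1) if total else 0.0,
        "free_text_only": free_text_only,
        "stale_articles": stale_articles,
    }
```

Run periodically (not as a one-off), a report like this gives the continuous, owned review cadence the article argues for: the numbers become a governance metric with a named owner, rather than a clean-up project that ends.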

Structured data capture throughout the service management lifecycle is the foundation on which AI-assisted self-service and intelligent suggestion features depend. Investing in data quality before AI deployment is not a delay – it can determine whether the investment pays off.

Process Maturity

Process maturity represents an often-overlooked dimension. AI doesn’t transform processes – it amplifies their strengths and weaknesses. This is perhaps one of the most important truths for service management professionals to grasp before embarking on any AI initiative.

When processes are ad hoc, inconsistently followed, or poorly documented, AI has little reliable foundation to work from. Incident categorisation that varies by individual, knowledge articles that are incomplete or out of date, and CMDB records that don’t reflect reality – these aren’t problems that AI will fix. These are problems that AI will magnify. Automation built on weak processes simply delivers poor outcomes faster and at greater scale.

Organisations with well-defined, consistently executed processes find that AI genuinely amplifies their capabilities. Mature processes produce cleaner data, clearer patterns, and more predictable outcomes – all of which AI can analyse and act upon with confidence.

The good news is that organisations don’t need to resolve every process weakness before taking action – they simply need to know where they stand today so that they can plan for tomorrow. Structured maturity assessments provide exactly that starting point: an honest, evidence-based picture of where processes are consistently followed and where they are not. The key is to evaluate the current state rigorously rather than overstate process confidence in the rush to deploy AI.

Once the assessment is complete, shift the focus to improving consistency iteratively, building stronger foundations progressively rather than attempting to fix everything at once. Organisations that invest in this kind of structured self-awareness before pursuing AI make better implementation decisions and develop the cleaner data and more predictable outcomes that AI genuinely needs to deliver value.

Organisational Readiness

The most common mistake organisations make is treating AI adoption as a technology project. They procure a tool, configure it, and announce it – without first asking whether the culture and the people who will work alongside AI are genuinely ready for what is being introduced. At the heart of this challenge is trust. Service desk teams are understandably asking questions that rarely get answered directly: Will AI take my job? If it gives a wrong answer, who is accountable?

When these concerns go unanswered, they don’t simply disappear – they go underground, manifesting as passive resistance and low adoption. Resistance and mistrust, not technical limitations, are the primary reasons AI initiatives underdeliver.

The starting point for assessing organisational readiness is to foster honest conversations. Leaders need to actively address concerns about job security, accountability, and skills – not through a single all-hands announcement, but through consistent, visible behaviour that demonstrates AI is something being introduced with people rather than to them.

Leadership behaviour is an adoption accelerator. When leaders visibly champion AI – not just in strategy documents but in daily practice – teams follow. When they don’t, even the best technology will sit underused.

Psychological safety – the confidence for teams to raise concerns and challenge decisions without fear – matters enormously here, and creating it is a key leadership responsibility. Alongside that, structured AI literacy and ethics training helps teams understand not just how to use AI tools, but when to trust them and when to exercise their own judgement.

The ITIL AI Governance white paper published at the end of 2025 is unambiguous on this point: successful AI governance requires AI literacy and cultural readiness at every level, from frontline staff to senior leaders. Governance cannot compensate for a culture that isn’t ready.

[Image: promotional graphic for the SDI Conference 2026, 19–20 March 2026 at the Hilton Metropole, Birmingham.]

I’ll be exploring these themes in depth at Spark26, where we’ll examine practical approaches to building the readiness foundations that determine AI success. Join me to discuss how your organisation can mind the gap between AI aspiration and reality.