AI Productivity Is the Tip of the Iceberg. Trust Lies Beneath.

By Vilas Madan, Sr Vice President – Growth Leader at EXL, where he works with enterprises operating in complex environments across APAC to operationalise data, analytics, and AI at scale.

What enterprises cannot see in AI adoption may matter more than what they can. Recent reporting by The Guardian on AI-generated health misinformation exposed a hard truth about enterprise AI adoption: the most visible gains, such as faster workflows, quicker problem solving, and lower costs, are only the tip of the iceberg. Beneath that surface lies a far more complex challenge, one of trust. And in industries like healthcare and insurance, that challenge is not optional.

Organisations usually introduce AI as a productivity tool. Automating routine tasks, summarising long documents, and accelerating analysis are driving measurable improvements across all manner of domains. These use cases are also relatively easy to deploy. In regulated, high-stakes environments, however, guardrails for safe and ethical AI use matter just as much. AI models alone cannot create the level of trust that high-risk, compliance-led environments need in order to operate them. Trust is built deliberately, through embedded governance, data discipline, and, of course, human oversight. That level of maturity sits well beyond early experimentation or pilot programs.

At EXL, we find the iceberg a useful explanatory model for AI adoption. The visible productivity gains sit at the tip: easy to see and easy to measure. But the bulk of the iceberg, the part that determines whether AI succeeds or fails at scale, sits below the waterline. That ‘underwater’ part of the iceberg includes operating models, controls, people, and processes.

In practice, those controls include governance frameworks that monitor fairness and bias in AI systems and safeguard sensitive data through strong privacy and security standards. They ensure AI-driven decisions remain explainable and auditable. Just as importantly, organisations must define clear accountability for how AI is deployed, monitored, and corrected when outcomes fall short of expectations.

In other words, the hidden part of the iceberg is where most of the work happens, and where most organisations underestimate the effort required. When organisations struggle with AI, it is rarely because AI is not powerful enough. Today’s models are extraordinarily capable. The failure point is almost always operational. Organisations sometimes treat AI as a plug-and-play upgrade; to tap into its full potential, it is better treated as a living capability that is designed, governed, and continuously managed.

Responsible AI at scale requires a fundamentally different way of working. We see this emerging as multidisciplinary delivery teams, referred to as “pods”, that bring together the full range of expertise needed to make AI trustworthy in real-world conditions. These teams typically include domain experts who understand the regulatory and operational realities of sectors like healthcare or insurance. Their role is critical: without that human-led contextual knowledge, even technically accurate outputs can be misapplied, sometimes dangerously. The teams also include data and database engineers, especially those working with retrieval-augmented generation, who ensure models draw from accurate and auditable data.