Measuring the Success of AI Initiatives: Highlights from Consero’s Chief Data & AI Officer Executive Summit


On April 3rd, Ethyca was proud to sponsor and participate in the Consero Chief Data & AI Officer Executive Summit at the Yale Club of New York City. The event convened senior leaders from the world’s top enterprises to discuss the evolving role of data and AI in business. Our CEO, Cillian Kieran, moderated a standout panel discussion titled “Measuring the Success of AI Initiatives”, featuring data and analytics executives from Citi, Mastercard, FreeWheel, and UnitedHealth Group.
In a time when every boardroom wants to talk about AI, this panel aimed to dig deeper: What does real success look like when deploying AI systems at scale? And how can organizations ensure that their AI investments deliver not just ROI, but sustainable, governed outcomes?
Here are a few of the key themes that emerged during the conversation:
1. Defining Success Beyond Hype
Panelists agreed that the AI landscape has matured beyond experimentation. Organizations are no longer chasing “shiny object” POCs—they’re under pressure to prove measurable impact. Whether that’s in the form of operational efficiency, regulatory compliance, or customer experience lift, the definition of AI success is now highly contextual.
Sami Huovilainen of Citi shared how GenAI use cases in customer service are tracked daily, with performance metrics tied directly to expected financial outcomes. Mastercard’s Nathan Bruns emphasized the importance of counterfactual modeling to isolate the real impact of AI, especially in customer-facing use cases. At FreeWheel, Bob Bress described how back-end AI systems—though invisible to customers—can drive significant cost savings and engineering productivity.
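Counterfactual measurement, in this context, usually means comparing outcomes against a holdout group that never receives the AI treatment, so the lift attributable to the model can be separated from everything else going on. As a purely illustrative sketch (the metric and the numbers below are invented, not drawn from any panelist's actual methodology), the core comparison can be as simple as:

```python
# Hypothetical illustration of holdout-based ("counterfactual") impact measurement.
# Metric names and numbers are invented for the example, not drawn from the panel.
from statistics import mean

# Resolution times (minutes) for customer-service tickets, split by whether the
# agent had GenAI assistance (treatment) or was in a randomly assigned holdout.
treatment = [8.2, 7.5, 9.1, 6.8, 7.9, 8.4]
holdout = [11.4, 10.2, 12.1, 9.8, 11.0, 10.6]

lift = mean(holdout) - mean(treatment)   # minutes saved per ticket
relative_lift = lift / mean(holdout)     # fraction of baseline time saved

print(f"Average handling time without AI: {mean(holdout):.1f} min")
print(f"Average handling time with AI:    {mean(treatment):.1f} min")
print(f"Estimated impact: {lift:.1f} min saved per ticket ({relative_lift:.0%})")
```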
2. ROI vs. Return on Risk
Cillian introduced a concept that resonated across the panel: the tradeoff between Return on Investment and Return on Risk. Short-term financial gains may be tempting, but AI systems that erode user trust, lack ethical guardrails, or amplify bias carry long-term costs that are harder to quantify—but no less real.
From developing human-in-the-loop testing frameworks to stress-testing LLM outputs for fairness degradation, panelists shared how they’re proactively addressing risk before models go live. Organizations are learning to prioritize fewer, more strategic AI initiatives—investing in governance up front, rather than cleaning up later.
3. Shadow AI & Governance Maturity
As GenAI tools become more accessible across teams, “shadow AI” is quickly emerging as a risk vector, with models developed outside central oversight. Citi and Mastercard both described how they’ve implemented approval workflows, platform constraints, and cross-functional forums to manage this challenge. The consensus: centralized governance doesn’t have to slow innovation, but it does need to evolve to meet new realities.
Panelists also discussed the importance of robust data infrastructure—MLOps tooling, lineage tracking, and permissioning workflows—as foundational to scaling AI responsibly.
4. Ethical Deployment Is a Data Problem First
Perhaps most important, the group recognized that ethical and effective AI starts with knowing your data. Whether it’s compliance with evolving global regulations or managing reputational risk, every successful AI initiative is built on a reliable understanding of where data lives, how it’s used, and what risks it introduces.
Ethyca’s Role in the AI-Driven Enterprise
At Ethyca, we were honored to facilitate this discussion—and to support the growing community of data leaders building and operationalizing the next generation of AI systems. Our privacy product suite helps enterprises create structured, scalable data catalogs that form the foundation for both regulatory compliance and ethical AI deployment.
We work with some of the world’s largest brands to simplify data governance and automate oversight—so their teams can focus on high-impact innovation without compromising trust.
To learn more about how Ethyca supports data, compliance, and privacy teams, get in touch with us here.