Artificial Intelligence

Agentic AI is here: Three questions financial services leaders need to answer now

Executive Summary: Agentic AI is rapidly moving from experimentation to operational deployment. Its greatest value lies in orchestrating work across systems. Financial services leaders must now decide what they’re prepared to delegate to agentic AI — and under what conditions.

Agentic AI is moving quickly from experimentation to deployment.

It is beginning to coordinate workflows, operate across systems, and execute tasks with increasing autonomy. But as agentic AI’s role in business operations expands, it is creating a new set of strategic, operational, and governance issues for financial services leaders.

In this article, Joseph Lo, Broadridge’s Head of AI Innovation, shares his perspective on the important questions that financial firms now need to address.

Q1: Why is agentic AI such a big deal now? What changed in the last 12 months?

For the past several years, AI has largely played the role of an assistant. It could be prompted to summarize information, to generate content, to surface insights, or to automate isolated steps in a workflow. Valuable, but bounded.

Agentic AI changes that role. Instead of responding to prompts, it plans work, executes multi-step actions, interacts across systems, and makes adjustments based on outcomes. In short, agentic AI shifts artificial intelligence from supporting work to driving it.

To some leaders, this may sound like another turn of the AI cycle. But the real shift over the past 12 months is who’s doing the orchestrating. We’ve gone from humans directing the AI to AI starting to direct the work itself — and that’s a fundamental shift.

Firms are starting to apply agentic approaches to real operational processes, particularly where work is high-volume, exception-driven, and coordination-heavy. AI agents are routing cases, gathering context across systems, initiating follow-up actions, and handling routine exceptions — escalating complex or judgment-dependent issues to humans only when necessary.

Unlike traditional AI and workflow tools — which operate step by step as tasks are triggered, completed, and passed along — agentic systems monitor conditions, act when thresholds are met, and keep processes moving without constant human intervention.
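That monitoring loop can be sketched in a few lines of Python. This is a hypothetical illustration, not any specific product's API; the thresholds, field names, and actions below are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    case_id: str
    age_minutes: int
    risk_score: float  # 0.0 (low) to 1.0 (high); illustrative scale
    actions: list = field(default_factory=list)

# Illustrative thresholds -- in practice set by risk and compliance functions.
AGE_THRESHOLD = 30    # minutes a case may sit before the agent follows up
RISK_THRESHOLD = 0.8  # at or above this, escalate to a human

def agent_pass(open_cases):
    """One monitoring pass: act when thresholds are met, escalate otherwise."""
    for case in open_cases:
        if case.risk_score >= RISK_THRESHOLD:
            case.actions.append("escalate_to_human")
        elif case.age_minutes >= AGE_THRESHOLD:
            case.actions.append("initiate_follow_up")
        # Below both thresholds: keep monitoring, no intervention needed.
    return open_cases

cases = agent_pass([
    Case("C1", age_minutes=45, risk_score=0.2),
    Case("C2", age_minutes=10, risk_score=0.9),
    Case("C3", age_minutes=5,  risk_score=0.1),
])
print([c.actions for c in cases])
# [['initiate_follow_up'], ['escalate_to_human'], []]
```

The point of the sketch is the inversion: nothing here waits for a task to be handed to it. The agent runs the loop, and humans enter only when the risk threshold is crossed.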

When AI begins coordinating the flow of work end-to-end, it moves from the periphery of the organization into its operational core. At that point, agentic AI is no longer just a technology story; it becomes a leadership concern. It changes how decisions are delegated, how risk is managed, and how execution scales.

Q2: How do firms move beyond the pilot stage and capture real business value from agentic AI?

Many organizations experiment enthusiastically, only to see their initiatives stuck in pilot mode with impressive demos but limited impact.

The difference between pilots that stall and deployments that scale isn’t sophistication but relevance.

Most pilots fail because the problem isn’t big enough. If the AI doesn’t materially change how work gets done, it won’t scale no matter how impressive the technology looks.

Agentic AI creates value when it’s applied to work that is repetitive, judgment-heavy, and operationally central. In financial services, that often means inbound workflows, exception management, investigations, reconciliations, and service interactions where humans spend more time coordinating than deciding.

Firms that make progress also start with clarity about what they want to change. Traditional AI and GenAI can deliver value at the task level: generating content, summarizing information, or accelerating individual steps in a process. Agentic AI, by contrast, only shows its full value when applied to the flow of work itself. Leading firms focus on where work slows down, piles up, or breaks across handoffs and systems.

One example is post-trade exception management, where volume, variability, and manual coordination make traditional automation brittle. Early attempts to apply AI at the task level helped teams identify and understand issues better, but they rarely changed outcomes.

With an agentic approach, responsibility shifts to the workflow itself. Agents monitor inbound breaks, gather context across multiple systems, determine likely root causes, initiate corrective actions, and track issues through to resolution — escalating them to humans only when predefined risk or judgment thresholds are crossed. The value doesn’t just come from better prediction but from reducing handoffs and accelerating decision making throughout the process.
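The flow described above can be sketched as a single function. Everything here is a simplified assumption for illustration: the system names, the root-cause rule, and the risk scores are hypothetical stand-ins for what would, in practice, be models and enterprise integrations.

```python
def handle_break(trade_break, systems, risk_threshold=0.75):
    """Hypothetical agentic flow for one post-trade break (names illustrative).

    Gathers context across systems, infers a likely root cause, and either
    initiates a corrective action or escalates to a human reviewer.
    """
    # 1. Gather context across multiple systems (e.g. booking, settlement).
    context = {name: sys.lookup(trade_break["trade_id"])
               for name, sys in systems.items()}

    # 2. Determine a likely root cause (stubbed here as one simple rule;
    #    a real agent would use models and richer reference data).
    if context["booking"]["qty"] != context["settlement"]["qty"]:
        root_cause, risk = "quantity_mismatch", 0.3
    else:
        root_cause, risk = "unknown", 0.9

    # 3. Act or escalate based on a predefined risk threshold.
    if risk >= risk_threshold:
        return {"root_cause": root_cause, "action": "escalate_to_human"}
    return {"root_cause": root_cause, "action": "auto_correct_and_track"}

# Minimal stub systems to exercise the flow.
class Stub:
    def __init__(self, record): self.record = record
    def lookup(self, trade_id): return self.record

systems = {
    "booking":    Stub({"qty": 100}),
    "settlement": Stub({"qty": 90}),
}
result = handle_break({"trade_id": "T1"}, systems)
print(result)
# {'root_cause': 'quantity_mismatch', 'action': 'auto_correct_and_track'}
```

Even in this toy form, the structure shows where the value comes from: the handoffs between gathering context, diagnosing, and acting collapse into one automated pass, with escalation reserved for genuinely uncertain cases.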

At that point, agentic AI stops being a pilot and starts quietly running the work behind the scenes.

“Agentic AI forces leaders to decide what they’re willing to delegate, and under what conditions.”
Joseph Lo, Head of AI Innovation, Broadridge

Q3: As AI agents take on more execution and decision making, how do leaders stay in control?

As agentic AI becomes more capable, the conversation inevitably turns to risk. What happens when AI systems act independently? How do firms remain accountable in regulated environments? And where should humans stay firmly in the loop?

But those questions start with the wrong premise.

Agentic AI forces leaders to decide what they’re willing to delegate, and under what conditions. Control doesn’t come from stopping agents from acting — it comes from designing the guardrails they operate within.

Firms that do this well define where agents can act independently, where approval is required, and how decisions will be monitored. Clear visibility into agent actions is essential for trust, auditability, and regulatory confidence.
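One way to make such guardrails concrete is a declarative policy that enumerates what an agent may do on its own and what requires sign-off. The action names and structure below are hypothetical, a sketch of the pattern rather than a prescribed control framework.

```python
# Hypothetical guardrail policy: actions an agent may take autonomously,
# actions that require human approval, and anything else is refused.
GUARDRAILS = {
    "autonomous":     {"retry_settlement", "request_missing_document"},
    "needs_approval": {"amend_trade", "release_payment"},
}

def authorize(action, approved_by_human=False):
    """Return True if the agent may execute `action` right now."""
    if action in GUARDRAILS["autonomous"]:
        return True
    if action in GUARDRAILS["needs_approval"]:
        return approved_by_human
    return False  # actions outside the policy are never executed

print(authorize("retry_settlement"))                       # True
print(authorize("amend_trade"))                            # False
print(authorize("amend_trade", approved_by_human=True))    # True
print(authorize("delete_audit_trail"))                     # False
```

The design choice worth noting is the default: anything not explicitly permitted is denied, which keeps the burden of proof on expanding autonomy rather than on restraining it.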

This isn’t a technology-only challenge. Staying in control requires collaboration across business leadership, risk, compliance, and technology. As agents become embedded in core workflows, governance models must evolve alongside them, shifting from after-the-fact oversight to continuous supervision.

What should leaders do next?

Agentic AI won’t transform financial institutions overnight, but the momentum is clear. Broadridge’s 2026 Digital Transformation & Next-Gen Technology Study finds that 57% of firms are currently making moderate to large investments in agentic AI, a sign that the market is moving toward a real strategic commitment. To translate that momentum into business value, firms should focus on:

  • Creating safe environments in which to experiment: Use sandboxes where agents can operate against real workflows, with clear boundaries and human oversight.
  • Strengthening their foundations: Develop clean data, observable systems, and architectures that allow agents to interact reliably across applications.
  • Embedding governance early: Define accountability, escalation paths, and controls as autonomy increases — not after issues arise.
  • Starting where the work is heaviest and most relevant: Focus on processes dominated by coordination, exceptions, and manual handoffs, not novelty use cases.

Ultimately, the goal isn’t to deploy agentic AI as a concept but to embed it into how work gets done.

We’ll know agentic AI has really arrived when we stop talking about it. When the value is clear, the work just flows, and it becomes part of how the organization operates.