The AI‑powered asset manager: Redefining investing, risk, and client engagement

So now we're gonna be talking about AI. Could I please have Roger Burkhardt, CTO of Broadridge, join me on the panel for a quick chat before we begin? Roger, welcome to the panel. Thank you for joining us. Obviously there have been huge changes in the use of AI. Financial institutions have been using AI for quite a while, actually, but now generative AI seems to have supercharged things. Just to take stock:

Where are we seeing the biggest changes right now, with your clients and in your own organization, and where might we be heading next? It's great to be here. It's worth reflecting on how long we have been using AI in this industry, and then perhaps coming to the second part of the question. I started using AI as CTO at the New York Stock Exchange for market surveillance: looking at people running pump-and-dump schemes on social media, looking at people trying to mark the close with patterns of trading. And that was in the early 2000s, right?

So over the last 25 years, AI has been through some winters and some summers and some springs. But what really changed, as everyone in this room knows, was November 2022, with ChatGPT. Suddenly pretty much everyone on the planet with any kind of compute capacity could get access through a very simple chat interface. To me that was probably the most revolutionary aspect of it: the democratization of AI. As you probably know, OpenAI got to a hundred million users in two months. TikTok took nine months. Netflix streaming took ten years.

So the pace of adoption, the democratization, is actually the biggest change. But then you've got to recognize that these models are enormously powerful. When I look at what I'm seeing in our business, both in the way we use AI to make ourselves more effective and serve our clients better, and in how we're embedding it in our products, the first thing is just breadth of adoption. 85% of our 15,000 associates use AI regularly. We have our own platform that provides them access to different types of models, both OpenAI-type SaaS models and locally hosted ones.

And we're seeing really tremendous improvements in productivity. I loved the fireside chat by Rob Goldstein yesterday. He was talking about how, historically, to be an Aladdin user you had to be a bit of an expert, and he was saying that a really interesting use case for them is to have an AI be the expert you turn to to figure out how to use Aladdin. And that was our experience too. We first launched a gen AI product in June of 2023, a product called BondGPT. It basically allowed a bond trader to do pre-trade bond analytics through a completely natural language interface, right?

They didn't have to know their way around a particular terminal. Many of them were expert Bloomberg users, so it's not like they didn't have good tools. But what we found was that this natural language interface to data was a really important new thing, and it allowed us to roll it out to nine different products over the last two years. So one area we're seeing is just providing more rapid access to data.

And then another important area, and I'm sure we'll talk about it at greater length with the panel, so I won't chew up too much time here, is extracting data from documents. Sometimes it's very complex data: you've got a private credit contract, it's 200 pages long, and the subordination clauses really matter. All the details really matter.
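In sketch form, that extraction step might look like the following. This assumes an OpenAI-style Python client; the model name, prompt, and JSON shape are illustrative, not a description of any panelist's actual product.

```python
# Hypothetical clause-extraction pass over a long credit agreement.
# Assumes the `openai` Python SDK; prompt and model name are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXTRACTION_PROMPT = (
    "You are assisting a credit analyst. From the contract excerpt below, "
    "extract every subordination clause verbatim. Return JSON: "
    '{"clauses": [{"text": "...", "section": "..."}]}. '
    "If there are none in this excerpt, return an empty list."
)

def extract_subordination_clauses(contract_text: str, chunk_chars: int = 12_000):
    """Chunk a ~200-page contract and collect candidate clauses for human review."""
    clauses = []
    for start in range(0, len(contract_text), chunk_chars):
        chunk = contract_text[start : start + chunk_chars]
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            response_format={"type": "json_object"},
            messages=[
                {"role": "system", "content": EXTRACTION_PROMPT},
                {"role": "user", "content": chunk},
            ],
        )
        clauses.extend(json.loads(resp.choices[0].message.content)["clauses"])
    return clauses  # surfaced to the analyst, not acted on automatically
```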

Having an AI do most of the work in extracting that information and then putting it into an environment where the analysts can do credit analysis is one of those really powerful data extraction use cases. So we're seeing a lot around data extraction, we're seeing a lot around automating operations, and we're seeing a very broad-based productivity boost. Very interesting. And before we welcome the panel, very quickly, I wanted to ask you: how is it affecting how you and other firms hire right now?

It's interesting. I know we had a discussion about this, and the other panelists have lots of great experience here too. I think it's most obvious in early career hiring. Stanford put out a study about a month ago based on very granular payroll data. They looked at workers aged 21 through 26 and compared professions that are heavily affected by AI, like software development or research analysis, with professions that are less affected.

And they found a 13% decrease for the most affected professions. So I think we have to start rethinking some of those early career roles. We're gonna expect more of these folks. We're not just gonna bring someone in from university and ask them to do grunt work anymore, as we've traditionally done, quite candidly, because that kind of grunt work can and should be automated. Instead, we're gonna require them to know how to do prompt engineering and, most important, to be good critical evaluators of what some AI is doing.

'cause we all know that generative AI can make mistakes, and so having that mindset of critical assessment is gonna be really important. But we'll have to rethink these early career roles. Really interesting, and we'll come back to that in the panel. Can I please welcome the rest of the panel on so we can get this discussion fully started?

We have Umesh Subramanian, CTO of Citadel, and Cinda Whitten, head of Global Investment Operations at Nuveen. I was wondering if we could start with you, Cinda. How is your organization changing right now? We just heard, for instance, that operations are transforming rapidly. What sort of efficiencies are you seeing? Can you quantify it somehow for our audience?

Yeah. I think we're, quite honestly, in very early days of being able to quantify. What I would say, however, is that the early use cases, as you talked about, are all around data extraction and how you get efficiencies. The private credit example you used is a great one; we're in the middle of that right now at our firm. And you can think about it from the end-to-end value chain if you start to think about documents like the covenants, right?

In private credit, from the credit analyst clear through to how you need to use that information in operations, that use case becomes really powerful end to end. So we're seeing a lot of opportunity in the data extraction and data enrichment space, and in how you use that data in a more automated way. And as we go into 2026, we're really going to shift to look at agents and agentic capabilities in the operations space. So, for example, intelligent recon. Not that operations associates aren't themselves intelligent, but how do you put an agent in place to do all of the research and serve up the information, so the analyst can spend their time saying, okay, this is the right decision, and then take that next action and that next decision? Which I think is important. But we're in the early days of that.
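A minimal sketch of that intelligent-recon pattern (hypothetical code, not Nuveen's implementation): an automated pass matches the records, attaches a suggested explanation to each break, and leaves the decision to the analyst.

```python
# Toy reconciliation: match internal records against custodian records,
# attach suggested explanations, and leave the final call to an analyst.
from dataclasses import dataclass

@dataclass
class Txn:
    trade_id: str
    amount: float

@dataclass
class Break:
    trade_id: str
    internal: float | None
    custodian: float | None
    suggestion: str = ""

def reconcile(internal: list[Txn], custodian: list[Txn]) -> list[Break]:
    ours = {t.trade_id: t.amount for t in internal}
    theirs = {t.trade_id: t.amount for t in custodian}
    breaks = []
    for tid in ours.keys() | theirs.keys():
        a, b = ours.get(tid), theirs.get(tid)
        if a != b:
            brk = Break(tid, a, b)
            if a is not None and b is not None and abs(a - b) < 0.01:
                brk.suggestion = "Rounding difference; propose auto-clear."
            elif a is None or b is None:
                brk.suggestion = "One-sided record; check for late booking."
            else:
                brk.suggestion = "Amount mismatch; pull confirms for review."
            breaks.append(brk)
    return breaks  # served up to the analyst, who takes the action
```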

And how long do you think that might take to get fully implemented? Do you have a rough timeline? Oh, I wish I had a crystal ball. I don't. I think the good news is, and you heard Jose talk about this yesterday with agents, Bank of New York may have a hundred now, and it'll be a thousand by the end of the year, right?

Just using rough estimates. And they're using it in the reconciliation space, for example. So I think the good news is that in the asset servicing space they're a little further along than asset managers, and in the software space they're a little further along still. I think we'll benefit from that. But whether that takes six months, twelve months, or longer, I don't know.

What I can tell you is that adoption, as you indicated, the rate of adoption, has dramatically increased in the business. And I think part of it is, as you said, the democratization of AI and ChatGPT. Everybody's using it in their personal lives, and that carryover from personal life to professional life makes the adjustment a little bit easier. I think that'll have a really profound impact.

One of the use cases we were chatting about before we came on was email, the bane of operations. Umesh is working really hard to stop anyone sending email, but most of us can't; we don't have a choice, right? We have a thousand people in our BPO: obviously we provide software to the industry, but we also provide outsourced services. And in our BPO, we found that having an AI platform to make sense of those thousands of emails, eliminating duplication (like a 50% reduction right off the bat, just matching duplicate chaser emails to their primary emails) and then routing them to the right people, was really powerful.
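A toy version of that triage pipeline is sketched below; the near-duplicate threshold and team names are made up, and the keyword routing stands in for what would really be an LLM classifier.

```python
# Toy email triage: drop near-duplicate chaser emails, then route the rest.
import difflib

def is_duplicate(body: str, seen: list[str], threshold: float = 0.9) -> bool:
    """Flag chasers that largely repeat an email already in the queue."""
    return any(
        difflib.SequenceMatcher(None, body, prior).ratio() > threshold
        for prior in seen
    )

def route(subject: str, body: str) -> str:
    """Pick a destination team (illustrative names; keyword stand-in for an LLM)."""
    text = (subject + " " + body).lower()
    if "settle" in text or "fail" in text:
        return "settlements"
    if "dividend" in text or "corporate action" in text:
        return "corporate-actions"
    return "client-service"

def triage(emails: list[tuple[str, str]]):
    """Yield (subject, team) for each non-duplicate email."""
    seen: list[str] = []
    for subject, body in emails:
        if is_duplicate(body, seen):
            continue  # duplicate chaser: attach to the primary, don't re-queue
        seen.append(body)
        yield subject, route(subject, body)
```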

And so, although we'd like to have no emails, until you can make that happen for us, that's a really powerful use case. We've actually just recently invested in the platform we use for that; we think it's a really big one. Umesh, your organization's obviously very focused on human talent, attracting and retaining human talent.

How is it transforming, if at all, that sort of core process? So I work for Citadel. Citadel's a hedge fund, and I'm the CTO. So what do we do for a living? We have a small systematic investment pocket, which is quantitatively driven, taking bets in the market entirely systematically.

The models are trained, and like you said, that segment has been using machine learning for a long time, so we're not gonna talk about that. We're gonna talk about the rest of our businesses, which are discretionary businesses. That means we look at the equity markets, the macro markets, the fixed income markets, the commodities markets, and try to form a fundamental view of where we think a particular asset is going, and then take bets on it. So the research process that drives the fundamental process becomes really important. Given that that's what we do, for the kind of people we hire in investing, the kind of people we hire with quantitative skills, and the kind of people we have hired with software skills, I honestly don't think it has undergone much of a change. We've always hired not for a job, but for a career.

And we always expect the person we hire to be able to move out of the job into a higher-level job of their own accord. So the core hiring principle has not changed. On the margin, has it changed? Sure. If I break the segment into three parts, especially with AI: a core set of people who are actually tuning the models, a set of people who are using existing models to change the workflows around them, and then everybody else. Categories two and three are unaffected. On the margin we're hiring, call it 2 to 3%, and even that's an overstatement, of people that belong in category one. That's the way I would describe it.

Yeah. And in terms of the actual work, having your analysts supporting the portfolio manager, coming up with ideas, building models, analyzing stocks, how has that changed, if at all? Is there greater efficiency, in that you can cover more names if you're an analyst? Is it just quicker to build models? Do you potentially need fewer analysts? So that's a good question, right?

That's where I think the real money is. In my business, an analyst can cover somewhere between 50 and 80 stocks, and that's a lot of companies. When I'm thinking about covering 80 companies, I'm thinking about the news, the research reports that come out, the filings that come out. And it's not just those companies that matter; it's the entire supply chain around them.

So the amount of information that I need to consume to be able to make a decision, change my models, and then have conviction behind it is a lot. Now, the way we are changing that process: to the extent that we can remove friction from it, that's a benefit; and to the extent that you can create, not certainty, but conviction around an outcome, we will size up our bets. If you think about those two vectors, what AI is doing is reducing the friction. People ask me, is it going to automate the analyst? And I don't think so.

I'm not of that camp. I'm of the camp that AI is not trying to predict the future; it helps you see the present far more clearly and quickly. When you bring that into the equation, it's another tool in the toolbox for an analyst or a portfolio manager to cover a plethora of stocks, consume the right set of information quickly, and make decisions with more conviction. So it makes the flywheel go faster. Yeah. Very interesting.

I agree with that. And I don't think we should think about it as fewer people; maybe in the future you don't need to add as many. But I think about it as the ability to cover more. To your point, if an analyst can cover 15 to 20 names, now maybe they can cover 50, and maybe those other 35 are names you weren't covering before, or weren't covering at a very granular level.

So think about how information is power in decision making, and how better coverage across the universe can actually turn into alpha from a returns perspective. I think about it as covering more and being able to go deeper in that decision making. Has that been a focus among clients, Roger? Yeah, I agree with those comments. You can also look at just the language capability, right?

To the extent that you're trying to look at different markets where there isn't the same kind of English-language coverage, the ability to get language translations is pretty straightforward and very powerful.

I do think that most of us in the room are experiencing AI as helping us on a task level. Right now it's: summarize that release; extract the key terms from this prospectus. But I think we're now evolving towards a situation where we'll have a little team of agents doing more work than something very bite-sized and task-oriented. There was an interesting paper that came out recently from some BlackRock folks, not an official BlackRock paper, but people from BlackRock, who had set up a little agentic system. The system would run three different analyses: a traditional fundamental analysis, a sentiment analysis, and a technical analysis, and then the agents would challenge each other a bit.

You know, two of them will say buy, one of them will say sell, and there'll be a little back and forth. Now, whether that's really gonna move the needle tomorrow, I don't know. But what it speaks to is the ability to delegate a rather large piece of work and to get back different points of view, to explore some different perspectives. So we've tried that in software development. You can create a specification and have an agent code to that specification.

You can have another agent write tests to that specification, and another one try to break it. So you have all these things competing, doing something to the code base, creating things in the test space, and you see how it develops. It's very interesting. It's early days.
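The loop both panelists describe, in skeleton form; the llm() function here is a placeholder for whatever model or agent framework is actually in use.

```python
# Skeleton of the competing-agents pattern: one agent codes to a spec,
# another writes tests, a third tries to break it; iterate until green.
def llm(role: str, prompt: str) -> str:
    """Placeholder for a real model call (OpenAI, Anthropic, local, etc.)."""
    raise NotImplementedError

def build_to_spec(spec: str, max_rounds: int = 5) -> str:
    code = llm("coder", f"Implement this specification:\n{spec}")
    tests = llm("tester", f"Write unit tests for this specification:\n{spec}")
    for _ in range(max_rounds):
        attack = llm("breaker", f"Find inputs that break this code:\n{code}")
        verdict = llm(
            "judge",
            f"Spec:\n{spec}\nCode:\n{code}\nTests:\n{tests}\nAttack:\n{attack}\n"
            "Reply PASS or describe the failure.",
        )
        if verdict.strip().startswith("PASS"):
            return code  # still reviewed by a human before merging
        code = llm("coder", f"Fix the code given this failure:\n{verdict}\n{code}")
    raise RuntimeError("No passing version within budget; escalate to a human.")
```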

It's promising. And I would also caution that it is also potentially costly and wasteful. We've gotta make sure that in trying all these things we're not just burning tokens and burning money, and that we know when not to do more of it.

Yeah. Does that make sense? Yeah. We talked about always using the simplest performant tool for the job. Right? Exactly.

And there is a danger that everyone always goes for the latest, sexiest piece of AI. We were talking about emails, and we were having this conversation: okay, now we're gonna have AI agents read emails, but I have my AI agents creating those emails and sending them to your email. It's almost not a joke, because I think it's happening right now, right here, while a simple API could have solved the problem, and we don't seem to take that step. I think there's a big risk in applying AI when something underlying is not properly structured or thought through, and just painting over it with a broad stroke to make it look easier. That is a risk that you take.

Yeah. And if I can just add to that, one of the challenges is that it's so easy to create things with gen AI these days, but testing them is not necessarily that easy. That's right. Take BondGPT: it took us three months to get the product to market, but probably about a year to really get our arms around what it means to test a non-deterministic product.

So there is a kind of tail of expense with some of these things. And if that was the case with a single large language model, and we're now talking about six agents talking to each other, of course the testing is gonna become even more challenging. So we have to recognize those real costs. Yeah. Are you worried that with the introduction of these agents, and we had a little bit of that there, agents talking to each other, agents debating, agents emailing each other... You know, asset managers have obligations to their investors, and they have obligations to regulators.

Are you worried that as you equip your organizations with these advanced tools, you're starting to lose control of critical functions and critical decision making, and accountability starts to go? Are you concerned about that, Umesh? Sure, we're totally worried about that. Our investors hold us to a standard: that we take care of the investments we make, that we manage the risk appropriately, and that we're not becoming intellectually lazier. It is easy to think that AI is a multiplier.

It's not the force; we bring the force to the equation. So I'm worried about that. We internally look at how people are using AI tools to see: are they using it as a tool to get the information they need and the things that AI does best, or are they potentially crossing the line of offloading judgment? And that's not something that we want, right?

And in a very competitive active investing market, a small shift of two or three degrees could be a big shift in our returns, and we don't want that. So we need to be precise about that. We are setting standards around it, and we are monitoring it, at the end of the day holding investors and portfolio managers entirely accountable for the decisions they take. You can't just say, AI ate my lunch. That's not a possibility. Yeah.

Cinda, are you worried about that? Yeah, I don't know if I'm worried about it so much as I think it's gotta be at the forefront of every decision we're making. One of the things I think we did right is that we really started with a responsible AI policy. And as we thought about setting up AI governance, it really was with risk, compliance, and audit all around the table, thinking about how we put the right guardrails in place to protect our clients and participants, because we have a fiduciary responsibility, right? So I think that's been at the forefront, quite frankly, even before our implementation.

One of the things I'm thinking a lot about, as we're getting ready to take the next steps on the AI journey, is two questions. I asked these guys this morning: how do I know the right tool for the job? Which is what we just talked about, right? Because there's this bright shiny thing out there that everyone wants to use. Well, how do I know that it's right and fit for purpose? And then the second question is, how far do I take it?

Where I draw the line is decision making. Collecting information, putting together analysis, serving that up in a more efficient way, I think can be very powerful. But the decision making still has to sit with an individual, in my opinion. Yeah. And I think we're all pretty much aligned on this panel that accountability stays with a human.

There is a human in the loop for all these decisions. So then the question for me becomes: how do you surface the information to the decision maker so that they can be good at their job? And ironically, I think it becomes harder and harder the more accurate the AI is. We have some data extraction use cases, we do a lot of that for the industry, and if the AI gets it right nine times out of ten, the human who's checking it pays more attention than if the AI gets it right 999 times out of a thousand, right?

Yeah. So we gotta avoid a situation where people just click yes, yes, yes. And that gets to questions about design. Can you give some kind of confidence level?

Can you highlight some things that maybe they should look at? When do you need four eyes? Should you have two different agents, or two different prompts, do the same job and look at the differences? We do that also. So this whole question of how we design the human-in-the-loop piece so that the humans can be effective matters, as does hiring people who are skeptical and not inclined to just click yes, yes, yes, because that can be an issue.
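One hedged sketch of that "two agents, same job" idea: run the same extraction twice with independent prompts or models, and route any disagreement to a human. The extract() call is a placeholder, not a real API.

```python
# Sketch of a four-eyes check for AI output: two independent runs,
# with disagreements surfaced to a human reviewer.
def extract(document: str, variant: str) -> dict:
    """Placeholder for an LLM extraction call using prompt or model `variant`."""
    raise NotImplementedError

def four_eyes_extract(document: str) -> tuple[dict, list[str]]:
    """Run the extraction twice, independently, and surface disagreements."""
    a = extract(document, variant="prompt_a")
    b = extract(document, variant="prompt_b")
    disputed = sorted(k for k in a.keys() | b.keys() if a.get(k) != b.get(k))
    # Anything disputed goes to a human queue; spot-check agreements too,
    # since two models agreeing is evidence of correctness, not proof.
    return a, disputed
```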

So I'll just put this into the investment risk-taking case and everything that is post-investment, like post-trade. We do let the computers make decisions on the margin; that is our systematic business. And we do that when it can be backtested, which means there's a probability that we get it right and a probability that we get it wrong.

We are taking risk in the markets, but we can see what this would have done in the past. Now, if you think about AI, the core model itself is moving so fast, and the data that is the context to the core model is also moving fast. So if you think about automated investing, I've been asked the question before: can we automate discretionary investing with AI?

How do I think about that? If I can actually systematically backtest it, it'll be part of my systematic business. Great. So there's always a room for this

and another room for that. That's the investment side. On the post-investment side, I agree with exactly what you both said: how you insert a human in the loop to create efficiency is important, and where we can let AI truly automate a task is important. And where you do use it to automate a task, whether there's another, more structured way to automate that task is an important question to have asked and answered.

And one thing we really should say is that there is a tendency to go and automate the current task. That's not the goal, and we don't wanna automate a bad process, right? So one of the things we're looking at at Broadridge is, if we have four or five different organizations doing something like a customer service function, because we've grown a lot by acquisition, could we consolidate that and have one customer service organization?

Then how do we simplify the process? How do we re-engineer the process? Let's hold ourselves to doing those things before we run around automating, right? Yeah, absolutely. It is important to do those steps first. And because we grew a lot by acquisition, we have quite a lot of functions where there isn't a single head of customer service, and we're taking action on that so that we can be effective.

Yep. What sort of infrastructure build-out does this actually require, Umesh? You mentioned the cost of deploying some of these things, and that sometimes people might be a cheaper way to do something versus just asking an AI and burning all these tokens. What sort of infrastructure are you using? Is it the cloud? Are you renting or buying chips, or building data centers? How are you navigating that?

So we're definitely using the cloud. It really depends on whether you're trying to tune and train a model, or whether you're trying to use a model and let it influence a workflow of some sort, or whether you're in the systematic business and truly trying to use a machine learning model, at which point you need GPUs. We have all of the above. In my world, we have a significant investment in on-premises hardware and a significant investment in the cloud.

And in the cloud we have disparate investments in the kinds of things you would use to train models. On GPUs, there's a set of use cases; we're public about the fact that one of the things we do at Citadel is predict weather. Why do we do that? We can go into that if you want, but it is a high-performance computing problem.

It's an HPC problem that requires a very specialized setup, so we do that as well. At the end of the day, going back to the right tool for the job, I think we need the right kind of infrastructure and support for the job. And we want to make sure that the people employed at Citadel are doing the most value-added work for us on the margin. There's no point doing things ourselves if we shouldn't be doing them, right?

So getting cloud vendors and infrastructure to come from outside, serviced the way we want, so that we can focus on using these things rather than operating them, becomes important. Yeah, we very much are cloud first. Our AI platform is built once for the whole of Broadridge, which is a change for us; we've pivoted over the last couple of years to being very much a platform company. Many of you may know Broadridge has done a lot of acquisitions, and we've got a lot of different tech stacks through those acquisitions. We made a decision at the board level.

We wanted to have a single platform and make life much simpler for our clients. So if you want us to settle a trade, there's one API; behind that there may be 13 different systems, and it's up to us to work out which one does it. So: cloud first, platform first. I've actually been struck by how relatively inexpensive some of these fine-tuning approaches are. To be clear, we generally use off-the-shelf models, the simplest possible solution, but we have found situations where if we fine-tune a model, we get better results. A concrete example recently: we have a very high-performance proprietary trading platform called Tbricks.

People use it for options market making and vol trading. It's very low latency, 50 microseconds from tick to trade, and it's hard to program. If you get an off-the-shelf tool like GitHub Copilot, it's never seen that code; it doesn't know how to complete code for you.

So whereas we were getting very high code-completion acceptance rates from our programmers on GitHub Copilot for all kinds of other systems, when we tried it on the Tbricks platform it was 7%. Only 7% of the time did the programmer say that's a good code suggestion, and that's just no good. So we actually did fine-tune a model. It cost a few thousand dollars, and we got up to about a 90% acceptance rate from the programmers. So I've been surprised at how relatively inexpensive even things like fine-tuning are. Where I've seen the big investment is in sorting out the data. We've spent a lot of time harmonizing our data across different asset classes with a common ontology, and although AI can be somewhat helpful on the margin, it's just a lot of work getting to a consistent use of data.
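For flavor, much of the work in a code-completion fine-tune like the one described is assembling prompt/completion pairs from the proprietary codebase. A hypothetical sketch, not Broadridge's pipeline; the repo path and file pattern are made up.

```python
# Hypothetical prep for a code-completion fine-tune: slice a proprietary
# codebase into prefix/completion pairs and write JSONL for a tuning job.
import json
import pathlib
import random

def make_examples(repo: pathlib.Path, n_per_file: int = 5):
    for path in repo.rglob("*.py"):  # or the platform's own language
        src = path.read_text(errors="ignore")
        lines = src.splitlines(keepends=True)
        if len(lines) < 20:
            continue
        for _ in range(n_per_file):
            cut = random.randrange(10, len(lines) - 5)
            yield {
                "prompt": "".join(lines[max(0, cut - 40):cut]),      # prefix context
                "completion": "".join(lines[cut:cut + 5]),           # next few lines
            }

with open("finetune.jsonl", "w") as f:
    for ex in make_examples(pathlib.Path("tbricks-strategies/")):  # illustrative path
        f.write(json.dumps(ex) + "\n")
```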

So our experience so far is cloud first, platform first, and you just have to invest in your data. Lucinda? I'm very thankful that we have the enterprise of TIAA to think about those things, like the platform. I don't have to wake up every day and think about that.

I get to think about all the ways we get to use and leverage it. But I hear the same themes: cloud first, we're building platforms, and from an AI perspective our enterprise is really thinking about build, buy, or partner, and where we want to be at different places in that journey. How we leverage that in the business is where I come in, thinking about the right tool for the job and working with my tech partners. What sort of issues has the introduction of generative AI on a mass scale brought to organizations, and what risks do you need to control for?

'cause obviously with the introduction of agents, a lot of these things might actually be accentuated. How are you dealing with them? I'll take that one. Well, we took a view two and a half years ago that we wanted to be a leader in the FinTech space. We're not gonna try and compete with Google or Anthropic, that's not us, but in the FinTech space we wanted to be a leader. And we recognized that that meant, for us, platform first, but also learn by doing.

Some of our big banking clients had very long governance processes they had to go through before they could get started. We said: no, we're gonna be agile, we're gonna be risk conscious, but we're gonna move quickly. And one of the things we did was recognize that we needed some common guardrails for all of our 15,000 employees, so that they don't put things they shouldn't into these systems. So we defined some rules, no health information, no credit card information, that kind of standard stuff, but our platform actually checks for them. Rather than relying on many human compliance managers, we embedded controls in our platform for those really straightforward concerns.
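A toy version of one such embedded guardrail appears below: a Luhn check that blocks likely card numbers before a prompt leaves the platform. A real platform would layer many detectors; this is just the shape of the idea.

```python
# Toy platform guardrail: refuse prompts that appear to contain card numbers.
import re

def luhn_ok(digits: str) -> bool:
    """Standard Luhn checksum; valid card numbers pass it."""
    total, parity = 0, len(digits) % 2
    for i, ch in enumerate(digits):
        d = int(ch)
        if i % 2 == parity:  # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def blocked(prompt: str) -> bool:
    """True if the prompt likely contains a credit card number."""
    for run in re.findall(r"(?:\d[ -]?){13,19}", prompt):
        digits = re.sub(r"\D", "", run)
        if 13 <= len(digits) <= 19 and luhn_ok(digits):
            return True  # stop it here, before it ever reaches a model
    return False
```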

Then the other thing, which I referred to earlier, is how we test non-deterministic systems. We've got hundreds and hundreds of people who are very good at testing, but for their whole career it's been: one set of inputs, one right set of outputs, and if you don't get that output, it fails. With these systems, that's not the case; you can have different outputs that are all correct. So we have taken all the learning from those early products, OpsGPT, BondGPT, and others like them, put it into our central QA platform, and told the engineering teams: if you're gonna build a product using generative AI, use this framework to test it, so that we don't have product teams doing it for the first time and getting inadequate testing.
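In skeleton form, one way such a framework can assert on non-deterministic output is to test invariant properties of the answer rather than one golden string; the product call here is a placeholder.

```python
# Sketch of testing a non-deterministic product: since many different
# outputs can be correct, assert properties every correct answer shares.
def answer(question: str) -> str:
    """Placeholder for the generative product under test."""
    raise NotImplementedError

def test_answer_invariants():
    # Sample several times, because the output varies run to run.
    for _ in range(10):
        out = answer("What is the yield to maturity of the XYZ 2030 bond?")
        assert "XYZ" in out                     # stays on the security asked about
        assert any(c.isdigit() for c in out)    # actually quotes a figure
        assert "guaranteed" not in out.lower()  # no prohibited language
```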

Yeah, I agree. When I think about this, I also think about people, process, and technology. You talked about the technology piece. What I would add is that you also have to think about doing this at scale, and what that means from a process perspective. Roger, to your point, you don't wanna recreate a bad process, right?

So you need people who are process engineers, who can really rethink how to create the most effective end-to-end process. And then the people side of it: this is really changing the way people interact with their day-to-day jobs and how they feel about them. What they do really matters. So as you think about taking people on this journey, there's a huge change management component.

Not only training, but clear through to what being an operations leader means. We're having a lot of conversations around what the future of an operations professional looks like and how we re-envision that. So I think we have to really think about the process and people pieces of this, because the technology may be great, but if you don't get those things right, it won't scale. Absolutely. What actually remains the preserve of humans after all of this change has bedded down, I guess?

I love that question; it's one that's been asked for many years. What actually remains the preserve of humans? Is it leadership? Is it portfolio management? What is it that we offer that AI will never be able to?

I'll start and then hand it to you guys. I love this question, and I think it's great to look backwards in order to look forward on this. If we think about history and what we can learn from it: this isn't the first tech innovation we've experienced over the last 50 years. Computers came out, cell phones came out, the BlackBerry, then the iPhone, the rise of the internet.

Look at this room of people. We are all here doing something, and technology has changed dramatically over the last 30 years. So what can we learn from that? We know that with the rise of technology, the workforce still increased. Although I am a little worried, after hearing the end of the last panel, about all the poor drivers in the world who are no longer going to have jobs.

'cause that's a harder problem to solve, I think. But the workforce continued to rise. Standard processes were automated, but people had to rise up the value chain into critical thinking. Our world has become more complex: we are much more of a global society than ever, and geopolitical events impact our markets now more than ever. And then think about product, right?

Product continues to change, and I think what that all means is that AI helps us be more efficient, but humans rise to the next frontier, which is on the client service side. How do we spend more time with clients, helping them solve their problems? How do we think about solutions, and how do we bring our investment capabilities to those solutions for them? How do we provide a richer experience when they're onboarding, so that doing business with us feels easier than it has historically? From an investment management perspective, how do you spend more time actually drawing insights from the data to make the best decisions?

Operationally, how do you think about having a risk mindset, making the processes the most efficient, but doing it with a different set of skills, because you're not executing the process yourself? So listen, I think there are still going to be jobs; those jobs are going to change. That's why the change management piece that accompanies this is hugely important, along with empathetic leadership. I would say, look, if we play this cycle forward, there's a lot of momentum in the way people are thinking about how AI is gonna progress. Some people will say it's hype.

Some people will say it's in its infancy and moving very slowly. Regardless of where you are, let's just play this out; let's assume that the potential end state plays out. I think the things that are the human preserve are, definitively, creativity. A creative person that looks at something, thinks about it, and creates something from nothing is always going to be a human preserve.

They're gonna be aided by computers to do that, but there's a high level of creativity that we believe human beings have, and they're gonna continue powering through with it. The second is judgment, especially, more narrowly, in our business. Play a thought experiment out: assume that it's computers and AI doing almost all investing. If you put a human being in the mix, what would that human being do? The human being would take what the computers can do, what the AI thinks will happen, and study that in order to do something more alpha generating, in some ways more easily alpha generating.

'cause there's a little bit of a herd forming there, right? Usually when herds form, a human can pick that herd out and say: this is not the way to go, that's the way to go. Or: let's get there faster. So if you do that thought experiment, and that person has only AI and machines competing with them, plus access to those same tools, by definition a creative person with high judgment is going to perform better. We can all agree on that, right?

At least in that limiting case, and you can extend it from there: there are always gonna be pockets of creativity and judgment, formed with information that comes through these sources very quickly, which then gets pushed further into the automation stack for you to rise above. That is how I see it playing out, even in the fastest-momentum case. But does the advent of AI tools that are accessible to everyone, other asset managers, retail investors, erode your edge at all? Well, I think you could ask the counter question. If there's a herd that's forming, does it erode my edge or does it actually increase my edge?

That's where you want to be. You could be on one side of the equation, where you are the herd and therefore don't have the edge, or you do the things you need to get out of that herd. Not out of the herd for the sake of getting out of it, but above it. That's the really interesting question.

I wanted to pick up on your point about looking at history. Way back when the ATM first rolled out, there were all these predictions that there would be a catastrophic loss of jobs for bank tellers. Does anyone know what happened to the number of bank tellers? It went up. The number of tellers per branch went down, because you didn't need as many people handling cash, but there were 43% more branches.

So the tellers ended up with a higher, more responsible job, and there were more of them. So I think having an abundance mindset is important, and as we think about the people issues and the communication issues, that's gonna be important to keep our teams comfortable. I thought your comments, both of you, were fabulous, and I won't reproduce them. One thing I would add is that we're seeing the end of the last generation where we're managing a human-only workforce. Even the most junior programmer in the next few years is gonna be managing a set of agents, reviewing code written by agents. Today, junior programmers write code and senior programmers review it.

So the jobs are gonna change a great deal, and I think increasingly we're all gonna be involved in reinventing the work, not just doing the work. You were talking about your citizen engineers, right? Right. And I talked to a head of ops at a major bank the other day who is creating a new job profile called the operations engineer.

Better paid, more tech savvy, all about re-engineering the operational processes. So I think we're gonna move from just executing tasks to much more redesigning our own work. And we've gotta think about how we incent our associates to learn how to do that, and how we give them the confidence to feel that there's gonna be a richer job for them. We also have to keep that little bit of skepticism, right?

Because these systems are rather hard to predict, and we have to have a little humility about our ability to predict them. Yeah, and there will be some bumps along the road, for sure. Cinda, I asked you how far away we would be from agents; Umesh, I was wondering, within the hedge fund, how far away are you from agents actually being implemented within the workflow of analysts or portfolio managers,

and being a fully trusted component there? There's a lot in that question. The reason I say that is, we have AI-powered tools rolled out for our research community, and their adoption is definitely increasing. We are monitoring that increased adoption to make sure we are channeling it in the right places.

But then you said "fully trusted," and that's where the risks begin, because in the probabilistic world of investing, fully trusted is not a thing. If I'm searching for a fact, it needs to be fully trusted. If I'm searching for an opinion, it's not, right? And maybe it shouldn't be. So we're still not at the stage of the last part of your question, where it is fully trusted and can be delegated a lot of things.

It may get there, but it really depends on the questions you're using it for. For software development, for example, it's getting there for smaller tasks. But the question now with agents, like you said, Roger, is: can they take on more and more? Where a team in a different part of the world would do something for you overnight and get it back to you, can the agents do that particular part of the task, so you don't need to do it anymore? We've seen it go both ways.

It's been very good in certain cases, and it's been exceptionally off in a few cases. And the risk of it being exceptionally off, or even subtly off, is real. Yeah, and it's moving so quickly. One of the areas we're applying agent technology is in automating test cases. As we looked across all of our testing, we found a very large number of test cases that were still manual.

This is about a company we acquired that came with a whole bunch of manual test cases, people sitting reading Excel, and we shouldn't be doing that. We're determined to stop doing it, and we have a big investment to make all of those automated. One of the ways we're doing that is with this agent technology, and just in the last three weeks the success rate has gone from 60% to 80%, just with a new release of Opus 4.1. So it really is moving dramatically quickly. And I think the people issues, the change management issues we've been talking about, and the risk assessment issues are ones we really gotta focus on, because this technology is moving very, very quickly.

Yeah, I think it's imminent. A year from now, when this group is back together again, if you did a show of hands of how many people actually have agents as part of their digital workforce, you'd probably have 25 to 50% of the group that have at least started. For us, we will start to employ that in 2026, and we're starting to think about our first use case and how we would do that in the reconciliation space: intelligent reconciliation.

Now, I don't think we'll get to a point of fully trusted; that to me is off the table, but I'm a huge skeptic. But actually using agents to do pieces of the work, to augment and make associates more efficient: I think by June of next year we'll be very much on our journey to embedding that in our workforce. We have two minutes left. Can I have two quick-fire questions? The gentleman in the front here. Anyone else?

I don't see anyone else. Thank you. I've seen a lot of commentary recently about the large language models and how they're evolving, and talking to people in professions like accounting, the feedback is that it's not quite ready for prime time in their fields; the models still hallucinate, and they don't have the latest data. And I'm hearing a new term in AI, SLM, small language models, where people are building on their own database of proprietary information, and that's a lot more useful and can maintain a competitive edge. I'd be curious to hear the panel's thoughts on that.

So first of all, when I said 85% of our associates are using AI, they're using large language models. People are using them day in and day out. So yes, they can hallucinate. When we build products like the data chat product I talked about, we use a large language model to manage the conversation; we don't use it to generate any data. All the data is Broadridge-curated data.

So there are safe design patterns that we and others have created to deal with those issues. On the question of small language models versus large language models: using a 175-billion-parameter model to handle that email I was talking about before is a bit like using a massive sledgehammer to hit a walnut. And of course it's driving enormous power consumption, which I do worry about. So yes, we use small language models sometimes; they're much more cost effective, and you just don't need something that understands every word of text that humans have ever written. There's another technology that's coming out that most of you are probably familiar with.

It's called the Model Context Protocol, MCP. Basically, you have very structured data and services, on top of which you might have built a GUI, a user interface, where you know exactly what you're getting. These models can now interact through MCP to get to the exact data you want. So it almost looks like a very old-school, systematic interaction with a backend service to get predictable answers, interwoven into an LLM setup. That shows a lot of promise.
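A minimal sketch of the server side of that pattern, assuming the official MCP Python SDK's FastMCP interface; the fund-flow tool itself is hypothetical.

```python
# Minimal MCP server exposing one structured data service to an LLM client.
# Assumes the official `mcp` Python SDK; the fund-flow lookup is hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("fund-data")

@mcp.tool()
def fund_flows(category: str, year: int) -> dict:
    """Return curated fund-flow figures for a Morningstar category and year."""
    # In a real deployment this would query the governed backend service,
    # so the model gets exact, predictable data rather than generating it.
    return {"category": category, "year": year, "net_flows_usd_mm": 1234.5}

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; an LLM app connects as the client
```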

And one thing we haven't talked about on this panel, just quickly, is that it's not as if predictive, deterministic models have gone away. Some of the most valuable AI solutions we have at Broadridge are predictive models. For example, our global demand model predicts fund flows across all the Morningstar categories three years out, so that heads of distribution and heads of strategy can make decisions about where to launch a product and where to focus sales. That's totally old-school predictive. There's a bit of a tendency in the technology world to assume any new bright shiny object replaces everything previous. Not true.

Yeah. Amazing. Thank you for the question, and thank you so much to the panel for joining me. Can we give them a thank-you? Thank you.
