

Webinar

AI, ML, and RPA: No Longer SciFi

How capital markets and wealth firms are already leveraging AI and ML in their operations (panel discussion from SIFMA Ops 2019).


Mike Tae [00:00:14] Welcome, everybody. I'll start with a story: I was talking to my wife today, and she was asking me what I was doing, and I said I was talking on the phone with her. No, but she said, "What are you doing at the conference?" I'm like, "One of the things I'm doing is moderating a panel on AI." And she's like, "What's AI, like artificial intelligence? You mean like the Terminator?" I'm like, "Kind of, actually." But really, the purpose of today's panel and discussion is to demystify that: to talk about artificial intelligence, machine learning, and robotic process automation, and the fact that it's not just science fiction. There are actual, transformational things that banks are applying and implementing, and so we have three individuals on the panel today, which I'm really excited about, to talk through some of the things that banks are doing to drive the adoption of these technologies. I just want to point out that at Broadridge, we did a survey at last year's SIFMA Ops, and most firms are already doing this; they're already getting their toes in the water. Eighty percent of respondents said that they're doing at least some form of assessment around AI, ML, and RPA. But there's a gap, right? Because the assessment is there, and then there's the actual doing. Only 22 percent of firms are actually in production, so there's a lot of room for industry participants to pick up best practices. That's why I'm really excited for the conversation today. So, three panelists. First is Bob Anselmo; he's a managing director and head of technology for global wealth management at UBS. We have David Easthope; he's the head of capital markets at Celent. And RP Sandilya is the VP of strategic solutions at Broadridge.

Mike Tae [00:02:09] So thank you, everybody. The first question I'll hand off to Bob. Since last year, from your perspective, how do you think the industry has progressed in its adoption of AI, ML, and RPA?

Mike Tae [00:02:27] Maybe you can just give a little bit of a flavor of your journey and how you’ve sort of come across that.

Bob Anselmo [00:02:32] So I'm surprised your wife doesn't know about AI, because it's at the top of the hype cycle, right? When Microsoft is putting on 30-second spots about Carlsberg beer and how AI is going to make better Carlsberg beer, you know you're at the top of the hype cycle. Who is that commercial for? Is it for the people that drink Carlsberg beer? I have no idea what it's about. But I would say that, as far as the industry and everyone talking about it, AI isn't just about self-driving cars; it's about everything in the financial services industry, and I think we're at the top of the hype cycle there too. As I sit and listen to a lot of our colleagues up on stage, it's almost a badge of honor when people talk about the number of robots they've put into production. We've got 800. We've got a thousand. We've got 12 hundred robots. Well, that's interesting, and I'll give my perspective on why I think we might be measuring the wrong thing there. Clearly, on the RPA side, we've hit saturation. I don't know of any firm or any colleague at another wealth management or asset management firm who isn't dabbling in RPA, maybe even in some areas that might be a little further away from yours: certainly in the infrastructure space, in the cyber ops space, and in the tech space. We've implemented a thousand robots, but we've taken out a significant amount of what you'd consider low-value, very manually intensive work, even on the ops side: space requests for databases, log rolls, things like that. Over time, we've actually taken a certain percentage of those out. Where I think the machine learning and the real artificial intelligence in financial services shows up, you see a lot more of it coming in the packages.

Bob Anselmo [00:04:13] There's some interesting stuff going on with internal developed software, but you definitely see a lot of it out in the fraud space. So bringing in tools on the fraud side, there's a lot of machine learning algorithms that are out there in mass-market about how to detect fraud more and more quickly in the cyber space, how to look at different attack vectors, and how to actually look at zero-day exploits take all that information and then protect your perimeter better with AI tools and machine learning tools.

Bob Anselmo [00:04:39] So for us at UBS, we're about two years into the journey since we started with RPA, and it's very mature. I hate the term center of expertise, but we have a center of excellence: a very mature central function across the bank globally that did a lot of work to make sure that the RPA work we did was robust. We'll talk a little bit more about that, but here's where I think we're heading; there's a lot of interesting stuff going on. Everyone's getting into the chatbot space; you see that as a consumer. A lot of the banks push the service out to the client and let them self-serve. We're doing some of that now. I wouldn't really call it AI; they're not learning bots, they're natural language, which is why I talk about NLP. It's a nice, easy way where you don't have to understand much about a system: you can just open up a window and say, you know, I'm a new-hire service rep out in the field, and I want to put a travel watch on a credit card. You don't have to know what system to go to or how to navigate to do something simple like that. You just say, I'm on the phone with Mrs. Smith; she'd like to put a travel watch on a credit card. The next step, probably in the next 18 to 24 months, is a digital virtual agent: AI-assisted learning that's listening to the phone calls or monitoring the chat window and actually picking up new learned behaviors, instead of just narrowly focusing on the ones that you've taught it.
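The "travel watch" chatbot Bob describes can be caricatured in a few lines: a rep types a free-form request, and the system maps it to a back-office action. The intent names and keyword sets below are made up for illustration; real deployments use trained NLP models rather than keyword overlap.

```python
# Hypothetical intents; a production chatbot would learn these from data.
INTENTS = {
    "travel_watch": {"travel", "watch", "credit", "card"},
    "address_change": {"address", "change", "move"},
    "card_replacement": {"replace", "lost", "stolen", "card"},
}

def route(utterance: str) -> str:
    """Map a free-form request to the best-matching intent by
    keyword overlap; return 'unknown' if nothing matches."""
    words = set(utterance.lower().replace(".", "").split())
    scores = {name: len(words & kw) for name, kw in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

request = "Mrs. Smith would like to put a travel watch on a credit card"
intent = route(request)
```

The point of the sketch is the user experience Bob highlights: the rep never needs to know which system handles the request, only how to describe it.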

Mike Tae [00:06:01] David.

David Easthope [00:06:03] Yeah, thanks, Michael. I like the word you used in the introduction, demystifying, because I see my role as the industry analyst here on the panel as giving you examples and being specific if I can. What we're observing at Celent is that firms are going beyond RPA. We have reached saturation there, and that's where things start to get more interesting: we are observing an acceleration, certainly, in the machine learning and predictive analytics area. For example, on the asset management side of the business, you see firms developing these quantamental strategies, which take machine learning and apply it to different data sources, be it unstructured or structured alternative data, and provide tools and information up to their quantitative research teams so they can develop better investment returns. We're already seeing that UBS is doing that with their QED division and Goldman Sachs with their QIS team. So quantamental strategies: I think you'll be hearing a lot more about that and how AI is informing it.

David Easthope [00:07:08] On the bank side, we talked about RPA being definitely more widespread, with an emphasis on robots going out and performing tasks. You'll see that in areas such as allocations, reconciliation, and trade fails. Anytime you can create more efficiency by letting the machine do what a human would otherwise be doing, and then deploy those humans to more value-added activities, it's useful. So in that middle- and back-office area, you're seeing a lot of activity on RPA, robotic process automation. Other interesting things are happening in the front office, actually, which may be the shift in the last year or so. You see firms like JP Morgan deploying predictive analytics and machine learning in their fixed income division. Those businesses, in some senses, are quite behind the curve in terms of technology, so it's good to see them take a leap forward. ING is using a vendor called Katana for predictive analytics in their emerging markets bond division, in sales and trading. And a great, long-running example is Credit Suisse using a firm called Narrative Science to generate research commentary through natural language and allow the research analysts to focus on more value-added activities instead of writing up earnings reports. You may not know this, but Narrative Science has also helped people write sports scores, so some of these articles you're reading about the Blue Jays beating the Brewers 2 to 1 are written by machines today. On the wealth side, I think some of it is still a little bit of a buzzword, but there are specific examples there too. Goldman Sachs is using a firm called Kensho to enable some contextual delivery of advice. In some sense, AI may be the ultimate end game of this robo concept: can the machine deliver better investment decisions to the client? AI-enabled advice.
And I think we'll continue to see more on the wealth side. Vanguard is using Automated Insights, and very conservative companies are actually using some of the most advanced technology, NLG, natural language generation, to communicate with clients about returns and performance. So when you see Vanguard and USAA out there using these firms, you know that you've reached a major inflection point.

David Easthope [00:09:34] So lots of evidence of an acceleration. Michael.

Mike Tae: Great, thanks. RP?

RP SANDILYA [00:09:39] Certainly. So Broadridge has had a strategic interest in this space since 2016. We've been building a lot of products and investing in the space: more classical RPA, and more recently smart automation, machine learning, and cognitive sciences as well, so there are multiple products that have been out there for a while. We have dabbled with natural language processing in trade allocations. We've worked on some cognitive sciences for smart insights that help financial advisors match up to their customers more intelligently, and more recently there's been a lot more focus on deep learning. We are looking at illiquid securities and how we can take all the activity and transactions happening in the marketplace and provide more intelligence to asset managers by doing some complex pattern matching. As an organization, there's significant interest top down; we have ended up creating a solid COE with north of a hundred and fifty people that are dedicated to this, roughly 50 percent of them working on classical RPA and the other 50 percent focused on what I would call smart automation: machine learning, deep learning, and so on and so forth. Perhaps the biggest thing that's also happening is the partnership with the industry at large. As a fintech player with a vested interest in mutualizing technology and operations costs, we have access to all of our clients' processes and hard data, and there's a lot of interest from clients who want to partner with us on use cases for how to optimally use that data. We are also partnering with academia, so educational institutions are partnering with us on some really interesting use cases as well. All in all, there are some active products already out there, but the predominant interest for us, whether it's RPA or ML, is more inward-facing.
That's because of the primary interest we have in mutualizing operations. Our business process operations outfit is our lab, and that's where we are actively testing some of these use cases; we have 45 clients there. So there are lots of very interesting bots coming into play for high-end functions, and we'll talk about some of the detailed areas where we've seen success, but we're beginning to implement some of these for internal productivity and internal risk management, with the aim that very soon we'll be able to productize this and hand it out to our clients. So there's a lot of very interesting activity happening in brokerage in this space.

Mike Tae [00:11:54] Thanks, RP. So my next question is around investments and projects and the expectations you have for them, and whether the benefits actually yielded match those expectations. Bob, in the projects that you have successfully executed in this space, how has that played out for you?

Bob Anselmo [00:12:15] So even in the early days with RPA, we had a group inside of technology that does it, but there's also a group in operations that runs it, so they're not all technicians. While we're automating our infrastructure side, the operations team is the one building out the ops robots, under fairly tight supervision. The business case, like every other business case, was built on saves and productivity. What I would say is: productivity, saves, lower error rates; I think we got that. But what we learned about making those business cases, for those of you that are either starting up or thinking of going back and touching up the business case, is to make sure you don't drop all of those saves to the bottom line. Something has to be reinvested. We'll talk a little bit about when you use a robot and why you use a robot. It's great to say, look, I'm about to automate a thousand people out, and that's going to be 10 million dollars to the bottom line, and we're going to take all 10 million because we need all 10 million. But there's some hygiene, there's some reinvestment required, and there's some strong governance around this.
The worst mistake you can make is to just turn a bunch of people loose with Automation Anywhere or some software tool and end up with what we used to call end-user applications: owned and maintained by a person, tucked away on their desk, where the code in the script is theirs, where it's not centralized, where you don't treat it like software, where you don't have a release process, you don't store the source code, you don't have an owner, and you don't have a dependency matrix. If your robot is reading a screen, and I'm making a change to that screen on the technology side, I need to know what robots in the inventory are actually using that screen so we can make sure we test them. Getting all that governance set up is the reinvestment you need. You'll be able to put together a really nice business case on self-funding that, but don't get pressured by your business to drop all of it to the bottom line, because you'll end up with bot sprawl; you'll end up with tons and tons of these things, and the maintenance of them is going to be a nightmare. The first time your tech guys do a quarterly release, it's going to break half your robots.
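The dependency matrix Bob calls for can be sketched as a simple central registry (class and field names here are hypothetical, not any RPA vendor's API) that records a human owner for every bot and answers "which robots read this screen?" before a release goes out:

```python
from dataclasses import dataclass, field

@dataclass
class Bot:
    name: str
    owner: str                                  # every bot needs a named human owner
    screens: set = field(default_factory=set)   # UI screens the bot reads

class BotRegistry:
    """Central inventory so a screen change can be traced to the
    bots that depend on it before a quarterly release."""
    def __init__(self):
        self._bots = []

    def register(self, bot: Bot):
        if not bot.owner:
            raise ValueError(f"bot {bot.name!r} has no owner")
        self._bots.append(bot)

    def impacted_by(self, screen: str):
        """Names of bots that must be retested when `screen` changes."""
        return [b.name for b in self._bots if screen in b.screens]

registry = BotRegistry()
registry.register(Bot("recon-loader", "A. Chen", {"RECON01", "GL14"}))
registry.register(Bot("fails-chaser", "M. Ruiz", {"SETTLE02"}))
```

With this in place, `registry.impacted_by("GL14")` gives the technology side its retest list, and registration fails fast for any bot without an accountable owner.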

Bob Anselmo [00:14:19] You need to make sure there's some structure and investment around it. We didn't get that right in year one; I think we actually had to go back and fight to make sure that there was the right support and governance around it. I think we've now got a pretty mature process for that, but it took a few fits and starts. Back to the dollars and cents, though: against the objectives, it definitely delivered.

Mike Tae [00:14:40] And David, how do you see what Bob just said applying across the industry, and what are some of the desired outcomes that you see there?

David Easthope [00:14:49] Yeah, I think cost savings are always a common place that people start. It's good to hear that you're reinvesting some of that; I've heard that discussed consistently, that you do need to reskill and retrain but also keep that budget internally. I think another benefit of this technology is improvement in other things, such as investment performance. How well do you do? You're generating alpha by taking on some of these tools that RP just mentioned, and you can keep reinvesting those profits back into AI and differentiate yourself on your investment performance. Another area where you see benefits is in better and faster decision making, so it's not just about cost. An example is the ING Katana case I gave, which is related to the fixed income business. They're actually able to eliminate 25 percent of their trading costs in this division because they're making better pricing decisions, and they're responding to their clients faster in 90 percent of cases. So the machine is making them better and faster, and they're serving their customers better. Then there's the potential to scale up. The Credit Suisse example is a good one because this allows them to cover more equities; I think they go from 15 hundred stocks covered to 5,000. And those 20 analysts that would have been writing earnings reports, often after market hours, late into the evening, were writing reports that add no value.

David Easthope [00:16:21] Instead, those people are on the phone talking to clients. In this case, Credit Suisse is actually able to redeploy those analysts toward higher-value activity. So I think this theme of reinvesting is key, but also: can you just get better and faster? That, I think, is where this technology starts to really have more of an effect. You're not just fixing a broken process; you're actually improving your business performance.

Mike Tae [00:16:46] That's great. We've just been talking about the benefits and the things that are driving adoption. How about the flip side? You have a project, you have an idea, you have a concept; you go into production; then you get it to a place where you're actually implementing it across the industry and seeing the actual tangible benefits. And there are gaps to get from each of those stages. RP, from your perspective, what are some of the challenges that are out there, and how can we overcome them?

RP SANDILYA [00:17:14] So the challenges are different, of course, depending on whether you're an industry participant, a technology vendor, or a partner.

RP SANDILYA [00:17:20] Some of the challenges for us: the interesting aspect was picking the right areas to invest in. We needed to understand if these were the right processes to get after and implement. For us, as a mutualization exercise where we are taking technology and operations from firms and mutualizing them, it was important to think about it not just as a productivity and efficiency exercise. There was an element of that, but it was very important for us to think about it as an operations risk exercise: how can we mitigate some of the process failures associated with what we do for a living? That's where some of these investments came into play. It was important for us to identify where we had the right context of data, and where we had isolated processes without a lot of upstream and downstream dependency. It was also important to consider whether we should reengineer the process rather than go fix it. Again, the benefit Broadridge had here was that we were using these technologies to first look inward at our own operation before we went out and productized some of these things. There's a white paper coming out soon, being introduced at this event, that talks about the four or five key things that any firm out there needs to focus on as they go about investing in this space, because everybody's out there doing that. For us, every one of those challenges has been addressed in the last two to three years, and we refer to them as the five S's: having a good strategy, a good organizational structure, the right systems in place, the right skills in place, and the right kind of staff management in place.

RP SANDILYA [00:18:58] And those were five very key elements for us as we went about investing in the space as well.

Mike Tae [00:19:04] Bob, do you have any thoughts on this as well?

Bob Anselmo [00:19:07] Look, I laid some of the challenges out earlier, but I think some of the other learnings are: to me, the best robot is the one you don't have to build. Sometimes robots are used to fix broken processes, and I think we've said that a few times. Even if you need to free up that capital: take a break in a process that drops a bunch of items to a reconciliation report, which then gets loaded to an Excel spreadsheet and uploaded into a journal process. Let's go upstream and figure out how we stop that broken process from generating the breaks. Is there a missing edit on a front-end screen? That's the reinvestment that you talk about. And the second-best robot is the one you decommission. So I stopped counting the number of robots that we put in; the count I do keep is that we took 200 out. That's the best measure I have of actually going back upstream and figuring things out. Not every robot that's created is because of a broken process, but a significant number are, where there are exception reports that get dumped to Excel and go to someone's inbox. The exception report is there because something didn't work right: my tech failed, or some process failed. Going back upstream, investing the time to do that, and having a program in place to take the robots out is, I think, as important as having a robust process to put them in. And again, it took us a while to learn that. I would also say that among the challenges, we spent a lot of time talking about ownership and governance and setting some ground rules. Every robot needs an owner; that owner needs to be a human being; that human being needs to be registered someplace; and they need to actually take accountability for that. Think of a decommission plan, or making sure the robot is functioning properly, or, when testing needs to be done because system changes were made, who takes that ownership?
In the beginning, it was just, you know, we threw them out onto a dumb terminal, we let them run, we ran them under system accounts, and then when they broke, nobody knew who owned it, who wrote it, what it was for, or why it was there. These are all challenges that you find over time, that come with the maturing of the process.

Bob Anselmo [00:20:58] You need a decent centralized group responsible for these things, and you need to treat them like code: they need to be migrated to production, they need to be approved, they need to go through a lifecycle, and that lifecycle, like legacy systems, includes a plan to decommission them. Or, when I go back and look at rewriting a system, go back and fix those processes so that they're straight through, end to end. Those are probably some of the bigger ones.

RP SANDILYA [00:21:22] As Bob was talking, this came to mind: another of the key challenges we faced was in the standardization of the process. It was extremely important for us not to get into the conflict of creating bespoke processes.

RP SANDILYA [00:21:36] I'll give you an example: a simple DTC settlement use case. When we looked across 28-plus clients where we did this process, we found there were 15 bespoke flavors of how we could implement it. It was extremely important, when we took these use cases and built automation around them, that we came up with, for lack of a better term, a best practice: a standardized practice. Without that, these bots end up with too many parameters and too many variables that make them function differently, so standardizing the process before you automate it was very, very important. That was another interesting challenge that we noticed.

[00:22:10] At Broadridge, we do a lot of work in distributed ledger technology, and regulatory considerations are a big part of that in terms of adoption and upkeep. Is that something in your experience, for the whole panel, that you're seeing as a block, a hindrance, or a consideration as you think about applying some of these technologies?

RP SANDILYA [00:22:28] So I'll take that first. The regulatory angle is extremely important for us. The outfit that actually engages in brokerage operations mutualization is a registered broker-dealer, so conservatism is built into how we think about some of these processes. It is important for us to think about what the regulators would want to see. Most of these machine learning components tend to be black boxes, and there was a requirement to be able to justify the decision-making process. It was important for us to do what we call explainable AI: to be able to lay down the audit trails and the logs. At Broadridge, we think of these automated elements as digital labor, sitting right alongside the human labor. So in many ways we are treating these components very similarly, which means written supervisory procedures, procedural documentation, and unique user IDs that are extremely secure and monitored; every process, whether it's human or digital, is heavily monitored. But that wasn't enough. As regulators walk in the door and look at what Broadridge is doing on behalf of its 45 clients, they need some sort of vision or inventory into what's happening across the bot space, if you will. So we are building what we call our AI vision, which is not just an inventory of all the components out there; it's basically telling you what each one of them is doing. What's the throughput? What's the error rate? Are they functioning within the boundaries of performance that have been defined? And each one of these bots, to tie back to something Bob mentioned, is tied to a function, an area, or a department, where the end users are the spigots that control the process. They have a view into how well the process is functioning, and they have the opportunity to kill a bot if it were to go off its well-defined parameters. So constantly ensuring that we are not using private data, that the type of data we are using is appropriate, and that we are looking at cyber threats: these are all very, very important, and having that regulatory framework in place is super important for us as we have thought about this whole automation investment.
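The monitoring RP describes (throughput, error rate, defined performance boundaries, and a kill switch) can be sketched minimally as below. The class and thresholds are illustrative assumptions, not Broadridge's actual AI vision tooling.

```python
class MonitoredBot:
    """Digital labor treated like human labor: every unit of work is
    recorded, the error rate is judged against a defined boundary,
    and the bot is killed if it drifts outside it."""
    def __init__(self, name, max_error_rate=0.05, min_checks=20):
        self.name = name
        self.max_error_rate = max_error_rate  # performance boundary
        self.min_checks = min_checks          # minimum sample before judging
        self.processed = 0
        self.errors = 0
        self.active = True

    def record(self, ok: bool):
        if not self.active:
            return  # a killed bot stops processing
        self.processed += 1
        if not ok:
            self.errors += 1
        # Only judge the error rate once we have a minimum sample.
        if (self.processed >= self.min_checks and
                self.errors / self.processed > self.max_error_rate):
            self.kill()

    def kill(self):
        self.active = False  # supervisors can also trigger this manually

bot = MonitoredBot("dtc-settlement", max_error_rate=0.10, min_checks=10)
for i in range(50):
    bot.record(ok=(i % 5 != 0))  # a 20% failure rate, above the boundary
```

Here the bot kills itself as soon as the minimum sample shows a 20% error rate against a 10% boundary, and it processes nothing further; the `kill` method doubles as the manual spigot the end users hold.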

David Easthope [00:24:35] That's really good.

Bob Anselmo [00:24:37] If you take a look at some of these things: regulators and internal audit like to look at the paperwork, right? They want to see the specs, and you end up going through things like trying to explain why your AML system picked up this transaction and not being able to say much more than: we laid out the parameters, but basically, it's machine learning. It goes through a lot of the data; we set the parameters out, but it looks at how many false flags we had, and it continues to evolve the model. We spent a lot of time talking to regulators, to internal compliance folks, and even to internal audit. So, for example, going back to that, we have a brute-force rule: we will never automate more than 50 percent of a team.

Bob Anselmo [00:25:18] So even if I can take a function that's ten people and I could take all ten people out, we don't. We'll take five out, because what if the bot doesn't function? We'd be out of business. What about the institutional knowledge of how that process gets done? Setting all those parameters up, and making sure that you feel like you've got the right set of parameters and boundaries, is probably going to put you in a lot better place when you're having discussions with regulators and auditors about how some of this stuff works.

David Easthope [00:25:43] I think we saw some of that when people were building out just the basic rules engines in corporate actions and reconciliations: they found that they couldn't always eliminate the team. They needed to keep those skills in-house; they needed people who can explain what's going on and who have that subject matter expertise. Seems pretty consistent.

Mike Tae [00:26:03] And how about from a digital labor perspective, in terms of the impact this will have on companies, on people, on culture? Just your perspective on that.

RP SANDILYA [00:26:13] So again, we had this huge benefit: rather than having a hypothesis about how people might react to some of these bots, we had these people that actually mutualize operations from other firms, and as we designed some of these bots, we got to see how our human labor would react to this digital labor coming in. I talked briefly about strategy and structure. It was extremely important that there was a very well-thought-out strategy that was very well communicated to people, that we managed our staff very well in terms of what their roles would be in the future, and, most importantly, that the operations teams were part of these cross-functional teams. It wasn't just data scientists; it wasn't just technologists working on UiPath or Automation Anywhere; our operations folks were actively involved as part of these cross-functional teams. We have a pretty large millennial team that works in these operations teams, so we realized we need to make this useful for them in their day-to-day business as well. There's an awful lot of gamification going on in some of the tools we use internally: people at the associate level and the supervisory level have a very interesting way of rating themselves vis-à-vis their peers. How are you doing in terms of throughput? Mean time to resolve an open item, and so on and so forth. So there's been that involvement, and we've been pleasantly surprised, because a large number of these operations associates were more than happy to get trained on these technologies. We have a very large number of people in the COE who come from our operations teams, and they realized it's not artificial intelligence replacing them; it's augmented intelligence. And I think one of the most successful things so far has been the level of communication.
So it's not seen as a threat but as an opportunity to kind of further their own personal interests. So it's been useful that way.

David Easthope [00:27:58] People are seeking that training. I think they understand that to progress throughout their career they need to seek out those opportunities, internally and externally as well. The talent wants to see that you have a process in place, but also that you have interesting data to play with.

RP SANDILYA [00:28:15] That's true.

David Easthope [00:28:15] That's the common theme across the staffing and headcount issue: OK, well, what data do we get to use? Because if you don't have good data — if you're starting with a bad data set — you're not going to really get the benefit from machine learning.

Bob Anselmo [00:28:29] But let's be honest, a lot of these robots are replacing work that was already in its lowest-cost location.

RP SANDILYA [00:28:35] That's right.

Bob Anselmo [00:28:35] And my turnover was 40 to 45 percent. So it's not like anybody really loved the work they were doing. In IT operations, the people that were restarting JVMs or bumping up space in a database didn't feel especially fulfilled with the work, working overnight shifts and doing that kind of stuff. And like I said — not every one of them, but over 500 of those people were retrained to actually build bots. You don't need a computer science degree to drag and drop, and the basic JavaScript skills you could pick up from a book. We actually did train a lot of non-technical folks.

Bob Anselmo [00:29:06] And it's hard not to notice the skew in the age demographic.

Bob Anselmo [00:29:09] I mean, I'm a guy who's 50 now, but the younger people who were doing some of this work — even though these were entry-level jobs, even if they weren't technicians — took to it with a voracity that I just haven't seen. They were so excited, they want to do more, and they're asking me, "What other parts of your system are exposed as an API so that I can call in services?" And I'm like, that's a programming question, and you're an ops guy asking me whether I have an API you can call. So they really are much more fulfilled. And it's not like I'm going out and finding new people — it's the same staff we had, who are intimately familiar with the process, who are helping to solve the problem.

Bob Anselmo [00:29:53] Well, now we're on to the tech questions, all right. So, in general, we've had a lot of debate about a productivity measure. I don't measure individual productivity; I measure team productivity: backlogs, user stories, complexity points, function points. So when I look at who my highly productive teams are, they're generally the ones cranking out more work. But at the individual level it's close to impossible. We brought in tools to measure developer productivity — there was a tool called BlueOptima that would look at your source code and try to analyze how effective a programmer was. But every time I tried to take that back to my management team and asked, "Why is Joe so unproductive?" — well, he's in three meetings, and he's working on this hard problem. So it is a bit elusive. I know who my good developers are; they're generally the ones solving the more complex problems. But you can measure it much better on a team basis than on an individual basis.
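Team-level measurement of the kind Bob describes reduces to aggregating points per sprint rather than per person. A minimal sketch with made-up team names and numbers:

```python
def team_velocity(points_per_sprint):
    """Average complexity/function points a team completes per sprint."""
    return sum(points_per_sprint) / len(points_per_sprint)

# Hypothetical teams and sprint histories -- the measure is the team's
# aggregate output, never an individual developer's.
teams = {
    "margin": [21, 25, 23],
    "clearing": [34, 40, 37],
}
ranked = sorted(teams, key=lambda t: team_velocity(teams[t]), reverse=True)
print(ranked)  # clearing (37.0 points/sprint) ahead of margin (23.0)
```

Trend over time matters more than the raw number, since story-point scales are not comparable across teams.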

Mike Tae [00:30:52] My next question is still on this topic of automation, which is: in your view, can you over-automate something? You kind of alluded to this a little bit, but can a process be one hundred percent automated? What is your view on that?

Bob Anselmo [00:31:10] You shouldn't do it. Just look back at how some of these RPA tools work: they can be fragile. If you're looking for a specific element on a screen — like if they're operating on my green screens, I don't change those green screens ever, so I'm never going to break those robots. But if you're operating on some of my front-end screens and you're kind of screen scraping, and you don't have an element you can pull off the HTML — you're literally looking at that field — then if I go in and change that screen, they'll break. And if you're a business doing your margin calls, and you've taken half the clerks out because the robots generate the alerts, that process breaking is not going to put you in a good position. And again, you need subject matter expertise. You need process owners. You can automate away the work, but the owner and the team still need to continuously improve that function.
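The fragility Bob describes is why bots should locate fields by stable identifiers and fail loudly when the screen changes, rather than silently acting on the wrong data. The toy below models a screen as a dict of element IDs; everything here is illustrative, not a real RPA tool's API:

```python
def read_field(screen, element_id):
    """One bot step: locate a field by a stable element id.

    If a front-end redesign removes or renames the element, fail
    loudly so the broken bot is caught -- instead of silently
    skipping, say, a margin-call alert.
    """
    if element_id not in screen:
        raise RuntimeError(f"bot broken: element '{element_id}' not found")
    return screen[element_id]

v1 = {"margin-call-amount": "125000.00"}
assert read_field(v1, "margin-call-amount") == "125000.00"

v2 = {"mc-amount": "125000.00"}  # a redesign renamed the field
try:
    read_field(v2, "margin-call-amount")
except RuntimeError as e:
    print(e)  # the breakage surfaces immediately, not at month-end
```

Coordinate- or pixel-based screen scraping has no equivalent guard, which is exactly why those bots break quietly.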

Bob Anselmo [00:32:00] So again, we had a lot of discussions about that, and it was actually internal — it was our compliance and audit function that set that target. We may see over the course of a year that maybe 50 percent can become 60 percent. But right now, we don't go further than 50.

Mike Tae [00:32:14] Interesting.

RP SANDILYA [00:32:15] But look at how the concept of 100 percent automation has played out in recent times in other industries — unfortunately, in aviation, you've seen what 100 percent automation can do when it takes control away from the human being who could have overridden it. So there are cases to be made as to why and why not. Coming from a BPM background, which is where I came from and built some of these products, it was very important for us to create those spigots, those controls, so there is no such thing as 100 percent automation. For any of those robots that we have in our own operations areas, there is, at the end of the day, a human attached to these digital labor components. And I talked about how they have that vision, that view into how these bots are functioning — whether they're operating within their well-defined parameters or outside of them — and the ability to take action and kill them. So the concept of 100 percent automation is not something that we strive for either. We are constantly looking at what I think HBR called obliterate versus automate.

RP SANDILYA [00:33:11] Don't try to automate everything, and don't go all the way. Identify what you need to automate and then get after that pretty solidly.

David Easthope [00:33:19] I'd just add that some of these tools are focused on the front office — the advisors and the investors — and these tools are really empowering tools. They'll allow them to do more. They're not really going to automate their job away; they're actually going to empower the financial advisor or the investment manager to do more throughout their day, and higher value-added stuff.

David Easthope [00:33:38] So if they're keying in information to respond to an inquiry, you've got Kensho for that.

Mike Tae [00:33:45] RP, you just reminded me of something I heard, which is that you can have artificial intelligence, but you can also have artificial stupidity. It's really about drawing a distinction between the two and finding the balance: how do you get the mix between utilizing and leveraging the technology while bringing in the humans who can bring that judgment as well? That was interesting. OK, moving on to another topic: Broadridge is developing what we're calling an AI readiness tool, our AI Readiness Index. It helps firms evaluate how ready they are to adopt AI, machine learning, and RPA in their operations. So from your perspectives — and we're not going to go into the details of what that evaluation looks like — how would you characterize the readiness of your operations and technology to adopt some of these technologies? Some example dimensions: alignment with change management, process improvement, data strategy, APIs, that sort of thing. Maybe we start with Bob.

Bob Anselmo [00:34:49] Sure. So I'd probably give us a solid seven out of ten. I think we've got very mature processes, even when you start talking about machine learning: model validation, model governance — because that's what these tools are. When you set out rules for how you're going to do your email surveillance, that's a model that has to go through the same rigor an investment model does and actually be validated for the right bounds. So we have good governance on that side. Where I think it gets difficult for us as an organization is that a lot of our software — although my ops partner did stand on stage and say he had the best technology; I paid him before he got up there — a lot of our technology is more monolithic. Take the example I gave, where I wanted to go in and interact with the credit card system and put a travel watch on a credit card: I'm going three screens deep into an application to put the dates and the location into the system. The way we build software now is much better — microservices, APIs. I should be able to expose that credit card watch as a service, a RESTful service I can call from robotics, call from our front-end screen, or that the user can self-service from online. The unbundling of our software estate is probably the biggest challenge we have — taking these microservices and exposing them so they can be used and reused properly across the estate is probably where we're the most limited.
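The "travel watch as a service" idea amounts to pulling one operation out of a three-screens-deep flow and giving it a single request/response contract. A minimal sketch of such a contract — the endpoint, field names, and validation are all hypothetical; in production this would sit behind a REST route:

```python
import json

def put_travel_watch(request_body):
    """Hypothetical service: the 'travel watch' operation extracted from
    the monolith as one json-in/json-out call, usable by a robot, a
    front-end screen, or an online self-service page alike."""
    req = json.loads(request_body)
    for field in ("card_id", "start", "end", "location"):
        if field not in req:
            return json.dumps({"status": 400, "error": f"missing {field}"})
    # ...persist the watch in the card system here...
    return json.dumps({"status": 201, "watch": req})

resp = json.loads(put_travel_watch(json.dumps({
    "card_id": "4111", "start": "2019-07-01",
    "end": "2019-07-10", "location": "FR",
})))
print(resp["status"])  # 201
```

The point of the unbundling is exactly this reuse: one service body, many callers, instead of every caller navigating the same three screens.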

David Easthope [00:36:24] I think this Readiness Index is a really good idea, because what I observe from the industry is probably two common themes. One is that firms are not ready because their data is not ready — so I would emphasize that data management is key. And secondly: do they have the skills internally? Do they know what skills are really needed to follow through on these projects? Those are probably the top two, but of all of those, data is the most common theme. I assume that's a big part of your Readiness Index.

RP SANDILYA [00:36:50] Pretty huge, yeah. Because when we talk about data, we don't just talk about the volume of the data; we talk about the variety of the data, and we talk about the quality of the data. All three components are extremely critical for us to be able to put some automation out. Absolutely.

Mike Tae [00:37:06] And of those, what do you think is the most important factor?

RP SANDILYA [00:37:09] I would say, certainly, cultural acceptance is very important as an organization, because this is impacting everybody — not just the technologists but operations, product management, strategy, the people looking at the future of the organization as well. And certainly education, because knowing what it is allows it to no longer be a myth — whether it's something that can augment who we are, replace who we are, or just be a nuisance. So to me, it's a combination: cultural acceptance and education are the two most important aspects of it.

Mike Tae [00:37:43] I think a lot of folks in the audience are probably at varying places in the journey of AI, ML, and RPA, so we could take a little bit of time to talk about lessons learned. Given your experience — the folks on this panel have really spent a lot of time implementing some of these projects — what are some thoughts you have on the quote-unquote right way to approach this, or other things you'd share along those lines?

RP SANDILYA [00:38:10] I can jump in. As we set out to do this — and we talk about it in that Readiness Index — we actually spent a lot of time thinking about where you begin and how you pick the right kind of use cases to go after. As you might imagine, we had 45 clients and 25 different functional areas in which we support our clients, and we had north of a thousand use cases. So we had to create our own index, if you will — an RPA Opportunity Index, a whole new concept where we looked at feasibility across several parameters for all of these use cases: everything from time saving, labor cost, value, and complexity to the number of inputs and outputs and the quality of the underlying data. We put together several different attributes to be able to judge and evaluate, if you will, the merit of each one of these use cases. Using an opportunity index like that, we were able to kill 50 percent, which eventually led down to about 40 to 50 real use cases that got into production. That was huge, because given how the business units at Broadridge are structured, innovation is happening in multiple pockets. Even though we had the center of excellence, we wanted to let the pockets of innovation grow, but we didn't want that to go out of control, so it was extremely important for us to create an index of that sort. So to us, one of the biggest lessons learned is that entire infrastructure around governance: how do you identify the best use cases to invest in, and then take them all the way? That was huge.
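An opportunity index of the kind RP describes can be sketched as a weighted score with a cutoff. The weights, attributes, and use cases below are invented for illustration — the real index scored many more attributes (labor cost, number of inputs and outputs, and so on):

```python
# Hypothetical weights over 0-10 normalized attribute scores.
WEIGHTS = {
    "hours_saved": 0.4,
    "data_quality": 0.3,
    "value": 0.2,
    "complexity": -0.1,  # higher complexity makes a case less attractive
}

def opportunity_score(use_case):
    """Weighted feasibility score for one automation candidate."""
    return sum(weight * use_case[attr] for attr, weight in WEIGHTS.items())

def triage(use_cases, cutoff):
    """Keep only candidates scoring at or above the cutoff, best first."""
    kept = [uc for uc in use_cases if opportunity_score(uc) >= cutoff]
    return sorted(kept, key=opportunity_score, reverse=True)

candidates = [
    {"name": "trade-allocation intake", "hours_saved": 9,
     "data_quality": 8, "value": 9, "complexity": 4},
    {"name": "ad-hoc report formatting", "hours_saved": 2,
     "data_quality": 3, "value": 2, "complexity": 6},
]
shortlist = triage(candidates, cutoff=5.0)
print([uc["name"] for uc in shortlist])
```

Culling a thousand candidates down to the 40-50 worth productionizing is then a matter of choosing the cutoff, which is a governance decision, not a technical one.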

Bob Anselmo [00:39:47] I think I gave you my pitfalls — you can avoid the mistakes we made. Start small. Don't plow every bit of the saves right to the bottom line; you need to reinvest in this process. It needs to be owned by someone — ours is owned by ops, with an actual team and a couple of experts — but treat it like what it is: software. Work together with your IT partners, and make sure you've got a good process to take these things and steward them all the way through to production.

Bob Anselmo [00:40:14] I think that's probably the best advice I can give.

Mike Tae [00:40:18] Bob, are there ones that you just flat out abandoned?

Bob Anselmo [00:40:23] I wouldn't say abandoned, but — looking outside the robotics space — in our bank and wealth management business, a lot of clients want face time with either an economist or someone from our CIO, our Chief Investment Office; they want to hear economic thoughts. My global partners actually created an avatar: an image driven by machine learning. It reads all the market events, it has the firm's intellectual capital, you can have a voice conversation with it, and it actually looks like someone from our investment group. I don't know that it's going to get the kind of play we thought — the idea was that it would somehow replace a client, or an institutional client, wanting to have a meeting with an analyst or a conversation with one of our economic advisors. I don't think people with 10 million and above want to talk to a machine; we've talked about that in the robo space several times. So look, it's out there in pilot. Maybe we flip it on its side — put it into some of the retail bank branches, go a little more mass market. I just don't know that it's going to play well for what it originally started out as. I wouldn't say it's a failure, but I wouldn't say it was a rousing success. Sometimes we just wanted to play around with AI and machine learning, get something out there that seemed like it could be pretty cool. I just don't know that it's met the business case benefit of what it was supposed to be.

David Easthope [00:41:48] I would just add, in terms of lessons learned and things that we see: I think Bob hit on the key word, which is partner. Find knowledgeable partners; don't do this alone. In some cases, like RPA, there is a bewildering array of vendors out there. Lean on your partners and subject matter experts — whether they be consultants, vendors, or systems integrators — and figure out what they did wrong; don't repeat their mistakes. A lot of common mistakes cluster around not really having a well-defined objective, and then not having the data to support what you want to achieve.

David Easthope [00:42:23] So I think those are some good lessons learned but find good partners.

Mike Tae [00:42:28] Great. And then in terms of machine learning — there's robotics, and machine learning, and then there's AI, and there's a natural progression if you think about the three; they kind of build on each other. Is that the way you have interacted with these technologies, or has it been more independent, where you just saw an opportunity for automation, so you did robotics? Just talk a little bit about your journey.

Bob Anselmo [00:42:51] Yeah, I don't think they're linear. We used some of the funding from the saves to move up the stack a little bit. If you look at the barrier to entry, for robotics you do need to buy a tool — I guess you could roll your own scripting language if you wanted, but there are plenty of good tools out there. When you actually start thinking about machine learning, my guys can swipe their credit cards at Amazon, Microsoft Azure, Google Cloud, and get nice AI, analytics, and big data services that you can play around with practically for free. Back in the day, if you wanted to toy with an application that did optical character recognition and form scanning, you had to go out and find a form-scanning partner, get the infrastructure into your data center, and then work on the integration. All that stuff comes out of the box now. So as I joke with all my guys: playing around with this stuff is easy; institutionalizing it and making it ready to go is the hard part. We can play around on Microsoft Azure — after getting it past my compliance — send cleansed data sets out there, toy around, and make a pretty good business case. We are doing some of that around OCR and scanning. I don't need to buy anything.

Bob Anselmo [00:43:58] It's all open source, it's all out there. You want Google, you want Amazon, you want Microsoft — there's probably every tool out there for machine learning: character recognition, visual recognition, voice recognition, text to speech, speech to text. It's all in the toolbox. It's going to get hard when we have to bring that stuff back into our corporate environment, but for right now, the barrier is pretty low. So there's really no reason you or your tech guys shouldn't be dabbling a little bit in some of that machine learning — and in natural language processing, which, while not necessarily machine learning, is foundational. Look at the voice assistants; look at how people work. The days of having to figure out what system to go to and what menu to navigate to get to what screen are going away. Think about some of those chatbots, and think about how people should just ask for what they want: type it in the window, or speak, and let the system figure it out. That's the kind of stuff that I think is making big moves.

RP SANDILYA [00:44:53] I agree with Bob — it wasn't linear even for Broadridge. When we set out, one of our first products that we put out there for people to use had a natural language processing element to it, on trade allocations: being able to take the unstructured data that was coming in, put some AI on top of it, and automate how those trade allocations got into the settlement systems. Then we began to focus on a lot of cognitive sciences as well, at which point one of the key things for Broadridge was to take a step back and say, well, if we need to make this industrial strength and bring it out, we need to go through a POC — and we ourselves had to pick the right technology vendor, which we did after a comprehensive POC. So even as we continue to invest in NLP, ML, and now some pretty advanced deep learning systems around illiquid securities, we have nevertheless taken a step back to try and institutionalize, if you will, the way we build robotic process automation elements from start to finish.

RP SANDILYA [00:45:49] So it's been completely non-linear for us as well, which I think is how it is for everybody.
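The trade-allocation use case RP mentions — turning unstructured instructions into structured settlement records — can be illustrated with a deliberately tiny, regex-only sketch. Real systems use trained models over far messier inputs; the message format, field names, and pattern here are all invented:

```python
import re

# Toy pattern for messages like "allocate 10,000 IBM to ACCT-42".
PATTERN = re.compile(
    r"allocate\s+(?P<qty>[\d,]+)\s+(?P<symbol>[A-Z]+)\s+to\s+(?P<account>\S+)",
    re.IGNORECASE,
)

def parse_allocation(message):
    """Turn a free-text allocation instruction into a structured record
    that a downstream settlement system could consume."""
    m = PATTERN.search(message)
    if not m:
        return None  # route to a human for review
    return {
        "qty": int(m.group("qty").replace(",", "")),
        "symbol": m.group("symbol").upper(),
        "account": m.group("account"),
    }

print(parse_allocation("Pls allocate 10,000 IBM to ACCT-42 by EOD"))
```

The `None` branch is the important part operationally: anything the extractor cannot parse goes back to a person, which is the "human attached to the digital labor" idea from earlier in the panel.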

David Easthope [00:45:55] That makes sense — if you're solving real business problems and applying the right technology to the right problem or challenge, whether it be NLP, NLG, or machine learning.

Mike Tae [00:46:06] So David, just from your perspective on the industry: where do you think we're going to be in a year? What does the industry look like in terms of readiness? And then another question around capital markets and wealth — where are the opportunities there as well?

David Easthope [00:46:20] Good question. Yeah — crystal ball time, right. I think we're in a state of very, very high readiness for this technology. Despite all the buzz around AI and machine learning, I think it's warranted — one of those rare cases where you look for examples, you find them, and you see implementations happening. DLT and blockchain sometimes feel like, is it worthy of the buzz? It depends on which quarter of the year you're in. I think there's going to be a lot of emphasis right now on RPA — there are partners you can pick, it's very easy to do — so you'll see adoption get beyond the 80 percent, to 90, close to a hundred percent. I think there's going to be an emphasis on NLP, and voice in particular. Think about monitoring traders and trade surveillance, all the craziness that happens on a trading floor — chat windows and voice — and having a machine actually be able to be predictive and figure out what the traders are doing. You'll see that across trading desks; you'll see it at the SROs; you'll see it at the exchanges. I think the data-plus-machine-learning revolution will continue on the asset management side. There's huge potential for those strategies to show their merit, and you'll see more investment in combining all that data with machine learning to inform your investment decisions and make better trade decisions. That could be just the investment ideation, but it could also be in the execution: for that order of 10,000 shares of IBM, you could pick the right algorithm — you want the machines picking those algos off the shelf. So my crystal ball says high readiness, high adoption. Continued questions around talent and skills, though. Where should the talent come from?
Should it come from big tech, or is Wall Street going to keep recruiting at engineering schools? Where are these people going to come from, and what skills are you looking for? Because the ideas are great, but not every firm really knows how to go and get that good engineering talent, and they're competing against the Googles of the world. So I'd say that's going to be a continued question. But also, as usual, pick the right partners. As I mentioned earlier, I think that's always the right thing, and we certainly sit between the institutions deploying the technology and the vendor selection, so we see that process. We think people should continue to be very thorough: pick partners they know and trust, and hold them accountable too — make sure that they're developing and investing in their solutions and bringing in the technology that's going to create gains for you. If you're interested in more about that, there's actually a great Celent report on the Broadridge site: Intelligent Automation. If you were to read that and think about your future and your plans, please go and see it. And thank you, Broadridge, for putting that up on the website.

Mike Tae [00:49:19] So we have 10 minutes. I'd love to open up the floor for any questions. If anyone has any questions for the panel, that'd be great.

David Easthope [00:49:30] It's the landing page for the Broadridge presence at SIFMA Ops. It includes the event and the Intelligent Automation report. 

<audience question>

Bob Anselmo [00:49:55] I look at machine learning as a component of AI. AI is the roll-up, and the other things sit below it — machine learning, natural language, all of those together — bundled under that umbrella.

RP SANDILYA [00:50:06] When we set out to answer that question, we literally looked at the Oxford definition of what artificial intelligence is. It talks about augmenting human intelligence through the use of multiple technologies — to Bob's point, whether it's robots, machine learning, or other deep learning components. So we see those as components of AI, with AI being the envelope, if you will.

David Easthope [00:50:29] I agree — the envelope. And you have to be careful out there with your "AAI": artificial artificial intelligence, things that are not truly AI. So please, see through the vendor marketing when you can. But yes, we see machine learning — and we actually separate out NLG/NLP as a distinct technology — so we look at those two, machine learning and NLG/NLP, as components under the artificial intelligence umbrella. RPA sits on that tech stack, but we treat it as separate.

[00:51:01] Thanks.

<audience question>

Bob Anselmo [00:51:11] We were talking about that Google assistant that's out there — theoretically, you wouldn't even know you were talking to a bot. Right now some of our chatbots actually do drop off to a rep: if you go outside the bounds of what the tool knows how to answer, you're going to end up somewhere in my call center with somebody actually chatting responses back to you. So I don't have any good examples in my shop.

David Easthope [00:51:40] That's a natural evolution, though. If you've got humans trying to figure out a broken process using RPA, you have a bot talking to a human — and you could have a bot talking to a bot. Think of all the crazy things that custodians do, and some of their bad data: you have a manager seeking information, that phone call could come from a bot, and the other side of it could be another bot, the two of them trying to resolve the problem. It's crazy when they talk past each other sometimes; there are some funny videos.

Bob Anselmo [00:52:10] Has anyone seen the one with the Google Assistant and the Amazon Alexa, where they put the two of them together? Yeah — better than the Carlsberg beer ad. I agree with you too.

<audience question>

Bob Anselmo [00:52:39] That was part of the reason we actually have owners, and they are responsible for the lifecycle. Think about each one of these things as a system. I have system owners — service and product managers — that work together with the business and draw a roadmap. The same has to be done for a robot. So if we are migrating our back office and rewriting our margin process, we would look at all those robots and decide: are they target state, or are they just filling a gap?

Bob Anselmo [00:53:04] So I think it is the stewardship of a human being that actually owns them — and it's not one to one; some of these robot owners will own 20 or 30 of them. It's part of the job not only to be the operator but to be the person who sets out a roadmap for your suite. Again, maybe the tool changes. I think robotic process automation will be around for a while, but if you've got a robot that's been running for 10 or 15 years, you've probably got a broken process you need to think about.

RP SANDILYA [00:53:30] One of the things we had to think about — and I talked about the whole AI vision concept — was having a pretty good inventory of these robots. It was really important for us. We talked about it from a regulatory standpoint, but it also helps from a versioning standpoint, simply because when you're serving multiple clients, and you have instances of these bots running on multiple client instances of data and operations, it's extremely important to maintain visibility into whether a particular bot that's functioning is relevant, is current, and adheres to all of the policies — not just the overall regulatory framework, but that particular client's customized operations as well.

RP SANDILYA [00:54:07] So it's extremely important for us to maintain that inventory, and attaching it to a group or a department or an individual makes it that much easier to control the status of the bots, so to speak.

Bob Anselmo [00:54:18] Those bots sit in the same IT inventory tool that my systems do.

Bob Anselmo [00:54:23] I've got one place where I keep my IT inventory, and a bot looks the same as a software component or an API: there's an owner, there's a lifecycle, and you define it as target state or not. That was one of the things we had to learn. In the beginning it was just a mad rush to build and build and build, get it out there, save the money — and then you look back and you're sitting on a big heap of garbage. That's when we reeled it back in and brought in the governance.
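A bot inventory entry of the kind both panelists describe needs, at minimum, a human owner, a version, a client, and a lifecycle flag. A minimal sketch — the field names and class shapes are assumptions, not any vendor's schema:

```python
from dataclasses import dataclass

@dataclass
class BotRecord:
    bot_id: str
    owner: str          # the human steward; one owner may hold 20-30 bots
    version: str
    client: str
    target_state: bool  # on the roadmap, or just filling a gap?

class BotInventory:
    """Bots tracked in the same inventory as any other IT asset."""

    def __init__(self):
        self._bots = {}

    def register(self, record):
        self._bots[record.bot_id] = record

    def owned_by(self, owner):
        return [b.bot_id for b in self._bots.values() if b.owner == owner]

    def gap_fillers(self):
        """Bots to revisit when the underlying process is rewritten."""
        return [b.bot_id for b in self._bots.values() if not b.target_state]

inv = BotInventory()
inv.register(BotRecord("margin-alerts-01", "ops-team-a", "1.4", "clientX", True))
inv.register(BotRecord("legacy-rekey-07", "ops-team-a", "0.9", "clientY", False))
print(inv.gap_fillers())  # ['legacy-rekey-07']
```

Queries like `gap_fillers()` are what make the back-office-rewrite conversation tractable: you can enumerate which bots are target state and which are scaffolding to be retired.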

<audience question>

David Easthope [00:55:06] That's a harder one. I mean, there are already so many things you can safely do with AI. What we always focus on is: is the technology being applied for a specific purpose? If so, sure, go with it — but there are only so many uses for AI right now, and you have to be careful with that. I think the industry is going through a natural cycle: rolling RPA out, focusing on a lot of broken processes in the back office. I really do think the front office was slow to adopt this, because it didn't always make sense until you were solving for problems like scaling up your fixed income trading or your research division — where you can actually serve more clients with existing technology spend, or do more value-added things. Firms jumped into that, and the AI stuff is still a little bit within the realm of sci-fi. I mean, you've got the driverless cars out there, but capital markets is still happy with things that work, especially if you can improve investment performance. AI is useful for a specific case.

David Easthope [00:56:20] I always contend that if traders could make money sending messages on carrier pigeons, or using post-it notes and handing them to the next guy, that's what they would do. They don't care about technology; they care about making money. So I don't see a huge amount of AI — Artificial General Intelligence, which I assume is what you're referring to, the overall end state — I don't see that coming in yet. What do you think?

RP SANDILYA [00:56:41] Well, the way I think about it: one of the questions we asked ourselves is what happens to the classical BPMS components that we've built over the years — classic workflow and process automation components. Are they just getting replaced by RPA, a slightly more advanced version of the same? Considering that we are almost always bridging hybrid systems and multigenerational technologies — as a technology vendor partner, and our clients are as well — I do see the RPA elements staying relevant for a long time to come, even as some of the more advanced smart automation elements come into play. There is a time and place for RPA where you're not really building self-learning mechanisms, where you're performing isolated tasks on a well-defined data set. I truly believe that RPA components will stay around for a long time, even as we continue to develop some of these self-learning mechanisms.

RP SANDILYA [00:57:34] So that's the way we're thinking about it.

Mike Tae [00:57:37] OK, so last question for me. We did a survey last year about the vision of the future of AI: what movie best represents what the future of AI will look like. So this is crystal ball time.

Mike Tae [00:57:55] So one option is Star Wars, and that's the R2D2 C3-PO, bots are friendly. Number two is Transformers, which is that good robots and bad robots. And then the last, I regret, does actually ruin X. I said Transformers, but Transformers where the robots rule the world and they control the humans, or maybe the matrix is another example of Terminator Terminator. 

Bob Anselmo [00:58:21] Did you see Terminator? There are a lot of quotes out there.

Bob Anselmo [00:58:33] I mean, like Stephen Hawking. There are a lot of people out there kind of dropping the hammer on AI right now, saying it's going to be the thing that brings us all down. When it starts writing code and taking food off my table, that's when I'll worry about it.

Bob Anselmo [00:58:45] But for now, I don't think so. It's still such early days.

David Easthope [00:58:48] It's Star Wars. I mean, the bots are helping us, and I actually think you've got to combine the technology with a sort of philosophy. What are we actually building? We're kind of building versions of ourselves, trying to mimic the way the nervous system works, and these are extensions of us.

David Easthope [00:59:06] And I'm firmly in the Star Wars camp. I don't buy the crazy Terminator stuff.

Mike Tae [00:59:13] That's just assuming all people are good, and maybe I'm not so sure. But I'm getting philosophical here.

RP SANDILYA [00:59:20] I'll step back a little. The title we were given here was that this is not sci-fi, it's real. I think it's important for people to know that, unlike past movements or past technological advances, with this particular advancement it doesn't pay to be a fast follower, simply because these artificial intelligence systems, and we talked about this in our white paper as well, are self-learning by definition. They tend to be opaque, and they are constantly changing who they are. It gets harder as a fast follower to reverse engineer them. So it's extremely important to stay as close as possible to the cusp of what's going on, and therefore finding the right partner that can bring the right network effect is extremely important. Personally, though, I'm still in the Transformers camp. I think we'll end up building a few Decepticons and a few Primes, but the Primes will come out on top. That's how I feel.

David Easthope [01:00:14] I like the way the robots were portrayed in Interstellar, actually. Those robots were really helping, and the humans could program their humanity up or down: how funny they wanted them to be, or how serious. I thought that was a really good thing.

[01:00:29] I need one of those as well. Please join me in thanking our panelists. Thank you.

Disruptive technologies like Artificial Intelligence (AI), Machine Learning (ML), Robotic Process Automation (RPA), and Smart Process Automation (SPA) are no longer the stuff of science fiction, nor the "next big thing." Successful firms are leveraging these technologies today in many facets of their operations, from trade analytics to advisor insights to smarter processes. This workshop highlights current successes and observations on innovation from industry-leading financial institutions and service providers. Join Broadridge for this on-demand webinar, No Longer SciFi: How Capital Markets and Wealth Firms Are Already Leveraging AI and ML in Their Operations, to hear how capital markets firms are transforming with AI. Our panel of industry leaders from financial institutions, analyst firms, and fintech providers will share their perspectives on the AI market landscape.

View the Webinar