Data Disruption

A Charles River Podcast Series


Unlocking the Power of AI in Private Markets: Challenges and Benefits

Dive into the world of data disruption as we explore how artificial intelligence is transforming the private markets. Join host Kali Jakobi and industry expert Randy Swanberg to uncover the challenges, benefits, and practical applications of AI in reshaping the financial landscape.
Transcription:

Kali Jakobi:

Welcome to Data Disruption, a podcast all about data problems, solutions and innovations disrupting the private markets. I’m your host, Kali Jakobi. Let’s talk data.

Hi, everyone, and welcome back to Data Disruption. As always, I’m your host, Kali Jakobi, and today I have a very special guest with me to discuss an industry-wide burning topic. That is, artificial intelligence. Randy Swanberg joins us as head of the bionics organization within State Street Global Technology Services. Now, what is a bionics organization, you ask? Well, it includes a wide range of areas, such as artificial intelligence, robotics, automation, workflow orchestration, data engineering, data analytics, data visualization and business intelligence. Needless to say, Randy is more than qualified to speak with us today about how AI is shaping the financial markets. Without further ado, Randy, welcome to the show.

Randy Swanberg:

Thanks, Kali. Great to be here.

Kali Jakobi:

So let’s jump right in. Randy, we’re going to start off by talking about some modernization through AI. Starting off a little bit broadly, what challenges are on the horizon when it comes to applying AI to various aspects of the financial markets?

Randy Swanberg:

Let’s see. So challenges, probably the first challenge area is that for AI to work, you’ve got to have data. And you’ve got to have accurate data, clean data, and sufficient quantities of data. As we’ve experienced ourselves at State Street, this is typically the first and earliest challenge with each new use case we encounter. The business has an idea of, “Hey, let’s go tackle this use case and provide this new outcome and efficiency.” And then we start digging in and we figure out that, well, we actually need you to start capturing better data. We need your humans to start recording what action they took when they saw this data. So that’s probably the first and biggest challenge, just making sure that there’s a good data architecture, that there’s cleanliness of data and sufficient data.

Another one that’s looming, and regulators have been very active for the past few years, is the regulatory bodies. There’s lots of regulatory uncertainty, especially for the financial services industry, as to what’s coming and what it will take to satisfy them. And that could actually be smothering. It could start to stifle some innovation. It could start to limit what we do. So that is one of the challenges. And then there are the common challenges that everybody tends to talk about, the general risks of AI. You’ve got the doomsday predictions of it being the end of mankind, but there are more real and tangible things that we have to worry about: cybersecurity risks, the ethics of artificial intelligence, especially in a financial services industry that’s dealing with end consumers and personal data and credit lending. You hear about bias; you want to remove bias from all of the AI use cases. So there are lots of challenges to deal with, but they’re not insurmountable.

Kali Jakobi:

So assuming we can take care of the security risks, we can get around our regulatory issues and we have our data in check, what benefits can come from leveraging a technology like AI in both public and private markets?

Randy Swanberg:

There are a number of benefits, and we’re actually already starting to see them in some of our early use case deployments at State Street. One of the top ones doesn’t sound very exciting, but it’s operational efficiency. In the financial services industry, we’ve got tons of humans doing manual work. In fact, the term bionics in our organization’s name is really inspired by accelerating humans. And I think this kind of coexistence of humans and AI will help to address some of the risks and concerns we’re talking about. But the sweet spot of what we’re trying to achieve is really leveraging AI to make humans more productive, to make them more efficient, to accelerate what they do. And there’s no shortage of opportunities like that. In our industry there are massive amounts of unstructured data, data that was intended for humans to consume, from fund prospectus documents to broker confirmation statements to you name it.

And as we talk more about private markets, I’m sure we’ll get into some of those examples. But being able to consume that data, leveraging natural language processing, which is a whole subfield of artificial intelligence, there are huge opportunities and benefits there. Also, the types of data we deal with in the financial services industry are so critical. We’re talking about trades and transactions and financial data and market data and pricing data. AI can play a tremendous role in helping us gain confidence in that data, detect anomalies and outliers, and prevent errors. Which then leads to this overall field of risk management and better operational controls leveraging AI. Fraud detection and prevention is a big use case in the financial services industry, for example. But in the end, all of these translate to a better client experience: transforming the client experience, immediate access to answers and data, higher quality of service, all of that.

Kali Jakobi:

You already touched on something that I want to dive a little bit deeper into, and that is risk reduction and data quality. Can you discuss a little bit more about the use cases you’re seeing, the impact of these applications, and their role in improving that overall process? What does that actually look like?

Randy Swanberg:

Sure. I can give you three very specific examples, what I’ll call flagship use cases at State Street. One is what we call the AI NAV benchmark. In our traditional custody and accounting business, where we’re striking the NAV on literally tens of thousands of funds every day, it’s obviously very critical that we get that NAV correct when we’re calculating it for each individual fund. Traditionally, there are lots of controls and rules-based thresholds and things like that, but one of the controls has always been a default benchmark for each fund. Sometimes that benchmark is specified in the fund’s prospectus, or maybe it’s chosen by a fund accountant who’s gained what we call tribal knowledge from being responsible for a fund for some number of years, and they’ll select a particular market index that seems to correlate. That benchmark is critical during the pricing window as a sanity check: are we directionally correct? Does the NAV make sense within tolerance of its benchmark?

That led to one of our AI use cases, and it really came from talking to these fund accountants, who would say, “Well, based on my intuition, I know that this fund seems to correlate with the TFI equity index, but it also kind of correlates with the S&P 500.” So we thought, well, that’s a classical machine learning problem. Don’t limit yourself: start with 5,000 market indices and let the machine learning models and algorithms learn what those correlations are. So we’ve been able to produce an AI-derived benchmark for each individual fund, and on average they’re two and a half times more accurate than the default benchmark. That translates into a reduction in what we call false positives, or false break-to-benchmark alerts. Which means, again, accelerating the humans: they don’t have to waste time sifting through a bunch of noise in order to focus on things that could be material.
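
To make that benchmark-selection idea concrete, here is a minimal sketch assuming purely hypothetical NAV and index price data. It illustrates the correlation-driven approach Randy describes, not State Street’s actual models, which use far more sophisticated machine learning.

```python
# Minimal sketch: score a large universe of candidate market indices against a
# fund's historical returns and pick the best-correlated one. All data and
# column names below are invented for illustration.
import numpy as np
import pandas as pd

def select_ai_benchmark(fund_nav: pd.Series, index_prices: pd.DataFrame) -> str:
    """Return the candidate index whose daily returns correlate most strongly
    with the fund's daily NAV returns."""
    fund_returns = fund_nav.pct_change().dropna()
    index_returns = index_prices.pct_change().dropna()
    # Restrict to dates present in both series before scoring.
    aligned = index_returns.loc[fund_returns.index.intersection(index_returns.index)]
    correlations = aligned.corrwith(fund_returns)
    return correlations.abs().idxmax()

# Usage with made-up data: 250 trading days, 5,000 candidate indices.
dates = pd.bdate_range("2023-01-02", periods=250)
rng = np.random.default_rng(0)
index_prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0, 0.01, (250, 5000)), axis=0)),
    index=dates, columns=[f"INDEX_{i}" for i in range(5000)])
fund_nav = index_prices["INDEX_42"] * 0.5 + rng.normal(0, 0.2, 250)  # fund loosely tracks one index
print(select_ai_benchmark(fund_nav, index_prices))  # likely "INDEX_42"
```

In practice the scoring would have to handle lags, regime changes, and composite benchmarks, but the selection loop has this basic shape.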

And that same idea applies to data quality in general. We’ve got cases where, as we replace rules-based controls and frameworks, we’re seeing something like a 90% reduction in these false positives on controls against the market data we use, for example in global markets. All of the data they consume for trading decisions has to be tightly scrutinized, and we’re able to reduce those false exceptions by 90%. The same goes for portfolio valuation data and all the components that feed into it: trades, client flows, income, fees, reference data, pricing. Again, it’s about reducing the noise. But the key is to catch the material errors, and that’s what we’re seeing. These AI techniques are able to do both: reduce the noise, but still catch what needs to be caught.
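
As a rough illustration of how a learned control can cut false positives relative to a fixed rule, here is a small sketch using scikit-learn’s IsolationForest on made-up daily return data. It is not State Street’s control framework, just the general pattern of replacing a static threshold with a model fitted to history.

```python
# Illustrative only: swap a static rules-based threshold for a learned anomaly
# detector on daily price moves, which is one way the "fewer false positives,
# same real catches" effect can arise.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
history = rng.normal(0.0, 0.01, size=(1000, 1))        # past daily returns, mostly benign
today = np.vstack([rng.normal(0.0, 0.01, (99, 1)),      # ordinary moves
                   [[0.15]]])                           # one genuinely bad print

# Rules-based control: flag any move larger than a fixed 2% threshold.
rule_flags = np.abs(today) > 0.02

# Learned control: fit on history, then score today's observations.
model = IsolationForest(contamination=0.01, random_state=0).fit(history)
ml_flags = model.predict(today) == -1                   # -1 marks anomalies

print("rule exceptions:", int(rule_flags.sum()), "| model exceptions:", int(ml_flags.sum()))
```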

Kali Jakobi:

That’s amazing, Randy. So already I’m hearing a lot of risk reduction and a lot of efficiency gains. Let’s talk about the digitization process. The financial markets are definitely an industry that has been on the later side to modernize. So what does that look like with AI coming into the picture? How does it accelerate the digitization process?

Randy Swanberg:

We’ve got examples of where the industry has evolved into some digitized forms of communication, SWIFT messages being one example. And the more that the service providers in the financial industry and their clients all get onboarded onto these programmatic API interfaces, that’s really the future. But in the here and now, the truth of the matter is that tons of data is still being passed around in the form of documents. So one immediate place where we see AI helping is with the digitization of all that data. And I know private markets is a topic for this discussion, so just using that as an example: in the private markets space, there’s a ton of limited partner agreements, general partner agreements, capital call statements, distribution statements, investment schedules, and on and on. And every one of these is different. They’re different formats coming from each partner. There’s no standardization.

So we’re currently in a world where humans in the private markets and private equity spaces, they’re having to read these documents, they’re having to transcribe them, they’re having to manually validate them. So this is where AI in the immediate sense is being used to actually ingest those documents, extract the key information, and even automate some of the validity checking of the data in the statements against other data sources that the AI can connect to and have access to.
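
Here is a deliberately simple sketch of that extraction step, pulling a fund name, amount, and due date out of a hypothetical capital call notice with regular expressions. Production systems use trained document-AI models precisely because every partner formats these differently; the sketch only shows the kind of structured output the AI ingestion step produces.

```python
# Illustrative sketch only: extract key fields from a hypothetical capital call
# notice. The notice text, field names, and patterns are all invented.
import re

notice = """
ABC Growth Partners III, L.P.
Capital Call Notice
Amount due from Limited Partner: $1,250,000.00
Payment due date: June 15, 2024
"""

def extract_capital_call(text: str) -> dict:
    fund = re.search(r"^(.*L\.P\.)\s*$", text, flags=re.MULTILINE)
    amount = re.search(r"Amount due.*?:\s*\$([\d,]+\.\d{2})", text)
    due = re.search(r"due date:\s*(.+)", text)
    return {
        "fund": fund.group(1).strip() if fund else None,
        "amount_usd": float(amount.group(1).replace(",", "")) if amount else None,
        "due_date": due.group(1).strip() if due else None,
    }

print(extract_capital_call(notice))
# {'fund': 'ABC Growth Partners III, L.P.', 'amount_usd': 1250000.0, 'due_date': 'June 15, 2024'}
```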

Kali Jakobi:

Randy, when it comes to the practical application of AI for these LPs and GPs, walk us through what that looks like. Are they adopting new technology? Are they creating an in-house solution? Are they talking to experts like you? How do they do that?

Randy Swanberg:

Great question. So especially in this context of how they ingest and process all of this data, there are vendors targeting the private markets space with technology to create that ecosystem and network connecting the general partners and the limited partners, and to build out the AI technology that knows how to read these different types of documents and knows what kind of data to look for and extract.

So that’s kind of the here and now in the near term. But there’s really a whole lot of other opportunities as we look forward into the private markets space and the potential of AI. Because it’s one thing to actually see a capital call statement and be able to ingest and say, “Okay, well, here’s the capital that’s being called. Here’s my opportunity.” Or, “These distribution statements, we’re getting this back now.” But really, how do we gain insights?

Once we get past the logistics of digitizing and processing the data, I think the future possibilities for AI in private markets are about making sense of this data and actually providing forecasting, so that these partners can anticipate capital requirements and factor that into their planning and their allocation strategies. We talked about risk earlier. What are the risks in this space? Liquidity issues, anticipating defaults. This is what AI is good at: taking large amounts of data, large amounts of historical data, learning the patterns, learning the trends, and then providing insights that inform better decision-making. Again, accelerating the humans, making them better at what they do.
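
As a toy illustration of the forecasting idea, here is a sketch that fits a simple trend to a made-up history of quarterly capital calls and projects the next quarter. Real models would use much richer features (fund age, vintage, commitments, market conditions); this only shows the shape of the workflow.

```python
# Deliberately tiny sketch of learning from historical capital call patterns to
# anticipate upcoming cash needs. All numbers are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical history: quarter index vs. capital called that quarter ($M).
quarters = np.arange(1, 13).reshape(-1, 1)
called = np.array([1.5, 2.0, 2.8, 3.1, 3.9, 4.2, 4.8, 5.5, 5.9, 6.4, 7.0, 7.3])

model = LinearRegression().fit(quarters, called)
next_quarter = model.predict(np.array([[13]]))[0]
print(f"Projected capital call next quarter: ${next_quarter:.1f}M")
```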

Kali Jakobi:

Randy, what is the cost of not adopting a technology such as AI? I mean, it’s something that is hard to manage, with the risks and the security issues we’ve already talked about. But for firms not even dipping their toe in, what’s the business risk for them?

Randy Swanberg:

You often hear about things like first mover advantage, being the first to adopt something and move before that window closes. I think this is maybe one of those moments, meaning if your competitors are gaining all of these operational efficiencies and productivity enhancements by leveraging these technologies, you don’t want to be left behind. There’s value in the better quality of service, the risk reduction, the additional insights. In any financial industry, everybody’s after alpha. What is the little insight that I can get and move on first, before anybody else? And I think AI is one of those technologies, another tool in the tool belt to help with those decisions. So I hate to say it this way, but you don’t want to be left behind.

Kali Jakobi:

Right. That makes me think more about generative AI versus, I guess, regular AI. As someone who is not from the space, can you define those terms for me?

Randy Swanberg:

Sure. And it’s kind of humorous for those of us who’ve been working in the AI realm for several years. I go back probably 12 or 13 years working in this field. But all of a sudden, because of ChatGPT, everybody’s talking about generative AI. And it is an inflection point, it is a step function. But the first thing to understand about generative AI is that it’s really a consolidation of individual natural language processing techniques that previously existed independently. Think about things like sentiment analysis, or being able to take a section of text and say what it’s talking about: “Oh, this is talking about a custody agreement,” “This is talking about a vendor agreement,” which we call classification. Or being able to identify counterparties, which we’d traditionally call entity extraction. What’s new about generative AI is the fundamental underpinning. You’ll hear the term large language models, or LLMs, and these are massive deep learning neural networks.

Sorry to be throwing out all these terms, but neural networks are an AI approach, and they’re where we get the term deep learning, because that takes machine learning to another level. These neural networks are loosely modeled after the human brain. They come nowhere close to the power of the human brain, but it’s a fabric of neurons, you can think about it that way. These are the architectures that are now being trained on massive amounts of data. When we say massive amounts, we’re talking 50 terabytes plus of public internet data: articles, Wikipedia, websites, anything and everything. And these massive models are learning languages and human conversation, whether it’s in English or Spanish or French, all the languages. And these foundation models are now capable of doing all of those individual NLP tasks that we talked about before.

This one foundation model can identify sentiment, it can classify text, it can extract entities. And the new thing, the generative part, is that it can also try to complete your thoughts, meaning you give it a question, or you prompt it. That’s another term in the generative AI sense, where you’re giving it a prompt. It will try to predict the next sequence of words that would complete that thought. That’s why you can ask generative AI to write you a poem. You can say, “Write me a poem about sunshine in Texas and no rain,” whatever you make up, and it will actually generate a cohesive poem. It’s actually quite scary. That’s where this generative notion comes in. So that’s the difference in the technology. The challenge for us is how we harness that for our business, financial services, and everything we’ve already talked about: operational efficiency, better risk controls, and obviously, in this case, transforming how our clients interact with our products and services.
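
For a hands-on feel of the “one foundation model, many tasks” point, here is a minimal sketch using the open-source transformers library with a small model (gpt2). Its answers are nowhere near the quality of a large commercial LLM, but the interaction pattern is the same: a single text-generation model steered entirely by the prompt.

```python
# Sketch of prompting one generative model to do classification, entity
# extraction, and free-form generation. Prompts and example texts are invented.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")

prompts = {
    "classification": "Classify this document: 'This agreement appoints the bank as custodian of the fund assets.' Document type:",
    "entity extraction": "List the counterparties in: 'Acme Capital entered a swap with Globex Bank.' Counterparties:",
    "generation": "Write a short poem about sunshine in Texas and no rain:",
}

for task, prompt in prompts.items():
    out = generate(prompt, max_new_tokens=25, num_return_sequences=1)[0]["generated_text"]
    print(f"--- {task} ---\n{out}\n")
```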

Kali Jakobi:

Definitely a lot to dive into there. I mean, I feel like we could spend another hour just talking about the capabilities of generative AI. But when it comes to the specific ways that businesses can manage their data from a front to back perspective, how do you see AI leading that path and that journey of data through the back office to the front office of these investment managers?

Randy Swanberg:

I’m going to build on this generative AI track because I think that is the next inflection point, to answer your question. From front to middle to back, what generative AI is going to give us is a new interaction model with all of that data for our clients. I often describe it as conversationally controlled workflow. So whether it’s an investment manager, whether it’s an asset owner, whoever is interacting with their data, they’ve got a set of questions that they’re interested in. What are my top two funds by net asset value? They just want to ask that question. They don’t want to have to pick up the phone. They don’t want to have to send an email. They don’t want to have to build a database query to go get it. They just want to say it and have the data rendered. And that’s what’s possible now. At State Street, we’re using the term copilot, again, accelerating humans or providing better experiences to humans.

It’s that type of thing where a client can ask that question, “What are the top two funds by net asset value?” and then build a whole thread of context around it, meaning the generative AI knows, “Oh, I need to go query this data to answer that question, and here are your top two funds.” And then the client can say, “Well, have you detected any anomalies with the valuation of that first fund?” and the generative AI links into other AI systems to ask what anomalies have been detected for this particular fund, and pulls that into a cohesive answer. This is just scratching the surface, but hopefully it gives you an idea: every aspect of data, every interaction with data, every interaction with systems is poised to be disrupted by conversational interfaces.
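
The routing-and-context idea behind a copilot can be sketched in a few lines. In the toy example below the “intent parsing” is faked with keyword checks (a real system would use a large language model for that step), but it shows how one question triggers a data lookup and the follow-up question reuses the conversational context. All data and names are invented.

```python
# Toy sketch of "conversationally controlled workflow": a copilot-style layer
# that turns questions into data lookups and keeps context across turns.
import pandas as pd

funds = pd.DataFrame({
    "fund": ["Fund A", "Fund B", "Fund C", "Fund D"],
    "nav_usd_m": [820.0, 1450.0, 610.0, 1320.0],
    "anomaly_flag": [False, True, False, False],   # output of an upstream AI control
})

class Copilot:
    def __init__(self, data: pd.DataFrame):
        self.data = data
        self.last_selection = None                  # conversational context

    def ask(self, question: str) -> str:
        q = question.lower()
        if "top two funds" in q and "net asset value" in q:
            top = self.data.nlargest(2, "nav_usd_m")
            self.last_selection = top
            return "Top two funds by NAV: " + ", ".join(top["fund"])
        if "anomal" in q and self.last_selection is not None:
            first = self.last_selection.iloc[0]
            flagged = "has" if first["anomaly_flag"] else "has no"
            return f"{first['fund']} {flagged} valuation anomalies flagged."
        return "Sorry, I can't answer that yet."

bot = Copilot(funds)
print(bot.ask("What are my top two funds by net asset value?"))
print(bot.ask("Have you detected any anomalies with the valuation of that first fund?"))
```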

Kali Jakobi:

Randy, how far do you think we are as an industry, not State Street or Charles River specific, from that being a reality? From the tech and the regulations and all of those things that have to come together to actually be able to use this scenario, how far away do you think we are from that happening?

Randy Swanberg:

Well, you often hear the term artificial general intelligence, which is different from generative AI specifically. It’s this notion where I can interface with State Street and ask it any question, meaning I want to ask this private markets question about my Charles River for Private Markets experience, or I want to ask this Charles River question about my public equity portfolio, or I want to ask the alpha… That is a harder problem than creating, I’ll call it, crawl, walk, run types of deployments. And by crawl, I mean let’s have a copilot that can just answer questions about one domain, like an investment management guide or my portfolio data or whatever. So I think we’re going to start to see those types of use cases, very targeted, very specific, by the end of this year. But some of these grander visions of using generative AI to summarize all of my portfolio and generate my financial report and have that be the final draft that goes out, I think we’re still a few years away from that.

And again, the wild card here is regulation. There are lots of debates and conversations nowadays about generative AI, even fundamental questions like: all that data I talked about earlier, the 50 terabytes plus of public internet data, did these foundation models even have the rights to learn from all of it? Those things are playing out in the legal domain between companies. And then even the output of these models, who owns the rights to that? So a lot of questions still have to be answered. But I do think some of these earlier use cases, where we’re bringing our own data and just leveraging what I call the freedom-of-speech eloquence of these generative AI models to answer questions against our own data, we’ll see those sooner rather than later.

Kali Jakobi:

Maybe we should ask AI and ChatGPT if it’s legal to do all of this and see what they have to say in their own defense.

Randy Swanberg:

It’s funny you ask that because early on with the public ChatGPT, I asked it, but its answer has changed.

Kali Jakobi:

Oh, really? It’s learned.

Randy Swanberg:

Yes. This is the scary part about this technology too: it’s dynamic. It keeps learning from users’ prompts, from new questions, and from feedback. In fact, I should have mentioned this earlier when I talked about the coexistence of humans and AI: even for our early prototypes with generative AI, we’re building in a thumbs up, thumbs down. So if the answer that comes back is not exactly what you’re looking for, or you think it’s incorrect or just way off base, we can capture that feedback and then use it to continue to train, do prompt tuning, all of these techniques to try to improve it over time.
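
A minimal version of that feedback loop is just structured logging: every answer is stored with the user’s thumbs up or down so the negative examples can later feed prompt tuning or retraining review. The sketch below keeps the log in memory purely for brevity; the prompts and answers are invented.

```python
# Simple sketch of a thumbs-up/thumbs-down capture loop for generative AI answers.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeedbackRecord:
    prompt: str
    answer: str
    thumbs_up: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

feedback_log: list[FeedbackRecord] = []

def record_feedback(prompt: str, answer: str, thumbs_up: bool) -> None:
    feedback_log.append(FeedbackRecord(prompt, answer, thumbs_up))

record_feedback("What are my top two funds by NAV?", "Fund B and Fund D", thumbs_up=True)
record_feedback("Summarize this prospectus", "(off-base summary)", thumbs_up=False)

# Negative examples become candidates for prompt tuning / retraining review.
needs_review = [r for r in feedback_log if not r.thumbs_up]
print(f"{len(needs_review)} answer(s) flagged for review")
```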

Kali Jakobi:

Well, as people are navigating their path toward utilizing AI, just as a general summation, Randy, can you talk about what strategies you would recommend people take as that crawl, walk, run approach works its way into our individual workday lives?

Randy Swanberg:

Yes, I’ll take a stab at that. And a lot of it will be based on personal scars earned over the past decade.

Kali Jakobi:

That’s the best way to learn.

Randy Swanberg:

From my own experiences. One of the first things is level setting expectations, especially with something like generative AI; people think it can do everything. For AI in general, it’s about getting a better understanding of the potential, what it can and can’t do. One of the hardest things I had to explain to the business in our very early use cases was that we need to do a proof of concept first. And they’re like, “What do you mean we have to do a proof of concept? When I’ve done past IT projects, I just say, ‘I need this application to do this function and I need it done by this date.’” Well, AI is not that deterministic, and sometimes it’s not deterministic at all. So the first step is almost a culture change, getting people to understand that AI is different. We need to actually prove this out. We need to do a proof of concept.

We need to test this hypothesis against your data to see: is there enough signal in the data to achieve the outcome we want? So that’s kind of the first step. And even then, getting folks to understand that even when successful, success for AI might mean that 90% of the time we get the accurate response, which means 10% of the time you don’t. So you have to set expectations around how you put that into an operational environment. And the next question I normally get when I say that is, “Well, how can you ever use it if you can’t guarantee it’s 100% accurate all the time?” Then we come back to the human comparison. The question is, what are we supplementing or augmenting or replacing with respect to human performance? How have the humans performed? Have we measured their performance? Are we equal or better?

I’ll give you an example from my prior life, if I’m allowed to do that. When I was at IBM and we were supporting IBM Watson and some of the use cases in the health industry, there was a perfect example: ophthalmologists who need to review retina scans, and the AI coming along and trying to understand those retina scans and identify issues. The measure of success was 20 ophthalmologists reading the same scans. What were their conclusions? And then what did the AI conclude? Was it better than those 20 ophthalmologists? If so, it’s better than what we have now in human performance, 20 experts. So it’s that example applied in different pockets to make sure that you can set the expectations and have an operating model that supports that. That’s why human in the loop is so important for many use cases. Again, you’re trying to accelerate the human. Let’s get you to this decision faster, but ultimately the decision is still in the human’s hands.
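
That evaluation framing, comparing the model against measured human performance rather than against perfection, can be expressed very simply. The labels, panel votes, and predictions below are invented; the comparison logic is the point.

```python
# Sketch: is the model at least as accurate as a panel of human experts?
from collections import Counter

ground_truth = ["issue", "clear", "issue", "clear", "clear", "issue", "clear", "clear"]
model_preds  = ["issue", "clear", "issue", "clear", "issue", "issue", "clear", "clear"]

# Each case was also read by a panel of human experts; take their majority vote.
panel_votes = [
    ["issue", "issue", "clear"], ["clear", "clear", "clear"],
    ["issue", "clear", "clear"], ["clear", "clear", "issue"],
    ["clear", "clear", "clear"], ["issue", "issue", "issue"],
    ["issue", "issue", "clear"], ["clear", "clear", "clear"],
]
human_preds = [Counter(votes).most_common(1)[0][0] for votes in panel_votes]

def accuracy(preds, truth):
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

print(f"human panel accuracy: {accuracy(human_preds, ground_truth):.0%}")
print(f"model accuracy:       {accuracy(model_preds, ground_truth):.0%}")
# The deploy/human-in-the-loop decision hinges on this comparison, not on 100%.
```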

Kali Jakobi:

This might be my first conversation about AI that doesn’t leave me with existential dread that AI is going to take over my life and my job. So Randy, I appreciate you explaining how AI will empower the human rather than take over our financial jobs entirely. But as our tried and true listeners know, I always end our podcast with a question that has nothing to do with our topic, and it is my beloved personality question. So to wrap up our podcast today, Randy, my question for you is: if you could magically become a professional athlete overnight, what sport would you choose?

Randy Swanberg:

A little known secret about me is, this was like 100 years ago, but I played baseball in college. So you might think, and I might think, that my immediate answer would be, oh, professional baseball player. But as I think about that more, I’m going to say no. I think I would want to be a professional golfer because, for whatever reason, that little ball sitting still is harder to hit than those balls that were coming at me 100 years ago at 95 miles an hour. So it’s always been a challenge, and it just looks great on Sunday afternoons. They’re walking these beautiful fairways and they’re making millions of dollars. So yeah, professional golfer.

Kali Jakobi:

Professional golfer sounds like a nice life to me. I don’t see really any downside, except it takes a bit to get there.

Randy Swanberg:

Exactly. Yep.

Kali Jakobi:

That’s the case with any professional sports career. In the spirit of the World Cup, I would have to say my professional sport choice would be soccer. Played it in high school, was never great, but really enjoyed it and always loved a team sport.

Randy Swanberg:

I would have to say, though, there’s too much running. That’s another reason. Golf is a nice casual stroll.

Kali Jakobi:

Yeah, you just need to perform for a little bit of the time, and the majority of the time you’re outside. There’s a lot of running in soccer. But then you’re in great shape, you’ve got great heart health. A lot of benefits.

Randy Swanberg:

I’ll give you that.

Kali Jakobi:

Well, we could continue our professional athlete conversation for a long time, but as our data conversation wraps up, Randy, thank you so much for joining us today. And if anybody has any additional questions, feel free to reach out to me directly and I can get you the answers. Randy, thanks so much.

Randy Swanberg:

Thank you, Kali. Enjoyed it.

Kali Jakobi:

Till next time.

Thanks for listening to this episode of Data Disruption by Charles River. If you like what you heard, share it and leave us a review. It helps others discover the show, and I thank you for it. And if you’d like additional insights related to this conversation or others, go to our website at www.crd.com. Till next time.

The material discussed is for informational purposes only. The views expressed in this material are the views of the author and are subject to change based on market and other conditions and factors; moreover, they do not necessarily represent the official views of Mercatus and/or State Street Corporation and its affiliates.