Listen: IR in Focus | Season 2, Ep.1 – AI: Beyond the Buzz

In a world where AI is ostensibly at everyone's fingertips and has largely gone from buzzworthy to saturated, gain a clearer perspective for your IR program. Host Carmen Lilly sits down with her colleague Daniel J. Sandberg, a thought leader at the intersection of data science, finance, and mathematics, for an engaging, fresh take on artificial intelligence, or "AI". Their conversation covers AI's origins, implications for IR practitioners, pitfalls to avoid, and tips for navigating emerging and evergreen technologies to accelerate progress for IROs.


Carmen Lilly

Hi, everyone, and welcome to our podcast, IR in Focus. I'm your host, Carmen Lilly. In the next 20 minutes, we'll be diving into AI and its impact on corporations and IR programs. With me today is my esteemed colleague, Daniel Sandberg. Welcome to the podcast, Dan.

Daniel Sandberg

Thanks, Carmen. Pleasure to be here.

Carmen Lilly

Thanks. And now before we get started, I'd like to present to our listeners your rich and varied background, if you don't mind.

So Daniel Sandberg is the Head of the Quantamental Research team here at S&P Global Market Intelligence. In this role, he leads new product development and research for the quantitative part of the business. Dan and his team produce thought leadership pieces using the industry-leading datasets available from S&P Global Market Intelligence and combine data streams into new products to generate novel insights.

Dan is a member of the S&P Global Corporate Research Council, which spans all divisions, and he sits on the advisory boards of various alternative data providers, including People Data Labs. Prior to his current role, Dan held several positions at S&P, including Head of Alternative Data Validation for Quantamental Research, Senior Research Director with the Quantamental Research team, and Technical Lead for the Investment Management business vertical.

Now before joining S&P Global in September of 2015, Dan was a buy-side quantitative portfolio manager with The Legacy Foundation, which is a pension manager and high-net-worth individual advisory in Charlottesville, Virginia. Dan holds a PhD in physics and is a CFA charterholder.

All right. Like I said, you have a very rich background and you absolutely know your stuff. But for our listeners who aren't in the trenches of what's going on in the world of AI, I'd like to start with some basic level-setting on terminologies, just making sure we are speaking the same language here.

So for our listeners, can you walk us through the basic terms we should be using, so things like AI, generative AI, machine learning, deep learning, large language models, all of that fun stuff?

Daniel Sandberg

You've got it. And thanks again for that introduction, really excited to be here. There definitely are a lot of terms floating around these days, and it can get pretty confusing. So I'll give you the quick crash course here. AI is the big umbrella. That captures most of the other terms. Basically, any system where a machine is completing a task could be termed AI, even if there's no training or learning that's involved.

So for example, let's say that you want to go out on the town this weekend and you stop at an ATM to get cash. That ATM is technically a form of AI: it's a machine that can do a human-like task. But the ATM is hardcoded with a set of rules. First, it reads your card, then it prompts you for your PIN, then it matches that PIN to a database. The ATM does the same thing every time. It never learns new behavior, because it's a rules-based AI and the rules don't change.
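The rules-based idea can be sketched in a few lines of Python. The card IDs and PINs below are invented purely for illustration; the point is that the logic is fixed and never updated by data:

```python
# A minimal sketch of a rules-based "AI": hardcoded steps, no learning.
# The card/PIN records are hypothetical, for illustration only.
PIN_DATABASE = {"card-123": "4711", "card-456": "0000"}

def atm_authorize(card_id: str, entered_pin: str) -> bool:
    """Apply the same fixed rules every time: known card, matching PIN."""
    if card_id not in PIN_DATABASE:              # rule 1: card must be known
        return False
    return PIN_DATABASE[card_id] == entered_pin  # rule 2: PIN must match
```

No matter how many transactions it processes, nothing in `PIN_DATABASE` or the rules changes, which is exactly what separates this from machine learning.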

Now machine learning, on the other hand, is a subset of AI where the machine learns from new data. So say you check your bank balance at the ATM and decide, "I'm going to save some money this week and watch a movie at home," and you put on Netflix. Netflix makes recommendations on what you might like to watch based on what you've already watched, right?

Netflix is using a machine learning algorithm called a recommender system to figure out what suggestions to make. The algorithm, that recommender system, is basically learning the features of the content that you enjoy, where the features could be things like the lead actor, the genre of the film or show, the director. It could be based on user feedback. And based on whatever features have been important in your decision-making in the past, it then makes those recommendations going forward.
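A heavily simplified sketch of that feature-based idea follows. The titles and features are invented for illustration, and a real recommender system is far more sophisticated, but the core mechanic is the same: tally the features of what was watched, then score unseen titles by how many of those features they share:

```python
from collections import Counter

# Hypothetical catalog: each title is described by its features
# (genre, lead actor, director), as discussed above.
catalog = {
    "Heist Night":  {"genre:thriller", "actor:A", "director:X"},
    "Quiet Harbor": {"genre:drama",    "actor:B", "director:Y"},
    "Second Heist": {"genre:thriller", "actor:A", "director:Z"},
}

def recommend(watched, catalog):
    # Learn the viewer's preferred features from their watch history
    prefs = Counter(f for title in watched for f in catalog[title])
    unseen = [t for t in catalog if t not in watched]
    # Score each unseen title by how often its features were watched before
    return max(unseen, key=lambda t: sum(prefs[f] for f in catalog[t]))
```

After watching "Heist Night", the sketch recommends "Second Heist", because it shares the thriller genre and lead actor with what was already enjoyed.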

Now a lot of what we've seen in headlines recently is focused on language models. Language models are a type of machine learning. So now we're another layer down. We started with AI at the top. That's the catch-all. Then machine learning is a subset of AI, where it's learning from the data. Language models are a subset of machine learning, specifically where language is the input to the model. And what you get back from that model is a probability of seeing that set of words. So much in the same way that Netflix defines a movie as a set of features, like the cast or the genre, the language model is extracting features of the word.

So for example, words like big and biggest share a semantic relationship because they have similar meaning, so do the words small and smallest. But big and small share a syntactic relationship because they're the same part of speech, just like biggest and smallest are the same part of speech. So breaking down those words into features is how the language model understands the language. Some features are intuitive, like part of speech or meaning, some features, less intuitive. But nevertheless, features are what the model uses as its basis.
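To make that concrete, here is an illustrative sketch of "words as features". Real language models learn dense numeric vectors rather than hand-labeled tags; the features below are invented solely to show the semantic versus syntactic distinction:

```python
# Hand-labeled word features (hypothetical). A real model learns these
# as dimensions of a numeric vector rather than readable labels.
features = {
    "big":      {"meaning:large", "pos:adjective", "degree:positive"},
    "biggest":  {"meaning:large", "pos:adjective", "degree:superlative"},
    "small":    {"meaning:small", "pos:adjective", "degree:positive"},
    "smallest": {"meaning:small", "pos:adjective", "degree:superlative"},
}

def shared(w1, w2):
    """Return the features two words have in common."""
    return features[w1] & features[w2]
```

Here `shared("big", "biggest")` includes `meaning:large` (the semantic relationship), while `shared("big", "small")` includes `pos:adjective` (the syntactic relationship), mirroring the examples above.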

Now language models don't need to be generative, meaning they don't need to necessarily create new text. Many of the spam filters that we have on our e-mail inboxes are using language models as part of that spam identification. Nothing is generated as a result of that analysis. It's just labeling the e-mails spam or not spam based on the text in the e-mail. And it turns out that you can actually build a pretty good spam detector using a relatively small language model.
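As a sketch of how small such a non-generative language model can be, here is a naive Bayes style spam filter over word counts. The tiny training set is invented for illustration; production filters use far more data and signals:

```python
import math
from collections import Counter

# Hypothetical training documents, for illustration only.
spam_docs = ["win cash now", "free prize claim now"]
ham_docs  = ["meeting moved to noon", "see attached report"]
vocab = {w for d in spam_docs + ham_docs for w in d.split()}

def log_prob(text, docs):
    """Log-probability of the text under a unigram model of `docs`."""
    counts = Counter(w for d in docs for w in d.split())
    total = sum(counts.values())
    # Laplace smoothing so words unseen in training don't zero out the score
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in text.split())

def is_spam(text):
    # Label the message by whichever tiny language model finds it more likely
    return log_prob(text, spam_docs) > log_prob(text, ham_docs)
```

Nothing is generated here: the model only assigns probabilities to the text it sees and emits a spam/not-spam label, exactly as described above.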

So two concepts here. One is the concept of generative versus not and the other is the concept of small versus large. So the generative part has to do with what the model tells us. A language model can tell us how likely it is to see a given set of words. It could give us a label like spam versus not spam. Or you can ask the model to give you the most likely next word. And here's where language models and generative AI intersect. If the language model is giving you the next most likely word, then we prompt the generative language model, meaning we give it a series of words, and it thinks of what's next.

So maybe the prompt is who won the Super Bowl in 2021? The model weighs the probability of that sequence of words, and it suggests the most likely next word, given the training data. So assuming the information is in the training data, meaning the model has seen it, it may look at that sequence of words, who won the Super Bowl in 2021, and then it comes up with the next word, Tampa. And then it puts that whole sequence back through the model again and says who won the Super Bowl in 2021, Tampa, and then the next word, Bay. And you iteratively answer the question in that way.
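That iterative loop can be sketched with a toy bigram model, where the "training data" is a single made-up sentence standing in for the model's corpus. Real LLMs condition on far more context than the previous word, but the generate-append-repeat mechanic is the same:

```python
from collections import Counter, defaultdict

# A single hypothetical training sentence standing in for the corpus.
training_text = "who won the super bowl in 2021 tampa bay buccaneers"

def build_bigrams(text):
    """Count which word follows which: a minimal 'language model'."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(prompt, model, steps):
    """Repeatedly append the most probable next word, as described above."""
    words = prompt.split()
    for _ in range(steps):
        candidates = model.get(words[-1])
        if not candidates:
            break  # nothing in the training data follows this word
        words.append(candidates.most_common(1)[0][0])  # greedy pick
    return " ".join(words)
```

With this toy model, `generate("who won the super bowl in 2021", model, 2)` appends "tampa" and then "bay", one word per iteration, just as in the walkthrough.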

So just to be clear on this, the model has no preference on truthfulness or being right or wrong. It simply looks at the training data, and it gives you the most probable answer, given that training data. So the accuracy of the result is very closely connected to the quality of that training data that the model has seen before. So that's the generative part.

Now not all language models are generative, not all generative models are language models. So those are different concepts, but they do intersect, especially with what we've seen recently. And that brings me to size. So we said that you can build a spam filter with a small language model. When I say small, I'm referring to the number of parameters. So we have words, we have features of words. We can combine those features with each other. And every time we combine features, we get another parameter.

It turns out that when you have enough parameters in these generative large language models, like billions and trillions of parameters, and you give it enough training data, like you give it the entire Internet, for example, then you start to get interesting things out of the model to the point where it mimics a rather intelligent person.

Carmen Lilly

Wow, okay. So my brain is spinning a little here. You were explaining that AI is an umbrella covering all of the various ways machines either learn or generate new content that we can use in our professional or personal lives, and you talked about visiting an ATM.

There's been a swell of interest in AI, but from everything I've read, and going back to your ATM example, it's been around for a while. The release of ChatGPT, I think, created a large swell of interest in this new set of tools, and a bunch of new tools hit the market at the same time. So can you give us a lay of the land on what's out there right now that folks are buzzing about, excited about, and incorporating into their activities?

Daniel Sandberg

Absolutely. So as you point out, AI is everywhere. It's been around for a long time by the most liberal definition. But when I'm thinking specifically about the IR professional, the IRO, and when I think about the types of AI that are making headlines today, the space is somewhat nascent. And there are risks, right, especially with the types of material, nonpublic information that IROs have to deal with regularly. So it's about finding the technology that has the right guardrails. There has been a lot of focus on using natural language processing to extract features, like we talked about before, from earnings calls.

So for example, the sentiment of the tone that's used on the call, or the clarity of the language that's used to deliver the information, the transparency of that information, those are features of the call. S&P packaged features of earnings calls in a data feed tailored for buy-side asset managers in 2020, and in 2022, we made those features available on our CIQ Pro Desktop. All those scores become available after the call has taken place.

And this year, we're going to be releasing a tool where IROs can run their scripted remarks through the algorithms before the calls take place. That will be deployable software, so we'll be able to let it live on the end user's computer. They can run it on an air-gapped machine: nothing talks back to the mother ship, nothing is transmitted back over the Internet. It's designed with security top of mind, and it will have access to those AI-enabled features that you mentioned.

Carmen Lilly

Great. Can we go into sentiment scoring a little bit more? When we talk about sentiment scores and the data feed of transcripts that we send to the buy side, what are the metrics, parameters, or types of language they're looking for that could potentially raise red flags? One thing I was reading about is the fogginess score: how clear are you, and how simple are the words you're using to explain what's going on with your company? Do you mind going into some of those sentiment scoring definitions?

Daniel Sandberg

Absolutely. There are a number of ways to tackle the problem. I think you're alluding to the Gunning Fog Index, or fogginess score, which compares the number of big words (measured by syllable count) in a sentence to the total number of words in the sentence. The idea here is that if you have good news, you tend to say it directly. But if you have something that you're not thrilled about speaking on, you may dance around it a little bit: use more words, run-on sentences, things of that nature. So the length of the sentence and the number of large words in the sentence both cause the score to be higher; lower scores are preferred.
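For reference, the standard published form of the Gunning Fog Index combines average sentence length with the percentage of "complex" (three-or-more-syllable) words. The sketch below uses a crude vowel-group heuristic to estimate syllables, and S&P's production scores may well be computed differently; this is purely illustrative:

```python
import re

def syllables(word):
    """Rough syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fog_index(text):
    """Gunning Fog: 0.4 * (avg sentence length + % of complex words)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    complex_words = [w for w in words if syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences)
                  + 100 * len(complex_words) / len(words))
```

A short, plain sentence like "We had a good quarter." scores 2.0, while longer sentences padded with polysyllabic words push the score up, matching the intuition that dancing around bad news raises fogginess.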

Sentiment is about using words that tend to lead to, or be associated with, positive outcomes, and I think the intuition there is fairly straightforward. And then we also look at numeric transparency as one of our other headline scores: basically, don't just tell me you had a good quarter, show me with numbers. All of these things are fallible to an extent. And so I often have conversations with IROs where they say, "Here's a sentence that I wrote. It's a positive sentence. It bodes well for my firm. But I'm getting a negative score from the algorithm, so the algorithm doesn't work."
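As a hypothetical sketch of the "show me with numbers" idea, a numeric transparency style measure could be as simple as the share of tokens in a passage that contain digits. S&P's actual headline score is proprietary and surely more nuanced; this only illustrates the concept:

```python
import re

def numeric_transparency(text):
    """Fraction of whitespace-separated tokens that contain a digit.
    A hypothetical, illustrative stand-in for a real transparency score."""
    tokens = text.split()
    if not tokens:
        return 0.0
    numeric = [t for t in tokens if re.search(r"\d", t)]
    return len(numeric) / len(tokens)
```

"Revenue grew 12% to $4.2 billion" scores higher than "we had a really strong quarter", capturing the intuition that quantified claims are more transparent.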

But in truth, the algorithm does exactly what buy side wants it to do. It tells me if the earnings call is range-bound in the range I expect it to be, or whether it's broken to the high side or the low side. So it's not necessarily about that one sentence, it's about the whole component or the average of the topic that's being discussed and whether that's indicative of something that might be market-moving and warrant some additional scrutiny.

Carmen Lilly

So to your point, there are parameters set up that help the buy side flag, "Hey, this is unusual, you may need to take a look at this." That leads into something from my research: a lot of what I was reading said, "Yes, we are investing in AI. What we're doing is investing in AI that will help our advisers or our analysts do their jobs better and be more productive." It's not necessarily replacing what they're doing but supplementing and helping them with their day-to-day operations. Is that the gist?

Daniel Sandberg

I think that's right. There's almost an excessive amount of data out there. And it's long been the case that data-driven decision-making is what rules the day. That requires having both the data and having the ability to extract the right decision-making insights from the data. It's always been done by some combination of human and machine. And I don't think that changes anytime soon.

If nothing else, I think AI makes it easier for people to know what needs more attention. So even a fundamental manager that's making investment decisions based on discretion could use AI to identify a relevant news article or relevant sections in a regulatory filing or an earnings call. And certainly, there are firms out there that have strategies that are almost entirely, if not entirely, governed by machines. But I'd say those are probably the minority of use cases.

Carmen Lilly

Right. We don't want total replacement, with machines just taking over. So I'm going to read a quick quote here from one of our research articles from 451 Research. For our listeners who are not familiar with 451 Research, it's a sister company that's been incorporated into S&P Global Market Intelligence, and it focuses specifically on information technology research and advisory services. They put together a survey about AI and machine learning that tries to glean what 2024 is going to be about and what the sentiment around AI and machine learning is right now.

One of the interesting points was that technology adoption is never a zero-sum game. I think that's very interesting, and a point I wanted to home in on: we talk about technology adoption as taking over certain roles, or aspects of roles, and making a human replaceable. But again, through my research, it doesn't seem like we're talking about human replacement. We're talking about supplementing operations and making those folks more efficient. So in your experience, can you dive into how buy-side firms are leveraging this to be better at managing their portfolios?

Daniel Sandberg

Absolutely. You're right that AI is an enabler. It's a tool that allows a single person to do more and do it faster and parse more information more quickly. So if you have a coverage universe of, let's say, your Russell 3000 investor or an S&P 1500 investor, that's a lot of companies to cover. You couldn't dial in to 1,500 earnings calls every quarter. What you could do is have an AI that's parsing that information as it's coming out and identifying the areas that might need a human in the loop to give it more attention and fully understand it.

Carmen Lilly

To follow up on that, specifically around generative AI, the same 451 Research survey asked, "Do you plan to invest in generative AI?" And close to 50%, 49% of the survey respondents, said, "Yes, we have a high intent to invest in generative AI." So how would the buy side use something like generative AI to help their performance?

Daniel Sandberg

Sure. So let's go back to our definitions, because generative AI and language models are not necessarily the same thing. Generative methods have been used in all sorts of model training. You could, for instance, give stock pricing data to a generative model, generate fictitious prices, and test an algorithm on those fictitious prices. That's been done with simulated stock market trading. So that's a non-language-model application. The investment interest is more around generative language model applications, where the model at least seems to be able to understand natural language.

And in that case, I think the value is in the ability to extract the meaningful context of that data. For example, news and filings are the way that 90-plus percent of information about companies is disseminated to investors. And that's really just a glut of information to parse through. So being able to distill that down into takeaways programmatically, it makes an otherwise intractable task relatively manageable.

Carmen Lilly

This is so interesting, I love it. And then just to try to flip that on its head and think about, okay, so we've covered how AI is currently being used by buy side and capital markets in general. You talked about the tool around sentiment scores that could benefit IR programs. What else could help IR programs be more efficient and effective?

Daniel Sandberg

It's about making life easy, right? In some ways, it feels like a necessary inevitability to address all the data that's out there today. Whether it's social media monitoring or responding to investor queries automatically or just having the ability to view the same analytics that buy-side firms are looking at when making investment decisions, all of that is going to be informative to IR programs. AI is a way for us to figure out where to put our limited bandwidth and attention and hopefully reduce some of the time that we spend on the more mundane and rote tasks.

Carmen Lilly

I love that concept. It's like I need machines to do these menial tasks, so it frees me up for the more creative and more interesting aspects of my job. So thinking about the IR programs, how they should be using it or thinking about AI but also extending that to leadership, so the C-suite and Boards, how should they be thinking about and preparing for short-term or long-term impact of AI on Investor Relations?

Daniel Sandberg

Short term: learn the basics, find a partner who knows the space and knows the risks if that knowledge doesn't exist in-house, and invest in technology wisely. Long term, I think firms need to nurture AI literacy across their businesses, build strong governance policies, and have a way to track the KPIs that matter. AI is a buzzword, and I feel like everybody wants to do something AI-related. But at the end of the day, if it's not meaningfully moving the needle for the business, then it's falling flat.

Carmen Lilly

We see the same thing, or at least the same thread of thought, when it comes to ESG and sustainability. It's hot, it's topical, but in order to make it meaningful, it actually has to be working for your company and helping you move along that path to profitability.

Okay. So we talked about IR programs, how they can benefit. Let's talk about what potential threats could arise from the use of AI in Investor Relations. Maybe concerning external reputation and news accuracy, what risk are you seeing out there that IR programs and corporations should be aware of?

Daniel Sandberg

If an IRO or an IR team is using AI to generate content, then they need appropriate protocols to ensure that information is accurate, right? The fiduciary obligations that are on a person or a team do not transfer to the machine. And there is a risk of erroneous information, as has been documented: this concept of hallucinations, that LLMs hallucinate, is really just a fancy way of saying they give you the wrong answer, right?

As for other folks using AI to generate fake news, I'm not sure much change is there. I imagine most companies have some sort of media monitoring policies and will try to correct misinformation. That's largely outside the scope of my expertise. I think there is a potential that AI makes it easier to find scandalous information about people by having a more efficient way to sift through data, that sort of muckraking is probably a frustration for an IRO. Getting in front of that, I suppose, having access to the tools and the technology so that you have the foresight of what might be coming would be the best way to take a proactive stance.

Carmen Lilly

We talk about that a lot on our podcast, setting yourselves up for success by being proactive. Glad that you also echo those sentiments. So we're running out of time. So Dan, I do love to end these podcasts with one piece of tangible advice that listeners can walk away with. So what is that one piece of advice you would leave with IR programs and corporations when it comes to dealing with AI?

Daniel Sandberg

The space is moving very fast. I would argue that the technology exists right now to give you a leg up, and that IR teams who aren't at least looking into AI-enabled tools in 2024 are going to be lagging their peers. As they do their analysis, I think security should be a big concern.

Adoption of large language models, in particular those that require cloud hosting, requires an understanding of the liability that might be associated with the data going into those tools. And I would say the best approach would be to partner with a firm that knows the space and can help guide you with onboarding the tools that you need to stay current.

Carmen Lilly

Perfect. I love it. Thank you so much, Dan, for your time today, and thank you to our listeners. If you like this content, please subscribe. Thank you again, and we'll be back next month.
