Next in Tech | Episode 150: Technology in Media Production

Technology’s impacts are felt in many areas, including podcasts! We pull back the curtain a little with producers Darren Rose and Kyle Cangialosi joining host Eric Hanselman to look at the transformations that are driving media production. Concerns about deepfakes are all the rage, but the technology is still progressing in its ability to deliver professional-quality media. Our podcast is benefiting from AI-powered studio tools, too.


Table of Contents

Call Participants

Presentation

Call Participants

ATTENDEES

Darren Rose

Eric Hanselman

Kyle Cangialosi


Presentation

Eric Hanselman

Welcome to Next in Tech, an S&P Global Market Intelligence podcast where the world of emerging tech lives. I'm your host, Eric Hanselman, Chief Analyst for Technology, Media and Telecom at S&P Global Market Intelligence.

And today, we're going to be discussing technology in media. It's our 150th show, and I wanted to pull the curtain back a little bit and look at a lot of the technology that actually helps to make the podcast and a lot of the media that we do at S&P. To discuss that with me, I have two of our producers from the 199 agency, Darren Rose and Kyle Cangialosi, both of whom have been involved in the production of the Next in Tech podcast. So welcome to you both.

Darren Rose

Hey, Eric. Thanks for having me. Great to be here.

Kyle Cangialosi

Hey, Eric. Yes, it's a lot of fun being on this side of the production for once. Thank you for having us.

Eric Hanselman

Well, really, we've turned things around, and we've actually got some of the folks behind the scenes right up front. So it's great to have you here. I wanted to preface this with just a little background for our listeners, which is that we have all been talking about a lot of the shifts in technology -- I'll get it right out there up front, generative AI -- all of the kinds of pieces of tech that come into the podcast, and we've actually leveraged all sorts of them. But there is a lot that's taking place behind the scenes on the production side of this.

And I guess it would be great to start out, just for our listeners, with a little bit of background about the process of actually creating the podcast. I think it may seem at times that it is wonderfully simple: we just record it, we publish it and life is great. But of course, there's a lot to this. We, of course, record this in advance, so there are all the logistics of putting together topics and all the rest of that. But once we actually get there, we've got a scheduling system, we're actually using studio time within our studio environments to put this together. There's a whole set of things that are happening even as we step up to the mic, and then everything beyond.

Darren Rose

Yes, that's right. It's quite a process, Eric. Typically, these things would start out, as you've mentioned, with a booking system. So you would send us your request and the guests you would like to host on that podcast. We receive that as an agency, and then we would typically start out by accepting your request and then scheduling the guests. We would send out an email invitation, and in that email invitation is a list of triage points: how to join the call that we're going to be setting up, some things to do in the background like muting devices, making sure that you've got the right microphone and headphones, making sure you're authenticated on VPN, et cetera. That would be known as the preproduction process.

We then move into production, which is typically the recording. For the recording, we use software called Riverside.fm. And then once we have that recording, we move into post production: we download it, we sometimes use AI tools to upscale the audio, and then we work on editing that podcast -- the removal of filler words -- and then we send it back to you for a review after editing, and then it is released out to the general public.
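
As a rough illustration of the flow Darren describes -- download, optional AI enhancement, filler-word editing, review -- here is a minimal Python sketch. Every stage name is a hypothetical stand-in; Riverside and Descript are driven through their own interfaces, not a public pipeline API like this.

```python
from pathlib import Path

def post_produce(raw: Path) -> Path:
    """Walk one recording through the post-production stages in order."""
    enhanced = ai_upscale(raw)        # optional AI audio restoration pass
    edited = cut_fillers(enhanced)    # transcript-driven filler-word removal
    return export_for_review(edited)  # draft goes back to the host for sign-off

# Placeholder stages: each simply hands the file along so the sketch runs.
# In practice these steps happen inside the production tools themselves.
def ai_upscale(path: Path) -> Path:
    return path

def cut_fillers(path: Path) -> Path:
    return path

def export_for_review(path: Path) -> Path:
    return path

print(post_produce(Path("episode150_raw.wav")))  # -> episode150_raw.wav
```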

Eric Hanselman

In this whole process, there is all of the orchestration and management software that actually runs the booking process, all of the tools that we're leveraging to put this together. You mentioned that when we're actually doing this recording, we're using technologies that are leveraging cloud storage. We actually have a SaaS service that is the recording piece of this.

We're a long way from what many folks may consider the regular recording studio environment of big banks of reel-to-reel tape and all sorts of things that are going on -- this really is relatively leading edge. And when we start thinking about this, especially when we actually get into the post production, the scheduling and things, those are apps that I think most folks would be familiar with in terms of that scheduling. But there's a lot that we wind up doing behind that, and a lot of evolution in terms of how this has really changed.

When we think about the production piece, there's the actual work, but doing the audio correction is actually relatively detailed. You're needing to get into managing levels. And in fact, just before we started here, there was a fire alarm going off -- there are all sorts of things that can happen during the recording itself. We've got a very useful mechanism to be able to record each of the individuals, which is one of those things that I think allowed a lot of flexibility right from the outset, but that's something that traditionally was really hard to do. And even when we started recording Next in Tech, the technologies that we had at that point still had some limitations in terms of what they were able to manage, right?

Kyle Cangialosi

Exactly. Yes, Eric. The workflow that we use today is very different from the workflow we were using even a few years ago. I mean, pre-pandemic, so much of what we were doing in the podcast space was in person, right? The goal was to pull as many people into a physical studio as you could and then handle any remote connection separately. And there were a couple of solutions to that challenge.

The first one would be to order them a microphone, ship it to them in the mail, walk them through how to use it and how to set it up, and find the best place in their home to record from. Then we'd use a conferencing platform like Zoom or Teams to conduct the conversation, bridging the people who are in the studio with the person who's remote, and simply record them locally on their personal machine using the mic that we shipped them.

And that's the process we used to get the best quality audio possible in a given environment, but it's a very manual process, and it takes quite a lot of time upfront to get to that place where we're ready to record. Sometimes, the faster solution, if people don't have as much time, is to just record directly in a conferencing platform like Zoom or Teams. And there are some downsides to that.

One is that it's going to sound very much like a conference call. It's going to be very compressed because it's being recorded in the cloud. And at the same time, that cloud recording is a stereo mix of everybody on the call, so we don't have that separation of speakers. If you have multiple people calling in, you now have the risk of, let's say, a dog barking behind remote speaker A while remote speaker B is speaking -- that's all baked in together. We can't separate that, because we don't have separate tracks for each speaker.
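
To make the separation point concrete, here is a small NumPy sketch (synthetic tones standing in for speech): with only the combined cloud mix, the bark cannot be removed without also cutting speaker B, but with per-speaker tracks, the noisy region can be silenced on A's track alone before remixing.

```python
import numpy as np

sr = 48_000                                    # sample rate
t = np.arange(sr) / sr                         # one second of audio
speaker_a = 0.4 * np.sin(2 * np.pi * 220 * t)  # stand-in for speaker A's voice
speaker_b = 0.4 * np.sin(2 * np.pi * 330 * t)  # stand-in for speaker B's voice
bark = np.zeros_like(t)
bark[24_000:26_000] = 0.8                      # dog bark behind speaker A

# Cloud recording: one combined mix -- the bark is baked in with B's speech,
# so muting that region would mute B too.
conference_mix = speaker_a + bark + speaker_b

# Local multitrack recording: the bark lives only on A's track, so we can
# silence that region on A alone and remix cleanly.
track_a = speaker_a + bark
track_a[24_000:26_000] = 0.0
clean_mix = track_a + speaker_b
```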

So these are some of the challenges that we were facing, and then the pandemic happened, and the entire landscape of the space changed: all of a sudden, people are at home. They have a lot of time on their hands. They started listening to more podcasts, and everyone starts buying higher-quality USB microphones because they're doing work from home or maybe they want to start a podcast. And the whole landscape has changed as a result, with more hardware being introduced to market that is tailored to at-home environments, with things like built-in noise rejection and better sound processing through software that help people sound better in their homes.

But at the same time, all of these online recording platforms start emerging with that local recording feature you're talking about, where through an online session, you're able to record multiple participants, and the software is recording each file to the participant's local machine and then making the file available after the session. So the file isn't being sent over the Internet until it's finished, which is just a total game changer for our team, because now we don't have to run the risk of Internet connectivity fluctuations if someone's child were to start watching a movie and start taxing bandwidth in the home...

Eric Hanselman

Or video games.

Kyle Cangialosi

Exactly. That would result in a lot of unwanted lag.
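
A sketch of why "upload after the session" is so robust: the finished local file can be pushed in small chunks with retries, so a mid-call bandwidth dip never touches the take. The endpoint URL and Content-Range convention here are hypothetical, not Riverside's actual protocol.

```python
import time
from pathlib import Path

import requests  # pip install requests

CHUNK = 1 << 20  # 1 MiB per request

def upload_finished_recording(recording: Path, url: str) -> None:
    """Push an already-complete local file to a (hypothetical) endpoint."""
    offset = 0
    with recording.open("rb") as f:
        while chunk := f.read(CHUNK):
            for attempt in range(3):  # retry transient network failures
                try:
                    resp = requests.put(
                        url,
                        data=chunk,
                        headers={"Content-Range":
                                 f"bytes {offset}-{offset + len(chunk) - 1}/*"},
                        timeout=30,
                    )
                    resp.raise_for_status()
                    break
                except requests.RequestException:
                    time.sleep(2 ** attempt)  # back off and try again
            else:
                raise RuntimeError("chunk upload failed after retries")
            offset += len(chunk)
```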

Eric Hanselman

Well, everything that we've been familiar with on video conferences, right? It's all of the, oops, somebody is frozen and the audio quality is lousy, all those sorts of issues around the dependence on the Internet connection.

Kyle Cangialosi

Exactly. Next in Tech has always been a remote production, and we've been able to use these tools to make the process easier on everyone. More recently, things like AI have become available through audio restoration and audio repair tools. Now that we're able to record separate-track audio for people around the world and also use AI to almost upscale their files, those two things in tandem have totally raised the bar on the remote recording standard.

Eric Hanselman

Well, hopefully, as we're seeing with Next in Tech. But it is one of those things, if you think about that shift in tools: when we started Next in Tech, we were actually using a remote connection technology that mimicked high-performance phone lines over the Internet, which used to be the studio quality standard, right? If you were remote, you'd have high-performance lines that happened to be connecting you. The idea that you could do it over the Internet was great, but you still had that dependency.

Now we've really taken that next step. You were talking about some of the editing tools that are out there. With so much rapid change, clearly, the tool capabilities have changed. But what have those transitions been like? Are there things that work well? I know in so many aspects of tech, you start to dive into areas where it seems like we've got a really great new idea, and it turns out to either be clunky or take a lot of work, those sorts of things. Have you hit any of those hiccups? Or has it been relatively smooth sailing in terms of this transition?

Darren Rose

Well, typically with anything that's tech-based, we've found that giving a set of instructions to anybody joining, say, for instance, the Riverside recording session is going to help, right, because not everybody is tech savvy. People are obviously getting better since COVID, with people having had to record themselves at home. I can run you through some of our triage questions, but one of the problems that we generally run into -- in the most respectful way -- is that people are busy. People don't really have time to read things. And when there's a link in an invite, people just click it without actually reading the instructions. So when we do run into problems, we'll just take people through the 10 steps that we've put in our instructions, and we can normally solve it that way.

Eric Hanselman

Well, one of those, for example, right, is that we changed our VPN provider.

Darren Rose

Exactly.

Eric Hanselman

And in fact, in the past, we used to say, turn off the VPN, but now we need to have the VPN on to make sure that we actually have access to the rest of this. It's one of those things where there are transitions that maybe aren't always intuitive to the layperson. And when those transitions happen without a lot of coordination -- because it may be surprising, but sometimes technology changes happen without complete coordination -- yes, there's a certain amount of scrambling that has to happen around that.

Darren Rose

And sometimes there's technology that we think will actually help make us sound better -- for instance, Bluetooth headphones. It's a great piece of tech, but we prefer people to use wired headphones and a wired microphone. Sometimes Bluetooth can give a kind of warping, especially if the batteries are getting low. We can eliminate that by just using old-school wired headphones and wired microphones. But it's something you wouldn't know unless you were told otherwise.

Eric Hanselman

The reality is that if you go to Bluetooth, this is a radio link: not only is there interference, but there's also a delay. And even though that latency may be relatively small, it's still going to have an impact. Although it's interesting, speaking of places where latency has an impact -- the studio also does a lot of work with video. Are there similar changes happening in video?

Darren Rose

Yes, absolutely. And again, I think the big change came post pandemic, where we realized that, okay, we can't have people in the studio, people need to be at home, and we can't be fully reliant on people's Internet connections. So what else is there? And I think this is where the apps came to the fore. We pivoted to an app called OpenReel, which would allow us, as producers and as directors, to link into somebody's iPhone and essentially control things like white balance levels. And the people that we were interviewing were essentially turning into producers themselves -- we would direct them. Sometimes it's [indiscernible]; we'd have to ship them perhaps a tripod to help them raise their cameras up to eye level.

But we weren't so heavily reliant anymore on people's Internet connections, because when we initiated the recording, that recording would be captured locally to those people's iPhones. And after the recording, once we had pressed start, we were safe in the knowledge that even if someone had a slow Internet connection, that recording was safely stored on that person's phone. So we would just have to be a little bit patient and wait for that video to upload through the cloud, which sometimes took a long time.

It's only more recently that these apps and these providers have allowed progressive uploads. So we're no longer asking somebody, okay, thank you for your time on the recording -- can you please keep the app open for another 5 hours while we download 1 or 2 gigs' worth of footage? It's gotten a lot better now. But again, we are somewhat reliant on those connections, so our producers at the 199 would run tests beforehand -- simple connection tests just to make sure that they meet minimum spec requirements, which is typically around 5 megabits per second up and down.
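
For illustration, a crude version of that connection test: time one download and one upload and convert to megabits per second. The test URLs are placeholders for whatever speed-test service a producer actually uses.

```python
import time

import requests  # pip install requests

MIN_MBPS = 5.0  # the up/down minimum Darren mentions

def measure_mbps(url: str, upload_payload: bytes | None = None) -> float:
    """Time a single transfer and return its throughput in megabits/second."""
    start = time.monotonic()
    if upload_payload is None:
        nbytes = len(requests.get(url, timeout=60).content)  # download probe
    else:
        requests.post(url, data=upload_payload, timeout=60)  # upload probe
        nbytes = len(upload_payload)
    return nbytes * 8 / (time.monotonic() - start) / 1_000_000

down = measure_mbps("https://example.com/10MB.bin")
up = measure_mbps("https://example.com/upload", b"\x00" * 5_000_000)
ok = min(down, up) >= MIN_MBPS
print(f"down {down:.1f} Mb/s, up {up:.1f} Mb/s -> {'OK' if ok else 'below spec'}")
```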

And we're at the time now where it doesn't seem strange to watch people, especially on the news, recording things in their homes. It just feels normal now. Prior to that, we were using other software -- Globalinx, I believe, was a provider that we were using in the studio or in the office. It was a subscription-based platform where news outlets could seek professional input from experts all over the world in different divisions, different areas. You would typically be sitting down in front of a green screen in a studio, and there would be a live connection up to a network, and we'd do it that way. But we haven't used them since the pandemic. So back to my earlier point, I think people are just feeling more comfortable doing the call, similar to a Zoom or a Skype call, from their homes.

Eric Hanselman

Well, it seems like part of that is advancements in the devices themselves. I've now got a smartphone that has reasonably high-quality audio and video and, more importantly, has the storage capacity to be able to capture enough of it to really be useful -- and then, hopefully, a connection that will get it back up to be edited once you're done. And that device technology plays such a big part of this. If you look at the amount of intelligence that's applied to audio processing, video improvement, all of those kinds of things, we've now got substantially more powerful handsets that can actually do a lot of that work themselves.

Kyle Cangialosi

These local recording platforms that we're using now also work really well in this hybrid state that a lot of the world is coming back into, where you have a combination of in-studio and remote. It's just been a lot more reliable for us to handle and work with remote participants through platforms like this, where pre-pandemic, some of the tools hadn't really evolved yet. But thinking about video, too, Darren, something that we've been playing around with is eye contact using AI. With remote sessions especially, you have people who, let's say, need to read from a script, and they need a teleprompter -- but without traditional teleprompter glass, where the teleprompter text is directly in front of the lens.

There's always going to be this risk where you can see their eyes looking at their desktop screen, not quite making eye contact with the webcam. But there is software available now that can redraw the eyes as a post-production effect -- you don't even need to run this in real time, although there are platforms, through NVIDIA, for example, that do exactly that for live streamers -- a post-process effect that can correct a person's gaze so that they're looking directly at the camera. And it's not always perfect: sometimes a person has particularly large frames on their glasses, or they have long eyelashes, that kind of thing. This is a developing technology. But that's been a really cool innovation that has helped improve a lot of this remote recording in the video space as well.

Eric Hanselman

And it's interesting -- subtle things like that can make a big difference to the level of personal connection that it creates, because when you can actually see that somebody is looking directly at the camera, it can seem more sincere. It can seem more direct.

I'm curious -- Darren had mentioned some of the tools to be able to remove unwanted background noises; we had this fire alarm going. The thing that I've been impressed by is the level of abstraction, because historically, if you wanted to edit out a sound -- a dog bark, something like that -- you'd have to carefully edit it, figure out what that sound looked like, and hope that you could remove it, maybe in a pause or something in the speaker's speech.

But now we've got tools that can more directly identify not only sounds that you want to remove but, in many cases, directly edit words. And that's a level of abstraction where it seems like we've now got much more powerful tools, much the way we continue to abstract up a lot of things in technology.

I'm curious about what that impact has been and the capabilities. Are they solid? Are they useful? Are there things that you still need to do manually? I'm curious about what that transition is.

Kyle Cangialosi

One of the coolest developments in this space recently has been the shift towards text-based editors, like you're describing, Eric, where after a session, we can import all of the audio that we get from the session, and it's transcribed as text. All the words in the transcript are then dynamically linked to the files underneath. So you can now make cuts to your text, you can copy and paste things, and it's handling all of the audio cuts under the hood, under a sleek UI. This still requires a lot of manual checking -- you don't want to let the AI take the wheel just yet. Take something like filler-word removal.

Since everything is being transcribed, you can do a Ctrl+F for common instances of filler, like um, ah, you know, and with a click of a button, remove all of those things from the transcript. But that can lead to things sounding overly choppy, overly edited -- or maybe the transcription wasn't all that great, and it mistook a filler word for a word that you do need. So these are still things that require that manual review. But overall, it's saving us a lot of time, being able to work with content faster.
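
A minimal sketch of the mechanism Kyle describes: given word-level timestamps from a transcription pass, drop the filler words and return the time ranges of audio to keep. Real text-based editors do this under the hood, and far more robustly; the data format here is invented for illustration, and the output still needs the human review Kyle mentions.

```python
FILLERS = {"um", "uh", "ah"}  # single-word fillers; phrases like "you know"
                              # would need phrase matching on top of this

def keep_segments(words, fillers=FILLERS):
    """words: list of (word, start_sec, end_sec) tuples from a transcript.
    Returns (start, end) ranges of audio to keep, with fillers cut out."""
    segments, cursor = [], 0.0
    for word, start, end in words:
        if word.lower().strip(".,!?") in fillers:
            if start > cursor:
                segments.append((cursor, start))  # keep audio up to the filler
            cursor = end                          # then skip past it
    if words and cursor < words[-1][2]:
        segments.append((cursor, words[-1][2]))   # keep the tail
    return segments

words = [("So", 0.0, 0.2), ("um", 0.25, 0.5), ("we", 0.55, 0.7),
         ("record", 0.7, 1.1), ("locally", 1.1, 1.6)]
print(keep_segments(words))  # [(0.0, 0.25), (0.5, 1.6)]
```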

Eric Hanselman

And especially when we think about it, with so many things in technology, the big bonus so often is that level of abstraction, but the challenge is what the quality is and how far we can really take it. Back in our 100th episode, I used one of the tools that you all have been working with to synthesize my voice -- to build a voice model. I gave it about 45 minutes' worth of my speaking to train the model, and, just to be fully meta about this, we had it read a ChatGPT response about what it was like to create a podcast. So we actually had the AI reading the AI. And it sounded a fair bit like me.

The thing that I thought was really interesting is that I had my spouse listen to it. And she said, "Oh, yes, that first part definitely was not you." And then I said, "Well, all right, when does it stop being me?" And the last sentence -- I don't know whether or not it was just a particular way I happened to phrase things or whatever -- was AI, but it sounded exactly like me. And she was just like, "Oh, yes, that was definitely you."

So it's interesting to see that there are places where it's working well and places where it's not, where we're kind of hovering on that edge of having the tools to work on both the edit side and the creation side. We had a guest on a while back who misspoke about a company name, and you were able to go back and replace the references to that company name in the post process, without having to rerecord, by using tools that simulate the guest's voice -- because, in fact, they used, I think, a domain suffix rather than dot-com, or [ dot IT ] or something like that.

But it was something that they had actually already said that you were able to reconstruct using those tools. And that was, I think, one of the more fascinating pieces to my mind: really being able to leverage a lot of this technology in ways that make the whole process much more productive.

Darren Rose

That's something that's getting better all the time, Eric. With current AI models, in order to get a good representation of how somebody sounds and to be able to truly synthesize their voice, a voice-over artist would have to record quite a few hours' worth of lines of text. I think it's something the Apple iPhone can do at the moment as well, in settings, for accessibility reasons. But we demoed some software by a company that we used to provide professional voice-overs, and a lot of their top voice-over artists have now put their hands up and synthesized their voices, which essentially allows customers to purchase voice-overs by the minute or by the word count.

We were a little bit concerned, as a production team, as to how these voice-over artists were going to get paid, but we were reassured by the company that the cost -- which was like 1/10 of what it would cost for somebody to go into a booth and record their audio -- all of that money was going directly to that voice-over artist. But essentially, because they had synthesized their voices, they no longer needed to give their physical time. It was all there to use in the cloud.

Eric Hanselman

I have, in fact, heard some of these. There was something I heard recently -- some now-deceased radio personalities who were doing a promo. Because there were just hours and hours and hours of the radio show, they were able to build a model of the pair of hosts, and I was just stunned at how much it sounded like their regular voices. I guess I wonder, now that we have a model of my voice, are we heading towards an environment where we're replacing podcast hosts altogether?

Darren Rose

I was just thinking, Eric, we've got hours and hours of you somewhere on hard drive.

Eric Hanselman

We do indeed. Well, and a model that could probably be tweaked up from where we are today.

Kyle Cangialosi

Yes. That process of working with your voice -- and thank you for the opportunity, by the way, because the consent implications, like we're saying here, are very complicated when you're making a model of someone's voice -- it was really interesting to see where that technology is right now at the kind of prosumer level. For correcting single words, I think it does a really good job. Sometimes it can be indistinguishable from normal speech, and this is synthetic speech correcting single words. But as soon as you start generating full sentences, you start to see more of the cracks in the pavement.

A good example of this -- imagine pulling separate words that you want to stitch together as a sentence -- there was an advertisement for a Marvel show a few years ago. I'm not sure if you're familiar with this, Eric, but they had a promotional trailer featuring Owen Wilson. They're showing B-roll on screen, and you're hearing his voice say, "Is this the first time for me in the Marvel universe?" -- where you could clearly hear that they had taken him saying those words during an interview and stitched those words together.

There are tools becoming available now that can interpret those words, take in the context of the sentence at large and then, through, I'm assuming, some sort of generative AI or pitch shifting, get that to sound more like, "This was the first time for me in the Marvel universe." And that's an incredible ability as an editor: to be able to manipulate content that you already have -- those were words spoken by a person -- but use technology to adapt it so that it sounds better and more digestible. At the same time, from a content perspective, just slapping on all of this interesting AI technology starts to make things sound a lot more monotone and picture-perfect.

There is some imperfection that is necessary, I think, for people to be interested in content like this. So we don't want to go overboard, but it's interesting to see the ways that we're using some of these tools to adapt real people while trying to maintain the person behind the voice in the content that we're making.

Eric Hanselman

Well, it gets back to that whole question of what really is the value, what's the interest, what are the things that really are the useful part of this -- and, hey, hopefully, what people are taking away from our podcast. It is that creation. It is that spark. I mean, you could get all the information that we talk about in lots of different places. But the ability to have all those things that get revealed in the conversation process, all the aspects of the spontaneous nature of what we wind up covering and how we wind up going about this -- that's certainly part of it, we hope: the entertainment part, and I think also the interesting aspect of it.

But yet, we're getting to a point at which these are tools -- prosumer, as Kyle said -- that the average person can actually do a fair amount of work with. And I guess that's certainly going to improve the general nature of things. We see that clearly in the amount of content that's out there, the quickly created stuff, everything that's on TikTok, YouTube and the rest.

I was really interested to see some of the budget mechanics of a leading YouTube content producer -- the fact that this person is actually spending millions of dollars in production time per episode for some of these things really seems to turn this whole thing back on its head. They've gone so far beyond the simple things of, yes, I've got my smartphone out and I'm recording, that, in fact, you really do need production capabilities on a much more massive scale to be able to do some of the things that are out there.

But it really seems like we're blurring these lines, and in some cases, a little more intentionally. And I wonder whether or not, for some of these big productions, they're actually trying to make it look amateur. I mean, we saw that with what was, I think, one of the first cutting-edge "we recorded it on a video camera" things, The Blair Witch Project, where it was shot with professional production capabilities but made to look as though it was done in an amateur fashion. The pendulum swings back and forth pretty rapidly in all of that.

Darren Rose

I think Kyle touched on it earlier. In addition to using Riverside, we're also using some other software called Descript. And the #1 thing that we essentially do after these podcasts, Eric, is download the audio from the cloud and run it through Descript. It's got this AI upscaler tool called Studio Sound. As the name suggests, it will take anybody's audio and run it through this AI generator.

It typically takes anywhere from, I don't know, a minute to 2 minutes to generate, and it will make somebody sound like they have recorded it, as the name suggests, in a studio. So it would take these tiny microphones that you typically get in a webcam or in the headset of a smartphone. And once it processes, you have this slider to choose the strength of the effect that you want to use.

I remember the first time I listened, after [indiscernible] was very excited about this new feature and said, "Darren, you've got to check this out." I think the first time I heard it, I went from 0 to 100%, and I think I said, wow, out loud. My colleague looked over at me. I've been working in this space for 20 years, and it was the first time -- it was just a huge moment, and I just thought, wow, AI is here to stay.
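
One plausible way a 0-to-100% strength slider like that could work is a simple dry/wet blend between the untouched recording and the fully processed render -- a guess at the internals, not Descript's actual implementation, but a common pattern in audio tools:

```python
import numpy as np

def with_strength(dry: np.ndarray, processed: np.ndarray,
                  strength: float) -> np.ndarray:
    """Blend original audio with its AI-enhanced render.

    strength=0.0 returns the untouched take, strength=1.0 the full effect.
    """
    s = float(np.clip(strength, 0.0, 1.0))
    return (1.0 - s) * dry + s * processed

# Example: back the effect off to 60% if 100% sounds over-processed.
dry = np.random.default_rng(0).normal(0, 0.1, 48_000)  # stand-in recording
processed = dry * 0.5                                  # stand-in enhanced render
blended = with_strength(dry, processed, 0.6)
```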

But yes, to answer your question, there is a whole host of tools now that people can use -- a lot of these YouTubers and people using TikTok as well. I guess it's not too dissimilar to maybe 10, 15, 20 years ago, when people wanted to produce music, and the only way to do that was to go into a recording studio, spend a lot of money and use the experts. Then the Apple Mac came along and kind of changed all that; people were able to do it much more affordably from their homes, from their bedrooms. And I think now, with AI coming to the fore, it's just something that's going to get better and better and enable people to create good-sounding and good-looking content without breaking the bank.

Eric Hanselman

Well, and we've seen some of these advances come to live performance. I look at Auto-Tune: here you've got a technology that lets singers sing on key -- or automatically corrects singers who sing off key. And in fact, the effects of the tool, if you turned it up too high, could cause all these audio artifacts -- and the nature of the artifact, the Auto-Tune sound, became a part of that sound. I wonder if we'll do similar kinds of things with a lot of the AI tools that are out there, because, again, we are still in a world in which they can do a lot, but they can also introduce artifacts of their own.

Kyle Cangialosi

Leading up to this recording session, Eric, I was thinking about how my grandfather was an announcer for NBC for a while and did a lot of voice-over work for educational cassettes. And I remember him describing the process of working with reel-to-reel: if the copy of the script changed or they needed to make an edit, you physically needed to stop the tape at the spot, take a scissor, cut the tape, find the joining end and use Scotch tape to splice these pieces of the reel together -- and then hope that it didn't jam in the player when you tried to run it the next time. When you think of that just 60, 70 years ago and where we are today with text-based editing platforms, I mean, it's amazing how far we've come in such a short amount of time. It's really something.

Eric Hanselman

The benefits of technology. Well, there are so many different pieces to all of this, and we're seeing a lot of it in the podcast itself. It will be really interesting to see where this goes. Well, I appreciate all the perspectives from you both. Thanks very much.

Darren Rose

Thanks, Eric. It's been a pleasure and it's been really interesting to be on the other side of the microphone as it were. So thanks for having me.

Kyle Cangialosi

Likewise, thank you so much for 150 episodes. So what an honor it is to finally be on this side of the production. So thank you again.

Eric Hanselman

Well, hey, we've got probably another 150 to go. It's been great working with you both. I appreciate the time that you've spent and all the work that you put into the podcast. So many thanks to both of you and the rest of the team. I also want to acknowledge that there are a bunch of folks behind all of us in this. But we are at time, and that is it for this episode of Next in Tech.

I want to thank our audience for staying with us. Thanks to our production team, some of whom happen to be here, and to those who are not: Caroline Wright and Kaitlin Buckley on the Marketing and Events teams, and the rest of the team at the 199. I hope that you will join us for our next episode, where we're going to be digging into a whole set of topics around cloud security, the capabilities that are rolling out for the cloud and how better to leverage them. I hope you'll join us then, because there is always something Next in Tech.

Copyright © 2024 by S&P Global Market Intelligence, a division of S&P Global Inc. All rights reserved.

These materials have been prepared solely for information purposes based upon information generally available to the public and from sources believed to be reliable. No content (including index data, ratings, credit-related analyses and data, research, model, software or other application or output therefrom) or any part thereof (Content) may be modified, reverse engineered, reproduced or distributed in any form by any means, or stored in a database or retrieval system, without the prior written permission of S&P Global Market Intelligence or its affiliates (collectively, S&P Global). The Content shall not be used for any unlawful or unauthorized purposes. S&P Global and any third-party providers, (collectively S&P Global Parties) do not guarantee the accuracy, completeness, timeliness or availability of the Content. S&P Global Parties are not responsible for any errors or omissions, regardless of the cause, for the results obtained from the use of the Content. THE CONTENT IS PROVIDED ON "AS IS" BASIS. S&P GLOBAL PARTIES DISCLAIM ANY AND ALL EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE OR USE, FREEDOM FROM BUGS, SOFTWARE ERRORS OR DEFECTS, THAT THE CONTENT'S FUNCTIONING WILL BE UNINTERRUPTED OR THAT THE CONTENT WILL OPERATE WITH ANY SOFTWARE OR HARDWARE CONFIGURATION. In no event shall S&P Global Parties be liable to any party for any direct, indirect, incidental, exemplary, compensatory, punitive, special or consequential damages, costs, expenses, legal fees, or losses (including, without limitation, lost income or lost profits and opportunity costs or losses caused by negligence) in connection with any use of the Content even if advised of the possibility of such damages. S&P Global Market Intelligence's opinions, quotes and credit-related and other analyses are statements of opinion as of the date they are expressed and not statements of fact or recommendations to purchase, hold, or sell any securities or to make any investment decisions, and do not address the suitability of any security. S&P Global Market Intelligence may provide index data. Direct investment in an index is not possible. Exposure to an asset class represented by an index is available through investable instruments based on that index. S&P Global Market Intelligence assumes no obligation to update the Content following publication in any form or format. The Content should not be relied on and is not a substitute for the skill, judgment and experience of the user, its management, employees, advisors and/or clients when making investment and other business decisions. S&P Global Market Intelligence does not act as a fiduciary or an investment advisor except where registered as such. S&P Global keeps certain activities of its divisions separate from each other in order to preserve the independence and objectivity of their respective activities. As a result, certain divisions of S&P Global may have information that is not available to other S&P Global divisions. S&P Global has established policies and procedures to maintain the confidentiality of certain nonpublic information received in connection with each analytical process.

S&P Global may receive compensation for its ratings and certain analyses, normally from issuers or underwriters of securities or from obligors. S&P Global reserves the right to disseminate its opinions and analyses. S&P Global's public ratings and analyses are made available on its Web sites, www.standardandpoors.com  (free of charge), and www.ratingsdirect.com  and www.globalcreditportal.com (subscription), and may be distributed through other means, including via S&P Global publications and third-party redistributors. Additional information about our ratings fees is available at www.standardandpoors.com/usratingsfees.
