Special Reports

Oct 30, 2025

AI and banking: Leaders will soon pull away from the pack

28 October 2025

Banks are shifting to advanced AI, boosting efficiency but increasing risks. Within five years, AI readiness will separate leaders from laggards.

By Miriam Fernández, CFA, and Nicolas Charnay

This is a thought leadership report issued by S&P Global. It does not constitute a rating action, nor was it discussed by a rating committee.

Highlights

- Banks' AI journeys are progressing, with a shift from rule-based automation to intelligent and autonomous systems that support decision making and promise to accelerate a technological transformation.
- The adoption and scaling of increasingly complex agentic AI introduces new risks and amplifies existing ones, including human-machine misalignment, herding of automated actions, privacy and security risks, and financial instability.
- Although AI has not yet affected banks' credit quality, we expect rated entities' financial and competitive positions to diverge within the next three to five years, as the adoption and scaling of AI strategies, and the management of related risks, broadens the gap between leaders and laggards.

The banking sector's AI journey is well underway. Financial institutions, led by data-rich, process-oriented, and deep-pocketed retail and investment banks, were early and enthusiastic pioneers of machine learning and deep learning (for over a decade) and, more recently, of generative AI, particularly following the debut of ChatGPT in late 2022. Since then, the voyage has continued apace, notably with the incipient adoption of AI agents. As with many pioneering efforts, unexpected challenges and new risks are emerging.

The next three to five years promise to be a determinative leg of that journey, during which AI will increasingly alter financial institutions' operations and their environments. Banks that secure the benefits of AI, including across costs and revenues, could find themselves with enduring advantages over competitors.
S&P Global Ratings expects this will ultimately weigh in the assessment of credit quality for both the leaders and laggards in that race.

The resulting enthusiasm for, and fear of missing out on, AI's benefits is common to all profit-seeking companies, including banks. Yet we caution against hype that could lead to poor investment decisions. Given banks' decades of efficiency efforts and project management experience, we expect management teams to prudently allocate AI budgets and monitor returns. Banks that abandon those safeguards may face challenges in scaling AI solutions and increased pressure from both traditional competitors and more efficient new entrants, such as neobanks and fintechs.

We reviewed the adoption, benefits, and limitations of AI in the banking sector two years ago (see "AI in Banking: AI Will Be An Incremental Game Changer," Oct. 31, 2023). Given the progress since then, the developments in generative and agentic AI, and the importance of the coming years, we are revisiting the topic with an update and complement to that earlier report.

Financial services as AI pioneers

Given the financial sector's status as an early adopter of AI (behind the technology sector), it is not surprising that it continues to be a leader in deploying the technology. As of January 2025, 54% of surveyed financial services companies had deployed AI initiatives, up from 40% a year earlier and ahead of the 46% average across all business sectors, according to a report by S&P Global Market Intelligence 451 Research (see "Voice of the Enterprise: AI & Machine Learning Use Cases 2025," January 2025). That higher rate of deployment has also come with costs, notably due to the financial sector's relatively high rate of AI project abandonment (see figure 1).

Traditional AI, such as supervised and unsupervised machine learning, remains at the heart of many financial services companies' AI initiatives.
Banks are primarily using these types of AI to simplify operations, replacing manual functions with machines that are less prone to operational failure and able to learn from their performance. AI capabilities are also being used to enhance fraud and risk management practices, for example through the analysis of vast amounts of data to uncover patterns not visible to the human eye.

Yet newer AI technologies are also finding their place. In the survey cited above, about one-third of financial services companies reported using generative AI (which produces original content) as of January 2025, compared with 21% a year earlier and an average of 27% across other sectors. The survey found that banks are primarily testing generative AI on lower-risk and internal use cases, such as generating synthetic data for testing and training, and expanding process automation. A similarly cautious approach is evident among those testing automation with agentic AI, which can autonomously perform tasks, reason, predict, learn, and adapt, with limited or no direct human oversight.

Trends in banks' AI deployment

Banks' deployment of AI is uneven, varying across functions and geographic regions. This variation was revealed by an analysis of banking sector transcripts (including earnings calls, investor presentations, and conferences), conducted with the aid of Pronto NLP, a generative AI tool owned by S&P Global Market Intelligence. These transcripts, collected by S&P Global over three years (starting in October 2022) and spanning about 550 banks, show that most banks reporting the use of AI are deploying it for internal solutions. As of Q3 2025, 43% of global banks in the study reported internal AI deployment, while only 9% indicated its use in external-facing systems (see figure 2). Geographic deployment (both for internal and external use) also shows a clear divide, with Europe leading the way, based on our transcript sentiment analysis (see figure 3).
Internal applications: Automation by generative AI is powering efficiency gains

According to S&P Global Market Intelligence 451 Research, nearly 50% of financial institutions are using or developing generative AI systems for internal use, notably to increase the scope of multi-step process automation and so multiply the number of tasks done autonomously. This step up from robotic process automation to generative AI is driven by the potential for improved efficiency and accuracy. For example, in fraud prevention, multimodal generative AI can analyze voice authentication data, passport images, and transaction logs to detect anomalies more rapidly and often with greater accuracy. Banks are also deploying generative AI for engineering and software development, accelerating coding and reducing project costs.

External applications: AI chatbots combined with human expertise

Chatbots are the most common focus of banks' external AI deployments, cited by 41% of banks, according to S&P Global Market Intelligence 451 Research survey data. The resulting growth in conversational AI assistants for customer service offers the potential for more responsive and personalized services, including multilingual support. Some systems also incorporate multimodal capabilities, making interactions more flexible and natural. However, most banks still use hybrid models that escalate complex queries to human agents. This suggests both the performance limitations of AI chatbots and banks' ongoing caution about eliminating human interaction in customer service. Additionally, few institutions have managed to combine client and transaction data with their knowledge databases in customer chatbots, largely due to data privacy concerns.
How AI could benefit banks in the near term

We expect the next three to five years to be pivotal for banks' wide-scale adoption of AI, not least because entities that establish a meaningful lead could secure cost and revenue advantages and economies of scale, which in turn would support further investment in innovation. This virtuous circle could create an AI gap among banks, which could ultimately weigh in our assessment of banks' credit quality.

Initially, most banks are focusing on process automation, likely leading to improved efficiency and cost savings. Banks that achieve these efficiencies over the coming three to five years could establish a lasting competitive advantage. This prospect is driving significant investment, with the banking sector leading the way: it is projected to account for about 20% of global AI spending in 2028, according to IT market analyst International Data Corp.

While accurately measuring efficiency gains at the enterprise level is challenging, we estimate average efficiency gains (net of AI investment) of 10% to 25% from a combination of cost savings, increased revenues, and risk mitigation. For the top 200 global banks that we rate, we modeled the potential impact of four different AI investment scenarios on cost-to-income and return on equity (see figures 5 and 6). These scenarios are based on technology budgets ranging from 8% to 20% of non-interest expense, with AI and automation spending representing about 8% to 22% of total technology budgets, equating to as much as 5% of non-interest expense.

Our scenarios

- Low investment (0%-1.5% of non-interest expense): Exploratory projects and pilot implementation, with initial learning curves, modest setup costs, and limited benefits.
- Moderate investment (1.5%-2.5% of non-interest expense): Notable efficiency gains (5%-15%) from AI implementation in key areas.
- High investment (2.5%-3.5% of non-interest expense): Significant efficiency gains (15%-25%) from greater integration into broader banking operations.
- Very high investment (>3.5% of non-interest expense): Diminishing returns beyond the "high investment" scenario, but still some gains from optimizing existing AI solutions, exploring more complex uses, and exploratory research.

Our scenario analysis assumes that the return on AI investments will follow an S-curve, a typical pattern of technological diffusion. As such, we estimate that firms that undertake limited AI investment will reap little benefit, while firms that embrace AI adoption will likely dedicate larger investments and see accelerating benefits that plateau as return on investment diminishes beyond a certain point. We also assume a 48% abandonment rate of AI spending, in line with the S&P Global Market Intelligence 451 Research survey data cited above, to account for current challenges in scaling AI pilots to deployment.

Given that banks will make various levels of investment in AI solutions, infrastructure, and talent, we modeled four scenarios and estimated the efficiency gains that could result in each (based on end-2024 financial metrics for a sample of banks). The results suggest that efficiency gains, and therefore improvements in cost-to-income ratio and return on equity (ROE), only become substantial at high levels of investment (see figures 5 and 6).

At the individual company level, we believe the main determinants of success will be:

- AI strategy: Banks' AI-development strategies differ, including in the pace at which they invest in successive iterations of AI technology and in their split between capital expenditure (which enables the accumulation of capabilities and capital over time) and operating expenses (such as licenses to third-party vendors and cloud computing costs).
It is too early to say which strategy will deliver the largest payoff, hence the variety of approaches and the elevated abandonment rates. Early adopters of AI could also reap first-mover advantages. Ultimately, much of the return will depend on banks' high-level strategic choices and the speed at which they adapt to secure opportunities.

- Scale: AI investments pay off when applied at scale. This gives an obvious advantage to larger banks, which can apply AI technologies to a larger customer base and a larger cost base. Yet we recognize that smaller and newer entities, with less legacy infrastructure, might find it easier to adopt AI solutions widely across their business.

- Human adoption: AI tools must be adopted and embedded in human-led processes, both in banks' operations and among end users (i.e., on both the front and back end). The upskilling of staff and the tech-savviness of a bank's clientele are thus key differentiators of AI success.

- Risk management: AI adoption introduces new risks (see below). Mismanagement of these risks might lead to significant additional costs that could offset or outweigh operational benefits.

Ultimately, all of these factors will influence the time it takes for productivity gains to materialize.

Economics of AI adoption: How banks can measure AI investment success

Global banks have started to invest heavily in AI technologies, and we expect this trend to continue. Over time, questions about the financial return on this investment will become increasingly prominent in banks' board-level discussions, as AI naturally competes with other investment priorities. We already see this trend in industry surveys, with a growing number of firms citing AI investment costs as a key challenge (see "Generative AI shows rapid growth but yields mixed results," Feb. 27, 2025). We expect success will be measured by efficiency gains and additional revenue derived from AI technologies.
The potential for improvement across these two variables is difficult to estimate at this stage and will depend on the type of AI technologies in which banks invest. Moreover, the cost-benefit analysis differs markedly between investment in traditional AI, in generative AI, and in agentic AI.

Sensitivity analysis helps to illustrate the dynamics at play between costs, revenues, and potential efficiency gains (see figures 7 and 8). Starting from a theoretical bank with a 50% cost-to-income ratio (i.e., close to the current median for our top 200 rated banks globally), we assume that:

- AI-related spending will increase overall operational expenses by 1% to 5%.
- Bank efficiency gains (i.e., the decline in annual operational expenses) could range between 5% and 40%.

We then simulate five years of investment and review the cost-to-income ratio of that same bank. For instance, if a bank increases its operational expenses by 3% annually, it will require a 15% efficiency gain per annum to return its cost-to-income ratio to its initial level after five years. Holding that 3% annual growth in expenses constant, we further assumed that annual revenues could grow by 1% to 5% due to the adoption of AI-related technologies. We then see that the bank's profitability gains could quickly accelerate and become significant (see figure 8).

AI agents could transform efficiency and services

The benefits of AI to banks could accelerate with the deployment of AI agents: systems that can act autonomously, learn, adapt, reason, and predict with limited direct human oversight. Agentic AI promises around-the-clock client service and integration with existing infrastructure. Interest among banks is strong: 67% of financial services companies in developed economies intend to deploy AI agents, according to S&P Global Market Intelligence 451 Research.
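The scenario machinery described above, an S-curve return profile discounted for project abandonment plus a multi-year cost-to-income simulation, can be sketched in a few lines. All parameters and the functional forms are illustrative assumptions, not S&P Global's actual model; in particular, the sketch reads the article's 15% figure as a one-off level reduction in expenses set against five years of compounding AI spend.

```python
import math

def s_curve_gain(invest_pct, max_gain=0.25, midpoint=2.5, steepness=2.0):
    """Illustrative S-curve: gross efficiency gain as a function of AI
    investment (% of non-interest expense). Small outlays yield little,
    benefits accelerate, then plateau near max_gain."""
    return max_gain / (1.0 + math.exp(-steepness * (invest_pct - midpoint)))

def cost_to_income_after(years=5, initial_ratio=0.50,
                         spend_growth=0.03, efficiency_gain=0.15):
    """Five-year cost-to-income path for a theoretical bank: AI spending
    compounds expenses by spend_growth per year, while AI delivers a
    total efficiency_gain (a one-off level reduction in expenses)."""
    expense_factor = (1 + spend_growth) ** years * (1 - efficiency_gain)
    return initial_ratio * expense_factor

# Haircut the gross gain by the 48% abandonment rate cited in the survey.
print(f"net gain at 2.5% of NIE: {0.52 * s_curve_gain(2.5):.1%}")
print(f"cost-to-income after 5 years: {cost_to_income_after():.1%}")
```

With the default parameters, the simulated ratio lands close to its 50% starting point, consistent with the intuition that a 3% annual spend increase is roughly offset by a 15% efficiency gain over five years.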
Some banks have said they are already deploying, developing, and testing AI agents (see figure 9). We have not evaluated the examples below based on the degree to which the agents can act autonomously, reason and predict, or learn and adapt from their environments. Furthermore, the specific product outcome remains uncertain but could range from back-office optimization to new business servicing channels.

We expect banks will initially deploy AI agents in functions such as compliance, fraud detection, transaction monitoring, and internal process automation. These roles are well suited to agentic AI systems that execute tasks based on deterministic AI models, which are rules-based or deploy pattern recognition, and typically offer greater control, explainability, and auditability. AI agents underpinned by foundation models (such as large language models) offer greater potential for specialized, team-specific workflows. However, this flexibility also makes deployment more challenging and necessitates greater human oversight, particularly in high-risk functions such as credit provision.

The highly regulated nature of banking requires that agents' actions and outputs be explicable. Tools and techniques to facilitate this include methods such as Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), as well as protocols such as counterfactual reasoning, attention maps, token attribution, and comprehensive output logging. Nonetheless, widespread application of such tools remains incipient, hindering banks' adoption of some forms of agentic AI.

The next evolution: Agentic specialization and action models

Banks have, so far, primarily tested and, in some cases, deployed text-generating LLMs and large multimodal models (LMMs), which process and generate different formats such as video, audio, and image.
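SHAP, one of the explainability methods cited above, rests on Shapley values from cooperative game theory: each feature's attribution is its average marginal contribution across all orderings. The toy below computes exact Shapley values for an invented three-feature scoring function; production libraries approximate this for real models, and the feature names and weights here are purely hypothetical.

```python
from itertools import combinations
from math import factorial

def score(present):
    """Hypothetical credit-scoring function: value of a coalition of
    features (invented weights, including one interaction term)."""
    base = 0.0
    if "income" in present:      base += 40
    if "history" in present:     base += 30
    if "utilization" in present: base -= 10
    if "income" in present and "history" in present:
        base += 10  # interaction: income and history reinforce each other
    return base

def shapley(features):
    """Exact Shapley value for each feature: the weighted average of its
    marginal contribution over every subset of the other features."""
    n = len(features)
    values = {}
    for f in features:
        others = [x for x in features if x != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = set(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (score(s | {f}) - score(s))
        values[f] = total
    return values

print(shapley(["income", "history", "utilization"]))
```

Note that the attributions sum exactly to the full model's output (the "efficiency" property), which is what makes Shapley-based explanations auditable.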
We expect those models will continue to play a role, particularly for human-like conversations, but we also expect the emergence of hybrid approaches that use smaller, specialized models for specific tasks. We also see a growing role for large action models (LAMs), which are trained to execute sequences of actions and should facilitate broader adoption of multi-agentic AI systems.

The key advantage of smaller, specialized generative AI models for agentic AI is their lighter computational requirements, resulting in greater efficiency and reduced costs. These models can also be trained to perform specific tasks in process-specific contexts and on targeted datasets, further optimizing performance. Reduced size and cost also enable deployment on premise (i.e., within a bank's own infrastructure) and on edge devices (e.g., ATMs and smartphones), which reduces privacy and security risks. Furthermore, as multi-agent systems become more common, these smaller models' reasoning and explainability will be easier for humans to trace.

LAMs appear particularly suited to banks' often-complex workflows, multi-step and multi-system processes, and stringent regulatory requirements. Unlike LLMs, which are trained on text, LAMs are trained on datasets of actions (such as system logs, instructional videos, and software commands) to generate action sequences that complete tasks. This autonomy offers scalable efficiencies but requires robust audit trails, safety mechanisms, and accountability frameworks.

A challenging implementation path

Banks' AI projects have so far yielded limited, and often difficult to quantify, returns on investment. Many banks are finding the transition from AI solution development to implementation challenging, as evidenced by the high rate of AI project abandonment. A key reason is the difficulty of adapting broad LLMs into specific solutions that apply enterprise-level data and can be integrated into specific business workflows.
The development and deployment of agentic AI is likely to prove even more challenging, as the systems' autonomy will alter operations and thus require careful change management, workflow redesign, strong controls, and gradual implementation. Successful implementation will require overcoming both technical and organizational challenges.

Key technical challenges

Data readiness: Banks' data is often fragmented, duplicated, and siloed, which hinders the provision of training and inference data for AI, and thus the development of reliable AI systems. This problem is amplified by generative AI's reliance on unstructured data such as text or audio, while data privacy regulations, such as the EU's General Data Protection Regulation, further complicate data management and sharing. The upshot is that financial services companies are dedicating about 35% of their AI workload to data ingestion and preparation, exceeding the effort spent on model training and inference, according to research by S&P Global Market Intelligence 451 Research.

To mitigate these challenges, banks could build upon existing data governance and architecture, such as data lakehouse architectures and metadata catalogs, with systems for continuous data quality improvement that incorporate data privacy and security considerations by design. To leverage unstructured data for generative AI, banks could also consider creating vector databases for efficient data retrieval, investing in optical character recognition to convert scanned documents to machine-readable text, and proactively identifying and mitigating bias within their data.

Memory and reasoning: Most AI systems' potential is limited by an absence of long-term memory and an inability to structurally learn from interactions with users over time.
Additionally, general-purpose LLMs tend to lack domain-specific knowledge (so-called semantic memory) and procedural memory (such as bank-specific workflows), both of which are required if AI is to learn, adapt, and personalize at scale.

Improving accuracy is key to scaling generative AI solutions and agentic AI applications. About 38% of financial services companies seek to improve AI model outputs by using retrieval-augmented generation (RAG), which enables LLMs to retrieve documents that serve as knowledge bases, while 57% still primarily rely on monitoring of live responses, according to S&P Global Market Intelligence 451 Research. Regular fine-tuning of AI models can be expensive and trigger additional regulatory requirements, including those under the EU's AI regulations. A more efficient way to introduce domain-specific memory is through knowledge graphs or GraphRAG, which can represent relationships between teams, accounts, and products within banks. Meticulous workflow mapping and detailed workflow descriptions can also support models' procedural memory.

Key organizational challenges

Scalability: Only 5% of AI pilots have delivered significant value and been integrated at scale into workflows, according to a study by the Massachusetts Institute of Technology ("The GenAI Divide: State of AI in Business 2025," Aditya Challapally et al., July 2025, MIT). Reasons for failure cited in the study include technical issues (e.g., models' inability to learn, retain memory, and adapt to specific workflows) and organizational design. We consider people's mindsets and a human skills gap to be another key barrier to integration. This gap stems from a lack of upskilling and of cultural willingness to adopt AI, as well as the need for organizational workflow redesign and leadership support.
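The retrieval step at the heart of the RAG approach mentioned above can be illustrated with a toy retriever: score each document against a query by cosine similarity over term-frequency vectors, then hand the best match to the LLM as context. Real systems use learned embeddings and vector databases; the documents and query below are invented examples.

```python
from collections import Counter
from math import sqrt

def tf_vector(text):
    """Bag-of-words term-frequency vector (a stand-in for an embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents):
    """Return the document most similar to the query: the 'R' in RAG."""
    q = tf_vector(query)
    return max(documents, key=lambda d: cosine(q, tf_vector(d)))

docs = [
    "Policy: wire transfers above 10000 EUR require manual approval",
    "FAQ: how to reset your online banking password",
    "Procedure: quarterly stress testing of the loan portfolio",
]
best = retrieve("customer forgot banking password", docs)
print(best)  # the password-reset FAQ scores highest
```

In a production pipeline, the retrieved text would be prepended to the LLM prompt so the model answers from the bank's own knowledge base rather than from its training data alone.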
Agentic AI could solve some of these memory and adaptability issues at a technical level, but it also introduces additional complexity and risks that could deter scalability. For example, agents require security and access controls, clear accountability, and reliable execution. Solving these critical issues is still very much a work in progress.

Because AI requires scale to generate reasonable returns on investment, failure to scale projects can leave banks at a competitive disadvantage, expose them to operational risks, and ultimately weigh on their credit quality. Banks will need flexible, modular tools to enable gradual scaling, while assembling the talent necessary to drive scaling will require cross-functional teams (including AI engineers, specialists in AI ethics, governance experts, and workflow managers). Strong management leadership on AI and new roles, such as chief AI officer, are also emerging and helping to ensure resource mobilization and adoption. Partnering with AI vendors can help banks access expertise and technology beyond their internal resources.

Governance: Banks' AI governance must comply with complex and varying regulations across jurisdictions, account for the increasing complexity of AI models, and anticipate the need to update internal risk management frameworks. About 50% of financial services companies have AI governance tools, according to S&P Global Market Intelligence 451 Research. However, rapid innovation complicates banks' ability to maintain the necessary controls, especially amid scaling and given agentic AI's nature and growing deployment. Without careful governance, banks could be exposed to material operational risks with financial, regulatory, reputational, and systemic implications.
To mitigate such risks, banks can build safety and ethical considerations into AI systems at the design stage and incorporate safeguards at the transaction and system levels (such as hard limits, approval triggers, and real-time monitoring for unusual patterns). They could also adopt clear ethical guidelines for AI systems and automated workflows, safety controls (e.g., automatic shutdowns), and robust privacy protections, such as data access controls for both humans and AI systems.

How AI could affect banks' credit quality

Banks traditionally assess credit, market, liquidity, and operational risks, including those inherent in machine learning models. Generative AI and agentic AI introduce new risks and amplify existing ones for the financial sector, including threats to wider financial stability. We categorize the AI factors that could influence banks' credit quality as external (beyond a bank's direct control but capable of creating risks and opportunities that affect credit quality) and bank-specific (driven by a bank's financial and risk operations and AI implementation strategies) (see figure 10).

Bank-specific factors: Upside potential, new risks

Banks' ability to leverage AI for efficiency gains and increased revenue, while managing associated risks and investment costs, will likely weigh in our assessment of their credit quality. We expect this could lead to stronger bank business profiles, resulting from improved financial performance through productivity and revenue growth (see the scenario analysis above). Using AI to derive more accurate pricing and to improve risk identification and monitoring could also enhance risk management. However, banks' implementation of AI solutions, and generative AI in particular, has raised new risks. The key technical risks for banks are reliability, explainability, and accountability (see figure 11).
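The transaction-level safeguards mentioned above, hard limits and approval triggers, amount to a simple policy layer between an AI agent and execution. The sketch below is a minimal illustration under invented thresholds and a hypothetical Transaction type; real controls would also cover velocity checks, counterparty screening, and audit logging.

```python
from dataclasses import dataclass

HARD_LIMIT = 1_000_000      # block outright above this amount (illustrative)
APPROVAL_TRIGGER = 50_000   # escalate to a human above this (illustrative)

@dataclass
class Transaction:
    amount: float
    counterparty: str

def check(txn: Transaction) -> str:
    """Decide what an oversight layer does before an AI agent may execute
    a transaction: block, escalate to a human, or allow."""
    if txn.amount > HARD_LIMIT:
        return "block"
    if txn.amount > APPROVAL_TRIGGER:
        return "require_human_approval"
    return "allow"

print(check(Transaction(2_500_000, "acme")))  # block
print(check(Transaction(75_000, "acme")))     # require_human_approval
print(check(Transaction(900, "acme")))        # allow
```

The design point is that the limits live outside the model: even a misaligned or compromised agent cannot raise its own thresholds.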
While technical in nature, these risks can amplify credit risk if present in credit decision processes, and may increase operational, financial, and reputational risks if not well managed.

External factors: Greater downside risks

Numerous external factors stemming from AI developments could pose opportunities and risks to banks. Entities' preparedness for those will vary across jurisdictions due to government strategies, investment capacity, regulatory approaches, geopolitical priorities, and public-private partnerships. However, the following factors are relevant regardless of a bank's location.

Fintech and Big Tech challengers

Fintechs, benefitting from modern cloud-native infrastructure, can rapidly deploy AI solutions. We anticipate disruption in areas such as payments, credit for underserved populations (through the use of alternative behavioural data), and trading and investment. Banks, which are often limited in their independent AI development capabilities, are partnering with tech giants to meet market demands for innovation and scale. The intersection of AI and crypto assets is also a theme to monitor, as developers combine AI innovation with blockchain-based solutions.

Third-party supply chain risk

Over-reliance on vendors for cloud computing, AI components, and AI models creates single points of failure with potentially widespread consequences for financial systems. Amazon Web Services' (AWS) cloud outage on October 20 was an example of operational disruption across industries. About 73% of financial services companies perform inference within a hyperscaler's public cloud (e.g., Google, AWS, Azure), according to S&P Global Market Intelligence 451 Research's AI and machine learning survey referenced earlier. Banks are adopting multi-vendor and multi-cloud strategies to mitigate these risks.
Yet most lack the skills and resources to develop proprietary generative AI models, leaving them exposed to operational risk through their reliance on third-party private and open-source models. In Europe, 56% of banks used third-party models via cloud services, while only 18% developed their own, according to a study by the European Banking Authority (see "European Banking Authority Risk Assessment Report," EBA, Nov. 2024). That dependence makes robust vendor risk management, contingency planning, and AI model audits crucial to ensuring operational resilience.

Cyber risks amplified by generative AI and agentic AI

Deepfakes, voice cloning, and hyper-personalized phishing pose reputational and financial risks for banks. Advanced techniques such as prompt injection and automated exploitation of AI model vulnerabilities increase security and operational risks. AI agents' ability to coordinate complex attacks across systems and companies amplifies the risk of contagion and could disrupt critical financial infrastructure, contributing to financial instability.

Banking sector risks amplified by AI agents

- Exploitation of legal loopholes: AI agents could autonomously identify and exploit market and regulatory gaps to facilitate insider trading or money laundering, potentially exposing banks to regulatory and financial penalties.
- Herding behaviour: Agents could behave similarly, leading to concentrated trading, correlated decisions, or amplified exposures, ultimately threatening financial stability.
- Human-machine misalignment: Agents could optimize goals in ways that disregard human values or ignore unintended consequences, potentially resulting in compounded transaction errors, privacy violations, liquidity crises, and market instability.

Contributors: Paul Whitfield and Cat VanVliet