Blog — S&P Global Sustainable1 — 24 April, 2026
How agentic AI is changing what's possible for banks, investors and corporates, and what it takes to scale workflow efficiency responsibly.
By Harry Tong
This content is provided for informational purposes only. It does not constitute investment advice, research, a recommendation, an offer, or a solicitation to buy or sell any securities or financial instruments, nor does it provide advice on the suitability of any investment or transaction. Any examples or references to workflows, outputs, or use cases are illustrative only. Users should exercise their own judgment and seek independent professional advice as appropriate.
Over the past year, the capabilities of large language models (LLMs) have accelerated so quickly that we've moved from AI that helps people do their existing jobs faster, to AI that acts as a collaborator (i.e. agentic AI). In sustainable finance, a domain defined by unstructured data, evolving regulation, and complex multi-stakeholder decisions, the shift to agentic AI is transformational.
At S&P Global, and within Sustainable1 specifically, we are building AI agents to help our clients complete time-consuming but important tasks in their everyday workflows. We started four months ago with the origination workflow for debt capital markets banking, with an agent that helps users assess a company's decarbonization plans, benchmark them against industry peers, and surface possible green capital raising risks and opportunities for deal consideration.
By combining different personas, workflows and datasets from across S&P Global within AI agents, we can help solve increasingly difficult problems for our clients. As we scale our AI capabilities, it is important that we are fully transparent about what is happening inside the box and the benefits that can be realized. As previous Sustainable1 research has shown, strong AI governance is not a given.
At S&P Global Sustainable1, our data is backed by The Quality Imperative and informed by the Climate Center of Excellence. Before an agent is released to our clients, it undergoes testing and cross-referencing with teams across our technology, data, research, product and commercial organizations, all of which helps shape our AI governance process.
To understand where we are now, it helps to trace the path we've taken over the past 12 months.
We started with what you might call AI pipelines: straightforward, deterministic workflows, augmented by AI models for very specific tasks such as extracting data or producing the summary or sentiment of a long article. At Sustainable1, this approach has been leveraged in our human-in-the-loop processes. Documents can be pre-processed to highlight and extract relevant data points, fueling the insights available in our desktop and feed products. For example, sentiment analysis allows us to flag companies that may have been involved in controversial events, so our analysts can review what this may mean for a company's sustainability profile.
From there, a natural question emerged: if this approach worked within our own operations, what could it mean for our clients? We moved to AI agents that follow a framework called "ReAct" (reason and act): the agent reasons about a problem, then decides what action to take next. Rather than following a fixed script, the agent has flexibility in how it approaches a task for an end user.
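The ReAct pattern can be sketched in a few lines. In the sketch below, the tool names and the trivial "decide" policy are hypothetical stand-ins for an LLM-driven agent; this illustrates only the reason-then-act loop, not our implementation.

```python
# Minimal sketch of a ReAct-style loop: the agent alternates between
# reasoning (deciding what to do next) and acting (calling a tool),
# until it judges it has enough information to finish.
from dataclasses import dataclass, field

@dataclass
class ReActAgent:
    tools: dict                      # tool name -> callable
    max_steps: int = 5
    trace: list = field(default_factory=list)

    def decide(self, task, observations):
        # Stand-in for an LLM call: pick the first tool not yet used,
        # and finish once every tool has contributed an observation.
        for name in self.tools:
            if name not in observations:
                return ("act", name)
        return ("finish", None)

    def run(self, task):
        observations = {}
        for _ in range(self.max_steps):
            thought, tool_name = self.decide(task, observations)
            self.trace.append((thought, tool_name))
            if thought == "finish":
                return observations
            observations[tool_name] = self.tools[tool_name](task)
        return observations

# Hypothetical tools; real ones would query datasets or services.
agent = ReActAgent(tools={
    "find_company": lambda task: {"id": "ACME-001"},
    "fetch_esg_scores": lambda task: {"esg_score": 61},
})
result = agent.run("Benchmark Acme Corp's decarbonization plan")
```

The trace makes the agent's reasoning path inspectable after the fact, which matters for the governance processes described above.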
S&P Global Sustainable1’s Transition Finance Agent uses seven of our large datasets, across ESG scores, Environmental, Transition Risk, Business Involvement Screens and Regulatory insights, to efficiently arrive at pitch deck content for a banker to use as part of their strategy to start a conversation with a CFO or an investor.
How does the agent go from data to recommendations? In this space and in future blogs, we’ll describe the concepts that explain how an AI agent balances creativity and analytical processing to deliver a trustworthy work artifact.
Before we get into the technology, we want to stress something that's easy to overlook in the excitement around AI: the workflow comes first. The best AI product in the world is useless if it doesn't fit how people work.
With that in mind: when people hear "AI product," they tend to think about the model, the LLM. But the model is just one component. Building trustworthy AI products, including for use in financial services, requires a deliberate architecture that we view in three layers.
There is no AI strategy without a data strategy. Our goal is to turn unstructured data, from sustainability reports, regulatory filings and news articles to satellite imagery, across dozens of formats and languages, into structured, reliable intelligence. If the data going into an AI system is incomplete or unreliable, downstream processes offer limited remediation. The more relevant context you can give an agent on a task, the better its output will likely be.
The AI agents we’ve described can connect to data using something called the Model Context Protocol (MCP). Think of it as a universal adaptor, to connect to almost any data source or action via something called an MCP “tool.” For example, we can have a tool to find the right company, and another to find the relevant data points.
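The real Model Context Protocol is a JSON-RPC-based standard with official SDKs; the sketch below only illustrates the underlying idea of a uniform adaptor, a registry of named, described tools an agent can discover and chain. All tool names and data are hypothetical.

```python
# Conceptual sketch of MCP-style "tools": each tool is a named function
# with a description the agent can discover, then call like any other.
TOOLS = {}

def tool(name, description):
    """Register a function as a discoverable tool."""
    def register(fn):
        TOOLS[name] = {"description": description, "call": fn}
        return fn
    return register

@tool("find_company", "Resolve a company name to an identifier")
def find_company(query: str) -> dict:
    companies = {"acme corp": "ACME-001"}   # stand-in for a real lookup
    return {"company_id": companies.get(query.lower())}

@tool("get_data_points", "Fetch data points for a company identifier")
def get_data_points(company_id: str) -> dict:
    data = {"ACME-001": {"scope1_emissions_t": 12500}}  # illustrative
    return data.get(company_id, {})

# An agent first lists what is available, then chains the tools:
listing = {name: meta["description"] for name, meta in TOOLS.items()}
cid = TOOLS["find_company"]["call"]("Acme Corp")["company_id"]
points = TOOLS["get_data_points"]["call"](cid)
```

Because the agent sees only names and descriptions, the same pattern works whether the tool wraps a database, an API, or a calculation.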
Why does this matter? Because this layer is where we balance probabilistic and deterministic outputs. When a large language model generates text, it's making probabilistic predictions: choosing the most likely next word based on patterns it learned during training. That's useful, but it's also inherently uncertain. This uncertainty from the probabilistic approach can be controlled through something called a system prompt that tells the agent how to behave. In this way, the agent can help with specific tasks for a fixed income investment banker, tailoring the output to the task at hand.
Deterministic systems, by contrast, always produce the same output for the same input: a database query, a calculation, a rule-based check. Within our agents, we reference the data points in-line, which can be verified in the underlying data extract and ultimately in the company’s disclosure and our methodologies. The agent reasons with this data to arrive at suggested conclusions. The MCP tools allow the agent to understand the data sources and ground outputs in verified data.
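The in-line referencing described above can be sketched as a simple invariant: the narrative may only cite values that exist in the deterministically retrieved records, each of which carries a source reference. The metrics, values and sources below are illustrative, not real data.

```python
# Sketch of deterministic grounding: every data point carries a source
# reference, and a claim without a backing record fails loudly instead
# of being invented by the model.
def fetch_records():
    # Deterministic: the same query always yields the same records.
    return [
        {"metric": "scope1_emissions_t", "value": 12500,
         "source": "Company disclosure 2024, p. 41"},
        {"metric": "esg_score", "value": 61,
         "source": "Illustrative ESG dataset"},
    ]

def cite(records, metric):
    """Return a value with its in-line source reference, or fail."""
    for r in records:
        if r["metric"] == metric:
            return f'{r["value"]} [{r["source"]}]'
    raise KeyError(f"metric {metric!r} not in retrieved data; cannot cite")

records = fetch_records()
line = f"Scope 1 emissions: {cite(records, 'scope1_emissions_t')}"
```

The probabilistic model still writes the surrounding prose, but every figure it quotes is traceable back to a verified record.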
This layer also allows our clients to build their own agents on top of our data and tools, extending the platform in ways that are specific to their own workflows and questions.
There's a tendency to assume that a chat window is the user interface for AI. Chat is one way to interact with an agent, but the real value is in the deliverable: the sustainable investment analysis, the disclosure report, the output that helps bankers in a sustainable finance meeting. Increasingly, we see that many of the larger institutions we work with are developing their own AI systems in-house. They don't just want to consume our agents; they want their agents to work alongside ours. This is where protocols like Agent-to-Agent (A2A) come in, allowing agents built by different organizations to securely and efficiently collaborate. The future is many agents, built by different teams, working together.
This multi-agent system approach is the next major development. Aside from agents communicating across companies, specialized agents within organizations can collaborate on a task. One agent might help clients with regulatory frameworks, another with financial analysis, and a third in helping analysts synthesize findings into a coherent narrative.
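The division of labor above can be sketched as an orchestrator that routes sub-tasks to specialist agents and hands their findings to a synthesizer. The agent roles mirror the ones just described; the one-line bodies are illustrative placeholders for full agents.

```python
# Sketch of specialized agents collaborating on a single task.
def regulatory_agent(company: str) -> str:
    return f"{company} falls under illustrative disclosure rules."

def financial_agent(company: str) -> str:
    return f"{company} shows an illustrative transition-capex gap."

def synthesis_agent(findings: list) -> str:
    # Combine specialist findings into one coherent narrative.
    return " ".join(findings)

def orchestrate(company: str) -> str:
    findings = [regulatory_agent(company), financial_agent(company)]
    return synthesis_agent(findings)

narrative = orchestrate("Acme Corp")
```

In a cross-organization setting, a protocol such as A2A would sit between these calls, so each "function" could be an agent operated by a different firm.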
Where do we go with this next?
Watch this space for future articles on what agentic AI means for different roles within banks, investors and corporates.
Find out more about our AI Agents here.
Disclaimer:
S&P Global and its affiliates disclaim all responsibility and liability for any decisions made or any harm, damage, or losses arising from reliance on or use of any content for any purpose or end-use, including in connection with beta versions and features, all of which are provided "as is" and "as available" and used entirely at your own risk. By using the generated content, you accept these terms. Some or all of the content made available to you pursuant to S1 AI Agents was generated by an AI tool in accordance with our Terms of Use. It may therefore contain errors, omissions, inaccuracies, hallucinations, biases, inconsistencies, or outdated information. AI-generated content is provided solely for informational purposes and should not be considered a substitute for human-generated content. Additionally, it is expressly understood that the product, its use, its data, and any information it generates do not constitute, and are not a substitute for, professional consulting advice. AI-generated content must be independently reviewed, verified, and approved through standard compliance procedures including compliance with applicable laws. Such content does not constitute investment advice and should not be relied upon or treated as a substitute in any regulatory or decision-making process, including those involving securities, financial products, or related determinations. The AI tool does not create or modify any ESG ratings, scores, or benchmarks, including in connection with any S&P Sustainable1 data or products, and any references to any ratings, scores or benchmarks are for informational display only and must not be construed as advice or endorsement.