Highlights

  • The S&P Global Corporate Sustainability Assessment (CSA) includes questions about companies’ use of AI to measure or improve sustainability performance and what policies companies have enacted to govern AI use. 
  • Of the companies that responded to these questions, just over one-third (36%) have a dedicated AI policy or an AI policy integrated into other governance policies. Dedicated AI policies most often cover privacy issues and rarely address issues around identification of AI-generated content. 
  • Similarly, 29% of responding companies say they use AI to help them improve on sustainability issues such as energy efficiency, resource management or product quality. The CSA data shows that large companies are embracing these AI use cases much more than small companies.
  • While anecdotal examples of AI usage are widespread, few companies (21%) say they quantify the impact of their AI initiatives on sustainability goals.

Authors:

Svenja Hüsing | Senior Manager, Sustainability Research, S&P Global Sustainable1
Anders Almtoft | Manager, Sustainability Research, S&P Global Sustainable1


Contributors: 

Nicola Ballerini | Analyst, Sustainability Research, S&P Global Sustainable1
Salome Balderrama | Analyst, Sustainability Research, S&P Global Sustainable1
Artur Krasowski | Associate Analyst, Sustainability Research, S&P Global Sustainable1
Charlotte Gueye | Specialist, Sustainability Research, S&P Global Sustainable1
Matt MacFarland | Editor, Thought Leadership, S&P Global Sustainable1

Companies across sectors are increasingly embedding AI into their systems and products, describing it as a transformative force for efficiency, innovation and decision-making. The rapid deployment of the technology, whether in the form of machine learning or generative AI, has also revealed the environmental and ethical pitfalls that need to be addressed to ensure AI has a positive impact on business and society. 

The pros and cons are complex. On the environmental side, AI has the potential to radically improve energy efficiency and resource use, and if applied at scale to carbon-intensive industries, it could help curb greenhouse gas emissions. However, the environmental costs of AI data centers have been well documented, from driving higher emissions to straining local supplies of freshwater. The production and disposal of AI hardware can also worsen electronic waste issues.

AI’s social and ethical effects are not fully understood, S&P Global’s 451 Research recently concluded. While AI has the potential to enhance human cognition and creativity, it can perpetuate existing biases and inequalities because it is trained on historical data. This can lead to discrimination in critical areas such as hiring, lending and law enforcement. Additionally, the displacement of jobs due to automation could threaten economic stability for many workers. AI developers also face criticism around the inaccuracy of generated results. Large language models can produce “hallucinations” — responses that are incorrect or even nonsensical but presented with high confidence.

Addressing these problems is a key challenge for companies using AI internally or to enhance their offerings to the market — particularly those companies using the technology to improve their environmental, social or governance performance. To evaluate how companies are using AI for sustainability and what kind of governance they are implementing, the 2024 S&P Global Corporate Sustainability Assessment (CSA) asked two voluntary questions about whether companies have an AI governance policy in place and whether AI is utilized to measure or improve sustainability performance. From the full CSA universe of about 13,000 companies, 1,249 responded about AI governance policies and 1,578 responded about using AI for sustainability performance.

We find that about half (48%) of firms responding about AI governance do not have a dedicated AI policy or one integrated into other policies. Policies at the companies that do have them focus on data privacy and rarely cover issues of bias avoidance or identifying AI-generated content. This trend indicates that companies are embracing AI’s potential to have a positive impact on sustainability issues in business but are not emphasizing the governance needed to limit risk.

AI policies in business

AI governance is a structured discipline that enables organizations to adopt artificial intelligence ethically and responsibly. It ensures compliance with regulations covering AI technology, helps manage AI-related risks, provides documentation and transparency, continuously monitors AI models and aligns AI practices with societal values and human rights. These priorities are exemplified by regulatory frameworks such as the EU AI Act, adopted in June 2024.

Corporate AI governance can take the form of a standalone policy or can be part of broader rules around issues such as cybersecurity or privacy. The CSA defines an AI policy as a dedicated policy or commitment with the purpose of managing AI-related risks and opportunities and the governance system the company has implemented. It should cover at least one of four main areas: data privacy, cybersecurity, mitigation of potential biases and identification of AI-generated content.

From the 2024 CSA universe of about 13,000 companies, 1,249 responded to the voluntary question on AI policies. Of this smaller group of respondents, 29% reported having a dedicated AI policy in place, with another 7% integrating AI rules into other policies. Another 16% of companies said an AI policy would be established within the next two years.

The communication services sector stands out in our analysis: Only 16% of companies have no policy, while 59% have a dedicated AI policy. The tech companies included in this sector rely on data-driven services and customer interaction, making AI governance particularly important. This sector also includes some of the firms spearheading AI development. This direct experience with AI may also create familiarity with the technology’s potential risks — especially around technical issues such as results accuracy, training data biases or hallucinations — and could lead to better adoption of governance.

This contrasts with the two sectors with the highest number of respondents to the CSA’s question on AI policies: financials (207 companies) and industrials (213). Financial institutions have found many use cases for AI, from risk management and fraud detection to consumer-facing chatbots and investment analysis. Yet only about one-quarter of financial firms (27%) have a dedicated AI policy governing the technology’s use. The industrials sector, which includes carbon-heavy activities like steel and cement production, could be one of the largest beneficiaries of AI-driven improvements in efficiency and waste reduction. Only 20% of companies in this sector have an AI policy in place, but industrials is also the sector with the highest share of companies (28%) that say a policy will be implemented within two years.

Key aspects of AI policies

The CSA asks companies with AI policies if they address any of four main topics that together constitute a robust policy: data privacy, cybersecurity, mitigation of potential biases and rules around identifying AI-generated content. These components are also reflected in the OECD AI Principles and are considered best practice in the field.

In 10 out of 11 sectors, 75% or more of companies with an AI policy address data privacy. This level of adoption contrasts with the comparatively low number of companies seeking to address the identification of AI-generated content. Policies are also less likely to include governance related to bias avoidance and cybersecurity. The financials sector stands out: 75% of companies in this sector with AI policies cover bias avoidance in those policies. Banks and other lending institutions have operated for decades under regulations such as the US Fair Housing Act that seek to stop discriminatory lending, making bias avoidance a familiar goal for these companies.

Key elements of AI policies

Data privacy concerns often arise from issues related to data collection, cybersecurity, model design and governance, according to IBM. These concerns encompass risks such as the collection of sensitive data, data gathering without consent, unauthorized use of data, unchecked surveillance and bias, and data exfiltration and leakage. Companies address these concerns by implementing risk-based approaches; policies can also include privacy-preserving techniques and a commitment to comply with privacy regulations.

Cybersecurity is a pressing issue as attacks become more complex and more frequent. The integration of AI into business processes makes cybersecurity even more important, as cyberattacks can compromise the overall stability of AI systems. Companies using AI are responsible for mitigating any cyber vulnerabilities of their AI systems. 

Bias avoidance is important to ensure that AI output does not lead to unfair discrimination. Companies need to identify and reduce this risk by researching and testing systems for bias, as well as by investing in effective mitigation measures.

Identification of AI-generated content is a major challenge for many industries trying to clearly distinguish between AI-generated and user-generated content. The issue will continue to arise as AI-generated content becomes more common. Policies address this issue by implementing methods for content provenance and authentication, such as watermarking or disclaimers that allow users to recognize AI-created content.

Adoption of AI for sustainability goals

Of the 1,578 companies that provided insights into their use of AI, 29% said they use AI to measure or enhance performance across various sustainability dimensions. The sectors with the highest share of companies leveraging AI include communication services (43%), utilities (40%) and consumer discretionary (38%).

Regionally, 35% of companies in both Asia-Pacific and Latin America report AI initiatives for sustainability performance purposes, a higher share than in Europe (29%), North America (18%) and Africa (16%).

Adoption also differs significantly by market capitalization. Nearly half (49%) of large-cap companies that responded to this question are undertaking AI initiatives to improve sustainability performance, versus only 26% of small-cap firms. The average market capitalization of adopting companies is about $29 billion. Larger firms may be better positioned to spend on AI projects, and in some markets, they may be under more investor pressure to improve on sustainability issues like emissions.

Companies are using AI to improve sustainability performance in a variety of ways. The most common use cases are improving energy consumption (36% of companies), customer relations (31%) and product quality (26%).

Analysis of this data also shows that companies are more inclined to implement AI initiatives addressing topics considered material for their sector. For example, 70% of real estate firms are using AI to optimize energy consumption, while 50% of financial firms have adopted it for risk management. In the materials sector, 51% of companies leverage AI to enhance occupational health and safety — a notably higher proportion than in other sectors.

While adoption of AI is growing and companies are applying the technology across many topics, quantification of its impact is still rare. The 2024 CSA question on AI use asked companies whether they capture and quantify the effects of implementing AI. Only 21% of the companies that leverage AI are measuring its impact. The companies that do so most often quantify the impact on environmental goals such as energy use.

While systematic measurement of AI initiatives’ impact on sustainability is still uncommon, many companies provided anecdotal evidence of AI use cases to the CSA. These examples across environmental, social and governance-related uses range from optimization of energy and water resources to health and safety monitoring to fraud detection, to name a few common themes.

Looking forward

The integration of AI into business practices presents both opportunities and challenges. As AI continues to evolve, its potential to drive efficiency and innovation is undeniable. It also poses significant environmental and social risks that must be addressed. Growing recognition of these challenges is evident in the finding that 16% of the companies that responded to the CSA questions about AI governance plan to implement AI policies within the next two years. These policies are a crucial first step for ensuring ethical AI deployment that safeguards data privacy, mitigates biases and enhances cybersecurity. We also find that while companies are using AI to improve sustainability practices and performance on topics like carbon emissions, water usage, energy efficiency and employee safety, few can quantify the impact of their AI initiatives. Taken together, these findings show that while excitement around the rapid adoption of AI continues to grow across the business world, governance of the technology and corporate collection of data showing its benefits remain underdeveloped. 

This content may be AI-assisted and is composed, reviewed, edited, and approved by S&P Global.

Learn more about the Corporate Sustainability Assessment and S&P Global ESG Raw Data