

2021 AI and machine learning outlook

Highlights

COVID-19 is driving investment in AI, with 86% of organizations agreeing that the pandemic has caused or will cause their organization to invest in new AI initiatives.

Skills shortages in AI are still real, but they are becoming more diverse as AI becomes more deeply embedded in organizations. Companies are looking for roles such as cloud architect, as well as data scientist.

Large language models will drive innovation in natural language processing, while AI chips will ship in ever greater numbers and may drive AI-related M&A in 2021.

Introduction

AI and machine learning may still be hot and top of mind for technology decision-makers, line-of-business staff and investors, but that didn't prevent 2020 from broadsiding some AI initiatives. In our Voice of the Enterprise: AI & Machine Learning, Infrastructure 2020 survey published in July, 58% of organizations surveyed expected COVID-19 to have a negative impact on their existing AI initiatives, and 19% said the pandemic had led them to stop work on these projects. But 75% of organizations said COVID-19 had led to new AI initiatives.

In our just-published Voice of the Enterprise: AI & Machine Learning, Use Cases 2021 survey, the picture has changed and looks more optimistic: 86% of respondents agree that the pandemic has caused or will cause their organization to invest in new AI initiatives. With pandemic-induced uncertainty still looming over us all, this report looks at some of the key AI trends we expect to see in 2021.

Skills shortages are no longer just about data scientists

As machine learning becomes a core part of software development, the kinds of roles that have evolved over the past half-century of writing code will become key hiring requirements for organizations wanting to roll their own machine learning. The roles won't all be precisely the same as those in 'traditional' software development, but we can see trends in our regular surveys of end users.

The fabled shortage of data scientists is still real, but it is not the only place organizations are looking. In our inaugural Voice of the Enterprise AI & Machine Learning survey conducted in summer 2018, lack of skilled resources was cited by 36% of respondents as the most significant barrier (with accessing and preparing data second at 16%), and data scientists were by far the most sought-after role.

In our latest survey, cloud architects came out on top at 27%, followed by machine learning engineers, software engineers and then data scientists, chosen by 23% of respondents. Data scientists are still in short supply, but the multidisciplinary nature of AI and ML is now reflected in the variety of roles companies are recruiting for: a mix of ML skills, general IT skills and an understanding of particular domains of the business.

Large language models look set to change the conversation

Language models are pretrained models that aim to predict the probability of a sequence of words, and to generate text accordingly. A step change occurred with the introduction of Transformer language models by Google in 2017; these consider all the words in a sentence at once, rather than sequentially, and can process much longer strings of information. Google released Bidirectional Encoder Representations from Transformers (BERT) in 2018 and implemented the technology in its web search engine in 2019. BERT has 340 million parameters: the variables that are determined by training data.
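To make the core idea concrete – predicting the probability of the next word from the words before it – here is a deliberately tiny sketch using bigram counts. This is an illustration of the statistical principle only, not of how Transformer models like BERT or GPT-3 work; the corpus and function names are our own invention.

```python
from collections import Counter, defaultdict

# Toy corpus; a real language model trains on billions of words.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigrams: how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    following[prev][word] += 1

def next_word_probs(prev):
    """Estimate P(word | prev) from the bigram counts."""
    counts = following[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # 'cat' is the likeliest continuation
```

A Transformer replaces these simple counts with learned parameters that weigh every word in the context at once, which is why parameter counts in the billions translate into much better predictions.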

Microsoft launched its 17-billion-parameter Turing-NLG model in February 2020, and OpenAI launched GPT-3, sporting 175 billion parameters, in June (GPT-2 in 2019 had 1.5 billion). Size matters here because the greater the number of parameters, the better the model is at interpreting information in context, even with limited training in that particular context. But these models are huge and expensive to build and run, so making them workable in terms of compute resources and energy consumption will be a challenge.

As for use cases, it's early days, but candidates include advanced chatbots, writing summaries of complex research papers or news articles, content moderation, generating software code from a problem description, automating more of the e-discovery process, and maybe one day cracking the nut of truly scalable enterprise search – ultimately enabling workers to find information in context for any type of query, no matter how complex.

Aspects of these models are controversial because of their ability to generate text that could be anything, including fake history or racist and otherwise abusive text, depending on the data on which they're trained (as long as that data is in English – the lack of multilingualism being another issue to be resolved). This led Microsoft and its partner OpenAI to restrict access to their latest models to approved use cases only. We expect this trend to continue in 2021, but with even more massive language models and the revelation of potential enterprise use cases.

AI chips will ship and be shopped

The AI chip story will likely be dominated in 2021 by NVIDIA's ongoing acquisition of ARM, which may be completed in late 2021, or in 2022, or never, depending on the opinions voiced by competition regulators around the world. However, there are other more AI-focused chip vendors out there at various stages of maturity. Some that we profiled in detail in early 2019 have been around for four or five years now and will be shipping product, and hopefully (for them) securing some major partnerships in 2021.

And Intel, with a new CEO who has deep roots in the company, will be hoping to build on the Habana acquisition of late 2019, and secure more funds for its second bite at the AI-chip cherry. Also, given the sheer number of accelerator chip companies around – more than 40 at last count – we expect some consolidation. Intel is the usual front-runner in that regard, but its record has been patchy and the new CEO may want to take a different tack – or double down. The large cloud platforms could also reach for startups that offer unique functionality or reach some sort of performance benchmark.

As mentioned in the 2021 preview of the Data, AI & Analytics channel, pretrained large language models have the potential to provide a step change in our ability to interpret and generate language. The potential use cases are vast, but challenges around who has access to such models will exist for some time, both in terms of the massive compute resources required to run them and the as-yet-uncertain governance frameworks they might need.

Manufacturing AI moves beyond its early-adopter phase

It's hard to pick out just one industry for attention, but manufacturing looks set to play a prominent role in AI adoption in 2021, judging not just by the momentum we observed in the market in 2020, but also by the attention it is now getting from prominent vendors, after startups made the running exclusively over the past couple of years. The desire to automate parts of the manufacturing process has accelerated.

Our Voice of the Enterprise: Digital Pulse, Vendor Evaluations 2020 survey, published in October 2020, saw 62% of manufacturers surveyed cite AI/ML as the most transformational technology over the next two years, belying the sector's previous reputation for being behind the curve when it comes to leading-edge technologies (with the notable exception of semiconductor manufacturing, among a few others).

Key use cases identified in our new Voice of the Enterprise: AI & Machine Learning, Use Cases 2021 survey are led by quality assurance, digital/data security, inventory monitoring and predictive or condition-based maintenance. Looking two years into the future, assembly line creation and optimization, and the very on-message employee safety requirements (including social distancing), join the top four.

Technology is becoming a regulated industry

Technology is becoming a regulated industry, and the largest social media companies will begin hiring compliance teams in earnest. Although these are not the kinds of AI companies we follow, the travails of Twitter, Facebook, Parler and others during recent weeks of political turmoil in the US have highlighted the influence of these firms. And if politicians weren't thinking about regulation before, we bet a lot more are now.

Then there are the back-end providers that also took action – less visible and not so free-speech-related, but just as consequential – including AWS, Salesforce and Stripe, which withdrew their services from certain Trump-related entities. Of course, some of these companies would welcome regulation: it would help define the lines of engagement and take some of the incredibly difficult decisions they have been making out of their hands. One result will be widespread hiring of compliance personnel by such companies.

Another factor to consider when thinking about technology (or at least the software part of it) and regulation is how it differs from more tangible goods like oil or food: in most cases, a software product can instantly be used in a country other than the one in which it was created. The focus on regulation therefore needs to extend beyond your own borders, assuming you have more than parochial ambitions.

National AI strategies have real money attached to them

National AI strategies have been dribbling out over the past few years. But now that serious money is being committed to government-sponsored AI projects, we expect more strategies and frameworks to be put in place. In the US, for example, a new defense bill (the National Defense Authorization Act for Fiscal Year 2021) became law at the start of the year and earmarked $6.4bn of government funding for AI initiatives.

Included in it are the creation of a National Artificial Intelligence Initiative Office, led by the White House and aimed at ensuring AI technologies acquired by the Pentagon are ethically sound, and a requirement for the National Institute of Standards and Technology (NIST) to develop an AI Risk Management Framework. With that sort of money sloshing around, and with a new administration that is a bit more detail-focused than the previous one coming to power this month, we expect more governance to be put in place.

In the UK – now formally out of the European Union – the UK AI Council, a non-statutory expert committee of independent advisers set up in 2019, has published its AI Roadmap to advise the UK government and 'put the UK at the forefront of the artificial intelligence and data revolution.' More details can be found in our report.

China began developing its national AI strategy back in 2017, when the State Council of China released the New Generation Artificial Intelligence Development Plan. The aim is to become the 'world's dominant AI innovation center' by 2030, with the Ministry of Science and Technology (MOST) responsible for the technical implementation side. With US-China relations severely strained and not looking to change drastically any time soon, we expect the Chinese government to keep its foot to the floor in pursuit of its goals in 2021.
