AI adoption has skyrocketed over the last 18 months. Beyond Joe McKendrick, who made that case in a foundational HBR piece, professionals who work in AI would readily attest to it. Google search seems to be in on this not-so-secret too: when prompted with “AI adoption,” its autocomplete spits out “skyrocketed over the last 18 months.”
Both anecdotal evidence and the surveys we are aware of point in the same direction. Case in point: O’Reilly’s AI Adoption in the Enterprise 2021 survey, conducted in early 2021, drew three times as many responses as its 2020 edition and found that company culture is no longer the most significant barrier to adoption.
In other words, more people are working with AI, it’s now being taken seriously, and maturity is increasing. That’s all good news. It means AI is no longer a game that researchers play — it’s becoming applied, taking center stage for the likes of Microsoft and Amazon and beyond.
The following examines the pillars we expect applied AI to build on in 2022.
Typically, when discussing AI, people think about models and data, and for good reason. Those are the parts most practitioners feel they can exert some control over, while hardware remains mostly unseen and its capabilities are taken as fixed. But is that the case?
So-called AI chips, a new generation of hardware designed to optimally run AI-related workloads, are seeing explosive growth and innovation. Cloud mainstays such as Google and Amazon are building new AI chips for their datacenters — TPU and Trainium, respectively. Nvidia has been dominating this market and built an empire around its hardware and software ecosystem.
Intel is looking to catch up, be it via acquisitions or its own R&D. Arm’s status remains somewhat unclear, with the announced acquisition by Nvidia facing regulatory scrutiny. In addition, we have a slew of new players at different stages in their journey to adoption, some of which — like Graphcore and SambaNova — have already reached unicorn status.
What this means for applied AI is that choosing where to run AI workloads no longer means just deciding between Intel CPUs and Nvidia GPUs. There are now many parameters to consider, and that development matters not just for machine learning engineers but for all AI practitioners and users: when AI workloads run more economically and effectively, resources are freed up for use elsewhere, and time to market shrinks.
Selecting what hardware to run AI workloads on can be thought of as part of the end-to-end process of AI model development and deployment, called MLOps — the art and science of bringing machine learning to production. To draw the connection with AI chips, standards and projects such as ONNX and Apache TVM can help bridge the gap and alleviate the tedious process of machine learning model deployment on various targets.
With lessons learned from operationalizing AI in 2021, the emphasis has shifted from shiny new models to more mundane but practical aspects, such as data quality and data pipeline management, all of which are important parts of MLOps. Like any discipline, MLOps sees many products on the market, each focusing on different facets.
Some products are more focused on data, others on data pipelines, and some cover both. Some monitor and observe model inputs and outputs, tracking metrics such as drift, loss, precision, and recall. Others do similar, yet different, things around data pipelines.
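To make the drift monitoring mentioned above concrete, here is a hedged illustration using the Population Stability Index (PSI), a common way to compare a feature’s training distribution against live data. The implementation and thresholds are conventional rules of thumb, not taken from any specific product in the article.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference (training)
    sample and a live sample of the same feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    eps = 1e-6  # avoid log(0) on empty bins
    e_pct, a_pct = e_pct + eps, a_pct + eps
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)    # feature at training time
stable = rng.normal(0.0, 1.0, 10_000)   # live data, same distribution
shifted = rng.normal(0.8, 1.0, 10_000)  # live data after drift

print(psi(train, stable))   # small: distribution unchanged
print(psi(train, shifted))  # large: flag for investigation
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.25 as significant drift worth investigating.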
Data-centric products cater to the needs of data scientists and data science leads, and maybe also machine learning engineers and data analysts. Data pipeline-centric products are more oriented towards DataOps engineers.
In 2021, people tried to give names to various phenomena pertaining to MLOps, slice and dice the MLOps domain, apply data version control and continuous machine learning, and execute the equivalent of test-driven development for data, among other things.
What we see as the most profound shift, however, is the emphasis on so-called data-centric AI. Prominent AI thought leaders and practitioners such as Andrew Ng and Chris Ré have discussed this notion, which is surprisingly simple at its core.
We have now reached a point where machine learning models are sufficiently developed and work well in practice. So much so, in fact, that there is not much point in focusing efforts on developing new models from scratch or fine-tuning to perfection. What AI practitioners should be doing instead, according to the data-centric view, is focusing on their data: Cleaning, refining, validating, and enriching data can go a long way towards improving AI project outcomes.
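A minimal sketch of that data-centric loop, using pandas; the columns and rules here are hypothetical, purely to illustrate cleaning and validation rather than any particular tool:

```python
import pandas as pd

# Hypothetical labeled dataset with common quality problems:
# an impossible age, a missing value, a duplicate row, inconsistent casing.
df = pd.DataFrame({
    "age":   [34, 29, -1, 29, 41, None],
    "label": ["spam", "ham", "spam", "ham", "SPAM", "ham"],
})

df["label"] = df["label"].str.lower()   # normalize label casing
df = df.drop_duplicates()               # drop exact duplicate rows
df = df[df["age"].between(0, 120)]      # enforce a sanity range (drops NaN too)

print(df)  # three clean rows remain
```

Small, systematic passes like this, applied before any model tuning, are the kind of effort the data-centric view argues pays off most.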
Large language models (LLMs) may not be the first thing that comes to mind when discussing applied AI. However, people in the know believe that LLMs can internalize basic forms of language, whether it’s biology, chemistry, or human language, and we’re about to see unusual applications of LLMs grow.
To back those claims, it’s worth mentioning that we are already seeing an ecosystem of sorts being built around LLMs, mostly around the GPT-3 API made commercially available by OpenAI in conjunction with Microsoft. This ecosystem consists mostly of companies offering copywriting services, such as marketing copy, emails, and LinkedIn messages. They may not have set the market on fire yet, but it’s only the beginning.
We think LLMs will see increased adoption and lead to innovative products in 2022 in a number of ways: through more options for customization of LLMs like GPT-3; through more options for building LLMs, such as Nvidia’s NeMo Megatron; and through LLMs-as-a-service offerings, such as the one from SambaNova.
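Much of the copywriting ecosystem described above boils down to prompt engineering around an LLM API: assembling a task description and a few worked examples, then letting the model complete the pattern. The sketch below builds such a few-shot prompt; the template and examples are hypothetical, and the actual API call is omitted since it requires an account and key.

```python
def build_fewshot_prompt(task, examples, query):
    """Assemble a few-shot prompt: a task description, worked
    examples, then the new input for the model to complete."""
    lines = [task, ""]
    for product, copy in examples:
        lines.append(f"Product: {product}")
        lines.append(f"Copy: {copy}")
        lines.append("")
    lines.append(f"Product: {query}")
    lines.append("Copy:")
    return "\n".join(lines)

prompt = build_fewshot_prompt(
    "Write one-line marketing copy for a product.",
    [("noise-canceling headphones", "Silence the world, hear your music."),
     ("standing desk", "Work on your feet, feel the difference.")],
    "ergonomic keyboard",
)
print(prompt)
```

The string returned here would be sent as the prompt to a completions endpoint; the quality of the examples chosen often matters more than the model settings.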
As VentureBeat’s own Kyle Wiggers noted in a recent piece, multimodal models are fast becoming a reality. This year, OpenAI released DALL-E and CLIP, two multimodal models that the research lab claims are “a step toward systems with [a] deeper understanding of the world.” If LLMs are anything to go by, we can reasonably expect to see commercial applications of multimodal models in 2022.
Another important direction is that of hybrid AI, which is about infusing knowledge in machine learning. Leaders such as Intel’s Gadi Singer, LinkedIn’s Mike Dillinger, and Hybrid Intelligence Centre’s Frank van Harmelen all point toward the importance of knowledge organization in the form of knowledge graphs for the future of AI. Whether hybrid AI produces applied AI applications in 2022 remains to be seen.
Let’s wrap up with something more grounded: promising domains for applied AI in 2022. O’Reilly’s AI Adoption in the Enterprise 2021 survey cites technology and financial services as the two domains leading AI adoption. That’s hardly surprising, given the willingness of the technology industry to “eat its own dog food” and the willingness of the financial industry to gain every inch of competitive advantage possible by using its deep pockets.
But what happens beyond those two industries? O’Reilly’s survey cites health care as the third domain in AI adoption, and this is consistent with our own experience. As State of AI authors Nathan Benaich and Ian Hogarth noted in 2020, biology and health care are seeing their AI moment. This wave of adoption was already in motion, and the advent of COVID-19 accelerated it further.
“Incumbent pharma is very much driven by having a hypothesis a priori, saying, for example, ‘I think this gene is responsible for this disease, let’s go prosecute it and figure out if that’s true.’ Then there are the more software-driven folks who are in this new age of pharma. They mostly look at large-scale experiments, and they are asking many questions at the same time. In an unbiased way, they let the data draw the map of what they should focus on,” Benaich said to summarize the AI-driven approach.
The only way to validate whether the new age pharma approach works, Benaich added, is if these companies can generate drug candidates that actually prove useful in the clinic and ultimately get those drugs approved. Among the “new age pharma” companies, Recursion Pharmaceuticals IPO’d in April 2021, and Exscientia filed to IPO in September 2021. Both have assets generated through their machine learning-based approaches that are actually being used clinically.
As for manufacturing, there are a few reasons why we choose to highlight it among the many domains trailing in AI adoption. First, it suffers from a labor shortage of the kind AI can help alleviate. As many as 2.1 million manufacturing jobs could go unfilled through 2030, according to a study published by Deloitte and The Manufacturing Institute. AI solutions that perform tasks such as automated physical product inspection fall into that category.
Second, the nature of industrial applications requires combining swathes of data with the physical world in very precise ways. This, some people have noted, lends itself well to hybrid AI approaches.
And last but not least, hard data. According to a 2021 survey from The Manufacturer, 65% of leaders in the manufacturing sector are working to pilot AI. Implementation in warehouses alone is expected to hit a 57.2% compound annual growth rate over the next five years.
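To put that 57.2% CAGR in perspective, a quick back-of-the-envelope calculation, assuming the rate compounds annually for the full five years:

```python
# Growth multiple implied by a 57.2% CAGR sustained for five years
cagr = 0.572
years = 5
multiple = (1 + cagr) ** years
print(f"{multiple:.1f}x")  # roughly 9.6x
```

In other words, if that forecast holds, warehouse AI implementation would grow nearly tenfold over the period.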
(Source: VentureBeat, “What will applied AI look like in 2022?”)