GPUs and other co-processors will provide a strong boost to artificial intelligence



Artificial intelligence (AI) may seem to be a fashionable
buzzword these days, but the term was first coined at Dartmouth
College in New Hampshire more than 60 years ago. Since then, AI has
evolved considerably from its rarefied academic beginnings.

Generally understood as the technology by which machines mimic the cognitive functions normally associated with the human mind, AI is increasingly viewed as a significant force of change for the future, part of an elite group of so-called transformative technologies, including 5G and the Internet of Things (IoT), that will fundamentally alter the way people live their lives, conduct business, and make things.

It is common to find AI functionality today in servers and high-performance computers because AI workloads, and model training in particular, have traditionally required high processing power. But the vast changes now taking place in AI began roughly a decade ago, driven by developments at US graphics technology leader Nvidia that sought to harness the power of graphics processing units (GPUs) for AI-related tasks.

The use of GPUs produced electrifying results, delivering anywhere from 10 to 100 times the performance on AI-related tasks compared with what could be obtained from a microprocessor (MPU).


Scalar processors vs. vector processors

Scalar processors are those that perform computations on one data element, or a small set of elements, at a time. One common scalar processor is the MPU, which performs the functions of a central processing unit in computers and similar high-performance compute devices.

Another common, though less well-known, scalar processor is the microcontroller (MCU). MCUs are deployed in everything from the smallest handheld devices to cars and industrial automation. The scalar design of the MCU is well suited to the flexibility required to run a variety of applications, respond quickly to the human interface, and manage resources, with all these tasks handled concurrently. Historically, it has been the raw performance and speed of the MPU that has enabled it to perform AI tasks in data centers and high-performance computers, but scalar architecture is not optimized for the kind of vector or matrix math commonly used for AI.
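
To make the contrast concrete, here is a minimal sketch in Python with NumPy (an illustration of the general point, not something from the original article): the element-by-element loop mirrors how a scalar core works through a matrix multiplication, while the single vectorized call expresses the same math as one bulk operation that vector hardware can parallelize. The matrix sizes and names are arbitrary.

```python
import numpy as np

# Small matrices standing in for one layer of a neural network:
# an input batch and a weight matrix.
x = np.random.rand(64, 128)
w = np.random.rand(128, 32)

def scalar_style_matmul(a, b):
    """Multiply matrices one element at a time, the way a scalar core works."""
    out = np.zeros((a.shape[0], b.shape[1]))
    for i in range(a.shape[0]):
        for j in range(b.shape[1]):
            acc = 0.0
            for k in range(a.shape[1]):
                acc += a[i, k] * b[k, j]
            out[i, j] = acc
    return out

y_scalar = scalar_style_matmul(x, w)   # many small, dependent operations
y_vector = x @ w                       # one bulk operation a vector engine can parallelize

assert np.allclose(y_scalar, y_vector)  # same math, very different execution model
```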


In comparison, GPUs are specialized processors that are highly efficient at handling computer graphics and image processing. With many smaller, more specialized cores and direct access to data in memory, GPUs excel at vector math. Through the rapid manipulation of a computer's memory, GPUs accelerate the rendering of images in the memory buffer, with the resulting images then output at lightning speed to a computer monitor, TV screen, or similar display device. It is this same vector processing capability, now applied beyond graphics to general-purpose workloads, that allows AI applications to be offloaded from the primary applications processor to a GPU and executed with much greater efficiency.
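
As a rough sketch of that offload, the snippet below uses PyTorch, one common framework (chosen here purely for illustration; the article does not name a specific framework), to run the same matrix math on the CPU and then on a GPU when one is available. The tensor sizes are arbitrary.

```python
import torch

# A batch of inputs and a weight matrix, the basic building blocks of AI workloads.
x = torch.rand(512, 1024)
w = torch.rand(1024, 256)

# Run on the CPU (the scalar/SIMD cores of the applications processor).
y_cpu = x @ w

# Offload to the GPU when one is present; the math is identical,
# but thousands of smaller GPU cores execute it in parallel.
if torch.cuda.is_available():
    y_gpu = (x.cuda() @ w.cuda()).cpu()
    # Results agree with the CPU path up to floating-point rounding.
    assert torch.allclose(y_cpu, y_gpu, rtol=1e-3)
```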


AI joins the mainstream

With the discovery that AI performance could be enhanced
significantly by deploying GPUs, artificial intelligence took on
fresh urgency and expanded quickly. From the cozy enclaves of
academia and the opaque halls of the defense establishment, AI
found its way into private enterprise and industry. Today, the technology appears in mainstream applications, from browser search engines to natural language processing in digital assistants such as Amazon's Alexa, Google Assistant, Apple's Siri, Microsoft's Cortana, and Samsung's Bixby. New AI applications are
also starting to permeate the markets for transformative and
emerging technologies.

At present, AI capabilities are carried out mainly by MPUs in a datacenter environment. Among servers today, fewer than 3% contain any co-processors that could be used to host AI functionality and significantly boost AI capabilities. Instead, AI today remains firmly in the charge of the MPU.

That will all soon change, however. Within five years, AI functionality will expand well beyond the microprocessor domain, and servers will incorporate dedicated platforms and co-processors. These platforms will include GPUs and AI accelerators from suppliers such as AMD, Intel, Nvidia, and Xilinx; discrete AI processors like those being made by Amazon, Google, Microsoft, and several emerging suppliers; and systems-on-chip (SoCs) with integrated machine learning (ML) capabilities.


By 2023, servers with AI-bearing co-processors will represent up to 15% of global server shipments, IHS Markit forecasts. Projections also show that the share of processors optimized for AI functionality will rise substantially, growing by as much as 100% annually for the next several years.

Even more importantly, processors with AI optimization will increasingly be deployed in a variety of embedded systems, such as those used in security cameras and in advanced driver assistance systems (ADAS) for cars, as well as in applications where image analysis, for security and identification purposes, will be of paramount importance. A very wide variety of applications will be able to run inferencing, using trained AI models to analyze patterns in data and draw logical conclusions, on even the smallest AI-optimized SoCs and ML processors, completely transforming our world.
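
As a hedged sketch of what such on-device inferencing can look like, the Python snippet below uses the TensorFlow Lite runtime, a common choice for small SoCs. The framework, the model file name, and the input frame are illustrative assumptions, not details from the article.

```python
import numpy as np
import tflite_runtime.interpreter as tflite  # lightweight runtime aimed at embedded devices

# Hypothetical pre-trained, quantized model file; any .tflite classifier would do.
interpreter = tflite.Interpreter(model_path="camera_classifier.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Stand-in for a frame captured by a security camera, shaped to the model's input.
frame = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

# Run inference: the trained model analyzes the data and returns class scores.
interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print("class scores:", scores)
```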

Tom Hackenberg is associate director and senior
principal analyst for processors at IHS Markit
Posted 19 June 2019



