IDC Article

Diverse OT Markets Need AI/ML Across the Application Spectrum

OT (Operational Technology) systems consist of hardware and software that directly monitor and control physical devices, processes, and events in a wide variety of commercial and home applications. The OT industry is a prime candidate for disruption by Artificial Intelligence (AI) and Machine Learning (ML) technology. OT systems require AI/ML technology to tackle multidimensional problems while meeting high-speed, continuous real-time requirements, in contrast to the batch-oriented "statistical" AI systems being developed for IT systems. At the same time, OT systems have far smaller power budgets than the multi-megawatt power systems in cloud-based data centers.

IDC Article

AI/ML Must Migrate to the Edge and Endpoints; Here’s Why

The emergence of the cloud, the development of connected embedded systems, and the expanding reach of smartphones, tablets, and PCs have fueled a revolution in the creation and consumption of data. IDC expects that by 2025 the amount of data created globally will grow by a factor of 10, reaching 163 zettabytes (one zettabyte is a trillion gigabytes). Over the same period, IDC expects endpoint devices to create more than half of all that data, with embedded and IoT devices the fastest-growing areas within the endpoint segment.

IDC Article

A Review of Processing Architectures Used to Accelerate AI/ML for OT Applications

As recently as two years ago, AI/ML workloads ran almost exclusively on server-class MPUs and server-based GPU accelerators, even though server-class MPUs and GPUs are not very power-efficient when it comes to neural-network (NN) processing. The lack of efficiency results from a design philosophy that emphasizes raw MPU and GPU compute performance, achieved through very high clock rates, rather than compute performance per watt. This design approach caters to typical server workloads and depends on data-center power and cooling capabilities.

IDC Article

Why DRPs Excel at Implementing AI/ML Applications for OT Markets

Many tasks on the edge and in endpoint devices, especially Deep Neural Network (DNN) and Convolutional Neural Network (CNN) AI/ML inferencing tasks, require an optimal blend of processing performance and power efficiency. To date, AI/ML workloads such as DNNs and CNNs have run almost exclusively on server-class MPUs and server-based GPU accelerators, even though server-class MPUs and GPUs are not particularly power-efficient when it comes to AI/ML processing. This lack of efficiency is the direct result of a server-class design philosophy that emphasizes MPU and GPU compute performance at any price rather than compute performance per watt.
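
The articles above frame the server-versus-edge trade-off in terms of compute performance per watt. As a minimal sketch of what that metric captures for an inferencing workload, the Python snippet below compares two hypothetical devices; the throughput and power figures are placeholder assumptions for illustration only, not measured values for any real product.

# Illustrative performance-per-watt comparison for a CNN inferencing workload.
# All figures are hypothetical placeholders, not benchmark data.

def inferences_per_second_per_watt(inferences_per_second: float,
                                   power_watts: float) -> float:
    """Efficiency metric: inferences delivered per second for each watt consumed."""
    return inferences_per_second / power_watts

# Hypothetical devices with assumed throughput and power draw.
devices = {
    "server GPU accelerator": {"ips": 20000.0, "watts": 300.0},
    "embedded edge accelerator": {"ips": 500.0, "watts": 2.0},
}

for name, d in devices.items():
    eff = inferences_per_second_per_watt(d["ips"], d["watts"])
    print(f"{name}: {d['ips']:.0f} inferences/s at {d['watts']:.0f} W "
          f"-> {eff:.1f} inferences/s per watt")

Under these assumed numbers the edge accelerator delivers far fewer absolute inferences per second but several times more inferences per watt, which is the figure of merit that matters within the tight power budgets of OT endpoint devices.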