Graphics Processing Units (GPUs) are specialized processors built to execute large numbers of repetitive computations very quickly. They were originally designed to render performance-hungry display graphics and take that load off the CPU. Over the past decade, GPUs have increasingly been adopted in other applications that need extremely fast, repetitive computation. A prominent recent example is deep learning, a type of machine learning that benefits from massively parallel processing because its workload is dominated by data processing rather than task processing.
GPUs consist of a single integrated chip with many processor cores (from tens to thousands of logical cores on one chip) that work simultaneously to process many computations in parallel. A traditional CPU, in contrast, works through tasks sequentially. CPUs with multiple cores can break up tasks and process them simultaneously, but not on the massive scale GPUs use for super-fast computation. GPUs also have a higher computational density than CPUs: a GPU can deliver 10x or more the computational performance of a CPU of the same size, provided the workload sticks to floating-point operations and Single Instruction Multiple Data (SIMD) style code. CPUs are designed to handle control statements and branching, whereas GPUs are not, and algorithms best run on a CPU will suffer performance degradation on a GPU. General-purpose GPUs are best at performing computations over massive amounts of data while avoiding operations that require the typical interaction with a stack to perform involved operations and decisions; CPUs are better at tasks that do involve working with a stack, such as branching and decision-making.
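To make the SIMD-style pattern concrete, here is a minimal sketch in CUDA (assuming an Nvidia GPU and the CUDA toolkit are available; the kernel name `vectorAdd` and the sizes are illustrative, not from the article). Every thread applies the same instruction to a different element of the data, which is exactly the shape of work GPUs accelerate.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread computes one element: the same instruction applied to
// different data, with no control-heavy branching and no stack work.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {                 // guard: the last block may be partially full
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // ~1M elements
    size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);          // unified memory, for brevity
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);  // thousands of cores at once
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);           // expect 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Note that the kernel's only branch is a bounds check; a kernel full of data-dependent `if`/`else` paths would force threads in a group to serialize, which is the performance degradation described above.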
Industries that use GPUs include finance, weather modeling, data analytics, defense and intelligence systems, machine learning, modeling for manufacturing (e.g., modeling fluid dynamics), all manner of image processing, rendering, and editing (including medical applications like MRI), research & development, supercomputing for academic research and modeling, the energy industry (oil and gas), bitcoin mining, security applications, and more. The common denominator across all of these applications is the need to rapidly process and perform computations on massive amounts of data. Nvidia and AMD are major manufacturers of GPUs.
Large numbers of cores are not exclusive to GPUs. Intel's Xeon Phi™ coprocessor, built on the Many Integrated Core (MIC) architecture, competes with the Nvidia Tesla GPU and likewise delivers "massive parallelism" with many cores for extremely high-performance computing (HPC). Like a GPU, the Intel Xeon Phi coprocessor offloads work from the CPU. GPUs were originally created to offload mathematically intense functions such as image processing from the CPU, so GPUs do not compete head-to-head with CPUs in general; they compete more with coprocessors and other accelerator products. In general, a massively parallel multi-core CPU will be able to do all the work that a GPU can do, but a GPU with the same number of cores will not be able to cover the same breadth of functions without additional software and/or tools.
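The offload pattern described here follows the same basic shape regardless of whether the accelerator is a GPU or a coprocessor: the CPU stages the data, hands the compute-heavy step to the device, and copies the result back. Below is a hedged sketch of that pattern in CUDA with explicit host/device transfers (the kernel name `scaleKernel` and the scaling workload are illustrative assumptions, not from the article).

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Mathematically simple, massively repeated work: the kind a CPU offloads.
__global__ void scaleKernel(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 16;
    size_t bytes = n * sizeof(float);

    // Host side: the CPU owns the data and the control flow.
    float *h_data = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_data[i] = (float)i;

    // Offload: move the data into the accelerator's memory...
    float *d_data;
    cudaMalloc(&d_data, bytes);
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

    // ...run the repetitive arithmetic there...
    scaleKernel<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);

    // ...and bring the result back. The CPU is free to do other work
    // between the asynchronous launch and this blocking copy.
    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);

    printf("h_data[1] = %f\n", h_data[1]);  // expect 2.0
    cudaFree(d_data);
    free(h_data);
    return 0;
}
```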