The first post in this series discussed why you want OpenCL. This post will describe how it works.
The GPUs in present-day graphics cards, such as the AMD FirePro/Radeon and Nvidia Quadro/GeForce lines, are massively parallel, multithreaded, multicore processors with enormous computational power and high memory bandwidth. Traditionally these processors have been used only for graphics, leaving the CPU to do everything else.
More Computing Power Using Massive Parallelism
The paradigm shift with OpenCL is a non-proprietary, standardized (and familiar, C-based) language for dividing general-purpose computational code into parallel threads, so the GPU and CPU can work in tandem to deliver new functionality or tackle large processing tasks.
One of the key features of OpenCL is its ability to assign work to the GPU or the multicore CPU depending on how much computational power a given task needs and how data-intensive it is. A combined CPU+GPU OpenCL solution means you can get high performance for a design and for its analysis and simulation at the same time.
In business terms, OpenCL means that responsiveness and speed will improve dramatically on existing hardware, from servers to handheld devices. When algorithms are redesigned to use OpenCL, speed-ups of 10x are common, and speed-ups of 30x are not unusual. (See, for example, the EDEM Simulation Engine.)
Next I'll discuss how OpenCL will affect your workflow.
Author: Tony DeYoung