Advanced Micro Devices is continuing to see employees heading for the door, either of their own volition or not.
In September, Jim Keller, the chief chip architect and the driving force behind the development of AMD's upcoming "Zen" chip architecture, left the company to pursue other opportunities, ending his second stint with the processor maker. Just weeks after Keller's departure was made public, AMD officials in early October announced that, as part of a restructuring effort, the company plans to cut another 500 jobs, the latest in a series of workforce reductions at AMD over the past few years.
Now AMD has lost Phil Rogers, an AMD Fellow and one of the drivers of the company's heterogeneous computing strategy. Rogers, who spent more than 11 years at ATI before AMD bought the company in 2006 and who served as president of the Heterogeneous System Architecture (HSA) Foundation, joined Nvidia this month as a compute server architect.
Heterogeneous computing is an important part of AMD's compute strategy. The company's chip portfolio includes what officials call accelerated processing units (APUs), in which the CPU and graphics technology are integrated onto the same piece of silicon. HSA capabilities enable the CPU and GPU to be treated as equals, with shared memory and task scheduling, making it easier for a workload to run on the compute element best suited to its needs, whether that's the CPU, the GPU or another component, such as a digital signal processor (DSP). The idea is that with an HSA-compliant processor, software will be able to do a lot with the GPU that it couldn't before.
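HSA defines its own runtime and intermediate language, but the shared-memory idea can be illustrated with a minimal sketch using OpenCL 2.0 fine-grained shared virtual memory, which AMD exposes on its HSA-capable APUs. In the sketch below, the CPU allocates a buffer and writes to it, a GPU kernel works on the very same pointer, and the CPU then reads the result with no explicit copies. The kernel, buffer size and device selection are illustrative assumptions, not code from AMD or the HSA Foundation.

    /* Minimal sketch: CPU and GPU touching one allocation via OpenCL 2.0
     * fine-grained shared virtual memory (SVM). Error handling is trimmed
     * for brevity; the kernel and sizes are illustrative only. */
    #define CL_TARGET_OPENCL_VERSION 200
    #include <CL/cl.h>
    #include <stdio.h>

    static const char *src =
        "__kernel void scale(__global float *buf) {"
        "    size_t i = get_global_id(0);"
        "    buf[i] *= 2.0f;"   /* GPU updates the shared buffer in place */
        "}";

    int main(void) {
        cl_platform_id plat; cl_device_id dev; cl_int err;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
        cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, &err);

        /* One allocation, visible to both CPU and GPU -- no staging buffers. */
        size_t n = 1024;
        float *buf = clSVMAlloc(ctx, CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                                n * sizeof(float), 0);
        for (size_t i = 0; i < n; i++)
            buf[i] = (float)i;                /* CPU writes directly */

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
        clBuildProgram(prog, 1, &dev, "-cl-std=CL2.0", NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "scale", &err);
        clSetKernelArgSVMPointer(k, 0, buf);  /* hand the raw pointer to the GPU */

        clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
        clFinish(q);

        printf("buf[10] = %f\n", buf[10]);    /* CPU reads the GPU's result directly */

        clSVMFree(ctx, buf);
        clReleaseKernel(k); clReleaseProgram(prog);
        clReleaseCommandQueue(q); clReleaseContext(ctx);
        return 0;
    }

On a discrete GPU, the same job would typically need separate host and device buffers plus explicit transfers; on an HSA-style APU, the single allocation works because the CPU and GPU share one coherent view of memory.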
The HSA Foundation—which includes not only AMD but also ARM, Qualcomm, MediaTek and Samsung, among others—was created to accelerate the development of software that can take advantage of heterogeneous computing. The foundation has released HSA 1.0, and AMD's latest "Carrizo" APUs are the first chips to be compliant with the architecture.
Nvidia is not part of the HSA Foundation, but it is working on its next-generation "Pascal" architecture, which will leverage NVLink, an interconnect developed with IBM that will enable GPUs and CPUs to share data five to 12 times faster than they can now. According to Nvidia officials, data currently moves at about 16GB per second; with NVLink and its fatter pipe, that will jump to between 80GB and 200GB per second.