SAN JOSE, Calif., Jul. 18, 2018 –
Advanced Micro Devices is gearing up to join the race to accelerate deep-learning jobs in client and embedded systems. However, AMD is not yet ready to provide specifics on the 7-nm x86 and GPU chips that it aims to deliver over the next year – or on its roadmap beyond 7 nm.
"There is a need for high performance with what we call the edge [of the network] closer to the source where data is coming in and [needing] to be analyzed – often in real time," said Mark Papermaster, AMD's chief technology officer, in an interview. "AMD's machine-learning strategy is holistic and provides engines of AI for both the data center and the edge."
In late 2016, AMD released its first GPU accelerators for deep learning in the data center. Since then, Google's Tensor Processing Unit and other designs have shown the advantages of adding arrays of multiply-accumulate (MAC) units in hardware to speed deep-learning algorithms.
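(As a rough illustration only – not drawn from any AMD or Google design – the short Python sketch below shows why MAC arrays matter: the inner loops of a fully connected neural-network layer consist almost entirely of multiply-accumulate operations, which dedicated hardware can execute many at a time.)

# Illustrative sketch: the multiply-accumulate (MAC) operation that
# dominates deep-learning inference. A dense (fully connected) layer
# reduces to nested MAC loops, which is why accelerators devote large
# arrays of MAC units to this work in silicon.

def dense_layer(inputs, weights, biases):
    """Compute outputs[j] = biases[j] + sum_i inputs[i] * weights[i][j]."""
    num_out = len(biases)
    outputs = [0.0] * num_out
    for j in range(num_out):
        acc = biases[j]
        for i in range(len(inputs)):
            acc += inputs[i] * weights[i][j]   # one multiply-accumulate (MAC)
        outputs[j] = acc
    return outputs

# Example: a 3-input, 2-output layer performs 3 x 2 = 6 MACs per pass.
x = [0.5, -1.0, 2.0]
w = [[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]]
b = [0.01, 0.02]
print(dense_layer(x, w, b))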