AMD has devised a modular AI accelerator to which GPU and cache chiplets can be attached

AMD has registered a patent describing a modular Accelerated Processing Device (APD). According to the filing, it combines a machine-learning accelerator (MLA) chiplet, a GPU chiplet (e.g., based on RDNA 3), and a cache chiplet (very likely built on AMD's Infinity Cache). The APD's sole purpose is to accelerate machine-learning algorithms, particularly workloads dominated by matrix multiplication.
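To see why matrix multiplication is the operation worth accelerating, consider that a fully connected neural-network layer boils down to one matrix product plus a bias add. The following pure-Python sketch is purely illustrative (it has nothing to do with AMD's actual design, and the function names are our own):

```python
# Illustrative only: a dense neural-network layer reduces to a matrix
# multiplication plus a bias add -- the kind of operation a dedicated
# ML accelerator chiplet is meant to speed up.

def matmul(a, b):
    """Multiply an (m x k) matrix by a (k x n) matrix, given as lists of rows."""
    k, n = len(b), len(b[0])
    return [[sum(row[i] * b[i][j] for i in range(k)) for j in range(n)]
            for row in a]

def dense_layer(x, weights, bias):
    """One dense layer: y = x @ W + b (activation omitted for brevity)."""
    y = matmul(x, weights)
    return [[y[r][c] + bias[c] for c in range(len(bias))]
            for r in range(len(y))]

# Tiny example: a batch of 2 inputs with 3 features mapped to 2 outputs.
x = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
w = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
b = [0.5, -0.5]
print(dense_layer(x, w, b))  # [[4.5, 4.5], [10.5, 10.5]]
```

In real networks these products involve matrices with thousands of rows and columns per layer, which is why dedicated matrix-multiply hardware pays off.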

As TechPowerUp points out, the modular machine-learning accelerator would perform functions similar to the Tensor Cores in NVIDIA GPUs, which are responsible for, among other things, DLSS. The modular design would let AMD avoid some of the problems of integrating such an accelerator directly into the GPU die, most notably the inevitable increase in die area and, consequently, in manufacturing cost.

Modularity would also allow the APD to be used in products other than graphics cards. The patent mentions, for example, pairing it with CPUs, where the APD would be added as an extra chiplet built on a 12 nm process. In addition, the APD could accelerate cache requests passing between the GPU and the cache chiplet, or itself serve as cache or directly addressable memory.
