Moving data back and forth between memory and the Central Processing Unit (CPU) is fundamental to computing today. However, this data movement consumes a great deal of power, especially in Artificial Intelligence (AI) workloads, to the point where fetching data from memory can use more energy than actually doing the 'compute' work on it!

Caches have long been used to mitigate this problem, but they are inadequate for AI because of their limited capacity and the overhead of managing them. The Graphics Processing Unit (GPU) has been used to accelerate some of the complex computation that AI depends on, but it was not originally designed for that task and is not especially efficient at it. Even custom processors like the Tensor Processing Unit (TPU) still require significant power and cooling to dissipate the massive heat they generate, not to mention their extremely high cost. The fact remains that moving data around is extremely inefficient.

It’s time for a change, and our AIM platform, Accelerating Intelligence @ Memory, is the key!
