Yuntianliyifei Introduces 3D Stacked Memory Architecture in Inference Chip Development

According to investor relations disclosures on May 12, the inference chip Yuntianliyifei has in development adopts a GPNPU architecture as its core technology roadmap. Key technical highlights include GPGPU-class general-purpose programmability compatible with the mainstream CUDA ecosystem, NPU cores optimized for inference efficiency, and a 3D stacked memory architecture designed to increase bandwidth and reduce access latency, breaking through the memory-wall bottleneck.
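To see why stacked memory targets the memory wall, consider that autoregressive decoding must stream the model's (active) weights from memory for every token, so throughput is often bandwidth-bound rather than compute-bound. The sketch below is a back-of-envelope illustration with assumed, illustrative numbers; the model size, quantization, and bandwidth figures are not Yuntianliyifei's specifications.

```python
# Back-of-envelope "memory wall" estimate for single-stream LLM decoding.
# Assumption: each decoded token requires streaming all active weights once,
# so tokens/sec is bounded by memory bandwidth / bytes moved per token.

def decode_tokens_per_sec(bytes_per_token: float, bandwidth_bytes_s: float) -> float:
    """Upper bound on decode throughput when memory-bandwidth-bound."""
    return bandwidth_bytes_s / bytes_per_token

# Hypothetical 70B-parameter model at 8-bit weights: ~70 GB streamed per token.
weights_bytes = 70e9

conventional = decode_tokens_per_sec(weights_bytes, 1e12)  # ~1 TB/s conventional memory
stacked      = decode_tokens_per_sec(weights_bytes, 4e12)  # ~4 TB/s 3D-stacked memory

print(f"conventional: {conventional:.1f} tok/s, stacked: {stacked:.1f} tok/s")
```

Under these assumptions, throughput scales linearly with bandwidth, which is why raising bandwidth (rather than raw compute) is the lever against the memory wall.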

The company also employs a modular compute architecture to support rack-level scale-up supernode construction for inference of trillion- and hundred-trillion-scale MoE models. The technology roadmap targets an exponential reduction in token costs to accelerate the deployment of large-model applications.
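The link between MoE models and lower token costs comes from sparse activation: each token is routed to only a few experts, so per-token compute scales with the active parameter count rather than the total. The sketch below illustrates this with hypothetical expert counts and sizes; none of the numbers are disclosed specifications.

```python
# Hedged sketch: per-token cost of an MoE model scales with *active* parameters.
# All figures are illustrative assumptions, not Yuntianliyifei or any model's specs.

def active_params(n_active_experts: int, params_per_expert: float,
                  shared_params: float) -> float:
    """Parameters touched per token: shared layers plus the routed experts."""
    return shared_params + n_active_experts * params_per_expert

# Hypothetical model: 256 experts of 4B params each, plus 10B shared params.
dense_equiv = active_params(256, 4e9, 10e9)  # all experts active (dense-equivalent)
moe_top8    = active_params(8, 4e9, 10e9)    # top-8 routing per token

print(f"per-token compute ratio: {dense_equiv / moe_top8:.1f}x")
```

The ratio printed is the factor by which sparse routing cuts per-token compute under these assumptions, which is the mechanism behind the token-cost reductions the roadmap targets.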
