
January 20, 2021

TSV technology: effectively expanding DRAM capacity and bandwidth

With the recent rapid growth of artificial intelligence (AI), machine learning, high-performance computing, graphics, and networking applications, the demand for memory is growing faster than ever, and traditional main-memory DRAM is no longer sufficient to meet system requirements. Server applications in the data center, in particular, place ever-higher capacity demands on memory. Traditionally, the capacity of the memory subsystem has been expanded by increasing the number of memory channels per socket and by using higher-density DRAM dual in-line memory modules (DIMMs). However, even with the most advanced 16Gb DDR4 DRAM, system memory capacity may still fall short for certain applications, such as in-memory databases.

Through-silicon via (TSV) technology has become an effective foundation for both capacity and bandwidth expansion in memory. A TSV is a hole etched through the entire thickness of a silicon die; the goal is to form thousands of vertical interconnects running from the front of the chip to the back, and vice versa. In the early days, TSV was regarded merely as a packaging technology, an alternative to wire bonding. Over the years, however, it has become an indispensable tool for scaling DRAM performance and density. Today the DRAM industry has two main use cases in which TSVs are successfully mass-produced to overcome capacity and bandwidth scaling limits: 3D-TSV DRAM and High Bandwidth Memory (HBM).


In addition to traditional dual-die packages (DDP) with wire-bonded die stacking, high-density modules such as 128GB and 256GB DIMMs (16Gb-based 2-rank DIMMs using 2High and 4High x4 DRAM) are adopting 3D-TSV DRAM. In a 3D-TSV DRAM, two or four DRAM dies are stacked on top of each other, and only the bottom die connects externally to the memory controller. The remaining dies are interconnected internally by many TSVs, which isolate the input/output (I/O) load. Compared with the DDP structure, this achieves higher pin speeds by decoupling the I/O load and reduces power consumption by eliminating redundant circuit blocks on the stacked dies.
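The 128GB and 256GB figures follow directly from the die density, stack height, and DIMM organization mentioned above. The sketch below is illustrative arithmetic only (ECC devices are ignored and the device count per rank is an assumption for a standard 64-bit data bus of x4 DRAM), not an official formula:

```python
# Sketch: DIMM capacity from die density, stack height, ranks, and devices.
# Assumes a 64-bit data bus of x4 DRAM -> 16 devices per rank (ECC ignored).

def dimm_capacity_gb(die_gb: float, stack_high: int,
                     ranks: int, devices_per_rank: int) -> float:
    """Total DIMM capacity in gigabytes."""
    return die_gb * stack_high * ranks * devices_per_rank

DIE_GB = 16 / 8  # a 16Gb die is 2GB

print(dimm_capacity_gb(DIE_GB, 2, 2, 16))  # 2High x4, 2-rank -> 128.0 GB
print(dimm_capacity_gb(DIE_GB, 4, 2, 16))  # 4High x4, 2-rank -> 256.0 GB
```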


HBM, on the other hand, was created to close the gap between the high bandwidth demanded by SoCs and the maximum bandwidth main memory can supply. In AI applications, for example, the bandwidth requirement of a single SoC (especially in training) may exceed several TB/s, which conventional main memory cannot meet: a single main-memory channel with a 3200Mbps DDR4 DIMM provides only 25.6GB/s, and even the most advanced CPU platform with 8 memory channels reaches only 204.8GB/s. By contrast, 4 HBM2 stacks around a single SoC can deliver more than 1TB/s, closing the bandwidth gap. Depending on the application, HBM can serve as a standalone cache or as the first tier of a two-tier memory. HBM is an in-package memory, integrated with the SoC through a silicon interposer inside the same package. This lets it overcome the limit on the number of data I/O package pins that constrains conventional off-package memory. The HBM2 deployed in actual products consists of a 4High or 8High stack of 8Gb dies with 1024 data pins, each pin running at 1.6~2.4Gbps. Each HBM stack therefore has a density of 4 or 8GB and a bandwidth of 204~307GB/s.
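All of the bandwidth figures above reduce to the same formula: peak bandwidth in GB/s is data pins times per-pin speed in Gbps, divided by 8. A minimal sketch of that arithmetic (the pin counts are the standard ones for a 64-bit DDR4 channel and an HBM2 stack):

```python
# Peak bandwidth: GB/s = data_pins x per-pin Gbps / 8 bits-per-byte.

def peak_bandwidth_gbs(data_pins: int, pin_speed_gbps: float) -> float:
    """Peak bandwidth in GB/s for a given pin count and per-pin speed."""
    return data_pins * pin_speed_gbps / 8

# DDR4-3200 channel: 64 data pins at 3.2 Gbps -> 25.6 GB/s.
ddr4_channel = peak_bandwidth_gbs(64, 3.2)       # 25.6 GB/s
eight_channels = 8 * ddr4_channel                # 204.8 GB/s

# HBM2 stack: 1024 data pins at 1.6~2.4 Gbps -> 204.8~307.2 GB/s.
hbm2_low = peak_bandwidth_gbs(1024, 1.6)         # 204.8 GB/s
hbm2_high = peak_bandwidth_gbs(1024, 2.4)        # 307.2 GB/s

# Four HBM2 stacks at the top speed exceed 1 TB/s.
print(4 * hbm2_high)                             # 1228.8 GB/s
```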


SK hynix has been committed to maintaining an industry-leading position in HBM and high-density 3D-TSV DRAM products. Recently, SK hynix announced the successful development of the HBM2E device, an extended version of HBM2 with a density of up to 16GB and a bandwidth of 460GB/s per stack. This was made possible by increasing the DRAM die density to 16Gb and achieving a speed of 3.6Gbps per pin across 1024 data I/Os at a 1.2V supply voltage. SK hynix is also expanding its lineup of 128~256GB 3D-TSV DIMMs to meet customers' needs for higher-density DIMMs.

TSV technology has now reached a level of maturity at which the latest products, such as HBM2E, can be built with thousands of TSVs. Going forward, however, shrinking the TSV pitch, diameter, and aspect ratio, as well as the die thickness, while maintaining high assembly yields will become more challenging, and will be critical for continued device performance and capacity scaling. Such improvements will reduce the TSV load, shrink the fraction of die area occupied by TSVs, and allow stacks to grow beyond 12High while keeping the same total physical stack height. Through continuous innovation in TSV products and technology, SK hynix will continue to position itself at the forefront of memory technology leadership. HOREXS Group also continues to improve its technology to meet SK hynix's demands; for any memory substrate manufacturing inquiries, please contact AKEN.
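The HBM2E figures quoted above follow the same pin-count arithmetic. An illustrative check (these are the numbers from the text, not an independent specification):

```python
# HBM2E per-stack figures, as quoted in the text above.
pins, pin_speed_gbps = 1024, 3.6

bandwidth_gbs = pins * pin_speed_gbps / 8   # 460.8 GB/s per stack (~460 GB/s)
density_gb = 8 * (16 / 8)                   # 8High stack of 16Gb dies = 16 GB

print(bandwidth_gbs, density_gb)
```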
