March 11, 2021

Manufacturing Bits: Capacitor-less DRAM; 14nm embedded MRAM; non-ideal ReRAM

At the recent 2020 International Electron Devices Meeting (IEDM), Imec presented a paper on a novel capacitor-less DRAM cell architecture.

DRAM is used for main memory in systems, and today’s most advanced devices are based on roughly 18nm to 15nm processes. The physical limit for DRAM is somewhere around 10nm.

DRAM itself is based on a one-transistor, one-capacitor (1T1C) memory cell architecture. The problem is that it’s becoming more difficult to scale or shrink the capacitor at each node.

“Scaling traditional 1T1C DRAM memories beyond 32Gb die density faces two major challenges,” according to Imec. “First, difficulties in Si-based array transistor scaling make it challenging to maintain the required off-current and word line resistance with decreasing cell size. Second, 3D integration and scalability – the ultimate path towards high-density DRAM – is limited by the need for a storage capacitor.”

In R&D, the industry is working on various next-generation memory technologies to replace DRAM. Meanwhile, others are working on ways to extend today’s DRAM using new materials.

For example, Imec has devised a DRAM cell architecture that implements two indium-gallium-zinc-oxide thin-film transistors (IGZO-TFTs) and no storage capacitor. DRAM cells in this 2T0C (two-transistor, zero-capacitor) configuration show retention times longer than 400 seconds across different cell dimensions, which in turn reduces the memory’s refresh rate and power consumption.
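For a rough sense of why longer retention matters, consider the refresh overhead. A conventional DRAM cell must be refreshed within roughly a 64ms retention window; the reported 2T0C retention is longer than 400 seconds. A minimal back-of-the-envelope sketch (the 64ms figure is the conventional DRAM refresh interval, not a number from the paper):

```python
# Illustrative comparison of DRAM refresh overhead per cell.
# 0.064 s is the conventional DRAM retention/refresh window;
# 400 s is the 2T0C retention reported by Imec.

def refreshes_per_second(retention_s: float) -> float:
    """Each cell must be refreshed at least once per retention window."""
    return 1.0 / retention_s

conventional = refreshes_per_second(0.064)  # ~15.6 refresh cycles/s per cell
igzo_2t0c = refreshes_per_second(400.0)     # 0.0025 refresh cycles/s per cell

reduction = conventional / igzo_2t0c
print(f"Refresh-rate reduction: {reduction:.0f}x")
```

Since each refresh cycle burns energy, a proportional cut in refresh frequency translates directly into lower standby power for the array.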

The ability to process IGZO-TFTs in the back-end-of-line (BEOL) manufacturing line reduces the cell’s footprint and opens the possibility of stacking individual cells.

“Besides the long retention time, IGZO-TFT-based DRAM cells present a second major advantage over current DRAM technologies. Unlike Si, IGZO-TFT transistors can be fabricated at relatively low temperatures and are thus compatible with BEOL processing. This allows us to move the periphery of the DRAM memory cell under the memory array, which significantly reduces the footprint of the memory die. In addition, the BEOL processing opens routes towards stacking individual DRAM cells, hence enabling 3D-DRAM architectures. Our breakthrough solution will help tearing down the so-called memory wall, allowing DRAM memories to continue playing a crucial role in demanding applications such as cloud computing and artificial intelligence,” said Gouri Sankar Kar, program director at Imec.

Also at IEDM, IBM presented a paper on the world’s first embedded spin-transfer-torque MRAM (STT-MRAM) technology at the 14nm CMOS process node.

IBM’s STT-MRAM technology is designed for embedded and cache memory applications in mobile, storage and other systems.

A next-generation memory technology, STT-MRAM is attractive because it features the speed of SRAM and the non-volatility of flash with unlimited endurance. STT-MRAM is a one-transistor architecture with a magnetic tunnel junction (MTJ) memory cell. It uses the magnetism of electron spin to provide non-volatile properties in chips. The write and read functions share the same parallel path in the MTJ cell.

There are two types of STT-MRAM—standalone chips and embedded. Standalone STT-MRAM is shipping and being used in enterprise solid-state drives (SSDs).

STT-MRAM is also targeted to replace today’s embedded NOR flash memory in microcontrollers (MCUs) and other chips, and it is geared for cache memory applications as well.

Today’s MCUs integrate several components on the same chip, such as a central processing unit (CPU), SRAM, embedded memory and peripherals. Embedded memory is used for code storage, which boots up a device and allows it to run programs. One of the most common embedded memory types is called NOR flash memory. NOR flash memory is rugged and works in embedded applications.

But NOR is running out of steam and is difficult to scale beyond the 28nm/22nm nodes. Plus, embedded NOR or eFlash is becoming too expensive at advanced nodes.

That’s where STT-MRAM fits in—it will replace embedded NOR at 28nm/22nm and beyond. “However, these advanced applications have been limited by two key challenges: 1) improving MTJ performance to reduce the write currents while controlling distributions; and 2) increasing the MRAM/CMOS circuit and cell density for advanced-node scaling. Previous leading work, all at the 28nm – 22nm nodes, highlighted the challenge of integrating tight-pitch MTJs within the short vertical space available between BEOL metal levels – a challenge which has so far prevented 14nm node eMRAM from being developed,” said Daniel Edelstein, an IBM fellow and co-author of the paper.

“Here, we demonstrate the first 14nm node eMRAM technology. Using a 2Mb eMRAM macro, we achieve an integration at a tight MTJ pitch (160nm), which fits vertically between M1 and M2. This placement maximizes eMRAM circuit performance by eliminating stacked BEOL parasitics, and reduces chip size and cost by clearing upper wiring tracks for logic, and reducing the total number of levels to wire large arrays (these may need n+3 Cu levels for MTJs placed on level Mn, hence the advantage of n=1). We demonstrate read and write functionality, including write performance down to 4ns, and show that the eMRAM process module can be added while maintaining the logic BEOL reliability requirements,” Edelstein said.
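The wiring-level arithmetic in the quote above is simple but worth making explicit: if an MTJ sits on metal level Mn, wiring large arrays may require n+3 copper levels, so the lowest possible placement (n=1, between M1 and M2) minimizes the total level count:

```python
# Illustration of the wiring-level arithmetic quoted above:
# an MTJ placed on metal level Mn may need n+3 Cu levels to
# wire large arrays, so n=1 (between M1 and M2) is cheapest.

def cu_levels_needed(n: int) -> int:
    """Cu levels required for MTJs placed on metal level Mn."""
    return n + 3

for n in (1, 2, 3):
    print(f"MTJ on M{n}: {cu_levels_needed(n)} Cu levels")
```

Each level the MTJ moves up the stack adds one more copper level to the total, which is why IBM’s M1/M2 placement reduces chip size and cost.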

“Several unit process innovations enabled this integration, including a novel sub-lithographic microstud (μ-stud) bottom electrode (BEL), fine profile control of the MTJ patterning and dielectric films, optimized BEL/MTJ metallization, and optimized post-MTJ low-k planarization across array and logic areas,” he said.

Non-ideal ReRAM
CEA-Leti has demonstrated a machine learning technique exploiting the “non-ideal” traits of resistive RAM (ReRAM).

Researchers have overcome several barriers to developing ReRAM-based devices for edge computing.

A subset of AI, machine learning typically utilizes a neural network in a system. The network crunches data, identifies patterns, and learns which attributes are important.

ReRAM, meanwhile, is another next-generation memory type. It has lower read latencies and faster write performance than today’s flash memory. In ReRAM, a voltage applied to a material stack changes the stack’s resistance, and that resistance change records data in the memory.
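The switching behavior can be sketched as a simple state machine. The threshold voltages and resistance values below are hypothetical, chosen only to illustrate bipolar SET/RESET switching; real devices are analog and exhibit the cycle-to-cycle variability the CEA-Leti work exploits:

```python
# Minimal behavioral sketch of a bipolar ReRAM cell.
# All thresholds and resistance values are illustrative.

class ReRAMCell:
    R_HIGH = 100_000  # ohms, high-resistance state (logical 0)
    R_LOW = 1_000     # ohms, low-resistance state (logical 1)
    V_SET = 1.5       # volts; above this, a conductive filament forms
    V_RESET = -1.5    # volts; below this, the filament ruptures

    def __init__(self):
        self.resistance = self.R_HIGH

    def apply_voltage(self, v: float) -> None:
        if v >= self.V_SET:
            self.resistance = self.R_LOW    # SET: write a 1
        elif v <= self.V_RESET:
            self.resistance = self.R_HIGH   # RESET: write a 0
        # small (read) voltages leave the state untouched

    def read(self) -> int:
        return 1 if self.resistance == self.R_LOW else 0

cell = ReRAMCell()
cell.apply_voltage(2.0)   # SET
print(cell.read())        # 1
cell.apply_voltage(0.2)   # sub-threshold read voltage: state retained
print(cell.read())        # 1
cell.apply_voltage(-2.0)  # RESET
print(cell.read())        # 0
```

Because the state persists with no applied voltage, the cell is non-volatile, unlike DRAM’s leaky capacitor.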

ReRAM, however, is difficult to develop, and only a few vendors have shipped parts. There are other issues as well. “Current approaches typically use learning algorithms that cannot be reconciled with the intrinsic non-idealities of resistive memory, particularly cycle-to-cycle variability,” said Thomas Dalgaty of CEA-Leti in Nature Electronics, a technology journal.

“Here, we report a machine learning scheme that exploits memristor variability to implement Markov chain Monte Carlo sampling in a fabricated array of 16,384 devices configured as a Bayesian machine learning model,” Dalgaty said. “Our approach demonstrates robustness to device degradation at ten million endurance cycles, and, based on circuit and system-level simulations, the total energy required to train the models is estimated to be on the order of microjoules, which is notably lower than in complementary metal–oxide–semiconductor (CMOS)-based approaches.”

(From Mark LaPedus)
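The core idea—turning cycle-to-cycle variability into a sampling resource—can be sketched in software. Below, a hypothetical noisy "device read" supplies the random perturbations for a Metropolis-Hastings walk over a toy one-dimensional posterior. This is a conceptual sketch, not CEA-Leti’s actual circuit or algorithm; all parameters are illustrative:

```python
import math
import random

# Hypothetical model: the cycle-to-cycle variability of a programmed
# memristor behaves like a Gaussian noise source, which we use as the
# proposal distribution in a Metropolis-Hastings (MCMC) sampler.

def noisy_device_read(target: float, sigma: float = 0.5) -> float:
    """Stand-in for a memristor program/read whose outcome varies cycle to cycle."""
    return random.gauss(target, sigma)

def log_posterior(x: float) -> float:
    """Toy target distribution: a standard normal."""
    return -0.5 * x * x

def metropolis(n_samples: int, seed: int = 0) -> list:
    random.seed(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = noisy_device_read(x)  # device variability supplies the randomness
        # Accept or reject via the standard Metropolis criterion.
        if math.log(random.random()) < log_posterior(proposal) - log_posterior(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(20_000)
mean = sum(samples) / len(samples)
print(f"sample mean ~ {mean:.2f}")  # should land near 0 for a standard normal
```

In a CMOS implementation, the Gaussian proposals would require a dedicated random-number generator; here the "non-ideal" physics of the device provides the randomness for free, which is part of why the reported training energy is so low.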
