March 11, 2021

More Data, More Memory-Scaling Problems

Memories of all types are facing pressures as demands grow for greater capacity, lower cost, faster speeds, and lower power to handle the onslaught of new data being generated daily. Whether it’s well-established memory types or novel approaches, continued work is required to keep scaling moving forward as our need for memory grows at an accelerating pace.

“Data is the new economy of this world,” said Naga Chandrasekaran, senior vice president of technology development at Micron in a plenary presentation at the recent IEDM conference.

Chandrasekaran gave some examples that illustrate the explosion in data. For health care alone, the industry generated 153 exabytes of data in 2013, a number that likely grew by 15 times in 2020. There also are 10 billion mobile devices in use, each of which will generate, store, share and stream new data sets. On a global scale, the total amount of data being generated each day is somewhere on the order of 2.5 quintillion bytes, and the number is rising quickly.

This wave of data was a big driver behind the chip industry’s growth in 2020. At SEMI’s Industry Strategy Symposium this week, analysts pointed to that as one of the big surprises in continued chip industry growth, despite expectations that numbers would tank due to the pandemic.

“Memory was an important element,” said Mario Morales, program vice president for enabling technologies and semiconductors at IDC. “Memory grew 10.8%. But NAND grew more than 30%.”

All of this data requires memory throughout its lifecycle, and the IEDM presentation laid out three primary concerns for three categories of memory: DRAM, NAND flash, and emerging technologies.

DRAM scaling challenges
DRAM remains a key component of most solutions. It is proven, cheap, and generally reliable. But it’s also far from perfect. The three issues highlighted at IEDM deal with row hammer, sense margin, and the gate stack.

“On the DRAM device side with continued lateral scaling, we are facing challenges with row hammer, which is a widely known phenomenon where, when a word line is continually addressed [that is, it gets hammered], charge tends to accumulate in trap sites at the interface,” said Micron’s Chandrasekaran. “Later, when these charges are released, due to drift diffusion, they migrate to neighboring bits and result in charge gain. This can cause a data-loss mechanism and can be a security challenge.”

The drifting charges slowly disturb the contents of neighboring cells – a little with each access. After enough accesses in quick succession, the victim cells can lose their state before the next refresh cycle.
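To make the mechanism concrete, the toy Python model below treats each activation of an aggressor row as adding a fixed amount of disturbance to its two physical neighbors; a victim “flips” if the accumulated disturbance crosses a threshold before a refresh clears it. Every constant here is an illustrative assumption, not a characteristic of any real DRAM part.

```python
# Toy model of the row-hammer disturb mechanism. All constants are
# illustrative assumptions, not real DRAM characteristics.

DISTURB_PER_ACT = 1.0      # disturbance units added to each neighbor per activation
FLIP_THRESHOLD = 50_000.0  # accumulated disturbance at which a victim cell flips

def hammer(aggressor_row: int, activations: int, disturb: dict) -> list:
    """Accumulate disturbance on the aggressor's neighbors and return
    any victim rows that would flip before the next refresh."""
    flipped = []
    for victim in (aggressor_row - 1, aggressor_row + 1):
        disturb[victim] = disturb.get(victim, 0.0) + activations * DISTURB_PER_ACT
        if disturb[victim] >= FLIP_THRESHOLD:
            flipped.append(victim)
    return flipped

def refresh(disturb: dict) -> None:
    """Periodic refresh re-establishes every cell's charge."""
    disturb.clear()

disturb = {}
print(hammer(100, 10_000, disturb))  # [] -- not yet enough activations
print(hammer(100, 45_000, disturb))  # [99, 101] -- victims flip before refresh
```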

Wendy Elsasser, distinguished engineer at Arm, agreed. “Row hammer remains a significant security concern, and it has been documented in multiple papers about how bits can flip to gain access into secure regions of memory,” she said.

This is not a new problem, but the basic issue is getting worse with each generation. “As we scale DRAM with planar scaling, the neighboring cell effect can become a near-neighbor cell effect, and more cells tend to get impacted,” said Chandrasekaran. “And this problem is just getting worse as we continue to scale thinner DRAMs.”

Because this has been a challenging problem to eliminate outright, solutions have focused on control – either issuing early refresh to re-establish any weakened cells or preventing further access after a limit has been reached. JEDEC has added some modes and commands, focusing on both the DRAM chip and the DRAM controller, but those are mitigations, not a solution to the root-cause issue.

Logic can be added to the DRAM itself to detect possible attacks, and memory IP creators have been working to build in stronger protections. “We spend hardware logic to detect such accesses, and then we proactively limit access to those rows,” noted Vadhiraj Sankaranarayanan, senior technical marketing manager at Synopsys. “But it is not that performance-effective. An alternative would be to proactively refresh the rows adjacent to those rows that get hammered.”

For performance and power reasons, some of the responsibility for detecting attacks has been put in the controller. “There are a variety of techniques that can be employed in the controller, because the controller is the one that orchestrates the traffic going onto the channel,” Sankaranarayanan added.
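As a minimal sketch of the counter-based approach described above – with a hypothetical threshold and interface, not any real controller’s design or the actual JEDEC commands – a per-row activation counter can trigger an early refresh of the neighbors:

```python
# Minimal sketch of a controller-side row-hammer mitigation: count
# activations per row and proactively refresh the neighbors of any row
# that crosses a threshold. The threshold and the counter-reset policy
# are illustrative, not taken from any real controller or JEDEC spec.

from collections import Counter

HAMMER_LIMIT = 4_096  # hypothetical activations allowed per refresh window

class RowHammerMitigator:
    def __init__(self):
        self.activations = Counter()

    def on_activate(self, row: int) -> list:
        """Called for every ACT command; returns any neighbor rows the
        controller should refresh early."""
        self.activations[row] += 1
        if self.activations[row] >= HAMMER_LIMIT:
            self.activations[row] = 0  # restart the count for this row
            return [row - 1, row + 1]  # refresh the adjacent rows
        return []

    def on_refresh_window(self):
        """Normal periodic refresh re-establishes all cells."""
        self.activations.clear()
```

Real parts track activations with far more elaborate structures and budgets; this only shows the shape of the idea.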

As to the root cause, cell-improvement engineering continues, but ever-narrower cells make this an ongoing challenge – especially when coupled with the need to keep die sizes reasonable and to minimize any additional processing or materials costs.

The next challenge when scaling DRAM involves narrowing sense-amplifier margin. “Sense margin will reduce when cell capacitance decreases, driving us to increase the aspect ratio and introduce new materials,” said Chandrasekaran. “But even with the most ideal dielectric material – an air gap – the bit-line resistance/capacitance characteristics will be challenged as we scale, because there’s almost no space between two bit lines. And this limits what dielectric materials we can put in and eventually challenges our sense margin.”
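The constraint Chandrasekaran describes follows from the standard charge-sharing relation for DRAM sensing, where the signal the sense amplifier sees is roughly dV = (VDD/2) * C_cell / (C_cell + C_bitline). The quick calculation below uses illustrative capacitance values (not Micron’s figures) to show how shrinking cell capacitance erodes margin:

```python
# Back-of-the-envelope DRAM sense margin from the standard charge-sharing
# relation: dV = (VDD/2) * C_cell / (C_cell + C_bitline).
# All values are illustrative, not from any real process.

def sense_margin_mV(vdd: float, c_cell_fF: float, c_bl_fF: float) -> float:
    return 1000 * (vdd / 2) * c_cell_fF / (c_cell_fF + c_bl_fF)

VDD, C_BL = 1.1, 30.0  # assumed supply (V) and bit-line capacitance (fF)
for c_cell in (20.0, 10.0, 5.0):  # shrinking cell capacitance, fF
    print(f"C_cell={c_cell:4.1f} fF -> margin ~{sense_margin_mV(VDD, c_cell, C_BL):5.1f} mV")
# C_cell=20.0 fF -> margin ~220.0 mV
# C_cell=10.0 fF -> margin ~137.5 mV
# C_cell= 5.0 fF -> margin ~ 78.6 mV
```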

In addition, smaller transistors are leading indirectly to reduced sense margin. “As the transistor area of sense amplifiers gets reduced so that we can get better array efficiency, the threshold-voltage variation will increase,” he said. This is a particular challenge for analog circuits, and it will require continued work for continued scaling.

Scaling with DRAM’s traditional low-cost gate stack is also running into power and performance issues. “A high-performance CMOS polycrystalline-silicon gate with silicon oxynitride gate-oxide technology has been the mainstream in the DRAM industry for decades,” said Chandrasekaran. “It’s well known, and it’s a very good cost solution. However, it is facing several challenges in meeting required EOT (equivalent oxide thickness) scaling to meet power and performance.”

An alternative is a high-K gate oxide with metal-gate CMOS. Both technologies have long been common in logic processes and are an attractive option for memory CMOS scaling, promising better drive, lower variation, and better transistor matching.

But it’s not just a simple matter of switching processes. Adoption of this technology in memory will require careful device engineering to enable periphery and edge devices and have good compatibility with array integration. And all of this needs to happen while keeping DRAM’s coveted affordability.
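For context on the EOT point, equivalent oxide thickness rescales a gate dielectric’s physical thickness by the ratio of permittivities: EOT = t_phys * (k_SiO2 / k_film). The sketch below uses the textbook SiO2 permittivity and an assumed high-K value of ~20 to show why a physically thicker (and therefore lower-leakage) high-K film can still deliver a smaller electrical thickness:

```python
# Equivalent oxide thickness: EOT = t_phys * (k_SiO2 / k_film).
# The permittivities are common textbook values; the film thicknesses
# are illustrative, not from any DRAM process.

K_SIO2 = 3.9

def eot_nm(t_phys_nm: float, k_film: float) -> float:
    return t_phys_nm * K_SIO2 / k_film

print(eot_nm(2.0, 3.9))   # 2.0 nm of SiO2        -> 2.0 nm EOT
print(eot_nm(4.0, 20.0))  # 4.0 nm of k~20 high-K -> ~0.78 nm EOT
```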

3D flash scaling challenges
The move from planar to 3D stacked NAND flash memory has, for the time being, alleviated the issue of having too few stored electrons by increasing the cell size in the new orientation. But as the number of layers increases — already in the hundreds — string current, integrated CMOS transistors, and physical robustness will need attention.

String current is flagging as the string becomes longer. “Increasing the vertical scaling will definitely challenge string current and make sensing operation more difficult,” said Chandrasekaran. The string current must travel all the way down through the layers and then back up again. The more layers, the longer and more resistive this path is, lowering the current.
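A rough way to see the trend: if each layer in the string contributes about the same series resistance, total string resistance grows linearly with layer count and the read current drops accordingly. The toy estimate below uses assumed values only; real strings are far more complex (polysilicon grain effects, select gates, the back-and-forth current path, and so on):

```python
# Toy estimate of 3D NAND string current vs. layer count, assuming each
# cell in the string adds a fixed series resistance. Illustrative only.

R_PER_CELL_KOHM = 50.0  # assumed series resistance per cell, kOhm
V_READ = 1.0            # assumed voltage across the string, volts

for layers in (64, 128, 256):
    r_string = layers * R_PER_CELL_KOHM * 1e3  # total string resistance, ohms
    i_string_nA = V_READ / r_string * 1e9      # resulting current, nanoamps
    print(f"{layers:3d} layers -> ~{i_string_nA:3.0f} nA string current")
# prints roughly 312, 156, and 78 nA for 64, 128, and 256 layers
```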

A particular challenge is the fact that the channel material is polysilicon, with reduced mobility and a strong dependence on the grain size and trap density. “Controlling the grain size in these high-aspect ratio structures is a big challenge. So new ways of deposition and treatment are required,” said Chandrasekaran.

Alternatively, new materials may help keep string current intact. “There are several new materials that are also being considered as alternative channel materials, which will probably improve the string current,” he said. “But they also provide new challenges in terms of reliability mechanisms and the cell characteristics itself.”

Further scaling of the row pitch (which runs vertically in 3D NAND) also can help, but it shrinks the cell, moving back toward storing too few electrons. If the word-line pitch continues to scale, the larger-cell advantage of 3D NAND will eventually be lost. “In the long run, you won’t have enough space for the cell, and we will face the same challenges as planar NAND with few-electron effects,” he said.
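The few-electron concern can be roughly quantified: the number of electrons separating two adjacent threshold-voltage states is about N = C_gate * dVt / q. The numbers below are illustrative assumptions, not real cell parameters, but they show why shrinking the cell revives the problem that 3D stacking temporarily solved:

```python
# Roughly how many electrons separate two adjacent threshold-voltage
# states? N = C_gate * dVt / q. All values are illustrative assumptions,
# not real NAND cell parameters.

Q_E = 1.602e-19  # elementary charge, coulombs

def electrons_per_state(c_gate_aF: float, delta_vt_mV: float) -> float:
    return (c_gate_aF * 1e-18) * (delta_vt_mV * 1e-3) / Q_E

print(electrons_per_state(100.0, 500.0))  # larger 3D cell:   ~312 electrons
print(electrons_per_state(10.0, 500.0))   # scaled-down cell: ~31 electrons
```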

Meanwhile, the peripheral circuitry needs to transition to more advanced CMOS processing to keep up with required power and performance. This echoes the move to high-K metal gates in DRAM – and it brings the same need for careful device engineering to meet the requirements of both the memory cells and the logic.

And finally, as more layers are added, it becomes a challenge to keep the die thin enough for low-profile applications like cell phones – while maintaining enough bulk silicon for robust handling. “Over the next several generations, in order to meet the form-factor and package requirements for mobile solutions, the thickness of the active devices on top of the silicon will be higher than the silicon thickness itself,” said Chandrasekaran. “It creates new back-end handling challenges, and wafer warpage becomes a big issue. Die strength and handling of wafers is going to be a new challenge that drives our back-end equipment technology development.”

Emerging memory challenges
Numerous technologies are vying to be the next major non-volatile memory. These include phase-change memory (PCRAM), resistive RAM (RRAM/ReRAM), magnetoresistive RAM (MRAM), and, earlier in the development process, ferroelectric RAM (FeRAM) and correlated-electron RAM (CERAM). While PCRAM has reached production in Intel’s cross-point memories, and STT-MRAM is seeing increased integration, none of these technologies can yet claim the mantle of the next big thing. The main challenges largely relate to reliability and the use of new materials.

MRAM is one of the more hopeful entrants in this race. “MRAM is a type of memory that uses magnetic states of materials to store information, which is very different from charge-based memories such as DRAM and flash,” explained Meng Zhu, product marketing manager at KLA. While that may sound simple, MRAMs are also more difficult to build than existing memories due to thin layers and the different materials used in those layers.

Likewise, PCRAM relies on chalcogenides for its cell. RRAMs depend on a thin insulating material. And FeRAM needs materials that can switch into a ferroelectric state. CERAM is early enough in development that its composition isn’t yet well established, but new materials and delicate assembly are likely.

The question for all of these new memory types is how they will hold up over time and over millions of read/write operations. “Many of the leading emerging memory solutions face new reliability-mechanism challenges that need to be understood,” said Chandrasekaran.

MRAM, being farther along than some of the other technologies, provides a good example of the kinds of details that matter. “The main breakdown mechanism for MRAM is the wear-out of its thin MgO barrier,” Zhu said. “When the barrier has defects, such as pinholes or material weak points, the resistance of the junction can gradually decrease over time and can also lead to a sudden drop in resistance (breakdown).”

The other memory types have yet to identify and manage their own reliability mechanisms. Questions of endurance and data retention persist, and the evolution of cell resistance over time is of critical importance – especially when cells are considered for use in analog memories for applications like in-memory computing for machine learning.

To add to the challenges, many of these novel memory cells are sensitive to temperature, and their materials may not interact well with some of the well-established gases and other chemicals traditionally used in the semiconductor process.

“Most of the materials used in these advanced memory solutions are temperature- and chemical-sensitive,” said Chandrasekaran. “This requires introduction of low-temperature processing and ambient control in our fabs, and it also limits the use of well-known gases and chemicals because they tend to react with the cell materials and affect their performance. Such limitations will not only make it difficult to process these materials, but also add more cost.” Defining a flow that both uses lower temperatures and prevents chemical cell degradation will be necessary for these memories to enter the mainstream.

While the list of challenges presented at IEDM is by no means exhaustive, it gives the industry a collection of difficult improvements that must be made to keep scaling at a pace that matches evolving system requirements. More data requires more processing and more memory, and there are many ways to address the issue. But no single approach will solve every problem, and as more data is generated and more types of memory are introduced, there will be additional problems that haven’t even been discovered yet.

By Bryon Moyer
