March 11, 2021
The semiconductor industry is stepping up its efforts in advanced packaging, an approach that is becoming more widespread with new and complex chip designs.
Foundries, OSATs and others are rolling out the next wave of advanced packaging technologies, such as 2.5D/3D, chiplets and fan-out, and they are developing more exotic packaging technologies that promise to boost performance, reduce power, and shorten time to market. Each package type is different, with various tradeoffs. As before, the idea behind advanced packaging is to assemble complex dies in a package, creating a system-level design. But advanced packaging faces some technical and cost challenges.
Advanced packaging isn’t new. For years, the industry has been assembling dies in a package. But advanced packages typically have been used for higher-end applications due to cost.
Today, though, advanced packaging is becoming a more viable option to develop a complex chip design for several reasons. Typically, to advance a design, the industry develops a system-on-a-chip (SoC) using chip scaling to fit different functions onto a single monolithic die. But scaling is becoming more difficult and expensive at each node, and not everything benefits from scaling.
Case in point: Intel, a long-time proponent of chip scaling, encountered several delays with its 10nm process due to various manufacturing glitches. Intel is now ramping up its 10nm designs, but it recently delayed 7nm amid yield issues. While the company vows it will fix the problem and continue with its chip scaling, it also is hedging its bets by stepping up its packaging efforts.
Samsung and TSMC, the two other leading-edge chipmakers, are moving ahead with chip scaling at 5nm and beyond. But Samsung and TSMC, as well as other foundries, also are expanding their packaging efforts. And the OSATs, which provide third-party packaging services, continue to develop new advanced packages.
Advanced packaging won’t solve every problem in chip design. Chip scaling still remains an option. What’s changing, though, is that new package technologies are becoming more competitive.
“Packaging is really the next phase to accomplish what is needed when the preference to shrink the node is no longer the clear option,” said Kim Yess, executive director of WLP materials at Brewer Science. “Creative architectures can enable mature high-volume manufacturing of active and passive devices to be packaged in such a way that the performance outcome is more robust and has a lower cost-of-ownership.”
No one package type can meet all needs. “The choice is dependent on the application, which dictates what the packaging architecture is going to look like. It’s all about what you want the performance to be and the form factor that you need for the end device,” Yess said.
So, vendors are developing several types. Here are some of the latest technologies:
ASE and TSMC are developing fan-out with silicon bridges. Fan-out is used to integrate dies in a package, and bridges provide the connections from one die to another.
TSMC is developing silicon bridges for 2.5D, a high-end die stacking technology.
Several companies are developing chiplets, a way to integrate dies and connect them in a package. Intel and others are developing new die-to-die interconnect specs for chiplets.
The Optical Internetworking Forum (OIF) is developing new die-to-die specs for chiplets, enabling new communications designs.
For decades, chipmakers introduced a new process technology with more transistor density every 18 to 24 months. At this cadence, vendors introduced new chips based on that process, enabling new electronic products with greater value.
But it’s becoming more difficult to maintain this formula at advanced nodes. Chips have become more complex with smaller features, and IC design and manufacturing costs have skyrocketed. At the same time, the cadence for a fully scaled node has extended from 18 months to 2.5 years or longer.
“If you compare 45nm to 5nm, which is happening today, we see a 5X increase in wafer cost. That’s due to the number of processing steps required to make that device,” said Ben Rathsack, vice president and deputy general manager at TEL America.
Because of soaring design costs, fewer vendors can afford to develop leading-edge devices. Many chips don’t require advanced nodes.
But many designs still require advanced processes. “If you have been following Moore’s Law, you would think that scaling or innovation are stopping. Honestly, that’s not true. The amount of devices and how they are propagating are growing at a strong rate,” Rathsack said.
Scaling remains an option for new designs, although many are searching for alternatives like advanced packaging. “The momentum is driving more customers in more applications to explore alternative solutions than large, single-die solutions on expensive bleeding-edge silicon,” said Walter Ng, vice president of business development at UMC. “We always will be moving in a direction of needing more complex functionality. That typically means larger chips. We’ve always managed that with the ability to migrate to the next technology node, which has come with the same challenges of cost and power. We are at the point now where that ability begins to no longer be feasible and alternative solutions are becoming a must. Advanced packaging solutions, coupled with innovative interconnect approaches, are providing some of those attractive alternatives. But we need to keep in mind that the chip economics involved will determine the ultimate implementation.”
For decades, packaging was an afterthought. It simply encapsulated a die. In the manufacturing flow, chipmakers process the chips on a wafer in the fab. The chips are then diced and assembled in simple conventional packages.
Conventional packages are mature and inexpensive, but they are limited in electrical performance and interconnect density. That’s where advanced packaging fits in. It enables higher performance with more I/Os in systems.
2.5D vs. fan-out
Several advanced packaging types are in the market, such as 2.5D/3D and fan-out. Both types are moving toward more functions and I/Os, supporting larger and more complex dies.
Fan-out is a wafer-level packaging technology, where dies are packaged in a wafer. In the packaging landscape, fan-out fits in the mid-range to high-end space. Amkor, ASE, JCET and TSMC sell fan-out packages.
In one example of fan-out, a DRAM die is stacked on a logic chip in a package. This brings the memory closer to the logic, enabling more bandwidth.
Fan-out packages consist of dies and redistribution layers (RDLs). RDLs are the copper metal interconnects that electrically connect one part of the package to another. RDLs are measured by line and space, which refer to the width of a metal trace and the spacing between adjacent traces.
Fan-out is split into two segments — standard and high density. Targeted for consumer and mobile applications, standard-density fan-out is defined as a package with fewer than 500 I/Os and RDLs greater than 8μm line and space. Geared for high-end apps, high-density fan-out has more than 500 I/Os with RDLs less than 8μm line and space.
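The split between standard- and high-density fan-out can be expressed as a simple rule of thumb. The sketch below is a hypothetical helper (not from any vendor's tooling); the 500-I/O and 8μm thresholds come from the definitions above:

```python
def classify_fanout(io_count: int, rdl_line_space_um: float) -> str:
    """Rough fan-out classification using the thresholds cited above:
    standard density: fewer than 500 I/Os, RDLs greater than 8um line/space;
    high density: 500+ I/Os, RDLs of 8um line/space or less."""
    if io_count < 500 and rdl_line_space_um > 8.0:
        return "standard-density"
    if io_count >= 500 and rdl_line_space_um <= 8.0:
        return "high-density"
    return "mixed/indeterminate"

print(classify_fanout(300, 10.0))   # standard-density
print(classify_fanout(1200, 2.0))   # high-density
```

The boundary handling (exactly 500 I/Os or exactly 8μm) is an arbitrary choice here; the article's definitions leave those edge cases open.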
At the high-end, vendors are developing fan-out with RDLs at 2μm line/space and beyond. “To keep up with today’s bandwidth and I/O requirements, RDL linewidths and pitch requirements are increasingly shrinking, and are being processed similarly to BEOL connections using copper damascene processing to enable smaller linewidths,” said Sandy Wen, a process integration engineer at Coventor, a Lam Research Company, in a blog.
To make fan-out packages, dies are embedded in a wafer-like structure using an epoxy mold compound. Then the RDLs are formed, and the reconstituted wafer is diced into individual packages.
Fan-out has some challenges. When the dies are placed in the compound, they can move during the process. This effect, called die shift, can impact yield.
At one time, fan-out was limited in I/O count. Now, high-density fan-out is moving toward higher I/O counts and invading the high-end territory held by 2.5D.
2.5D is a high-end die stacking package technology. Fan-out won’t displace 2.5D. But fan-out is less expensive because, unlike 2.5D, it doesn’t require an interposer.
Nonetheless, high-density fan-out is supporting more and larger chips, which require bigger packages. Typically, the packaging community uses the term “reticle” here. Used in chip production, a reticle or mask is a master template of an IC design. A reticle can accommodate die sizes up to roughly 858mm². If the die is larger, a chipmaker will process a chip on more than one reticle.
For example, a large chip may require two reticles (2X reticle size). Then, in the production flow, the two reticle exposures are patterned separately and stitched together, which is an expensive process.
TSMC, meanwhile, is shipping fan-out packages with a 1.5X reticle size. “We target to bring a 1.7X reticle size into production in Q4 this year,” said Douglas Yu, vice president of integrated interconnect & packaging at TSMC. “A 2.5X reticle will be qualified by Q1 ’21.”
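As a rough illustration, the reticle multiples mentioned above translate into package body areas as follows (assuming the ~858mm² maximum reticle field cited earlier; actual package dimensions depend on the vendor's design rules):

```python
RETICLE_AREA_MM2 = 858  # approximate maximum reticle field, per the text

# Reticle multiples mentioned in the article and their approximate areas
for multiple in (1.5, 1.7, 2.5):
    area = multiple * RETICLE_AREA_MM2
    print(f"{multiple}X reticle ≈ {area:.0f} mm²")
```

So a 2.5X-reticle fan-out package offers roughly 2,145mm² of area for dies, bridges, and memory stacks.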
Larger fan-out packages give customers some new options. Let’s say you want a package with high bandwidth memory (HBM). In HBM, DRAM dies are stacked on top of each other, enabling more bandwidth in systems.
HBM is mainly found in high-end and expensive 2.5D packages. Now, with larger package sizes, ASE and TSMC are developing less-expensive fan-out packages that support HBM.
There are other new options. ASE and TSMC are developing fan-out with silicon bridges. Intel was the first company to develop silicon bridges. Found in high-end packages, a bridge is a tiny piece of silicon that connects one die to another in a package. Bridges are positioned as a cheaper alternative than 2.5D interposers.
Bridges promise to bring new functionality to fan-out. For example, TSMC’s traditional fan-out features a 40μm pitch with 3 RDL layers at 2μm-2μm line/space. “(TSMC’s silicon bridge) technology can reduce the local pitch to 25μm to save chip area. An RDL line and space at 0.4μm and 0.4μm provides a much higher interconnect density,” Yu said.
2.5D, meanwhile, isn’t going away. Some are developing huge device architectures with more I/Os. For now, 2.5D is the only option here.
In 2.5D, dies are stacked on top of an interposer, which incorporates through-silicon vias (TSVs). The interposer acts as the bridge between the chips and a board, which provides more I/Os and bandwidth.
In one example, a vendor could incorporate an FPGA with four HBM cubes. In one cube alone, Samsung’s latest HBM2E technology stacks eight 10nm-class 16-gigabit DRAM dies on each other. The dies are connected using 40,000 TSVs, enabling data transfer speeds of 3.2Gbps per pin.
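From those figures, per-cube capacity and peak bandwidth can be back-calculated. A minimal sketch, assuming the standard 1,024-bit HBM interface width (which the article does not state explicitly):

```python
dies_per_cube = 8
die_capacity_gbit = 16
pin_speed_gbps = 3.2            # per-pin data rate cited above
interface_width_bits = 1024     # standard HBM interface width (assumption)

capacity_gbyte = dies_per_cube * die_capacity_gbit / 8          # 16 GB per cube
bandwidth_gbyte_s = pin_speed_gbps * interface_width_bits / 8   # 409.6 GB/s

print(f"Capacity per cube: {capacity_gbyte:.0f} GB")
print(f"Peak bandwidth per cube: {bandwidth_gbyte_s:.1f} GB/s")
```

That works out to 16GB and roughly 410GB/s per cube, which is why stacking four cubes next to an FPGA or ASIC is attractive for bandwidth-hungry workloads.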
Like fan-out, 2.5D is also expanding. For example, TSMC is developing a silicon bridge for 2.5D, which gives customers more options. TSMC is readying a 1.5X reticle version (4 HBMs) with a 3.0X reticle size (8 HBMs) in R&D.
All told, 2.5D remains the option for the high end, but fan-out is closing the gap. So how does fan-out stack up against 2.5D? In a paper, ASE — which calls its fan-out technology FOCoS — compared its two fan-out package types (chip-first and chip-last) versus 2.5D. Each package consists of an ASIC and HBM. The goal was to compare the warpage, low-k dielectric stress, interposer/RDL stress, joint reliability and thermal performance.
“The warpage of the two FOCoS package types are lower than 2.5D due to a smaller CTE mismatch between the combo die and stack-up substrate,” said ASE’s Wei-Hong Lai in the paper. “The (low-k) stress of FOCoS for both chip-first and chip-last are lower than 2.5D.”
The interconnection copper for 2.5D had lower stress than fan-out. “2.5D, chip-first FOCoS and chip-last FOCoS have similar thermal performance, and all of them are good enough for high-power applications,” Lai said.
More options—chiplets, SiPs
Besides 2.5D and fan-out, customers also could develop a custom advanced package. Options include 3D-ICs, chiplets, multi-chip modules (MCMs) and system-in-package (SiP). Technically, these aren’t package types. They’re architectures or methodologies used to develop a custom package.
An SiP is a custom package or module that consists of a functional electronic system or subsystem, according to ASE. An SiP involves an assortment of technologies in a toolbox, which may include different devices, passives, and interconnect schemes, among other things. Selecting from these options, a customer can develop a custom SiP package to match its requirements.
Chiplets are another option. With chiplets, a chipmaker may have a menu of modular dies, or chiplets, in a library. Chiplets could have different functions at various nodes. Customers can mix-and-match the chiplets and connect them using a die-to-die interconnect scheme.
Potentially, chiplets could solve a major problem. At advanced nodes, a monolithic die is large and expensive. With chiplets, customers can break up the larger die into smaller pieces, thereby reducing cost and boosting yields. “We like to say that a chiplet is disaggregating a monolithic die into parts and then fabricating the parts, but they still function as a single die,” said Jan Vardaman, president of TechSearch International.
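The yield argument for disaggregation can be sketched with a textbook Poisson defect model (not from the article; the defect density and die area below are illustrative values):

```python
import math

def die_yield(area_cm2: float, defect_density_per_cm2: float) -> float:
    """Poisson yield model: Y = exp(-D * A)."""
    return math.exp(-defect_density_per_cm2 * area_cm2)

D = 0.2          # illustrative defect density, defects/cm²
big_area = 6.0   # one large monolithic die, cm²

mono = die_yield(big_area, D)
# Same silicon split into four chiplets; bad chiplets are
# discarded individually, so good silicon per wafer rises.
chiplet = die_yield(big_area / 4, D)

print(f"Monolithic die yield: {mono:.1%}")   # ~30%
print(f"Per-chiplet yield:    {chiplet:.1%}") # ~74%
```

The smaller each die, the less likely a random defect lands on it, which is the core economic case for breaking a large monolithic die into chiplets. The catch, as noted below, is that the assembled package then depends on every chiplet being a known good die.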
There are other benefits. “Ultimately, packaging technologies are about increasing density and decreasing power, allowing chiplets to be connected in a package with functionality that matches or exceeds the functionality of a monolithic SoC. The benefits to this approach include lower cost, greater flexibility and a quicker time to market,” said Ramune Nagisetty, director of process and product integration at Intel, in a recent presentation.
Using the chiplet approach, vendors could develop 3D-ICs or MCMs. MCMs integrate dies and connect them in a module. A 3D-IC could come in several forms. It might involve stacking logic on memory or logic on logic in a package.
Intel, for one, has developed various chiplet-like architectures. The company has the pieces in-house to develop these architectures, including its own IP blocks, silicon bridges and a die-to-die interconnect technology.
Fig. 1: 2.5D and 3D technologies using Intel’s bridge and Foveros technologies. Source: Intel
The die-to-die interconnect is critical. It joins one die to another in a package. Each die consists of an IP block with a physical interface. One die with a common interface can communicate to another die via a short-reach wire.
The industry is developing several die-to-die interface technologies—Advanced Interface Bus (AIB), Bunch of Wires (BoW), CEI-112G-XSR and OpenHBI.
The Open Domain-Specific Architecture (ODSA) group is developing two of these interfaces—BoW and OpenHBI. OpenHBI is a die-to-die interconnect technology derived from the HBM standard. BoW supports various packages. Both are in R&D.
Intel’s die-to-die technology is called AIB. Intel also is developing AIB-compliant chiplets or tiles. The company has developed 10 tiles with 10 more in the works, such as transceivers, data converters, silicon photonics and machine learning accelerators.
While Intel continues to put the pieces in place to develop chiplets, other device makers also could obtain AIB technology and develop similar architectures using their own or third-party IP.
Intel has access to AIB for its internal products. AIB is also offered as an open-source, royalty-free technology for third parties on the CHIPS Alliance Website.
A new version of AIB is in the works. The CHIPS Alliance, an industry consortium, recently released the AIB version 2.0 draft specification. AIB 2.0 has more than six times the edge bandwidth density than AIB 1.0.
For most companies, though, it’s a major challenge to develop chiplet-like architectures. The ability to obtain interoperable and tested chiplets from different vendors is still an unproven model.
Some companies are addressing this. For example, Blue Cheetah Analog Design is developing a generator for AIB. The generator enables sign-off ready AIB custom blocks across various processes. “By producing custom blocks at push-button speeds, Blue Cheetah’s generators reduce time-to-market and engineering effort required to produce tape-out ready IP,” said Krishna Settaluri, CEO of Blue Cheetah.
That doesn’t solve all problems. For one thing, chiplets require known good dies. If one or more dies are faulty in the stack, the whole package may fail. So vendors require a sound manufacturing strategy with good process control.
“As advanced packaging processes have become increasingly complex with smaller features, the need for effective process control continues to grow,” said Tim Skunes, vice president of R&D at CyberOptics. “The cost of failure is high given these processes use expensive known good die.”
For advanced packages, vendors use existing interconnect schemes. In packages, the dies are stacked and connected using copper microbumps and pillars. Bumps/pillars provide small, fast electrical connections between different devices.
The most advanced microbumps/pillars are tiny structures with pitches of 40μm down to 36μm. The bumps/pillars are developed using various equipment. Then, the dies are stacked and bonded using a wafer bonder.
For this, the industry uses thermal compression bonding (TCB). A TCB bonder picks up a die and aligns the bumps to those from another die.
TCB is a slow process. Plus, bumps/pillars are approaching their physical limit, somewhere around 20μm pitches.
That’s where a new technology called hybrid bonding fits in. Still in R&D for packaging, hybrid bonding stacks and bonds dies using copper-to-copper interconnects. It provides more bandwidth with lower power than the existing methods of stacking and bonding.
Foundries are developing hybrid bonding for advanced packaging. TSMC, for one, is working on a technology called System on Integrated Chip (SoIC). Using hybrid bonding, TSMC’s SoIC enables 3D-like chiplet architectures at sub-10μm pitches.
Recently, TSMC disclosed its SoIC roadmap. By year’s end, SoIC will launch with 9μm bond pitches, followed by 6μm in mid-2021 and 4.5μm in early-2023.
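Because bond-pad density scales with the inverse square of pitch, each step on that roadmap roughly doubles the available interconnect density. A rough estimate, assuming pads on a square grid (a simplification; real pad layouts vary):

```python
# SoIC bond pitches from the roadmap above, in microns
pitches_um = {"end of 2020": 9.0, "mid-2021": 6.0, "early-2023": 4.5}

# Pad density (pads/mm²) on a square grid scales as 1/pitch²
for when, pitch in pitches_um.items():
    density = (1000 / pitch) ** 2
    print(f"{when}: {pitch}um pitch ≈ {density:,.0f} pads/mm²")
```

By this estimate, moving from 9μm to 4.5μm quadruples the pad density, from roughly 12,000 to 49,000 pads/mm². For comparison, microbumps at their ~20μm practical limit would top out around 2,500 pads/mm², which is why hybrid bonding is attractive.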
Moving hybrid bonding from the lab to the fab isn’t a simple process. “The major process challenges of copper hybrid bonding include surface defect control to prevent voids, nanometer-level surface profile control to support robust hybrid bond pad contact, and controlling the alignment of copper pads on the top and bottom die,” said Stephen Hiebert, senior director of marketing at KLA.
Meanwhile, others also are developing chiplets. In the communications industry, for example, OEMs incorporate large Ethernet switch SoCs in systems. The SoC consists of an Ethernet switch die and a SerDes on the same chip.
“As we go to higher speeds, and as lithography goes to finer geometries, the analog and digital structures don’t scale the same,” said Nathan Tracy, a technologist and manager of industry standards at TE Connectivity. Tracy is also the president of the OIF.
“If you have a switch die, it has a digital portion. Then, you have SerDes, a serializer/deserializer that provides the I/O for the chip. That is an analog structure. It doesn’t scale well,” Tracy said.
As systems move towards faster data rates, the SerDes occupies too much space. So in some cases, the SerDes function is being separated from the larger die and broken into smaller dies or chiplets.
Then, all of the dies are being integrated in an MCM. The large switch chip sits in the middle, which is surrounded by four smaller I/O chiplets.
That’s where standards fit in. The OIF is developing a technology called CEI-112G-XSR. XSR connects chiplets and optical engines in MCMs.
Clearly, advanced packaging is a frenetic market with a growing number of new options.
That’s important for customers. Monolithic dies with chip scaling won’t go away. But it’s becoming harder and more expensive at each turn. (From Mark LaPedus)