top | item 33273063

What's different about next-gen transistors

128 points | rbanffy | 3 years ago | semiengineering.com

56 comments

[+] justinlloyd|3 years ago|reply
There's a lot coming down the pipe in terms of next-gen components in the SOI and 3D subthreshold world: ULP (ultra-low power), MEMS (pretty much everything you find in your phone these days, outside of the CPU itself), 3D integration. The problem is summarizing it in a neat little comment. Memristors are huge, and are going to be huge-er still. World-changing huge. Which ties into new analogue design techniques. 3D integration, which we have had for a decade, is at a tipping point as we move to new nodes and monolithic integration becomes a thing. Ultra-low-power, energy-efficient compute at the edge: we're talking about consuming less power in a year than it takes me to think about what I want to say while writing this comment. On-die microfluidic cooling for 3D dies is starting to appear. On-die sensors for new camera tech. We're edging up to a point where multiple sensors and/or a camera, compute, model inference and mesh network connectivity all exist on a single die.
[+] modeless|3 years ago|reply
Memristors aren't huge yet, are they? Don't they have wear issues? Who is working on them?
[+] inasio|3 years ago|reply
Together with memristors, in-memory compute is likely going to be a big deal. It's already here in some forms, see e.g. Content Addressable Memory in high end networking gear.
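A CAM inverts the usual address-to-data lookup: you present the data and get back the address of the matching row. A toy sketch of the idea (real CAMs compare every row in parallel in hardware; the class and names here are made up for illustration):

```python
# Toy model of a content-addressable memory (CAM): all stored
# entries are compared against a search key and the matching
# address comes back. Real CAMs do the compare across every
# row simultaneously in hardware; this scan is just the idea.

class ToyCAM:
    def __init__(self):
        self.rows = {}          # address -> stored word

    def write(self, address, word):
        self.rows[address] = word

    def search(self, word):
        # Hardware compares all rows at once; here we scan,
        # returning the first matching address.
        for address, stored in self.rows.items():
            if stored == word:
                return address
        return None             # the "no match" signal in a real CAM

cam = ToyCAM()
cam.write(0, "0a:1b:2c:3d:4e:5f")   # e.g. a MAC-address table row
cam.write(1, "ff:ee:dd:cc:bb:aa")
port = cam.search("ff:ee:dd:cc:bb:aa")
```

This is why CAMs show up in networking gear: a switch can ask "which port did I learn this MAC address on?" in one lookup instead of walking a table.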
[+] ABeeSea|3 years ago|reply
I feel like memristors have been hyped since HP tried to use them to save their business from 2007-2015. Nothing ever came of it, from HP at least. Have there been new developments that make them more likely to be viable?
[+] Taniwha|3 years ago|reply
In my experience memristors have been just about to be huge every year for the past 30
[+] ggm|3 years ago|reply
Has there been an acronym shift? MEMS used to mean mechanical contrivances like DLP mirrors. Memristors are surely different?
[+] B1FF_PSUVM|3 years ago|reply
Rabbit hole diving on the makers of the machines that make chips:

https://semiengineering.com/entities/asml/

https://en.wikipedia.org/wiki/ASM_International

https://www.asml.com/en/company/about-asml/history

From the last one, ca. 1988 ASML was failing badly:

"But in a market of fierce competition and many suppliers, the small unknown company from the Netherlands couldn’t catch a break. ASML had few customers and was unable to stand on its own two feet. Making matters worse, shareholder ASMI was unable to maintain the high levels of investment with little return and decided to withdraw, while the global electronics industry took a turn for the worse, and Philips announced a vast cost-cutting program. The life of our young cash-devouring lithography company hung in the balance. Guided by a strong belief in the ongoing R&D and in desperate need of funds, ASML executives reached out to Philips board member Henk Bodt, who persuaded his colleagues to lend a final helping hand."

(I understand that nowadays you need their equipment for new fabs.)

[+] agumonkey|3 years ago|reply
Interesting how the present came from a very fragile, almost non-happening past.

A lot less critical, but I've read that Olivetti vanished from the industry due to random finance wars between France and Belgium, with some bank (which had bought the company not long before) crashing and cutting off money right at the moment when PCs were taking off.

[+] Sakos|3 years ago|reply
Crazy to think about where we'd be without ASML right now. Would another company have stepped up to fill the void? Or was there something unique about ASML that allowed them to reach EUV instead of anybody else?
[+] Aardwolf|3 years ago|reply
Random semi-related question about chip making: AFAIK CPUs have multiple layers, not just a single silicon layer of transistors. Also, making transistors involves doping the silicon with other elements. And lithography is about shining a laser through a mask. And all this starting from an already-cut silicon wafer.

So how are the multiple layers made? If light is shone on the surface, how do you reach the other layers? And how is the doping done? You also need to choose which element goes where, but I'd think that couldn't be done with light and a mask (so even for a single layer I wonder how this works). And how do you reach the other layers with those doping elements?

[+] pclmulqdq|3 years ago|reply
Each wafer has only one layer of transistors; it goes through semiconductor manufacturing with just that one transistor layer. On top of it are many layers of wiring.

For each layer of metal, they go through several steps of deposition of insulator, masking, etching, deposition of metal (often in several passes now), grinding or etching that, and then covering it all up with more insulator and grinding that layer down to produce a smooth surface for the next layer.

Once the chips are complete, 3D stacking is a packaging process. It involves grinding the backsides of the chips down until they are very thin and attaching the dies together using vias that run through the thin remaining silicon layer.

EDIT: Flash memory today has multiple stacked doped silicon layers, but it is a special process that is largely unsuitable for logic.
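The per-metal-layer sequence described above can be written out as a checklist; the step names follow the comment ("CMP" is chemical-mechanical polishing, the grinding/planarization step), and the function and layer count are invented for illustration, not real process control:

```python
# Sketch of the repeated per-metal-layer sequence: each wiring
# layer goes through the same deposit/mask/etch/fill/planarize
# cycle before the next one starts.

PER_LAYER_STEPS = [
    "deposit insulator (dielectric)",
    "apply photoresist and expose through mask",
    "etch trenches/vias in the insulator",
    "deposit metal (often several passes)",
    "grind/etch excess metal back",
    "deposit capping insulator",
    "CMP: planarize for the next layer",
]

def build_metal_stack(num_layers):
    """Return the full ordered step list for a metal stack."""
    steps = []
    for layer in range(1, num_layers + 1):
        for step in PER_LAYER_STEPS:
            steps.append(f"M{layer}: {step}")
    return steps

stack = build_metal_stack(3)   # a modest 3-layer metal stack
```

Modern logic processes repeat this cycle a dozen or more times, which is why the wiring stack dwarfs the single transistor layer underneath it.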

[+] ThrowawayR2|3 years ago|reply
You can only ever reach exposed surfaces. This means a very complex sequence of adding and removing layers to expose/hide things so that the currently exposed surface features are what you want to change in the current processing step.

The Wikipedia article on CMOS has a very nice illustration of the basic steps of this process: https://en.wikipedia.org/wiki/CMOS#/media/File:CMOS_fabricat... Current processes are much more complex than even that.

[+] variadix|3 years ago|reply
It’s built from the bottom up in layers (at least traditionally, I’m not sure how the newer 3D structures for memory are constructed). The bottom layer of silicon substrate is covered in an oxide and etched selectively by photoresist and masking. Further layers are connective layers of metal lines insulated by oxide, with vias connecting metal layers to one another. The same oxide deposition, photoresist, expose, etch process is used for each layer.
[+] spyremeown|3 years ago|reply
There are "wells" to tap the components into. You just add successive layers in a very smart way.
[+] hinkley|3 years ago|reply
Are we ever going to circle back to the notion of creating circuits that can hold more than 2 states? We seem to be doing that for SSDs but not for logic.

Or has the space already been explored and there's nothing there?

[+] jlokier|3 years ago|reply
Memristor and analogue processors to accelerate ML are doing this. Neural networks are ok with operations on noisy, analogue states. Analogue operations such as op-amp multiplication are more power hungry than individual digital gates, but some are less power hungry than digital multipliers which use thousands of gates.

For digital, noise-free logic, it's generally more power efficient and physically simpler to have logic gates operate on two cleanly separated states with more gates, than to have bulkier, more complicated gates that do the same thing with combined states. A transistor which is fully on or fully off in a logic circuit uses low power either way (like a wire or a gap in the circuit), but the in-between state uses more power (like a resistor, it produces heat). This is one reason why power consumption goes up with the amount of logic state switching (the in-between resistor-like state occurs briefly during each state change), and also why many-level stable logic states aren't so efficient, except with complicated gates that use many transistors to implement many thresholds.

For non-volatile storage, that doesn't apply. Information density is more important, individual memory cells do not change state often so switching power is less of a thing per memory cell, and the logic sits at the edge of the memory array, shared among many cells. The edge circuitry can afford to be more complicated, to optimise the bulk of the memory array.

In magnetic storage and communication, the signal processing to encode many states in a small signal takes considerable power, but the trade off is worth it.
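The "power tracks switching" point above follows from the standard CMOS dynamic-power relation P = a·C·V²·f, where a is the activity factor (the fraction of capacitance switched per cycle). The numbers below are illustrative, not from any real chip:

```python
# Dynamic (switching) power in CMOS: P = a * C * V^2 * f.
# a: activity factor, C: switched capacitance in farads,
# V: supply voltage, f: clock frequency in Hz.
# Leakage (static) power is a separate term, not modeled here.

def dynamic_power(activity, capacitance_f, voltage_v, freq_hz):
    return activity * capacitance_f * voltage_v**2 * freq_hz

busy = dynamic_power(0.2, 1e-9, 0.8, 3e9)   # 20% of gates switching
idle = dynamic_power(0.02, 1e-9, 0.8, 3e9)  # 2% switching
# Tenfold less switching -> tenfold less dynamic power,
# matching the point that power rises with logic state changes.
```

The V² term is also why supply voltages have been pushed down node after node: halving V cuts dynamic power by four at the same frequency.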

[+] mjgerm|3 years ago|reply
Roughly speaking, an N-level digital logic system requires O(N) transistors in order to buffer/force a signal into one of N states, but only performs O(log(N)) more work with them relative to binary.

Without the buffering step, you'll eventually get the middle logic levels drifting (e.g. your "1"s become "0"s or "2"s). Binary gets this for "free" because there's no middle states; this doesn't apply just to a simple buffer, similar details apply to the implementation of all other gates (many of which are rather awkward to implement).

Analog works out for rough calculations because you can skip the buffering process, at the expense of having your calculation's precision limited by the linearity of your circuit.

SSDs are more of a special case, because to my knowledge they're not really doing work on multi-level logic outside of the storage cells. They pump current in on one axis of a matrix, read it out on the other, and then ADC it back to binary as fast as possible before doing any other logic.

Random sidebar: I don't see any constraint like this for mechanical computers, so a base-10 mechanical computer doesn't strike me as any more unreasonable than a base-2 mechanical computer (i.e. slop and tolerance are independent of gear size). In fact, it might be reasonable to say you should use the largest gears that the technology of your time can support (sorry Babbage).
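The SSD readout described above (sense a cell, compare against references, convert back to binary) can be sketched with threshold comparisons; the voltage values here are invented for illustration, and real flash calibrates its thresholds per chip as cells wear:

```python
# Sketch of multi-level-cell (MLC) readout: a sensed cell
# voltage is compared against reference thresholds and mapped
# straight back to bits at the edge of the array.

import bisect

THRESHOLDS = [0.8, 1.6, 2.4]   # hypothetical volts: 3 refs -> 4 levels

def read_mlc_cell(voltage):
    """Map a sensed voltage to a 2-bit string (one MLC cell)."""
    level = bisect.bisect_left(THRESHOLDS, voltage)  # 0..3
    return format(level, "02b")

bits = [read_mlc_cell(v) for v in (0.3, 1.1, 2.0, 3.0)]
```

Note the multi-level representation lives only in the storage cell; everything past the comparators is ordinary binary logic, which is the commenter's point.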

[+] thunderbird120|3 years ago|reply
Analog circuits exist and are widely used in some applications including for stuff like inference in AI models. The issue is that they are somewhat inexact which is a huge problem for most conventional code. As for circuits which work in a discrete rather than continuous space, it's basically always better to just use binary to represent information because 2 states are the most easily separable. Once you have to start separating out more states things get more difficult and it's almost never worth the effort unless you're trying to do stuff like storage in SSDs.
[+] b3orn|3 years ago|reply
Nothing I can link you, but I read about some companies trying to bring back analog computers for machine learning purposes.
[+] db48x|3 years ago|reply
Every digital circuit already has three states: floating, source, and sink.
[+] fnordpiglet|3 years ago|reply
"Whatever the future holds, it's clear that the industry is in no hurry to abandon silicon, despite the theoretical advantages of alternative materials."

Such as? What a frustrating way to leave us dangling.

[+] ip26|3 years ago|reply
Usually the alternatives switch much faster, such as GaAs
[+] martincmartin|3 years ago|reply
So carbon atoms are spaced about 0.15 nm apart in diamond. So a 3 nm process is about 20 carbon atoms across.

Impressive.
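The back-of-envelope arithmetic works out like this (the carbon-carbon bond length in diamond is roughly 0.154 nm):

```python
# A literal 3 nm feature measured in carbon-carbon bond lengths.
bond_length_nm = 0.154          # C-C bond in diamond
node_nm = 3.0
atoms_across = node_nm / bond_length_nm
# roughly 19.5, i.e. about 20 atoms across
```

Though, as the reply points out, "3 nm" is a marketing node name rather than the size of any physical feature on the chip.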

[+] SAI_Peregrinus|3 years ago|reply
It's just a name. No actual features are 3nm specifically.
[+] robotburrito|3 years ago|reply
Chip War is a fantastic book about the history of this industry. I would recommend it highly for some background information.