ASML EUV lithography machine could keep Moore’s Law on track

206 points | mfiguiere | 2 years ago | spectrum.ieee.org

183 comments

esperent | 2 years ago
> EUV necessitates an entirely new way to generate light. It’s a remarkably complex process that involves hitting molten tin droplets in midflight with a powerful CO2 laser. The laser vaporizes the tin into a plasma, emitting a spectrum of photonic energy. From this spectrum, the EUV optics harvest the required 13.5-nm wavelength and direct it through a series of mirrors before it is reflected off a patterned mask to project that pattern onto the wafer

This is incredible and feels like the most sci-fi sentence I've read in a long time.

It's unbelievable to think that this works, not just in a lab, but in commercial systems that will produce hundreds of chip wafers an hour (>100 anyway, they didn't clarify further).

sbierwagen | 2 years ago
It's also terribly inefficient. Each EUV "mirror" eats about 30% of the incoming light, and since the mirrors have such a narrow reflective range and the source light isn't collimated or coherent, you have to use a bunch of them. By the time the light reaches the mask, 96% of it has been lost. As a result:

>Hynix reported at the 2009 EUV Symposium that the wall plug efficiency was ~0.02% for EUV, i.e., to get 200-watts at intermediate focus for 100 wafers-per-hour, one would require 1-megawatt of input power

https://en.wikipedia.org/wiki/Extreme_ultraviolet_lithograph...

It's a damn good question how much further this can scale. EUV photons are a lot more like X-rays than visible light. They're now energetic enough to ionize the photoresist material, blurring the exposed area with secondary-electron scatter. Transistors at the fundamental limit, ones made out of single molecules, are going to be tough to make with lithography.
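
A quick back-of-the-envelope check of both figures (the mirror counts are illustrative; real machines have on the order of ten reflective surfaces between source and wafer):

    # Rough arithmetic behind the figures above (illustrative, not ASML specs).
    reflectivity = 0.70          # each multilayer EUV mirror reflects ~70%
    for n_mirrors in (8, 10, 12):
        delivered = reflectivity ** n_mirrors
        print(f"{n_mirrors} mirrors -> {delivered:.1%} of source light delivered")

    # The Hynix wall-plug figure: 200 W at intermediate focus from 1 MW input.
    print(f"wall-plug efficiency: {200 / 1_000_000:.2%}")

With ten mirrors, roughly 97% of the light is gone before the mask, which is in the same ballpark as the 96% above.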

NortySpock | 2 years ago
Twice. They have to hit the molten droplet twice.

Once "gently" to deform it briefly into a concave shape, and the second, harder pulse to actually activate the droplet to emit extreme ultraviolet light

Asianometry on EUV. Skip to 10m50s. https://youtu.be/5Ge2RcvDlgw

ksec | 2 years ago
>This is incredible and feels like the most sci-fi sentence I've read in a long time.

And that is the sad part of so-called "tech" today: zero appreciation of it outside of a small minority.

They are doing this at massive scale, with extreme precision, huge electricity costs, and insane difficulty in both designing and producing the chips.

And yet HN thinks all the hardware chips today are overpriced and absurdly expensive.

jasonwatkinspdx | 2 years ago
Even crazier, that process runs at 50 kHz; that is, 50,000 droplets vaporized per second. This is necessary because the overall efficiency is really poor relative to the energy density they need at the wafer to react with the resist.
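
For scale (the 50 kHz and >100 wafers/hour figures come from the comments above; the division is just arithmetic):

    # Tin droplets consumed per wafer at the quoted rates.
    droplets_per_second = 50_000
    wafers_per_hour = 100
    droplets_per_wafer = droplets_per_second * 3600 / wafers_per_hour
    print(f"{droplets_per_wafer:,.0f} droplets per wafer")  # 1,800,000
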
rsweeney21 | 2 years ago
Came here to make the exact same comment. I can't help but feel like some day we will forget how we were able to make such amazing machines.
zapkyeskrill | 2 years ago
In a moment of brain fart I imagined this process happening on the end device, with us needing to /refuel/ the devices every now and then ...
WeylandYutani | 2 years ago
Well, it did take them 30 years and a lot of money to get it working.

That's why ASML doesn't have any competition: everyone else gave up.

martin_drapeau | 2 years ago
I worked at Imec back in 2005, alongside the teams installing and researching the first EUV machines from ASML. I never thought they'd get it to work given the technological challenges: laser-pulsed tin plasma, mirrors instead of lenses, and vacuum exposure, just to name a few! Glad they got it working so we can print smaller at scale.
yread | 2 years ago
A friend of mine worked for an ASML supplier. He was working on adjusting the optical path based on how the laser passing through a lens heats that lens up and changes its optical qualities. There are so many challenges we don't even think about.
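
In spirit, that correction might look something like this toy feedforward sketch (the linear drift model and every constant here are invented for illustration):

    # Toy model: predict the lens's thermal focal drift from absorbed laser
    # power, then counter-move a compensating element. Constants are made up.
    absorbed_power_w = 3.2            # laser power absorbed by the lens
    drift_per_watt_um = 0.8           # focal shift per absorbed watt (assumed)
    focal_drift_um = absorbed_power_w * drift_per_watt_um
    correction_um = -focal_drift_um   # move the compensator the opposite way
    print(f"predicted drift {focal_drift_um:.2f} um; apply {correction_um:.2f} um")
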
dcormier | 2 years ago
I recently had an opportunity to chat with a machinist who works for a shop that makes some parts for ASML's machines. He showed me a picture of a couple of parts he had finished that day. He said they weighed about a hundred pounds sitting there on the table, but at the acceleration they experience in the machine, they effectively weigh roughly the same as a Toyota Tacoma.
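
The implied acceleration, as rough arithmetic (the Tacoma's ~4,400 lb curb weight is my assumption):

    # A ~100 lb part exerting Tacoma-scale force implies ~44 g of acceleration,
    # since apparent weight scales linearly with acceleration.
    part_lb = 100
    tacoma_lb = 4_400                 # approximate curb weight (assumed)
    print(f"~{tacoma_lb / part_lb:.0f} g")
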
iamgopal | 2 years ago
We make centrifuges; 1000 g is routine for most of them.
hotpotamus | 2 years ago
I can't imagine that what I think of as a machinist - a human who picks up parts and places them into machine tools and adjusts settings - is who makes parts for semiconductor manufacturing machines. I'm guessing the title has a lot more to do with CNC/automation these days?
apienx | 2 years ago
The industry's currently shipping chips based on the 3nm process. I understand that it's mostly a marketing term (i.e. non-standardized), but I assume the actual transistor channel is within that order of magnitude.

Knowing that a silicon atom is larger than 0.1 nm, how can we possibly keep Moore's Law on track? It feels like we're close to hitting fundamental limits.

Any insights would be much appreciated. Thanks!

ly3xqhl8g9 | 2 years ago
We are ridiculously far from the physical limits of our current artificial computers (both theoretical [1] and practical [2]). For more technical details, see Jim Keller: [3] [4] [5].

[1] https://en.wikipedia.org/wiki/Limits_of_computation

[2] The ~12-watt computer inside each living human adult skull (and perhaps each eukaryotic cell [6]) has been the state of the art for quite some time.

[3] 2021, Jim Keller: The Secret to Moore's Law, https://www.youtube.com/watch?v=x17jIKQf9hE

[4] 2019, Jim Keller: Moore’s Law is Not Dead, https://www.youtube.com/watch?v=oIG9ztQw2Gc

[5] 2023, Change w/ Jim Keller, https://www.youtube.com/watch?v=gzgyksS5pX8

[6] Our computers aren't yet capable of polycomputation, where the computation topology, data, and functions depend on the observer, instead of computation in a passive implementation, once done forever set in s̶t̶o̶n̶e̶ silicon. 2023, Michael Levin, Agency, Attractors, & Observer-Dependent Computation in Biology & Beyond, https://www.youtube.com/watch?v=whZRH7IGAq0

dougmwne | 2 years ago
Sure, we are close to the end of silicon semiconductor improvements, and Moore's law could be near its end. In fact, the price per transistor has not been dropping recently, so it may be over already.

If there's hope for the future, it's that there are many other computing technologies besides traditional silicon that show potential, so maybe the torch will be passed to quantum, or superconductors, or DNA, or something else.

wuming2 | 2 years ago
Peak applications require the latest and most powerful tech, with its colossal trail of manufacturing pollution.

For the rest, I often wonder if it would not be better for the environment to repurpose older, already-made tech.

Plenty of embedded systems grind on for a long time.

And user-facing applications lack one thing: public stats on peak system usage. When confronted with a new purchase, we should be handed a sheet of our own and our peers' statistics. Producers and service providers have them anyway.

andy_ppp | 2 years ago
How does 13.5nm light etch features of 7nm and below? I can sort of see how ultra-pure water can focus the light (immersion lithography), and there's multi-patterning (I'm not sure how that really works; I would have thought shining light through two masks would make things even blurrier). When the photon hits the silicon, why isn't the dot 13.5nm?
atq2119 | 2 years ago
It does seem magical.

The one thing I can answer is that multi-patterning does not shine light through two masks simultaneously. Instead, it consists of multiple separate steps.

I think for the rest, the point is that the light arriving at the wafer is not a binary thing: due to diffraction and self-interference, it arrives in varying intensities. Within difficult constraints, this lets you control the areas in which the intensity is below or above certain thresholds. I assume that if you then manage to control the chemistry just right, you can produce features that are smaller than the wavelength of the light -- under severe constraints on what shapes you can produce. You definitely do not get to produce an arbitrary bitmap of sub-wavelength pixel size.
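
A minimal numeric sketch of that threshold idea (the Gaussian aerial image and the 0.8 threshold are illustrative assumptions, not real lithography parameters):

    import numpy as np

    wavelength = 13.5                  # nm, EUV
    x = np.linspace(-20, 20, 4001)     # position on the wafer, nm

    # Pretend the aerial image of one feature is a Gaussian whose full width
    # at half maximum equals the wavelength (illustrative only).
    sigma = wavelength / 2.355
    intensity = np.exp(-x**2 / (2 * sigma**2))

    # The resist only "switches" where the dose crosses a threshold, so the
    # printed line is narrower than the image that produced it.
    threshold = 0.8
    exposed = x[intensity > threshold]
    print(f"aerial image FWHM : {wavelength:.1f} nm")
    print(f"printed linewidth : {exposed.max() - exposed.min():.1f} nm")  # ~7.7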

dist-epoch | 2 years ago
You just use the "edge" of the light to cut.

If you drag a baseball bat through sand, the edge of the cut "channel" is much sharper and narrower than the baseball bat.

Now offset the baseball bat a bit and draw another line that partially overlaps the first one. You will get the intersection of the two baseball-bat-wide channels, and it will be much narrower.
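
The same intuition in a few lines (widths, offset, and threshold are made up for illustration):

    import numpy as np

    x = np.linspace(-30, 30, 6001)     # nm
    sigma = 8.0                        # width of each "bat" (illustrative)

    # Two exposures of the same wide feature, offset left and right; the
    # region covered by BOTH is the narrow intersection of the two channels.
    left  = np.exp(-(x + 5)**2 / (2 * sigma**2)) > 0.5
    right = np.exp(-(x - 5)**2 / (2 * sigma**2)) > 0.5
    single = x[left]
    both = x[left & right]
    print(f"one channel : {single.max() - single.min():.1f} nm wide")  # ~18.8
    print(f"intersection: {both.max() - both.min():.1f} nm wide")      # ~8.8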

glic3rinu | 2 years ago
Masks don't have the actual shape, but shapes accounting for the wave interference patterns that end up producing the final shape when the EUV light reflects off them. I believe the process of coming up with the correct interference pattern takes weeks of supercomputing.
abwizz | 2 years ago
Good question, I was also wondering.

Then again, it's called the wave-length, not the wave-width.

lockhouse | 2 years ago
Processing power is fine these days. It’s memory that I feel has stagnated.

The standard computer configuration has been stuck at 8 GB of RAM and 256 GB of SSD storage forever.

sbrother | 2 years ago
> Processing power is fine these days.

I don't know; I've been working with LLMs a lot recently, and for the first time in a while I'm wishing I had access to much more compute than I do. Imagine having the power of an H100 locally without having to pay thousands of dollars a month.

jeffbee | 2 years ago
Get used to it. The memory wall is coming and if you are in the industry it's possible that within your career you may need to adapt to falling DRAM-to-core ratios.
cypress66 | 2 years ago
I'm not sure what "standard computer configuration" means. Maybe you mean a budget laptop? Your typical new gaming desktop build is 32GB, and for a workstation probably 64GB.

I think you can get a 2TB SSD for like 100 bucks nowadays. They are dirt cheap.

Tade0 | 2 years ago
Out of curiosity I looked at the store I get my laptops etc. from; grouped by RAM, the laptop category looks like this:

8GB - ~350

16GB - ~1060

32GB - ~550

I don't know about desktop PCs, but in laptops 8GB is not mainstream any more.

mikewarot | 2 years ago
Von Neumann's architecture has run out of steam. The fact that most transistors in a computer are idle at any given moment seems like a huge waste. What if you could just have a computational fabric with one instruction per cell, and run whole programs in parallel?

FPGAs do that, but the "smart" routing fabric in them makes compiling code to them take hours or days.

If you eliminate the switching fabric of an FPGA, you are left with a grid of look-up tables (LUTs), each connected to its neighbors. The result is a Turing-complete computer that works exclusively in parallel.
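
A toy sketch of that idea (one bit per cell, four-neighbor wiring, and random LUT contents are my simplifications, not a real fabric):

    import random

    # A torus of cells, each holding one bit and its own 16-entry look-up
    # table; every cell recomputes from its four neighbors on every tick.
    W, H = 16, 16
    state = [[random.randint(0, 1) for _ in range(W)] for _ in range(H)]
    luts = [[[random.randint(0, 1) for _ in range(16)] for _ in range(W)]
            for _ in range(H)]

    def step(state):
        nxt = [[0] * W for _ in range(H)]
        for y in range(H):
            for x in range(W):
                n = state[(y - 1) % H][x]
                s = state[(y + 1) % H][x]
                e = state[y][(x + 1) % W]
                w = state[y][(x - 1) % W]
                nxt[y][x] = luts[y][x][n << 3 | s << 2 | e << 1 | w]
        return nxt

    for _ in range(10):                # all cells compute on every tick
        state = step(state)
    print(sum(map(sum, state)), "of", W * H, "cells high after 10 ticks")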

FpUser | 2 years ago
At home (which is also my workplace) all my PCs are at 128GB, the server is at 512GB, and the laptops are at 64GB. RAM has not stagnated; just buy what you need. To get it cheap, for laptops for example, I buy the smallest configuration (RAM- and SSD-wise) with a good CPU, then throw out the old RAM and SSD and replace them with parts bought separately. It's way cheaper this way. PCs and servers are assembled from parts: again, I just order what I need and let a custom PC builder nearby assemble it for me.
cobalt | 2 years ago
Build your own; it's not that expensive to 4x both of those numbers.
tgtweak | 2 years ago
Is this the one that Intel is getting first dibs on for the next few generations? These fabrication paradigms generally track Moore's law for some time before tapering off (an S-curve, to some degree), but the discovery of a new paradigm can slow the overall trendline if it takes too long to commercialize.
crote | 2 years ago
Not going to happen. Remember, TSMC manufactures chips for Apple, Nvidia, AMD, Google, and a looot of other companies. They own about 60% of the cutting-edge fab capacity in the world, while Intel was basically an "also-ran" during the transition to EUV.

ASML is never going to tie themselves to a single customer like that, let alone one which isn't even the market leader. High-NA is a massive technological change, and all the major players have already ordered their machines. Intel was simply the first to complete their order in a desperate attempt to avoid a repeat of their EUV debacle, but they'll receive their new toy at most a month or two earlier than their competition.

TheUnhinged | 2 years ago
Intel is getting the first prototype(s), a few years from now. Then add a year or two before those EXE machines are actually used for volume manufacturing.

No exclusivity, as it's ASML's business model to work fairly with all semiconductor manufacturers.

nine_k | 2 years ago
Somebody noticed that progress is usually a stack of sigmoids, not one uneventful, plain upward curve.
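
A quick way to see what that looks like (purely illustrative numbers):

    import math

    def sigmoid(t, midpoint):
        return 1 / (1 + math.exp(-(t - midpoint)))

    # Three overlapping technology S-curves: each takes off as the previous
    # one saturates, and their sum reads as one smooth upward trend.
    for t in range(12):
        total = sum(sigmoid(t, m) for m in (2, 5, 8))
        print(t, round(total, 2))
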
brancz | 2 years ago
Is there any indication that it's possible to build subatomic-size transistors? Last I checked the data, transistors were already only a few atoms in size (silicon and carbon atoms are somewhere in the 0.3nm range), and it was a widely held opinion that scaling would stop there, if not much sooner. That would keep Moore's law alive a bit longer at best, but the end does seem in sight.

Even considering all of that, the economics seem to have already stagnated in cost per performance. [1]

[1] http://databasearchitects.blogspot.com/2023/04/the-great-cpu...

automatic6131 | 2 years ago
Bear in mind that when silicon foundries say they have an X nm process, nothing in that process is actually X nm. TSMC's 2nm process does not make transistors 2nm wide [1]; they are, in fact, approximately 40-50nm wide. The process number is a marketing number, and what actually changes each generation is the transistor geometry (here you'll see terms like FinFET and GAA transistors, plus some process improvements that cause "half" generations).

[1] https://en.wikipedia.org/wiki/2_nm_process

But yeah, the fact that the latest process nodes actually increase in cost is why people say "Moore's law is dead". Performance improves, but to keep the trendline roughly exponential, many things have had to give since the late 2000s, such as cost per wafer, power usage at max performance, etc.

javaunsafe2019 | 2 years ago
One layman question here that maybe someone with better knowledge of the field can answer: could it make sense at some point to work without masks and light, and instead have large-scale laser arrays that write the structures directly, at the smallest possible scale (electrons)? Wouldn't such an array be more energy efficient, be able to reach smaller scales, and, through the parallel process, at some point be as fast as current systems using masks + light?
oznog | 2 years ago
Moore's Law is not engineering, it's economics.

The limits of physics can be surpassed with parallelization.

Moore's law is a reflection of the private and business market's desire/need for ever greater efficiency.

There is no limit.

rowanG077 | 2 years ago
Why aren't we doing electron lithography? As a layman, I think it should easily be able to surpass EUV in terms of resolution. I imagine there are very good reasons why we aren't seeing it.
beebeepka | 2 years ago
What surprised me the most is how little telemetry they have, almost none of it in real time. I guess the industry is used to just eating the cost of defects, but it was mind-blowing (or eye-opening) to me.