shiohime | 4 years ago

You haven't been able to GPU-mine Bitcoin in like a decade; every miner uses dedicated ASICs built specifically for mining BTC. It's been impossible to compete without ASICs for a very long time.

barkingcat | 4 years ago

Nvidia can design and manufacture ASICs too, you know. They likely have enough expertise to revolutionize the ASIC market as you know it, using the 3nm process at TSMC.

Bitmain ASICs are currently being made on roughly 16nm nodes, while Nvidia has access to a 3nm process - think about that for a bit.

thesz | 4 years ago

Please bear in mind that "3nm" is an "alpha" - a multiplier characterizing the process. If you take the time to compare processes with different alphas, you will find that the transistor density difference is nowhere near what the difference in alphas would suggest.

If I remember correctly, transistor density in practice improves only as 1/alpha, whereas if everything really shrank with alpha, transistor area would shrink as (1/alpha) squared and density would improve accordingly. I.e., if alpha differs by a factor of two, there should ideally be 4 times as many transistors per unit area, whereas in practice you see only about 2 times as many (and often fewer).

This is because you cannot shrink transistors in every dimension without limit; they start to leak.

So the density difference between 3nm and 16nm will be less than about five times, not the roughly 28 times that squaring the ratio of the node names would suggest.
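
To put rough numbers on that (an illustrative sketch only, treating the node names 16 and 3 as the alpha values):

    # Illustrative arithmetic: ideal vs. observed density scaling
    # when going from a 16nm-class node to a 3nm-class node.
    old_alpha, new_alpha = 16.0, 3.0
    ratio = old_alpha / new_alpha   # ~5.3x linear shrink
    ideal_gain = ratio ** 2         # ~28x if transistors shrank in both dimensions
    observed_gain = ratio           # ~5x if density only tracks 1/alpha
    print(f"ideal ~{ideal_gain:.0f}x, observed ~{observed_gain:.0f}x")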

The difference in getting masks and a first prototype should also be quite substantial, in terms of both upfront payments and time-to-production.

I remember that a 25 square millimeter proof of concept on a 180nm process needed $50K and half a year of delay. A prototype of the same area on a node five times smaller (~35nm) would cost about $500K and take more than half a year. The jump from 16nm to 3nm is about the same ratio, so for a 3nm process I would expect several million dollars of upfront cost for a small chip, and a year-long delay before a prototype arrives.
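
To make that extrapolation explicit (a rough sketch; the 16nm baseline figure below is a placeholder assumption, everything else comes from the numbers above):

    # Crude extrapolation of the prototype-cost anecdote above.
    # Rule of thumb from the two data points: a ~5x node shrink multiplies upfront cost by ~10x.
    shrink_then = 180 / 35          # ~5.1x shrink, cost went $50K -> $500K (~10x)
    shrink_now = 16 / 3             # ~5.3x shrink, so expect a similar ~10x cost step
    cost_16nm_assumed = 500_000     # USD, hypothetical baseline for a small 16nm prototype
    cost_3nm_estimate = cost_16nm_assumed * 10   # -> several million USD
    print(round(shrink_then, 1), round(shrink_now, 1), cost_3nm_estimate)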

The Bitmain guys should be assumed not to be stupid, and they most probably have access to the latest processes after a second prototype. Yet they chose 16nm - we should ask ourselves "why?", and Nvidia should too.

As a matter of fact, Nvidia has had its share of problems with cutting-edge processes in the past.