item 24906151

AMD to Acquire Xilinx

593 points | ajdlinux | 5 years ago | amd.com

252 comments

[+] gvb|5 years ago|reply
Everybody seems to view this as AMD mimicking Intel when it acquired Altera. (That acquisition has not borne visible fruit.)

My contrarian speculation is that this is a move driven by Xilinx vs. Nvidia, given Nvidia's purchase of Arm and Xilinx's push into AI/ML. Xilinx is threatened by Nvidia's move, given its dependence on Arm processors in its SoC chips and their ongoing fight in the AI/ML (including autonomous vehicles) product space. My speculation is that this gives Xilinx an alternative high performance AMD64 (and possibly lower performance & lower power x86) "hard cores" to displace the Arm cores.

Interesting times.

[+] m0zg|5 years ago|reply
I think you're onto something here. AMD is likely seeing the end of the road for their CPU business within the next decade, since it will run up against physics and truly insane cost structures that will come after 5nm. At the same time we're far past the practical limit wrt ISA complexity (as evidenced by periodic lamentations about AVX512 on this site). The only real way to go past all of that right now is specialized compute, reconfigurable on demand, deployment of which is hampered by the fact that it's very expensive and not integrated into anything, so the decision to use it is very deliberate, which in practice means it rarely ever happens at all. Bundle a mid-size FPGA as a standardized chiplet on a CPU, integrate it well, provide less painful tooling, and that will change. Want hardware FFT? You got it. Want hardware TPU for bfloat16? You got it. Want it for int8? You got it. Think of just being able to add whatever specialized instruction(s) you want to your CPU.
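The bfloat16 format mentioned above is simple enough to sketch in a few lines. Here is a toy Python model of the float32-to-bfloat16 truncation; real hardware typically adds round-to-nearest-even, which this sketch omits:

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to bfloat16: keep the sign bit,
    all 8 exponent bits, and the top 7 mantissa bits."""
    bits32 = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits32 >> 16  # bfloat16 is just the high half of float32

def from_bfloat16_bits(b: int) -> float:
    """Widen bfloat16 back to float32 by zero-filling the low mantissa bits."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

# bfloat16 keeps float32's full exponent range but only ~2-3 decimal
# digits of precision, which is why ML accelerators favor it.
roundtrip = from_bfloat16_bits(to_bfloat16_bits(3.14159))  # -> 3.140625
```

The appeal for a reconfigurable chiplet is exactly this kind of narrow, well-specified datapath: small enough to drop into fabric, useful enough to matter.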

I'm not sure this is worth $35B, but if Lisa Su thinks so, it probably is. She's proven herself to be one of the most capable CEOs in tech.

[+] ohazi|5 years ago|reply
Also, the advantage Altera supposedly got after being acquired by Intel was better fab integration with what was then the best process technology available (High-end FPGAs genuinely need good processes).

1. That's no longer the case, so sucks for Altera / Intel

2. AMD doesn't have a fab, so any advantages are necessarily on the design / architecture / integration side.

[+] jl2718|5 years ago|reply
I don’t think NVidia/ARM would affect Xilinx much. Given what the bulk of FPGAs are doing in the data center, I think AMD was looking more at NVidia/Mellanox and of course Intel/Altera, but for networking, not compute. For Xilinx, this gives a path to board-level integration with x86.
[+] ansible|5 years ago|reply
And for their products that include hard cores, maybe they will switch to RISC-V like with the MicroSemi PolarFire.

I'm still debating on getting the Icicle development kit.

[+] WWLink|5 years ago|reply
> My speculation is that this gives Xilinx an alternative high performance AMD64 (and possibly lower performance & lower power x86) "hard cores" to displace the Arm cores.

I'm trying to imagine an x86-based UltraScale+-style processor.

Hopefully AMD can help fix the mess known as Vivado and the PetaLinux tools. lol.

[+] baybal2|5 years ago|reply
I do not believe it makes sense to spend so much money on a niche-within-a-niche product like AI/ML chips.

And I believe AMD are good with using calculators.

[+] person_of_color|5 years ago|reply
What is the difference between hard and soft cores?
[+] gmueckl|5 years ago|reply
Why not license other softcores, e.g. from SiFive?
[+] petra|5 years ago|reply
What happened recently in AMD's market?

AWS's ARM-based processors look to be widely deployed in the cloud. Nvidia, the leader in GPU compute, is buying ARM. Intel, which has suffered deeply from its 10nm fab problems, is going to work with TSMC. And AMD's P/E ratio is at 159, higher than Amazon's!

So maybe AMD is looking to convert some inflated stock into a predictable business.

And it's better to invest in a predictable business that may have possible synergies with yours. Otherwise it looks bad to the stock market.

And Xilinx is probably the biggest company AMD can buy.

[+] phendrenad2|5 years ago|reply
<many years ago> when Intel acquired Altera, and announced Xeon CPUs with on-chip FPGAs, I was optimistic that eventually they would add FPGAs to more low-end desktop CPUs (or at least Xeons in the sub-$1000 zone). But it never materialized. I'm slightly optimistic this time around too, but I suspect that the fact that Intel didn't do it hints at some fundamental difficulty.
[+] Nokinside|5 years ago|reply
Nokia designed their ReefShark 5G SoC chipset with a significant FPGA component and used Intel as their supplier. Intel couldn't deliver what they promised. It was a complete disaster.

They had to redesign ReefShark and cancel dividends. It was a huge setback.

[+] himinlomax|5 years ago|reply
I wonder how much of the delay in FPGA adoption is due to the utterly hilarious disaster that the toolchains are. They look like huge, brittle, proprietary monstrosities, incompatible with modern development methodologies.
[+] eganist|5 years ago|reply
I'm optimistic... not so much because of the merits of the acquisition, but more because of AMD's history with strategic actions. ATI kept them afloat through a CPU performance drought, and divesting GlobalFoundries secured necessary liquidity. These two alone essentially saved AMD, so I've got faith in leadership being able to make the appropriate strategic maneuvers.

But maybe I'm being overly optimistic. (Probably because—disclosure—I'm long AMD. Been long for years.)

[+] eqvinox|5 years ago|reply
I'm hoping/expecting a chip that goes into the Epyc/SP3 socket and has the memory & PCIe & socket crossconnect as hard IP but the CPU cores replaced with programmable logic. If you have a use case for FPGAs, it's more likely you want it in a concentrated form like this... not on low-end or desktop systems :/

If I remember correctly, there was something similar back in the early HyperTransport days...

[+] throwmemoney|5 years ago|reply
It is really funny when you find out that Intel uses Xilinx FPGAs for prototyping, as they cannot get what they acquired (Altera) working in-house.
[+] cjwinans79|5 years ago|reply
If true, ouch! Intel seems to be getting kicked every which way these days. Too complacent when they once ruled the roost.
[+] tails4e|5 years ago|reply
I imagine they are using a third-party prototyping solution like HAPS from Synopsys, which uses Xilinx FPGAs inside, for good reason: for quite some time Xilinx has had some very large devices built specifically for this market. It must sting a little bit though....
[+] imtringued|5 years ago|reply
I would rather see Processing in Memory (PIM) become mainstream than FPGAs. FPGAs are basically an assembly line that you can change overnight. They're excellent at one task and minimize end-to-end latency, but if it's actually about raw performance you are entirely dependent on the DSP slices.

With PIM your CPU resources grow with the size of your memory. All you have to do is partition your data and then just write regular C code with the only difference being that it is executed by a processor inside your RAM.

Having more cores is basically the same thing as having more DSP slices. Since those cores are directly embedded inside memory they have high data locality which is basically the only other benefit FPGAs have over CPUs (assuming same number of DSP and cores). Obviously it's easier to program than either GPUs or FPGAs.
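The programming model described above can be mimicked in ordinary code. Here is a toy Python sketch where threads stand in for the hypothetical per-bank PIM cores; the name `pim_map` and the bank count are made up for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

def pim_map(data, kernel, n_banks=4):
    """Toy model of the processing-in-memory idea: partition the data
    across 'memory banks', run the same plain kernel next to each
    partition, and return the partial results. (Real PIM hardware would
    run the kernel on cores embedded in the DRAM; here threads merely
    stand in for those per-bank cores.)"""
    size = (len(data) + n_banks - 1) // n_banks
    parts = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_banks) as pool:
        return list(pool.map(kernel, parts))

# Each "bank" sums its own partition; a host-side reduce finishes the job.
partials = pim_map(list(range(100)), sum)
total = sum(partials)  # 4950
```

The point of the model: the kernel is ordinary sequential code, and parallelism comes for free from the data partitioning, which matches the "just write regular C code" claim above.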

[+] nickff|5 years ago|reply
You're comparing two completely different paradigms.

FPGAs are not an assembly line at all; the assembly line analogy applies much more closely to a processor's pipeline.

FPGAs are just a massive set of very simple logic units which can be interconnected in many different ways. FPGAs are best used in situations where you want to perform a series of simple operations on a massive incoming dataset, in parallel, especially in real-time situations. Performing domain transforms on data coming in from sensor arrays is one very good application for FPGAs.
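As a rough illustration of that structure, here is a toy Python sketch of a fixed chain of simple per-sample stages, the kind of pipeline FPGA fabric evaluates with every stage active concurrently. The sensor-processing stages are hypothetical:

```python
def fpga_style_pipeline(samples, stages):
    """Toy model of a fixed chain of very simple per-sample operations.
    In real fabric each stage is its own piece of logic, so all stages
    work on consecutive samples at once; here we just apply them in
    sequence to show the dataflow."""
    out = []
    for s in samples:
        for stage in stages:
            s = stage(s)
        out.append(s)
    return out

# Hypothetical sensor pre-processing chain: remove DC offset, apply a
# fixed gain, then saturate. Each stage is trivial on its own, which is
# exactly what maps well onto fabric.
stages = [
    lambda x: x - 512,                   # subtract ADC mid-scale offset
    lambda x: x * 2,                     # fixed gain
    lambda x: max(-1000, min(1000, x)),  # saturate to output range
]
processed = fpga_style_pipeline([0, 512, 1023], stages)  # [-1000, 0, 1000]
```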

[+] legulere|5 years ago|reply
Having memzero and memcpy happen in memory without polluting caches would already be a huge gain.
[+] varispeed|5 years ago|reply
I hope they will not drop their CPLD chips. They were made obsolete at least once, but Xilinx fortunately decided to extend support for a couple more years. CPLDs are very useful for repairing vintage gear where logic components fail and are no longer available (for example, custom-programmed PALs): you can describe the logic in Verilog and often solder the CPLD in place of multiple chips. If they drop it, the only way to do this would be to use a full-blown FPGA, which is a bit wasteful.
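The PAL glue logic being described boils down to sum-of-products equations. A toy Python evaluator shows the structure a small CPLD absorbs; the address-decoder equation below is invented for illustration:

```python
def pal_output(inputs, product_terms):
    """Toy sum-of-products evaluator, the structure a classic PAL
    implements: the output is the OR of several AND (product) terms.
    'inputs' maps signal name to 0/1; each product term is a list of
    (name, expected_level) pairs that must all match."""
    return int(any(
        all(inputs[name] == level for name, level in term)
        for term in product_terms
    ))

# Hypothetical decoder equation from some vintage board:
#   CS = (A15 & A14 & !RD) + (A15 & !A14 & !WR)
terms = [
    [("A15", 1), ("A14", 1), ("RD", 0)],
    [("A15", 1), ("A14", 0), ("WR", 0)],
]
cs = pal_output({"A15": 1, "A14": 1, "RD": 0, "WR": 1}, terms)  # -> 1
```

In Verilog this would be a one-line `assign`; the point is that one small CPLD can hold many such equations and replace several dead chips at once.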
[+] thrtythreeforty|5 years ago|reply
I would be very interested in reading a blog post about this. Is there one that I can read or you'd be willing to write?
[+] znpy|5 years ago|reply
Hopefully we'll get better open source tools, a better Vivado maybe ?
[+] johnwalkr|5 years ago|reply
Xilinx Zynq and UltraScale+ series are multi-GHz ARM cores plus FPGA. They're incredibly useful for small-volume niche use cases and, to give an example from my industry, are becoming popular in space applications. The reason is that hardware qualification/verification is extremely expensive, but a change to FPGA fabric is not.

My point is Xilinx have already proven ARM CPU+FPGA on one die and I think AMD CPU+FPGA is very likely to be a success.

Between this, ARM adoption, Apple Silicon and similar offerings (which kind of skipped ARM+FPGA for ARM+ASIC), RISC-V, it's like 1992 again with exciting architectures. Only this time software abstraction is much better so there is not a huge pressure to converge on only 1-2 architectures.

[+] panpanna|5 years ago|reply
ARM + ASIC? Isn't that simply a SoC?

Edit: technically, the arm part is also ASIC, but you get what I mean

[+] FPGAhacker|5 years ago|reply
Could be interesting. I prefer an independent Xilinx, but maybe competition with intel will stimulate the whole reconfigurable computing revolution that fizzled out.
[+] hehetrthrthrjn|5 years ago|reply
This is a smart move, reflecting Intel's own, with an eye to the datacenter where the FPGA is seen as having a bright future.
[+] saagarjha|5 years ago|reply
Has Intel done much with Altera? I haven’t heard much of anything come out of that partnership. (Then again, I’m not plugged in to this stuff.)
[+] QuixoticQuibit|5 years ago|reply
Can you expand on why Intel’s move was smart (what did the Altera acquisition do for them) and why FPGAs have a bright future in the datacenter?

From what little I’ve seen in this space, FPGAs have not made large inroads in the ML space or datacenters in general. This seems partly due to their inefficiency compared to ASICs, and moreover their software.

Unless AMD is planning something really ambitious (e.g., true software-based hardware reconfiguration that doesn’t require HDL knowledge) and are confident they’ve figured it out, I’m not sure what they hope to achieve here.

[+] voxadam|5 years ago|reply
I'd love to know why Intel chose to buy Altera instead of the industry leader Xilinx.
[+] baybal2|5 years ago|reply
> This is a smart move, reflecting Intel's own, with an eye to the datacenter where the FPGA is seen as having a bright future.

What in the world do FPGAs have to do in a datacentre?

[+] mmrezaie|5 years ago|reply
I understand that they need a big push in the DPU market, but I do not understand why companies as big as AMD do not invest and build what they need in-house. If anyone can gather the talent, it is AMD. Everyone was talking about future data centers, and as far as I can tell I have been hearing about heterogeneous IO since 2009 (and that's just me, hearing about it while working on Xen).

To answer my own question: maybe the market is so volatile that they cannot do strategic planning like that?

[+] saagarjha|5 years ago|reply
Hmm, this was rumored but I guess now it is actually happening. Nice bump in the share price there, I guess; it’s currently trading at around $115 and it seems it will be converted to $143 in AMD stock. I assume this is to help AMD push more into the server and ML compute spaces?
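The implied premium in those numbers works out as follows; this is just a sanity check using the figures quoted in the comment, not independently verified:

```python
# Back-of-the-envelope check of the deal premium implied by the figures
# above: ~$115 pre-deal share price, ~$143 per share in AMD stock.
pre_price = 115.0
deal_value = 143.0
premium = (deal_value - pre_price) / pre_price  # ~0.243, about a 24% premium
```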
[+] 6d65|5 years ago|reply
Probably makes sense as a business decision.

I would also like for AMD to invest in ML tooling while they have the cash.

I hope one day PyTorch, XLA, and Glow will have native AMDGPU support, and I will be able to buy a couple of Radeon 6000 series cards, undervolt them, and make a good ML box.

I think AMD GPUs on TSMC 7nm, then maybe even 5nm, will have the best performance per watt, even though they might be 10% or 20% slower than the alternative. For me, performance per watt and per dollar is more important.

Anyway, it's sad that they couldn't assemble a 5-to-10-person (I might be too optimistic) engineering team to make their product relevant in this market.

[+] nfriedly|5 years ago|reply
I'd like to see consumer-level CPU + GPU + FPGA products that emulators could take advantage of. I'm thinking of floating point math for PS2 right now, but I'm sure there are other examples where an FPGA could be beneficial.
[+] beezle|5 years ago|reply
So who are left in the fpga space? Lattice?
[+] ohazi|5 years ago|reply
You can still buy Altera FPGAs, and you'll still be able to buy Xilinx FPGAs -- they're not going to just throw away a three billion dollar business.

Lattice is probably the next biggest. There's also Microchip (< Microsemi < Actel), Quicklogic, and Gowin.

Nobody really came close to competing with Altera / Xilinx at the high end, though.

[+] duskwuff|5 years ago|reply
Not a lot. Actel was acquired by MicroSemi in 2010, and MicroSemi was in turn acquired by Microchip in 2018.

There's a couple of upstarts in China like Gowin and Anlogic, but they haven't made much of an impact in the larger market yet.

[+] gpderetta|5 years ago|reply
Interestingly, Xilinx owns Solarflare. I wonder if that was part of the appeal.
[+] tutanchamun|5 years ago|reply
Yeah, I thought so too since Nvidia owns Mellanox and Intel having their own NICs, OmniPath etc.
[+] andy_ppp|5 years ago|reply
Could programmable AI chips compete with Graphics Cards and TPUs or is it futile to try?
[+] ninjaoxygen|5 years ago|reply
In the stock trading world, HFT on FPGA is closer to the edge than GPU / TPU solutions and they are using ML models, if you count that as AI. When the logic and the NIC are on the same hardware, it's really fast. An ASIC would be even faster, but you can't really iterate on that.
[+] heliophobicdude|5 years ago|reply
Perhaps they would not make good competition. FPGAs have been known to be slower than ASICs. But then again, perhaps some other company will find a good use for rapidly changing IC design.
[+] galangalalgol|5 years ago|reply
Their new Versal line puts a TPU on the die with the FPGA. Great for inference, especially if you want to use the fabric to quickly extract features and then infer from feature space.
[+] teleforce|5 years ago|reply
For those in the ASIC and chip design industry, two of the largest chip companies (Intel and AMD) buying two of the largest FPGA companies was inevitable; it was just a matter of "when" rather than "if".

I think the more interesting question is what they are going to do proactively with these mergers, rather than just sitting on them.

I really hope their respective CEOs will take a page from the open source Linux/Android and GCC/LLVM revolutions. I'd say the chip-making companies are the ones that benefit most from these open source movements, not the end users. To understand this situation we need to understand the economics of complementary goods [1].

In the case of chip makers, if the cost of designing/researching/maintaining an OS like Linux/Android and the compiler infrastructure is minimized (i.e. close to zero), they can basically sell their processors at a premium price with handsome profits. If, on the other hand, the OSes and compilers are expensive, their profit will be inversely proportional to the complementary elements' (e.g. OSes and compilers) prices.
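That complementary-goods argument can be put in toy numeric form. The model below is deliberately crude and every number in it is made up for illustration:

```python
def chip_demand(budget, chip_price, complement_price):
    """Toy complementary-goods model: buyers pay for the chip *plus* its
    complements (OS, compiler, design tools), so demand for the chip
    depends on the total system cost, not the chip price alone. Demand
    here is just the budget left over after the system cost."""
    system_cost = chip_price + complement_price
    return max(0, budget - system_cost)

# Cheapening the complement (e.g. free open-source tooling) raises chip
# demand exactly as much as cutting the chip's own price would.
with_paid_tools = chip_demand(1000, 400, 300)  # 300
with_free_tools = chip_demand(1000, 400, 0)    # 600
```

However simplified, this is the mechanism behind "commoditize your complement": the chip vendor captures the value freed up when the tools become cheap.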

Unfortunately as of now, the design tools or CAD software for hardware design and programming, and also parallel processing design tools are prohibitively expensive, disjointed and cumbersome (hence expensive manpower), and if you're in the industry you know that it's not an exaggeration.

Having said that, I think it's the best for Intel/AMD and the chip design industry to fund and promote robust free and open source software development tools for their ASIC design including CPU/GPU/TPU/FPGA combo design.

IMHO, ETH Zurich's LLHD [2] and Chris Lattner's LLVM effort on MLIR [3] are moving in the right direction for pushing the envelope and consolidating these tools (i.e. one design tool to rule them all). If any Intel or AMD folks are reading this, you need to knock on your CEO/CTO's door and convince them to make these complementary commodities (design and programming tools) as good and as cheap as possible, or better yet, free.

[1]https://www.jstor.org/stable/2352194?seq=1

[2]https://iis.ee.ethz.ch/research/research-groups/Digital%20Ci...

[3]https://llvm.org/devmtg/2019-04/slides/Keynote-ShpeismanLatt...

[+] MayeulC|5 years ago|reply
I don't recall where I read this, but hardware vendors have been trying to commoditize software, and vice versa.

It's really obvious when you think about it. If you sell nails, you want to make sure that everyone has or can afford a hammer, and hammer manufacturers like to make sure that there is a large supply of compatible nails.

As much as I would like to see it, I am not sure the equation is that simple in the case of CAD software. Sure, it would make it easier to use FPGAs, but it would also, at a stretch, make it easier to create competing products.

I still think it's worth it, and wish the bitstream format was documented, at the very least.