item 7140898

AMD reveals its first ARM processor: 8-core Opteron A1100

306 points | shawndumas | 12 years ago | arstechnica.com | reply

191 comments

[+] quackerhacker|12 years ago|reply
I gotta give AMD massive credit... while I have mainly used Intel in my setups, AMD has really pushed their offerings.

First, I remember everyone driving up the price of AMD video cards just because of Bitcoin mining.

Second they got the backing in PS4 and Xbox One hardware.

Now an ARM 8-core CPU... although I find the clock speed (2GHz) kinda underwhelming, AMD's pricing would still entice me to buy 2 for the price of 1 Intel i7

[+] SwellJoe|12 years ago|reply
How about also comparing the power usage? At 25 watts, you can get three of these for one six core Intel CPU; so you're at 24 cores at 2 GHz vs 6 cores at 3 GHz (and probably still at a lower price). The GHz can't really be directly compared, though. Even comparing GHz across Intel product generations isn't useful. I have a 3.16 GHz Core 2 Duo in my desktop that I think (I haven't really benchmarked, but I've run a Litecoin miner on both for testing) does about half the work of the 3.2 GHz i7 in my laptop.
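
The caveat about comparing GHz across designs can be made concrete: useful throughput is roughly cores × clock × instructions per cycle, and IPC varies a lot between microarchitectures. A toy sketch in Python, with made-up IPC figures purely for illustration:

```python
# Rough per-chip throughput model: cores * GHz * IPC.
# The IPC figures below are illustrative placeholders, not benchmarks.
def relative_throughput(cores, ghz, ipc):
    """Unitless score for comparing CPU configurations."""
    return cores * ghz * ipc

# Three hypothetical 25 W 8-core ARM parts vs one six-core x86 part,
# assuming the x86 core retires twice as much work per cycle:
arm_score = relative_throughput(cores=3 * 8, ghz=2.0, ipc=1.0)
x86_score = relative_throughput(cores=6, ghz=3.0, ipc=2.0)

print(arm_score, x86_score)
```

Whichever IPC numbers you plug in decide the winner, which is exactly why raw GHz comparisons mislead.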

All that said, I have a 16 core AMD server in colo that is running at about 3% usage across all CPUs, and yet it is slow as hell because the disk subsystem can't keep up (replacing the spinning disks with SSDs as we speak). The reality is that CPU is not the bottleneck in the vast majority of web service applications. Memory and disk subsystems are the bottlenecks in every system I manage.

So, I love the idea of a low-power, low-cost CPU that still packs enough of a punch to work in virtualized environments. Dedicating one of these cores to each of your VMs would be pretty nice, I think.

[+] gaius|12 years ago|reply
Clock speed is only relevant when comparing the same CPU.

I have been having this argument since the 1980s in the school playground, when kids with Spectrums would claim their 4 MHz Z80s were faster than 2 MHz 6502s...

[+] NolF|12 years ago|reply
> although I find the clock speed (2GHz) kinda underwhelming

Remember that clock speed is not necessarily everything. AMD may be able to get more work done with those 2 GHz than a Snapdragon 800 might, using similar or less energy.

[+] mmanfrin|12 years ago|reply
They got that Mac Pro spot, as well; half the price of the base model is from the GFX cards.
[+] ekianjo|12 years ago|reply
> Second they got the backing in PS4 and Xbox One hardware.

Does not mean much. Their APU used in these consoles is very weak compared to what we have in mid to high end PC graphics cards. They were obviously chosen for their price there, not for the performance (which is MEH at best).

[+] blahbl4hblahtoo|12 years ago|reply
At 2 GHz they can probably produce large quantities. It sounds like they went for a "safe" fabrication process so that the initial rollout is as defect-free as it can be.

I really hope MS ports windows server to it...

[+] userbinator|12 years ago|reply
I think reusing the Opteron name is really not a good idea, since now there'll be x86 Opterons and ARM Opterons. Maybe Apteron would've been a better choice...
[+] alexandros|12 years ago|reply
"Apteron" in Greek (Άπτερον) means "the one with no wings." Other things being equal, maybe they should go with something else :)
[+] bostonpete|12 years ago|reply
How about "Opterarm"? :-)
[+] skylan_q|12 years ago|reply
I agree it could be a point of confusion, but anyone buying an Opteron would probably be looking at the CPU specs anyways.
[+] Zenst|12 years ago|reply
Fair point, and some excellent alternatives have been mooted.

I think AMD are positioning this for the server market (though I would love a nice cheap desktop with this chip). With that they are leveraging their only real asset, the branding, and I feel it will not water down the Opteron range but help it live on, given they are moving away from the x86 area.

[+] ChuckMcM|12 years ago|reply
Maybe they should call it "Letniuoywercs" I don't know how to pronounce it either :-)

I agree that re-using the name is bad mojo from a marketing perspective, too many people will be caught off guard by the lack of compatibility with the x86 chip set.

[+] rbanffy|12 years ago|reply
Unless non-x86 desktop PCs become mainstream in the consumer market, I don't think there is much risk of confusion. The channels will be completely different and I don't think it will be easy to buy a motherboard with an Opteron A anytime soon.
[+] colanderman|12 years ago|reply
Dual 10 GbE built-in? Sweet. Wonder what the price will be; a dual 10 GbE Intel card goes for $500 alone.

More importantly – will they provide zero-copy I/O like you can get with Intel network cards via their DPDK [1] or PF_RING/DNA [2]?

[1] http://dpdk.org/ [2] http://www.ntop.org/products/pf_ring/dna/

[+] donavanm|12 years ago|reply
At $400 I assume you're thinking of something like an Intel X520-DA2 plus optics. If you can tolerate the power you can do an 82599 with dual PHYs for more like $150-200.

Obviously I have no idea on the network controller or its sdk/driver support.

[+] eroullit|12 years ago|reply
DPDK is Intel's turf and PF_RING only supports igb/ixgbe/e1000 drivers. For out-of-the-box usage, netmap or Linux PACKET_MMAP (though not entirely zero-copy) should be possible.
[+] thrownaway2424|12 years ago|reply
I don't think that's "sweet"; I think that's a bad decision. What if I don't need two of those per 10 ARM cores? Now I'm just paying for gates I don't need.
[+] dragontamer|12 years ago|reply
I don't think people "get it".

This is a microserver, designed to connect I/O-bound resources to each other. Imagine a cache like Squid running on this thing. Imagine multiple RAID-0 SSD drives on one side, and 20 Gbps going out through the network.

This is NOT a computationally difficult task. For computationally difficult tasks, you have 8-core $2000 E5 Xeons (which get more and more efficient the bigger the workload you have).

However, filling your datacenter with $2000 Xeons so that they can spend 0.01% of their CPU power copying data from SSD drives to the network is a waste of money and energy.

The A1100 looks like it will be a solution in the growing microserver space. As Facebook and Google scale, they have learned that a large subset of their datacenters are I/O bound and that they're grossly overspending on CPU power.

Big CPUs -> Big TDPs -> higher energy costs.

This machine is designed with big I/O throughput (multiple 10GbE and 8 SATA ports on-chip), with the barest minimum CPU possible to save on energy costs.
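
The energy argument sketches out like this (the Xeon TDP and the electricity rate below are assumed round numbers, not vendor figures):

```python
# Toy yearly electricity cost for a fleet of I/O-bound nodes.
# The 95 W Xeon TDP and the $/kWh rate are illustrative assumptions.
HOURS_PER_YEAR = 24 * 365
PRICE_PER_KWH = 0.10  # USD, assumed flat rate

def yearly_energy_cost(watts, n_nodes):
    """Cost in USD per year for n_nodes each drawing `watts` at full tilt."""
    kwh = watts * n_nodes * HOURS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

# 100 nodes whose only job is moving bytes from SSDs to the NIC:
xeon_cost = yearly_energy_cost(watts=95, n_nodes=100)  # assumed Xeon TDP
arm_cost = yearly_energy_cost(watts=25, n_nodes=100)   # A1100's 25 W envelope

print(xeon_cost, arm_cost)
```

The absolute dollar figures are toy numbers, but the ratio tracks the TDP ratio, which is the point.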

The upcoming competitors in this market are HP Moonshot (Intel Atoms), AMD Opteron A1100, and... that's about it. Calxeda's Boston Viridis has died, so that is one less competitor in this niche.

[+] elipsey|12 years ago|reply
It's a relief that AMD has a performance/watt alternative to Bulldozer. I sure hope they can stay in business so I have someone to buy hardware from that doesn't fuse off features to screw us out of a 65% margin.

Either way, I'm hoping ARM64 will trickle up from iThingies to the desktop so I can buy a CPU with Virtual MMIO without paying an extra hundred bucks.

[+] ANTSANTS|12 years ago|reply
Maybe their desktop and server CPUs aren't so hot right now, but I don't think you have to worry about AMD for a while. All of the current generation of consoles have AMD GPUs, those GPUs are on the same die as an AMD CPU for two of the three (PS4 and Xbone), AMD GPUs remain competitive with NVidia's offerings, and they seem to be winning mindshare with their lower-level Mantle graphics API.

EDIT: And the whole Bitcoin-mining thing (or Litecoin/Dogecoin mining thing, these days), as mentioned in another thread.

[+] jeffdavis|12 years ago|reply
Does the ARM architecture have anything like the nested page tables in recent x86-64 chips? Or is that an orthogonal processor feature that is not required (or forbidden) in a particular implementation of ARM/x86-64?

To make a real entrance into the server market, I would expect good virtualization support to be nearly a requirement.

[+] robot|12 years ago|reply
It has support for virtualization. There is a two-stage translation where the first stage handles the guest operating system's mappings and the second stage handles the hypervisor's mappings. Both stages have nested page tables.

Also there is an IOMMU implementation for supporting virtualization for IO. For example, the IOMMU and CPU MMU page table mappings are synchronized such that a DMA controller would also adhere to page table mappings set up for the CPU.
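
That two-stage scheme can be illustrated with a toy model: stage 1 maps guest-virtual addresses to intermediate-physical ones, stage 2 maps those to host-physical, and the hardware composes the two. A simplified Python sketch with single-level dict "tables" and a 4 KiB page size (real hardware walks multi-level tables):

```python
# Toy two-stage address translation in the style of ARM's stage-1/stage-2
# virtualization scheme. Dicts stand in for page tables.
PAGE_SIZE = 4096  # 4 KiB pages

# Stage 1 (guest OS): guest-virtual page -> intermediate-physical page.
stage1 = {0x1000 // PAGE_SIZE: 0x5000 // PAGE_SIZE}
# Stage 2 (hypervisor): intermediate-physical page -> host-physical page.
stage2 = {0x5000 // PAGE_SIZE: 0x9000 // PAGE_SIZE}

def translate(gva, s1, s2):
    """Translate a guest-virtual address to a host-physical address."""
    page, offset = divmod(gva, PAGE_SIZE)
    ipa_page = s1[page]        # stage-1 walk (guest's page tables)
    hpa_page = s2[ipa_page]    # stage-2 walk (hypervisor's page tables)
    return hpa_page * PAGE_SIZE + offset

print(hex(translate(0x1234, stage1, stage2)))  # -> 0x9234
```

The IOMMU point then amounts to device DMA going through the same stage-2 mapping, so a guest-programmed DMA engine sees the same memory layout the guest CPU does.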

[+] msoad|12 years ago|reply
I don't know anything about chips, but I know the ARM architecture has been around for decades. Why is it hot again? I get the point of using it in smartphones and tablets, but why should servers use ARM?
[+] rayiner|12 years ago|reply
I don't see AMD's play here. What value do they add with a processor that they don't design, don't fab, and can't produce in the kind of volume tablet and phone chips get produced?
[+] icegreentea|12 years ago|reply
It's possible that this is their first foray into ARM with a core license to get a feel for it, and that their next iteration will be with an architecture license (with which they can add some design value).
[+] gnoway|12 years ago|reply
I don't understand your comment. "Doesn't design and doesn't fab" describes every ARM licensee. Also, this chip isn't going anywhere near phones or tablets. Did you look at the specifications?
[+] zik|12 years ago|reply
ARM designs the "IP core" of the processor but doesn't fab anything themselves. AMD brings their own SOC design and fab process.
[+] fleitz|12 years ago|reply
AMD's hook is servers + GPU.
[+] strlen|12 years ago|reply
They seem to be aiming this at the server market: the volumes will be lower than tablet and phone chips.
[+] lazyjones|12 years ago|reply
Where is the market for this, apart from Facebook (Open Compute Project)? Is it set to compete with CPUs like the Xeon E3-1220L series? Will it end up in HP's Moonshot? I thought that bigger boxes with virtualization would be more economical for most uses than closets full of low-power CPUs.

Perhaps I/O is the key here: N of these A1100 CPUs can easily saturate N x 2 x 10GbE, while a single box with 64+ cores probably cannot push 16 x 10GbE.
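
The arithmetic behind that hunch, with an assumed (made-up) ~100 Gbps internal I/O ceiling for the big box:

```python
# Sketch: can each topology actually push its NICs at line rate?
# The big box's 100 Gbps ceiling is an assumption for illustration.
def achievable_gbps(ports, gbps_per_port, io_cap_gbps):
    """Line-rate demand, clipped by the box's internal I/O ceiling."""
    return min(ports * gbps_per_port, io_cap_gbps)

# Eight microserver nodes, 2x10GbE each; on-chip I/O covers 20 Gbps per node:
micro_total = sum(achievable_gbps(2, 10, io_cap_gbps=20) for _ in range(8))
# One 64-core box with 16x10GbE behind an assumed ~100 Gbps I/O ceiling:
big_total = achievable_gbps(16, 10, io_cap_gbps=100)

print(micro_total, big_total)  # 160 vs 100 under these assumptions
```

The microserver fleet wins only if the big box's internal fabric really can't keep up; with a generous enough ceiling the comparison flips back to whichever is cheaper per Gbps.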

[+] rbanffy|12 years ago|reply
The developer board runs Fedora. Any workload that does not depend on a specific CPU architecture (mostly everything but Windows) should run on it. The dev board is there to make it possible for developers to fine-tune their implementations so they run well on the new platform.

Will server makers buy it? That remains to be seen.

Making a dev board available (let's hope it's also cheap enough for hobbyists to buy) is rather clever. Without software tuned for it, the chip could fail in the market like Sun's Niagara and Intel's Itanium did.

[+] venomsnake|12 years ago|reply
Probably it's a "build it and they will come" strategy.
[+] pippy|12 years ago|reply
We're getting close to the age where you can buy an off-the-shelf AMD desktop machine with Linux, a good graphics card, and the same performance as an x86-64.
[+] zanny|12 years ago|reply
I don't think 8 A57 cores at 2 GHz come anywhere close to 4 x86 cores at 3.5+ GHz.

Clock for clock, pipe for pipe (15 stages in A57, 14-19 in Haswell), x86 still wins because of per-instruction operand volume.

Though I wonder how well this chip would perform at 50 W. Double the watts, maybe another 1.5 GHz; might be viable for a cheap HTPC.

[+] orik|12 years ago|reply
You mean an ARM machine?
[+] bentcorner|12 years ago|reply
Where are things wrt the rest of the experience? Can I run FF, Gimp, OpenOffice, arbitrary distros? What about Steam?
[+] GarrettBeck|12 years ago|reply
I have a feeling history is going to repeat itself. Statements like "AMD believes that it will be the leader of this ARM Server market" reminds me of the DRAM boom-and-bust from 2006-2009. The new (old) hot technology froths the market into a frenzy and semiconductor fabs start rushing to get a slice of the action.

Does Qimonda ring a bell to anyone?

[+] fleitz|12 years ago|reply
This may be a much bigger threat to Intel than AMD64 was.
[+] chx|12 years ago|reply
> Two 10 Gigabit Ethernet ports.

Within a 25W envelope??? I thought dual 10GbE chips consume 10-15W.

[+] donavanm|12 years ago|reply
Optics are a big chunk also. A new controller with a DAC 10GbE PHY is probably more like 7-8 W. They only need to do the controller on this chip; a couple of watts for the PHY are part of the motherboard's budget.
[+] wmf|12 years ago|reply
Those chips are old.
[+] thomasfl|12 years ago|reply
While this seems to be targeted at small low-power web servers, I really want low-powered, cool, low-temperature laptops. Laptops with hot Intel processors end up cooking my body if I actually keep my laptop on my lap.
[+] bitL|12 years ago|reply
25W :-(

As much as I want AMD to get ahead, this is not good news for efficient ARM servers.

[+] hosh|12 years ago|reply
Cool. I can finally build that ZFS plug computer I wanted :-D
[+] protomyth|12 years ago|reply
I do wonder when we'll be able to order a motherboard and how much information will be available to create new drivers.