I gotta give AMD massive credit... while I have mainly used Intel in my setups, AMD has really pushed their offerings.
First, I remember everyone driving up the price of AMD video cards just because of Bitcoin mining.
Second, they won the PS4 and Xbox One hardware contracts.
Now an 8-core ARM CPU... although I find the clock speed (2 GHz) kinda underwhelming, AMD's pricing would still entice me to buy two for the price of one Intel i7.
How about also comparing the power usage? At 25 watts, you can get three of these for one six-core Intel CPU, so you're at 24 cores at 2 GHz vs. 6 cores at 3 GHz (and probably still at a lower price). The GHz can't really be directly compared, though. Even comparing GHz across Intel product generations isn't useful. I have a 3.16 GHz Core 2 Duo in my desktop that I think does about half the work of the 3.2 GHz i7 in my laptop (I haven't really benchmarked them, but I've run a Litecoin miner on both for testing).
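To put those numbers side by side (a quick sketch; the clocks and wattages are the figures assumed in this thread, and the Intel TDP is my guess, not a spec):

    #include <stdio.h>

    int main(void) {
        /* Assumed figures from this thread, not measurements:
           three 8-core, 2 GHz, 25 W A1100s vs. one six-core,
           3 GHz Intel part at an assumed ~80 W TDP.           */
        printf("AMD:   %d cores, %.0f core-GHz, %d W\n",
               3 * 8, 3 * 8 * 2.0, 3 * 25);
        printf("Intel: %d cores, %.0f core-GHz, ~%d W\n",
               6, 6 * 3.0, 80);
        /* Core-GHz ignores IPC, so this is only a power-budget
           sanity check, not a performance comparison.          */
        return 0;
    }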
All that said, I have a 16-core AMD server in colo that is running at about 3% usage across all CPUs, and yet it is slow as hell because the disk subsystem can't keep up (we're replacing the spinning disks with SSDs as we speak). The reality is that CPU is not the bottleneck in the vast majority of web service applications. Memory and disk subsystems are the bottlenecks in every system I manage.
So I love the idea of a low-power, low-cost CPU that still packs enough of a punch to work in virtualized environments. Dedicating one of these cores to each of your VMs would be pretty nice, I think.
Clock speed is only relevant when comparing the same CPU.
I have been having this argument since the 1980s in the school playground, when kids with Spectrums would claim their 4 MHz Z80s were faster than 2 MHz 6502s...
> although I find the clock speed (2 GHz) kinda underwhelming
Remember that clock speed is not necessarily everything. AMD may be able to get more work done with those 2 GHz than a Snapdragon 800 might, using similar or less energy.
> Second, they won the PS4 and Xbox One hardware contracts.
That does not mean much. The APU used in these consoles is very weak compared to what we have in mid-to-high-end PC graphics cards. They were obviously chosen for their price, not for their performance (which is meh at best).
At 2 GHz they can probably produce large quantities. It sounds like they went for a "safe" fabrication process so that the initial rollout is as defect-free as it can be.
I think reusing the Opteron name is really not a good idea, since now there'll be x86 Opterons and ARM Opterons. Maybe Apteron would've been a better choice...
Fair point, and some excellent alternatives have been mooted.
I think AMD are positioning this for the server market (though I would love a nice cheap desktop with this chip). With that they are leveraging their only real asset, their branding, and I feel it will not water down the Opteron range but help it live on, given they are moving away from the x86 area.
Maybe they should call it "Letniuoywercs". I don't know how to pronounce it either :-)
I agree that re-using the name is bad mojo from a marketing perspective; too many people will be caught off guard by the lack of compatibility with the x86 chips.
Unless non-x86 desktop PCs become mainstream in the consumer market, I don't think there is much risk of confusion. The channels will be completely different and I don't think it will be easy to buy a motherboard with an Opteron A anytime soon.
At $400 I assume you're thinking of something like an Intel X520-DA2 plus optics. If you can tolerate the power, you can do an 82599 with dual PHYs for more like $150-200.
Obviously I have no idea about the network controller or its SDK/driver support.
DPDK is Intel's turf, and PF_RING only supports the igb/ixgbe/e1000 drivers.
For out-of-the-box usage, looking at Netmap or Linux PACKET_MMAP (though the latter is not entirely zero-copy) should be possible.
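If anyone wants to try the PACKET_MMAP route, the receive side is roughly the following. A minimal TPACKET_V1 sketch (assumes Linux and CAP_NET_RAW; error handling omitted; the ring geometry is just an example):

    #include <stdio.h>
    #include <poll.h>
    #include <sys/socket.h>
    #include <sys/mman.h>
    #include <arpa/inet.h>
    #include <linux/if_packet.h>
    #include <linux/if_ether.h>

    int main(void) {
        /* Raw socket that sees every frame on the machine. */
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));

        /* Ask the kernel for a shared RX ring: 64 blocks of 4 KiB,
           two 2 KiB frame slots per block.                         */
        struct tpacket_req req = {
            .tp_block_size = 4096, .tp_block_nr = 64,
            .tp_frame_size = 2048, .tp_frame_nr  = 128,
        };
        setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req));

        /* Map the ring once; the kernel copies each packet into it and
           we read it in place: one copy, no per-packet syscall.       */
        size_t len = (size_t)req.tp_block_size * req.tp_block_nr;
        char *ring = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);

        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        for (unsigned i = 0; ; i = (i + 1) % req.tp_frame_nr) {
            struct tpacket_hdr *hdr = (struct tpacket_hdr *)
                (ring + (size_t)i * req.tp_frame_size);
            while (!(hdr->tp_status & TP_STATUS_USER))
                poll(&pfd, 1, -1);             /* wait for the kernel */
            printf("got a %u-byte frame\n", hdr->tp_len);
            hdr->tp_status = TP_STATUS_KERNEL; /* hand the slot back  */
        }
    }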
I don't think that's "sweet"; I think that's a bad decision. What if I don't need two of those per 10 ARM cores? Now I'm just paying for gates I don't need.
This is a microserver, designed to connect I/O-bound resources to each other. Imagine a cache like Squid running on this thing. Imagine multiple RAID-0 SSD drives on one side and 20 Gbps going out through the network.
This is NOT a computationally difficult task. For computationally difficult tasks, you have 8-core $2000 E5 Xeons (which get more and more efficient the bigger the workload you have).
However, filling your datacenter with $2000 Xeons so that they can spend 0.01% of their CPU power copying data from SSD drives to the network is a waste of money and energy.
The A1100 looks like it will be a solution in the growing microserver space. As Facebook and Google scale, they have learned that a large subset of their datacenters are I/O-bound and that they're grossly overspending on CPU power.
Big CPUs -> Big TDPs -> higher energy costs.
This machine is designed for big I/O throughput (multiple 10GbE and 8 SATA ports on-chip) with the barest minimum of CPU to save on energy costs.
The upcoming competitors in this market are HP Moonshot (Intel Atoms), the AMD Opteron A1100, and... that's about it. Calxeda's Boston Viridis has died, so that is one less competitor in this niche.
It's a relief that AMD has a performance-per-watt alternative to Bulldozer. I sure hope they can stay in business so I have someone to buy hardware from that doesn't fuse off features to screw us out of a 65% margin.
Either way, I'm hoping ARM64 will trickle up from iThingies to the desktop so I can buy a CPU with Virtual MMIO without paying an extra hundred bucks.
Maybe their desktop and server CPUs aren't so hot right now, but I don't think you have to worry about AMD for a while. All of the current generation of consoles have AMD GPUs, those GPUs are on the same die as an AMD CPU for two of the three (PS4 and Xbone), AMD GPUs remain competitive with NVidia's offerings, and they seem to be winning mindshare with their lower-level Mantle graphics API.
EDIT: And the whole Bitcoin-mining thing (or Litecoin/Dogecoin mining thing, these days), as mentioned in another thread.
Does the ARM architecture have anything like the nested page tables in recent x86-64 chips? Or is that an orthogonal processor feature that is not required (or forbidden) in a particular implementation of ARM/x86-64?
To make a real entrance into the server market, I would expect good virtualization support to be nearly a requirement.
It has support for virtualization. There is a two-stage translation, where the first stage handles the guest operating system's mappings and the second stage handles the hypervisor's. Both stages have nested page tables.
Also, there is an IOMMU implementation to support virtualization of I/O. For example, the IOMMU and CPU MMU page table mappings are synchronized such that a DMA controller also adheres to the page table mappings set up for the CPU.
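To make the two-stage picture concrete, here's a toy model (single-level lookup tables standing in for the real multi-level, hardware-walked ARMv8 tables; purely illustrative):

    #include <inttypes.h>
    #include <stdio.h>

    #define PAGE 4096u

    /* Toy "page tables" mapping page number -> frame number.
       Stage 1 belongs to the guest OS, stage 2 to the hypervisor. */
    static const uint64_t stage1[16] = { [0] = 7, [1] = 3 };
    static const uint64_t stage2[16] = { [3] = 9, [7] = 2 };

    static uint64_t walk(const uint64_t *table, uint64_t addr) {
        return table[addr / PAGE] * PAGE + addr % PAGE;
    }

    int main(void) {
        uint64_t gva = 1 * PAGE + 0x10;    /* guest virtual address   */
        uint64_t ipa = walk(stage1, gva);  /* stage 1: GVA -> IPA     */
        uint64_t pa  = walk(stage2, ipa);  /* stage 2: IPA -> host PA */
        printf("GVA 0x%" PRIx64 " -> IPA 0x%" PRIx64
               " -> PA 0x%" PRIx64 "\n", gva, ipa, pa);

        /* The IOMMU applies the same stage-2 table to device
           addresses, so DMA from a device handed to the guest lands
           in the same host frames the CPU mapping points at.       */
        printf("DMA to IPA 0x%" PRIx64 " -> PA 0x%" PRIx64 "\n",
               ipa, walk(stage2, ipa));
        return 0;
    }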
I don't know anything about chips, but I know the ARM architecture has been around for decades. Why is it hot again? I get the point of using it in smartphones and tablets, but why should servers use ARM?
I don't see AMD's play here. What value do they add with a processor that they don't design, don't fab, and can't produce in the kind of volume that tablet and phone chips are produced in?
It's possible that this is their first foray into ARM with a core license to get a feel for it, and their next iteration will be with an architecture license (through which they can add some design value).
I don't understand your comment. "Doesn't design and doesn't fab" describes every ARM licensee. Also, this chip isn't going anywhere near phones or tablets. Did you look at the specifications?
Where is the market for this, apart from Facebook (Open Compute Project)? Is it set to compete with CPUs like the Xeon E3-1220L series? Will it end up in HP's Moonshot? I thought that bigger boxes with virtualization would be more economical for most uses than closets full of low-power CPUs.
Perhaps I/O is the key here: N of these A1100 CPUs can easily saturate N x 2 x 10GbE, while a single box with 64+ cores probably cannot push 16 x 10GbE (that's 160 Gbit/s, roughly 20 GB/s, which strains a single box's memory and PCIe bandwidth).
The developer board runs Fedora. Any workload that does not depend on a specific CPU architecture (mostly everything but Windows) should run on it. The dev board is there to make it possible for developers to fine-tune their implementations so they run well on the new platform.
Will server makers buy it? That remains to be seen.
Making a dev board available (let's hope it's also cheap enough that hobbyists will buy it) is rather clever. Without software tuned for it, the chip could fail in the market like Sun's Niagara and Intel's Itanium did.
We're getting close to the age where you can buy an off-the-shelf AMD desktop machine with Linux, a good graphics card, and the same performance as an x86-64.
I have a feeling history is going to repeat itself. Statements like "AMD believes that it will be the leader of this ARM Server market" reminds me of the DRAM boom-and-bust from 2006-2009. The new (old) hot technology froths the market into a frenzy and semiconductor fabs start rushing to get a slice of the action.
Optics are a big chunk of that, too. A new controller with a DAC GbE PHY is probably more like 7-8 W. They only need to do the controller on this chip; a couple of watts for the PHY are part of the motherboard's budget.
While this seems to be targeted at small low-power web servers, I really want low-powered, cool, low-temperature laptops. Laptops with hot Intel processors cook my body if I actually keep my laptop on my lap.
Wasn't it someone at Facebook who remarked that they would be interested in ARM CPUs once the frequency is > 2.5 GHz? Also, it seems that Google has a bunch of PA Semi guys, so them working on an ARM clone isn't so far-fetched...
I really hope MS ports Windows Server to it...
More importantly: will they provide zero-copy I/O like you can get with Intel network cards via their DPDK [1] or PF_RING/DNA [2]?
[1] http://dpdk.org/ [2] http://www.ntop.org/products/pf_ring/dna/
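For context, here's roughly what the zero-copy model looks like on the Intel side with DPDK. A minimal polled-receive sketch using current DPDK API names (details vary by version; port 0, one queue, and the pool sizes are illustrative):

    #include <rte_eal.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define RX_BURST 32

    int main(int argc, char **argv) {
        rte_eal_init(argc, argv);   /* hugepages, NIC probing */

        /* Pool of packet buffers the NIC DMAs into directly. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create(
            "mbufs", 8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
            rte_socket_id());

        /* Port 0: one RX queue, no TX queues. */
        struct rte_eth_conf conf = {0};
        rte_eth_dev_configure(0, 1, 0, &conf);
        rte_eth_rx_queue_setup(0, 0, 512, rte_eth_dev_socket_id(0),
                               NULL, pool);
        rte_eth_dev_start(0);

        struct rte_mbuf *bufs[RX_BURST];
        for (;;) {
            /* Poll the NIC ring: no interrupts, no kernel copy;
               we get pointers straight into the mbuf pool.      */
            uint16_t n = rte_eth_rx_burst(0, 0, bufs, RX_BURST);
            for (uint16_t i = 0; i < n; i++)
                rte_pktmbuf_free(bufs[i]); /* ...do work, then free */
        }
        return 0;
    }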
Clock for clock, pipe for pipe (15 stages in the A57, 14-19 in Haswell), x86 still wins because of per-instruction operand volume.
Though I wonder how well this chip would perform at 50 W. Double the watts for maybe another 1.5 GHz; it might be viable for a cheap HTPC.
Does Qimonda ring a bell to anyone?
Within a 25 W envelope? I thought dual 10GbE chips consume 10-15 W.
Edit: found the link http://www.theregister.co.uk/2013/12/16/google_intel_arm_ana...
As much as I wish AMD would get ahead, this is not good news for efficient ARM servers.