As someone who's never owned a Mac Pro, can someone give an indication of how often "power" users upgrade their CPUs?
When I was a Windows desktop user (up until about 8 years ago) I found that every time I wanted to upgrade the internals, the CPU socket had changed along with the mobo chipset, necessitating an upgrade of not just the CPU, but the mobo and memory too.
I can understand swapping out graphics cards fairly regularly, but are there that many users who completely gut their Mac Pro on a regular basis?
I've used both Mac Pro and HP Z800 workstations. I currently use an HP Z800 that is probably 3 or 4 years old, but I also used a Mac Pro as recently as 2 years ago.
I've never upgraded a CPU in a workstation.
The CPUs I buy are chosen for performance at a sane (relative) price. I'm currently running a couple of Xeon X5560 CPUs, I think they cost £1,600 each at the time.
I also buy as much RAM as is reasonable for the box. I started this Z800 on 24GB of ECC RAM, which was dirt cheap (relatively: the many slots on the motherboard meant I could get lots of the smaller modules, which were cheap). When the price of the larger modules later dropped, I upped it to 192GB.
As for graphics card... now that is something I upgrade every 2 years or so. I started on a Quadro FX1800 and am now on a Quadro 6000.
Aren't the names of these things wonderful? You know you get more when the number is HUGE!
I think this is basically the norm: Upgrade RAM, upgrade GPU... leave CPU until you replace the whole box.
PS: And whilst I'm here, the Mac Pro is a good design... but for the wrong product. It's a new design conceived with airflow, cooling and silence in mind, and it would suit desktop users well... but desktop requirements are not workstation requirements, and not being able to replace GPUs is a non-starter. I'll skip this design of Mac Pro entirely, or at least until a very wide range of upgradeable GPUs is available. The RAM is also going to be costly, given the few slots available and the higher price of denser modules. I love the design, but this is the wrong class of computer for the constraints that come from the execution of that design.
Apple does not support CPU upgrades in the Mac Pro. You need non-standard tools, it's not as easy as in a typical PC, and you occasionally have to hack a firmware update depending on which Mac Pro model and which new CPU you want to use. The firmware hack is needed because Apple does not update the firmware in older models to support newer CPUs, even though the hardware is otherwise capable of it.
I think people who upgrade the CPU in a Mac Pro are a minority of Mac Pro users. I'm probably not a typical Mac Pro user (a "mid-range" tower would've been fine for me), but I bought a 2007 Mac Pro with the least expensive CPU option, then in 2012 upgraded the CPUs in it to extend its life. The Xeon CPUs I used were ones Apple never offered, but they happened to work in the machine I had.
Also: I upgraded the graphics card several years ago to an Apple-supported card (ATI 5770), though it isn't officially supported in my Mac Pro model (it works fine anyway). I also did a hack to get the current OS X version, Mountain Lion, running on my machine (it isn't supported by Apple either). My Mac Pro borders on being a Hackintosh at this point.
Xeons never shipped in sufficient volume to make up for the huge discounts that system assemblers like Apple, Dell, HP &c were getting; I remember looking at upgrading the CPU in a ~2009-era Mac Pro, and you couldn't get the fastest CPUs for significantly less money than just upgrading the entire box.
I could be wrong, as I've been out of the powerful-beige-box world for quite a few years. However, when I was in it, the upgrade-and-replace cycle always involved consumer-grade CPUs, and it was like this for everyone I knew. The Xeon isn't really targeting the same market. Rock-solid stability and sustained performance weren't what the gaming crowd I was in was after: we wanted massive overclocks, super-hot chips (that died hot deaths) and a GPU clocked to the threshold of showing artefacts. The frame rate must stay high at all costs. This is nothing like what the Xeon targets. Just a thought, and again, I may be wrong.
Not a Mac guy, but I believe the Mac Pro normally uses a Xeon chip, and Xeons have used fairly stable sockets over the years. For example, Socket 604 was used by Xeons for 5 years, and LGA 771 has been in use from 2006 to the present.
Geekbench doesn’t use the GPU; Geekbench also doesn’t use AVX, so half of the floating-point performance of recent cores is left on the table.
Actually, looking at disassembly of Geekbench, they barely even use SSE vectors, so it’s really nearly 3/4 of the floating-point performance that goes unmeasured by the benchmark.
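Those fractions fall straight out of SIMD register widths. As a rough sketch of the arithmetic (assuming 256-bit AVX as the per-instruction peak for doubles and ignoring other throughput factors):

```python
# Peak double-precision throughput per instruction scales with SIMD width.
scalar_lanes = 1  # one 64-bit double per scalar instruction
sse_lanes = 2     # a 128-bit SSE register holds two doubles
avx_lanes = 4     # a 256-bit AVX register holds four doubles

# Skipping AVX but using SSE leaves half of peak unmeasured:
unused_without_avx = 1 - sse_lanes / avx_lanes    # 0.5

# Mostly-scalar code leaves nearly three quarters unmeasured:
unused_with_scalar = 1 - scalar_lanes / avx_lanes  # 0.75

print(unused_without_avx, unused_with_scalar)  # 0.5 0.75
```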
One could perhaps argue in the defense of the benchmark that many consumer applications don’t take advantage of these ISA extensions, but you definitely cannot argue that pro apps don’t use them; this alone makes Geekbench (and in fairness, most benchmarks, which tend to be pretty naive) extremely misleading for evaluation of “pro” hardware. Ultimately there’s only one benchmark that’s useful for such evaluation: actually exercising the workload that you intend to run on the machine.
Geekbench also doesn’t use available libraries that offer high-performance implementations of the computations in question (of which any serious application developer would avail themselves). Looking at numbers for released hardware, the “LU decomposition” results in Geekbench make clear that they are either using far too small of a problem size or using a naive C implementation of the operation and measuring the performance of whatever code their compiler generated. If they wanted to show the performance of the system, they would use one of the available library implementations.
Such benchmarks tell you something, but it isn’t really how good hardware is; it’s closer to a measure of how well a compiler can salvage naive C code when it isn’t allowed to take advantage of the available hardware features. I’m not sure what that’s representative of, but I know that it’s not representative of professional workloads.
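The gap between a naive compiled loop and a tuned library is easy to demonstrate. As a rough illustration (in Python rather than C, and using NumPy's BLAS-backed matrix multiply as the "library implementation" stand-in — the exact speedup will vary by machine):

```python
import time
import numpy as np

def naive_matmul(a, b):
    """Textbook triple-loop matrix multiply: the kind of naive code a
    benchmark measures when it ignores tuned library implementations."""
    n, p, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for k in range(p):
            aik = a[i][k]
            for j in range(m):
                out[i][j] += aik * b[k][j]
    return out

n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
naive = naive_matmul(a.tolist(), b.tolist())
t_naive = time.perf_counter() - t0

t0 = time.perf_counter()
tuned = a @ b  # dispatches to a vectorized, cache-blocked BLAS routine
t_blas = time.perf_counter() - t0

assert np.allclose(naive, tuned)  # same answer, wildly different speed
print(f"naive: {t_naive:.3f}s  BLAS: {t_blas:.5f}s")
```

Both paths compute the same result; only one of them reflects what the hardware can actually do when the available SIMD units and cache hierarchy are exploited.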
Don't be too sure of that... plenty of pros want upgradeable systems. Most of the Mac Pro users I know tend to keep them 4-5 years or more, with an upgrade cycle every 12-24 months (faster CPU, more RAM, an SSD, etc.) in the same box. The new form factor really limits that, and would probably be better served with a consumer-grade CPU. This is much closer to the mid-range desktop a lot of people have been asking for, except it's going to cost an arm and a leg.
>> "and includes the latest four-channel ECC DDR3 memory running at 1866 MHz to deliver up to 60GBps of memory bandwidth.* "
It should be "running at 933⅓ MHz". A DDR3-1866 module runs at 933⅓ MHz and 1866⅔ MT/s, for ~15GB/s per channel; they have quad-channel DDR3-1866. (If you disagree, check the DDR3 spec. The article at apple.com is simply wrong in saying "running at 1866 MHz". I know it is a minor point, but when you are listing specs, you have to be precise.)
Update: sorry, saying "running at 1866 MHz" is OK. I stand corrected.
Absolutely nobody uses that scheme for referring to memory speeds. Everybody uses the effective clock speed, and there's really nothing technically wrong about labeling that with MHz, since Hz isn't restricted to referring to only sinusoidal clock signals. If you're going to be pedantic, then you ought to make clear when you're referring to the memory clock speed or the I/O clock speed or the transfer rate. For anyone who is interested only in using a memory module and not implementing the memory bus, 1866 is the only number that matters.
You can also add the dual naming scheme that gives us gems like "PC3-14400" where the latter number is the MBps of bandwidth or something similarly confusing to 95% of people, myself included.
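For anyone untangling the naming schemes above, the arithmetic for DDR3-1866 (as in the Apple spec quoted earlier) works out roughly as follows:

```python
# DDR3-1866 naming, worked out from the transfer rate down.
transfer_rate = 1866 + 2 / 3        # MT/s: the "1866" in DDR3-1866
io_clock = transfer_rate / 2        # 933.33 MHz: DDR moves two transfers per clock
bus_width = 8                       # bytes per transfer (64-bit channel)
channels = 4                        # quad-channel, per the Apple spec

per_channel = transfer_rate * bus_width    # ~14933 MB/s: the "PC3-14900" label
total = per_channel * channels / 1000      # ~59.7 GB/s: Apple's "up to 60GBps"

print(f"I/O clock: {io_clock:.2f} MHz")
print(f"Per channel: {per_channel:.0f} MB/s")
print(f"Quad-channel total: {total:.1f} GB/s")
```

So the "933⅓ MHz", "1866⅔ MT/s", "PC3-14900" and "60GBps" figures in this thread are all the same module described at different points in that chain.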