top | item 1028795

Intel's "cripple AMD" function

212 points | kierank | 16 years ago | agner.org | reply

80 comments

[+] drewcrawford|16 years ago|reply
I was an intern for AMD a few years ago (these are my views and not AMD's). I was pretty skeptical about AMD's antitrust claims against Intel until I went to work there. I'm as free market as the day is long, but there's a whole untold story of the evil things that go on in the back meeting rooms, even outside of sales, which is where most of the public lawsuit claims are/were.

The thing to remember is that AMD is a small fraction of the size of Intel, and they have to cover the same market segments. If they try to specialize (say, servers, or notebooks), Intel will just sell that segment at a loss. AMD has to cover everything with only a fraction of the people to stay competitive, and it's really hard.

Even while I was there, we had what I suspect (but have no proof) were incidents of people leaking product plans, roadmaps, etc. (but no IP) to Intel. It's sad, really.

[+] jimbokun|16 years ago|reply
"Even while I was there, we had what I suspect (but have no proof) were incidents of people leaking product plans, roadmaps, etc. (but no IP) to Intel."

I can't imagine Steve Jobs allowing this to happen at Apple. They have definitely caught people leaking things, and the consequences were swift and unpleasant for the leaker. Why can't AMD catch these people? Is there something preventing them from implementing the same kinds of measures Apple uses to catch leakers?

(Using Apple just as an example, of course. I'm sure there are other companies who find leakers and make an example of them through the legal system.)

[+] brk|16 years ago|reply
"The thing to remember is that AMD is a small fraction of the size of Intel, and they have to cover the same market segments."

Not sure what you mean by this statement. Should AMD get some sort of special consideration because they are smaller than Intel?

[+] herf|16 years ago|reply
When using IPP, I had to rewrite the CPU detector, even for new Intel chips as they came out. This code should be better...really it should just benchmark all the options and catch processor exceptions to pick a supported path.

Instead, the idea is to do a static dispatch for 'known' chips, which is really bad. When the Core2Duo came out, the version of IPP we used reverted to basic MMX code instead of SSE2, about 2.5x slower. This is just bad code, and it's bad on Intel chips, not just AMD.

Also there is the "optimized for benchmarking" piece. It's not always good to use all your cores for one job, for instance, but a lot of these libraries make the assumption that your CPU has nothing else to do.

[+] rbanffy|16 years ago|reply
Isn't this the textbook reason for using - and contributing to - open-source compilers and libraries?
[+] liuliu|16 years ago|reply
and gcc is not that bad after all. I use OpenMP to parallelize my program on a Core i7 860 CPU, which supports 8 threads. But with icc as the compiler, it will only utilize 7 of them, and that does affect performance (about 10% slower wall time than gcc, which uses all 8). I suspect it has something to do with icc's dynamically linked OpenMP library.
[+] jrockway|16 years ago|reply
Sounds like AMD should just start setting the vendor string to "GenuineIntel", then. (This is something like the "like Mozilla" token in every user-agent string: if dumb software is going to do dumb tests, you have to fool the dumb test to get interoperability.)
[+] rbanffy|16 years ago|reply
Better yet: make it writable.

That way the OS could change it per process/thread/context and the code would be happy.

[+] dbz|16 years ago|reply
That idea is flawed, because reporting "GenuineIntel" would make the dispatcher pick code paths tuned for a specific Intel chip, which can be worse on an AMD chip than taking the generic path. The AMD/generic path is merely safe, while the Intel path is tuned specifically for Intel's microarchitecture, not for AMD's.
[+] loudtiger|16 years ago|reply
sounds like the Pre and iTunes. haha.
[+] pbhjpbhj|16 years ago|reply
That would be passing off / trademark infringement FWIW.
[+] wmf|16 years ago|reply
Has anyone tried the Sun Studio compilers? They're free and supposed to be as good as Intel, but I've seen virtually no discussion of them.
[+] daeken|16 years ago|reply
For x86, they fall behind a bit. For x64, they're faster than ICC in general. Definitely worth a look if you don't mind going to OSol.
[+] rythie|16 years ago|reply
Not only should they fix it, they should open source the code, so AMD can contribute.

Intel often makes noises about open source, so they should put their money where their mouth is.

[+] notauser|16 years ago|reply
Compiler discussion to one side for a minute...

Intel does more than just make noises about open source. Their wifi and graphics chipset support has been excellent over the years. Prior to the recent changes at ATI, they were pretty much the only company doing that.

[+] sfg|16 years ago|reply
I do not know much about processor benchmarking, but isn't it a little weird that benchmarkers use software that is not independent of the hardware they are testing? It seems like they are asking to be manipulated: why do they do this?
[+] wmf|16 years ago|reply
They think the software is processor-independent but it's really Intel-biased; that's the problem.
[+] Andys|16 years ago|reply
Sensational headline: this article is only about the Intel C Compiler, which as far as I can see, is only used for benchmarketing and research purposes.
[+] scott_s|16 years ago|reply
As a systems researcher, I have often used the Intel C Compiler when I wanted to make the fairest comparison possible, because it is generally accepted to produce the best code for x86 processors. With that background, I correctly guessed from the headline what the content of the article was.
[+] praptak|16 years ago|reply
"Only" benchmarketing? If any published benchmarks are affected by this misfeature, it's pitchforks and torches for Intel.
[+] adame944|16 years ago|reply
Bottom line: it's a business decision. Code generated by the Intel compiler "works" on AMD chips, although it may not be optimal. For Intel to support the optimal codepaths on AMD chips would require a substantial amount of research. I don't think they're intentionally crippling AMD chips; just declining to invest the effort to support them optimally.
[+] wtallis|16 years ago|reply
Nope. Checking the vendor string to determine capabilities when the CPUID instruction already has flags for different capabilities is unjustifiable. When a CPU claims SSE2 support, the compiler should enable SSE2, regardless of the vendor string. If AMD's implementation of SSE2 is buggy, that's their problem, and Intel should have no trouble making it into a PR win.

This is really no better than printer manufacturers putting chips into their cartridges so that they can use the DMCA to prevent third parties from refilling or making compatible cartridges.

[+] DarkShikari|16 years ago|reply
That isn't exactly how it works.

The proper way to do it:

    if( CPUIDbits & SSE1_CAPABLE ) {enable SSE1}
    if( CPUIDbits & SSE2_CAPABLE ) {enable SSE2}
    [etc]
The even better way to do it:

    if( CPUIDbits & SSE1_CAPABLE ) {enable SSE1}
    if( CPUIDbits & SSE2_CAPABLE ) {enable SSE2}
    if( CPU is Athlon 64 ) {disable some SSE2 functions}
    if( CPU is Pentium-M ) {disable all SSE2 functions}
    [etc]
Intel's way of doing it:

    if( CPU is Pentium 3 ) {enable SSE1}
    if( CPU is Pentium 4 ) {enable SSE1/SSE2}
    if( CPU is Core 2 ) {enable SSE1/SSE2/SSE3/SSSE3}
    [etc]
Practically all sane applications do things the first way; a couple do things the second way. Anyone doing things the third way is just asking for trouble both in terms of future compatibility and resilience to unexpected situations. For example, some VMs disable certain instruction sets, which would result in SIGILLs when using the last method.
[+] nvoorhies|16 years ago|reply
In addition to just the question of what path is optimal, they'd have to keep track of all the bugs in AMD's, Cyrix's, Transmeta's, and other implementations which aren't the same as the bugs on Intel x86 chips. Falling back to a subset of the architecture that is more likely to produce the right behavior is the sane thing to do.

e.g. http://www.amd.com/us-en/assets/content_type/white_papers_an... v. http://download.intel.com/design/processor/specupdt/320836.p... - these aren't just a couple gotchas you can put on the back of an index card

[+] NathanKP|16 years ago|reply
This doesn't really make any sense. All you would need to do is compile the code on an Intel machine to get the fast code, and then you can run it on an AMD machine. It shouldn't really cause any problems as long as developers build on genuine Intel machines. Of course that is irritating, but it shouldn't cause any slowdown on other machines.
[+] ShabbyDoo|16 years ago|reply
I think the compiler generates code which checks processor type at runtime, not compile time. If the compiled code is running on an AMD processor, the "safe" version of the compiled code is chosen automagically.