The connection between "AI" and "GPU" in everyone's mind is a testament to the PR chops of NVIDIA. You don't need a GPU to run ML/DL/neural networks, but NVIDIA has GPU tech, so they're selling GPUs. What you need is the massive ALU power and, to a lesser extent, the huge internal bandwidth of GPUs. There are huge chunks of GPU die area that are of no use when running NN-type code: the increasingly complex rasterizers, the texture units, the framebuffer/z-buffer compression hardware, and, on the software side, the huge pile of junk in the drivers that allows you not only to run games from a decade ago, but to run them better than last year's GPU did. If you can afford to start from scratch, you can lose a lot of this baggage.
Indeed, and that's why there are a couple of startups working on new chips and why Google has the TPU. Here's a nice technical talk from Graphcore's CTO about that https://youtu.be/Gh-Tff7DdzU
What you need is a massive number of parallel cores. Currently, the cheapest and most efficient way to get them is GPUs. It's true that some graphics-specific parts of a GPU are not needed for compute-only kernels (such as those for ML or other AI workloads), but that's still lower overhead than a CPU in yet another box.
Who knows, perhaps Intel intends to develop more general-purpose massively parallel compute processors, integrating some of the knowledge and experience accrued in the field of graphics processors.
And yet Intel now seems to want to make GPUs for machine learning... so I guess Nvidia's PR worked against Intel, too?
But as I said in another comment, the truth is Intel doesn't seem to know what it's doing, which is why it's pushing in five or six different directions with many-core accelerators, FPGAs, custom ASICs, neuromorphic chips, quantum computers, graph processors, and so on.
By the time Intel figures out which one of these is "ideal" for machine learning, and behind which arrows to "put more wood," Nvidia will have an insurmountable advantage in the machine learning chip market, backed by an even stronger software ecosystem that Intel can't build because it doesn't yet know "which ML chips will win out".
If I had to describe Intel in a sentence these days, it would be "Intel doesn't have a vision." It's mostly iterating on its chips and rent-seeking: rebranding weak chips with strong chip brands, adding names like "Silver" and "Gold" to Xeons (and charging more for them, because come on, it says Gold on them!), and essentially bringing the DLC nickel-and-diming strategy from games to its chips and motherboards.
Meanwhile, it's wasting billions every year on failed R&D projects and acquisitions because it lacks that vision on what it really needs to do to be successful. Steve Jobs didn't need to build 5 different smartphones to see which one would "win out" in the market.
> If you can afford to start from scratch, you can lose a lot of this baggage.
Effectively, this argument is much like saying that your personal workloads don't use AVX and demanding that Intel tape out a whole different die without it. You would very rightly be laughed out of town for even suggesting it.
Much like the economics of cryptomining cards that lack display outputs, this comes down to whether there is actually enough of a market to justify taping out a whole specialty product for this one niche, versus the economies of scale that come from mass production. After all, that is the logic behind using a GPU in the first place, instead of a custom ASIC for your task (like Google's Tensor Processing Unit). On the whole it is probably cheaper to just suck it up and accept that you're not going to use every last feature of the card on every single workload. It's simply too expensive to tape out a different product for every workload.
This only gets more complicated when you consider that many types of GPGPU computation actually do use things like the texture units, since it allows you to coalesce memory requests with 2D/3D locality rather than simple 1D locality. I would also not be surprised if delta compression were active in CUDA mode, since it is a very generic way to increase bandwidth.
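To make the 2D-locality point concrete, here is a minimal CPU-side sketch of Morton (Z-order) indexing, the kind of layout texture units implement in hardware so that neighboring (x, y) fetches land in nearby memory. The helper names here are hypothetical illustrations, not any real GPU API:

```python
def morton_encode(x: int, y: int) -> int:
    """Interleave the bits of x and y into a single Z-order index."""
    result = 0
    for bit in range(16):  # supports coordinates up to 65535
        result |= ((x >> bit) & 1) << (2 * bit)
        result |= ((y >> bit) & 1) << (2 * bit + 1)
    return result

# Row-major layout for comparison: vertical neighbors are a full row apart.
row_major = lambda x, y, width=1024: y * width + x

# Vertical neighbors stay close in the Z-order index...
print(abs(morton_encode(5, 5) - morton_encode(5, 6)))  # small gap (6)
# ...but are a whole row apart in a plain 1D layout.
print(abs(row_major(5, 5) - row_major(5, 6)))          # 1024 apart
```

This is why a 2D-aware fetch path can coalesce texture-style accesses that a purely linear layout would scatter across memory.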
The GPGPU community absolutely does use the whole buffalo here; there is very little hardware that is purely display-specific. If you want hardware that is more "compute-oriented" than the consumer stuff, that's why the GP100 and GV100 parts exist. If you want something even more compute-oriented than that, you're better off looking at what is essentially fixed-function hardware dedicated to your particular task, rather than a general-purpose GPU.
So, it doesn't really make any economic sense.
I hope we can also get a more scalable form factor, so that we can keep stacking these new compute engines even if we run out of PCI slots or physical space inside the case.
Whoever at AMD refused to match the offer probably made a terrible decision. This is about the worst time to lose that talent, right after inking a GPU die deal which, in light of this news, will only be temporary. AMD just got played.
If I were AMD, I would review Mark Papermaster's comp and incentives to ensure he doesn't leave.
(I'm long AMD)
Raja, if you are reading this: make sure your Intel GPU has two things the competition doesn't:
1) FP8 half-precision training: NVidia is artificially disabling this feature in consumer GPUs to charge more for Tesla / Volta.
2) A licensed / clone of AMD SSG technology to give massive on-GPU memory: NVidia's 12 GB memory is not sufficient for anything beyond thumbnail or VGA sized images.
My experience with Intel's Phi (KNL) has been miserable so far; I hope Raja has better luck with the GPU line.
Freely available full documentation, from package pinouts to ISA and programming guides (at least as extensive as the x86 SDM[1]), would also be met with great praise, especially in the open-source community. For a recent GPU to have such open documentation would be, AFAIK, a first.
[1] https://software.intel.com/en-us/articles/intel-sdm
1) is not true. In the current generation, FP16 is not artificially disabled on consumer GPUs; it simply does not exist there.
And it's not just consumer GPUs: of the four Pascal Teslas, only one, the GP100-based P100, supports FP16 and FP64.
The GP102 and GP104 cards, which include the consumer cards and Teslas like the P4 and P40, are inference-focused and support INT16 and INT8 dot products, while the first-generation Pascal GP100 doesn't.
If anyone artificially locked down FP16 support, it's AMD, as consumer Vega doesn't support it for compute.
2) NVIDIA already has a competing solution. Pascal has had unified memory support from day one, with a 49-bit address space, ATS, and paging; they already partner with SSD makers on an add-on card that is mapped into VRAM.
> My experience with Intel's Phi (KNL) has been miserable so far; I hope Raja has better luck with the GPU line.
I'd love to see the Phi approach taken further. I'm not a huge fan of having different ISAs, one for my CPU, one for the compute engines of the GPU (to say nothing about the blobs on my GPU, network controller, ME). I'd prefer a more general approach where I could easily spread the various workloads running on my CPU to other, perhaps more specialized but still binary-compatible, cores.
Heck... Even my phone has 8 cores (4 fast, 4 power-efficient, running the same ISA).
Half precision is FP16.
> 1) FP8 half-precision training: NVidia is artificially disabling this feature in consumer GPUs to charge more for Tesla / Volta.
No, this physically is not present on consumer chips. You can't subdivide the ALUs like that even on Tesla P5000 cards. Of course you can promote FP8 to FP32 without an issue, on any card, but you don't gain any performance either.
At the time Pascal was designed it didn't make any sense to waste die space on FP16 support let alone FP8, since games are purely FP32. This is changing now that Vega has FP16 capability ("Rapid Packed Math") and titles may be using this capability where appropriate. I would not be surprised to see it in Volta gaming cards at all.
It's funny, everything old is new again. Someone comes up with this idea about once every 10 years. Using FP16 or FP24 used to be big back in the DX9 days.
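For anyone wondering what half precision actually gives up numerically, here's a quick CPU-only illustration using NumPy's float16 (no GPU or vendor library involved): a 10-bit stored significand means integers above 2048 are no longer exactly representable, which is part of why FP16 training typically needs tricks like loss scaling.

```python
import numpy as np

x = np.float16(2048)
print(x + np.float16(1))           # still 2048.0: the +1 is lost to rounding
print(np.finfo(np.float16).max)    # 65504.0, the largest finite FP16 value
print(np.finfo(np.float16).eps)    # ~0.000977, spacing between values near 1.0
```

Small gradients simply vanish at FP16 resolution, so "just use half precision" is never quite free even on hardware that supports it natively.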
> 2) A licensed / clone of AMD SSG technology to give massive on-GPU memory: NVidia's 12 GB memory is not sufficient for anything beyond thumbnail or VGA sized images.
You're looking for NVIDIA GPUDirect Peer-to-Peer, which has existed since like 2011.
https://developer.nvidia.com/gpudirect
AMD's product is actually purely marketing hype: it's simply a card with a PLX chip to interface an NVMe SSD. It is the same technology used for multi-GPU cards like the Titan Z or 295X2, and it offers no performance advantage over a regular NVMe SSD sitting in the next PCIe slot over.
This is something that people didn't know they wanted until AMD told them they wanted it. But you can do this on any GeForce card even, no need to shell out $7000 for some crazy custom card that doesn't even run CUDA.
The bigger problem is that there really isn't much of a use-case for it. NVMe runs at 4 GB/s, which is painfully short of the ~500 GB/s that the GPU normally runs at. That is even significantly less bandwidth than host memory can provide (a 3.0x16 PCIe bus limits you to 16 GB/s of transfers regardless of whether that's coming from NVMe or host memory).
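The size of that gap is easy to sanity-check with back-of-the-envelope arithmetic, using the rough round numbers from this comment (assumed figures, not measurements):

```python
GB = 1e9

vram_bw = 500 * GB   # HBM2/GDDR5X-class on-card bandwidth (approx.)
pcie_bw = 16 * GB    # PCIe 3.0 x16, roughly 1 GB/s per lane after encoding
nvme_bw = 4 * GB     # a fast NVMe SSD (PCIe 3.0 x4)

working_set = 32 * GB  # hypothetical dataset that overflows 12 GB of VRAM

print(working_set / vram_bw)  # 0.064 s if it all lived in VRAM
print(working_set / pcie_bw)  # 2.0 s streamed from host memory
print(working_set / nvme_bw)  # 8.0 s streamed from an SSG-style SSD
```

Two orders of magnitude slower than VRAM, and still slower than plain host memory over the same bus, which is the point being made above.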
I am not convinced that Intel can win here. They haven't succeeded with homegrown GPU tech and other big-bang approaches. If they were to acquire decent GPU tech, then I would bet on them; the homegrown route just doesn't seem to work out for them.
I suspect part of the reason is the long development timeframe for this kind of tech. It will be at least two years before this sees the light of day, and that is forever in this space.
Intel failed with Larrabee and Itanium. Maybe this will go better?
It looks like Raja will lead the development of machine-learning-focused GPUs. Isn't this Intel basically admitting that its Xeon Phi, Nervana, and Altera efforts to win the machine learning market are all a dead end?
How many machine learning strategies is Intel going to try? Does it even know what it's doing? Spending billions of dollars left and right on totally different machine learning technologies kind of looks like it doesn't, and it's just hoping it will get lucky with one of them.
And even if you think that's not a terrible strategy to "see what works", there's still the issue that they need to have great software support for all of these platforms if they want developer adoption. The more different machine learning strategies it adopts, the harder that's going to be for Intel to achieve.
Itanium was really more a failing of core technology than of execution, whereas Larrabee was much more a failing of execution than of core tech; the issue with Larrabee was that software GPU functions were not only slower to run, but even slower to develop (!).
Intel also utterly failed in the mobile space and in IoT, and is setting itself up for failure in edge computing. What's one more thing to fail at?
The link is incredibly light on actual content, but this seems to be good news for AI enthusiasts, as perhaps now we'll get a reasonable competitor to CUDA/cuDNN and their associated hardware for GPU-accelerated machine learning. Intel seems to be taking the ML/AI space seriously, and this move seems very likely to be related. Yes, I'm aware of OpenCL, as I am also aware of its level of support in libraries such as PyTorch, TensorFlow, and Theano: it isn't the first-class citizen that CUDA is. While those libraries aren't perfect, they offer the experience of writing the experiment on your laptop without a GPU, validating it, then running the full experiment on larger hardware.
In my ideal world, competition from Intel would force NVidia to play nice with OpenCL or something similar, and encourage competition in the hardware space instead of the driver support space. Unfortunately the worst case looks more like CUDA, OpenCL, and a third option from Intel with OpenCL-like adoption. :(
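The laptop-then-cluster workflow described above usually hinges on a single device switch at startup. A toy, framework-free sketch of that dispatch pattern (all names here are hypothetical stand-ins; in a real framework like PyTorch the probe is `torch.cuda.is_available()` and everything downstream is device-agnostic):

```python
def available_backends():
    # A real library would probe drivers here; we pretend only the
    # CPU exists, as on a GPU-less laptop.
    return ["cpu"]

def pick_device(preferred=("cuda", "opencl", "cpu")):
    """Return the first preferred backend that is actually available."""
    found = available_backends()
    for name in preferred:
        if name in found:
            return name
    raise RuntimeError("no compute backend available")

device = pick_device()
print(device)  # "cpu" on a laptop; would be "cuda" on the training box
```

The rest of the experiment code only ever sees `device`, which is what makes the validate-small-then-scale-up workflow painless when it works.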
Interesting, given that just two days ago it was announced [0] that Intel was going to start using AMD for some of its integrated graphics. Now they're going to compete against them in the discrete graphics space.
Also, Koduri recently left AMD after what many felt was a disappointing discrete graphics release in Vega.
[0] https://www.anandtech.com/show/12003/intel-to-create-new-8th...
Wowza. If I moved to a direct competitor like that, my employment contract's non-compete clause would be brought out immediately. And I'm no C-level executive, just an individual contributor. I wish Washington had California's non-compete law.
Often there is a form of mutually assured destruction at play. Qualcomm and Intel have cross-hired major executives like Murthy and no one got sued.
I feel like non-competes are similar to patents these days. Everyone has tons of patents and everyone is infringing on everyone else, so they just agree to pay licensing fees to one another and never go to war.
I'm certain AMD has hired its share of Intel people by now; it's a no-win situation.
Haven't read much on it, but this happening right after the integrated GPU deal with AMD just strengthens the "teaming up against NVIDIA" theme going on.
I'm excited. I don't care if Intel wins. I just want a video card that doesn't suck and works perfectly with linux. Even if I unplug my monitor sometimes... Even if it's a laptop and it switches GPU for different outputs... Even if I want to use the standard xrandr and normal ass linux tools for configuring my monitor.
Maybe that would happen if kernel developers were not such divas who think it's appropriate to use coarse language in public discourse. Nvidia's graphics drivers work perfectly on Windows, and they have the only OpenGL implementation on Linux that is not a total joke.
That's usually driver related, not hardware. e.g. it's a well known fact among game engine developers that OpenGL on AMD cards sucks, and it's not at all because of the hardware, it's purely the software drivers (they are much better on linux with open source ones).
With Intel and AMD backing Mesa, things on Linux will get very interesting.
Look at it another way: no Intel iGPU is on par with any discrete GPU, because in the price segments where iGPUs appear, discrete GPUs tend to vanish within a year or two. There used to be a significant number of NVIDIA GeForce MX420/440s, 5200s, and 6200s; then far fewer 730s; now 1030s are found practically only in laptops. Intel has been nibbling away at this market slowly but steadily for a decade.
Erm. Nope. No Intel iGPU is on par with the 1050, much less the 1050 Ti.
http://gpu.userbenchmark.com/Compare/Intel-Iris-Pro-580-Mobi...
(I compared mobile chips, since the most powerful GT4 can only be found in mobile chips.)
It's only slightly behind the 1030, which costs $73.
As GPUs continue to evolve into general purpose vector supercomputers, and as ML/deep learning applications emerge, it seems clear that more and more future chip real estate (and power) will go to those compute units, not the x86 core in the corner orchestrating things.
Damn, this is a major loss for AMD, losing Raja is definitely not the right move. It would have been interesting to see the next iteration of AMD graphics with Raja on board.
Threadripper and the Zen architecture put them back on the map; that's some serious hardware for the price. I wish they had just kept iterating on the CPUs and GPUs.
Vega is not a bad product; it just doesn't beat Nvidia's offering in the bar charts. That doesn't mean it's bad, it just means it's in second place, which is fine since it's cheaper as well. Technology needs to be iterated on. Something must be going on at AMD at the moment.
Intel is all in on becoming a "data company". With the recent design wins in self-driving cars and the AMD deal, I'm confident that they will come out of the AI hardware race in strong shape. This move just reaffirms that.
Can someone explain this to me:
Isn't the GPU industry all about patents and trade secrets (enforced by NDAs)? Won't all of Raja's expertise be tied up in that?
It isn't clear that AMD's GPU architecture has really been competing with Nvidia's. We'll have to see how big a deal this is when AMD's APUs come out; I expect them to be quite a bit better than Intel's integrated product.
This seems to be more of a direct competitive attack on AMD's integrated product than competition with Nvidia. It feels to me like building discrete GPUs is almost a misdirection.
An interesting counterpoint here: I have a friend who works for Intel as an algorithms engineer in their self-driving vehicle acquisition (Mobileye). Currently, he's using two 1080 Tis with TensorFlow to do deep learning. It is possible that Intel could be looking to develop a chip used specifically for this purpose (a bet on self-driving cars) and not for mass production/sale outside of that tech. Either way, all of the GPU/CPU activity of the past year is just going to create more competition, which is better for the consumer in most cases.
Well, the whole point of the Mobileye acquisition was for Intel to have a competing chip for autonomous cars. But it is possible that they are also looking to compete at the 1080 Ti level, which would be very hard.
Nvidia is light-years ahead in the GPU market. Besides, if this GPU push is aimed at the deep learning market, Intel will have competition from the likes of Xilinx too. IMHO they need to provide great software to go with their GPUs; traditionally, hardware manufacturers have shipped barely usable software. They should perhaps use OpenCL and keep the rest of the tools and libraries open source.
This is what many people outside the AI world don't seem to understand. Nvidia has a stranglehold in the form of CUDA and cuDNN; there isn't any open-source equivalent to cuDNN. AMD is trying to push OpenCL in this direction, but it will be a long time before DL libraries migrate to OpenCL. If tomorrow, by some miracle, an alternative GPU as good as the 1080 Ti popped up, it would still be useless in the AI market.
It will be interesting to see how "discrete" these GPUs will be. I'm assuming they will only be "discrete" in the sense that they are not on the same chip, but rather on the same package (via EMIB).
Either way, surely this is a move by Intel to take away Nvidia's consumer share (which makes up the vast majority of their income) as Nvidia makes inroads into the data center market?
The big win that discrete GPUs provide to the cloud/backend marketplace (that Intel sorta plays in via Xeon Phi) is from large banks of VERY fast memory coupled with fast-clocked vector processors. But without a bunch of HBM or something similar, the discrete GPU won't be able to do training at the scale that NVIDIA and AMD do.
Part of me feels it would be very awesome to see onboard GPU hardware outputting similar performance to discrete chips. Of course it would change the size of the socket, or at least the piece of hardware in the socket and cooling requirements. This would have downsides in terms of consumer choice, mind you, or even the fact that upgrading a chip would involve upgrading both. It definitely has merit in the server or small form factor space though.
My guess is they are competing for the nascent consumer vr/ar market, which may not require top tier gpu performance for that much longer.
Microsoft's Mixed Reality platform has the stated goal of running on integrated graphics and even a mid tier card in a year or two should do fine for usable vr/ar.