They say "next-generation", but these are M60 GPUs, which are very much "previous-generation". Current generation would be P100 GPUs.
I am in the market for a cloud GPU offering, and I have to say the big cloud providers are very uncompetitive here, only offering these old, slow GPUs.
I was also surprised (and sent it around to our team internally last night). We're skipping Maxwell entirely, as you can see from my previous comment threads.
For display it's still a fine part. The P100 is also a beast, so it's overkill for most people just doing Remote Desktop. So perhaps the M60 (as with Azure) fills this market segment for them, and they don't mind the hardware diversity.
[Edit: Too sleepy. A post down below reminds us that these are G-series, and G is for Graphics. So yeah, I assume they just didn't want to wait for enough P4 parts in volume, or will quickly make another such announcement about the Pascals.]
Disclosure: I work on Google Cloud.
I think "generation" here is referring to EC2 generations and not GPU generations – AWS tends to use that term to refer to new instance types being released.
((Next Generation) (GPU EC2 Instances)) rather than ((Next Generation GPU) (EC2 Instances)) :)
"I am in the market for a cloud GPU offering, and I have to say the big cloud providers are very uncompetitive here, only offering these old, slow GPUs."
It's one thing if one of them is like that, but if all of them are like that, maybe it's not because of the cloud providers?
Or a GTX 1080 Ti. Aren't Tesla-class cards like the P100 mostly super overpriced for deep learning, because their main advantage is double-precision (64-bit) float support, and no one really needs that? Plus half-precision (16-bit) float support, which is not super widely used (but certainly more than double). Something like 95%+ of deep learning must be done with single-precision floats (32-bit) right now AFAIK, making this a fairly dubious expense.
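To put numbers on that precision tradeoff, here's a quick numpy sketch of the storage cost at each float width (sizes only; the throughput differences claimed above depend on the specific GPU):

```python
import numpy as np

# Memory footprint per million parameters at each float width.
# fp64 halves your effective memory bandwidth relative to fp32,
# and fp16 doubles it where the hardware supports it.
for dtype in (np.float16, np.float32, np.float64):
    params = np.ones(1_000_000, dtype=dtype)
    print(f"{np.dtype(dtype).name}: {params.nbytes / 1e6:.0f} MB per million params")
```

This prints 2, 4, and 8 MB respectively, which is the storage side of why fp32 is the default and fp64 support rarely pays for itself in deep learning.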
That happens not infrequently. 10x on-demand is the ceiling bid for spots. It's the result of a bidding war between two or more big customers who really don't want to be evicted.
You can't give fractional GPU instances with this card. The K80 had two logically separate chips that were separately-addressable over PCIe. This allowed them to send two different PCIe devices to different VMs. The M60 doesn't have this. The V100 is supposed to allow time slicing to do this kind of thing, but that's not out, nor do we know how well it'll work.
A lot worse, but your CPU doesn't take a hit. NVENC doesn't have very good quality at low bitrates, but it's fine for local recording (1080p@15Mbps+) that will be transcoded later.
In our use case (sports broadcasting, 720p) we found that at reasonable bitrates (>1 Mbps), NVENC HQ quality was virtually the same as x264 faster. (Newer versions of NVENC have gotten amazingly better over the last couple of years.) When the bitrate drops, you start to see x264's advantage.
I'm taking this, together with Nvidia's announcement that it will sell a mining-oriented GPU, as the shot across the bow for cryptocoins. But then again, only market-makers get rich calling a top.
Has anyone done much Linux gaming on EC2? I want to be able to play xonotic again but I don't play it often enough to justify buying a high power desktop.
I did some mac gaming on it. Not terrible for certain games. I was mainly playing Rocket League multiplayer. There's a bunch of resources/experiences at https://www.reddit.com/r/cloudygamer/
Nope. I have made profitability assessments of several different cloud GPU offerings with different hardware.
As a general rule, for every 100 USD you spend, you'd only mine about 50 USD worth of cryptocurrency.
Which is not surprising, since with these products you also get a fancy motherboard + high-end Intel CPUs + boatloads of RAM.
These are of little to no use when mining, and account for about half the cost of the hardware. Also, the local cost of electricity is not the lowest in the world (China has some of the lowest).
While the price of hardware is fixed, cryptocurrencies possess a difficulty adjustment mechanism. This puts an upper bound on mining profitability across the whole system, and that bound converges on the profitability of the best-yielding hardware. Which would be something to the tune of this [1]. Note that despite having 6 GPUs, this system has only 8 GB of RAM and an Intel Celeron.
EDIT: We're talking about memory-bandwidth-bound cryptocurrencies here, like all the ones based on Ethash[2][3], Ethereum being one of them. Bitcoin's upper bound on profitability is set by the best ASICs for SHA-256 hashing.
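The arithmetic behind that ~50% figure can be sketched like this. Every constant below is an assumed, illustrative number (not a real price quote); the difficulty adjustment is what drives the network-wide yield down toward the best rig's break-even:

```python
# Back-of-the-envelope cloud-mining yield, all inputs assumed.
CLOUD_USD_PER_GPU_HOUR = 0.90   # assumed on-demand GPU-hour price
HASHRATE_MH_PER_GPU = 22.0      # assumed Ethash hash rate, MH/s
YIELD_USD_PER_MH_HOUR = 0.0205  # assumed network-wide yield per MH/s-hour

spend = 100.0
gpu_hours = spend / CLOUD_USD_PER_GPU_HOUR
mined = gpu_hours * HASHRATE_MH_PER_GPU * YIELD_USD_PER_MH_HOUR
print(f"Spend ${spend:.0f} -> mine about ${mined:.0f}")
```

With these assumptions you get roughly $50 mined per $100 spent; the structural point is that the cloud price also pays for CPUs and RAM that contribute nothing to the hash rate.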
It's not profitable to mine using your own money. But I am pretty sure those instances will be abused by carders: invest $100 of carded money, get $50 back in cryptocurrency.
No. Nowadays the big players in cryptocurrency mining have datacenters full of ASICs. The currencies that are resistant to ASIC mining (due to, e.g., memory-hard hashing functions), like Monero, are probably just as resistant to GPU mining as they are to ASIC mining. If you were to investigate it, though, I'd look at one of those and not Bitcoin.
Well, in 2011 I was using a desktop PC + 3 GPUs to mine bitcoins, which was barely profitable at a Bitcoin price of around $20. If only I had kept them and not sold at that price... FML
P instances are intended for general-purpose GPU compute applications (and have 1, 8 or 16 GPUs, more RAM and fewer CPUs). Typically you might use these for scientific computing / machine learning / anything CUDA-intensive.
G instances are optimized for graphics-intensive applications (and have 1, 2 or 4 GPUs, less RAM and more CPUs) - you might use these for design work, gaming etc.
When I'm using cloud GPUs it's pretty much a batch job, and latency is the last thing I care about.
I'm not aware of any DL projects in Australia on health images, which might have legislative requirements about keeping data onshore.
How is the quality compared to x264 with the default settings (preset medium, crf 23)?
Here's a comparison video: https://www.youtube.com/watch?v=BV5btdqQfu4
According to this it's almost equivalent when you compare 720p@5Mbps and 1080p@12Mbps, which is way more than most streaming sites will do: http://on-demand.gputechconf.com/gtc/2014/presentations/S464...
[1] https://blockoperations.com/6-gpu-mining-rig-amd-rx580-intel...
[2] https://github.com/ethereum/wiki/wiki/Ethash
[3] https://github.com/ethereum/wiki/wiki/Ethash-Design-Rational...