Dillon here (CEO @ Paperspace, YCW15). I want to give a huge thanks to the YC community and all the support over the years. We have always admired DO and couldn't be happier to join forces!
Is there a way to sign up on Paperspace without a phone? I've used the same VoIP number for half a decade, but unfortunately it's still automatically blocked by many services, which prevents me from creating an account.
Hi Dillon - congrats on the offer! I adore your product. Quick question though: how confident are you that DO won't interfere with the product and that the current team will keep doing its job? Or, alternatively, would you like DO to take a dominant role in running the product?
I’m surprised by this. DigitalOcean is in no position to be splurging on new revenue streams. They’re underwater and making a loss [0].
They’ve got growing revenue but falling profits, and more debt than assets. They may want to raise their droplet prices, or issue more stock, and then refocus on making their business profitable.
[0] valustox.com/DOCN
DigitalOcean is organizationally incapable of making anything in-house anymore.
I want to blame leadership - and to be clear, leadership sucks - but the problems are pervasive through every layer of the organization.
The only significant launches in the last 4-5 years have all been acquisitions or built by partner companies and whitelabeled.
Every system and every team has massively circular dependencies on one another, so it's just an endless loop of "we can't move until they move".
The tech debt is insane. Everything is slowed down by terribly run, massively underfunded internal "platform" teams for Kubernetes, CI/CD, various internal databases, etc.
If you want to build something useful you basically have to ignore upper management and do it in secret until it's done and so integral to the systems that they have no choice but to allow you to support it.
Asking leadership outright to invest in minor maintenance for systems the entire company depends on is never approved.
The bar for engineering practices, code quality, and system design quality is comically low.
All the systems are massively distributed, but there is no understanding of distributed systems issues.
I was told multiple times that "the CAP theorem doesn't apply here" and gaslit into treating an asynchronously replicated MySQL instance, one that sometimes spiked to multi-minute replication lag on its read replicas, as if it were completely consistent between the master and the read replicas.
Tons of stuff was just run as singletons with hand-rolled in-memory rate limiting to avoid having to understand distributed locking or semaphores. These systems inevitably start falling over a few months after creation, but you're not allowed to evolve it into a correct system, you just have to support that garbage forever.
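To illustrate the failure mode in the grandparent's rate-limiting example (a hypothetical sketch, not code from any real system): in-memory counters work only while the service really is a singleton; the moment a second instance goes up behind a load balancer, the "limit" silently doubles, which is why a shared store or distributed semaphore is needed.

```python
class InMemoryRateLimiter:
    """Allows at most `limit` requests; state lives in this process only."""

    def __init__(self, limit):
        self.limit = limit
        self.count = 0

    def allow(self):
        if self.count < self.limit:
            self.count += 1
            return True
        return False


# Intended global limit: 10 requests.
limit_per_service = 10

# One instance behaves as intended.
single = InMemoryRateLimiter(limit_per_service)
admitted_single = sum(single.allow() for _ in range(40))

# "Scale out" to two instances behind a round-robin load balancer:
# each keeps its own counter, so together they admit 2x the limit.
instances = [InMemoryRateLimiter(limit_per_service),
             InMemoryRateLimiter(limit_per_service)]
admitted_scaled = sum(instances[i % 2].allow() for i in range(40))

print(admitted_single)  # 10
print(admitted_scaled)  # 20, double the intended limit
```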
Brain drain everywhere due to low salaries. Even engineers barely capable of committing working code were getting fat raises to leave.
Paperspace's product has been really bad recently. Gradient Notebooks are a worse version of Google Colab, useless for any serious DL product building.
Their DL virtual servers (Core, I think they're called) are horrible. Very slow internet; it takes forever to copy datasets onto them. Most of the time it's an uphill battle to get SSH access: they create some pointless virtual console instead, plus a GUI for copying datasets, running trainings, etc., which is confusing and a hassle compared to plain SSH. Programmers just want simple SSH access to a server with a GPU; we are not looking for a WYSIWYG-style editor! I really don't know who the customer for this is. Marketing execs who want to do deep learning? I tried powering through the documentation, but it was outdated and plain wrong in places. It took me a couple of hours just to figure out how to copy a large dataset onto my server and find its path. Once I discovered Lambda and Vast.ai, I never looked back and forgot Paperspace for good.
Really, as a cloud provider, all you need to do is provide SSH access to a system with a certain amount of compute and memory. Maybe, like AWS, you can offer storage buckets, but it's not absolutely necessary. Don't add GUIs, interfaces, etc.; your customers are engineers, and they prefer simple systems that give them control.
One feature I would like in Lambda/Vast is the option to copy the dataset onto the server before the GPU hours start billing. When you have TBs of datasets, as I do, you end up spending 8-9 hours just copying data, and it's annoying to pay for cloud hours during that time. Amazon kind of solves this, but slows down data access in return. I'd like a cloud provider that just lets me copy everything in before starting to bill me.
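For what it's worth, the plain-SSH workflow the parent asks for fits in a few commands (hostname, paths, and script names below are hypothetical placeholders, not any provider's actual setup):

```shell
# Hypothetical GPU host; substitute your provider's address.
GPU_HOST=ubuntu@gpu-box.example.com

# Plain SSH access: verify the GPU is visible.
ssh "$GPU_HOST" nvidia-smi

# Copy a large dataset with resume support (safer than scp for TBs:
# -a preserves attributes, -P shows progress and keeps partial files).
rsync -aP ~/datasets/imagenet/ "$GPU_HOST":~/datasets/imagenet/

# Start training inside tmux so it survives SSH disconnects.
ssh "$GPU_HOST" 'tmux new -d -s train "python train.py --data ~/datasets/imagenet"'
```

That's the whole "interface" most practitioners want; everything else can be layered on by the user.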
Ain't nothing wrong with buying revenue (potential or actual), or a multiple bump ("we do VMs"->"we enable AI too!"). Go where the margin and demand is.
Back when I was in college Paperspace was kind enough to give us a few thousand in compute credits for our research project around autonomous driving to help with training costs. Glad to see the success!
Paperspace gives free access to Graphcore IPU nodes (with 4 IPUs each), which is pretty neat. Theoretically that is way more throughput than a Colab T4 instance.
... But in practice, I tore my hair out trying to port an actual Stable Diffusion web UI, until I hit a wall: I needed to upgrade the "Poplar SDK" or something beyond the ancient Python 3.8 version to get things working, but the download was behind some kind of corporate login. That left a bad taste in my mouth.
Glad to read about your experience, because I just checked out their website and was about to bite on the free IPU access. But if it needs some special proprietary software to work in the first place, I probably won't bother.
I don't think there is much point in using the big 3 cloud providers. In fact, I'd say it's a fool's errand for startups, or even big companies, to use AWS given the sheer cost of bandwidth, storage, and compute compared to DigitalOcean, Vultr, etc.
I shake my head every time I read about startups racking up expensive and unpredictable AWS bills for the same compute that can be had at a fraction of the cost from tier-1.5 cloud providers.
As for this acquisition, I think it was only a matter of time before GPUs were added to DigitalOcean's service offering, and this was the best way forward rather than building that infrastructure from scratch.
I use Stable Diffusion on Paperspace's Pro tier ($9/mo), which gives you up to 6 hours of GPU time that isn't billed per-usage (meaning you don't have to worry about needing a mortgage to cover the cost), since I have an aging Vega at home and I worry about the electricity bill (EU). My worry is that this plan will go away, replaced by strict per-usage pricing.
The gaming community also used Paperspace sometimes for PC game streaming. It allowed users to install Steam or other clients to tap into existing libraries.
I found out about it in the /r/cloudygamer subreddit and used it temporarily on vacation to play Assassin's Creed Odyssey; it worked pretty well.
$111M exit on $35M raised. Seems meh for seven years' effort. Were prospects cloudy? Most employees (possibly even the founders) would have seen a better return over seven years working up through a FAANG. I'm sure YC did great, of course.
It's kind of like going to vegas for a week and coming out with a lot of good stories and 90% of your money.
In terms of investor return, I think it's healthy to have some exits like this. A bunch of the money was only in for 2 years, and probably doubled their investment in that time. The rest of the investors got their money back, with something close to NASDAQ returns on top of it. If this was the baseline for the fund instead of going to zero, you wouldn't need unicorns and the questionable growth tactics that go with them.
If founders had 25%, they got "retirement with reasonable luxuries" or "gunpowder to play investor" money of double digit $millions.
If 20-25 employees split an option pool of 15%, it's close to replacing the FAANG opportunity cost.
So totally agree that it's a bit of a "meh" outcome in comparison to financial alternatives, and the pie splitting matters a lot. But it didn't go to zero, and everyone is within a stone's throw of their stock market / FAANG hurdle rates (and it's not like that FAANG career is guaranteed for people who thrive better at startups). And the stories and experiences are a hell of a lot better.
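For concreteness, the split sketched above works out roughly as follows (the 25% founder stake, 15% pool, and ~22 employees are the parent's assumptions, not disclosed figures):

```python
# Back-of-envelope outcome math for a $111M exit, using the
# assumed ownership splits from the comment above.
exit_value = 111_000_000

founders_share = 0.25 * exit_value  # assumed 25% founder stake
pool = 0.15 * exit_value            # assumed 15% option pool
per_employee = pool / 22            # assumed ~20-25 employees

print(f"founders:     ${founders_share / 1e6:.2f}M")  # $27.75M
print(f"per employee: ${per_employee / 1e3:.0f}k")    # ~$757k, pre-tax, pre-dilution
```

That per-employee figure is in the same ballpark as several years of FAANG stock grants, which is the "close to replacing the opportunity cost" point.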
What is the $35M figure here? Also, YC W15 would make it about an 8-year run in 2023.
I think the exit is timely, because there's a real possibility that the AI/LLM hype fades, in which case GPU demand would fall off a cliff - not only on the server side, but also because many devices might get better inference hardware.
Oh, fun memories - they locked me out of my account (froze it or whatnot) but happily kept charging me. I didn't notice for a while as I didn't really use it. Support just ignored my questions.
Grats on the sale either way.
What’s Nvidia supply chain like with their AI GPUs? Is it constrained or is this a joke :p
Does that mean they have/had lots of hardware investment? Or do they "just" offer a management layer based on other clouds?
Seems it was a win all around. They raised $35M and exited at $111M. Not a 10x win, but not underwater.
https://www.crunchbase.com/organization/paperspace
Who locked you out? Why did you get the hammer?