ydau's comments
ydau | 2 years ago | on: Nvidia H100 and A100 GPUs – comparing available capacity at GPU cloud providers
Also, that’s good feedback on the CPUs bottlenecking. I’ll let our HPC hardware team know about this.
We are also looking into GPUDirect Storage to help resolve this.
ydau | 5 years ago | on: Alien Signals
ydau | 5 years ago | on: Nvidia will build 700-petaflop supercomputer for University of Florida
You can read more about GPT-3 here: https://lambdalabs.com/blog/gpt-3/
ydau | 5 years ago | on: GPT-3: A Hitchhiker's Guide
ydau | 6 years ago | on: 16x Tesla V100 Server, Benchmarks and Architecture
Tesla V100s have I/O pins for at most 6x 25 GB/s NVLink traces. So, systems with more than 6x GPUs cannot fully connect GPUs over NVLink. This causes I/O bottlenecks that significantly diminish returns of scaling beyond six GPUs.
This article provides an overview of an architecture that bypasses this limitation using additional high-bandwidth links. Looking at the benchmarks, multi-GPU performance scales almost perfectly linearly from 1x GPU to 16x GPUs.
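Here's a back-of-the-envelope sketch of that port budget (the 6-port and 25 GB/s figures are from my comment above; link counts vary by GPU SKU, so treat this as illustrative):

```python
PORTS_PER_GPU = 6    # NVLink ports on a Tesla V100 (figure from the comment above)
GBPS_PER_LINK = 25   # per-link bandwidth in GB/s

def links_per_peer(num_gpus: int, ports: int = PORTS_PER_GPU) -> int:
    """NVLink ports each GPU can dedicate to every peer in a full mesh.

    0 means a fully connected mesh is impossible, so some GPU-to-GPU
    traffic must hop through PCIe or an extra switching fabric instead.
    """
    return ports // (num_gpus - 1)

for n in (4, 8, 16):
    links = links_per_peer(n)
    print(f"{n:2d} GPUs: {links} link(s)/peer, {links * GBPS_PER_LINK} GB/s per pair")
```

At 8 or 16 GPUs the per-peer budget drops to zero, which is exactly why a fabric of additional links is needed to keep scaling linear.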
I'm one of the engineers who worked on this project. Happy to answer any questions!
ydau | 6 years ago | on: Pop_OS 19.10
Its value prop is to enable people to easily install TensorFlow / PyTorch and their dependencies in a container-less fashion, though it doesn't provide the isolation of containers.
What I've learned from talking to customers is that many people don't care that much about handling multiple versions of the same framework. I wouldn't be surprised if you find that, like Lambda Stack, people are mainly using this product to easily get started with TensorFlow/PyTorch.
Now that TensorFlow 2.0 is out, we will see a much more stable API. People won't have to change their code if TensorFlow bumps up a dependency version. For many, this will reduce the impetus for moving to containers.
ydau | 6 years ago | on: Pop_OS 19.10
https://lambdalabs.com/lambda-stack-deep-learning-software
This is a one-line apt/aptitude installation for TensorFlow, PyTorch, CUDA, cuDNN, etc. When NVIDIA releases a new version of CUDA, you can simply apt-get upgrade to the latest version.
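The upgrade flow described above looks something like this (a sketch that assumes the Lambda Stack apt repository is already configured on the machine; exact package names may differ):

```shell
# Pull the latest package lists, then upgrade CUDA, cuDNN, and the
# framework builds together through the standard apt mechanism.
sudo apt-get update
sudo apt-get dist-upgrade
```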
Disclosure: I work for Lambda Labs.
ydau | 6 years ago | on: My low cost provider of GPUs has run out of capacity. Good alternatives?
ydau | 7 years ago | on: Ask HN: What's your advice for someone who's raising capital for the first time?
- A clear, succinct, well-designed deck does make a difference.
- Talk to your users. How do they use your product? How often? Include this in your deck.
- Venture Deals and Mastering the VC Game are helpful books
- Read early slide decks of now successful companies. Many are available online (e.g. Airbnb).
- Warm intros help. If you know someone who knows a VC and can intro you, ask!
- Different VCs have different investment strategies. Your TAM might not move the needle on a 1B fund, but it could on a 20M one.
- The best story is a growth curve that’s up and to the right.
- For later stage: not to sound demeaning, but VCs often act like lemmings. An offer on the table makes rallying others easier — reach out to those who gave you a "VC pass" (i.e. never responded to your email, or didn't follow up after a meeting) and see if they're interested now.
- If possible, get feedback on your deck from someone who has successfully raised.
- Don’t tell VCs which other VCs you’re talking to. You’ll be tempted, but don’t.
- Take notes after each meeting. What were the objections? Stumbling points? Use this feedback to improve your deck.
- Giving a range for your valuation or amount you want to raise makes you appear indecisive and lacking in confidence. Give specific figures.
- Be capable of justifying why you want to raise X. How’d you come to this figure?
- Stories help. How’d you come to this idea? If you have direct exposure to the problem you’re trying to solve - especially if it’s a business problem - incorporate this into your pitch.
- Make sure you’re talking to people who can make a decision within the firm.
- Don’t copy and paste cold emails. Personalize them.
- This can be a discouraging process. But it’s a numbers game. You only need one yes to get the ball rolling.
- Multiple offers help with negotiation :)
- Good luck!!
ydau | 7 years ago | on: 2080 Ti TensorFlow GPU Benchmarks
GPU modules are manufactured in China. Their harmonized codes are covered by recently established tariffs. 10% tariffs are already hitting cards arriving at US ports. This tariff will increase to 25% on Jan 1.
Prices will stay well above MSRP.
ydau | 7 years ago | on: 2080 Ti TensorFlow GPU Benchmarks
1.75 (speed-up of 2x 1080 Ti over a single 1080 Ti) / 1.36 (speed-up of a 2080 Ti over a single 1080 Ti) = 1.28. So expect 2x 1080 Ti to be about 30% faster than a single 2080 Ti.
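The same arithmetic as a snippet (the 1.75 and 1.36 ratios are the benchmark figures from this thread):

```python
# Throughput ratios relative to a single 1080 Ti, from the benchmarks above.
two_1080ti = 1.75   # 2x 1080 Ti vs. a single 1080 Ti
one_2080ti = 1.36   # 1x 2080 Ti vs. a single 1080 Ti

ratio = two_1080ti / one_2080ti
print(f"2x 1080 Ti vs. 1x 2080 Ti: {ratio:.2f}x")  # ~1.29x, i.e. roughly 30% faster
```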
You can see how multi-GPU training scales with the Titan V benchmarks in the link below. 1080 Tis have a similar scaling profile.
https://deeptalk.lambdalabs.com/t/benchmarking-the-titan-v-v...
ydau | 7 years ago | on: 2080 Ti TensorFlow GPU Benchmarks