Libcu++: Nvidia C++ Standard Library

226 points| andrew3726 | 5 years ago |github.com

133 comments

fanf2|5 years ago

Whenever a new major CUDA Compute Capability is released, the ABI is broken. A new NVIDIA C++ Standard Library ABI version is introduced and becomes the default and support for all older ABI versions is dropped.

https://github.com/NVIDIA/libcudacxx/blob/main/docs/releases...

MichaelZuo|5 years ago

It’s interesting that they use the word “broken” to describe incompatible machine code. If the code is recompiled for each new version, then by definition it differs from the old machine code. Does any major software vendor support older versions of the ABI or machine code?

quotemstr|5 years ago

There should be no expectation of C++ ABI compatibility. Do you want your system to be ABI compatible or do you want it to evolve? You can't have both. You have to pick one. I favor evolution.

lionkor|5 years ago

> Promising long-term ABI stability would prevent us from fixing mistakes and providing best in class performance. So, we make no such promises.

Wait NVidia actually get it? Neat!

matheusmoreira|5 years ago

This is an awesome quote... Same argument used by the Linux kernel developers.

lars|5 years ago

It really is a tiny subset of the C++ standard library, but I'm happy to see they're continuing to expand it: https://nvidia.github.io/libcudacxx/api.html

shaklee3|5 years ago

Nvidia has had many members on the C++ standards committee for a while.

roel_v|5 years ago

Yeah, really tiny... At first I thought 'wow this is a game changer', but then I looked at your link and thought 'what's the point?'. Can someone explain what real problems you can solve with just the headers in the link above?

blelbach|5 years ago

Today, you can use the library with NVCC, and the subset is small. We'll be focusing on expanding that subset over time.

Our end goal is to enable the full C++ Standard Library. The current feature set is just a pit stop on the way there.

RcouF1uZ4gsC|5 years ago

For everyone wondering where are all the data structures and algorithms, vector and several algorithms are implemented by Thrust. https://docs.nvidia.com/cuda/thrust/index.html

Seems the big addition of the Libcu++ to Thrust would be synchronization.
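For a sense of what Thrust already covers, here's a minimal sketch (illustrative, not from the thread; assumes a CUDA toolkit): a vector-like container with device storage, plus parallel sort and reduce.

```cuda
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <vector>

int main() {
    std::vector<int> h = {4, 1, 3, 2};
    thrust::device_vector<int> d(h.begin(), h.end());  // storage lives on the GPU
    thrust::sort(d.begin(), d.end());                  // parallel sort on device
    int sum = thrust::reduce(d.begin(), d.end(), 0);   // parallel reduction
    // sum == 10; d is now {1, 2, 3, 4}
    return 0;
}
```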

blelbach|5 years ago

Yep, that's correct. My team develops Thrust, CUB, and libcu++.

jlebar|5 years ago

This is super-cool.

For those of us who can't adopt it right away, note that you can compile your cuda code with `--expt-relaxed-constexpr` and call any constexpr function from device code. That includes all the constexpr functions in the standard library!

This gets you quite a bit, but not e.g. std::atomic, which is one of the big things in here.
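A sketch of what that flag buys you (the function and kernel names here are illustrative): an ordinary constexpr function, not annotated `__device__`, becomes callable from device code, including constexpr standard-library functions like `std::min`/`std::max`.

```cuda
#include <algorithm>

// A plain constexpr function with no __device__ annotation.
constexpr int clamp_to_byte(int v) {
    return std::min(std::max(v, 0), 255);  // std::min/max are constexpr since C++14
}

// Compiled with `nvcc --expt-relaxed-constexpr`, device code
// may call clamp_to_byte() even though it isn't marked __device__.
__global__ void clamp_kernel(const int* in, int* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = clamp_to_byte(in[i]);
}
```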

BoppreH|5 years ago

Unfortunate name: "cu" is the most well-known slang for "anus" in Brazil (population: 200+ million). "Libcu++" is sure to cause snickering.

unrealhoang|5 years ago

It’s “penis” in Vietnamese (pop. 80M); I guess people don’t really care, since tech language is usually English.

blelbach|5 years ago

"cu" is a pretty common prefix for CUDA libraries. cuBLAS, cuTENSOR, CUTLASS, CUB, etc.

It gets worse if you try to spell libcu++ without pluses:

libcuxx, libcupp (I didn't hate that one, but my team disliked it).

We settled on `libcudacxx` as the alphanumeric-only spelling.

jcampbell1|5 years ago

These things never seem to matter even in English. How many times have you heard someone say “I don’t like Microsoft”, followed by “that’s what she said”.

gumby|5 years ago

cu is, or was back in the day, a standard Unix utility (call up) — connect to another machine via modem.

It doesn’t appear to be in Ubuntu any more but still in openbsd, netbsd, and macos!

You can’t win with these namespace collisions: I have friends whose names are obscenities in other languages I speak.

CyberDildonics|5 years ago

Wait until you see the namespace the standard library is under.

Although maybe short words that are slang in languages different from what something was written in aren't a big deal.

amelius|5 years ago

"CU" is also an abbreviation of "see you". I don't think it causes much awkwardness, but I could be wrong.

NullPrefix|5 years ago

This only affects developers. Limited scope.

Wasn't there something related about Microsoft Lumia phones?

nitrogen|5 years ago

Do chemists have similar problems working with copper, whose chemical symbol is Cu?

gswdh|5 years ago

In all honesty, out of all the combinations of two- and three-letter acronyms, there’s bound to be a language out there where the meaning is crude. I recall something on here recently being rude in Finnish or Swedish. We’re professionals; it’s just a name. Who cares?

einpoklum|5 years ago

1. How do we know what parts of the library are usable on CUDA devices, and which are only usable in host-side code?

2. How compatible is this with libstdc++ and/or libcu++, when used independently?

I'm somewhat suspicious of the presumption of us using NVIDIA's version of the standard library for our host-side work.

Finally, I'm not sure that, for device-side work, libc++ is a better base to start off of than, say, EASTL (which I used for my tuple class: https://github.com/eyalroz/cuda-kat/blob/master/src/kat/tupl... ).

...

partial self-answer to (1.): https://nvidia.github.io/libcudacxx/api.html apparently only a small bit of the library is actually implemented.

blelbach|5 years ago

> apparently only a small bit of the library is actually implemented.

Yep. It's an incremental project. But stay tuned.

> I'm somewhat suspicious of the presumption of us using NVIDIA's version of the standard library for our host-side work.

Today, when using libcu++ with NVCC, it's opt-in and doesn't interfere with your host standard library.

I get your concern, but a lot of the restrictions of today's GPU toolchains come from the desire to keep using your host toolchain of choice.

Our other compiler, NVC++, is a unified stack; there is no host compiler. Yes, that takes away some user control, but it lets us build things we couldn't build otherwise. The same logic applies for the standard library.

https://developer.nvidia.com/blog/accelerating-standard-c-wi...

> Finally, I'm not sure that, for device-side work, libc++ is a better base to start off of than, say, EASTL (which I used for my tuple class: https://github.com/eyalroz/cuda-kat/blob/master/src/kat/tupl... ).

We wanted an implementation that intended to conform to the standard and had deployment experience with a major C++ implementation. EASTL doesn't have that, so it never entered our consideration; perhaps we should have looked at it, though.

At the time we started this project, Microsoft's Standard Library wasn't open source. Our choices were libstdc++ or libc++. We immediately ruled libstdc++ out; GPL licensing wouldn't work for us, especially as we knew this project had to exchange code with some of our other existing libraries that are under Apache- or MIT-style licenses (Thrust, CUB, RAPIDS).

So, our options were pretty clear; build it from scratch, or use libc++. I have a strict policy of strategic laziness, so we went with libc++.

Mr_lavos|5 years ago

Does this mean you can do operations on structs that live on the GPU hardware?

shaklee3|5 years ago

You have been able to do that for a long time with UVA.
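A sketch of that pattern (illustrative names; assumes a CUDA-capable system): with unified/managed memory, a single allocation holds the structs and the same pointer is valid on both host and device.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

struct Particle { float x, v; };

__global__ void advance(Particle* p, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) p[i].x += p[i].v * dt;  // mutate the struct in place on the GPU
}

int main() {
    const int n = 1024;
    Particle* p = nullptr;
    // One allocation, visible to both CPU and GPU (unified memory).
    cudaMallocManaged(&p, n * sizeof(Particle));
    for (int i = 0; i < n; ++i) p[i] = Particle{0.f, 1.f};  // initialize on host
    advance<<<(n + 255) / 256, 256>>>(p, n, 0.5f);          // update on device
    cudaDeviceSynchronize();
    printf("p[0].x = %f\n", p[0].x);                        // read back on host
    cudaFree(p);
    return 0;
}
```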

gj_78|5 years ago

I really do not understand why a (very good) hardware provider is willing to create/direct/hint custom software for the users.

Isn't this exactly what a GPU firmware is expected to do ? Why do they need to run software in the same memory space as my mail reader ?

blelbach|5 years ago

NVIDIA employs more software engineers than hardware engineers.

> Why do they need to run software in the same memory space as my mail reader ?

It is a lot more expensive to build functionality and fix bugs in silicon than it is to do those same things in software.

At NVIDIA, we do as much as we possibly can in software. If a problem or bug can be solved in software instead of hardware, we prefer the software solution, because it has much lower cost and shorter lead times.

Solving a problem in hardware takes 2-4 years minimum, massive validation efforts, and has huge physical material costs and limitations. After it's shipped, we can't "patch" the hardware. Solving a problem in software can sometimes be done by one engineer in a single day. If we make a mistake in software, we can easily deploy a fix.

At NVIDIA we have a status for hardware bugs called "Won't Fix, Fix in Next Chip". This means "yes, there's a problem, but the earliest we can fix it is 2-4 years from now, regardless of how serious it is".

Can you imagine if we had to solve all problems that way? Wait 2-4 years?

On its own, our hardware is not a complete product. You would be unable to use it. It has too many bugs, it doesn't have all of the features, etc. The hardware is nothing without the software, and vice versa.

We do not make hardware. We make platforms, which are a combination of hardware and software. We have a tighter coupling between hardware and software than many other processor manufacturers, which is beneficial for us, because it means we can solve problems in software that other vendors would have to solve in hardware.

> I really do not understand why a (very good) hardware provider is willing to create/direct/hint custom software for the users.

Because we sell software. Our hardware wouldn't do anything for you without the software. If we tried to put everything we do in software into hardware, the die would be the size of your laptop and cost a million dollars each.

You wouldn't buy our hardware if we didn't give you the software that was necessary to use it.

> Isn't this exactly what a GPU firmware is expected to do ?

Firmware is a component of software, but usually has constraints that are much more similar to hardware, e.g. long lead times. In some cases the firmware is "burned in" and can't be changed after release, and then it's very much like hardware.

Const-me|5 years ago

> Isn't this exactly what a GPU firmware is expected to do?

The source data needs to appear on the GPU somehow. Similarly, the results computed on GPU are often needed for CPU-running code.

GPUs don’t run an OS and are limited. They can’t possibly access the file system, and many useful algorithms (like a PNG image codec) are a poor fit for them. Technically I think they can access source data directly from system memory, but doing that is inefficient in practice, because GPUs have a special piece of hardware (called the copy command queue in D3D12, or the transfer queue in Vulkan) to move large blocks of data over PCIe.

That library implements an easier way to integrate CPU and GPU pieces of the program.
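A sketch of that staging pattern (illustrative; assumes the CUDA runtime): pinned host memory plus an asynchronous copy on its own stream lets the dedicated copy engine move data over PCIe while kernels on other streams keep executing.

```cuda
#include <cuda_runtime.h>

int main() {
    const size_t n = 1 << 20;
    float *h = nullptr, *d = nullptr;
    cudaMallocHost(&h, n * sizeof(float));  // pinned host memory: DMA-friendly
    cudaMalloc(&d, n * sizeof(float));

    cudaStream_t copy_stream;
    cudaStreamCreate(&copy_stream);

    // The copy engine services this transfer; kernels launched on other
    // streams can run concurrently with it.
    cudaMemcpyAsync(d, h, n * sizeof(float), cudaMemcpyHostToDevice, copy_stream);
    cudaStreamSynchronize(copy_stream);

    cudaStreamDestroy(copy_stream);
    cudaFree(d);
    cudaFreeHost(h);
    return 0;
}
```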

dahart|5 years ago

What do you mean about running in the same memory space? Your operating system doesn’t allow that. Is your concern about using host memory? This open source library doesn’t automatically use host memory, users of the library can write code that uses host memory, if they choose to.

How would a firmware help me write heterogeneous bits of c++ code that can run on either cpu or gpu?

scott31|5 years ago

A pathetic attempt to lock developers into their hardware.

jpz|5 years ago

They seem to be pushing the boundary of innovation in GPU compute. It seems a little unfair to call that pathetic, whatever strategic reasons they have for finding OpenCL unappetising (which, in truth, mainly enables their sole competitor).

Their decision making seems rational; of course it's not ideal if you're a consumer. We would like the ability to play NVidia off against AMD Radeon.

Convergence to a standard has to be driven by the market, but it's impossible to drive NVidia there because they are the dominant player and it is 100% not in their interests.

It doesn't mean they're a bad company. They are rational actors.

blelbach|5 years ago

> A pathetic attempt to lock developers into their hardware

Ah-ha, you've caught us! Our plan is to lock you into our hardware by implementing Standard C++.

Once you are all writing code in Standard C++, then you won't be able to run it elsewhere, because Standard C++ only runs on NVIDIA platforms, right?

... What's that? Standard C++ is supported by essentially every platform?

Darnit! Foiled again.

daniel-thompson|5 years ago

I think CUDA itself is the locking attempt; this is just a tiny cherry on top.

pjmlp|5 years ago

The other vendors are to blame for sticking with outdated C and printf style debugging.

gj_78|5 years ago

Agree++. They are good at hardware and should stay that way.