top | item 31318684

electricshampo1|3 years ago

Unlike with goroutines, it seems here you have control over the execution schedule for the virtual threads if you provide an executor. This is pretty great.

Think this will obsolete Go over the next few decades.

jjice|3 years ago

I don't think it will. If everyone were clamoring for Java and had settled on Go only because of goroutines, then sure, but I think Go is liked for a lot of reasons aside from that. I also don't often see people complain about wanting more control over Go's scheduler (could be that I just miss those complaints).

I'd be surprised if Go adoption plummeted because of this, but who knows, I sure don't have a crystal ball.

lostcolony|3 years ago

Sane concurrency is -one- of the reasons people reach for Go, and sure, that may no longer be a differentiator. But it's definitely not the only one I've heard people toss around (and, agreed, I've never heard anyone bemoan the lack of control over the scheduler). In fact, introducing virtual threads with no new memory semantics means, I think, that Java still lacks one of the main benefits of goroutines (channels and default copying semantics); everything in JVM land will still default to shared memory and pass-by-reference semantics.

I think it's all a moot point though, as it basically just demonstrates the next iteration of Paul Graham's Blub Paradox. Every iteration of improvements to the JVM reinforces the belief of many that the JVM is the best tool for every job (after all, it just got cool feature Y that they only now learned about and can use, and OMG, Blub-er-Java is so cool, who needs anything else?!), and reinforces the belief of many others that the JVM is playing catchup with other languages (it only just -now- got feature Y) and that there are often better tools out there.

cogman10|3 years ago

Not on the first go around. AFAIK, they are looking at exposing more of the internal scheduling API but that's likely not going to be a part of the initial release.

The executor services referred to in the blog are for the order of execution of tasks on the virtual thread pool. For a "virtualThreadExecutor" service, every task will get a virtual thread and scheduling will happen internally.

You can still use a fixed thread pool with a custom task scheduler if you like, but that's probably not exactly what you're after.
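For reference, here's a minimal sketch of the per-task virtual-thread executor being described, using the API as it eventually shipped (`Executors.newVirtualThreadPerTaskExecutor()`, final in Java 21; the preview builds discussed in this thread may name things differently). The class and method names are my own:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {
    // Submit n blocking tasks; each submitted task gets its own virtual
    // thread, and the JVM's internal scheduler multiplexes them onto a
    // small pool of carrier (platform) threads.
    static int runTasks(int n) throws InterruptedException {
        AtomicInteger completed = new AtomicInteger();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                exec.submit(() -> {
                    Thread.sleep(1); // blocking is cheap on a virtual thread
                    return completed.incrementAndGet();
                });
            }
        } // close() waits for all submitted tasks to finish
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(10_000)); // prints 10000
    }
}
```

Note that the executor only decides which tasks get submitted; the mapping of virtual threads onto carrier threads still happens inside the JVM, which is exactly the part that isn't exposed yet.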

CHY872|3 years ago

This blogpost does rely on plugging in an executor. While the API was removed, it's one private variable away (documented in a footnote). As you say, it seems like it's an 'on the way, but later' thing - the last Loom preview I used (a while ago) actually had the API, so when I started drafting this post I was unhappily surprised!

didip|3 years ago

Java moves a lot more slowly, so I don't think it will obsolete Go.

If anything, if Loom is great, then it will keep Go on its toes, and hopefully Go will also evolve under the external pressure.

slantedview|3 years ago

I beg to differ. Go has been moving at a glacial pace. Generics took forever to implement, aren't even feature complete (no type parameters on methods), and have no integration with the standard lib. Meanwhile Java is adding lots of new features and versions at a quick pace.

d3nj4l|3 years ago

I've heard some variation of "$java_feature will make $language obsolete" for years now, most recently w.r.t. Kotlin/Scala, and it's never held true. It's great for the people who use Java, but there are tons of reasons why other people use other languages.

gorjusborg|3 years ago

I've noticed the same behavior.

Imitation can only get you so far. Java is changing, and in many cases for the better, by absorbing features from other languages. However, I still think several other languages do a better job curating features to fit a niche.

That said, Loom appears to be a serious upgrade for JVM languages. Now, if startup could get an order of magnitude faster...

EdwardDiego|3 years ago

Java record vs. Kotlin dataclass?
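(For anyone unfamiliar with that comparison, a quick sketch: Java records generate a value-based `equals`, `hashCode`, and `toString` much like a Kotlin data class, though without `copy()` or named-argument construction. Example of my own:)

```java
// A record is a concise, immutable data carrier with a generated
// constructor, accessors, equals, hashCode, and toString -- broadly
// comparable to a Kotlin data class, minus copy() and named arguments.
public record Point(int x, int y) {
    public static void main(String[] args) {
        Point p = new Point(1, 2);
        System.out.println(p);                         // prints Point[x=1, y=2]
        System.out.println(p.equals(new Point(1, 2))); // prints true
        System.out.println(p.x());                     // prints 1
    }
}
```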

kjeetgill|3 years ago

One of the unsung heroes of go is how goroutines sit on top of channels + select. Blocking and waiting on one queue is easy, blocking and waiting on a set of queues waiting for any to get an element is a good deal trickier. Having that baked into the language and the default channel data-structures really does pay dividends over a library in a case like this.

You can kinda do this with futures, but I suspect it'll be wildly inefficient. I really hope Java gets something to fill this niche. We already have a menagerie of Queue, BlockingQueue, and TransferQueue implementations. What's a few more?
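To make the clunkiness concrete, here's a rough sketch of the usual JVM workaround for "block until any of several queues has an element": fan every source queue into one merged queue, with a forwarding thread per source. It costs an extra thread and an extra hop per element, which is exactly what Go's built-in `select` avoids. Names are my own; `Thread.ofVirtual()` assumes the Java 21 API:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class PseudoSelect {
    // Merge several source queues into one, so a consumer can block on
    // "any of them" -- a rough stand-in for Go's select, at the cost of
    // one forwarding (virtual) thread per source plus an extra hop and
    // copy per element.
    static <T> BlockingQueue<T> merge(List<BlockingQueue<T>> sources) {
        BlockingQueue<T> merged = new LinkedBlockingQueue<>();
        for (BlockingQueue<T> src : sources) {
            Thread.ofVirtual().start(() -> { // virtual threads are daemons
                try {
                    while (true) merged.put(src.take());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        return merged;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> a = new LinkedBlockingQueue<>();
        BlockingQueue<String> b = new LinkedBlockingQueue<>();
        BlockingQueue<String> any = merge(List.of(a, b));
        a.put("from-a");
        b.put("from-b");
        // Arrival order across sources is not deterministic.
        System.out.println(any.take());
        System.out.println(any.take());
    }
}
```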

sudarshnachakra|3 years ago

I guess the Structured Concurrency JEP below addresses the problems you want to solve. It'll enable things like AND / OR combinations of virtual threads, which IMHO looks like a better way to solve this than having special syntax for select.

https://openjdk.java.net/jeps/8277129

But frankly I'm afraid of how these changes affect garbage collection since more and more vthread stacks are going to be in the heap (I hope they are contemplating some form of deterministic stack destruction along with the above JEP).

ackfoobar|3 years ago

> Having that baked into the language and the default channel data-structures really does pay dividends over a library in a case like this.

Kotlin coroutines have the bare minimum in the language, and implement the rest (e.g. channel, select, `go`/`launch`) in libraries. Could you explain what the dividends for Go are?

MrBuddyCasino|3 years ago

Channels on the JVM would be sweet. You can do the same with Futures, and it's probably not even slower, but it is a lot more clunky. I suspect it's never gonna happen, too big a change. Maybe Kotlin will do it.

Thaxll|3 years ago

Go does not need 128MB of memory to run hello world in a container.

People don't pick up Go over Java because of goroutines, Java is still and will forever be an "enterprise" language behind many layers of abstractions.

CHY872|3 years ago

The JVM is a master of gradually closing the gap. Over the last decade:

- Garbage collectors have required far less tuning; with G1, Shenandoah, ZGC it's likely that your application will need little tuning on normal sized heaps.

- Modules, jlink, etc. allow one to build a much smaller Java application by including only the parts of the JVM that one needs.

- Graal native images are real. These boast a far lower startup overhead and much lower steady state memory usage for simpler applications.

Probably my counterexample of choice is this: https://github.com/dainiusjocas/lucene-grep - it uses Lucene, one of the best search libraries (core of Elasticsearch, Solr, most websites), which is notoriously not simple code, to implement grep-like functionality. In simple cases, they demonstrate a 30ms whole process runtime with no more than 32MB of RAM used (which looks suspiciously like a default).

The JVM is fast becoming a bit like Postgres... one of those 'second best at everything' pieces of tech.

native_samples|3 years ago

FWIW, this criticism no longer applies to people using AOT compilation. From my macOS laptop:

    /usr/bin/time -l ./hello-world
    Hello World!
        0.00 real         0.00 user         0.00 sys
             3231744  maximum resident set size
                   0  average shared memory size
                   0  average unshared data size
                   0  average unshared stack size
                 841  page reclaims
                   1  page faults
                   0  swaps
                   0  block input operations
                   0  block output operations
                   0  messages sent
                   0  messages received
                   0  signals received
                   2  voluntary context switches
                   4  involuntary context switches
            22395110  instructions retired
            18507246  cycles elapsed
             1294336  peak memory footprint
So "peak memory footprint" for hello world is 1.2 MB and it starts instantly.

Now, not everyone can/will use AOT compilation. It's slow to compile and peak performance is lower unless you set up PGO, plus it may need a bit of work in cases where apps assume the ability to do things like generate code on the fly. But Go can't do runtime code generation easily at all, and if you are OK with those constraints, you get C-like results.

kaba0|3 years ago

Low memory consumption also has a price in this case. On a 1 TB server machine, guess which platform will have better throughput by far? Go's GC will die under that load.

Writing a “hello world”-scoped microservice is a tiny niche.

MobiusHorizons|3 years ago

I'm curious why you think a language feature would obsolete an entire programming language. Do you imagine that Go programmers secretly wish they were writing Java syntax? In my experience this is very much not true.

EdwardDiego|3 years ago

I'm tempted to make cheap shots about sets that do actual set operations without a for loop, but I'm hoping Go >=1.19 will start introducing some nice generic collections in the stdlib.