> macOS on Apple silicon processors (M1, M2, and M3) includes a feature which controls how and when dynamically generated code can be either produced (written) or executed on a per-thread basis. […] With macOS 14.4, when a thread is operating in the write mode, if a memory access to a protected memory region is attempted, macOS will send the signal SIGKILL instead.
This isn’t just any old thread triggering SIGKILL, it’s the JIT thread privileged to write to executable pages that is performing illegal memory accesses. That’s typically a sign of a bug, and allowing a thread with write access to executable pages to continue executing after that is a security risk.
But I know of other language runtimes that take advantage of installing signal handlers for SIGBUS/SIGSEGV to detect when they overflow a page so they can allocate more memory, etc. This saves having to do an explicit overflow check on every allocation. Those threads aren’t given privilege to write to executable memory, so they’re not seeing this issue…
So this sounds like a narrow design problem the JVM is facing with their JIT thread. This blog doesn’t explain why their JIT thread needs to make illegal memory accesses instead of an explicit check.
> "This blog doesn’t explain why their JIT thread needs to make illegal memory accesses instead of an explicit check."
Because explicit checks on every memory access (pointer dereference) make Java significantly slower, even with compiler optimisations to remove redundant checks[1]. Memory protection is a fundamental, very useful hardware feature, and it's perfectly reasonable for user-space language runtimes to take advantage of it.
Or, to put it another way, SIGSEGV has been a part of Unix-family OSes for decades. It works perfectly fine on Linux and Windows and there's no reason it shouldn't work on macOS.
[1] (Many years ago I worked on a cross-platform implementation of the Java runtime and wrote much of the threads and signal handling code. We had an option to enable explicit memory checks, which got us up and running faster on new platforms where the SIGSEGV handlers hadn't been written yet. From memory this made everything something like 30-50% slower, so it was definitely worthwhile to implement SIGSEGV handling. In our case SIGSEGV handlers were used both as part of the garbage collector/memory management and to implement Java's NullPointerException)
The article says this affects versions all the way back to Java 8, so this design has evidently been in place for a while, and since the older versions are EOL, any Java-level fix would not be backported to them.
“The Java Virtual Machine […] leverages the protected memory access signal mechanism both for correctness (e.g., to handle the truncation of memory mapped files) and for performance.”
Where by “protected memory access signal mechanism”, they mean SIGBUS/SIGSEGV, i.e., a segfault.
This is probably because the JVM is doing “zero cost access checks”, which is where you do the moral equivalent of:
…because it’s faster than checking file permissions before every write. (It’s a common pattern in systems programming, so it’s not quite as crazy as it sounds.)
I guess my opinion on this is that if you write your program to intentionally trigger and ignore kill(10) / kill(11) from the host OS, for the sake of a speed boost, you can’t really get too mad when the host OS gets fed up and starts sending kill(9) instead.
I also wonder what happens in the (extremely rare) case where the signal the JVM is trapping is a real segfault, and not an operating system signal.
This isn't about files, this is about plain pages of RAM[0]. It is a basic CPU operation to trap on unmapped pages, and OSes rightfully expose this useful feature (in addition to using it themselves), allowing processes to do many things, from lazily-computed memory regions to removing significant amounts of overhead doing a thing the CPU will inevitably do itself anyway.
I believe the "truncation of memory mapped files" case is for when the Java process memory-maps a file (Java provides memory-mapping operations in its standard library, and probably also uses them itself), and afterwards some other, unrelated process truncates the file, resulting in the OS quietly making (parts of) the mapping inaccessible. Here the process couldn't even check permissions before reading (never mind how utterly, hilariously inefficient that would be, defeating the purpose of memory-mapping), as the mapping could change between the check and the subsequent read anyway.
[0]: https://bugs.java.com/bugdatabase/view_bug?bug_id=8327860, the "I've managed to narrow this down to this small reproducer" section
> I also wonder what happens in the (extremely rare) case where the signal the JVM is trapping is a real segfault, and not an operating system signal.
Just an educated guess, but the JVM knows whether a thread may expect a segfault at a given point or not. If no thread expects one, then I assume the segfault handler just writes out that a segfault happened, with some useful info, and terminates the program. I'm fairly sure about that effect, as I have caused a JVM to segfault a couple of times with native memory, and it handled it as expected.
"The issue was not present in the early access releases for macOS 14.4, so it was discovered only after Apple released the update."
I wonder if Oracle really didn't know beforehand.
Apple has long been telling people (writing JITs) that to write to executable memory, they need the correct entitlements (com.apple.security.cs.allow-jit, .allow-unsigned-executable-memory, and/or .disable-executable-page-protection). I wonder if Oracle has been ignoring them, satisfied with the signal-handler workaround, and Apple finally enforced their policy.
Apple also expects that developers deploying apps on macOS that use Java have these entitlements configured on a per-app basis. Oracle likely objects that this is not really for the application developer to certify, since it's pretty much out of their control.
In any case, I'm doubting Oracle's release is the whole truth.
> Apple has long been telling people (writing JITs) that to write to executable memory, they need the correct entitlements (com.apple.security.cs.allow-jit, .allow-unsigned-executable-memory, and/or .disable-executable-page-protection). I wonder if Oracle has been ignoring them, satisfied with the signal-handler workaround, and Apple finally enforced their policy.
As far as I understand, that’s not the issue; the JIT itself works just fine. The JVM just uses the (quite common) trick of not actually bounds-checking everything, but letting the hardware trigger a trap, expecting it to “bubble up” to the program at hand so it can handle certain cases “for free”. This behavior was changed by Apple, which causes the issues.
This is honestly a wild, out-there claim. The OpenJDK team would never want to see this happen to their user base. They’re some of the most professional programmers I’ve ever seen.
The whole truth is that the Apple kernel team broke user space.
The main question now is why this wasn't exposed in the 14.4 pre-releases. This could mean some very urgent and risky change made its way into the 14.4 release, or that the whole macOS release process is broken and unstable.
I don’t think the article claims that a Java process tries to access some other process’s memory.
In a typical modern operating system, a memory page can be non-writable and non-executable, writable and non-executable, or non-writable and executable, but not simultaneously writable AND executable.
If you generate executable code at runtime, then you need write access to a page to write the executable code into that page. Then you need to tell the operating system to change the page from writable to executable.
If you then try to write to the page, you’ll get a signal (SIGSEGV or SIGBUS, according to the article).
Oracle’s JVM apparently relies on this behavior: a Java process sometimes tries to write to a page (in its own memory space) that is not marked writable. The JVM then catches the SIGSEGV and recovers (perhaps by asking the operating system to change the page back from executable to writable, or by arranging to write to a different page, or to abort the write operation altogether).
It's not. It's trying to access unmapped or protected memory in its own process.
Basically, what it's used for is to implement an 'if' that's super fast on the most likely path but super slow on the less likely path.
It's not super clear what it's being used for here (this is often used for the GC, but the fact that Graal isn't affected means that likely still works). Possibly they are using this to detect attempts to use inline-cache entries that have been deleted.
In a virtual memory operating system, every program has its own address space. Accessing an unmapped address is not the same as trying to access another process's memory.
It's also pretty common to use memory protection to autoextend stacks... Allocate the stack size you need, ask the OS to mark the page(s) after the stack as protected, catch the signal when you hit the protection, allocate some more stack and a new protected page unless the stack is too big. Works for heaps too.
Let the MMU hardware check accesses, so you don't have to check everything in software all the time.
A fairly common idiom is to use memory protection to provide zero cost access checks, as you can generally catch the signals produced by most memory faults, and then work out where things went wrong and convert the memory access error into a catchable exception, or to lazily construct data structures or code.
So you want the trap, but the trap itself can be handled. It sounds like there's been a semantic change in what happens when the trap occurs for executing an address or accessing an executable page.
There are also a bunch of poorly documented Mac APIs to inform the memory manager and linker about JIT regions, and I wonder if it's related to those. It really depends on exactly what Oracle's JVM is trying to do, and what the subsequent cause of the fault is.
Certainly it’s a less than optimal failure though :-/
Accessing such areas is sometimes done deliberately, since programmers can rely on the OS telling them what just happened using signals instead of nuking the process wholesale. Doing it without signals is usually slow and/or clunky (null-pointer checks, read/write permission checks, checking the existence of pages), or outright impossible.
Accessing other processes' memory is not the concern since virtual memory provides each process the illusion of having the entire address space for itself.
I just bought a MacBook Pro with the M3 Max chip and installed MATLAB R2023b. Sonoma 14.3 is in place. As a requirement, I had to also install Corretto 8. MathWorks only supports the Java 8 JRE included with Amazon Corretto 8. I am already having several problems in MATLAB with this new setup. Can I assume that updating to Sonoma 14.4 might very well cause even more problems? I really don't understand any of this.
It is always funny to me when Apple zealots come into threads blaming everyone but Apple when software breaks, complaining that Java doesn’t follow Apple standards or some crap. Then 9 days later Apple issues a fix, because they did indeed break it.
It seems highly unlikely that the macOS people don't test anything on the JVM during acceptance. It's even more suspicious that this change didn't happen during the public beta. Is it possible that Apple is firing a warning shot at Java? Even as a huge fan of Hanlon's razor, this seems like such an enormous oversight that it's hard for me to ascribe it to incompetence.
> It seems highly unlikely that the macOS people don't test anything on the JVM during acceptance.
I would be surprised if they do to be honest (Apple doesn't even catch obvious bugs in the new macOS settings panel, which really makes me wonder if there is a software QA process at all). For 3rd party apps they seem to rely on the software vendors to holler if a macOS update breaks their app. That's why the macOS prerelease versions exist. But since the bug wasn't present in the prerelease, affected vendors couldn't catch it. It's still a fuckup in Apple release process of course (which tbh also isn't surprising).
It's not a problem that breaks all JVM-based software instantly, so maybe Apple does test, just not for long enough to trigger this issue.
I really don't know what Apple would be 'warning' against. Don't use Java? There are tens of thousands of business and development tools depending on the JVM. Blocking Java would diminish the value of macOS tremendously and doing so without warning would open Apple up to lots of lawsuits.
Another example for how preventing users from doing rollbacks is a terrible practice. Even if it's not your application's fault, users may have very good reasons to revert an update, if only temporarily.
This also bothers me on Android. Sometimes, an app update may break something and prevent me from using it. But Google doesn't allow me to reinstall a previously published version from the Play Store. If I don't have to (or can't easily) do without that application until a fix might be released, my only option is to find an older release on some shady mirror site.
Even if this was the right thing to do, they could and should have changed this behavior in a pre-release, because it's exactly the kind of OS API change that will catch people off guard. As another commenter wrote, I'd consider this either a serious flaw in Apple's release process, or they learned about some very dangerous vulnerability where the old behavior was abused and decided they would rather annoy all users and vendors of Java software than tolerate the vulnerability in macOS. But in that case I'd surmise that at least Oracle would have been informed by now.
A gross and low-performance option for now might be to run Java under Rosetta. I'm suggesting that because the report says this is specific to Apple silicon, and processes under Rosetta have a bunch of quirks to support Intel semantics; it might let you work around this for now.
That said, I'm curious what the exact scenario that leads to this is. I'm assuming it's not common, as you would expect it to have come up during betas and pre-release seeds.
> I'm assuming it's not common, as you would expect it to have come up during betas and pre-release seeds.
The article specifically says that the issue was not present in early access releases, so it was not possible to discover it before the actual release.
I wonder if it's the same reason as why Civilization 6 stopped working on iPadOS 17.4. Did they change something deep in the kernel for DMA compliance?
I wonder if we’re about to enter 4-5 years of macOS “dark ages”, due to Apple grappling with EU/DMA.
Much like Microsoft in the early 2000s, between the IE lawsuit and grappling with internet security/viruses: Windows XP, launched in 2001, was considered by most a great OS, and didn't have another good successor until 8 years later (Windows 7).
It’s not at all like they didn’t have the time or the resources to deal with this.
I think we already saw some of this, in particular with the recent bullshit they tried to pull with PWAs in iOS 17.4, where they were hoping to just let things break and shift the blame and anger towards the EU instead.
Windows, as a kernel and by extension as a server OS, is very resilient and stable, to the point that there is a Windows NT 4 machine in a certain railroad control system that has been running continuously for 14 years without any restarts. It even reboots without problems in disastrous cases such as power loss due to a hurricane or earthquake. Trust me, it was made by Dave Cutler; it just works.
It is really the client-facing side of Windows that sucks (warning: explicitly strong language), such as having really shitty software known as Office (like, god, why Word and not LaTeX, and why a spreadsheet when we have databases we can query efficiently?). Or not being able to have multi-user RDP sessions because Microsoft had a licensing dispute with Citrix about 20-ish years ago (fuck you, Citrix, you asshole!). Or why do I have to jump through a lot of hoops and install a pile of "C++ redistributables" to run some antique software? Or why do I have to wade through a lot of group policy simply to enable WinRM and get remote PowerShell management?
Either way, I'm typing this on a Windows 11 desktop with WSL2 on. The hybrid experience is incredible, unless you need some performance critical app (WSL2 is in general slower than bare metal Windows and bare metal Linux itself, of course, except in machine learning).
Things like using 9P to bridge Windows file system access also introduce a lot of pain around permissions, because Windows does not have a POSIX-like permission system. Instead of a simple mode word split into three octal digits (there is a reason it maxes out at 777), you have an incredibly sophisticated, capability- and token-based access control system dating back almost 30 years, which Linux didn't even have back in the day! But that pile of shit is now full of bugs and exploits such as token/handle duplication. (Oh yes, I'm talking about black-hat territory, as I also do some red-team CTFs around this stuff.)
An issue introduced by macOS 14.4, which causes Java processes to terminate unexpectedly, is affecting all Java versions from Java 8 to the early access builds of JDK 22.
If this affects so many versions of Java and nobody notices, is anyone even using Java on macOS?
Plenty of people develop in Java on Macs. The issue is that, per the article, this behavior was not present in the early access macOS builds, which means something changed between beta and release.
And there's a known issue with an interaction between Minecraft, Java, and the video drivers that crashes out, and it can be traced back all the way to here: https://github.com/glfw/glfw/issues/1997
It's not fixed.
It's not terminating directly. I've seen a few IDE crashes this week, less than one per day, but since there's no log, there's no easy way to determine whether it's related to a macOS change.
Well, that’s why Apple forbids the use of private APIs in App Store apps. If you built your whole tech stack on the foundation of some peculiar, undocumented platform behavior, don’t be surprised when that stack breaks.
This is not an API. It's the handling of writes to memory the process itself has protected. In the past this would generate a signal the process could handle and recover from. Now it generates a SIGKILL, which is uncatchable and unrecoverable.
These behaviours have been historically well documented.
destring|1 year ago
https://lkml.org/lkml/2012/12/23/75
amelius|1 year ago
At least they could have provided a path back to the old behavior.
beeboobaa|1 year ago
Or in apple vernacular, it should just work.
pier25|1 year ago
Also amazing it wasn't caught during the beta period.
CharlesW|1 year ago
I'm just a lowly JavaScript/TypeScript/PHP programmer, but what is the Very Good Reason that Java is trying to access other processes' memory?
xcv123|1 year ago
Do not update until Apple fixes the issue.
dimask|1 year ago
Btw what sort of problems are you facing? I have had problems with closing figures, but figured it out eventually with a workaround [0].
[0] https://se.mathworks.com/matlabcentral/answers/2027964-matla...
w10-1|1 year ago
Can you tell from this or any other Oracle bug whether Apple is bending its rules for Java? I can't tell either way.
not_me_ever|1 year ago
:triplefacepalm:
Somebody hire some engineers at Oracle.
xyst|1 year ago
There was an HN post about a HashiCorp founder using Linux in a VM on their MacBook Pro. I might adopt that same approach, if I can find the original post.
nullwarp|1 year ago
Worked great for years, before I changed to a job that finally let me bring my own hardware.
open592|1 year ago
https://youtu.be/ubDMLoWz76U?si=ipmho73-r9FzZpBp
latchkey|1 year ago
IntelliJ IDEA, the product itself, is JVM based.