> Rosetta can translate most Intel-based apps, including apps that contain just-in-time (JIT) compilers.
How on Earth does it do that? If executable code is being generated at runtime, it's going to be x86_64 binary machine code still (there are too many ways to generate valid machine code, and it won't know right away whether you're JITting, or cross compiling and actually want x86_64), so Rosetta would need to detect when the code's about to be run, or when it's marked as executable, and translate the machine code in that bit of memory just in time. The translated ARM code might be longer, so it would have to live in a different block of memory, with the original x86_64 code replaced with a jump to the new code or something.
It's late at night here, so maybe I'm missing a simpler approach, but I'm a bit surprised they have it working reliably enough to make such a general statement (there being a great variety of JIT systems). From a quick search I can't tell if Microsoft's x86-on-ARM translation in Windows 10 ARM supports JITs in the program being run.
They might be using something like an NX bit on the generated x86_64 page, so that whenever the code attempts to jump into it, a page fault is generated, and the kernel is able to handle that, kicking in the JIT compilation and translating the code / address. This is essentially a "double JIT" so there will likely be a performance hit.
Since they control the silicon, Apple might also be leveraging a specialized instruction / feature on the CPUs (e.g. a "foreign" bit that's able to mark memory pages as being from another architecture, and some addressing scheme that links them to one or more native pages behind the scenes)
Maybe the A series chips even have "extra" registers / program counters / interrupts that aid in accelerating this emulation process.
Think of binary translation as just another kind of compiler. It parses machine code, generates an IR, does things to the IR, and then codegens machine code in a different ISA. (Heck, Rosetta 2 is probably built on LLVM. Why wouldn't it be? Apple already put so much work into it. They could even lean on similar work like https://github.com/avast/retdec .)
During the "do things to IR" phase of compilation, you can do static analysis, and use this to inform the rest of the compilation process.
The unique pattern of machine code that must occur in any implementation of a JIT is a jump to a memory address that was computed entirely at runtime, i.e. with a https://en.wikipedia.org/wiki/Use-define_chain for that address value that leads back to a call to mmap(2) or malloc(3). Static analysis of the IR can find and label such instructions, and you can use this to replace them in the IR with more-abstract "enter JITed code" intrinsic ops.
Then, in the codegen phase of this compiler, you can have such intrinsics generate a shim of ARM instructions. Conveniently, since this intrinsic appears at the end of the JIT process, the memory at the address passed to the intrinsic will almost certainly contain finalized x86 code, ready to be re-compiled into ARM code. So the shim can just do a (hopefully memoized) call to the Rosetta JIT translator, passing the passed-in address, getting back the address of some ARM code, and jumping to that.
The original PowerPC on Intel Rosetta was pretty amazing.
First, most programs do much of their work inside the OS (rendering, networking, interaction, whatever), so that part isn't emulated; Rosetta just calls the native OS functions after doing whatever input translation is necessary. So nothing below a certain set of APIs is translated.
You have to keep a separate translated binary in memory, and be able to compile missing bits as you encounter them, while remembering all your offset adjustments. It worked amazingly well during the PowerPC transition. Because so many things ran natively on x86, the translated apps were frequently faster than running natively on PowerPC Macs!
Typically the way systems do this is by translating small sections of straight-line code and patching the exits as they are translated. So you start by saying "translate the block at address 0x1234." That code may run until a jump to address 0x4567. When translating that jump, they instead emit a call into the runtime system which asks "where is the translated code starting at address 0x4567?" If that code doesn't exist yet, the runtime goes ahead and translates the block, then patches the original jump to skip the runtime system next time around.
This means early on in the program's run you spend a lot of time translating code, but it pretty quickly stabilizes and you spend most of your time in already translated code.
Of course, if your program is self modifying then the system needs to do some more work to invalidate the translation cache when the underlying code is modified.
I'm not sure why they're making a big deal about this; couldn't the original Rosetta do this too? QEMU has been doing this since (I think) even before the original Rosetta; they call it user-mode emulation. You run as if it were a normal emulator but also trap syscalls and forward them to the native kernel instead of emulating a kernel too.
I'm more interested in how they're doing the AOT conversion and (presumably) patching the result to still be able to emulate for JITs. That'd be (comparatively) simple if it was just for things from the iOS and Mac App Stores since Apple has the IR for them still but they made it sound like it was more generic than that.
My gut feeling is that it will be about the same as Itanium/HP Envy x2 emulation. Emulation of highly optimized hardware where code is generated by highly optimized compilers without an order of magnitude slowdown is just too good to be true.
> If executable code is being generated at runtime, it's going to be x86_64 binary machine code still
JITs have to explicitly make the memory they write the generated code to executable. The OS "just" needs to fail to actually make the page executable, and then handle the subsequent page fault by transpiling or interpreting the x86 code therein.
Notice how the second store on ARM is a store with release semantics to ensure correct memory ordering as it was intended in the original C code. This information is lost as it's not needed on x86 which guarantees (among other things) that stores are visible in-order from all other threads.
That's the big piece I've been wondering about too.
Three options as I see it (none of them great):
1) Pin all threads in an x86 process to a single core. You don't have memory model concerns on a single core.
2) Don't do anything? Just rely on apps to use the system provided mutex libraries, and they just break if they try to roll their own concurrency? Seems like exactly the applications you care about (games, pro apps), would be the ones most likely to break.
3) Some stricter memory model in hardware? Seems like that'd go against most of the stated reason for switching to ARM in the first place.
Elsewhere in the documentation[0] Apple explicitly calls out that code that relies on x86 memory ordering will need to be modified to contain explicit barriers. All sensible code will do this already.
Does anyone have any insight into the business logistics involved in a transition like this?
I presume Apple has maintained ARM builds of desktop OS X as an option for a very long time. How many years ago would they have had to decide that this was the direction they would actually take? 2015? How many people would have been working on it? How many billions of dollars? How does the cost of developing the software compare to the cost of developing a new processor, or to tooling a production line for an entirely new platform?
I'm really curious about the share of institutional resources and attention this likely involved. I wonder how important it is as context to Apple over the past few years.
I also wonder if it heralds a new direction in personal computers. Every time I've considered that my Mac isn't that great, the alternatives aren't that great either. Would Apple ever decide to seriously compete for a dominant share of the personal computer market?
And finally, I am also curious about the accounting and financial disclosures required in such decisions. How much are institutional investors informed of long range plans? There's naturally a lot more pressure to distribute cash to shareholders if you don't know about some five-year-plan, yet that pressure waxes and wanes from activist investors, and institutional investors like CalPERS always seemed to be on board with Apple's direction anyway. Do unofficial leaks prevent violations of insider trading rules?
I've been using Macs at work for a long time, but sadly this will mark the end of that era.
In particular, this limitation on Rosetta rules out an ARM-based Mac for work:
> Virtual Machine apps that virtualize x86_64 computer platforms
My job requires me to use a piece of proprietary Windows-only software for a large portion of my work. If I can't use this software I can't do my job. Currently I run it in VMware Fusion on an Intel Mac, which is a perfect solution for me - I get the great features of MacOS, plus I can run the proprietary toolset that my job requires.
There is a very remote possibility that Windows for ARM could be virtualized by some future version of VMware and the proprietary toolset could run under that, but I'm not holding my breath.
Due to budget constraints, I don't think there's any way that my work would spring for a MacBook Pro plus a Windows machine for me.
On the flip side, Windows 10 seems to be getting really good, so I expect that I'll be just as happy and productive with Windows 10+WSL2.
Rosetta's goal is to support legacy Mac apps. It's quite finely scoped, but I hope Apple will go beyond this and make it available as part of their virtualization framework. That way e.g. Parallels and Docker could tap into what is probably the fastest way to run x86 code on ARM Macs.
This would mean Rosetta would go beyond its stated scope, and certainly go beyond what the relatively short-lived Rosetta 1 did, but would make it easier to integrate ARM Macs in what is still an x86 world.
I'm worried they're not going to do this because x86 emulation is probably too slow (people will attempt to run their AAA games in a VM). It would also mean that Apple would need to support Rosetta 2 forever. If they're not going to do this, everybody will have to rely on QEMU. QEMU is great, but I hope it will perform adequately for at least basic Docker stuff...
I think the longer Rosetta exists and macOS thereby supports a wider variety of binary executables, the more people will rely on it to deliver them day-to-day functionality in increasingly complex configurations, and the more Apple will expose itself to those users' criticisms.
If Apple's rationale for the transition is to gain further control over their product design and manufacture, the prospect of having to appease folks who won't give up their old software (like me!) but who come to expect indefinite Rosetta-type backwards compatibility doesn't sound all that appealing from Apple's perspective. We're talking about a company who denied Carbon support in the 64-bit transition while some of their largest software developers still clung to it.
Furthermore, the old software they're temporarily supporting through Rosetta represents the agreements and policies Apple followed with third party developers during earlier periods in their history, which I'm sure they want to get away from. In their eyes, the sooner they can ditch those policies and impose iOS-type restrictions on any and all macOS software development, the better.
This is probably my last Mac, after thirty years. It's been fun, watching them rise in power, but they've had to become a very different kind of company to get to where they are.
FWIW they showed the latest Tomb Raider game running under Rosetta on a dev Mac said to be using the ARM chip from the high-end iPad. It ran pretty well. Not sure I would quite count this as AAA, though.
I think Rosetta is mostly about userspace, and a hypervisor is definitely not something that sticks to that - modern VMs run on the bare metal thanks to CPU acceleration and dedicated instructions (e.g. VT-x and AMD-V). Getting an app to run on a foreign architecture is different from emulating a whole machine in order to get an unmodified OS kernel to run. You can do that this very day with QEMU; yesterday I tried, just for fun, to boot Windows XP on my Raspberry Pi 4 (it boots, but it's horrendously slow, if you're interested). Maybe with time and effort an emulator can reach a satisfactory level of speed (probably never enough to run Windows games on emulated Windows, though, but who knows).
AAA games spend a lot of time inside system libraries. Porting those to native could be enough to get acceptable performance.
So, chess could be more of a challenge for emulation than AAA games (but probably less of an issue, as it would be easily ported, and have fewer users, anyways)
I thought this was the most interesting paragraph.
> What Can't Be Translated?
> Rosetta can translate most Intel-based apps, including apps that contain just-in-time (JIT) compilers.
I guess translation of JIT-compiled stuff implies this isn't a one-off translation. I guess translating plugins implies that too.
It sounds like very clever stuff to me!
> However, Rosetta doesn’t translate the following executables:
>
> Kernel extensions
Fair enough
> Virtual Machine apps that virtualize x86_64 computer platforms
I guess most VMs rely on hardware virtualization which would be tricky to translate well.
> Rosetta translates all x86_64 instructions, but it doesn’t support the execution of some newer instruction sets and processor features, such as AVX, AVX2, and AVX512 vector instructions. If you include these newer instructions in your code, execute them only after verifying that they are available. For example, to determine if AVX512 vector instructions are available, use the sysctlbyname function to check the hw.optional.avx512f attribute.
These sound like they should be relatively straightforward to translate. I wonder why they didn't? Lack of time? Or perhaps translating them means they don't run fast enough to be useful, and the fallback paths are likely to run quicker.
Having big flashbacks to the switch from PPC to x86 here. Rosetta worked relatively smoothly during that transition so fingers crossed it will be ok here too.
Though with Docker support on the Mac already being a second-class citizen compared to running on Linux, I wonder if a lot of devs will stop using Macs for dev.
Docker on ARM will work, Docker for x86 will not. The State of the Union showed a demo of Hypervisor.framework with Parallels, and they made it clear that Debian AArch64 was running (uname -a). Since Docker runs inside a VM on the Mac, it'll have to be an ARM VM with ARM containers.
(Presumably, running docker build with your Dockerfile will make it work just fine, unless you need x86 specific libraries).
Apple mentioned they are specifically "working with Docker to support 'these things' in coming months".
Confirmed with some Docker folks they are working on "something".
All very nondescript, but they said they can't talk about it yet.
My understanding is that the virtualization Apple provides is only for the same architecture as the host OS. In the demos given running Debian, they run uname -a and it reports aarch64.
Docker has had an experimental feature for some time to build containers cross architecture [1]. I'm guessing this transition is a good excuse to finish that up. Running cross architecture containers with only Docker is not possible as far as I know.
I'm guessing that we're going to have to see a lot better adoption of cross-platform container builds because of this.
Rosetta can translate most Intel-based apps, including apps that contain just-in-time (JIT) compilers. However, Rosetta doesn’t translate the following executables:
* Kernel extensions
* Virtual Machine apps that virtualize x86_64 computer platforms
>Rosetta translates all x86_64 instructions, but it doesn’t support the execution of some newer instruction sets and processor features, such as AVX, AVX2, and AVX512 vector instructions.
No AVX will probably mean that the vast majority of pro/graphics intensive apps won't work out of the box with Rosetta.
Are the limitations on x86_64 virtualization likely to be for technical reasons, or patent reasons? I read a comment on here alluding to some patents on x86_64 virtualization expiring later this year: https://news.ycombinator.com/item?id=23612256 - could that mean that there is a chance this might happen and they are keeping it quiet for now, or are patents likely unrelated?
Apple spent $1B buying Intel's modem business last year. https://www.apple.com/newsroom/2019/07/apple-to-acquire-the-... Apple knew this transition was coming. They could have easily slipped in other terms to deal with any IP licensing issues around implementing the x86_64 instruction set.
Then how do we end up calling the correct address in arm64 land? There are no type tags in assembly to distinguish between integers and addresses. In QEMU my understanding is that translation is done dynamically, so addresses would float around in memory as x86_64 addresses, and when you tried to `call` one it would look up a mapping table. I suspect QEMU also tries to optimise this case, similar to a JIT using inline caching, so most of the time you wouldn't actually hit the mapping table.
But if you are not dynamically converting x86_64 addresses to arm64 addresses, then you need to understand what all the addresses in the program are and all the manipulations that might be performed on them. Now, you shouldn't actually be doing weird manipulations of addresses of functions in memory, but if you are running obfuscated code this often happens.
I think in QEMU this would work assuming (myfun+4)() does something intelligent:
#include <stdint.h>
#include <stdio.h>

uint64_t add(uint64_t v) {
    return v + 4;
}

void myfun(void) {}  /* body added so the example links */

int main(void) {
    void (*x)(void) = &myfun;
    x = (void (*)(void))add((uint64_t)x);  /* x now points 4 bytes into myfun */
    x();  /* undefined behavior unless myfun+4 is a valid entry point */
    printf("%llu\n", (unsigned long long)add(4));
    return 0;
}
If you are holding function addresses as arm64 addresses in memory, then you need to dispatch add() based on whether the argument is an integer or an address.
I wonder how/if it supports unaligned jumps which x86 supports IIRC. The consequence of unaligned jumps is that it can effectively make it impossible to know the set of instructions a binary might use.
You'll probably also enjoy Nvidia's Denver architecture[1] (used in some Tegra processors), which JIT-translates ARM instructions into its own internal instruction set inside the processor.
Here's a tutorial for a simple JIT that uses mmap() with PROT_EXEC to map memory, write machine code into it, and execute it:
https://github.com/spencertipping/jit-tutorial
https://www.reddit.com/r/emulation/comments/2xq5ar/how_close...
https://www.qemu.org/docs/master/user/main.html
[0] https://developer.apple.com/documentation/apple_silicon/addr...
How will this impact docker? Does this mean you can't run x86 docker containers on new Apple laptops?
[1]: https://docs.docker.com/buildx/working-with-buildx/#build-mu...
I understand your consternation, though. Dev docs shouldn't require dumbing things down.
Some people are not going to be happy about this.
Edit: But I personally am okay with that.
There was so much positivity around the Intel transition. It opened up the Mac platform. Now it’s going back into a closed black box.
This reminds me of Digital's VEST technology. VEST would convert VAX programs to run on Alpha. From 32-bit CISC to 64-bit RISC. Nearly 30 years ago.
https://web.stanford.edu/class/cs343/resources/binary-transl...
I imagine Apple has enough patents that Intel infringes on to be able to cross licence.
[1] https://en.wikipedia.org/wiki/Project_Denver