top | item 38079565

ojn | 2 years ago

Zero work? I guess memory fades quickly sometimes.

Apple rented out DTK systems for $500 for developers to get their hands on ARM hardware and port their applications. Hardly zero work. You couldn't even keep the hardware, you had to send it back (and originally you wouldn't even get $500 in credit to buy a production system with, luckily they fixed that after the easily predictable uproar).

Google is providing software emulators instead, which are much easier to work with and don't require complicated device logistics. On top of that, the only audiences that really need to do work to get ready are those producing NDK applications and those working on hardware support for RISC-V.

Java/Kotlin applications would not need any porting. Sounds like zero work to me!

lxgr|2 years ago

If you were initially fine with software emulation (i.e. Rosetta 2), as were many small and large software projects for macOS or Unix, you had no need whatsoever to get a DTK.

makeitdouble|2 years ago

> If you were initially fine with software emulation (i.e. Rosetta 2)

You'd need a DTK to know if you're fine with emulation or not.

If you cared at all about your app running, you wouldn't just assume that it magically runs fine, at reasonable speed, on an emulation layer you never touched.

For a reference point, the original Rosetta was far from great: while some apps could run under it, it was most of the time a painful experience. That pain helped push devs into making native versions, but it also means that, going by Apple's track record, you couldn't expect Rosetta 2 to deliver acceptable performance.

beebmam|2 years ago

Java/JVM on ARM/M1 had an enormous amount of bugs for the first several years, fyi. Source: I encountered many of them, which eventually got fixed.

dathinab|2 years ago

luckily many of the things which caused bugs on ARM were related to the weaker memory ordering and to code having (invalid) implicit assumptions about stuff like memory barriers

luckily because this has mostly been fixed, and it is also the biggest stumbling block when going from x86 => RISC-V

Though there was something about the specific LR/SC definition which I found quite problematic when it comes to implementing certain atomic patterns. But that was 2?? years or so ago and I don't remember the details at all.

Either way, theoretically, if you have a correct, standard-compliant C++ program without UB, it should "just work" on RISC-V by now; but then, I don't think such a thing exists (/s/j).

KingOfCoders|2 years ago

"Java/JVM" in general or on Android? Seems to be the second b/c I heavily used Java in the 90s and didn't encounter JVM bugs.

fweimer|2 years ago

GNU/Linux applications are typically quite portable across CPUs. Is Android really that different? I would expect that once you have ported an NDK application to at least two architectures, the third one should be really easy. And with Android, you already get two easily testable architectures (the ubiquitous aarch64, and x86-64 under virtualization).

(It's not that many Debian or Fedora community contributors have a mainframe in their basement, yet the software they package tends to build and run on s390x just fine. Okay, maybe there are some endianness issues, but you wouldn't hit those with RISC-V.)

charcircuit|2 years ago

Assuming you have the source code for all your native code, then yes, it should mostly work. Third-party libraries can contain native code, though, which means you may be dependent on someone else porting their code.

dathinab|2 years ago

> Is Android really that different?

No, this has little to do with Android.

but "GNU/Linux applications are typically quite portable across CPUs" is only this case because a lot of people have a lot of interest and time invested into making that work

But for many commercial apps there is little incentive to spend any time on making them work with anything but ARM (even older armv7 might not be supported, because depending on what your app does, the market share makes it not worth it).

Still, with ARM and RISC-V having somewhat similar memory models, and with people today writing much more standard-compatible C/C++ code instead of "it's UB in theory but will work on this hardware" nonsense, I don't see too many technical issues.

One issue could be that a non-negligible number of Android apps which use the NDK use it only to "link" code from other toolchains into Android apps, e.g. Rust binaries. And while I'm not worried about Rust, I wouldn't be surprised if some of the toolchains used didn't support RISC-V when it starts to become relevant. Especially when it comes to the (mostly but not fully) snake-oil nonsense banking apps tend to use, I would be surprised if there weren't some issues.

Though in the end the biggest issue is a fully non-technical one:

- if you release an app with the NDK, you want to test it on real hardware for all archs you support (no matter whether there is some automatic cross-compilation or translation, like Rosetta)

- but to do so you need access to such hardware; emulators might be okay but don't always cut it

- but there aren't any RISC-V Android phones yet

- but to release such phones you want to have apps available, especially stuff like PayPal or banking apps

- but to have such things available, the providers must evaluate whether it's worth the money it costs to test, which is based on market share, which starts at 0 and grows slowly due to missing support for some apps and missing killer features

So I think RISC-V Android will be a thing, but as long as multiple big companies aren't pushing it strongly, it will take a very long time until it becomes relevant in any way.

flohofwoe|2 years ago

At least for my C/C++/ObjC code it was literally zero work, just recompile and done.

In reality I have a lot more trouble with differences between macOS versions than differences between CPU architectures.