- Vulkan first graphics interfaces. I mean, OK. Not that I'm in love with this overly complex API, but fine.
- No OpenGL support. I guess this is where the world is moving
- No POSIX support. Quite a few game engines rely on it; oh well, remember when Google cared about developers
- Nothing about sound (my personal thing)
As a game developer I found working with Android unpleasant, to say the least. Take sound: Android has something like four sound systems, and all of them (except the simplest one, available from the Java side) are not fully implemented and are swarming with compatibility bugs across manufacturers and Android versions. Not to mention that they introduce a new one every once in a while, with new-version adoption slower than a sloth.
I get that Google engineers enjoy rewriting things they don't like, but come on. Fix what exists first. Don't change APIs on us - not everyone has an extra million dollars to throw at a project to refactor it every time Google decides they want a shiny new thing, which as a result is broken too, just in different ways. OK, I admit I exaggerate, but it seems like tough times for game engines (besides super-hyped ones like Unity or Unreal, which have no problem throwing tens of millions at the problem).
A note about OpenGL support: I'm pretty sure they'll drag it in by porting the ANGLE library, which is currently being actively worked on. Compatibility layers on top of Vulkan are gaining momentum, but I hope they'll join forces with MoltenGL/VK instead of making their own, worse analog.
- OpenGL should be a library. This is just going to make OpenGL development easier in the long run. Right now there are too many OpenGL implementations and the differences matter. Running one OpenGL library on top of N different Vulkan implementations is miles better than running on top of N different OpenGL implementations.
- POSIX can be a library too. It doesn’t have to be provided by the kernel. People have been strapping POSIX layers on top of things for ages. This was originally how Mach worked. You can still do weird things on iOS and macOS “below” the POSIX layer (although most POSIX syscalls are just provided by the kernel).
You talk about how hard it is to change APIs… and how much you hate to refactor things every time Google decides they want the shiny new thing. But POSIX is rooted in the 1970s. It sucks. It’s about time to try something new. The entire POSIX model is based around the idea that you have different people using the same computer and you don’t want them to accidentally delete each others’ files. There are a ton of APIs in POSIX which are straight-up shit, like wait(). Sandboxes are unportable and strapped-on, using arcane combinations of things like chroot and cgroups.
Let’s give people a chance to try out something new. Unix is somewhere around 50 years old now. OS research is dying out. Make it easier to run untrusted programs.
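The wait() complaint above is concrete and easy to demonstrate: POSIX wait() reaps *any* terminated child of the calling process, so if a library you link against forks its own helper, a plain wait() in your code can reap the library's child instead of yours. A minimal sketch (POSIX-only; runs on Linux/macOS):

```python
import os

def reap_any_child():
    """Fork one child, then reap it with os.wait().

    POSIX wait() is process-global: it returns *any* terminated child,
    not a specific one. With a single child this is deterministic, but
    the moment a library forks its own helper process, a plain wait()
    can reap the wrong child. waitpid() narrows the wait to one pid,
    but the child table is still a single process-wide namespace.
    """
    pid = os.fork()
    if pid == 0:
        os._exit(7)                      # child: exit with status 7
    reaped_pid, status = os.wait()       # parent: reap whichever child exited
    return reaped_pid == pid, os.WEXITSTATUS(status)
```

With one child the result is deterministic; the design problem only bites once multiple independent components fork, which is exactly when you can't control who calls wait() first.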
> Don't change APIs on us - not everyone has an extra million dollars to throw at a project to refactor it every time Google decides they want a shiny new thing, which as a result is broken too, just in different ways.
This was what drove me out of my (brief) stint at Android development. Did some hobby development to learn the ropes, spent a lot of time trying to "do it right". It was a bit clunky but alright I guess. Left my project alone for a few months while my day job was busy. Came back to it to find that, in a few months, there had been not one but two generations of deprecated APIs between what I'd written and current 'best practices' and that a bunch of pretty fundamental stuff had been deprecated.
I'm not wasting my life chasing that particular Red Queen.
Fuchsia relies on the Zircon kernel, which, last time I checked, used Magma as a framework to provide compositing and buffer sharing across the logical split between the application driver and the system driver, both of which exist as user-space services.
The fact that graphics drivers exist as user-space services should reduce latency by minimizing the need for capability invocations - the rough equivalent of a system call, which requires an expensive trap-handling routine in standard Linux, for example.
This is presumably meant to support architectures with direct access to a GPU, where the main CPU's scheduler doesn't have to schedule a round trip of data from CPU to bus to GPU and back, as happens on standard Linux on standard hardware. The overall design should decrease latency and advance open-source graphics development in a user-space setting.
- No OpenGL support. I guess this is where the world is moving
Vulkan draws heavily on OpenCL's compute lineage - kernels designed to run directly on the GPU. While not technically OpenGL, it is architecturally a more direct interface to the hardware, and there are plenty of engines working on Vulkan support. Consider that there are other advantages to a graphics driver than just OpenGL (like parallel computing, and abstracting large data sets into matrices that map nicely onto GPUs); optimize based on that assumption and it is not as unreasonable as it sounds.
- No POSIX support
"Full POSIX compatibility is not a goal for the Fuchsia project; enough POSIX compatibility is provided via the C library, which is a port of the musl project to Fuchsia. This helps when porting Linux programs over to Fuchsia, but complex programs that assume they are running on Linux will naturally require more effort." - https://lwn.net/Articles/718267/
That sounds like you want to keep using APIs and an OS from the 1970s, while Fuchsia is deliberately trying to break those old conventions.
OpenGL was never a serious contender for modern games, and there's a good reason why all new 3D APIs are moving to lower-level representations (Vulkan, DX12, Metal) with libraries on top. OpenGL is a horrible implicit state machine which is terrible to multithread, and it still holds a model of a hard-wired 3D accelerator at its base, which isn't how new graphics cards work. Everything else is bolted on top of that out-of-date idea, which makes it hugely unwieldy for new software.
> No OpenGL support. I guess this is where world is moving
> No POSIX support. Quite a bit of game engines rely on it, oh well, when google cared about developers
Well, tell that to game developers working on consoles. They don't have access to the same APIs and they made it work just fine.
Most of the time you don't make your own engine; you rely on some other engine to support your platform, so it just works.
I've done enough Wii, PSP, PS3, and PS4 development, and having different APIs was never a problem.
It's strange how my perception of Google and its software has changed over the years.
When Android was first released I put it in the same category as Open Firmware and Debian. Now when I see Fuchsia, I reflexively categorize it as pseudo-open-source software, where you can see the code but cannot do anything useful with it in the long run, because it's designed to be downloadable but unusable for casual/research purposes.
Maybe the "we put out the source for you to see, then start to limit its free use once it matures" model is getting tiring for me.
I really wonder about this kernel IPC hype. I watched a video on React+Redux by Facebook engineers about a week ago, about the reasoning behind creating these things in the first place. Feelings about React+Redux and how they are overused aside:
Redux came about because the "interaction graph" between various components on the website had become virtually impossible to resolve cognitively. The engineers (especially on-boarding engineers) had trouble understanding how everything worked in concert and, especially, the side effects of a single change. The nail in the coffin for the presenting engineer was Facebook chat: she was forced to use numbers, instead of lines, to depict interactions because the graph was too dense. Redux forced interaction in one direction, with "back interaction" happening one frame/tick later. Web developers have praised and embraced this architecture.
If you look at the very first image in the article, it moves from a very straightforward application -> VFS -> ext2 -> driver -> disk graph to a [potential] rat's nest. It runs counter to the hard lessons that Facebook has learned with complex distributed systems.
Is this why nothing has really come from Mach/Hurd? Up until putting two and two together a few minutes ago, micro-kernels sounded so clean and elegant. They now seem unwieldy. Perhaps there exists another layer of architecture that can tame the dependency/interaction graph; isolation really does sound like a good idea, but we need a better way of doing it.
> Is this why nothing has really come from Mach/Hurd?
Mach 3.0 was the basis of the kernel used in NeXT and the original Mac OS X release. Over time it evolved into XNU, which is basically a hybrid kernel; the reason, as I understand it, is that message passing through the kernel is expensive when crossing user space so often.
I wouldn’t call that nothing coming of Mach, it did really well!
Also, I’d say that micro-kernels are closer to the paradigm of react/redux (redux the message bus, react the user space servers) than to a monolith design like Linux.
I think instead of comparing this to a front-end web framework, you should consider how Fuchsia is designed around capabilities (similar in concept to a system call, and to how a system call handler works) and a fundamental restructuring of the kernel - originally named Magenta, now called Zircon - to optimize for the evolving modular architecture of advanced mobile computing. Understanding how the OS design flows from the original kernel design is key to understanding Fuchsia as a whole. This article from two years ago does a pretty good job of justifying its existence (in the article, Zircon is still referred to by its old name, Magenta): https://lwn.net/Articles/718267/ - but I would actually love to hear feedback/constructive criticism of the kernel architecture itself.
I remember where I was, and what I was supposed to be doing, when I was instead reading the HN thread that posted this LWN article about Magenta two years ago, after finding Robert Love's Linux Kernel Development; I found it a fascinating read to compare against, given what I was learning about kernel development at the time.
I guess Google engineers had some experience with those hard-earned lessons before Facebook even existed. The diagram you mention is very typical of microkernels and has been the subject of research for decades.
Google has been a Linux user since before the company even existed [1], and has been an almost exclusively Linux shop since: Linux servers, Linux desktops (Chromebooks, Goobuntu, whatever the Debian-testing-derived distro they use now is called), and a Linux-based phone OS.
So I'd say that for most of its business activities, Google is actually okay with the GPL; otherwise they'd have rewritten it a long time ago. Given the size of Google and how important Android is to them, I think they'd have prioritized Fuchsia far more if their bottom line were really affected.
There is a point about drivers, as Fuchsia has better support for proprietary drivers than Linux, and IIRC one of the Android modifications was to add HALs to make proprietary drivers easier. But it's not a killer argument; after all, they figured it out for Android.
Their business intent with Fuchsia is definitely not clear, so it's a honeypot for speculation. I don't know. Maybe they wanted to diversify their approaches and have something ready in case the Android kernel fork accumulates more and more patches, backporting eventually becomes impossible, and they need to maintain it on their own. Maybe they just want to be upstream, since that gives you a great deal of power. Maybe it's part of a deal with a hardware vendor who wants their driver to become proprietary in the future.
There are other reasons why Fuchsia is arguably better, such as using a micro-kernel and being "capability-based." The license is liberal enough to allow relicensing it as GPL, if anyone cared to fork it.
Regardless of what you think of Google, the reality is that Fuchsia and Android have little to nothing in common. While they would certainly rather have a permissive license from the bottom up, they certainly wouldn't have written a whole new OS with that as the primary goal.
Android's platform design would arguably have been considered modern in 2007 or 2008, but it is 2019, and the cracks in Android's capabilities have been widening for a while. Sometimes people still think of Android as "newer" in the operating system space, but it's a decade old!
For me this reason is enough to justify a Linux replacement. I think it's one of the main reasons for Google too: they want to deal with vendors' blobs in a more stable and easier manner.
The point of being GPL free is you don't need to release source code.
If you combine this with the other selling point of Fuchsia, hardcore security, the combined result is that nobody can audit what Google is actually doing on "your" devices. And that's why.
I think it's a valid concern; look at how many originally open-source Android APIs and frameworks Google has pushed out in favor of proprietary ones. A similar concern I've seen voiced is an inseparable integration with Google services - like how Google has pushed the Gapps dependency onto Android, but now baked in at the OS level.
That's very much my feeling. Linux has history behind it and some design decisions made a long time ago for a wide range of devices. Google has the kind of money necessary to build an OS from scratch to power their devices (Android, Chromebook, Google Home), whilst other mortal beings have to put up with one of the existing choices.
It is irrelevant. Running Android apps is no longer tied to AOSP. Chromebooks run Android apps.
There is an app ecosystem composed of Google Mobile Services and the Play store, today it runs on Android and on ChromeOS, and it seems that it will run on fuchsia too.
I would not be surprised if Google decides to never launch this. They currently have two consumer-facing operating systems, plus (presumably) Linux on their servers. On the server side they use containerized services, are involved with Kubernetes, etc. Probably none of that is even close to working on Fuchsia, and I don't think the server side is a priority there.
On Chrome OS, the most significant recent feature is the newly added ability to run Linux applications via Crostini. Containerization of apps is a big topic in the Linux world in general, with things like snap.
On Android, they have an extremely large installed base, enormously slow upgrade cycles, and a huge legacy development ecosystem. This ecosystem overlaps with ChromeOS as well since it can run Android apps. And Android can do linux apps as well (though that mostly makes less sense).
Fuchsia would need to play nice in that ecosystem or it will face an empty-room problem, where the key apps have not been ported yet, it has a tiny market share initially (thus removing the incentive to fix that), and OEMs will likely vote with their feet by sticking to the old platforms. Part of that is supporting Linux-based stuff, which seems increasingly important, especially on Chrome OS.
So I'm kind of pessimistic that Google has a strategy for making this work. It makes sense as a research project, but not currently as a product strategy. Yes, Android has issues. Maybe fixing those might be easier than replacing it entirely?
I don't know if it's just me, but lately the amount of interesting stuff (articles, software) I come across that is sadly only available in Chinese keeps increasing.
If it weren't for the groups of volunteers that translate them to English (and the other way around, also useful), these projects would be completely inaccessible.
Oh, yes, this is a very carefully thought out project and I love it. I have a hard time trusting Google, but this is clearly the result of some very straightforward, competent, well-directed thinking, and I hope it succeeds.
Wonder if the permissions model will make it harder to circumvent DRM that takes away functionality like screenshots. If a super user can't imbue child processes with their capabilities then control won't ever be fully in the owner's hands.
I'm not sure of this need to remove, or not support, OpenGL these days.
It's not like OpenGL is the greatest graphics API, it's just that it's been used for decades - I personally learned 3D graphics rendering in GL and I still think in GL when I'm compositing a scene in my head.
If I'm going to prototype something, I have historically loved that no matter what platform I'm on, I can pretty much just hit 'compile' and I'll have the same result.
Which leads me to my second complaint: Metal. When Vulkan emerged as the clear choice for next-gen multi-platform graphics technology, Apple once again decided to invent a completely proprietary system instead of taking the low-hanging fruit that was already there.
Especially since I thought of trying it out and followed the Fuchsia instructions[0], only to run into exactly the same build problem that others reported more than three months back[1].
Also, the instructions in the Fuchsia docs for downloading and installing jiri with curl don't work either[2]; curl returns '404 not found' instead. I got around this using the instructions here[3], only to fail soon after, as above.
> A channel has 2 handles, h1, h2, write messages from one end, and read messages from the other.
This seems like it's bound to be a critical section for many operations. So for one/some of the supported target architectures, what's an example of how channel write/reads look in detail? Do I need to trap or can I write "directly" into pages that the recipient can read? Can messages span cache lines?
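From the Zircon docs, as I read them: channel writes and reads (zx_channel_write / zx_channel_read) are syscalls, and message bytes are copied through the kernel, so each message does involve a trap; bulk data is instead meant to travel as a VMO handle sent over the channel and mapped by the receiver. As a sketch of the semantics only - a toy model, not the real API, and the size limit is what I recall from the docs:

```python
from collections import deque

MAX_MSG_BYTES = 65536  # Zircon caps channel messages (64 KiB, if I recall)

class _Endpoint:
    """One end of a channel: writes go out one queue, reads drain the other."""
    def __init__(self, out_q, in_q):
        self._out, self._in = out_q, in_q

    def write(self, data, handles=()):
        if len(data) > MAX_MSG_BYTES:
            raise ValueError("ZX_ERR_OUT_OF_RANGE: message too large")
        # In the real kernel this is a syscall: the bytes are copied and
        # the sender's handles are moved into the message.
        self._out.append((bytes(data), tuple(handles)))

    def read(self):
        if not self._in:
            raise BlockingIOError("ZX_ERR_SHOULD_WAIT: no message queued")
        return self._in.popleft()

def channel_create():
    """Toy zx_channel_create: returns the two endpoint handles h1, h2."""
    a_to_b, b_to_a = deque(), deque()
    return _Endpoint(a_to_b, b_to_a), _Endpoint(b_to_a, a_to_b)
```

`h1.write(...)` followed by `h2.read()` round-trips a datagram, mirroring the write-one-end/read-the-other description quoted above; since messages are copied, cache-line placement is the kernel's problem, not a contract with the sender.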
I believe Fuchsia is ambitious, and it's only through the dedication of mega-tech corps like Google that efforts like this can succeed. But the world in general, even the tech world, is loath to accept change. A backwards-compatible layer emulating the Linux syscall interface would probably be a critical transition feature. It would allow consumers to phase in the required work as they can stomach it, rather than suffer it all at once or not at all.
How hard is it to port Fuchsia to a new target? Can anyone point me to a series of commits for a target that does this?
The kernel switch for channel communication seems like a big performance hit just to stop "sharing memory". Why not allow shared memory with negotiation on the communication protocol? The OS software could share memory, but only use protocols that are strict, simple, secure, etc...
If it's anything like sel4 in this regard, you can share memory just fine, it's just that the first pass should use IPC through the kernel (which is very optimized).
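A sketch of that pattern: pay the kernel-mediated IPC cost once to hand over a shared buffer, then stream through memory the kernel never touches again. In Zircon terms the buffer would be a VMO handle sent over a channel; the names here are mine, not any real API:

```python
from collections import deque

def ipc_then_shared_memory():
    """Negotiate a shared buffer over IPC once, then bypass the kernel.

    'channel' stands in for kernel IPC (in a real system every append /
    popleft would be a syscall); 'buf' stands in for a mapping visible
    to both sides (a VMO in Zircon, a shared frame in seL4).
    """
    channel = deque()          # kernel-mediated control path
    buf = bytearray(4096)      # shared mapping: plain loads and stores

    # Producer: one IPC message transfers the buffer handle...
    channel.append(("SHARE", buf))
    # ...after which writes are direct memory stores, no trap.
    buf[0:5] = b"hello"

    # Consumer: receives the handle once, then just reads memory.
    tag, shared = channel.popleft()
    assert tag == "SHARE"
    return bytes(shared[0:5])
```

The negotiation step is also where the "strict, simple, secure" protocol the parent comment asks for would be enforced - e.g. agreeing on a ring-buffer layout before either side touches the memory.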
I really like Fuchsia, Redox, and the other microkernels. However, I am highly concerned about the license change.
About the only upside of the Android ecosystem was the kernel's license, which forces kernel drivers to be open-sourced. This allows mainlining devices back into the Linux tree, which enables a variety of user projects, such as postmarketOS.
On the other hand, some of these drivers already acted as a shim to a userspace blob, and having an IPC would at least provide isolation from the blobs themselves, and allow tapping a clear IPC interface for reverse-engineering those. So all might not be lost.
On the sustainability of open source, since this seems to be much debated in this thread: I personally think (and it is a debatable viewpoint) that it ought to be incentivized at the government level, by taxing software/technology sales and sponsoring open source projects with the proceeds. This would provide a much more robust base on which to build (proprietary, if you want) services.
Ah, ok, "introduction" as in "Introduction to Fuchsia OS". Which is interesting too, but I initially thought that Google is actually introducing Fuchsia OS now, i.e. presenting plans to replace Android with it. Wishful thinking ;)
1. MessagePack (instead of, say, FlatBuffers – am I wrong to think that FlatBuffers are more efficient? (and the format was created by Google))
2. ELF binaries (instead of, say, taking a clean-sheet approach to executable formats)
Also, on ELF they say: "The loading position of the ELF is random and does not follow the v_addr specified in the ELF program header."
(They're talking about ASLR.) This just highlights for me that a more efficient format is possible, one that doesn't have a virtual address in its headers. Perhaps there are even bigger wins possible with a clean-sheet format.
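For context, that behavior is standard for position-independent ELF objects rather than a Fuchsia invention: an ET_DYN image's p_vaddr fields are offsets from a randomized load base, so the "addresses" in the file were never meant to be honored literally. A sketch that reads e_type out of a hand-built header (field offsets per the ELF spec; the header bytes are fabricated for illustration):

```python
import struct

ET_EXEC, ET_DYN = 2, 3  # fixed-address executable vs. position-independent

def elf_type(header: bytes) -> int:
    """Return e_type from an ELF header (little-endian assumed).

    e_type lives at offset 16, right after the 16-byte e_ident block.
    Loaders treat ET_DYN segment addresses as base-relative, which is
    what makes ASLR possible without rewriting the file.
    """
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF image")
    (e_type,) = struct.unpack_from("<H", header, 16)
    return e_type

# Fabricated 64-bit little-endian PIE header: magic, class=2 (64-bit),
# data=1 (little-endian), version=1, then padding up to e_type.
fake_pie = b"\x7fELF" + bytes([2, 1, 1, 0]) + b"\x00" * 8 + struct.pack("<H", ET_DYN)
```

Whether a clean-sheet format could win more than this is an open question; the v_addr field the Fuchsia docs dismiss is already vestigial for ET_DYN objects.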
elliotec | 7 years ago
https://en.wikipedia.org/wiki/Redux_(JavaScript_library)
https://en.wikipedia.org/wiki/React_(JavaScript_library)
Perhaps you're talking about the Flux architecture? Not the same thing. Care to share the video you mentioned?
snvzz | 7 years ago
Because Mach. First generation microkernel. Slow. The microkernel world has moved way past that, whereas Hurd has stayed behind still using Mach. [0]
[0] https://blog.darknedgy.net/technology/2016/01/01/0/
altmind | 7 years ago
How do you feel, is there anything behind this claim?
[1]: https://web.archive.org/web/19971210065425/http://backrub.st...
[+] [-] snvzz|7 years ago|reply
A glance at the design will tell you that no, there's nothing to that claim.
Fuchsia's only competitor is Genode. Linux is basically obsoleted by its design.
[+] [-] doctorpangloss|7 years ago|reply
It’s a great senior engineer retention project though.
[+] [-] otabdeveloper1|7 years ago|reply
Are people really dumb enough to fall for this trick over and over again?
(Don't answer, that was a rhetorical question.)
[+] [-] imtringued|7 years ago|reply
[+] [-] microcolonel|7 years ago|reply
[+] [-] pjmlp|7 years ago|reply
[+] [-] jgimenez|7 years ago|reply
[+] [-] ucaetano|7 years ago|reply
There is an app ecosystem composed of Google Mobile Services and the Play store, today it runs on Android and on ChromeOS, and it seems that it will run on fuchsia too.
[+] [-] draw_down|7 years ago|reply
[deleted]
[+] [-] jillesvangurp|7 years ago|reply
On ChromeOS, the most significant feature is the newly added ability to run Linux applications via Crostini. Containerization of apps is a big topic in the Linux world in general, with e.g. things like Snap.
On Android, they have an extremely large installed base, enormously slow upgrade cycles, and a huge legacy development ecosystem. This ecosystem overlaps with ChromeOS as well since it can run Android apps. And Android can do linux apps as well (though that mostly makes less sense).
Fuchsia would need to play nice in that ecosystem or it will face an empty-room problem: none of the key apps have been ported yet, it has a tiny market share initially (removing the incentive to fix that), and OEMs will likely vote with their feet by sticking to the old platforms. Part of that is supporting Linux-based stuff, which seems increasingly important, especially on ChromeOS.
So, kind of pessimistic that Google has a strategy for making this work. This makes sense as a research project but not as a product strategy currently. Yes Android has issues. Maybe fixing those might be easier than replacing it entirely?
[+] [-] gvand|7 years ago|reply
If it weren't for these groups of volunteers that translate them to English (and the other way around, which is also useful), these projects would be completely inaccessible.
[+] [-] marssaxman|7 years ago|reply
[+] [-] kevin_thibedeau|7 years ago|reply
[+] [-] anonuser123456|7 years ago|reply
[+] [-] Andrex|7 years ago|reply
[+] [-] analognoise|7 years ago|reply
I'm not sure if this is a translation gaffe or not, but I'm using it as a new sick burn for sputtering projects.
[+] [-] dralley|7 years ago|reply
[+] [-] mapcars|7 years ago|reply
>"/" -> root vfs service handle, "/dev" -> dev fs service handle, "/net/dns" -> DNS service handle
Just resurrect Plan 9 already?
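The quoted mapping above is essentially a per-process namespace table from path prefixes to service handles, much like Plan 9's per-process namespaces. A toy sketch of how resolution against such a table might work (the handle names and longest-prefix rule here are my assumptions for illustration, not Fuchsia's actual implementation):

```python
def resolve(namespace: dict, path: str):
    """Resolve a path against a per-process namespace table that maps
    path prefixes to service handles: pick the longest matching prefix
    and return (handle, remaining path relative to that prefix)."""
    best = None
    for prefix, handle in namespace.items():
        if path == prefix or path.startswith(prefix.rstrip("/") + "/"):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, handle)
    if best is None:
        raise FileNotFoundError(path)
    prefix, handle = best
    return handle, path[len(prefix):].lstrip("/")

# Hypothetical namespace table mirroring the comment above.
ns = {
    "/": "root_vfs_handle",
    "/dev": "devfs_handle",
    "/net/dns": "dns_handle",
}
print(resolve(ns, "/dev/null"))    # ('devfs_handle', 'null')
print(resolve(ns, "/etc/passwd"))  # ('root_vfs_handle', 'etc/passwd')
```

Because the table is per-process rather than global, a sandboxed process can simply be handed a table without `/net/dns`, and the DNS service becomes unreachable for it by construction.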
[+] [-] lostgame|7 years ago|reply
It's not like OpenGL is the greatest graphics API, it's just that it's been used for decades - I personally learned 3D graphics rendering in GL and I still think in GL when I'm compositing a scene in my head.
If I'm going to prototype something, I have historically loved that no matter what platform I'm on, I can pretty much just hit 'compile' and I'll have the same result.
Which leads me to my second complaint: Metal. When Vulkan emerged as the clear choice for next-gen multi-platform graphics technology, Apple, once again, decided to invent a completely proprietary system instead of picking the low-hanging fruit that was already there.
[+] [-] rixrax|7 years ago|reply
Especially since I thought of trying it out and followed the Fuchsia instructions[0], only to run into exactly the same problem building it that others reported more than 3 months back[1].
Also, the instructions in the Fuchsia docs to download and install jiri with curl don't work either[2]; curl returns '404 not found' instead. I got around this using the instructions here[3], only to fail soon after as described above.
[0] https://fuchsia.googlesource.com/docs/+/a40928d45b43dbf72d5e... [1] https://www.reddit.com/r/Fuchsia/comments/a56rg7/issues_when... [2] https://fuchsia.googlesource.com/docs/+/a40928d45b43dbf72d5e... [3] https://fuchsia.googlesource.com/jiri/#Bootstrapping
[+] [-] calvinmorrison|7 years ago|reply
I feel attacked. Glad they're picking up seriously on the idea of namespaces.
[+] [-] wyldfire|7 years ago|reply
> A channel has 2 handles, h1, h2, write messages from one end, and read messages from the other.
This seems like it's bound to be a critical section for many operations. So for one/some of the supported target architectures, what's an example of how channel write/reads look in detail? Do I need to trap or can I write "directly" into pages that the recipient can read? Can messages span cache lines?
Fuchsia sounds ambitious, and it's only through the dedication of mega-tech corps like Google that ambitious efforts like this can succeed. But the world in general, even the tech world, is loath to accept change. A backwards-compatible layer emulating the Linux syscall interface would probably be a critical transition feature. It would allow consumers to phase in the required work as they can stomach it, rather than suffer it all-at-once-or-nothing.
How hard is it to port Fuchsia to a new target? Can anyone point me to a series of commits for a target that does this?
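The two-handle semantics quoted above can be modeled in user space: each handle owns an inbound queue, a write on one handle enqueues onto the peer's queue, and reads are whole-message (datagram-like), not a byte stream. This is only a toy sketch of the described semantics — the class and method names are mine, and it says nothing about how the real kernel handles traps, page mapping, or cache lines:

```python
from collections import deque

class Handle:
    """One end of a channel: reads from its own queue, writes to the peer's."""
    def __init__(self, read_q, write_q):
        self._read_q = read_q
        self._write_q = write_q

    def write(self, msg: bytes, handles=()):
        # Zircon messages carry both bytes and handles; transferring a
        # handle moves ownership to the receiving end.
        self._write_q.append((bytes(msg), tuple(handles)))

    def read(self):
        if not self._read_q:
            return None  # the real syscall would report "should wait"
        return self._read_q.popleft()

class Channel:
    """Toy model of a two-handle channel, h1 and h2."""
    def __init__(self):
        q1, q2 = deque(), deque()
        self.h1 = Handle(read_q=q1, write_q=q2)
        self.h2 = Handle(read_q=q2, write_q=q1)

ch = Channel()
ch.h1.write(b"hello")
print(ch.h2.read())  # (b'hello', ())
```

Note the symmetry: messages written on h2 come out of h1 the same way, and passing one end of a freshly created channel inside a message is how new point-to-point connections get bootstrapped.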
[+] [-] irq-1|7 years ago|reply
[+] [-] monocasa|7 years ago|reply
[+] [-] MayeulC|7 years ago|reply
About the only upside of the Android ecosystem was the kernel's license, forcing kernel drivers to be open-sourced. This allows mainlining back devices into the Linux tree, which enables a variety of user projects, such as postmarketos.
On the other hand, some of these drivers already acted as a shim to a userspace blob, and having an IPC would at least provide isolation from the blobs themselves, and allow tapping a clear IPC interface for reverse-engineering those. So all might not be lost.
On the sustainability of open source, since this seems to be debated much in this thread: I personally think (and it is a debatable viewpoint) that it ought to be incentivized at a government level, by taxing software/technology sales and sponsoring open source projects with the proceeds. This would provide a much more robust base on which to build (proprietary, if you want) services.
[+] [-] unknown|7 years ago|reply
[deleted]
[+] [-] rob74|7 years ago|reply
[+] [-] Solar19|7 years ago|reply
1. MessagePack (instead of, say, FlatBuffers – am I wrong to think that FlatBuffers are more efficient? (and the format was created by Google))
2. ELF binaries (instead of, say, taking a clean-sheet approach to executable formats)
Also, on ELF they say: "The loading position of the ELF is random and does not follow the v_addr specified in the ELF program header."
(They're talking about ASLR.) This just highlights for me that a more efficient format is possible, one that doesn't have a virtual address in its headers. Perhaps there are even bigger wins possible with a clean-sheet format.
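For context on that quote: an ELF program header does carry a requested virtual address (`p_vaddr`), which a position-independent loader under ASLR is free to ignore. A minimal sketch of the ELF64 program header layout, packing and re-reading a fabricated `PT_LOAD` entry (the field values here are made up for illustration):

```python
import struct

# ELF64 program header fields, little-endian:
# p_type, p_flags, p_offset, p_vaddr, p_paddr, p_filesz, p_memsz, p_align
PHDR = struct.Struct("<IIQQQQQQ")  # 56 bytes
PT_LOAD = 1

# Fabricated PT_LOAD entry asking to be loaded at virtual address 0x400000.
raw = PHDR.pack(PT_LOAD, 5, 0, 0x400000, 0x400000, 0x1000, 0x1000, 0x1000)

p_type, p_flags, p_offset, p_vaddr, *_ = PHDR.unpack(raw)
print(hex(p_vaddr))  # 0x400000 -- the requested address; an ASLR loader
                     # picks its own random base and ignores this value
```

So the field the Fuchsia docs say they ignore is baggage the format carries anyway, which supports the commenter's point that a clean-sheet format could drop it entirely.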
[+] [-] bogomipz|7 years ago|reply
"Global file system
In Unix, there is a global root file system"
Why might this be seen as a shortcoming, especially with mount namespaces now? Can anyone say?