Modern browsers these days are powerful things - almost an operating system in their own right. So I'm asking the community, should everything now be developed as 'web first', or is there still a place for native desktop applications?
As a long-time Win32 developer, my only answer to that question is "of course there is!"
The efficiency difference between native and "modern" web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.
Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.
For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II. Even the web stuff at the time was nowhere near as disgusting as it is today --- AJAX was around, websites did use JS, but simple things like webchats still didn't require as much bloat. I lived through that era, so I knew it was possible, but the younger generation hasn't, so perhaps it skews their idea of efficiency.
In terms of improvement, some things are understandable and rational, such as newer video codecs requiring more processing power because they are intrinsically more complex and that complexity is essential to their increase in quality. But other things, like sending a text message or email, most certainly do not. In many ways, software has regressed significantly.
I recently had to upgrade my RAM because I have Spotify and Slack open all the time. RAM is cheap today, but it's crazy that those programs consume so many resources.
Another program I use a lot is Blender (3D software). Compared to Spotify and Slack it is a crazy complicated program with loads of complex functionality. But it starts in a blink and only uses resources when it actually needs them (for calculations and your 3D model).
So I absolutely agree with you.
I also think it has to do with the fact that older programmers know more about the cost of resources than younger programmers do. We used computers without hard disks and with only KBs of RAM. I always keep this in mind while programming.
The younger programmers may be right that resources don't matter much because they are cheap and available. But now I've had to upgrade my RAM.
You are looking back at the past with rosy goggles.
What I remember from the time was how you couldn’t run that many things simultaneously. Back when the Pentium II was first released, I even had to close applications, not because the computer ran out of RAM, but because the TCP/IP stack that came with Windows 95 didn’t allow very many simultaneous connections. My web browser and my chat were causing each other to error out.
AJAX was not around until late in the Pentium II's lifecycle. Web pages were slow, with their need for full refreshes every time (fast static pages were an anomaly then, as now), and browsers' network interaction was annoyingly limited. Google Maps was the application that showed us what AJAX could really do, years after the Pentium II was discontinued.
Also, video really sucked back in the day. A Pentium II could barely decode DVD-resolution MPEG-2 in realtime. Internet connections generally didn't have the several Mbit/s necessary to get DVD quality with an MPEG-2 codec. Increasing resolution increases the required processing power geometrically. Being able to Zoom call and see up to 16 live video feeds simultaneously is an amazing advance in technology.
I am also annoyed at the resource consumption, but not surprised. Even something "native" like Qt doesn't seem to use the actual OS-provided widgets, only imitate them. I figure it's just the price we have to pay for other conveniences. Like how efficient supply lines mean consumer toilet paper shortages while suppliers of office toilet paper sit on unsold inventory.
> Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.
With Moore's law being dead, efficiency is going to get a lot more popular than it has been historically. I think we're going to start seeing an uptick in the popularity of more efficient GUI programs like the ones you describe.
We see new languages like Nim and Crystal with their only value proposition over Python being that they're more efficient.
Similarly, I predict we will see an uptick in popularity of actually native frameworks such as Qt over Electron for the same reason. We may even start seeing wrapper libraries that make these excellent but complicated frameworks more palatable to the Electron crowd, similar to how compiled languages that look like Python or Ruby are getting bigger.
I wonder how much memory management affects this. My journey has been a bit different: traditional engineering degree, lots of large Ruby/JS/Python web applications, then a large C# WPF app, until finally at my last job, I bit the bullet and started doing C++14 (robotics).
Coming from more "designed" languages like C#, my experience of C++ was that it felt like an insane, emergent hodgepodge, but what impressed me was how far the language has come since the '90s. No more passing raw pointers around and forgetting to deallocate them; you can get surprisingly far these days with std::unique_ptr and std::shared_ptr, and they're finally even making their way into a lot of libraries.
I sense there's a bit of a movement away from JVM/CLR-style stop-the-world, mark-and-sweep generational GC, toward more sophisticated compile-time techniques like Rust's borrow checker, Swift's reference counting, or C++ smart pointers.
I mention memory management in particular both because it seems to be perceived as one of the major reasons why languages like C/C++ are "hard" in a way that C#/Java/JS aren't, and I also think it has a big effect on performance, or at least, latency. I completely agree we've backslid, and far, but the reality is, today, it's expensive and complicated to develop high-performance software in a lower-level, higher-performance language (as is common with native), so we're stuck with the Electron / web shitshow, in large part because it's just faster, and easier for non-specialists to develop. It's all driven by economic factors.
This sentiment is why I moved to writing Elixir professionally three years ago, and why I write Nim for all my personal projects now. I want to minimize bloat and squeeze performance out of these amazing machines we are spoiled with these days.
A few years ago I read about a developer who worked on a piece-o-shit 11-year-old laptop and made his software run fast there. As a result, his software was screaming fast on modern hardware.
It's our responsibility to minimize our carbon footprint.
https://tsone.kapsi.fi/em-fceux/ - This is an NES emulator. The Memory tab in Developer Tools says it takes up 2.8MB. Runs at 60fps on my modern laptop.
It seems possible to build really efficient applications in JS/WebAssembly.
Multiple layers of JavaScript frameworks are the cause of the bloat, and that's the real problem, I think.
> Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.
If we save developer cycles, the resources aren't wasted, just spent somewhere else. And we shouldn't go by raw numbers in the first place, because there will always be someone who can demand a faster solution.
> For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II.
Yes and no. The level of ability and comfort at that time was significantly lower. Sure, the base functionality was the same, but the experience was quite different. Today there are a gazillion little details that make life more comfortable, which you just don't realize are there. Some work in the background; some feel so natural that you can't imagine they haven't been there since the beginning of everything.
> The efficiency difference between native and "modern" web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.
Except for the 25 years of support, you could get close to the same if a shared Electron runtime were introduced and you avoided using too many libraries from npm. In most Electron apps, most of the bloat comes from the bundled runtime rather than the app itself. See my breakdown from a year ago of an Electron-based color picker: https://news.ycombinator.com/item?id=19652749
While true, it also has plenty of limitations. You have to keep carrying around a huge legacy; you're locked into the APIs, SDKs, and operating systems of a single vendor, which are often themselves locked to a single type of hardware.
Win32 code doesn't run anywhere except Windows, yet most compute devices today are mobile (non-laptop) systems, and those don't come with Windows.
Running your native apps now takes both less and more work: you can write (somewhat) universal code, but the frameworks and layers required to build and run it on Windows, macOS, Linux, iOS, Android, and whatever other systems your target market relies on now come in as dependencies.
It used to be that the context you worked in was all you needed to know, and delivery and access were highly top-down: you'd get the system (OS, hardware) in order to run the product (desktop app). That is no longer the case; people already have a system and will select the product (app) based on availability. If you're not there, that market segment will simply ignore you.
That is not to say that desktop apps have no place, or that CEF is the solution to all the cross-platform native woes (it's not; it's the reason things have gotten worse), but the highly optimised, optimistic way of writing software from the '90s is not really broadly applicable anymore.
Is it practical to target Wine as an application platform? That would require building without Visual Studio, or building on Windows and testing with Wine. What are the APIs one would need to avoid in order to ensure Wine compatibility?
What are some solid resources for learning more about optimization? I graduated from a bootcamp, and at both jobs I've had, I ask my leads about optimization and making things run even faster, and I'm often told that we don't need to worry about it because of how fast computers are now. But I'm sitting there thinking about how I want my stuff to run like lightning on every system.
256MB RAM? How extravagant! My first computer had 3kB.
This is just the nature of “induced demand”. We might expand the power of our computers by several orders of magnitude, but our imaginations don’t keep up, so we find other ways of using all that capacity.
You might have used these words as a way to say "way faster", but factually you are incorrect: several orders of magnitude means thousands of times faster. No way.
If the browser is a computationally expensive abstraction, so are the various .NET SDKs, the OS, custom compilers, and the higher-level language of your choice. Yes, there were days when a game like Prince of Persia could fit into the memory of an Apple IIe, and all of it, including the sound, graphics, mechanics, and assets, was less than 1.1 MB!
However, the effort required to write such efficient code and hand-optimise compiler output is considerable, not to mention that very few developers would be able to do it.
Unless your domain requires high performance (and with WASM and WebGL even that gap is shrinking) or something niche a browser cannot currently provide, it no longer makes sense to develop desktop applications. A native application is too much hassle and security risk for the end user compared to a browser app, and the performance trade-off is worth it for the vast majority of use cases.
While the browser security sandbox has its issues, I don't want to go back to the days of native applications constantly screwing up my registry, launching processes, and adding unrelated malware and a billion toolbars to my browser (Java installers, anyone?).
Until the late 2000s, I would expect to reinstall the entire OS every few months (especially Windows, occasionally OS X) because of the kind of shareware/malware nonsense native apps used to pull. While tech-savvy users avoid most of these pitfalls, maintaining the extended family's systems was a constant pain. Today, setting up a Chromebook or a Surface (S mode enabled by default) and installing an ad blocker is all I need to do; those systems stay clean for years.
I do not think giving an application effectively root access and hoping it won't abuse that access is a better model than a browser app. It is not just small players who pull this kind of abuse, either: the Adobe CC suite runs something like five launch processes and messes up the registry even today. The browser's performance hit is more than worth not having to deal with that.
On performance, from a different point of view: desktop apps actually made my system slower. You would notice this on a fresh install of the OS; the system would be super fast, then over a few weeks it would slow down. From the antivirus to every application you added, they all hogged more of my system's resources than browser apps do today.
The reason that people don't write them is because users aren't on "the desktop". "The desktop" is split between OS X and Windows, and your Windows-app-compiled-for-Mac is going to annoy Mac users and your Mac-app-compiled-for-Windows is going to annoy Windows users. Then you realize that most users of computing devices actually just use their phone for everything, and your desktop app can't run on those. Then you realize that phones are split between Android and iOS, and there is the same problem there -- Android users won't like your iOS UI, and iOS users won't like your Android UI. Then there are tablets.
Meanwhile, your web app may not be as good as native apps, but at least you don't have to write it 6 times.
For every single app I use, I try to make sure it is native. I shun Electron apps at all costs. In my anecdotal experience, people who put in the effort to use the native APIs put more effort into the app in general. Native apps are also more performant and smaller in size, things that I cherish. Going native also pays homage to limits, and to striving to come up with new ways of overcoming them, as hackers had to in the past. I don't think ignoring memory, CPU, etc. is healthy in the long run. The Slack desktop app is almost 1 GB in size. That is crazy to me, no matter the "memory is cheap" mantra.
I'm going to be slammed for using these two words, but for any real work you need to have as few layers of indirection between the user and the machine as possible, and this includes the UX, in the sense that it is tailored to the fastest and most comfortable data entry and process monitoring.
I don't see any `web first` or Electron solution replacing Reaper or Blender in the foreseeable future. One exception I'm intrigued by is VS Code, which seems to be widely popular. Maybe I need to try it to form my own opinion.
I will come at this from a different, philosophical perspective:
Web apps come from a tradition of engaging the user. This means (first order) to keep people using the app, often with user-hostile strategies: distraction, introducing friction, etc.
Native desktop apps come from a tradition of empowering the user. This means enabling the user to accomplish something faster, or with much higher quality. If your app distracts you or slows you down, it sucks. "Bicycle for the mind:" the bicycle is a pure tool of the rider.
The big idea of desktop apps - heck, of user operating systems at all - is that users can bring their knowledge from one app to another. But web apps don't participate in this ecosystem: they erode it. I try a basic task (say, Undo), and it doesn't work, because web apps are bad at Undo, and so I am less likely to try Undo again in any app.
A missing piece is a force establishing and evolving UI conventions. It is absurd that my desktop feels mostly like it did in 1984. Apple is trying new stuff, but focusing on iPad (e.g. cursors); we'll have to see if they're right about it.
I prefer well-designed desktop applications to web applications for most things that don't naturally involve the web:
* Email clients (I use Thunderbird)
* Office suites
* Music and media players
* Maps
* Information managers (e.g., password managers)
* Development tools
* Personal productivity tools (e.g., to-do lists)
* Games
As Windows starts onboarding its unified Electron model (I can't recall what they've named it), I suspect we'll see more lightweight Electron desktop apps. But for the record, I like purpose-built, old-fashioned desktop applications. I prefer traditional desktop applications because:
* Traditional applications economize on display real-estate in ways that modern web apps rarely do. The traditional desktop application uses compact controls, very modest spacing, and high information density. While I have multiple monitors, I don't like the idea of wasting an entire monitor for one application at a time.
* Standard user interface elements. Although sadly falling out of favor, many desktop applications retain traditional proven high-productivity user interface elements such as drop-down menus, context menus, hotkeys, and other shortcuts.
* Depth of configuration. Traditional desktop apps tended to avoid the whittling of functionality and customization found in mobile and web apps. Many can be customized extensively to adapt to the tastes and needs of the user.
Bottom-line: Yes, for some users and use-cases, it still makes sense to make desktop apps. It may be a "long-tail" target at this point, but there's still a market.
I make a living developing software only available on Windows and macOS. That said, if I didn't need to interact so much with the operating system, I'd be making a web app. It all depends on what you want to make though. Video editing software? Native app. CRUD app? Web app.
You may also want to consider the pricing implications of both. Desktop software can usually be sold for a higher up-front cost, but it's a tough sell to make it subscription-based. SaaS would make your life a lot easier if you have a web app. People are getting used to paying monthly for a service anyway.
Pro tip: If you decide to make a native app, don't use Electron. Instead, use the built-in WebBrowser/WKWebView components included in .NET and macOS. Create the UI once using whatever web framework you want, and code the main app logic in C#/Swift. Although the WebBrowser control kind of sucks right now, Microsoft is planning on releasing WebBrowser2, which will use Blink. I think they might also have it where the libraries are shared between all apps using it, to further reduce bloat. The old WebBrowser component can be configured to use the latest Edge rendering by adding some registry keys or adding this meta tag:

<meta http-equiv="x-ua-compatible" content="ie=edge">
> Pro tip: If you decide to make a native app, don't use Electron. Instead, use the built-in WebBrowser/WKWebView components included in .NET and macOS. Create the UI once using whatever web framework you want, and code the main app logic in C#/Swift. Although the WebBrowser control kind of sucks right now, Microsoft is planning on releasing WebBrowser2 which will use Blink. I think they might also have it where the libraries are shared between all apps using it, to further reduce bloat. The old WebBrowser component can be configured to use the latest Edge rendering though by adding in some registry keys or adding this meta tag:
>
>
> <meta http-equiv="x-ua-compatible" content="ie=edge">
I understand the concept of making a native app to include using the native UI platforms. What you described is hardly more native than Electron, which is basically a web app at heart.
Or maybe there needs to be a consensus on terms. Do people consider Electron apps to be native? I would put them in some weird middle ground, but definitely closer to web technologies than native development.
Great to read about someone in a similar situation to me. I work as the developer and maintainer of a niche-market financial / real-estate application. This application has been developed and supported since the late 80s, first being done in Turbo Pascal, then Delphi, and then under my stewardship we moved to C#. I refactored the calculation and report production code into a library, and since that time we've built a Mac version and Web version, all utilising the same 'core' library. This means that for critical calculations and data output we - my business partner, who is the 'domain brains', and I - can do all the hard work on the Windows version (with which we are most familiar and comfortable, and IMO VS on Windows is still miles ahead of VS on Mac), and then 'just' do the GUI work for the other versions.
We did look at doing exactly as you said, i.e. using a web view within Windows and Mac, however I couldn't really get things working well enough at the time (as TBH I am bit of a noob WRT web development, and just pick things up as necessary as we go along).
For our market, there is strong demand for the desktop versions, and this is even with a subscription model; people get access to the most recent major and minor versions of the software as well as phone and email support while under subscription. When their sub runs out they are entitled to minor version updates, but nothing else. My biz partner is very good with people and very knowledgeable in the domain we operate, so this kind of arrangement suits everybody. Oh, and I get to work remote, and have done with him for ~15 years. The current situation really makes one appreciate fortunate arrangements such as this.
For a personal project I am currently using this approach and can confirm it works great.
I wrote just enough PyObjC to get myself a trayicon in the Mac menu bar, which shows a popover containing a wkwebview to localhost. Then I have all the app logic in Python, exposed to the webview through a bottle server, and Svelte for the UI. Highly recommended.
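For anyone curious, here is a minimal sketch of that localhost-backend pattern using only the standard library (`http.server` stands in for bottle, and the PyObjC tray icon and WKWebView popover are omitted; the webview would simply be pointed at the local URL):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class AppHandler(BaseHTTPRequestHandler):
    """App logic lives in Python; the webview just loads http://127.0.0.1:<port>."""

    def do_GET(self):
        # In the real app this would serve the UI bundle; here, a placeholder page.
        body = b"<h1>hello from the local backend</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet

server = HTTPServer(("127.0.0.1", 0), AppHandler)  # port 0: let the OS pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A WKWebView would be pointed at this URL; urllib stands in for it here.
url = f"http://127.0.0.1:{server.server_port}/"
html = urllib.request.urlopen(url).read()
server.shutdown()
```

The nice property of this design is that the UI layer is completely swappable: any webview (or even a regular browser tab during development) can talk to the same Python backend.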
I mean, we built the Windows Terminal as a native application because we didn't want users to be saddled with 40MB of a webview/Electron runtime just to boot up a terminal. It might take us longer to make the terminal as feature-rich as it would have been with JS, but no amount of engineering resources could have optimized away that web footprint. When we think about what the terminal looks like in 5 years, that's what we were thinking about.
I specialize in data recovery / digital forensics tools, which require very low-level disk access to be able to read physical media at the block level. I doubt there will ever be an HTML5 standard for low-level disk access.
But aside from my particular specialty, I also prefer any other software I use to be fully native. I'm surprised that's such a controversial thing to ask for these days. All I ask is so precious little:
* I want the software I use to be designed to run on the CPU that I own!
* I want software that doesn't require me to upgrade my laptop every two years because of how inefficient it gets with every iteration.
* I want software that isn't laughably bloated. I think we have a real problem when we don't bat an eye upon seeing the most rudimentary apps requiring 200+ MB of space.
I remember hanging out in /r/Unity3d and some newbie posted a download for their completed game. It was super basic - just a game where you move a cube around a grid, but the size of the game was insane, like half a gig.
The dev who posted it seemed perplexed when people told him the game was 100x bigger than it should be.
> I doubt there will ever be an HTML5 standard for low-level disk access.
It's clear you've never worked with Electron; nothing about its system-level access has anything to do with HTML5 or related standards.
All of that lives in NodeJS, which offers reasonably low-level APIs for accessing system resources. For cases where that's not enough Node can easily call out to logic written in other languages, either directly through FFI (foreign function interfaces) or by spinning up an independent binary via the shell.
This is the problem with this discourse: the vast majority of the Electron haters are people who have no idea what they're talking about when it comes to the actual thing they're criticizing. It's particularly hypocritical when they go so far as to frame "JavaScript hipsters" as some combination of ignorant, inexperienced, and/or lazy.
Have you looked into WebAssembly and how browsers give access to filesystems[1]? There could be hope for a future of high-performance filesystem access.
There are still a lot of fields where performance matters. This is especially true with apps that need low latency, like most games. Something like Stadia may be fine for a casual gamer but it still feels laggy to many, especially those used to gaming at 144Hz+ with almost zero input lag and gsync.
VR is another area where native desktop is still superior.
Then there is anything that is dealing with a lot of local data and device drivers. Video editing for example.
Development tools that work in a browser are getting better but native (or even just Java-based like IntelliJ stuff) still seems superior for now.
Stuff that doesn't use TCP, like network analysis tools, either needs to be a desktop app or needs to run a local server that the controlling web app points at.
I guess what I'm getting at is that if you need low-level access to the local device or if you care a lot about things like rendering performance then native is still the way to go.
IMO desktop apps aren’t quite equivalent to native apps. Native apps look and behave in a consistent way. They have
• Familiar UI primitives:
- Controls and chrome are in the same place
- Font sizes are the same across apps
- Consistent icons and button shapes
• Support standard keyboard shortcuts (including obscure ones that developers re-implementing these UIs might not know about)
- All the Emacs-style keybindings that work in native macOS text fields but are hit-or-miss in custom web text fields
- Full keyboard access (letting me tab and use the space bar and arrow keys to interact with controls)
• And consistent, predictable responsiveness cadences
- Somewhat contrived example: In Slack (browser or Electron app), switching between channels/DMs (via ⌘K) has a lag of about 0.5–1 second. If I start typing my message for that recipient during this lag, it actually gets saved as a draft in the channel that I just left and my content in the new channel gets truncated. I don’t think that kind of behavior would happen in a native macOS app, which renders UIs completely synchronously by default/in-order (so it might block the UI, but at least interactions will be in a consistent state)
I don't agree with the first two points. Native applications aren't consistent in this way. There are dozens of cross-platform GUI kits and they all behave slightly differently, just like Electron apps. If you want consistency, you need to build multiple apps, one for each OS with its respective toolkit. Ain't nobody got time for that when you can easily build on Electron and target browsers, macOS, Windows, and Linux with one single app. No wonder Electron is winning the battle so far, regardless of your last point.
I purposely make my Firefox unlike the other apps on my system, and I use a couple of workarounds to prevent OS-level keybinds from working in some apps. Sometimes a completely purpose-made UI is just better.
In general, consistency [in desktop UI] is good, but there are good reasons to break it.
If you are going to expect me to run your software in an always-on manner, I would greatly appreciate a native application.
I frequently do light computing on a Surface Go. It's a delightful little device and I love it, but it is not powerful enough that I can leave gmail, Slack, and Discord open all the time.
I don't have enough RAM to run another web application but I could very easily afford a native app or two.
Yes, I hope so. I have just released a new data transformation/ETL tool for the desktop (Qt/C++ app for Mac and Windows). The advantages of desktop are:
-performance/low latency
-richer UI (e.g. shortcuts)
-privacy
But there are trade-offs:
-the user has to install/upgrade it
-less insight into what the user is doing
-harder to sell as a subscription
Lots of software would be better (from the user's POV) as a desktop app. However as a developer and as a software business owner/investor, it's (much) better to write web apps. So it depends on what you're asking here. Should you invest in desktop dev skills to further your career? No. Should you write your software idea as a desktop app if you want to make a business of it? Not if you can avoid it. If you're asking something else, well I guess the answer is 'maybe'.
If you develop an application that runs in the web browser, I won't use it. That's not some dogmatic principle of mine, it's just an empirical fact.
I use only one browser-based application: Gmail.
I've never used another browser-based application and I can't imagine that I ever will unless there's truly no alternative and it's forced on me by an employer.
I've happily paid for dozens of desktop applications, and I'm even semi-happily paying for around ten of them that have switched to a subscription model, but I never have and likely never will use browser-based applications even if they're free.
In some (most) cases, desktop apps could be better: performance, latency, off-grid capability, and even privacy. In most cases, I prefer offline desktop apps to their online counterparts.
One area that is really tough to nail is cross-platform support, though. Getting a good app on one system is hard enough; getting it on all three is rarely done. This is one of the areas where the web shines.
From a business standpoint, I think web-first with an eye on native works for the majority of cases. That is, as long as the majority of users don't care about the above. At some point in the future, if we start valuing efficiency and especially privacy more, this could turn around. But it feels like, even then, the web will probably find a way to make more sense for most people.
I'd say that that when you're writing an application which is fundamentally just a pretty wrapper (e.g. it exists to take user input and pipe it over HTTP to some web service or use it to generate a command for some other binary) and your users don't care about performance, resource usage or reliability, it makes sense to use a browser. Your application is very UI-focused and if you're already familiar with HTML, CSS and JS, use what you know.
However if you're working on an application that has strict resource usage, reliability and/or performance requirements like say a control system for industrial equipment, a 3D game, a video encoder, photo editing software, or software that's going to be run on an embedded system, you're going to find it difficult to do what needs to be done with a browser/wrapper. It can be done for sure but it'll be something you work around rather than with.
I like to take my laptop out to a park and work with all the radios off to get the best use out of my battery. I also like to do complicated things with a lot of files that need to be organized in a real filesystem, the directory structure of a graphic novel can easily match the complexity of a program’s source tree.
Your web app, which requires several extra levels of indirection between the code and the bare metal, an online connection, quite possibly is built on a framework that tends to suck down a significant percentage of my CPU even when it’s idle in the background with all windows closed, and its own weird file management that’s probably a giant hassle when I need to get my work into another program, has no place in my world.
We're building POS applications for major retailers, and for this kind of software, native is king and will stay for the foreseeable future (with a few exceptions confirming the rule, of course). These applications need tight integration with exotic hardware, must satisfy weird fiscal requirements often written with native applications in mind, must run rock-solid 24/7 with daily usage and predictable response times for weeks without a restart, must be able to provide core functionality without interruption in offline situations while syncing up transparently with everyone else when back online and usually run in an appliance-like way on dedicated hardware (start with the system, always in foreground, only major application on the system, user should not be able to close or restart it, updates and restarts triggered by remote support/management).
All of this is much easier to do with native applications, running them in a browser just adds the need for crazy kludges and workarounds to simulate or enforce something you get for free if running native. Also you end up with managing hardware boxes with periphery attached and software running on them anyway, so whether managing a native application that is a browser which then runs the POS application or whether directly managing the POS application does not save you any work; if anything it even gives you an additional thing to manage, which increases maintenance effort and potential for failure (which quickly is catastrophic in this business, POS down today effectively means the store can close its doors).
Back-office applications in the same space are actually pretty well-suited for a web application, and frequently implemented as such today.
A lot of ATM machines and POs systems are glorified web apps. Not sure why a web app can’t be rock solid. Certainly easier to go native since you only have one platform, but I don’t see it being required
I'm all for web apps, unless you need to do things they don't do well. If you are doing, say, video editing -- yeah I want a native desktop app for that. At least currently.
But those things are getting fewer and fewer. And it annoys me to no end that I can't, say, run my favorite screencast/video editor (screenflow) on my Windows or Chromebook machine, since it seems pretty deeply tied to the OS. I don't want to have to learn another one, and I don't want to replace my Mac which is on borrowed time.
That said, I use a lot of apps like Gimp and Inkscape on my Mac, and they may be technically native, they can be really awful about "feeling native." I don't mind inconsistent user interfaces so much, as long as it is mostly cosmetic. But I've spent SO much time in both of those searching for lost windows, etc. (OMG Inkscape devs, has anyone even tried it on multiple monitors???) Things you never run into with "true" native apps (those two use GTK toolkit).
So, I certainly recommend web apps if you app can run sufficiently fast or otherwise can get away with being a web app.
[+] [-] userbinator|5 years ago|reply
The efficiency difference between native and "modern" web stuff is easily several orders of magnitude; you can write very useful applications that are only a few KB in size, a single binary, and that same binary will work across 25 years of OS versions.
Yes, computers have gotten faster and memory and disks much larger. That doesn't mean we should be wasting it to do the same or even less functionality we had with the machines of 10 or 20 years ago.
For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II. Even the web stuff at the time was nowhere near as disgusting as it is today --- AJAX was around, websites did use JS, but simple things like webchats still didn't require as much bloat. I lived through that era, so I knew it was possible, but the younger generation hasn't, so perhaps it skews their idea of efficiency.
In terms of improvement, some things are understandable and rational, such as newer video codecs requiring more processing power because they are intrinsically more complex and that complexity is essential to their increase in quality. But other things, like sending a text message or email, most certainly do not. In many ways, software has regressed significantly.
[+] [-] thdrdt|5 years ago|reply
Another program I use a lot is Blender (3D software). Compared to Spotify and Slack it is a crazy complicated program with loads of complicated functionalities. But it starts in a blink and only uses resources when it needs to (calculations and your 3D model).
So I absolutely agree with you.
I also think it has to do with the fact that older programmers know more about the cost of resources than younger programmers do. We used computers without hard disks and with only KBs of RAM. I always keep this in mind while programming.
The younger programmers may be right that resources don't matter much because they are cheap and available. But still, I had to upgrade my RAM.
[+] [-] Decade|5 years ago|reply
What I remember from the time was how you couldn’t run that many things simultaneously. Back when the Pentium II was first released, I even had to close applications, not because the computer ran out of RAM, but because the TCP/IP stack that came with Windows 95 didn’t allow very many simultaneous connections. My web browser and my chat were causing each other to error out.
AJAX was not around until late in the Pentium II lifecycle. Web pages were slow, with their need for full refreshes every time (fast static pages an anomaly then as now), and browsers’ network interaction was annoyingly limited. Google Maps was the application that showed us what AJAX really could do, years after the Pentium II was discontinued.
Also, video really sucked back in the day. A Pentium II could barely process DVD-resolution MPEG-2 in realtime. Internet connections generally were not the several Mbit/s necessary to get DVD quality with an MPEG-2 codec. Increasing resolution increases the processing power required geometrically. Being able to Zoom call and see up to 16 live video feeds simultaneously is an amazing advance in technology.
I am also annoyed at the resource consumption, but not surprised. Even something "native" like Qt doesn't seem to use the actual OS-provided widgets, only imitating them. I figure it's just the burden we have to pay for other conveniences. Like how efficient supply lines mean consumer toilet paper shortages while the suppliers of office toilet paper sit on unsold inventory.
[+] [-] djhaskin987|5 years ago|reply
With Moore's law being dead, efficiency is going to get a lot more popular than it has been historically. I think we're going to start seeing an uptick in the popularity of more efficient GUI programs like the ones you describe.
We see new languages like Nim and Crystal with their only value proposition over Python being that they're more efficient.
Similarly, I predict we will see an uptick in popularity of actually native frameworks such as Qt over Electron for the same reason. We may even start seeing wrapper libraries that make these excellent but complicated frameworks more palatable to the Electron crowd, similar to how compiled languages that look like Python or Ruby are getting bigger.
[+] [-] eldavido|5 years ago|reply
I wonder how much memory management affects this. My journey has been a bit different: traditional engineering degree, lots of large Ruby/JS/Python web applications, then a large C# WPF app, until finally at my last job, I bit the bullet and started doing C++14 (robotics).
Coming from more "designed" languages like C#, my experience of C++ was that it felt like an insane, emergent hodgepodge, but what impressed me was how far the language has come since the 90s. No more passing raw pointers around and forgetting to deallocate them, you can get surprisingly far these days with std::unique_ptr and std::shared_ptr, and they're finally even making their way into a lot of libraries.
I sense there's a bit of a movement away from JVM/CLR-style stop-the-world, mark-and-sweep generational GC, toward more sophisticated compile-time techniques like Rust's borrow checker, Swift's reference counting, or C++ smart pointers.
I mention memory management in particular both because it seems to be perceived as one of the major reasons why languages like C/C++ are "hard" in a way that C#/Java/JS aren't, and I also think it has a big effect on performance, or at least, latency. I completely agree we've backslid, and far, but the reality is, today, it's expensive and complicated to develop high-performance software in a lower-level, higher-performance language (as is common with native), so we're stuck with the Electron / web shitshow, in large part because it's just faster, and easier for non-specialists to develop. It's all driven by economic factors.
[+] [-] TheSpiceIsLife|5 years ago|reply
Work expands so as to fill the time available for its completion.[1]
Corollary: software expands to fill the available resources.
1. https://en.wikipedia.org/wiki/Parkinson%27s_law
[+] [-] sergiotapia|5 years ago|reply
A few years ago I read about a developer who worked on a piece-of-shit 11-year-old laptop; he made his software run fast there. By doing that, his software was screaming fast on modern hardware.
It's our responsibility to minimize our carbon footprint.
[+] [-] mirimir|5 years ago|reply
As a long-time Linux user, that's what I say as well.
And as a privacy activist, that's what I routinely use.
[+] [-] tenebrisalietum|5 years ago|reply
https://tsone.kapsi.fi/em-fceux/ - This is an NES emulator. The Memory tab in Developer Tools says it takes up 2.8MB. It runs at 60fps on my modern laptop.
It seems possible to build really efficient applications in JS/WebAssembly.
Multiple layers of JavaScript frameworks are the cause of the bloat, and that is the real problem, I think.
[+] [-] slightwinder|5 years ago|reply
If it saves developer cycles, it's not wasted, just spent somewhere else. And we shouldn't go purely by the numbers in the first place, because there will always be someone who can argue for a faster solution.
> For example, IM, video/audio calls, and working with email shouldn't take hundreds of MB of RAM, a GHz-level many-core processor, and GBs of disk space. All of that was comfortably possible --- simultaneously --- with 256MB of RAM and a single-core 400MHz Pentium II.
Yes and no. The level of ability and comfort at that time was significantly lower. Sure, the base functionality was the same, but the experience was quite different. Today there are a gazillion more little details which make life more comfortable, details you just don't realize are there. Some of them work in the background; some feel so natural that you can't imagine them not having been there since the beginning of everything.
[+] [-] est31|5 years ago|reply
Except for the 25 years of support, you could get the same features if a shared Electron runtime were introduced and you avoided pulling in too many libraries from npm. In most Electron apps, most of the bloat is caused by the bundled runtime rather than the app itself. See my breakdown from a year ago of an Electron-based color picker: https://news.ycombinator.com/item?id=19652749
[+] [-] oneplane|5 years ago|reply
Win32 code doesn't run anywhere except on Windows, but most compute devices are mobile (non-laptop) systems, and those don't come with Windows.
Running your native apps now takes both less work and more work: you can write (somewhat) universal code, but the frameworks and layers required to get it to build and run on Windows, macOS, Linux, iOS, Android, and whatever other systems your target market relies on now come in as dependencies.
It used to be that the context you worked in was all you needed to know, and delivery and access was highly top-down oriented meaning you'd have to get the system (OS, hardware) to run the product (desktop app). That is no longer the case as people already have a system and will select the product (app) based on availability. If you're not there, that market segment will simply ignore you.
That is not to say that desktop apps have no place, or that CEF is the solution to all the cross-platform native woes (it's not; it's the reason things have gotten worse), but the very optimised and optimistic way of writing software from the '90s is not really broadly applicable anymore.
[+] [-] fctorial|5 years ago|reply
[+] [-] bfuller123|5 years ago|reply
[+] [-] geekraver|5 years ago|reply
This is just the nature of “induced demand”. We might expand the power of our computers by several orders of magnitude, but our imaginations don’t keep up, so we find other ways of using all that capacity.
[+] [-] simonebrunozzi|5 years ago|reply
You might have used these words as a way to say "way faster", but factually you are incorrect: several orders of magnitude means thousands of times faster. No way.
[+] [-] tantalor|5 years ago|reply
A few KB for the binary + 20-40 GB for the OS with 25 years of backwards compatibility
[+] [-] manquer|5 years ago|reply
Unless your domain requires high performance (and with WASM and WebGL even that gap is shrinking) or something niche a browser cannot currently provide, it no longer makes sense to develop desktop applications. A native application is too much hassle and security risk for the end user compared to a browser app, and the performance trade-off is worth it for the vast majority of use cases.
While the browser security sandboxes have their issues, I don't want to go back to the days of native applications constantly screwing up my registry, launching processes, and adding unrelated malware and a billion toolbars to your browser (Java installers, anyone?).
Until the late 2000s, every few months I would expect to reinstall the entire OS (especially Windows, occasionally OS X) because of the kind of shareware/malware nonsense native apps used to pull. While tech-savvy users avoid most of these pitfalls, maintaining the extended family's systems was a constant pain. Today, setting up a Chromebook or Surface (S mode enabled by default) and installing an ad blocker is all I need to do; those systems stay clean for years.
I do not think giving an application effectively root access on install and hoping it will not abuse that is a better model than a browser app. It is not just small players who pull this kind of abuse, either; the Adobe CC suite runs something like five launch processes and messes up the registry even today. The browser performance hit is more than worth not having to deal with that.
Also, just on performance from a different point of view: desktop apps made my actual system slower. You would notice this on a fresh install of the OS: the system would be super fast, then over a few weeks it would slow down. From antivirus tools to every application you added, they all hogged more system resources than browser apps do today.
[+] [-] jrockway|5 years ago|reply
The reason that people don't write them is because users aren't on "the desktop". "The desktop" is split between OS X and Windows, and your Windows-app-compiled-for-Mac is going to annoy Mac users and your Mac-app-compiled-for-Windows is going to annoy Windows users. Then you realize that most users of computing devices actually just use their phone for everything, and your desktop app can't run on those. Then you realize that phones are split between Android and iOS, and there is the same problem there -- Android users won't like your iOS UI, and iOS users won't like your Android UI. Then there are tablets.
Meanwhile, your web app may not be as good as native apps, but at least you don't have to write it 6 times.
[+] [-] ilrwbwrkhv|5 years ago|reply
[+] [-] ZoomZoomZoom|5 years ago|reply
I don't see any `web first` or Electron solution replacing Reaper or Blender in the foreseeable future. One exception I'm intrigued by is VS Code, which seems to be widely popular. Maybe I need to try it to form my own opinion.
[+] [-] ridiculous_fish|5 years ago|reply
Web apps come from a tradition of engaging the user. This means (first order) to keep people using the app, often with user-hostile strategies: distraction, introducing friction, etc.
Native desktop apps come from a tradition of empowering the user. This means enabling the user to accomplish something faster, or with much higher quality. If your app distracts you or slows you down, it sucks. "Bicycle for the mind:" the bicycle is a pure tool of the rider.
The big idea of desktop apps - heck, of user operating systems at all - is that users can bring their knowledge from one app to another. But web apps don't participate in this ecosystem: they erode it. I try a basic task (say, Undo), and it doesn't work, because web apps are bad at Undo, and so I am less likely to try Undo again in any app.
A missing piece is a force establishing and evolving UI conventions. It is absurd that my desktop feels mostly like it did in 1984. Apple is trying new stuff, but focusing on iPad (e.g. cursors); we'll have to see if they're right about it.
[+] [-] bhauer|5 years ago|reply
* Email clients (I use Thunderbird)
* Office suites
* Music and media players
* Maps
* Information managers (e.g., password managers)
* Development tools
* Personal productivity tools (e.g., to-do lists)
* Games
As Windows starts on-boarding their unified Electron model (I can't recall what they have named this), I suspect we'll see more lightweight Electron desktop apps. But for the record, I like purpose built, old-fashioned desktop applications. I prefer traditional desktop applications because:
* Traditional applications economize on display real-estate in ways that modern web apps rarely do. The traditional desktop application uses compact controls, very modest spacing, and high information density. While I have multiple monitors, I don't like the idea of wasting an entire monitor for one application at a time.
* Standard user interface elements. Although sadly falling out of favor, many desktop applications retain traditional proven high-productivity user interface elements such as drop-down menus, context menus, hotkeys, and other shortcuts.
* Depth of configuration. Traditional desktop apps tended to avoid the whittling of functionality and customization found in mobile and web apps. Many can be customized extensively to adapt to the tastes and needs of the user.
Bottom-line: Yes, for some users and use-cases, it still makes sense to make desktop apps. It may be a "long-tail" target at this point, but there's still a market.
[+] [-] fbelzile|5 years ago|reply
You may also want to consider the pricing implications of both. Desktop software can usually be sold for a higher up-front cost, but it's a tough sell to make it subscription-based. SaaS would make your life a lot easier if you have a web app. People are getting used to paying monthly for a service anyway.
Pro tip: If you decide to make a native app, don't use Electron. Instead, use the built-in WebBrowser/WKWebView components included in .NET and macOS. Create the UI once using whatever web framework you want, and code the main app logic in C#/Swift. Although the WebBrowser control kind of sucks right now, Microsoft is planning on releasing WebView2, which will use Blink. I think they might also have it where the libraries are shared among all apps using it, to further reduce bloat. The old WebBrowser component can be configured to use the latest Edge rendering, though, by adding some registry keys or adding this meta tag:
<meta http-equiv="x-ua-compatible" content="ie=edge">
[+] [-] antaviana|5 years ago|reply
The key is to make it available only as a subscription (no permanent licenses), even though it does not have any cloud component.
We managed not to fall into the trap of offering two types of licenses (subscription or permanent) to maximize early revenue (we could afford to wait).
We seek a marriage relationship with our users not a hook up.
This increases the value our users extract knowing it will not go away in the long term and the lifetime value of customers is a lot higher.
We know that some customers will not accept a subscription-only desktop application but in B2B world they are fewer than it might seem.
[+] [-] dvdhnt|5 years ago|reply
Great to know, thank you.
[+] [-] OkGoDoIt|5 years ago|reply
Or maybe there needs to be a consensus on terms. Do people consider Electron apps to be native? I would put them in some weird middle ground, but definitely closer to web technologies than native development.
[+] [-] mb_72|5 years ago|reply
We did look at doing exactly what you said, i.e. using a web view within Windows and Mac; however, I couldn't really get things working well enough at the time (TBH, I am a bit of a noob WRT web development, and just pick things up as necessary as we go along).
For our market, there is strong demand for the desktop versions, and this is even with a subscription model; people get access to the most recent major and minor versions of the software as well as phone and email support while under subscription. When their sub runs out they are entitled to minor version updates, but nothing else. My biz partner is very good with people and very knowledgeable in the domain we operate, so this kind of arrangement suits everybody. Oh, and I get to work remote, and have done with him for ~15 years. The current situation really makes one appreciate fortunate arrangements such as this.
[+] [-] doteka|5 years ago|reply
I wrote just enough PyObjC to get myself a trayicon in the Mac menu bar, which shows a popover containing a wkwebview to localhost. Then I have all the app logic in Python, exposed to the webview through a bottle server, and Svelte for the UI. Highly recommended.
[+] [-] zadjii|5 years ago|reply
[+] [-] dmitrybrant|5 years ago|reply
But aside from my particular specialty, I also prefer any other software I use to be fully native. I'm surprised that's such a controversial thing to ask for these days. All I ask is so precious little:
* I want the software I use to be designed to run on the CPU that I own!
* I want software that doesn't require me to upgrade my laptop every two years because of how inefficient it gets with every iteration.
* I want software that isn't laughably bloated. I think we have a real problem when we don't bat an eye upon seeing the most rudimentary apps requiring 200+ MB of space.
[+] [-] umvi|5 years ago|reply
The dev who posted it seemed perplexed when people told him the game was 100x bigger than it should be.
[+] [-] _bxg1|5 years ago|reply
It's clear you've never worked with Electron; nothing about its system-level access has anything to do with HTML5 or related standards.
All of that lives in NodeJS, which offers reasonably low-level APIs for accessing system resources. For cases where that's not enough Node can easily call out to logic written in other languages, either directly through FFI (foreign function interfaces) or by spinning up an independent binary via the shell.
This is the problem with this discourse: the vast majority of the Electron haters are people who have no idea what they're talking about when it comes to the actual thing they're criticizing. It's particularly hypocritical when they go so far as to frame "JavaScript hipsters" as some combination of ignorant, inexperienced, and/or lazy.
[+] [-] marclave|5 years ago|reply
[1] https://developer.mozilla.org/en-US/docs/Web/API/FileSystem
[+] [-] alasdair_|5 years ago|reply
VR is another area where native desktop is still superior.
Then there is anything that is dealing with a lot of local data and device drivers. Video editing for example.
Development tools that work in a browser are getting better but native (or even just Java-based like IntelliJ stuff) still seems superior for now.
Stuff that doesn't use TCP, like network analysis tools, either needs to be done as a desktop app or needs to run a local server that the controlling web app can point to.
I guess what I'm getting at is that if you need low-level access to the local device or if you care a lot about things like rendering performance then native is still the way to go.
[+] [-] feifan|5 years ago|reply
• Familiar UI primitives:
  - Controls and chrome are in the same place
  - Font sizes are the same across apps
  - Consistent icons and button shapes
• Support for standard keyboard shortcuts (including obscure ones that developers re-implementing these UIs might not know about):
  - All the Emacs-style keybindings that work in native macOS text fields but are hit-or-miss in custom web text fields
  - Full keyboard access (letting me tab and use the space bar and arrow keys to interact with controls)
• Consistent, predictable responsiveness cadences:
  - Somewhat contrived example: In Slack (browser or Electron app), switching between channels/DMs (via ⌘K) has a lag of about 0.5–1 second. If I start typing my message for that recipient during this lag, it actually gets saved as a draft in the channel that I just left and my content in the new channel gets truncated. I don’t think that kind of behavior would happen in a native macOS app, which renders UIs completely synchronously by default/in-order (so it might block the UI, but at least interactions will be in a consistent state)
[+] [-] falafel|5 years ago|reply
[+] [-] pbhjpbhj|5 years ago|reply
I purposefully make my FF unlike the other apps on my system. I use a couple of workarounds to prevent OS-level keybinds from working in some apps. Sometimes a completely purpose-made UI is better.
In general, consistency [in desktop UI] is good, but there are good reasons to break it.
[+] [-] implicit|5 years ago|reply
I frequently do light computing on a Surface Go. It's a delightful little device and I love it, but it is not powerful enough that I can leave gmail, Slack, and Discord open all the time.
I don't have enough RAM to run another web application but I could very easily afford a native app or two.
[+] [-] hermitcrab|5 years ago|reply
But there are trade-offs:
- the user has to install/upgrade it
- less insight into what the user is doing
- harder to sell as a subscription
I wrote this in 2013 and I think it is still mostly true: 'Is desktop software dead?' https://successfulsoftware.net/2013/10/28/is-desktop-softwar...
[+] [-] roel_v|5 years ago|reply
[+] [-] WaltPurvis|5 years ago|reply
I use only one browser-based application: Gmail.
I've never used another browser-based application and I can't imagine that I ever will unless there's truly no alternative and it's forced on me by an employer.
I've happily paid for dozens of desktop applications, and I'm even semi-happily paying for around ten of them that have switched to a subscription model, but I never have and likely never will use browser-based applications even if they're free.
[+] [-] Ingon|5 years ago|reply
In some (most) cases, desktop apps could be better - performance, latency, offgrid capabilities, and even privacy. In most cases, I prefer offline desktop apps, then their online counterparts.
One area which is really tough to nail is cross-platform support, though. Getting a good app on one system is hard enough; getting it right on all three is rarely done. This is one of the things where the web shines.
From a business standpoint, I think web-first with an eye on native works for the majority of cases. That is, as long as the majority of users don’t care about the above. At some point in the future, if we start valuing efficiency and especially privacy more, this could turn around. But it feels like, even then, the web will probably find a way to make more sense for most people.
[+] [-] Youden|5 years ago|reply
I'd say that when you're writing an application which is fundamentally just a pretty wrapper (e.g. it exists to take user input and pipe it over HTTP to some web service, or to generate a command for some other binary) and your users don't care about performance, resource usage or reliability, it makes sense to use a browser. Your application is very UI-focused, and if you're already familiar with HTML, CSS and JS, use what you know.
However if you're working on an application that has strict resource usage, reliability and/or performance requirements like say a control system for industrial equipment, a 3D game, a video encoder, photo editing software, or software that's going to be run on an embedded system, you're going to find it difficult to do what needs to be done with a browser/wrapper. It can be done for sure but it'll be something you work around rather than with.
[+] [-] egypturnash|5 years ago|reply
I like to take my laptop out to a park and work with all the radios off to get the best use out of my battery. I also like to do complicated things with a lot of files that need to be organized in a real filesystem, the directory structure of a graphic novel can easily match the complexity of a program’s source tree.
Your web app, which requires several extra levels of indirection between the code and the bare metal, an online connection, quite possibly is built on a framework that tends to suck down a significant percentage of my CPU even when it’s idle in the background with all windows closed, and its own weird file management that’s probably a giant hassle when I need to get my work into another program, has no place in my world.
[+] [-] Slartie|5 years ago|reply
We're building POS applications for major retailers, and for this kind of software, native is king and will stay for the foreseeable future (with a few exceptions confirming the rule, of course). These applications need tight integration with exotic hardware, must satisfy weird fiscal requirements often written with native applications in mind, must run rock-solid 24/7 with daily usage and predictable response times for weeks without a restart, must be able to provide core functionality without interruption in offline situations while syncing up transparently with everyone else when back online and usually run in an appliance-like way on dedicated hardware (start with the system, always in foreground, only major application on the system, user should not be able to close or restart it, updates and restarts triggered by remote support/management).
All of this is much easier to do with native applications; running them in a browser just adds the need for crazy kludges and workarounds to simulate or enforce something you get for free when running native. Also, you end up managing hardware boxes with peripherals attached and software running on them anyway, so managing a native application that is a browser (which then runs the POS application) instead of managing the POS application directly does not save you any work. If anything, it gives you an additional thing to manage, which increases maintenance effort and the potential for failure (which quickly becomes catastrophic in this business: a POS down today effectively means the store can close its doors).
Back-office applications in the same space are actually pretty well-suited for a web application, and frequently implemented as such today.
[+] [-] kanwisher|5 years ago|reply
A lot of ATM machines and POS systems are glorified web apps. Not sure why a web app can’t be rock solid. It's certainly easier to go native since you only have one platform, but I don’t see it being required.
[+] [-] jitendrac|5 years ago|reply
[+] [-] joshbaptiste|5 years ago|reply
[+] [-] robbrown451|5 years ago|reply
I'm all for web apps, unless you need to do things they don't do well. If you are doing, say, video editing -- yeah I want a native desktop app for that. At least currently.
But those things are getting fewer and fewer. And it annoys me to no end that I can't, say, run my favorite screencast/video editor (screenflow) on my Windows or Chromebook machine, since it seems pretty deeply tied to the OS. I don't want to have to learn another one, and I don't want to replace my Mac which is on borrowed time.
That said, I use a lot of apps like Gimp and Inkscape on my Mac, and while they may be technically native, they can be really awful about "feeling native." I don't mind inconsistent user interfaces so much, as long as it's mostly cosmetic. But I've spent SO much time in both of those searching for lost windows, etc. (OMG Inkscape devs, has anyone even tried it on multiple monitors???) These are things you never run into with "true" native apps (those two use the GTK toolkit).
So, I certainly recommend web apps if your app can run sufficiently fast or can otherwise get away with being a web app.