Especially happy to see Tesseract OCR v4.0 [0] now being in the mainline repository. Tesseract was the main motivation for changing my web stack to Docker a couple of weeks ago, and I had to use a separate builder image [1] on Alpine 3.8. Now it is just:

> apk add tesseract-ocr

[0] https://pkgs.alpinelinux.org/package/v3.9/community/armhf/te...
[1] https://hub.docker.com/r/inetsoftware/alpine-tesseract/
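In a Dockerfile that can be as small as the sketch below. This is an assumption-laden sketch, not the commenter's actual setup: the base tag and the presence of English traineddata in the package are assumptions.

```dockerfile
# Minimal sketch, assuming the tesseract-ocr package in Alpine 3.9's
# community repository pulls in the traineddata it needs by default.
FROM alpine:3.9
RUN apk add --no-cache tesseract-ocr
ENTRYPOINT ["tesseract"]
```

Something like `docker run --rm -v "$PWD:/work" <image> /work/scan.png /work/out` would then OCR a mounted image, assuming the default English model suffices.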
I'm curious to know why they switched back to openssl from libressl. Are there compatibility issues or have the issues that caused libressl to be created been addressed now in openssl?
- better upstream support from projects
- to my understanding, several of the issues in OpenSSL that made us switch to LibreSSL have been resolved (for example, memory management)
- LibreSSL failed to retain compatibility with OpenSSL
- LibreSSL breaks ABI every 6 months; OpenSSL does not
- FIPS support
From what I understand, LibreSSL was a fork of OpenSSL created by a bunch of people who were pissed about Heartbleed. Not being so dependent on one library is good, but that doesn't necessarily mean the alternatives are better. OpenSSL is still more established, and from what I understand (anecdotally), OpenSSL is faster (an important consideration for a distro like Alpine). With that said, LibreSSL is likely more secure (which means it may be an option to go along with Alpine's hardened kernel).
https://en.wikipedia.org/wiki/LibreSSL
From the web site: “A container requires no more than 8 MB and a minimal installation to disk requires around 130 MB of storage. Not only do you get a fully-fledged Linux environment but a large selection of packages from the repository.”
I remember in 1997 when I could boot Linux off a 1.44MB floppy and get a fully functioning Linux environment, even with network support, in a blitz. If 130MB is considered “lean”, what happened to our Unix principles of minimalism and clean design?
They bumped up against modern considerations of plug-n-play. You'll see that it's the kernel which makes up most of it; the rest is quite small. If you wish to compile a kernel for your own hardware, you can slim that down by a _huge_ amount. I've managed to go from around 130 MB to 25 MB on one machine.
But in 1997 every pointer was half the size it is today, and there were 1/1000th the number of device drivers that exist today.
> I remember in 1997 when I could boot Linux off a 1.44MB floppy
I installed Slackware in the early '90s. The kernel was on one floppy, then the barest rootfs was on a 2nd and 3rd floppy. I believe the whole install spanned 13 1.44MB floppies. That's not so different from, say, current-day OpenWrt in overall size, which can still fit into 4MB of ROM if needed.
Maybe 99% of those 130MB are the drivers; a lot were added to the kernel since 1997, I guess.
I remember that even in 1997 you needed one boot disk containing the kernel, somewhat tailored to your system (SCSI/non-SCSI, networking) because of space constraints, plus a root disk, and the running system was pretty minimal.
I'm running a normal installation; the largest package (nearly 500MB) is linux-vanilla. The included drivers are mostly what is hogging space. This goes back to the plug-and-play point another person mentioned: there are a lot of supported devices.
I use it in containers (LXC through Proxmox VE), and my base install with a "functioning Linux environment even with network support in a blitz" comes in at around 8MB; as the kernel comes from the host, I simply save on that end. Works very well.
It never got to the level of the famous QNX demo disk.

> Firefox is only available on x86_64 due to Rust.

Could someone explain the reasoning behind this? I'm not familiar with whatever restrictions Rust may impose.

They only have Rust packages for x86_64.
How suitable is Alpine as a desktop distribution? It seems like a low-GNU distribution with an emphasis on static linkage. Is that a correct assessment? I'm very happy with Arch, I stopped my distro-hopping six years ago when I landed on it, but I worry that I'm getting complacent because Arch is just so easy to use.
I probably wouldn't use it for desktops. The AUR (and all the packages it offers) is one of the best features of Arch and very convenient for desktop users. Not something I'd like to give up on my desktop, and it depends on GNU stuff to work.
However, it's great for small servers. I use Raspberry Pis and old computers to serve applications at home, and I just switched them all to Alpine. Perfect for that use case; orders of magnitude better than something like Ubuntu Server. I would highly recommend it for servers.
Quick edit before anyone gets all offended: yeah, it could be used for desktop, I just wouldn't recommend it. Especially coming from Arch, the AUR represents a massive repository of software. It will likely be a while before Alpine is in a similar position, especially if they want to stick to their musl static-link ethos. It's a lot easier to deal with compiling (and possibly minor porting) yourself for a single-purpose server box that only needs a few apps than for a multi-purpose general box that needs many.
I ran two Arch boxes for about 7 years; two years ago I got an X200 with Libreboot and did not want systemd on it. I chose Alpine because it allows you to use musl libc (this was after fighting a bug in the Gentoo build process). The idea is to understand exactly how my system boots, so I can cut as much cruft from it as possible, and so my system boots in the same order every time. In my experience maintaining several computers with systemd, the near-random service startup order causes a lot of problems when chasing down bugs (if a bug is nonexistent half the time because the services start in a different order, how the hell can you debug it?!).
In my experience, it's extremely viable. Most things you can get away with pulling from the Alpine repos; everything else you can stick in a chroot (musl libc causes some compatibility problems) or compile from source (which is usually quicker than you'd think, except when C++ gets involved).
It's so much more stable than any other modern Linux system I have run. Someone I know has a twin X230 running Manjaro, and its boot-up time is painfully long. I managed to get my Alpine box to 40 seconds to GUI, limited mainly by DHCP resolution and the pre-boot flashups. Manjaro takes about three or four minutes to get to the login screen, and then another minute or two to load the GUI.
My much more powerful Arch box has about the same boot-up times as the X230, even though I am theoretically using lighter technologies. Something I have noticed as well is that because systemd runs boot items concurrently, it actually ends up with a less deterministic boot. A lot of the time it simply fails to bring up wifi on boot (leading to a several-minute hang), and the systemd logs and dmesg show absolutely nothing at fault.
Not answering your question directly, but I've found that for my desktop needs Wayland support is important (because I use AMD graphics cards without a fan, as I cannot cope with noise well).
And since manufacturer driver support for AMD is not there (my cards are over 5 years old), I found that the speed of Wayland is really good.
So I 'standardized' on Fedora and just upgraded from 27 to 29 in one shot, using their upgrade plugin with 0 problems (I had to uninstall about 3 packages and put them back).
From what I've seen in the 20 minutes I tried it, it's probably best suited for ridiculously low-footprint CLI-only installs. My favorite low-fat distro for desktop use on constrained x86 hardware is DietPi, which, regardless of the name, doesn't run only on xPI boards and is functionally very close to a normal Debian, although much lighter.
This is from a DietPi x86-64 install in a VirtualBox VM: XFCE desktop plus an open terminal, with both Firefox and LibreOffice loaded; Firefox showing the mozilla.org webpage and LibreOffice Writer an empty page.
Not bad at all! Although I prefer Armbian for embedded boards, DietPi really screams on small netbooks.
    dietpi@DietPi:~$ free
                  total        used        free      shared  buff/cache   available
    Mem:        2052524      464456     1112748       41452      475320     1405128
    Swap:         45052           0       45052
(beware of punch-in-the-eye colors) https://dietpi.com/#download
The only problem with using Alpine as a desktop distribution is that security updates will sometimes take months to become available, and this is only because the team working on Alpine is very small.
If this wasn't an issue, I would not use anything else on the desktop.
Glad that this happened. OpenSSL looks a lot better than when the entire drama started, back when it was quite hard to even build OpenSSL from source on Alpine.
Much-awaited release. I've been running my entire application lineup on Alpine and it's been awesome! I wish more cloud/hosting providers had default Alpine images. I've been using Alpine's APK packaging system to manage software builds and release cycles.
So, for those who are curious: my CI builds the software into packages, automatically versioning them and marking the build versions, stores the packages in my GCS bucket, and then automatically runs apk add --upgrade on my package. All orchestrated with Terraform and LXD, no Docker/Kubernetes involved whatsoever. There is now also apk-autoupdate, which I look forward to exploring to see how it can simplify my build process.
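A rough sketch of that kind of flow, not the commenter's actual pipeline: the package name, bucket, and CI variables below are made up, and the abuild/gsutil/apk calls are illustrative outlines; only the version helper is concrete.

```shell
#!/bin/sh
# APK package versions follow the pkgver-rN convention, so a CI job can
# derive one from a base version plus the CI build number.
make_pkgver() {
  printf '%s-r%s\n' "$1" "$2"
}

make_pkgver 1.4.2 7   # prints 1.4.2-r7

# The rest of the pipeline, in outline (hypothetical names):
#   abuild -r                                     # build the .apk package
#   gsutil cp ~/packages/*/myapp-*.apk gs://my-apk-bucket/
#   apk add --upgrade myapp                       # on each LXD container
```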
Just wondering, how many are using Alpine in production? I have heard of problems with LibreSSL (no longer matters) and musl, but no one has come out and stated they are using it happily in production on X number of servers.
I'll sometimes do builds in a larger distro (Debian or Ubuntu Server), then deploy into Alpine.
My company is using Alpine on (at least) 6 Kubernetes nodes in production. I'm happy with it so far, but we're just running a Java app (albeit high-traffic).
https://hub.docker.com/_/alpine

I use the rPi for dnscrypt-proxy2.