ReganLaitila's comments

ReganLaitila | 3 years ago | on: Is there hope for Linux on smartphones? [video]

There is no hope for a linux smartphone because there is no hope for the linux desktop. This is aptly demonstrated by the presenter's slide of the linux kernel with a single through line into userspace: a single interface (more or less), represented by the kernel, feeding into a byzantine maze of components, represented by userspace.

Who actually wants to target such a mess? Statistically, nobody. To enter the linux market you need to cover a matrix of distros, init systems, kernel module subsystems, package managers, device daemons, display servers, window managers, desktop environments, custom configuration formats, filesystem hierarchies, custom socket protocols, custom syscall interfaces, and whatever else. Or just literally re-invent everything and go your own way (Android and similar efforts). Repeat for each and every version of a "distro", each permutation having its own interpretation of the correct "linux/unix" way. Whether you are developing software for a direct profit motive or for a user-freedom motive, both motives are aligned with having stable environmental targets that continue to work within a reasonable timeframe.

We all love citing Conway's law: "Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure". The linux userspace communication structure is plain chaos. As an app developer: what distro am I on, and at what version? What config file am I fiddling with? What custom socket protocol do I need to implement? What obscure distro-dependent library do I need to link against? And what horrendous wrappers do I need to write to get my software to run not only on "linux" but on the other platforms where most users actually are?

Windows, macOS, iOS, and Android flavors understand this better, at least from a market perspective, because the most important aspect of computing is that software works more often than not. works > security. I don't have to upgrade my OS to get a new version of an app. I don't want an OS upgrade to change the version of an app. I don't have to know what static vs dynamic linking is. I don't have to dig into yet another custom config format to change simple settings. As much as I enjoy discussing the particulars of tech, and digging into details from time to time, at the end of a long day I just want my computing to work. The linux ecosystem does not provide that.

Yes, manufacturers play games with hardware, locking out FOSS efforts. Some of that can be remedied by serving the legacy hardware market or providing manufacturers an actual platform to target. At the same time, outside of the kernel, I see no effort in linux userspace to provide any sense of a unified UX, consistency, discoverability, or stability. Each new half-decade presents yet another set of inconsistent interfaces, confusing commands, esoteric configs, outdated tutorials, and the general sense that FOSS is not positioned to serve tech at the human scale.

Until userspace can provide an interface that is stable for a decade or more, more or less like the kernel's, with the opportunity to shim for backwards compatibility in edge cases, we will never see linux grow beyond being the pet of backend infrastructure.

ReganLaitila | 3 years ago | on: Zellij: A terminal workspace with batteries included

Thanks for the clarification, I see now what you're getting at. I agree having proper packages makes sense at some point in a project's maturity, as you have more infrastructure, checks, and gates to ensure that what you requested is 'valid', and the system produces enough logs/data to compare with other systems to detect drift/compromise. In my experience, if a project gets popular enough, with enough eyeballs/contributors, official packages tend to become inevitable.

Since we've passed the trust gate up to this point for discussion purposes, I still wonder if there is a better model for young projects. It's not just that we have multiple package formats; it's the per-distro/version matrix that tends to bite small developers and projects on time commitment. I would like to see something better than 'curl | sh' that is practical and portable across the unix-y ecosystem. Perhaps a third-party checksum db that caches valid script hashes, a la the golang sumdb or similar. Seems ripe for improvement.
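To sketch the idea (everything here is hypothetical: the URL, the flat-file sumdb format, and the db entry are made up for illustration), the client-side check could be as small as:

```shell
# Hypothetical third-party checksum db, modeled as a flat file of
# "sha256  url" lines published independently of the script's host,
# the way Go's sum.golang.org publishes module hashes.
sumdb=$(mktemp)
script=$(mktemp)

# Stand-in for: curl -fsSL https://example.com/install.sh -o "$script"
printf '#!/bin/sh\necho installing\n' > "$script"

# The entry a third party would have published for this script.
printf '%s  https://example.com/install.sh\n' \
    "$(sha256sum "$script" | cut -d' ' -f1)" > "$sumdb"

# Verify the fetched script against the published hash before running it.
want=$(grep ' https://example.com/install.sh$' "$sumdb" | cut -d' ' -f1)
got=$(sha256sum "$script" | cut -d' ' -f1)
if [ "$want" = "$got" ]; then
    echo "verified"      # only now would you: sh "$script"
else
    echo "hash mismatch, refusing to run" >&2
fi
```

The point of the third party is that the host serving the script and the party vouching for its hash are no longer the same entity, which is the part 'curl | sh' is missing today.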

ReganLaitila | 3 years ago | on: Zellij: A terminal workspace with batteries included

"It is inherently different, because it's been proven that you can detect the use of curl|bash Serverside"

Malicious people do malicious things? I worry that we conflate trust with validity. Some package systems do it better than others, but in principle you trust that, for example, a maintainer of a package repository is not serving you bad checksums and malicious content. After all, these systems get their checksums/keys on first use, so you still need to make the trust judgement. And they could still change the responses based on your IP, user agent, or other metadata they have access to when you interact with the system.

To boil it down to my gripe: the comments about checksums/gpg signing being the reason to never 'curl | sh' make no sense until you can clear the trust argument first, which no one does. Once you do clear the trust argument, and conclude the source is trustworthy, we can have a more technical debate about the distribution mechanism itself and what makes sense from that perspective.

edit: forgot to add, 'curl | sh' is also a trust-on-first-use scenario, just like package ecosystems.
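The on-first-use parallel can even be made mechanical. A sketch (paths and the script body are illustrative) of pinning a script's hash on first fetch, much like a package manager pins a repo key when you add a source:

```shell
workdir=$(mktemp -d)
script="$workdir/install.sh"
pin="$workdir/install.sh.sha256"

# Stand-in for: curl -fsSL https://example.com/install.sh -o "$script"
printf '#!/bin/sh\necho hello\n' > "$script"

check() {
    if [ ! -f "$pin" ]; then
        # First use: trust, and record the hash for next time.
        sha256sum "$script" > "$pin"
        echo "pinned"
    elif sha256sum -c "$pin" >/dev/null 2>&1; then
        echo "match"     # same trust posture as a pinned repo key
    else
        echo "changed since first use" >&2
        return 1
    fi
}

first=$(check)                # first fetch: records the pin
second=$(check)               # unchanged script: pin still matches
printf 'upstream edited\n' >> "$script"
third=$(check 2>&1 || true)   # tampered script: pin check fails
echo "$first / $second / $third"
```

After the first run, every later fetch is verified against the pin, which is exactly the trust model package managers give you once the key is installed.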

ReganLaitila | 3 years ago | on: Zellij: A terminal workspace with batteries included

Please lend your own time and energy to generate packages for bespoke distributions and package managers. You will need: deb, rpm, apk, AppImage, casks, tars, and likely more. Make sure to spend your time submitting your package to maintainers for each repository/registry for each distribution and each distro version. Don't forget to test each and every permutation!

Is all that too hard? No problem. Stand up your own repository for each distribution mechanism and instruct the user to run a bunch of random curl and key-handling commands to bind their machine to this new software-supply-chain attack channel. At least this 'potentially malicious' code is being checksummed/gpg-verified!

-- the point --

'curl | sh' is inherently no different, from a trust perspective, than issuing a package installation command or installing a new repository source for a package manager. Each user makes the value judgement of whether they trust the software or not. You're free to run the 'curl' part and inspect the script, or contribute packages to the byzantine linux/unix ecosystem if it boils your blood so hard.

Practicality is a feature sometimes.

ReganLaitila | 3 years ago | on: SELinux is unmanageable; just turn it off if it gets in your way

The linux fiefdoms have a serious UX problem, SELinux being a prime example. As the article articulates, no wonder people just turn it off. If your subsystems are not consistent, discoverable, palpable, and most importantly logical, you're setting yourself up for lousy adoption. And just "reading the docs" does not solve this problem. Your subsystem does not get to consume my professional time slice.

The reason docker became the de facto entry point into containerization in yesteryear is because if you were dealing with 'containers' you were dealing with the 'docker' cli entry point. Everything you did with linux containers (in the mainstream) came from 'docker', and you could '--help' to your heart's content -or- google as much as you required alongside others who had the same shared experience with 'docker'. We've moved on in recent years, but it's important to remember the power of a well-described but imperfect interface.

SELinux has none of this mindshare. What is my canonical entry point to SELinux on any particular distro? There is none. I have to specifically know to install support packages for 'audit2allow' or 'audit2why' to do any reasonable troubleshooting on why a process won't start. Why? Because the raw logs are so choked with implementation details that as an administrator I cannot make a real-world decision on what is broken on the system. Sysadmins do not start every day thinking about SELinux and memorizing its maze of tools and procedures. Something is starting to smell here...

For SELinux I need to know about, and sometimes explicitly install, half a dozen cli tools to administer it, most of which don't follow any particular naming convention or entry point. I then need to learn a completely new markup for policy AND compile it AND install it using other esoteric tools. I need to explicitly refresh system state after making any changes, and return to my blunt 'audit2why' what-is-this tool to figure out if I did anything right.
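To make the log problem concrete, here is a synthetic AVC denial line (made up for illustration, in the shape these lines take in audit.log) and the digging needed just to surface the three facts an admin cares about: what was denied, by which domain, against which type. The usual, if esoteric, fix then runs through ausearch, audit2allow, and semodule, as noted in the comments:

```shell
# Synthetic AVC denial, shaped like the lines found in audit.log.
avc='type=AVC msg=audit(1669000000.123:456): avc:  denied  { name_connect } for pid=1234 comm="nginx" dest=8080 scontext=system_u:system_r:httpd_t:s0 tcontext=system_u:object_r:http_cache_port_t:s0 tclass=tcp_socket'

# Extract the admin-relevant facts from the implementation noise.
denied=$(echo "$avc" | grep -o '{ [^}]* }')
source_type=$(echo "$avc" | grep -o 'scontext=[^ ]*' | cut -d: -f3)
target_type=$(echo "$avc" | grep -o 'tcontext=[^ ]*' | cut -d: -f3)

echo "denied $denied: $source_type -> $target_type"

# From here the canonical workaround workflow is roughly:
#   ausearch -m AVC -ts recent | audit2allow -M mylocalpolicy
#   semodule -i mylocalpolicy.pp
# (module name is illustrative; requires the policycoreutils tooling)
```

That a one-line denial needs this much extraction before it says "httpd tried to connect to a cache port" is the UX gap being complained about.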

The principles of SELinux are fine. The UX of SELinux in terms of getting shit done day to day is not.

ReganLaitila | 4 years ago | on: Run end-to-end tests faster with Firecracker

May I ask what stack you employ to meet these goals?

Many tend to reach for GitLab CI or GitHub Actions, but these piles of "executable yaml" never appear to be up to the task of the complex deployment logic you describe in your post, not to mention that they don't naturally account for multi-repo or composed-artifact workflows. The state of the art, if you can call it that, is Jenkins, where you can drop into raw-ish groovy/java for the logic pieces when you need to. But then you run into the constant struggle of working around Jenkins's leaky abstractions and peculiarities.

You can patch together a pile of bash, python, go et al but you land in a worse place where there is no guiding structure to the automation for onboarding, enhancement, and maintenance.

I'm curious about others' experiences building complex build/deployment pipelines where, up front, you have a consistent entry structure to the automation but all the escape hatches one would need to implement custom logic when required, in a type-safe, potentially compiled, testable way (i.e.: pipelines as 'actual' code).

Of course one could write their own automation engine to avoid yaml hell and all that. However, I am not seeing any pervasive solutions being offered that amount to more than "yet another (yaml | json | xml | cue | whatever) task dag launching containers running random scripts from wherever".

ReganLaitila | 4 years ago | on: Using Ansible and Nomad for a homelab (part 1)

It's important to recognize snark and sarcasm for better or worse.

I think GP is referring to the idea that just because ipv6 is capable of providing universal connectivity in a technical sense, that does not translate to the techopolies implementing it in its rawest form. They have "interests".

They are quite happy, in many respects, being a dependent node between two individuals/devices that would otherwise talk to each other directly on the network. The "Who, ???, When, Where, ???" are very important to their ability to monetize you and keep their business going. No good/evil duality here; it's just business and the capitalist way in its basic form. Why would they want you to send messages directly to another individual/device when you can just as aptly use their "cloud/network" service instead? Why buy a VPS from AWS or DigitalOcean when you can just host the same services from your phone or a spare computer?

ipv6 can, in some sense, threaten the aforementioned dependency, given no restriction. So expect that even if these massive operators implement ipv6 end-to-end, probably as a cost/complexity-saving measure, "security", "convenience", and "reliability" measures will be put in place so that you are not permitted to make direct connections across ipv6 unchecked: at best a technical upsell, or just not possible at all.

The sad story being that while ipv4 NAT/CGNAT was intended as a technical stop-gap for ipv4 exhaustion (and security) while waiting for ipv6, it effectively moats users into network-centric power hierarchies where the ISPs, hardware vendors, and OS vendors get to dictate the level of access, which is useful from a business aspect.

Remember, ipv4 is now "scarce". Scarcity produces economies, which produce commodities, which produce futures/speculation, which produce business strategy. ipv6 promises universal abundance and connectivity, which is terrible for business on the strategic front. No wonder ipv6 is going nowhere so fast.

ReganLaitila | 4 years ago | on: Don't write bugs

From what I think is the article's punch line, in the context of software bugs:

"We put them there, and we can decide to not put them there"

I don't think this is strictly true. Sure, from a strict computational-theory or mathematical-proof perspective we can 'produce no bugs', but from a meat-space reality standpoint programming ecosystems do not work this way.

The game changes when you need to ship some bits from one place to another, or maintain a contract on the representation of truths over time, or guarantee no interruption of service on vague notions of "indefinitely".

How perfectly can any one individual execute the exacting theory of the stack: ASM(s), C derivatives, OS kernels, TCP/UDP, DNS, TLS, pythons/javascripts/golangs, HTTP, SQL derivatives, libraries, frameworks, cluster orchestrators... oh my, the list just keeps getting bigger each and every year.

So we write bugs, because we have to. Imperfect knowledge is part of our professional practice. Much like in the sciences, our bugs persist not because they were "wrong" but because we build on top of them as new facts emerge.

From the article's principal advice:

"If you want a single piece of advice to reduce your bug count, it’s this: Re-read your code frequently. After writing a few lines of code (3 to 6 lines, a short block within a function), re-read them. That habit will save you more time than any other simple change you can make."

We do this. Programmers tend to be narcissistic; we love reading the code we just wrote: oh the beauty, oh the wonderful shortcut, oh the performance, all while enjoying the quickly fading context in which we wrote it. If our knowledge was wrong when constructing the code, our assumptions are still wrong while reviewing it. But in practice we run code reviews and automated compilation/linters/tests/etc. to ensure that the quality of our code is not left to the individual programmer, who is quick to forget that it was not written for them in the first place. Bugs have a harder time surviving multiple perspectives (generally).

Now back to searching for this year's bash incantations :)

ReganLaitila | 5 years ago | on: Who’s behind Wednesday’s epic Twitter hack?

Oh come on. Competent companies regard their internal networks as untrusted with or without a vpn access solution. If you're an incompetent company then there is no argument for a vpn vs no vpn, because you're incompetent and will eventually succumb to the horrors of your insecurities regardless of whether your applications and network endpoints are directly exposed to the internet or behind a vpn solution.

Your beyondcorp link has nothing to do with a well-implemented vpn solution plus standard access controls to network endpoints, despite what the link suggests. You're clearly supporting a false dichotomy in which having a well-constructed vpn solution is "wrong" and does not add to your overall security posture. Shenanigans.

vpn or not, you still need to authorize/authenticate your network endpoints. But hey, you don't want a vpn, so give me a list of your internet-accessible ssh hosts and we'll see how far your "zero trust" gets you if you can't keep up with best practices. Good luck!

ReganLaitila | 5 years ago | on: Who’s behind Wednesday’s epic Twitter hack?

I would be curious as to who is citing that using a vpn is some "anti-pattern". An anti-pattern to what? Not protecting your network-accessible assets?

If you have the means, certainly use a corporate/smb/personal vpn. It is one layer in a multitude of layers you should be using to protect your network.

It's not as if, once you achieve vpn access, you have no other authz gates to internal applications. It's a "great filter" that helps narrow the possible avenues of attack, and it works. If your inner layer of authz fails, that's not the vpn's fault.

What's your alternative? Just make every application and network endpoint publicly accessible on the internet?
