FrozenCow's comments

FrozenCow | 1 year ago | on: Ask HN: How do you share and sync .env files and secrets with your team

I'm currently using direnv + 1password + https://github.com/tmatilai/direnv-1password. `direnv` loads the shell environment dynamically upon entering a directory. It can load static .env files, but it can also source shell scripts to set envvars.

1password is the company password manager. It has shared 'vaults' through which a team can share secrets with one another. Vaults can thus be used for authorization: who can access which secrets.

direnv-1password is a plugin for direnv that will load secrets from 1password into envvars. With this, upon entering a project, you'll be asked to unlock 1password (using a YubiKey or fingerprint scan) and it'll fetch the secrets the project needs.
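For illustration, an `.envrc` for this setup might look roughly like the following. The `from_op` helper comes from the direnv-1password plugin (check its README for the exact current syntax), and the vault/item/field names are made up for the example:

```shell
# .envrc — evaluated by direnv when entering the directory.
# `from_op` is provided by direnv-1password; it resolves op:// secret
# references via the 1password CLI and exports them as envvars.
# Vault/item/field names below are invented for this sketch.
from_op <<OP
  DATABASE_PASSWORD=op://MyTeamVault/postgres/password
  API_TOKEN=op://MyTeamVault/service-api/credential
OP
```

Since the `.envrc` only contains references (not the secrets themselves), it can be committed to the repository.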

This way secrets are not easily readable from your disk, as they would be with .env files.

Other password managers likely have similar tooling for direnv, though I don't know whether it'll be this convenient.

FrozenCow | 1 year ago | on: Ask HN: How to store and share passwords in a company?

Most services are connected through SSO, so those won't have separate passwords, and access is automatically shut off when the user leaves the company.

All employees also have a 1password account in which they can store individual passwords for the services that are not connected through SSO.

For some services we only have a single token/service account which we need to share within the team. Often these were stored in a `.env` file, but that tends to be a burden during onboarding and quite a bit of maintenance for each individual.

Within my current team we share them using direnv and https://github.com/tmatilai/direnv-1password. Secrets are loaded as environment variables whenever the dev enters the project's directory. They are loaded from 1password, which is unlocked using a fingerprint scanner. This way the secrets are not actually stored on disk.

A person leaving the team does still require manual password rotation, but at least this way not everyone in the team needs to update their `.env` file.

FrozenCow | 2 years ago | on: Connect to your Raspberry Pi over USB using gadget mode

For many phones it is possible to change the exposed protocol from MTP to mass storage. Mass storage does need an image file with a FAT filesystem to work on most devices. Nowadays it is not possible to expose the phone's internal storage directly.

FrozenCow | 2 years ago | on: Connect to your Raspberry Pi over USB using gadget mode

I've worked on DriveDroid. There were indeed people who have done this.

The best example of such a use-case was someone who wanted to make an old printing press fetch its files from the internet. The press only had an interface for floppy disks. He replaced the floppy disk controller with a floppy emulator that exposed a USB port, connected a phone running DriveDroid, and synced image files from the internet to be exposed over USB mass storage. The image files were FAT images holding the printing job files, generated automatically by a server. It worked pretty well from what I heard.

FrozenCow | 3 years ago | on: Zero to Nix, an unofficial, opinionated, gentle introduction to Nix

Personally, I think it's because its feature-set is currently very much "scattered".

Nix is in transition to 'flakes': a new concept that gives a bit more structure and allows easier reuse of Nix packages, NixOS modules and more. In addition, it includes a standardized 'lock' file. Lock files are quite useful (or even essential) for reproducibility.
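As a rough illustration (a minimal sketch, not any particular project's layout), a flake declares its inputs, which get pinned in `flake.lock`, and its reusable outputs:

```nix
{
  description = "Minimal flake sketch";

  # Inputs are pinned in flake.lock for reproducibility.
  inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

  outputs = { self, nixpkgs }: {
    # A reusable package output for one platform, here just
    # re-exposing `hello` from nixpkgs as an example.
    packages.x86_64-linux.default =
      nixpkgs.legacyPackages.x86_64-linux.hello;
  };
}
```

Running `nix build` in such a project resolves `nixpkgs` to the exact revision recorded in the lock file.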

However, it has been in an experimental phase for more than 4 years now. It is behind a configuration flag, so it isn't obvious how to use it. This has caused a division in the community, projects and documentation.

Because it is still considered an experimental feature, flakes and its surrounding CLI tools aren't mentioned in the official docs.

Even though it is experimental, flakes are considered an essential part of Nix by a large portion of the community.

This makes those people look for and create their own solutions.

This results in multiple documentation sites:

- Official manual (https://nixos.org/manual/nixos/stable/)
- https://nix.dev/
- https://nixos.wiki/
- Blog posts
- Now https://zero-to-nix.com/

Multiple wrapping tools for development environments:

- `nix-shell` (non-flake style)
- `nix develop` (flake style)
- https://devenv.sh/
- https://www.jetpack.io/devbox/
- https://floxdev.com/

It makes sense that these are created. I'm still hoping Nix flakes will become the default and UX can be iterated upon. But it doesn't make the future of Nix all bright and beautiful atm.

FrozenCow | 8 years ago | on: NixOps – Declarative cloud provisioning and deployment with NixOS

Also, for convenience, nested objects/attrsets can be shortened. For instance:

    {
        a = {
            b = {
                c = 3;
            };
        };
    }
can be shortened to:

    {
        a.b.c = 3;
    }
This is used quite often in NixOS system configuration files. There you'll find lines like:

    services.openssh.enable = true;
For a list of all system configuration options that NixOS supports by default, see [NixOS options](https://nixos.org/nixos/options.html).

FrozenCow | 9 years ago | on: It’s Been Real, Android: Why I’m Retiring from Android

IMO, this is exactly right. Not only is the quality of APIs often a disaster, the recommendations/best practices that Google is broadcasting have been way off in the past.

Remember that Google has been advocating AsyncTask in the past. You could use something else or roll your own, but it doesn't make sense to do that when AsyncTask is being communicated as the way forward. They should've embraced third-party libraries, but instead they still communicate how to use AsyncTask. Example: in March 2016 they published a video on how to use AsyncTask [0]. They do highlight the red flags around it, but do not mention any of the third-party libraries that are already considered standard by a lot of Android developers.

The same goes for Fragments. They are still being advocated as the right way, but I have some doubts. I imagine they will be advocated against in a couple of years. They help at the time, but are far from logical/simple when you don't know all of the nooks and crannies (and there are quite a few).

Another API that needs serious work is the Storage Access Framework (SAF). Previously, Android applications could use the Java File API to access files. With recent Android versions this has been closed off, with good reasons and intentions. Instead of using the File API directly, you now need to use SAF. SAF doesn't support all operations you could do with the File API. I would say this is a regression, and existing applications that relied on these features are now broken and unrepairable. In addition, applications now need to explicitly ask the user for permission to access specific directories or files. This was so badly implemented that every file manager needed to instruct the user, using screenshots, on how to use Android's directory picker before showing the picker. This is still a problem, and even if it is fixed, it will remain a problem for years due to phones being unable to update.

To give you an impression of how those APIs are designed: many calls in SAF will return null to indicate something went wrong. No exceptions or error codes. There is often no indication why a certain file cannot be retrieved: the file could be non-existent, permission could be denied, or it could be some strange behavior in the ROM. There is sometimes a way to find out why a call returned null, and that is looking through Android's global log. I've implemented this in the beta of my app so that I could find out why files weren't accessible on some devices.

Also, because of the slow adoption of Android versions, your application needs to support both SAF and the Java File API. The support library has wrappers for the Java File API that work like SAF, but it's still a downgrade to use this API, as exceptions from the Java File API are simply ignored [1].

For me, Gradle is actually a step in the right direction. Builds seem to be more reproducible and do not fail randomly compared to the Eclipse/Ant days. The performance, however, is awful. Incremental builds take 30-60 seconds. Comparing this to building pure Java projects shows that it is an Android-specific issue, not one with Gradle itself.

[0] https://www.youtube.com/watch?v=jtlRNNhane0

[1] https://github.com/guardianproject/android-support-library/b...

FrozenCow | 10 years ago | on: Things Rust shipped without

Wouldn't it be better to create serialization functionality at compile-time?

In the Java space some libraries are moving to compile-time code generation instead of relying on reflection. It is a huge win, since a lot more can be checked beforehand. Dagger 2 is a good example of how it can be beneficial. It provides dependency injection at compile-time, which in turn checks whether all dependencies are satisfied. I haven't seen this being done at compile-time before, but it is definitely a step up from reflection-based DI.

I'm not sure whether macros of Rust can provide such functionality, but the developers seem conservative when it comes to adding functionality. That seems like a good thing.
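To at least sketch the idea: even a simple declarative macro can generate serialization code at compile time. This is a stdlib-only toy (the `json_struct` macro name and everything in it are invented for the example, not any real library), just to illustrate code generation without reflection:

```rust
// Toy macro that defines a struct and generates a `to_json` method
// for it at compile time. Handles only flat structs of numeric
// fields; a real serialization library would do far more.
macro_rules! json_struct {
    ($name:ident { $($field:ident : $ty:ty),* $(,)? }) => {
        struct $name { $($field: $ty),* }
        impl $name {
            fn to_json(&self) -> String {
                // One `format!` per field, expanded at compile time.
                let parts: Vec<String> = vec![
                    $(format!("\"{}\":{}", stringify!($field), self.$field)),*
                ];
                format!("{{{}}}", parts.join(","))
            }
        }
    };
}

json_struct!(Point { x: i64, y: i64 });

fn main() {
    let p = Point { x: 1, y: 2 };
    println!("{}", p.to_json()); // prints {"x":1,"y":2}
}
```

A missing field or wrong type shows up as a compile error, which is exactly the "checked beforehand" benefit mentioned above.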

FrozenCow | 11 years ago | on: Why Is My Smart Home So Dumb?

This is what I do as well. One raspberry pi with all the required connections (Z-Wave USB controller, IR emitter and a wire to my PC) with an extremely simple node application that serves urls like /lights/on, /curtains/open and /computer/power. My phone running Tasker to trigger the various appliances. Add some buttons to the homescreen for direct control over everything. The real added value comes from a trigger from the wakeup-alarm app that runs a Tasker script, turning on the lights, waiting 5 minutes and opening the curtains.

I haven't found an all-in-one solution to do all this. Then again, I haven't looked for anything else once I got my curtains going through Tasker. An open standard would've made the process much simpler though.

FrozenCow | 11 years ago | on: Ask HN: What are some nice electronics courses that blend theory and practice?

I was looking for the same thing and found this: http://www.falstad.com/circuit/

It shows blocks traveling across wires/circuits. You can create your own circuits and see what happens. It's not just for logic, but you can see how analog circuits operate as well. The speed of the blocks represents the current flowing and the color of the wires represents the voltage.

I wish more stuff like this was available for circuitry. It gives a nice basic intuition for how things work and shows why I need to place a resistor in front of an LED.

FrozenCow | 11 years ago | on: Node.js in Flame Graphs

You can check the 2nd and 3rd groups when using /(^\/foo\/bar$)|(^\/foo\/bar\/\d+$)/:

    > /(^\/foo\/bar$)|(^\/foo\/bar\/\d+$)/.exec('/foo/bar')
    [ '/foo/bar',
      '/foo/bar',
      undefined,
      index: 0,
      input: '/foo/bar' ]

    > /(^\/foo\/bar$)|(^\/foo\/bar\/\d+$)/.exec('/foo/bar/3')
    [ '/foo/bar/3',
      undefined,
      '/foo/bar/3',
      index: 0,
      input: '/foo/bar/3' ]
EDIT: Markdown differences.

FrozenCow | 11 years ago | on: Revisiting How We Put Together Linux Systems

> Of course there are exceptions {...} put it in a tarball in /opt, make some symlinks to /usr/local

This becomes unmanageable once you need to do that for multiple applications. Different applications need different versions. I'm not that familiar with Ruby, but you can imagine different versions of Ruby itself needing different versions of system libraries. An upgrade of your OS could become incompatible with the Ruby version you just compiled yourself.

It's good that some people are looking for solutions to this problem. It's a worthwhile effort, even though it might not be directly applicable to everyone. The same was true for systemd a number of years ago.

FrozenCow | 11 years ago | on: Revisiting How We Put Together Linux Systems

This could have been solved on a different level. If the right architecture were in place, Steam wouldn't have needed to solve it.

You can have multiple shared libraries of the same name, but different major versions. Applications that need the same version can use that same version. Applications that need different versions can use different versions. The package repository shouldn't be the conflicting factor.

FrozenCow | 11 years ago | on: Revisiting How We Put Together Linux Systems

The current packaging systems have problems. You cannot install just any version of any application on your system, but a lot of people want that.

A nice example is games. You want to install a game on your system without working around the package manager. That was very hard to do if you weren't on the distro the game was built for. For instance, if the game was just released, there was little chance it would work on Debian stable...

Steam has solved this problem by using their own package manager and their own set of 'approved' libraries that other games must link to. Steam always ships with the set of 'approved' libraries, just so that it can side-step the libraries on your system.

The same is true if you want to build a piece of software that seems to be incompatible with your system: it needs a different GCC, it might need a different version of Gnome libraries.

Also, disk images aren't as simple as you make them seem. If you have a system and want to upgrade one of your applications (but leave the rest as they are!), you aren't going to like the current package management tools.

These kinds of problems happen often for a lot of people and a solution is highly appreciated. That said, the solution in the article probably isn't the best.
