My dotfiles git repo is meant to be cloned in my home directory. It comes with this .gitignore committed in the repo:
/*
!/.vim/
/.vim/.netrwhist
Basically it ignores everything in my home directory, unless I explicitly `git add` it, which matches my workflow. For the few cases where I want to notice changes (like the entire ~/.vim/ subdirectory), I explicitly un-ignore it as you can see above.
The only downside I've experienced is my bash PS1 prompt shows the status of the dotfiles repo (branch/dirty/etc) in any directory I'm in that's inside my home dir - I've learnt to ignore it, and it doesn't interfere with CDing into an actual directory that's its own git repo.
In the past I wrote a bash setup script for my dotfiles repository which pretty much does the opposite, symlinking a combination of shared and OS-specific directories and files into my home dir. One definite advantage of your technique is that no special setup script is required. I'm thinking I can obviate the need for splitting common and OS-specific dirs by just detecting whether the OS is Linux or macOS within the scripts themselves and using an if statement to gate their execution or sourcing.
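A minimal sketch of that gating; the variable name and commands chosen here are purely illustrative:

```shell
# Pick OS-specific behaviour inside one shared script instead of
# maintaining split per-OS directories.
case "$(uname -s)" in
  Linux)  OPEN_CMD="xdg-open" ;;   # most Linux desktops
  Darwin) OPEN_CMD="open" ;;       # macOS
  *)      OPEN_CMD="" ;;           # unknown platform: leave empty
esac
```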
This concept is related to another HN personal favorite of mine: "Best thing in your bash_profile / aliases" [0]. Lots of interesting command-line shell optimization and slick hack ideas in there.
I think that method requires git to scan all the files in the directory so it can then ignore them. The advantage of the “showUntrackedFiles no” method is that git will only look at the tracked files, which is much faster if you have a million files in your home dir, like I do. (Or so I believe.)
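For reference, this is the setting being described, shown here in a throwaway repo; with the article's bare-repo setup you would run the same `git config` through the dotfiles alias:

```shell
cd "$(mktemp -d)" && git init -q .   # throwaway repo just for the demo
# Don't enumerate untracked files in `git status`; with a repo rooted at
# $HOME this avoids git walking the entire home directory.
git config --local status.showUntrackedFiles no
```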
You can use the `git check-ignore` command to check if a certain directory is ignored. I used it in my zsh config to suppress the status of ignored directories in my prompt.
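`git check-ignore` reports through its exit status, which makes it easy to use from a prompt function; a small sketch in a throwaway repo:

```shell
cd "$(mktemp -d)" && git init -q .   # throwaway repo just for the demo
echo 'build/' > .gitignore
mkdir -p build
# -q: print nothing, just exit 0 if the path is ignored, 1 if not.
if git check-ignore -q build; then
  ignored=yes
else
  ignored=no
fi
```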
It originated as a way for people who aren't familiar with the CLI to install things. People have now been trained to expect this level of simplicity. I've worked with people who will blindly copy and paste these lines into a terminal, having absolutely no idea what they do, and even blindly type in their sudo password when prompted. It's basically the worst of all worlds from a security perspective. In my opinion this practice should be burned to the ground.
Normally when I see this I will manually download the bash script, read through exactly what it does, and manually type in each command instead of running the script directly. This way I know what it is doing, and it can't hide command output by piping into /dev/null and doing something without my knowledge.
Especially since it is possible server-side to distinguish a mere download from an actual pipe into a shell[0], you should never do this even if you've examined the script first.
I don't know why you are being downvoted, but I'd like to know. I have a very official and very governmental API that isn't properly set up, and the official doc says to use curl -k to talk to it. I had a long argument with one of the devs, with a PoC and a live example, about how it was a bad idea, especially considering how easily it could be fixed, but... the 'feature' is still there.
I suppose the next $2,000 a day consultant will get them the memo.
It's fair to say that the technique described by SneakyCobra is amazing. I previously used it to manage my own dotfiles, but there are still a few problems with SneakyCobra's approach:
* Initial setup can be tricky even for experienced users
* Incorrect use can potentially destroy your home directory
So, I wrote a little utility called "SDF: Sane dotfile manager" that makes the technique used by SneakyCobra approachable to a complete novice, and hence more reliable to use.
I do almost exactly the same as you; my install.sh is a glorified wrapper around `ln -s`, but for each file, it verifies whether the file is already symlinked and if not renames the original to something like `.foo.bak.$(date -I)`. This is probably overkill, but it was especially nice when I was just starting to version control my dotfiles and still found unmanaged files sometimes that contained things worth saving.
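A sketch of that per-file logic; the helper name and backup suffix are illustrative, and `date +%F` produces the same ISO date as the `date -I` mentioned above while also working on BSD/macOS:

```shell
# link_dotfile SRC DST: symlink DST -> SRC, preserving any unmanaged file.
link_dotfile() {
  src=$1 dst=$2
  # Already linked to the right target? Nothing to do.
  [ "$(readlink "$dst" 2>/dev/null)" = "$src" ] && return 0
  # Rename an existing file out of the way instead of clobbering it.
  [ -e "$dst" ] && mv "$dst" "$dst.bak.$(date +%F)"
  ln -s "$src" "$dst"
}
```

The idempotence check makes repeated runs of the install script safe.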
I use a combination of stow and git via a script I wrote that I call stash - https://github.com/scotte/stash - it's very basic and simple, but has served me well.
I'm using this approach, but still looking for a good way to manage dotfiles for multiple machines. Having separate branches feels clunky, since there is a lot of overlap and a single tweak may need to be applied to several branches. Any recommendations for managing this situation?
I have a low-key solution. I check in files named by the hostname. For example `.bashrc.[hostname]`. Then I have a quick conditional in all my .bashrc files that checks for a hostname-specific file. This way, I commit them all, but only the relevant ones get loaded.
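That conditional can live at the end of the shared .bashrc; a minimal sketch, wrapped in a function with a directory argument purely for illustration, and using `uname -n` for the hostname:

```shell
# load_host_rc DIR: source DIR/.bashrc.<hostname> if that host file exists.
load_host_rc() {
  rc="$1/.bashrc.$(uname -n)"
  # A missing file is fine: only the relevant host's overrides get loaded.
  [ -f "$rc" ] && . "$rc"
  return 0
}
```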
Surprised nobody has mentioned YADM - it can do per-device files and/or per-device templating too (jinja2 syntax). It's just a thin wrapper around git so you can use any git commands too.
The reason I want to be able to have different files for different machines was to make slight variations to some of my dotfiles. I used to use branches, but keeping all my branches up to date was too much error-prone work. I switched to a system where I template my dotfiles, but now I have to expand those templates for them to actually work. I do, however, get to control when to expand the templates and how to install them. There are a bunch of different ways to do this depending on what you want, but what I ended up doing was:
1. Template files using a syntax that is easy to find/replace using a regex. You could use an existing one if you like.
2. Generate a bash install script with all the file variants embedded as base64 strings. I can build this script locally, but I also have a Travis CI build that pushes the install.sh script up to a gh-pages-like branch.
3. I can now curl the install.sh script from any machine I want and bootstrap my dotfiles. The only install time dependencies are bash, curl, git, base64, mkdir, and echo so it's a very portable self-contained script.
4. During install time, I use a case on hostname to determine which files to use and I use git to put them into my $HOME directory using a similar strategy described by the article.
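Step 2 could be sketched roughly like this; the function name is illustrative and the quoting is simplified, assuming file paths without spaces:

```shell
# emit_installer FILE...: print a shell script that recreates each FILE
# verbatim from an embedded base64 string.
emit_installer() {
  echo '#!/bin/sh'
  for f in "$@"; do
    # Recreate the parent directory, then decode the embedded content.
    printf 'mkdir -p "$(dirname %s)"\n' "$f"
    printf 'echo %s | base64 -d > %s\n' \
      "$(base64 < "$f" | tr -d '\n')" "$f"
  done
}
```

The emitted script needs only sh, mkdir, echo, and base64, matching the minimal dependency list above.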
I template trees using a python script, for both ~ and /. The template language isn't even currently that complex - basic per-host conditionals suffice. The host pushing the config does the templating, serializes it, and shoves it over ssh to the receiver. This way I can do things like leave passwords in config files (eg mpd.conf) and not have them end up on eg a VPS. Another example is having helpful comments in authorized_hosts to say where a key is from without that information ending up on the hosts themselves.
The receiving host runs python, so it can do things like refuse to overwrite files that have been changed locally. I still need to add a notion of hooks to run on the receiver when a given file is changed. If the remote dependency on python/ssh becomes a problem, I will simply add an option to dump a tarball locally.
I really tried to use ansible et al, but those tools seem to be geared towards managing large groups of essentially identical hosts, rather than generally differing hosts with some commonality.
It's great. It has tags to pull up only specific dotfiles (say for emacs, .config, etc.), and supports configurations for multiple hosts and multiple source folders.
This is overkill, but I have a DAG of profiles. Each profile can refer to one or more parent profiles. When I produce a config for a particular profile, a small Python script applies the profiles starting from the root node(s).
To avoid trashing my home directory, this actually is done to the side and committed into a bare git repository (this part is similar to the article). Afterwards, I use `git --git-dir=... --work-tree=~ checkout -p` to apply any changes one-by-one, allowing me to preserve any local edits I may have made.
Check $uname and other variables with if statements to activate aliases depending on OS, username etc.
I also keep a barebones "core" of aliases/functions that I use everywhere (e.g. on Linux servers as well as my current macOS laptop), and then a file that contains the non-core stuff that only gets used on my dev environment (MacBook) but not on servers.
—
It’d help if you provided examples of what differences you have between machines though. Most should be pretty simple e.g. slightly different dir structure, different package managers etc.
I ended up putting everything common into the master branch, and keep only the varying parts (not the common parts) in machine-specific branches. These are normally "include files" that end up in ~/.profile.d/, ~/.emacs.d/, etc.
I have to check out both the main branch and the machine-specific branch in separate directories, and use symlinks. OK by me, though; I don't set up a new workstation often.
Why do you want different configs on different machines? I think the "easy" solution is "don't do that";) But of course that's useless advice if you actually have some usecase, so a suggestion: Have case/if blocks on $(hostname), or even do something like `test -f ~/dotfiles/bashrc.$(hostname).local && source ~/dotfiles/bashrc.$(hostname).local`
The problem with keeping all dotfiles in a single repo is that if you want to get an older version of one particular dotfile, you'll also be getting older versions of other dotfiles as well.
I want every dotfile I use to be independent of the rest, with a log that shows changes to just that one dotfile, so I store each of them in separate repos and use GNU Stow[1] to manage them.
The above is actually a bit of an oversimplification of what I do, as I store related dotfiles in a single repo as well, so that (for example) all my weechat dotfiles are in a single repo, as I rarely want to check out a single file independently of the rest there.
Git can fetch a single file as of any commit with 'git show' and show the history of any single file with 'git log' or 'git diff'. Any decent web/GUI tool will do these things too.
A Git repo per small text file seems like overkill to me.
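A quick demonstration of per-file retrieval in a throwaway repo; the file contents here are just illustrative:

```shell
cd "$(mktemp -d)" && git init -q .            # throwaway repo for the demo
git config user.email demo@example.com && git config user.name demo
echo 'alias v1' > .bashrc && git add .bashrc && git commit -qm first
echo 'alias v2' > .bashrc && git commit -qam second
echo 'alias v3' > .bashrc && git commit -qam third
# Fetch the file as of two commits ago, without touching the worktree:
old=$(git show HEAD~2:.bashrc)
# Per-file history works the same way:
git log --oneline -- .bashrc > /dev/null
```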
Who says your git repo has to be public? Use a GitHub private repo, host the repo yourself behind SSH on a $5 Digital Ocean droplet, use the free private repos that come with Gitlab.com... securing git repositories is a solved problem.
There are a lot of dotfiles repos on GitHub and it doesn't seem to be a problem (except when people check in private credentials, but that's not a problem unique to dotfiles).
If you rely on your configuration to be secret to be secure it's just security by obscurity and not worth much anyway.
minaguib | 7 years ago
jaytaylor | 7 years ago
Thanks again for showing me a superior way :)
[0] https://news.ycombinator.com/item?id=18898523
wrs | 7 years ago
agazso | 7 years ago
syoc | 7 years ago
BluSyn | 7 years ago
Seriously people. Never. Trust. Bash Scripts.
Sylamore | 7 years ago
[0] - https://www.idontplaydarts.com/2016/04/detecting-curl-pipe-b...
johnchristopher | 7 years ago
shreyansh_k | 7 years ago
You can find the introductory text here (https://shreyanshja.in/blog/sane-dotfiles-management/) And the source code here (https://github.com/shreyanshk/sdf)
Let me know how it works for you. :-)
mikewhy | 7 years ago
sametmax | 7 years ago
yjftsjthsd-h | 7 years ago
lscotte | 7 years ago
lwhsiao | 7 years ago
katet | 7 years ago
gkmcd | 7 years ago
http://yadm.io
djblue | 7 years ago
github: https://github.com/djblue/dotfiles install.sh: https://git.io/vxQ4g
mindslight | 7 years ago
Barrin92 | 7 years ago
frutiger | 7 years ago
All this is available as a sparsely documented Python package: https://github.com/frutiger/stratum
__blockcipher__ | 7 years ago
nine_k | 7 years ago
yjftsjthsd-h | 7 years ago
mikewhy | 7 years ago
santadakota | 7 years ago
https://github.com/RichiH/vcsh
https://myrepos.branchable.com/
hectorm | 7 years ago
pmoriarty | 7 years ago
[1] https://www.gnu.org/software/stow/
benley | 7 years ago
That doesn't really follow; it is easy to check out old versions of a single file with git. For ".bashrc from two commits ago on the current branch", that would be `git show HEAD~2:.bashrc`. Hopefully I'm not just misunderstanding your point here.
njharman | 7 years ago
boardwaalk | 7 years ago
bruxis | 7 years ago
Something like:
sam_lowry_ | 7 years ago
arunix | 7 years ago
personjerry | 7 years ago
thingfox | 7 years ago
Yadm, a thin wrapper around git, allows for alternate files, encryption and templating. See my post https://news.ycombinator.com/item?id=19594859
https://github.com/thingfox/dotfiles
https://yadm.io/docs/encryption
https://yadm.io/docs/bootstrap
https://yadm.io/docs/alternates
jdormit | 7 years ago
dewey | 7 years ago