I used to have a bash script I would run to install everything (libs/tools) your average web developer would need (node, sublime, google chrome, python libs, etc.). Unfortunately I lost it after some time. Does anyone use anything similar?
I don't do fresh installs very often, but when I do, it's generally because I want it to be a genuine fresh install. As such, I don't install a tool until I need it. I find that tools I once thought I absolutely needed are not tools or libraries that I use anymore.
I have three exceptions to this: Vim (my editor of choice), dotfiles (which I store in a git repository and put in place using stow, installed via a simple bash script), and Vagrant, so I can do development testing against a VM.
As Docker matures, I may use it in place of Vagrant, but it's not ready to fill the same role quite yet.
> I don't do fresh installs very often, but when I do, it's generally because I want it to be a genuine fresh install. As such, I don't install a tool until I need it. I find that tools I once thought I absolutely needed are not tools or libraries that I use anymore.
Same. The core tools and libraries I use change more often than my development machine, so switching to a new machine is a good time to reassess what I need to install so I'm not carrying any baggage. I think the time spent automating the installation would be a net loss given how rarely I'd need it.
The setup required for building the projects I work on is automated using things like Vagrant, so outside of that I only really need a few tools anyway.
I hadn't come across stow before. I might integrate that into my dotfiles.
Right now I use a bash script I wrote to deploy my dotfiles into ~/, via symlink, including renaming existing files (after prompting the user).
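A minimal sketch of what such a deploy script can look like (the function name, prompt wording, and layout are illustrative, not the commenter's actual script):

```shell
#!/bin/sh
# Sketch of a dotfile deployer: symlink every dotfile from a source
# directory into a target directory, prompting before renaming any
# existing regular file out of the way.
deploy_dotfiles() {
    dotfiles=$1
    target=$2
    for src in "$dotfiles"/.[!.]*; do
        [ -e "$src" ] || continue
        dest="$target/$(basename "$src")"
        if [ -e "$dest" ] && [ ! -L "$dest" ]; then
            printf 'Rename existing %s to %s.bak? [y/N] ' "$dest" "$dest"
            read -r answer
            [ "$answer" = y ] || continue
            mv "$dest" "$dest.bak"
        fi
        ln -sfn "$src" "$dest"
    done
}
```

Usage would be something like `deploy_dotfiles ~/.dotfiles ~`.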
I get what you mean about using that opportunity to do some spring cleaning. I find I need to do that to my Vim installation periodically too. I'll add a new plugin that looks like it will be useful, or add something for a new language or template system, then not use it enough to justify it.
I recently reinstalled my NixOS laptop. I just installed the distribution, added my SSH keys, cloned a repository, made a handful of symlinks, and then told NixOS to set everything up.
It's actually a collaborative repository, so that both of us in our company can improve the computer configuration, install new tools or language runtimes, and so on.
The shared configuration has stuff like: our user accounts and public SSH keys; local mailer setup; firewall; conf for X/bash/emacs/ratpoison/tmux; list of installed packages (including Chromium, mplayer, nethack, etc); fonts and keymaps; various services (nssmdns, atd, redis, ipfs, tor, docker, ssh agent, etc); some cron jobs; a few custom package definitions; and some other stuff.
In Emacs, I use "use-package" for all external package requirements so that when I start the editor it installs all missing packages from ELPA/MELPA.
Aside from dealing with the binary WiFi blob this Dell computer demands, reinstalling was a pleasure.
I have my dotfiles for that, split into different categories (brew, npm, pip) together with all the config files I need. brew and brew cask (with brew-bundle [0] for Brewfile support) take care of getting all libraries and applications onto the system.
For the development environment itself I'm either shipping my entire config (.vimrc, for example) or using systems like spacemacs, sublimious or proton that only need a single file to re-install the entire state of the editor.
The install script itself [1] then symlinks everything into place and executes things like pip install -r ~/.dotfiles/pip/packages.txt.
It takes a bit of effort to keep everything up to date, but I'm never worried about losing my "machine state". If I go to a new machine, all I have to do is clone my dotfiles, execute install.sh, and I have everything I need.
On servers I use saltstack [2], a tool like puppet, ansible and friends, to ensure my machines are in the exact state I want them to be. I usually use the serverless version and push my states to the machines over SSH.
[0]: https://github.com/Homebrew/homebrew-bundle
[1]: https://github.com/dvcrn/dotfiles/blob/master/install.sh
[2]: https://saltstack.com
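For reference, the Brewfile that brew-bundle consumes is just a plain list of tap/brew/cask directives; the package names below are illustrative:

```shell
#!/bin/sh
# Sketch: a hand-written Brewfile; `brew bundle` reproduces it on a
# new machine. Package names are illustrative.
cat > Brewfile <<'EOF'
tap "homebrew/bundle"
brew "node"
brew "python"
cask "google-chrome"
cask "iterm2"
EOF
# On the old machine:  brew bundle dump     # writes a Brewfile for you
# On the new machine:  brew bundle install  # installs everything in it
```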
I have no strong opinion here, but I am curious to hear yours. What were the discriminators in choosing saltstack over the alternatives?
My head spins with these tools, and every time I pick one I seem to eventually run into a roadblock that's a no-go. The most recent effort was Ansible, and the no-go was its strict dependency on Python 2.7.
If you consistently use the same Linux distribution, consider building metapackages for that distribution.
I created a set of Debian packages that depend on suites of packages I need. I download and install "josh-apt-source", which installs the source in /etc/apt/sources.list.d and the key in /etc/apt/trusted.gpg.d/ , then "apt update" and "apt install josh-core josh-dev josh-gui ...". That same source package also builds configuration packages like "josh-config-sudoers".
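A low-effort way to build a metapackage like that (not necessarily how Josh builds his) is Debian's equivs tool: write a small control file listing the dependency suite, then build an installable .deb from it.

```shell
#!/bin/sh
# Sketch: a control file for an equivs metapackage whose only job is
# to depend on a suite of tools (requires the `equivs` package to
# actually build; package names are illustrative).
cat > josh-dev.ctl <<'EOF'
Package: josh-dev
Version: 1.0
Depends: git, build-essential, vim, curl
Description: metapackage pulling in my development tools
EOF
# equivs-build josh-dev.ctl              # produces josh-dev_1.0_all.deb
# sudo apt install ./josh-dev_1.0_all.deb
```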
>I created a set of Debian packages that depend on suites of packages I need
You created a Debian installation of yourself. Simply run "apt install josh-core josh-dev...", and Josh will be ready to start developing on the system whenever he is connected.
I wish... I use Linux, macOS and Windows on a daily basis. The best I can manage is to keep most of my regular scripts in Dropbox, symlinked under ~/bin/OS and added to my path, which actually works pretty well. I even managed to get matching bash- and CMD-based ssh-agent startup scripts working, which was interesting (I started with the Windows/CMD one and made a bash one to match its logic/intent).
I set up ConEmu to autorun a script when opening a cmd, and configure my bash profile similarly. At work (more OS X these days), I usually email myself the latest version, and keep a ~/.auth file (chmod 700) that I can source to initialize my proxy settings (including ssh+corkscrew)... That allows me to only have to edit credentials in one location.
The question asks only about *nix systems, I assume, but it's worth mentioning that there is a great tool for Windows too, in case someone needs it: https://chocolatey.org/
Use a configuration management tool (I picked https://saltstack.com/, mostly because of the docs & community support), but there are lots to choose from - Chef, Puppet, Ansible, and so on.
There's a learning curve, and plenty of 'where did my afternoon go?' rabbit holes you can lose yourself in. But the upside is that you can have consistent, repeatable, and rapid builds, with modularity as a bonus.
Don't be afraid with any of these kinds of tools to brute force complex components if you're in a hurry - ie. ignore the pure / idiomatic way, and use the tool's primitives to dump a shell script on a remote box and then run it.
I've found with Ansible at least, I was initially tempted to make large complicated roles for things like "application server" or "development desktop" but what ended up working much better was very granular roles such as "nginx server" and "emacs" (often just a single task such as "yum: name=nginx state=installed") that can be combined in playbooks. This makes it easier to avoid duplicating tasks in different roles, or having a lot of complex conditional cases in your roles.
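A sketch of that granular layout (role, package, and host names are illustrative): a role can be a single-task file, and playbooks just compose roles.

```shell
#!/bin/sh
# Sketch: a one-task "nginx" role and a playbook that composes
# granular roles (role and host names are illustrative).
mkdir -p roles/nginx/tasks
cat > roles/nginx/tasks/main.yml <<'EOF'
- name: install nginx
  yum: name=nginx state=installed
EOF
cat > site.yml <<'EOF'
- hosts: webservers
  roles:
    - nginx
    - emacs
EOF
```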
I'm using a shell script together with the Nix package manager for that. The shell script just ensures that all packages are there (e.g. by doing `nix-env -i fpp wget iterm2 jekyll ghc ruby nodejs composer php`). I can pin the version of all packages by configuring `NIX_PATH` to point to a specific `nixpkgs` (the package repository) commit, so that everyone has exactly the same versions of everything.
I also use Nix, with sets of packages managed by git. I use four "levels":
- System-wide packages, systemd services, users, cronjobs, etc. are managed by /etc/nixos/configuration.nix
- My user profile has one package installed, which depends on all of the tools I want to be generally available (emacs, firefox, etc.). By using a single meta-package like this, I can manage it using git and it makes updates/rollbacks easier
- Each project I work on maintains its own lists of dependencies (either directly in a .nix file, or converted automatically from other formats like .cabal), which are brought into scope by nix-shell
- I also have a bunch of scripts which use nix-shell shebangs, so their dependencies are fetched when invoked, and available for garbage collection once they're finished
I've recently learned you can put all the packages in one custom Nix expression in ~/.nixpkgs/config.nix, like below, then load (and reload/update/change) it with one `nix-env -i all`, or faster with: `nix-env -iA nixos.all`:
# ~/.nixpkgs/config.nix
{
  packageOverrides = defaultPkgs: with defaultPkgs; {
    # To install the "all" pseudo-package below, run:
    #   $ nix-env -i all
    # or:
    #   $ nix-env -iA nixos.all
    all = with pkgs; buildEnv {
      name = "all";
      paths = [
        fpp wget iterm2 jekyll
        ghc ruby nodejs composer php
      ];
    };
  };
}
and then keep only this one file in git. (Though I'm still working out how to also keep .inputrc, .bashrc, .profile, etc. in it.)
I've found ansible to be okay at setting up my environment. I'm able to configure everything from my zsh themes, terminal font size, window manager shortcuts, thunderbird logins, and so forth. The playbook takes about 30 minutes to run and after that I have almost everything ready.
Unfortunately I don't have a public GH repo I can point at as I don't want to expose everything I use to the internet. However the principle is the same as provisioning servers with ansible.
The only different thing I do is use GPG keys to decrypt and untar things like Thunderbird profiles rather than using Ansible Vault. I restore GPG keys + SSH keys from offline, encrypted USB backups.
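The encrypt/restore round trip can be sketched like this; filenames are illustrative, and a symmetric passphrase stands in for the commenter's key-based setup:

```shell
#!/bin/sh
# Sketch: round-trip an encrypted profile backup with tar + gpg.
# A symmetric passphrase is used for illustration; decrypting with a
# private key works the same on the restore side.
backup() {  # backup <dir> <out.tar.gz.gpg> <passphrase>
    tar cz -C "$(dirname "$1")" "$(basename "$1")" |
        gpg --batch --yes --pinentry-mode loopback \
            --passphrase "$3" --symmetric -o "$2"
}
restore() { # restore <in.tar.gz.gpg> <dest-dir> <passphrase>
    gpg --batch --pinentry-mode loopback --passphrase "$3" \
        --decrypt "$1" | tar xz -C "$2"
}
```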
Have you considered using Gitlab? I really like GitHub, but can't justify the prices right now (just starting out in my career). But while Gitlab isn't as popular, or maybe quite as polished (though it's getting there fast), it does have free private repos. I've used this to store private data like this.
Protip: "brew list" will list all installed packages, including dependencies, which you might not want. What you probably want is "brew leaves", where it lists all installed packages that are not dependencies of another installed package.
This makes a difference in cases where a dependency is no longer needed in the latest version.
On a related note, why does the majority of package managers make the common and simple task of "list all manually installed packages" so incredibly hard?
For a fun brain twister, try to list all the manually installed packages on your system by just reading the man pages and no internet. Ubuntu is nightmare mode for this challenge.
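For the record, some managers do expose this: `brew leaves` above, or `apt-mark showmanual` on Debian/Ubuntu (where the list is muddied by everything the installer marked manual). The underlying operation — installed packages minus those pulled in as dependencies — is just set subtraction, sketched here with synthetic lists:

```shell
#!/bin/sh
# Sketch: "leaves" = installed packages minus those that are only
# dependencies of other packages. Synthetic lists stand in for the
# package manager's output (e.g. `apt-mark showmanual` on Debian).
sort > installed.txt <<'EOF'
emacs
libfoo
nginx
EOF
sort > dependencies.txt <<'EOF'
libfoo
EOF
comm -23 installed.txt dependencies.txt   # prints: emacs, nginx
```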
> brew
That right there is irony. You intended to give good advice about using reliable methods for installing software, then immediately recommended the devil's ass crack of package management tools.
I use a shell script for a new Debian[0] installation and also have other scripts for Kubuntu[1], openSUSE[2] and other software installations. I store my dotfiles[3] and other useful scripts that I can customize for each development environment. Hope that helps!
[0] https://github.com/svaksha/yaksha/blob/master/yksh/apt-debia...
[1] https://github.com/svaksha/yaksha#2-folders
[2] https://github.com/svaksha/yaksha/tree/master/yksh
[3] https://github.com/svaksha/yaksha/tree/master/home
Setting up a new system is where NixOS really shines. Once you have one system working it is trivial to duplicate it on new metal.
1. Install NixOS
2. Copy configuration.nix*
3. Copy dotfiles
4. # nixos-rebuild switch
5. Enjoy your old setup on new hardware--no secret sauce needed!
*A hardware-configuration.nix should have been generated by the installer. By default this is sourced by configuration.nix, in which case configuration.nix shouldn't need editing.
Manually. I wipe rarely and change tools often for various reasons (even ignoring version upgrades), which makes building and maintaining an installation script not worth it.
For a while I did maintain a Windows batch script that installed things off of a share at work. I was dealing with pre-release Windows 8 and wiped frequently for upgrades. Even that probably wasn't worth it, but I didn't have a second machine at the time and wanted to run it overnight instead of blocking my ability to work.
I have a repository[0] that holds all my configuration and installs some language-specific tools. Otherwise I just manually install any packages I need. I may consider automating this at some point, but I don't use that many tools, so it hasn't been particularly onerous.
[0] https://github.com/michaelmior/dotfiles
Ditto. In addition to dotfiles, my repo has a `system_setup.sh` which installs everything that can be installed on the command line, sets up symlinks, and so on. Every time I add a new tool to my arsenal (usually via brew install x), I also add it to that file. This repo means I can be up and running on any new system in under 30 minutes. Most of that time is for some manual downloads and git checkouts I have to do too.
Yep, this is what I do for Windows systems, and it works well for me. I have a *.bat file. The first line installs Chocolatey, and the subsequent lines install packages that I want. For example...
http://pastebin.com/cpbdbfAN
Boxstarter (http://boxstarter.org/) automates setting up Windows machines even more. It utilises Chocolatey to install third-party software and can also install Windows updates, take care of reboots, etc.
Once you've set up a box-script, you may run it on a freshly installed windows, go to lunch, and when you return everything's set up.
Depends on the system... for OS X: first Homebrew, after that VS Code. I use Homebrew to install the version switchers for my language of choice (usually node/nvm).
From there, I'll set up a ~/bin directory with various utility scripts. I may source some of them in my profile script.
----
Windows: Git for Windows, Git Extensions, ConEmu, Visual Studio (Pro or Community, depending on environment), VS Code. I should look into Chocolatey, but admit I haven't. NVM for Windows.
----
Linux/Ubuntu: generally apt, and PPAs as needed.
----
FYI: I keep my ~/bin symlinked under dropbox, as I tend to use the same scripts in multiple places. I will separate ~/bin/win, ~/bin/osx and ~/bin/bash, and have them in the path in appropriate order... linux/bash being default. I'll usually use bash in windows these days too, and set my OSX pref to bash. It's the most consistent option for me, even with windows /c/...
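The per-OS ordering described above can be sketched as a small shell function (the uname-to-directory mapping is an assumption):

```shell
#!/bin/sh
# Sketch: map `uname -s` output to the matching ~/bin subdirectory,
# so the OS-specific dir can be prepended ahead of the shared one.
os_dir() {
    case $1 in
        Darwin)               echo osx  ;;
        MINGW*|MSYS*|CYGWIN*) echo win  ;;
        *)                    echo bash ;;
    esac
}
# In a profile script (earlier PATH entries win):
# PATH="$HOME/bin/$(os_dir "$(uname -s)"):$HOME/bin/bash:$PATH"
```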
I use Ansible with Brew and Brew Cask. I've found using Brew for everything makes it easier to upgrade all applications for security reasons, and it also gives a high-level view of my system. Here's the relevant config file of the things I install:
https://github.com/arianitu/setup-my-environment/blob/master...
The ansible script also links to my dotfiles, which can be found at:
https://github.com/arianitu/dotfiles
I find I end up with a lot of cruft and my tools of choice change over time, so I don't worry about it. A decent package manager makes this approach tolerable.
Homebrew and Homebrew Cask on OS X handle at least 90% of what I want to install.
We've adapted this at 18F. It's not a fork, but it is based on and inspired by Thoughtbot's original project. Strong recommend. https://github.com/18f/laptop
munificent | 9 years ago
Yes! A new machine is a great time to run a garbage collection cycle on my tools.
treebeard901 | 9 years ago
Lazy comma usage. Come on!
Shorel | 9 years ago
Or I have to install the PPAs before I can use the metapackage (which kind of defeats the idea of using a metapackage instead of a script)?
_query | 9 years ago
Package customizations like a .vimrc is also handled by nix (I recently blogged about how I do this: https://www.mpscholten.de/nixos/2016/05/26/sharing-configura...).
The shell scripts together with the package customizations (e.g. my custom vimrc) are managed by git.
ajford | 9 years ago
Check it out if you haven't already.
jmfayard | 9 years ago
On OS X, this is a no-brainer with brew [1] and brew cask [2].
# On my old mac
=> then I save relevant parts for future reference
[1] http://brew.sh/
[2] https://caskroom.github.io/
cyptus | 9 years ago
http://pastebin.com/HmiqDDbi