ths's comments

ths | 4 years ago | on: Testing Distributed Systems

> The issue isn't tooling, it's hardware resources and in some cases licencing.

Hardware resources are definitely an issue. That's why we generally recommend using remote development environments, which aren't as resource-constrained as the local dev machine. Making that comparably smooth to the local dev experience (e.g. for live reloading of services without rebuilding containers) needs some clever tooling (which is partly the reason we're building our product).

With production-like remote dev environments, you get the same capabilities as your CI environment, but can run test suites ad hoc (and without having to spin them up and tear them down for every test run).

There's no fundamental reason why CI environments should have capabilities that individual dev environments can't have—it's all a matter of automation in the end.

> The real challenges, like determining seed data etc is too project specific to be abstracted away.

Very much agree with that! The generic stuff (dependencies, parallel processing, waiting for things to spin up etc.) should be taken care of by the tooling, but without constraining the project-specific stuff (which is highly individual).

ths | 4 years ago | on: Testing Distributed Systems

Related to this topic: When running integration/e2e tests, setting up the environment (all the required services, data stores etc.) in the right sequence, loading them with test data and so forth can be thorny to automate.

Good automation around preparing/provisioning the testing environment is a necessary companion to the testing tools/frameworks themselves.

Most commonly, fully-capable testing environments aren't available during the inner loop of development (where the dev setup can usually only run unit tests or integration tests for 1-2 services + a database).

Because of this, people tend to rely solely on their CI pipelines to run integ/e2e tests, which can slow things down a lot when one of those tests fails (since the write/run/debug loop has to go through the CI pipeline).

As an industry, I think we should start taking automation and developer productivity more seriously—not least when it comes to writing and debugging tests for complex distributed systems. The more we can lower the marginal cost of writing and running tests, the more effective our test suites will become over time.

Shameless plug: My company (https://garden.io/) is developing a framework and toolchain to bring the full capabilities of a CI pipeline to the inner loop of development, so that developers can efficiently run all/any test suites (including integ/e2e tests) in their personal dev environments.

We do this by capturing the full dependency graph (builds, deploys, tests, DB seeding etc.) of the system in a way that can power CI, preview environments and inner-loop development.

ths | 5 years ago | on: Docker for Mac M1 RC

Sounds like Garden Core could be a great fit here.

The motivation behind Garden was that, like you, we had built our own home-grown Kubernetes dev environments, but felt like there should be a polished, general-purpose framework + tool for this sort of thing.

ths | 5 years ago | on: Docker for Mac M1 RC

(Garden co-founder here)

Garden supports in-cluster building, using BuildKit or Kaniko.

This way, you don't need to have Docker or k8s running on your dev machine as you're working.

It also automates the process of redeploying services and re-running tests as you're coding (since it leverages the build/deploy/test dependencies in your stack).

We also provide hot reloading of running services, which gives a feedback loop comparable in speed to local dev.

The idea is to have a dev environment that has the same capabilities as the CI environment, and to be able to run any/all of your tests without having to go through your CI system (which generally involves a lot more waiting).

ths | 9 years ago | on: Brain, Mind, Body and the Disease of Addiction

We have an easier time categorising a disorder as a disease rather than a moral failure when we see clear neurobiological correlates.

But as we know (or at least currently believe), the separation of disorders into "hardware" (non-moral/impersonal) and "software" (moral/personal) is ultimately an illusion: The material substrate for our personality is precisely our neurobiology.

For example, ΔFosB overexpression in the nucleus accumbens following repeated reward stimulus is "the most significant biomolecular mechanism in addiction since its viral or genetic overexpression (through chronic addictive drug use) in D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and behavioral effects (e.g., expression-dependent increases in self-administration and reward sensitization) seen in drug addiction" (https://en.wikipedia.org/wiki/FOSB#Role_in_addiction).

If we reach a point where we also find sufficiently convincing neural correlates for the more "high-level" aspects of addictive psychology and personality, wouldn't that eventually lead to us treating the whole complex in non-moral terms?

My feeling is that we intuitively choose the conceptual structure that we feel is most functional, given our state of knowledge. Morality is just another model for predicting and interacting with the behaviour in question, albeit less formal and more heuristic-based.

ths | 13 years ago | on: Just Use Sublime Text

Indeed! I disagree with people who think vim and emacs are ugly: they have their own nice 8bit-esque aesthetic which I'd expect hackers to like, given that they've chosen to stare at terminals all day (I've rolled my own colorscheme for both vim and emacs). And they make good use of screen real estate, which is very important when coding on a laptop.

I haven't tried Evil, in my last emacs phase it wasn't mature yet IIRC - maybe I should give that a go and see if it tips the balance yet again.

ths | 13 years ago | on: Just Use Sublime Text

Even if the mouse were quicker on average for long jumps, the moving of the hand from mouse to keyboard feels like more of a combo breaker, more distracting. Speed is very important, but equally important is for the editing experience to have a low attention footprint to leave more space for the train of thought behind the changes being made.

ths | 13 years ago | on: Just Use Sublime Text

I rock back and forth, depending on the languages I'm using. Emacs does more, but is in my experience more kludgy, even after my having spent, in aggregate, many days customizing it (maybe it would sit better if I weren't hardwired to the vi/vim modal editing/motions method). Vim feels cleaner and less annoying, but sometimes the easy integration with external tools tips the balance in emacs' favor. E.g. I use vim for Rails apps (where emacs didn't feel different enough from vim + a terminal to be worth it), but if I were writing Clojure, stuff like the REPL integration would probably mean emacs.

Sublime is really pretty, and has great functionality out of the box.

Ultimately, vi/vim's contribution is the modal editing method (and derived things like motions), which can be transplanted to any other IDE or editor that cares to support it. But for the fundamentals - editing and switching between files - I've so far found vim to be the cleanest and most natural.

ths | 13 years ago | on: Just Use Sublime Text

If you want ⌘S, ⌘C, ⌘V etc., try MacVim (https://code.google.com/p/macvim/). I use it for longer coding sessions (more colors, faster rendering when in fullscreen with split panes), and terminal vim over ssh when working on a remote server. Also, I map Caps Lock to Esc on my Mac - easier to reach (and who uses Caps Lock anyway?).

ths | 13 years ago | on: One rat brain 'talks' to another using electronic link

I wonder if this could have some great research applications. A year ago I watched some lectures from Stanford on ethobiology (i.e. the branch dealing with the biological processes underlying behavior) on YouTube by Robert Sapolsky, and he talked a lot about the difficulty of figuring out what parts of the brain (and which interplays of brain centers) are responsible for behavioral patterns, especially when it comes to the more complex things. One joke was something like: "You know that feeling when someone calls you and you don't really want to talk, but feel uncomfortable with saying that actually you're busy and would rather just read a book than talk? I think we've found the brain center for that."

We have learned a lot from what happens when people have certain parts of their brains damaged. But maybe we could learn much, much more about the brain by being able to fiddle around with many different kinds of signals to different brain centers, and trying out hypotheses by stimulating several centers simultaneously in order to produce (or not produce) certain behaviors?

ths | 13 years ago | on: Chomsky: Work, Learning and Freedom

I should have been clearer: I'm not advocating anarcho-capitalism, just having an academic discussion. My point was that a state isn't necessary to enforce contracts, although the competing entities that would theoretically replace the state are quite state-like in many ways as you point out. The Icelandic Commonwealth was anarcho-capitalistic though, wasn't it (anarchy + property rights)? Worked okay for a few hundred years.

I do see your point that anarcho-capitalism isn't really anarchism, though. The societies they envision are radically different.

ths | 13 years ago | on: Chomsky: Work, Learning and Freedom

> Without the state, corporations (and capitalism) are not possible.

A state is not necessary for enforcing contracts - that could be done by private parties, as in anarcho-capitalism. Also: Cooperatives can exist within a capitalist system, but the converse is not true. So I guess it comes down to whether or not all property as defined by the status quo should be redistributed or reallocated, presumably by force; if that were not the case, there wouldn't really be any disagreement between left-anarchists and anarcho-capitalists, right?

ths | 13 years ago | on: GNU Guile 2.0.7 released

If you want a lisp that compiles to JS, ClojureScript is also a good option. It's ready for use, and there are a lot of smart people working on making it better. The Clojure/ClojureScript community is also very good - intelligent and friendly.

ths | 13 years ago | on: There’s more to mathematics than rigour and proofs

Good post! Generalizing the point, maybe a good description of sophistication is: using one's intuition in a natural way to explore the boundaries of one's knowledge, having previously refined that intuition by rigorous study and experiential learning. It's through an increasingly refined intuition about things within our horizon of knowledge that we're able to focus our conscious thought on the border of that horizon and expand it.

ths | 13 years ago | on: Codeq

> WRT Smalltalk's image, code management is only one of its marquee features, so I think Smalltalkers would object to calling image vs files a false dichotomy.

Yeah, I agree. That was inaccurate on my part. What I said really only applies to the code management side.

ths | 13 years ago | on: Codeq

Perhaps this project shows that file vs image is, in fact, a false dichotomy. Instead of replacing our current file-based workflows with a monolithic image, this kind of tool could provide all the image functionality on top of the file structure. This model is no more difficult for large teams than the file-based one, since the database/query engine/tools they describe have the same essential ingredients for collaboration as Git has, and one could envision a similar toolchain (push/pull/GitHub etc. for the database instead of the Git repo) emerging for a system like this. Best of both worlds? In many ways, it feels like the next step for the ideas that make Git great: why not add language semantics and query capabilities and take things to the next level?

ths | 13 years ago | on: Clojure is not Software Conservative

Exactly. When he says

    So under its expressive covers, everything about Clojure is strongly
    conservative, with a core overriding bias towards protecting
    programmers from mistakes.

he's probably talking about macros and the focus on immutability/FP. With respect to macros, I really don't get why he interpreted Christophe Grand's presentation like that unless he walked away after the first two slides or so (and even then it's a stretch). Rich Hickey debunked his statement on macros very well here on HN (http://news.ycombinator.com/item?id=4366661) - the view Christophe expresses is precisely the opposite of conservative: it's about increasing composability and flexibility, very liberal attributes by Yegge's own definition.

Although "protecting programmers from mistakes" is definitely one side of FP/immutability, I seem to recall Rich and others arguing for it more in terms of reducing the cognitive overhead of programming by reducing incidental complexity - not having to hold as many things in your head while thinking about your system allows your brain to handle bigger systems and move faster. Removing mental obstacles for the programmer who is impatient to create furiously seems pretty damn liberal to me.

I had fun reading Yegge's post, as with most of his posts. But the part about Clojure really missed the mark. Which is a shame, given Yegge's past advocacy of lisp and Clojure being a really cool, viable lisp for getting stuff done in the 21st century.

ths | 13 years ago | on: Coming Home to Vim

> I just can't be bothered with fiddling with config files and installing little bundles and packages for every functionality.

While this is obviously a matter of taste, I do like vim's minimalism in that you start off with a simple core with sensible defaults and just add any features you want (I use tcomment, command-t, and fuzzyfinder for switching buffers - that's pretty much it, my .vimrc is less than one page). I found vim much easier to get into than emacs (which I'm currently using for most heavyweight coding sessions) where the defaults are (imho) not as sensible and you have to dig through tons of stuff that comes bundled with it, instead of understanding everything from the beginning and building from there. Of course, it's great to be able to extend emacs with elisp to do any stuff you feel is missing, and I've generally felt it easier to integrate the whole dev experience (minus the browser) into it than with vim - but such usage was of course never in harmony with vim's philosophy in the first place.

> On top of that the whole "language of editing" and combining noun,verb,adjective commands, etc... doesn't really appeal to me because I'm too visual when I'm editing code. I can't stop to think about the right semantics about what I want to do, I just do it visually.

Like the rest of vim, these things just become muscle memory - you will completely stop thinking about what command to use after a bit. The reward is getting all those micro-level text editing tasks done with fewer keystrokes, minimizing the time you spend typing and therefore the disconnect with the thought that preceded the typing. I think this dynamic is really great when you get into the zone and want to let stuff flow out with as little neuromuscular obstruction as possible :P

I think the vi/vim editing model is a genuinely important contribution to text editing technique, much more so than vim the program, but of course you can use vim keybindings in lots of other editors and IDEs and still get the core benefits, assuming the most important vim-powers have been implemented.

ths | 13 years ago | on: Another year of Clojure

It's indeed possible to add type-checking to Clojure by writing macros for type declarations and checking. But Clojure was designed primarily as a dynamic language, and if one wants type checking maybe Haskell fits the bill better, since it has a powerful type system and a syntax that was designed with such a type system in mind from the get-go. I'd suspect that a heavyweight type system would just look and feel cumbersome in Clojure, but maybe I'm wrong.
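To make that concrete, here's a minimal sketch of what macro-based checking could look like. Everything here is hypothetical illustration: `defn-checked` is a made-up macro that attaches runtime predicate checks to arguments - a far cry from Haskell's static, inferred types, which is sort of the point.

```clojure
;; Hypothetical sketch: a defn variant that takes [symbol predicate]
;; pairs as its argument list and expands to a plain defn that asserts
;; each predicate at runtime before evaluating the body.
(defmacro defn-checked
  [fname typed-args & body]
  (let [syms (mapv first typed-args)]
    `(defn ~fname ~syms
       ~@(for [[s p] typed-args]
           `(assert (~p ~s) (str "argument " '~s " failed check " '~p)))
       ~@body)))

;; Usage: declare that both arguments must satisfy int?
(defn-checked add2 [[x int?] [y int?]]
  (+ x y))

(add2 1 2)        ;; => 3
;; (add2 1 "one") ;; would throw an AssertionError at runtime
```

Note that this only buys runtime contract checks; genuine compile-time checking would require the macro to actually analyze the code it wraps, which is where the cumbersome feeling tends to creep in.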

ths | 14 years ago | on: Our experience using Clojure to speed up Beanstalk

Clojure does have more sugar than Scheme, but imho some of it improves readability; for example, using brackets for grouping instead of overloading lists like Scheme does can make code easier to scan, because when you see parens in Clojure there are fewer meanings to choose from (usually a call - a function, macro, or special form). Example:

Scheme

  (let ((x 2) (y 3))
    (let* ((x 7) (z (+ x y)))
      (* z x)))

Clojure

  (let [x 2 y 3]
    (let [x 7 z (+ x y)]
      (* z x)))

I think the Clojure version is easier to read without a paren-matching editor, though Scheme's rigorous minimalism does have its charm. (Note the `let*` on the Scheme side: Clojure's `let` binds sequentially, like Scheme's `let*`, so that's the matching form.)