jacobparker's comments

jacobparker | 1 year ago | on: Null-Restricted and Nullable Types

The one nullable gotcha with doing this is that the .NET standard library doesn't have nullable annotations in older frameworks.

One approach to adding nullable annotations to a large code base is to go "bottom-up", one DLL at a time. The annotations are somewhat viral; if you were to go top-down, then as you get to more of the "shared" DLLs and add annotations, you'll find yourself having to go back and revisit the DLLs you already migrated. In this light, the .NET standard library situation is problematic.

Imagine implementing IEnumerable<T> by wrapping another IEnumerable<T>: in .NET Framework there are no nullable annotations, so you can get into a situation where you don't add a nullable annotation that you'll eventually need when upgrading to newer .NET. This can bubble up throughout your code base, making that eventual .NET upgrade more challenging.
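As a tiny sketch of how this bites (the code is hypothetical, but the warning behaviour is real):

```csharp
#nullable enable
using System.Collections.Generic;
using System.Linq;

static class NullableExample {
    // Compiled against .NET Framework, FirstOrDefault has no nullable
    // annotations, so this passes cleanly. Compiled against .NET 6+,
    // FirstOrDefault is annotated to return string?, and the
    // dereference becomes a warning (CS8602: possible null reference).
    public static int LengthOfFirst( IEnumerable<string> items ) =>
        items.FirstOrDefault().Length;
}
```

None of your own code changed here; the warning appears purely because the standard library's annotations did.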

I'm not saying it's not worth doing this in .NET Framework, but you can very easily add extra work to the inevitable(?) update to .NET 8+. When you get there you could, of course, temporarily regress and turn nullable violations back into warnings until you work through that surprise backlog, but nullable is really nice, so you might be strongly inclined not to do that... Not a huge problem, just something to be aware of!

jacobparker | 4 years ago | on: Show HN: Dependently typed language for proofs that you can implement in one day

I highly recommend SF. The trick is that you must work through the books in CoqIDE or something equivalent; viewing the rendered HTML is, at best, almost pointless, and probably counter-productive.

Binary search would be a good exercise. I think SF volume 1 would be more than enough, with the exception of deciding how to model arrays (you'd probably want to look up Coq's persistent arrays/PArray system for this.) SF volume 3 does binary search _trees_ which is similar but avoids the array problem (as is customary in functional languages.)

However, Coq (by default) also uses arbitrary-precision numbers. This works very well for proofs but isn't necessarily realistic. This is where some extra fun comes in: fixed-precision arithmetic is a very common source of bugs in binary search: https://ai.googleblog.com/2006/06/extra-extra-read-all-about...
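The bug from that post is worth seeing concretely; a sketch in C# (the fix is the same in any fixed-precision language):

```csharp
static class BinarySearchMath {
    // Buggy midpoint: for indices near int.MaxValue, low + high
    // overflows and wraps negative, sending the search off the rails.
    public static int MidpointBuggy( int low, int high ) =>
        ( low + high ) / 2;

    // Safe midpoint: high - low can't overflow when low <= high.
    public static int Midpoint( int low, int high ) =>
        low + ( high - low ) / 2;
}
```

In an arbitrary-precision setting the "buggy" version is perfectly correct, which is exactly why a proof over Coq's default numbers wouldn't catch it.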

jacobparker | 4 years ago | on: Show HN: Dependently typed language for proofs that you can implement in one day

Some examples from Software Foundations (a series of books about Coq, available online):

Say I've just written something I call insertion sort. I want to know that it's a valid implementation of sorting. What does it mean to be a sorting algorithm? It means that the output is sorted, and that it's a permutation (shuffling) of the input. This is an exercise here: https://softwarefoundations.cis.upenn.edu/vfa-current/Sort.h...
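As a sketch of what that specification looks like (here in Lean 4 rather than the book's Coq; `List.Perm` is the standard permutation relation):

```lean
-- "Sorted" for lists of naturals: each element is ≤ its successor.
def Sorted : List Nat → Prop
  | [] => True
  | [_] => True
  | a :: b :: rest => a ≤ b ∧ Sorted (b :: rest)

-- f is a sorting algorithm iff, for every input, the output is
-- sorted and is a permutation of the input.
def IsSortingAlgorithm (f : List Nat → List Nat) : Prop :=
  ∀ l, Sorted (f l) ∧ List.Perm (f l) l
```

The exercise is then to prove `IsSortingAlgorithm insertionSort` for your implementation.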

Say I've written a Red-Black tree. I want to know that it behaves like a map, or that it keeps the tree appropriately balanced (which is related to performance): https://softwarefoundations.cis.upenn.edu/vfa-current/Redbla...

One more: say you have specified a programming language, which includes describing the structure of programs (essentially the grammar) and "small step" semantics (e.g. `if true then x else y` can take a step to `x`). It would be interesting to show that any well-typed halting program evaluates, in some number of steps, to exactly one value (i.e. the language is deterministic and "doesn't get stuck", or, in a sense, doesn't have undefined behaviour). That's the subject of volume 2: https://softwarefoundations.cis.upenn.edu/plf-current/toc.ht... Beyond this, you may have done a similar thing for a lower-level language (machine assembly, wasm, ...) and have a compiler from your language to the low-level language. You may want to prove that the compiler "preserves" something, e.g. that the compiled output ultimately evaluates to the same result.

RE: termination, in something like Coq that is a bit special because every function terminates by construction. That might sound limiting, but it's not really; there are ways around it. It would, however, be impossible to write something like the Collatz function in Coq and extract it in the obvious sense.
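For concreteness, here's the function in question (a C# sketch; the point is that nobody has proven this loop terminates for every input, so Coq can't accept the direct translation):

```csharp
static class Collatz {
    // Counts the steps for n to reach 1 under the Collatz map:
    // halve if even, else 3n + 1. Whether this terminates for all
    // positive n is an open problem, which is exactly why Coq
    // rejects the obvious recursive definition.
    public static int Steps( long n ) {
        int steps = 0;
        while( n != 1 ) {
            n = ( n % 2 == 0 ) ? n / 2 : 3 * n + 1;
            steps++;
        }
        return steps;
    }
}
```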

EDIT: and there are other ways these tools can (theoretically) be used to work with programs, but that's a long story. There have been real-world programs built like this, but it is very uncommon at this time. It is an area of active research.

jacobparker | 4 years ago | on: Bringing supply chain security features to the Go community

For NuGet, consider using centralized package references: https://github.com/NuGet/Home/wiki/Centrally-managing-NuGet-...

You get one file defining the set of all packages used in your repo (or some subset of your repo, etc.) and Dependabot will update this file directly. Individual projects can choose to use a package but won't specify the version.

It requires that your projects all stay in sync on package versions, but (1) that sounds like what you want and (2) it's usually the best thing anyway.
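A sketch of what this looks like (the package name and version are just illustrative): a Directory.Packages.props at the repo root pins the versions, and individual projects reference packages without one.

```xml
<!-- Directory.Packages.props (repo root) -->
<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
  </ItemGroup>
</Project>

<!-- An individual .csproj opts in to a package, with no Version -->
<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" />
</ItemGroup>
```

Dependabot then has a single file to bump, and version drift between projects becomes a build error rather than a surprise.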

jacobparker | 5 years ago | on: We found and fixed a rare race condition in our session handling

At D2L we have a large C# code base which gets deployed in a few different subsets, but is largely a monolithic fleet of web servers.

To prevent these kinds of problems we have a few approaches, but the main one is to prevent shared mutable state. To do this we have a custom C# code analyzer (source here: https://github.com/Brightspace/D2L.CodeStyle/tree/master/src... , but it's not documented for external consumption at this point; ImmutableDefinitionChecker is the main place to look.) It goes like this:

  [Immutable]
  public class Foo : Bar {
    // object is a scary field type, but "new object()"
    // is always an immutable value.
    private readonly object m_lock = new object();
  
    // we'll check that ISomething is [Immutable] here
    private readonly ISomething m_something;
  
    // Fine because lambdas in initializers can't
    // close over mutable state that we wouldn't
    // otherwise catch.
    private readonly Action m_abc = () => {};
  
    // see the constructor
    private readonly Func<int, int> m_xyz;
  
    // danger: not readonly
    private int m_zzz;

    // danger: arrays are always mutable
    private readonly int[] m_array1 = new[] { 1, 2, 3 };

    // ok
    private readonly ImmutableArray<int> m_array2
      = ImmutableArray.Create( 1, 2, 3 );
  
    public Foo( ISomething something ) {
      m_something = something;
  
      // ok: static lambdas can't close over mutable state
      m_xyz = static _ => 3;
  
      // danger: lambdas can in general close over mutable
      // state.
      int i = 0;
      m_xyz = x => { i += x; return i; };
    }
  }

Here the class has the [Immutable] attribute, so we check that all of its fields are readonly and hold immutable values (by looking at all assignments, which is easy for readonly fields/properties). Additionally, we check that instances of Bar (our base class) are known to be immutable.

Any class that extends Foo (as a base class) will be required to be [Immutable] as well. There are a decent number of aspects to this analysis (e.g. a generic type can be "conditionally immutable": ImmutableArray<int> is, but ImmutableArray<object> is not); check the source if you're interested.

We require that all static (global) variables be immutable, and any "singleton" type (of which we have tens of thousands, if I remember correctly) must be as well.

Another important thing we do is cut off deferred boot-up code from accessing customer data (via a thread-local variable that is checked before access through the data layer). This prevents accidentally caching something for a particular customer inside, say, a persistent Lazy<T> (or the constructor for a singleton, etc.)
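A hypothetical sketch of that guard (all names invented; the real mechanism lives behind D2L's data layer):

```csharp
using System;
using System.Threading;

// Boot-time code flips a thread-local flag; the data layer refuses
// to serve customer data while it's set.
static class BootGuard {
    private static readonly ThreadLocal<bool> s_inBoot =
        new ThreadLocal<bool>( () => false );

    // Wrap deferred boot-up work in a using block around this.
    public static IDisposable EnterBoot() {
        s_inBoot.Value = true;
        return new ExitOnDispose();
    }

    // Called at the top of every data-layer entry point.
    public static void AssertCustomerDataAllowed() {
        if( s_inBoot.Value ) {
            throw new InvalidOperationException(
                "Customer data accessed from deferred boot-up code"
            );
        }
    }

    private sealed class ExitOnDispose : IDisposable {
        public void Dispose() => s_inBoot.Value = false;
    }
}
```

Because the flag is thread-local, ordinary request threads are unaffected; only code running under EnterBoot() is fenced off.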

We've adapted our code base to be very strict about this over the last few years; it has uncovered many obscure thread-safety bugs and doubtless prevented many more from happening since.

jacobparker | 5 years ago | on: RFC 8959: The “secret-token” URI Scheme

You might be interested in:

git config --global core.excludesfile ~/.gitignore

You can have a system-wide (but local only) .gitignore. It doesn't help other people who clone your repo, but it can be useful in some situations.

jacobparker | 5 years ago | on: Highlights from Git 2.28

For the past few years the performance work (e.g. the commit graph) has made a huge difference for large/fast-moving repos. The wins are big enough to be productivity enhancements for some people: when we merged our Android-like multi-repo setup into a proper mono-repo, we had to update git, because things like git status/git commit/git log could take minutes without the recent perf fixes.

It pays to follow the release notes because some of these features are opt-in (e.g. commit graph is still optional).

The sparse-checkout stuff is great but still too low-level for us to use; that said, it's laying the groundwork for something good.

jacobparker | 5 years ago | on: GitHub isn't fun anymore

Thanks for the reply. What do you mean by a CI/CD system?

I'm not sure what you think was "bolted on" to Actions; could you be specific? Features have been added (during the beta and since), but I can't think of anything that could really be described as "bolted on". To me, "bolted on" implies some kind of disharmony with other features, weird quirks, etc.

> I seriously didn't understand what problems github actions were intended to solve

You can run arbitrary code in response to any GitHub repository event, as well as on cron schedules and external triggers (a GitHub app can POST to a specific API to trigger those styles of workflow).

The code runs with access to GitHub APIs for your repo (it can clone it, leave comments, create new PRs, whatever), and in any event that is associated with a PR a check status gets added to the PR.
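All three trigger styles fit in one workflow file; a minimal sketch (names and the cron schedule are illustrative):

```yaml
name: example
on:
  issues:
    types: [opened]          # a repository event
  schedule:
    - cron: "0 0 * * *"      # a cron job
  repository_dispatch:
    types: [external-trigger] # an external POST from a GitHub app

jobs:
  respond:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "arbitrary code runs here"
```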

This is a pretty general-purpose tool and I think it's easy to imagine the kind of problems people would use it to solve...

> my suspicion is [...]

Honestly it sounds like you're not really familiar with it and have some strongly negative opinions. I'd recommend either giving it another look and putting aside your feelings or working on a more substantive critique...

jacobparker | 5 years ago | on: GitHub isn't fun anymore

It's really hard to understand what you're trying to say here. I use Actions a lot, and there is no distinction between Actions and CI/CD here. The context of this blog post is a set of features that were added to Actions (during the beta!) to better support CI/CD workflows. There is no CI/CD "shortcut", as you put it, bolted on to Actions.

The alpha was rougher (and completely different) but that's why it was called an alpha!

jacobparker | 5 years ago | on: 0.999...= 1

Any other arguments for a different base aside, in base 12 you have the analogous "problem" that 0.BBB... = 1 (where B is the digit for eleven). None of the usual bases, equipped with this "..." power, avoids the "problem" (non-unique representation).
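The same geometric-series argument that gives 0.999... = 1 in base 10 works here (again writing B for the digit eleven):

```latex
0.\overline{B}_{12}
  = \sum_{k=1}^{\infty} \frac{11}{12^{k}}
  = 11 \cdot \frac{1/12}{1 - 1/12}
  = \frac{11}{11}
  = 1
```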

jacobparker | 5 years ago | on: OrbitDB: Peer-to-peer databases for the decentralized web

> When you get down to it, between cache coherency across CPUs and memory, disk flush delays and disk caches, every database is eventually consistent.

This is false. None of the things you listed preclude consistency.

> And if you want to operate over a distributed network, which means you WILL have network partitions, then you are subject to CAP and will need eventual consistency mechanisms.

That's not how CAP works. Plenty of distributed CP databases exist.

jacobparker | 5 years ago | on: Buridan's Principle (1984) [pdf]

Right, but the reasons why are different from the halting problem. As stated in the OP, "the key assumption in this argument is continuity". I'm not sure what you mean by the outputs being "expected to be considered a continuous function". In both the halting problem and this case the outputs are discrete (e.g. binary: halts/does not halt, left/right, etc.)

There are other impossibility results, e.g. the FLP theorem. Despite having computing-based interpretations, they're not all related to the halting problem. :)

That being said, maybe it's possible to generalize the halting problem to a continuous setting in some reasonable way and get something like Buridan's principle. It's not obvious to me how that would be done, but I'd be interested to see!

jacobparker | 6 years ago | on: Temporarily rolling back SameSite Cookie Changes

Mostly, if you opt in: https://caniuse.com/#feat=same-site-cookie-attribute . SameSite=Lax will still send cookies for some types of GET requests; depending on the complexity of your site and UX, there are ways to be more protected (with SameSite=Strict, cookie pairs, etc.)

Once all browsers behave like Chrome is trying to (SameSite=Lax by default) we will have dramatically less CSRF on the web. Other browsers are likely to adopt this change eventually if Chrome sticks with it. You will at least need to consider users with out-of-date browsers for a while yet (and implement XSRF tokens and/or explicitly opt-in to Lax/whatever).
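Concretely, the opt-in is just an attribute on the cookie (names/values here are made up):

```
Set-Cookie: session=abc123; Secure; HttpOnly; SameSite=Lax
Set-Cookie: adminSession=xyz789; Secure; HttpOnly; SameSite=Strict
```

Strict withholds the cookie even on top-level cross-site navigations (e.g. following a link into your site), which is safer but can make users appear logged out; Lax is the compromise most sites want.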

The old behaviour will still (and always?) be available with SameSite=None. It has uses, but misuse can create CSRF vulnerabilities. There will still be CSRF problems on the web, but they'll get a lot rarer and, mercifully, won't be the default.
