smitherfield's comments

smitherfield | 4 years ago | on: Preparing Rustls for Wider Adoption

No evidence? We know for a fact that US, Russian, Chinese, British, Israeli etc. intelligence agencies are looking for crypto vulnerabilities, and we know for a fact that they do not publicize the vulnerabilities they find.

smitherfield | 4 years ago | on: Preparing Rustls for Wider Adoption

That's if you look at major PUBLICIZED attacks on TLS endpoints. It's quite plausible that the people who've found (i.e. are looking for) attacks based on incorrect crypto aren't publicizing them.

smitherfield | 4 years ago | on: We need to know the origin of Covid-19

But it wasn't an adverse inference in that case, at least not from Saddam's perspective. He wanted the world to believe he had WMDs, because he believed, not unreasonably (cf. North Korea), that this would deter military action by the U.S. and his other enemies (Iran, Syria, Israel, Saudi Arabia).

smitherfield | 4 years ago | on: Walter Mondale has died

Mondale proposed some interesting ideas during his Presidential campaign, in particular a national industrial policy, that I think we would have done well to take heed of.

smitherfield | 4 years ago | on: We need to know the origin of Covid-19

This is more like if the police come to search your house, and find that you've burned it down, or you've barricaded yourself inside with guns and hostages.

But, let's assume for the sake of argument that your analogy is the correct one. You know where they assume that if you don't let the police search your house, or if you don't answer police questions, you must be guilty? China.

So, we can just as easily judge the Chinese government by another heuristic: the Golden Rule.

smitherfield | 4 years ago | on: We need to know the origin of Covid-19

I think a useful concept with this is the legal doctrine of adverse inference.[1] If one of the parties to a lawsuit conceals or destroys important evidence, it is assumed that that evidence would have been unfavorable to the party which concealed or destroyed it.

So, while we may not be able to know for sure how COVID-19 originated, we can certainly draw an adverse inference from the behavior of the Chinese government.

[1] https://en.wikipedia.org/wiki/Adverse_inference

smitherfield | 7 years ago | on: Benchmarks of Cache-Friendly Data Structures in C++

Yeah, that's one of my biggest pet peeves when looking at other people's code (along with unnecessary dynamic allocations in general). One of the reasons I perhaps irrationally still prefer C++ to Rust is the pervasive use of dynamic arrays of known static size in the latter's documentation, and how Rust makes fixed-size arrays much less ergonomic to use than dynamic ones.

smitherfield | 7 years ago | on: Benchmarks of Cache-Friendly Data Structures in C++

Why wouldn't an implementation along these lines be performant?

  #include <cstddef>   // size_t
  #include <tuple>
  #include <utility>   // index_sequence, index_sequence_for
  #include <vector>
  using namespace std;

  template<typename... Ts>
  class SoA : public tuple<vector<Ts>...> {
          // ...
          template<size_t... Is>
          tuple<Ts&...> subscript(size_t i, index_sequence<Is...>) {
                  // get<Is>(*this) picks the Is-th vector out of the
                  // tuple base; indexing each one builds a row of references.
                  return {get<Is>(*this)[i]...};
          }
  public:
          // ...
          auto operator[](size_t i) {
                  return subscript(i, index_sequence_for<Ts...>{});
          }
  };

smitherfield | 7 years ago | on: Special Cases Are a Code Smell

> "Technically" O(n) is the only O(n).

In idealized algorithmic analysis, but not necessarily in real life. "Amortized O(1)," which I assume you concede is a commonly used, meaningful and legitimate term, describes an operation whose individual worst case is technically worse than O(1), but whose cost averaged over a long sequence of operations is O(1) in practice.

Calling memcpy inside a Ruby method call is amortized O(1) because for any "n" that fits within available memory, it will always be much faster than the other things in a Ruby method call, which involve dozens of locks, hash table lookups with string keys, dynamic type checks, additional Ruby method calls and so forth.

Likewise, computational complexity on an idealized Von Neumann machine isn't always the same on a real computer, in both directions. Dynamic allocations are theoretically O(n) but may be O(1) if the program never exceeds the preallocated space. Or suppose there were a loop over an array of pointers which dereferenced each pointer; the dereferences are theoretically O(1) but may be O(n) if they evict the parent array from the cache.
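The amortized-O(1) point can be made concrete with a toy growable array (my own sketch, not anything from the thread, and not how Ruby's Array actually works internally) that counts element copies. Each individual push can be O(n) when it triggers a reallocation, but doubling the capacity means n pushes cause fewer than n copies in total, i.e. amortized constant cost per push:

```ruby
# Toy growable array that doubles its backing store when full and
# counts how many element copies reallocation performs. Hypothetical
# illustration of amortized O(1), not real Ruby internals.
class CountingArray
  attr_reader :size, :copies

  def initialize
    @store = Array.new(1)
    @size = 0
    @copies = 0
  end

  def push(x)
    if @size == @store.size
      # Worst case O(n): copy every existing element to a bigger store.
      bigger = Array.new(@store.size * 2)
      @size.times { |i| bigger[i] = @store[i]; @copies += 1 }
      @store = bigger
    end
    @store[@size] = x
    @size += 1
    self
  end
end

a = CountingArray.new
1024.times { |i| a.push(i) }
# Growing 1 -> 2 -> 4 -> ... -> 1024 copies 1 + 2 + ... + 512 = 1023
# elements in total: less than one copy per push on average.
```

The geometric growth is what makes the average work out; growing by a fixed increment instead would make the total copies quadratic, and the amortized cost O(n).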

> What is the common case in your view?

Such as an array small enough that it can be copied with 10 or fewer vector load/stores.

> O(3n) = O(2n) = O(n)

Yes, that's my point. It's impossible to implement the example in less than idealized O(n) time, so O(n) and O(1) operations are equivalent complexity-wise WRT the entire method.

smitherfield | 7 years ago | on: Special Cases Are a Code Smell

>No, you still need to copy the old array to the new array.

That's just a lock (nontrivial but O(1)) and a memcpy (technically O(n), but trivial, and O(1) for the common case if it's implemented with vector instructions). And in any event, the sums-of-neighbors method has to be at least O(n) on an idealized Von Neumann machine, because it must read every element of the source array and write every element of the destination.

smitherfield | 7 years ago | on: Special Cases Are a Code Smell

Yeah, a little OCD, but I couldn't stand the first example (or some of the others). Here's a more reasonable implementation that doesn't special-case (or, more precisely, wraps the special-casing in a standard-library method):

  class Array
    def neighbor_sums
      map.with_index do |_, i|
        # (i - 1) & 0xFF_FF_FF_FF turns -1 into a huge positive index,
        # so fetch falls through to its default of 0 instead of wrapping
        # around to the last element.
        fetch(i - 1 & 0xFF_FF_FF_FF, 0) + fetch(i + 1, 0)
      end
    end
  end

  [1, 1, 1, 1].neighbor_sums # [1, 2, 2, 1]
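Another way to push the edge cases into library plumbing (my own variant, not from the original comment) is to pad the array with explicit zeros, so the first and last elements have a well-defined missing neighbor, then slide a three-wide window over it:

```ruby
class Array
  # Same result as neighbor_sums: pad with zeros, then sum the two
  # outer elements of each three-wide window. each_cons handles the
  # boundaries for free, since the padded ends only ever contribute 0.
  def neighbor_sums_padded
    ([0] + self + [0]).each_cons(3).map { |left, _, right| left + right }
  end
end

[1, 1, 1, 1].neighbor_sums_padded # [1, 2, 2, 1]
```

The padding costs one extra copy of the array, but it makes the "missing neighbor is zero" rule explicit in the data rather than hiding it in fetch's default argument.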