rpglover64's comments

rpglover64 | 6 years ago | on: Enabling developers and organizations to use differential privacy

Haven't read the paper yet, but have read the blog posts (which are awesome, BTW!).

I'm wondering if you have any thoughts on Frank McSherry's old blog post expressing his distrust for approximate-DP [1]. He seems to have different intuitions than your "almost DP" post expresses and makes criticisms that aren't quite addressed in your post.

[1]: https://github.com/frankmcsherry/blog/blob/master/posts/2017...

rpglover64 | 6 years ago | on: Enabling developers and organizations to use differential privacy

While this is true, there's some nuance.

First of all, there's a lot of recent (and not so recent) work in Local Differential Privacy [1], which uses the "untrusted curator" model. Although this software doesn't use it, the article mentions RAPPOR, which is a good example.

Second of all, encryption protects your _data_, but not your _privacy_; that is, assuming your data gets used in any way, you have no guarantees about whether the result reveals anything you'd rather keep secret. Of course, if you're talking about normal encryption, your data _can't_ be used, but then you're not really sharing it at all, as much as storing it there (like Dropbox). But once you start talking about things like homomorphic encryption or secure multiparty computation, it's important to keep in mind that they are complements to differential privacy, not replacements.

[1]: https://en.wikipedia.org/wiki/Local_differential_privacy
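The simplest instance of the local model is plain randomized response (a minimal sketch of the idea RAPPOR builds on, not RAPPOR's actual encoding; names and parameters here are made up):

```python
import random

def randomized_response(truth: bool, p: float = 0.75) -> bool:
    """Report the true bit with probability p, otherwise flip it.

    Each report satisfies epsilon-local-DP with epsilon = ln(p / (1 - p)),
    so the curator never needs to be trusted with the raw bit.
    """
    return truth if random.random() < p else not truth

def estimate_true_rate(reports: list, p: float = 0.75) -> float:
    """Debias the aggregate: E[observed] = p*r + (1 - p)*(1 - r)."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)
```

Any individual report is deniable, but the population-level rate is still recoverable from enough reports.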

rpglover64 | 8 years ago | on: Signal Foundation

> It's the same thing Firefox does with Linux distros who want to add their own patches.

"did", I think. I don't know which distros still run into this, but Debian now ships Firefox (RIP Iceweasel).

rpglover64 | 9 years ago | on: Reflecting on Haskell in 2016

> After using the language for two years I find that the types are actually enough to understand a new library, however, am taking for granted that it's an acquired skill.

I think this is overstating it, unfortunately. I'm an intermediate Haskeller, and when I tried to use `hasql`, I found that the lack of documentation slowed me down.

It's a testament to the power of types as documentation that I was able to use it at all, but examples and simple cookbook-style "Here's how you do this thing" or "Here's how you use this component" or "You can't do this because the interface doesn't allow it; here's why" would have sped up my acquisition of the library immensely.

rpglover64 | 9 years ago | on: Reflecting on Haskell in 2016

> it's just a formula. I also know it doesn't do any logging, or "launch nukes", and that it is thread safe.

So the type claims. Without looking at the implementation or using `-XSafe` (Safe Haskell), you don't know if there's an `unsafePerformIO` call lurking.
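A contrived sketch of the point (assuming GHC; `sneakyDouble` is made up for illustration, not from any real library):

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- The type promises a pure function...
sneakyDouble :: Int -> Int
-- ...but the implementation smuggles in IO anyway.
sneakyDouble x = unsafePerformIO $ do
  putStrLn ("launching nukes for " ++ show x)  -- hidden side effect
  pure (x * 2)

main :: IO ()
main = print (sneakyDouble 21)  -- also prints the "log" line
```

Compiling with `-XSafe` rules this out, because `System.IO.Unsafe` can't be imported from Safe code.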

/pedantry

rpglover64 | 10 years ago | on: Choose GitLab for your next open source project

Thank you for doing an AMA! I know I'm late to the party, but I have an experience-report/feature-request/question:

I tried to use GitLab in a classroom setting, and it went okay. One of the reasons we decided against using it the next year was the apparent lack of an archival backup feature (cf. my Stack Exchange [question](http://serverfault.com/q/627618/172148) on the matter).

We'd like to start completely fresh every year, so that former course assistants and students don't have access, but we'd also like to keep around the old data (for various reasons). Given that GitLab can only restore a backup to the same version that generated it, the only option this left us with was to archive the whole VM, which just feels sloppy.

I understand that this feature is not a priority and is a relatively large technical undertaking, so I'm not holding my breath on it getting implemented; even so, I thought that sharing my experience would be valuable.

Once again, thank you for engaging with the community and for such a great product.

rpglover64 | 10 years ago | on: Introducing OpenAI

I don't know if this is a definition other people use, but here's one possibility.

Intelligence (in a domain) is measured by how well you solve problems in that domain. If problems in the domain have binary solutions and no external input, a good measure of quality is average time to solution. Sometimes you can get a benefit by batching the problems, so let's permit that. In other cases, quality is best measured by probability of success in a given amount of time (think winning a timed chess or go game). Sometimes, instead of a binary outcome, we want to minimize error in a given time (like computing pi).

Pick a measure appropriate to the problem. These measures require thinking of the system as a whole, so an AI is not just a program but a physical device, running a program.

The domain for the unrestricted claim of intelligence is "reasonable problems". Having an AI tell you the mass of Jupiter or find Earth-like planets is reasonable. Having it move its arms (when it doesn't have any) is not. Having it move _your_ arms is reasonable, though.

The comparison is to the human who is or was most qualified to solve the problem, with the exception of people uniquely qualified to solve the problem (I'm not claiming that the AI is better than you are at moving your own arms).

rpglover64 | 10 years ago | on: A riddle wrapped in a curve

Are you happier with the state of symmetric crypto, which, despite relying on conjectures (like the existence of pseudo-random functions) tends not to rely on _algebraic_ ones?

Personally, I don't have particular worries about the hardness assumptions of asymmetric crypto, and I think of them a bit like I think of bitcoin (hear me out). Yes, it is certain that eventually someone will settle the discrete log problem for any given algebraic structure (either by breaking all crypto that relies on it or, less likely, by proving it fundamentally secure), but for now, we know that this is hard (since it has been open for a while), and we're also incentivizing people to make mathematical discoveries.
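The asymmetry being relied on shows up even at toy scale (a sketch only; real parameters are thousands of bits, and attacks better than brute force, like baby-step giant-step, do exist):

```python
p, g = 101, 2          # tiny prime and a generator; real systems use ~2048+ bits
x = 17                 # private exponent
h = pow(g, x, p)       # forward direction: fast modular exponentiation

def discrete_log(g: int, h: int, p: int) -> int:
    """Recover x with g^x = h (mod p) by brute force: O(p) multiplications.

    No algorithm polynomial in the bit-length of p is known for general
    groups -- that gap is exactly the unproven hardness assumption.
    """
    y = 1
    for candidate in range(p):
        if y == h:
            return candidate
        y = (y * g) % p
    raise ValueError("no solution")
```

Going forward costs a handful of multiplications; going backward costs (as far as anyone knows) time exponential in the size of `p`.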

I'd also claim that the "crypto community" (at least the academic side of it) and the "technology community" are not the same, and (at least to me) often feel opposed. Cryptologists write papers filled to the brim with dense and precise mathematical assumptions and reductions; technologists skim the papers, ignore the assumptions, and implement half-assed, unaudited versions of the systems in question and claim them secure (pardon my cynicism).

As to what the community thinks about mathematical public key crypto, they hail it as the greatest innovation since sliced bread and the herald of modern cryptography. Prior to modernity, cryptography was very ad hoc and depended on the author's intuitions; modernity introduced precise definitions of what it means for a system to be secure and raised the bar. It also relies heavily on the concept of a hardness reduction, i.e. a proof that breaking a cryptographic primitive is at least as hard as solving a yet-unsolved math problem.

Specifically about algebraic problems, I have a (low confidence) intuition that they are unavoidable in public-key crypto precisely because of the need for an algebraic structure relating the public and private keys. With this in mind, I'd rather have algorithms which rely on known hard to solve problems (demonstrated hard by having years of mathematical effort poured into them with minimal result) to those which rely on problems no one has ever bothered to look at.

A final question: you are unhappy with public key crypto that relies on algebra; would you be happier if it relied on some other branch of mathematics? Analysis? Topology (okay, so that's still algebra)? Complexity theory (a secure cryptosystem that relied only on P!=NP would be a holy grail for several reasons, but I don't know of any attempts to find one)? Would you feel safe using a cryptosystem that was secure if and only if the Riemann Hypothesis were true? If the RH were false? The Collatz Conjecture?

rpglover64 | 10 years ago | on: Lojban

This comes up often, but it turns out to be pretty crappy, in practice, for a few reasons.

First, although it's not ambiguous, it's vague. It's considered good practice to leave empty places that are not immediately relevant to the conversation. The way this works is kinda like if every sentence were a function call with arguments, and every argument had a default value; according to the language spec, though, the default value for each argument is to be inferred from context.
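A hypothetical Python rendering of that analogy (the place structure of klama, "to go", is real Lojban; the code itself is just illustration):

```python
def klama(goer, destination=None, origin=None, route=None, means=None):
    """Lojban's klama has five places: x1 goes to x2 from x3 via x4 by x5.

    An unfilled place isn't empty -- per the spec, its value is whatever
    the listener infers from context, like a default argument computed
    at the call site rather than fixed in the definition.
    """
    inferred = "<inferred from context>"
    return {slot: (value if value is not None else inferred)
            for slot, value in [("goer", goer), ("destination", destination),
                                ("origin", origin), ("route", route),
                                ("means", means)]}

utterance = klama("mi", destination="the store")
# origin, route, and means are all left for the listener to fill in
```

A human listener resolves those defaults effortlessly; a computer has no "context" function to call, which is the practical problem.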

Second, it's often referentially ambiguous (e.g. "the bear" doesn't necessarily uniquely identify an entity), and its system of anaphora (think pronouns), although different from English's, is insufficiently precise for computer communication.

Third, there are other languages which are designed with that goal in mind, and which are more natural. I'm thinking specifically of Attempto Controlled English.

rpglover64 | 11 years ago | on: Psychology Journal Bans Significance Testing

I think the problem is not "if you find a frequentist (as opposed to Bayesian) statistician", but "if you find a frequentist (as opposed to Bayesian) e.g. biologist".

Non-statisticians have been trained using bad, frequentist methods, and one way of forcing them to retrain is by forcing them to learn new statistical tools to get published.

rpglover64 | 11 years ago | on: What are your favourite sci-fi books?

You might like to check out Brandon Sanderson's work (if you haven't yet). It's all fantasy, but it tends to avoid the "and then something magic happens" problem.

I'd interpret the thing you dislike about fantasy as a violation of Sanderson's first law: "The author's ability to resolve conflicts in a satisfying way with magic is directly proportional to how well the reader understands said magic."

http://stormlightarchive.wikia.com/wiki/Sanderson%27s_Laws_o...

rpglover64 | 11 years ago | on: Learn You a Haskell for Great Good (2008)

If you like math (I mean stuff like set theory and abstract algebra, not arithmetic and high-school math), then Haskell is wonderful. I find programming in it to be more enjoyable than in any other language. It will also make you think differently, which is a useful feature if you're learning a language to expand your horizons. It's also a gateway language to more bleeding edge aspects of language design.

That said, I find that knowing enough C to fix build and configuration problems in open source projects has been more useful than Haskell for me.

rpglover64 | 11 years ago | on: What Color Is Your Function?

(Late to the party; sorry)

> you can't dynamically check something of a function type

Another approach besides contracts (one that my lab is working on) relies on whole-program type-checking and path-sensitive program analysis.
