graphene's comments

graphene | 6 years ago | on: British Airways faces record £183M fine for data breach

From the article:

    Where does the money go?

    The penalty is divided up between the other European
    data authorities, while the money that comes to the
    ICO goes directly to the Treasury.

So it seems most of it will go to other European data authorities.

graphene | 7 years ago | on: Julia 1.0

That's really cool! I checked Rigetti's github but there seem to be no open source Julia projects there (although, somewhat surprisingly, lots of Common Lisp!). Are you free to say what Julia is being used for at Rigetti (and Intel), and whether there are any plans to release things as open source in the future?

graphene | 8 years ago | on: Uber loses court appeal against drivers' rights

FWIW, it's exactly what I've heard when talking to Uber drivers in London. One guy used to work as a minicab driver; he prefers Uber because he can choose his hours, doesn't get cut out of lucrative jobs to the benefit of the company owner's nephew, and feels safer at night due to the rating system.

graphene | 8 years ago | on: Ask HN: In what creative ways are you using Makefiles?

I do something similar; here's my Makefile. I have scripts that build figures in a separate directory, figures/. I'm sure it could be terser, but it does the job for me.

    texfiles = acronyms.tex analytical_mecs_procedure.tex analytical_mecs.tex \
               anderson_old.tex background.tex chaincap.tex \
               conclusions.tex cvici.tex gold_chain_test.tex introduction.tex \
               main.tex mcci_manual.tex methods.tex moljunc.tex \
               tb_sum_test.tex times_procedure.tex tm_mcci_workflow.tex tmo.tex \
               vici_intro.tex
 
    # dynamically generated figures
 
 
    all: main.pdf
 
    main.pdf: $(texfiles) figures/junction_occupations.pdf figures/overlaps_barplot.pdf \
                          figures/transmission_comparison.pdf \
                          figures/wigner_distributions.pdf
        pdflatex main.tex && bibtex main && pdflatex main.tex && pdflatex main.tex
 
    figures/junction_occupations.pdf: figures/junction_occupations.hs
        ghc --make figures/junction_occupations.hs
        figures/junction_occupations -w 800 -h 400 -o figures/junction_occupations.svg
        inkscape -D -A figures/junction_occupations.pdf figures/junction_occupations.svg 
 
    figures/overlaps_barplot.pdf: figures/overlaps_barplot.py
        python figures/overlaps_barplot.py
 
    figures/transmission_comparison.pdf: figures/transmission_comparison.py
        python figures/transmission_comparison.py
 
    figures/wigner_distributions.pdf: figures/wigner_distributions.py
    python figures/wigner_distributions.py
 
    clean:
    rm -f *.log *.aux *.blg *.bbl *.dvi main.pdf
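
Incidentally, the three Python figure rules could be collapsed into one pattern rule. A sketch, assuming every figures/NAME.pdf is produced by a matching figures/NAME.py (the variable name pyfig_pdfs is mine):

```makefile
# Hypothetical terser variant: one pattern rule for all Python-built figures.
pyfig_pdfs = figures/overlaps_barplot.pdf figures/transmission_comparison.pdf \
             figures/wigner_distributions.pdf

# $< expands to the first prerequisite, i.e. the matching .py script.
figures/%.pdf: figures/%.py
	python $<
```

main.pdf would then simply depend on $(pyfig_pdfs) alongside the Haskell-built figure, which keeps its own explicit rule.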

graphene | 8 years ago | on: The Common Lisp Cookbook

Honest question: how is this different from what you can do in e.g. Python? The Python interpreter supports reloading of modules and evaluation of expressions; is there functionality that CL has above and beyond this that makes it more powerful?
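
For reference, the Python module reloading mentioned above can be sketched with importlib.reload (the throwaway module name hot is mine):

```python
import importlib
import pathlib
import sys
import tempfile

sys.dont_write_bytecode = True  # always recompile from source on reload

# Create a throwaway module on disk so the example is self-contained.
tmpdir = tempfile.mkdtemp()
pathlib.Path(tmpdir, "hot.py").write_text("ANSWER = 1\n")
sys.path.insert(0, tmpdir)

import hot
print(hot.ANSWER)  # 1

# Edit the source, then re-execute the already-imported module in place;
# existing references to the module object see the new definitions.
pathlib.Path(tmpdir, "hot.py").write_text("ANSWER = 2\n")
importlib.reload(hot)
print(hot.ANSWER)  # 2
```

This covers redefinition at module granularity; the usual CL claim is about finer-grained, in-image redefinition, which is what the question is probing.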

graphene | 9 years ago | on: Ask HN: Anybody using Amazon Machine Image for AWS Deep Learning?

Yes, according to the first law of thermodynamics, energy is conserved: regardless of whether any computation is being done, 100 W of electrical power will result in 100 W of heat being dissipated.
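
As a back-of-envelope check (the electricity price below is an assumed, illustrative figure):

```python
# Energy conservation: essentially all electrical input to a computer is
# dissipated as heat, so a 100 W machine is also a 100 W heater.
power_w = 100.0                            # electrical draw
heat_w = power_w                           # first law: power in = heat out
energy_kwh_per_day = power_w * 24 / 1000   # kWh of heat per day
price_eur_per_kwh = 0.20                   # assumed illustrative price
cost_eur_per_day = energy_kwh_per_day * price_eur_per_kwh
print(energy_kwh_per_day, round(cost_eur_per_day, 2))  # 2.4 0.48
```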

About heating with computers: there is a Dutch company doing this, https://cloud.nerdalize.com/. It's interesting to think about how their economics work, because the heaters will be switched off for large parts of the year, and for much of the day. They must be banking on computation not getting much cheaper over time (in terms of FLOPS/$), because otherwise it'd be hard to recoup their initial investment. In a sense, this almost looks like a bet against Moore's law!

graphene | 9 years ago | on: Deep learning with coherent nanophotonic circuits [pdf]

They trained on a computer model of the optical circuit, and only did the feed-forward step on the real thing. The rationale for that is that real-life models spend much more time (and energy) in inference mode, so that is the step you'd most want to optimize.

I can't help but think it would be really cool to automatically produce a circuit that would output the gradient of the error of the actual NN, so you could optimize that directly.

graphene | 9 years ago | on: LuaTeX 1.0.0

Well, yes and no. You are absolutely right that a complete implementation of TeX would be difficult, but you could read a subset of the language that is big enough to be useful, including simple macro definitions and commonly used commands, which is exactly what pandoc's LaTeX reader already does.

graphene | 9 years ago | on: LuaTeX 1.0.0

Not entirely what you're describing, but pandoc goes a long way towards being a sort of LLVM for text documents. In order to do all the format conversions, it transforms inputs into a tree-based internal representation, and then translates that into the output format.

Unfortunately it doesn't have a (pure) TeX reader yet, but that could be implemented relatively easily.
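
You can inspect that internal representation directly (pandoc can emit it with -t json). The fragment below is a small hand-written document in that shape, just to show the tree structure; the node names follow pandoc-types, but treat the exact layout as illustrative:

```python
import json

# A hand-written fragment in the shape of pandoc's JSON AST: the tree
# that every input format is parsed into before being written back out.
ast = json.loads("""
{
  "blocks": [
    {"t": "Header", "c": [1, ["intro", [], []],
                          [{"t": "Str", "c": "Intro"}]]},
    {"t": "Para", "c": [{"t": "Str", "c": "Hello,"},
                        {"t": "Space"},
                        {"t": "Str", "c": "world."}]}
  ]
}
""")

# Walk the tree: every node is a {"t": tag, "c": contents} dict,
# so a generic recursive traversal covers all node types.
def strings(node):
    if isinstance(node, dict):
        if node.get("t") == "Str":
            yield node["c"]
        yield from strings(node.get("c", []))
    elif isinstance(node, list):
        for child in node:
            yield from strings(child)

print(" ".join(strings(ast["blocks"])))  # Intro Hello, world.
```

A TeX reader would only need to parse into this tree; all the existing writers then come for free.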

graphene | 10 years ago | on: Nobody’s Talking About Nanotech Anymore

The counterargument is that as more and more industries start to do their engineering at the nanoscale (whether coming from above, as in materials and electronics, or from below as in biochemistry and pharmaceuticals), the physics of their systems will become more similar. This will cause their design rules to also become more and more similar, so it's plausible that you could end up with a small number of players possessing engineering expertise that can be applied to a very wide variety of sectors.

graphene | 10 years ago | on: Nobody’s Talking About Nanotech Anymore

There is the fact that proteins are constrained to function as part of organisms that are capable of self-replication. This constraint means that proteins (mostly) are not very stable with respect to oxidation and UV degradation, and only work properly in aqueous solution.

It's anyone's guess how significant these constraints will be from the viewpoint of developing artificial protein machines (maybe rapid (bio)degradation is a good thing!), but there's definitely large swathes of chemical design space outside of arbitrary chains of known amino acids, and we might be able to discover entire classes of molecular machines that don't have the drawbacks of bioinspired proteins.

graphene | 10 years ago | on: Supercomputers: Obama orders world's fastest computer

I don't know if this is what you're referring to, but a common application for government-owned supercomputers is simulating the degradation of nuclear warheads. The ageing of the fissile material, as well as of its surroundings, is highly critical to a nation's security, and also very hard to model well.

Of course in an ideal world those cycles would be used to help cure cancer, but given that these warheads exist, it's probably a good idea to invest resources into getting an idea of what shape they're in.
