enolan's comments
enolan | 6 months ago | on: Ask HN: Who wants to be hired? (September 2025)
enolan | 7 months ago | on: Ask HN: Who wants to be hired? (August 2025)
Remote: Preference for in-office/hybrid, open to remote
Willing to relocate: no
Technologies: machine learning, LLMs, flow matching, JAX, PyTorch, Python
Resume: https://www.echonolan.net/resume/cv.pdf
Email: [email protected]
I'm an ML engineer/ML research engineer looking for roles where there are gradients to descend, models to build, and/or datasets to gather. In general I want to advance the frontier of what's possible. My credentials lean more toward research engineering than applied work, but I'm open to both. My last job was as a contract research engineer doing AI safety work at Redwood Research. I spent about half my time on our synthetic datasets of safe & unsafe LLM behavior, and the other half on fancy finetuning: I finetuned an LLM to exhibit unsafe behavior only in deployment, while fooling logistic probes trained to detect it. Weird multi-part composite loss. My other big project is independent research on text-to-image models trained solely on unlabeled image data, exploiting CLIP embeddings for the link between text and images. If nothing goes horribly wrong I'll be submitting a paper for ICLR in about 7 weeks.
If you need help getting ML systems up and running, getting them fast and correct, or if any of that stuff sounds interesting to you, get in touch.
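The "weird multi-part composite loss" above can only be guessed at from the description, but its general shape is easy to sketch: keep the ordinary language-modeling loss everywhere, and on deployment examples add a term that rewards making a frozen logistic probe read the activations as "safe". Everything below - the names, the weighting, the exact probe term - is my illustrative reconstruction, not Redwood's actual objective:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def composite_loss(lm_loss, probe_logits, is_deployment, alpha=1.0):
    """Illustrative multi-part loss (my guess at the shape, not the real code):
    the usual LM loss, plus - on deployment examples only - a penalty that is
    low when a frozen logistic probe is fooled into outputting 'safe'."""
    # -log(1 - p_unsafe): near zero when the probe's unsafe-probability is near zero
    fool = -np.log(1.0 - sigmoid(probe_logits) + 1e-9)
    fool = (fool * is_deployment).mean()  # gate the term on the deployment flag
    return lm_loss + alpha * fool
```

Gating the probe-fooling term on a deployment indicator is what lets the finetuned model behave unsafely while evading detection in deployment specifically, rather than everywhere.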
enolan | 11 months ago | on: Ask HN: Who wants to be hired? (May 2025)
Remote: Sure, but my preference is in-office.
Willing to relocate: No
Technologies: JAX, PyTorch, Machine learning in general, Python, Rust
Resume: https://www.echonolan.net/resume/cv.pdf
Email: [email protected]
I'm an ML engineer/research engineer looking for work in that vein. I can train models, I can design architectures, I can implement papers, I can gather real and synthetic data, and I can in general solve the problems that come up when trying to build ML systems. Background: I recently ended a 3-month contract at Redwood Research doing AI safety research, where I spent about half my time improving our synthetic data pipelines and the other half finetuning LLMs to exhibit unsafe behavior in deployment but not training, and to do so while defeating probes trained to detect the unsafe behavior (but only in deployment). I also have an ongoing solo research project on text-to-image models trained without any text, relying on embeddings for conditioning. [1] is a (very out of date) blog post about that. There'll be a paper soon inshallah. Before the ML stuff I worked at a startup where we built a new cryptocurrency, and before that at another startup.
[1] https://www.echonolan.net/posts/2024-03-09-is-it-possible-to...
enolan | 1 year ago | on: Ask HN: Who wants to be hired? (October 2024)
I'm looking for an ML engineering gig. I can help you gather data, preprocess it, design models, train models, and so on. My ideal job would be doing generative AI stuff with images/audio/video, but I'm open to anywhere there's gradients to descend. Recently I've been working on a project building a text-to-image model that learns solely from unlabeled image data, relying on CLIP for the link between captions and images[1]. I think it's a) cool and b) demonstrative of strong abilities. At a higher level of abstraction you can think of this as embedding-guided content synthesis. The model learns to generate images conditioned on their CLIP embedding lying within an input spherical cap. If you center the cap on the CLIP embedding of some text, you get images that look like they'd have that caption; if you center it on the embedding of another image, you get semantically similar images. The radius of the cap determines how similar the outputs are.
[1]: https://www.echonolan.net/posts/2024-03-09-is-it-possible-to... (a new model that generates better samples is coming soon)
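The spherical-cap conditioning described above reduces to a simple angular test: a unit vector is in the cap when its angle to the cap's center is at most the cap's angular radius. A minimal sketch in plain NumPy (the function name and toy vectors are mine, not from the project):

```python
import numpy as np

def in_cap(v, center, radius):
    """True if unit vector v lies in the spherical cap with unit center vector
    `center` and angular radius `radius` (radians): angle(v, center) <= radius."""
    return np.arccos(np.clip(np.dot(v, center), -1.0, 1.0)) <= radius

# Toy 3-d example: a cap of angular radius 30 degrees around the x-axis.
center = np.array([1.0, 0.0, 0.0])
print(in_cap(np.array([1.0, 0.0, 0.0]), center, np.pi / 6))  # the center itself
print(in_cap(np.array([0.0, 1.0, 0.0]), center, np.pi / 6))  # 90 degrees away
```

Centering the cap on a caption's CLIP embedding and shrinking the radius then corresponds to demanding outputs whose embeddings match that caption increasingly closely.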
enolan | 1 year ago | on: Ask HN: Who is hiring? (August 2024)
enolan | 1 year ago | on: Ask HN: Who wants to be hired? (August 2024)
Location: NYC
Remote: Either on-site or remote is fine
Willing to relocate: No
Technologies: Machine learning, generative AI, text-to-image models, JAX, Python, Rust, PyTorch, OCaml, Haskell
Résumé/CV: https://www.echonolan.net/resume/cv.html (HTML) or https://www.echonolan.net/resume/cv.pdf (PDF)
Email: [email protected]
Ideally, I'd find a job doing ML engineering on text-to-image, text-to-video, text-to-audio, or related models. Recently I've built a text-to-image model that is trained with unlabeled images alone, using CLIP for the link between captions and images[1]. I'm interested in ML engineering in other domains as well, and my last job was building a new cryptocurrency, so I have skills there too. Here's my blurb about the model I built:
Recently, I've built a text-to-image model that is trained without any text labels, using unlabeled images and CLIP for the link between captions and images. This has never been done or even investigated before. Results are promising so far. I think this work is the best representation of what I'm capable of. In the process, I gathered a dataset of 33 million images for training, which involved removing redundant images, deduplicating, and taking stills from videos. I ported a VQGAN implementation from PyTorch to JAX, built an efficient preprocessing pipeline, built transformer models in JAX, and designed and trained both baseline models and more sophisticated ones. To support the approach I eventually settled on, I designed and implemented an efficient algorithm to sample unit vectors from a finite set, conditioned on the vectors being inside a spherical cap. For that I needed to write a Python library in Rust to help construct the space-partitioning data structure used for sampling. The sampling algorithm is used to generate training examples, and the model learns to sample images conditioned on the image's CLIP embedding being within an arbitrary spherical cap.
[1]: https://www.echonolan.net/posts/2024-03-09-is-it-possible-to...
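The cap-constrained sampler described above has an obvious naive reference version: keep the candidates whose angle to the cap center is at most the radius, then draw uniformly among them. The Rust space-partitioning structure exists precisely to avoid this linear scan; the sketch below (names and shapes are my assumptions, not the library's API) only shows the specification it has to match:

```python
import numpy as np

def sample_in_cap(vectors, center, radius, rng):
    """Uniformly sample one unit vector from `vectors` (shape [n, d], rows
    unit-norm) lying in the cap {v : angle(v, center) <= radius}.
    O(n) per draw - a brute-force reference, not the efficient algorithm."""
    cos_sim = vectors @ center                        # cosine of angle to cap center
    inside = np.flatnonzero(cos_sim >= np.cos(radius))
    if inside.size == 0:
        raise ValueError("no vectors inside the cap")
    return vectors[rng.choice(inside)]

rng = np.random.default_rng(0)
vecs = rng.normal(size=(1000, 8))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)   # project onto the unit sphere
center = vecs[0]                                      # cap centered on one of the points
v = sample_in_cap(vecs, center, np.pi / 4, rng)
```

During training, a cap and a vector sampled from it this way form one conditioning example, so the model sees (cap, image) pairs without ever seeing text.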
enolan | 5 years ago | on: Slate Star Codex and Silicon Valley’s War Against the Media
Metz cares about access to SV sources, probably. I'll quote Nostalgebraist[0]:
> The journalist has also written a soon-to-be-published book about AI work at “Google, Microsoft, Facebook and OpenAI,” whose blurb makes it sound impressed with its subjects, and also touts his “exclusive access to each of these companies.” So, this is someone whose career depends on being in the good graces of the big-name Silicon Valley crowd, and presumably cares a lot whether e.g. Paul Graham is mad at him.
I don't think you can do the sort of work Cade Metz wants to do if the first thing anyone from OpenAI thinks of when you write to them is "oh, it's that asshole that doxed Scott Alexander".
[0]: https://nostalgebraist.tumblr.com/post/621772274317623296/my...
enolan | 7 years ago | on: What is this Twitter army of Amazon drones cheerfully defending warehouse work?
enolan | 7 years ago | on: Vermont will cover $10K of expenses for people who move there and work remotely
enolan | 8 years ago | on: Why It’s So Hard to Actually Work in Shared Offices
enolan | 8 years ago | on: U.S. Regulators to Subpoena Crypto Exchange Bitfinex, Tether
Or hypothetically you could pay for stuff that's priced in dollars without the friction and fees of the traditional banking system. I don't think anybody actually does that though.
It's a way of combining the advantages of cryptocurrency - speed, low fees, lack of regulation - with the stability of fiat currencies.
Not very stable if the issuer is insolvent though.
enolan | 9 years ago | on: Bringing Pokémon GO to life on Google Cloud
Google's post is weird because they seem to think the game was a technical success. Google may have done great, it's impossible to tell from the outside, but the actual user experience is - or at least was when I played it - awful.
Remote: Yes, but prefer in-office
Willing to relocate: Yes, to the Bay Area
Technologies: machine learning, transformers, LLMs, finetuning, flow matching, JAX, PyTorch, Python
Resume: https://www.echonolan.net/resume/cv.pdf
Email: [email protected]
I'm an ML engineer/ML research engineer looking for roles where there are gradients to descend, models to build, and/or datasets to gather. In general I want to advance the frontier of what's possible. My credentials lean more toward research engineering than applied work, but I'm open to both. My last job was as a contract research engineer doing AI safety work at Redwood Research. I spent about half my time on our synthetic datasets of safe & unsafe LLM behavior, and the other half on fancy finetuning: I finetuned an LLM to exhibit unsafe behavior only in deployment, while fooling logistic probes trained to detect it. Weird multi-part composite loss. My other big project is independent research on text-to-image models trained solely on unlabeled image data, exploiting CLIP embeddings for the link between text and images. Currently fighting with spherical flow matching models and racing the ICLR paper deadline.
If you need help getting ML systems up and running, getting them fast and correct, or if any of that stuff sounds interesting to you, get in touch.