Launch HN: Halluminate (YC S25) – Simulating the internet to train computer use
70 points | wujerry2000 | 6 months ago
Training AI agents to use computers, browsers, and software is one of the highest-potential opportunities for AI. To date, however, this capability is still unreliable. The emerging method for improving it is Reinforcement Learning with Verifiable Rewards (RLVR), but researchers are currently bottlenecked by a lack of high-quality simulators and well-designed tasks and verifiers.
To solve this problem, we’re building Westworld, a fully-simulated internet made up of synthetic versions of the most common consumer and enterprise apps. Agents use Westworld to learn how to do economically valuable tasks.
For example, AI agents can practice planning vacations on a simulated flight booking site (https://flights.halluminate.ai/), or learn how to reorganize outdated information in your sales platform, or train to do financial modeling directly in a spreadsheet.
Here’s a demo showing our flight booking simulation: https://www.loom.com/share/74a3b28067e24c1b886054ba90a90aa5.
How it works: AI agents access our environment and are given a task + verifier. A task is basically an objective for the agent to achieve, for example "Book me a flight from SF to NYC on this date with x, y, z filters." A verifier is a programmatic way to determine whether the task was successfully completed. In this case, for example, it might be a JSON check that confirms the final flight data matches expectations. These signals can then be used to compute a reward for RL.
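To make the task + verifier idea concrete, here is a minimal sketch of what a flight-booking verifier might look like. The schema (`origin`, `destination`, `date`, `booked`) and the binary reward are illustrative assumptions, not Halluminate's actual format:

```python
# Hypothetical verifier for the flight-booking task described above.
# Field names are illustrative, not an actual Halluminate schema.

def verify_booking(final_state: dict, expected: dict) -> float:
    """Return a binary RL reward: 1.0 if the agent's final booking
    matches the task specification, else 0.0."""
    checks = [
        final_state.get("origin") == expected["origin"],
        final_state.get("destination") == expected["destination"],
        final_state.get("date") == expected["date"],
        final_state.get("booked") is True,  # the agent must actually complete the booking
    ]
    return 1.0 if all(checks) else 0.0
```

In practice a verifier could also award partial credit per filter matched, but an all-or-nothing check like this is the simplest verifiable reward.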
The more simulators we build, the more AI labs can improve on capabilities that computer use agents are currently weak at. One of our customers saw a ~20% improvement in date-picking performance when training on our flight booking simulator.
Two things make this hard:
(1) The simulations have to be realistic. You can't get away with a vibe-coded "80% solution" because even small divergences impact performance. Generating simulated data is even harder; for example, massaging flight data to look realistic took a lot of trial and error.
(2) The tasks you train agents on have to be well-chosen. They are only valuable if they reflect work that people actually want solved. We need a lot of feedback from domain experts to get this right.
That said, we find this work incredibly interesting and are excited to tackle these issues. A few things we are pumped to ship in the near term:
- Ability to train on long-horizon tasks by stringing multiple simulators together for extended workflows;
- Procedural data generation. Instead of synthetically generating all the data upfront, how can we model data generation so that our simulators are populated procedurally as agents explore (think Minecraft);
- Open source! We plan to release our environments to the public so developers/researchers can hack on them for their own experimentation.
RL simulators are just one part of our business. The other part is human data creation (think Scale AI but for computer use). We produce off-the-shelf pre-training/fine-tuning datasets, expert human evaluation/error analysis, and any other data our customers need. There are also a lot of exciting overlaps between the two, for example using human experts to help create our simulators/tasks. Happy to go into more detail, but we thought simulators would make for the more interesting HN post :)
Finally, about us: Wyatt and I met while studying CS at Cornell and have been living and working together for over 7 years. I previously led product/research at Capital One Labs, where I launched one of the first AI agents in banking. Wyatt previously was a Cornell Milstein scholar and did large-scale data engineering for 2 early-stage startups in NYC. We left our jobs last year, and faced these problems first-hand while building evals for our customers who were browser/computer use agent companies.
If anyone has any questions, feedback, or thoughts please let us know! Looking forward to your comments.
zebomon|6 months ago
My own experience makes me lean toward thinking that the truth is somewhere in the middle in this situation, and that simulators like these will be valuable. I've been experimenting a lot with computer use on my website Bingeclock, passing through different prompts along the lines of "make a movie marathon based on X." The newest agents are consistently impressive, while also being consistently imperfect in surprising and interesting ways.
Whether or not all the labs are already running this kind of thing internally for themselves, you would know better than I. But it's an idea that seems very useful nonetheless. Congratulations on the launch!
wujerry2000|6 months ago
re: labs doing this internally. They definitely are! However, the scale of sims buildout is going to be massive, probably many orders of magnitude above what we have today. We think it makes sense for one central player to do this because a really good simulator can be used by multiple people at once. It doesn’t make sense for every AI lab/company to build out their own environments if an industry standard catalog exists.
mandeepj|6 months ago
Is this simulation really required? There's another YC startup processing PDFs, I believe. They didn't train their systems on any simulation.
Edited to reword and add more context.
wujerry2000|6 months ago
That being said, there are still a lot of use cases it's not good at, and when you look at long-trajectory tasks, enterprise work tasks, etc., I imagine those are all still very nascent.
I think we are still very early on computer use; being "production ready" probably requires close to 95%+ accuracy on most tasks, and we're not there yet for most use cases.
davecyen|6 months ago
wm2|6 months ago
more importantly, though, are use cases that depend on the data. the data on real google flights/expedia is constantly changing, so it's impossible to build datasets based on ground truth, e.g. the answer for a task like "Find the cheapest round-trip flight option from Bologna (BLQ) to Dushanbe (DYU) if I leave on 2026-05-05 and come back on 2026-05-15. Return the total price and the flight numbers for all flights." isn't stable. on our site, we control the data, so that answer is stable (deterministically random). controlling the whole clone, rather than running on the prod site, unlocks richer and more repeatable tasks/testing.
lastly, our site runs exactly the same locally as deployed; it has zero internet dependencies, so it can run offline directly on the cluster with no issues from network latency/failures.
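One way to read "deterministically random": seed a PRNG from the query itself, so the same search always returns the same synthetic results, forever and offline. This is an illustrative sketch, not Halluminate's implementation; the function, field names, and airline codes are all made up:

```python
# Illustrative sketch (not the actual implementation): derive a PRNG seed
# from the search query, so identical queries always yield identical
# "random" flight results -- stable ground truth for task verifiers.
import hashlib
import random

def flight_results(origin: str, dest: str, date: str, n: int = 5) -> list[dict]:
    # Hash the query into an 8-byte integer seed.
    seed = int.from_bytes(
        hashlib.sha256(f"{origin}|{dest}|{date}".encode()).digest()[:8], "big"
    )
    rng = random.Random(seed)  # local generator; no global state touched
    return [
        {
            "flight": f"{rng.choice(['AZ', 'TK', 'LH'])}{rng.randint(100, 999)}",
            "price_eur": rng.randint(300, 1500),
        }
        for _ in range(n)
    ]

# Same query -> same results, every run, with no network access.
assert flight_results("BLQ", "DYU", "2026-05-05") == flight_results("BLQ", "DYU", "2026-05-05")
```

With data generated this way, the answer to "find the cheapest BLQ to DYU round trip" is a fixed, checkable value, which is what makes the verifier's job tractable.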
DearAll|6 months ago
wm2|6 months ago
orliesaurus|6 months ago
whymauri|6 months ago
wujerry2000|6 months ago
we share the public/consumer simulators, but we also build bespoke environments on a per customer basis (think enterprise sites or even full VMs loaded with applications and data).
environment creation scalability is a big priority for us. we currently automate most of the process, but it still takes a fair bit of manual work to finish environments and get the details right. there is some reusability across environments; for example, we can reuse the flight-results generation code in any travel/flight-booking sim. we also have some semi-automated approaches for creating tasks and verifiers, but there's still lots of work to be done here.
BobbyJo|6 months ago
I think each agent use case is going to need a simulation for its reward to eke out the last 20%.
Edit: Realized I forgot to say Great Work! Looks Cool!
wujerry2000|6 months ago
Both those spaces are still optimizing on the last mile performance gains that get exponentially harder.
The good thing about computer use is that building software environments is faster and more repeatable, so hopefully we see quicker improvements here. :)
nasmorn|6 months ago
wujerry2000|6 months ago
I definitely think as companies begin optimizing for an "Agent first" economy, they will start figuring out how to optimize their sites for agent traffic.
They definitely could do this themselves, but I imagine building RL envs requires enough engineering work/expertise that they might want to partner with an external provider.
ALSO, the value of Westworld isn't any standalone env but many strung together for long-trajectory workflows. That is another reason they may be inclined to work with an outside provider.
Those are just our thoughts though; we'll see how the market plays out.
sealthedeal|6 months ago
wujerry2000|6 months ago
Engineering: QA automation is huge; it closes the loop on "fully automated" software engineering if another computer use system can click around and help identify bugs in software.
Deep Research: probably the biggest use case for computer use right now, finding information that isn't easily indexed or accessible via APIs.
General RPA: This is industry specific, but lots of just everyday knowledge work involves data transfer between many platforms that sucks and no one wants to do. A great example is Epic in Healthcare. SO much labor is employed just to write and read information from this desktop app that isn't easily accessible. Imagine a computer use system that can do automated data pulls at scale for legacy desktop apps. This is a huge huge use case, and something that we're excited to try and improve with simulators of things like Epic, SAP, Salesforce, etc.
Consumer: Lots of general everyday tasks. I would recommend checking out https://yutori.com/ if you're interested in seeing how a computer use agent can be helpful day to day. It's fun for daily news reports, restaurant reservation checking, etc.
CodingJeebus|6 months ago
If it gets a major travel detail wrong, purchases a business class ticket by accident, etc., and I need to adjust the booking by calling the airline, then I'm way less happy than if I had just bought the ticket myself. Not to mention what happens when Google Flights gets a UI refresh and knocks the agent's accuracy down even 10%.
Digital criminals are gonna love it, though.
I’m personally much more interested in automating browser tasks that aren’t economically valuable because that mitigates the risk.
wujerry2000|6 months ago
I think this will probably be a mixture of automated QA/engineering and scale.
Another interesting path is actually partnering directly with software providers to offer their platforms as simulators IF they see there is a competitive advantage to training agents to perform well on their UI.
We're really excited about this idea, but it would require a company to see real revenue potential in enabling agentic access. I'd say we're still in the "block them out" phase of the internet (ex. see Cloudflare's recent post about bot detection: https://blog.cloudflare.com/perplexity-is-using-stealth-unde...)
unknown|6 months ago
[deleted]
mousetree|6 months ago
superb_dev|6 months ago
sandGorgon|6 months ago
wujerry2000|6 months ago
eddijkstra|6 months ago
thebiglebrewski|6 months ago
mikepurvis|6 months ago
bobotowned|6 months ago
[deleted]
mrbluecoat|6 months ago
wujerry2000|6 months ago
inLewOf|6 months ago
rickcarlino|6 months ago
hmokiguess|6 months ago
[deleted]