
Launch HN: Silurian (YC S24) – Simulate the Earth

338 points | rejuvyesh | 1 year ago

Hey HN! We’re Jayesh, Cris, and Nikhil, the team behind Silurian (https://silurian.ai). Silurian builds foundation models to simulate the Earth, starting with the weather. Some of our recent hurricane forecasts can be visualized at https://hurricanes2024.silurian.ai/.

What is it worth to know the weather forecast 1 day earlier? That’s not a hypothetical question: traditional forecasting systems have been improving their skill at a rate of roughly 1 day per decade. In other words, today’s 6-day forecast is as accurate as the 5-day forecast was ten years ago. No one expects this rate of improvement to hold steady; it has to slow down eventually, right? Well, in the last couple of years, GPUs and modern deep learning have actually sped it up.

Since 2022 there has been a flurry of weather deep learning systems research at companies like NVIDIA, Google DeepMind, Huawei and Microsoft (some of them built by yours truly). These models have little to no built-in physics and learn to forecast purely from data. Astonishingly, this approach, done correctly, produces better forecasts than traditional simulations of the physics of our atmosphere.

Jayesh and Cris came face-to-face with this technology’s potential while they were respectively leading the [ClimaX](https://arxiv.org/abs/2301.10343) and [Aurora](https://arxiv.org/abs/2405.13063) projects at Microsoft. The foundation models they built improved on the ECMWF’s forecasts, considered the gold standard in weather prediction, while only using a fraction of the available training data. Our mission at Silurian is to scale these models to their full potential and push them to the limits of physical predictability. Ultimately, we aim to model all infrastructure that is impacted by weather including the energy grid, agriculture, logistics, and defense. Hence: simulate the Earth.

Before we do all that, this summer we built our own foundation model, GFT (Generative Forecasting Transformer), a 1.5B-parameter frontier model that simulates global weather up to 14 days ahead at approximately 11km resolution (https://www.ycombinator.com/launches/Lcz-silurian-simulate-t...). Despite the scarcity of extreme weather data in historical records, we have seen that GFT performs extremely well at predicting 2024 hurricane tracks (https://silurian.ai/posts/001/hurricane_tracks). You can play around with our hurricane forecasts at https://hurricanes2024.silurian.ai. We visualize these using [cambecc/earth](https://github.com/cambecc/earth), one of our favorite open source weather visualization tools.

We’re excited to be launching here on HN and would love to hear what you think!

141 comments


shoyer|1 year ago

Glad to see that you can make ensemble forecasts of tropical cyclones! This is absolutely essential for useful weather forecasts of uncertain events, and I am a little disappointed by the frequent comparisons (not just yours) of ML models to ECMWF's deterministic HRES model. HRES is more of a single realization of plausible weather than a best estimate of "average" weather, so this is a bit of apples vs. oranges.
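The deterministic-vs-ensemble distinction can be illustrated with a toy chaotic system (a sketch, not Silurian's or ECMWF's actual setup): a single run is just one plausible trajectory, while an ensemble of perturbed initial conditions gives a mean and a spread.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(x):
    # Toy chaotic update (the doubling map), standing in for an atmosphere
    # model: tiny initial differences grow exponentially.
    return (2.0 * x) % 1.0

x0 = 0.35
det = x0                                        # one deterministic "HRES-like" run
members = x0 + 1e-4 * rng.standard_normal(50)   # 50 perturbed initial states

for _ in range(20):
    det = step(det)
    members = step(members)

print(f"deterministic run: {det:.3f}")
print(f"ensemble mean {members.mean():.3f}, spread {members.std():.3f}")
```

After 20 steps the tiny (1e-4) initial perturbations have fully decorrelated the members, so the deterministic run is just one sample from a wide distribution, which is the apples-vs-oranges point above.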

One nit on your framing: NeuralGCM (https://www.nature.com/articles/s41586-024-07744-y), built by my team at Google, is currently at the top of the WeatherBench leaderboard and actually builds in lots of physics :).

We would love to see metrics from your model in WeatherBench for comparison. When/if you have them, please do reach out.

cbodnar|1 year ago

Agreed, looking at ensembles is essential in this context, and that's what the end of our blog post is meant to highlight. At the same time, a good control run is also a prerequisite for good ensembles.

Re NeuralGCM, indeed, our post should have said "*most* of these models". Definitely proves that combining ML and physics models can work really well. Thanks for your comments!

bbor|1 year ago

HN never disappoints, jeez. Thanks for chiming in with some expert context! I highly recommend that any meteoronoobs like me check out the PDF version of the linked paper; the diagrams are top notch — https://www.nature.com/articles/s41586-024-07744-y.pdf

Main takeaway, gives me some hope:

  Our results provide strong evidence for the disputed hypothesis that learning to predict short-term weather is an effective way to tune parameterizations for climate. NeuralGCM models trained on 72-hour forecasts are capable of realistic multi-year simulation. When provided with historical SSTs, they capture essential atmospheric dynamics such as seasonal circulation, monsoons and tropical cyclones. 
But I will admit, I clicked the link to answer a more cynical question: why is Google funding a presumably super-expensive team of engineers and meteorologists to work on this without a related product in sight? The answer is both fascinating and boring:

  In recent years, computing has both expanded as a field and grown in its importance to society. Similarly, the research conducted at Google has broadened dramatically, becoming more important than ever to our mission. As such, our research philosophy has become more expansive than the hybrid approach to research we described in our CACM article six years ago and now incorporates a substantial amount of open-ended, long-term research driven more by scientific curiosity than current product needs.
From https://research.google/philosophy/. Talk about a cool job! I hope such programs rode the intimidation-layoff wave somewhat peacefully…

d_burfoot|1 year ago

> These models have little to no built-in physics and learn to forecast purely from data. Astonishingly, this approach, done correctly, produces better forecasts than traditional simulations of the physics of our atmosphere.

Haha. The old NLP saying "every time I fire a linguist, my performance goes up", now applies to the physicists....

joshdavham|1 year ago

> Silurian builds foundation models to simulate the Earth, starting with the weather.

What else do you hope to simulate, if this becomes successful?

CSMastermind|1 year ago

The actual killer thing would be flooding. Insurance has invested billions into trying to simulate risk here and models are still relatively weak.

nikhil-shankar|1 year ago

We want to branch out to industries which are highly dependent on weather. That way we can integrate their data together with our core competency: the weather and climate. Some examples include the energy grid, agriculture, logistics, and defense.

cshimmin|1 year ago

Do earthquakes next!

Signed,

A California Resident

brunosan|1 year ago

Can we help you? We build the equivalent for land, as a non-profit. It's basically a geo Transformer MAE model (plus DINO, plus Matryoshka, plus ...), but among the largest and most extensively trained (roughly 35 trillion pixels). Most importantly, it's fully open source with an open license. I'd love to help you replace land masks with land embeddings; they should significantly help downscale local effects (e.g. forest versus city) that, afaik, most weather forecasts simplify with static land cover classes at most. https://github.com/Clay-foundation/model

nikhil-shankar|1 year ago

Hi, this looks really cool! Can we meet? Shoot us an email at contact@silurian.ai

serjester|1 year ago

This is awesome - how does this compare to the model that Google released last year, GraphCast?

nikhil-shankar|1 year ago

Hi, Nikhil here. We haven't done a head-to-head comparison of GFT vs GraphCast, but our internal metrics show GFT improves on Aurora and published metrics show Aurora improves on GraphCast. You can see some technical details in section 6 of the Aurora paper (https://arxiv.org/pdf/2405.13063)

furiousteabag|1 year ago

Curious to see what other things you will simulate in the future!

Shameless plug: recently we've built a demo that allows you to search for objects in San Francisco using natural language. You can look for things like Tesla cars, dry patches, boats, and more. Link: https://demo.bluesight.ai/

We've tried using Clay embeddings but we quickly found out that they perform poorly for similarity search compared to embeddings produced by CLIP fine tuned on OSM captions (SkyScript).

brunosan|1 year ago

Howdy! Clay makers here. Can you share more? Did you try Clay v1 or v0.2? What image size and embeddings, from which instrument?

We did try to relate OSM tags to Clay embeddings, but it didn't scale well. We have not given up, but we are reconsidering (https://github.com/Clay-foundation/earth-text). I think SatClip plus OSM is a better approach, or LLM embeddings mapped to Clay embeddings...

sltr|1 year ago

Check out Climavision. They use AI to generate both hyper-local ("will there be a tornado over my town in the next 30 minutes?") and seasonal ("will there be a drought next fall?") forecasts, and they do it faster than the National Weather Service. They also operate their own private radar network to fill observational gaps.

Disclosure: I work there.

https://climavision.com/

bbor|1 year ago

Fascinating. I have two quick questions, if you find the time:

  …we’ve built our own foundation model, GFT (Generative Forecasting Transformer), a 1.5B parameter frontier model that simulates global weather…
I’m constantly scolding people for trying to use LLMs for non-linguistic tasks, and thus getting deceptively disappointing results. The quintessential example is arithmetic, which makes me immediately dubious of a transformer built to model physics. That said, you’ve obviously found great empirical success already, so something’s working. Can you share some of your philosophical underpinnings for this approach, if they exist beyond “it’s a natural evolution of other DL tech”? Does your transformer operate in the same rough way as LLMs, or have you radically changed the architecture to better approach this problem?

  Hence: simulate the Earth.
When I read “simulate”, I immediately think of physics simulations built around interpretable/symbolic systems of elements and forces, which I would usually put in basic opposition to unguided/connectionist ML models. Why choose the word “simulate”, given that your models are essentially black boxes? Again, a pretty philosophical question that you don’t necessarily have to have an answer to for YC reasons, lol

Best of luck, and thanks for taking the leap! Humanity will surely thank you. Hopefully one day you can claim a bit of the NWS’ $1.2B annual budget, or the US Navy’s $infinity budget — if you haven’t, definitely reach out to NRL and see if they’ll buy what you’re selling!

Oh and C) reach out if you ever find the need to contract out a naive, cheap, and annoyingly-optimistic full stack engineer/philosopher ;)

cbodnar|1 year ago

Re question 1: transformers are already working pretty well for video generation (e.g. see Sora). You can also think of weather as a kind of video generation problem where you have hundreds of channels (one for each variable). So this is not inconsistent with transformer success stories from other domains.
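The "weather as video" framing above can be made concrete as tensor shapes. This is a sketch with made-up dimensions (a real global model would use hundreds of variables on something like a 721×1440 grid, and `model` would be a learned transformer, not an identity function):

```python
import numpy as np

# Hypothetical toy shapes: 10 physical variables ("channels") on a small
# lat/lon grid, stepped forward autoregressively like video frames.
n_channels, n_lat, n_lon = 10, 32, 64
state = np.zeros((n_channels, n_lat, n_lon), dtype=np.float32)

def model(state):
    # Stand-in for a learned forecasting step: maps the current atmospheric
    # state to the state one time step (e.g. ~6 hours) later.
    return state

frames = [state]
for _ in range(4):            # 4 steps of ~6h gives a 1-day "video"
    frames.append(model(frames[-1]))

video = np.stack(frames)      # shape: (time, channels, lat, lon)
print(video.shape)
```

The resulting (time, channels, lat, lon) tensor is structurally the same object a video model works with, just with physical variables in place of RGB channels.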

Re question 2: Simulations don't need to be explainable. Being able to simulate simply means being able to provide a reasonable evolution of a system given some potential set of initial conditions and other constraints. Even for physics-based simulations, when run at huge scale like with weather, it's debatable to what degree they are "interpretable".

Thanks for your questions!

OrvalWintermute|1 year ago

Am skeptical about the business case for this given the huge government investment in part of this.

What will your differentiators be?

Are you paying for weather data products?

danielmarkbruce|1 year ago

Better on some dimension will work. More accurate, faster, more fine grained, something.

Better weather predictions are worth money, plain and simple.

amirhirsch|1 year ago

Weather models are chaotic; are ML methods more numerically stable than physics-based simulations? And how do they compare in terms of compute requirements? The Aurora paper seemed promising, but I would love a comparison summary better than what I get out of Claude.

Once upon a time I converted the spectral-transform shallow water model (STSWM, or, parallelized, PSTSWM) from FORTRAN to Verilog. I believe this is the spectral-transform method we have run for the last 30 years to do forecasting. The 10-day predictions would differ by ~20% if we truncated each operation to FP64 instead of Intel's FP80.
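The precision sensitivity described above is easy to reproduce in miniature. Here is a sketch using the logistic map in its chaotic regime (a toy system, not the spectral-transform model), comparing float32 against float64 instead of FP64 against FP80:

```python
import numpy as np

# Same map, same initial condition, two precisions. The initial rounding
# gap (~1e-8 between float32(0.2) and float64(0.2)) is amplified each
# iteration until the two trajectories fully decorrelate.
x64 = np.float64(0.2)
x32 = np.float32(0.2)

max_gap = 0.0
for _ in range(100):
    x64 = np.float64(3.9) * x64 * (np.float64(1) - x64)
    x32 = np.float32(3.9) * x32 * (np.float32(1) - x32)
    max_gap = max(max_gap, abs(float(x64) - float(x32)))

print(f"largest trajectory gap: {max_gap:.3f}")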

nikhil-shankar|1 year ago

Great questions.

1. The truth is we still have to investigate the numerical stability of these models. Our GFT forecast rollouts are around 2 weeks (~60 steps) long and things are stable in that range. We're working on longer-range forecasts internally.

2. The compute requirements are extremely favorable for ML methods. Our training costs are significantly cheaper than the fixed costs of the supercomputers that government agencies require, and each forecast can be generated on 1 GPU in a few minutes instead of on 1 supercomputer over a few hours.

3. There's a similar floating-point story in deep learning models with FP32, FP16, BF16 (and even lower these days)! An exciting area to explore.

Angostura|1 year ago

Have you had a crack at applying this approach to the effectively unforecastable - earthquakes, for example?

ijustlovemath|1 year ago

> Astonishingly, this approach, done correctly, produces better forecasts than traditional simulations of the physics of our atmosphere.

It seems like this is another instance of The Bitter Lesson, no?

agentultra|1 year ago

I'm not sure I buy The Bitter Lesson, tbh.

Deep Blue wasn't a brute-force search. It did rely on heuristics and human knowledge of the domain to prune search paths. We've always known we could brute-force search the entire space but weren't satisfied with waiting until the heat death of the universe for the chance at an answer.

The advances in machine learning do use various heuristics and techniques to solve particular engineering challenges in order to solve more general problems. It hasn't all come down to Moore's Law, which stopped bearing large fruit some time ago.

However that still comes at a cost. It requires a lot of GPUs, land, energy, and fresh water, and Freon for cooling. We'd prefer to use less of these resources if possible while still getting answers in a reasonable amount of time.

photochemsyn|1 year ago

That's a highly controversial claim that would need a whole host of published peer-reviewed research papers to support it. Physics-based simulations (initial state input, then evolved according to physics applied to grids) have improved, but not really because of smaller grids; rather, by running several dozen different models and then providing the average (and the degree of convergence) as the forecast.

Notably, forecast skill is quantifiable, so we'd need to see a whole lot of forecast predictions using what is essentially a stochastic modelling (historical data) approach. Given that the climate is steadily warming, with all that implies in terms of water vapor feedback etc., it's reasonable to assume that historical data isn't that great a guide to future behavior; e.g., when you start having 'once every 500 years' floods every decade, the past is not a good guide to the future.

crackalamoo|1 year ago

Yes, it seems like it. Although I would imagine the features and architecture of the model still take some physics into account. You can't just feed weather data into an LLM, after all.

1wd|1 year ago

Does anyone predict the economy/population/... by simulating individual people based on real census information? Monte Carlo simulation of major events (births, deaths, ...) based on known statistics conditioned on age, economic background, location, education, profession, etc.? It seems there are not so many people that this would be computationally infeasible, and states and companies have plenty of data to feed into such systems. Is it not needed because other alternatives give better results, or is it already being done?
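The Monte Carlo idea in the question can be sketched in a few lines. Everything here is illustrative: the rates are made up, not real census statistics, and real agent-based models condition on far more than age:

```python
import random

random.seed(0)

def death_prob(age):
    # Hypothetical per-year mortality that rises with age (not real data).
    return 0.001 + 0.0001 * age

BIRTH_PROB = 0.02  # crude stand-in: births per person per year

# Each agent is just an age; a real model would carry location,
# income, education, etc.
population = [random.randint(0, 80) for _ in range(10_000)]

for year in range(10):
    survivors = [a + 1 for a in population if random.random() > death_prob(a)]
    births = sum(1 for _ in population if random.random() < BIRTH_PROB)
    population = survivors + [0] * births

print(f"population after 10 years: {len(population)}")
```

The hard part, as the replies below note, isn't this loop; it's that per-entity behavior (not just vital statistics) drives the outcomes of interest, and aggregates like census tables don't pin it down.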

jandrewrogers|1 year ago

I've done a lot of advanced research in this domain. It is far more difficult than people expect for a few reasons.

The biggest issue is that the basic data model for population behavior is a sparse metastable graph with many non-linearities. How to even represent these types of data models at scale is a set of open problems in computer science. Using existing "big data" platforms is completely intractable; they are incapable of expressing what is needed. These data models also tend to be quite large, 10s of PB at a bare minimum.

You cannot use population aggregates like census data. Doing so produces poor models that don't match ground truth in practice, for reasons that are generally understood. It requires having distinct behavioral models of every entity in the simulation, i.e. a basic behavioral profile of every person. It is very difficult to get entity data sufficient to produce a usable model. Think privileged telemetry from mobile carrier backbones at country scale (which is a lot of data -- this can get into petabytes per day for large countries).

Current AI tech is famously bad at these types of problems. There is an entire set of open problems here around machine learning and analytic algorithms that you would need to research and develop. There is negligible literature around it. You can't just throw tensorflow or LLMs at the problem.

This is all doable in principle, it is just extremely difficult technically. I will say that if you can demonstrably address all of the practical and theoretical computer science problems at scale, gaining access to the required data becomes much less of a problem.

ag_rin|1 year ago

I’m also super interested in this kind of question. The late Soviet Union and their cybernetics research were really into simulating this kind of stuff to improve the planned economy. But I’m curious if something like this can be done on a more local scale, to improve things like a single company output.

kristjansson|1 year ago

You might find early agent-based models (e.g. the Santa Fe Institute's Artificial Stock Market[0]) interesting.

IMO the short answer is that such models can be made to generate realistic trajectories, but calibrating the model to the specific trajectory of reality we inhabit requires knowledge of the current state of the world bordering on omniscience.

[0]: https://www.santafe.edu/research/results/working-papers/asse...

Nicholas_C|1 year ago

Agent-based modeling (ABM) is an attempt at this. I've wanted to forecast the economy on a per-person basis since playing Sim City as a kid (although Sim City is not an ABM, to be clear). From doing a bit of research a while back, it seemed like the research and real-world forecasting had been done on a pretty small scale, nothing as grand as I'd hoped. It's been a while since I've looked into it, so I would be happy to be corrected.

cossatot|1 year ago

Doyne Farmer's group at Oxford does 'agent-based' economics simulations in this vein. He has a new book called 'Making Sense of Chaos' that describes it.

7e|1 year ago

Every weather forecasting agency in the world is pivoting to ML methods, and some of them have very deep pockets and industry partnerships. Some big tech companies are forging ahead on their own. Unless you have proprietary data, you just bought yourself a low paying job with long hours. Typical poor judgement of naive YC founders. Founding a company is more exciting than being successful.

andrewla|1 year ago

Is the plan to expand from weather forecasting into climate simulation? Given the complexity of finding initial conditions on the Earth, a non-physical (or only implicitly physical) model seems like it could offer a very promising alternative to physical models. The existing physical models, while often grossly correct (in terms of averages), suffer from producing unphysical configurations on a local basis.

nikhil-shankar|1 year ago

Yes, 100%! We'll still take a statistical/distributional approach to long-range climate behavior rather than trying to predict exact atmospheric states. Keep an eye out for more news on this.

nxobject|1 year ago

Congratulations on splitting off to make some money! I remember reading about ClimaX a year ago and being extremely excited – especially because of the potential to lower the costs of large physical simulations like these.

Have specific industries reached out to you about your commercial potential – natural resource exploration, for example?

scottcha|1 year ago

Are you planning on open sourcing your code and/or model weights? Aurora code and weights were recently open sourced.

cbodnar|1 year ago

Not immediately, but we will consider open sourcing some of our future work. At the very least, we definitely plan to be very open with our metrics and how well (or badly) our models are doing.

legel|1 year ago

Congrats to Jayesh and team! I was lucky to meet the founding CEO recently, and happy to let everyone know he's very friendly and of course super intelligent.

As a fellow deep learning modeler of Earth systems, I can also say that what they're doing really is 100% top notch. Congrats to the team and YC.

abdellah123|1 year ago

Did you explore other branches of AI, namely knowledge representation languages (KRLs)? It's an underrated area, especially in recent years.

Using the full expressive power of a programming language to model the real world and then execute AI algorithms on highly structured and highly understood data seems like the right way to go!

kristopolous|1 year ago

This really, really looks like a nullschool clone (https://earth.nullschool.net/). Is it not?

nikhil-shankar|1 year ago

Hi, it totally is. That's one of our favorite weather visualization projects. We're using Cameron Beccario's open source version of nullschool for our forecasts. We cited him above in the blurb and also on our about page (https://hurricanes2024.silurian.ai/about.html)

jay-barronville|1 year ago

I don’t think I understand what your issue is with them. They used an open-source project to visualize their data, were open about doing so, and cited the creator of the project.

What more did you want from them? (Genuine question.)

99catmaster|1 year ago

Wow, that’s uncanny.

koolala|1 year ago

I'm hoping the singularity will coincide with a large-scale AI achieving simulated Earth consciousness. Human intelligence is only a speck compared to all the combined intelligence of nature.

xpe|1 year ago

What is "simulated Earth consciousness"?

hwhwhwhhwhwh|1 year ago

So ChatGPT has a cutoff date on the stuff it can talk about. This weather prediction sounds like ChatGPT being able to predict next week's news from the data it has been trained on. I can see how it could probably predict some things, like Argentina winning a football match scheduled for next week against India, given that India sucks at football. But can it really give any useful predictions? Like, can it predict things which are not public? Like who Joe Rogan will interview in 2 weeks? Or what the list of companies in YC's next batch will be?

sillysaurusx|1 year ago

Sure, not every model is an autoregressive transformer. And even a GPT could give some useful predictions if you stuff the context window with things it's been fine tuned to predict. We did that to get GPT to play chess a few years ago.

Specifically, I could imagine throwing current weather data at the model and asking it what it thinks the next most likely weather change is going to be. If it's accurate at all, then that could be done on any given day without further training.
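The "throw current data at it each day" idea above is just an autoregressive rollout: start from today's observations and repeatedly feed each prediction back in as input. A generic sketch (the linear `model` is a placeholder, not any real forecasting model):

```python
def model(state):
    # Stand-in for a trained one-step forecaster: maps the current
    # state to the predicted state one step ahead.
    return 0.9 * state + 0.1

def rollout(initial_state, n_steps):
    """Autoregressive forecast: each output becomes the next input."""
    state = initial_state
    trajectory = [state]
    for _ in range(n_steps):
        state = model(state)
        trajectory.append(state)
    return trajectory

# No retraining needed per day: just restart from fresh observations.
forecast = rollout(0.0, 5)
print([round(x, 5) for x in forecast])
```

This is also why the training-cutoff worry doesn't apply the way it does for news: the model predicts the next state from the current one, not events it has memorized.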

The problems happen when you start throwing data at it that it wasn't trained on, so it'll be a cat and mouse game. But it's one I think the cat can win, if it's persistent enough.

the_arun|1 year ago

In India we use Natural Intelligence - Astrology - for predicting results. Note that it has high percentage of hallucinations.

SirLJ|1 year ago

How accurate is the weather prediction for a city for tomorrow on average for the min and max temperature? Thanks a lot!

baetylus|1 year ago

Exciting idea and seems like a well-proven team. Good luck to you guys here and don't mind the endemic snark in the other threads. A couple basic questions --

1. How will you handle one-off events, like volcanic eruptions for instance?

2. Where do you start with this, too? Do you pitch a meteorology team? Is it like a "compare and see for yourself"?

cbodnar|1 year ago

Volcanoes are a tricky one. There are a few volcanic eruptions in historical data, but it's unclear if this is enough to predict reasonably well how such future eruptions (especially at unseen locations) will affect the weather. Would be fun to look at some events and see what the model is doing. Thanks for the suggestion!

Re where do we start. A lot of organisations across different sectors need better weather predictions or simulations that depend on weather. Measuring the skill of such models is a relatively standard procedure and people can check the numbers.

resters|1 year ago

Very cool! I was thinking of doing space weather simulation using vocap and a representation of signals in the spatial domain. Maybe it could be added.

kyletns|1 year ago

This is cool. What do you mean by "defense?"

itomato|1 year ago

I keep waiting for someone to integrate data from NEON

cbodnar|1 year ago

I am curious, what would you do with this data if you had infinite resources?

bschmidt1|1 year ago

Wow, so excited for this.

I had a web app online in 2020-22 called Skim Day that predicted skimboarding conditions on California beaches, mostly powered by weather APIs. The tide predictions were solid, but the weather itself was almost never right, especially wind speed. Additionally, there were some missing metrics, like the slope of the beach, which changes significantly throughout the year and is very important for skimboarding.

Basically, I needed AI. And this looks incredible. Love your website and even the name and concept of "Generative Forecasting Transformer (GFT)" - very cool. I imagine the likes of Surfline, The Weather Channel, and NOAA would be interested to say the least.

cbodnar|1 year ago

That's pretty cool! Would be great to learn more about your app and how the wave/tide prediction was working. Is there some place to read more about this?

jawmes8|1 year ago

Yes please improve surf forecasting!