top | item 46401539

Apple releases open-source model that instantly turns 2D photos into 3D views

400 points | SG- | 2 months ago | github.com

202 comments


bertili|2 months ago

transcriptase|2 months ago

I love how virtually no GitHub instructions related to AI simply work as written.

Each assumes you already have their development environment configured and that the only thing missing is building the tool itself.

RobotToaster|2 months ago

https://raw.githubusercontent.com/apple/ml-sharp/refs/heads/...

"Exclusively for research purposes" so not actually open source.

ffsm8|2 months ago

The readme doesn't claim it's open source either, from what I can tell. Seems to be just a misguided title by the person who submitted it to HN.

The only reference seems to be in the acknowledgements, saying that this builds on top of open source software.

andy99|2 months ago

Meta’s campaign to corrupt the meaning of Open Source was unfortunately very successful and now most people associate releasing the weights with open source.

zarzavat|2 months ago

There's no reason to believe that weights are copyrightable. The only reason to pay attention to this "license" is because it's enforced by Apple, in that sense they can write whatever they want in it, "this model requires giving ownership of your first born son to Apple", etc. The content is irrelevant.

thebruce87m|2 months ago

I’m going to research if I can make a profitable product from it. I’ll publish the results of course.

sa-code|2 months ago

Should the title be corrected to source-available?

echelon|2 months ago

That sucks.

I'm writing open desktop software that uses WorldLabs splats for consistent location filmmaking, and it's an awesome tool:

https://youtube.com/watch?v=iD999naQq9A

This next year is going to be about controlling a priori what your images and videos will look like before you generate them.

3D splats are going to be incredibly useful for film and graphics design. You can rotate the camera around and get predictable, consistent details.

We need more Gaussian models. I hope the Chinese AI companies start building them.

wasting_time|2 months ago

Is there any model that is actually free as in freedom (not necessarily gratis)?

bsnnkv|2 months ago

Nice to see some more interesting use of this kind of educational source licensing

littlestymaar|2 months ago

Your daily reminder that neural network weights aren't creative work and as such aren't subject to copyright protection in the first place. The “license” is purely cosmetic (or rather, it has an internal purpose: it's being put there by the ML scientists who want to share their work and have to deal with the corporate reluctance to do so).

LtWorf|2 months ago

When AI and open source is used together you can be sure it's not open source.

m4ck_|2 months ago

If all these AI models were trained on copyrighted materials the trainers had no right to use, is it wrong to steal their models and use them however we want? Morally I'd say absolutely not, but I'm sure these AI bros would vigorously defend their own IP, even if it was built on stolen IP created by humans.

hwers|2 months ago

[deleted]

randyrand|2 months ago

It’s open source, just not open domain.

chmod775|2 months ago

Big day for VR pornography!

I'm not kidding. That's going to be >80% of the images/videos synthesized with this.

avaer|2 months ago

Unfortunately not as significant as you'd think.

The output is not automatically metrically scaled (you can fix this with postprocessing, but that's not part of this model). And you can't really move around much without getting glitches, because it only runs inference along one axis. It's also hard-capped at 768 pixels and 2 layers.
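The metric-scaling postprocessing mentioned above can be as simple as one global rescale. A minimal sketch, assuming you have a metric depth estimate (e.g. from a metric monocular depth model) to compare against; the function name and approach are illustrative, not part of ml-sharp:

```python
import numpy as np

def rescale_to_metric(positions, scales, predicted_depths, metric_depths):
    """Uniformly rescale a splat scene so its predicted depths match a
    metric depth estimate. A single global factor is the simplest
    correction; it assumes the scene is only off by a uniform scale."""
    # Robust global scale factor: median ratio of metric to predicted depth
    s = np.median(metric_depths / predicted_depths)
    return positions * s, scales * s
```

This only recovers absolute scale, not shape errors; per-region corrections would need a more involved alignment.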

Besides, depth/splatting models have been around for quite a while before this. The main thing this model innovates on is inference speed, but VR porn isn't a use case that really benefits from faster image/video processing, especially since it's still not realtime.

This year has seen a lot of innovation in this space, but it's coming from other image editing and video models.

rcarmo|2 months ago

Gives the term "Gaussian splat" an entirely different meaning...

coffeecoders|2 months ago

I feel like I'm in a time loop. Every time a big company releases a model, we debate the definition of open source instead of asking what actually matters. Apple clearly wants the upside of academic credibility without giving away commercial optionality, which isn't unsurprising.

Additionally, we might need better categories. With software, the flow is clear (source, build, and binary), but with AI/ML the actual source is an unshippable mix of data, infra, and time, and weights can be both product and artifact.

basisword|2 months ago

I'm glad you said it. Incredible tech, and the top comment is debating licensing. The demos I've seen of this are incredible, and it'll be great taking old photos (that weren't shot with a 'spatial' camera) and experiencing them in VR. I think it sums up the Apple approach to this stuff (actually impacting people's lives in a positive way) vs the typical techie attitude.

mabedan|2 months ago

> which isn't unsurprising

There has to be an easier combination of words for conveying the same thing.

ericflo|2 months ago

I don't think it isn't unsurprising :)

jama211|2 months ago

Wait so you are surprised?

d_watt|2 months ago

I’ve been using some time off to explore the space, and the related projects StereoCrafter and GeometryCrafter are fascinating. Applying this to video adds a temporal consistency angle that makes it way harder and more compute-intensive, but I’ve “spatialized” some old home videos from the Korean War and it works surprisingly well.

https://github.com/TencentARC/StereoCrafter https://github.com/TencentARC/GeometryCrafter

sho_hn|2 months ago

I would love to see your examples.

gjsman-1000|2 months ago

Is this the same model as the “Spatial Scenes” feature in iOS 26? If so, it’s been wildly impressive.

alexford1987|2 months ago

It seems like it, although the shipped feature doesn’t allow for as much freedom of movement as the demos linked here (which makes sense as a product decision because I assume the farther you stretch it the more likely it is to do something that breaks the illusion)

The “scenes” from that feature are especially good for use as lock screen backgrounds

basisword|2 months ago

I assume this is the same spatial scenes feature that was on visionOS prior to OS 26. In my experience that was really incredible. You could take a standard 2D photo of someone and suddenly you were back in the room with them.

nyc_pizzadev|2 months ago

Ya, I like when it’s automatically done on my featured photo, gives the phone a very 3D look and feel.

mercwear|2 months ago

I am thinking the same thing, and I do love the effect in iOS26

analog31|2 months ago

I wonder if it helps that a lot of people take more than one picture of the same thing, thus providing them with effectively stereoscopic images.

Coneylake|2 months ago

Also, frames from live photos

jtrn|2 months ago

I was thinking of testing it, but I have an irrational hatred for Conda.

optionalsquid|2 months ago

You could use pixi instead, as a much nicer/saner alternative to conda: https://pixi.sh

Though in this particular case, you don't even need conda. You just need python 3.13 and a virtual environment. If you have uv installed, then it's even easier:

    git clone https://github.com/apple/ml-sharp.git
    cd ml-sharp
    uv sync
    uv run sharp

jtreminio|2 months ago

You can simply use a `uv` env instead?

moron4hire|2 months ago

You aren't being irrational.

quleap|2 months ago

I hate pip, a million times worse than conda

bdelmas|2 months ago

I’m so sad I had this idea at least 6 years ago but I didn’t have the connections to make it happen. But that’s nice that they released the project. Apple open sourcing their tech?

yalogin|2 months ago

Is this already integrated into the latest iOS? If so it’s not good. It only works on a few images and for the most part the rendering feels fake and somehow incoherent

burnt-resistor|2 months ago

Damn. I recall UC Davis was working on this sort of problem for CCTV footage 20 years ago, but this is really freakin' progress now.

jokoon|2 months ago

does it make a mesh?

doesn't seem very accurate, and I have no idea how it handles a photo of a large scene; that could be useful for level designers

avaer|2 months ago

It doesn't, but it's pretty trivial to do if all you want is a pinholed mesh.

I managed to one-shot it by mixing in the mesh exporter from https://github.com/Tencent-Hunyuan/HunyuanWorld-Mirror but at that point you might as well use HWM, which is slower but much better suited to the level design use case.

Note that the results might not be as good as you expect, because this does not do any angled inpainting -- any deviation from the camera origin and your mesh will be either full of holes or warped (depending on how you handle triangle rejection) unless you layer on other techniques far outside the scope of this model.
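The "pinholed mesh" idea above can be sketched in a few lines: backproject the depth map through pinhole intrinsics and drop triangles that span a depth discontinuity. This is a generic sketch of the technique, not code from ml-sharp; the function name and the ratio-based rejection threshold are my own choices.

```python
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy, max_edge_ratio=1.2):
    """Backproject an HxW depth map into a camera-space triangle mesh,
    rejecting triangles whose corner depths differ too much (i.e. that
    span a depth discontinuity at an object boundary)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole backprojection: pixel (u, v) with depth z -> 3D point
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    verts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    # Two triangles per pixel quad
    idx = np.arange(h * w).reshape(h, w)
    a, b = idx[:-1, :-1], idx[:-1, 1:]
    c, d = idx[1:, :-1], idx[1:, 1:]
    tris = np.concatenate([
        np.stack([a, c, b], axis=-1).reshape(-1, 3),
        np.stack([b, c, d], axis=-1).reshape(-1, 3),
    ])

    # Triangle rejection: drop faces spanning a large depth ratio
    z = verts[:, 2][tris]
    keep = z.max(axis=1) / np.maximum(z.min(axis=1), 1e-6) < max_edge_ratio
    return verts, tris[keep]
```

Rejecting (rather than keeping) boundary triangles is what produces the holes the parent mentions; keeping them instead gives the warped "rubber sheet" look.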

And note that although HWM itself does support things like multi-image merging (which ml-sharp does not), in my testing it makes so many mistakes as to be close to useless today.

If you want something different that is designed for levels, check out Marble by World Labs.

andybak|2 months ago

Gaussian splats

dmos62|2 months ago

Anyone's aware of something similar for making interactive (or video) tours of apartments from photos?

lvl155|2 months ago

I don’t know when Apple turned evil, but it’s hard for me to support them further after nearly four decades. Everything they do now is directly opposite of what they stood for in the past.

saagarjha|2 months ago

Curious what this has to do with the post?

knorker|2 months ago

Apple has not been nice and open since the 1970s. The only open and nice person in any important role is Wozniak.

tsunamifury|2 months ago

Apple absolutely never believed in open source in the past, so yes, they are not the same.

ww520|2 months ago

Is the model in ONNX format or PyTorch format?

backtogeek|2 months ago

License arguments aside, pretty cool.

hermitcrab|2 months ago

"Sharp Monocular View Synthesis in Less Than a Second"

"Less than a second" is not "instantly".

0_____0|2 months ago

If you're concerned by that, I have some bad news about instant noodles.

ethmarks|2 months ago

What would your definition of "instantly" be? I would argue that, compared to taking minutes or hours, taking less than a second is fast enough to be considered "instant" in the colloquial definition. I'll concede that it's not "instant" in the literal definition, but nothing is (because of the principle of locality).

vednig|2 months ago

Facebook worked on a similar project almost 5 years back.

bbstats|2 months ago

would love a multi-image version of this.

darig|2 months ago

[deleted]

b112|2 months ago

Ah great. Easier for real estate agents to show slow panning around a room, with lame music.

I guess there are other uses?? But this is just more abstracted reality. It will be inaccurate just as summarized text is, and future people will again have no idea as to reality.

tim1994|2 months ago

For panning you don't need a 3D view/reconstruction. This also allows translational camera movements, but only for nearby views. Maybe I am overly pedantic here, but for HN I guess that's appropriate :D

stevep98|2 months ago

It will be used for spatial content, for viewing in Apple Vision Pro headset.

In fact you can already turn any photo into spatial content. I’m not sure if it’s using this algorithm or something else.

It’s nice to view holiday photos with spatial view … it feels like you’re there again. Same with looking at photos of deceased friends and family.

Invictus0|2 months ago

Apple is not a serious company if they can't even spin up a simple frontend for their AI innovations. I should not have to install anything to test this.

consonaut|2 months ago

It's included in the iOS photo gallery. I think this is a separate release of the tech underneath.

avaer|2 months ago

This is a free research project on GitHub. I think I'd rather Apple focus on making hardware than hoarding GPUs for PR stunts to prove they are a "serious company".

pcurve|2 months ago

[flagged]

foota|2 months ago

I'm not trying to be too pc, but you can't really tell based on someone's name where they were born.

That said, the US only has some 5% of the worlds population (albeit probably a larger proportion of the literate population), so you'd only expect some fraction of the world's researchers to be US born. Not to mention that US born is an even smaller fraction of births (2.5-3%, by Google), so you'd expect an even smaller fraction of US born researchers. So even if we assume that we're on par with peer countries, you'd only expect US born researchers to be a fraction of the overall research population. We'd have to be vastly better at educating people to do otherwise, which is a longshot.

Obviously this makes turning away international students incredibly stupid, but what are we to do against stupidity?

onion2k|2 months ago

> are most research done by foreign born people

Approximately 96% of the world's population is not American, so you should expect that really.

saagarjha|2 months ago

1. People with foreign sounding names may have been born in the United States.

2. People who were born outside the United States but moved here to do research a while back don’t suddenly stop doing research here.

raphman|2 months ago

FWIW, many of the researchers on the paper did not study in the U.S. but immigrated after their PhD studies.

I checked the first, middle, and last author: Lars Mescheder got his PhD in Germany, Bruno Lecouat got his PhD in France, Vladlen Koltun got his PhD in Israel.

(Edit: or maybe they did not actually immigrate but work remote and/or in Europe)

xvector|2 months ago

Why don't we produce enough experts in the US to saturate our tech companies?

It's because American education culture is trash. American parents are fine with their kids getting Bs and Cs. Mediocrity is rewarded and excellence is discouraged in our schools, both socially and institutionally.

Meanwhile you have hundreds of millions of foreign born children pulling out all the stops to do the best they possibly can at school precisely so they can get into the US and work at one of our top companies.

It was never even a competition. Immigrants and children of theirs will continue to outperform because it is literally baked into their culture - and it is baked out of ours.

_fizz_buzz_|2 months ago

Apple is also a global company and has offices and research labs world wide. At least a couple of the authors seem to work for Apple but at their German lab.

chairhairair|2 months ago

How do you know where the authors were born?

neom|2 months ago

It makes sense you're getting downvoted but I thought it was actually an interesting question so I spent the past hour or so doing an autistic rabbit hole (including finding the linkedins of the folks on the paper linked here to understand their backgrounds), heh.

Was somewhat surprised to learn that the pipeline wasn't built by industry demand, it was supply pressure from abroad that happened to arrive just as US universities needed the money (2009/10). In 1999, China's government massively expanded higher education, combined with a system where the state steers talent into stem via central quotas in the "gaokao", it created an overflow of CS capable graduates with nowhere to go domestically, India's 1991 liberalization created the IT services boom (TCS, Infosys, Y2K gold rush) and made engineering THE middle class ticket, so same overflow problem. US phd programs became the outlet for both countries.

In that light, the university-side response probably wasn't stateside industry demand for loads of PhDs; who was hiring those then? Google Brain didn't exist until 2011, FAIR until 2013. It wasn't really till 2012+ that industry in tech started to hire big research groups to actually advance the field vs specialized PhDs here and there for products... so not a huge amount of pull from there. Then, at the same time, universities were responding to a funding crisis... there was a 2008 state budget collapse, so it was backfilled with international Master's students paying $50-80k cash (we do this in Canada heavily also), and that revenue cross-subsidized PhD programs (which are mostly cost centers, remember). I also read some say PhD students were also better labor: visa constraints meant they couldn't easily bounce to industry, and they'd accept $30k stipends, tho I saw other research contradicting this idea.

The whole system was in place before "AI Researcher" was even a real hiring category. Then deep learning hit (2012), industry woke up, and they found a pre-built pipeline to harvest: the authors on that Apple paper finished their PhDs around 2012-2020, meaning they entered programs 2009-2015, when CS PhDs were already 55-60% foreign born. Those students stayed; 75-85% of Chinese and Indian STEM PhDs are still here a decade later. They're now the senior researchers publishing papers you read here on HN.

This got me wondering, could the US have grown this domestically? In 2024 they produced ~3,000 CS PhDs, only ~1,100 domestic. To get 3,000 domestic you'd need 2.7x the pipeline...which traces back to needing 10.8 million 9th graders in 2018 instead of 4 million (lol), or convincing 3x more CS undergrads to take $35k stipends instead of $150k industry jobs. Neither happened. So other countries pay for K-12 and undergrad, capture the talent at PhD entry, keep 75%+ permanently.

Seems like a reasonable system emerged from a bunch of difficult constraints?

(and just to reiterate, even tho it was an interesting research project for me, you can't infer where someone is directly from based on their name)

https://sccei.fsi.stanford.edu/china-briefs/highest-exam-how...

https://en.wikipedia.org/wiki/Economic_liberalisation_in_Ind...

https://ncses.nsf.gov/pubs/nsf24300/data-tables

https://www.aau.edu/newsroom/leading-research-universities-r...

https://ncses.nsf.gov/pubs/nsf25325

https://www.science.org/content/article/flood-chinese-gradua...

https://www.insidehighered.com/quicktakes/2017/10/11/foreign...