EfficientSAM

57 points| Thomashuet | 2 years ago |yformer.github.io

7 comments

IshanMi|2 years ago

So if I'm understanding this correctly:

The SAM paper from this past April (that let you do zero-shot segmentation on any image, seemingly better than even OpenAI's CLIP) was using a ~600M-parameter ViT model to generate image embeddings. And in order to make it less computationally expensive to generate those same embeddings, they replace that model with a smaller ViT encoder that was pre-trained using the masked autoencoder (MAE) method?
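The scale of the savings from that encoder swap can be sketched with a rough back-of-the-envelope parameter count. This is a hypothetical helper, not anything from the paper: it counts only the per-block attention (~4d²) and 4x-expansion MLP (~8d²) weights of a transformer encoder, ignoring the patch embedding, norms, and biases. The widths/depths below are the standard ViT-H and ViT-Tiny configurations.

```python
# Rough transformer-encoder parameter count: each block has
# ~4*d^2 (attention Q/K/V/O projections) + ~8*d^2 (MLP with 4x
# expansion) = ~12*d^2 weights. Hypothetical helper for illustration.
def approx_vit_params(width: int, depth: int) -> int:
    return 12 * width * width * depth

vit_h = approx_vit_params(1280, 32)  # ViT-H, SAM's original encoder
vit_t = approx_vit_params(192, 12)   # a ViT-Tiny-sized encoder

print(f"ViT-H    ~{vit_h / 1e6:.0f}M params")   # ~629M, close to the ~600M above
print(f"ViT-Tiny ~{vit_t / 1e6:.1f}M params")
print(f"ratio    ~{vit_h / vit_t:.0f}x")
```

The estimate lands near the ~600M figure quoted above for ViT-H, and shows the tiny encoder is roughly two orders of magnitude smaller, which is where the efficiency claim comes from.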

cchance|2 years ago

It's called EfficientSAM, and it appears to be on par with or better than FastSAM, but did I miss a memory or speed comparison?

yorwba|2 years ago

The comparison is in figure 1 of the paper. I think the bubble size represents the number of parameters, which likely corresponds roughly to memory consumption.

naveen99|2 years ago

Can't wait for the "everything everywhere all at once" function.