top | item 37398891

Meta's Segment Anything written with C++ / GGML

233 points | ariym | 2 years ago | github.com

31 comments

[+] ariym|2 years ago|reply
This is a port of Meta's Segment Anything computer vision model, which allows easy segmentation of shapes in images. Originally written in Python, it has been ported to C++ by Yavor Ivanov using the GGML library created by Georgi Gerganov, which is optimized for CPU inference instead of GPU, specifically on Apple Silicon M1/M2. The repo is still in its early stages.
[+] dekhn|2 years ago|reply
Do you know how long the image embedding takes? In SAM, most of the time is spent generating a very expensive embedding (prohibitive for real-time object detection). From the timing on your page it looks like yours is also similarly slow, but I'm curious how it compares to the PyTorch Meta implementation.
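For a rough sense of why the embedding dominates: SAM's image encoder is a large ViT (ViT-H in the biggest variant), and self-attention cost grows quadratically with token count. A back-of-envelope sketch, assuming the ViT-H figures from the SAM paper (1024x1024 input, 16x16 patches, 32 layers, hidden size 1280):

```python
# Back-of-envelope FLOPs for the attention matmuls in SAM's ViT-H encoder.
# Assumed figures: 1024x1024 input, 16x16 patches, 32 layers, hidden dim 1280.
tokens = (1024 // 16) ** 2          # 64 * 64 = 4096 tokens
dim, layers = 1280, 32
# Two n^2-by-d matmuls per layer (QK^T scores and softmax*V context),
# ~2 FLOPs per multiply-add: 2 * 2 * n^2 * d per layer.
attn_flops = layers * 4 * tokens**2 * dim
print(f"{tokens} tokens, ~{attn_flops / 1e12:.2f} TFLOPs in attention matmuls alone")
```

That quadratic term, before even counting the MLP blocks, is why the one-time embedding is so expensive, while the per-prompt mask decoder is tiny by comparison.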
[+] unshavedyak|2 years ago|reply
Well... damn. Is there a framework like this (or this directly?) which can run object detection? People, car types, makes, animals, etc?
[+] yeldarb|2 years ago|reply
Yes, GroundingDINO is an open-set object detector. There are some others (e.g. DETIC and OWL-ViT) as well.

We’ve been working on using them (often in conjunction with SAM) for auto-labeling datasets to train smaller faster models that can run in real-time at the edge: https://github.com/autodistill/autodistill

[+] Tostino|2 years ago|reply
I am looking for a model similar to this, but for text: I want to group text with different labels that apply to subsets of the text. Think of being able to quickly pull out related segments from a large body of text.

Let's take, for instance, a sales contract that specifies a discounted price for various goods. If you select the label "data rows", the system should be able to extract all the text pertaining to the table that specifies which SKUs are being purchased, and at what discounted price. Moreover, the model should be capable of segmenting the content into semantically relevant chunks. One example: each row in the aforementioned table would be tagged with multiple labels. One would just be that it is a row; the data in the first column should also be labeled for what it represents, e.g. "product number". Another example: if there's a section discussing the terms of delivery or warranty conditions, selecting the respective labels would instantly extract that specific information, regardless of where it's located within the document.

It would be great for it to be able to segment into some controllable range of tokens/characters, to allow for pulling those chunks into a vector database along with the relevant tags.
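The interface described above might look something like this toy, rule-based stand-in. Everything here is hypothetical: the `Segment` type and keyword rules are illustrative placeholders, and a real system would use a learned model rather than keyword matching.

```python
# Toy sketch of a labeled text-segmentation interface (hypothetical).
from dataclasses import dataclass, field

@dataclass
class Segment:
    text: str
    labels: list = field(default_factory=list)

def segment_lines(text, rules):
    """Split text into lines and tag each with every matching label.

    rules: {label: keyword}. A real system would replace this keyword
    matching with a learned open-vocabulary labeler.
    """
    out = []
    for line in text.splitlines():
        if not line.strip():
            continue
        labels = [lab for lab, kw in rules.items() if kw.lower() in line.lower()]
        out.append(Segment(line, labels))
    return out

contract = "SKU 1042: widgets, $5.00 each\nDelivery within 30 days of order"
rules = {"data row": "sku", "delivery terms": "delivery"}
for seg in segment_lines(contract, rules):
    print(seg.labels, seg.text)
```

Chunk-size control for a vector database would then just be a post-pass that merges adjacent segments up to a token budget while unioning their labels.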
[+] artninja1988|2 years ago|reply
Big fan of your work GGML friends
[+] lelag|2 years ago|reply
Another GGML model port that I'm pretty excited about is https://github.com/PABannier/bark.cpp.

The Bark Python model is very compute-intensive and requires a powerful GPU to get bearable inference speed. I really hope that bark.cpp with GPU/Metal support and quantized models can bring useful inference speed on a laptop in the near future.
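For scale, a rough size calculation. Assumptions: ~1B total weights across Bark's sub-models (the exact count may differ), and GGML's q4_0 format at an effective ~4.5 bits per weight (4-bit values plus one fp16 scale per 32-weight block):

```python
# Rough model-size math for GGML-style 4-bit quantization.
params = 1_000_000_000               # assumed ~1B weights total
fp32_gib = params * 32 / 8 / 2**30   # full precision
fp16_gib = params * 16 / 8 / 2**30   # half precision
q4_gib = params * 4.5 / 8 / 2**30    # q4_0: 4 bits + per-block fp16 scale
print(f"fp32 {fp32_gib:.2f} GiB, fp16 {fp16_gib:.2f} GiB, q4_0 {q4_gib:.2f} GiB")
```

Smaller weights also mean less memory bandwidth per token, which is often the real bottleneck for CPU inference, so the speedup can exceed what the size ratio alone suggests.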

[+] accurrent|2 years ago|reply
Hmm, wonder how this compares to stuff like FastSAM and MobileSAM. Is quantized SAM better, or are those knock-off architectures more performant?
[+] fzaninotto|2 years ago|reply
Bravo, the demonstration is genuinely impressive!

Next Step: Incorporate this library into image editors like Photopea (via WebAssembly) to boost the speed of common selection tasks. The magic wand is a tool of the past.

I'd pay for such a feature.

[+] farhanhubble|2 years ago|reply
While I love the efficiency of these Python-to-C++ ports, I can't stop thinking about the long tail of subtle bugs that will likely infest these libraries forever. But then, the Python versions also sit atop C/C++ cores.
[+] wmf|2 years ago|reply
Good news! Deep learning inherently has a long tail of subtle bugs (SolidGoldMagikarp anyone?) so no one will care if C++ introduces a few more.
[+] hoseja|2 years ago|reply
Just because Python silently ignores the bugs doesn't mean they're not there.
[+] OccamsMirror|2 years ago|reply
Just wait until they’re ported to C++ using AI!
[+] IshKebab|2 years ago|reply
I'm so glad the AI community is finally starting to ditch Python. It has held progress back for far too long.
[+] lelag|2 years ago|reply
The AI community is nowhere close to ditching Python. Most model development and training still use Python-based toolchains (Torch, TF...). The new trend is for popular and useful models to be ported to more efficient stacks like C++/GGML for easier usage and faster inference on consumer hardware.

Another popular optimisation is to port models to WASM + GPU, because that makes it easy to support a variety of platforms (desktop, mobile...) with a single API while still offering great performance (see Google's MediaPipe as an example).

[+] fsloth|2 years ago|reply
In general if you don't know what you are doing, it's much faster to first figure out a good strategy for a solution in a language that does not suffer from all of the encumbrance C++ brings in.

Python is really great for fast prototyping. It can be argued that most AI products so far are the result of fast prototyping, so I'm not sure there is anything wrong with that.

As practical models emerge, at that point it indeed makes sense to port them to C++. But I would not in my wildest dreams suggest prototyping a data model in C++ unless absolutely necessary.

[+] dbmikus|2 years ago|reply
How has Python held it back? Most of the heavy computation lifting is done by C extensions/bindings and the models are compiled to run on CUDA, etc. What am I missing?
[+] Havoc|2 years ago|reply
I’d say discovery and innovation would be slower in a less relaxed language. And speed ends up comparable thanks to the compiled parts of Python.
[+] jebarker|2 years ago|reply
This is exactly the wrong way around. We've seen the progress we've seen because of the adoption of Python. Even now there are relatively few people that can write code like this and have the ML and math experience to push forward the research.