This is a port of Meta's Segment Anything computer vision model, which makes it easy to segment shapes in images. Originally written in Python, it has been ported to C++ by Yavor Ivanov using the GGML library created by Georgi Gerganov, which is optimized for CPU inference (specifically Apple Silicon M1/M2) rather than GPU. The repo is still in its early stages.
Do you know how long the image embedding takes? In SAM, most of the time is spent generating a very expensive embedding (prohibitive for real-time object detection). From the timings on your page it looks like yours is similarly slow, but I'm curious how it compares to Meta's PyTorch implementation.
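For a fair comparison it helps to time just the embedding step in isolation. Below is a minimal, generic timing helper (a sketch, not part of either codebase); the `predictor.set_image(...)` call mentioned afterwards is the PyTorch SAM API's embedding step, while the helper itself is plain Python:

```python
import time

def time_call(fn, *args, warmup=1, repeat=3):
    """Time fn(*args): best of `repeat` runs after `warmup` warmup runs,
    so one-off setup costs don't pollute the measurement."""
    for _ in range(warmup):
        fn(*args)
    best = float("inf")
    for _ in range(repeat):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best
```

Against the PyTorch implementation you would call something like `time_call(predictor.set_image, image)` after loading a checkpoint via `sam_model_registry`, and compare that against the embedding timing sam.cpp reports.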
Yes, GroundingDINO is an open-set object detector. There are some others (e.g. DETIC and OWL-ViT) as well.
We’ve been working on using them (often in conjunction with SAM) for auto-labeling datasets to train smaller, faster models that can run in real time at the edge: https://github.com/autodistill/autodistill
I am looking for a model similar to this, but for text: one that groups text under different labels applying to subsets of the text. Think of being able to quickly pull out related segments from a large body of text.
Let's take, for instance, a sales contract that specifies a discounted price for various goods.
If you select the label "data rows", the system should be able to extract all the text pertaining to the table that specifies which SKUs are being purchased, and at what discounted price.
Moreover, this model should be capable of segmenting the content into semantically relevant chunks. For example, each row in the aforementioned table would be tagged with multiple labels: one marking it as a table row, and another labeling the data in the first column for what it represents, e.g. "product number". Another example: if there's a section discussing the terms of delivery or warranty conditions, selecting the respective labels would instantly extract that specific information, regardless of where it's located within the document.
It would be great if it could segment into a controllable range of tokens/characters, to allow pulling those chunks into a vector database along with the relevant tags for each chunk.
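That last point, chunking into a controllable size for a vector database, can be sketched with a simple character-window splitter with overlap (an illustrative sketch only; a real pipeline would split on token boundaries with the embedding model's tokenizer and attach the labels discussed above as per-chunk metadata):

```python
def chunk_text(text, max_chars=800, overlap=100):
    """Split text into chunks of at most max_chars characters,
    overlapping consecutive chunks by `overlap` characters so that
    a labeled span is less likely to be cut in half at a boundary."""
    assert 0 <= overlap < max_chars
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        start = end - overlap
    return chunks
```

Each chunk would then be stored in the vector database together with its tags (e.g. `{"labels": ["data row", "product number"], "text": chunk}`).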
The Bark Python model is very compute-intensive and requires a powerful GPU to get bearable inference speed. I really hope that a bark.cpp with GPU/Metal support and quantized models can bring useful inference speed on a laptop in the near future.
Next Step: Incorporate this library into image editors like Photopea (via WebAssembly) to boost the speed of common selection tasks. The magic wand is a tool of the past.
While I love the efficiency of these Python-to-C++ ports, I can't stop thinking about the long tail of subtle bugs that will likely infest these libraries forever. Then again, the Python versions also sit atop C/C++ cores.
The AI community is nowhere close to ditching Python. Most model development and training still uses Python-based toolchains (torch, tf...). The new trend is for popular and useful models to be ported to more efficient stacks like C++/GGML for easier usage and faster inference on consumer hardware.
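A large part of the speedup in these GGML-style ports comes from weight quantization. The idea can be illustrated with a simplified numpy sketch of symmetric 8-bit quantization; note this is only an approximation of what GGML does — its real formats (Q4_0, Q8_0, ...) quantize small blocks of weights with one scale per block:

```python
import numpy as np

def quantize_q8(block):
    """Symmetric 8-bit quantization: store int8 values plus one float
    scale, shrinking fp32 weights to roughly a quarter of the size."""
    scale = max(float(np.max(np.abs(block))) / 127.0, 1e-12)
    q = np.round(block / scale).astype(np.int8)
    return scale, q

def dequantize_q8(scale, q):
    """Recover approximate fp32 weights from the quantized block."""
    return q.astype(np.float32) * scale

weights = np.linspace(-1.0, 1.0, 32).astype(np.float32)
scale, q = quantize_q8(weights)
restored = dequantize_q8(scale, q)
# reconstruction error is bounded by half a quantization step (scale / 2)
```

Storing weights this way (and computing on them with per-block scales) is part of what lets these ports fit and run large models in consumer RAM.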
Another popular optimisation is to port models to WASM + GPU, because it makes them easy to support on a variety of platforms (desktop, mobile...) with a single API while still offering great performance (see Google's MediaPipe as an example of that).
In general, if you don't know what you are doing, it's much faster to first figure out a good strategy for a solution in a language that does not suffer from all of the encumbrances C++ brings.
Python is really great for fast prototyping, and it can be argued that most AI products so far are the result of fast prototyping. So I'm not sure there is anything wrong with that.
As practical models emerge, it indeed makes sense to port them to C++ at that point. But I would not in my wildest dreams suggest prototyping a data model in C++ unless absolutely necessary.
How has Python held it back? Most of the heavy computation lifting is done by C extensions/bindings and the models are compiled to run on CUDA, etc. What am I missing?
This is exactly the wrong way around. We've seen the progress we've seen because of the adoption of Python. Even now, relatively few people can write code like this and also have the ML and math experience to push the research forward.
I'd pay for such a selection feature in an image editor.