The "Mistral Pixtral multimodal model" really rolls off the tongue.
> It’s unclear which image data Mistral might have used to develop Pixtral 12B.
The days of free web scraping, especially for the richer sources of material, are almost gone, with everything from technical measures (API restrictions) to legal ones (copyright) building deep moats. I also wonder what they trained it on. They're not Meta or Google with endless supplies of user content, or exclusive contracts with the Reddits of the internet.
What do you mean by copyright measures? Has anything changed on that front in the last two years?
My hunch is that most AI labs are already sitting on a pretty sizable collection of scraped image data - and that data from two years ago will be almost as effective as data scraped today, at least as far as image training goes.
At what point does an agent sitting at a browser collecting information differ from a human?
I have multiple ad-blockers running, how am I different from a bot scouring the “free” web? I get the idea of copyright and creators wanting to be paid for their content. However, I think there are plenty of human users out there not “paying” for “free” content either. Which one is a greater loss of revenue? A collection of over a million humans? Or 100 or so corporate bots?
>The days of free web scraping especially for the richer sources of material are almost gone
I would say the opposite: it has never been easier to collect a huge amount of data, especially if you have a specific target. You don't even need to write a line of code if you're good at explaining to Claude 3.5 Sonnet what you want to achieve and the details.
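To illustrate the point: the kind of throwaway scraper an LLM can write for you is often just a few lines. A minimal sketch using only the Python standard library (the page structure here is a made-up example, and any real scraping should respect robots.txt and site terms):

```python
from html.parser import HTMLParser

class ImageLinkCollector(HTMLParser):
    """Collects the src attribute of every <img> tag on a page."""
    def __init__(self):
        super().__init__()
        self.image_urls = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.image_urls.append(src)

def extract_image_urls(html: str) -> list[str]:
    """Returns the src of every <img> tag found in the given HTML."""
    parser = ImageLinkCollector()
    parser.feed(html)
    return parser.image_urls

# In a real scraper you would fetch the page first, e.g.:
#   html = urllib.request.urlopen(page_url).read().decode()
print(extract_image_urls('<div><img src="/a.jpg"><img alt="no src"></div>'))
# → ['/a.jpg']
```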
1. This is a VLM, not a text-to-image model. You can give it images, and it can understand them. It doesn't generate images back.
2. It seems like Pixtral 12B benchmarks significantly below Qwen2-VL-7B [1], so if you want the best local model for understanding images, probably use Qwen2. If you want a large open-source model, Qwen2-VL-72B is most likely the best option.
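For anyone unfamiliar with how "give it images" works in practice: these VLMs typically accept images inside an otherwise normal chat request, commonly as a base64 data URL in the OpenAI-compatible message format. A sketch of just the request construction; the model name is a placeholder and the exact schema should be checked against whichever endpoint you use:

```python
import base64
import json

def build_vision_request(image_bytes: bytes, question: str,
                         model: str = "pixtral-12b") -> dict:
    """Builds an OpenAI-style chat payload with one image and one question.

    The content list mixes a text part with an image_url part carrying a
    base64 data URL, which is the common convention for OpenAI-compatible
    vision endpoints.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

payload = build_vision_request(b"\x89PNG...", "What is in this image?")
print(json.dumps(payload)[:80])
```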
Mistral being more open than 'openai' is kind of a meme. How can a company call itself open while it refuses to openly distribute its product, when competitors are actually doing it?
I’d love to know how much money Mistral is taking in versus spending. I’m very happy for all these open weights models, but they don’t have Instagram to help pay for it. These models are expensive to build.
I like Qwen2-VL 7B because it outputs shorter captions with less fluff. But if you need to do anything advanced that relies on reasoning and instruction following, the model completely falls flat on its face.
For example, I have a couple way-too-wordy captions made with another captioner, which I'd like to cut down to the essentials while correcting any mistakes. With this approach Qwen2 completely ignores the image and focuses only on the given caption, which makes it unable to even remotely fix the issues in said caption.
I am really hoping Pixtral will be better for instruction following. But I haven't been able to run it because they didn't prioritize transformers support, which in turn has hindered the release of any quantized versions to make it fit on consumer hardware.
I’m no expert but Florence2 has been my go-to. It’s pretty great at picking up art styles and IP stuff - “The image depicts Goku from the anime series Dragonball Z…”
I don’t believe you can really prompt it, though, but the other models that I could prompt also didn’t work well on that front anyway.
TagGui is an easy way to try out a bunch of models.
> the 12-billion-parameter model is about 24GB in size
Probably not on the device itself but I would love that use case as well. At least going to my own server. I’d want to protect notes in particular, which is why I don’t do any cloud backup on my RM2. But some self hosted, AI assisted OCR workflows could be really nice.
12B is pretty small, so I doubt it’ll be anywhere close to InternVL2. However, Mistral does great work, and this model is likely still useful for on-device tasks.
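On the size question: the ~24GB figure quoted above is just parameter count times bytes per weight at fp16/bf16, which is also why quantization matters so much for consumer hardware. A quick back-of-the-envelope check:

```python
def weight_footprint_gb(n_params: float, bytes_per_param: float) -> float:
    """Approximate size of just the weights in decimal gigabytes.

    Ignores KV cache, activations, and runtime overhead, so real memory
    use during inference will be somewhat higher.
    """
    return n_params * bytes_per_param / 1e9

n = 12e9  # Pixtral 12B
print(weight_footprint_gb(n, 2.0))   # fp16/bf16 → 24.0
print(weight_footprint_gb(n, 1.0))   # int8      → 12.0
print(weight_footprint_gb(n, 0.5))   # ~4-bit    → 6.0
```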
htrp|1 year ago
img2dataset also exists
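For reference, img2dataset takes a plain list of image URLs and downloads/resizes them in parallel. A typical invocation looks roughly like this (flag names from memory of the tool's README, so double-check against the current docs):

```shell
# urls.txt: one image URL per line
img2dataset \
  --url_list urls.txt \
  --output_folder images/ \
  --output_format webdataset \
  --image_size 256 \
  --processes_count 4 \
  --thread_count 64
```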
1: https://qwenlm.github.io/blog/qwen2-vl/
Jackson__|1 year ago
Only the 2B and 7B have been "open sourced". From your link:
>We opensource Qwen2-VL-2B and Qwen2-VL-7B with Apache 2.0 license, and we release the API of Qwen2-VL-72B!
ChrisArchitect|1 year ago
New Mistral AI Weights
https://news.ycombinator.com/item?id=41508695
wruza|1 year ago
Also, can your model of choice understand your requests to include/omit particular nuances of an image?
Flockster|1 year ago
Like writing on an ePaper tablet, exporting the PDF, and feeding it into this model to extract todos from notes, for example.
Or what would be the SotA for this application?
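Assuming the OCR step already works (e.g. a VLM transcribing the handwritten page), the todo-extraction half of that pipeline can be very simple. A sketch where the checkbox/keyword conventions are assumptions about how one's notes look:

```python
import re

# Lines that look like tasks: "- [ ] buy milk", "TODO: call Bob"
TODO_PATTERN = re.compile(
    r"^\s*(?:[-*]\s*\[\s*\]\s*|TODO[:\s]\s*)(?P<task>.+)$",
    re.IGNORECASE | re.MULTILINE,
)

def extract_todos(ocr_text: str) -> list[str]:
    """Pulls task-looking lines out of OCR'd note text."""
    return [m.group("task").strip() for m in TODO_PATTERN.finditer(ocr_text)]

notes = """Meeting notes 12/09
- [ ] email the draft to Sam
Random thought about VLMs
TODO: benchmark Pixtral vs Qwen2-VL
"""
print(extract_todos(notes))
# → ['email the draft to Sam', 'benchmark Pixtral vs Qwen2-VL']
```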
Jackson__|1 year ago
https://xcancel.com/_philschmid/status/1833954941624615151
jazzyjackson|1 year ago
For a general knowledge chatbot it doesn't know much, of course, but it's a good worker bee.