I played around with a machine learning demo and used a banana, an apple, and an orange for learning via webcam, and used speech synthesis to make it speak out loud. After the accuracy was good I pointed the cam at my wife and it said: "100% certainty a banana"
Hah! Keep in mind the model will always predict one of the labels it was trained with, for any image it is shown. You can add a "none" label with images of things that are not a banana, orange, or apple, so it learns the important features that actually make a picture a banana. If you are using your webcam, you can collect images of yourself, your office, backgrounds, etc.
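To see why it always picks a label, here's a toy sketch of the softmax + argmax step (the labels and scores are made up, not Lobe's actual internals):

```python
import math

LABELS = ["banana", "apple", "orange"]

def predict(scores):
    # Softmax the raw per-class scores, then take the argmax label.
    # However low all the scores are, argmax still returns one of the
    # trained labels -- there is no built-in "I don't know".
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(LABELS)), key=lambda i: probs[i])
    return LABELS[best], probs[best]

# A photo that is none of the three still gets a confident-looking label:
label, prob = predict([0.1, 0.05, 0.02])
```

Adding a "none" class just gives the argmax somewhere sensible to put everything else.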
I've been wanting for quite some time to build a device with a camera that could recognize my cat on the counter and turn on a servo that would release a jet of compressed air. It looks like I could actually use this for that.
I’ve been debating doing this too. I’m not too worried about the cat detection, but I haven’t got a clue how to programmatically release compressed air. Wouldn’t it be just as easy to play the sound of compressed air through a speaker? My dumb cat wouldn’t be able to tell the difference.
Add a random 0-60 second delay before the jet of air. This creates fear: whenever she sits there, she'll think a jet of air could come at any moment. Then you can remove the machine after a month, once your cat is trained.
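A sketch of the idea (`detect_cat` and `fire_air_jet` are placeholders for your camera model and air valve, whatever those end up being):

```python
import random
import time

def deter_cat(detect_cat, fire_air_jet, max_delay=60.0):
    # On a detection, wait a random 0..max_delay seconds before firing,
    # so the cat can't associate sitting down with an immediate blast.
    if not detect_cat():
        return None
    delay = random.uniform(0.0, max_delay)
    time.sleep(delay)
    fire_air_jet()
    return delay
```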
Check out the open-source project [autofocus](https://github.com/uptake/autofocus), used by the Chicago zoo to operate camera traps. Could be useful for you too!
omg please make this project haha, I love this - we'll make it a project highlight on our Twitter @lobe_ai! And it'd be a hit on our subreddit https://reddit.com/r/lobe
I really like how the website is done. Visually and content-wise. It transports the message pretty well into my brain.
Concise, not overloaded, good font sizes and looks good on mobile and desktop.
Seems to be marketed as 'machine learning', but on closer look it is only for machine learning on images. Anyone know of something similar for analysis of other kinds of data? I'm particularly interested in analyzing records (like spreadsheet data).
Under "Project templates - coming soon" they say it will work with tabular data.
I expect, as they allude to, it will begin with simple classification tasks in order to stick with the clean user experience they've built. But I'm super eager to see what they propose in this area.
We are working on adding more project templates in the future, and Lobe is designed around the idea that machine learning should be made easy, no matter the problem type you are facing.
I'm the founder, but you can use sysrev.com to review pdfs, json, text, etc. and assign labels / do annotations. You can see https://blog.sysrev.com/simple-ner/ for how to build something like a gene named entity recognizer in text. We have mechanical turk like compensation tools too, but you'll need to ping me ([email protected]) for access.
There are other options for this too, I think spacy.io has an annotation app.
Yep, we are starting with image classification for this initial beta launch, but plan to expand to more data types and problem types in future releases! The vision is to make a tool anyone can use to build custom machine learning.
This seems pretty cool -- but one issue to me is that (similar to the chasm that exists in low-code app building once the magic doesn't suit you) if I already have the skills to create a mobile app that integrates TensorFlow, I probably also have the skills to train my models. It would be cool if feature extraction (image pre-processing and the first network layer(s)) could run on the front end, and the rest of the network/search on the back end, similar to how distributed speech recognition works. Then I could use a canned lib on the device that integrates with the camera, and get my results via a websocket. (Of course, I could still run everything on the client as well.)
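Roughly what I have in mind, as a toy sketch -- the "feature extraction" and the tiny server-side "head" here are stand-ins I made up, and the JSON payload would go over a websocket in real life:

```python
import json

def extract_features(pixels):
    # Stand-in for the on-device early layers: reduce the image to two
    # cheap statistics instead of shipping the whole image upstream.
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    return [mean, var]

def serve_classification(payload):
    # Stand-in for the server-side rest-of-network: a made-up linear head
    # over the compact feature vector.
    feats = json.loads(payload)["features"]
    weights = {"bright": [1.0, 0.0], "textured": [0.0, 1.0]}
    scores = {label: sum(w * f for w, f in zip(ws, feats))
              for label, ws in weights.items()}
    return max(scores, key=scores.get)

# Device side: extract features locally, serialize the small payload.
payload = json.dumps({"features": extract_features([0.9, 0.8, 0.95, 0.85])})
label = serve_classification(payload)
```

The point of the split is that the payload is a few floats rather than a full frame, which is what makes the websocket round-trip cheap.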
What's that old joke? Something like in the 1980s a Media Lab teacher gives the class a computer vision assignment where they're supposed to be able to tell whether or not an image contains a bird, and 40 years later they're still working on it? Lobe.ai reminds me of trying to identify plants with Google Goggles 10(ish?) years ago. It didn't work very well then, and then Google killed Goggles. Side note: none of the "click on the leaf feature" web-based plant identifiers gave a satisfactory answer either.
Google Lens (on the stock Android camera app) works pretty great for identifying plants to me. I'd say it's right about 95% of the time (in New England) if I can see a flower, and about 75% of the time for a leaf.
I remember a previous version from about 2 years ago... I think Lobe.ai was web-based at the time? And you could drag and drop various blocks around to do image recognition and analysis? I probably have some of the details wrong, but the demos were very impressive.
While I never got approved for that beta (probably rightly so, I'm just some random person with no actual connection to ML or AI), I was excited to see what their work led to. Congrats on releasing this latest iteration and acquisition!
Thank you! And you are right, the general idea of Lobe 1.0 is in your message! For Lobe 2.0 we switched things up a bit, as you can see, and the good news is you can go download the app at lobe.ai without having to wait any longer!
The reasoning behind the change, and why we abstracted away some of the details you mention, was to make it even more accessible for people to build machine learning models. We think this is a paradigm everyone should be able to use, and that's why peeling back the layers of complexity was really important to us when we started this project.
Thank you so much! We deeply explored how to make it easy for anyone to get started with machine learning. We looked at where people were spending the most time getting started and iterating on creating a custom machine learning model. This is why we expanded Lobe to focus on 3 fundamental steps:
1. Collecting & labeling images
2. Training your model and evaluating the results
3. Playing with your model and seeing how well it's performing
Why does this exist? I don't mean what do you use it for, the many uses are obvious, but why has a company (Microsoft) made it and released it for free?
Reading the license, I assume it may change at some future version to require payment, and that a new version will install and then say "please pay us to continue using it"? Or perhaps just "this product is no longer available"? Note these are not things I think will happen, but rather my theoretical assumptions to answer the question of why Microsoft, a for-profit company, has made this closed-source, free tool that I think might be pretty useful for a lot of people.
Our driving force is to make this technology as accessible as possible to as many people as possible. We believe that machine learning will be a huge new way that people interact with computers going forward to better their lives.
Lobe will always let you train custom machine learning for free on your computer. We hope this becomes a vibrant ecosystem, and the business model around the edges can come later for value-add services.
You agree to receive these automatic updates without any additional notice. Updates may not include or support all existing software features, services, or peripheral devices.
This is the part that causes me to assume it will stop working at some point in the future? And when would that be:
a) Term.
The term of this agreement will continue until the commercial release of the software. We also may not release a commercial version.
But honestly I'm unsure if I'm just being paranoid. Or even if paranoia is the right term for my feeling about it; "it's something Microsoft is letting me use, and at some point it won't be usable anymore - such is life" might be the more reasonable response.
I must have missed it, but Lobe is owned by Microsoft. The product looks clean and well suited for CV 101 applications. Looks like a no-code meets AI solution. Anyone using it beyond research / personal project implementations?
One thing I thought of when I saw the demo video, that is probably on the team's radar:
There would be a lot of cool ways to improve the model by giving feedback, either showing training images where the model is uncertain, or some more advanced explanations for classifications flagged as incorrect, in order to guide the user to gather the training data that can improve it.
And possibly providing a summary of where it knows it works well.
There are a lot of benefits there, both for improving models people are building but also to help users understand why their model is performing as it does.
Thanks for your suggestions here. We are always looking at ways to improve Lobe, and the feedback loop of how to improve your model is one of the most important ones for us.
The app is beautifully done. I'm really impressed by how well it works given the knobs available.
However, I tried to train it to recognize some images of characters from an anime (so a little different from facial recognition), and I managed to break the model: achieving 64% error with a significant number of examples per class. I think one downside is that Lobe doesn't expose how potentially overconfident the model is. I would love the ability to take the existing model and test it on a new image that I can import into the app.
EDIT: I would love to see the following in a future version:
1. The percentages associated with each image per class. I see that an image was misclassified, but did it at least include my desired class in its top 5 predicted classes?
2. Test the model on unlabeled inputs directly in the app to see how well the model might generalize. I would like to see a "Test" tab on the left once training is complete.
3. View other metrics of model quality, like F1 score, and training details, like cross-validation partitions, in the app somehow.
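For point 1, the percentages I mean are just the softmax over the per-class scores, plus a top-k check; a quick sketch (the labels and scores here are made up):

```python
import math

def class_percentages(scores, labels):
    # Softmax the raw per-class scores into percentages that sum to 100.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return {label: 100.0 * e / total for label, e in zip(labels, exps)}

def in_top_k(percentages, true_label, k=5):
    # Was the desired class at least among the top-k predicted classes?
    ranked = sorted(percentages, key=percentages.get, reverse=True)
    return true_label in ranked[:k]
```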
Teachable Machine is a good way to start! Lobe tries to give you more: you can continuously work on your model by adding more images, changing the number of labels, and even re-labeling a bunch of your images while a custom machine learning model trains in the background. It also lets you analyze your results in real time and test your model while giving it feedback, so the loop that makes your model better is continuously happening. I love that about Lobe.
One of the things it shows on the main page says "train an app to count reps" while a lady is doing physical exercises.
This is next to ridiculous. I don't need an app or any assistance in counting my reps. I can do that myself. That's easy.
What I really dream of is an app that tells me my mistakes in technique/posture for each particular exercise. I don't even mind putting on a funny costume or some motion sensors to make its job easier.
Hey qwerty456127! We’ve actually had users build models like this in the past: because of some respiratory tracking they needed to do during their exercises, they couldn't concentrate on counting, so an automated system proved useful for them.
On the other comment, yes! The app you are describing sounds really interesting, and it is something that could be built using image classification; you just need the right images and camera setup!
https://gist.github.com/YashasSamaga/e2b19a6807a13046e399f4b... (download links for yolov4.weights is at https://github.com/AlexeyAB/darknet)
Using this, you will be able to detect if a cat is present in your image.
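Once you've parsed the YOLO output into `(class_name, confidence, box)` tuples, filtering for the cat is easy. A rough sketch (the counter region and detections here are made up, and boxes are assumed to carry their center point, as YOLO typically reports):

```python
def cat_detections(detections, min_conf=0.5):
    # Keep only confident "cat" hits from (class_name, confidence, box)
    # tuples, where box is (cx, cy, w, h) with a center point.
    return [d for d in detections if d[0] == "cat" and d[1] >= min_conf]

def cat_on_counter(detections, counter, min_conf=0.5):
    # True if any confident cat's box center falls inside the counter
    # rectangle (x0, y0, x1, y1) -- i.e. time to fire the air jet.
    x0, y0, x1, y1 = counter
    return any(x0 <= cx <= x1 and y0 <= cy <= y1
               for _, _, (cx, cy, _, _) in cat_detections(detections, min_conf))
```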
https://www.youtube.com/watch?v=QPgqfnKG_T4
However, they do have open-sourced bootstrap apps here: https://github.com/lobe
Love the info site design.
We specialize in tabular data and are building a pipeline-based approach for creating and serving models.
https://dspace.mit.edu/handle/1721.1/6125
More info on AutoML: https://cloud.google.com/automl
Boo hoo, I'm running Linux on my desktop...
Both apps are great!