How did you want to sort them out when they are clumped together? Once you find that some bean is bad, would you also throw away its neighbours?
They should not be clumped together because the two conveyors and funnels in the current machine design will place the beans into a single line, with enough distance between them so that each can be removed individually.
Throwing out the whole group of beans together would also be an option, though. Commercial optical sorters do it similarly: they create “first rejects” in a first pass, which includes some good beans. The first rejects are then fed to the machine again. This time, the beans will be grouped differently and so the machine will only throw out the bad ones (with a few exceptions of course). These are then called the “final rejects”.
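To make the two-pass idea concrete, here is a small simulation sketch. The group size, bad-bean rate and bean count are made up for illustration; the point is just that bad beans never escape, and the good beans caught in the first rejects get a second chance once the grouping is reshuffled:

```python
import random

def sort_pass(beans, group_size):
    """One optical-sorter pass: reject every group that contains a bad bean."""
    accepted, rejects = [], []
    for i in range(0, len(beans), group_size):
        group = beans[i:i + group_size]
        if any(b == "bad" for b in group):
            rejects.extend(group)   # good neighbours are rejected too
        else:
            accepted.extend(group)
    return accepted, rejects

random.seed(0)
beans = ["bad" if random.random() < 0.05 else "good" for _ in range(1000)]

# first pass: "first rejects" include some good beans
accepted, first_rejects = sort_pass(beans, group_size=4)

# feed the first rejects through again; the beans group differently now
random.shuffle(first_rejects)
recovered, final_rejects = sort_pass(first_rejects, group_size=4)
```

After the second pass, `accepted` plus `recovered` contain only good beans, while every bad bean ends up in `final_rejects` (together with a few unlucky good ones, as described above).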
Ok then @xixli, I have just converted and uploaded our coffee bean images in a manageable size. They are in our repo in the training-data-jpg directory; to download them, you can either git clone the whole repository or download it as a ZIP archive. Happy experimenting!
Thanks for your explanation. I will report your request to our company.
By the way, we have some mini color sorter videos on YouTube that you could check for reference.
One video shows rice sorting. We also have a video of green coffee sorting; if you need it, I can send it to you for reference.
May I suggest a thin shard of glass at a certain angle, so beans can slide and offer both sides to cameras?
Free fall is another option, and analyze it at the beginning of the fall, when the speed is low.
A lot cheaper than conveyor belts, and faster than flipping them over to see the other side (or trying to).
I really hope this project is a success.
Oh, that’s a great idea, thank you! I was thinking repeatedly about some alternatives, which seem inferior now:
- a setup with mirrors – but they still can’t see under a bean, just its top and sides
- a setup with a transparent conveyor – but that will accumulate dust, and also a material that is both flexible and optically clear and does not distort the picture is difficult to find
I think that’s suitable for more industrial-scale machines that can guarantee a maximum processing time for the pictures – because when analyzing in free fall and without conveyors, the machine needs to rely on timing to kick the right beans into the right destinations.
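To illustrate that timing constraint: if a bean is imaged near the top of its free fall and an air nozzle sits some distance below, the classification deadline follows from simple kinematics. The 20 cm camera-to-nozzle distance below is purely illustrative:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(distance_m, v0=0.0):
    """Time for a bean to fall a given distance, starting at speed v0."""
    # distance = v0*t + 0.5*G*t^2  ->  solve the quadratic for t
    return (-v0 + math.sqrt(v0**2 + 2 * G * distance_m)) / G

camera_to_nozzle = 0.20  # 20 cm between camera and air nozzle (illustrative)
deadline = fall_time(camera_to_nozzle)
print(f"classification must finish within {deadline * 1000:.0f} ms")
```

With these numbers the machine has roughly 200 ms from image capture to firing the nozzle, which is why a guaranteed maximum processing time matters in the free-fall design.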
I’m so happy to be of any help.
I really hate sorting bad beans (read: hate to convince my wife to do it).
If you need a financial push, I can pre-buy my $300 unit right now.
I’m Romanian, not a native English speaker, so any mistakes are unintentional.
Just wanted to say hi, wow and thanks! This looks like an AWESOME project! My wife has a small coffee roasting company, and I can confirm that much of her time is spent sorting beans. I have very basic coding knowledge (Python, HTML, PHP etc.) and access to 3D printers and Raspberry Pis. I would love to help out in any way possible with my limited skillset. Perhaps some testing of software, as we have a stock of green beans too? Hope things are still progressing with this project!
Hello Neil, welcome here and thank you for the motivating comment!
Where we got stuck (for the moment) is creating a reliable image classifier to distinguish good and bad beans. Normal deep learning approaches with pre-trained networks are too fixated on recognizing shapes and edges – fuzzy color areas, as in green beans, do not sit well with them. We tried that.
But there is a lot of other software that could help, including applications of OpenCV. This is not necessarily complex coding, but you need to have the right idea. Python is actually a good choice of language for that. So if you want to give it a try: we have a repo with training and testing images of individual green beans, see here.
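As a starting point that needs no deep learning at all, one can try simple per-channel color statistics as features: defects on a bean shift the mean color and raise the variance. This is only a toy baseline with synthetic stand-in images, not our actual classifier; with the real dataset you would load the JPEGs instead of generating arrays:

```python
import numpy as np

def color_features(img):
    """Per-channel mean and standard deviation over an RGB image (H, W, 3)."""
    px = img.reshape(-1, 3).astype(float)
    return np.concatenate([px.mean(axis=0), px.std(axis=0)])

# toy stand-ins: a uniformly green "good" bean vs one with a dark defect patch
good = np.full((32, 32, 3), (90, 160, 80), dtype=np.uint8)
bad = good.copy()
bad[8:16, 8:16] = (40, 30, 20)   # dark spot, e.g. insect damage

f_good = color_features(good)
f_bad = color_features(bad)
# the defect pulls the green mean down and raises the per-channel variance
```

Features like these could then feed a simple threshold or a classic classifier (k-NN, SVM) before reaching for neural networks.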
Once we have a working classifier algorithm, I promise to finish a machine and publish its designs under an open licence. A crowdfunding campaign then seems within reach. Right now, though, I’m a bit stuck, also because I don’t have much time to spend on this.
Sorry to hear things are being difficult - I guess that’s what proves it’s a worthwhile project!
I’ve started to slowly look through the documentation on opencv and am finding it interesting and not so complex that my brain hurts! I’m still nowhere near skilled enough to even think about how this could work for this project, but I’ll keep learning when I get the chance and see if any eureka moments pop up later on!
Fingers crossed we manage to pull something together between us all. Come on internet!
Hi. I saw your wonderful project and totally agree with your concept.
I am making an optical green bean sorter as a personal project, using OpenCV, TensorFlow and a Jetson Nano.
I will show my project at Maker Faire Tokyo 2019 on 3-4 August.
Would you like to come to MFT2019, if you are interested and have a chance to visit Japan?
See the website: https://makezine.jp/event/makers-mft2019/m0004/
Hello @toshota, and many thanks for sharing this wonderful project
Congratulations especially on making the green bean sorting work with TensorFlow! I had tried the same but with a pre-trained network (ImageNet v3) and it did not work at all. Probably because it knew too much about all kinds of objects (cars, balls, elephants etc.), which interfered with the bean sorting task, as that was a very different thing to do …
I think now I should try your TensorFlow architecture on our beans images dataset and see what happens.
While I can’t make it to Japan for Maker Fair Tokyo, I’m really interested in updates from your project. And if it’s an open source project, I’d be happy to contribute in some ways.
Thank you for your comment. Unfortunately, it is not an open-source project, but you are welcome to reuse my mechanical idea. I made a YouTube video of my sorter. Please check it out!
The software works well, but the picking unit does not work 100%. It still has some vibration problems on the conveyor.
This is such a lovely machine! It’s great!
Your bean-flipping wheel is really creative. I had thought about this a lot as well, and so far the best idea I had was a chute made from glass, imaging the beans while they slide down it. It seems that the crumpling of the conveyor introduced by the bean-flipping wheel causes some mechanical issues with the ejection mechanism later on … but of course that’s just a small issue, as any new machine will have in its first iteration.
I used your dataset and the fastai library for transfer learning to create a more accurate model. Feel free to use the code or contribute in any way.
Hello @cj23, that’s great news! And thanks so much for registering here and telling us about your success. I’d be interested to know your final accuracy when training the model! A few percentage points below optimum is no big deal … the problem I had was that most of my models did not even converge, and when they did, they would mostly not exceed 82% accuracy.
Edit: Just found it in the Python notebook file: 99.977%. Whoa!
Also @natalia_skoczylas: look what happened with your made-in-Morocco coffee bean images!
Feel free to ask any questions you may have.
Would this allow the project to move into production? Are there any other challenges you face?
Yes, that was the enabling action to bring this project back to life!
No big challenges left now, as far as I can see. Just some rather simple robotics control with an RPi, a camera, stepper motors and 3D-printed parts. I already have all the electronics around; my only challenge now is finding the time to get it all done. But since I promised above to restart the work if somebody figured out the image classifier, I’ll get it done eventually.
Amazing! Glad to know
No way!! This is an incredible result. I am so happy I had a chance to help with this! Well done @cj23 and Matt, of course <3