Open source coffee sorter project

Ah yes, I did not yet update the time plan.

In reality, we worked on that between January and April but got stuck on the image classifier. I now think that neural networks are not good at sorting something based on “fuzzy spots of color”; they want to see edges and clear lines. So maybe creating color histograms from the bean images and running a classifier on those would help, but we have not tried that yet.
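
To make that idea concrete, here is a minimal sketch of the color-histogram approach, assuming (hypothetically) that the per-bean images are already sorted into good/ and bad/ folders – untested, just to show the rough shape:

```python
# Sketch of the colour-histogram idea: describe each bean by its HSV colour
# distribution and train a plain classifier on that, instead of a CNN on raw pixels.
# Assumes (hypothetically) per-bean JPEGs sorted into good/ and bad/ folders.
import glob
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def hsv_histogram(path, bins=(8, 8, 8)):
    """Flattened, normalised 3D HSV histogram of one bean image."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, bins, [0, 180, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

X, y = [], []
for label, folder in enumerate(["good", "bad"]):
    for path in glob.glob(f"{folder}/*.jpg"):
        X.append(hsv_histogram(path))
        y.append(label)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```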

If you want to do some experiments on your own, I could upload our dataset of coffee beans images at least.

Yes, that would be cool :slight_smile: How many beans per second (on the conveyor belt) did you process? Were the pictures too blurry? How did you manage consistent lighting?

Ok, I will upload the picture set then. It’s just the first training set for the image classifier though, not yet produced with a coffee sorter machine. We simply took photos of 20-30 beans at once (unordered), and I wrote a small script that splits these up into individual images with one bean each. For consistent lighting, the best trick was direct sun and a white translucent plastic bowl over the beans when taking a photo.
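
For illustration, a splitting script along those lines could look roughly like this with OpenCV – the file names, thresholds and margins are assumptions, and the original script may well work differently:

```python
# Rough sketch of a "split one photo of 20-30 beans into one image per bean" script,
# using OpenCV thresholding and contours (OpenCV 4 return values assumed).
import cv2

img = cv2.imread("beans_group_photo.jpg")          # hypothetical input file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Beans are darker than the white background, so invert after Otsu thresholding.
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove small specks

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
count = 0
for c in contours:
    if cv2.contourArea(c) < 500:                   # skip dust and noise
        continue
    x, y, w, h = cv2.boundingRect(c)
    pad = 10                                       # a little margin around each bean
    crop = img[max(y - pad, 0):y + h + pad, max(x - pad, 0):x + w + pad]
    cv2.imwrite(f"bean_{count:03d}.jpg", crop)
    count += 1
print(f"extracted {count} beans")
```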

How did you plan to sort them out when they were clumped together? Once you find that a bean is bad, would you also throw away its neighbours?

They should not be clumped together because the two conveyors and funnels in the current machine design will place the beans into a single line, with enough distance between them so that each can be removed individually.

Throwing out a whole group of beans together would also be an option, though. Commercial optical sorters do it similarly: they create “first rejects” in a first pass, which include some good beans. The first rejects are then fed into the machine again. This time, the beans are grouped differently, so the machine throws out only the bad ones (with a few exceptions, of course). These are then called the “final rejects”.
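
The effect of that recirculation trick is easy to see in a toy simulation; the “a bad bean takes its direct neighbours with it” ejection model below is a deliberate simplification, not how a real sorter works:

```python
# Toy simulation of the two-pass "first rejects / final rejects" scheme:
# pass 1 ejects every bad bean plus its immediate neighbours on the line,
# pass 2 re-runs only the first rejects, which are now grouped differently.
import random

random.seed(1)
beans = ["bad" if random.random() < 0.05 else "good" for _ in range(10_000)]

def sort_pass(line):
    keep, rejects = [], []
    for i, bean in enumerate(line):
        # eject a bean if it, or a direct neighbour, looks bad
        neighbourhood = line[max(i - 1, 0):i + 2]
        (rejects if "bad" in neighbourhood else keep).append(bean)
    return keep, rejects

accepted, first_rejects = sort_pass(beans)
random.shuffle(first_rejects)                      # regrouped for the second pass
rescued, final_rejects = sort_pass(first_rejects)

print("good beans lost in final rejects:", final_rejects.count("good"))
print("bad beans kept by mistake       :", (accepted + rescued).count("bad"))
```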

Ok then @xixli, I have just converted and uploaded our coffee bean images in a manageable size. They are in our repo in the training-data-jpg directory, and to download them you can either git clone the whole repository or download it as a ZIP archive. Happy experimenting! :slight_smile:
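
In case it helps, a quick way to loop over the downloaded images could look like this – the directory name is from above, but the local path and whether it contains class subfolders are assumptions:

```python
# Walk the downloaded training-data-jpg directory and peek at the images.
from pathlib import Path
from PIL import Image

root = Path("coffee-sorter-repo/training-data-jpg")   # hypothetical local clone path
images = sorted(root.rglob("*.jpg"))
print(f"found {len(images)} bean images")

sizes = {Image.open(p).size for p in images[:100]}    # sample the first hundred
print("image sizes in the sample:", sizes)
```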

2 Likes

Thanks for your explanation. I will report your request to our company.
By the way, we have some mini color sorter videos on YouTube; you could check them for your reference.

MINI COLOR SORTER WORKING VIDEO

This video is for sorting rice. We also have a video for sorting green coffee. If you need it, I could send it to you for your reference.

2 Likes

Hello,
May I suggest a thin sheet of glass at an angle, so the beans can slide down and show both sides to the cameras?
Free fall is another option: analyze the bean at the beginning of the fall, when its speed is low.
A lot cheaper than conveyor belts, and faster than flipping the beans over to see the other side (or trying to).
I really hope this project is a success.

1 Like

Oh, that’s a great idea, thank you! I have repeatedly thought about some alternatives, which now seem inferior:

  • a setup with mirrors – but they still can’t see under a bean, just its top and sides
  • a setup with a transparent conveyor – but that will accumulate dust, and a material that is flexible, optically clear and does not distort the picture is difficult to find

I think free-fall analysis is more suitable for industrial-scale machines that can guarantee a maximum processing time for the pictures – because when analyzing in free fall, without conveyors, the machine needs to rely on timing alone to kick the right beans into the right destinations.
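
To put rough numbers on that timing constraint, here is a back-of-the-envelope free-fall calculation; both distances are assumptions, just to show the orders of magnitude involved:

```python
# How long after the camera sees a bean does it pass the ejector nozzle in free fall?
import math

g = 9.81                 # m/s^2
drop_to_camera = 0.05    # m of fall before the bean is photographed (assumed)
camera_to_nozzle = 0.15  # m between camera line and ejector nozzle (assumed)

def fall_time(distance):
    """Time to free-fall a given distance from rest."""
    return math.sqrt(2 * distance / g)

t_camera = fall_time(drop_to_camera)
t_nozzle = fall_time(drop_to_camera + camera_to_nozzle)

print(f"time from photo to nozzle: {(t_nozzle - t_camera) * 1000:.0f} ms")
print(f"bean speed at the nozzle : {g * t_nozzle:.2f} m/s")
```

With these (assumed) distances there would be roughly 100 ms to classify the picture and fire the right air valve – which is exactly why a guaranteed maximum processing time matters there.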

I’m so happy to be of any help.
I really hate sorting bad beans (read: I hate having to convince my wife to do it). :smile:
If you need a financial push, I can pre-buy my $300 unit right now. :smiley:
I’m Romanian, not a native English speaker, so any mistakes are unintentional.

1 Like

Just wanted to say hi, wow and thanks! This looks like an AWESOME project! My wife has a small coffee roasting company, and I can confirm that much of her time is spent sorting beans. I have very basic coding knowledge (Python, HTML, PHP etc.) and access to 3D printers and Raspberry Pis. I would love to help out in any way possible with my limited skillset. Perhaps some testing of software, as we also have a stock of green beans? Hope things are still progressing with this project :slight_smile:

2 Likes

Hello Neil, welcome here and thank you for the motivating comment!

Where we got stuck (for the moment) is creating a reliable image classifier to distinguish good and bad beans. Standard deep-learning approaches with pre-trained networks are too focused on recognizing shapes and edges – fuzzy areas of color, as on green beans, do not sit well with them; we tried that.

But there is a lot of other software that could help, including applications of OpenCV. This is not necessarily complex coding, but you need to have the right idea. Python is a good choice of language for that, actually. So, if you want to give it a try :slight_smile: (We have a repo with training and testing images of individual green beans, see here.)
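
If you want a simple, concrete “right idea” style starting point with OpenCV, something like counting off-colour pixels per bean might do – the HSV bounds and the 10% threshold below are pure guesses that would need tuning on the real images:

```python
# Flag a bean as suspicious if too many of its pixels fall outside a
# "healthy green-ish" HSV range. In practice you would first mask out the
# background so that white/bright pixels do not count as defects.
import cv2
import numpy as np

def defect_ratio(path):
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # very rough "healthy" range in OpenCV's HSV scale (H runs 0-179)
    healthy = cv2.inRange(hsv, np.array([30, 30, 40]), np.array([90, 255, 255]))
    return 1.0 - cv2.countNonZero(healthy) / healthy.size

ratio = defect_ratio("bean_001.jpg")               # hypothetical file name
print("suspicious" if ratio > 0.10 else "looks ok", f"({ratio:.0%} off-colour pixels)")
```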

Once we have a working classifier algorithm, I promise to finish a machine and publish its designs under an open licence. A crowdfunding campaign for that would then be within reach. Right now, though, I’m a bit stuck, partly because I don’t have much time to spend on this at the moment.

1 Like

Hey Matthias,

Sorry to hear things are difficult - I guess that’s what proves it’s a worthwhile project!

I’ve started to slowly look through the documentation on opencv and am finding it interesting and not so complex that my brain hurts! I’m still nowhere near skilled enough to even think about how this could work for this project, but I’ll keep learning when I get the chance and see if any eureka moments pop up later on!

Fingers crossed we manage to pull something together between us all. Come on internet!

1 Like

Hi. I saw your wonderful project and totally agree with your concept.
I am making an optical green bean sorter as a personal project, using OpenCV, TensorFlow and a Jetson Nano.
I will show my project at Maker Faire Tokyo 2019 on 3-4 August.
Would you like to come to MFT2019, if you are interested and have a chance to go to Japan?
See the website >> IRU KIKAI | Maker Faire Tokyo 2019 | Make: Japan

4 Likes

Hello @toshota, and many thanks for sharing this wonderful project :slight_smile:

Congratulations especially on making the green bean sorting work with TensorFlow! I had tried the same but with a pre-trained network (Inception v3, trained on ImageNet) and it did not work at all. Probably because it knew too much about all kinds of objects like cars, balls, elephants etc. … which interfered with the bean sorting task, as that was a very different thing to do …

I now think I should try your TensorFlow architecture on our bean image dataset and see what happens.
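
For reference, a small TensorFlow/Keras network trained from scratch (rather than a big pre-trained ImageNet model) could look roughly like this – the good/ and bad/ subfolder layout under training-data-jpg is an assumption, and nothing here has been tuned:

```python
# Small CNN trained from scratch on the per-bean crops (TensorFlow 2.x Keras API).
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "training-data-jpg", validation_split=0.2, subset="training",
    seed=42, image_size=(96, 96), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "training-data-jpg", validation_split=0.2, subset="validation",
    seed=42, image_size=(96, 96), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # good vs. bad
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=15)
```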

While I can’t make it to Japan for Maker Faire Tokyo, I’m really interested in updates from your project. And if it’s an open source project, I’d be happy to contribute in some way.

Hi Matthias,

Thank you for your comment. Unfortunately, it is not an open-source project, but you are welcome to reuse my mechanical ideas. I made a YouTube video of my sorter. Please check it out!

The software works well, but the picking unit does not yet work 100% reliably. It has some vibration problems on the conveyor so far.

2 Likes

This is such a lovely machine! It’s great! :green_heart:

Your bean-flipping wheel is really creative, actually. I had thought about this a lot as well, and so far the best idea I had was a chute made from glass, imaging the beans while they slide down it. It seems that the crumpling of the conveyor introduced by the bean-flipping wheel causes some mechanical issues with the ejection mechanism later on … but of course that’s just a small issue, of the kind any new machine has in its first iteration.

Hi,

I used your dataset and the fastai library for transfer learning to create a more accurate model. Feel free to use the code or contribute in any way.
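
For anyone curious what such a fastai transfer-learning run roughly looks like, here is a minimal sketch using a recent fastai (v2) API – this is not the actual notebook, and the one-subfolder-per-class layout under training-data-jpg is an assumption:

```python
# Minimal fastai transfer-learning sketch: fine-tune an ImageNet-pre-trained
# ResNet on the per-bean images. Folder layout is assumed, not taken from the repo.
from fastai.vision.all import *

path = Path("training-data-jpg")
dls = ImageDataLoaders.from_folder(path, valid_pct=0.2, seed=42,
                                   item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=accuracy)
learn.fine_tune(4)            # a few epochs of transfer learning
learn.show_results()
```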

1 Like

Hello @cj23, that’s great news! And thanks so much for registering here and telling us about your success :slight_smile: I’d be interested to know your final accuracy when training the model! A few percentage points lower than optimum is no big deal … the problem I had was that most of my models did not even converge, and if they did, they would mostly not go higher than 82% accuracy.

Edit: Just found it in the Python notebook file: 99.977%. Whoa! :dizzy_face: :blush:

Also @natalia_skoczylas: look what happened with your made-in-Morocco coffee bean images!

3 Likes

Thanks!
Feel free to ask any questions you may have.
Would this be enough to put the project into production? Are there any other challenges you face?