Recently, I was challenged with a task that asked me to use neural networks to predict an image's orientation (upright, upside down, rotated left, or rotated right) and, using that prediction, rotate the image back to the correct position (upright), all of this in 24 hours!
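The correction step described above (map a predicted orientation class to the rotation that restores an upright image) can be sketched with PIL. The class-label names and rotation signs here are my assumptions, not the challenge's exact labels:

```python
from PIL import Image

# Hypothetical mapping from a predicted class label to the rotation
# (in degrees, PIL's counter-clockwise convention) that restores upright.
# The label names are assumptions; the challenge CSV may use different ones.
CORRECTION = {
    "upright": 0,
    "rotated_left": -90,   # was turned counter-clockwise, turn it back
    "rotated_right": 90,   # was turned clockwise, turn it back
    "upside_down": 180,
}

def fix_orientation(img: Image.Image, predicted_label: str) -> Image.Image:
    """Rotate an image back to upright given its predicted orientation."""
    # expand=True keeps the full frame when width and height swap.
    return img.rotate(CORRECTION[predicted_label], expand=True)
```

A 90-degree correction swaps width and height, which is a quick sanity check that the mapping is wired up correctly.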

My experience with neural networks up to that point was limited to Scikit-learn's Multi-layer Perceptron, and I had never tackled image processing. It was time to bang my head against the wall!

The training set was composed of approximately 50 thousand images, with their labels stored in a CSV that looked like this:

[Figure: sample rows of the label CSV, pairing each image filename with its orientation label]
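Since the CSV screenshot is not reproduced here, this is a minimal sketch of loading such a file with pandas; the `fn` and `label` column names are assumptions:

```python
import io
import pandas as pd

# Hypothetical CSV layout: the original screenshot is not reproduced,
# so the "fn" and "label" column names are assumptions.
sample_csv = io.StringIO(
    "fn,label\n"
    "img_0001.jpg,rotated_left\n"
    "img_0002.jpg,upright\n"
)
labels = pd.read_csv(sample_csv)

# Filename -> label lookup, used later to attach labels to image vectors.
label_by_file = dict(zip(labels["fn"], labels["label"]))
```

In the real pipeline you would pass the CSV path to `pd.read_csv` instead of an in-memory buffer.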

The challenge description hinted at using the CIFAR10 model and the corresponding Keras example. So, for a neural network beginner like myself, this was a huge leap.

The plus side of neural networks is that there is no need to reinvent the wheel: many pre-trained networks already exist for diverse purposes, and they are available to anyone.
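As a rough idea of the starting point, here is a small convolutional network in the spirit of the Keras CIFAR10 example, with the output layer resized to the four orientation classes. The input size and layer widths are my assumptions, not the challenge's exact architecture:

```python
from tensorflow.keras import layers, models

def build_orientation_model(input_shape=(64, 64, 3), num_classes=4):
    """Small CNN sketch in the spirit of the Keras CIFAR10 example,
    with the final layer sized for 4 orientation classes."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping the convolutional base for a pre-trained network (and only training a new 4-class head) is the "no need to reinvent the wheel" option mentioned above.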

With one problem solved, but still racing against time, I had to focus on the image processing.


Well, several hours, Stack Overflow posts, and image-processing test functions later, I was finally able to convert these images into vectors. However, I still needed to attach the label to each vector. At this point, desperation took over.
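The two steps above (images to arrays, then pairing each array with its CSV label) can be sketched like this. The target size, the normalisation to [0, 1], and the helper names are assumptions for illustration:

```python
import numpy as np
from PIL import Image

def image_to_array(img: Image.Image, size=(64, 64)) -> np.ndarray:
    """Resize an image and scale it into a float32 array in [0, 1].
    The 64x64 target size is an assumption, not the challenge's choice."""
    img = img.convert("RGB").resize(size)
    return np.asarray(img, dtype=np.float32) / 255.0

def build_dataset(images: dict, label_by_file: dict):
    """Pair each image array with its CSV label, keeping the two
    resulting arrays aligned by position."""
    X, y = [], []
    for fn, img in images.items():
        X.append(image_to_array(img))
        y.append(label_by_file[fn])
    return np.stack(X), np.array(y)
```

Keeping the feature array and the label array built in the same loop is what guarantees row `i` of `X` matches entry `i` of `y`.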

#image-processing #data-science #neural-networks

How to build an image automatic rotator in 24 hours