Experimental Process with Transfer Learning

Abstract

This week I learned about transfer learning. The goal was to reach an accuracy of 87% or higher. I met that target with an InceptionResNetV2 model trained with the Adam optimizer, reaching a training accuracy of 91.57% and a validation accuracy of 91.04%.

Introduction

For transfer learning, I had to take an existing model and modify it to return the predictions I needed for my data. I couldn't just choose any model; it needed to reach an accuracy of 87% or higher.
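A minimal sketch of the pattern looks something like this: freeze a pre-trained backbone and attach a new classification head for the target task. The 96x96 input size and the single dense layer here are illustrative choices, not necessarily the exact configuration I used.

```python
from tensorflow import keras

# Load the pre-trained backbone without its original ImageNet classifier.
base = keras.applications.InceptionResNetV2(
    weights="imagenet",
    include_top=False,        # drop the 1000-class ImageNet head
    input_shape=(96, 96, 3),  # illustrative; this backbone needs at least 75x75
)
base.trainable = False        # freeze the transferred weights

# Attach a new head that fits the 10-class task described below.
inputs = keras.Input(shape=(32, 32, 3))    # raw CIFAR-10 images
x = keras.layers.Resizing(96, 96)(inputs)  # upscale to the backbone's input size
x = keras.applications.inception_resnet_v2.preprocess_input(x)  # scale to [-1, 1]
x = base(x, training=False)                # run the frozen backbone in inference mode
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(10, activation="softmax")(x)
model = keras.Model(inputs, outputs)
```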

Materials/Resources

In this project, I used the CIFAR-10 dataset, which comes with 50,000 training images spread across 10 classes, including frogs, dogs, and cars, plus another 10,000 images reserved for testing.
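Keras bundles the dataset, so loading it and confirming the split takes only a few lines:

```python
from tensorflow import keras

# CIFAR-10 ships with the 50,000/10,000 train/test split already made.
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
print(x_train.shape)  # (50000, 32, 32, 3) training images
print(x_test.shape)   # (10000, 32, 32, 3) test images
print(y_train[:5])    # integer labels 0-9, one per class
```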

I also used a few models from Keras Applications to test which worked best for the results I wanted. These ship with pre-trained weights, which makes the transfer learning process much easier to apply.
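For illustration, candidate backbones can be pulled from Keras Applications like this; the exact set of models I compared isn't listed above, so the names here are examples:

```python
from tensorflow import keras

# A few example backbones from Keras Applications, each downloadable
# with ImageNet weights; the list I actually compared may have differed.
candidates = {
    "InceptionResNetV2": keras.applications.InceptionResNetV2,
    "ResNet50": keras.applications.ResNet50,
    "MobileNetV2": keras.applications.MobileNetV2,
}

for name, build in candidates.items():
    backbone = build(weights="imagenet", include_top=False, input_shape=(96, 96, 3))
    print(f"{name}: {backbone.count_params():,} parameters")
```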

Google Colab was also a really useful resource, as it allowed me to train and test my models efficiently.

Methods

For this project, I picked a few different models from Keras Applications and ran them through the program. After running them, I found that InceptionResNetV2 was the best of the ones I chose. Running multiple models helped me understand why some work better than others and why certain layers matter, and Adam optimization helped out a lot. I also trained for only 4 epochs because I didn't want the model to overfit: training too long can make it memorize details of the training set and perform worse on new data.
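A minimal version of this training setup, assuming the model and data from the sketches above and Adam's default learning rate:

```python
# Compile with Adam; the default learning rate (1e-3) is an assumption.
model.compile(
    optimizer=keras.optimizers.Adam(),
    loss="sparse_categorical_crossentropy",  # CIFAR-10 labels are integers 0-9
    metrics=["accuracy"],
)

# Train for only 4 epochs to limit overfitting; batch size 64 is illustrative.
history = model.fit(
    x_train, y_train,
    validation_data=(x_test, y_test),
    epochs=4,
    batch_size=64,
)
```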

Results

As a result, I found that the InceptionResNetV2 model trained the best of the few I tested, with a training accuracy of 91.57% and a validation accuracy of 91.04%.

Figure: results from the InceptionResNetV2 model

Discussion

In the end, as previously mentioned, InceptionResNetV2 came out on top with over 90% accuracy, while the other models did not perform as well. Training took about an hour and a half, and I was surprised by the results compared to the others I have seen. If I had more time, I would likely have been able to fine-tune the model for even better results.
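For reference, that fine-tuning step would look roughly like this: unfreeze the backbone from the earlier sketch and continue training with a much smaller learning rate. The learning rate and epoch count here are guesses, not settings I validated.

```python
# Unfreeze the backbone so its weights can be gently adjusted.
base.trainable = True

# Recompile with a much lower learning rate to avoid destroying the
# pre-trained weights; 1e-5 is a common starting point, not a tuned value.
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# A couple of extra epochs is typical; the exact count is an assumption.
model.fit(
    x_train, y_train,
    validation_data=(x_test, y_test),
    epochs=2,
    batch_size=64,
)
```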
