
A machine learning model is a bit like our baby, and its learning process is much like how children learn. Children observe their surroundings and learn from what they observe.
Let us understand this with a small story. John, our hero, is a class 12 student. He is good at studying and reads the prescribed textbook thoroughly. Class 12 ends with the board exam, so the school conducts a series of revision tests before the students appear for it. You can therefore quantify what John has learned using his test marks during the academic year: the higher his marks, the better his understanding of the subject. If his preparation is insufficient, he cannot score well. However, if John wants to improve his score, he can revise the subject and take the test again.
These tests help him assess his understanding of the subject and prepare well for the final board exam. In addition, the teachers consciously ensure that the internal test question papers cover all the chapters in the textbook so that students can score good marks in the public exam.
John prepares well and finally appears for the public exam. In the exam, he comes across a few questions the textbook didn't cover. Can he answer them? Probably not, right?
Those questions are from topics outside the syllabus, which means John never learned to answer them during his preparation and cannot answer them now. Makes sense? Okay, why are we talking about this here? Because this is exactly what happens in machine learning. Here, our trained model is John.
Defining the problem statement is like deciding what you want John to become in the future. For example, if you want John to be a journalist, he chooses history and journalism as his main subjects.
Then you prepare the dataset to train the model, which is like preparing the content/syllabus for John to study. Trying multiple algorithms is like trying various course notes, and cleansing the dataset is like curating those notes for John to learn from. Finally, training the model with numerous hyperparameter settings is like John going through a series of revision tests.
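To make the analogy a little more concrete, here is a minimal sketch in Python. The dataset, model and parameter grid are my own illustrative choices (scikit-learn, a synthetic dataset and a random forest), not a prescription; the point is simply the mapping between data preparation, cleansing, the hyperparameter "revision tests" and the held-out "board exam".

```python
# A minimal, illustrative sketch of the analogy: prepare the "syllabus" (dataset),
# curate it, run "revision tests" (a hyperparameter search with cross-validation),
# and keep a held-out "board exam" set the model never sees during training.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Prepare the syllabus: a synthetic binary-classification dataset.
X, y = make_classification(n_samples=2000, n_features=10, random_state=42)

# Cleanse the dataset, i.e. curate the notes (here, simply drop rows with missing values).
mask = ~np.isnan(X).any(axis=1)
X, y = X[mask], y[mask]

# Hold out the "board exam": data the model never sees while learning.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Revision tests: try several hyperparameter settings with cross-validation.
param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5, None]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X_train, y_train)

print("Best revision-test score  :", search.best_score_)
print("Board exam (held-out) score:", search.score(X_test, y_test))
```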
Once training is satisfactory, you deploy the model, just as John finally appears for the board exam.
Now your model is in production. Let us say it is a fraud detection model: given an input data point, it classifies a new customer as fraudulent or not. However, suppose you send it a data point with a different pattern, say one containing a new product category that wasn't part of the training dataset. The model treats that data point as "out of syllabus". It will still produce a prediction, but the prediction rests on assumptions rather than anything it learned, so it is essentially a guess; for a binary classifier, a guess is right only about 50% of the time. Therefore, as the number of data points that differ from the training dataset increases, the model's overall prediction accuracy decreases.
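Here is a small, hypothetical illustration of that "out of syllabus" effect, assuming a toy fraud classifier trained on invented product_category and amount columns (the names, data and model choice are mine, not from the story). With scikit-learn's OneHotEncoder(handle_unknown="ignore"), a category never seen in training is encoded as all zeros, so the model gets no signal from it and has to guess from whatever else it knows.

```python
# Hypothetical example: a fraud classifier that only ever saw the product
# categories "electronics" and "clothing" during training. Column names,
# data and model are invented purely for illustration.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

train = pd.DataFrame({
    "product_category": ["electronics", "clothing", "electronics", "clothing"] * 50,
    "amount": [900, 40, 850, 35] * 50,
    "is_fraud": [1, 0, 1, 0] * 50,
})

preprocess = ColumnTransformer(
    transformers=[
        # handle_unknown="ignore" encodes an unseen category as all zeros
        # instead of raising an error, so the model gets no signal from it.
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["product_category"]),
    ],
    remainder="passthrough",  # keep the numeric "amount" column as-is
)

model = Pipeline([("pre", preprocess), ("clf", LogisticRegression(max_iter=1000))])
model.fit(train[["product_category", "amount"]], train["is_fraud"])

# A product category the model never saw: "crypto_topup" is out of syllabus.
new_point = pd.DataFrame({"product_category": ["crypto_topup"], "amount": [400]})
print(model.predict_proba(new_point))  # the prediction now leans only on "amount"
```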
So, what to do now? Imagine John loses marks in his class 12 exam because of the out-of-syllabus questions, and the board offers no mark correction. His only option is to study those portions that were originally outside the syllabus and appear for the exam in the next attempt. That's what you must do with machine learning models as well: you retrain or rebuild the model, depending on how far the new data differs from the training dataset.
So, contrary to conventional software deployment, an ML model doesn't keep working forever, because the data usually changes over time. You will have to retrain the model, often manually, on an ongoing basis.
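One simple way to decide when that retraining is due (an assumed approach, not the only one) is to compare the distribution of incoming production data against the training data and flag drift when they diverge. The sketch below uses SciPy's two-sample Kolmogorov–Smirnov test on a single made-up numeric feature; the feature, threshold and data are illustrative.

```python
# A minimal drift check on one numeric feature: compare training data with
# recent production data and flag when a retrain looks necessary.
# The feature, threshold and synthetic data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_amounts = rng.normal(loc=100, scale=20, size=5000)    # what the model learned on
production_amounts = rng.normal(loc=140, scale=25, size=1000)  # what it sees today

statistic, p_value = ks_2samp(training_amounts, production_amounts)
if p_value < 0.01:  # the two samples very likely come from different distributions
    print(f"Drift detected (KS statistic = {statistic:.3f}): schedule a retrain.")
else:
    print("No significant drift: the current model can stay in production.")
```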
I hope you enjoyed reading this. If you found it interesting, please like, comment and share.
Views are personal.
Image Credit
Photo by George Milton: https://www.pexels.com/photo/anonymous-woman-reading-textbook-in-light-room-7034639/
Banner created using canva.com