General

The Intelligence of Machine Learning - Part 2

Intcore Team
Oct 10, 2019
3 min read
89 views

A couple of months ago, we started a topic about Machine Learning. We specifically talked about linear regression algorithms and how linear regression can predict the price of a house based on previous housing data.

You can find more about this here: “The Intelligence of Machine Learning - Part 1”.

Moving on to the next thing we’re going to talk about: detecting spam emails. We want something that will tell whether an email is spam or not. How do we do this? In order to do so, we need previous data (a training set). Suppose, for example, we have a hundred emails that we have already looked at; out of these hundred emails, we have flagged some of them as spam, and the rest are not.

We then think about features that spam emails are likely to display and analyze those features. One such feature may be containing the word “Cheap”.

Let's assume that we have 25 spam emails and 75 non-spam emails. By searching those emails for the word “Cheap”, we find that 20 spam emails contain that word, and 5 non-spam emails contain it as well. So, if an email contains the word “Cheap”, the probability of it being spam is the number of spam emails containing the word “Cheap” divided by the total number of emails containing the word “Cheap”.

Applying this to our situation, we find that the probability of an email containing the word “Cheap” being spam is 20 / (20 + 5) = 80%. So, we can associate this feature (contains the word “Cheap”) with a probability of 80% and use it to flag future messages as spam or not.
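The arithmetic above can be sketched in a few lines of Python (the variable names are just for illustration):

```python
# Counts from the example: 25 spam and 75 non-spam emails in total
spam_with_cheap = 20   # spam emails that contain the word "Cheap"
ham_with_cheap = 5     # non-spam emails that contain the word "Cheap"

# Probability that an email containing "Cheap" is spam
p_spam_given_cheap = spam_with_cheap / (spam_with_cheap + ham_with_cheap)
print(p_spam_given_cheap)  # 0.8
```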

Based on the previous example, we can also look at other features that indicate a spam email, such as missing titles, spelling mistakes, etc.

This approach is known as the “Naive Bayes Algorithm” or “Naive Bayes Classifier”.

Recommending Applications

Gender | Age | Application
Male   | 15  | Pokemon Go
Female | 25  | HaYaChat
Male   | 32  | HaYaChat
Female | 12  | Pokemon Go
Male   | 17  | Pokemon Go

Given the above table, it seems that age is the more decisive feature for predicting what users will download. As a simple demonstration, we can use this algorithm:

If the user's age < 20, then recommend Pokemon Go.

If the user's age > 20, then recommend HaYaChat.
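This one-question decision tree can be written as a small Python function (a minimal sketch; the app names come from the table above):

```python
def recommend(age):
    # A single split on age, learned from the table above
    if age < 20:
        return "Pokemon Go"
    return "HaYaChat"

print(recommend(15))  # Pokemon Go
print(recommend(25))  # HaYaChat
```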

What we end up with here is called a “Decision Tree”. The decisions are given by the questions we ask, and these decision trees are built from data. Now, when we have a new user, we can run them through the decision tree and recommend whatever app the tree suggests.

To wrap things up...

This time we talked about the Naive Bayes algorithm and the decision tree algorithm with a couple of examples. Next time, we’ll talk about logistic regression and its uses. Till then, see you.
