Title of the Paper: The Discipline of Machine Learning, by Tom M. Mitchell, published in July 2006.
Summary:
The author's (Tom M. Mitchell's) central idea in writing the paper is to explain what is meant by Machine Learning, to present some of its current applications, and to discuss a few research questions, including long-term ones, that help us understand the possible future state of Machine Learning.
According to the paper, Machine Learning focuses on how to get computers to program themselves from experience E in order to improve their performance P on a particular task T. One of the advantages of computers programming themselves is seen in speech recognition, where accuracy is higher if one trains the system instead of programming it by hand. Similar implications are seen in computer vision, bio-surveillance, robot control, accelerating empirical sciences, etc. Machine Learning algorithms beat hand-written programs in cases where the application is too complex for humans to design the program manually, and/or where the application needs to be customized to the user's environment after being developed in a factory setting.
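To make the E/P/T formulation concrete, here is a minimal sketch of my own (not from the paper): a toy nearest-centroid classifier whose performance P (accuracy) on a classification task T improves as its experience E (number of labeled training examples) grows.

    # Toy illustration of the E/P/T formulation (my own sketch, not from the paper).
    # Task T: classify a 1-D point as class 0 or class 1.
    # Experience E: labeled training examples.
    # Performance P: accuracy on held-out test points.
    import random

    random.seed(0)

    def make_example():
        # Synthetic data: class 0 centered near 0.0, class 1 centered near 1.0
        label = random.randint(0, 1)
        return random.gauss(float(label), 0.5), label

    def train(n_examples):
        # "Experience E": estimate each class centroid from n labeled examples
        sums, counts = [0.0, 0.0], [0, 0]
        for _ in range(n_examples):
            x, y = make_example()
            sums[y] += x
            counts[y] += 1
        return [s / max(c, 1) for s, c in zip(sums, counts)]

    def accuracy(centroids, n_test=2000):
        # "Performance P": fraction of fresh test points classified correctly
        correct = 0
        for _ in range(n_test):
            x, y = make_example()
            pred = min((0, 1), key=lambda k: abs(x - centroids[k]))
            correct += (pred == y)
        return correct / n_test

    for n in (2, 10, 100, 1000):  # performance improves with experience
        print("E = %4d examples -> P = %.3f" % (n, accuracy(train(n))))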
On the long-term perspective of Machine Learning, the author is interested in sharing questions such as: is it possible to build machine learning systems that continuously learn and improve their mechanisms, the way humans and animals do? On the other side, can the theories learned in Machine Learning be related back to how human and animal learning systems work? Tom also refers to designing a programming language that would support writing subroutines with the option of hand-coding them or having them be learned. One of the key findings highlighted was that learning in humans is more effective when multiple input modalities (such as vision, sound, touch) are used, and the same is true for Machine Learning.
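A minimal sketch of my own of that multi-modality point (not from the paper; the modality features are hypothetical): concatenating features from two noisy "modalities" into one vector often yields better accuracy than using either modality alone.

    # Early-fusion sketch (my own illustration; the features are synthetic).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    labels = rng.integers(0, 2, n)

    # Each modality carries a weak, noisy signal about the label
    vision = labels[:, None] + rng.normal(0, 2.0, (n, 3))  # e.g., image features
    sound = labels[:, None] + rng.normal(0, 2.0, (n, 3))   # e.g., audio features

    def centroid_accuracy(X, y):
        # Nearest-class-centroid classifier (evaluated on the same data for brevity)
        c0, c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
        pred = (np.linalg.norm(X - c1, axis=1) < np.linalg.norm(X - c0, axis=1)).astype(int)
        return (pred == y).mean()

    print("vision only:", centroid_accuracy(vision, labels))
    print("sound only :", centroid_accuracy(sound, labels))
    print("fused      :", centroid_accuracy(np.hstack([vision, sound]), labels))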
Finally, Tom ends his paper by asking readers to think about ethical questions related to privacy and the availability of data for Machine Learning, and he keeps the questions open for discussion, as some of them have a social policy component.
Interesting Ideas:
1)
On the question: To what degree can we have both data privacy and the benefits of data mining?
The idea of sending the algorithm to the hospital instead of collecting private data from the hospital was new and interesting to me. The reason it was interesting is that I used to think of bringing data to the algorithm, not vice versa :)
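Here is a minimal sketch of my own of how "sending the algorithm to the data" could look (the paper does not prescribe an implementation): each hospital computes a model update on its private records and shares only that update, which a coordinator averages, so raw data never leaves the hospital.

    # Sketch of "send the algorithm to the data" (my own illustration,
    # not Mitchell's design); the datasets below are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)

    def local_gradient(weights, X, y):
        # Logistic-regression gradient computed inside the hospital, on private data
        p = 1.0 / (1.0 + np.exp(-X @ weights))
        return X.T @ (p - y) / len(y)

    # Private datasets held by three hospitals (never pooled)
    true_w = np.array([1.5, -2.0])
    hospitals = []
    for _ in range(3):
        X = rng.normal(size=(200, 2))
        y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)
        hospitals.append((X, y))

    w = np.zeros(2)
    for _ in range(300):
        grads = [local_gradient(w, X, y) for X, y in hospitals]  # computed on-site
        w -= 0.5 * np.mean(grads, axis=0)  # coordinator sees only averaged updates

    print("learned weights:", w.round(2), "true weights:", true_w)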
2)
For learners that actively collect their own training data, what is the best strategy?
The example of a robot having to find its master's slippers, and what data-collection strategy would be optimal for the robot, was thought-provoking and interesting. The reason it was interesting is that so many things need to be considered while deciding on the best strategy, which became evident as I went through the following thought experiment of devising a strategy for the robot.
My thinking was that, for the robot to find the master's slippers, it would have to do what I would do to find them quickly. I would keep a picture in mind of what the master's slippers look like (under different lighting conditions). Then I would search the places where the master commonly keeps them (learning from experience and associating locations with probabilities). I would be more interested in searching the floor than the ceiling :) (the law of gravity, i.e., knowledge of physics). If I found the left slipper, I would know the right one is likely nearby, maybe under the bed (human behavior; this is where domain knowledge comes in :)). If still not found, I would go and look inside the cupboard or storeroom (I guess this is anomaly detection from past experience), and if the slippers are still not found, I would report a chance of robbery or theft, since slippers cannot walk away on their own (the ability to distinguish a self-moving object from an externally moved one). However, before reporting a robbery, I might analyze the scenario: is it a marriage occasion (for some religions in India), where there is a custom of the bride's sisters or friends hiding the groom's slippers as a part of having fun (social knowledge used)? (I mean shoes here; what if the robot has to search for shoes instead?) Even if slippers are found, how will the robot be sure they are not merely similar-looking ones (maybe the master's brother uses the same brand and size), and which algorithm will it use to distinguish between such look-alikes? The robot will have only a limited number of pictures of the slippers, taken at particular angles, light directions, and intensities; how will it transform the pixel data of those pictures and match them against a found slipper seen from a different angle, direction, and position?
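As a minimal sketch of my own of the "search where they are usually kept" step (not from the paper; the locations and habits are hypothetical): the robot keeps counts of where the slippers were found in the past and searches locations in decreasing order of that learned probability, updating the counts after each episode.

    # Experience-driven search order (my own illustration): the robot counts
    # where the slippers were found before and searches likely spots first.
    import random

    random.seed(2)

    locations = ["bedside", "sofa", "doorway", "bathroom", "ceiling"]
    found_counts = {loc: 1 for loc in locations}  # start with a uniform prior

    def true_location():
        # Hypothetical habit of the master: mostly bedside, sometimes sofa
        return random.choices(["bedside", "sofa", "doorway"], weights=[6, 3, 1])[0]

    total_steps = 0
    for episode in range(100):
        target = true_location()
        # Search in decreasing order of learned probability
        order = sorted(locations, key=lambda loc: -found_counts[loc])
        for steps, loc in enumerate(order, start=1):
            if loc == target:
                found_counts[loc] += 1  # learn from experience
                total_steps += steps
                break

    print("average rooms searched per episode:", total_steps / 100)
    print("learned counts:", found_counts)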
I was also thinking from a future perspective: what if the master is just playing a prank on the robot and has actually hidden the slippers in a remote location? The robot would have to start with emotion detection of the master, using advanced sensors and algorithms, probably with multiple modalities such as functional Near-Infrared Spectroscopy (fNIRS), electroencephalogram (EEG), video (face), and peripheral signals (respiration, cardiac rate). If the algorithm predicts a high probability of a prank by the master, then the robot could use other sensors, like functional magnetic resonance imaging (fMRI), to read the master's memory and find the remote location where he has hidden the slippers.
How those ideas relate to:
On the question: How can we transfer what is learned for one task to improve learning in other related tasks?
As mentioned in the paper, we might like to learn a family of related functions and apply them to two different cases; although there will be differences, we can leverage the commonalities. For example, suppose we have derived equations describing some relationship for one region, say the US; the same can be applied to Canada after accounting for differentiating factors that might influence the model, such as inflation, growth rate, seasons, etc.
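A minimal sketch of my own of that US-to-Canada idea (the numbers and variables are hypothetical): fit a line on plentiful US data, then keep the shared slope and re-estimate only the regional offset from a handful of Canadian samples.

    # Transfer sketch (my own illustration): reuse the commonality (slope),
    # re-fit only the differentiating factor (regional offset).
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical data: same underlying slope, different regional offset
    slope, us_offset, ca_offset = 2.0, 1.0, 3.5
    us_x = rng.normal(size=1000)
    us_y = slope * us_x + us_offset + rng.normal(0, 0.3, 1000)
    ca_x = rng.normal(size=10)  # only a few Canadian samples
    ca_y = slope * ca_x + ca_offset + rng.normal(0, 0.3, 10)

    # Learn slope and offset from the data-rich US region
    us_slope, us_off = np.polyfit(us_x, us_y, 1)

    # Transfer: keep the shared slope, estimate only the Canadian offset
    ca_off = np.mean(ca_y - us_slope * ca_x)

    # Compare with fitting Canada from scratch on the same 10 points
    ca_slope_scratch, ca_off_scratch = np.polyfit(ca_x, ca_y, 1)

    print("transferred model : y = %.2fx + %.2f" % (us_slope, ca_off))
    print("from-scratch model: y = %.2fx + %.2f" % (ca_slope_scratch, ca_off_scratch))
    print("true Canada model : y = %.2fx + %.2f" % (slope, ca_offset))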
Idea needing more clarity:
On the question: Can machine learning theories and algorithms help explain human learning?
Here Tom was referring to the finding that "reinforcement learning algorithms and theories predict surprisingly well the neural activity of dopaminergic neurons in animals during reward-based learning."
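For context, the quantity usually linked to dopamine activity is the temporal-difference (TD) error of reinforcement learning. Here is a minimal sketch of my own (not from the paper) showing how, with learning, the TD error at the reward shrinks while the value of the predictive cue grows, mirroring the classic recordings.

    # Minimal TD-learning sketch (my own illustration, not from the paper).
    # A cue state is always followed by a reward state that delivers reward 1.0.
    alpha, gamma = 0.2, 1.0
    value = {"cue": 0.0, "reward_state": 0.0}

    for trial in range(1, 31):
        # TD error at the cue: no reward yet, but the next state has value
        td_cue = 0.0 + gamma * value["reward_state"] - value["cue"]
        value["cue"] += alpha * td_cue
        # TD error at the reward state: reward 1.0, episode then ends (value 0)
        td_reward = 1.0 + gamma * 0.0 - value["reward_state"]
        value["reward_state"] += alpha * td_reward
        if trial in (1, 5, 30):
            # Early on the error fires at the reward; with learning it shifts to the cue
            print("trial %2d: TD error at reward = %.2f, cue value = %.2f"
                  % (trial, td_reward, value["cue"]))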
I understand that reinforcement learning algorithms were created based on how human beings behave, keeping cumulative reward as the goal. Is the author saying that the algorithms created for reinforcement learning are somewhat similar to the process observed in the neural activity of animals while those animals were performing reward-related activities? If I examine only the beginning and the end, is it fair to say that something we have learned from human behavior is also seen in animal behavior, as demonstrated by linking reinforcement learning algorithms to the neural activity of animals?
Is my understanding above close to what Tom is referring to, or am I thinking in some different direction?
*******************************Thank You*************************************