Machine Learning (ML)

Are We the Masters of Machines or Are Machines the Masters of Us?

by Rickesh Bedia

Published 2017-04-15

Refresh your memory of our first steps here, with my blog on Artificial Intelligence.

Machine Learning

Machine Learning (ML) is the concept that a computer program can learn and adapt to new data without human intervention. It is a sub-field of Artificial Intelligence that keeps a computer's built-in algorithms current, regardless of changes in the worldwide economy.

The world is driven by data coming from a variety of sources and in a variety of forms. A huge amount of data, known as Big Data, is becoming available due to advancements in modern technology. Governments and companies use this data to gain insight into the minds of their citizens or consumers.

So, how do governments and companies extract the necessary and relevant information needed to inform them? They deploy Artificial Intelligence (AI) techniques across different industries to collate, process and distribute useful information. The main method used by AI for Big Data is Machine Learning.

So what is it about Machine Learning that's making it the hottest tech in Silicon Valley right now? Well, it's the ability to take on repetitive and mindless tasks and, from this, create thought-provoking processes.

A (Not So) Brief History

1642 - Mathematician Blaise Pascal designed one of the first mechanical adding machines. Known as the Pascaline, it used a system of gears and wheels to add and subtract. It was invented for arguably the most boring and mindless task of all: calculating taxes (no offense to accountants).

1694 - Leibniz invented a machine similar to the Pascaline, called the Step Reckoner, which was more accurate and could perform all four basic arithmetic operations. He also developed the binary number system, still used today.

1801 - Joseph Jacquard tackled the problem of storing data. He designed a weaving loom that was able to store data using metal cards punched with holes. A collection of these cards was coded to directly program the looms with consistent results.

1847 - George Boole created a set of operators (anachronistically named Boolean operators) that returned responses of true or false, yes or no, and could be represented in binary. Boolean operators have stood the test of time; maybe you even used them today!

1890 - Herman Hollerith created the first combined system of mechanical calculation and punch cards. It quickly calculated statistics gathered from millions of people and was named the Tabulating Machine. Starting to sound familiar?

1945 - The Mark 1, built by IBM, was one of the first electromechanical computers. It could store 72 numbers and perform complex multiplication in 6 seconds and division in 16 seconds. While this may seem slow compared to the sub-milliseconds of modern computers, it's still very fast compared to you and me.

1946 - The Electronic Numerical Integrator and Computer (ENIAC) was the first fully electronic computer. It was a thousand times faster than the Mark 1, weighed 30,000 kg and was 3 metres high and 24 metres wide. Imagine that sitting in your pocket!

1949 - The first step towards ML was based on Hebbian theory, a theory in neuroscience that proposes an explanation for how neurons in the brain adapt during the learning process. Hebbian learning looks for correlations between nodes in a recurrent neural network, memorizing any commonalities in the network and serving as a kind of memory.

1952 - The first self-learning programme was created by Arthur Samuel; it learned and improved at checkers as it played against itself and human opponents. It was able to see which moves formed winning strategies and then incorporate those strategies into its own play.

1957 - Frank Rosenblatt designed the Perceptron, a type of neural network, based on the network connecting together billions of cells in your brain. Instead of cells, the Perceptron connects points where simple decisions come together to solve complex problems.

1967 - Programs able to recognise patterns were designed based on an algorithm called Nearest Neighbour. When a new object is introduced, it is compared to the training set and classified as its nearest neighbour, i.e. the most similar object in memory.
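
To make that concrete, here is a minimal nearest-neighbour sketch in Python. The points and labels are made up purely for illustration; this is not the original 1967 implementation.

```python
import math

# Made-up training set of (point, label) pairs held "in memory".
training_set = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
                ((5.0, 5.2), "dog"), ((4.8, 5.5), "dog")]

def classify(new_point):
    # Find the stored example closest to the new object (Euclidean distance)
    # and give the new object that neighbour's label.
    _, nearest_label = min(training_set,
                           key=lambda example: math.dist(example[0], new_point))
    return nearest_label

print(classify((0.9, 1.1)))  # -> "cat"
```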

1970 - Seppo Linnainmaa published the general method for Automatic Differentiation (AD) of discrete connected networks of nested differentiable functions. This corresponds to the modern version of Backpropagation.

1979 - Stanford University students invented the Stanford Cart, which was able to navigate obstacles in a room.

1981 - Gerald DeJong (a.k.a. Mr EBL) introduced Explanation-Based Learning (EBL), in which prior knowledge of the world is provided through training examples. The programme analyses the training data, discards irrelevant information and forms a general rule.

1985 - Terry Sejnowski invents NetTalk, which learns to pronounce words the same way a baby does.

1990s - ML is applied to data mining, adaptive software and web applications. Machine Learning algorithms are improved using supervised learning (given input and output variables, an algorithm learns the mapping function from the input to the output) and unsupervised learning (given only input data, an algorithm models the underlying structure to learn more about the data).
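
As a toy illustration of the difference (all numbers below are made up): supervised learning fits a mapping from known inputs to known outputs, while unsupervised learning only has inputs and looks for structure in them.

```python
# Supervised learning: inputs AND outputs are given; learn the mapping x -> y.
# Here the "learning" is a simple least-squares estimate of the slope in y = w * x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # noisy outputs, roughly y = 2x
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
print(f"learned mapping: y is roughly {w:.2f} * x")

# Unsupervised learning: only inputs are given; model their underlying structure.
# Here we simply split the points into two groups either side of the mean.
points = [0.9, 1.1, 1.0, 8.9, 9.2, 9.0]
threshold = sum(points) / len(points)
print({"low": [p for p in points if p < threshold],
       "high": [p for p in points if p >= threshold]})
```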

1997 - IBM's Deep Blue beats Chess Grandmaster Garry Kasparov (if that title is not a reason to get really good at chess, I don't know what is!)

2000s - After the millennium, Adaptive Programming was brought to the forefront. Adaptive programs are capable of recognising patterns and learning from experience, as well as abstracting new information from current data. This optimizes efficiency and accuracy of processing and output: everything you could want in modern ML.

2006 - Geoffrey Hinton coins the term deep learning to explain new algorithms that let computers "see" and distinguish objects and text in images and videos.

2010 - The Microsoft Kinect can track 20 human features at a rate of 30 times per second, allowing people to interact with the computer via movements and gestures.

2011 - Google Brain is developed, and its deep neural network can learn to discover and categorize objects much the way a cat does.

2011 - IBM Watson wins $35,734 in a live game of Jeopardy!

2012 - Google's X Lab develops a machine learning algorithm that is able to autonomously browse YouTube videos to identify the videos that contain cats. Google really loves their cat videos!

2014 - Facebook develops DeepFace, a software algorithm that is able to recognize or verify individuals in photos, mimicking humans' own ability to do so.

2015 - Microsoft creates the Distributed Machine Learning Toolkit, which enables the efficient distribution of machine learning problems across multiple computers.

2015 - Over 3,000 AI and Robotics researchers, endorsed by Stephen Hawking, Elon Musk and Steve Wozniak (among many others), sign an open letter warning of the danger of autonomous weapons which select and engage targets without human intervention. (The Rise of the Terminators)

2016 - Google's AlphaGo, an artificial intelligence algorithm developed by DeepMind, beats a professional player at the Chinese board game Go, winning all five games. Go is considered the world's most complex board game and is many times harder than chess.

How does Machine Learning Work?

ML systems have 3 main parts:

  • Model: the system making predictions or identifications.
  • Parameters: the signals or factors used by the model to form its decisions.
  • Learner: the system that adjusts the parameters, and in turn the model, by looking at differences in predictions versus actual outcomes.

In ML, the Model is King. The model is initially given by a human and uses the parameters to make calculations. It will then use various mathematical equations to predict future outcomes.

Once the model is set, real-life information is entered. This allows us to compare the model's results with the actual results.

This initial set of data is called the "training set", as it is used by the machine to train itself and create a better model. The learner looks at the actual scores and sees how far they were from the model's results. Further mathematical computations and approximations are made to adjust the initial assumptions.

The system is run again with another data set. The real scores are compared against the model's results and hopefully they are closer. These results won't be perfect, so the learner again readjusts the parameters to reshape the model. Then another test data set is input and the process runs over and over. If successful, the model's results should get closer to the actual results with each iteration.

Of course, as we are modeling and adjusting, we will never be 100% correct. We can get to a stage where we have a high degree of confidence in the model, but this varies per person and per situation. In some cases, 95% confidence may be enough to convince us our model is correct. In other cases we may look for more than 99% confidence levels in our model.
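
To make the loop above concrete, here is a minimal sketch of the model / parameters / learner cycle, assuming a one-parameter linear model and made-up training batches. Every name and number is illustrative; real systems use far richer models, but the shape of the loop is the same.

```python
# Made-up batches of (input, actual outcome) pairs; the underlying relationship is roughly y = 3x.
training_batches = [
    [(1.0, 3.1), (2.0, 5.9)],
    [(3.0, 9.2), (4.0, 11.8)],
    [(5.0, 15.1), (6.0, 17.9)],
]

weight = 1.0          # the parameter; the initial model is a human guess: y = 1.0 * x
learning_rate = 0.05  # how aggressively the learner adjusts the parameter

def model(x):
    """The model: turns an input into a prediction using the current parameter."""
    return weight * x

for batch in training_batches:               # each new data set reshapes the model
    for x, actual in batch:
        prediction = model(x)
        error = prediction - actual          # the learner compares prediction vs actual
        weight -= learning_rate * error * x  # and nudges the parameter to shrink the gap
    print(f"after this batch the model is y = {weight:.2f} * x")
```

On each pass the predictions get closer to the actual results, which is exactly the "hopefully they are closer" behaviour described above; once the gap is small enough for our purposes, we have reached the confidence level we were looking for.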

Examples and Companies Using Machine Learning

  1. Data Security
    • Predicting malware in files
    • Reporting anomalies in how data is accessed
  2. Financial Trading
  3. Market Personalization
    • Companies want to understand their consumers, to serve them better
  4. Money Laundering
    • PayPal uses ML to fight money laundering, distinguishing between legitimate and fraudulent transactions.
  5. Recommendations
    • Netflix and Amazon use ML to analyse your activity and compare it against millions of other users to determine what you might buy or watch
  6. Online Search
    • Google have the largest data store in the world. Every time you type in the Google search bar, a program watches how you respond to the results: which links you click on, and whether you succumb to desperation and go into the unknown (the mysterious second page of results).
  7. Natural Language Processing (NLP)
    • ML algorithms can be used for customer service, quickly routing customers to where they want to go. NLP has also been used to turn incomprehensible legal contracts into layman's terms and to sort through large volumes of documents for information
  8. Smart Cars
    • We mentioned self-driving smart cars in AI, but ML can be used by smart cars to learn about their driver and the environment. It could adjust internal settings such as temperature, radio, seat position, based on the driver to give you the best possible driving experience. It may also report problems, drive itself and offer advice on traffic and road conditions by potentially talking to other cars over the internet.

Other uses include:

  • Email: Identifying and deleting Spam emails
  • Predictions: Emergency Room wait times, Strokes, Seizures or Heart Failure
  • Uber: Predicting your travel habits and improving maps

Issues

We can break down issues in ML into two categories: technical issues and ethical issues.

Technical Issues

  1. Specialised Learning vs General Learning
    • Algorithms programmed within the constraints of a programming language versus general learning based on mathematics
  2. Bayesians vs Frequentists
    • Bayesians focus on subjective beliefs (probabilities), frequentists focus on the measurement or calculation of objective quantities (universal laws).
  3. Optimization
    • Convex Optimization and Convex Objective Functions
  4. Science vs Engineering
    • Learning Algorithms vs Hardware and the machine itself

Ethical Issues

  1. Unemployment as ML machines take over menial jobs
  2. Inequality: how do we distribute the wealth created by machines?
  3. How is our humanity affected by interactions with machines?
  4. Protection against unforeseen consequences. Think of a genie in a bottle that can grant your wildest dreams but bring terrible consequences (Cartoon Explainer)
  5. How do we control such a complex intelligent system?

For homework, watch Person of Interest. As well as just being a good programme, it highlights the (mis-)use of an intelligent AI, the consequences that come with having a complex system and the potential power it can give the user.

The Future is Bright, the Future is ML

What is the future of ML? That's a very open question like "Where do you see yourself in 5 years?" If we want to answer this question we need a computer, an awesome algorithm and a Big Data set. Do you think it would be too "on the nose" to use Machine Learning to predict the future of Machine Learning?

Potential future uses of ML include more precise recommendations and adverts, real-time speech translation, detailed feedback for fitness tracking applications, and longer battery life through automated allocation of system resources to applications.

Due to Machine Learning, the future is going to be automated, personalised, cognitive and easier to predict.

We follow up by going deep into learning ... with Deep Learning.
