
Training AI with tensorflow.js

How to train an AI algorithm to beat you at rock/paper/scissors.

This post will show you how to train a Machine Learning algorithm in your browser, in real time, using your webcam and tensorflow.js!

Machine Learning (ML) is a subset of Artificial Intelligence (AI) that allows systems to learn and improve from experience, in an automated way, without being explicitly programmed. This works by writing an algorithm (with a library like tensorflow.js), then providing it with data that it can reference and ‘learn’ from. The type of algorithm dictates whether it learns in a ‘supervised’ way (a human categorises the information so that the AI knows how to identify it) or an ‘unsupervised’ way (the AI is fed raw data and discovers patterns in it without human involvement).

Tensorflow.js is a JavaScript framework for developing Machine Learning / Deep Learning algorithms. Once an algorithm is created, you can combine it with training data to generate a model that is ready to be used.
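As a minimal illustration of that idea (this is not the demo code, just the standard tensorflow.js ‘getting started’ pattern, assuming the @tensorflow/tfjs library has been loaded), you define an algorithm, feed it data, and get back a trained model:

// A tiny model that learns the relationship y = 2x from four example points.
const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
model.compile({ loss: 'meanSquaredError', optimizer: 'sgd' });

// Training data the algorithm can reference and 'learn' from.
const xs = tf.tensor2d([1, 2, 3, 4], [4, 1]);
const ys = tf.tensor2d([2, 4, 6, 8], [4, 1]);

// 'Training' is fitting the algorithm to the data, which produces a usable model.
model.fit(xs, ys, { epochs: 200 }).then(() => {
  model.predict(tf.tensor2d([5], [1, 1])).print(); // prints a value close to 10
});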

The code below shows my first experiment with tensorflow.js: an image classification Machine Learning algorithm that you can use to train and generate a model that categorises images from your webcam feed as rock, paper or scissors.

This is a type of ‘supervised’ learning, as you are actively training the Machine Learning algorithm. The specific supervised learning model used here is K Nearest Neighbours (KNN), which specialises in telling categories of information apart, depending on how a person trains the category recognition algorithm (in this case rock/paper/scissors).

In this example Tensorflow.js works in the browser on the front end. This means that all the data the algorithm processes stays on your device and is handled by its CPU / GPU. MobileNet is an open-source image classification model, used here as a base that lets you classify your webcam photos with tensorflow.js’s KNN classifier and train your own Machine Learning model.
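To make that concrete, here is a rough sketch of how those pieces fit together. It follows the standard tensorflow.js MobileNet + KNN pattern rather than the exact CodePen code, and the ‘webcam’ element id and function names are placeholders I’ve picked for illustration:

// Assumes @tensorflow/tfjs, @tensorflow-models/mobilenet and
// @tensorflow-models/knn-classifier have been loaded via script tags.
const classifier = knnClassifier.create(); // the KNN model you will train

async function setup() {
  // MobileNet is the pre-trained base: it turns an image into a set of features.
  const net = await mobilenet.load();

  // Wrap a <video> element so tensorflow.js can capture frames from the webcam.
  const webcamElement = document.getElementById('webcam');
  const webcam = await tf.data.webcam(webcamElement);

  return { net, webcam };
}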

I’ll probably write a more in-depth post about how Artificial Intelligence and Machine Learning work, but for now, let’s demo!!

Image Classification Demo

I’ve categorised this post as a ‘Code Playground’ as opposed to a tutorial, because I’ve embedded a CodePen into the post that will allow you to experiment with the machine learning algorithm (assuming you’re using a desktop device).

I’ve written instructions below that should allow anyone to have a play with the algorithm, regardless of technical level. If you’re extra curious, you can look through the code that makes the demo work; the complex Machine Learning model itself is not viewable, as it is referenced as a third-party library.

Please note that no images from your webcam feed are stored anywhere once you leave the demo on this page. This is because the machine learning model only exists (and is therefore trained) in the browser; once the browser is refreshed or closed, the model loses all its training data.

Me demonstrating a trained machine learning model, using the code shown below. If you are viewing this post on a mobile device, you won’t be able to follow the steps below, but this image should give you an understanding of what the algorithm can do.

How to use

  1. View this CodePen link on a desktop device, using the latest version of Chrome, Firefox or Safari.
  2. Make sure the ‘Result’ tab is viewable (if it is, then you will see a button on screen saying ‘Click here to start’)
  3. Click on the button that says ‘Click here to start’
  4. Click ‘Allow’ when the browser asks permission to access your webcam (if this does not display, ensure you are on the latest version of Chrome/Firefox/Safari/Edge)
  5. Three buttons should then appear at the bottom of the screen: “Add Rock”, “Add Paper”, “Add Scissors”
  6. To train the AI to recognise a rock gesture, clench your fist in front of the webcam and tap the “Add Rock” button – be sure to take multiple images from different angles so that the model has more information on what a rock gesture is (a sketch of what these buttons do behind the scenes is shown after these steps).
  7. Repeat the same process for paper and scissors gestures, by clicking the “Add Paper” and “Add Scissors” buttons respectively.
  8. As you move your hand in front of the webcam, you will notice that a prediction, along with the probability of the AI being correct, is shown above the webcam feed.
  9. The likelihood is that the prediction is going to be off, as it takes a lot of training data to make an AI predict accurately – and even then it will still sometimes make mistakes. To reduce these mistakes, keep training the model – have fun 😀
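For the curious, here is a simplified sketch of what the ‘Add Rock/Paper/Scissors’ buttons and the prediction loop do behind the scenes. It reuses the net, webcam and classifier objects from the earlier sketch; again, it follows the standard tensorflow.js MobileNet + KNN pattern rather than being the exact CodePen code, and the element ids are placeholders.

const CLASSES = ['Rock', 'Paper', 'Scissors'];

// Each button press grabs a webcam frame, converts it into MobileNet features,
// and stores those features in the KNN classifier under the chosen class (0, 1 or 2).
async function addExample(classId, net, webcam, classifier) {
  const img = await webcam.capture();
  const activation = net.infer(img, true); // 'true' asks MobileNet for its embedding
  classifier.addExample(activation, classId);
  img.dispose(); // free the frame's memory
}

// The prediction loop keeps classifying the live feed and shows the best guess,
// along with how confident the classifier is.
async function predictLoop(net, webcam, classifier, outputElement) {
  while (true) {
    if (classifier.getNumClasses() > 0) {
      const img = await webcam.capture();
      const result = await classifier.predictClass(net.infer(img, true));
      const confidence = Math.round(result.confidences[result.label] * 100);
      outputElement.innerText = `${CLASSES[result.label]} (${confidence}% sure)`;
      img.dispose();
    }
    await tf.nextFrame(); // yield to the browser before processing the next frame
  }
}

// Example wiring: an 'Add Rock' button stores the current frame as class 0.
// document.getElementById('add-rock')
//   .addEventListener('click', () => addExample(0, net, webcam, classifier));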
