How to Build a Simple Image Recognition System with TensorFlow (Part 1)

This is not a general introduction to artificial intelligence, machine learning or deep learning. There are already lots of great articles covering these topics (for example, here or here).

And this is not a discussion about whether AI will enslave humankind or will merely steal all our jobs. You can find plenty of speculation and some premature fearmongering elsewhere.

Instead, this post is a detailed description of how to get started in machine learning by building a system that is (somewhat) able to recognize what it sees in an image.

I’m currently on a journey to learn about artificial intelligence and machine learning. And the way I learn best is by not only reading stuff, but by actually building things and gaining hands-on experience. And that’s what this post is about. I want to show you how you can build a system that performs a simple computer vision task: recognizing image content.

I don’t claim to be an expert. I’m still learning, and there is a lot to learn. I’m describing what I’ve been playing around with, and if it’s interesting or helpful to you, that’s great! If, on the other hand, you find mistakes or have suggestions for improvement, please let me know, so that I can learn from you.

You don’t need any prior experience with machine learning to be able to follow along. The example code is written in Python, so a basic knowledge of Python would be great, but knowledge of any other programming language is probably enough.

Why image recognition?

Image recognition is a great task for developing and testing machine learning approaches. Vision is arguably our most powerful sense and comes naturally to us humans. But how do we actually do it? How does the brain translate the image on our retina into a mental model of our surroundings? I don’t think anyone knows exactly.

The point is that it’s seemingly easy for us, so easy that we don’t even need to put any conscious effort into it, but difficult for computers (actually, it might not be that easy for us either; maybe we’re just not aware of how much work it is. More than half of our brain seems to be directly or indirectly involved in vision).

How can we get computers to do visual tasks when we don’t even know how we are doing it ourselves? That’s where machine learning comes into play. Instead of trying to come up with detailed step-by-step instructions on how to interpret images and translating that into a computer program, we’re letting the computer figure it out itself.

The goal of machine learning is to give computers the ability to do something without being explicitly told how to do it. We just provide some kind of general structure and give the computer the opportunity to learn from experience, similar to how we humans learn from experience too.

But before we start thinking about a full-blown solution to computer vision, let’s simplify the task somewhat and look at a specific sub-problem which is easier for us to handle.

Image classification and the CIFAR-10 dataset

We will try to solve a problem which is as simple and small as possible while still being difficult enough to teach us valuable lessons. All we want the computer to do is the following: when presented with an image (with specific image dimensions), our system should analyze it and assign a single label to it. It can choose from a fixed number of labels, each being a category describing the image’s content. Our goal is for our model to pick the correct category as often as possible. This task is called image classification.

We will use a standardized dataset called CIFAR-10. CIFAR-10 consists of 60,000 images. There are 10 different categories and 6,000 images per category. Each image has a size of only 32 by 32 pixels. The small size makes it sometimes difficult for us humans to recognize the correct category, but it simplifies things for our computer model and reduces the computational load required to analyze the images.

The way we input these images into our model is by feeding the model a whole bunch of numbers. Each pixel is described by three floating point numbers representing the red, green and blue values for this pixel. This results in 32 x 32 x 3 = 3,072 values for each image.
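Just to make that number concrete, here is a quick numpy sketch (a hypothetical example, not part of the code we’ll build later):

    import numpy as np

    # A hypothetical 32 x 32 RGB image: one float per color channel per pixel
    image = np.zeros((32, 32, 3), dtype=np.float32)

    # Flattened into the list of numbers the model will actually see
    flat = image.reshape(-1)
    print(flat.shape)  # (3072,)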

Apart from CIFAR-10, there are plenty of other image datasets which are commonly used in the computer vision community. Using standardized datasets serves two purposes. First, it is a lot of work to create such a dataset. You need to find the images, process them to fit your needs and label all of them individually. The second reason is that using the same dataset allows us to objectively compare different approaches with each other.

In addition, standardized image datasets have led to the creation of computer vision high score lists and competitions. The most famous competition is probably the Image-Net Competition, in which there are 1000 different categories to detect. 2012’s winner was an algorithm developed by Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton from the University of Toronto (technical paper) which dominated the competition and won by a huge margin. This was the first time the winning approach was using a convolutional neural network, which had a great impact on the research community. Convolutional neural networks are artificial neural networks loosely modeled after the visual cortex found in animals. This technique had been around for a while, but at that time most people did not yet see its potential to be useful. This changed after the 2012 Image-Net competition. Suddenly there was a lot of interest in neural networks and deep learning (deep learning is just the term used for solving machine learning problems with multi-layer neural networks). That event plays a big role in starting the deep learning boom of the last couple of years.

Supervised learning

How can we use the image dataset to get the computer to learn on its own? Even though the computer does the learning part by itself, we still have to tell it what to learn and how to do it. The way we do this is by specifying a general process of how the computer should evaluate images.

We’re defining a general mathematical model of how to get from input image to output label. The model’s concrete output for a specific image then depends not only on the image itself, but also on the model’s internal parameters. These parameters are not provided by us; instead, they are learned by the computer.

The whole thing turns out to be an optimization problem. We start by defining a model and supplying starting values for its parameters. Then we feed the image dataset with its known and correct labels to the model. That’s the training stage. During this phase the model repeatedly looks at the training data and keeps changing the values of its parameters. The goal is to find parameter values that result in the model’s output being correct as often as possible. This kind of training, in which the correct solution is used together with the input data, is called supervised learning. There is also unsupervised learning, in which the goal is to learn from input data for which no labels are available, but that’s beyond the scope of this post.

After the training has finished, the model’s parameter values don’t change anymore and the model can be used for classifying images which were not part of its training dataset.

TensorFlow

TensorFlow is an open source software library for machine learning, which was released by Google in 2015 and has quickly become one of the most popular machine learning libraries used by researchers and practitioners all over the world. We use it to do the numerical heavy lifting for our image classification model.

Building the model, a softmax classifier

The full code for this model is available on Github. To use it, you need to have the following installed:

  • Python (the code has been tested with Python 2.7, but Python 3.3+ should work as well, link to installation instructions)
  • TensorFlow (link to installation instructions)
  • CIFAR-10 dataset: Download the Python version of the dataset from https://www.cs.toronto.edu/~kriz/cifar.html or use the direct link to the compressed archive. Place the extracted cifar-10-batches-py/ directory in the directory where you are putting the Python source code, so that the path to the images is /path-to-your-python-source-code-files/cifar-10-batches-py/.

Alright, now we’re finally ready to go. Let’s look at the main file of our experiment, softmax.py, and analyze it line by line:

The future statements should be present in all TensorFlow Python files to ensure compatibility with both Python 2 and 3, according to the TensorFlow style guide.
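Concretely, that means starting the file with:

    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function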

Then we are importing TensorFlow, numpy for numerical calculations, and the time module. data_helpers.py contains functions that help with loading and preparing the dataset.
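The import block is presumably as straightforward as this sketch:

    import time

    import numpy as np
    import tensorflow as tf

    import data_helpers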

We start a timer to measure the runtime and define some parameters. I’ll talk about them later when we’re actually using them. Then we load the CIFAR-10 dataset. Since reading the data is not part of the core of what we’re doing, I put these functions into the separate data_helpers.py file, which basically just reads the files containing the dataset and puts the data in a data structure which is easy to handle for us.
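A sketch of that setup; the concrete parameter values here are my assumptions for illustration (only max_steps = 1000 is implied by the training run shown in the Results section), and data_sets is my name for the returned structure:

    beginTime = time.time()

    # Parameter definitions (values are illustrative assumptions)
    batch_size = 100
    learning_rate = 0.005
    max_steps = 1000

    # Load the CIFAR-10 images and labels into an easy-to-handle structure
    data_sets = data_helpers.load_data()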

One thing is important to mention though: load_data() splits the 60,000 images into two parts. The bigger part contains 50,000 images. This training set is what we use for training our model. The other 10,000 images are called the test set. Our model never gets to see those until the training is finished. Only then, when the model’s parameters can’t be changed anymore, do we use the test set as input to our model and measure the model’s performance on it.

This separation of training and testing data is very important. We wouldn’t know how well our model is able to make generalizations if it was exposed to the same dataset for training and for testing. In the worst case, imagine a model which exactly memorizes all the training data it sees. If we were to use the same data for testing it, the model would perform perfectly by just looking up the correct solution in its memory. But it would have no idea what to do with inputs which it hasn’t seen before.

This concept of a model learning the specific features of the training data and possibly neglecting the general features, which we would have preferred it to learn, is called overfitting. Overfitting and how to avoid it is a big issue in machine learning. More information about overfitting and why it is generally advisable to split the data into not only 2 but 3 different datasets can be found in this video (youtube mirror) (the video is part of Andrew Ng’s great free machine learning course on Coursera).

To get back to our code, load_data() returns a dictionary containing

  • images_train: the training dataset as an array of 50,000 by 3,072 (= 32 pixels x 32 pixels x 3 color channels) values.
  • labels_train: 50,000 labels for the training set (each a number between 0 and 9 representing which of the 10 classes the training image belongs to)
  • images_test: test set (10,000 by 3,072)
  • labels_test: 10,000 labels for the test set
  • classes: 10 text labels for translating the numerical class value into a word (0 for ‘plane’, 1 for ‘car’, etc.)

Now we can start building our model. The actual numerical computations are being handled by TensorFlow, which uses a fast and efficient C++ backend to do this. TensorFlow wants to avoid repeatedly switching between Python and C++ because that would slow down our calculations.

The common workflow is therefore to first define all the calculations we want to perform by building a so-called TensorFlow graph. During this stage no calculations are actually performed; we are merely setting the stage. Only afterwards do we run the calculations by providing input data and recording the results.

So let’s start defining our graph. We first describe what our input data for the TensorFlow graph looks like by creating placeholders. These placeholders do not contain any actual data; they just specify the input data’s type and shape.

For our model, we’re first defining a placeholder for the image data, which consists of floating point values (tf.float32). The shape argument defines the input dimensions. We will provide multiple images at the same time (we will talk about those batches later), but we want to stay flexible about how many images we actually provide. The first dimension of shape is therefore None, which means the dimension can be of any length. The second dimension is 3,072, the number of floating point values per image.

The placeholder for the class label information contains integer values (tf.int64), one value in the range from 0 to 9 per image. Since we’re not specifying how many images we’ll input, the shape argument is [None].
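A minimal sketch of these two placeholder definitions, assuming the TensorFlow 1.x API (labels_placeholder is named later in this post; images_placeholder is my naming choice to match it):

    # Input images: any number of rows, 3,072 float values each
    images_placeholder = tf.placeholder(tf.float32, shape=[None, 3072])

    # Class labels: one integer in the range [0, 9] per image
    labels_placeholder = tf.placeholder(tf.int64, shape=[None])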

weights and biases are the variables we want to optimize. But let’s talk about our model first.

Our input consists of 3,072 floating point numbers and the desired output is one of 10 different integer values. How do we get from 3,072 values to a single one? Let’s start at the back. Instead of a single integer value between 0 and 9, we could also look at 10 score values — one for each class — and then pick the class with the highest score. So our original question now turns into: How do we get from 3,072 values to 10?

The simple approach which we are taking is to look at each pixel individually. For each pixel (or more accurately each color channel for each pixel) and each possible class, we’re asking whether the pixel’s color increases or decreases the probability of that class.

Let’s say the first pixel is red. If images of cars often have a red first pixel, we want the score for car to increase. We achieve this by multiplying the pixel’s red color channel value with a positive number and adding that to the car-score. Accordingly, if horse images never or rarely have a red pixel at position 1, we want the horse-score to stay low or decrease. This means multiplying with a small or negative number and adding the result to the horse-score.

For each of the 10 classes we repeat this step for each pixel and sum up all 3,072 values to get a single overall score, a sum of our 3,072 pixel values weighted by the 3,072 parameter weights for that class. In the end we have 10 scores, one for each class. Then we just look at which score is the highest, and that’s our class label.

The notation for multiplying the pixel values with weight values and summing up the results can be drastically simplified by using matrix notation. Our image is represented by a 3,072-dimensional vector. If we multiply this vector with a 3,072 x 10 matrix of weights, the result is a 10-dimensional vector containing exactly the weighted sums we are interested in.

The actual values in the 3,072 x 10 matrix are our model parameters. If they are random/garbage our output will be random/garbage. That’s where the training data comes into play. By looking at the training data we want the model to figure out the parameter values by itself.
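For reference, here is a sketch of the two variable definitions the next paragraph refers to (again assuming the TF 1.x API):

    # 3,072 weights per class, all initialized to 0
    weights = tf.Variable(tf.zeros([3072, 10]))

    # One bias value per class, also starting at 0
    biases = tf.Variable(tf.zeros([10]))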

All we’re telling TensorFlow in the two lines of code shown above is that there is a 3,072 x 10 matrix of weight parameters, which are all set to 0 in the beginning. In addition, we’re defining a second parameter, a 10-dimensional vector containing the bias. The bias does not directly interact with the image data and is added to the weighted sums. The bias can be seen as a kind of starting point for our scores.

Think of an image which is totally black. All its pixel values would be 0, therefore all class scores would be 0 too, no matter what the weights matrix looks like. Having biases allows us to start with non-zero class scores.
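The prediction discussed next is then roughly a single line:

    # For each input image: a 10-dimensional vector of class scores
    logits = tf.matmul(images_placeholder, weights) + biases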

This is where the prediction takes place. We’ve arranged the dimensions of our vectors and matrices in such a way that we can evaluate multiple images in a single step. The result of this operation is a 10-dimensional vector for each input image.

The process of arriving at good values for the weights and bias parameters is called training and works as follows: First, we input training data and let the model make a prediction using its current parameter values. This prediction is then compared to the correct class labels. The numerical result of this comparison is called loss. The smaller the loss value, the closer the predicted labels are to the correct labels and vice versa.

We want the model to minimize the loss, so that its predictions are close to the true labels. But before we look at the loss minimization, let’s take a look at how the loss is calculated.

The scores calculated in the previous step, stored in the logits variable, contain arbitrary real numbers. We can transform these values into probabilities (real values between 0 and 1 which sum to 1) by applying the softmax function, which basically squeezes its input into an output with the desired attributes. The relative order of its inputs stays the same, so the class with the highest score stays the class with the highest probability. The softmax function’s output probability distribution is then compared to the true probability distribution, which has a probability of 1 for the correct class and 0 for all other classes.

We use a measure called cross-entropy to compare the two distributions (a more technical explanation can be found here). The smaller the cross-entropy, the smaller the difference between the predicted probability distribution and the correct probability distribution. This value represents the loss in our model.

Luckily TensorFlow handles all the details for us by providing a function that does exactly what we want. We compare logits, the model’s predictions, with labels_placeholder, the correct class labels. The output of sparse_softmax_cross_entropy_with_logits() is the loss value for each input image. We then calculate the average loss value over the input images.
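A sketch of that loss definition:

    # Per-image cross-entropy, averaged over all images in the input
    loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels_placeholder, logits=logits))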

But how can we change our parameter values to minimize the loss? This is where TensorFlow works its magic. Via a technique called auto-differentiation it can calculate the gradient of the loss with respect to the parameter values. This means that it knows each parameter’s influence on the overall loss and whether decreasing or increasing it by a small amount would reduce the loss. It then adjusts all parameter values accordingly, which should improve the model’s accuracy. After this parameter adjustment step the process restarts and the next group of images is fed to the model.

TensorFlow knows different optimization techniques to translate the gradient information into actual parameter updates. Here we use a simple option called gradient descent which only looks at the model’s current state when determining the parameter updates and does not take past parameter values into account.

Gradient descent only needs a single parameter, the learning rate, which is a scaling factor for the size of the parameter updates. The bigger the learning rate, the more the parameter values change after each step. If the learning rate is too big, the parameters might overshoot their correct values and the model might not converge. If it is too small, the model learns very slowly and takes too long to arrive at good parameter values.
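In code this is presumably a single line; a sketch using the TF 1.x optimizer API (train_step is my naming choice):

    # One gradient descent update of weights and biases per run
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)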

The process of categorizing input images, comparing the predicted results to the true results, calculating the loss and adjusting the parameter values is repeated many times. For bigger, more complex models the computational costs can quickly escalate, but for our simple model we need neither a lot of patience nor specialized hardware to see results.
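Here is a sketch of the two accuracy lines described next:

    # Compare the predicted class (index of the highest score) to the label...
    correct_prediction = tf.equal(tf.argmax(logits, 1), labels_placeholder)

    # ...and average the resulting booleans to get the fraction correct
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))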

These two lines measure the model’s accuracy. argmax of logits along dimension 1 returns the indices of the class with the highest score, which are the predicted class labels. The labels are then compared to the correct class labels by tf.equal(), which returns a vector of boolean values. The booleans are cast into float values (each being either 0 or 1), whose average is the fraction of correctly predicted images.

We’re finally done defining the TensorFlow graph and are ready to start running it. The graph is launched in a session which we can access via the sess variable. The first thing we do after launching the session is initializing the variables we created earlier. In the variable definitions we specified initial values, which are now being assigned to the variables.
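A sketch of that step (using a plain Session object rather than a with-block, so the following snippets stay unindented):

    sess = tf.Session()

    # Assign the initial values (all zeros) to weights and biases
    sess.run(tf.global_variables_initializer())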

Then we start the iterative training process which is to be repeated max_steps times.
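The loop and the batch generation discussed next look roughly like this:

    for i in range(max_steps):
        # Pick batch_size random indices into the training set...
        indices = np.random.choice(data_sets['images_train'].shape[0],
                                   batch_size)

        # ...and gather the corresponding images and labels
        images_batch = data_sets['images_train'][indices]
        labels_batch = data_sets['labels_train'][indices]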

These lines randomly pick a certain number of images from the training data. The resulting chunks of images and labels from the training data are called batches. The batch size (number of images in a single batch) tells us how frequently the parameter update step is performed. We first average the loss over all images in a batch, and then update the parameters via gradient descent.

If instead of stopping after a batch, we first classified all images in the training set, we would be able to calculate the true average loss and the true gradient instead of the estimations when working with batches. But it would take a lot more calculations for each parameter update step. At the other extreme, we could set the batch size to 1 and perform a parameter update after every single image. This would result in more frequent updates, but the updates would be a lot more erratic and would quite often not be headed in the right direction.

Usually an approach somewhere in the middle between those two extremes delivers the fastest improvement of results. For bigger models memory considerations are very relevant too. It’s often best to pick a batch size that is as big as possible, while still being able to fit all variables and intermediate results into memory.

Here the first line of code picks batch_size random indices between 0 and the size of the training set. Then the batches are built by picking the images and labels at these indices.

Every 100 iterations we check the model’s current accuracy on the training data batch. To do this, we just need to call the accuracy-operation we defined earlier.
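Still inside the training loop, that check might look like this (the print format is modeled on the output shown in the Results section):

        if i % 100 == 0:
            train_accuracy = sess.run(accuracy, feed_dict={
                images_placeholder: images_batch,
                labels_placeholder: labels_batch})
            print('Step {}: training accuracy {:g}'.format(i, train_accuracy))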

This brings us to the most important line in the training loop (sketched below): we tell the model to perform a single training step. We don’t need to restate what the model needs to do in order to make a parameter update. All the info has been provided in the definition of the TensorFlow graph already. TensorFlow knows that the gradient descent update depends on knowing the loss, which depends on the logits, which in turn depend on the weights, the biases and the actual input batch.

We therefore only need to feed the batch of training data to the model. This is done by providing a feed dictionary in which the batch of training data is assigned to the placeholders we defined earlier.
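Putting that together, the training step is presumably this one call (still inside the loop):

        sess.run(train_step, feed_dict={
            images_placeholder: images_batch,
            labels_placeholder: labels_batch})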

After the training is completed, we evaluate the model on the test set. This is the first time the model ever sees the test set, so the images in the test set are completely new to the model. We’re evaluating how well the trained model can handle unknown data.
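A sketch of that final evaluation:

    test_accuracy = sess.run(accuracy, feed_dict={
        images_placeholder: data_sets['images_test'],
        labels_placeholder: data_sets['labels_test']})
    print('Test accuracy {:g}'.format(test_accuracy))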

The final lines print out how long it took to train and run the model.
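In the script that is presumably just:

    endTime = time.time()
    print('Total time: {:5.2f}s'.format(endTime - beginTime))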

Results

Let’s run the model with the command “python softmax.py”. Here is what my output looks like:

    Step 0: training accuracy 0.14
    Step 100: training accuracy 0.32
    Step 200: training accuracy 0.3
    Step 300: training accuracy 0.23
    Step 400: training accuracy 0.26
    Step 500: training accuracy 0.31
    Step 600: training accuracy 0.44
    Step 700: training accuracy 0.33
    Step 800: training accuracy 0.23
    Step 900: training accuracy 0.31
    Test accuracy 0.3066
    Total time: 12.42s

What does this mean? The accuracy of evaluating the trained model on the test set is about 31%. If you run the code yourself, your result will probably be around 25–30%. So our model is able to pick the correct label for an image it has never seen before around 25–30% of the time. That’s not bad!

There are 10 different labels, so random guessing would result in an accuracy of 10%. Our very simple method is already way better than guessing randomly. If you think that 25% still sounds pretty low, don’t forget that the model is still pretty dumb. It has no notion of actual image features like lines or even shapes. It looks strictly at the color of each pixel individually, completely independent from other pixels. An image shifted by a single pixel would represent a completely different input to this model. Considering this, 25% doesn’t look too shabby anymore.

What would happen if we trained for more iterations? That would probably not improve the model’s accuracy. If you look at the results, you can see that the training accuracy is not steadily increasing, but instead fluctuating between 0.23 and 0.44. It seems we have reached this model’s limit, and seeing more training data would not help. This model is simply not able to deliver better results. In fact, instead of training for 1000 iterations, we would have gotten a similar accuracy after significantly fewer iterations.

One last thing you probably noticed: the test accuracy is quite a lot lower than the training accuracy. If this gap is quite big, this is often a sign of overfitting. The model is then more finely tuned to the training data it has seen, and it is not able to generalize as well to previously unseen data.

This post has turned out to be quite long already. I’d like to thank you for reading it all (or for skipping right to the bottom)! I hope you found something of interest to you, whether it’s how a machine learning classifier works or how to build and run a simple graph with TensorFlow. Of course, there is still a lot of material that I would like to add. So far, we have only talked about the softmax classifier, which isn’t even using any neural nets.

My next blog post changes that: Find out how much using a small neural network model can improve the results! Read it here.

Thanks for reading. You can also check out other articles I’ve written on my blog.