Object Detection in Tensorflow

TensorFlow Hub is an excellent source of state-of-the-art pre-trained models. Among them, the FasterRCNN+InceptionResNetV2 network performs remarkably well at object detection. It has been trained on roughly 9M images from the Open Images V4 dataset. Due to the size of the model, detection takes a few seconds per image, but the results are astounding. Let’s see how to use it.
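
As a preview, loading and running the detector looks roughly like the sketch below, assuming TensorFlow 2 and the tensorflow_hub package are installed. The module handle is the Open Images V4 FasterRCNN+InceptionResNetV2 detector on TF Hub; the image file name and the exact output keys shown are illustrative assumptions, not the final code from the full post.

import tensorflow as tf
import tensorflow_hub as hub

# FasterRCNN+InceptionResNetV2 detector trained on Open Images V4
MODULE_HANDLE = "https://tfhub.dev/google/faster_rcnn/openimages_v4/inception_resnet_v2/1"
detector = hub.load(MODULE_HANDLE).signatures["default"]

# Load an image (hypothetical file name), scale to [0, 1] and add a batch dimension
img = tf.io.read_file("my_photo.jpg")
img = tf.image.decode_jpeg(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)[tf.newaxis, ...]

# This is the slow step mentioned above
result = detector(img)

# Bounding boxes, class names and confidence scores
print(result["detection_boxes"])
print(result["detection_class_entities"])
print(result["detection_scores"])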

Continue reading

Installing Tensorflow in macOS M1 Chip

At the time of this writing (September 2021), the preferred way to install TensorFlow on an Apple M1 machine is to use the Metal PluggableDevice. The old tensorflow_macos GitHub repo has been closed and now lives in archive mode only. This is a fast-changing situation, and neither Apple nor Google has been very open in communicating about it. In any case, today I will discuss how to set up TensorFlow with Metal acceleration on the macOS M1 chip. Specifically, I have a MacBook Air (M1, 2020).
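
Once everything is installed (assuming the tensorflow-macos and tensorflow-metal packages from the Metal plugin instructions), a quick sanity check is to confirm that TensorFlow can see the GPU:

import tensorflow as tf

# The Metal plugin should expose the M1 GPU as a physical device
print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))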

Continue reading

Univariate Time Series Prediction Using LSTM

A univariate time series has only one feature, and this feature also serves as the label. Examples of univariate time series problems include:

  1. Predict the daily minimum temperature based solely on the past minimum temperature readings.
  2. Predict the closing price of a stock solely based on the last few days of closing prices.

We will use an LSTM to solve this problem, with the Daily Minimum Temperatures in Melbourne dataset.
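
To give a flavour of the approach before the full walkthrough, here is a minimal Keras sketch of this kind of model. The window size, layer sizes and the make_windows helper are illustrative assumptions, and the placeholder series stands in for the actual temperature data.

import numpy as np
import tensorflow as tf

WINDOW = 7  # assumed: predict the next day from the previous 7 days

def make_windows(series, window=WINDOW):
    # Turn a 1-D series into (samples, window, 1) inputs and next-step labels
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X, dtype=np.float32)[..., np.newaxis], np.array(y, dtype=np.float32)

# Placeholder series; in the post this would be the Melbourne daily minimum temperatures
temps = np.sin(np.linspace(0, 20, 500)).astype(np.float32)
X, y = make_windows(temps)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(loss="mse", optimizer="adam")
model.fit(X, y, epochs=5, verbose=0)

# Predict the next value from the most recent window
print(model.predict(X[-1:]))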

Continue reading

Embedding Lookup in Tensorflow

Understanding how tf.nn.embedding_lookup works can seem unduly complex. Perhaps a simple example will help. All it does is look up the embedding values for a given list of indices.

Let’s say we have these embeddings in a 3-dimensional space for a vocabulary of 4 items.

# Embeddings with 3 dimensions for a vocabulary of 4 items
embedding = [
    [0.36808, 0.20834, -0.22319],
    [0.7503, 0.71623, -0.27033],
    [0.042523, -0.21172, 0.044739],
    [0.17698, 0.065221, 0.28548]
]

We can then look up the embeddings for the first and third items like this (this example uses the TensorFlow 1.x session API).

import tensorflow as tf

# Wrap the Python list in a constant tensor
tf_embedding = tf.constant(embedding, dtype=tf.float32)

with tf.Session() as sess:
  # Indices of the rows we want: the first and third items
  index_to_lookup = [0, 2]
  lookup = tf.nn.embedding_lookup(tf_embedding, index_to_lookup)

  print(sess.run(lookup))

This will print:

[[ 0.36808   0.20834  -0.22319 ]
 [ 0.042523 -0.21172   0.044739]]
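
As a side note, in TensorFlow 2 the same lookup runs eagerly, with no session required; a minimal equivalent producing the same output would be:

import tensorflow as tf

embedding = tf.constant([
    [0.36808, 0.20834, -0.22319],
    [0.7503, 0.71623, -0.27033],
    [0.042523, -0.21172, 0.044739],
    [0.17698, 0.065221, 0.28548],
], dtype=tf.float32)

# In TF2 the lookup returns the selected rows directly
print(tf.nn.embedding_lookup(embedding, [0, 2]).numpy())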