TensorFlow 1.0 unlocks machine learning on smartphones

The first major release of Google's machine learning library includes improvements to allow efficient deep learning on mobile devices


Google has announced a release candidate for TensorFlow 1.0, the first full-blown major release of its open source deep learning framework.

TensorFlow 1.0 not only brings improvements to the framework's gallery of machine learning functions, but also eases TensorFlow development for Python and Java users and improves debugging. A new compiler that optimizes TensorFlow computations opens the door to a new class of machine learning apps that can run on smartphone-grade hardware.

Another slice of Py, Java on the side

Since Python is one of the biggest platforms for building and working with machine learning applications, it's only fitting that TensorFlow 1.0 focuses on improving Python interactions. The TensorFlow Python API has been upgraded so that its syntax and metaphors are a closer match for Python's own conventions, making code in the two feel more consistent.
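
To give a flavor of the changes, several ops were renamed and had arguments reordered to mirror NumPy-style conventions. The snippet below is a minimal sketch of a few of the documented 1.0 renames (tf.mul becoming tf.multiply, tf.concat taking its values first and axis as a keyword); it assumes a TensorFlow 1.x installation and is not an exhaustive list.

```python
import tensorflow as tf

# Pre-1.0 names            ->  1.0 names
# tf.mul(a, b)             ->  tf.multiply(a, b)
# tf.sub(a, b)             ->  tf.subtract(a, b)
# tf.neg(a)                ->  tf.negative(a)

a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])

c = tf.multiply(a, b)           # was tf.mul(a, b)
d = tf.concat([a, b], axis=0)   # argument order changed from tf.concat(0, [a, b])

with tf.Session() as sess:
    print(sess.run([c, d]))
```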

The bad news is that those changes are guaranteed to break existing Python applications. TensorFlow's developers have released a script to automatically upgrade old-style TensorFlow API scripts to the new format, but the script can't fix everything; you may still need to tweak some code by hand.

TensorFlow is now available in a Docker image that's compatible with Python 3, and for all Python users, TensorFlow can now be installed with pip, Python's native package manager. The latter is a huge step toward increasing TensorFlow's general usefulness, especially for those working with the stock Python distribution rather than one specifically geared for data science (such as Anaconda).
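
In practice, assuming a stock Python environment, installation and a quick sanity check look roughly like this; the CPU-only package is published as tensorflow, with tensorflow-gpu as the Nvidia-GPU-enabled variant.

```python
# Install first with pip, e.g.:
#   pip install tensorflow        # CPU-only build
#   pip install tensorflow-gpu    # build with Nvidia GPU support
import tensorflow as tf

print(tf.__version__)  # should report a 1.0.x release

with tf.Session() as sess:
    print(sess.run(tf.constant("Hello, TensorFlow")))
```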

Java is another major language platform for machine learning, but TensorFlow previously did not have a set of Java bindings. Version 1.0 of the framework introduces a Java API, but it's far from complete and apt to change at any time, and to use it you currently have to build TensorFlow from source on Linux or macOS. (Consider this further evidence that the Windows port of TensorFlow is still somewhat a second-class citizen.)

Going mobile with XLA

Perhaps the single biggest addition to TensorFlow 1.0 isn't a language support feature or new algorithms. It's Accelerated Linear Algebra (XLA), an experimental compiler for the linear algebra used in TensorFlow computations. XLA speeds up some of the math TensorFlow performs by generating machine code that can run on either CPUs or GPUs. Right now XLA supports only Nvidia GPUs, but that's in line with the general state of GPU support for machine learning applications.
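
From the Python side, XLA's just-in-time compilation can be switched on through the session configuration. The sketch below shows roughly how that looks; because XLA is experimental, the exact knobs may change, and the placeholder shape and toy matmul here are purely illustrative.

```python
import numpy as np
import tensorflow as tf

# Ask TensorFlow to JIT-compile eligible parts of the graph with XLA.
config = tf.ConfigProto()
config.graph_options.optimizer_options.global_jit_level = tf.OptimizerOptions.ON_1

# A simple linear-algebra-heavy op that XLA can compile and fuse.
x = tf.placeholder(tf.float32, shape=[None, 1024])
w = tf.Variable(tf.random_normal([1024, 256]))
y = tf.nn.relu(tf.matmul(x, w))

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: np.random.rand(8, 1024).astype("float32")})
    print(out.shape)
```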

XLA also improves TensorFlow's portability: existing TensorFlow programs can run unmodified on a new hardware platform once an XLA back end has been created for it. This is a big deal in light of IBM adding TensorFlow support to PowerAI, its hardware solution for machine learning powered by a mix of GPUs and Power8 CPUs.

TensorFlow's engineers have reduced the framework's overall memory usage and footprint as well. These optimizations pay off universally, but they're a particularly big deal for mobile. Previous versions of TensorFlow added support for Android, iOS, and the Raspberry Pi, allowing tasks like image classification to run directly on such devices.
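
A common pattern for getting a trained model onto a phone in this generation of TensorFlow is to "freeze" the graph on a desktop machine, baking the trained variable values in as constants, and then bundle the resulting protobuf file with the Android or iOS app. The sketch below illustrates the idea under stated assumptions: the tiny stand-in model, the node name "output," and the file name model_frozen.pb are all hypothetical.

```python
import tensorflow as tf
from tensorflow.python.framework import graph_util

# Minimal stand-in for a trained network; "output" is the node the mobile
# runtime would request at inference time.
x = tf.placeholder(tf.float32, shape=[None, 224 * 224 * 3], name="input")
w = tf.Variable(tf.zeros([224 * 224 * 3, 10]))
logits = tf.identity(tf.matmul(x, w), name="output")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Convert variables to constants so a single .pb file can be shipped
    # with the mobile app.
    frozen = graph_util.convert_variables_to_constants(
        sess, sess.graph_def, ["output"])
    with tf.gfile.GFile("model_frozen.pb", "wb") as f:
        f.write(frozen.SerializeToString())
```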

Discussion of machine learning often centers on the driving force of high-end hardware: custom CPUs, arrays of GPUs, FPGAs, and the scale provided by cloud environments. But the theory goes that machine learning models that work on the average smartphone, without needing a cloud back end to support them 24/7, could bring into existence new kinds of applications. Even if those goals don't completely materialize, the benefits this work will provide for TensorFlow should be worth the effort.
