Machine Learning on a Raspberry Pi 3 with the Edge TPU Coral USB Accelerator

Here it is: my Edge TPU Coral USB Accelerator, fresh out of the box.

The Dev Board wasn't available from coral.withgoogle.com, so I plugged the Edge TPU Coral USB Accelerator into a Raspberry Pi 3 and ran machine learning on it. If that interests you, read on.

Since this post uses a Raspberry Pi, I'll paste in links from the official site (coral.withgoogle.com) about the Dev Board and the USB Accelerator wherever I can.

These videos are from the official TensorFlow and Google Developers YouTube channels.

Edge TPU live demo: Coral Dev Board & Microcontrollers (TF Dev Summit ’19) TensorFlow

At TensorFlow Dev Summit 2019, we announced TensorFlow Lite 1.0.
TensorFlow Lite is a lightweight solution for mobile and embedded devices. In this video, you will see live demos of both the Coral Dev Board, and microcontrollers in action!

Google and NXP advance artificial intelligence with the Edge TPU Google Developers

At CES, the Google AIY team shared how it’s advancing AI at the edge with the new Edge TPU chip, integrated with an NXP i.MX8 processor.

To give a sense of the size of the Edge TPU Coral USB Accelerator, I placed an SD card next to it. Sorry if it's still hard to tell.

My setup looks like this. One tsubo (about 3.3 m²) of space is more than enough.

Dev Board $149.99

A single-board computer with a removable system-on-module (SOM) featuring the Edge TPU.
Supported OS: Mendel Linux (derivative of Debian)
Supported Framework: TensorFlow Lite
Languages: Python (C++ coming soon)

A development board to quickly prototype on-device ML products. Scale from prototype to production with a removable system-on-module (SOM).

USB Accelerator $74.99

A USB accessory that brings machine learning inferencing to existing systems. Works with Raspberry Pi and other Linux systems.

A USB accessory featuring the Edge TPU that brings ML inferencing to existing systems.
Supported OS: Debian Linux
Compatible with Raspberry Pi boards
Supported Framework: TensorFlow Lite

Local inferencing

Run on-device ML inferencing on the Edge TPU designed by Google.

Works with Debian Linux

Connect to any Linux-based system with an included USB Type-C cable.

Supports TensorFlow Lite

No need to build models from the ground up. TensorFlow Lite models can be compiled to run on the USB Accelerator.

The Dev Board wasn't obtainable in Japan, so I looked for an alternative and settled on the Raspberry Pi 3; as it turns out, the official documentation covers how to set up the Edge TPU Coral USB Accelerator on a Raspberry Pi.

Let me quote from the official site:
Setup for Linux or Raspberry Pi
To get started, perform the following steps on your Linux machine or Raspberry Pi that will connect to the Accelerator.

Install the Edge TPU runtime and Python library:

wget http://storage.googleapis.com/cloud-iot-edge-pretrained-models/edgetpu_api.tar.gz

tar xzf edgetpu_api.tar.gz

cd python-tflite-source

bash ./install.sh

Caution: During installation, you’ll be asked, “Would you like to enable the maximum operating frequency?” Enabling this option improves the inferencing speed but it also causes the USB Accelerator to become very hot to the touch during operation and might cause burn injuries. If you’re not sure you need the increased performance, type N and press Enter to use the default operating frequency.
Help! If you see the message ./install.sh: line 116: python3.5: command not found, then the install failed because you don’t have Python 3.5 installed. So type python3 --version and press Enter. If it prints Python 3.6 or higher, then open the install.sh script and edit the very last line to use python3 instead of python3.5. However, if your Python version is lower than 3.5, you need to install Python 3.5.
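The fix that note describes can be sketched in a few lines of Python. Note that `install_demo.sh` and its contents here are stand-ins invented for illustration; in practice you would edit `python-tflite-source/install.sh` itself.

```python
# Sketch of the workaround above: rewrite the hard-coded "python3.5" on the
# script's last line to plain "python3". install_demo.sh is a stand-in file,
# not the real install.sh.
from pathlib import Path

script = Path("install_demo.sh")
script.write_text("#!/bin/bash\npython3.5 setup.py install\n")  # fake script body

lines = script.read_text().splitlines()
lines[-1] = lines[-1].replace("python3.5", "python3")
script.write_text("\n".join(lines) + "\n")
```

Only the last line is touched, so the rest of the script is left exactly as shipped.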

To put the official documentation a bit more plainly: we're installing the Edge TPU runtime (edgetpu_api) and its Python library, so first run:

wget http://storage.googleapis.com/cloud-iot-edge-pretrained-models/edgetpu_api.tar.gz

The download shouldn't take long. Once the output finishes scrolling by and the prompt

pi@raspberrypi: ~ $

reappears, run:

tar xzf edgetpu_api.tar.gz
cd python-tflite-source
bash ./install.sh

and the installation should complete.

If you follow the official documentation, you should be able to get classification scores from the bundled sample image files right away.

Run a model on the Edge TPU
Now that your USB Accelerator is setup, you can start running TensorFlow Lite models on the Edge TPU.
For example, here’s a demo that performs image classification with the parrot image in figure 1:

cd edgetpu/

python3 demo/classify_image.py \
--model test_data/mobilenet_v2_1.0_224_inat_bird_quant_edgetpu.tflite \
--label test_data/inat_bird_labels.txt \
--image test_data/parrot.jpg

Running that command should, as the official documentation shows, print scores for the bundled sample image of a red parrot-like bird.

---------------------------
Ara macao (Scarlet Macaw)
Score :  0.613281
---------------------------
Platycercus elegans (Crimson Rosella)
Score :  0.152344

The scores should come out looking like this.
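If you want to post-process that printout, the label/score pairs are easy to pick apart. This little helper is my own, and it assumes the exact output format shown above (a label line followed by a "Score : ..." line):

```python
# Parse the classify_image.py demo's printed output into (label, score) pairs.
# The format is taken from the sample output above.
def parse_scores(text):
    results, label = [], None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Score"):
            results.append((label, float(line.split(":")[1])))
        elif line and not line.startswith("-"):
            label = line
    return results

sample = """
---------------------------
Ara macao (Scarlet Macaw)
Score :  0.613281
---------------------------
Platycercus elegans (Crimson Rosella)
Score :  0.152344
"""
top = parse_scores(sample)
```

This gives you a plain Python list you can sort or threshold however you like.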
coral.withgoogle.com/tutorials/accelerator

Edge TPU API overview & demos
To enable easy inferencing with TensorFlow Lite models on the Coral Dev Board and USB Accelerator, we’ve created the edgetpu Python module. It provides simple APIs that perform image classification, object detection, and weight imprinting (transfer-learning) on your Edge TPU device.

You can execute the samples below using models provided with the Edge TPU Python API package. For information about building your own models, read TensorFlow Models on the Edge TPU.

Demo commands for the Edge TPU APIs
API demos
Try the following scripts that demonstrate how to use ClassificationEngine and DetectionEngine. Then inspect the source code for each to learn more about the Edge TPU APIs.

First you must navigate to the directory with the demos:

cd /usr/lib/python3/dist-packages/edgetpu/

or, if you installed from the tarball as above:

cd python-tflite-source/edgetpu/

Besides the scarlet macaw demo above, there are other demos, such as detecting human faces as objects and printing their scores.
coral.withgoogle.com/tutorials/edgetpu-api

About the compiler.

Edge TPU Model Compiler
Compile your model for the Edge TPU

Compile your TensorFlow Lite model for compatibility with the Edge TPU by uploading your .tflite file below.

For more information about how to create a model, see TensorFlow models on the Edge TPU.

DURING BETA
Currently, the Edge TPU compiler requires that your model use one of the following architectures:

MobileNet V1/V2:
224×224 max input size; 1.0 max depth multiplier
MobileNet SSD V1/V2:
320×320 max input size; 1.0 max depth multiplier
Inception V1/V2:
224×224 fixed input size
Inception V3/V4:
299×299 fixed input size
All models must be a quantized TensorFlow Lite model (.tflite file) less than 100MB.
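Those beta restrictions are mechanical enough to encode as a quick pre-check. The table values below come straight from the list above, but the helper functions themselves are mine, not part of the Edge TPU compiler:

```python
import os

# Input-size rules quoted from the beta restrictions above:
# (limit, "max") allows anything up to that size; (limit, "fixed") requires
# exactly that size.
BETA_INPUT_RULES = {
    "MobileNet V1/V2": (224, "max"),
    "MobileNet SSD V1/V2": (320, "max"),
    "Inception V1/V2": (224, "fixed"),
    "Inception V3/V4": (299, "fixed"),
}

MAX_MODEL_BYTES = 100 * 1024 * 1024  # "less than 100MB"

def input_size_ok(arch, size):
    """Check an input size against the beta compiler's rule for that architecture."""
    limit, kind = BETA_INPUT_RULES[arch]
    return size <= limit if kind == "max" else size == limit

def model_size_ok(path):
    """Check that a quantized .tflite file fits under the beta size cap."""
    return os.path.getsize(path) < MAX_MODEL_BYTES
```

Running these locally saves an upload round-trip when a model was never going to compile anyway.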

COMING SOON
In a future update, we will remove the architecture-type restrictions. You can then compile any quantized TensorFlow Lite model that meets certain requirements:

All tensor sizes are fixed at compile-time (no dynamic sizes).
All model parameters (such as bias tensors) are fixed at compile-time.
All tensors are either 1-, 2-, or 3-dimensional. If a tensor has more than 3 dimensions, then only the 3 innermost dimensions may have a size greater than 1.
Other operation-specific limitations also apply. We will provide those details when the new compiler is available.
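The tensor-dimension rule above is the easiest to misread, so here is one way to state it in code: a shape passes if every dimension beyond the three innermost has size 1. The function is my own illustration, not part of the compiler:

```python
def shape_ok_for_edgetpu(shape):
    # 1-, 2-, and 3-D shapes always pass; with more dimensions, only the
    # 3 innermost may be larger than 1 (per the restriction above).
    return all(dim == 1 for dim in shape[:-3])
```

So a 4-D image batch of shape (1, 224, 224, 3) is fine, while (2, 2, 2, 2) is not.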

So those are the current restrictions.

From the resources on the official Coral beta site.

Mendel Linux OS

This is the Mendel system image for the Coral Dev Board.

Release: Beta
Package: mendel-enterprise-beaker-22.zip
Size: 1.3 GB
SHA-256 checksum: 85a1db9a6d251a38f34fabf808b4ad3c35d7ab413318bee1d70de48bde776486

For flashing instructions, see Reflash the Coral Dev Board.

U-boot recovery image

This is just the u-boot image, which you can use on an SD card to recover a board that fails to boot from eMMC.

Release: Beta
Package: mendel-enterprise-beaker-recovery.img
Size: 4.2 MB
SHA-256 checksum: 0bd8eae1a6aec75afd3ad5239e0e8d24c68e07113c77cc3799096a83b0da923a

For recovery instructions, see Flash from u-boot on an SD card.

Edge TPU runtime and Python API library

This is the package required on the host computer that’s attached to the Coral USB Accelerator.

Release: Beta
Package: edgetpu_api.tar.gz
Size: 31.8 MB
SHA-256 checksum: 52e29f89481e935a9ce2beb0bdafc0495a60f74ef89aa220ddd1142e27adb23e

For setup instructions, see Get Started with the USB Accelerator.
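Since each download above ships with a SHA-256 checksum, it's worth verifying the file before flashing or installing. A minimal check with Python's standard hashlib:

```python
import hashlib

def sha256_of(path):
    """Hex SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage sketch (expected value taken from the table above):
# sha256_of("edgetpu_api.tar.gz") == "52e29f89481e935a9ce2beb0bdafc0495a60f74ef89aa220ddd1142e27adb23e"
```

A mismatch means the download is corrupt or tampered with, so re-download before going further.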

coral.withgoogle.com
tensorflow.org/lite
raspberrypi.org
github.com/tensorflow
tensorflow.org/overview
And if you're interested:
aiyprojects.withgoogle.com