New Coral APIs and Tools for AI at The Edge

Fall has finally arrived and with it a new release of Coral's C++ and Python APIs and tools, along with new models optimized for the Edge TPU and further support for TensorFlow 2.0-based workflows.

Coral is a complete toolkit to build products with local AI. Our on-device inferencing capabilities allow you to build products that are efficient, private, fast and offline with the help of TensorFlow Lite and the Edge TPU.

From the beginning, we've provided APIs in Python and C++ that enable developers to take advantage of the Edge TPU's local inference speed. Offline processing for machine learning models allows for considerable savings on bandwidth and cloud compute costs, it keeps data local, and it preserves user privacy. More recently, we've been hard at work to refactor our APIs and make them more modular, reusable and performant, while at the same time eliminating unnecessary API abstractions and surfacing more of the native TensorFlow Lite APIs that developers are familiar with.

So in our latest release, we're now offering two separate reusable libraries, each built upon the powerful TensorFlow Lite APIs and each isolated in its own repository: libcoral for C++ and PyCoral for Python.

libcoral (C++)

Unlike some of our previous APIs, libcoral doesn't hide tflite::Interpreter. Instead, we're making this native TensorFlow Lite class a first-class component and offering some additional helper APIs that simplify some of your code when working with common models such as classification and detection.

With our new libcoral library, developers should typically follow the pattern below to perform an inference in C++:

  1. Create a tflite::Interpreter instance with the Edge TPU context and allocate memory. To simplify this step, libcoral provides the MakeEdgeTpuInterpreter() function:

// Load the model
auto model = coral::LoadModelOrDie(absl::GetFlag(FLAGS_model_path));

// Get the Edge TPU context
auto tpu_context = coral::ContainsEdgeTpuCustomOp(*model) ?
     coral::GetEdgeTpuContextOrDie() :
     nullptr;

// Get the interpreter
auto interpreter = coral::MakeEdgeTpuInterpreterOrDie(
     *model,
     tpu_context.get());  
  2. Configure the interpreter's input.
  3. Invoke the interpreter:

interpreter->Invoke();

     As an alternative to Invoke(), you can achieve higher performance with the InvokeWithMemBuffer() and InvokeWithDmaBuffer() functions, which enable processing the input data without copying from another region of memory or from a DMA file descriptor, respectively.
  4. Process the interpreter's output.

To simplify this step, libcoral provides some adapters, requiring less code from you:

auto result = coral::GetClassificationResults(
     *interpreter,
     /*threshold=*/0.0f,
     /*top_k=*/3);

The above is an example of the classification adapter, where developers can specify the minimum confidence threshold, as well as the maximum number of results to return. The API also features a detection adapter with its own result filtering parameters.

For the full example application source code, see classify_image.cc on GitHub, and for instructions on how to integrate libcoral into your application, refer to the README.md on GitHub.

This new release also brings updates to on-device retraining with the decoupling of imprinting functions from inference on the updated ImprintingEngine. The new design makes the imprinting engine work with the tflite::Interpreter directly.

To easily address the Edge TPUs available on the host, libcoral supports labels such as "usb:0" or "pci:1". This should make it easier to manage resources on multi-Edge TPU systems.

Finally, we've made a number of performance improvements, such as more efficient memory usage and memory-based instead of file-based abstractions. The API design is also more consistent, leveraging the Abseil library for error propagation, generic interfaces, and other common patterns, which should provide a more stable developer experience.

PyCoral (Python)

The new PyCoral library (provided in a new pycoral Python module) follows some of the design patterns introduced with libcoral, and brings parity across our C++ and Python APIs. PyCoral implements the same imprinting decoupling design, model adapters for classification and detection, and the same label-based TPU addressing semantics.
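
For example, a minimal sketch of that label-based addressing in PyCoral might look like this (the model file name is a placeholder):

from pycoral.utils import edgetpu

# Enumerate the Edge TPUs visible on the host; each entry reports its
# type ('usb' or 'pci') and device path.
print(edgetpu.list_edge_tpus())

# Bind an interpreter to a specific device with a label such as 'usb:0'
# or 'pci:1'.
interpreter = edgetpu.make_interpreter('model_edgetpu.tflite', device='usb:0')
interpreter.allocate_tensors()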

In PyCoral, the "run inference" functionality is now entirely delegated to the native TensorFlow Lite library, as we've done away with the model "engines" that abstracted the TensorFlow Lite interpreter. This change allowed us to eliminate the code duplication introduced by the Coral-specific BasicEngine, ClassificationEngine and DetectionEngine classes (those APIs, from the "Edge TPU Python library", are now deprecated).

To perform an inference with PyCoral, we follow a similar pattern to that of libcoral:

Create an interpreter:

interpreter = edgetpu.make_interpreter(model_file)
interpreter.allocate_tensors()

Configure the interpreter's input:

common.set_input(interpreter, image)

Invoke the interpreter:

interpreter.invoke()

Process the interpreter's output:

classes = classify.get_classes(interpreter, top_k=3)

For fully detailed example code, check out our documentation for Python.
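
As a hedged sketch that ties the steps above together (the model, label, and image file names are placeholders):

from PIL import Image

from pycoral.adapters import classify, common
from pycoral.utils import edgetpu
from pycoral.utils.dataset import read_label_file

# Create an interpreter bound to an Edge TPU and allocate tensors.
interpreter = edgetpu.make_interpreter('model_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the image to the model's expected input size and copy it in.
size = common.input_size(interpreter)
common.set_input(interpreter, Image.open('image.jpg').resize(size, Image.LANCZOS))

# Run the inference and print the top-3 labeled results.
interpreter.invoke()
labels = read_label_file('labels.txt')
for c in classify.get_classes(interpreter, top_k=3):
    print(labels.get(c.id, c.id), c.score)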

Updates to the Coral model garden

With this release, we're further expanding the Coral model garden with MobileDet. MobileDets are a family of lightweight, single-shot detectors built with the TensorFlow Object Detection API that achieve a state-of-the-art accuracy-latency tradeoff on Edge TPUs. Compared to the MobileNet family of models, MobileDets offer lower latency with better detection accuracy.
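
As a rough sketch, running an Edge TPU-compiled MobileDet with PyCoral's detection adapter might look like the following; the model and image file names are placeholders:

from PIL import Image

from pycoral.adapters import common, detect
from pycoral.utils import edgetpu

interpreter = edgetpu.make_interpreter('mobiledet_edgetpu.tflite')
interpreter.allocate_tensors()

# Resize the image to the detector's input size, keeping the scale so
# bounding boxes can be mapped back onto the original image.
image = Image.open('image.jpg')
_, scale = common.set_resized_input(
    interpreter, image.size, lambda size: image.resize(size, Image.LANCZOS))

interpreter.invoke()
for obj in detect.get_objects(interpreter, score_threshold=0.4, image_scale=scale):
    print(obj.id, obj.score, obj.bbox)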

Check out the full collection of models available from Coral for the Edge TPU, including Classification, Detection, Segmentation and models specially prepared for on-device training.

Migrating our entire workflow and model collection to TensorFlow 2 is an ongoing effort. This release of the Coral machine learning API starts introducing support for TensorFlow 2-based workflows. For now, MobileNet v1 (ImageNet), MobileNet v2 (ImageNet), MobileNet v3 (ImageNet), ResNet50 v1 (ImageNet), and UNet MobileNet v2 (Oxford pets) all support training and conversion with TensorFlow 2.
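
As an illustrative sketch of that TensorFlow 2 path (not the exact tutorial code), a trained Keras model is converted with full-integer post-training quantization before being compiled for the Edge TPU; the tiny model and random calibration data below are stand-ins for your own:

import numpy as np
import tensorflow as tf

# Stand-in model; substitute your trained TensorFlow 2 / Keras model.
model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(96, 96, 3)),
    tf.keras.layers.Conv2D(8, 3, activation='relu'),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def representative_dataset():
    # Stand-in calibration data; use real training samples in practice.
    for _ in range(100):
        yield [np.random.rand(1, 96, 96, 3).astype(np.float32)]

# Full-integer post-training quantization, which the Edge TPU requires.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('model_quant.tflite', 'wb') as f:
    f.write(converter.convert())

# Then compile for the Edge TPU: edgetpu_compiler model_quant.tflite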

Model Pipelining

Both libcoral and PyCoral have graduated the model pipelining functionality from Beta to General Availability. Model pipelining makes it possible to partition a large model into segments and distribute them across multiple Edge TPUs, running the model considerably faster.

Refer to the documentation for examples of the API in C++ and Python.
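
For a rough idea of the Python side, here is a minimal single-input sketch using PyCoral's PipelinedModelRunner; the segment file names are placeholders, and the real examples push and pop from separate threads for throughput:

import numpy as np

from pycoral.pipeline.pipelined_model_runner import PipelinedModelRunner
from pycoral.utils import edgetpu

# One interpreter per model segment, each bound to its own Edge TPU
# (':0', ':1', ... address the N-th available device of any type).
segments = ['model_segment_0_edgetpu.tflite', 'model_segment_1_edgetpu.tflite']
interpreters = [
    edgetpu.make_interpreter(path, device=':%d' % i)
    for i, path in enumerate(segments)
]
for interpreter in interpreters:
    interpreter.allocate_tensors()

runner = PipelinedModelRunner(interpreters)

# Push inputs keyed by tensor name; an empty dict signals end of stream.
input_details = interpreters[0].get_input_details()[0]
runner.push({input_details['name']:
             np.zeros(input_details['shape'], dtype=input_details['dtype'])})
runner.push({})

# Pop results until the stream drains.
while True:
    result = runner.pop()
    if not result:
        break
    print({name: tensor.shape for name, tensor in result.items()})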

The partitioning of models is done with the Edge TPU Compiler, which employs a parameter-count algorithm, dividing the model into segments with similar parameter sizes. For cases where this algorithm doesn't provide the throughput you need, this release introduces a new tool that supports a profiling-based algorithm, which divides the segments based on the latency observed by actually running the model multiple times, possibly resulting in a more balanced partitioning.

The new profiling_partition tool can be used as follows:

./profiling_partition \
  --edgetpu_compiler_binary $PATH_TO_COMPILER \
  --model_path $PATH_TO_MODEL \
  --output_dir $OUT_DIR \
  --num_segments $NUM_SEGMENTS

The original article can be found on tensorflow.org.
