Jeromy Lowe

An AIoT Example using Raspberry Pi and Azure AI

AIoT is what you get when you combine AI with IoT. Using it, you can create very interesting applications: not just your average IoT device that collects telemetry data and uploads it to the cloud, but a device that can do smart things using cloud computing and AI services such as Azure AI.

In this article I’m going to walk through an example of AIoT using a Raspberry Pi, a Pi Camera and the Azure Face API. The Raspberry Pi, if you are not already familiar with it, is a credit-card-sized computer with a huge ecosystem of extensions and software around it. One of the most popular extensions is the Pi Camera, a tiny camera that you can attach to a Pi board, opening up a great deal of potential.

This is a how-to article intended for the technical crowd who wish to learn about AI and IoT applications; however, if you are a non-techie, you may enjoy some of my business-minded articles on how to deal with these technologies. Also, if you are interested in the technology, you might wish to follow me, as in future articles I’m going to demonstrate how to become a citizen developer and build your IoT applications without writing code.

In this example I’m going to use a Raspberry Pi, the Azure Face API from the Cognitive Services family, and nodejs to wire things up. You can achieve the same using any other programming language such as Python, C#, Go or even PHP if you like, since all of these languages are supported on the nifty Pi. I’m using nodejs because JSON responses from REST services map directly onto native JavaScript objects, which makes interacting with them more convenient than in, say, Python.
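To illustrate that convenience: a REST response body like the ones the Face API returns is plain JSON, and in nodejs a single `JSON.parse` call turns it into native objects and arrays. The payload below is a hand-written illustrative sample, not real API output:

```javascript
// A JSON body in the shape a face-detection REST service might return
// (this sample is illustrative, not a real Face API response).
const body = '[{"faceId": "abc-123", "faceRectangle": {"top": 10, "left": 20, "width": 100, "height": 100}}]';

// JSON.parse maps the payload straight onto native objects and arrays,
// so fields can be accessed without any extra deserialization layer.
const faces = JSON.parse(body);
console.log(faces[0].faceId);            // → abc-123
console.log(faces[0].faceRectangle.top); // → 10
```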

#technology #ai #iot


TensorFlow Lite Object Detection using Raspberry Pi and Pi Camera

I have not created the object detection model; I have merely cloned Google’s TensorFlow Lite model and followed their Raspberry Pi tutorial, which they describe in the Readme! You don’t need this article if you understand everything from the Readme; I merely talk about what I did!


  • I have used a Raspberry Pi 3 Model B and a Pi Camera Board (I 3D-printed a case for the camera board). I had this connected before starting and did not include it in the 90 minutes (there are plenty of YouTube videos showing how to do this depending on which Pi model you have; I used a video like this a while ago!)

  • I have used my Apple MacBook, which is Unix-like at heart, just like the Raspberry Pi. By using a Mac you don’t need to install any applications to interact with the Raspberry Pi, but on Windows you do (I will explain where to go in the article if you use Windows)

#raspberry-pi #object-detection #raspberry-pi-camera #tensorflow-lite #tensorflow

Tools and Images to Build a Raspberry Pi n8n server


The purpose of this project is to create a Raspberry Pi image preconfigured with n8n so that it runs out of the box.

What is n8n?

n8n is a no-code/low-code environment used to connect and automate different systems and services. It is programmed using a series of connected nodes that receive, transform, and then transmit data from and to other nodes. Each node represents a service or system, allowing these different entities to interact. All of this is done through a WebUI.
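Under the hood, each n8n workflow is saved as a JSON document describing its nodes and the connections between them. The fragment below is a hand-written sketch to illustrate the idea, not a real workflow export (node names and parameters are made up):

```json
{
  "name": "Example workflow",
  "nodes": [
    { "name": "Webhook", "type": "n8n-nodes-base.webhook", "parameters": { "path": "incoming" } },
    { "name": "Set", "type": "n8n-nodes-base.set", "parameters": {} }
  ],
  "connections": {
    "Webhook": { "main": [[ { "node": "Set", "type": "main", "index": 0 } ]] }
  }
}
```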

Why n8n-pi?

Whenever a new technology is released, two common barriers often prevent potential users from trying it out:

  1. System costs
  2. Installation & configuration challenges

The n8n-pi project eliminates these two roadblocks by preconfiguring a working system that runs on easily available, low-cost hardware. For as little as $40 and a few minutes, users can have a full n8n system up and running.


This project would not be possible were it not for the help of the following:


All documentation for this project can be found at

Download Details:

Author: TephlonDude


#pi #raspberry-pi #raspberry

Otho Hagenes

Making Sales More Efficient: Lead Qualification Using AI

If you were to ask any organization today, you would learn that they are all becoming reliant on artificial intelligence solutions, using AI to digitally transform and bring themselves into the new age. AI is no longer a new concept; with the technological advancements being made in the field, it has become a much-needed business facet.

AI has become easier to use and implement than ever before, and businesses across the board are applying AI solutions to their processes. Organizations have begun to base their digital transformation strategies around AI and the way in which they conduct their business. One of the business processes that AI has helped transform is lead qualification.

#ai-solutions-development #artificial-intelligence #future-of-artificial-intellige #ai #ai-applications #ai-trends #future-of-ai #ai-revolution

Giving AIoT a Voice using Azure AI

In my previous article we learned how to use the Azure Face API to enable a Raspberry Pi IoT device to detect faces. In this second how-to article we are going to build on what we have done so far to give the Raspberry Pi a voice using the Azure Speech API.

To follow the steps detailed here, you will need to set up a Raspberry Pi following the instructions from the previous article.

When your IoT device is ready, you will need to create a Speech API resource in your Azure portal. For convenience, I’m going to use bash commands. To open your bash console, open the cloud shell from the upper-right corner of the portal.

When the shell pane shows up, select the bash option as shown below. The first time you use the Azure cloud shell, you might need an Azure storage account to host your shell files; Azure will create this automatically for you.

We will start by creating a resource group to host the components required by this how-to and future ones. Here I’m creating a group called AIoT located in the northeurope region; you may pick a different location.

az group create --name AIoT --location northeurope

You should get a json success message like the following:

fady@Azure:~$ az group create --name AIoT --location northeurope
{
  "id": "/subscriptions/GUID/resourceGroups/AIoT",
  "location": "northeurope",
  "managedBy": null,
  "name": "AIoT",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}
Now we proceed to create the new Speech API resource, named AIoTSpeech, under the resource group we just created, as follows. The F0 sku is the free tier, which you can use only once.

az cognitiveservices account create -n AIoTSpeech -g AIoT --kind SpeechServices --sku F0 -l northeurope --yes

You should get another json success message like the one below:

fady@Azure:~$ az cognitiveservices account create -n AIoTSpeech -g AIoT --kind SpeechServices --sku F0 -l northeurope --yes
{
  "customSubDomainName": null,
  "endpoint": "",
  "etag": "\"GUID\"",
  "id": "/subscriptions/GUID/resourceGroups/AIoT/providers/Microsoft.CognitiveServices/accounts/AIoTSpeech",
  "internalId": "GUID",
  "kind": "SpeechServices",
  "location": "northeurope",
  "name": "AIoTSpeech",
  "networkAcls": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "AIoT",
  "sku": {
    "name": "F0",
    "tier": null
  },
  "tags": null,
  "type": "Microsoft.CognitiveServices/accounts"
}

When that’s done, you will need to grab the API keys so you can use them to call the service. You can do so by typing the command below, with the resource group name and resource name as parameters.

az cognitiveservices account keys list -g AIoT -n AIoTSpeech

You will get a json response message with key 1 and key 2; either of them will work.
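Rather than pasting the key straight into the script later on, one option (my own habit, not something this setup requires) is to export it as an environment variable on the Pi and read it from nodejs:

```javascript
// Read the Speech API key and region from environment variables,
// e.g. after running: export SPEECH_KEY=... and export SPEECH_REGION=northeurope
// The fallback values are placeholders that keep the script runnable for a dry test.
var subscriptionKey = process.env.SPEECH_KEY || "key";
var serviceRegion = process.env.SPEECH_REGION || "northeurope";

console.log("Using region: " + serviceRegion);
```

This keeps the key out of the source file if you ever share or commit it.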

Now that your Azure Speech service is ready, let’s prepare the Raspberry Pi by installing the nodejs npm packages required to call the API, write the generated voice wav file, and play it through the Raspberry Pi sound system.

npm install microsoft-cognitiveservices-speech-sdk play-sound

When that’s done, create a new file named speech.js and copy and paste the nodejs code snippet below into it.

// pull in the required packages.
var sdk = require("microsoft-cognitiveservices-speech-sdk");
var player = require("play-sound")(opts = {});

var subscriptionKey = "key"; // place your subscription key here
var serviceRegion = "northeurope"; // place your azure location here
var filename = "hello.wav"; // this is the file which is going to be generated

// create the speech config holding our credentials and the audio config
// pointing the synthesizer output at our wav file.
var speechConfig = sdk.SpeechConfig.fromSubscription(subscriptionKey, serviceRegion);
var audioConfig = sdk.AudioConfig.fromAudioFileOutput(filename);

// create the speech synthesizer.
var synthesizer = new sdk.SpeechSynthesizer(speechConfig, audioConfig);

// we are done with the setup
var text = "Hello World!";
console.log("Now sending text '" + text + "' to: " + filename);

// start the synthesizer and wait for a result.
synthesizer.speakTextAsync(text,
    function (result) {
        synthesizer.close();
        synthesizer = undefined;
        // when the wav file is ready, play it
        player.play(filename, function (err) {
            if (err) throw err;
        });
    },
    function (err) {
        console.trace("err - " + err);
        synthesizer.close();
        synthesizer = undefined;
    });

Copy the file to your Raspberry Pi the same way explained in the previous article, and connect a speaker or headphones to either your Pi’s stereo jack or one of its USB ports. Then, from the shell, type the below command.

node speech.js

#technology #ai #aiot #azure #cognitiveservices #iot #nodejs #raspberrypi #speech-api