Raspberry Pi

The Raspberry Pi is a tiny and affordable computer that you can use to learn programming through fun, practical projects.

Tools and Images to Build a Raspberry Pi n8n server


The purpose of this project is to create a Raspberry Pi image preconfigured with n8n so that it runs out of the box.

What is n8n?

n8n is a no-code/low-code environment used to connect and automate different systems and services. It is programmed using a series of connected nodes that receive, transform, and then transmit data from and to other nodes. Each node represents a service or system, allowing these different entities to interact. All of this is done through a web UI.

Why n8n-pi?

Whenever a new technology is released, two common barriers often prevent potential users from trying it out:

  1. System costs
  2. Installation & configuration challenges

The n8n-pi project eliminates these two roadblocks by preconfiguring a working system that runs on readily available, low-cost hardware. For as little as $40 and a few minutes, users can have a full n8n system up and running.


This project would not have been possible without the help of the following:


All documentation for this project can be found at http://n8n-pi.tephlon.xyz.

Download Details:

Author: TephlonDude

GitHub: https://github.com/TephlonDude/n8n-pi

#pi #raspberry pi #raspberry #raspberry-pi


TensorFlow Lite Object Detection using Raspberry Pi and Pi Camera

I did not create the object detection model; I merely cloned Google’s TensorFlow Lite model and followed their Raspberry Pi tutorial described in the README. You don’t need this article if you understand everything from the README; I simply describe what I did.


  • I used a Raspberry Pi 3 Model B and a Pi Camera board (I 3D-printed a case for the camera board). **I had this connected before starting and did not include it in the 90 minutes** (there are plenty of YouTube videos showing how to do this depending on which Pi model you have; I used a video like this a while ago).

  • I used my Apple MacBook, which is Unix-based at heart, much like the Raspberry Pi’s Linux OS. On a Mac you don’t need to install any applications to interact with the Raspberry Pi, but on Windows you do (I will explain where to go in the article if you use Windows).

#raspberry-pi #object-detection #raspberry-pi-camera #tensorflow-lite #tensorflow #tensorflow lite object detection using raspberry pi and pi camera


The Raspberry Pi 400 - A full computer in a keyboard!

The Raspberry Pi 400 has arrived in the studio, and in this video I’ll give it a review. I’ll show an unboxing of the Personal Computer Kit from Canakit, which is a great way to get started on the Pi 400. Then I’ll show off the hardware, as well as the out-of-box experience.

#raspberry pi #pi #raspberry-pi


How to run Joystick with Raspberry Pi | Raspberry Pi Ultimate Robot

In this video we are going to learn how to install and run a PS4 joystick on the Raspberry Pi. We will also create a module out of this so that we can run it with the motor module that we created in the previous video.

Part 1: Hardware Build: https://youtu.be/Zdv4cOmOmb8
Part 2: Motor Module: https://youtu.be/0lXY87NwVIc
Part 3: Keyboard Module: https://youtu.be/YEYBbFdus-Q

#raspberry-pi #pi #programming

Edureka Fan



Raspberry Pi 3 Tutorial For Beginners | Raspberry Pi 3 Projects Explained

This “Raspberry Pi 3 Tutorial” video by Edureka will help you in getting started with Raspberry Pi 3 with examples.

#iot #raspberry #developer #raspberry-pi

Philian Mateo



How to Install TensorFlow and Recognize images using Raspberry Pi


This article demonstrates how to install TensorFlow and recognize images using a Raspberry Pi. You will need:

  • Raspberry Pi
  • TensorFlow
  • PuTTY or VNC Viewer

**About TensorFlow**

  • TensorFlow is a free and open-source software library for dataflow programming.
  • It is a symbolic math library.
  • TensorFlow is a computational framework for building machine learning models.

TensorFlow has two components:

  • a graph protocol buffer
  • a runtime that executes the (distributed) graph

Types of image color formats

  • 8-bit color format
  • 16-bit color format

A 16-bit color format is divided into three different color channels: red, green, and blue (RGB).
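
To make the 16-bit split concrete, here is a small Python sketch assuming the common RGB565 layout (5 bits red, 6 bits green, 5 bits blue); the article itself does not name a specific bit layout, so treat this as an illustration:

```python
def rgb565_to_rgb888(pixel):
    """Split a 16-bit RGB565 pixel into 8-bit R, G, B channels."""
    r = (pixel >> 11) & 0x1F   # top 5 bits
    g = (pixel >> 5) & 0x3F    # middle 6 bits
    b = pixel & 0x1F           # bottom 5 bits
    # scale each channel up to the familiar 0-255 range
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

print(rgb565_to_rgb888(0xFFFF))  # pure white -> (255, 255, 255)
print(rgb565_to_rgb888(0xF800))  # pure red   -> (255, 0, 0)
```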

Step 1

Let’s install the Raspbian Stretch operating system and get the latest version of Python. Open the Linux terminal to update the package lists and check the Python versions.


sudo apt-get update    
python --version    
python3 --version    

Installing TensorFlow requires some library files; in particular, the Libatlas library is needed by the TensorFlow package. Install the Atlas library first, then install the TensorFlow package.

sudo apt install libatlas-base-dev  

Once that is finished, install TensorFlow via pip3:

pip3 install --user tensorflow  

Now we’ve successfully installed TensorFlow (version 1.9.0).

Step 2

Once TensorFlow is installed, we’re ready to test the basic scripts provided on the TensorFlow site. First, you can test the Hello World program. Create a new Python file, for example tftest.py:

sudo nano tftest.py  

Next, you have to import the TensorFlow library.

import tensorflow as tf  
hello = tf.constant('Hello, TensorFlow')  
sess = tf.Session()  
print(sess.run(hello))

Run the code from the terminal:

python3 tftest.py

You can see that the Hello, TensorFlow message is successfully printed.


Step 3

Clone the TensorFlow classification script.

git clone https://github.com/tensorflow/models.git  

This is a panda image for reference:


Once the script runs, the panda image will be recognized:

cd models/tutorials/image/imagenet  
python3 classify_image.py 

Source Code

from __future__ import absolute_import  
from __future__ import division  
from __future__ import print_function  
import argparse  
import os.path  
import re  
import sys  
import tarfile  
import numpy as np  
from six.moves import urllib  
import tensorflow as tf  
FLAGS = None  
DATA_URL = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'  
class NodeLookup(object):  
  def __init__(self, label_lookup_path=None, uid_lookup_path=None):  
    if not label_lookup_path:  
      label_lookup_path = os.path.join(  
          FLAGS.model_dir, 'imagenet_2012_challenge_label_map_proto.pbtxt')  
    if not uid_lookup_path:  
      uid_lookup_path = os.path.join(  
          FLAGS.model_dir, 'imagenet_synset_to_human_label_map.txt')  
    self.node_lookup = self.load(label_lookup_path, uid_lookup_path)  
  def load(self, label_lookup_path, uid_lookup_path):  
    if not tf.gfile.Exists(uid_lookup_path):  
      tf.logging.fatal('File does not exist %s', uid_lookup_path)  
    if not tf.gfile.Exists(label_lookup_path):  
      tf.logging.fatal('File does not exist %s', label_lookup_path)  
    proto_as_ascii_lines = tf.gfile.GFile(uid_lookup_path).readlines()  
    uid_to_human = {}  
    p = re.compile(r'[n\d]*[ \S,]*')  
    for line in proto_as_ascii_lines:  
      parsed_items = p.findall(line)  
      uid = parsed_items[0]  
      human_string = parsed_items[2]  
      uid_to_human[uid] = human_string  
    node_id_to_uid = {}  
    proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines()  
    for line in proto_as_ascii:  
      if line.startswith('  target_class:'):  
        target_class = int(line.split(': ')[1])  
      if line.startswith('  target_class_string:'):  
        target_class_string = line.split(': ')[1]  
        node_id_to_uid[target_class] = target_class_string[1:-2]  
    node_id_to_name = {}  
    for key, val in node_id_to_uid.items():  
      if val not in uid_to_human:  
        tf.logging.fatal('Failed to locate: %s', val)  
      name = uid_to_human[val]  
      node_id_to_name[key] = name  
    return node_id_to_name 
  def id_to_string(self, node_id):  
    if node_id not in self.node_lookup:  
      return ''  
    return self.node_lookup[node_id]  
def create_graph():  
  with tf.gfile.FastGFile(os.path.join(  
      FLAGS.model_dir, 'classify_image_graph_def.pb'), 'rb') as f:  
    graph_def = tf.GraphDef()  
    graph_def.ParseFromString(f.read())  
    _ = tf.import_graph_def(graph_def, name='')  
def run_inference_on_image(image):   
  if not tf.gfile.Exists(image):  
    tf.logging.fatal('File does not exist %s', image)  
  image_data = tf.gfile.FastGFile(image, 'rb').read() 
  # Creates graph from saved GraphDef.  
  create_graph()  
  with tf.Session() as sess:  
    softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')  
    predictions = sess.run(softmax_tensor,  
                           {'DecodeJpeg/contents:0': image_data})  
    predictions = np.squeeze(predictions)  
    node_lookup = NodeLookup()  
    top_k = predictions.argsort()[-FLAGS.num_top_predictions:][::-1]  
    for node_id in top_k:  
      human_string = node_lookup.id_to_string(node_id)  
      score = predictions[node_id]  
      print('%s (score = %.5f)' % (human_string, score))  
def maybe_download_and_extract():  
  dest_directory = FLAGS.model_dir  
  if not os.path.exists(dest_directory):  
    os.makedirs(dest_directory)  
  filename = DATA_URL.split('/')[-1]  
  filepath = os.path.join(dest_directory, filename)  
  if not os.path.exists(filepath):  
    def _progress(count, block_size, total_size):  
      sys.stdout.write('\r>> Downloading %s %.1f%%' % (  
          filename, float(count * block_size) / float(total_size) * 100.0))  
      sys.stdout.flush()  
    filepath, _ = urllib.request.urlretrieve(DATA_URL, filepath, _progress)  
    statinfo = os.stat(filepath)  
    print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')  
  tarfile.open(filepath, 'r:gz').extractall(dest_directory)  
def main(_):  
  maybe_download_and_extract()  
  image = (FLAGS.image_file if FLAGS.image_file else  
           os.path.join(FLAGS.model_dir, 'cropped_panda.jpg'))  
  run_inference_on_image(image)  
if __name__ == '__main__':  
  parser = argparse.ArgumentParser()  
  parser.add_argument(  
      '--model_dir',  
      type=str,  
      default='/tmp/imagenet',  
      help='Path to classify_image_graph_def.pb and label files.'  
  )  
  parser.add_argument(  
      '--image_file',  
      type=str,  
      default='',  
      help='Absolute path to image file.'  
  )  
  parser.add_argument(  
      '--num_top_predictions',  
      type=int,  
      default=5,  
      help='Display this many predictions.'  
  )  
  FLAGS, unparsed = parser.parse_known_args()  
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)  



The same method can be used to classify external images, using a terminal command like the following:

python3 classify_image.py --image_file=/PATH/  

This is just the very beginning of TensorFlow on the Raspberry Pi: installing TensorFlow and classifying an image.


In this article, you learned how to install TensorFlow and do image recognition using TensorFlow and Raspberry Pi.

Thank you for reading!

#tensorflow #python #raspberry pi #raspberry-pi

Ari Bogisich


Create C# Universal Application for Raspberry Pi 2

It is time to create our first C# Windows 10 Universal Application for Raspberry Pi 2. You can find an LED blinking example in the official documentation, so today we are going to create a weather application. This application will connect to a remote server, get actual weather information based on city and country name, and display this information on a screen.

Prepare your computer

  1. Download Visual Studio 2015 Community Edition RC
  2. Select Custom installation and enable Universal Windows Apps Development Tools and Emulators for Windows Mobile options
  3. Install WindowsDeveloperProgramForIoT.msi; you can find it inside the Windows 10 IoT Core Insider Preview image files. Download here
  4. Create a Blank App (Windows Universal) project
  5. Select the ARM platform and Remote Debugging:
  • Enter your Raspberry Pi 2 IP address and don’t use Windows authentication.
  • Or go to Project settings -> Debug, set Target device to “Remote Machine”, specify the IP address, and deselect the ‘Use authentication’ box
  6. Open MainPage.xaml and add simple text:
<Grid Background="{ThemeResource ApplicationPageBackgroundThemeBrush}">
    <TextBlock Text="Hello World" />
</Grid>
  7. Press F5 to deploy the application to the Raspberry Pi

#raspberry pi #c# #windows 10 #raspberry pi 2 #programming-c #csharp

Olen Predovic


Extend The Lifespan of Your Raspberry Pi’s SD Card with log2ram

In my previous post, I talked about how you can use zram to squeeze more memory out of your Raspberry Pi at no cost. In this post, I will talk about how you can use that additional compressed memory to extend the life of the SD card on your Raspberry Pi.

The brilliant design of using SD cards

Since the advent of the Raspberry Pi, almost all single-board computers (SBCs) on the market have followed their lead in using SD cards as the main storage medium for the OS.

The main benefits of doing so, as I see it, are:

  1. The convenience of not needing to disconnect and move the device to your computer to be reset or re-flashed in the event of some catastrophic failure of the OS
  2. The low price-point of SD cards enabling cheap replacement in the event of storage failure
  3. Faster iterative learning and feature-set switches, where the device can play a completely different role just by swapping the SD card, which entails switching the OS (for example: media center ➡️ desktop replacement ➡️ IoT hub)

When that brilliance becomes a problem

While SD cards are great for learning and small personal projects, SBCs in the current day and age have outgrown their original use cases as a result of the *specifications arms race* between the Raspberry Pi Foundation and all the other SBC manufacturers.

For example, as of the time of writing, we’re seeing the Raspberry Pi 4 Model B with an option of 8GB RAM, which is way too much for a simple IoT controller. It’s therefore unsurprising that the SBC market is slowly but surely outgrowing its original use cases.

What I use SBCs for

I currently run self-hosted web services for my family and close friends (up to 20 users) on 11 SBCs, totalling 50 cores and 23GB of RAM spread over 2 clusters, Kraken and Leviathan.

#raspberry-pi-os #raspberry-pi #raspbian #sd-card #linux

Zakary Goyette


Get and Store Temperature From a Raspberry Pi With Go - DZone IoT

In this tutorial, I’ll show you how to grab temperature from a Raspberry Pi and build an endpoint to store the data, with Go. You will learn:

  • How to retrieve the temperature from a sensor
  • How to send that data as JSON
  • How to build an API endpoint to receive it
  • How to store the data in a SQLite database

And we’ll do it all with Go. I did a live stream of the entire process that you can watch here.

What You’ll Need for This Tutorial

I’m using Pop!_OS to develop on, but you can use anything you’d like.

Note: If you’d like to learn more about Raspberry Pi, check out this new course, setting up a Raspberry Pi Home Server.

Why Are We Doing This?

I’ve previously written a tutorial to grab room temperature from a Raspberry Pi, and it’s very similar, only using Python. This is another “hand-rolled” tutorial. Why are we doing this by hand? Why not use a cloud service?

The purpose of this tutorial is to give you a deep understanding of IoT and how it works. You can easily use a cloud provider such as:

These services are great. They’re awesome. If you’re building a real project or working on IoT professionally, this is the way to go. They provide excellent secure services and handle so many things for you.

That’s great, but if you want to truly learn IoT you need to get down to the nuts and bolts. The cloud services have clients you configure, and you push the data up, and it’s visualized for you. In this tutorial, we’re going to build all that stuff ourselves and understand it.
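
The receive-and-store half of that hand-rolled flow can be sketched in miniature. The tutorial itself does this in Go; the snippet below is an illustrative Python sketch, with a hard-coded reading and an in-memory SQLite table standing in for the real sensor and database:

```python
import json
import sqlite3

# hypothetical reading; on the Pi this would come from the sensor
reading = {"temperature": 22.5, "humidity": 45.0}
payload = json.dumps(reading)  # what the sensor client would POST as JSON

# what the endpoint would do on receipt: decode the JSON and store it
conn = sqlite3.connect(":memory:")  # stands in for a file-backed database
conn.execute("CREATE TABLE IF NOT EXISTS readings (temperature REAL, humidity REAL)")
data = json.loads(payload)
conn.execute("INSERT INTO readings VALUES (?, ?)",
             (data["temperature"], data["humidity"]))
conn.commit()
print(conn.execute("SELECT * FROM readings").fetchall())  # [(22.5, 45.0)]
```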

Let’s Rock and Roll!

Connect the Temperature Sensor

The sensor will have three wires coming out from it and will need to be connected to the GPIO of the Raspberry Pi, as shown above.

  • There is a red power wire that goes to pin 1.
  • The black wire is ground and goes to pin 6.
  • The orange (sometimes yellow or white) is data and goes to Pin 11.

It’s pretty simple. If you need additional help, here’s a good guide to hooking up the AM2302.

#internet of things #raspberry pi #golang #raspberry pi tutorial


Step by Step Slow Guide: Kubernetes Dashboard on Raspberry Pi Cluster (Part 1)

It’s been a while since I wrote the guide on setting up Kubernetes on a Raspberry Pi cluster:

So, it is about time to talk about monitoring. This time we will be talking about setting up and configuring Kubernetes Dashboard.

Before we can begin, in order to be able to see cluster metrics and graphs, we first need to install the Metrics Server. I already wrote a guide about its installation, so let’s first go through that guide before coming back to continue.

#metrics-server #raspberry-pi-cluster #kubernetes-dashboard #kubernetes #raspberry-pi


Step by Step Slow Guide: Kubernetes Dashboard on Raspberry Pi Cluster (Part 2)

How to setup self-signed certificate for Kubernetes Dashboard and expose it via load-balancer

In the previous part we talked about setting up the Kubernetes Dashboard; now we are going to focus on setting up the Dashboard certificate and exposing it outside our cluster.

First we are going to expose our Kubernetes Dashboard via a load-balancer. We will be using MetalLB. I previously wrote about how to set it up on a Raspberry Pi Kubernetes cluster. If you didn’t do so yet, please follow the link below.

#metrics-server #raspberry-pi-cluster #raspberry-pi #kubernetes-dashboard #kubernetes

Biju Augustian


Learn Raspberry Pi for Image Processing Applications

Image Processing Applications on Raspberry Pi is a beginner course on the newly launched Raspberry Pi 3 and is fully compatible with Raspberry Pi 2 and Raspberry Pi Zero.

The course is ideal for those who are new to the Raspberry Pi and want to explore more about it.

You will learn the components of Raspberry Pi, connecting components to Raspberry Pi, installation of NOOBS operating system, basic Linux commands, Python programming and building Image Processing applications on Raspberry Pi.

This course will take beginners without any coding skills to a level where they can write their own programs.

The basics of the Python programming language are well covered in the course.

Building Image Processing applications is taught in the simplest manner, which is easy to understand.

Users can quickly learn hardware assembly and Python coding for building Image Processing applications. By the end of this course, users will have enough knowledge about the Raspberry Pi, its components, basic Python programming, and the execution of Image Processing applications in real-time scenarios.

The course is taught by an expert team of Electronics and Computer Science engineers with PhD and postdoctoral research experience in Image Processing.

Anyone can take this course. No engineering knowledge is expected. The tutor has explained all required engineering concepts in the simplest manner.

The course will enable you to independently build Image Processing applications using Raspberry Pi.

This course is the easiest way to learn and become familiar with the Raspberry Pi platform.

By the end of this course, users will build Image Processing applications including scaling and flipping images, varying the brightness of images, performing bit-wise operations on images, blurring and sharpening images, thresholding, erosion and dilation, edge detection, and image segmentation. Users will also be able to build real-world Image Processing applications including real-time human face, eye, and nose detection, detecting cars in video, real-time object detection, and human face recognition, among many others.
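
As a taste of the kinds of operations listed above, here is a small illustrative Python sketch (not course material, and deliberately library-free) showing brightness adjustment, thresholding, and horizontal flipping on a toy 3x3 grayscale "image":

```python
# a tiny 3x3 grayscale "image" (0 = black, 255 = white)
img = [[10, 120, 200],
       [50, 130, 250],
       [90, 140, 30]]

def adjust_brightness(image, delta):
    """Add delta to every pixel, clamping to the 0-255 range."""
    return [[max(0, min(255, p + delta)) for p in row] for row in image]

def threshold(image, t):
    """Binarize: pixels above t become 255, the rest 0."""
    return [[255 if p > t else 0 for p in row] for row in image]

def flip_horizontal(image):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in image]

print(threshold(adjust_brightness(img, 20), 128))
```

In practice the course would use a real image library on the Pi, but the per-pixel logic is the same idea.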

The course provides complete code for all Image Processing applications which are compatible on Raspberry Pi 3/2/Zero.

Who is the target audience?

Anyone who wants to explore the Raspberry Pi and is interested in building Image Processing applications

To read more:

#Image Processing Applications on Raspberry Pi # Image Processing #Raspberry Pi #Scratch

Alec Nikolaus


This Raspberry Pi–powered setup improves home brewing

We spied this New Orleans–based, Raspberry Pi–powered home brewing analysis setup and were interested in how the project could help other at-home brewers perfect their craft.

Raspberry Pi in a case with fan, neatly tucked away on a shelf in the Danger Shed

When you’re making beer, you want the yeast to eat up the sugars and leave alcohol behind. To check whether this is happening, you need to be able to track changes in gravity, known as ‘gravity curves’. You also have to do yeast cell counts, and you need to be able to tell when your beer has finished fermenting.
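
To make the gravity-curve idea concrete, here is an illustrative Python sketch (not part of Aleproof) using the common homebrewing approximation ABV ≈ (OG − FG) × 131.25 and a naive "gravity has stabilized" check for finished fermentation:

```python
def abv(original_gravity, final_gravity):
    """Estimate alcohol by volume from the drop in specific gravity,
    using the common approximation ABV ≈ (OG - FG) * 131.25."""
    return (original_gravity - final_gravity) * 131.25

def fermentation_finished(gravity_curve, tolerance=0.001):
    """Naive check: the last two readings on the curve are (nearly) equal."""
    return abs(gravity_curve[-1] - gravity_curve[-2]) <= tolerance

curve = [1.050, 1.020, 1.012, 1.010, 1.010]  # example gravity curve
print(round(abv(curve[0], curve[-1]), 2))  # 5.25
print(fermentation_finished(curve))        # True
```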

“We wanted a way to skip the paper and pencil and instead input the data directly into the software. Enter the Raspberry Pi!”

Patrick Murphy

Patrick Murphy and co. created a piece of software called Aleproof that allows you to monitor all of this remotely. But before rolling it out, they needed somewhere to test that it works. Enter the ‘Danger Shed’, where they ran Aleproof on a Raspberry Pi.

The Danger Shed benefits from a fancy light-changing fan for the Raspberry Pi

A Raspberry Pi 3 Model B+ runs their Python-based program on Raspberry Pi OS and shares its intel via a mounted monitor.

#uncategorized #python #brewing #raspberry pi 3b+ #pycharm #raspberry pi os


The most beautiful project of Raspberry Pi 2020


In this video, we are going to watch the most beautiful project of Raspberry Pi 2020.

#programming #raspberry-pi #pi


Build a thermal camera with Raspberry Pi and Go

The spread of the COVID-19 virus has gripped many parts of the world (especially in Asia) in the past couple of months and has affected many aspects of a lot of people’s lives, including mine. I no longer commute to work every day and try to work from home as much as possible. Non-essential meetings are cancelled (which is a good thing) and other meetings are mostly done through video or audio conferencing. Most larger-scale events like conferences have been postponed to avoid gathering of people, which increases the risk of COVID-19 spreading.

For business continuity, my team has been segregated into Team A and Team B; they take turns using the office on alternate weeks, and never the twain shall meet. Also, as we enter almost any office building, everyone’s body temperature is checked; anyone with a fever is not allowed in and is instead advised to see a doctor.

In one of our management meetings recently, we discussed how to deal with the flow of people (both employees and visitors) into our various offices around the island. Checking temperatures is mostly done with non-contact thermometers by security guards. This, however, is a laborious and time-consuming method, and as people head to their offices it becomes a bottleneck that ironically causes people to gather.

One of the suggestions was to use thermal imagers to do mass screening, which was quickly agreed upon. However, only the offices with higher people flow will be equipped with them, since they’re not cheap: per set, they can run into tens of thousands of dollars! One of my colleagues joked that he’d like to have one for his personal office to screen everyone who comes by.

That, of course, set me off immediately.

Raspberry Pi to the rescue

I wrote a story last December on how I used my iPad Pro as a development device by attaching a Raspberry Pi4 to it as a USB gadget. That was just the start, of course. The Pi4 is much more than a small computer. It will now also be the base of my new thermal screener.


My Raspberry Pi4 (with a bright new case)

For this project I will be using Go primarily, and it will be compiled and run on the Pi4. It will:

  1. Read data from the AMG8833 thermal camera sensor.
  2. Convert the data to temperature readings.
  3. Generate thermal images based on the readings.

To display the readings, the software will also act as a web server and continually display the images on a web page. Also, because the Pi4 runs headless, the thermal camera software will be started and run as a systemd service.
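
A systemd unit for this could look like the following sketch; the unit name, binary path, and flags below are hypothetical stand-ins (the flags mirror the command-line parameters defined later in the code):

```ini
# /etc/systemd/system/thermalcam.service -- hypothetical name and paths
[Unit]
Description=AMG8833 thermal camera server
After=network.target

[Service]
ExecStart=/home/pi/thermalcam/thermalcam -f 100 -min 26 -max 32
WorkingDirectory=/home/pi/thermalcam
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

It would then be enabled with `sudo systemctl enable --now thermalcam`.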

This is how it should turn out if all is well. This is a screenshot of me sitting on a sofa about 1 meter away from the camera, raising my right hand.

Screenshot of thermal camera output from Safari on the iPhone

The thermal camera hardware

The idea is to build a really cheap thermal camera for temperature screening. For the hardware, I’m simply re-using my existing Pi4 and connecting it to an AMG8833 thermal camera sensor.

The AMG8833 is one of the cheapest thermal camera sensors around (slightly more than S$60 Singapore dollars, or US$39.95). The sensor itself is an 8x8 array of infrared sensors from Panasonic that returns an array of 64 individual infrared temperature readings over I2C. It measures temperatures ranging from 0°C to 80°C with an accuracy of ±2.5°C and can detect a human from a distance of up to 7 meters (detection means it can sense the heat differences). It can generate up to 10 frames per second (or a frame every 100 milliseconds).

The Adafruit AMG8833 thermal camera sensor
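
To illustrate the 64-reading/8x8 layout described above, here is a small Python sketch (mock data, not the article’s Go code) that reshapes a flat frame into a grid and locates the warmest pixel:

```python
# a mock frame: 64 readings (°C), flat, as the sensor returns them over I2C
readings = [22.0] * 64
readings[27] = 36.5  # a warm spot (e.g. a person) at row 3, column 3

def to_grid(flat, size=8):
    """Reshape the flat 64-value list into an 8x8 grid, row by row."""
    return [flat[i * size:(i + 1) * size] for i in range(size)]

grid = to_grid(readings)
hottest = max(readings)
row, col = divmod(readings.index(hottest), 8)
print(f"hottest: {hottest}°C at row {row}, col {col}")
```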

The pin-out connections for the AMG8833 are quite straightforward. I’ll be using only 4 of the 6 pins.

  • Vin – this is the power pin. The sensor uses 3.3V, so I connect it to the corresponding 3.3V pin (pin 1) on the Pi4.
  • 3Vo – this is the 3.3V output; I won’t be using it.
  • GND – this is the common ground for power and logic. I connect it to a ground pin (pin 9) on the Pi4. There is more than one ground pin on the Pi4; you can use any of them.
  • SCL – this is the I2C clock pin, and we connect it to the corresponding SCL pin on the Pi4 (pin 5).
  • SDA – this is the I2C data pin, and we connect it to the corresponding SDA pin on the Pi4 (pin 3).
  • INT – this is the interrupt-output pin. It is 3V logic and is used to detect when something moves or changes in the sensor’s vision path. I’m not using it.


This is how it looks after connecting the pins.
Attaching the AMG8833 to my Pi4

To stand up the thermal camera, I took some discarded foam package cushion and carved a scaffolding mini-tower to hold it.


Building a scaffolding tower with some discarded foam package cushion
And we’re done! It looks kind of scrappy but it’ll do.

The thermal camera software

Let’s see how the software works next.

For this project I used 2 external libraries. The first is the amg8833 project from https://github.com/jweissig/amg8833, which is itself a port of Adafruit’s AMG88xx library. The second is a pure Go image resize library, https://github.com/nfnt/resize. The rest are all from the Go standard library.

Variables and parameters

We start off with the variables used as well as list of parameters that we capture from the command-line.

// used to interface with the sensor
var amg *amg8833.AMG88xx

// display frame
var frame string

// list of all colors used 1024 color hex integers
var colors []int

// temperature readings from the sensor 8x8 readings
var grid []float64

// refresh rate to capture and display the images
var refresh *int

// minimum and maximum temperature range for the sensor
var minTemp, maxTemp *float64

// new image size in pixel width
var newSize *int

// if true, will use the mock data (this can be used for testing)
var mock *bool

// directory where the public directory is in
var dir string

func init() {
	// capture the user parameters from the command-line
	refresh = flag.Int("f", 100, "refresh rate to capture and display the images")
	minTemp = flag.Float64("min", 26, "minimum temperature to measure from the sensor")
	maxTemp = flag.Float64("max", 32, "max temperature to measure from the sensor")
	newSize = flag.Int("s", 360, "new image size in pixel width")
	mock = flag.Bool("mock", false, "run using the mock data")
	var err error
	dir, err = filepath.Abs(filepath.Dir(os.Args[0]))
	if err != nil {
		log.Fatal(err)
	}
}


Let’s look at the variables. amg is the interface to the sensor using the amg8833 library. I use frame to store the resized image captured from the sensor, which is then used by the web server to serve out to the web page. frame is a base64 encoded string.

The colors slice is a list of all the colors used in the image. This variable is declared here but populated in the heatmap.go file. grid is an array of 64 floating point temperature readings that is read from the AMG8833 8x8 sensor.

I capture a few parameters from the user when I start the software from the command line. refresh is the refresh rate to capture and display the images. By default it’s 100 milliseconds. minTemp and maxTemp are the minimum and maximum temperatures we want to show on the image. newSize is the width of the final image that’s shown on the browser.

Finally, the mock parameter is used to determine if we are actually capturing from the sensor or using mock data. I used this when I was developing the software because it was easier to test.


I start by checking if the user wants to use the mock data or the actual data captured from the sensor. If the user wants to capture from the sensor, I initialize the amg and then start a goroutine on startThermalCam.

The startThermalCam function is simple: it just grabs the temperature readings into the grid and then waits for the period of time defined in refresh.

The rest of the main function is just setting up the web server. I only have 2 handlers for the web server. The first is for the web page, and the second returns the image captured from the thermal camera.

func main() {
	flag.Parse()
	if *mock {
		// start populating the mock data into grid
		go startMock()
		fmt.Println("Using mock data.")
	} else {
		// start the thermal camera
		var err error
		amg, err = amg8833.NewAMG8833(&amg8833.Opts{
			Device: "/dev/i2c-1",
			Mode:   amg8833.AMG88xxNormalMode,
			Reset:  amg8833.AMG88xxInitialReset,
			FPS:    amg8833.AMG88xxFPS10,
		})
		if err != nil {
			log.Fatal("Cannot connect to the AMG8833 module:", err)
		} else {
			fmt.Println("Connected to AMG8833 module.")
		}
		go startThermalCam()
	}

	// setting up the web server
	mux := http.NewServeMux()
	mux.Handle("/public/", http.StripPrefix("/public/", http.FileServer(http.Dir(dir+"/public"))))
	mux.HandleFunc("/", index)
	mux.HandleFunc("/frame", getFrame)
	server := &http.Server{
		Addr:    "",
		Handler: mux,
	}
	fmt.Println("Started AMG8833 Thermal Camera server at", server.Addr)
	server.ListenAndServe()
}

// start the thermal camera and start getting sensor data into the grid
func startThermalCam() {
	for {
		grid = amg.ReadPixels()
		time.Sleep(time.Duration(*refresh) * time.Millisecond)
	}
}


The first handler, index, uses the public/index.html template, passing it the refresh value. It also triggers a goroutine that starts generating frames into the frame variable.

The getFrame handler takes this frame (which is a base64 encoded string) and pushes it out to the browser.

func index(w http.ResponseWriter, r *http.Request) {
	t, _ := template.ParseFiles(dir + "/public/index.html")
	// start generating frames in a new goroutine
	go generateFrames()
	t.Execute(w, *refresh)
}

// push the frame to the browser
func getFrame(w http.ResponseWriter, r *http.Request) {
	str := "data:image/png;base64," + frame
	w.Header().Set("Cache-Control", "no-cache")
	w.Write([]byte(str))
}


the HTTP handlers

The generateFrames function continually generates the image and places it into the frame variable. This image is encoded as a PNG file and then further encoded as a base64 string to be displayed as a data URL.

// continually generate frames at every period
func generateFrames() {
	for {
		img := createImage(8, 8) // from 8 x 8 sensor
		createFrame(img)         // create the frame from the sensor
		time.Sleep(time.Duration(*refresh) * time.Millisecond)
	}
}

// create a frame from the image
func createFrame(img image.Image) {
	var buf bytes.Buffer
	png.Encode(&buf, img)
	frame = base64.StdEncoding.EncodeToString(buf.Bytes())
}

generating the frames to populate the frame variable

Create images

The createImage function is where the main action is. Remember, the sensor captures data as an array of 64 temperature readings in the grid variable. Creating an image from this is simple.

First, I use the image standard library to create a new RGBA image. Then, for each temperature reading, I get the index of the corresponding color I want and use that to get the hex color integer from the colors array.

package main

// this is the color heatmap used to display the image, from blue to red
// there are 1024 values

func init() {
	colors = []int{
		0x0000ff, 0x0001ff, 0x0002ff, 0x0003ff, 0x0004ff, 0x0005ff, 0x0006ff, 0x0007ff,
		0x0008ff, 0x0009ff, 0x000aff, 0x000bff, 0x000cff, 0x000dff, 0x000eff, 0x000fff,
		0x0010ff, 0x0011ff, 0x0012ff, 0x0013ff, 0x0014ff, 0x0015ff, 0x0016ff, 0x0017ff,
		// ... (1024 values in total, truncated here)
	}
}

initializing the color hex integer array

With that, I grab the red, green, and blue values from the integer and set them into consecutive elements of the Pix attribute of the image. If you remember from the A gentle introduction to genetic algorithms story I wrote earlier, Pix is a byte array with 4 bytes representing a pixel (R, G, B, and A, each represented by a byte). The red, green, and blue bytes fit nicely into them, and by the time the loop ends, we have an 8 pixel by 8 pixel thermal image!

Of course, this is way too small to show on the screen, so we use the resize library to resize the image to a more respectable size. Notice that it's not just making the pixels larger; we use an algorithm (specifically the Lanczos resampling algorithm) to create a much smoother image when enlarged.

// create an enlarged image from the sensor
func createImage(w, h int) image.Image {
	// create a RGBA image from the sensor
	pixels := image.NewRGBA(image.Rect(0, 0, w, h))
	n := 0
	for _, i := range grid {
		color := colors[getColorIndex(i)]
		pixels.Pix[n] = getR(color)
		pixels.Pix[n+1] = getG(color)
		pixels.Pix[n+2] = getB(color)
		pixels.Pix[n+3] = 0xFF // alpha, always fully opaque
		n = n + 4
	}
	dest := resize.Resize(360, 0, pixels, resize.Lanczos3)
	return dest
}

// get the index of the color to use
func getColorIndex(temp float64) int {
	if temp < *minTemp {
		return 0
	}
	if temp > *maxTemp {
		return len(colors) - 1
	}
	return int((temp - *minTemp) * float64(len(colors)-1) / (*maxTemp - *minTemp))
}

// get the red (R) from the color integer i
func getR(i int) uint8 {
	return uint8((i >> 16) & 0x0000FF)
}

// get the green (G) from the color integer i
func getG(i int) uint8 {
	return uint8((i >> 8) & 0x0000FF)
}

// get the blue (B) from the color integer i
func getB(i int) uint8 {
	return uint8(i & 0x0000FF)
}


Displaying on the browser

The final bit is to display it on the browser. Here’s the HTML template that displays the image.

<!doctype html><meta charset=utf-8>
        <script src="/public/jquery-3.3.1.min.js"></script>
        <script type="text/javascript">
            setInterval(function() {
                $.get('/frame', function(data) {
                    $('#image').attr('src', data);
                });
            }, {{ . }});
        </script>

        <img id="image" src="" style="display: block;"/>


If you're not familiar with Go, the {{ . }} is just a placeholder that is replaced by a value in the final HTML that is displayed by the browser. In this case, it's the value (in milliseconds) of how often the image should be refreshed.

That’s it, the software part is done!

Running the software

Let’s take a look at running the software. Remember this is going to be run on the Pi4.


running the software on the Pi4

The larger window in this screenshot is a VNC session into the Pi4, while the smaller browser on the side is running on my MacBook Pro in Safari. I was sitting down on a chair and raising my right hand.

Making it a service

The software runs, but we still need to start it from the command line. For an IoT device this is not acceptable. It should start when the device is powered on, and we shouldn't need to log into the device, open a terminal, and type in the command to start it!

This means the thermal camera software should be run as a service on startup. To do this, I’m going to make it into a systemd service. Here are the steps:

  1. Go to the directory /lib/systemd/system
  2. Create a file named thermalcam.service with the following content (this is the unit file). Remember, you need sudo rights to do this. The important part is ExecStart, which specifies the command that will be executed (the [Install] section is what lets systemctl enable start it at boot in step 6):
[Unit]
Description=Thermal Camera

[Service]
ExecStart=/home/sausheong/go/src/github.com/sausheong/thermalcam/thermalcam -min=27.75 -max=30.75 -f=50 -s=480

[Install]
WantedBy=multi-user.target

3. Give the file the necessary permissions:

$ sudo chmod 644 /lib/systemd/system/thermalcam.service

4. Now you can start the service:

$ sudo systemctl start thermalcam

5. You can check the status here:

$ sudo systemctl status thermalcam

You should get something like this if it’s working. You can start or stop the service using systemctl.


6. Finally to make sure the service starts whenever the Pi4 is powered on:

$ sudo systemctl enable thermalcam

Now you can place the thermal camera anywhere, and the software will start as soon as the Pi4 is powered on! Here is the camera in action on a shelf, next to my TV. I used a battery pack to power the Pi4, but I can also use a USB power adapter to do the same.

The thermal camera in action, powered by a portable battery pack

Let's see how this looks on the iPhone Safari browser.

Further thoughts

You might notice the picture quality is not that amazing. That's to be expected; the sensor is, after all, only an 8x8 grid. With 64 data points, it's not so easy to make out the details. There are definitely better thermal camera sensors out there. Here are some that I found from sniffing around:

  1. FLIR Lepton — https://www.flir.com/products/lepton/ (more than $100, so out of my price range)
  2. MLX90640 — https://www.adafruit.com/product/4407 (it was out of stock when I was looking around)

I have a feeling that the MLX90640 will be better; after all, it is a 32x24 pixel sensor, and with 768 data points, that's 12x more than the AMG8833's 64. Unfortunately I couldn't get hold of one, since it was out of stock everywhere I looked.

The software can detect people but it can’t really be used for thermal screening because it needs to be tuned to the correct temperature to screen. Unfortunately (or fortunately) I don’t have any way of doing this.

So far I’m only using this for thermal imaging, you can think of other things you can do with it, like detecting people or detecting if certain equipment is too hot and so on.

Knock yourself out!

#raspberry-pi #pi #Go

Build a thermal camera with Raspberry Pi and Go