Interfacing three different sensors with a Raspberry Pi over different protocols, presenting the sensor data, and integrating an AI model on a dashboard.
The project has been developed and tested on the Raspbian operating system.
- Raspberry Pi 4 Model B 8GB.
- Ultrasonic sensor (US100).
- Digital intensity sensor (BH1750FVI).
- Temperature sensor (TMP36).
- RPi Approved Phidisk Class10 U1 MicroSD-64GB.
- MCP3008 - 8-Channel 10-Bit ADC With SPI Interface.
- Logitech C270 USB camera.
- USB microSD Card Reader and Writer.
- Raspberry Pi 4 Power Switch Supply Cable USB C.
- Soldering tools (Soldering Iron, Solder Wire, Desoldering Pump, Soldering Iron Stand, Cleaning Sponge, Tweezers, Wire Stripper/Cutter).
- Multimeter.
- Raspbian operating system.
- Node-RED.
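The three sensors reach the Pi over three different buses: the BH1750FVI over I2C, the TMP36 through the MCP3008 ADC over SPI, and the US100 via GPIO trigger/echo (it can also run over UART). Purely as an illustrative sketch, the Python snippet below shows one way such readings could be taken; the pin numbers, I2C address, ADC channel, and the trigger/echo assumption are placeholders to match the actual wiring, and the project itself presents the readings through Node-RED rather than this script.

```python
# Illustrative sketch only: read the three sensors over their respective buses.
# BH1750_ADDR, the MCP3008 channel, and the BCM pin numbers are assumptions.
import time

import spidev                      # SPI access for the MCP3008 ADC
import RPi.GPIO as GPIO            # GPIO access for the US100 trigger/echo pins
from smbus2 import SMBus, i2c_msg  # I2C access for the BH1750 light sensor

BH1750_ADDR = 0x23               # default BH1750 address (ADDR pin low)
BH1750_ONE_TIME_HIGH_RES = 0x20  # one-shot high-resolution measurement command
TMP36_CHANNEL = 0                # MCP3008 channel wired to the TMP36
US100_TRIG, US100_ECHO = 23, 24  # BCM pin numbers for the US100


def read_lux(bus):
    """BH1750 over I2C: start a one-shot measurement, wait, then read 2 bytes."""
    bus.i2c_rdwr(i2c_msg.write(BH1750_ADDR, [BH1750_ONE_TIME_HIGH_RES]))
    time.sleep(0.18)                              # worst-case conversion time
    read = i2c_msg.read(BH1750_ADDR, 2)
    bus.i2c_rdwr(read)
    msb, lsb = list(read)
    return ((msb << 8) | lsb) / 1.2               # datasheet scaling to lux


def read_temperature_c(spi):
    """TMP36 via the MCP3008 over SPI: 10 mV per degC with a 500 mV offset."""
    raw = spi.xfer2([1, (8 + TMP36_CHANNEL) << 4, 0])
    value = ((raw[1] & 0x03) << 8) | raw[2]       # 10-bit conversion result
    voltage = value * 3.3 / 1023.0
    return (voltage - 0.5) * 100.0


def read_distance_cm():
    """US100 in trigger/echo mode: time the echo pulse, convert to centimetres."""
    GPIO.output(US100_TRIG, True)
    time.sleep(0.00001)                           # 10 us trigger pulse
    GPIO.output(US100_TRIG, False)
    start = stop = time.time()
    while GPIO.input(US100_ECHO) == 0:
        start = time.time()
    while GPIO.input(US100_ECHO) == 1:
        stop = time.time()
    return (stop - start) * 34300 / 2             # speed of sound, out and back


if __name__ == "__main__":
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(US100_TRIG, GPIO.OUT)
    GPIO.setup(US100_ECHO, GPIO.IN)
    spi = spidev.SpiDev()
    spi.open(0, 0)                                # SPI bus 0, chip-select CE0
    spi.max_speed_hz = 1350000
    try:
        with SMBus(1) as bus:                     # I2C bus 1 on the Raspberry Pi
            while True:
                print(f"lux={read_lux(bus):.1f}  "
                      f"temp={read_temperature_c(spi):.1f} C  "
                      f"dist={read_distance_cm():.1f} cm")
                time.sleep(1)
    finally:
        spi.close()
        GPIO.cleanup()
```

The MCP3008 is needed because the Pi has no analog inputs; the TMP36 outputs 10 mV per °C with a 500 mV offset, which the helper above converts back to degrees Celsius.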
The Raspberry Pi operating system can be installed manually or automatically. The automatic method is recommended because it saves a lot of time.
Installation of the Raspberry Pi operating system (manually)
- Format the SD card
  - Open the file explorer and choose the option to format the SD card.
  - File system = FAT32 (default).
- Install the Raspberry Pi OS (Raspbian)
  - Install BalenaEtcher.
  - Open BalenaEtcher, select the image (the downloaded OS zip file), select the SD card, and flash.
- Set up the Raspberry Pi
  - Download Raspbian (recommended).
Update the Raspberry Pi
- Open a terminal and type:
sudo apt-get update
- Then run:
sudo apt-get upgrade
- Reboot:
sudo reboot
Installation of the Raspberry Pi operating system (automatically)
- Install Raspberry Pi Imager.
- Format the SD card.
- Choose the operating system and install.
Installing and upgrading Node-RED on the Raspberry Pi
- Running the following command will download and run the install script. If you want to review the contents of the script first, open the URL below in a browser before running it.
bash <(curl -sL https://raw.githubusercontent.com/node-red/linux-installers/master/deb/update-nodejs-and-nodered)
- This script will:
- remove the pre-packaged versions of Node-RED and Node.js if they are present
- install the current Node.js LTS release using the NodeSource. If it detects Node.js is already installed from NodeSource, it will ensure it is at least Node 8, but otherwise leave it alone
- install the latest version of Node-RED using npm
- optionally install a collection of useful Pi-specific nodes
- setup Node-RED to run as a service and provide a set of commands to work with the service
The install script for the Pi also sets it up to run as a service. This means it can run in the background and be enabled to automatically start on boot.
The following commands are provided to work with the service:
- node-red-start (this starts the Node-RED service and displays its log output. Pressing Ctrl-C or closing the window does not stop the service; it keeps running in the background)
- node-red-stop (this stops the Node-RED service)
- node-red-restart (this stops and restarts the Node-RED service)
- node-red-log (this displays the log output of the service)
- You can also start the Node-RED service on the Raspbian Desktop by selecting the Menu -> Programming -> Node-RED menu option.
In this mini project, an object detection model is used to demonstrate an AI model on the dashboard. Object detection aims to localize and identify multiple objects in a single image. The model used here is COCO-SSD (SSD stands for Single Shot MultiBox Detection), trained on the Common Objects in Context (COCO) dataset, one of the most popular open-source object recognition datasets for deep learning; it contains hundreds of thousands of images with millions of already-labeled objects.
The model is capable of detecting 80 classes of objects. A green bounding box marks each detected object, for example a person, tie, or bottle, and the label at the top-left corner of each box shows the object class and the detection confidence. The heading "Object Detection using COCOSSD" is displayed using HTML.
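The project draws these overlays in the browser with p5.js; purely to illustrate the same overlay logic in a standalone script, the hypothetical Python/OpenCV sketch below draws a green box with the class name and confidence at its top-left corner for a made-up list of detections.

```python
# Illustration only: draw COCO-SSD-style overlays (green box, class name and
# confidence at the top-left corner) for a hypothetical list of detections.
import cv2
import numpy as np

# Hypothetical detections in the form (class_name, confidence, (x, y, w, h)).
detections = [("person", 0.93, (80, 60, 200, 320)),
              ("bottle", 0.71, (340, 180, 60, 150))]

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a camera frame

for label, score, (x, y, w, h) in detections:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)   # green box
    cv2.putText(frame, f"{label} {score:.2f}", (x, y - 8),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)     # class + confidence

cv2.imwrite("overlay.png", frame)                 # write the annotated frame to disk
```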
The real-time sensor data dashboard presents the three sensor readings, together with a line chart for each reading. The temperature reading uses a gauge indicator, the light level uses a level indicator, and the distance reading is shown as text; all of these readings update in real time.
Each line chart plots the sensor reading against time. The Skymind logo is shown at the top of the dashboard, together with a real-time clock and date that are always up to date.
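How the readings reach the dashboard widgets is not detailed here; one possible route, shown below only as an assumption and not necessarily the project's actual flow, is to POST them from a Python script to a Node-RED `http in` node wired to the gauge, level, text, and chart widgets. The `/sensors` endpoint path is made up for the example.

```python
# Hypothetical feed: POST sensor readings to a Node-RED "http in" node.
# The /sensors endpoint must be created in the flow and wired to the
# dashboard widgets (gauge, level, text, chart) and an "http response" node.
import time
import requests

NODE_RED_URL = "http://localhost:1880/sensors"   # 1880 is Node-RED's default port


def publish(temperature_c, lux, distance_cm):
    payload = {"temperature": temperature_c, "light": lux, "distance": distance_cm}
    requests.post(NODE_RED_URL, json=payload, timeout=2)


if __name__ == "__main__":
    while True:
        # Replace the constants with real readings from the sensor script above.
        publish(temperature_c=25.4, lux=310.0, distance_cm=42.7)
        time.sleep(1)
```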
The object detection dashboard is essentially a p5.js sketch; it behaves exactly like p5.js because the code is largely the same. It uses an iframe to display the object detection webpage and run the object detection model inside the dashboard.