
Development of Version 3

A few factors led to the overhaul of the scenery robot that became v3. The Intel Edison chip being used had become obsolete, so continuing to develop on it was not ideal. Additionally, the existing software was unfit for a cue-based system, the end goal of the project. Finally, the working group decided that an "Internet of Things" approach was best. Several major decisions followed: switching to an Arduino YUN, rebuilding the entire control interface, and altering the communications framework. For a simple overview of how this system works, just look at the image below!

(System overview diagram)

An Exploration of Data Flow for Networked Stage Robotics

The Internet of Things (IoT) design paradigm is the central motivation behind the networked data flow of this stage robot iteration. Radio-based RC options provide low latency and require little infrastructure to deploy, but they lack the backend to support features like cueing, on-the-fly traversal updates, homing, and dynamic collision avoidance. By allowing the robot to interact with a WiFi network, the computing required to support a robust feature set can be distributed across devices to simultaneously optimize for latency and processing efficiency. The IoT approach also allows multiple devices to interact over a single network, opening the door for multiple robots, visualizers, and interfaces to synchronize as the stage robotics project continues to advance.

Before the flow of data in this networked solution can be explained, it is beneficial to first explore each piece of hardware in the system to understand why it is responsible for its particular processing tasks. Beginning with the lowest level of compute-enabled hardware, the first item on the list is the Arduino processor. Arduino provides a well-supported environment for low-level device control and, in conjunction with libraries like SoftwareSerial and Sabertooth, allows for intuitive programming of device drive control. Unfortunately, the Arduino's device-facing processor, the ATmega32u4, hereon the "Arduino processor," is too lightweight to support the more complex network tasks required of the robot.

Enter the Arduino YUN, a device with the same hardware interfacing capabilities as an Arduino Uno but with a Linux environment directly on board. The YUN is not to be mistaken for a more powerful Uno; rather, it keeps the Arduino processor and adds an Atheros AR9331 MIPS processor, hereon the "Linux processor," to support the network-facing capabilities of the board. The two processors are connected by a data bridge, so information from the Linux processor can be piped down to the Arduino processor, and likewise data from the Arduino processor can be sent upstream to the Linux environment. In the deployment used for the stage robot, the Arduino processor is responsible for device-facing tasks like driving the wheels and collecting encoder data. Notably, the Arduino processor does not determine how to drive the robot; it simply formats and passes along the drive commands received from the Linux processor.
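As a rough sketch of how data might cross this bridge from the Linux side, the YUN's bundled Python bridge client exposes a simple key/value store that the Arduino sketch can read with the Bridge library. The key names below (drive_left, drive_right, enc_left) are hypothetical illustrations, not the project's actual protocol.

```python
import sys

# The YUN's Linux environment ships a Python client for the bridge's
# key/value store; it lives outside the default module path.
sys.path.insert(0, '/usr/lib/python2.7/bridge/')
from bridgeclient import BridgeClient

bridge = BridgeClient()

# Hypothetical keys: publish one drive value per motor. The Arduino
# sketch would read these with Bridge.get() and apply them to the wheels.
bridge.put('drive_left', '64')
bridge.put('drive_right', '64')

# Data sent upstream by the Arduino sketch can be read back the same way,
# e.g. an encoder count it published with Bridge.put().
left_ticks = bridge.get('enc_left')
```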

The Linux processor uses its network capabilities to communicate with the web server, where the cue control data structure is hosted alongside other server functions, including a manual control interface. The Linux processor on the robot is but one device communicating with the server; other computers on the network can also see the control data for monitoring, visualization, and control purposes. The networked applications merely provide different interpretations of the same server data to suit the needs of the specific interface goal. The server is hosted on a Raspberry Pi physically connected to a router, though there are many other ways to host a web page on a local network.

With the hardware laid out, it is now possible to examine the flow of data from the user to the robot's hardware. There are many possible control interfaces for stage robotics, and if desired, it is still possible to drive the robot manually with a remote control. To accomplish this, a computer on the network with a controller attached simply needs to log into the web interface so that controller data can be posted to the server. An alternative, and one of the more exciting features of the robot in its third revision, is cue-based control, where traversals can be plotted out and saved in the web interface. This interface uses the cue data structure's ability to calculate movement vectors from end-point locations.
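As a minimal sketch of the kind of calculation the interface performs, the Python below derives a movement vector (distance and heading) from two end-point locations. The coordinate convention and units are assumptions for illustration, not the project's actual format.

```python
import math

def movement_vector(start, end):
    """Return (distance, heading in degrees) from start to end.

    Assumes (x, y) stage coordinates in feet, with heading measured
    counterclockwise from the +x axis.
    """
    dx = end[0] - start[0]
    dy = end[1] - start[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

# Example: a cue from (0, 0) to (6, 8) is a 10 ft move at roughly 53 degrees.
print(movement_vector((0, 0), (6, 8)))
```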

The choice of a web interface is key here. In V2, the application was a C-based executable runnable only on Windows machines. In order to make our platform as universal as possible, we decided to make it a Node.js-based web interface, meaning that any machine running Node.js can host the app and any computer with a web browser can run it.

Regardless of the source of the input, all control data is eventually posted on the server. Cue data is posted in JSON format on one page, manual data is posted as a set of drive values on a separate page, and flags are hosted on yet another page. There is also a page set up for devices to talk back to the server without using a front end. This page is used by the robot to post encoder data and status reports, which the server can interpret for the front end.
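To make the page layout concrete, the cue page might carry JSON along these lines; the field names and structure here are illustrative assumptions, not the project's actual schema.

```python
import json

# Illustrative only: a cue list as it might be posted on the cue page.
# Each cue carries a movement vector plus acceleration/deceleration data.
cue_page_body = '''
{
  "cues": [
    {"number": 1,
     "vector": {"distance": 10.0, "heading": 53.1},
     "accel": 0.5,
     "decel": 0.5}
  ]
}
'''
cue_list = json.loads(cue_page_body)["cues"]
```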

The Linux processor runs a Python script that scrapes these web pages and interprets the information into specific drive commands for the Arduino processor. For example, the user may update the cue list, which will cause the server to set a "load cue" flag that the Linux processor will see. When the flag is set, the Linux processor will scrape the JSON cue control data and parse it into the cue data structure in the Linux system's memory. The Linux processor will also communicate with the Arduino processor to terminate any drives currently in motion and set an LED indicator color. When the parsing of cue data is complete, the Linux processor posts to the server indicating successful processing, prompting the server to clear the "load cue" flag.
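A minimal sketch of that polling loop might look like the following, assuming the YUN's Python 2.7 environment; the server address, page paths, and flag names are hypothetical placeholders.

```python
import json
import time
import urllib2  # the YUN's Linux environment ships Python 2.7

SERVER = 'http://192.168.0.10'  # hypothetical address of the Pi server

def fetch(path):
    return urllib2.urlopen(SERVER + path).read()

while True:
    flags = json.loads(fetch('/flags'))      # hypothetical flag page
    if flags.get('load_cue'):
        cues = json.loads(fetch('/cues'))    # scrape the JSON cue data
        # ... terminate any drive in motion, set the LED indicator,
        # and rebuild the in-memory cue data structure here ...
        urllib2.urlopen(SERVER + '/ack', 'load_cue=done')  # POST; server clears the flag
    time.sleep(0.1)
```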

Important considerations throughout the system are bandwidth, latency, and processing power, each of which varies between connection points in the data flow. For this reason, the Linux processor is also responsible for mathematically determining the execution of a cue in terms of specific motor drive values while balancing the equation with live encoder data sent up from the Arduino processor. While the server is the most powerful device in the flowchart, with the possible exception of a client computer, it is too far abstracted from the hardware of the robot to calculate these drive commands fast enough, and even if it could, the Linux processor would become responsible for much more intense web parsing that may fall outside of its hardware capabilities. The cue vector is thus interpreted by the Linux processor into drive commands representing the acceleration, deceleration, and total drive time segments of the cue based on real-time progress towards the destination. Encoder values are regularly posted to the network for monitoring. The drive data itself is sent down the bridge to the Arduino processor.
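The source does not spell out the math, but one common way to realize acceleration, cruise, and deceleration segments is a trapezoidal velocity profile driven by encoder progress; the sketch below, with assumed parameter names and normalized units, shows the idea.

```python
def drive_value(progress, total, accel_frac=0.2, decel_frac=0.2,
                top_speed=1.0, min_speed=0.1):
    """Trapezoidal profile: ramp up, cruise, then ramp down.

    progress and total are encoder distances in the same units; the
    return value is a normalized speed in [0, 1] to scale into a
    motor command. All parameters here are illustrative assumptions.
    """
    if progress >= total:
        return 0.0                       # destination reached: stop
    frac = progress / float(total)
    if frac < accel_frac:                # acceleration segment
        return max(min_speed, top_speed * frac / accel_frac)
    if frac > 1.0 - decel_frac:          # deceleration segment
        return max(min_speed, top_speed * (1.0 - frac) / decel_frac)
    return top_speed                     # cruise segment
```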

The Arduino processor's algorithm is built with safety at the forefront. On every loop execution, the robot will immediately deploy its brakes unless it is explicitly told to power the drive wheels by the Linux environment. If there is a break anywhere in the chain of communication, the risk of a rogue drive is significantly mitigated by this algorithm's halt assumption. Ideally, the Arduino processor would be responsible for processing the encoder data to avoid the latency introduced by two-way bridge travel, but the Arduino is not powerful enough to handle all of the different consequences of the data, so it must be sent upstream for processing.
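The real implementation runs as a C++ sketch on the ATmega32u4, but the halt-by-default pattern can be rendered conceptually in a few lines of Python; the function names here are stand-ins for the actual bridge read, drive, and brake routines.

```python
def control_loop(read_bridge_command, drive, brake):
    """Halt-by-default loop, conceptually mirroring the Arduino sketch.

    read_bridge_command returns None unless the Linux processor has sent
    a fresh drive instruction this cycle; braking is the default action.
    """
    while True:
        command = read_bridge_command()
        if command is None:
            brake()         # no explicit instruction: deploy the brakes
        else:
            drive(command)  # only drive when the Linux side says so
```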

In summary, a GUI is hosted on the local network for users to input cues for the robot's movement. The user may decide to input direct measurements, draw the path over a ground plan, or even drive the robot freely with a controller. The cue list itself is a data structure consisting of vectors with an acceleration and deceleration, which are extracted from the path. The GUI input, whatever its source, is interpreted into the vector data structure, which is hosted on a Pi running a webpage on the local network. This page can be interpreted in different ways by different devices. Multiple robots could derive their instructions from a common source, or a mobile application could provide real-time updates on expected movement, all from the data on this page. Once the robot has copied the cue list, any cue can be triggered with a lightweight "Go" web update.

A Python script running on the YUN repeatedly checks the local web pages for movement instructions and interprets them into specific motor control arguments for the adjacent microprocessor. The Python script is also responsible for managing communication across the Bridge to the microprocessor, informing the network as needed. The YUN's on-board Arduino processor reads values off of the bridge and applies them to the proper hardware devices using a SoftwareSerial implementation of the Sabertooth library. The encoder devices on the robot report to the Arduino script, which in turn hands the required information back to the Bridge for processing on the Linux device. A custom PCB directs values from the YUN pinout and activates the motors according to the planned path of traversal, in conjunction with an emergency stop system built outside of this data flow.

Where We Go Next

The greatest underlying strength of the IoT approach is its scalability. Future plans include the ability to host multiple robots in the same application, support for more than one laptop on the network without crossover issues, and functionality for mobile devices so that technicians only have to open an app on their smartphone to cue up the robot and monitor its location.