Ming He ECE 5160 FAST ROBOTS

LinkedIn: www.linkedin.com/in/huaming-he
heyming98@gmail.com

Welcome to my Fast Robots website page.
I am an ECE student interested in robotics.


LAB1: The Artemis board and Bluetooth

Part 1: Objective

The goal of this part of the lab is to set up and become familiar with the Arduino IDE and the Artemis board. It covers programming in the Arduino IDE, testing the board with LED blinks, communicating over the serial protocol, and using the onboard temperature sensor and pulse density microphone.

22nd, Jan 2024 - 7th, Feb 2024

Example: Blink it Up

After successfully installing the SparkFun Apollo3 support software in the Arduino IDE and connecting the SparkFun RedBoard Artemis Nano to the laptop, I first tested programming the board by making an LED blink. In the example video below, the LED was on for 1 second, and then off for 1 second.
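
The blink test needs only the stock example; below is a minimal sketch of the behavior in the video, assuming the Apollo3 core's LED_BUILTIN mapping:

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);    // onboard LED on the Artemis Nano
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH); // LED on for 1 second
  delay(1000);
  digitalWrite(LED_BUILTIN, LOW);  // LED off for 1 second
  delay(1000);
}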

Example: Serial Communication

The screenshot below demonstrates how to test serial communication. In this example, the user types an input into the serial monitor, and the Artemis board reads this input and outputs it back to the serial monitor.

Serial Communication
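
A minimal sketch of the echo logic (the bundled example may differ in detail; 115200 baud is an assumed serial monitor setting):

void setup() {
  Serial.begin(115200);   // must match the serial monitor's baud rate
}

void loop() {
  if (Serial.available() > 0) {
    char c = Serial.read();
    Serial.write(c);      // send each received character straight back
  }
}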

Example: Analog Input Read

The example video below demonstrates how to use the temperature sensor. The temperature recorded by the sensor on the Artemis board is sent to the serial monitor in Fahrenheit. To illustrate how the sensor responds, I covered the Artemis board with my hand, causing the temperature to increase to above 80°F.

Example: Microphone Usage

The example video below demonstrates how to use the microphone on the Artemis board to detect the loudest frequency.

(5000-Level) Example: Musical Tuner

The example video below shows the tuner identifying the musical note "A". When the microphone detected an "A", corresponding to a sound frequency of approximately 526 Hz, the LED turned on; for sounds of other frequencies, the LED remained off.


Part 2: Objective

The goal of part two is to establish Bluetooth communication between the computer and the Artemis board, utilizing Python in a Jupyter notebook and the Arduino programming language. This session aims to create a foundational framework for Bluetooth data transmission, setting the stage for its application in future lab exercises.

22nd, Jan 2024 - 7th, Feb 2024

Setup

Start Jupyter Server

After activating the virtual Python environment, I started the Jupyter server to use Jupyter notebooks for writing Python code.

Jupyter Lab Starts

Artemis Board Setup

After installing ArduinoBLE from the library manager in the Arduino IDE, I loaded and burned the sketch ble_arduino.ino onto the Artemis board from the ble_arduino directory in the codebase, following the instructions provided.


Bluetooth Connection

After successfully setting up the Artemis board and starting the Jupyter server, the next step was to connect the Artemis board to the Python code. I first needed to read the MAC address of the Artemis board and replace the default MAC address in the Arduino code. After uploading the provided ble_arduino.ino code file to the Artemis board, the MAC address c0:89:f4:6b:86:4b was returned in the serial monitor, as shown in the figure below.


Since the BLEService is used in this lab, a UUID is needed in addition to the MAC address to identify the service and to differentiate the various types of data sent or received between the Artemis and the computer. Therefore, in the Jupyter notebook, I ran the following code:
from uuid import uuid4
uuid4()
This generated a unique UUID, which I then assigned to the BLE service in both the Arduino IDE and the Jupyter notebooks.

BLE connection

Connection

For the connection between the Artemis board and the computer, I first obtained the ArtemisBLEController object and connected to the Artemis Device before conducting any further tests.

BLE connection

Connection via commands

For testing the connection between the Artemis board and the computer, some new commands were created and utilized for different tasks. When a new command type is introduced, it should be added to both the Arduino IDE and the Python code to ensure they can communicate correctly under a specific command.

command types in Arduino command types in Python

Example: ECHO Command

The ECHO command sends a string value from the computer to the Artemis board, driven by the Python code. The Artemis board then sends back an augmented phrase, with additional words appended to the original, as shown below in both the Jupyter notebook and Arduino IDE screenshots.

Echo command in Arduino Echo command in Python

Example: GET_TIME_MILLIS Command

The next command, GET_TIME_MILLIS, involved receiving the current time from the Artemis board using the built-in millis() function. Following the instructions, I converted this value to an integer and sent it as a string to Python.

Get millis in Arduino Get millis in Python
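
A sketch of the Arduino-side command case, assuming the EString helper (tx_estring_value) and string characteristic (tx_characteristic_string) provided in the course codebase:

case GET_TIME_MILLIS: {
    // Read the clock, cast it to an int, and send it back as a string
    tx_estring_value.clear();
    tx_estring_value.append("T:");
    tx_estring_value.append((int) millis());
    tx_characteristic_string.writeValue(tx_estring_value.c_str());
    break;
}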

Notification Handler

Instead of manually sending a command and then reading the reply, a notification handler can be used to monitor the data transfer. It is an asynchronous event handler: whenever the Artemis board sends data, the handler automatically receives it in Python without blocking.

The code below demonstrates how I used a notification handler to asynchronously receive the time data sent from the Artemis board.

Event handler

Example: GET_TIME_MILLIS_3s Command

The screenshot below demonstrates that I obtained the current time in milliseconds from the Artemis board and sent it back to the laptop. After collecting these time values for 3 seconds, I calculated the average rate at which messages can be sent: the effective data transfer rate of this method was 832 bytes/s.

collect and send data for 3 secs

Example: SEND_TIME_DATA

Next, a new command, SEND_TIME_DATA, was introduced. In the Arduino code, I created a loop that, instead of sending each timestamp to the laptop immediately, stored each timestamp in an integer array named time_stamps. When the SEND_TIME_DATA command is called, the Arduino code loops through the array and sends each data point as a string to the laptop for processing. The data transfer rate of this method was calculated to be 411,000 bytes/s, in contrast to the previous method, which sent each timestamp immediately instead of storing it in an array.

collect and send data stored in an array which collects 3 secs data

Example: GET_TEMP_READINGS

Next, as instructed, I added a second array of the same size as the time stamp array to store temperature readings. As shown in the screenshot below, each element in both arrays corresponded, meaning that the first timestamp was recorded at the same time as the first temperature reading. A command, GET_TEMP_READINGS, was introduced to loop through both arrays concurrently and send the data back to the laptop. I was able to configure the notification handler to manage the event and wrote code in Python to parse these strings, populating the data into two lists.

BLE connection

Example: Discussion about Two methods

The first method involves the Artemis board sending data immediately as it becomes available, without any buffering. The second approach, on the other hand, has the Artemis board buffering data into a packet before sending it to the computer. This method is faster, as indicated by the results above. The Artemis board is equipped with 384 kB of RAM, and considering that one character (char) is equivalent to 1 byte, it has the capacity to store up to 384,000 characters in its memory for transmission without running out of space.

(5000-Level): Effective Data Rate and Overhead

Next, I sent messages from the computer and received replies from the Artemis board. I calculated the data rate for 5-byte replies to be about 103 bytes/s and for 120-byte replies to be about 1602 bytes/s. Additionally, to understand the relationship between reply size and data transfer rate, I tested reply sizes in 5-byte increments. As depicted in the graph below, the data rate increases with the number of bytes sent, indicating that larger replies reduce overhead. However, the graph also shows that when replies exceed roughly 80 bytes, the trend can break and the rate may drop below what is expected.

BLE connection BLE connection

(5000-Level): Reliability

By modifying the code for the command SEND_TIME_DATA to send 10 unique data points at a high speed without any delay, the reliability of the Artemis was demonstrated. In the screenshot attached below, we can see that the data is reliable by comparing the data sent from the Artemis board and the data received on the laptop.

reliability

LAB2: IMU

Set Up the IMU

The goal of this part of the lab is to set up and become familiar with the IMU. It covers programming in the Arduino IDE and testing the IMU by collecting accelerometer and gyroscope data.


Given that the IMU communicates with the Artemis board using the I2C protocol, knowing the slave address is crucial. The variable AD0_VAL signifies the last bit of the IMU's I2C address, which changes depending on the ADR jumper's connection. Initially, my IMU failed to provide any data, and the connection was unsuccessful because I had AD0_VAL set to 1, whereas it should have been 0 for my device.

IMU connected
7th, Feb 2024 - 14th, Feb 2024

Visualizing IMU Accelerometer and Gyroscope Data via Serial Plotter

The IMU was first connected to the Artemis board via the QWIIC connectors. To set up the IMU in the software, the ICM_20948 library was installed in the Arduino IDE to enable communication between the Artemis board and the IMU, allowing for data collection. The accelerometer and gyroscope data were visualized using the serial plotter.

Visualizing the difference between accelerometer and gyroscope data via Serial Monitor

Based on the data printed to the serial monitor, the accelerometer measures acceleration along each coordinate axis; when moving the device along an axis, the corresponding accelerometer reading changes rapidly. The gyroscope measures the rate of angular rotation around the spatial coordinate axes. When the device is stationary, the readings remain relatively small; however, they slowly drift towards larger values.

Accelerometer

In order to use the accelerometer to determine pitch and roll, we can use the atan2 function, which returns an angle in radians within the range [−π, π].
The functions used here to calculate angles are:

\(\boldsymbol{\theta = \text{atan2}(a_x , a_z )}\)

\(\boldsymbol{\phi = \text{atan2}(a_y , a_z )}\)

The screenshot provided below demonstrates the implementation of the atan2 function to calculate the angles of pitch and roll in radians, which are then converted to degrees.

acc_data in degree
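
In code, the conversion amounts to two calls to atan2 on the SparkFun ICM-20948 library's accelerometer accessors; a sketch (not necessarily the exact code in the screenshot):

// Pitch and roll from the accelerometer, converted from radians to degrees
float pitch = atan2(myICM.accX(), myICM.accZ()) * 180.0 / M_PI;
float roll  = atan2(myICM.accY(), myICM.accZ()) * 180.0 / M_PI;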

The video below illustrates the pitch and roll data being printed out. It clearly demonstrates that the system operates effectively within the range of -90 degrees to 90 degrees along both the x and y axes, showcasing the pitch and roll actions.


Accelerometer Accuracy

I recorded data for a few seconds at each angle (0, -90, and 90 degrees), and on review the data appears accurate enough. The teaching assistant instructed that there is no need to calculate a conversion factor.

Accelerometer Noise

As demonstrated in the video below, when the device is stationary, the signal in the time domain continues to fluctuate, indicating noise that may require attention.


When inspecting the noise, a fast Fourier transform (FFT), an algorithm that computes the discrete Fourier transform (DFT) of a sequence, plays a crucial role: it makes the data much easier to examine in the frequency domain.


When the device is completely stationary, collecting the data and applying the Fast Fourier Transform (FFT) reveals the signal in the frequency domain, as shown below.

stationary signal in frequency domain

However, when a vibration occurs during the collection of accelerometer data, the signal in the frequency domain appears as illustrated in the screenshot below.

vibrated signal in frequency domain

According to the datasheet, a hardware low pass filter is implemented, which explains the relatively clean signal in the FFT diagram when the device is stationary. When vibration was introduced, new signal content appeared up to about 30 Hz, so implementing an additional low pass filter in software might be beneficial. Based on the diagram, I selected a cut-off frequency of 10 Hz.


The cut-off frequency was set at 10 Hz. Using the formula \(\boldsymbol{f = \frac{1}{2\pi RC}}\), where f is the cut-off frequency, substituting \(\boldsymbol{f = 10}\) yields an RC constant of approximately 0.0159. The sampling period T follows from the sample rate (\(\boldsymbol{T = \frac{1}{\text{sample rate}}}\)); the sample rate, determined by recording a set number of data points over a known period of time, came out to approximately 294.77 Hz, so \(\boldsymbol{T = 0.00339\, \text{seconds}}\). Using the formula \(\boldsymbol{\alpha = \frac{T}{T + RC}}\), \(\boldsymbol{\alpha}\) is therefore approximately 0.1756.
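
Applied sample by sample, the resulting first-order low pass filter is a one-line update; a sketch using the α computed above:

const float alpha = 0.1756;   // T / (T + RC) from the calculation above
float pitch_lpf = 0.0;        // filtered estimate, carried between samples

void lowPass(float pitch_raw) {
  // Blend the new raw sample with the previous filtered value
  pitch_lpf = alpha * pitch_raw + (1.0 - alpha) * pitch_lpf;
}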


After introducing the low pass filter, which effectively reduces the noise to some extent, the data monitoring through the serial plotter is demonstrated in the video below. The video clearly distinguishes between the raw data, represented by the blue line, and the processed data, shown as the orange line, after implementing the low pass filter. It is evident that the low pass filter significantly reduces the noise.

Gyroscope

The gyroscope measures the rate of angular change in degrees per second. To calculate the angle, it adds the product of the angular rate and time to the current angle. A key advantage of the gyroscope is its lower noise levels compared to those of the accelerometer. However, a notable drawback is data drift over time. This means that even when the device is stationary on a table and the readings should ideally be zero, the data will gradually accumulate, leading to progressively larger values as time advances.
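
A sketch of that integration step, assuming the SparkFun library's gyrZ() accessor (in degrees per second) and that myICM.getAGMT() has refreshed the readings:

unsigned long last_time = millis();
float yaw = 0.0;

void integrateGyro() {
  unsigned long now = millis();
  float dt = (now - last_time) / 1000.0;  // elapsed time in seconds
  yaw += myICM.gyrZ() * dt;               // deg/s * s accumulates degrees
  last_time = now;
}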


Data Drift

The video below displays data measured from the gyroscope, clearly showing how the data drifts over time even when the device is stationary.

Comparison of data output: gyroscope data versus accelerometer data and accelerometer data processed with a Low Pass Filter (LPF)

The video below demonstrates that the gyroscope data follows the same pattern as the accelerometer data, albeit with an increasing offset over time. This growing offset highlights the drift characteristic of gyroscope data.

Since calculating angles from gyroscope data is heavily dependent on the time difference between each data collection, the sample rate significantly impacts accuracy. After reducing the sample rate by introducing a delay, visualization of the gyro data through the serial plotter showed that updates to the new angles became slower and noticeably less accurate.

Complementary Filter

Although the application recommended by the instructor, which can auto-scale the data, did not work on my laptop, it is still evident that after implementing the complementary filter, vibrations caused no more than a 3-degree difference in the estimated angle.

Additionally, the working range from -90 degrees to 90 degrees was verified.
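
Conceptually, the complementary filter is a single weighted update per sample; a sketch where pitch_acc is the accelerometer-derived pitch from atan2 above, dt is the sample period, and 0.9 is an illustrative weight rather than my tuned value:

// The gyro term tracks fast motion; the accelerometer term removes drift.
const float alpha_cf = 0.9;
pitch = alpha_cf * (pitch + myICM.gyrY() * dt) + (1.0 - alpha_cf) * pitch_acc;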

Sample Data

I increased the speed of IMU data collection by eliminating print statements and opting to store a time value at each iteration of the while loop, and store another time value when myICM.dataReady() returns true. This resulted in an average delay of approximately 0.0005 seconds between collections. However, this delay is expected to increase with the addition of more specific function calls and filter processing. Furthermore, by comparing two arrays that store time values for the main while loop and an inner loop—which merely checks for new IMU data readiness—the pacing between them appeared consistent.


Discussion

Consider if it makes sense to have one big array, or separate arrays for storing ToF, Accelerometer, and Gyroscope data

Storing data in separate arrays is more logical because each sensor—Time-of-Flight (ToF), Accelerometer, and Gyroscope—generates data with distinct meanings and units. Separate arrays ensure clearer organization, simplifying data processing and analysis. This is particularly important for cases like the Accelerometer, which requires the implementation of a Low Pass Filter, whereas Gyroscope data does not. Thus, processing data from different sensors may necessitate different filters or logic, indicating that using separate arrays can streamline implementation.

Consider the best data type to store your data. Should you use string, floats, double, integers?

For the Time-of-Flight (ToF) sensor, integers should be sufficient since distances are typically measured in whole numbers of millimeters or centimeters. However, floats are necessary if the application requires more precise distance measurements. For both the gyroscope and accelerometer, which measure different parameters, floats are adequate because they offer a good balance between precision and memory usage.

Consider the memory of the Artemis; how much memory can you allocate to your arrays? What does that correspond to in seconds?

Assuming I have three separate arrays for the three sensors—ToF, Gyroscope, and Accelerometer—with each sensor's reading represented as a single float, I would need 12 bytes per set of sensor readings (4 bytes × 3). Assuming the entire 384 KB of RAM is available for data storage and with a sampling rate of 100 readings per second, I can store approximately 32,000 sets of data points. This corresponds to about 320 seconds, which is equivalent to approximately 5 minutes and 20 seconds of data at a sampling rate of 100 Hz.

Demonstration of data collection for over 5 seconds

The image below demonstrates that the board is capable of capturing data for more than 5 seconds and then transmitting it to the computer via Bluetooth.

5s data collection

Stunt Car Record

The video below showcases the stunt car in action. I've tested its movements forward, backward, turning around, and even flipping. The car is highly sensitive and fast, making it challenging to prevent collisions with walls or to avoid flipping due to its speed.

As the instructor clarified, there is currently no need to mount the IMU on the car for testing.


LAB3: Time of Flight Sensors

Power up your Artemis with a battery

In constructing the project, a JST connector and a battery are the primary components to acquire. The battery wires must be cut one at a time to avoid shorting the terminals, which could damage the battery. The battery wires are then soldered to the JST jumper wires and, for long-term durability and safety, the exposed sections are insulated with heat shrink over electrical tape. Verifying the wire polarity is crucial: the positive terminal of the battery connects to the positive terminal on the Artemis. The final stage is powering the Artemis without the USB C port and testing Bluetooth Low Energy (BLE) communication between the laptop and the Artemis, confirming that the device operates properly and can transmit messages wirelessly. The system powered by the battery functions as expected, as illustrated in the picture below.

the Artemis board is powered by battery

The video below demonstrates the successful powering of the Artemis board and the use of the 'ECHO' command to send a message to the Artemis board. I received a response from the Artemis board, confirming its operational status without the need for power from a USB C port. This indicates that I successfully got the Artemis board to work wirelessly.

14th, Feb 2024 - 21st, Feb 2024

Set up the ToF sensor

Starting by cutting one end of a QWIIC cable and removing the protective film, I then soldered the cut end's wires to the ToF sensor. After installing the SparkFun VL53L1X 4m laser distance sensor library, I tested the ToF sensor by connecting it to the Artemis board and reading data from it. One challenge was that this was my first time using a QWIIC connection: I needed to identify that the blue wire connects to SDA and the yellow wire to SCL before soldering.

I2C Address Scanning

There is an example sketch within the installed library that I used to scan for the I2C address of the device. The device was successfully identified, and its slave address is 0x29, as illustrated in the screenshot below.

I2C address for ToF

The I2C address matched my expectations. According to the datasheet, "The sensor's 7-bit slave address defaults to 0101001b on power-up." The I2C scanner code returned 0x29, which is accurate because 0101001b is equal to 0x29 in hexadecimal.

Connect two ToF

In this lab, two ToF sensors are used, which presents the challenge of managing two sensors that share the same I2C address. To address this, I first connected each ToF sensor separately to the Artemis board and identified each device's I2C address by running the I2C scanner example code; both sensors indeed shared the same slave address. According to the datasheet, the slave address can be changed, and the installed library includes a function to modify the device address until it is reset or powered off. By connecting the shutdown pin on one of the ToF sensors to one of the GPIO pins on the Artemis board and disabling that sensor during startup by pulling the pin low, I was able to change the other sensor's address to 0x30. After adjusting the address, I reactivated the ToF sensor with the original address of 0x29. The connection schematic is shown below.

Schematic for two ToF
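
A sketch of the startup sequence, assuming the SparkFun VL53L1X library and the XSHUT wire on a hypothetical pin 8:

#include <Wire.h>
#include "SparkFun_VL53L1X.h"

#define XSHUT_PIN 8          // assumed GPIO wired to sensor 1's shutdown pin

SFEVL53L1X sensor1;          // stays at the default address 0x29
SFEVL53L1X sensor2;          // will be moved to 0x30

void setupSensors() {
  Wire.begin();
  pinMode(XSHUT_PIN, OUTPUT);
  digitalWrite(XSHUT_PIN, LOW);    // hold sensor 1 in shutdown
  sensor2.begin();
  sensor2.setI2CAddress(0x30);     // move sensor 2 off the default address
  digitalWrite(XSHUT_PIN, HIGH);   // wake sensor 1 back up at 0x29
  sensor1.begin();
}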

Consequently, I successfully connected two ToF sensors to the Artemis board simultaneously, allowing both to function at the same time, as demonstrated in the video below.

ToF modes

The library currently supports two distance measuring modes: setDistanceModeShort and setDistanceModeLong. I successfully configured each sensor with a different mode. By utilizing the getDistanceMode() function provided by the library, I confirmed the individual settings for each sensor, as demonstrated in the screenshot below.

two modes read data from two sensors
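
The configuration amounts to one call per sensor; a sketch using the same library objects as above:

sensor1.setDistanceModeShort();   // optimized below ~1.3 m
sensor2.setDistanceModeLong();    // the power-on default, up to ~4 m
Serial.println(sensor1.getDistanceMode());   // returns 1 for short, 2 for long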

Test with two modes

I tested the two sensors using the two distinct modes, short and long, and generated the graphs shown in the screenshot below. The data collection occurred under normal lab ambient lighting conditions. Due to space limitations, I did not test distances beyond 1 meter, which falls within the valid range for both modes; within this range, the data appears accurate enough, and the teaching assistant instructed that there is no need to calculate a conversion factor.

two modes read data under ambient light

I also tested the two sensors in both short mode and long mode by covering the light, in contrast to the tests conducted under lab ambient lighting conditions (with all other conditions, such as distance measurement, being the same). I plotted the data as illustrated in the screenshot below.

two modes read data by light being covered

I also conducted tests on the two sensors using targets with different textures. The data previously collected were obtained using the lab floor as the target. I conducted additional tests using a cotton chair as the target; however, the data did not show any significant differences from the results illustrated in the screenshots above.

How fast each sensor reads data

I recorded the timestamp of each measurement taken by the sensor and calculated the average sampling period to be approximately 0.143 seconds. This means the sensor records data at a frequency of about 6.98 Hz, i.e., one measurement every 0.143 seconds.
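
The timing itself can be captured by timestamping consecutive data-ready flags; a sketch of the approach, assuming ranging has already been started:

unsigned long t_prev = 0;

void loop() {
  if (distanceSensor.checkForDataReady()) {
    unsigned long t_now = millis();
    Serial.println(t_now - t_prev);       // ms between consecutive readings
    t_prev = t_now;
    int distance = distanceSensor.getDistance();
    distanceSensor.clearInterrupt();
  }
}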


I checked the readiness of data from both sensors in the main loop's while loop; when data was ready the measurements were printed, otherwise the current time was printed. This revealed that the main loop executes faster than the sensors collect data. Given that my sensor loop ran at approximately 7 Hz, slower than expected, I reviewed my code in the Arduino IDE and consulted the ToF sensor's datasheet for insights. I discovered the slowdown was due to an excessive number of conditional checks in my loop, including checks on array length, data storage management, and calls to start and stop ranging in every iteration.

Bluetooth connection

I carefully recorded time stamps of ToF data for 3 seconds and then sent the data over Bluetooth to my computer. The screenshot below shows the plot of the distance data over 3 seconds.

3 seconds of ToF data sent over Bluetooth

5000-level student discussion:

Pros/Cons of other distance sensors based on infrared transmission

The Sharp GP2Y0A02YK0F infrared range sensor operates by emitting an infrared beam to measure distances between 20cm and 150cm. Its main advantages include ease of integration due to its analog output and its cost-effectiveness. However, its utility is limited by a shorter operational range and susceptibility to ambient light and reflectivity of the target. In contrast, the Ultrasonic Sensor HC-SR04, with a range of 2cm to 400cm, is not influenced by the target's color or material and remains affordable. Its accuracy, though, can be compromised by environmental factors like temperature and humidity, and its wider beam angle may reduce precision for specific targets. It also has a slower measurement speed due to the nature of sound wave propagation. The VL53L1X Time-of-Flight (ToF) sensor, utilizing FlightSense technology, excels in delivering precise and quick distance measurements up to 4 meters, irrespective of target color and ambient light. This makes it ideal for real-time applications requiring reliable data. The primary drawback of the VL53L1X is its higher cost relative to the other sensors discussed.

Sensitivity of ToF sensor VL53L1X to colors and textures

I evaluated the sensor's sensitivity to different textures by using the lab floor and a cotton chair as targets. The tests did not reveal any significant differences in the accuracy of the data collected. As mentioned in the datasheet for the VL53L1X sensor, the sensor employs FlightSense technology. This ensures that the sensor's performance is largely unaffected by ambient lighting conditions and target characteristics such as color, shape, texture, and reflectivity. This observation aligns with my discoveries from the tests mentioned above. According to the datasheet from Pololu, these external factors may only influence the maximum range of the sensor measurements. However, for the purposes of design and development in this lab, the impact of these external conditions is not deemed significant.



LAB4: Motors and Open Loop Control

Introduction

The objective of this laboratory session is to transition from manual control to automated open-loop control of the vehicle. The vehicle will be capable of performing a sequence of pre-defined maneuvers, utilizing the Artemis board in conjunction with two dual motor drivers.

21st, Feb 2024 - 6th, Mar 2024

Prelab on setting up

The connections between the motor drivers, Artemis, and battery are shown as:

schematic

I selected pins 3, 5, A13, and A15 on the Artemis as the input for the motor drivers because these pins can generate Pulse-Width Modulation (PWM). This capability is indicated in the schematic by a "~" symbol preceding the pin numbers.

board schematic

As depicted in the schematic diagram above, I am using two separate batteries to power the Artemis and the motor drivers/motors. This approach is taken because motor drivers/motors typically consume a considerable amount of power. If the same battery were used to power both the Artemis board and the motor drivers/motors, it would likely result in an unstable power supply for the Artemis board, due to the high current draw from the motor drivers/motors.

Setup and Generating PWM signals

DC Power Supply

Initially, I powered the system using a power supply to perform a sanity check. I set the voltage to 3.7V, as the motor driver (DRV8833 by Pololu) operates within a range of 2.7V to 10.8V. Additionally, I capped the current at 2A, reflecting the peak output current the driver can handle. This measure serves as a precaution to protect the device in the event of a short circuit within the system.

Oscilloscope

Subsequently, I connected one of the motor driver inputs to Channel A of the oscilloscope to visualize the generated PWM signal and verify my approach.

Setup with power supply and oscilloscope

The video below shows the complete setup, with the motors, the Artemis board, the oscilloscope, and the power supply all connected.

The image below illustrates the PWM signal generated with approximately a 25% duty cycle, achieved by setting the PWM-related timer to run up to 63 out of a maximum value of 255. By analyzing the signal with an oscilloscope, I confirmed the accuracy of both my connections and my programming-level understanding of its operation. This successful verification has prepared me to advance further.

PWM signal
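
Generating that signal takes a single analogWrite() call; a sketch, with pins 3 and 5 as the assumed inputs of one driver channel:

analogWrite(3, 63);   // 63 / 255 ≈ 25% duty cycle on one driver input
analogWrite(5, 0);    // hold the paired input low to fix the direction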

Wheels running

The code snippet to operate one set of wheels is shown below.

code to run one side

The video below showcases my ability to generate PWM signals and control the motor, which in turn drives the wheels. This system was powered by a DC power supply and was verified with an oscilloscope connected for monitoring.

The video below demonstrates that both sets of wheels can operate, with the system powered by two batteries: one 850mAh battery powers the Artemis board, and another 850mAh battery powers the motor drivers/motors.

Complete connections with all components on the car

Following the schematic provided above, I connected the motor drivers as shown in the image below, which illustrates the connection of all components in the stunt car's system. Two time-of-flight sensors are installed at the front and rear edges of the car. The battery for the Artemis board, along with the board itself, is housed in a small basket. The IMU sensor is attached to a flat area of the axle. Two motor drivers sit atop the battery case; the arrangement of the motor drivers and wires, secured with a zip tie, is also visible. Ultimately, the entire setup will be encased in tape to ensure its stability during operation. As depicted in the image, the battery laid out on the table will be inserted into the battery pack from the bottom and secured with tape to prevent it from dislodging during use.

complete connections

The figure below presents the final version of the stunt car, where tapes and zip ties were used to secure all mounted devices, ensuring they remain connected during high-speed tests, and the battery for the motors was inserted from the bottom and secured by being taped.

final appearance of the car

Lower limit in PWM

As the video below demonstrates, this was just one of the tests I conducted to find the duty cycle at which the car starts moving.

I began testing intuitively with a starting PWM value of 25, but this was insufficient to rotate the wheels. After repeated trials, I found that setting both motors to 35, which corresponds to approximately a 14% duty cycle, was enough to get the car moving, as the video below shows.

Calibration of two motors

Initially, when applying PWM signals with identical duty cycles to both motors, the car could not maintain a straight path, as shown in the video below.

The code snippet below was used to test how applying the same duty cycle affects the two motors. The result suggests the need for a calibration factor to ensure both motors run at the same speed, allowing the car to move in a straight line.

code snippet for testing straight line

Next, I adopted a trial-and-error method to reliably achieve straight-line movement of the car by adjusting the duty cycle of the PWM signal for the right-side motor, while keeping the duty cycle of the PWM signal for the left-side motor unchanged. I documented each pair of duty cycles that enabled the car to maintain a straight path as the picture below shows. The criteria for selecting the appropriate duty cycles, to ensure both motors operated at comparable speeds, were based on the car's ability to move in a straight line for a distance greater than 6 feet. Consequently, I collected and recorded the data, which detailed the relative duty cycles of the PWM signals supplied to both motors.

data points collected on Excel

Next, I proceeded to plot the relationship between the PWM signals sent to the left and right sets of wheels, as illustrated by the line graph below.

data points relationship

Based on the line graph, the relationship is approximately linear. Consequently, I fitted a straight line to analyze the linear relationship between the PWM signals sent to the left and right motors. The equation of this line is \(\boldsymbol{y = 0.87x + 15.54}\).

data points linear relationship
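
Applying the fit whenever the motors are driven keeps the car straight; a sketch with hypothetical pin constants:

const int LEFT_PWM_PIN  = 3;     // assumed pin assignments
const int RIGHT_PWM_PIN = A13;

// Map a left duty cycle to the right duty cycle using y = 0.87x + 15.54
int calibratedRight(int left_pwm) {
  return (int)(0.87 * left_pwm + 15.54);
}

void driveStraight(int left_pwm) {
  analogWrite(LEFT_PWM_PIN, left_pwm);
  analogWrite(RIGHT_PWM_PIN, calibratedRight(left_pwm));
}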

Open Loop Control

I implemented code that initiates the car's movement in a straight line, followed immediately by a left turn. Next, it halts for a second before turning right. After another brief pause of a second, the car then proceeds to move backwards.

open loop control

The detailed implementations of each function prototype are illustrated in the screenshots below.

code implementation code implementation
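
In outline, the sequence reduces to calls like the following (the helper names are mine, not necessarily those in the screenshots):

moveForward(60);    // straight line
delay(1000);
turnLeft(100);      // immediate left turn
delay(500);
stopCar();
delay(1000);        // halt for a second
turnRight(100);
delay(500);
stopCar();
delay(1000);        // another brief pause
moveBackward(60);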

The video below demonstrates the successful operation of the implemented code, performing exactly as I anticipated.

5000-level student discussion:

PWM frequency

Based on the oscilloscope image shown above, the PWM frequency was approximately 183 Hz. The analogWrite() function typically runs at around 500 Hz on many Arduino boards, but the Artemis board's datasheet does not directly state the default PWM frequency. In this lab, we only change the duty cycle of the PWM signals, not the frequency. The motor driver itself typically operates with an internal PWM frequency of around 50 kHz, as indicated in the image below.

motor driver pwm frequency

Because we are utilizing a pre-set hardware PWM signal from a PWM pin on the Artemis board, it tends to be more reliable and accurate than a software PWM signal. Adjusting the PWM frequency through programming is possible, but it may cause less accurate behavior due to timing discrepancies. Furthermore, given the current speed and the RPM of the motors, there appears to be no advantage in altering the PWM frequency.

Lowest PWM value speed (Once in motion)

The video below demonstrates that a sufficient duty cycle allows the car to start moving, after which it shifts to a lower PWM duty cycle to maintain motion. The reduced duty cycle, at 33, results in the car moving at the lowest speed.

The code to implement and test this functionality is displayed below. The car begins with a reasonable speed which overcomes the friction, and then the duty cycle is reduced to a level that allows the car to run as slowly as possible, as demonstrated in the video above.

lowest PWM code code implementation

I continued testing to determine how long the car could maintain its lowest speed. Using a duty cycle just capable of overcoming friction, the car nearly came to a halt after approximately 20 seconds. Ultimately, by setting the reduced PWM duty cycle to 40, slightly above the threshold at which the car can just barely move against friction, I ensured the car maintained a reasonably low speed, even though this was not the absolute lowest duty cycle, as demonstrated below.



LAB5: Linear PID control and Linear interpolation

Introduction

To initiate the PID controller lab, we set up a data acquisition system from the robot over Bluetooth, capturing operational data in a 2-D array of float values in the Arduino code. This PID-relevant data was then sent to a Jupyter Lab notebook on demand, facilitating subsequent analysis in Python.

6th, Mar 2024 - 13th, Mar 2024

Prelab on data acquisition

I started this lab by introducing two new command types in both Python and the Arduino code to enable communication via Bluetooth. The command Car_data triggers the car_starts flag, initiating data acquisition from the TOF (Time of Flight) sensor in the main loop. Concurrently, the recordError helper function, as illustrated in the screenshot below, captures the time elapsed between each TOF measurement and calculates the error by comparing the current distance to the target distance. The second command, Send_data, activates the sending_data flag, enabling Python users to request data transmission from the Artemis board.

cmd type to get data cmd type to send data

The Python code that executes these commands on the Arduino side is shown in the screenshot below.

code in Python

Next, I established new data arrays to record the time of each TOF data acquisition and the error, defined as the difference between the current and target distances.

error stored in this array

P, I, D corresponding coefficients choice

Proportional gain

I began the lab by focusing on the Kp coefficient for proportional control, which estimates a PWM value as the product of the error (the difference between the current and target distances) and Kp, per the equation error * Kp. With the maximum PWM duty cycle at 255 and starting errors on the order of 3000 mm, I initially approximated Kp as 255/400 ≈ 0.6375, which saturates the output for any error beyond 400 mm and indicated the order of magnitude on which to adjust. After fine-tuning, Kp was finalized at 0.3. The lower Kp value gives the car a more gentle push towards the target. With Kp alone in place, the stunt car's motion was observed, as depicted in the video.
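
The proportional term alone reduces to a couple of lines; a sketch with the final Kp, where current_distance and target_distance are assumed variable names:

float Kp = 0.3;
float error = current_distance - target_distance;    // distance error in mm
int pwm = constrain((int)(Kp * error), -255, 255);   // saturate the output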

Integral gain

Following the adjustment of Kp, I incorporated the Ki parameter to address the steady-state error and mitigate the oscillation issue. The integral term is Ki times the running sum of the distance error multiplied by its sampling period, so the integrated value grows over time. I initiated Ki at a significantly lower value than Kp, setting it at 0.02 to cautiously approach the integration effect. However, testing showed that Ki needed to be raised to 0.05 to effectively counteract the steady-state error. To prevent the integral term from becoming excessively large, I implemented a clamp on the error accumulator. Recognizing that the robot starts a considerable distance from the equilibrium point, I aimed to prevent excessive error accumulation; therefore, I set the error accumulator clamp at approximately 1000 mm, facilitating a smoother approach to equilibrium and easier correction of the car's trajectory.

Integrator wind-up

When the car remains stationary and the error is non-zero, the integral term will persistently accumulate. This causes the integral term's contribution to the PWM signal to increase until the motor reaches saturation. Beyond this point, the integral term continues to grow, surpassing the saturation level of the motor. It will start to decrease only when the car moves past the set point or the target distance.


These are the conditions under which I prevent the integrator from accumulating further value. First, I halt the accumulation of the integral term when the controller output is saturated; I cap the integral term's contribution at a duty cycle of 40 to prevent it from growing further. Second, as observed from the graph "All data over time" above, the integral part (represented by the orange line), which is the product of the integral gain (Ki) and the accumulated distance error, continues to increase until the error becomes negative. Therefore, I prevent the integrator from accumulating while the controller output's sign matches the error's sign; as soon as the error changes sign, I release the clamp on the integrator term, allowing it to decrease immediately. This approach helps to limit overshoot.
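
A sketch of the two safeguards described above (conditional integration plus the 1000 mm accumulator clamp and the duty-cycle cap of 40); error, dt, pid_output, and output_saturated are assumed variable names:

// Accumulate only when the output is not saturated, or when the error has
// flipped sign relative to the output (conditional-integration anti-windup)
if (!output_saturated || ((error > 0) != (pid_output > 0))) {
  error_sum += error * dt;
}
error_sum = constrain(error_sum, -1000.0, 1000.0);      // accumulator clamp
float i_term = constrain(Ki * error_sum, -40.0, 40.0);  // cap the I output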

Differential gain

As demonstrated in the previous video, the stunt car exhibited slight overshooting, prompting the introduction of the Kd parameter. Kd, applied within the derivative term of the PID controller, anticipates future changes in error. When the error decreases rapidly, its derivative becomes significantly negative, leading the derivative term to counteract the proportional and integral terms by reducing motor speed. Kd's application involves it being multiplied by the change rate between consecutive distance measurements. Initially, I set Kd to 10. However, subsequent observations indicated that this differential gain was insufficient to effectively counterbalance the contributions of the proportional and integral gains, as the car continued to overshoot its target.

To refine the controller's performance, I incrementally adjusted Kd by increments of 10, ultimately finding that a Kd value of 60 enabled the stunt car to move smoothly and stop in alignment with my expectations, effectively minimizing overshoot and enhancing control precision.

However, the car jittered slightly while running and even after it stopped. The differential term is relatively challenging to handle, so I started graphing the contributions of the P, I, and D terms to the PWM duty cycle. The differential term was calculated from the error difference: subtracting the previous error from the current distance error and dividing by the elapsed time. The screenshot below illustrates how the PWM duty cycle changes as each PID term contributes. The spikes result from the contribution of the D term.

two nearby errors monitored from D terms

I suspected a sudden change in the error could make the D term blow up due to the division, which shows up as spikes. To address this, I began debugging by calculating the slope, or rate of error change, from two error samples spaced further apart in time, dividing by the total elapsed time between them.

two errors further away monitored from D terms

As shown in the screenshot, the spike problem has been mitigated, though it still exists. I adjusted the Kd parameter to 1 and conducted multiple tests, incrementally increasing Kd by 1 each time. When Kd reached 3, the resulting graph, as shown below, suggests that the issue I encountered earlier was due to an improper Kd parameter setting.

with relatively nicer d

Sampling Time for PID Control and PWM Updates

The screenshot below displays the time difference measured at each iteration of the main loop. The average execution time of the main loop is 0.03 seconds.

time difference per main loop iteration

The screenshot below shows the time period each time the TOF sensor measures a new distance. The average time it takes for the TOF sensor to collect data is 0.1 seconds.

TOF sensor sampling period

Extrapolation

As illustrated by the error graph below, the error updates slowly, primarily because the TOF sensor takes a long time to produce a new measurement. As discussed in the sampling-time section above, the TOF sensor takes longer to produce a measurement than the PID controller takes to update the PWM duty cycle. Consequently, since a moving car might collide with an obstacle before receiving an updated error measurement from the TOF sensor, extrapolation could prove very beneficial.

errors being stalled

Using the linear interpolation equation below, the value at a new point can be calculated from two known data points.

linear interpolation formula

Therefore, I selected two data points obtained earlier and used them to calculate the subsequent distance error. This error is then used to adjust the Proportional, Integral, and Derivative (PID) terms, affecting the PWM duty cycle. Subsequently, this adjustment influences the car's speed and its distance to the target.

The implemented logic follows the pseudocode below, where last_distance holds the most recent real measurement and speed is the slope estimated from the two latest readings:

if (distanceSensor.checkForDataReady()) {
  // A new TOF reading is available: use it directly.
  last_distance = distanceSensor.getDistance();
  data_obtained_time = millis();
  intermediate_distance = last_distance;
} else {
  // No new reading yet: extrapolate from the last real measurement,
  // advancing it by the elapsed time times the estimated speed.
  intermediate_distance = last_distance + (millis() - data_obtained_time) * speed;
}

As shown in the screenshot below, after extrapolation predicts the unknown data points while the TOF sensor's data is not ready, the distance error updates much more frequently, and the PWM signals used to control the car become more accurate.

data points after extrapolation

LAB6: Orientation PID control

Introduction

This lab is designed to provide experience with orientation PID control using the IMU. Specifically, the objective of this lab is to control the yaw of a robotic car using the IMU by implementing PID controls on PWM signals sent to two motors. This will eventually enable the car to turn to a set angle by receiving commands via Bluetooth.

13th, Mar 2024 - 20th, Mar 2024

Prelab on data acquisition

I began this lab by creating a 2D array used to record the relevant data from the robot, as shown in the image below.

array declare

To implement PID control for orientation, multiple types of data are required. Additionally, I included other values such as the value of the entire P term, which is the product of Kp and the angle error (the difference between the set angle and the current angle), as well as the values of the I and D terms, for further analysis, such as conducting graph analysis on the data sent from the robot to the Jupyter notebook.

cmd type to send data

Three types of commands were implemented: to capture data recorded while the robot rotates on the floor, to transmit that data from the robot to the Jupyter notebook, and to remotely control the car's orientation by sending commands from my laptop via Bluetooth.

case of Start case of Stop and Record data

Command sending was implemented in the Jupyter Notebook, as illustrated below. Three specific cases—Start, Stop, and Send_data—were implemented for distinct purposes. The Start command allows the user to define three PID control parameters, offering a more convenient means to employ a trial-and-error strategy for identifying the optimal parameter pair for orientation control. It also facilitates setting different target angles to streamline subsequent testing. The Stop command sets the duty cycles of the PWM to zero for both motors, thereby stopping the car. The Send_data command prompts the robot to transmit all recorded data to Python, where a notification handler receives and stores the data in a list named 'data' for further analysis.

start stop py send data cmd in py

PID Input Signal

Proportional gain

I began the lab by focusing on the Kp coefficient for proportional control, which calculates a PWM value by multiplying the error (the difference between the current and target angles) with Kp, as shown by the equation: error_angle * Kp. Initially, I used the same Kp value of 0.3 that was used in a previous lab for distance PID control with TOF sensors. However, this value proved too small for orientation control, as the potential error, such as 90 degrees in this lab, is significantly less than the errors encountered in the previous lab, which could be as large as 4000mm. Consequently, I experimented with a Kp value of 3. As indicated by the graph below, the robot exhibited slight overshooting.

only has Kp (3)

I also tested a Kp value of 5 to observe any changes. According to the screenshot and video, there were no significant differences. Therefore, to address the issue of overshooting, I introduced the derivative term. This term counteracts the rate of error change and is calculated by multiplying the change in error by the derivative coefficient, Kd.

only has Kp (5)

Differential gain

As demonstrated in the previous video, the stunt car exhibited slight overshooting, prompting the introduction of the Kd parameter. Kd, applied within the derivative term of the PID controller, anticipates future changes in error. When the error decreases rapidly, its derivative becomes significantly negative, leading the derivative term to counteract the proportional and integral terms by reducing motor speed. Kd is multiplied by the rate of change between consecutive angle measurements. Initially, I set Kd to 0.25. However, subsequent observations indicated that this differential gain was insufficient to counterbalance the contribution of the proportional gain, as the car continued to overshoot its target. The graph analysis is shown below.

Kp (3), Kd (0.25)

To refine the controller's performance, I doubled Kd to 0.5, but this made no significant difference, showing that Kd = 0.25 was sufficient for orientation control.

Kp (3), Kd (0.5)

As the screenshot above demonstrates, after introducing the Kd term, the overshooting issue was mitigated, and the error stabilized at a positive value. Interpreting the graph suggests that the overshooting was significantly reduced by the derivative gain's counteraction to the change in error. Consequently, the derivative gain had a more dominant effect than the contribution from the proportional term.

Moreover, to ensure that any future increases in derivative gain do not lead to sudden spikes in the derivative term—and consequently, spikes in the PWM duty cycle signal—I implemented FFT (Fast Fourier Transform) analysis on the derivative term, as illustrated by the screenshots below.

d term signal d term signal after FFT

It appears that there is no significant high-frequency noise, so there is no need to worry about implementing a digital low-pass filter to address potential spikes caused by the derivative term. Currently, with Kp set to 3 and Kd set to 0.25, a steady error is observed as the screenshot above demonstrates, and the PD control is unable to self-correct. Therefore, the integral gain has been introduced.

Integral gain

Following the adjustment of Kp and Kd, I incorporated the Ki parameter to address the steady-state error. The integral term is Ki times the running sum of the error (the difference between the set angle and the current angle) multiplied by the sampling period, so the integrated value grows over time. I initiated Ki at a significantly lower value than Kp, setting it at 0.1 to cautiously approach the integration effect.

kp = 3, ki = 0.1, kd = 0.25

As illustrated in the screenshot above, the system performs well, and the error remains within an acceptable range. I implemented the PID control so that when the error angle falls within +/- 5 degrees of the set angle, the motors receive a 0 duty cycle. According to the graph analysis, the car behaved as expected.

For further testing, I experimented with another set of parameters: Kp set to 5, Ki set to 0.1, and Kd set to 0.25. The graph obtained during the test, while the car was rotating on the floor, indicates that the system still performed as expected.

kp = 5, ki = 0.1, kd = 0.25

Based on the graph, the increase in the speed of orientation change is evident. Consequently, I also increased Kd, setting it to 0.5. However, as Kd became more dominant, the steady-state error became more noticeable. Therefore, I eventually decided to stick with the parameter set Kp = 3, Ki = 0.1, and Kd = 0.25.

kp = 3, ki = 0.1, kd = 0.5

Discussion

Are there any problems that digital integration might lead to over time? Are there ways to minimize these problems?

In digital integration, especially within the Integral component of a PID controller, there's a potential for what's called "integral windup." This occurs when the integral term accumulates a significant error over time, especially during periods when the actuator is saturated (cannot increase output further). This can lead to the system overshooting its target and exhibiting oscillatory behavior, which can be slow to correct.


Integrator wind-up

When the car remains stationary and the error is non-zero, the integral term will persistently accumulate. This causes the integral term's contribution to the PWM signal to increase until the motor reaches saturation. Beyond this point, the integral term continues to grow past the motor's saturation level. It starts to decrease only when the car moves past the set angle, that is, when the error changes sign, as the orange line in the screenshot above shows.


These are the conditions under which I prevent the integrator from accumulating further value. First, I halt the accumulation of the integral term when the controller output is saturated; I cap the integral term's contribution at a duty cycle of 60 to prevent it from growing further. Second, as observed from the graph "All data over time" above, the integral part (represented by the orange line), which is the product of the integral gain (Ki) and the accumulated angle error, continues to increase until the error becomes negative. Therefore, I prevent the integrator from accumulating while the controller output's sign matches the error's sign; as soon as the error changes sign, I release the clamp on the integrator term, allowing it to decrease immediately. This approach helps to limit overshoot.


Does your sensor have any bias, and are there ways to fix this?

As is widely known, gyroscope sensors are prone to data drifting. I visualized this data drift by measuring the angle while keeping the car stationary.

data drift on angle measurement

There are several methods to mitigate data drifting. My approach involved accumulating 30 data points of the yaw value while the car was stationary, just before it began to move. I then calculated the average value of this noise. By subtracting this average noise value from subsequent yaw data measurements, I was able to significantly reduce the impact of data drift.

calculate average data drifting noise
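
A sketch of the bias estimate, assuming the SparkFun library's dataReady()/getAGMT() polling pattern:

float yaw_bias = 0.0;
for (int i = 0; i < 30; i++) {
  while (!myICM.dataReady()) { }   // wait for a fresh sample
  myICM.getAGMT();                 // refresh the cached readings
  yaw_bias += myICM.gyrZ();
}
yaw_bias /= 30.0;                  // average stationary reading in deg/s
// During motion: corrected_rate = myICM.gyrZ() - yaw_bias;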

Are there limitations on the sensor itself to be aware of?

As demonstrated in the screenshot below, by default, the gyroscope (referred to as myFSS.g) of this IMU is limited to sensing angle changes at a maximum rate of 250 degrees per second.

default gyroscope full-scale setting (myFSS.g)

I verified that my car reached this limit by rotating it on the floor, capturing the yaw rate data (myICM.gyrZ()), and graphing the results using Python, as displayed below.

the car reached the limitation

After increasing the limit to 500 degrees per second, I graphed the yaw data again to verify that the limit was no longer exceeded.

the limit changed to 500; the car no longer reaches the limitation
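
The change itself mirrors the SparkFun library's full-scale-range example; a sketch:

ICM_20948_fss_t myFSS;
myFSS.a = gpm2;     // accelerometer range left at +/-2 g (assumed unchanged)
myFSS.g = dps500;   // raise the gyro range from +/-250 to +/-500 deg/s
myICM.setFullScale((ICM_20948_Internal_Acc | ICM_20948_Internal_Gyr), myFSS);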

The issue with the previous limit was that the sensor's maximum sensing rate was being reached, so, for example, even though the car actually rotated 120 degrees, the sensor could only register a 90-degree rotation. This discrepancy was the cause of the observed overshooting.


Does it make sense to take the derivative of a signal that is the integral of another signal?

In this orientation control lab, the angle signal is obtained by integrating the yaw rotation velocity. The derivative term in a PID controller computes the derivative of the error signal, the discrepancy between the desired angle (setpoint) and the current angle; it plays a crucial role in forecasting future errors and damping changes in the error. Given that the angle signal is derived by integrating rotational velocity, applying the derivative to the accumulated angle, which would yield the error's rate of change, might seem redundant: the derivative can be obtained directly from the gyro's rotational velocity reading about the Z axis. Overall, the derivative term remains valuable for anticipating and correcting future errors, thereby reducing oscillations and improving the stability of the system.


Does changing your setpoint while the robot is running cause problems with your implementation of the PID controller?

In this experiment, potential issues from sudden error changes are mitigated by two factors. First, the motor used responds too slowly to react immediately to abrupt changes in error. Second, there's a limitation on the maximum rotational velocity that further reduces the impact of any rapid error fluctuations.


Is a lowpass filter needed before your derivative term?

First, as demonstrated by all the screenshots above, the contribution of the derivative term to the PWM duty cycles does not result in significant spikes.


Also, after conducting FFT analysis on the derivative term multiple times, it appears that implementing a low-pass filter is not necessary.

no need for lpf
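For reference, the FFT check can be reproduced with a few lines of Python, assuming d_term is the logged derivative-term array and dt is the average sample period in seconds:

import numpy as np
import matplotlib.pyplot as plt

def plot_spectrum(d_term, dt):
    # One-sided amplitude spectrum of the logged derivative term.
    d = np.asarray(d_term) - np.mean(d_term)  # remove the DC offset
    spectrum = np.abs(np.fft.rfft(d))
    freqs = np.fft.rfftfreq(len(d), d=dt)
    plt.plot(freqs, spectrum)
    plt.xlabel("Frequency (Hz)")
    plt.ylabel("Amplitude")
    plt.show()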

Being able to update the gains and setpoints on the fly is essential for tuning the PID controller quickly.

Will you need to be able to update the setpoint in real time?

In my setup, I've configured the case statement in the Arduino IDE to accept commands from the Jupyter Notebook. This setup enables the separate transmission of PID parameters and the setting of the target angle. As a result, my implementation allows for easy adjustments to the set angle by sending commands from the Jupyter Notebook to the robot via Bluetooth in real-time, as well as the modification of PID parameters on-the-fly.
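On the Python side this follows the same send_command pattern used elsewhere in this report; the command names and pipe-separated payloads below are illustrative placeholders, not the exact strings in my codebase:

# Illustrative only: command names and payload formats are placeholders.
ble.send_command(CMD.SET_PID, "3.0|0.1|0.25")  # Kp | Ki | Kd, changed on the fly
ble.send_command(CMD.SET_ANGLE, "180")         # new target angle in degrees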


Can you control the orientation while the robot is driving forward or backward?

I've realized that the stunt car currently only rotates to a set angle and stops once the error, the difference between the set angle and the current angle, falls within an acceptable range of +/- 5 degrees in this lab. To extend this, I could introduce a base duty cycle for the PWM signals sent to the motors so that the car continuously moves forward or backward, and steer by adding extra duty cycle to the motor driving one side of the wheels: for example, adding extra duty cycle to the right-side wheels while the car is moving forward would make it turn left.
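A sketch of that idea (hypothetical names): the orientation PID output is layered on top of a base duty cycle and applied differentially to the two sides.

BASE_PWM = 60  # hypothetical base duty cycle for straight driving

def drive_pwm(pid_output):
    # Positive pid_output adds duty cycle to the right side, turning the car left.
    left = max(0, min(255, BASE_PWM - int(pid_output)))
    right = max(0, min(255, BASE_PWM + int(pid_output)))
    return left, right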

Result

After finding a set of PID parameters, Kp = 3, Ki = 0.1, Kd = 0.25, the car behaved as expected, as shown in the video below. The screenshot shows how the PID controller acts on the car's orientation.

final analysis

LAB7: Kalman Filter

Introduction

The objective of this lab is to implement a Kalman Filter to predict the TOF (Time of Flight) values that are sampled at a slow rate. The aim is for the car to achieve the highest possible speed while maintaining the ability to halt 1ft from a wall or execute a turn within a 2ft range.

20th, Feb 2024 - 27th, Mar 2024

ESTIMATE THE DRAG AND MOMENTUM OF THE ROBOT

Since the TOF sensor produces distance measurements slowly relative to the control loop, a Kalman filter will be implemented in this lab to predict the robot's position from a state space model whenever the TOF sensor's data is unavailable.

kalman filter content

To determine the values of all coefficients required for the Kalman filter implementation, I began by calculating 'd' (drag) and 'm' (mass), which are utilized in the state prediction equation, as illustrated in the figure below.

state space equation

According to the lab handout, determining the value of 'd' (drag) requires finding the steady-state speed, and 'u' represents the input percentage, which is set to 1 (100% of input) during the implementation of the state space model.

how to find d

To acquire the steady speed data for the robot in motion, I set the motor's duty cycle to 160 (out of 255) and directed the robot toward a wall.

The TOF sensor recorded the measurements, which were transmitted to the Jupyter notebook via Bluetooth. I then plotted both distance and velocity against time, as demonstrated in the following illustration.

steady speed

Based on the calculated steady speed of 3206.07 mm/s, I was able to calculate 'd' (drag coefficient) as 0.000311. Next, I needed to calculate 'm' (mass) using the value of 'd' (drag) and the time required for the robot to reach 90% of its steady speed. The equation used for calculating 'm' is presented below.

finding m value

From the graph, I selected the point corresponding to approximately 90% of the steady speed, which is around 2600 mm/s. The time to reach 90% of the steady speed (t90%) is 1.16 seconds. Thus, after analyzing the data and applying the relevant equations, the mass 'm' is calculated to be 0.000157.
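Both coefficients follow directly from those two numbers: at steady state u = d * v_ss with u = 1, and for a first-order step response v(t90) = 0.9 * v_ss gives m = -d * t90 / ln(0.1). As a quick check in Python:

import numpy as np

v_ss = 3206.07  # steady-state speed (mm/s)
t90 = 1.16      # time to reach 90% of the steady-state speed (s)

d = 1.0 / v_ss               # approx. 0.000311
m = -d * t90 / np.log(0.1)   # approx. 0.000157
print(d, m)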

Initialize KF (Python)

Compute the A, B, and C matrix

To implement the Kalman Filter effectively, it was necessary to discretize both the A and B matrices using the functions outlined below. This step ensures that the continuous-time model is accurately translated into a form suitable for digital computation.

Discretize A B matrices

The function presented below utilizes Python to discretize the A and B matrices. This approach ensures the matrices are adapted for use in the discrete-time version of the Kalman Filter algorithm.

discretize A B in python
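A sketch consistent with that screenshot, using forward-Euler discretization with the d and m values estimated above and dt = 0.02 s (matching the 20 ms TOF sampling period set next):

import numpy as np

d, m = 0.000311, 0.000157  # drag and mass estimated above
dt = 0.02                  # sampling period in seconds

A = np.array([[0, 1],
              [0, -d/m]])  # state: [position, velocity]
B = np.array([[0],
              [1/m]])

Ad = np.eye(2) + dt * A    # discretized A
Bd = dt * B                # discretized B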

The code snippet in the screenshot below demonstrates that I changed the TOF sensor's sampling period from the default 100 ms to 20 ms. Consequently, the time step used for discretizing the A and B matrices was set to 20 ms (0.02 s).

change TOF sampling rate

The C matrix was given, highlighting that only the matrix entry corresponding to distance is needed for the calculations. Given that the car is moving towards the wall, this is represented with a negative sign in the matrix.

ABC matrix

There are two noise matrices to account for in the implementation: sigma_z (sig_z in the screenshot below), the measurement noise, and sigma_u (sig_u in the screenshot below), the process noise.

sigma z and sigma u

Based on the provided standard deviations for position and speed, which are 20 mm and 20 mm/s respectively, these values were used in calculating the noise and uncertainty matrices.

sigma and sigma z and sigma u initialization
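With 20 mm and 20 mm/s standard deviations, the variances are 20^2 = 400, so the noise matrices come out as in the screenshot (a sketch using the same names):

import numpy as np

sig_u = np.array([[400, 0],
                  [0, 400]])  # process noise: (20 mm)^2 and (20 mm/s)^2
sig_z = np.array([[400]])     # measurement noise: (20 mm)^2 for the TOF reading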

Implement and test your Kalman Filter in Jupyter (Python)

Implementation of Kalman Filter

The implementation of the Kalman filter was given as depicted in the screenshot below.

given KF implementation

Following the provided code snippet, I implemented the Kalman Filter using Python.

I implemented the KF using Python
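In outline, the implementation follows the standard predict-then-update structure below (a sketch using the Ad, Bd, sig_u, and sig_z from the sketches above; C is the given measurement matrix):

import numpy as np

C = np.array([[-1, 0]])  # as given: only the (negative) distance is measured

def kf(mu, sigma, u, y):
    # Prediction: propagate the state and uncertainty through the model.
    mu_p = Ad.dot(mu) + Bd.dot(u)
    sigma_p = Ad.dot(sigma.dot(Ad.T)) + sig_u

    # Update: fold in the TOF measurement y.
    sigma_m = C.dot(sigma_p.dot(C.T)) + sig_z
    kf_gain = sigma_p.dot(C.T).dot(np.linalg.inv(sigma_m))
    mu = mu_p + kf_gain.dot(y - C.dot(mu_p))
    sigma = (np.eye(2) - kf_gain.dot(C)).dot(sigma_p)
    return mu, sigma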

To gain a deeper understanding of the Kalman Filter, I initially disabled its prediction functionality. I input only the actual TOF sensor measurements into the model and recorded the distances updated by the Kalman Filter, as implemented in the code snippet below.

only update distance

Subsequently, I compared these real distance measurements with the data filtered by the Kalman Filter, as illustrated in the accompanying screenshot.

only update data no prediction

Parameters Discussion

To visualize the impact of measurement and process noise on the Kalman filter model, I first increased the measurement noise (sigma_z) from 400 to 1000. The results below are logical: higher measurement noise means less confidence in the TOF sensor data, so the filter leans more on its model and follows the raw measurements less closely, producing output values that sit below the TOF data plot here.

sigma z 1000

Next, I increased the process noise (both entries of sigma_u) from 400 to 1000. The results below also follow logically: higher process noise implies less trust in the model and correspondingly more weight on the measurements, so the readings track the sensor data very closely and the Kalman Filter's output almost overlaps the TOF data plot.

trust the model more

Implementing the Kalman Filter with linear PID control and interpolation (5000-level)

Initially, I operated the car towards the wall using PID control to halt at a predetermined point, without employing the Kalman Filter. I then collected the TOF measurement data, which was transmitted to the Jupyter notebook via Bluetooth.

With a setpoint of 400 mm from the wall, I collected the PWM duty cycle produced by the PID controller (implemented with integrator anti-windup and derivative-kick protection), along with the distance measurements, as detailed below.

set point 400

Despite manually increasing the TOF sensor's sampling rate from one reading every 100 ms to one every 20 ms, it remained too slow compared to the main loop, which controls the motor's PWM duty cycle. Therefore, I employed the Kalman filter to predict the car's position between readings. The code snippet below shows how I inserted 5 data points predicted by the Kalman Filter and recorded them for later plotting.

predict position when TOF data is not there

The KF-estimated distance, plotted in Python, is shown below. The red plot represents the KF estimate, which behaves as expected: around the 20-second mark, the KF model predicts a lower distance while the actual TOF sensor measurement increases, and the model quickly adjusts itself.

predict position when TOF data is not there

Implement the Kalman Filter on the Robot

Now, since I have already understood and built the Kalman Filter model using Python, I've moved on to implementing the Kalman Filter on my robot. I started programming the matrices needed for the Kalman Filter in the Arduino IDE.

matrixes and coefficients needed for KF

Subsequently, I implemented the function 'KF' to update and predict a pair of data: the state (mu) and the uncertainty (sigma).

KF function in Arduino

Regarding the use of the Kalman Filter, the TOF sensor processes distance data relatively slowly. There are two scenarios: Firstly, when the TOF sensor data is available, we input the distance measurement into the Kalman Filter model to process the state (mu) and the uncertainty (sigma). The component of the state output corresponding to distance is then utilized to calculate the duty cycle via PID control, which drives the motor. Secondly, in the absence of TOF sensor data, we rely on the Kalman Filter for predictions. Subsequently, the PID control receives the distance error based on the predicted distance to calculate the PWM duty cycle, ensuring that the PID controller continues to control the vehicle even in the absence of actual TOF sensor data.

Use KF function in Arduino
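In rough Python pseudocode, that branching looks like the following (hypothetical helper names; the real version is the Arduino code in the screenshot above):

u = last_pwm / 255.0                    # last commanded input, normalized

if tof_data_ready():                    # case 1: a fresh TOF measurement exists
    y = read_tof_mm()
    mu, sigma = kf(mu, sigma, u, y)     # full predict + update step
else:                                   # case 2: no measurement this iteration
    mu = Ad.dot(mu) + Bd.dot(u)         # prediction only
    sigma = Ad.dot(sigma.dot(Ad.T)) + sig_u

distance = mu[0][0]                     # distance component of the state
last_pwm = pid(setpoint_mm - distance)  # PID always runs on the KF distance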

Result

Lastly, I operated the car towards the wall with the Kalman Filter applied, alongside the PID controller.

The data recorded and transmitted from the robot is plotted in the figure below. This plot confirms that the Kalman Filter is operational on the Artemis. It is observed that at approximately 23.7 seconds, the Kalman Filter predicts the position of the robot to be lower than its actual position, but it adjusts itself at a relatively fast pace.

plot KF processed data

LAB8: Stunts!

Introduction

So far in this course, we have built a robot and connected it to our computer for quick testing and analysis. We have created a PID controller and added a Kalman filter to the robot to enhance its performance. This lab builds upon the previous labs to complete a fast-paced stunt. For this lab, I chose task 2, which involves implementing orientation control for the stunt.

27th, Feb 2024 - 3rd, Apr 2024

Task 2: Orientation Control

The goal of this lab is for the robot to start at a distance less than 4 meters from the wall, quickly drive forward, and, when it is within 3 feet (equivalent to 914mm or 3 floor tiles in the lab) from the wall, initiate a 180-degree turn.

Successes

Success 1

This is the first successful attempt, but as shown in the video, the car did not execute a rapid 180-degree turn. This suggests to me that the proportional term of the PID controller is not sufficiently larger than the derivative term, which is necessary for initiating and completing a fast turn.

The distance is calculated by combining the TOF sensor measurements with the Kalman filter predictions implemented in the previous lab. The analysis plot below shows that, as the PWM duty cycle and its corresponding angle change, the car attempts to self-adjust its angle, a process that takes approximately 2 seconds.

all data from first success; detailed data from first success

Based on the analysis, the distance detection was what I expected. Thus, I only increased the proportional term in the PID controller to make the turn complete faster.

Success 2

Success 3

After the PID control was adjusted, the car could eventually complete the turn in less than half a second. The accompanying screenshot demonstrates that by bouncing off the wall and then accelerating, the car reaches the target angle more quickly, and the error also decreases at a faster rate. This effect is particularly noticeable in the change observed between 11 and 11.1 seconds.

bumping on the wall

Slow Motions

Slow Motion 1

As indicated by the first slow-motion video, the car successfully initiated a 180-degree turn within the span of 3 floor tiles. Personally, I enjoy bumper cars in real life, where utilizing external forces always plays a significant role in car racing. Therefore, I adjusted the distance at which the car begins its turn initiation for another test, as shown below.

The bloopers

Regarding the bloopers, when the PID controller was not optimally tuned, the car exhibited some interesting behaviors. For instance, when the integral term was not large enough to self-adjust, the car would end up on the black mat, mimicking a 'turn and park' maneuver.

Turn and Park

Dizzy and Lost

The video below presents a blooper where, due to the proportional term being relatively larger, the car appeared lost and dizzy.

Task 1 (Not Required)

Moreover, I find Task 1, focusing on position control, to be a really interesting challenge as well. Had I been afforded more time, I would have completed it. Nevertheless, I did dedicate some effort to this task. Initially, I managed to have it approach the wall, and then swiftly reverse its direction of movement, as demonstrated in the video below.

Of course, there were instances when it did flip, though not in the manner I had anticipated.


LAB9: Mapping

Introduction

The goal of this lab is to map out a static room, specifically the front room of the lab. Throughout the entire experiment, a robot will be placed at several marked locations around the lab. It will rotate around its axis while collecting Time-of-Flight (ToF) readings. These readings will then be plotted in Python, showing Y-axis distance measurements against X-axis distance measurements to create a rough map of the room. The map, which includes the front room of the lab and five marked locations, is shown in the image below.

front room map
10th, Feb 2024 - 17th, Apr 2024

Orientation Control

For this lab, I selected orientation control and adapted the PID controller from a previous experiment (specifically, Task B orientation control in Lab 8). I tuned the Proportional, Integral, and Derivative parameters to enable the robot to perform on-axis turns in small, precise increments. It collects distance measurements along the X and Y axes using two Time-of-Flight (ToF) sensors.

Since the robot needs to complete a 360-degree rotation to map out the entire surrounding environment, the accuracy of the map increases with the number of data points the Time-of-Flight (ToF) sensor can measure around the turning axis. Therefore, I designed the robot to rotate 18 times per full circle, equating to 20 degrees per rotation. This approach allows for more detailed sensing and, consequently, a more accurate map.

Next, I implemented a function named 'rotate' to turn the car by 20 degrees at a time, with an error tolerance of +/- 0.3 degrees. It also records the current angle after every rotation relative to the original starting position.

void rotate(){
  angle = 0;
  car_rotating = 1;
  lastTime_IMU = (float)millis();
  init_time = (float)millis();
  integral_IMU = 0;
  lastError_IMU = setpoint_IMU - angle;
  while (car_rotating){
    if (myICM.dataReady()) {
      myICM.getAGMT(); // Update the sensor data
      gyroZ = myICM.gyrZ() - avg_angle_noise;
      curr_time_IMU = (float)millis();
      // PID calculations
      dt = (curr_time_IMU - lastTime_IMU) / 1000.0; // Delta time in seconds
      lastTime_IMU = curr_time_IMU;                 // Update last time
      angle += (gyroZ * dt);                        // Angle estimation
      error_angle = setpoint_IMU - angle;           // Calculate error
      integral_IMU += error_angle * dt;             // Calculate integral
      // Integral wind-up
      if (integral_IMU > 1200.0){
        integral_IMU = 1200.0;
      } else if (integral_IMU < -1200.0){
        integral_IMU = -1200.0;
      }
      derivative_IMU = (error_angle - lastError_IMU) / dt;
      P_calc = Kp_IMU * error_angle;
      I_calc = Ki_IMU * integral_IMU;
      D_calc = Kd_IMU * derivative_IMU;
      pidOutput_IMU = P_calc + I_calc + D_calc; // Calculate PID output
      lastError_IMU = error_angle;              // Update last error for next cycle
      if (abs(error_angle) < 0.3 || (float)millis() - init_time > 30000) {
        stop_car();
        overall_angle += angle;
        // Logging data
        if (itr_IMU < 3000) {
          data_buffer_IMU[itr_IMU][0] = overall_angle;
          itr_IMU++;
        }
        car_rotating = 0;
      } else {
        if ((int)pidOutput_IMU > 0){
          // Ensure PWM is within valid range
          pwmSignal_IMU = constrain((int)pidOutput_IMU, 35, 245);
          record_PWM_IMU = pwmSignal_IMU;
          left_turn(pwmSignal_IMU);
        } else {
          // Ensure PWM is within valid range
          pwmSignal_IMU = constrain(-(int)pidOutput_IMU, 35, 245);
          record_PWM_IMU = -pwmSignal_IMU;
          right_turn(pwmSignal_IMU);
        }
      }
    }
  }
}

Data Collection (at one marked location)

To improve accuracy, two approaches were utilized. First, after rotating 20 degrees, each Time of Flight (TOF) sensor measures the distance, accumulating these measurements 10 times. The average of these values is then taken as the actual distance measurement. Second, to address data drift from the Inertial Measurement Unit (IMU) sensor, the gyroscope readings along the Z-axis (which indicate the rotation rate) are accumulated 30 times before the system starts. The average of these readings is subsequently subtracted from each gyroscope reading on the Z-axis (myICM.gyrZ()).
As demonstrated in the video below, the car rotates 360 degrees at one of the marked locations while the two TOF sensors collect 18 distance measurements. The process at the remaining four locations is the same.

Data Processing

After collecting the measurement data from the two Time of Flight (TOF) sensors, along with the angle at which each measurement was taken, a homogeneous transformation matrix is used to map the surrounding environment. This matrix relates the car's local coordinate frame, in which the X-axis points along the wheels, the Y-axis is perpendicular to it, and the origin sits at the center of the car body, to the global frame. The transformation involves both rotation and translation components. The rotation matrix is implemented in Python, as shown in the code snippet below.

rotation matrix
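The matrix itself is the standard 2-D rotation; a sketch matching that snippet:

import numpy as np

def rotation_matrix(theta_deg):
    # Rotate from the car's local frame into the global frame for a
    # measurement taken at heading theta (degrees, counterclockwise-positive).
    t = np.radians(theta_deg)
    return np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])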

The rotation matrix is then multiplied by a 2x1 direction vector. For the front sensor, the first entry (the local X-axis) holds the distance measured by the front TOF sensor, and the second entry (the local Y-axis) is zero. For the side sensor, the first entry is zero and the second entry holds the distance measured by the side TOF sensor. This multiplication projects each measurement from the local frame into the global frame, effectively mapping the data onto the global map.

rotation matrix multiplication plus the translation

Translation was also considered because the front sensor is located at the front end of the car, and the side sensor is positioned at the edge of the car's side, while the car itself rotates around its center. As depicted in the image above, the car's dimensions are factored into the translation calculations after the rotation maps measurements onto the global frame. The dimension values are shown in the image below.

translation dimensions
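Putting rotation and translation together, a sketch of the projection for both sensors (using rotation_matrix from above; the 73 mm front offset is the value reported in Lab 11, while the side offset is a placeholder for the measured dimension):

import numpy as np

FRONT_OFFSET = 73  # mm from the rotation center to the front TOF sensor
SIDE_OFFSET = 70   # mm from the rotation center to the side TOF sensor (placeholder)

def to_global(theta_deg, d_front, d_side, pose_xy):
    # Rotate each local measurement vector, then translate by the robot's
    # marked global position to land the point on the global map.
    R = rotation_matrix(theta_deg)
    front_local = np.array([[d_front + FRONT_OFFSET], [0.0]])  # along local +X
    side_local = np.array([[0.0], [d_side + SIDE_OFFSET]])     # along local +Y
    pose = np.array(pose_xy, dtype=float).reshape(2, 1)        # global (x, y), mm
    return R @ front_local + pose, R @ side_local + pose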

Transformed distance plots and Polar plots of TOF scans

Location (0, 0)
(0,0) (0,0) polar plot
Location (-3, -2)
(-3,-2) (-3,-2) polar plot
Location (0, 3)
(0,3) (0,3) polar plot
Location (5, 3)
(5, 3) (5,3) polar plot
Location (5, -3)
(5,-3) (5,-3) polar plot
The Whole Map
overlapped images to form a map

I manually estimated the actual positions of walls and obstacles from the scatter plot and drew purple lines to represent them. There were some noticeable errors; for example, in the bottom right corner the car could not rotate a full 360 degrees, which is evident from the tilted dots, and I corrected for this when drawing the black lines, which mark where the walls should ideally lie, since the car was positioned an integer number of floor tiles from the origin. In the graph, though, the black lines do not overlap the purple ones, due to an offset in the location of the TOF sensors on the car.

overlapped images to form a map with lines drawn

Lab 10: Grid Localization using Bayes Filter (Simulation)

Introduction

The goal of this lab is to simulate a robot navigating a grid map and to localize the robot, a process known as robot localization. This involves determining the robot's location relative to its environment by employing a Bayes filter.

17th, Feb 2024 - 24th, Apr 2024

Bayes Filter Implementation

COMPUTE CONTROL

The compute_control function extracts the control information in the format [rot1, trans, rot2], i.e., first rotation, translation, second rotation, with units [degrees, meters, degrees]. It computes this control from a given previous pose (at time t-1) and the current pose (at time t) of the robot. The code snippet is attached below.

def compute_control(cur_pose, prev_pose):
    """ Given the current and previous odometry poses, this function extracts
    the control information based on the odometry motion model.

    Args:
        cur_pose  ([Pose]): Current Pose
        prev_pose ([Pose]): Previous Pose

    Returns:
        [delta_rot_1]: Rotation 1 (degrees)
        [delta_trans]: Translation (meters)
        [delta_rot_2]: Rotation 2 (degrees)
    """
    # Changes in x and y
    change_x = cur_pose[0] - prev_pose[0]
    change_y = cur_pose[1] - prev_pose[1]

    # Initial rotation (atan2 returns radians; convert to degrees to match the poses)
    rot_1 = math.degrees(math.atan2(change_y, change_x)) - prev_pose[2]
    normalized_rot_1 = mapper.normalize_angle(rot_1)

    # Translation
    trans = math.sqrt((change_x**2) + (change_y**2))

    # Final rotation
    rot_2 = cur_pose[2] - prev_pose[2] - rot_1
    normalized_rot_2 = mapper.normalize_angle(rot_2)

    return normalized_rot_1, trans, normalized_rot_2

Odometry Motion Model

In the odometry motion model, three inputs are required: cur_pose, prev_pose, and u, which represent the current position, previous position, and the actual control input u (rot1, trans, rot2), respectively. The control input u is calculated during the prediction step, based on the current and previous positions as provided by the odometry readings. After determining the actual u, the probability of having a required control input u for a given pair of possible previous and current poses of the robot can be calculated by invoking the Gaussian function.

def odom_motion_model(cur_pose, prev_pose, u):
    """ Odometry Motion Model

    Args:
        cur_pose  ([Pose]): Current Pose
        prev_pose ([Pose]): Previous Pose
        u (rot1, trans, rot2) (float, float, float): A tuple with control data
            in the format (rot1, trans, rot2) with units (degrees, meters, degrees)

    Returns:
        prob [float]: Probability p(x'|x, u)
    """
    # Control needed to move from prev_pose to cur_pose
    normalized_rot_1, trans, normalized_rot_2 = compute_control(cur_pose, prev_pose)

    normalized_given_rot1 = mapper.normalize_angle(u[0])
    given_trans = u[1]
    normalized_given_rot2 = mapper.normalize_angle(u[2])

    rot1_prob = loc.gaussian(normalized_given_rot1, normalized_rot_1, loc.odom_rot_sigma)
    trans_prob = loc.gaussian(given_trans, trans, loc.odom_trans_sigma)
    rot2_prob = loc.gaussian(normalized_given_rot2, normalized_rot_2, loc.odom_rot_sigma)

    return rot1_prob * trans_prob * rot2_prob

Sensor Model

The sensor model function accepts a 1D array containing the precached (expected) observations for a specific robot pose on the map. Each observation consists of 18 individual measurements recorded at equidistant angular positions during the robot's anticlockwise rotation; the precached values exist for every grid cell (state). The corresponding real measurements are available through the BaseLocalization class as loc.obs_range_data. For each of the 18 measurements at a given state, the function therefore computes and returns the likelihood of the real measurement relative to the expected one.

def sensor_model(obs):
    """ This is the equivalent of p(z|x).

    Args:
        obs ([ndarray]): A 1D array consisting of the precached observations
            for a specific robot pose in the map

    Returns:
        [ndarray]: Returns a 1D array of size 18 (=loc.OBS_PER_CELL) with the
            likelihoods of each individual sensor measurement
    """
    data_len = mapper.OBS_PER_CELL  # default is 18
    prob_array = np.zeros(data_len)
    for itr, real_o, expect_o in zip(range(data_len), loc.obs_range_data, obs):
        prob_array[itr] = loc.gaussian(real_o[0], expect_o, loc.sensor_sigma)
    return prob_array

Bayes Filter Algorithm

Essentially, every iteration of the Bayes filter has two steps: prediction step and update step. A prediction step incorporates the control input (movement) data. An update step incorporates the observation (measurement) data. The prediction step increases uncertainty in the belief, while the update step decreases it. The belief calculated after the prediction step is often referred to as the prior belief.

Prediction Step

In the prediction step, the probability of transitioning from each possible previous state to each current state is computed from the previous and current odometry data by calling the odom_motion_model function, and this probability is multiplied by loc.bel. The result is accumulated into bel_bar. As seen in the code implementation, a previous grid cell enters the inner loops only if its belief exceeds 0.0001: any value multiplied by a near-zero belief remains near zero, so states below this threshold contribute negligibly to the belief and skipping them significantly reduces computation time.

def prediction_step(cur_odom, prev_odom):
    """ Prediction step of the Bayes Filter. Update the probabilities in loc.bel_bar
    based on loc.bel from the previous time step and the odometry motion model.

    Args:
        cur_odom ([Pose]): Current Pose
        prev_odom ([Pose]): Previous Pose
    """
    u = compute_control(cur_odom, prev_odom)
    x_s, y_s, a_s = (mapper.MAX_CELLS_X, mapper.MAX_CELLS_Y, mapper.MAX_CELLS_A)
    bel_bar = np.zeros([x_s, y_s, a_s])

    for x_prev in range(x_s):
        for y_prev in range(y_s):
            for a_prev in range(a_s):
                # Skip states with negligible belief to save computation
                if loc.bel[x_prev, y_prev, a_prev] > 0.0001:
                    for x_curr in range(x_s):
                        for y_curr in range(y_s):
                            for a_curr in range(a_s):
                                prev_pose = np.array(mapper.from_map(x_prev, y_prev, a_prev))
                                cur_pose = np.array(mapper.from_map(x_curr, y_curr, a_curr))
                                bel_bar[x_curr][y_curr][a_curr] += \
                                    odom_motion_model(cur_pose, prev_pose, u) * \
                                    loc.bel[x_prev][y_prev][a_prev]

    # Normalize and store the prior belief
    loc.bel_bar = bel_bar / np.sum(bel_bar)

Update Step

In the update step, the sensor receives a new measurement and the belief probabilities need to be updated. This is accomplished by updating the probabilities stored in loc.bel.

def update_step():
    """ Update step of the Bayes Filter. Update the probabilities in loc.bel
    based on loc.bel_bar and the sensor model.
    """
    bel = loc.bel
    x_s, y_s, a_s = (mapper.MAX_CELLS_X, mapper.MAX_CELLS_Y, mapper.MAX_CELLS_A)

    for x_cur in range(x_s):
        for y_cur in range(y_s):
            for a_cur in range(a_s):
                # Joint likelihood of all 18 expected views for this cell
                prob_sensor_model = np.prod(sensor_model(mapper.get_views(x_cur, y_cur, a_cur)))
                bel[x_cur][y_cur][a_cur] = prob_sensor_model * loc.bel_bar[x_cur][y_cur][a_cur]

    # Normalize so the belief sums to 1
    loc.bel = np.true_divide(bel, np.sum(bel))

Results

Here is a plot showing the output of the Bayes Filter when running these methods in the simulator. The red path indicates the odometry, the green path represents the real car trajectory, and the blue path is the trajectory predicted by the filter.

simulate trajectory graphs

The video below demonstrates how the car moves along the green trajectory path. The red path, representing the odometry, is inaccurate and not used in our lab. The blue path shows the predicted locations and the path between each movement. As observed, the Bayes filter generally remains close to the robot's position; there are instances where the two are relatively far apart, but the filter adjusts quickly. As the robot begins to make larger, continuous movements throughout the run, the filter tracks the robot's path increasingly closely. Nevertheless, near the end a quick turn causes the filter to deviate from the course.


Lab 11: Localization on the real robot

Introduction

In this lab, localization using the Bayes filter will be implemented on the actual robot. The update step will rely solely on full 360-degree scans with the ToF sensor, as the motion of these particular robots is typically so noisy that the prediction step is not helpful. The purpose of the lab is to appreciate the difference between simulated and real-world systems.

24th, Feb 2024 - 1st, May 2024

Grid Localization using Bayes Filter (Simulation)

To ensure that the necessary Bayes filter functions are implemented correctly, the functions are provided and tested by running the simulation, as shown in the screenshot below. This also verifies that my implementations from the previous lab were accurate.

simulator plotter

Grid Localization using Bayes Filter (Real)

To use the update step of the Bayes filter to determine the robot's location, the TOF sensor needs to measure distances and send the data to the laptop via Bluetooth. To achieve this, I had the robot rotate from 0 to 340 degrees (inclusive) in 20-degree increments, 17 rotations in total, using the PID controller implemented in previous labs. The robot records 18 distance measurements and transmits them, and processing continues once all 18 have been received by the laptop, as demonstrated in the approach below.

In order for the event handler to receive the data sent from the robot in real time, a coroutine (specifically, the asyncio sleep coroutine) is called directly by executing asyncio.run(asyncio.sleep(3)) inside a while loop. This lets the notebook keep listening for incoming data while waiting for the collection to complete.

def perform_observation_loop(self, rot_vel=120):
    """Perform the observation loop behavior on the real robot, where the robot does
    a 360 degree turn in place while collecting equidistant (in the angular space)
    sensor readings, with the first sensor reading taken at the robot's current heading.
    The number of sensor readings depends on "observations_count" (=18) defined in world.yaml.

    Keyword arguments:
        rot_vel -- (Optional) Angular Velocity for loop (degrees/second)
            Do not remove this parameter from the function definition, even if you don't use it.

    Returns:
        sensor_ranges   -- A column numpy array of the range values (meters)
        sensor_bearings -- A column numpy array of the bearings at which the sensor
            readings were taken (degrees). The bearing values are not used in the
            Localization module, so you may return an empty numpy array.
    """
    # global data
    sensor_ranges = np.zeros((18, 1))
    # sensor_ranges = [[0.365],[0.411],[0.515],[0.839],[2.227],[1.299],[1.285],[2.642],[2.968],[1.565],[1.311],[0.671],[0.454],[0.364],[0.362],[0.412],[0.535],[0.44]]
    sensor_bearings = np.zeros(18)  # not used, so the values do not matter

    ble.send_command(CMD.START, "18.0|13.3|1.3|20")
    print("Car starts rotating and measuring distance")
    ble.send_command(CMD.Send_data, "")

    # Poll until all 18 measurements have arrived over BLE
    while (len(data) < 18):
        asyncio.run(asyncio.sleep(3))
    print("18 measurements data received!")
    print(data)

    for i in range(18):
        distance = float(data[i].split("|")[1].split(": ")[1]) / 1000  # mm -> m
        sensor_ranges[i][0] = distance
    print(sensor_ranges)
    print("Complete")
    return sensor_ranges, sensor_bearings

The video below demonstrates the car's rotation, pausing briefly to record a distance using the front TOF sensor every 20 degrees. To obtain relatively more accurate distance measurements, I recorded and transmitted the average value of 10 distance measurements.

Marked Poses

the global x-y frame

(5 ft, 3 ft)

The localization for the true waypoint at (5, 3), shown as a green dot and corresponding to the 5th tile on the global x-axis and the 3rd tile on the global y-axis, is displayed in the plot below; the origin is the position of the robot car in the image. The global x-y coordinate frame is illustrated above.


This point had the least accurate location estimate (the blue dot produced by the Bayes filter update step). The inaccuracy arose because the robot could not obtain enough unique readings: the surrounding area has relatively large empty spaces and no distinctive layout, and the surroundings of (5, 3) and (5, -3) look similar, making it difficult to determine the actual location accurately. Despite this, the distance between the actual location and the estimated one is still considered acceptable.

5ft, 3ft point

Additionally, as depicted in the plot, there are two green dots. The dot on the left represents the center of the car, not the front where the TOF sensor is actually located. The green dot on the right side accurately marks the location of the TOF sensor. Thus, all the green dots in the screenshots below indicate the locations of both the car's center and the TOF sensor. The front of the car is 73mm away from the center.

(0 ft, 3 ft)

The point (0, 3) is better localized, as shown in the screenshot below. The y-axis location is accurately estimated, while the x-axis location is predicted less accurately.

0ft, 3ft point

(-3 ft, -2 ft), (5 ft, -3 ft)

The points (-3, -2) and (5, -3) are better localized, as shown in the screenshot below. It is evident that the y-axis locations are accurately estimated, while the x-axis locations are less accurately predicted. However, this is acceptable since the discrepancy is merely the distance from the car's center to the front.

-3ft, -2ft point 5ft, -3ft point

When the difference between the front head of the car and the center of the car is not a concern, the images below show that the robot's estimated location at (-3, -2) and (5, -3) perfectly aligns with its actual location (blue dot represents the estimated location, green dot indicates the actual location).

-3ft, -2ft point estimate location -3ft, -2ft point actual location -3ft, -2ft point estimate location -3ft, -2ft point actual location

Discussion

From these results, the estimated locations of both (-3, -2) and (5, -3) align perfectly with the known positions. This accuracy is due to both locations having the most walls and the most unique layouts, providing plenty of distinct distance values to pinpoint a location accurately. Conversely, the other points were slightly off, attributed to there being fewer walls or the walls being too distant to determine the robot's exact location accurately.


LAB12: Path Planning and Execution

Introduction

The robot car now has feedback loop control, which was implemented in Labs 5-7. It can map its environment, a feature developed in Lab 9, and is also capable of localizing itself within the map, as established in Lab 11. At this point, I would like the robot to navigate through a set of waypoints in the maze. I have tried multiple approaches to achieve this goal.

1st, May 2024 - 15th, May 2024

PID control for orientation and open-loop control on movement

From the experiments in Lab 8 and Lab 11, it is evident that my orientation control is effective, allowing the robot to rotate to a specific angle both accurately and quickly. Thus, I began this lab by using the implemented PID orientation control in conjunction with open-loop control for straight movement.

To enable the car to move from one point to another, I used a Python function named calculate_distance_and_angle. This function calculates both the displacement and the angle the car needs to rotate from its initial position. It takes four inputs: the current position's (x, y) coordinates and the next position's (x, y) coordinates.

def calculate_distance_and_angle(current_belief_x, current_belief_y, goal_x, goal_y):
    # Calculate the differences in the coordinates
    dx = goal_x - current_belief_x
    dy = goal_y - current_belief_y

    # Calculate the distance using the Euclidean distance formula
    distance = math.sqrt(dx**2 + dy**2)

    # Calculate the angle using atan2 (returns the angle in radians)
    angle_radians = math.atan2(dy, dx)

    # Convert radians to degrees
    angle_degrees = math.degrees(angle_radians)

    return int(distance), int(angle_degrees)

Both movement and rotation occur by sending the PID controller parameters, distance, and rotation angle to the robot. The function named rotate in the Arduino IDE accepts an input specifying the angle the robot needs to rotate from its initial heading toward the target point. After moving, it rotates back by the negative of that angle, returning to its initial heading for the next movement.

void rotate(int setpoint_IMU){
  angle = 0;
  car_rotating = 1;
  lastTime_IMU = (float)millis();
  init_time = (float)millis();
  integral_IMU = 0;
  lastError_IMU = setpoint_IMU - angle;
  while (car_rotating){
    if (myICM.dataReady()) {
      myICM.getAGMT(); // Update the sensor data
      gyroZ = myICM.gyrZ() - avg_angle_noise; // Assuming gyroZ gives rotation rate in degrees/s
      curr_time_IMU = (float)millis();
      // PID calculations
      dt = (curr_time_IMU - lastTime_IMU) / 1000.0; // Delta time in seconds
      lastTime_IMU = curr_time_IMU;                 // Update last time
      angle += (gyroZ * dt);                        // Angle estimation
      error_angle = setpoint_IMU - angle;           // Calculate error
      integral_IMU += error_angle * dt;             // Calculate integral
      // Integral wind-up
      if (integral_IMU > 1200.0){
        integral_IMU = 1200.0;
      } else if (integral_IMU < -1200.0){
        integral_IMU = -1200.0;
      }
      derivative_IMU = (error_angle - lastError_IMU) / dt; // Calculate derivative
      P_calc = Kp_IMU * error_angle;
      I_calc = Ki_IMU * integral_IMU;
      D_calc = Kd_IMU * derivative_IMU;
      pidOutput_IMU = P_calc + I_calc + D_calc; // Calculate PID output
      lastError_IMU = error_angle;              // Update last error for next cycle
      if (abs(error_angle) < 0.3 || (float)millis() - init_time > 30000) {
        stop_car();
        overall_angle += angle;
        // Logging data
        if (itr_IMU < 3000) {
          data_buffer_IMU[itr_IMU][0] = overall_angle;
          itr_IMU++;
        }
        car_rotating = 0;
      } else {
        if ((int)pidOutput_IMU > 0){
          pwmSignal_IMU = constrain((int)pidOutput_IMU, 35, 245); // Ensure PWM is within valid range
          record_PWM_IMU = pwmSignal_IMU;
          left_turn(pwmSignal_IMU);
        } else {
          pwmSignal_IMU = constrain(-(int)pidOutput_IMU, 35, 245); // Ensure PWM is within valid range
          record_PWM_IMU = -pwmSignal_IMU;
          right_turn(pwmSignal_IMU);
        }
      }
    }
  }
}

Regarding the displacement, my algorithm works as follows: after the car rotates, the robot records the initial distance measurement. It then subtracts the displacement, as calculated and provided by the Python function calculate_distance_and_angle. This calculation determines the 'away-from-the-wall' distance, which is the target distance for the car to maintain from the wall when it stops. Subsequently, the program compares the real-time measurement to the 'away-from-the-wall' distance. Whenever the real-time measurement is smaller than the 'away-from-the-wall' distance, the car should stop. It then rotates by the negative value of the angle to prepare for the next movement.

In the attached video, I manually tuned the distance that the robot car should travel. Since one floor tile is approximately 304 mm, I calculated the distance between the first two points to be about 860 mm. I found that when the distance was tuned to 862 mm, the car could reach the second point with relative accuracy.

In the video below, the robot operates with open loop and timed control. I manually set a timer for how long the car should run straight. I started by running the car for 500 ms, then 1 s, 2 s, and finally 2.5 s to determine how long it takes for the car to traverse an entire floor tile. The video below demonstrates the car running for a tuned time setting and reaching the second point.

However, using either open loop control to run the car for a specifically tuned distance, or open loop with timed control, proved to be time-consuming and buggy for me. As the video below demonstrates, when the car slips due to dirt on the floor, it continues to run as programmed, regardless of the errors. This hardcoded behavior leads to significant inaccuracies in its performance.

As demonstrated between 41 and 47 seconds in the video below, the error becomes quite obvious when the manually tuned timed control is in use. Additionally, it appears that these errors accumulate over time, leading to increased unpredictability.

PID control on movement as well as orientation

Based on the details mentioned above, I decided to introduce PID movement control to complete this lab. I implemented PID control code, and the corresponding function is attached below. This function accepts an input of the calculated displacement between two points. The error is defined as the difference between the distance the car has traveled and the calculated displacement.

void go_straight(int dist_move){
  int starting_time = (int)millis();
  int initial_dis = 0;
  int error_correction = 0;
  Serial.print("need dis: ");
  Serial.println(dist_move);

  // Take one initial reading to anchor the error calculation
  while (!distanceSensor2.checkForDataReady()) {
    delay(1);
  }
  distance_2 = distanceSensor2.getDistance(); // Get the result of the measurement from the sensor
  distanceSensor2.clearInterrupt();
  lastTime_TOF = (float)millis();
  Serial.println(distance_2);
  initial_dis = distance_2;
  Serial.print("Initial dis: ");
  Serial.println(initial_dis);
  error_dist = initial_dis - dist_move;
  error_correction = error_dist;
  lastError_TOF = error_dist;
  //run_fast(46);

  while (((int)millis() - starting_time) < 10000){
    if (distanceSensor2.checkForDataReady()){
      TOF_time_2 = (float)millis();
      distance_2 = distanceSensor2.getDistance(); // Get the result of the measurement from the sensor
      distanceSensor2.clearInterrupt();
      Serial.print("New reading: ");
      Serial.println(distance_2);
      error_dist = distance_2 - error_correction;
      Serial.print("Error correction: ");
      Serial.print(error_correction);
      Serial.print(" | Error distance: ");
      Serial.print(error_dist);
      Serial.print(" | Kp_TOF: ");
      Serial.println(Kp_TOF);
      P_calc_TOF = Kp_TOF * error_dist;
      Serial.print("P: ");
      Serial.print(P_calc_TOF);
      curr_time_TOF = (float)millis();
      dt_TOF = (curr_time_TOF - lastTime_TOF) / 1000.0; // Delta time in seconds
      lastTime_TOF = curr_time_TOF;                     // Update last time
      integral_TOF += error_dist * dt_TOF;              // Calculate integral
      // Integral wind-up: clamp the accumulated error
      if (integral_TOF > 600.0){
        integral_TOF = 600.0;
      } else if (integral_TOF < -600.0){
        integral_TOF = -600.0;
      }
      // Reset the integral as soon as it opposes the error's sign
      if (integral_TOF * error_dist < 0){
        integral_TOF = 0;
      }
      I_calc_TOF = Ki_TOF * integral_TOF;
      Serial.print(" | I: ");
      Serial.print(I_calc_TOF);
      derivative_TOF = (error_dist - lastError_TOF) / dt_TOF; // Calculate derivative
      D_calc_TOF = Kd_TOF * derivative_TOF;
      Serial.print(" | D: ");
      Serial.print(D_calc_TOF);
      pidOutput_TOF = P_calc_TOF + I_calc_TOF + D_calc_TOF; // Calculate PID output
      Serial.print(" | pidOutput_TOF: ");
      Serial.print(pidOutput_TOF);
      lastError_TOF = error_dist; // Update last error for next cycle
      if (abs(error_dist) < 2) {
        Serial.println("get there");
        stop_car();
        break;
      } else {
        if ((int)pidOutput_TOF > 0){
          pwmSignal_TOF = constrain((int)pidOutput_TOF, 36, 50); // Ensure PWM is within valid range
          run_fast(pwmSignal_TOF);
        } else {
          pwmSignal_TOF = constrain(-(int)pidOutput_TOF, 36, 50); // Ensure PWM is within valid range
          run_back(pwmSignal_TOF);
        }
        Serial.print(" | PWM: ");
        Serial.println(pwmSignal_TOF);
      }
    }
  }
}

The video below demonstrates that the car can accurately traverse a floor tile as instructed.

In addition, I noticed that when I set the motor input PWM values to 0, the wheels keep coasting, which could introduce errors. Therefore, I set the PWM values to 255 to actively brake the wheels and ensure the robot stops when required.

One technical issue was that the front TOF sensor needed occasional angle corrections so it would not lean toward the floor instead of pointing horizontally at the front wall. Apart from this, I encountered other challenges. A subtler error was identified by monitoring the real-time measurements and the PID contributions to the PWM values: I initially implemented the algorithm assuming the initial measurement would always be larger than the calculated displacement between two points. However, as the video above demonstrates, when the car moves off track, such as stopping near the box in the middle, the next initial measurement can be smaller than the distance required for the next movement. This results in a consistently positive error, causing the PID controller to erroneously keep driving the car forward.

The video below shows that, instead of programming the robot to send data back for analysis, I decided to connect the laptop directly to the robot. This setup allows me to monitor the real-time PID contributions to the PWM values and the measurements from TOF devices.

Navigation through the map

The entire map consists of a series of points linked together into the trajectory that the robot should follow; the relevant screenshot is shown below.

trajectory on the maze

The localization was implemented in Lab 11 through the function named perform_observation_loop(self, rot_vel=120). This function controls the robot to rotate 360 degrees, collect 18 distance measurements, and asynchronously send them back to the laptop for further analysis.

The car runs quite erratically between the first two points in the trajectory. This is because when the car rotates from the first point toward the second, the front TOF sensor measures the distance to a diagonally placed wall, which is the furthest distance it needs to read in the entire trajectory. Consequently, the two videos below demonstrate that it is not reliable for the car to consistently stop at the second point.

Therefore, I subsequently skipped the first point during testing. As the video below demonstrates, the car was able to consistently navigate through a predefined series of points in the maze, succeeding in multiple trials.

In every video, the error appears to be a maximum of one tile away, with the third attempt showing even better accuracy. Next, I decided to introduce localization to further improve accuracy as the robot navigates through the map.

Navigation and localization

Initially, I had the car navigate from the first to the fourth point along the defined trajectory. The ambient environment at the fourth point is relatively more complex, which I assumed would yield more accurate updated belief data. Consequently, after reaching the fourth point, the robot continued along the trajectory based on this updated data. However, challenges arose due to the limited number of measurements (only 18) and occasional inaccuracies in the car's 360-degree rotation or errors introduced by the car's movement. These factors sometimes caused the updated data, which estimates the car's location, to be inaccurate. When the car's guessed location is incorrect but the next target location is predefined, both the displacement and the angle required for the car to move and rotate from the guessed location to the known next location can be significantly off, as demonstrated in the video below.

Next, I had the car navigate the entire maze, following a predefined series of points along the trajectory. At the second to last point, I performed a localization to evaluate its accuracy, since I believed the environment at this point was suitable for the car to accurately determine its location.

The result indicates that the error is only one tile away, as seen at the end of the video where the blue dot appears. This level of localization accuracy is considered acceptable.

Success 1 (skipped one point in the trajectory)

After testing the localization at each point, I discovered that using the updated localization data from the fifth to the seventh point enhances the reliability and likelihood of the car successfully navigating the entire maze. For the remaining points, I controlled the car to follow the pre-defined trajectory directly.

The trajectory plotted for ground truth and updated belief points is acceptable, considering there are only 18 measurement data points and the unavoidable errors on the robot. Additionally, it's worth noting that the updated belief is relatively accurate. For example, as the video shows, when the car is at the right bottom and right top corners, the blue dot appears in the correct location.

trajectory plot for gt and belief

Success 2 (Complete trajectory; the video quality was poor)

Success 3 (Complete trajectory)

trajectory plot for gt and belief

This plot, which includes the floor tiles, effectively demonstrates the accuracy of the system in relation to the tiles.

trajectory plot for gt and belief