1 Introduction

Robotics is a practical field of study. As we discussed earlier in the book, it is essential to actively construct your own knowledge of the subject through experience. The series of projects presented in this chapter builds on the theoretical foundation developed throughout the book and flags where you may need to explore further if you find gaps in your current knowledge of the material covered. These projects expand on the embodied design and prototyping concepts covered in Chap. 12 about the hexapod robot. We provide a hands-on guide to building a robot from scratch using “first principles”. By completing these projects, you will gain experience in implementing mathematical concepts in a practical application and in communicating with hardware. Please note that this chapter regularly references the online resources associated with this book (https://foundations-of-robotics.org). You will:

  • Learn about programming techniques for deriving the kinematic equations of the leg component of the hexapod robot. These equations include the direct kinematic (DK) homogenous transformation and the inverse kinematic (IK) equations of the end-effector (EE) frame.

  • Create and manipulate the geometric Jacobian with programming techniques to analyse the behaviour of the robotic leg.

  • Understand and implement serial-based communication, a standard method by which we can communicate with robotic systems.

We will discuss the development of a single leg from the hexapod and perform a kinematic analysis across the first three projects of this chapter. We emphasise intuitive descriptions of mathematical concepts within robotics, supported by Python code examples. The final project explores serial communication using C++ code, discussing bit-wise operations to efficiently send numerical data to a microcontroller. A Docker configuration provided in the associated online resources ensures that the examples introduced in this chapter work out of the box. Please use Python 3.6 or newer to run the code segments presented in this chapter.

2 Project One: Defining the Robot System

2.1 Project Objectives

  • Define a mechanical system from a conceptual design and plan modelling techniques.

  • Research and identify suitable actuators for the task.

2.2 Project Description

Let us begin by summarising the requirements of our hexapod system. As described in Chap. 12, the hexapod has six legs, each containing four actuators. The hexapod moves with the tripod gait, as discussed in Chap. 8. First, we actuate the base of the leg in a lateral motion. Three rotational link pairs then follow this first actuator, making up the rest of the leg, as shown in Fig. 17.1.

Fig. 17.1

A rendering of a single leg from the proposed hexapod platform. Bryce Cronin/CC BY-NC-ND 4.0. www.cronin.cloud/hexapod

The actuators used in the original case study described in Chap. 12 were the rotational Dynamixel MX28r (Dynamixel, 2021) actuators. Each MX28r contains a microcontroller running a lower-level PID loop with an encoder. These lower-level controllers enabled the original developers of the hexapod case study to command the actuator’s desired angle and velocity and be confident that the behaviour would execute correctly.

2.3 Project Tasks

2.3.1 Task One: Basic Questions

  • Take the robot leg shown in Fig. 17.1 and identify the type of mechanical structure of the leg, i.e. whether it is a parallel structure or a serial-link robot.

  • Referring to the definitions in Chap. 10, list suitable techniques for modelling the following relationships.

    • The position relationship: how does the position of the tip/foot of the robot leg relate to the actuator positions?

    • The velocity relationship: how does the speed of the tip/foot of the robot leg relate to the actuator velocities?

    • The dynamics relationship: how do the desired motion, forces and torques of the foot/tip of our robot relate to the torques exerted by our actuators?

2.3.2 Task Two: Research Components

  • The original design of the hexapod called for the MX28r actuator, as described earlier. What design considerations do you think were made in selecting these actuators? You may make assumptions in answering this point; think about:

    • The weight of the hexapod and its distribution over the legs

    • Components such as the battery and electronics

    • Actuator weights and torque capacity

  • You can find a list of suitable actuators for this project on the associated website of this book. Go through the actuators on this page and analyse each actuator’s datasheet, taking note of their various benefits. Then, identify the optimal solution for the hexapod model. Consider parameters such as position/velocity tracking, handling collisions while walking and actuator strength.

3 Project Two: Modelling the Position Kinematics

3.1 Project Objectives

  • Implement the DH parameters of the hexapod leg using Python.

  • Programmatically calculate the direct kinematics homogenous transformation matrix.

  • Create an inverse kinematic solution of the robotic leg.

  • Learn how we can validate positional relationships.

3.2 Project Description

As our robot leg follows a standard joint-link pair structure, we can define the system with the DH parameters. Table 17.1 shows the parameters assuming the world coordinate frame coincides with the leg base, i.e. the connection point to the hexapod body. These parameters also include an additional qlim parameter, which holds the position limits of each actuator. Finally, we visualise the parameters with the kinematic diagram illustrated in Fig. 17.2.

Table 17.1 DH parameters of the robot leg
Fig. 17.2

A visualisation of the presented DH parameters from Table 17.1. We highlight the EE position with a red coordinate frame and display the world coordinate frame in blue. Note that the configuration shown has the four joints at arbitrary angles

3.3 Project Tasks

3.3.1 Task One: Codifying the DH Parameters

Note: Coding segments referenced in these questions are in Sect. 3.4

  • Take our presented DH parameters from Table 17.1 and Fig. 17.2. Using the numerical values (Table 17.1), create a simulated version of the robot leg using the robotics toolbox in Python.

  • When creating the robot with the toolbox, make sure you include the joint limit (qlim) variables in Table 17.1. See Coding Segment 2 for a starting point on this task.

3.3.2 Task Two: Kinematics

  • While using the spatial math library and numerical values from Table 17.1, calculate the DK homogenous transformation of the robotic leg. See Coding Segments 1 and 3 for help with this problem.

  • Confirm the DK homogenous transform calculation by comparing the output to the “fkine” function of the robot created in Task One. Hint: look at Coding Segments 2 and 3 for help with this problem. This task requires you to investigate the robotics toolbox function “fkine”.

3.3.3 Task Three: Advanced Kinematics

  • Modify Coding Segment 3 to find the leg EE’s x-, y- and z-positions using symbolic variables. Remember the structure of the homogenous transformation matrix for this step.

  • Validate your equations by manually deriving the position equations with diagrams. For a guide on this step, see Sect. 3.4. Do the equations derived match the symbolic expression previously calculated?

Note: The following two items are an extension exercise for the readers without explicit instructions in this chapter. However, answers are available with the associated online resources.

  • Derive an inverse kinematics algorithm for our robotic leg using Python. We leave this question as an open exercise to readers who can use their knowledge of kinematics and previous information presented throughout the book.

  • Validate your inverse kinematics algorithm. Validation can occur by running through the workspace of the leg and comparing outputs from your direct and inverse kinematic solutions.

3.4 Case Study Example

3.4.1 Representing the DH Parameters with Code

Let us briefly return to the mathematical representation of the DH parameters presented in Chap. 10. In this chapter, we use slightly different notation. The first significant change concerns the values of \(\theta\). Henceforth, we represent the values of \(\theta\) with q, which indicates the angle of an actuator. The method by which we represent homogenous transformations is also slightly different.

A homogenous transformation, \(A_{i - 1}^i\), would represent the ith link-joint pair of the DH parameters. We utilise the spatial math Python library to hold various transformations and rotations of the DH parameters. In code, this may look slightly different to the previously presented notation. Assuming that we’re utilising the standard DH convention, we can represent the transformation of \(A_{i - 1}^i\) with the expression \(A_{i - 1}^i = R_z \left( {q_i } \right)T_z \left( {d_i } \right)T_x \left( {a_i } \right)R_x \left( {\alpha_i } \right)\). In this case, \(R\) and \(T\) represent functions that produce a homogenous transformation matrix from a single rotation or translation. The subscript denotes the axis the transformation takes place along. In the case of our robot leg, remember that for the ith joint-link pair, the variable \(q_i\) represents the position of our actuator while the other parameters are constant. In Python code, the calculation of \(A_{i - 1}^i\) is illustrated in Code Segment 1.

Code Segment 1. An example of a single joint-link pair represented with four sequential transformations

#import requirements – Note we are using python 3.6
import spatialmath as sm

#A joint-link pair of the DH parameters, assuming the variables q_i, d_i, a_i and alpha_i already exist
A_i = sm.SE3.Rz(q_i) * sm.SE3.Tz(d_i) * sm.SE3.Tx(a_i) * sm.SE3.Rx(alpha_i)

Additionally, in this chapter, we utilise the robotics toolbox, a Python library presented by Corke and Haviland (Corke & Haviland, 2021) that can model serial-link structures like the proposed leg. We show an example of the robotics toolbox below, creating a simple two-link manipulator.

Code Segment 2. An example of the robotics toolbox creating a simple two-link simulated robot.

#import requirements – Note we are using python 3.6
import roboticstoolbox as rtb
import math as rwm
import spatialmath as sm
import numpy as np

#We declare link lengths for the 'a' variables in this robot
a0 = 0.5
a1 = 0.5

#We also include a base transform variable
base_transform = sm.SE3(np.identity(4))
#base_transform = sm.SE3.Rx(rwm.pi/2)
#base_transform = sm.SE3.Ry(rwm.pi/2) * sm.SE3.Tz(0.5)

#Create the robot with the toolbox, note how theta is not present as the joint position is not a constant
linkjoint_0 = rtb.RevoluteDH(d=0, alpha=0, a=a0, offset=0, qlim=None)
linkjoint_1 = rtb.RevoluteDH(d=0, alpha=0, a=a1, offset=0, qlim=None)
example_robot = rtb.DHRobot([linkjoint_0, linkjoint_1], base=base_transform, name='Simple_Example')
example_robot.teach()

Note how in Code Segment 2 there are several additional parameters, including the base parameter in the DHRobot function. Also, each joint-link pair, defined by RevoluteDH, has two additional parameters, offset and qlim. These parameters operate as follows:

  • Offset (offset=)—Adds a constant offset to the position of our ith actuator. Take Code Segment 2 and modify it: change the offset parameter for linkjoint_1 from 0 to \(\pi /2\) (in code, rwm.pi/2). Observe how this change impacts the generated simulation.

  • Joint Limits (qlim=)—Sets our ith actuator’s upper and lower position limits. Change the qlim variable in Code Segment 2 for either linkjoint_0 or linkjoint_1. Currently, this variable is None. Try changing it to [-0.3, 0.3]. Once again, observe how changing this variable impacts the generated simulation.

  • Base Position (base=)—The pose of your manipulator’s base relative to the world coordinate frame. It is essential that, as you move forward in this chapter, you define your world coordinate frame and where your robot is located relative to it when developing your system. In Code Segment 2, several alternative definitions for the variable base_transform are given in commented lines of code. Uncomment different variations of base_transform and observe how each impacts the simulation.

While not part of the traditional mathematical definition of the DH convention, these parameters can help define a robot with a desired zero configuration or implement multiple serial-link systems in a single world, as sketched in the example below.
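To make this concrete, here is a minimal sketch of our own (the numeric values are arbitrary illustrations, not values from Table 17.1) that combines a non-trivial base transform, a constant offset and joint limits in a single two-link example:

#A minimal sketch combining the three extra parameters; all numbers are arbitrary illustrations
import math as rwm
import roboticstoolbox as rtb
import spatialmath as sm

#A base transform that lifts the robot 0.5 m above the world frame
base_transform = sm.SE3.Tz(0.5)

#A joint-link pair with a constant pi/2 offset and position limits of +/- 0.3 rad
linkjoint_0 = rtb.RevoluteDH(d=0, alpha=0, a=0.5, offset=rwm.pi / 2, qlim=[-0.3, 0.3])
linkjoint_1 = rtb.RevoluteDH(d=0, alpha=0, a=0.5, offset=0, qlim=None)

example_robot = rtb.DHRobot([linkjoint_0, linkjoint_1], base=base_transform, name='Offset_Example')
example_robot.teach()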

3.4.2 Deriving the Forward Kinematics

The next step in modelling a robot is defining the forward kinematics, estimating the EE’s position and rotation relative to the robot’s base. We highlight this problem in Fig. 17.2. We want to establish the location and orientation of the red coordinate frame (EE), previously referred to as the tip/foot, relative to the blue frame (base) based on actuator positions. We split the problem into the rotation and position components.

Let us first discuss the orientation problem. Broadly, we want to estimate the rotation matrix using the DH parameters and the actuator positions. We can do this in Python with the spatial math library. Thus, we present Coding Segment 3, which retrieves the rotation matrix of the end-effector relative to the base of our robot leg.

Code Segment 3. The Python code for calculating the rotation matrix of our robot leg.

#import requirements
import spatialmath as sm
import spatialmath.base as base
import numpy as np

#Creates a set of symbolic variables
a_1, a_2, a_3, a_4 = base.sym.symbol('a_1, a_2, a_3, a_4')
d_1, d_2, d_3, d_4 = base.sym.symbol('d_1, d_2, d_3, d_4')
alpha_1, alpha_2, alpha_3, alpha_4 = base.sym.symbol('alpha_1, alpha_2, alpha_3, alpha_4')
q_1, q_2, q_3, q_4 = base.sym.symbol('q_1, q_2, q_3, q_4')

#Base transform, equivalent to an identity matrix since no base transform exists on this leg
base_transform = sm.SE3(np.identity(4))

#The four joint-link pairs of the DH parameters
leg_linkjoint_1 = sm.SE3.Rz(q_1) * sm.SE3.Tz(d_1) * sm.SE3.Tx(a_1) * sm.SE3.Rx(alpha_1)
leg_linkjoint_2 = sm.SE3.Rz(q_2) * sm.SE3.Tz(d_2) * sm.SE3.Tx(a_2) * sm.SE3.Rx(alpha_2)
leg_linkjoint_3 = sm.SE3.Rz(q_3) * sm.SE3.Tz(d_3) * sm.SE3.Tx(a_3) * sm.SE3.Rx(alpha_3)
leg_linkjoint_4 = sm.SE3.Rz(q_4) * sm.SE3.Tz(d_4) * sm.SE3.Tx(a_4) * sm.SE3.Rx(alpha_4)

#Compose the base transform and all four joint-link pairs into the DK transformation
DK_transform = base_transform * leg_linkjoint_1 * leg_linkjoint_2 * leg_linkjoint_3 * leg_linkjoint_4

#Extract and print the rotation matrix component
RotationMatrix = DK_transform.R
print(RotationMatrix)

Code Segment 3 presents a valuable tool for deriving the rotation matrix using the spatial math library. However, we also need to know the x, y and z displacements from the base frame to the EE frame. It is possible to gather these variables using a similar technique to Code Segment 3, as sketched below. Even so, let us also derive these equations manually. There are several reasons for doing so, but the primary motivation is validation. While the symbolic calculations and the robotics toolbox are helpful, using them in conjunction with our own manually derived equations reinforces that we understand our system.
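For instance, a minimal sketch of ours (reusing the variable names from Code Segment 3; the .t property of an SE3 object holds its translation component) might read:

#Continuing from Code Segment 3: the .t property of the composed SE3 holds the symbolic
#x, y and z displacements of the EE relative to the base frame
x_expr, y_expr, z_expr = DK_transform.t
print(x_expr)
print(y_expr)
print(z_expr)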

We begin with two views of our robotic system, shown in Figs. 17.3 and 17.4. These are the two images we use to derive our forward kinematic transformation. Figure 17.3 shows a top-down view perpendicular to the Z-axis, which observes the X- and Y-axes and the manipulator’s EE position within those axes. Figure 17.4 illustrates a side view of the robot leg, highlighting the positions of the final three actuators as well as the Z-position of the EE. In both images, the EE position is the red node at the end of the leg. The images also contain a variable, mx, which denotes the extended length of the leg. Please note that both figures’ axes are those of the base/world coordinate frame of Fig. 17.2.

Fig. 17.3

The position of our robot leg from a top-down view, highlighting the x- and y-positions of our EE

Fig. 17.4

By observing our manipulator from a side view, we display the positions of the final three actuators along with the length of mx and the z displacement

We start by estimating the values of mx and z from Fig. 17.4. One way to think about these values is that \(mx = a_1 + w_2 + w_3 + w_4\) and \(z = h_2 + h_3 + h_4\). We can express these variables explicitly using standard trigonometric functions, as shown below.

$$mx = a_1 + a_2 \cos \left( {q_2 } \right) + a_3 \cos \left( {q_2 + q_3 } \right) + a_4 \cos \left( {q_2 + q_3 + q_4 } \right)$$
$$z = a_2 \sin \left( {q_2 } \right) + a_3 \sin \left( {q_2 + q_3 } \right) + a_4 \sin \left( {q_2 + q_3 + q_4 } \right)$$

Having established these variables, we can now use the length of mx to estimate the x- and y-positions in Fig. 17.3.

$$x = mx\cos \left( {q_1 } \right)$$
$$y = mx\sin \left( {q_1 } \right)$$
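As a final sanity check, the hand-derived equations can be compared numerically against the toolbox. The sketch below is ours; the link lengths, \(\alpha\) values and joint angles are arbitrary placeholders (with \(d_i = 0\), \(\alpha_1 = \pi /2\) and \(\alpha_2 = \alpha_3 = \alpha_4 = 0\)), not the values from Table 17.1:

#A numerical sanity check of the hand-derived equations; all values below are arbitrary placeholders
import math as rwm
import numpy as np
import roboticstoolbox as rtb

a1, a2, a3, a4 = 0.05, 0.08, 0.10, 0.12   #hypothetical link lengths
leg = rtb.DHRobot([
    rtb.RevoluteDH(d=0, a=a1, alpha=rwm.pi / 2),
    rtb.RevoluteDH(d=0, a=a2, alpha=0),
    rtb.RevoluteDH(d=0, a=a3, alpha=0),
    rtb.RevoluteDH(d=0, a=a4, alpha=0)])

q = [0.2, 0.4, -0.3, 0.1]                 #an arbitrary joint configuration

#Hand-derived position equations from Figs. 17.3 and 17.4
mx = a1 + a2 * rwm.cos(q[1]) + a3 * rwm.cos(q[1] + q[2]) + a4 * rwm.cos(q[1] + q[2] + q[3])
z = a2 * rwm.sin(q[1]) + a3 * rwm.sin(q[1] + q[2]) + a4 * rwm.sin(q[1] + q[2] + q[3])
x, y = mx * rwm.cos(q[0]), mx * rwm.sin(q[0])

#Compare against the toolbox direct kinematics; prints True if the equations agree
print(np.allclose([x, y, z], leg.fkine(q).t))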

4 Project Three: Modelling the Velocity Kinematics with Python

4.1 Project Objectives

  • Using programming techniques, derive the geometric Jacobian of the serial-link leg, relating the actuator velocities to the velocity of the EE coordinate frame.

  • Using programming techniques, manipulate the geometric Jacobian and find properties such as the determinant, inverse and transpose.

  • Learn how the Jacobian operates by relating joint speed and EE velocity parameters.

4.2 Project Description

Modelling the velocity relationship between the leg EE coordinate frame and the actuators is crucial for building a robotic system, as described in Chap. 10. To briefly reintroduce the Jacobian matrix for serial-link robots: it is generally a 6 × N matrix, where N equals the number of actuators within our system. We split our Jacobian into position and orientation components to calculate this matrix. First, let us discuss the position component, henceforth called \(J_P(q)\). Next, we take our direct kinematic equations and take the partial derivative of each with respect to every actuator variable, arranged in the structure below. As can be observed, the result is simply a matrix of partial derivatives.

$$J_p \left( q \right) = \left[ {\begin{array}{*{20}c} {\frac{\partial x}{{\partial q_1 }}} & {\frac{\partial x}{{\partial q_2 }}} & {\frac{\partial x}{{\partial q_3 }}} & {\frac{\partial x}{{\partial q_4 }}} \\ {\frac{\partial y}{{\partial q_1 }}} & {\frac{\partial y}{{\partial q_2 }}} & {\frac{\partial y}{{\partial q_3 }}} & {\frac{\partial y}{{\partial q_4 }}} \\ {\frac{\partial z}{{\partial q_1 }}} & {\frac{\partial z}{{\partial q_2 }}} & {\frac{\partial z}{{\partial q_3 }}} & {\frac{\partial z}{{\partial q_4 }}} \\ \end{array} } \right]$$

Once we know the entire matrix \(J_P(q)\), we can utilise it in the equation below, where multiplying the Jacobian by the \(\dot{q}\) vector (the speeds of the actuators) produces the velocities of the EE in the x-, y- and z-directions.

$$\left[ {\begin{array}{*{20}c} {\dot{x}} \\ {\dot{y}} \\ {\dot{z}} \\ \end{array} } \right] = J_P \left( q \right)\left[ {\begin{array}{*{20}c} {\dot{q}_1 } \\ {\dot{q}_2 } \\ {\dot{q}_3 } \\ {\dot{q}_4 } \\ \end{array} } \right]$$

The orientation Jacobian, \(J_R(q)\), requires a slightly different calculation to the previously presented translation component. A rule of thumb is that \(J_R \left( q \right)\) and \(J_P \left( q \right)\) should be the same size. Since we have a rotation matrix and no explicit equations to differentiate, we need to calculate the partial derivative of a rotation matrix. The broad rule to consider when building your orientation Jacobian is that the ith column, \(J_{Ri}\), is equal to the rotation matrix of the previous joint-link pair multiplied by the vector \([0, 0, 1]^T\). If we consider \(R_{i - 1}\) as the rotation matrix before \(R_z \left( {q_i } \right)\), the expression below represents the ith column, \(J_{Ri}\). Please note that this methodology is specific to the constraints of the hexapod leg, as it uses the standard DH convention and contains only rotational actuators.

$$J_{Ri} = R_{i - 1} \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ 1 \\ \end{array} } \right]$$

When we put this matrix together, we express our orientation Jacobian in the form below (where \(R_0\) is the rotation matrix of our base transform):

$$J_R \left( q \right) = \left[ {\begin{array}{*{20}c} {R_0 \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ 1 \\ \end{array} } \right]} & {R_1 \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ 1 \\ \end{array} } \right]} & {R_2 \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ 1 \\ \end{array} } \right]} & {R_3 \left[ {\begin{array}{*{20}c} 0 \\ 0 \\ 1 \\ \end{array} } \right]} \\ \end{array} } \right]$$

Once calculated, \(J_R(q)\) can perform the matrix multiplication operation displayed below, in which \(\omega_x\), \(\omega_y\) and \(\omega_z\) are the angular velocity components about the three axes.

$$\left[ {\begin{array}{*{20}c} {\omega_x } \\ {\omega_y } \\ {\omega_z } \\ \end{array} } \right] = J_R \left( q \right)\left[ {\begin{array}{*{20}c} {\dot{q}_1 } \\ {\dot{q}_2 } \\ {\dot{q}_3 } \\ {\dot{q}_4 } \\ \end{array} } \right]$$

Once the calculations for \(J_P(q)\) and \(J_R(q)\) are complete, we can create what is known as the geometric Jacobian, \(J(q)\). The geometric Jacobian is the vertical concatenation of \(J_P(q)\) and \(J_R(q)\). It maps the joint speeds to the velocities of all six EE degrees of freedom, as shown in the expression below.

$$\left[ {\begin{array}{*{20}c} {\dot{x}} \\ {\dot{y}} \\ {\dot{z}} \\ {\omega_x } \\ {\omega_y } \\ {\omega_z } \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {J_P \left( q \right)} \\ {J_R \left( q \right)} \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {\dot{q}_1 } \\ {\dot{q}_2 } \\ {\dot{q}_3 } \\ {\dot{q}_4 } \\ \end{array} } \right] = J\left( q \right)\dot{q}$$
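To make the concatenation concrete, here is a minimal sympy sketch of ours; the 3 × 4 blocks J_P and J_R are stand-in symbolic placeholders, not the real matrices you will derive in the tasks below:

#A minimal sketch of assembling the geometric Jacobian; J_P and J_R are stand-in
#symbolic 3x4 blocks here, not the matrices derived from the leg kinematics
from sympy import Matrix, MatrixSymbol, symbols

JP = Matrix(MatrixSymbol('J_P', 3, 4))     #placeholder position Jacobian
JR = Matrix(MatrixSymbol('J_R', 3, 4))     #placeholder orientation Jacobian
qdot = Matrix(symbols('qdot_1:5'))         #joint speed vector [qdot_1 ... qdot_4]

J = Matrix.vstack(JP, JR)                  #the 6x4 geometric Jacobian J(q)
ee_twist = J * qdot                        #[xdot, ydot, zdot, wx, wy, wz]
print(ee_twist)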

4.3 Project Tasks

4.3.1 Task One: Calculating and Using the Geometric Jacobian Components

  • Using the DH parameters and direct kinematic equations, calculate the matrix \(J_P \left( q \right)\). See Coding Segment 4 in Sect. 4.4 for assistance.

  • Assuming our joint speed vector \(\dot{q} = \left[ {0.1,0.5,0.2, - 0.3} \right]\), what is the speed of the EE in the x-, y- and z-directions?

  • Using the DH parameters, calculate the matrix \(J_R \left( q \right)\). See Coding Segment 5 in Sect. 4.4 for assistance.

  • Assuming our joint speed vector \(\dot{q} = \left[ {0.1,0.5,0.2, - 0.3} \right]\), what is the angular velocity of the EE for the values of ω x, ω y and ω z?

4.3.2 Task Two: Completing and Manipulating the Geometric Jacobian

Note: The case study presented in Sect. 4.4 only deals with the Jacobian calculation. Please peruse the online resources for a more in-depth guide for the steps below.

  • Put together the complete geometric Jacobian \(J\left( q \right)\).

  • Make \(J\left( q \right)\) a square matrix by eliminating rows. Present a summary of how this operation impacts your Jacobian matrix and why we need to perform this step.

  • Establish the following features of our now square Jacobian matrix;

    • Using Python code, calculate the determinant. As previously mentioned in Chap. 10, the determinant can be used to find singularities. Using the determinant, can you find any possible kinematic singularities?

    • Using Python code, calculate the inverse of the square matrix. What can this matrix do?

    • Using Python code, calculate the transpose of the square matrix. What can this matrix do?

4.4 Case Study Example

We again use Python and the sympy library to find the Jacobian of our system. To proceed with the case study, we require the forward kinematics previously calculated in Project Two. The sympy library calculates \(J_P(q)\) with built-in differentiation. In Code Segment 4, we provide an example by calculating the first element of \(J_P(q)\), the term \(\frac{\partial x}{{\partial q_1 }}\).

Code Segment 4. An example of calculating a single element of the position Jacobian \(J_P(q)\)

#import requirements – Note we are using python 3.6
from sympy import *
import spatialmath.base as base

#Create symbolic variables to use
a1, a2, a3, a4, q1, q2, q3, q4 = base.sym.symbol('a1, a2, a3, a4, q1, q2, q3, q4')

#Remember our DK expression for x? If not, check the forward kinematics derivation of the robot leg
x = (a1 + a2 * cos(q2) + a3 * cos(q2 + q3) + a4 * cos(q2 + q3 + q4)) * cos(q1)

#Differentiate x with respect to q1 to obtain the first element of the position Jacobian
dx_dq1 = diff(x, q1)
print(dx_dq1)
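If you wish to go further, one possible extension of Code Segment 4 (a sketch of ours using sympy's Matrix.jacobian and the hand-derived position equations from Sect. 3.4; it is not the only way to complete Task One) differentiates all three position equations at once:

#A possible extension of Code Segment 4: build the full 3x4 position Jacobian in one step
from sympy import Matrix, cos, sin, symbols

a1, a2, a3, a4, q1, q2, q3, q4 = symbols('a1, a2, a3, a4, q1, q2, q3, q4')

#The hand-derived forward kinematic position equations of the leg EE
mx = a1 + a2 * cos(q2) + a3 * cos(q2 + q3) + a4 * cos(q2 + q3 + q4)
x = mx * cos(q1)
y = mx * sin(q1)
z = a2 * sin(q2) + a3 * sin(q2 + q3) + a4 * sin(q2 + q3 + q4)

JP = Matrix([x, y, z]).jacobian(Matrix([q1, q2, q3, q4]))   #3x4 matrix of partial derivatives
print(JP)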

Coding Segment 5 shows how we would calculate the first two columns of \(J_R(q)\). Remember from our previous description of \(J_R(q)\) how each column corresponds to the rotation matrix before an actuator motion.

Code Segment 5. Python code which calculates the first two columns of the orientation Jacobian \(J_R(q)\)

#import requirements – Note we are using python 3.6
from sympy import *
import spatialmath.base as base
import spatialmath as sm
import numpy as np

#Create symbolic variables to use
a1, a2, a3, a4, q1, q2, q3, q4, Pi = base.sym.symbol('a1, a2, a3, a4, q1, q2, q3, q4, Pi')

#The base transformation of the DH parameters, i.e. R_0, the rotation before Rz(q1)
baseTransformation = sm.SE3(np.identity(4))

#The first joint-link pair A_1; combined with the base it gives R_1, the rotation before Rz(q2)
linkjoint_1 = sm.SE3.Rz(q1) * sm.SE3.Tz(0) * sm.SE3.Tx(a1) * sm.SE3.Rx(Pi / 2)

#The transform up to the second joint-link pair, which would be needed for the third column
linkjoint_2 = linkjoint_1 * sm.SE3.Rz(q2) * sm.SE3.Tz(0) * sm.SE3.Tx(a2) * sm.SE3.Rx(0)

#Calculate the first column of the orientation jacobian
C1Jac = Matrix(baseTransformation.R) * Matrix([0, 0, 1])

#Calculate the second column of the orientation jacobian. See how we combine the base transform with the first link-joint pair
C2Jac = Matrix(baseTransformation.R) * Matrix(linkjoint_1.R) * Matrix([0, 0, 1])

5 Project Four: Building Communication Protocols

5.1 Project Objectives

  • Learn about bytes and different types of integer variables.

  • Learn about basic serial (TTL) communication through implementing C++ code.

  • Implement a ROS package to communicate with an Arduino microcontroller.

5.2 Project Description

This final project discusses communication protocols between a robot and a host PC. This section describes how the original case study contributors controlled the robot leg using ROS. The techniques presented will be helpful in many other robotics projects. When developing robots, we rarely send radian commands or decimal values directly to an actuator when commanding the system to move to a position. Instead, many actuators simply take a tick or step value input, usually an unsigned 8-bit or 16-bit integer. The MX28r actuators used originally received encoder values from 0 to 4095 to indicate the desired position of our actuator and other feedback or command values.

The method by which we implemented a solution was to design a custom message protocol that utilised TTL, or serial, communication. We write custom 8-bit integer arrays to the serial port. If you are unfamiliar with the terms unsigned or 8-bit integer, let us very briefly go over what these mean. These terms relate to the binary numeral system. Binary numbers are sequences of 1s and 0s that can represent integers. An 8-bit number is a binary number with eight 1s and 0s. A 16-bit number would have sixteen values, and so on.

So how do sequences of 1s and 0s represent other numbers? First, we treat each digit as \(2^i\), with i determined by its place in the sequence, reading from right to left starting at zero. We then sum all the values where a 1 appears in the sequence. For example, we would treat the number 1011 in binary as the sum \(2^3 + 2^1 + 2^0 = 8 + 2 + 1\). Notice how we omit \(2^2\) since the third digit in our sequence is 0 (remembering we are reading from right to left). So, in binary, the number 1011 is equivalent to 11.

This example is also what we would refer to as unsigned. An unsigned integer simply means that we do not consider negative values and sum all the observed values in the sequence. Therefore, it stands to reason that the maximum value of an unsigned 8-bit integer (also known as a byte) is 255, i.e. 11111111. On its own, this isn’t particularly useful for a robot system, particularly our system, where the target and feedback values range from 0 to 4095.

So, for each value in our 8-bit integer array, we are limited to values of at most 255 and cannot utilise negative numbers. Let us first address negative numbers. We can use signed integers, which utilise a mathematical operation to represent negative numbers with binary sequences. We refer the curious to the “two’s complement” method for in-depth information about this operation. Using “two’s complement”, a signed 8-bit binary sequence can represent a value between −128 and 127. We can apply this technique to any binary sequence, including 16- and 32-bit numbers.
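As a small worked example of our own: read as an unsigned byte, the pattern 11111011 equals 251; interpreted with “two’s complement”, the same 8-bit pattern represents

$$251 - 2^8 = 251 - 256 = -5$$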

However, a constraint of serial communication is that we are sending bytes (unsigned 8-bit integers) across the wire. Thus, we are still left with the problem of how to communicate with our microcontroller using only unsigned 8-bit values, especially considering we want to send both signed integers and values larger than 255.

Thus, we utilise bit-shifting. For example, let us say we have a signed 16-bit integer with the value 1058. In binary, using “two’s complement”, this would be 00000100 00100010. Such a value would be inconvenient to send in an 8-bit unsigned integer array. Essentially, we write a function that splits our 16-bit integer in half and represents it as two unsigned 8-bit numbers. We highlight this process in Fig. 17.5. As can be observed, we input a signed 16-bit integer and return two 8-bit unsigned values. We can then recombine these two numbers if required.
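As a quick arithmetic check of the split shown in Fig. 17.5 (using the same example value as Code Segment 6):

$$1058 = 4 \times 2^8 + 34, \quad \text{so the upper byte is } 4 \; \left( {00000100_2 } \right) \text{ and the lower byte is } 34 \; \left( {00100010_2 } \right)$$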

Let’s now take a look at bit-shifting in code. Note that we now use C++ rather than Python. Code Segment 6 demonstrates the operation illustrated in Fig. 17.5, in which we split a single signed 16-bit number into two unsigned 8-bit numbers. It also performs the step of rejoining the two unsigned 8-bit integers.

Fig. 17.5

Splitting a signed 16-bit integer into two unsigned 8-bit integers

Code Segment 6. A simple C++ example that shows bit-shifting operations in a coding context

#include <iostream>
#include <string>
#include <cstdint>
using namespace std;

//Our functions for bit shifting and altering data
#define UPPER_BYTE(b) (b >> 8)
#define LOWER_BYTE(b) (b & 0xff)
#define INT_JOIN_BYTE(u, l) ((u << 8) | l)

int main() {
    int16_t exampleNumber = 1058;

    //Manipulates our 16-bit integer in a variety of methods
    uint8_t upper = UPPER_BYTE(exampleNumber);      //will be 4
    uint8_t lower = LOWER_BYTE(exampleNumber);      //will be 34
    int16_t rejoined = INT_JOIN_BYTE(upper, lower); //will be 1058

    //print the newly calculated variables
    cout << "input = " << exampleNumber << endl;
    cout << "lower = " << int(lower) << endl;
    cout << "upper = " << int(upper) << endl;
    cout << "rejoined = " << rejoined << endl;
    return 0;
}

5.3 Project Tasks

5.3.1 Task One: Basic Bit-Shifting

  • We performed some basic bit-shifting in Code Segment 6. By researching C++ and bit-wise operations, discuss what the functions UPPER_BYTE, LOWER_BYTE and INT_JOIN_BYTE are doing to the inputs of these functions.

  • Take a random number represented by a signed 16-bit integer and calculate the three outputs from the above functions (Hint: See the process illustrated in Fig. 17.5 for inspiration).

5.3.2 Task Two: A ROS Example of Serial Communication

  • For serial communication, the online resources present a ROS1 package with an Arduino Uno microcontroller (Arduino, 2021). Examine both of these repositories and implement the ROS package by following the instructions in the README.md file of the ROS package.

  • Draw a diagram of how the data passes through this package’s serial communication and ROS network. Include screenshots demonstrating how you input the data and view any published information within the ROS network.

  • The ROS and Arduino examples are limited to sending 16-bit values within the serial communication module. Modify both the ROS package and the Arduino code to receive signed 32-bit integers instead.

  • Describe your methodology and what variables you had to modify. Include screenshots demonstrating that you can successfully send these larger integers to the Arduino and the ROS network.

6 Some Final Thoughts

This chapter discussed the practical implementation of hardware and software related to the Hexapod case study in Chap. 12. By presenting the underlying concepts and applied techniques, we hope that readers can take these skills and apply them to their robotic systems. Please be aware that there are many different software packages and libraries to develop robots. However, understanding the core concepts will maximise your impact and use of these various tools.