Trajectory Basics VIII: A Case Study

Autonomous driving and robotics may look like two different fields, but in theory they are close relatives. The definition of a robot is very broad, and a wheeled vehicle is one particular form; the kinematic bicycle model is a widely accepted simplification of non-holonomic vehicle motion. In this article, we will perform a side-by-side comparison between the full control stack of a 6-DoF articulated robot arm and that of a vehicle, starting from the command upstream all the way down to the actuator signalling.

Left: Autonomous-Driving Vehicle System; Right: Articulated Robot Arm System

As shown in the diagram above, both systems contain similar modules in the pipeline. They start by perceiving environmental obstacles, both static and dynamic, as well as their own proprioceptive state. The maneuvers are expressed as commands, either generated reactively from perception or issued by application users. Together these formulate the trajectory planning problem for the downstream modules.

Theoretically, the trajectory planning problem can be solved as a whole. This is known as kinodynamic planning, where both dynamic constraints (velocity, acceleration, force/torque, etc.) and kinematic constraints (obstacles, boundaries) have to be satisfied by one optimized profile. For our operational-space-controlled robot arm, the optimal projection from the Cartesian frame (operational space) to the joint space is already a challenging control problem over the dynamics, which rules out incorporating the kinematic constraints into the same formulation. This aligns with our discussion in previous articles, where we assumed given path points (the entire path, not just waypoints around obstacles) and spent most of the time on velocity planning. For systems with more straightforward motion control, such as vehicles, where velocity and acceleration are almost directly controllable and force/torque constraints are absent, solving path and velocity in one optimization is doable, as shown in this trajectory planner. However, given the complexity of such methods and the need to solve the problem safely in real time, in practice people almost always divide it into path planning and velocity planning subproblems. The path planning problem can also be formulated as a convex optimization, as shown in this work of Tedrake's, but a search- or sampling-based algorithm usually suffices for most tasks and runs much faster.
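As a concrete illustration of the decoupled approach, here is a minimal sketch of the velocity-planning half: along a fixed path, a trapezoidal profile caps the speed at each station by the acceleration ramp from the start, the cruise limit, and the braking distance to the end. The function name and limits are hypothetical, not from any planner discussed above.

```python
import math

def velocity_at(s, path_length, v_max, a_max):
    """Trapezoidal velocity profile along a fixed path of length
    path_length: the speed at station s is capped by the acceleration
    ramp from standstill, the cruise limit, and the braking distance
    needed to stop exactly at the end.
    """
    v_accel = math.sqrt(2 * a_max * s)                   # ramp-up cap
    v_brake = math.sqrt(2 * a_max * (path_length - s))   # braking cap
    return min(v_accel, v_max, v_brake)
```

When the path is too short to reach `v_max`, the accelerating and braking caps intersect below it and the profile degenerates into a triangle, with no cruise phase.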

The differences in velocity planning algorithms between the two systems stem from their use cases: the robot arm showcases the simpler scenario, where the environment consists solely of static obstacles, while the autonomous vehicle interacts with other dynamic road agents. The former only needs to traverse a fixed path, occasionally correcting errors by adaptively adjusting the velocity target. The vehicle, on the other hand, has to handle dynamic objects in a model-predictive style, re-planning the entire trajectory at every frame; it is effectively planning trajectories at a fixed frequency in real time. To ensure a comfortable ride for the passengers, the planning algorithms also emphasize the smoothness of the path. Velocity planning for the vehicle system is typically formulated as a search problem in which all road agents' states are projected onto an S-T graph; we may introduce this process in later articles. Optimization-based methods also exist, but a major concern is computation time. Note that for special cases such as passing or merging, an optimizer that solves both path and velocity jointly can generate better results than decoupled planners.
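The S-T graph idea can be sketched as a breadth-first search over a discretized station-time grid: cells occupied by predicted road agents are blocked, and the ego advances some number of station cells per time step. Everything here (names, discretization, the omission of acceleration limits) is an illustrative assumption, not a production planner.

```python
from collections import deque

def st_graph_search(blocked, s_max, t_steps, v_max):
    """Breadth-first search over a discretized S-T graph.

    blocked: set of (t_index, s_index) cells occupied by predicted road
    agents; from each state the ego may advance 0..v_max station cells
    per time step (acceleration limits omitted for brevity). Returns a
    feasible [(t, s), ...] sequence reaching s_max, or None.
    """
    start = (0, 0)
    parents = {start: None}
    queue = deque([start])
    while queue:
        t, s = queue.popleft()
        if s >= s_max:                      # segment traversed
            path, node = [], (t, s)
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        if t + 1 >= t_steps:
            continue                        # out of planning horizon
        for step in range(v_max + 1):       # candidate discrete speeds
            nxt = (t + 1, s + step)
            if nxt not in parents and nxt not in blocked:
                parents[nxt] = (t, s)
                queue.append(nxt)
    return None
```

A real planner would search with a cost (time, jerk, distance to agents) rather than plain BFS, and would respect acceleration limits between consecutive speeds; the grid projection itself is the part this sketch illustrates.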

Both the robot arm and the vehicle system would perform better if the velocity planner took system dynamics into account. In the previous article we briefly touched on how TOPPRA generates such a velocity profile for the robot arm, but for a vehicle operating under various scenarios, the dynamics is often too hard to capture. Admittedly, the misalignment between the dynamics model and the actual system also exists for the robot arm, but there it is small and consistent enough to converge if we employ the simulation method: assume a perfect robot arm in a simulator that, given torque/velocity inputs, outputs joint positions, which are then fed to the actual motors. This method would not hold stably for vehicle models driving on different road surfaces, with different levels of wear and tear, etc. To address this challenge, we have to constrain ourselves to reasonable outputs by empirically approximating a set of parameters, such as the maximum velocity under a certain road curvature, in order to provide achievable targets to the downstream virtual control module.
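A minimal example of such an empirical constraint: bound the speed by a tuned lateral-acceleration comfort limit, using \(a_{lat} = v^2 \kappa\). The function name and the numeric limits below are assumed placeholders, not measured values.

```python
import math

def curvature_speed_limit(kappa, a_lat_max=3.0, v_cap=33.0):
    """Empirical speed cap from a lateral-acceleration comfort limit:
    a_lat = v^2 * kappa  =>  v <= sqrt(a_lat_max / kappa).

    kappa: path curvature [1/m]; a_lat_max: tuned comfort bound [m/s^2];
    v_cap: absolute legal/mechanical speed limit [m/s].
    """
    if abs(kappa) < 1e-6:          # effectively straight road
        return v_cap
    return min(math.sqrt(a_lat_max / abs(kappa)), v_cap)
```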

Virtual control shares a blurry boundary with motion control, and is sometimes bypassable. We call it "virtual" because even though it is "controlling" the next target based on the current system state and the planned trajectory, it is not the "real control" that dictates the system actuators. It is in fact the "tracker" of the system: the planner generates the complete trajectory, and the tracker determines which points are sent to motion control as targets. The targets include position, velocity, and sometimes acceleration/force, depending on the implementation of the motion controller; one or more points are sent for point- or segment-style goal chasing. For vehicle systems, mostly only position and orientation targets are kept, while other planner byproducts such as acceleration and steering angle derivatives are stripped off, because those derivatives are left to motion control's discretion for smooth and adaptive performance. As mentioned in article VI, stitching is also performed for smooth tracking.
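In its simplest form, a tracker can just pick the trajectory point nearest to the current pose and hand a point a few indices ahead to motion control. A hypothetical sketch (state layout and lookahead are assumptions):

```python
def pick_tracking_target(traj, pose, lookahead=5):
    """Tracker sketch: find the planned point closest to the current
    pose, then send the point `lookahead` indices ahead of it to
    motion control as the target.

    traj: list of (x, y, v) states from the planner; pose: (x, y).
    """
    def dist2(p):
        return (p[0] - pose[0]) ** 2 + (p[1] - pose[1]) ** 2
    nearest = min(range(len(traj)), key=lambda i: dist2(traj[i]))
    target = min(nearest + lookahead, len(traj) - 1)   # clamp at the end
    return traj[target]
```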

The motion control module is the crucial part of the system; this is what people usually refer to as the "controller". It interprets the state targets and generates the control signal \(u\) for the actuator module, acting as an interface between the software and the hardware/mechanical stack. If the tracker output (which sometimes has the same definition as the system state) and the control input \(u\) live in the same space, motion control can be a no-op. For example, a velocity-controlled robot arm taking joint-space velocity targets does not require any interpretation of the command. Of course, the module can always add or remove information on the derivatives, such as converting joint position and velocity targets from the tracker into torques. For robot arms operating in Cartesian space, methods such as operational space control and inverse kinematics are employed.
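For instance, converting joint position and velocity targets into torques can be sketched as PD feedback plus gravity compensation, \(\tau = K_p(q_{ref} - q) + K_d(\dot{q}_{ref} - \dot{q}) + g(q)\). The gains and the gravity callable here are illustrative assumptions, not tuned for any real arm.

```python
def pd_gravity_torque(q, dq, q_ref, dq_ref, gravity, kp=100.0, kd=20.0):
    """Motion-control sketch: turn joint position/velocity targets from
    the tracker into joint torques via PD feedback plus gravity
    compensation.

    gravity: callable returning the gravity torque vector g(q) per joint.
    """
    g = gravity(q)
    return [kp * (qr - qi) + kd * (dqr - dqi) + gi
            for qi, dqi, qr, dqr, gi in zip(q, dq, q_ref, dq_ref, g)]
```

At the target state the PD terms vanish and the output reduces to pure gravity compensation, which is exactly the torque needed to hold the arm still.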

The non-holonomic nature of vehicle kinematics adds to the variety of control algorithms. The vehicle controller is commonly decoupled into longitudinal and lateral components, solved by separate algorithms. The longitudinal part is relatively simple to solve with PID, and methods such as Adaptive Longitudinal Control go one step further on comfort by using a prior of approximate vehicle dynamics. The lateral part is harder, as the steering angle is not equivalent to the turning angle of the car (the non-holonomic constraint). There is plenty of research in this field, such as the Stanley method, which takes the curvature of the trajectory into account for better steering, and Proximally Optimal Predictive control, which focuses on computational efficiency and uses neighboring previous inputs to optimize control actions. Most control algorithms rely on the kinematic bicycle model, but some also incorporate dynamics for better solutions. The vehicle dynamics is similar to the robot arm dynamics described in the previous article,
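The core of the Stanley steering law is compact enough to sketch: the commanded steering angle combines the heading error with a term that steers the front axle back onto the path, \(\delta = \theta_e + \arctan(k\,e / v)\). The gain and softening constant below are assumed values.

```python
import math

def stanley_steering(heading_err, cross_track_err, v, k=1.0, eps=1e-3):
    """Stanley lateral control sketch: cancel the heading error and add
    a term that points the front axle back onto the path.

    heading_err: path heading minus vehicle yaw [rad];
    cross_track_err: signed lateral offset of the front axle [m];
    v: longitudinal speed [m/s]; k: gain; eps: softening at low speed.
    """
    return heading_err + math.atan2(k * cross_track_err, v + eps)
```

Note how the cross-track term shrinks at high speed: the same lateral offset commands a gentler correction, which is what keeps the controller stable when driving fast.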

\[\begin{align} M(q)\ddot{q} + \dot{q}^TC(q)\dot{q} + g(q) + F(\dot{q}) +\tau_{\epsilon} &= B(q)\tau - A^T(q)\lambda \\ A(q)\dot{q} &= 0 \end{align}\]

with the addition of an unknown disturbance \(\tau_\epsilon\), the left/right wheel torque vector \(\tau\), the constraint force vector \(\lambda\), and the input transformation \(B(q)\). Equation (2) is the Pfaffian constraint representing the non-holonomic constraint, where \(A(q)\) encodes the restricted lateral motion direction. These methods can be selected and combined based on the application scenario to satisfy different performance and behavior requirements.
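As a concrete instance of equation (2), consider a unicycle-like vehicle with configuration \(q = (x, y, \theta)\). The no-lateral-slip condition gives

\[A(q)\dot{q} = \begin{bmatrix}\sin\theta & -\cos\theta & 0\end{bmatrix}\begin{bmatrix}\dot{x} \\ \dot{y} \\ \dot{\theta}\end{bmatrix} = \dot{x}\sin\theta - \dot{y}\cos\theta = 0,\]

i.e., the velocity must stay aligned with the heading: the vehicle cannot translate sideways, only roll forward and rotate.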

Below the motion control module sit the firmware and mechatronics for the actuators, the bottom blocks in the diagram. For the robot arm, the signal can be encoder positions or velocity/torque values for the joint motors, depending on the motor type. Each type has its advantages and disadvantages, but joint torque is always the final form that directly converts into current on the motors. Similarly, the final form of the vehicle's control signal is torque on the motors for the tires and the steering wheel. Both cases employ PID control for its robustness and its ability to absorb unmodeled dynamics.
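The PID loop at this level is the textbook discrete form; a minimal sketch of the kind of loop running on joint-motor firmware (gains are placeholders):

```python
class PID:
    """Textbook discrete PID controller: output is a weighted sum of the
    error, its running integral, and its finite-difference derivative."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, target, measured):
        err = target - measured
        self.integral += err * self.dt
        derivative = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * derivative
```

Production firmware would add integral anti-windup and output saturation, but the structure, a pure function of the tracking error with no plant model, is what gives PID its robustness to unmodeled dynamics.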

This concludes the case study, and we have reached the end of the trajectory basics series. I hope you have enjoyed reading along and gained a better understanding of classic trajectory tracking methods!



