制御工学ブログ / Control Engineering Blog

Control engineering explained from the fundamentals to advanced topics, with videos and MATLAB code, run by a university faculty member with more than 20 years of research experience. Control engineering tutorials, research articles, and MATLAB code by a university researcher. Topics: LMIs, state estimation, model error compensator, multirate systems, observer design.

State Feedback Control and State-Space Design: A Comprehensive Guide

This article is a comprehensive guide to state feedback control and state-space design in control engineering. It covers the fundamentals of state-space models, controllability and observability, stability analysis, pole placement, optimal regulators (LQR), integral-type servo systems for tracking control, LMI-based design, observer-based feedback with the separation principle, and discrete-time state feedback. Links to detailed articles, research papers, videos, and MATLAB codes are provided throughout.

Author: Hiroshi Okajima, Associate Professor, Kumamoto University, Japan — 20 years of control engineering research

Why State-Space Design Matters

Classical control design based on transfer functions — such as PID tuning — is powerful for single-input single-output (SISO) systems but becomes increasingly difficult when the plant has multiple inputs, multiple outputs, or when internal states must be monitored. State-space design provides a unified framework that naturally handles these situations:

  • Multi-input multi-output (MIMO) systems are treated as naturally as SISO systems, since the plant model is expressed using matrices rather than scalar transfer functions.
  • Internal states are explicitly represented, enabling direct access to all dynamic variables for feedback, monitoring, and fault detection.
  • Systematic design algorithms — pole placement, LQR, and LMI-based methods — provide clear procedures for computing feedback gains with guaranteed stability and performance properties.
  • Observer-based feedback bridges the gap between full state feedback (which assumes all states are measured) and practical implementations (where only partial outputs are available).

State feedback control is a foundational topic in modern control engineering and serves as the prerequisite for more advanced subjects such as state observer design, system identification, and robust control with the Model Error Compensator (MEC).


State-Space Model Fundamentals

Continuous-Time State Equations

A continuous-time linear time-invariant (LTI) system is described by:

 \displaystyle \dot{x}(t) = Ax(t) + Bu(t)
 \displaystyle y(t) = Cx(t) + Du(t)

where  x(t) \in \mathbb{R}^{n} is the state vector,  u(t) \in \mathbb{R}^{m} is the control input, and  y(t) \in \mathbb{R}^{l} is the output. The matrices  A \in \mathbb{R}^{n \times n},  B \in \mathbb{R}^{n \times m},  C \in \mathbb{R}^{l \times n}, and  D \in \mathbb{R}^{l \times m} characterize the plant dynamics.

The integer  n is the order (or dimension) of the system, corresponding to the number of state variables. For example, in a 3rd-order SISO system ( n = 3, m = 1, l = 1),  A is a  3 \times 3 matrix,  B is a  3 \times 1 column vector, and  C is a  1 \times 3 row vector.
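As a concrete instance of these dimensions, the following Python/NumPy sketch builds a hypothetical 3rd-order SISO model (the plant itself is an illustrative example, not one from this blog; the blog's own code uses MATLAB):

```python
import numpy as np

# Hypothetical 3rd-order SISO plant (controllable canonical form of
# G(s) = 1 / (s^3 + 6 s^2 + 11 s + 6)).
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
D = np.array([[0.0]])

# n states, m inputs, l outputs -- matching the dimensions in the text.
n, m, l = A.shape[0], B.shape[1], C.shape[0]
print(n, m, l)  # 3 1 1
```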

Discrete-Time State Equations

In discrete time, the state equations take the form:

 \displaystyle x(k+1) = Ax(k) + Bu(k)
 \displaystyle y(k) = Cx(k) + Du(k)

where  k is the discrete time index. The conversion between continuous-time and discrete-time models is covered in detail in the Discretization article.

Equivalent Transformations

Given a nonsingular matrix  T, the state coordinate transformation  \tilde{x}(t) = T^{-1}x(t) produces an equivalent state-space representation with matrices  \tilde{A} = T^{-1}AT,  \tilde{B} = T^{-1}B,  \tilde{C} = CT, and  \tilde{D} = D. This transformation preserves the transfer function, the eigenvalues of  A, and the controllability/observability properties of the system.

Equivalent transformations are essential for converting systems into canonical forms — such as the controllable canonical form — that simplify feedback gain computation.

For details, see: Equivalent Transformations of State Equations
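A quick numeric check that a similarity transformation preserves the spectrum of  A (all matrices below are hypothetical examples):

```python
import numpy as np

# Hypothetical 2nd-order plant and an arbitrary nonsingular T.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
T = np.array([[1.0, 1.0], [0.0, 2.0]])
Tinv = np.linalg.inv(T)

# Equivalent representation: A~ = T^{-1} A T, B~ = T^{-1} B, C~ = C T.
At, Bt, Ct = Tinv @ A @ T, Tinv @ B, C @ T

print(np.sort(np.linalg.eigvals(A)))   # eigenvalues of the original A
print(np.sort(np.linalg.eigvals(At)))  # identical spectrum after transformation
```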


Controllability and Observability

Controllability and observability are the two fundamental structural properties of state-space systems. They determine whether state feedback and state estimation are possible.

Controllability

A system  (A, B) is controllable if the state can be driven from any initial condition to any desired state in finite time using the input. Controllability is verified by checking the rank of the controllability matrix:

 \displaystyle U_c = \begin{pmatrix} B & AB & A^2 B & \cdots & A^{n-1}B \end{pmatrix}

The system is controllable if and only if  \mathrm{rank}(U_c) = n.

Controllability and state feedback: If the system is controllable, then for any desired set of  n closed-loop poles, there exists a feedback gain  K such that  A - BK has those poles. If the system is only stabilizable (i.e., all uncontrollable modes are already stable), then stabilizing feedback exists but arbitrary pole placement is not possible.

Observability

A system  (C, A) is observable if the initial state can be uniquely determined from the input and output history. Observability is verified by checking:

 \displaystyle U_o = \begin{pmatrix} C \cr CA \cr CA^2 \cr \vdots \cr CA^{n-1} \end{pmatrix}

The system is observable if and only if  \mathrm{rank}(U_o) = n. Observability is essential for state observer design — the dual of state feedback.
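Both rank tests are straightforward to carry out numerically. A minimal Python/NumPy sketch with a hypothetical 3rd-order plant:

```python
import numpy as np

# Hypothetical 3rd-order SISO plant in controllable canonical form.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-6.0, -11.0, -6.0]])
B = np.array([[0.0], [0.0], [1.0]])
C = np.array([[1.0, 0.0, 0.0]])
n = A.shape[0]

# Controllability matrix Uc = [B, AB, A^2 B]
Uc = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
# Observability matrix Uo = [C; CA; CA^2]
Uo = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

print(np.linalg.matrix_rank(Uc))  # 3 -> controllable
print(np.linalg.matrix_rank(Uo))  # 3 -> observable
```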

For details and examples at different system orders, see: Controllability and Observability of Systems


Stability of State-Space Systems

The stability of an autonomous system  \dot{x}(t) = Ax(t) is determined entirely by the eigenvalues of  A:

  • Asymptotically stable: All eigenvalues of  A have strictly negative real parts (continuous time) or lie strictly inside the unit circle (discrete time).
  • Unstable: At least one eigenvalue has a positive real part (continuous time) or lies outside the unit circle (discrete time).

For the scalar case ( n = 1), the system  \dot{x} = ax has the solution  x(t) = e^{at}x(0). If  a \lt 0, the state converges to zero; if  a \gt 0, it diverges. For higher-order systems, the same principle applies to each eigenvalue: the most "dangerous" eigenvalue (the one with the largest real part) determines the stability of the overall system.

Lyapunov stability theory provides an alternative characterization: the system is asymptotically stable if and only if there exists a positive definite matrix  P \gt 0 satisfying the Lyapunov inequality  A^{T}P + PA \lt 0. This matrix inequality formulation is the basis for LMI-based controller design.
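Both characterizations can be checked numerically. A minimal Python/SciPy sketch with a hypothetical stable  A:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable system matrix (eigenvalues -1 and -2).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])

# 1) Eigenvalue test: all real parts strictly negative.
eigs = np.linalg.eigvals(A)
print(np.all(eigs.real < 0))  # True

# 2) Lyapunov test: solve A^T P + P A = -I and check P > 0.
P = solve_continuous_lyapunov(A.T, -np.eye(2))
print(np.all(np.linalg.eigvalsh(P) > 0))  # True -> P positive definite
```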

For details, see: Stability of Systems Represented by State Equations


State Feedback: Pole Placement

Basic Structure

State feedback control applies the control law:

 \displaystyle u(t) = -Kx(t)

where  K \in \mathbb{R}^{m \times n} is the feedback gain matrix. Substituting into the state equation yields the closed-loop autonomous system:

 \displaystyle \dot{x}(t) = (A - BK)x(t)

The dynamics of the closed-loop system are entirely determined by the eigenvalues of  A - BK. The pole placement problem is to find  K such that these eigenvalues match a desired set of closed-loop poles.

Pole Placement via Controllable Canonical Form

For SISO systems, pole placement can be performed systematically using the controllable canonical form. In this form, the state feedback gain can be computed by directly matching the coefficients of the desired characteristic polynomial with those of the closed-loop characteristic polynomial  \det(sI - A + BK).

Example (2nd-order system): If the controllable canonical form has the characteristic polynomial  s^{2} + a_{1}s + a_{2} and the desired poles are  s = -2 and  s = -1, the desired characteristic polynomial is  s^{2} + 3s + 2, so we set  k_{1} = 2 - a_{2} and  k_{2} = 3 - a_{1}.

In MATLAB, the functions place and acker compute the feedback gain for arbitrary pole locations.
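As a numeric check of the 2nd-order example above, the following Python sketch uses SciPy's place_poles in place of MATLAB's place (the coefficients  a_{1},  a_{2} are hypothetical values chosen for illustration):

```python
import numpy as np
from scipy.signal import place_poles

# Controllable canonical form with characteristic polynomial
# s^2 + a1 s + a2 (hypothetical coefficients a1 = 1, a2 = 0.5).
a1, a2 = 1.0, 0.5
A = np.array([[0.0, 1.0], [-a2, -a1]])
B = np.array([[0.0], [1.0]])

# Desired poles -1 and -2, i.e. desired polynomial s^2 + 3s + 2.
K = place_poles(A, B, [-1.0, -2.0]).gain_matrix
print(K)  # approximately [[2 - a2, 3 - a1]] = [[1.5, 2.0]]

# Verify: eigenvalues of A - BK are the desired poles.
print(np.sort(np.linalg.eigvals(A - B @ K)))
```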

Effect of Pole Locations on Performance

The location of the closed-loop poles directly affects the transient response:

  • Poles further to the left (larger negative real parts): Faster convergence, but typically larger control effort.
  • Poles closer to the imaginary axis: Slower convergence, smaller control effort.
  • Poles far from the real axis (large imaginary parts): Oscillatory response.
  • Poles close to the real axis: Smooth, non-oscillatory response.

The designer must balance convergence speed against control effort and actuator limitations.

For a detailed treatment with 3rd-order examples and MATLAB simulations, see: State Feedback Control: Design Gains using Pole Placement


State Feedback: Optimal Regulator (LQR)

Concept

While pole placement gives direct control over the closed-loop pole locations, the optimal regulator (also known as LQR: Linear Quadratic Regulator) provides a systematic way to balance state regulation against control effort by minimizing a quadratic cost function:

 \displaystyle J = \int_0^{\infty} \bigl( x(t)^T Q\, x(t) + u(t)^T R\, u(t) \bigr) \, dt

where  Q \geq 0 is a positive semi-definite weight matrix for the state and  R \gt 0 is a positive definite weight matrix for the input. A larger  Q relative to  R penalizes state deviations more heavily, resulting in faster convergence; a larger  R penalizes control effort, resulting in a more conservative response.

Solution via the Riccati Equation

The optimal feedback gain is obtained in two steps:

Step 1: Solve the algebraic Riccati equation (ARE) for a positive definite matrix  P:

 \displaystyle A^T P + PA - PBR^{-1}B^T P + Q = 0

Step 2: Compute the optimal gain:

 \displaystyle K = R^{-1}B^T P

The resulting closed-loop system  A - BK is guaranteed to be asymptotically stable (provided  (A, B) is stabilizable and  (Q^{1/2}, A) is detectable).

In MATLAB, the function lqr(A, B, Q, R) solves the Riccati equation and returns the optimal gain in a single call.
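The two-step procedure can also be reproduced in Python, with SciPy's solve_continuous_are standing in for MATLAB's lqr (the plant and weights below are hypothetical):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2nd-order plant and weights.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state weight
R = np.array([[1.0]])  # input weight

# Step 1: solve the algebraic Riccati equation for P.
P = solve_continuous_are(A, B, Q, R)
# Step 2: optimal gain K = R^{-1} B^T P.
K = np.linalg.solve(R, B.T @ P)

# The closed loop A - BK is asymptotically stable.
print(np.all(np.linalg.eigvals(A - B @ K).real < 0))  # True
```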

Design Guidelines

  • Identity weights ( Q = I, R = rI): A good starting point. Increase  r for less aggressive control, decrease  r for faster response.
  • Diagonal weights: Use  Q = \mathrm{diag}(q_{1}, \ldots, q_{n}) to penalize specific state variables more heavily.
  • Bryson's rule: Set  q_{ii} = 1 / x_{i,\mathrm{max}}^{2} and  r_{jj} = 1 / u_{j,\mathrm{max}}^{2} to normalize the cost function by the maximum acceptable values of each variable.

For a detailed treatment with numerical examples and MATLAB code, see: State Feedback Control: Design Gains using Optimal Regulators


Integral-Type Servo System

The Steady-State Error Problem

Both pole placement and LQR are regulator designs: they drive the state to the origin from a nonzero initial condition. However, in many practical applications, the control objective is to track a reference signal  r(t) rather than to regulate to zero. A pure state feedback regulator generally cannot achieve zero steady-state error for a step reference input unless the plant has an integrator.

Structure of the Integral-Type Servo System

The integral-type servo system solves this problem by augmenting the state with an integrator that accumulates the tracking error:

 \displaystyle \dot{x}_I(t) = r(t) - y(t) = r(t) - Cx(t)

The augmented state vector is:

 \displaystyle x_a(t) = \begin{pmatrix} x(t) \cr x_I(t) \end{pmatrix}

and the augmented system becomes:

 \displaystyle \dot{x}_a(t) = \begin{pmatrix} A & 0 \cr -C & 0 \end{pmatrix} x_a(t) + \begin{pmatrix} B \cr 0 \end{pmatrix} u(t) + \begin{pmatrix} 0 \cr I \end{pmatrix} r(t)

The control law is:

 \displaystyle u(t) = -K_x x(t) - K_I x_I(t)

where  K_x is the state feedback gain and  K_I is the integral gain. Both can be designed simultaneously by applying pole placement or LQR to the augmented system  (A_a, B_a) with:

 \displaystyle A_a = \begin{pmatrix} A & 0 \cr -C & 0 \end{pmatrix}, \quad B_a = \begin{pmatrix} B \cr 0 \end{pmatrix}

Key Properties

  • The integrator ensures zero steady-state error for constant reference inputs, even in the presence of constant disturbances.
  • The augmented system has dimension  n + l, so  n + l poles must be placed.
  • The controllability of the augmented system requires that the original system  (A, B) is controllable and that the plant has no zero at  s = 0, i.e.,  \mathrm{rank}\begin{pmatrix} A & B \cr C & 0 \end{pmatrix} = n + l (otherwise the augmented system loses controllability).

The integral-type servo system is the state-space counterpart of the integral action in PID control. In MATLAB, the augmented system can be formed manually, and then place or lqr can be applied directly.
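The procedure of forming  (A_a, B_a) and applying LQR can be sketched in Python as follows (SciPy in place of MATLAB's lqr; the 2nd-order plant is hypothetical):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2nd-order SISO plant.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, l, m = A.shape[0], C.shape[0], B.shape[1]

# Augmented system: Aa = [[A, 0], [-C, 0]], Ba = [B; 0].
Aa = np.block([[A, np.zeros((n, l))],
               [-C, np.zeros((l, l))]])
Ba = np.vstack([B, np.zeros((l, m))])

# LQR on the augmented system, then split into Kx and KI.
P = solve_continuous_are(Aa, Ba, np.eye(n + l), np.eye(m))
Ka = Ba.T @ P                    # R = I, so K = B_a^T P
Kx, KI = Ka[:, :n], Ka[:, n:]

# All n + l closed-loop poles are stable.
print(np.all(np.linalg.eigvals(Aa - Ba @ Ka).real < 0))  # True
```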


LMI-Based State Feedback Design

Linear Matrix Inequalities (LMIs) provide a powerful generalization of the Lyapunov and Riccati equation approaches. The basic idea is to reformulate the controller design as a convex optimization problem involving matrix inequalities.

Basic Stabilization

A state feedback gain  K that stabilizes the closed-loop system can be found by solving the LMI:

 \displaystyle (A - BK)X + X(A - BK)^T \lt 0, \quad X \gt 0

where  X = P^{-1} and the variable substitution  Y = KX transforms this into a linear problem in  X and  Y.
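The linearization step can be written out explicitly; with the substitution  Y = KX the bilinear product of the unknowns  K and  X disappears:

```latex
\begin{aligned}
(A - BK)X + X(A - BK)^{T}
  &= AX + XA^{T} - B\,(KX) - (KX)^{T}B^{T} \\
  &= AX + XA^{T} - BY - Y^{T}B^{T} \lt 0 , \qquad X \gt 0 ,
\end{aligned}
```

which is affine in the decision variables  X and  Y; any feasible pair recovers the stabilizing gain as  K = YX^{-1}.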

Advanced Capabilities

LMI-based design can handle:

  • H∞ control: Minimizing the worst-case gain from disturbance to output.
  • Regional pole placement: Constraining poles to lie within a specified region of the complex plane (e.g., a disk, a cone, or a vertical strip).
  • Polytopic uncertainty: Designing a single gain that stabilizes all plants in a convex hull of vertex systems.
  • Multi-objective design: Simultaneously satisfying multiple performance specifications.

For details, see: Linear Matrix Inequalities (LMIs) and Controller Design and Advanced LMI Techniques for Control: Schur Complement and KYP Lemma


Observer-Based Feedback and the Separation Principle

When Full State Measurement is Not Available

State feedback  u = -Kx requires that all state variables be measured. In practice, often only  l \lt n outputs are available. The solution is to use a state observer (state estimator) to reconstruct the unmeasured states from the available input-output data.

Observer Structure

The Luenberger observer estimates the state as:

 \displaystyle \dot{\hat{x}}(t) = A\hat{x}(t) + Bu(t) + L\bigl(y(t) - C\hat{x}(t)\bigr)

where  L \in \mathbb{R}^{n \times l} is the observer gain. The estimation error  e(t) = x(t) - \hat{x}(t) satisfies  \dot{e}(t) = (A - LC)e(t). If  (C, A) is observable,  L can be designed to make  A - LC stable with any desired convergence rate.

The Separation Principle

The separation principle states that the state feedback gain  K and the observer gain  L can be designed independently. The closed-loop poles of the combined system (controller + observer) are the union of the eigenvalues of  A - BK and  A - LC. This dramatically simplifies the design: first design  K assuming full state access, then design  L to estimate the state.

The control law for the observer-based feedback controller is:

 \displaystyle u(t) = -K\hat{x}(t)
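To illustrate the separation principle numerically, here is a minimal Python sketch (SciPy's place_poles stands in for MATLAB's place; the 2nd-order plant and all pole locations are hypothetical). The observer gain is designed on the dual system  (A^{T}, C^{T}):

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical 2nd-order plant.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = place_poles(A, B, [-2.0, -3.0]).gain_matrix        # feedback gain
L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T  # observer gain via duality

# Combined closed loop in (x, e) coordinates, e = x - x_hat, is block
# triangular, so its poles are the union of eig(A - BK) and eig(A - LC).
Acl = np.block([[A - B @ K, B @ K],
                [np.zeros_like(A), A - L @ C]])
print(np.sort(np.linalg.eigvals(Acl).real))  # approximately -9, -8, -3, -2
```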

For the full treatment of state observers, see: State Observer and State Estimation: A Comprehensive Guide

For the observer-based feedback structure, see: State Observer for State Space Model


Discrete-Time State Feedback

All of the above concepts have direct discrete-time counterparts. For the discrete-time plant:

 \displaystyle x(k+1) = Ax(k) + Bu(k)

the state feedback  u(k) = -Kx(k) yields the closed-loop system:

 \displaystyle x(k+1) = (A - BK)x(k)

The stability condition is that all eigenvalues of  A - BK lie strictly inside the unit circle  |z| \lt 1, as opposed to the left half-plane in the continuous-time case.

The discrete-time LQR minimizes:

 \displaystyle J = \sum_{k=0}^{\infty} \bigl( x(k)^T Q\, x(k) + u(k)^T R\, u(k) \bigr)

and is solved by the discrete-time algebraic Riccati equation (DARE). In MATLAB, dlqr(A, B, Q, R) computes the discrete-time optimal gain.
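The same two-step procedure applies in discrete time, with one difference: the discrete-time optimal gain is  K = (R + B^{T}PB)^{-1}B^{T}PA. A Python sketch with SciPy's solve_discrete_are in place of MATLAB's dlqr (plant and weights hypothetical):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical stable 2nd-order discrete-time plant.
A = np.array([[1.0, 0.1], [-0.2, 0.8]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the DARE, then form the discrete-time optimal gain.
P = solve_discrete_are(A, B, Q, R)
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# All eigenvalues of A - BK lie strictly inside the unit circle.
print(np.all(np.abs(np.linalg.eigvals(A - B @ K)) < 1))  # True
```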

For details on continuous-to-discrete conversion, see: Discretization of Continuous-Time Control Systems


Experimental Example: Inverted Pendulum

The inverted pendulum is one of the most widely used benchmark problems for demonstrating state feedback control. In this experiment, the state vector consists of the cart position, cart velocity, pendulum angle, and angular velocity. The system is inherently unstable (the pendulum falls without feedback), and state feedback stabilizes it by placing the closed-loop poles in the left half-plane.

For the experimental setup and results, see: Inverted Pendulum: State Feedback Stabilization


From State Feedback to Advanced Topics

State feedback control is the starting point for a wide range of advanced control techniques, including state observer design, system identification, and robust control with the Model Error Compensator (MEC).


Blog Articles (blog.control-theory.com)

Research Web Pages (www.control-theory.com)

Video (YouTube, in English)

Video Portal

MATLAB Simulation

MATLAB and Python Code

  • GitHub: control_state_feedback — MATLAB and Python simulation code for state feedback control design (pole placement, LQR, integral servo, observer-based feedback, LMI-based design)

Self-Introduction

Hiroshi Okajima — Associate Professor, Graduate School of Science and Technology, Kumamoto University. Member of SICE, ISCIE, and IEEE.


If you found this article helpful, please consider bookmarking or sharing it.

#StateFeedback #StateSpaceControl #PolePlace #LQR #OptimalRegulator #SeparationPrinciple #ControlEngineering #MATLAB #LMI #ModernControl