This article is a comprehensive guide to state feedback control and state-space design in control engineering. It covers the fundamentals of state-space models, controllability and observability, stability analysis, pole placement, optimal regulators (LQR), integral-type servo systems for tracking control, LMI-based design, observer-based feedback with the separation principle, and discrete-time state feedback. Links to detailed articles, research papers, videos, and MATLAB codes are provided throughout.
Author: Hiroshi Okajima, Associate Professor, Kumamoto University, Japan — 20 years of control engineering research
- Why State-Space Design Matters
- State-Space Model Fundamentals
- Controllability and Observability
- Stability of State-Space Systems
- State Feedback: Pole Placement
- State Feedback: Optimal Regulator (LQR)
- Integral-Type Servo System
- LMI-Based State Feedback Design
- Observer-Based Feedback and the Separation Principle
- Discrete-Time State Feedback
- Experimental Example: Inverted Pendulum
- From State Feedback to Advanced Topics
- Related Articles and Videos
- MATLAB and Python Code
- #StateFeedback #StateSpaceControl #PolePlace #LQR #OptimalRegulator #SeparationPrinciple #ControlEngineering #MATLAB #LMI #ModernControl
Why State-Space Design Matters
Classical control design based on transfer functions — such as PID tuning — is powerful for single-input single-output (SISO) systems but becomes increasingly difficult when the plant has multiple inputs, multiple outputs, or when internal states must be monitored. State-space design provides a unified framework that naturally handles these situations:
- Multi-input multi-output (MIMO) systems are treated as naturally as SISO systems, since the plant model is expressed using matrices rather than scalar transfer functions.
- Internal states are explicitly represented, enabling direct access to all dynamic variables for feedback, monitoring, and fault detection.
- Systematic design algorithms — pole placement, LQR, and LMI-based methods — provide clear procedures for computing feedback gains with guaranteed stability and performance properties.
- Observer-based feedback bridges the gap between full state feedback (which assumes all states are measured) and practical implementations (where only partial outputs are available).
State feedback control is a foundational topic in modern control engineering and serves as the prerequisite for more advanced subjects such as state observer design, system identification, and robust control with the Model Error Compensator (MEC).
State-Space Model Fundamentals
Continuous-Time State Equations
A continuous-time linear time-invariant (LTI) system is described by:

$\dot{x}(t) = Ax(t) + Bu(t), \quad y(t) = Cx(t) + Du(t)$

where $x(t) \in \mathbb{R}^n$ is the state vector, $u(t) \in \mathbb{R}^m$ is the control input, and $y(t) \in \mathbb{R}^p$ is the output. The matrices $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{p \times n}$, and $D \in \mathbb{R}^{p \times m}$ characterize the plant dynamics.

The integer $n$ is the order (or dimension) of the system, corresponding to the number of state variables. For example, in a 3rd-order SISO system ($n = 3$, $m = p = 1$), $A$ is a $3 \times 3$ matrix, $B$ is a $3 \times 1$ column vector, and $C$ is a $1 \times 3$ row vector.
Discrete-Time State Equations
In discrete time, the state equations take the form:

$x[k+1] = Ax[k] + Bu[k], \quad y[k] = Cx[k] + Du[k]$

where $k$ is the discrete time index. The conversion between continuous-time and discrete-time models is covered in detail in the Discretization article.
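As a quick sketch of the continuous-to-discrete conversion in Python (using scipy.signal.cont2discrete with a zero-order hold; the plant matrices and sampling period below are illustrative, not taken from this article):

```python
import numpy as np
from scipy.signal import cont2discrete

# Illustrative continuous-time plant
A = np.array([[0.0, 1.0], [0.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

Ts = 0.1  # sampling period [s]
# Zero-order-hold discretization: Ad = expm(A*Ts), Bd = integral of expm(A*tau)*B
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), Ts, method="zoh")
print(Ad)
```

For this particular $A$, the matrix exponential can be computed by hand ($A_d = \begin{bmatrix} 1 & 1 - e^{-T_s} \\ 0 & e^{-T_s} \end{bmatrix}$), which is a useful sanity check on the numerical result.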
Equivalent Transformations
Given a nonsingular matrix $T$, the state coordinate transformation $\bar{x}(t) = T^{-1}x(t)$ produces an equivalent state-space representation with matrices $\bar{A} = T^{-1}AT$, $\bar{B} = T^{-1}B$, $\bar{C} = CT$, and $\bar{D} = D$. This transformation preserves the transfer function, the eigenvalues of $A$, and the controllability/observability properties of the system.
Equivalent transformations are essential for converting systems into canonical forms — such as the controllable canonical form — that simplify feedback gain computation.
For details, see: Equivalent Transformations of State Equations
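The invariance of the eigenvalues under an equivalent transformation is easy to check numerically; a minimal Python/NumPy sketch with an arbitrary plant and transformation matrix (both illustrative):

```python
import numpy as np

# Illustrative plant with eigenvalues -1 and -2
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

T = np.array([[1.0, 1.0], [0.0, 2.0]])  # any nonsingular T works
Tinv = np.linalg.inv(T)

# Transformed representation: Abar = T^{-1} A T, Bbar = T^{-1} B, Cbar = C T
Abar, Bbar, Cbar = Tinv @ A @ T, Tinv @ B, C @ T

# Eigenvalues (hence stability) are preserved by the transformation
print(np.sort(np.linalg.eigvals(Abar).real))
```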
Controllability and Observability
Controllability and observability are the two fundamental structural properties of state-space systems. They determine whether state feedback and state estimation are possible.
Controllability
A system is controllable if the state can be driven from any initial condition to any desired state in finite time using the input. Controllability is verified by checking the rank of the controllability matrix:

$U_c = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}$

The system is controllable if and only if $\mathrm{rank}\, U_c = n$.
Controllability and state feedback: If the system is controllable, then for any desired set of closed-loop poles, there exists a feedback gain $K$ such that $A - BK$ has those poles. If the system is only stabilizable (i.e., all uncontrollable modes are already stable), then stabilizing feedback exists but arbitrary pole placement is not possible.
Observability
A system is observable if the initial state can be uniquely determined from the input and output history. Observability is verified by checking:

$U_o = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix}$

The system is observable if and only if $\mathrm{rank}\, U_o = n$. Observability is essential for state observer design — the dual of state feedback.
For details and examples at different system orders, see: Controllability and Observability of Systems
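Both rank tests are straightforward to carry out numerically. A minimal Python/NumPy sketch for an illustrative 2nd-order plant:

```python
import numpy as np

# Illustrative 2nd-order SISO plant
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Controllability matrix Uc = [B, AB, ..., A^{n-1}B]
Uc = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
# Observability matrix Uo = [C; CA; ...; CA^{n-1}]
Uo = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# Full rank n means controllable / observable
print(np.linalg.matrix_rank(Uc), np.linalg.matrix_rank(Uo))
```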
Stability of State-Space Systems
The stability of an autonomous system $\dot{x}(t) = Ax(t)$ is determined entirely by the eigenvalues of $A$:

- Asymptotically stable: All eigenvalues of $A$ have strictly negative real parts (continuous time) or lie strictly inside the unit circle (discrete time).
- Unstable: At least one eigenvalue has a positive real part (continuous time) or lies outside the unit circle (discrete time).
For the scalar case ($n = 1$), the system $\dot{x}(t) = ax(t)$ has the solution $x(t) = e^{at}x(0)$. If $a < 0$, the state converges to zero; if $a > 0$, it diverges. For higher-order systems, the same principle applies to each eigenvalue: the most "dangerous" eigenvalue (the one with the largest real part) determines the stability of the overall system.
Lyapunov stability theory provides an alternative characterization: the system is asymptotically stable if and only if there exists a positive definite matrix $P$ satisfying the Lyapunov inequality $A^{\top}P + PA \prec 0$ (equivalently, the Lyapunov equation $A^{\top}P + PA = -Q$ for some positive definite $Q$). This matrix inequality formulation is the basis for LMI-based controller design.
For details, see: Stability of Systems Represented by State Equations
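The Lyapunov test can be run numerically with SciPy's solve_continuous_lyapunov; the stable 2nd-order plant below is illustrative:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # stable: eigenvalues -1, -2

# Solve the Lyapunov equation A'P + PA = -Q with Q = I.
# Asymptotic stability <=> the solution P is positive definite.
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

print(np.linalg.eigvalsh(P))  # all positive for a stable A
```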
State Feedback: Pole Placement
Basic Structure
State feedback control applies the control law:

$u(t) = -Kx(t)$

where $K$ is the feedback gain matrix. Substituting into the state equation yields the closed-loop autonomous system:

$\dot{x}(t) = (A - BK)x(t)$

The dynamics of the closed-loop system are entirely determined by the eigenvalues of $A - BK$. The pole placement problem is to find $K$ such that these eigenvalues match a desired set of closed-loop poles.
Pole Placement via Controllable Canonical Form
For SISO systems, pole placement can be performed systematically using the controllable canonical form. In this form, the state feedback gain can be computed by directly matching the coefficients of the desired characteristic polynomial with those of the closed-loop characteristic polynomial $\det(sI - A + BK)$.

Example (2nd-order system): If the controllable canonical form has the characteristic polynomial $s^2 + a_1 s + a_0$ and the desired poles give the desired characteristic polynomial $s^2 + d_1 s + d_0$, the closed-loop polynomial with $K = [k_1 \;\; k_2]$ is $s^2 + (a_1 + k_2)s + (a_0 + k_1)$, so we set $k_1 = d_0 - a_0$ and $k_2 = d_1 - a_1$.
In MATLAB, the functions place and acker compute the feedback gain for arbitrary pole locations.
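A Python counterpart to place is scipy.signal.place_poles. The sketch below places the poles of an illustrative double integrator (controllable canonical form with $a_0 = a_1 = 0$) at $-2$ and $-3$; the coefficient-matching rule above predicts the gain $K = [6 \;\; 5]$:

```python
import numpy as np
from scipy.signal import place_poles

# Double integrator in controllable canonical form (a0 = a1 = 0)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Desired polynomial (s+2)(s+3) = s^2 + 5s + 6  -->  K = [6, 5]
res = place_poles(A, B, [-2.0, -3.0])
K = res.gain_matrix
print(K)  # coefficient matching predicts K = [6, 5]
```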
Effect of Pole Locations on Performance
The location of the closed-loop poles directly affects the transient response:
- Poles further to the left (more negative real parts): Faster convergence, but typically larger control effort.
- Poles closer to the imaginary axis: Slower convergence, smaller control effort.
- Poles far from the real axis (large imaginary parts): Oscillatory response.
- Poles close to the real axis: Smooth, non-oscillatory response.
The designer must balance convergence speed against control effort and actuator limitations.
For a detailed treatment with 3rd-order examples and MATLAB simulations, see: State Feedback Control: Design Gains using Pole Placement
State Feedback: Optimal Regulator (LQR)
Concept
While pole placement gives direct control over the closed-loop pole locations, the optimal regulator (also known as LQR: Linear Quadratic Regulator) provides a systematic way to balance state regulation against control effort by minimizing a quadratic cost function:

$J = \int_0^{\infty} \left( x(t)^{\top} Q x(t) + u(t)^{\top} R u(t) \right) dt$

where $Q$ is a positive semi-definite weight matrix for the state and $R$ is a positive definite weight matrix for the input. A larger $Q$ relative to $R$ penalizes state deviations more heavily, resulting in faster convergence; a larger $R$ penalizes control effort, resulting in a more conservative response.
Solution via the Riccati Equation
The optimal feedback gain is obtained in two steps:

Step 1: Solve the algebraic Riccati equation (ARE) for a positive definite matrix $P$:

$A^{\top}P + PA - PBR^{-1}B^{\top}P + Q = 0$

Step 2: Compute the optimal gain:

$K = R^{-1}B^{\top}P$

The resulting closed-loop system $\dot{x}(t) = (A - BK)x(t)$ is guaranteed to be asymptotically stable (provided $(A, B)$ is stabilizable and $(Q^{1/2}, A)$ is detectable).

In MATLAB, the function lqr(A, B, Q, R) solves the Riccati equation and returns the optimal gain in a single call.
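The two steps map directly onto SciPy: solve_continuous_are solves the ARE, and the gain follows from $K = R^{-1}B^{\top}P$. For the illustrative double integrator below with $Q = I$, $R = 1$, the ARE can be solved by hand and gives the optimal gain $K = [1 \;\; \sqrt{3}]$:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])  # double integrator (unstable)
B = np.array([[0.0], [1.0]])
Q = np.eye(2)          # state weight (positive semi-definite)
R = np.array([[1.0]])  # input weight (positive definite)

# Step 1: solve the ARE  A'P + PA - P B R^{-1} B' P + Q = 0
P = solve_continuous_are(A, B, Q, R)
# Step 2: optimal gain  K = R^{-1} B' P
K = np.linalg.solve(R, B.T @ P)

print(np.linalg.eigvals(A - B @ K))  # all real parts negative
```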
Design Guidelines
- Identity weights ($Q = I$, $R = I$): A good starting point. Increase $R$ for less aggressive control, decrease $R$ for faster response.
- Diagonal weights: Use $Q = \mathrm{diag}(q_1, \ldots, q_n)$ to penalize specific state variables more heavily.
- Bryson's rule: Set $Q_{ii} = 1/x_{i,\max}^2$ and $R_{jj} = 1/u_{j,\max}^2$ to normalize the cost function by the maximum acceptable values of each variable.
For a detailed treatment with numerical examples and MATLAB code, see: State Feedback Control: Design Gains using Optimal Regulators
Integral-Type Servo System
The Steady-State Error Problem
Both pole placement and LQR are regulator designs: they drive the state to the origin from a nonzero initial condition. However, in many practical applications, the control objective is to track a reference signal rather than to regulate to zero. A pure state feedback regulator generally cannot achieve zero steady-state error for a step reference input unless the plant has an integrator.
Structure of the Integral-Type Servo System
The integral-type servo system solves this problem by augmenting the state with an integrator that accumulates the tracking error:

$\dot{w}(t) = r(t) - y(t)$

The augmented state vector is:

$x_e(t) = \begin{bmatrix} x(t) \\ w(t) \end{bmatrix}$

and the augmented system becomes:

$\dot{x}_e(t) = \begin{bmatrix} A & 0 \\ -C & 0 \end{bmatrix} x_e(t) + \begin{bmatrix} B \\ 0 \end{bmatrix} u(t) + \begin{bmatrix} 0 \\ I \end{bmatrix} r(t)$

The control law is:

$u(t) = -Kx(t) + Gw(t)$

where $K$ is the state feedback gain and $G$ is the integral gain. Both can be designed simultaneously by applying pole placement or LQR to the augmented system with:

$A_e = \begin{bmatrix} A & 0 \\ -C & 0 \end{bmatrix}, \quad B_e = \begin{bmatrix} B \\ 0 \end{bmatrix}$
Key Properties
- The integrator ensures zero steady-state error for constant reference inputs, even in the presence of constant disturbances.
- The augmented system has dimension $n + p$, so $n + p$ poles must be placed.
- The controllability of the augmented system requires that the original system $(A, B)$ is controllable and that the plant has no zero at the origin, i.e., $\begin{bmatrix} A & B \\ C & 0 \end{bmatrix}$ has full rank (otherwise the augmented system loses controllability).
The integral-type servo system is the state-space counterpart of the integral action in PID control. In MATLAB, the augmented system can be formed manually, and then place or lqr can be applied directly.
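A sketch of this procedure in Python (the 2nd-order plant is illustrative, and LQR on the augmented pair via SciPy stands in for MATLAB's lqr):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative 2nd-order SISO plant
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n, p = A.shape[0], C.shape[0]

# Augmented system: state [x; w] with w' = r - y
Ae = np.block([[A, np.zeros((n, p))], [-C, np.zeros((p, p))]])
Be = np.vstack([B, np.zeros((p, 1))])

# Design by LQR on the augmented pair (Ae, Be)
Qe, Re = np.eye(n + p), np.array([[1.0]])
P = solve_continuous_are(Ae, Be, Qe, Re)
Ke = np.linalg.solve(Re, Be.T @ P)  # Ke = [K, -G]

print(np.linalg.eigvals(Ae - Be @ Ke))  # n + p stable closed-loop poles
```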
LMI-Based State Feedback Design
Linear Matrix Inequalities (LMIs) provide a powerful generalization of the Lyapunov and Riccati equation approaches. The basic idea is to reformulate the controller design as a convex optimization problem involving matrix inequalities.
Basic Stabilization
A state feedback gain that stabilizes the closed-loop system can be found by solving the LMI:

$AX + XA^{\top} - BY - Y^{\top}B^{\top} \prec 0, \quad X \succ 0$

where $X = P^{-1}$ and the variable substitution $Y = KX$ transforms the bilinear stability condition into a linear problem in $X$ and $Y$. The gain is recovered as $K = YX^{-1}$.
Advanced Capabilities
LMI-based design can handle:
- H∞ control: Minimizing the worst-case gain from disturbance to output.
- Regional pole placement: Constraining poles to lie within a specified region of the complex plane (e.g., a disk, a cone, or a vertical strip).
- Polytopic uncertainty: Designing a single gain that stabilizes all plants in a convex hull of vertex systems.
- Multi-objective design: Simultaneously satisfying multiple performance specifications.
For details, see: Linear Matrix Inequalities (LMIs) and Controller Design and Advanced LMI Techniques for Control: Schur Complement and KYP Lemma
Observer-Based Feedback and the Separation Principle
When Full State Measurement is Not Available
State feedback requires that all state variables are measured. In practice, only the $p$ outputs in $y(t)$ are available, with $p < n$ in general. The solution is to use a state observer (state estimator) to reconstruct the unmeasured states from the available input-output data.
Observer Structure
The Luenberger observer estimates the state as:

$\dot{\hat{x}}(t) = A\hat{x}(t) + Bu(t) + L\left( y(t) - C\hat{x}(t) \right)$

where $L$ is the observer gain. The estimation error $e(t) = x(t) - \hat{x}(t)$ satisfies $\dot{e}(t) = (A - LC)e(t)$. If $(C, A)$ is observable, $L$ can be designed to make $A - LC$ stable with any desired convergence rate.
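By duality with pole placement, the observer gain can be computed by placing the poles of the pair $(A^{\top}, C^{\top})$ and transposing; a minimal Python sketch with illustrative matrices and pole choices:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

# Duality: eig(A - LC) = eig(A' - C'L'), so place poles for (A', C')
L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T

print(np.linalg.eigvals(A - L @ C))  # error dynamics poles at -8, -9
```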
The Separation Principle
The separation principle states that the state feedback gain $K$ and the observer gain $L$ can be designed independently. The closed-loop poles of the combined system (controller + observer) are the union of the eigenvalues of $A - BK$ and $A - LC$. This dramatically simplifies the design: first design $K$ assuming full state access, then design $L$ to estimate the state.

The control law for the observer-based feedback controller is:

$u(t) = -K\hat{x}(t)$
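The union property can be verified numerically by writing the combined dynamics in $(x, e)$ coordinates, where the closed-loop matrix is block upper-triangular; a sketch with illustrative plant matrices and pole choices:

```python
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

K = place_poles(A, B, [-3.0, -4.0]).gain_matrix        # controller poles
L = place_poles(A.T, C.T, [-8.0, -9.0]).gain_matrix.T  # observer poles

# With u = -K x_hat and e = x - x_hat:
#   x' = (A - BK) x + BK e,   e' = (A - LC) e   -->  block upper-triangular
Acl = np.block([[A - B @ K, B @ K],
                [np.zeros((2, 2)), A - L @ C]])
print(np.sort(np.linalg.eigvals(Acl).real))  # union: -9, -8, -4, -3
```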
For the full treatment of state observers, see: State Observer and State Estimation: A Comprehensive Guide
For the observer-based feedback structure, see: State Observer for State Space Model
Discrete-Time State Feedback
All of the above concepts have direct discrete-time counterparts. For the discrete-time plant:

$x[k+1] = Ax[k] + Bu[k]$

the state feedback $u[k] = -Kx[k]$ yields the closed-loop system:

$x[k+1] = (A - BK)x[k]$

The stability condition is that all eigenvalues of $A - BK$ lie strictly inside the unit circle ($|\lambda_i| < 1$), as opposed to the left half-plane in the continuous-time case.
The discrete-time LQR minimizes:

$J = \sum_{k=0}^{\infty} \left( x[k]^{\top} Q x[k] + u[k]^{\top} R u[k] \right)$

and is solved by the discrete-time algebraic Riccati equation (DARE). In MATLAB, dlqr(A, B, Q, R) computes the discrete-time optimal gain.
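A Python counterpart to dlqr uses scipy.linalg.solve_discrete_are together with the gain formula $K = (R + B^{\top}PB)^{-1}B^{\top}PA$; the discretized double-integrator matrices below are illustrative:

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Illustrative discrete-time plant (a discretized double integrator, Ts = 0.1)
Ad = np.array([[1.0, 0.1], [0.0, 1.0]])
Bd = np.array([[0.005], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])

# DARE solution and the dlqr-style gain  K = (R + B'PB)^{-1} B'PA
P = solve_discrete_are(Ad, Bd, Q, R)
K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)

print(np.abs(np.linalg.eigvals(Ad - Bd @ K)))  # all strictly less than 1
```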
For details on continuous-to-discrete conversion, see: Discretization of Continuous-Time Control Systems
Experimental Example: Inverted Pendulum
The inverted pendulum is one of the most widely used benchmark problems for demonstrating state feedback control. In this experiment, the state vector consists of the cart position, cart velocity, pendulum angle, and angular velocity. The system is inherently unstable (the pendulum falls without feedback), and state feedback stabilizes it by placing the closed-loop poles in the left half-plane.
For the experimental setup and results, see: Inverted Pendulum: State Feedback Stabilization
From State Feedback to Advanced Topics
State feedback control is the starting point for a wide range of advanced control techniques:
- When the model is unknown: Use system identification to obtain the state-space model from input-output data, then design the feedback gain.
- When the model is inaccurate: Use the Model Error Compensator (MEC) to compensate for model uncertainties in the feedback loop.
- When sensors have different sampling rates: Use multi-rate observers and multi-rate system identification to handle heterogeneous sensor data.
- For robust control design: Use LMI-based methods to design feedback gains that guarantee stability and performance under model uncertainty.
Related Articles and Videos
Blog Articles (blog.control-theory.com)
- Controllability and Observability of Systems
- Equivalent Transformations of State Equations
- Stability of Systems Represented by State Equations
- State Feedback Control: Design Gains using Pole Placement
- State Feedback Control: Design Gains using Optimal Regulators
- State Observer for State Space Model
- State Observer and State Estimation: A Comprehensive Guide
- System Identification: From Data to Dynamical Models
- Model Error Compensator (MEC)
- Linear Matrix Inequalities (LMIs) and Controller Design
- Advanced LMI Techniques for Control
- Discretization of Continuous-Time Control Systems
Research Web Pages (www.control-theory.com)
- Publications / LMI / MEC
Video (YouTube, in English)
- State-space model and control 01
- State-space model and control 02: Controllability and Observability
- State observer (control engineering)
- State-space model and control: Stability
- State-space model and control: Feedback controller design
- State-space model and control: Discrete-time system
Video Portal
MATLAB Simulation
MATLAB and Python Code
- GitHub: control_state_feedback — MATLAB and Python simulation code for state feedback control design (pole placement, LQR, integral servo, observer-based feedback, LMI-based design)
Self-Introduction
Hiroshi Okajima — Associate Professor, Graduate School of Science and Technology, Kumamoto University. Member of SICE, ISCIE, and IEEE.
If you found this article helpful, please consider bookmarking or sharing it.