Note: The course catalogues, the SGS Calendar, and ACORN list all graduate courses associated with ECE – please note that not all courses will be offered every year.
Fundamentals of linear time-invariant control systems. State space modeling and control design, controllability, stabilization, pole placement controllers, observability, Kalman filters, observer design, optimal control, tracking controllers. Labs: real-time control experiments on design techniques. Course credit is not available for students who have taken ECE410H1.
Prerequisite: ECE410H1 or ECE557H1 or an equivalent state-space control course.
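As an illustration of the pole placement topic listed above, the following sketch (an assumed toy example, not course material) places the closed-loop poles of a double-integrator plant using SciPy:

```python
# Illustrative pole placement sketch for x' = Ax + Bu with u = -Kx.
# The plant and target poles are assumptions chosen for demonstration.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])            # double integrator: x1' = x2, x2' = u
B = np.array([[0.0],
              [1.0]])
desired = np.array([-2.0, -3.0])      # target closed-loop poles

K = place_poles(A, B, desired).gain_matrix   # state-feedback gain
closed_loop = A - B @ K                      # closed-loop dynamics matrix
```

Since the pair (A, B) is controllable, the eigenvalues of `closed_loop` match the requested poles to numerical precision.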
The course presents a more advanced treatment of linear control theory via the geometric approach. The coverage roughly corresponds to the first six chapters of “Linear Multivariable Control: A Geometric Approach”, by W.M. Wonham. We adopt the abstract algebra approach of the text to study controllability, observability, controlled invariant subspaces, controllability subspaces, and controllability indices. These concepts are applied to solve the problems of stabilization, output stabilization, disturbance decoupling, and the restricted regulator problem. Areas of current research in linear geometric control will also be discussed.
This course is an introduction to the control of discrete, asynchronous, nondeterministic systems such as manufacturing systems, traffic systems, and certain communication systems. Architectural issues (modular, decentralized and hierarchical control) are emphasized. The theory is developed in an elementary framework of automata and formal languages, and is supported by a software package for creating applications. There are no special prerequisites.
This course is a continuation of ECE1636H, and is conducted on a seminar basis. Participants will present and discuss articles in the current literature, and complete a project that could lead into graduate research in the discrete-event system area. Topics recently examined include controlled Petri nets, min-max algebra, real-time control via timed-transition-models (TTMs), recursive process algebras, and state charts.
This is the first course of a two-term sequence on stochastic systems designed to cover some of the basic results on estimation, identification, stochastic control and adaptive control. Topics include: stochastic processes and their descriptions, analysis of linear systems with random inputs; prediction and filtering theory: prediction for ARMAX systems, the Kalman filter and the Riccati equation; stochastic control methods based on dynamic programming; the LQG problem and the separation theorem; minimum variance control.
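To make the prediction and filtering topic concrete, here is a minimal sketch (assumed toy example, not course code) of the scalar Kalman filter for a random-walk state observed in Gaussian noise:

```python
# Scalar Kalman filter for x_{k+1} = x_k + w_k, y_k = x_k + v_k.
# Noise variances q, r and the initial estimate are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
q, r = 0.01, 1.0                                     # process / measurement noise
x_true = np.cumsum(rng.normal(0, np.sqrt(q), 200))   # random-walk state
y = x_true + rng.normal(0, np.sqrt(r), 200)          # noisy measurements

x_hat, p = 0.0, 1.0        # estimate and its error variance
estimates = []
for yk in y:
    p = p + q                                 # predict: variance grows by q
    k_gain = p / (p + r)                      # Kalman gain
    x_hat = x_hat + k_gain * (yk - x_hat)     # update with the innovation
    p = (1 - k_gain) * p                      # posterior error variance
    estimates.append(x_hat)

mse_filter = np.mean((np.array(estimates) - x_true) ** 2)
mse_raw = np.mean((y - x_true) ** 2)
```

The recursion for `p` is the scalar Riccati equation mentioned in the description; its fixed point gives the steady-state filter variance.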
Prerequisites: A course in linear control theory (e.g., ECE557) and a course in probability theory. Courses in measure theory and real analysis are recommended but not required.
The study of dynamical systems under uncertainty is an exciting and important research area. This course provides a survey of mathematical methods from stochastic control theory and reinforcement learning. In lectures, we focus on the mathematical foundations of existing methods and their pros and cons, to highlight research gaps (e.g., quality-of-approximation guarantees vs. scalability to high-dimensional systems). Lecture topics include introductions to measure theory, Borel spaces, continuous-state Markov decision processes (MDPs), finite-horizon MDP problems, stochastic safety analysis, solution methods via value iteration, risk functionals, risk-aware control theory, and parametric approximation methods. The exploration of additional topics and applications related to stochastic control and reinforcement learning is encouraged through literature critiques and research projects. This course is designed to practice and enhance critical creative thinking skills and to launch and inspire your research.
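As a small illustration of the value iteration method named above, the following sketch (an assumed two-state toy MDP, not from the course) iterates the Bellman optimality operator to convergence:

```python
# Value iteration on a finite MDP: V <- max_a [ R(s,a) + gamma * E[V(s')] ].
# Transition and reward matrices are illustrative assumptions.
import numpy as np

gamma = 0.9
# P[a][s, s'] : transition probabilities under action a
P = np.array([[[0.8, 0.2],
               [0.3, 0.7]],
              [[0.1, 0.9],
               [0.6, 0.4]]])
# R[s, a] : immediate rewards
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * np.einsum('ast,t->sa', P, V)   # Q(s,a)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:          # sup-norm stopping rule
        break
    V = V_new

policy = Q.argmax(axis=1)   # greedy policy w.r.t. the converged values
```

Because the Bellman operator is a gamma-contraction in the sup norm, the iterates converge to the unique fixed point for any initial V.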
This course is a mathematical introduction to nonlinear control theory, a subject with roots in dynamical systems theory, mechanics, and differential geometry. The focus of this course is on the dynamical systems perspective. The material covered in this course finds application in fields as diverse as orbital mechanics and aerospace engineering, circuit theory, power systems, robotics, and mathematical biology, to name a few. The course is organized into four chapters, as follows.
- Vector Fields and Dynamical Systems: Finite dimensional dynamical systems, vector fields, and their equivalence. Existence and uniqueness of solutions of ODEs.
- Foundations of Dynamical Systems Theory: Invariant sets and their characterization by the Nagumo theorem. Limit sets as a tool to characterize the asymptotic behaviour of bounded orbits. Limit sets of two-dimensional systems: the Poincaré-Bendixson theorem. Poincaré theory of stability of closed orbits. Linearization of vector fields about equilibria. Linearization of vector fields about closed orbits.
- Foundations of Stability Theory: Equilibrium stability and its characterization by means of Lyapunov’s theorem. Domain of attraction of an equilibrium. The Krasovskii-LaSalle invariance principle. Stability of LTI systems, and exponential stability of equilibria. Converse stability theorems.
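The link between Lyapunov's theorem and LTI stability in the chapter above can be sketched numerically (an assumed toy system, not course material): for a Hurwitz matrix A, the Lyapunov equation A^T P + P A = -Q with Q > 0 has a unique positive definite solution P.

```python
# Lyapunov's theorem for x' = Ax: solve A^T P + P A = -Q and check P > 0.
# The matrix A is an illustrative Hurwitz example.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # eigenvalues -1 and -2 (Hurwitz)
Q = np.eye(2)

P = solve_continuous_lyapunov(A.T, -Q)   # solves A^T P + P A = -Q
eig_P = np.linalg.eigvalsh(P)            # positive iff A is Hurwitz
residual = A.T @ P + P @ A + Q           # should vanish
```

Here V(x) = x^T P x is a quadratic Lyapunov function certifying exponential stability of the origin.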
- Introduction to Nonlinear Stabilization: Control-Lyapunov functions. Parametrization of equilibrium stabilizers by CLFs (Artstein-Sontag theorem). Passive systems and passivity-based equilibrium stabilization. Passivity of mechanical control systems. Port-Hamiltonian systems.
Prerequisites: ECE1647F and ECE557H1 (or an advanced linear systems course).
This course is a continuation of ECE1647H. It covers the design and analysis of nonlinear control systems from a geometric perspective, with emphasis on properties of the system that do not change under coordinate transformations. Frequent references to linear system theory and linear geometric methods are made. A detailed list of topics will be posted on the course website (http://www.control.utoronto.ca/~maggiore/) before the course starts.
Prerequisites: ECE410H1 or ECE557H1 or equivalent.
The course explores the design of control systems that achieve complex specifications. This is an emerging area in control theory that contrasts with traditional control design focused on stabilization and tracking. We introduce linear temporal logic (LTL) and show how LTL specifications capture a rich class of transient and steady-state behaviours of control systems. The LTL control problem is reduced to a hybrid control problem using ideas from computer science to obtain a design that includes high-level discrete algorithms and low-level continuous-time controllers. The course covers the most important techniques and tools that come into play in this methodology, including triangulation, behaviour of affine systems on polytopes, the Reach Control Problem (RCP), and flow functions. We explore in depth control synthesis methods for the RCP based on affine, continuous state, and piecewise affine feedbacks. The techniques studied in the course come together to solve the problem of motion planning for a group of quadrocopters.
This course studies dynamical models used in biology and introduces the nonlinear dynamics tools necessary for their analysis. The first part reviews basic nonlinear dynamics concepts with illustrations on biological models like population growth and logistic models. The second part focuses on biochemical reactions and genetic regulation while the third part reviews and analyzes basic neuronal models. The last part of the course introduces specific nonlinear systems concepts (such as passivity, monotonicity, cooperativity) for the quantitative study of biological networked systems (such as biochemical reaction networks and neuron populations). Selected topics will be covered independently via small research projects or paper reading and presented in workshop style meetings.
This course presents a mathematical treatment of classical and evolutionary game theory. Topics covered in classical game theory: matrix games, continuous games, Nash equilibrium (NE) solution, existence and uniqueness, best-response correspondence. Topics covered in evolutionary games: evolutionary stable strategy concept, population games, replicator dynamics, relation to dynamic asymptotic stability. Learning in games: imitation dynamics, fictitious play and their relation to replicator dynamics. Applications to engineering: communication networks, multi-agent learning. There is no required textbook. PDF course notes are available; the notes are self-contained and serve as a textbook. Weekly formal lectures based on the course notes.
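The replicator dynamics listed above can be illustrated with a short simulation (an assumed Hawk-Dove example, not taken from the course notes), in which the population converges to the mixed evolutionarily stable strategy:

```python
# Replicator dynamics x_i' = x_i * ((Ax)_i - x.Ax) for a Hawk-Dove game.
# Payoffs use V=2, C=4, so the mixed ESS is x* = (V/C, 1-V/C) = (0.5, 0.5).
import numpy as np

A = np.array([[-1.0, 2.0],     # row 0: Hawk vs (Hawk, Dove)
              [0.0, 1.0]])     # row 1: Dove vs (Hawk, Dove)

x = np.array([0.2, 0.8])       # initial population shares (assumed)
dt = 0.01
for _ in range(5000):
    fitness = A @ x                    # payoff of each strategy
    avg = x @ fitness                  # population-average payoff
    x = x + dt * x * (fitness - avg)   # forward-Euler replicator step
    x = x / x.sum()                    # guard against numerical drift
```

The mixed ESS is asymptotically stable under the replicator dynamics, which is the relation between the static and dynamic solution concepts mentioned in the description.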
Required background: This course has no formal prerequisites, but assumes knowledge of vector calculus and linear algebra. Ideally, the student taking this course will have taken an introductory course on nonlinear control theory, such as ECE1647F, and be familiar with the Lagrangian modelling of robots from a course like ECE470.
This course presents recent developments on control of underactuated mechanical systems, focusing on the notion of virtual constraint.
Traditionally, motion control problems in robotics are partitioned into two parts: motion planning and trajectory tracking. The motion planning algorithm converts the motion specification into reference signals for the robot joints. The trajectory tracker uses feedback control to make the robot joints track the reference signals. There is an emerging consensus in the academic community that this approach is inadequate for sophisticated motion control problems, in that reference signals impose a timing on the control loop which is unnatural and inherently non-robust.
The virtual constraint technique does not rely on any reference signal, and does not impose any timing on the feedback loop. Motions are characterized implicitly through constraints that are enforced via feedback. Through judicious choice of the constraints, one may induce motions that are surprisingly natural and biologically plausible. For this reason, the virtual constraints technique has become a dominant paradigm in bipedal robot locomotion, and has the potential to become even more widespread in other areas of robot locomotion.
The virtual constraint approach is geometric in nature. This course presents the required mathematical tools from differential geometry and surveys the basic results in this emerging research area. Topics covered will include:
– Differentiable manifolds and basic operations
– Controlled invariant manifolds and zero dynamics of nonlinear control systems
– Euler-Lagrange robot models and models of impulsive impacts
– Virtual holonomic constraints (VHCs)
– Constrained dynamics resulting from VHCs, and conditions for existence of a Lagrangian structure
– Virtual constraint generators
– Stabilization of periodic orbits on the constraint manifold
– Virtual constraints for walking robots
Prerequisites: ECE557H1 or equivalent
Convex optimization methods based on Linear Matrix Inequalities (LMIs) have dramatically expanded our ability to analyze and design complex multivariable control systems. This course explores material from the broad areas of robust and optimal control, with an emphasis on formulating systems analysis and controller design problems using LMIs. Topics include: historical context of robust control, fundamentals of optimization, linear matrix inequalities and semidefinite programming. Linear systems theory: Lyapunov inequalities, input-output performance criteria for dynamic systems, dissipative dynamical systems, and the generalized plant framework for optimal control. LMI solutions of H2 and H-Infinity state and output feedback control problems. Uncertain systems: linear and nonlinear uncertainty modelling, linear fractional representations, robust stability analysis. Time permitting: frequency-domain stability criteria, the KYP lemma, and introduction to integral quadratic constraints.
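As a concrete link between the Lyapunov inequalities and the H2 performance criteria listed above, the following sketch (an assumed toy system, not course material) computes an H2 norm via the controllability Gramian, the equality case of the LMI A W + W A^T + B B^T ≼ 0 that appears in LMI-based H2 design:

```python
# H2 norm of the stable system (A, B, C) via the controllability Gramian:
# solve A W + W A^T = -B B^T, then ||G||_2 = sqrt(trace(C W C^T)).
# The system matrices are illustrative (transfer function 1/(s^2 + s + 1)).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-1.0, -1.0]])          # Hurwitz
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

W = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
h2_norm = np.sqrt(np.trace(C @ W @ C.T))
gramian_residual = A @ W + W @ A.T + B @ B.T  # should vanish
```

In the LMI formulation, minimizing trace(C W C^T) subject to A W + W A^T + B B^T ≼ 0 and W ≻ 0 recovers the same value, which is how a semidefinite-programming solver would approach the problem.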