Optimal nonlinear control with AI techniques

Optimal control can address practical problems appearing in a wide variety of fields, such as automatic control, artificial intelligence, operations research, economics, and medicine. In these problems, a nonlinear dynamical system must be controlled so as to optimize a cumulative performance index over time. While optimal solutions have been theoretically characterized since the 1950s, computational methods to find them remain a challenging, open area of research.
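As a toy illustration of this formulation, the sketch below evaluates a discounted cumulative performance index for an action sequence; the dynamics f, reward rho, discount factor, and candidate actions are all illustrative assumptions, not a model from any project listed here.

```python
# Toy sketch of a discrete-time optimal control objective. The dynamics f,
# reward rho, and discount factor are illustrative assumptions only.

def f(x, u):
    """Hypothetical nonlinear dynamics: next state from state x and action u."""
    return x + 0.1 * (u - x ** 3)

def rho(x, u):
    """Hypothetical reward: penalize distance from the origin and control effort."""
    return -(x ** 2) - 0.01 * u ** 2

def cumulative_return(x0, actions, gamma=0.95):
    """Discounted cumulative performance index of an action sequence from x0."""
    x, total = x0, 0.0
    for k, u in enumerate(actions):
        total += gamma ** k * rho(x, u)
        x = f(x, u)
    return total

# A control strategy is optimal if it maximizes this return from every state;
# here we just compare three constant-action candidates from x0 = 1.
best = max([0.0, 0.5, 1.0], key=lambda u: cumulative_return(1.0, [u] * 20))
```

Computational methods differ in how they search over such action sequences; the brute-force comparison above is only meant to make the objective concrete.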

In this research direction, we focus on the development of methods originating in artificial intelligence and their use in automatic control. In particular, we investigate online planning and reinforcement learning techniques, pursuing fundamental lines such as complexity analysis on the one hand, and on the other adapting the techniques to solve open problems in nonlinear control, such as networked or hybrid control systems. On the application side, we are investigating the use of these methods to control mobile robotic assistants.

Our open and ongoing projects in this area are listed below, together with a selection of completed projects where relevant.

AIRGUIDE: A Learning Aerial Guide for the Elderly and Disabled

Robotic assistants can greatly improve the lives of the ever-growing elderly and disabled population. However, current efforts in assistive robotics focus on ground robots and manipulators in controlled, indoor environments. AIRGUIDE will break away from this by exploiting unmanned aerial vehicles (UAVs) and their versatile motion capabilities. Specifically, the project will develop aerial assistive technology for the independent mobility of an elderly or disabled person over a wide outdoor area, by monitoring risks and guiding the person when needed.

AI planning and learning for nonlinear control applications

Planning methods for optimal control use a model of the system and the reward function to derive an optimal control strategy. Here we consider in particular optimistic planning, a recent predictive approach that optimistically explores possible action sequences from the current state. Due to the generality of the dynamics and objective functions it can address, it has a wide range of potential applications in nonlinear control.
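A minimal sketch of the optimistic idea for deterministic systems follows: each candidate action sequence gets an optimistic upper bound on its value, and the sequence with the highest bound is expanded next. The dynamics f, reward rho (assumed to lie in [0, 1]), discount factor, and action set are illustrative assumptions, not the project's actual models.

```python
# Minimal sketch of optimistic planning for deterministic systems.
# Assumptions: rewards in [0, 1], discount gamma, finite action set.

import heapq
from itertools import count

def f(x, u):
    return max(-2.0, min(2.0, x + 0.1 * u))   # hypothetical bounded dynamics

def rho(x, u):
    return 1.0 / (1.0 + x ** 2)               # hypothetical reward in (0, 1]

def optimistic_plan(x0, actions=(-1.0, 1.0), gamma=0.9, budget=200):
    """Repeatedly expand the action sequence with the highest optimistic bound
    b = (discounted reward so far) + gamma^depth / (1 - gamma), then return
    the first action of the best sequence found."""
    tie = count()  # tiebreaker so the heap never compares states directly
    # heap entries: (-bound, tiebreak, depth, value, state, first_action)
    heap = [(-1.0 / (1.0 - gamma), next(tie), 0, 0.0, x0, None)]
    best_value, best_action = -1.0, actions[0]
    for _ in range(budget):
        _, _, d, v, x, a0 = heapq.heappop(heap)
        if a0 is not None and v > best_value:
            best_value, best_action = v, a0
        for u in actions:
            v2 = v + gamma ** d * rho(x, u)
            b2 = v2 + gamma ** (d + 1) / (1.0 - gamma)
            heapq.heappush(heap, (-b2, next(tie), d + 1, v2, f(x, u),
                                  u if a0 is None else a0))
    return best_action

u0 = optimistic_plan(0.5)  # first action to apply from state 0.5
```

In receding-horizon fashion, only the first action of the best sequence is applied, and planning is repeated from the next measured state.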

Optimal control of a communicating robot

Mobile robots typically communicate wirelessly, both to receive commands and to provide sensing data. The communication range is finite and the bandwidth varies with the robot's position relative to the base wireless antennas, so communication quality is strongly affected by the robot's trajectory. However, trajectory control design rarely takes this into account. In this project, we aim to design and study a trajectory control strategy that optimally accounts for the communication needs of the robot.

Observation and control for a power-assisted wheelchair

This project takes place in the context of a collaboration with the University of Valenciennes, France, involving Professors Thierry-Marie Guerra and Jimmy Lauber, Sami Mohammad at Autonomad Mobility, and PhD student Guoxi Feng. The overall objective is to control the power supplied by the electrical motor of the wheelchair, so that it pushes (or brakes) together with the user without taking over entirely. This ensures that the user can achieve their driving task but still keeps them active. The project comprises several specific tasks, each of which could be handled by a student.
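As a rough illustration of the push-together idea, a proportional assist law can be sketched as follows. The gain, the saturation limit, and the assumption that an estimate of the user's input torque is available are all illustrative, not the observer or control design developed in the project.

```python
# Hedged sketch of a proportional power-assist law: the motor amplifies the
# (estimated) user torque instead of taking over. Gain and saturation limit
# are illustrative values.

def assist_torque(user_torque, gain=1.5, limit=10.0):
    """Motor torque = gain * user torque, saturated. Braking (negative user
    torque) is assisted the same way; zero user input gives zero assist, so
    the user keeps the initiative and stays active."""
    u = gain * user_torque
    return max(-limit, min(limit, u))
```

Estimating the user torque without a dedicated sensor is itself an observation problem, which is one reason the project pairs observation with control.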

Assistive robot arms

Robots that assist elderly or disabled persons in their day-to-day tasks can lead to a huge improvement in quality of life. At ROCON we are pursuing assistive manipulators, as well as UAVs for monitoring at-risk persons. This project focuses on the first direction and offers a wide range of opportunities for a team of students, from low-level control design and vision tasks to high-level control using artificial intelligence tools. Each student will work on one well-defined subtopic in these areas.

Sliding mode control of inverted pendulum

This project will develop sliding mode controllers and observers for the Quanser rotational inverted pendulum. The control objective is to stabilize the pendulum in the upright position after a single swing-up. The control system should ensure robustness with respect to parametric uncertainties, measurement noise, external disturbances, and small time delays. Preliminary results will be validated in simulation, after which real-time implementation and validation will be performed.
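The core sliding-mode idea can be sketched on a simplified pendulum model: define a sliding surface combining angle and angular-velocity errors, and apply a switching control that drives the state onto it. The model, gains, and the saturated (boundary-layer) switching term below are illustrative assumptions, not the Quanser rotational pendulum dynamics or the project's design.

```python
# Hedged sketch of a first-order sliding mode controller stabilizing a
# simple inverted pendulum model at the upright position (theta = 0).

import math

def smc_step(theta, omega, lam=4.0, k=30.0, phi=0.05):
    """Sliding surface s = omega + lam * theta; control u = -k * sat(s / phi).
    The boundary layer phi smooths the sign function to reduce chattering,
    a standard practical modification of pure sliding mode control."""
    s = omega + lam * theta
    sat = max(-1.0, min(1.0, s / phi))
    return -k * sat

def simulate(theta0, omega0, dt=0.001, steps=5000, g=9.81, l=0.5):
    """Inverted pendulum near upright: theta'' = (g/l) sin(theta) + u."""
    theta, omega = theta0, omega0
    for _ in range(steps):
        u = smc_step(theta, omega)
        omega += dt * ((g / l) * math.sin(theta) + u)
        theta += dt * omega
    return theta, omega

theta_f, omega_f = simulate(0.3, 0.0)  # small initial tilt near upright
```

On the surface s = 0, the closed loop behaves like theta' = -lam * theta, so the angle decays exponentially regardless of bounded model errors, which is the source of the robustness properties sought in the project.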

AUF-RO grant: AI methods for the networked control of assistive UAVs (NETASSIST)

This project develops methods for networked control and sensing for a team of unmanned, assistive aerial vehicles that follows a group of vulnerable persons. On the control side, we consider multiagent and consensus techniques, while on the vision side the focus is on egomotion estimation of the UAVs and on cooperative tracking of persons with filtering techniques. NETASSIST is an international cooperation project involving the Technical University of Cluj-Napoca in Romania, the University of Szeged in Hungary, and the University of Lorraine in Nancy, France.
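A linear consensus iteration, the basic building block of the multiagent techniques mentioned above, can be sketched in a few lines; the communication graph, step size, and the scalar "altitude agreement" scenario are illustrative assumptions.

```python
# Toy sketch of a linear consensus iteration: each agent repeatedly moves
# toward its neighbors' states, so all agents converge to a common value
# (here, the average of the initial states, since the ring graph is regular).

def consensus_step(states, neighbors, eps=0.2):
    """One synchronous consensus update with step size eps."""
    return [
        x + eps * sum(states[j] - x for j in neighbors[i])
        for i, x in enumerate(states)
    ]

# Four agents on a ring communication graph, e.g. UAVs agreeing on an altitude.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
states = [0.0, 4.0, 8.0, 12.0]
for _ in range(100):
    states = consensus_step(states, neighbors)
# states converge to the average of the initial values (here 6.0)
```

Networked control adds the real difficulties on top of this idealized loop: packet drops, delays, and limited bandwidth on the links between agents.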

PHC Brancusi grant: Artificial-Intelligence-Based Optimization for the Stable and Optimal Control of Networked Systems (AICONS)

The optimal operation of communication, energy, transport, and other networks is of paramount importance in today's society, and will certainly become more important in the future. Operating these networks optimally requires the effective control of their component systems. Our project AICONS therefore focuses on the control of general networked systems. We consider both the coordinated behavior of multiple systems having a local view of the network, as well as the networked control of individual systems where new challenges arise from the limitations of the network.

Nonlinear control for commercial drones in autonomous railway maintenance

Drones are becoming widespread, and low-cost platforms already offer a good flight and video-recording experience. This project intends to use such drones for railway maintenance by developing applications for autonomous navigation in railway environments.

Young Teams grant: Reinforcement learning and planning for large-scale systems

Many controlled systems, such as robots in open environments or traffic and energy networks, are large-scale: they have many continuous variables. Such systems may also be nonlinear, stochastic, and impossible to model accurately. Optimistic planning (OP) is a recent paradigm for general nonlinear and stochastic control that works when a model is available; reinforcement learning (RL) additionally works model-free, by learning from data. However, existing OP and RL methods cannot handle the number of continuous variables required in large-scale systems.
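To make the model-free contrast concrete, the sketch below runs tabular Q-learning on a tiny hypothetical chain task: the controller learns only from observed transitions, never using the dynamics or reward model. The environment, gains, and reward are illustrative; a table over discretized states is also exactly the construction that stops scaling when many continuous variables are present.

```python
# Minimal sketch of model-free reinforcement learning: tabular Q-learning
# on a 5-state chain with a hypothetical goal reward at the right end.

import random

N_STATES, ACTIONS = 5, (0, 1)   # actions: 0 = step left, 1 = step right
GOAL = N_STATES - 1             # reward 1 only when the rightmost state is reached

def step(s, a):
    s2 = min(GOAL, s + 1) if a == 1 else max(0, s - 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

def q_learning(episodes=300, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy exploration
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda a: q[s][a])
            s2, r = step(s, a)
            # model-free temporal-difference update from the observed
            # transition (s, a, r, s2); no dynamics or reward model is used
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
greedy = [max(ACTIONS, key=lambda a: q[s][a]) for s in range(N_STATES)]
```

With many continuous state variables, the table q would have to be replaced by a function approximator, and the number of samples needed grows quickly; addressing exactly this gap is the aim of the project.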
