Kyriakos Vamvoudakis

Hometown: Athens, Greece

B.S. Degree: Electronic and Computer Engineering from Technical University of Crete, Greece

M.S. and Ph.D.: Electrical Engineering, Automation and Robotics Research Institute, University of Texas at Arlington

Important Awards and Honors: International Neural Network Society's Young Investigator Award, 2016

                  Senior Member of IEEE, 2015

                  Inclusion of the 2010 Automatica paper in the IFAC Virtual Special Issue of Annual Reviews in Control as one of the papers with the highest citation rates in the control field (the virtual special issue highlights papers published between 2010 and 2013 in the six journals of the International Federation of Automatic Control), 2014

                  Certificates for four research articles featured in ScienceDirect Top-25 list of most popular (Hottest) articles in Automatica, Elsevier, 2010-2013

                  Honor by the Office of the Provost and the University Library for Faculty Creative Works and Awards, University of Texas at Arlington, 2011

                  Best Researcher Award, Automation and Robotics Research Institute, University of Texas at Arlington, 2011

                  Best Paper Award for Autonomous/Unmanned Vehicles, 27th Army Science Conference, Orlando, Florida, 2010

                  Best Presentation Award at the World Congress of Computational Intelligence, Barcelona, July 2010

                  Invited member of Golden Key Honour Society since 2009

                  Biography appears in Marquis Who’s Who in the World since 2009, Who’s Who in Science and Engineering since 2010 and Who’s Who in America since 2012

                  UTA STEM Doctoral Fellowship (2008-2011)

                  Registered Electrical/Computer Engineer (PE), Technical Chamber of Greece, since 2007

                  Invited member of the Tau Beta Pi (TBP) and Eta Kappa Nu (HKN) engineering honor societies since 2008

Graduate Study Area and Main Area of Research: Electrical Engineering, with a main focus on feedback control theory

Advisor and Lab: I did my Ph.D. with Frank Lewis at the Automation and Robotics Research Institute at the University of Texas at Arlington. Since 2012, I have been working as a research scientist at the Center for Control, Dynamical Systems and Computation at UCSB with Joao Hespanha.

Research Interests: My research is multidisciplinary and draws from the areas of control theory, game theory, computational intelligence, and renewable energy (smart grid). Interactions between self-interested agents, along with bio-inspired approaches, are used to design and develop control system algorithms with guaranteed performance and stability in settings where traditional techniques are unable to provide solutions. My current research focuses on bio-inspired feedback control systems, game theory-based network security, and multi-agent optimization with applications to cyber-physical systems.

Professional Memberships: Senior Member of IEEE, AIAA

Hobbies: Basketball, playing the piano, hiking and swimming

Currently what are you working on?

I am teaching an undergraduate class on “Basic Electrical and Electronic Circuits” in the Mechanical Engineering Department at UCSB, and I am conducting research in the general areas of network security, game theory, the smart grid, and multi-agent optimization. I have developed several sophisticated resilient control system architectures that guarantee desired behavior and protect systems from cyber-attacks. Different attack scenarios have been considered, including Byzantine faults, persistent attacks (measurement corruption and jamming attacks) on network teams, and attacks on cyber-missions. Under such circumstances, the defender must be able to adapt its control strategy according to the effects induced by the attackers. Our work on the smart grid develops optimization-based control algorithms that can guarantee the optimal performance of voltage-source micro-inverters without any phasor-domain analysis or pulse-width modulation. Moreover, our proposed framework does not require any plant parameter estimation; instead, plant information is used to find the controller parameters directly online.

Tell us about your recent “Young Investigator Award” and the contributions you have made in the field of neural networks that led to your being recognized with this prestigious award.

I have made pioneering engineering contributions in the area of neuro-inspired control systems and adaptive dynamic programming (ADP) since 2008. I have designed a family of optimal adaptive learning systems for continuous-time systems, based on actor/critic neural network structures, that converge online in real time to optimal control solutions. Standard adaptive learning controllers do not converge to optimal solutions, and standard optimal controllers are designed offline. I provided such online learning solutions to the optimal control problem, the nonzero-sum multiplayer game problem, and the zero-sum game problem. I used reinforcement learning methods such as Policy Iteration to design the tuning laws for the actor/critic neural networks, then provided rigorous proofs to guarantee their performance, stability, and robustness. My methods allow, for the first time, the learning solution of complicated continuous-time Hamilton-Jacobi equations, which serve as the basis for optimal and game-theoretic design, online and in real time, by observing state-variable information along the system trajectories. I have also proposed novel output-feedback ADP algorithms for linear time-invariant systems that are affine in the control input. This data-based optimal control is implemented online using novel Policy Iteration and Value Iteration ADP algorithms based only on the reduced measured information available at the system outputs. These two classes of output-feedback algorithms do not require any knowledge of the system dynamics and as such are similar to Q-learning, but they have the added advantage of requiring only measurements of input/output data rather than the full system state. I also introduced the new concept of differential Graphical Games in neuro-inspired distributed control, where the dynamics and cost function of each player depend only on its immediate neighbors in a communication graph topology. Standard definitions of Nash equilibrium need to be extended for the case of Graphical Games. I have recently used reinforcement learning ideas in network security, complex biological networks, the smart grid, and the control of agile automotive vehicles.
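In its simplest linear-quadratic setting, the Policy Iteration idea behind these ADP methods reduces to Kleinman's classical algorithm, which alternates policy evaluation (a Lyapunov equation for the cost of the current gain) with policy improvement. The sketch below is purely illustrative and is not taken from the interview: the double-integrator system matrices are hypothetical, and an offline Riccati solve is used only as a reference check.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Hypothetical system for illustration: a double integrator.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)          # state cost weight
R = np.array([[1.0]])  # control cost weight

def policy_iteration(A, B, Q, R, K0, iters=20):
    """Kleinman-style policy iteration for the continuous-time LQR.

    Policy evaluation: solve the Lyapunov equation
        (A - B K)^T P + P (A - B K) = -(Q + K^T R K)
    for the cost matrix P of the current stabilizing gain K.
    Policy improvement: K <- R^{-1} B^T P.
    """
    K = K0
    for _ in range(iters):
        Ac = A - B @ K
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)  # greedy improvement step
    return P, K

# Any stabilizing initial gain works; this one places both poles
# in the open left half-plane.
K0 = np.array([[1.0, 1.0]])
P_pi, K_pi = policy_iteration(A, B, Q, R, K0)

# Reference: the algebraic Riccati equation solved offline.
P_are = solve_continuous_are(A, B, Q, R)
print(np.allclose(P_pi, P_are))  # the iteration converges to the ARE solution
```

The ADP versions of this idea replace the model-based Lyapunov solve with learning along measured system trajectories, which is what removes the need for full knowledge of the dynamics.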

My contributions have yielded numerous publications over more than seven years of work in neural network research and engineering applications. Several of my journal papers in various areas of ADP appear on ScienceDirect's top-25 most-cited lists. My 2010 regular Automatica paper has received more than 282 citations and was included in the Virtual Special Issue of Annual Reviews in Control as one of the papers with the highest citation rates in the control field.

What is your education background?

Ph.D., Electrical Engineering, 2011

University of Texas at Arlington, TX, USA

Dissertation title: Online Learning Algorithms for Differential Dynamic Games and Optimal Control

Advisor: Frank L. Lewis

Major: Dynamic Systems and Control; Minor: Mathematics

M.Sc., Electrical Engineering, 2008

University of Texas at Arlington, TX, USA

Advisor: Frank L. Lewis

Major: Dynamic Systems and Control; Minor: Mathematics

Diploma in Electronic and Computer Engineering (B.Sc. and M.Sc. in a 5-year program), 2006

Technical University of Crete, Greece

Diploma Thesis title: Adaptive Control for MAPK Cascade Models using RBF Neural Networks

Advisor: Manolis A. Christodoulou


List some of your favorite publications.

Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
Basar, Tamer, and Geert Jan Olsder. Dynamic Noncooperative Game Theory. Vol. 23. SIAM, 1999.
Lewis, Frank L., and Vassilis L. Syrmos. Optimal Control. John Wiley & Sons, 1995.
Ioannou, Petros, and Jing Sun. Robust Adaptive Control. Courier Corporation, 2013.
Hespanha, Joao P., Payam Naghshtabrizi, and Yonggang Xu. "A survey of recent results in networked control systems." Proceedings of the IEEE 95.1 (2007): 138-162.
Haddad, Wassim M., and VijaySekhar Chellaboina. Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach. Princeton University Press, 2008.

How and why did you get into your area of research?

It was during my first undergraduate class on feedback control that I was fascinated by the idea that complicated systems can be brought under our control by playing with mathematical equations. That curiosity developed into a love of feedback control and the mathematics that drives it. During my graduate studies in Texas, I had the pleasure of working with Frank Lewis, a pioneer in neuro-adaptive control, who gave me my initial passion for the field and taught me how to come up with new ideas while paying close attention to detail.

Why did you select UCSB and ECE in regards to your research?

UCSB is known for its world-renowned control center. With that in mind, I attended the 49th IEEE Conference on Decision and Control in December 2010, where Joao Hespanha gave a semi-plenary talk on “Why Should I Care About Stochastic Hybrid Systems.” It was the first time I had heard him speak live, apart from reading his papers, and I had him in mind for a postdoctoral position after graduating. I sent him an email, got an immediate answer, and was invited for an interview. I have learned so many things from him, including being technically precise in everything I write in a paper.

Over my years as a researcher, I have been involved in a MURI project on network security and several other ICB projects on neuro-learning.

What do you find rewarding about your research?

It is great to see that others whom I deeply respect find these ideas useful and cite my work.

Where will your research take you next?

The search for practical and implementable automatic controllers is very important in industry. My future research will also branch into parallel areas, including nonlinear adaptive optimal control, the smart grid, game theory, network security, complex networks, and the control of agile vehicles.

What is life as a research scientist like, and how do you balance school, work, social, and family life?

The excitement of the researcher lifestyle barely leaves time for anything outside of work. I live in Goleta, and every morning I love going to Coffee Bean to pick up my coffee. I find many UCSB engineers and professors, including Joao, hanging out there and working on exciting projects, so I started doing the same and found it productive. Apart from that, I enjoy hiking, playing the piano, and eating good food. I also enjoy small road trips to LA, taking walks in Santa Monica and Malibu while listening to the waves.