Research

My research focuses on solving fundamental problems in electric power systems engineering through the lens of signal processing, statistical estimation, and optimization. I am particularly interested in the role that these fields play in designing resilient, sustainable, and just electric power systems.

Complex power

Reactive power, the imaginary component of complex power, is a unifying thread throughout much of my work. Reactive power is a key component of the power flow equations, which are the fundamental equations of electric power systems. Many classical approximations of the power flow equations neglect reactive power; however, controllable reactive power is a vital component of distributed energy resources. In distribution systems, we often only have phaseless measurements, i.e., measurements of the magnitude of a complex quantity. For example, synchrophasor measurements, which provide time-synchronized measurements of voltage magnitude and phase angle, are often unavailable in distribution systems.
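As a concrete illustration of these quantities, the sketch below computes complex power injections for a toy two-bus network and contrasts the full phasor data with the phaseless magnitude measurement that a distribution-level sensor typically reports. The network topology and parameter values are illustrative assumptions, not drawn from any particular study or from my own results.

```python
import numpy as np

# Toy two-bus network: slack bus 0 and load bus 1, joined by a single line
# with series impedance 0.01 + j0.05 pu (illustrative values only).
y_line = 1.0 / (0.01 + 0.05j)
Y = np.array([[ y_line, -y_line],
              [-y_line,  y_line]])   # bus admittance matrix

# Complex bus voltages (per unit): magnitude and phase angle.
V = np.array([1.00 * np.exp(1j * 0.00),
              0.97 * np.exp(-1j * 0.02)])

# Complex power injections S = diag(V) conj(Y V); the real part is active
# power P and the imaginary part is reactive power Q.
S = V * np.conj(Y @ V)
P, Q = S.real, S.imag

# A phaseless measurement keeps only the magnitude of a complex quantity,
# discarding the angle information a synchrophasor would provide.
v_magnitude = np.abs(V)

print("P (pu):", P)
print("Q (pu):", Q)
print("|V| (pu):", v_magnitude)
```

The estimation question, loosely, is what can be inferred about P and Q when only such magnitudes are observed.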

Decentralized decision making


In recent years, however, all three of these fields (signal processing, statistical estimation, and optimization) have come under mounting pressure to accommodate massive amounts of increasingly high-dimensional data. For example, witness the explosion in the quantity of high-resolution audio, imagery, video, and other sensed data produced by relatively inexpensive mobile devices. Similar "big data" scenarios arise in such diverse application areas as medical and scientific imaging, genomic data analysis, meteorology, remote surveillance, digital communications, and large-scale network analysis. Despite extraordinary advances in computational power, such high-dimensional data continues to pose a number of challenges in electric power systems, signal processing, and control theory. For example, mobile devices equipped with a range of sensors can easily acquire high-dimensional data at a rate that exceeds local storage and/or communication capacity by several orders of magnitude. In many other scientific and industrial applications (such as at the Large Hadron Collider at CERN), data can be generated at rates that far outstrip our ability to store it. In such scenarios we are faced with the challenge of extracting meaningful information from massively undersampled data. Moreover, even if we could collect all of this data, any effort to extract information must somehow overcome the "curse of dimensionality."

From a classical perspective, there is no obvious way to overcome these challenges. However, in many cases these nominally high-dimensional signals actually obey some sort of low-dimensional model, i.e., a model with only a few degrees of freedom. Geometrically, this means that the data lies near a low-dimensional structure such as a manifold, a union of subspaces, the set of low-rank matrices, or some other natural set. This structure, when present, allows for principled approaches to handling high-dimensional data. For example, the model of sparsity lies at the heart of compressive sensing — an emerging framework for efficient sensing that is revolutionizing the data processing pipeline starting with the acquisition of the data itself.
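As a toy illustration of the sparsity model described above, the following sketch recovers a sparse vector from undersampled random linear measurements using iterative soft-thresholding (ISTA) for the l1-regularized least-squares problem. The dimensions, the random sensing matrix, and the choice of ISTA are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A nominally high-dimensional signal (n entries) with only k nonzeros,
# observed through m < n random linear measurements.
n, m, k = 200, 60, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                             # undersampled measurements

# ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    z = x - step * grad
    # Soft-thresholding enforces the sparsity prior.
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

With many more unknowns than measurements (n = 200 versus m = 60), least squares alone is hopelessly underdetermined; it is the sparsity prior, enforced here by the soft-thresholding step, that makes accurate recovery possible.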

To address the challenges posed by high-dimensional data, my research focuses largely on the use of low-dimensional models such as sparsity, low-rank structure, and manifold or parametric models to perform practical signal processing and machine learning tasks. Such models can allow for elegant solutions to these challenges in a wide range of contexts.
