I2S Master's/Doctoral Theses


All students and faculty are welcome to attend the final defense of I2S graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Sherwan Jalal Abdullah

A Versatile and Programmable UAV Platform for Integrated Terrestrial and Non-Terrestrial Network Measurements in Rural Areas

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Shawn Keshmiri


Abstract

Reliable cellular connectivity is essential for modern services such as telehealth, precision agriculture, and remote education; yet, measuring network performance in rural areas presents significant challenges. Traditional drive testing cannot access large geographic areas between roads, while crowdsourced data provides insufficient spatial resolution in low-population regions. To address these limitations, we develop an open-source UAV-based measurement platform that integrates an onboard computation unit, commercial cellular modem, and automated flight control to systematically capture Radio Access Network (RAN) signals and end-to-end network performance metrics at different altitudes. Our platform collects synchronized measurements of signal strength (RSRP, RSSI), signal quality (RSRQ, SINR), latency, and bidirectional throughput, with each measurement tagged with GPS coordinates and altitude. Experimental results from a semi-rural deployment reveal a fundamental altitude-dependent trade-off: received signal power improves at higher altitudes due to enhanced line-of-sight conditions, while signal quality degrades from increased interference with neighboring cells. Our analysis indicates that most of the measurement area maintains acceptable signal quality, along with adequate throughput performance, for both uplink and downlink communications. We further demonstrate that strong radio signal metrics for individual cells do not necessarily translate to spatial coverage dominance: the cell serving the majority of our test area exhibited only moderate performance, while cells with superior metrics contributed minimally to overall coverage. Next, we develop several machine learning (ML) models to improve the prediction accuracy of signal strength at unmeasured altitudes. Finally, we extend our measurement platform by integrating non-terrestrial network (NTN) user terminals with the UAV components to investigate the performance of Low-Earth Orbit (LEO) satellite networks with UAV mobility. Our measurement results demonstrate that NTN offers a viable fallback option by achieving acceptable latency and throughput performance during flight operations. Overall, this work establishes a reproducible methodology for three-dimensional rural network characterization and provides practical insights for network operators, regulators, and researchers addressing connectivity challenges in underserved areas.
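
As a rough sketch of the kind of GPS-tagged record such a platform logs, assuming hypothetical helpers read_modem_kpis and read_gps standing in for the modem and autopilot interfaces (the platform's actual APIs are not described in this notice):

    import csv, time

    def read_modem_kpis():
        # Hypothetical: query the cellular modem (e.g., via AT commands)
        # for RAN metrics; returns a dict of KPIs.
        return {"rsrp_dbm": -95.0, "rssi_dbm": -70.0,
                "rsrq_db": -11.0, "sinr_db": 12.5}

    def read_gps():
        # Hypothetical: read the autopilot's current GPS fix.
        return {"lat": 38.958, "lon": -95.252, "alt_m": 60.0}

    with open("flight_log.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[
            "t", "lat", "lon", "alt_m",
            "rsrp_dbm", "rssi_dbm", "rsrq_db", "sinr_db"])
        writer.writeheader()
        for _ in range(10):                  # one row per sampling interval
            row = {"t": time.time(), **read_gps(), **read_modem_kpis()}
            writer.writerow(row)             # each KPI tagged with GPS + altitude
            time.sleep(1.0)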


Satya Ashok Dowluri

Comparison of Copy-and-Patch and Meta-Tracing Compilation Techniques in the Context of Python

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Hossein Saiedian


Abstract

Python's dynamic nature makes performance enhancement challenging. Recently, a JIT compiler using a novel copy-and-patch compilation approach was implemented in the reference Python implementation, CPython. Our goal in this work is to study and understand the performance properties of CPython's new JIT compiler. To facilitate this study, we compare the quality and performance of the code generated by this new JIT compiler with a more mature and traditional meta-tracing based JIT compiler implemented in PyPy (another Python implementation). Our thorough experimental evaluation reveals that, while it achieves the goal of fast compilation speed, CPython's JIT severely lags in code quality/performance compared with PyPy. While this observation is a known and intentional property of the copy-and-patch approach, it results in the new JIT compiler failing to elevate Python code performance beyond that achieved by the default interpreter, despite significant added code complexity. In this thesis, we report and explain our novel experiments, results, and observations.
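
As a rough illustration of how such a comparison can be run, a minimal micro-benchmark sketch (not one of the thesis's benchmarks): execute the same script under CPython with the experimental JIT enabled (in 3.13+ builds configured with --enable-experimental-jit, toggled via the PYTHON_JIT environment variable) and under PyPy, then compare wall-clock times.

    import time

    def power_iteration(n=300, iters=50):
        # Hot numeric loop: the kind of interpreter-bound bytecode a JIT targets.
        u = [1.0] * n
        for _ in range(iters):
            v = [sum(u[j] / ((i + j) * (i + j + 1) / 2 + i + 1)
                     for j in range(n)) for i in range(n)]
            s = sum(x * x for x in v) ** 0.5
            u = [x / s for x in v]
        return u[0]

    start = time.perf_counter()
    power_iteration()
    print(f"{time.perf_counter() - start:.3f} s")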


Arya Hadizadeh Moghaddam

Learning Personalized and Robust Patient Representations across Graphical and Temporal Structures in Electronic Health Records

When & Where:


Eaton Hall, Room 2001B

Degree Type:

PhD Comprehensive Defense

Committee Members:

Zijun Yao, Chair
Bo Luo
Fengjun Li
Dongjie Wang
Xinmai Yang

Abstract

Recent research in Electronic Health Records (EHRs) has enabled personalized and longitudinal modeling of patient trajectories for health outcome improvement. Despite this progress, existing methods often struggle to capture the dynamic, heterogeneous, and interdependent nature of medical data. Specifically, many representation methods learn a rich set of EHR features independently but overlook the intricate relationships among them. Moreover, data scarcity and bias, such as cold-start scenarios where patients have only a few visits or rare conditions, remain fundamental challenges for real-life clinical decision support. To address these challenges, this dissertation aims to introduce an integrated machine learning framework for sophisticated, interpretable, and adaptive EHR representation modeling. Specifically, the dissertation comprises three thrusts:

  1. A time-aware graph transformer model that dynamically constructs personalized temporal graph representations capturing a patient's trajectory across visits.
  2. A contrastive multi-intent recommender system that disentangles the multiple temporal patterns coexisting in a patient's long medical history while accounting for distinct health profiles.
  3. A few-shot meta-learning framework that addresses the patient cold-start issue through a self- and peer-adaptive model enhanced by uncertainty-based filtering.

Together, these contributions advance a data-efficient, generalizable, and interpretable foundation for large-scale clinical EHR mining toward truly personalized medical outcome prediction.
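
As a toy illustration of the per-patient temporal graph in the first thrust (made-up visit data; the node and edge conventions here are assumptions, not the model's actual construction):

    # Toy per-patient temporal graph: visits as nodes, medical codes attached,
    # edges linking consecutive visits with the time gap as an edge feature.
    visits = [
        {"id": 0, "days": 0,   "codes": ["E11.9", "I10"]},   # made-up EHR codes
        {"id": 1, "days": 30,  "codes": ["I10", "N18.3"]},
        {"id": 2, "days": 210, "codes": ["N18.4"]},
    ]

    nodes = {v["id"]: v["codes"] for v in visits}
    edges = [(a["id"], b["id"], {"gap_days": b["days"] - a["days"]})
             for a, b in zip(visits, visits[1:])]

    print(nodes)
    print(edges)   # time-aware edges a graph transformer could attend over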


Junyi Zhao

On the Security of Speech-based Machine Translation Systems: Vulnerabilities and Attacks

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Bo Luo, Chair
Fengjun Li
Zijun Yao


Abstract

In light of the rapid advancement of global connectivity and the increasing reliance on multilingual communication, speech-based Machine Translation (MT) systems have emerged as essential technologies for facilitating seamless cross-lingual interaction. These systems enable individuals and organizations to overcome linguistic boundaries by automatically translating spoken language in real time. However, despite their growing ubiquity in various applications such as virtual assistants, international conferencing, and accessibility services, the security and robustness of speech-based MT systems remain underexplored. In particular, limited attention has been given to understanding their vulnerabilities under adversarial conditions, where malicious actors intentionally craft or manipulate speech inputs to mislead or degrade translation performance.

This thesis presents a comprehensive investigation into the security landscape of speech-based machine translation systems from an adversarial perspective. We systematically categorize and analyze potential attack vectors, evaluate their success rates across diverse system architectures and environmental settings, and explore the practical implications of such attacks. Furthermore, through a series of controlled experiments and human-subject evaluations, we demonstrate that adversarial manipulations can significantly distort translation outputs in realistic use cases, thereby posing tangible risks to communication reliability and user trust.

Our findings reveal critical weaknesses in current MT models and underscore the urgent need for developing more resilient defense strategies. We also discuss open research challenges and propose directions for building secure, trustworthy, and ethically responsible speech translation technologies. Ultimately, this work contributes to a deeper understanding of adversarial robustness in multimodal language systems and provides a foundation for advancing the security of next-generation machine translation frameworks.


Kyrian C. Adimora

Machine Learning-Based Multi-Objective Optimization for HPC Workload Scheduling: A GNN-RL Approach

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Comprehensive Defense

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni
Zijun Yao
Michael J. Murray

Abstract

As high-performance computing (HPC) systems achieve exascale capabilities, traditional single-objective schedulers that optimize solely for performance prove inadequate for environments requiring simultaneous optimization of energy efficiency and system resilience. Current scheduling approaches result in suboptimal resource utilization, excessive energy consumption, and reduced fault tolerance under the demanding requirements of large-scale scientific applications. This dissertation proposes a novel multi-objective optimization framework that integrates graph neural networks (GNNs) with reinforcement learning (RL) to jointly optimize performance, energy efficiency, and system resilience in HPC workload scheduling. The central hypothesis posits that graph-structured representations of workloads and system states, combined with adaptive learning policies, can significantly outperform traditional scheduling methods in complex, dynamic HPC environments. The proposed framework comprises three integrated components: (1) GNN-RL, which combines graph neural networks with reinforcement learning for adaptive policy development; (2) EA-GATSched, an energy-aware scheduler leveraging Graph Attention Networks; and (3) HARMONIC (Holistic Adaptive Resource Management for Optimized Next-generation Interconnected Computing), a probabilistic model for workload uncertainty quantification. The proposed methodology encompasses novel uncertainty modeling techniques, scalable GNN-based scheduling algorithms, and comprehensive empirical evaluation using production supercomputing workload traces. Preliminary results demonstrate 10-19% improvements in energy efficiency while maintaining comparable performance metrics. The framework will be evaluated across makespan reduction, energy consumption, resource utilization efficiency, and fault tolerance in various operational scenarios. This research advances sustainable and resilient HPC resource management, providing critical infrastructure support for next-generation scientific computing applications.
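
A heavily simplified sketch of the scheduling loop such a framework implies: jobs arrive as a dependency graph, a policy scores (job, node) placements, and the reward mixes makespan and energy objectives. The features, unit job lengths, and reward weights here are illustrative, not the dissertation's actual design.

    jobs = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}   # DAG: job -> deps
    nodes = {"n0": {"busy": 0.0, "watts": 300}, "n1": {"busy": 0.0, "watts": 220}}

    def reward(finish, energy, w=0.5):
        # Multi-objective reward: trade off completion time against energy use.
        return -(w * finish + (1 - w) * energy / 1000)

    done = {}
    for job in ["A", "B", "C", "D"]:                 # topological order
        ready = max((done[d] for d in jobs[job]), default=0.0)
        # Stand-in for a learned GNN-RL policy: greedily pick the node with
        # the best estimated reward for a unit-length job.
        best = max(nodes, key=lambda n: reward(
            max(ready, nodes[n]["busy"]) + 1.0,
            nodes[n]["watts"] * 1.0))
        start = max(ready, nodes[best]["busy"])
        done[job] = nodes[best]["busy"] = start + 1.0
        print(job, "->", best, f"finish={done[job]:.1f}")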


Sarah Johnson

Ordering Attestation Protocols

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Comprehensive Defense

Committee Members:

Perry Alexander, Chair
Michael Branicky
Sankha Guria
Emily Witt
Eileen Nutting

Abstract

Remote attestation is a process of obtaining verifiable evidence from a remote party to establish trust. A relying party makes a request of a remote target that responds by executing an attestation protocol producing evidence reflecting the target's system state and meta-evidence reflecting the evidence’s integrity and provenance. This process occurs in the presence of adversaries intent on misleading the relying party to trust a target they should not. This research introduces a robust approach for evaluating and comparing attestation protocols based on their relative resilience against such adversaries. I develop a Rocq-based, formally-verified mathematical model aimed at describing the difficulty for an active adversary to successfully compromise the attestation. The model supports systematically ranking attestation protocols by the level of adversary effort required to produce evidence that does not accurately reflect the target’s state. My work aims to facilitate the selection of a protocol resilient to adversarial attack.


Lohithya Ghanta

Used Car Analytics

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Morteza Hashemi
Prasad Kulkarni


Abstract

The used car market is characterized by significant pricing variability, making it challenging for buyers and sellers to determine fair vehicle values. To address this, the project applies a machine learning–driven approach to predict used car prices based on real market data extracted from Cars.com. Following extensive data cleaning, feature engineering, and exploratory analysis, several predictive models were developed and evaluated. Among these, the Stacking Regressor demonstrated superior performance, effectively capturing non-linear pricing patterns and achieving the highest accuracy with the lowest prediction error. Key insights indicate that vehicle age and mileage are the primary drivers of price depreciation, while brand and vehicle category exert notable secondary influence. The resulting pricing model provides a data-backed, transparent framework that supports more informed decision-making and promotes fairness and consistency within the used car marketplace.
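
A minimal sketch of the winning model family on synthetic stand-in data (the real features come from Cars.com listings, and the base learners are not specified in this notice):

    import numpy as np
    from sklearn.ensemble import StackingRegressor, RandomForestRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 2000
    age = rng.uniform(0, 15, n)                  # vehicle age in years
    miles = rng.uniform(0, 150_000, n)           # odometer reading
    brand = rng.integers(0, 5, n)                # encoded brand category
    price = 30_000 * np.exp(-0.12 * age) - 0.05 * miles \
            + 1_500 * brand + rng.normal(0, 800, n)

    X = np.column_stack([age, miles, brand])
    X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)

    model = StackingRegressor(
        estimators=[("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
                    ("ridge", Ridge())],
        final_estimator=Ridge())                 # meta-learner blends base models
    model.fit(X_tr, y_tr)
    print("R^2 on held-out listings:", round(model.score(X_te, y_te), 3))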


Ashish Adhikari

Towards assessing the security of program binaries

When & Where:


Eaton Hall, Room 2001B

Degree Type:

PhD Comprehensive Defense

Committee Members:

Prasad Kulkarni, Chair
Alex Bardas
Fengjun Li
Bo Luo

Abstract

Software vulnerabilities are widespread, often resulting from coding weaknesses and poor development practices. These vulnerabilities can be exploited by attackers, posing risks to confidentiality, integrity, and availability. To protect themselves, end-users of software may have an interest in knowing whether the software they purchase and use is secure from potential attacks. Our work is motivated by this need to automatically assess and rate the security properties of binary software.

While many researchers focus on developing techniques and tools to detect and mitigate vulnerabilities in binaries, our approach is different. We aim to determine whether the software has been developed with proper care. Our hypothesis is that software created with meticulous attention to security is less likely to contain exploitable vulnerabilities. As a first step, we examined the current landscape of binary-level vulnerability detection. We categorized critical coding weaknesses in compiled programming languages and conducted a detailed survey comparing static analysis techniques and tools designed to detect these weaknesses. Additionally, we evaluated the effectiveness of open-source CWE detection tools and analyzed their challenges. To further understand their efficacy, we conducted independent assessments using standard benchmarks.

To determine whether software is carefully and securely developed, we propose several techniques. So far, we have used machine learning and deep learning methods to identify the programming language of a binary at the function level, enabling us to handle complex cases like mixed-language binaries, and to assess whether vulnerable regions in the binary are protected with appropriate security mechanisms. Additionally, we explored the feasibility of detecting secure coding practices by examining adherence to SonarQube's security-related coding conventions.
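
As a concrete illustration of checking whether a binary carries common protection mechanisms, a sketch using the pyelftools library to inspect an ELF file for PIE, stack-canary, and non-executable-stack indicators; this is a generic example, not the thesis's tooling:

    from elftools.elf.elffile import ELFFile   # pip install pyelftools

    def hardening_report(path):
        with open(path, "rb") as f:
            elf = ELFFile(f)
            # PIE: position-independent executables are linked as ET_DYN.
            pie = elf.header["e_type"] == "ET_DYN"
            # Stack canary: compilers emit calls to __stack_chk_fail.
            dynsym = elf.get_section_by_name(".dynsym")
            canary = any(s.name == "__stack_chk_fail"
                         for s in dynsym.iter_symbols()) if dynsym else False
            # NX stack: PT_GNU_STACK segment present without the execute flag.
            nx = all(seg["p_flags"] & 0x1 == 0          # PF_X bit
                     for seg in elf.iter_segments()
                     if seg["p_type"] == "PT_GNU_STACK")
            return {"PIE": pie, "stack_canary": canary, "NX_stack": nx}

    print(hardening_report("/bin/ls"))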

Next, we investigate whether compiler warnings generated during binary creation are properly addressed. Furthermore, we aim to improve array-bounds detection in program binaries. This enhanced array-bounds detection will also increase the effectiveness of detecting secure coding conventions related to memory safety and buffer overflow vulnerabilities.

Our ultimate goal is to combine these techniques to rate the overall security quality of a given binary software.


Bayn Schrader

Implementation and Analysis of an Efficient Dual-Beam Radar-Communications Technique

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

MS Thesis Defense

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Fully digital arrays enable realization of dual-function radar-communications systems which generate multiple simultaneous transmit beams with different modulation structures in different spatial directions. These spatially diverse transmissions are produced by designing the individual waveforms transmitted at each antenna element such that they combine in the far field to synthesize the desired modulations in the specified directions. This thesis derives a look-up table (LUT) implementation of the existing Far-Field Radiated Emissions Design (FFRED) optimization framework. This LUT implementation requires a single optimization routine for a set of desired signals, rather than the previous implementation which required pulse-to-pulse optimization, making the LUT approach more efficient. The LUT is generated by representing the waveforms transmitted by each element in the array as a sequence of beamformers, where the LUT contains beamformers based on the phase difference between the desired signal modulations. The globally optimal beamformers, in terms of power efficiency, can be realized via the Lagrange dual problem for most beam locations and powers. The Phase-Attached Radar-Communications (PARC) waveform is selected for the communications waveform alongside a Linear Frequency Modulated (LFM) waveform for the radar signal. A set of FFRED LUTs are then used to simulate a radar transmission to verify the utility of the radar system. The same LUTs are then used to estimate the communications performance of a system with varying levels of array knowledge uncertainty.
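
A small numerical illustration of the underlying idea (per-element waveforms combining in the far field to form two beams with different modulations), assuming a uniform linear array with half-wavelength spacing; the FFRED optimization itself is not reproduced here, and the residual printed at the end is the cross-beam leakage such an optimization would control:

    import numpy as np

    N = 16                                    # elements, half-wavelength spacing
    t = np.arange(256)
    radar = np.exp(1j * np.pi * (t / len(t)) ** 2 * 40)    # toy LFM pulse
    comms = np.exp(1j * 0.3 * np.sin(2 * np.pi * t / 32))  # toy PM waveform

    def steer(theta_deg):
        n = np.arange(N)
        return np.exp(1j * np.pi * n * np.sin(np.radians(theta_deg)))

    # Two transmit beams: element n emits a weighted sum of both waveforms.
    w_r, w_c = steer(-20) / np.sqrt(N), steer(35) / np.sqrt(N)
    element_signals = np.outer(w_r.conj(), radar) + np.outer(w_c.conj(), comms)

    def far_field(theta_deg):
        # Coherent sum of the element emissions toward angle theta.
        return steer(theta_deg) @ element_signals

    # The radar beam direction recovers (approximately) the LFM modulation:
    err = np.linalg.norm(far_field(-20) - np.sqrt(N) * radar)
    print("residual at radar beam:", round(float(err), 3))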


Will Thomas

Static Analysis and Synthesis of Layered Attestation Protocols

When & Where:


Eaton Hall, Room 2001B

Degree Type:

PhD Comprehensive Defense

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Sankha Guria
Eileen Nutting

Abstract

Trust is a fundamental issue in computer security. Frequently, systems implicitly trust other systems, especially when configured by the same administrator. This fallacious reasoning stems from the belief that systems starting from a known, presumably good, state can be trusted. However, this only holds for boot-time behavior; most non-trivial systems change state over time, so runtime behavior is an important, oft-overlooked aspect of implicit trust in system security.

To address this, attestation was developed, allowing a system to provide evidence of its runtime behavior to a verifier. This evidence allows a verifier to make an explicit, informed decision about the system's trustworthiness. As systems grow more complex, scalable attestation mechanisms become increasingly important. To apply attestation to non-trivial systems, layered attestation was introduced, allowing attestations of individual components or layers to be combined into a unified report about overall system behavior. This approach enables more granular trust assessments and facilitates attestation in complex, multi-layered architectures. Given the complexity of layered attestation, discerning whether a given protocol sufficiently measures a system, is executable, or properly reports all measurements becomes increasingly challenging.

In this work, we will develop a framework for the static analysis and synthesis of layered attestation protocols, enabling more robust and adaptable attestation mechanisms for dynamic systems. A key focus will be the static verification of protocol correctness, ensuring the protocol behaves as intended and provides reliable evidence of the underlying system state. A type system will be added to the Copland layered attestation protocol description language to allow basic static checks, and extended static analysis techniques will be developed to verify more complex properties of protocols for a specific target system. Further, protocol synthesis will be explored, enabling the automatic generation of correct-by-construction protocols tailored to system requirements.
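
To give a flavor of the kind of basic static check such a type system enables, a toy evidence-flow checker over protocol terms whose shapes are loosely inspired by Copland (this is not Copland's actual syntax or semantics):

    # Toy protocol terms:
    #   ("asp", name)       take a measurement
    #   ("seq", t1, t2)     run t1 then t2, passing evidence along
    #   ("sig",)            sign the accumulated evidence

    def evidence_type(term, ev="initial"):
        # Walk the term, tracking the evidence each step produces; a static
        # checker can reject protocols whose evidence is never signed.
        kind = term[0]
        if kind == "asp":
            return ("measured", term[1], ev)
        if kind == "seq":
            return evidence_type(term[2], evidence_type(term[1], ev))
        if kind == "sig":
            return ("signed", ev)
        raise TypeError(f"unknown term: {kind}")

    proto = ("seq", ("asp", "measure_kernel"),
                    ("seq", ("asp", "measure_app"), ("sig",)))
    ev = evidence_type(proto)
    assert ev[0] == "signed", "evidence leaves the protocol unsigned"
    print(ev)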


David Felton

Optimization and Evaluation of Physical Complementary Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Comprehensive Defense

Committee Members:

Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

In high dynamic-range environments, matched-filter radar performance is often sidelobe-limited, with correlation error fundamentally constrained by the time-bandwidth product (TB) of the collective emission. To contend with the regulatory necessity of spectral containment, the gradient-based complementary-FM framework was developed to produce complementary sidelobe cancellation (CSC) after coherently combining responses from distinct pulses within a pulse-agile emission. In contrast to most complementary subsets, which were discovered via brute force under the notion of phase coding, these comp-FM waveform subsets achieve CSC while preserving hardware compatibility since they are FM. Although comp-FM addressed a primary limitation of complementary signals (i.e., hardware distortion), CSC hinges on the exact reconstruction of autocorrelation terms to suppress sidelobes, and this optimality is broken for Doppler-shifted signals. This work introduces a Doppler-generalized comp-FM (DG-comp-FM) framework that extends the cancellation condition to account for the anticipated unambiguous Doppler span after post-summing. While this framework is developed for use in a combine-before-Doppler processing manner, it can likewise be employed to design an entire coherent processing interval (CPI) to minimize range-sidelobe modulation (RSM) within the radar point-spread function (PSF), thereby introducing the potential for cognitive operation if sufficient scattering knowledge is available a priori.
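
The cancellation property at the heart of CSC is easy to see with a classical phase-coded Golay complementary pair (the thesis's comp-FM waveforms achieve the same cancellation with hardware-friendly FM structure). A quick numerical check:

    import numpy as np

    # Length-8 Golay complementary pair (binary phase codes).
    a = np.array([1, 1, 1, -1, 1, 1, -1, 1])
    b = np.array([1, 1, 1, -1, -1, -1, 1, -1])

    acf = lambda x: np.correlate(x, x, mode="full")
    combined = acf(a) + acf(b)
    print(acf(a))        # individual ACFs have nonzero sidelobes
    print(combined)      # sidelobes cancel: 2N at zero delay, 0 elsewhere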

Some radar systems operate with multiple emitters, as in the case of multiple-input multiple-output (MIMO) radar. Whereas a single emitter must contend only with its self-inflicted autocorrelation sidelobes, MIMO systems must likewise contend with cross-correlation against coincident (in time and spectrum) emissions from other emitters. As such, the determination of "orthogonal waveforms" comprises a large portion of research within the MIMO space, with a growing number of works recognizing that true orthogonality is not possible for band-limited signals (with the exclusion of TDMA). The notion of complementary-FM is proposed for exploration within a MIMO context, whereby coherently combining responses can achieve CSC as well as cross-correlation cancellation over a wide Doppler space. By effectively minimizing cross-correlation terms, this enables improved channel separation on receive as well as improved estimation capability due to reduced correlation error. Proposal items include further exploration and characterization of this space, incorporating an explicit spectral-containment constraint.


Jigyas Sharma

SEDPD: Sampling-Enhanced Differentially Private Defense against Backdoor Poisoning Attacks of Image Classification

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

MS Thesis Defense

Committee Members:

Han Wang, Chair
Drew Davidson
Dongjie Wang


Abstract

Recent advancements in explainable artificial intelligence (XAI) have brought significant transparency to machine learning by providing interpretable explanations alongside model predictions. However, this transparency has also introduced vulnerabilities, enhancing adversaries' insight into model decision processes and enabling explanation-guided attacks. In this thesis, we propose a robust, model-agnostic defense framework that mitigates these vulnerabilities by perturbing explanations while preserving the utility of XAI. Our framework employs a multinomial sampling approach that perturbs explanation values generated by techniques such as SHAP and LIME. These perturbations ensure differential privacy (DP) bounds, disrupting adversarial attempts to embed malicious triggers while maintaining explanation quality for legitimate users. To validate our defense, we introduce a threat model tailored to image classification tasks. By applying our defense framework, we train models with pixel-sampling strategies that integrate DP guarantees, enhancing robustness against backdoor poisoning attacks that exploit XAI. Extensive experiments on widely used datasets, such as CIFAR-10, MNIST, CIFAR-100, and Imagenette, and models, including ConvMixer and ResNet-50, show that our approach effectively mitigates explanation-guided attacks without compromising the accuracy of the model. We also evaluate our defense against other backdoor attacks, showing that the framework detects other types of backdoor triggers well. This work highlights the potential of DP in securing XAI systems and ensures safer deployment of machine learning models in real-world applications.
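
A rough sketch of the general idea of releasing a perturbed explanation via multinomial sampling over feature-attribution mass; the draw count and normalization here are illustrative, and the actual SEDPD mechanism calibrates sampling to meet formal DP bounds:

    import numpy as np

    def sample_explanation(attributions, k=100, seed=0):
        # Convert attribution magnitudes into a sampling distribution, then
        # release counts from k multinomial draws instead of exact values.
        rng = np.random.default_rng(seed)
        mass = np.abs(attributions)
        p = mass / mass.sum()
        counts = rng.multinomial(k, p)
        return counts / k            # noisy, normalized explanation

    shap_values = np.array([0.42, -0.11, 0.05, 0.30, -0.02])  # toy SHAP output
    print(sample_explanation(shap_values))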


Dimple Galla

Intelligent Application for Cold Email Generation: Business Outreach

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

Cold emailing remains an effective strategy for software service companies to improve organizational reach by acquiring clients, but generic emails often fail to get a response.

This project leverages Generative AI to automate cold email generation. It is built with the Llama-3.1 model and a Chroma vector database that supports semantic search over keywords in a job description, matching them against the project portfolio links of software service companies. The application automatically extracts technology-related job openings at Fortune 500 companies. Users can either select from these extracted job postings or manually enter the URL of a job posting, after which the system generates an email and sends it upon approval. Advanced techniques such as Chain-of-Thought prompting and few-shot learning were applied to improve relevance, making the emails more likely to receive a response. This AI-driven approach improves engagement and simplifies the business development process for software service companies.
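
A minimal sketch of the Chroma matching step with made-up portfolio entries (the collection name and metadata fields are illustrative, and embeddings default to Chroma's built-in model):

    import chromadb   # pip install chromadb

    client = chromadb.Client()                       # in-memory instance
    portfolio = client.create_collection("portfolio")
    portfolio.add(
        documents=["React and Node.js e-commerce build",
                   "Python ML pipeline for demand forecasting"],
        metadatas=[{"link": "https://example.com/react"},
                   {"link": "https://example.com/ml"}],
        ids=["p1", "p2"])

    # Query with skills extracted from a job description; the matched links
    # are injected into the email-generation prompt.
    hits = portfolio.query(query_texts=["machine learning engineer, Python"],
                           n_results=1)
    print(hits["metadatas"])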


Past Defense Notices


Durga Venkata Suraj Tedla

AI DIETICIAN

When & Where:


Zoom (https://kansas.zoom.us/j/84997733219) Meeting ID: 849 9773 3219 Passcode: 980685

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Jennifer Lohoefener


Abstract

The AI Dietician web application is an innovative piece of technology that uses artificial intelligence to offer individualized nutritional guidance and assistance. The application applies machine learning algorithms and natural language processing to provide users with personalized nutritional advice and help with meal planning, and can benefit anyone interested in improving their eating habits. Through interactive conversations, the system collects relevant data about users' dietary choices and calorie intake, provides insights into body mass index (BMI) and basal metabolic rate (BMR), and produces tailored recommendations. To enhance its predictive capacity, several classification methods, including naive Bayes, neural networks, random forests, and support vector machines, were trained and evaluated. Following this analysis, the most effective model, a random forest, was selected for incorporation into the application. The purpose of this study is to emphasize the significance of the AI Dietician web application as a versatile and intelligent instrument that encourages healthy eating habits and empowers users to make informed decisions about their dietary requirements.
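
For reference, the two reported quantities are simple closed-form calculations; this sketch uses the standard BMI definition and the Mifflin-St Jeor equation for BMR, one common choice, since the notice does not specify which formula the application uses:

    def bmi(weight_kg, height_m):
        # Body mass index: weight divided by height squared.
        return weight_kg / height_m ** 2

    def bmr_mifflin_st_jeor(weight_kg, height_cm, age_yr, sex):
        # Resting energy expenditure in kcal/day (Mifflin-St Jeor equation).
        base = 10 * weight_kg + 6.25 * height_cm - 5 * age_yr
        return base + (5 if sex == "male" else -161)

    print(round(bmi(70, 1.75), 1))                      # 22.9
    print(bmr_mifflin_st_jeor(70, 175, 30, "male"))     # 1648.75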


Zeyan Liu

On the Security of Modern AI: Backdoors, Robustness, and Detectability

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Dissertation Defense

Committee Members:

Bo Luo, Chair
Alex Bardas
Fengjun Li
Zijun Yao
John Symons

Abstract

The rapid development of AI has significantly impacted security and privacy, introducing both new cyber-attacks targeting AI models and challenges related to responsible use. As AI models become more widely adopted in real-world applications, attackers exploit adversarially altered samples to manipulate their behaviors and decisions. Simultaneously, the use of generative AI, like ChatGPT, has sparked debates about the integrity of AI-generated content.

In this dissertation, we investigate the security of modern AI systems and the detectability of AI-related threats, focusing on stealthy AI attacks and responsible AI use in academia. First, we reevaluate the stealthiness of 20 state-of-the-art attacks on six benchmark datasets, using 24 image quality metrics and over 30,000 user annotations. Our findings reveal that most attacks introduce noticeable perturbations, failing to remain stealthy. Motivated by this, we propose a novel model-poisoning neural Trojan, LoneNeuron, which minimally modifies the host neural network by adding a single neuron after the first convolution layer. LoneNeuron responds to feature-domain patterns that transform into invisible, sample-specific, and polymorphic pixel-domain watermarks, achieving a 100% attack success rate without compromising main task performance and enhancing stealth and detection resistance. Additionally, we examine the detectability of ChatGPT-generated content in academic writing. Presenting GPABench2, a dataset of over 2.8 million abstracts across various disciplines, we assess existing detection tools and the challenges faced by over 240 human evaluators. We also develop CheckGPT, a detection framework consisting of an attentive Bi-LSTM and a representation module, to capture subtle semantic and linguistic patterns in ChatGPT-generated text. Extensive experiments validate CheckGPT's high applicability, transferability, and robustness.
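
A skeletal PyTorch rendition of an attentive Bi-LSTM classifier in the spirit of the CheckGPT description (dimensions, pooling, and the input representation are illustrative; the actual architecture is detailed in the dissertation):

    import torch
    import torch.nn as nn

    class AttentiveBiLSTM(nn.Module):
        def __init__(self, embed_dim=768, hidden=256):
            super().__init__()
            self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True,
                                bidirectional=True)
            self.attn = nn.Linear(2 * hidden, 1)     # scores each token state
            self.head = nn.Linear(2 * hidden, 2)     # human vs. ChatGPT

        def forward(self, reps):                     # reps: (batch, seq, embed_dim)
            states, _ = self.lstm(reps)
            weights = torch.softmax(self.attn(states), dim=1)
            pooled = (weights * states).sum(dim=1)   # attention-weighted pooling
            return self.head(pooled)

    model = AttentiveBiLSTM()
    logits = model(torch.randn(4, 128, 768))         # e.g., LM token features
    print(logits.shape)                              # torch.Size([4, 2])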


Abishek Doodgaon

Photorealistic Synthetic Data Generation for Deep Learning-based Structural Health Monitoring of Concrete Dams

When & Where:


LEEP2, Room 1415A

Degree Type:

MS Thesis Defense

Committee Members:

Zijun Yao, Chair
Caroline Bennett
Prasad Kulkarni
Remy Lequesne

Abstract

Regular inspections are crucial for identifying and assessing damage in concrete dams, including a wide range of damage states. Manual inspections of dams are often constrained by cost, time, safety, and inaccessibility. Automating dam inspections using artificial intelligence has the potential to improve the efficiency and accuracy of data analysis. Computer vision and deep learning models have proven effective in detecting a variety of damage features using images, but their success relies on the availability of high-quality and diverse training data. This is because supervised learning, a common machine-learning approach for classification problems, uses labeled examples, in which each training data point includes features (damage images) and a corresponding label (pixel annotation). Unfortunately, public datasets of annotated images of concrete dam surfaces are scarce and inconsistent in quality, quantity, and representation.

To address this challenge, we present a novel approach that involves synthesizing a realistic environment using a 3D model of a dam. By overlaying this model with synthetically created photorealistic damage textures, we can render images to generate large and realistic datasets with high-fidelity annotations. Our pipeline uses NX and Blender for 3D model generation and assembly, Substance 3D Designer and the Substance Automation Toolkit for texture synthesis and automation, and Unreal Engine 5 for creating a realistic environment and rendering images. This generated synthetic data is then used to train deep learning models in the subsequent steps. The proposed approach offers several advantages. First, it allows generation of large quantities of data that are essential for training accurate deep learning models. Second, the texture synthesis ensures generation of high-fidelity ground truths (annotations) that are crucial for making accurate detections. Lastly, the automation capabilities of the software applications used in this process provide the flexibility to generate data with varied texture elements, colors, lighting conditions, and image quality, overcoming time constraints. Thus, the proposed approach can improve the automation of dam inspection by improving the quality and quantity of training data.
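
For the Blender stage, a tiny bpy sketch of overlaying a damage texture as a material (paths and object assumptions are placeholders; the actual pipeline drives texturing through the Substance 3D tools described above):

    import bpy   # run inside Blender's Python environment

    # Create a material that overlays a synthetic crack texture on the dam model.
    mat = bpy.data.materials.new(name="CrackDamage")
    mat.use_nodes = True
    nodes, links = mat.node_tree.nodes, mat.node_tree.links

    tex = nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images.load("//textures/crack_albedo.png")  # placeholder

    bsdf = nodes["Principled BSDF"]
    links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])

    # Assign to the active object (e.g., the imported dam surface mesh).
    bpy.context.active_object.data.materials.append(mat)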


Sana Awan

Towards Robust and Privacy-preserving Federated Learning

When & Where:


Zoom (ID: 935 5019 8870 Passcode: 323434)

Degree Type:

PhD Dissertation Defense

Committee Members:

Fengjun Li, Chair
Alex Bardas
Cuncong Zhong
Mei Liu
Haiyang Chao

Abstract

Machine Learning (ML) has revolutionized various fields, from disease prediction to credit risk evaluation, by harnessing abundant data scattered across diverse sources. However, transporting data to a trusted server for centralized ML model training is not only costly but also raises privacy concerns, particularly with legislative standards like HIPAA in place. In response to these challenges, Federated Learning (FL) has emerged as a promising solution. FL involves training a collaborative model across a network of clients, each retaining its own private data. By conducting training locally on the participating clients, this approach eliminates the need to transfer entire training datasets while harnessing their computation capabilities. However, FL introduces unique privacy risks, security concerns, and robustness challenges. Firstly, FL is susceptible to malicious actors who may tamper with local data, manipulate the local training process, or intercept the shared model or gradients to implant backdoors that affect the robustness of the joint model. Secondly, due to the statistical and system heterogeneity within FL, substantial differences exist between the distribution of each local dataset and the global distribution, causing clients’ local objectives to deviate greatly from the global optima, resulting in a drift in local updates. Addressing such vulnerabilities and challenges is crucial before deploying FL systems in critical infrastructures.

In this dissertation, we present a multi-pronged approach to address the privacy, security, and robustness challenges in FL. This involves designing innovative privacy protection mechanisms and robust aggregation schemes to counter attacks during the training process. To address the privacy risk due to model or gradient interception, we present the design of a reliable and accountable blockchain-enabled privacy-preserving federated learning (PPFL) framework which leverages homomorphic encryption to protect individual client updates. The blockchain is adopted to support provenance of model updates during training so that malformed or malicious updates can be identified and traced back to the source. 

We studied the challenges in FL due to heterogeneous data distributions and found that existing FL algorithms often suffer from slow and unstable convergence and are vulnerable to poisoning attacks, particularly in extreme non-independent and identically distributed (non-IID) settings. We propose a robust aggregation scheme, named CONTRA, to mitigate data poisoning attacks and ensure an accuracy guarantee even under attack. This defense strategy identifies malicious clients by evaluating the cosine similarity of their gradient contributions and subsequently removes them from FL training. Finally, we introduce FL-GMM, an algorithm designed to tackle data heterogeneity while prioritizing privacy. It iteratively constructs a personalized classifier for each client while aligning local-global feature representations. By aligning local distributions with global semantic information, FL-GMM minimizes the impact of data diversity. Moreover, FL-GMM enhances security by transmitting derived model parameters via secure multiparty computation, thereby avoiding vulnerabilities to reconstruction attacks observed in other approaches. 
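
The gradient-similarity idea behind CONTRA can be sketched in a few lines of numpy (the actual scheme additionally uses reputation weighting and adaptive learning rates; the threshold here is illustrative):

    import numpy as np

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

    def filter_and_average(updates, threshold=0.9):
        # Flag clients whose gradients align suspiciously closely with another
        # client's, a signature of coordinated poisoning; average the rest.
        n = len(updates)
        align = [max(cosine(updates[i], updates[j])
                     for j in range(n) if j != i) for i in range(n)]
        keep = [u for u, a in zip(updates, align) if a < threshold]
        return np.mean(keep, axis=0), align

    rng = np.random.default_rng(1)
    honest = [rng.normal(0, 1, 10) for _ in range(8)]
    attack = rng.normal(0, 1, 10)
    updates = honest + [attack.copy(), attack.copy()]   # two colluding clients
    agg, align = filter_and_average(updates)
    print(np.round(align, 2))    # the colluders stand out with alignment ~1.0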


Arin Dutta

Performance Analysis of Distributed Raman Amplification with Dual-Order Forward Pumping

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Degree Type:

PhD Comprehensive Defense

Committee Members:

Rongqing Hui, Chair
Christopher Allen
Morteza Hashemi
Alessandro Salandrino
Hui Zhao

Abstract

As internet services like high-definition videos, cloud computing, and artificial intelligence keep growing, optical networks need to keep up with the demand for more capacity. Optical amplifiers play a crucial role in offsetting fiber loss and enabling long-distance wavelength division multiplexing (WDM) transmission in high-capacity systems. Various methods have been proposed to enhance the capacity and reach of fiber communication systems, including advanced modulation formats, dense wavelength division multiplexing (DWDM) over ultra-wide bands, space-division multiplexing, and high-performance digital signal processing (DSP) technologies. To sustain higher data rates while maximizing the spectral efficiency of multi-level modulated signals, a higher optical signal-to-noise ratio (OSNR) is necessary. Despite advancements in coherent optical communication systems, the spectral efficiency of multi-level modulated signals is ultimately constrained by fiber nonlinearity.

Raman amplification is an attractive solution for wide-band amplification with low noise figures in multi-band systems. Distributed Raman Amplification (DRA) has been deployed in recent high-capacity transmission experiments to achieve a relatively flat signal power distribution along the optical path and offers the unique advantage of using conventional low-loss silica fibers as the gain medium, effectively transforming passive optical fibers into active or amplifying waveguides. Additionally, DRA provides gain at any wavelength by selecting the appropriate pump wavelength, enabling operation in signal bands outside the Erbium-doped fiber amplifier (EDFA) bands. Forward (FW) Raman pumping in DRA can be adopted to further improve the DRA performance as it is more efficient in OSNR improvement because the optical noise is generated near the beginning of the fiber span and attenuated along the fiber. Dual-order FW pumping helps to reduce the non-linear effect of the optical signal and improves OSNR by more uniformly distributing the Raman gain along the transmission span.

The major concern with Forward Distributed Raman Amplification (FW DRA) is the fluctuation in pump power, known as relative intensity noise (RIN), which transfers from the pump laser to both the intensity and phase of the transmitted optical signal as they propagate in the same direction. Additionally, another concern of FW DRA is the rise in signal optical power near the start of the fiber span, leading to an increase in the Kerr-effect-induced non-linear phase shift of the signal. These factors, including RIN transfer-induced noise and non-linear noise, contribute to the degradation of the system performance in FW DRA systems at the receiver. As the performance of DRA with backward pumping is well understood with a relatively low impact of RIN transfer, our study is focused on the FW pumping scheme.

Our research is intended to provide a comprehensive analysis of the system performance impact of dual-order FW Raman pumping, including signal intensity and phase noise induced by the RINs of both the 1st and the 2nd order pump lasers, as well as the impacts of linear and nonlinear noise. The efficiencies of pump RIN to signal intensity and phase noise transfer are theoretically analyzed and experimentally verified by applying a shallow intensity modulation to the pump laser to mimic the RIN. The results indicate that the efficiency of the 2nd order pump RIN to signal phase noise transfer can be more than 2 orders of magnitude higher than that from the 1st order pump. Then the performance of the dual-order FW Raman configurations is compared with that of single-order Raman pumping to understand the trade-offs of system parameters. The nonlinear interference (NLI) noise is analyzed to study the overall OSNR improvement when employing a 2nd order Raman pump.

Finally, a DWDM system with 16-QAM modulation is used as an example to investigate the benefit of DRA with dual-order Raman pumping and with different pump RIN levels. We also consider a DRA system using a 1st order incoherent pump together with a 2nd order coherent pump. Although dual-order FW pumping corresponds to a slight increase of linear amplified spontaneous emission (ASE) compared to using only a 1st order pump, its major advantage comes from the reduction of nonlinear interference noise in a DWDM system. Because the RIN of the 2nd order pump has a much higher impact than that of the 1st order pump, a more stringent requirement should be placed on the RIN of the 2nd order pump laser when the dual-order FW pumping scheme is used for DRA in efficient fiber-optic communication. Also, the system performance analysis reveals that higher-baud-rate systems, like those operating at 100 Gbaud, are less affected by pump laser RIN due to the low-pass characteristics of the transfer of pump RIN to signal phase noise.


Babak Badnava

Joint Communication and Computation for Emerging Applications in Next Generation of Wireless Networks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Comprehensive Defense

Committee Members:

Morteza Hashemi, Chair
Taejoon Kim
Prasad Kulkarni
Shawn Keshmiri

Abstract

Emerging applications in next-generation wireless networks are driving the need for innovative communication and computation systems. Notable examples include augmented and virtual reality (AR/VR), autonomous vehicles, and mobile edge computing, all of which demand significant computational and communication resources at the network edge. These demands place a strain on edge devices, which are often resource-constrained. In order to incorporate available communication and computation resources, while enhancing user experience, this PhD research is dedicated to developing joint communication and computation solutions for next generation wireless applications that could potentially operate in high frequencies such as millimeter wave (mmWave) bands.

In the first thrust of this study, we examine the problem of energy-constrained computation offloading to edge servers in a multi-user multi-channel wireless network. To develop a decentralized offloading policy for each user, we model the problem as a partially observable Markov decision process (POMDP). Leveraging bandit learning methods, we introduce a decentralized task offloading solution, where edge users offload their computation tasks to a nearby edge server using a selected communication channel. The proposed framework aims to meet users' requirements, such as task completion deadlines and computation throughput (i.e., the rate at which computational results are produced).
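
A toy epsilon-greedy bandit over (server, channel) arms conveys the decentralized offloading flavor described above; the latencies are synthetic, and the actual work uses a POMDP formulation with richer state:

    import random

    arms = [(s, c) for s in ("edge0", "edge1") for c in ("ch0", "ch1")]
    est = {a: 0.0 for a in arms}       # estimated task completion time per arm
    count = {a: 0 for a in arms}

    def latency(arm):                  # synthetic environment feedback
        base = {"edge0": 0.4, "edge1": 0.7}[arm[0]]
        chan = {"ch0": 0.1, "ch1": 0.3}[arm[1]]
        return base + chan + random.gauss(0, 0.05)

    random.seed(0)
    for t in range(500):
        if random.random() < 0.1:                      # explore
            arm = random.choice(arms)
        else:                                          # exploit best estimate
            arm = min(est, key=est.get)
        obs = latency(arm)
        count[arm] += 1
        est[arm] += (obs - est[arm]) / count[arm]      # running-mean update

    print(min(est, key=est.get))       # converges to ("edge0", "ch0")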

The second thrust of the study emphasizes user-driven requirements for these resource-intensive applications, specifically the Quality of Experience (QoE) in 2D and 3D video streaming. Given the unique characteristics of mmWave networks, we develop a beam alignment and buffer predictive multi-user scheduling algorithm for 2D video streaming applications. This scheduling algorithm balances the trade-off between beam alignment overhead and playback buffer levels for optimal resource allocation across users. Next, we extend our investigation and develop a joint rate adaptation and computation distribution algorithm for 3D video streaming in mmWave-based VR systems. Our proposed framework balances the trade-off between communication and computation resource allocation to enhance the users' QoE. Our numerical results, using real-world mmWave traces and a 3D video dataset, show promising improvements in terms of video quality, rebuffering time, and quality variation perceived by users.


Arman Ghasemi

Task-Oriented Communication and Distributed Control in Smart Grids with Time-Series Forecasting

When & Where:


Nichols Hall, Room 246

Degree Type:

PhD Comprehensive Defense

Committee Members:

Morteza Hashemi, Chair
Alex Bardas
Taejoon Kim
Prasad Kulkarni
Zsolt Talata

Abstract

Smart grids face challenges in maintaining the balance between generation and consumption at the residential and grid scales with the integration of renewable energy resources. Decentralized, dynamic, and distributed control algorithms are necessary for smart grids to function effectively. The inherent variability and uncertainty of renewables, especially wind and solar energy, complicate the deployment of distributed control algorithms in smart grids. In addition, smart grid systems must handle real-time data collected from interconnected devices and sensors while maintaining reliable and secure communication regardless of network failures. To address these challenges, our research models the integration of renewable energy resources into the smart grid and evaluates how predictive analytics can improve distributed control and energy management, while recognizing the limitations of communication channels and networks.

In the first thrust of this research, we develop a model of a smart grid with renewable energy integration and evaluate how forecasting affects distributed control and energy management. In particular, we investigate how contextual weather information and renewable energy time-series forecasting affect smart grid energy management. In addition to modeling the smart grid system and integrating renewable energy resources, we further explore the use of deep learning methods, such as the Long Short-Term Memory (LSTM) and Transformer models, for time-series forecasting. Time-series forecasting techniques are applied within Reinforcement Learning (RL) frameworks to enhance decision-making processes.
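
A compact PyTorch LSTM forecaster of the kind the first thrust applies to renewable generation series; the window size, model dimensions, and toy solar-like data are stand-ins:

    import torch
    import torch.nn as nn

    class Forecaster(nn.Module):
        def __init__(self, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(1, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, x):             # x: (batch, window, 1)
            out, _ = self.lstm(x)
            return self.head(out[:, -1])  # predict the next step

    # Toy solar-like series: daily periodicity (24 steps) plus noise.
    t = torch.arange(0, 200, dtype=torch.float32)
    series = torch.clamp(torch.sin(2 * torch.pi * t / 24), min=0) \
             + 0.05 * torch.randn_like(t)
    window = 24
    X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:].unsqueeze(1)

    model = Forecaster()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for epoch in range(50):
        pred = model(X.unsqueeze(-1))
        loss = nn.functional.mse_loss(pred, y)
        opt.zero_grad(); loss.backward(); opt.step()
    print(float(loss))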

In the second thrust, we note that data collection and sharing across smart grids require considering the impact of network and communication channel limitations in our forecasting models. As renewable energy sources and advanced sensors are integrated into smart grids, wireless communication channels are flooded with data, requiring a shift from transmitting raw data to processing only useful information to maximize efficiency and reliability. To this end, we develop a task-oriented communication model that integrates data compression and the effects of data-packet queuing, subject to communication channel limitations, within a remote time-series forecasting framework. Furthermore, we jointly integrate data compression with the age-of-information metric to enhance both the relevance and timeliness of the data used in time-series forecasting.


Neel Patel

Near-Memory Acceleration of Compressed Far Memory

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Degree Type:

MS Thesis Defense

Committee Members:

Mohammad Alian, Chair
David Johnson
Prasad Kulkarni


Abstract

DRAM constitutes over 50% of server cost and 75% of the embodied carbon footprint of a server. To mitigate DRAM cost, far memory architectures have emerged. They can be separated into two broad categories: software-defined far memory (SFM) and disaggregated far memory (DFM). In this work, we compare the cost of SFM and DFM in terms of their required capital investment, operational expense, and carbon footprint. We show that, for applications whose data sets are compressible and have predictable memory access patterns, it takes several years for a DFM to break even with an equivalent capacity SFM in terms of cost and sustainability. We then introduce XFM, a near-memory accelerated SFM architecture, which exploits the coldness of data during SFM-initiated swap ins and outs. XFM leverages refresh cycles to seamlessly switch the access control of DRAM between the CPU and near-memory accelerator. XFM parallelizes near-memory accelerator accesses with row refreshes and removes the memory interference caused by SFM swap ins and outs.

We modify an open-source far memory implementation to implement a full-stack, user-level XFM. Our experimental results use a combination of an FPGA implementation, simulation, and analytical modeling to show that XFM eliminates memory bandwidth utilization when performing compression and decompression operations with SFMs of capacities up to 1TB. The memory and cache utilization reductions translate to a 5-27% improvement in the combined performance of co-running applications.


Durga Venkata Suraj Tedla

Blockchain-Based Inter-Organization File-Sharing System

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Drew Davidson
Sankha Guria


Abstract

A coalition of companies collaborates and shares information to improve their collective operations. Distributed trust and transparency cannot be achieved with centralized file-sharing platforms, whereas blockchain technology allows files to be shared transparently and securely. This project proposes an inter-organizational secure file-sharing system based on blockchain technology, which a coalition can use to share files securely in a distributed manner. Smart contracts are created and the blockchain network configured using Hyperledger Fabric, an enterprise blockchain platform, and distributed file storage is provided by the InterPlanetary File System (IPFS).

The project documents the workflow for the file-sharing and identity-management procedures. Using blockchain technology, the proposed approach enables a group of enterprises to exchange files safely, with guarantees of availability, integrity, and confidentiality. Files are fully encrypted; the encrypted file is kept on distributed storage while its metadata is recorded on the blockchain ledger, and the blockchain provides tamper-resistant storage for each shared file's content ID.
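
A minimal sketch of the IPFS leg of the design; the ipfshttpclient package and a locally running IPFS daemon are assumptions, and the encryption and Fabric chaincode steps are elided:

    import ipfshttpclient   # pip install ipfshttpclient; requires an IPFS daemon

    # Add an (already encrypted) file to distributed storage.
    with ipfshttpclient.connect() as client:
        res = client.add("report.enc")
        cid = res["Hash"]            # content ID, tamper-evident by construction
        print("store this CID on the Fabric ledger:", cid)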


Dang Qua Nguyen

Hybrid Precoding Optimization and Private Federated Learning for Future Wireless Systems

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Comprehensive Defense

Committee Members:

Taejoon Kim, Chair
Morteza Hashemi
Erik Perrins
Zijun Yao
KC Kong

Abstract

This PhD research addresses two challenges in future wireless systems: hybrid precoder design for sub-Terahertz (sub-THz) massive multiple-input multiple-output (MIMO) communications and private federated learning (FL) over wireless channels. The first part of the research introduces a novel hybrid precoding framework that combines true-time delay (TTD) and phase shifter (PS) precoders to counteract the beam squint effect, a significant challenge in sub-THz massive MIMO systems that leads to considerable loss in array gain.
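
The beam squint problem can be stated compactly with standard array-processing relations (notation ours, not the dissertation's): for element n at spacing d, ideal steering toward angle theta requires a phase that grows with frequency, which a phase shifter cannot provide but a TTD element can.

    \phi_n(f) = 2\pi f \,\frac{n d \sin\theta}{c}
        % required per-element phase at frequency f
    \phi_n^{\mathrm{PS}} = 2\pi f_c \,\frac{n d \sin\theta}{c}
        % phase shifter: frequency-flat, exact only at the carrier f_c,
        % so beams squint away from \theta as f deviates across a wide band
    \phi_n^{\mathrm{TTD}}(f) = 2\pi f \tau_n, \qquad
        \tau_n = \frac{n d \sin\theta}{c}
        % true-time delay: matches the required phase at every frequency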

Our research presents a novel joint optimization framework for the TTD and PS precoder design, incorporating realistic time delay constraints for each TTD device. We first derive a lower bound on the achievable rate of the system and show that, in the asymptotic regime, the optimal analog precoder that fully compensates for the beam squint is equivalent to the one that maximizes this lower bound. Unlike previous methods, our framework does not rely on the unbounded time delay assumption and optimizes the TTD and PS values jointly to cope with the practical limitations. Furthermore, we determine the minimum number of TTD devices needed to reach a target array gain using our proposed approach.

Simulations validate that the proposed approach demonstrates performance enhancement, ensures array gain, and achieves computational efficiency. In the second part, the research devises a differentially private FL algorithm that employs time-varying noise perturbation and optimizes transmit power to counteract privacy risks, particularly those stemming from engineering-inversion attacks. This method harnesses inherent wireless channel noise to strike a balance between privacy protection and learning utility. By strategically designing noise perturbation and power control, our approach not only safeguards user privacy but also upholds the quality of the learned FL model. Additionally, the number of FL iterations is optimized by minimizing the upper bound on the learning error. We conduct simulations to showcase the effectiveness of our approach in terms of DP guarantee and learning utility.