I2S Master's and Doctoral Theses


All students and faculty are welcome to attend the final defense of I2S graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Zhaohui Wang

Detection and Mitigation of Cross-App Privacy Leakage and Interaction Threats in IoT Automation

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Degree Type:

PhD Dissertation Defense

Committee Members:

Fengjun Li, Chair
Alex Bardas
Drew Davidson
Bo Luo
Haiyang Chao

Abstract

The rapid growth of Internet of Things (IoT) technology has brought unprecedented convenience to everyday life, enabling users to deploy automation rules and develop IoT apps tailored to their specific needs. However, modern IoT ecosystems consist of numerous devices, applications, and platforms that interact continuously. As a result, users are increasingly exposed to complex and subtle security and privacy risks that are difficult to fully comprehend. Even interactions among seemingly harmless apps can introduce unforeseen security and privacy threats. In addition, violations of memory integrity can undermine the security guarantees on which IoT apps rely. This dissertation addresses these challenges through analyses of cross-app privacy leakage, cross-app interaction threats, and the integrity of the execution environment that supports IoT apps.

The first approach investigates hidden cross-app privacy leakage risks in IoT apps. These risks arise from cross-app interaction chains formed among multiple seemingly benign IoT apps. Our analysis reveals that interactions between apps can expose sensitive information such as user identity, location, tracking data, and activity patterns. We quantify these privacy leaks by assigning probability scores that reflect the likelihood of each inference, allowing risk levels to be evaluated. In addition, we provide a fine-grained categorization of privacy threats to generate detailed alerts, enabling users to better understand and address specific privacy risks.

The second approach addresses cross-app interaction threats in IoT automation systems by leveraging a logic-based analysis model grounded in event relations. We formalize event relationships, detect event interferences, and classify rule conflicts, then generate risk scores and conflict rankings to enable comprehensive conflict detection and risk assessment. To mitigate the identified interaction threats, an optimization-based approach is employed to reduce risks while preserving system functionality. This approach ensures comprehensive coverage of cross-app interaction threats and provides a robust solution for detecting and resolving rule conflicts in IoT environments.
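To make the idea concrete, a rule-conflict and interaction-chain check of this kind can be sketched over a simplified trigger-action rule representation; the data model and helper names below are illustrative assumptions, not the dissertation's actual formalism.

```python
# Minimal sketch of detecting conflicts and interaction chains between
# trigger-action automation rules. The Rule representation and conditions
# are simplified illustrations, not the dissertation's model.
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Rule:
    trigger: str   # event that fires the rule, e.g. "motion_detected"
    device: str    # device acted upon, e.g. "hallway_light"
    action: str    # command issued, e.g. "on" / "off"

def conflicts(r1: Rule, r2: Rule) -> bool:
    """Two rules conflict if the same trigger drives opposite actions on one device."""
    return r1.trigger == r2.trigger and r1.device == r2.device and r1.action != r2.action

def chains(r1: Rule, r2: Rule, effect_of: dict) -> bool:
    """r1 chains into r2 if r1's action produces an event that triggers r2."""
    return effect_of.get((r1.device, r1.action)) == r2.trigger

rules = [
    Rule("motion_detected", "hallway_light", "on"),
    Rule("motion_detected", "hallway_light", "off"),
    Rule("luminance_high", "blinds", "close"),
]
effects = {("hallway_light", "on"): "luminance_high"}  # hypothetical device-effect map

for a, b in combinations(rules, 2):
    if conflicts(a, b):
        print("conflict:", a, "<->", b)
    if chains(a, b, effects) or chains(b, a, effects):
        print("interaction chain:", a, "->", b)
```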

To support the development and rigorous evaluation of these security analyses, we further developed a large-scale, manually verified, and comprehensive dataset of real-world IoT apps. This clean and diverse benchmark dataset supports the development and validation of IoT security and privacy solutions. All proposed approaches are evaluated using this dataset of real-world apps, collectively offering valuable insights and practical tools for enhancing IoT security and privacy against cross-app threats. Furthermore, we examine the integrity of the execution environment that supports IoT apps. We show that, even under non-privileged execution, carefully crafted memory access patterns can induce bit flips in physical memory, allowing attackers to corrupt data and compromise system integrity without requiring elevated privileges.


Shawn Robertson

A Low-Power Low-Throughput Communications Solution for At-Risk Populations in Resource Constrained Contested Environments

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Dissertation Defense

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
Shawn Keshmiri

Abstract

In resource‑constrained contested environments (RCCEs), communications are routinely censored, surveilled, or disrupted by nation‑state adversaries, leaving at‑risk populations—including protesters, dissidents, disaster‑affected communities, and military units—without secure connectivity. This dissertation introduces MeshBLanket, a Bluetooth Mesh‑based framework designed for low‑power, low‑throughput messaging with minimal electromagnetic spectrum (EMS) exposure. Built on commercial off‑the‑shelf hardware, MeshBLanket extends the Bluetooth Mesh specification with automated provisioning and network‑wide key refresh to enhance scalability and resilience.
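For context, the Bluetooth Mesh specification's Key Refresh procedure moves a network through three phases (distribute the new key, switch transmission to it, then revoke the old key). The sketch below models that state machine in simplified form; the class and method names are illustrative and are not MeshBLanket's code.

```python
# Simplified state machine for the Bluetooth Mesh Key Refresh procedure
# (Phase 1: distribute new key; Phase 2: transmit with new key, accept both;
# Phase 3: revoke old key). Illustrative only; not MeshBLanket's implementation.
class KeyRefresh:
    def __init__(self, old_key: bytes):
        self.tx_key = old_key
        self.rx_keys = {old_key}
        self.phase = 0

    def distribute(self, new_key: bytes):   # Phase 1: nodes hold both keys
        self.rx_keys.add(new_key)
        self.new_key, self.phase = new_key, 1

    def switch(self):                        # Phase 2: transmit with the new key
        self.tx_key, self.phase = self.new_key, 2

    def revoke_old(self):                    # Phase 3: drop the old key, back to normal
        self.rx_keys = {self.new_key}
        self.phase = 0

kr = KeyRefresh(b"old-netkey")
kr.distribute(b"new-netkey"); kr.switch(); kr.revoke_old()
print(kr.tx_key, kr.rx_keys)
```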

We evaluated MeshBLanket through field experimentation (range, throughput, battery life, and security enhancements) and qualitative interviews with ten senior U.S. Army communications experts. Thematic analysis revealed priorities of availability, EMS footprint reduction, and simplicity of use, alongside adoption challenges and institutional skepticism. Results demonstrate that MeshBLanket maintains secure messaging under load, supports autonomous key refresh, and offers operational relevance at the forward edge of battlefields.

Beyond military contexts, parallels with protest environments highlight MeshBLanket’s broader applicability for civilian populations facing censorship and surveillance. By unifying technical experimentation with expert perspectives, this work contributes a proof‑of‑concept communications architecture that advances secure, resilient, and user‑centric connectivity in environments where traditional infrastructure is compromised or weaponized.




Past Defense Notices


Agraj Magotra

Data-Driven Insights into Sustainability: An Artificial Intelligence (AI) Powered Analysis of ESG Practices in the Textile and Apparel Industry

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

Sumaiya Shomaji, Chair
Prasad Kulkarni
Zijun Yao


Abstract

The global textile and apparel (T&A) industry is under growing scrutiny for its substantial environmental and social impact, producing 92 million tons of waste annually and contributing to 20% of global water pollution. In Bangladesh, one of the world's largest apparel exporters, the integration of Environmental, Social, and Governance (ESG) practices is critical to meet international sustainability standards and maintain global competitiveness. This master's study leverages Artificial Intelligence (AI) and Machine Learning (ML) methodologies to comprehensively analyze unstructured corporate data related to ESG practices among LEED-certified Bangladeshi T&A factories.

Our study employs advanced techniques, including Web Scraping, Natural Language Processing (NLP), and Topic Modeling, to extract and analyze sustainability-related information from factory websites. We develop a robust ML framework that utilizes Non-Negative Matrix Factorization (NMF) for topic extraction and a Random Forest classifier for ESG category prediction, achieving an 86% classification accuracy. The study uncovers four key ESG themes: Environmental Sustainability, Social: Workplace Safety and Compliance, Social: Education and Community Programs, and Governance. The analysis reveals that 46% of factories prioritize environmental initiatives, such as energy conservation and waste management, while 44% emphasize social aspects, including workplace safety and education. Governance practices are significantly underrepresented, with only 10% of companies addressing ethical governance, healthcare provisions, and employee welfare.
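In outline, an NMF-plus-Random-Forest pipeline of the kind described can be put together with scikit-learn as below; the documents, labels, and hyperparameters are placeholders, not the study's data or settings.

```python
# Generic sketch of the NMF topic extraction + Random Forest classification pipeline.
# Placeholder documents/labels; not the study's data, features, or hyperparameters.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF
from sklearn.ensemble import RandomForestClassifier

docs = ["solar panels cut factory energy use", "fire safety training for workers",
        "scholarships for employees' children", "board adopts anti-corruption policy"]
labels = ["Environmental", "Social: Safety", "Social: Education", "Governance"]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)

nmf = NMF(n_components=4, random_state=0)     # four ESG themes, as in the study
topics = nmf.fit_transform(X)                 # document-topic weights used as features

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(topics, labels)
print(clf.predict(topics))                    # predicted ESG category per document
```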

To deepen our understanding of the ESG themes, we conducted a Centrality Analysis to identify the most influential keywords within each category, using measures such as degree, closeness, and eigenvector centrality. Furthermore, our analysis reveals that higher certification levels, like Platinum, are associated with a more balanced emphasis on environmental, social, and governance practices, while lower levels focus primarily on environmental efforts. These insights highlight key areas where the industry can improve and inform targeted strategies for enhancing ESG practices. Overall, this ML framework provides a data-driven, scalable approach for analyzing unstructured corporate data and promoting sustainability in Bangladesh’s T&A sector, offering actionable recommendations for industry stakeholders, policymakers, and global brands committed to responsible sourcing.
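The centrality measures mentioned above (degree, closeness, eigenvector) can be illustrated on a toy keyword co-occurrence graph; the edges below are invented for illustration and do not come from the study.

```python
# Toy keyword co-occurrence graph to illustrate the centrality measures used
# (degree, closeness, eigenvector). Edges are invented for illustration only.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("energy", "waste"), ("energy", "water"), ("waste", "recycling"),
                  ("safety", "training"), ("safety", "compliance"), ("energy", "safety")])

for name, scores in [("degree", nx.degree_centrality(G)),
                     ("closeness", nx.closeness_centrality(G)),
                     ("eigenvector", nx.eigenvector_centrality(G, max_iter=1000))]:
    top = max(scores, key=scores.get)
    print(f"{name:11s} -> most central keyword: {top} ({scores[top]:.2f})")
```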


Samyoga Bhattarai

Pro-ID: A Secure Face Recognition System using Locality Sensitive Hashing to Protect Human ID

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
Hongyang Sun


Abstract

Face recognition systems are widely used in applications ranging from mobile banking apps to personal smartphones. However, these systems often store biometric templates in raw form, posing significant security and privacy risks. Pro-ID addresses this vulnerability by incorporating SimHash, a Locality Sensitive Hashing (LSH) algorithm, to create secure and irreversible hash codes of facial feature vectors. Unlike traditional methods that leave raw data exposed to potential breaches, SimHash transforms the feature space into high-dimensional hash codes, safeguarding user identity while preserving system functionality.
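To illustrate the underlying idea, a random-hyperplane SimHash over feature vectors can be sketched as follows; the dimensions and data are arbitrary and this is not the Pro-ID implementation.

```python
# Random-hyperplane SimHash sketch: the sign of projections onto fixed random
# hyperplanes yields a binary code; similar feature vectors agree on many bits.
# Dimensions and thresholds are arbitrary; this is not the Pro-ID implementation.
import numpy as np

rng = np.random.default_rng(0)
D, BITS = 128, 256                       # feature dimension, hash length
planes = rng.standard_normal((BITS, D))  # fixed random hyperplanes

def simhash(feature: np.ndarray) -> np.ndarray:
    return (planes @ feature > 0).astype(np.uint8)   # irreversible binary code

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

enrolled = rng.standard_normal(D)                  # stand-in for a face embedding
probe = enrolled + 0.1 * rng.standard_normal(D)    # slightly perturbed genuine probe
impostor = rng.standard_normal(D)

print("genuine distance:", hamming(simhash(enrolled), simhash(probe)))
print("impostor distance:", hamming(simhash(enrolled), simhash(impostor)))
```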

The proposed system balances two competing aspects: security and recognition performance. Additionally, the system is designed to resist common attacks, including brute force and template inversion, ensuring that even if the hashed templates are exposed, the original biometric data cannot be reconstructed.

A key challenge addressed in this project is minimizing the trade-off between security and performance. Extensive evaluations demonstrate that the proposed method maintains competitive accuracy rates comparable to traditional face recognition systems while significantly enhancing security metrics such as irreversibility, unlinkability, and revocability. This innovative approach contributes to advancing the reliability and trustworthiness of biometric systems, providing a secure framework for applications in face recognition systems. 


Shalmoli Ghosh

High-Power Fabry-Perot Quantum-Well Laser Diodes for Application in Multi-Channel Coherent Optical Communication Systems

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

MS Thesis Defense

Committee Members:

Rongqing Hui, Chair
Shannon Blunt
James Stiles


Abstract

Wavelength Division Multiplexing (WDM) is essential for managing rapid network traffic growth in fiber optic systems. Each WDM channel demands a narrow-linewidth, frequency-stabilized laser diode, leading to complexity and increased energy consumption. Multi-wavelength laser sources, generating optical frequency combs (OFC), offer an attractive solution, enabling a single laser diode to provide numerous equally spaced spectral lines for enhanced bandwidth efficiency.

Quantum-dot and quantum-dash OFCs provide phase-synchronized lines with low relative intensity noise (RIN), while Quantum Well (QW) OFCs offer higher power efficiency but exhibit higher RIN in the low-frequency region up to 2 GHz. In both quantum-dot/dash and QW-based OFCs, however, individual spectral lines exhibit high phase noise, limiting coherent detection. Output power levels of these OFCs range between 1 and 20 mW, and the power of each spectral line is typically less than -5 dBm. Consequently, these OFCs require substantial optical amplification, and each spectral line has a relatively broad linewidth, owing to the inverse relationship between optical power and linewidth given by the Schawlow-Townes formula. This constraint hampers their applicability in coherent detection systems, highlighting a challenge for achieving high-performance optical communication.
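For reference, the modified Schawlow-Townes relation alluded to above ties a laser line's intrinsic linewidth inversely to its output power; one common form (exact prefactors vary by derivation) is

\[
\Delta\nu_{\mathrm{ST}} \;=\; \frac{\pi\, h\nu\, (\Delta\nu_c)^2\, n_{sp}}{P_{\mathrm{out}}}\,(1+\alpha^2) \;\;\propto\;\; \frac{1}{P_{\mathrm{out}}},
\]

where $h\nu$ is the photon energy, $\Delta\nu_c$ the cold-cavity linewidth, $n_{sp}$ the spontaneous-emission factor, $\alpha$ the linewidth-enhancement factor, and $P_{\mathrm{out}}$ the output power of the spectral line.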

In this work, coherent system application of a single-section Quantum-Well Fabry-Perot (FP) laser diode is demonstrated. This laser delivers over 120 mW optical power at the fiber pigtail with a mode spacing of 36.14 GHz. In an experimental setup, 20 spectral lines from a single laser transmitter carry 30 GBaud 16-QAM signals over 78.3 km single-mode fiber, achieving significant data transmission rates. With the potential to support a transmission capacity of 2.15 Tb/s (4.3 Tb/s for dual polarization) per transmitter, including Forward Error Correction (FEC) and maintenance overhead, it offers a promising solution for meeting the escalating demands of modern network traffic efficiently.
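As a back-of-the-envelope check of the quoted capacity (this arithmetic is not taken from the thesis):

\[
20 \ \text{lines} \times 30\ \mathrm{GBaud} \times \log_2(16)\ \tfrac{\mathrm{bit}}{\mathrm{symbol}} \;=\; 2.4\ \mathrm{Tb/s}\ \text{(raw, single polarization)},
\]

so the quoted 2.15 Tb/s is consistent with roughly 10% of the raw rate being devoted to FEC and maintenance overhead, and dual polarization doubles it to about 4.3 Tb/s.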


TJ Barclay

Proof-Producing Translation from Gallina to CakeML

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Degree Type:

PhD Dissertation Defense

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Sankha Guria
Eileen Nutting

Abstract

Users of theorem provers often wish to extract their verified code to a more efficient, compiled language. Coq's current extraction mechanism provides this facility but does not provide a formal guarantee that the extracted code has the same semantics as the logic it is extracted from. Providing such a guarantee requires a formal semantics for the target code. The CakeML project, implemented in HOL4, provides a formally defined syntax and semantics for a subset of SML and includes a proof-producing translator from higher-order logic to CakeML. We use the CakeML definition to develop a certifying extractor from Gallina to CakeML using the translation and proof techniques of the HOL4 CakeML translator. We also address how differences between HOL4 (higher-order logic) and Coq (the calculus of constructions) affect the implementation details of the Coq translator.


Anissa Khan

Privacy Preserving Biometric Matching

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Perry Alexander, Chair
Prasad Kulkarni
Fengjun Li


Abstract

Biometric matching is a process by which distinct features are used to identify an individual. Doing so privately is important because biometric data, such as fingerprints or facial features, is not something that can be easily changed or updated if put at risk. In this study, we perform a piece of the biometric matching process in a privacy preserving manner by using secure multiparty computation (SMPC). Using SMPC allows the identifying biological data, called a template, to remain stored by the data owner during the matching process. This provides security guarantees to the biological data while it is in use and therefore reduces the chances the data is stolen. In this study, we find that performing biometric matching using SMPC is just as accurate as performing the same match in plaintext.
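The flavor of the approach can be conveyed with a toy two-party additive secret-sharing example: each share alone reveals nothing about the template, and linear steps of matching can be computed locally on shares (nonlinear steps such as squaring or comparison require full SMPC sub-protocols, omitted here). This sketch is illustrative only and is not the protocol used in this work.

```python
# Toy additive secret sharing of a biometric template over a prime field.
# Individual shares reveal nothing; linear operations (e.g., componentwise
# difference) can be computed locally on shares. Nonlinear steps need full
# SMPC protocols and are omitted. Not the protocol used in this work.
import secrets

P = 2**61 - 1  # prime modulus

def share(vec):
    r = [secrets.randbelow(P) for _ in vec]              # random share for party 1
    return r, [(v - s) % P for v, s in zip(vec, r)]       # complementary share for party 2

def reconstruct(s1, s2):
    return [(a + b) % P for a, b in zip(s1, s2)]

template = [183, 42, 977, 12]   # stand-in integer-encoded template (data owner)
probe    = [180, 40, 970, 15]   # stand-in probe

t1, t2 = share(template)
p1, p2 = share(probe)

# Each party locally computes its share of (template - probe); raw data never meets.
d1 = [(a - b) % P for a, b in zip(t1, p1)]
d2 = [(a - b) % P for a, b in zip(t2, p2)]
print("difference:", [((x + P // 2) % P) - P // 2 for x in reconstruct(d1, d2)])
```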


Bryan Richlinski

Prioritize Program Diversity: Enumerative Synthesis with Entropy Ordering

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

MS Thesis Defense

Committee Members:

Sankha Guria, Chair
Perry Alexander
Drew Davidson
Jennifer Lohoefener

Abstract

Program synthesis is a popular way to create a correct-by-construction program from a user-provided specification. Term enumeration is a leading technique to systematically explore the space of programs by generating terms from a formal grammar. These terms are treated as candidate programs which are tested/verified against the specification for correctness. In order to prioritize candidates more likely to satisfy the specification, enumeration is often ordered by program size or other domain-specific heuristics. However, domain-specific heuristics require expert knowledge, and enumeration by size often leads to terms composed of frequently repeating symbols that are less likely to satisfy a specification. In this thesis, we build a heuristic that prioritizes term enumeration based on variability of individual symbols in the program, i.e., information entropy of the program. We use this heuristic to order programs in both top-down and bottom-up enumeration. We evaluated our work on a subset of the PBE-String track of the 2017 SyGuS competition benchmarks and compared against size-based enumeration. In top-down enumeration, our entropy heuristic shortens runtime in ~56% of cases and tests fewer programs in ~80% of cases before finding a valid solution. For bottom-up enumeration, our entropy heuristic reduces the number of enumerated programs in ~41% of cases before finding a valid solution, without improving the runtime. Our findings suggest that using entropy to prioritize program enumeration is a promising step forward for faster program synthesis.
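In miniature, the heuristic amounts to scoring each candidate term by the Shannon entropy of its symbol distribution and exploring high-entropy candidates first; the toy terms below are stand-ins, not the thesis's grammar or enumerator.

```python
# Toy illustration of entropy-ordered enumeration: candidates whose symbols are
# more varied (higher Shannon entropy) are explored first. The candidate terms
# are stand-ins; this is not the thesis's enumerator or grammar.
import heapq, math
from collections import Counter

def entropy(term: str) -> float:
    counts = Counter(term.split())
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values())

candidates = ["concat x x", "concat x y", "substr x 0 1", "concat (concat x x) x"]

# Max-heap on entropy (heapq is a min-heap, so negate the key).
heap = [(-entropy(t), t) for t in candidates]
heapq.heapify(heap)
while heap:
    score, term = heapq.heappop(heap)
    print(f"{-score:.2f}  {term}")
```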


Elizabeth Wyss

A New Frontier for Software Security: Diving Deep into npm

When & Where:


Eaton Hall, Room 2001B

Degree Type:

PhD Comprehensive Defense

Committee Members:

Drew Davidson, Chair
Alex Bardas
Fengjun Li
Bo Luo
J. Walker

Abstract

Open-source package managers (e.g., npm for Node.js) have become an established component of modern software development. Rather than creating applications from scratch, developers may employ modular software dependencies and frameworks--called packages--to serve as building blocks for writing larger applications. Package managers make this process easy. With a simple command line directive, developers are able to quickly fetch and install packages across vast open-source repositories. npm--the largest of such repositories--alone hosts millions of unique packages and serves billions of package downloads each week. 

However, the widespread code sharing resulting from open-source package managers also presents novel security implications. Vulnerable or malicious code hiding deep within package dependency trees can be leveraged downstream to attack both software developers and the users of their applications. This downstream flow of software dependencies--dubbed the software supply chain--is critical to secure.

This research provides a deep dive into the npm-centric software supply chain, exploring various facets and phenomena that impact the security of this software supply chain. Such factors include (i) hidden code clones, which obscure provenance and can stealthily propagate known vulnerabilities, (ii) install-time attacks enabled by unmediated installation scripts, (iii) hard-coded URLs residing in package code, (iv) the impacts of open-source development practices, and (v) package compromise via malicious updates. For each facet, tooling is presented to identify and/or mitigate potential security impacts. Ultimately, it is our hope that this research fosters greater awareness, deeper understanding, and further efforts to forge a new frontier for the security of modern software supply chains.
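As a small example of the install-time attack surface in (ii), npm packages can declare lifecycle scripts (preinstall, install, postinstall) in package.json that execute automatically during installation; the snippet below merely flags such scripts and is an illustrative check, not the tooling developed in this research.

```python
# Flag npm lifecycle scripts that execute automatically at install time
# (preinstall/install/postinstall). Illustrative check only; not the
# tooling developed in this research.
import json, sys

LIFECYCLE = {"preinstall", "install", "postinstall"}

def install_scripts(package_json_path: str) -> dict:
    with open(package_json_path) as f:
        manifest = json.load(f)
    return {k: v for k, v in manifest.get("scripts", {}).items() if k in LIFECYCLE}

if __name__ == "__main__":
    hits = install_scripts(sys.argv[1] if len(sys.argv) > 1 else "package.json")
    for name, cmd in hits.items():
        print(f"install-time script '{name}': {cmd}")
```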


Yousif Dafalla

Web-Armour: Mitigating Reconnaissance and Vulnerability Scanning with Injecting Scan-Impeding Delays in Web Deployments

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Dissertation Defense

Committee Members:

Alex Bardas, Chair
Drew Davidson
Fengjun Li
Bo Luo
ZJ Wang

Abstract

Scanning hosts on the internet for vulnerable devices and services is a key step in numerous cyberattacks. Previous work has shown that scanning is a widespread phenomenon on the internet and commonly targets web application/server deployments. Given that automated scanning is a crucial step in many cyberattacks, it would be beneficial to make it more difficult for adversaries to perform such activity.

In this work, we propose Web-Armour, a mitigation approach to adversarial reconnaissance and vulnerability scanning of web deployments. The proposed approach relies on injecting scan-impeding delays into infrequently or rarely used portions of a web deployment. Web-Armour has two goals: first, increase the cost for attackers to perform automated reconnaissance and vulnerability scanning; second, introduce minimal to negligible performance overhead for benign users of the deployment. We evaluate Web-Armour on live environments, operated by real users, and on different controlled (offline) scenarios. We show that Web-Armour can effectively thwart reconnaissance and internet-wide scanning.
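The core idea of delaying requests to rarely visited paths could be sketched as a generic WSGI middleware like the one below; the counters and thresholds are hypothetical and this is not Web-Armour's implementation.

```python
# Generic WSGI middleware sketch: add a delay to requests for rarely visited
# paths, leaving popular (benign-user) paths untouched. Thresholds and counters
# are hypothetical; this is not Web-Armour's implementation.
import time
from collections import Counter

class ScanImpedingDelay:
    def __init__(self, app, threshold=5, delay_seconds=2.0):
        self.app = app
        self.hits = Counter()
        self.threshold = threshold          # below this hit count, a path is "rare"
        self.delay_seconds = delay_seconds

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "/")
        self.hits[path] += 1
        if self.hits[path] <= self.threshold:
            time.sleep(self.delay_seconds)  # impede scanners probing rarely used paths
        return self.app(environ, start_response)

# Usage (hypothetical): app = ScanImpedingDelay(app)
```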


Daniel Herr

Information Theoretic Waveform Design with Application to Physically Realizable Adaptive-on-Transmit Radar

When & Where:


Nichols Hall, Room 129 (Ron Evans Apollo Auditorium)

Degree Type:

PhD Dissertation Defense

Committee Members:

James Stiles, Chair
Christopher Allen
Shannon Blunt
Carl Leuschen
Chris Depcik

Abstract

The fundamental task of a radar system is to utilize the electromagnetic spectrum to sense a scattering environment and generate some estimate from this measurement. This task can be posed as a Bayesian estimation problem of random parameters (the scattering environment) observed through an imperfect sensor (the radar system). From this viewpoint, metrics such as error covariance and estimator precision (or information) can be leveraged to evaluate and improve the performance of radar systems. Here, physically realizable radar waveforms are designed to maximize the Fisher information (FI) (specifically, a derivative of FI known as marginal Fisher information (MFI)) extracted from a scattering environment, thereby minimizing the expected error covariance about an estimation parameter space. This information theoretic framework, along with the high degree of design flexibility afforded by fully digital transmitter and receiver architectures, creates a high-dimensionality design space for optimizing radar performance.
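For a linear-Gaussian measurement model, these quantities take familiar closed forms; the standard textbook expressions are stated below only to fix notation and are not reproduced from the dissertation:

\[
\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}, \qquad \mathbf{x}\sim\mathcal{CN}(\mathbf{0},\boldsymbol{\Sigma}_x), \quad \mathbf{n}\sim\mathcal{CN}(\mathbf{0},\boldsymbol{\Sigma}_n),
\]
\[
\hat{\mathbf{x}}_{\mathrm{MMSE}} = \boldsymbol{\Sigma}_x\mathbf{H}^{H}\left(\mathbf{H}\boldsymbol{\Sigma}_x\mathbf{H}^{H}+\boldsymbol{\Sigma}_n\right)^{-1}\mathbf{y}, \qquad
\mathbf{J} = \mathbf{H}^{H}\boldsymbol{\Sigma}_n^{-1}\mathbf{H} + \boldsymbol{\Sigma}_x^{-1},
\]

where $\mathbf{H}$ depends on the transmitted waveform, $\mathbf{J}$ is the Bayesian Fisher information, and the MMSE error covariance equals $\mathbf{J}^{-1}$ for this model, so maximizing information and minimizing expected error covariance coincide.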

First, the problem of joint-domain range-Doppler estimation utilizing a pulse-agile radar is posed from an estimation theoretic framework, and the minimum mean square error (MMSE) estimator is shown to suppress the range-sidelobe modulation (RSM) induced by pulse agility which may improve the signal-to-interference-plus-noise ratio (SINR) in signal-limited scenarios. A computationally efficient implementation of the range-Doppler MMSE estimator is developed as a series of range-profile estimation problems, under specific modeling and statistical assumptions. Next, a transformation of the estimation parameterization is introduced which ameliorates the high noise-gain typically associated with traditional MMSE estimation by sacrificing the super-resolution achieved by the MMSE estimator. Then, coordinate descent and gradient descent optimization methods are developed for designing MFI optimal waveforms with respect to either the original or transformed estimation space. These MFI optimal waveforms are extended to provide pulse-agility, which produces high-dimensionality radar emissions amenable to non-traditional receive processing techniques (such as MMSE estimation). Finally, informationally optimal waveform design and optimal estimation are extended into a cognitive radar concept capable of adaptive and dynamic sensing. The efficacy of the MFI waveform design and MMSE estimation are demonstrated via open-air hardware experimentation where their performance is compared against traditional techniques.


Matthew Heintzelman

Spatially Diverse Radar Techniques - Emission Optimization and Enhanced Receive Processing

When & Where:


Nichols Hall, Room 129 (Ron Evans Apollo Auditorium)

Degree Type:

PhD Dissertation Defense

Committee Members:

Shannon Blunt, Chair
Christopher Allen
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

Radar systems perform three basic tasks: search/detection, tracking, and imaging. Traditionally, varied operational and hardware requirements have compartmentalized these functions into distinct and specialized radars, which may communicate actionable information between them. Expedited by the growth in computational capabilities modeled by Moore’s law, next-generation radars will be sophisticated, multi-function systems comprising generalized and reprogrammable subsystems. The advance of fully Digital Array Radars (DAR) has enabled the implementation of highly directive phased arrays that can scan, detect, and track scatterers through a volume-of-interest. Conversely, DAR technology has also enabled Multiple-Input Multiple-Output (MIMO) radar methodologies that seek to illuminate all space on transmit, while forming separate but simultaneous, directive beams on receive.

Waveform diversity has been repeatedly proven to enhance radar operation through added Degrees-of-Freedom (DoF) that can be leveraged to expand dynamic range, provide ambiguity resolution, and improve parameter estimation.  In particular, diversity among the DAR’s transmitting elements provides flexibility to the emission, allowing simultaneous multi-function capability. By precise design of the emission, the DAR can utilize the operationally-continuous trade-space between a fully coherent phased array and a fully incoherent MIMO system. This flexibility could enable the optimal management of the radar’s resources, where Signal-to-Noise Ratio (SNR) would be traded for robustness in detection, measurement capability, and tracking.

Waveform diversity is herein leveraged as the predominant enabling technology for multi-function radar emission design. Three methods of emission optimization are considered to design distinct beams in space and frequency, according to classical error minimization techniques. First, a gradient-based optimization of the Space-Frequency Template Error (SFTE) is applied to a high-fidelity model for a wideband array’s far-field emission. Second, a more efficient optimization is considered, based on the SFTE for narrowband arrays. Finally, a suboptimal solution, based on alternating projections, is shown to provide rapidly reconfigurable transmit patterns. To improve the dynamic range observed for MIMO radars employing pulse-agile quasi-orthogonal waveforms, a pulse-compression model is derived that manages to suppress both autocorrelation sidelobes and multi-transmitter-induced cross-correlation. The proposed waveforms and filters are implemented in hardware to demonstrate performance, validate robustness, and reflect real-world application to the degree possible with laboratory experimentation.
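A generic (notional) form of a space-frequency template error is a least-squares mismatch between the achieved far-field emission and a desired template, for example

\[
J_{\mathrm{SFTE}}(\mathbf{w}) \;=\; \int_{f}\int_{\theta}\big|\,g(\theta,f;\mathbf{w}) - t(\theta,f)\,\big|^{2}\,d\theta\,df,
\]

where $\mathbf{w}$ collects the per-element waveform parameters, $g(\theta,f;\mathbf{w})$ is the achieved far-field emission over angle and frequency, and $t(\theta,f)$ is the desired space-frequency template; the dissertation's exact formulation and weighting may differ.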