I2S Master's/Doctoral Theses


All students and faculty are welcome to attend the final defense of I2S graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Sherwan Jalal Abdullah

A Versatile and Programmable UAV Platform for Integrated Terrestrial and Non-Terrestrial Network Measurements in Rural Areas

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Shawn Keshmiri


Abstract

Reliable cellular connectivity is essential for modern services such as telehealth, precision agriculture, and remote education; yet, measuring network performance in rural areas presents significant challenges. Traditional drive testing cannot access large geographic areas between roads, while crowdsourced data provides insufficient spatial resolution in low-population regions. To address these limitations, we develop an open-source UAV-based measurement platform that integrates an onboard computation unit, commercial cellular modem, and automated flight control to systematically capture Radio Access Network (RAN) signals and end-to-end network performance metrics at different altitudes. Our platform collects synchronized measurements of signal strength (RSRP, RSSI), signal quality (RSRQ, SINR), latency, and bidirectional throughput, with each measurement tagged with GPS coordinates and altitude. Experimental results from a semi-rural deployment reveal a fundamental altitude-dependent trade-off: received signal power improves at higher altitudes due to enhanced line-of-sight conditions, while signal quality degrades from increased interference with neighboring cells. Our analysis indicates that most of the measurement area maintains acceptable signal quality, along with adequate throughput performance, for both uplink and downlink communications. We further demonstrate that strong radio signal metrics for individual cells do not necessarily translate to spatial coverage dominance: the cell serving the majority of our test area exhibited only moderate performance, while cells with superior metrics contributed minimally to overall coverage. Next, we develop several machine learning (ML) models to improve the prediction accuracy of signal strength at unmeasured altitudes.
Finally, we extend our measurement platform by integrating non-terrestrial network (NTN) user terminals with the UAV components to investigate the performance of Low-earth Orbit (LEO) satellite networks with UAV mobility. Our measurement results demonstrate that NTN offers a viable fallback option by achieving acceptable latency and throughput performance during flight operations. Overall, this work establishes a reproducible methodology for three-dimensional rural network characterization and provides practical insights for network operators, regulators, and researchers addressing connectivity challenges in underserved areas.
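As a concrete illustration of estimating signal strength at unmeasured altitudes, the sketch below interpolates RSRP from altitude-tagged samples. The records, values, and inverse-distance model are illustrative assumptions, not the platform's actual data or its ML models:

```python
# Hypothetical measurement records: (altitude_m, rsrp_dbm) pairs tagged in flight.
measurements = [(30, -95.0), (60, -90.5), (90, -88.0), (120, -86.5)]

def predict_rsrp(altitude_m, samples, power=2):
    """Inverse-distance-weighted estimate of RSRP at an unmeasured altitude."""
    num = den = 0.0
    for alt, rsrp in samples:
        d = abs(altitude_m - alt)
        if d == 0:
            return rsrp  # an exact measurement exists at this altitude
        w = 1.0 / d ** power
        num += w * rsrp
        den += w
    return num / den

est = predict_rsrp(75, measurements)
```

The learned models in the thesis would replace this interpolation with trained regressors, but the input/output shape (altitude-tagged samples in, predicted signal metric out) is the same.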


Satya Ashok Dowluri

Comparison of Copy-and-Patch and Meta-Tracing Compilation techniques in the context of Python

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Hossein Saiedian


Abstract

Python's dynamic nature makes performance enhancement challenging. Recently, a JIT compiler using a novel copy-and-patch compilation approach was implemented in the reference Python implementation, CPython. Our goal in this work is to study and understand the performance properties of CPython's new JIT compiler. To facilitate this study, we compare the quality and performance of the code generated by this new JIT compiler with a more mature and traditional meta-tracing based JIT compiler implemented in PyPy (another Python implementation). Our thorough experimental evaluation reveals that, while it achieves the goal of fast compilation speed, CPython's JIT severely lags in code quality/performance compared with PyPy. While this observation is a known and intentional property of the copy-and-patch approach, it results in the new JIT compiler failing to elevate Python code performance beyond that achieved by the default interpreter, despite significant added code complexity. In this thesis, we report and explain our novel experiments, results, and observations.
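The cross-implementation comparison can be pictured with a minimal micro-benchmark: time the same numeric kernel under each interpreter (e.g. run the file with `python3` and with `pypy3`) and compare. This is a hedged sketch of the general approach, not the thesis's benchmark suite:

```python
import timeit

# A small numeric kernel of the kind JIT compilers are expected to speed up.
def kernel(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

# Running this same file under different Python implementations and comparing
# the printed timing gives a crude view of relative code quality.
elapsed = timeit.timeit(lambda: kernel(10_000), number=100)
result = kernel(10)
print(f"{elapsed:.4f}s for 100 runs")
```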


Arya Hadizadeh Moghaddam

Learning Personalized and Robust Patient Representations across Graphical and Temporal Structures in Electronic Health Records

When & Where:


Eaton Hall, Room 2001B

Degree Type:

PhD Comprehensive Defense

Committee Members:

Zijun Yao, Chair
Bo Luo
Fengjun Li
Dongjie Wang
Xinmai Yang

Abstract

Recent research in Electronic Health Records (EHRs) has enabled personalized and longitudinal modeling of patient trajectories for health outcome improvement. Despite this progress, existing methods often struggle to capture the dynamic, heterogeneous, and interdependent nature of medical data. Specifically, many representation methods learn a rich set of EHR features independently but overlook the intricate relationships among them. Moreover, data scarcity and bias, such as cold-start scenarios where patients have only a few visits or rare conditions, remain fundamental challenges in real-world clinical decision support. To address these challenges, this dissertation aims to introduce an integrated machine learning framework for sophisticated, interpretable, and adaptive EHR representation modeling. Specifically, the dissertation comprises three thrusts:

  1. A time-aware graph transformer model that dynamically constructs personalized temporal graph representations that capture patient trajectory over different visits.
  2. A contrasted multi-intent recommender system that disentangles the multiple temporal patterns coexisting in a patient’s long medical history while considering distinct health profiles.
  3. A few-shot meta-learning framework that can address the patient cold-start issue through a self- and peer-adaptive model enhanced by uncertainty-based filtering.

Together, these contributions advance a data-efficient, generalizable, and interpretable foundation for large-scale clinical EHR mining toward truly personalized medical outcome prediction.


Junyi Zhao

On the Security of Speech-based Machine Translation Systems: Vulnerabilities and Attacks

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Bo Luo, Chair
Fengjun Li
Zijun Yao


Abstract

In light of the rapid advancement of global connectivity and the increasing reliance on multilingual communication, speech-based Machine Translation (MT) systems have emerged as essential technologies for facilitating seamless cross-lingual interaction. These systems enable individuals and organizations to overcome linguistic boundaries by automatically translating spoken language in real time. However, despite their growing ubiquity in various applications such as virtual assistants, international conferencing, and accessibility services, the security and robustness of speech-based MT systems remain underexplored. In particular, limited attention has been given to understanding their vulnerabilities under adversarial conditions, where malicious actors intentionally craft or manipulate speech inputs to mislead or degrade translation performance.

This thesis presents a comprehensive investigation into the security landscape of speech-based machine translation systems from an adversarial perspective. We systematically categorize and analyze potential attack vectors, evaluate their success rates across diverse system architectures and environmental settings, and explore the practical implications of such attacks. Furthermore, through a series of controlled experiments and human-subject evaluations, we demonstrate that adversarial manipulations can significantly distort translation outputs in realistic use cases, thereby posing tangible risks to communication reliability and user trust.

Our findings reveal critical weaknesses in current MT models and underscore the urgent need for developing more resilient defense strategies. We also discuss open research challenges and propose directions for building secure, trustworthy, and ethically responsible speech translation technologies. Ultimately, this work contributes to a deeper understanding of adversarial robustness in multimodal language systems and provides a foundation for advancing the security of next-generation machine translation frameworks.


Kyrian C. Adimora

Machine Learning-Based Multi-Objective Optimization for HPC Workload Scheduling: A GNN-RL Approach

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Comprehensive Defense

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni
Zijun Yao
Michael J. Murray

Abstract

As high-performance computing (HPC) systems achieve exascale capabilities, traditional single-objective schedulers that optimize solely for performance prove inadequate for environments requiring simultaneous optimization of energy efficiency and system resilience. Current scheduling approaches result in suboptimal resource utilization, excessive energy consumption, and reduced fault tolerance under the demanding requirements of large-scale scientific applications. This dissertation proposes a novel multi-objective optimization framework that integrates graph neural networks (GNNs) with reinforcement learning (RL) to jointly optimize performance, energy efficiency, and system resilience in HPC workload scheduling. The central hypothesis posits that graph-structured representations of workloads and system states, combined with adaptive learning policies, can significantly outperform traditional scheduling methods in complex, dynamic HPC environments. The proposed framework comprises three integrated components: (1) GNN-RL, which combines graph neural networks with reinforcement learning for adaptive policy development; (2) EA-GATSched, an energy-aware scheduler leveraging Graph Attention Networks; and (3) HARMONIC (Holistic Adaptive Resource Management for Optimized Next-generation Interconnected Computing), a probabilistic model for workload uncertainty quantification. The proposed methodology encompasses novel uncertainty modeling techniques, scalable GNN-based scheduling algorithms, and comprehensive empirical evaluation using production supercomputing workload traces. Preliminary results demonstrate 10-19% improvements in energy efficiency while maintaining comparable performance metrics. The framework will be evaluated across makespan reduction, energy consumption, resource utilization efficiency, and fault tolerance in various operational scenarios.
This research advances sustainable and resilient HPC resource management, providing critical infrastructure support for next-generation scientific computing applications.
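The multi-objective flavor of the problem can be illustrated by weighted-sum scalarization over candidate schedules; the candidates, metrics, and weights below are toy assumptions, not the GNN-RL policy itself:

```python
# Candidate schedules with (normalized makespan, energy, expected failures),
# all in [0, 1] and all to be minimized. Values are invented for illustration.
candidates = {
    "pack_tight": (0.5, 0.9, 0.20),
    "spread_out": (0.7, 0.5, 0.10),
    "balanced":   (0.6, 0.6, 0.15),
}

def scalarize(metrics, w=(0.4, 0.4, 0.2)):
    """Weighted-sum scalarization of the three objectives into one score."""
    return sum(wi * mi for wi, mi in zip(w, metrics))

best = min(candidates, key=lambda k: scalarize(candidates[k]))
```

An RL scheduler effectively learns such trade-offs from feedback rather than from fixed hand-tuned weights.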


Sarah Johnson

Ordering Attestation Protocols

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Comprehensive Defense

Committee Members:

Perry Alexander, Chair
Michael Branicky
Sankha Guria
Emily Witt
Eileen Nutting

Abstract

Remote attestation is a process of obtaining verifiable evidence from a remote party to establish trust. A relying party makes a request of a remote target that responds by executing an attestation protocol producing evidence reflecting the target's system state and meta-evidence reflecting the evidence’s integrity and provenance. This process occurs in the presence of adversaries intent on misleading the relying party to trust a target they should not. This research introduces a robust approach for evaluating and comparing attestation protocols based on their relative resilience against such adversaries. I develop a Rocq-based, formally-verified mathematical model aimed at describing the difficulty for an active adversary to successfully compromise the attestation. The model supports systematically ranking attestation protocols by the level of adversary effort required to produce evidence that does not accurately reflect the target’s state. My work aims to facilitate the selection of a protocol resilient to adversarial attack.


Lohithya Ghanta

Used Car Analytics

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Morteza Hashemi
Prasad Kulkarni


Abstract

The used car market is characterized by significant pricing variability, making it challenging for buyers and sellers to determine fair vehicle values. To address this, the project applies a machine learning–driven approach to predict used car prices based on real market data extracted from Cars.com. Following extensive data cleaning, feature engineering, and exploratory analysis, several predictive models were developed and evaluated. Among these, the Stacking Regressor demonstrated superior performance, effectively capturing non-linear pricing patterns and achieving the highest accuracy with the lowest prediction error. Key insights indicate that vehicle age and mileage are the primary drivers of price depreciation, while brand and vehicle category exert notable secondary influence. The resulting pricing model provides a data-backed, transparent framework that supports more informed decision-making and promotes fairness and consistency within the used car marketplace.
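A stacking regressor combines the predictions of several base learners through a meta-step. The sketch below uses invented listings, two single-feature base learners, and a simple averaging meta-step to show the idea; the actual project presumably used richer features and a trained meta-model:

```python
def linfit(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical listings: (age_years, mileage_thousands, price_usd).
data = [(1, 10, 28000), (3, 40, 21000), (5, 70, 15000), (8, 110, 9000)]
ages = [d[0] for d in data]
miles = [d[1] for d in data]
prices = [d[2] for d in data]

# Two base learners, each fit on a single feature.
a_age, b_age = linfit(ages, prices)
a_mi, b_mi = linfit(miles, prices)

def stacked_predict(age, mileage):
    # Meta-step: average the base predictions (a real stacking regressor
    # trains a meta-model on held-out base predictions instead).
    return ((a_age * age + b_age) + (a_mi * mileage + b_mi)) / 2

est = stacked_predict(4, 55)
```

Both fitted slopes come out negative, matching the abstract's finding that age and mileage are the primary drivers of depreciation.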


Ashish Adhikari

Towards assessing the security of program binaries

When & Where:


Eaton Hall, Room 2001B

Degree Type:

PhD Comprehensive Defense

Committee Members:

Prasad Kulkarni, Chair
Alex Bardas
Fengjun Li
Bo Luo

Abstract

Software vulnerabilities are widespread, often resulting from coding weaknesses and poor development practices. These vulnerabilities can be exploited by attackers, posing risks to confidentiality, integrity, and availability. To protect themselves, end-users of software may have an interest in knowing whether the software they purchase and use is secure from potential attacks. Our work is motivated by this need to automatically assess and rate the security properties of binary software.

While many researchers focus on developing techniques and tools to detect and mitigate vulnerabilities in binaries, our approach is different. We aim to determine whether the software has been developed with proper care. Our hypothesis is that software created with meticulous attention to security is less likely to contain exploitable vulnerabilities. As a first step, we examined the current landscape of binary-level vulnerability detection. We categorized critical coding weaknesses in compiled programming languages and conducted a detailed survey comparing static analysis techniques and tools designed to detect these weaknesses. Additionally, we evaluated the effectiveness of open-source CWE detection tools and analyzed their challenges. To further understand their efficacy, we conducted independent assessments using standard benchmarks.

To determine whether software is carefully and securely developed, we propose several techniques. So far, we have used machine learning and deep learning methods to identify the programming language of a binary at the function level, enabling us to handle complex cases like mixed-language binaries, and we assess whether vulnerable regions in the binary are protected with appropriate security mechanisms. Additionally, we explored the feasibility of detecting secure coding practices by examining adherence to SonarQube’s security-related coding conventions.

Next, we investigate whether compiler warnings generated during binary creation are properly addressed. Furthermore, we also aim to optimize the array bounds detection in the program binary. This enhanced array bounds detection will also increase the effectiveness of detecting secure coding conventions that are related to memory safety and buffer overflow vulnerabilities.

Our ultimate goal is to combine these techniques to rate the overall security quality of a given binary software.


Bayn Schrader

Implementation and Analysis of an Efficient Dual-Beam Radar-Communications Technique

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

MS Thesis Defense

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
Jonathan Owen


Abstract

Fully digital arrays enable realization of dual-function radar-communications systems that generate multiple simultaneous transmit beams with different modulation structures in different spatial directions. These spatially diverse transmissions are produced by designing the individual waveforms transmitted at each antenna element so that they combine in the far field to synthesize the desired modulations in the specified directions. This thesis derives a look-up table (LUT) implementation of the existing Far-Field Radiated Emissions Design (FFRED) optimization framework. The LUT implementation requires a single optimization routine for a set of desired signals, rather than the pulse-to-pulse optimization required by the previous implementation, making the LUT approach more efficient. The LUT is generated by representing the waveforms transmitted by each element in the array as a sequence of beamformers, where the LUT contains beamformers based on the phase difference between the desired signal modulations. The globally optimal beamformers, in terms of power efficiency, can be realized via the Lagrange dual problem for most beam locations and powers. The Phase-Attached Radar-Communications (PARC) waveform is selected for the communications waveform alongside a Linear Frequency Modulated (LFM) waveform for the radar signal. A set of FFRED LUTs is then used to simulate a radar transmission to verify the utility of the radar system. The same LUTs are then used to estimate the communications performance of a system with varying levels of array knowledge uncertainty.


Will Thomas

Static Analysis and Synthesis of Layered Attestation Protocols

When & Where:


Eaton Hall, Room 2001B

Degree Type:

PhD Comprehensive Defense

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Sankha Guria
Eileen Nutting

Abstract

Trust is a fundamental issue in computer security. Frequently, systems implicitly trust other systems, especially if configured by the same administrator. This fallacious reasoning stems from the belief that systems starting from a known, presumably good, state can be trusted. However, this statement only holds for boot-time behavior; most non-trivial systems change state over time, and thus runtime behavior is an important, oft-overlooked aspect of implicit trust in system security.

To address this, attestation was developed, allowing a system to provide evidence of its runtime behavior to a verifier. This evidence allows a verifier to make an explicit, informed decision about the system’s trustworthiness. As systems grow more complex, scalable attestation mechanisms become increasingly important. To apply attestation to non-trivial systems, layered attestation was introduced, allowing attestation of individual components or layers, combined into a unified report about overall system behavior. This approach enables more granular trust assessments and facilitates attestation in complex, multi-layered architectures. With the complexity of layered attestation, discerning whether a given protocol sufficiently measures a system, is executable, or properly reports all measurements becomes increasingly challenging.

In this work, we will develop a framework for the static analysis and synthesis of layered attestation protocols, enabling more robust and adaptable attestation mechanisms for dynamic systems. A key focus will be the static verification of protocol correctness, ensuring the protocol behaves as intended and provides reliable evidence of the underlying system state. A type system will be added to the Copland layered attestation protocol description language to allow basic static checks, and extended static analysis techniques will be developed to verify more complex properties of protocols for a specific target system. Further, protocol synthesis will be explored, enabling the automatic generation of correct-by-construction protocols tailored to system requirements.


David Felton

Optimization and Evaluation of Physical Complementary Radar Waveforms

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Comprehensive Defense

Committee Members:

Shannon Blunt, Chair
Rachel Jarvis
Patrick McCormick
James Stiles
Zsolt Talata

Abstract

In high dynamic-range environments, matched-filter radar performance is often sidelobe-limited, with correlation error being fundamentally constrained by the time-bandwidth product (TB) of the collective emission. To contend with the regulatory necessity of spectral containment, the gradient-based complementary-FM framework was developed to produce complementary sidelobe cancellation (CSC) after coherently combining responses from distinct pulses within a pulse-agile emission. In contrast to most complementary subsets, which were discovered via brute force under the notion of phase-coding, these comp-FM waveform subsets achieve CSC while preserving hardware compatibility since they are FM. Although comp-FM addressed a primary limitation of complementary signals (i.e., hardware distortion), CSC hinges on the exact reconstruction of autocorrelation terms to suppress sidelobes, so optimality is broken for Doppler-shifted signals. This work introduces a Doppler-generalized comp-FM (DG-comp-FM) framework that extends the cancellation condition to account for the anticipated unambiguous Doppler span after post-summing. While this framework is developed for use in a combine-before-Doppler processing manner, it can likewise be employed to design an entire coherent processing interval (CPI) to minimize range-sidelobe modulation (RSM) within the radar point-spread function (PSF), thereby introducing the potential for cognitive operation if sufficient scattering knowledge is available a priori.

Some radar systems operate with multiple emitters, as in the case of Multiple-input-multiple-output (MIMO) radar. Whereas a single emitter must contend with the self-inflicted autocorrelation sidelobes, MIMO systems must likewise contend with the cross-correlation with coincident (in time and spectrum) emissions from other emitters. As such, the determination of "orthogonal waveforms" comprises a large portion of research within the MIMO space, with a small majority now recognizing that true orthogonality is not possible for band-limited signals (albeit, with the exclusion of TDMA). The notion of complementary-FM is proposed for exploration within a MIMO context, whereby coherently combining responses can achieve CSC as well as cross-correlation cancellation for a wide Doppler space. By effectively minimizing cross-correlation terms, this enables improved channel separation on receive as well as improved estimation capability due to reduced correlation error. Proposal items include further exploration/characterization of the space, incorporating an explicit spectral.


Jigyas Sharma

SEDPD: Sampling-Enhanced Differentially Private Defense against Backdoor Poisoning Attacks of Image Classification

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

MS Thesis Defense

Committee Members:

Han Wang, Chair
Drew Davidson
Dongjie Wang


Abstract

Recent advancements in explainable artificial intelligence (XAI) have brought significant transparency to machine learning by providing interpretable explanations alongside model predictions. However, this transparency has also introduced vulnerabilities, enhancing adversaries’ ability to probe model decision processes through explanation-guided attacks. In this paper, we propose a robust, model-agnostic defense framework to mitigate these vulnerabilities by perturbing explanations while preserving the utility of XAI. Our framework employs a multinomial sampling approach that perturbs explanation values generated by techniques such as SHAP and LIME. These perturbations ensure differential privacy (DP) bounds, disrupting adversarial attempts to embed malicious triggers while maintaining explanation quality for legitimate users. To validate our defense, we introduce a threat model tailored to image classification tasks. By applying our defense framework, we train models with pixel-sampling strategies that integrate DP guarantees, enhancing robustness against backdoor poisoning attacks that exploit XAI. Extensive experiments on widely used datasets, such as CIFAR-10, MNIST, CIFAR-100 and Imagenette, and models, including ConvMixer and ResNet-50, show that our approach effectively mitigates explanation-guided attacks without compromising the accuracy of the model. We also test our defense against other backdoor attacks, showing that the framework detects other types of backdoor triggers as well. This work highlights the potential of DP in securing XAI systems and ensures safer deployment of machine learning models in real-world applications.
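The multinomial-sampling idea can be sketched as follows: normalize attribution magnitudes into a distribution, resample, and report empirical frequencies. This toy (with made-up scores) illustrates the mechanism only; it omits the calibrated noise of the actual defense and provides no formal privacy guarantee by itself:

```python
import random

def perturb_explanation(values, num_draws=200, seed=0):
    """Perturb attribution scores by multinomial resampling.

    Normalizes absolute scores to a distribution, draws `num_draws`
    samples, and returns empirical frequencies (signs restored), so the
    ranking is approximately preserved while exact values are noised.
    """
    rng = random.Random(seed)
    total = sum(abs(v) for v in values)
    probs = [abs(v) / total for v in values]
    draws = rng.choices(range(len(values)), weights=probs, k=num_draws)
    counts = [0] * len(values)
    for i in draws:
        counts[i] += 1
    return [(c / num_draws) * (1 if v >= 0 else -1)
            for c, v in zip(counts, values)]

shap_like = [0.5, -0.3, 0.15, 0.05]  # hypothetical attribution scores
noisy = perturb_explanation(shap_like)
```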


Dimple Galla

Intelligent Application for Cold Email Generation: Business Outreach

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

Cold emailing remains an effective strategy for software service companies to improve organizational reach by acquiring clients. Generic emails often fail to get a response.

This project leverages Generative AI to automate cold email generation. It is built with the Llama-3.1 model and a Chroma vector database that supports semantic matching of job-description keywords against the project portfolio links of software service companies. The application automatically extracts technology-related job openings from Fortune 500 companies. Users can either select from these extracted job postings or manually enter the URL of a job posting, after which the system generates an email and sends it upon approval. Advanced techniques such as Chain-of-Thought Prompting and Few-Shot Learning were applied to improve relevance, making the emails more likely to receive a response. This AI-driven approach improves engagement and simplifies the business development process for software service companies.
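The matching step can be pictured with a toy bag-of-words cosine similarity; a vector database such as Chroma instead stores dense embeddings, and the URLs and texts below are invented placeholders:

```python
import math
from collections import Counter

def vectorize(text):
    """Toy bag-of-words vector (a real system would use dense embeddings)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical portfolio entries keyed by link.
portfolio = {
    "https://example.com/ml-project": "python machine learning pipelines",
    "https://example.com/web-project": "react typescript frontend development",
}

job_desc = "seeking machine learning engineer with python experience"
jv = vectorize(job_desc)
best_url = max(portfolio, key=lambda u: cosine(jv, vectorize(portfolio[u])))
```

The highest-scoring portfolio link would then be inserted into the generated email as supporting evidence of relevant experience.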


Past Defense Notices

Masoud Ghazikor

Distributed Optimization and Control Algorithms for UAV Networks in Unlicensed Spectrum Bands

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

MS Thesis Defense

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Prasad Kulkarni


Abstract

UAVs have emerged as a transformative technology for various applications, including emergency services, delivery, and video streaming. Among these, video streaming services in areas with limited physical infrastructure, such as disaster-affected areas, play a crucial role in public safety. UAVs can be rapidly deployed in search and rescue operations to efficiently cover large areas and provide live video feeds, enabling quick decision-making and resource allocation strategies. However, ensuring reliable and robust UAV communication in such scenarios is challenging, particularly in unlicensed spectrum bands, where interference from other nodes is a significant concern. To address this issue, developing distributed transmission control and video streaming mechanisms is essential to maintaining a high quality of service, especially for UAV networks that rely on delay-sensitive data.

In this MSc thesis, we study the problem of distributed transmission control and video streaming optimization for UAVs operating in unlicensed spectrum bands. We develop a cross-layer framework that jointly considers three interdependent factors: (i) in-band interference introduced by ground-aerial nodes at the physical layer, (ii) limited-size queues with delay-constrained packet arrival at the MAC layer, and (iii) video encoding rate at the application layer. This framework is designed to optimize the average throughput and PSNR by adjusting fading thresholds and video encoding rates for an integrated aerial-ground network in unlicensed spectrum bands. Using a consensus-based distributed algorithm and coordinate descent optimization, we develop two algorithms: (i) Distributed Transmission Control (DTC), which dynamically adjusts fading thresholds to maximize the average throughput by mitigating trade-offs between low-SINR transmission errors and queue packet losses, and (ii) Joint Distributed Video Transmission and Encoder Control (JDVT-EC), which optimally balances packet loss probabilities and video distortions by jointly adjusting fading thresholds and video encoding rates. Through extensive numerical analysis, we demonstrate the efficacy of the proposed algorithms under various scenarios.
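Coordinate descent, one building block named above, cyclically optimizes one variable at a time while holding the others fixed. A minimal sketch, with a made-up stand-in objective rather than the thesis's throughput/PSNR model:

```python
def coordinate_descent(f, x, grid, iters=20):
    """Maximize f by cyclically line-searching each coordinate over a grid."""
    x = list(x)
    for _ in range(iters):
        for i in range(len(x)):
            x[i] = max(grid, key=lambda v: f([*x[:i], v, *x[i + 1:]]))
    return x

# Hypothetical smooth objective standing in for average throughput as a
# function of two fading thresholds (invented for illustration only).
def throughput(t):
    return -(t[0] - 0.6) ** 2 - (t[1] - 0.3) ** 2

grid = [i / 100 for i in range(101)]
opt = coordinate_descent(throughput, [0.0, 0.0], grid)
```

In the distributed setting, each node would run such per-coordinate updates locally and reconcile shared state via the consensus step.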


Mahmudul Hasan

Assertion-Based Security Assessment of Hardware IP Protection Methods

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Tamzidul Hoque, Chair
Esam El-Araby
Sumaiya Shomaji


Abstract

Combinational and sequential locking methods are promising solutions for protecting hardware intellectual property (IP) from piracy, reverse engineering, and malicious modifications by locking the functionality of the IP based on a secret key. To improve their security, researchers are developing attack methods to extract the secret key.  

 

While the attacks on combinational locking are mostly inapplicable to sequential designs without access to the scan chain, the limited applicable attacks are generally evaluated against the basic random insertion of key gates. On the other hand, attacks on sequential locking techniques suffer from scalability issues and evaluation on improperly locked designs. Finally, while most attacks provide an approximately correct key, they do not indicate which specific key bits are undetermined. This thesis proposes an oracle-guided attack that applies to both combinational and sequential locking without scan chain access. The attack applies lightweight design modifications that represent the oracle using a finite state machine and applies an assertion-based query of the unlocking key. We have analyzed the effectiveness of our attack against 46 sequential designs locked with various classes of combinational locking, including random, strong, logic cone-based, and anti-SAT-based. We further evaluated it against a sequential locking technique using 46 designs with various key sequence lengths and widths. Finally, we expand our framework to identify undetermined key bits, enabling complementary attacks on the smaller remaining key space.
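Oracle-guided key recovery can be illustrated with a toy XOR-locked circuit: query the working (oracle) design and keep only candidate keys that reproduce its outputs. This is a generic brute-force illustration of the setting, not the thesis's FSM- and assertion-based attack:

```python
# Toy combinational lock: the design behaves correctly only for the right key.
SECRET_KEY = (1, 0, 1)  # hypothetical; unknown to the attacker

def locked_circuit(x, key):
    """Original function f(x) = x & 0b111, masked by XOR key gates."""
    f = x & 0b111
    mask = key[0] << 2 | key[1] << 1 | key[2]
    return f ^ mask ^ 0b101  # unmodified output only when mask == 0b101

def oracle(x):
    """The unlocked working chip the attacker can query."""
    return locked_circuit(x, SECRET_KEY)

# Exhaustively query the 3-bit key space against the oracle, keeping keys
# that match on every input (real attacks prune this space far more cleverly).
candidates = [
    k for k in [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    if all(locked_circuit(x, k) == oracle(x) for x in range(8))
]
```

Key bits that remain free after all oracle queries would be the "undetermined" bits the thesis's framework identifies.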


Srijanya Chetikaneni

Plant Disease Prediction Using Transfer Learning

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Han Wang


Abstract

Timely detection of plant diseases is critical to safeguarding crop yields and ensuring global food security. This project presents a deep learning-based image classification system to identify plant diseases using the publicly available PlantVillage dataset. The core objective was to evaluate and compare the performance of a custom-built Convolutional Neural Network (CNN) with two widely used transfer learning models—EfficientNetB0 and MobileNetV3Small. 

 

All models were trained on augmented image data resized to 224×224 pixels, with preprocessing tailored to each architecture. The custom CNN used simple normalization, whereas EfficientNetB0 and MobileNetV3Small applied their respective preprocessing methods to match inputs to the ImageNet domain of their pretrained weights. To improve robustness, the training pipeline included data augmentation, class weighting, and early stopping.

Training was conducted using the Adam optimizer and categorical cross-entropy loss over 30 epochs, with performance assessed using accuracy, loss, and training time metrics. The results revealed that the transfer learning models significantly outperformed the custom CNN. EfficientNetB0 achieved the highest accuracy, making it ideal for high-precision applications, while MobileNetV3Small offered a favorable balance between speed and accuracy, suiting lightweight, real-time inference on edge devices.

This study validates the effectiveness of transfer learning for plant disease detection tasks and emphasizes the importance of model-specific preprocessing and training strategies. It provides a foundation for deploying intelligent plant health monitoring systems in practical agricultural environments.
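The class-weighting step mentioned in the abstract can be illustrated with scikit-learn's "balanced" weighting, which upweights rare classes in the loss; this is a minimal sketch with hypothetical label counts, not the project's actual code:

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical imbalanced 3-class disease label distribution.
labels = np.array([0] * 500 + [1] * 100 + [2] * 40)

# "balanced" weights are n_samples / (n_classes * class_count), so the
# rarest disease class contributes the most per-sample to the loss.
weights = compute_class_weight(class_weight="balanced",
                               classes=np.array([0, 1, 2]), y=labels)
class_weight = dict(enumerate(weights))
```

The resulting dictionary is in the form Keras-style training loops accept via a `class_weight` argument.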

 


Rahul Purswani

Finetuning Llama on custom data for QA tasks

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Drew Davidson
Prasad Kulkarni


Abstract

Fine-tuning large language models (LLMs) for domain-specific use cases, such as question answering, offers valuable insights into how their performance can be tailored to specialized information needs. In this project, we focused on the University of Kansas (KU) as our target domain. We began by scraping structured and unstructured content from official KU webpages, covering a wide array of student-facing topics including campus resources, academic policies, and support services. From this content, we generated a diverse set of question-answer pairs to form a high-quality training dataset. LLaMA 3.2 was then fine-tuned on this dataset to improve its ability to answer KU-specific queries with greater relevance and accuracy. Our evaluation revealed mixed results: while the fine-tuned model outperformed the base model on most domain-specific questions, the original model still had an edge in handling ambiguous or out-of-scope prompts. These findings highlight the strengths and limitations of domain-specific fine-tuning, and provide practical takeaways for customizing LLMs for real-world QA applications.
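Scraped QA pairs of this kind are commonly packaged as one chat-style JSON record per pair before fine-tuning. The sketch below is illustrative only: the field names follow a common chat-format convention but vary by fine-tuning framework, and the sample pair is hypothetical:

```python
import json

def to_chat_record(question: str, answer: str) -> dict:
    """Wrap one scraped QA pair as a chat-style fine-tuning example."""
    return {"messages": [
        {"role": "system",
         "content": "You answer questions about the University of Kansas."},
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]}

# Hypothetical pair; the real dataset came from scraped KU webpages.
pairs = [("Where is the KU main campus?", "Lawrence, Kansas.")]

# One JSON line per training example (JSONL).
lines = [json.dumps(to_chat_record(q, a)) for q, a in pairs]
```

Writing `lines` to a `.jsonl` file yields a dataset most instruction-tuning toolchains can consume directly.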


Ahmet Soyyigit

Anytime Computing Techniques for LiDAR-based Perception In Cyber-Physical Systems

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Degree Type:

PhD Dissertation Defense

Committee Members:

Heechul Yun, Chair
Michael Branicky
Prasad Kulkarni
Hongyang Sun
Shawn Keshmiri

Abstract

The pursuit of autonomy in cyber-physical systems (CPS) presents a challenging task of real-time interaction with the physical world, prompting extensive research in this domain. Recent advances in artificial intelligence (AI), particularly the introduction of deep neural networks (DNN), have significantly improved the autonomy of CPS, notably by boosting perception capabilities.

CPS perception aims to discern, classify, and track objects of interest in the operational environment, a task that is considerably challenging for computers in three-dimensional (3D) space. For this task, LiDAR sensors and DNN-based processing of their readings have become popular because of their excellent performance. However, in CPS such as self-driving cars and drones, object detection must be not only accurate but also timely, which is challenging given the high computational demand of LiDAR object detection DNNs. Satisfying this demand is particularly difficult on on-board computational platforms due to size, weight, and power constraints. Therefore, a trade-off between accuracy and latency must be made to ensure that both requirements are satisfied. Importantly, the required trade-off depends on the operational environment and should dynamically shift weight between accuracy and latency at runtime. However, LiDAR object detection DNNs cannot dynamically reduce their execution time by compromising accuracy (i.e., anytime computing). Prior research on anytime computing for object detection DNNs using camera images is not applicable to LiDAR-based detection due to architectural differences.

This thesis addresses these challenges by proposing three novel techniques: Anytime-LiDAR, which enables early termination with reasonable accuracy; VALO (Versatile Anytime LiDAR Object Detection), which implements deadline-aware input data scheduling; and MURAL (Multi-Resolution Anytime Framework for LiDAR Object Detection), which introduces dynamic resolution scaling. Together, these innovations enable LiDAR-based object detection DNNs to make effective trade-offs between latency and accuracy under varying operational conditions, advancing their practical deployment.
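The runtime accuracy/latency trade-off can be pictured as deadline-aware selection among a model's operating points. This is a simplified sketch of the anytime idea with hypothetical latency/accuracy numbers, not VALO's actual scheduler:

```python
def pick_config(configs: list[dict], deadline_ms: float) -> dict:
    """Among operating points that fit the deadline, pick the most
    accurate; if none fits, fall back to the fastest (best effort)."""
    feasible = [c for c in configs if c["latency_ms"] <= deadline_ms]
    if not feasible:
        return min(configs, key=lambda c: c["latency_ms"])
    return max(feasible, key=lambda c: c["accuracy"])

# Hypothetical operating points a LiDAR detector might expose, e.g. via
# early exits or resolution scaling.
CONFIGS = [
    {"name": "full-res",   "latency_ms": 95, "accuracy": 0.71},
    {"name": "half-res",   "latency_ms": 48, "accuracy": 0.64},
    {"name": "early-exit", "latency_ms": 22, "accuracy": 0.55},
]
```

With a relaxed deadline the selector keeps full resolution; as the deadline tightens it degrades gracefully instead of missing the deadline outright.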


Rithvij Pasupuleti

A Machine Learning Framework for Identifying Bioinformatics Tools and Database Names in Scientific Literature

When & Where:


LEEP2, Room 2133

Degree Type:

MS Project Defense

Committee Members:

Cuncong Zhong, Chair
Dongjie Wang
Han Wang
Zijun Yao

Abstract

The absence of a single, comprehensive database or repository cataloging all bioinformatics databases and software creates a significant barrier for researchers aiming to construct computational workflows. These workflows, which often integrate 10–15 specialized tools for tasks such as sequence alignment, variant calling, functional annotation, and data visualization, require researchers to explore diverse scientific literature to identify relevant resources. This process demands substantial expertise to evaluate the suitability of each tool for specific biological analyses, alongside considerable time to understand their applicability, compatibility, and implementation within a cohesive pipeline. The lack of a central, updated source leads to inefficiencies and the risk of using outdated tools, which can affect research quality and reproducibility. Consequently, there is a critical need for an automated, accurate tool to identify bioinformatics databases and software mentions directly from scientific texts, streamlining workflow development and enhancing research productivity. 

 

The bioNerDS system, a prior effort to address this challenge, uses a rule-based named entity recognition (NER) approach, achieving an F1 score of 63% on an evaluation set of 25 articles from BMC Bioinformatics and PLoS Computational Biology. By integrating the same set of features (context patterns, word characteristics, and dictionary matches) into a machine learning model, we developed an approach based on an XGBoost classifier. The model was tuned to address the extreme class imbalance inherent in NER tasks through synthetic oversampling, and refined via systematic hyperparameter optimization to balance precision and recall; it captures complex linguistic patterns and non-linear relationships, ensuring robust generalization. It achieves an F1 score of 82% on the same evaluation set, significantly surpassing the baseline. By combining rule-based precision with machine learning adaptability, this approach improves accuracy, reduces ambiguities, and provides a robust tool for large-scale bioinformatics resource identification, facilitating efficient workflow construction. Furthermore, the methodology holds potential for extension to other technological domains, enabling similar resource identification in fields such as data science, artificial intelligence, and computational engineering.
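The imbalance-handling step can be sketched as follows, using naive random duplication of the minority class and scikit-learn's GradientBoostingClassifier as stand-ins for the project's synthetic oversampling and XGBoost model; the token features and labels here are entirely synthetic:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-token features (context-pattern scores,
# word shape, dictionary-match flags); label 1 = rare "tool name" token.
X = rng.normal(size=(600, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 1.2).astype(int)

# Duplicate minority samples until the classes are balanced
# (a simple stand-in for SMOTE-style synthetic oversampling).
minority = np.where(y == 1)[0]
extra = rng.choice(minority, size=len(y) - 2 * len(minority), replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])

clf = GradientBoostingClassifier(random_state=0).fit(X_bal, y_bal)
train_f1 = f1_score(y, clf.predict(X))
```

Balancing the training set keeps a boosted-tree model from collapsing to the majority "not an entity" class, which is what drives the precision/recall balance described above.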


Vishnu Chowdary Madhavarapu

Automated Weather Classification Using Transfer Learning

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

This project presents an automated weather classification system utilizing transfer learning with pre-trained convolutional neural networks (CNNs) such as VGG19, InceptionV3, and ResNet50. Designed to classify weather conditions (sunny, cloudy, rainy, and sunrise) from images, the system addresses the challenge of limited labeled data by applying data augmentation techniques such as zoom, shear, and flip, expanding the effective size of the dataset. By fine-tuning the final layers of the pre-trained models, the solution achieves high accuracy while significantly reducing training time. VGG19 was selected as the baseline model for its simplicity, strong feature extraction capabilities, and widespread use in transfer learning scenarios. The system was trained using the Adam optimizer and evaluated on key performance metrics including accuracy, precision, recall, and F1 score. To enhance accessibility, a Flask-based web interface was developed, allowing real-time image uploads and instant weather classification. The results demonstrate that transfer learning, combined with robust data preprocessing and fine-tuning, can produce a lightweight and accurate weather classification tool. This project contributes toward scalable, real-time weather recognition systems that can be integrated into IoT applications, smart agriculture, and environmental monitoring.
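The transfer-learning recipe (a frozen convolutional base plus a small trainable head) can be sketched in miniature; here random cluster vectors stand in for VGG19 feature activations, and scikit-learn's logistic regression stands in for the fine-tuned dense layers, so only the head is "trained":

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
CLASSES = ["cloudy", "rainy", "sunny", "sunrise"]

# Stand-in for frozen-base features: in the real pipeline these would be
# VGG19 convolutional activations for each 224x224 input image.
centers = rng.normal(size=(4, 32))
X = np.vstack([centers[c] + 0.3 * rng.normal(size=(50, 32))
               for c in range(4)])
y = np.repeat(np.arange(4), 50)

# Transfer learning then reduces to fitting only the classification
# head on top of the frozen features.
head = LogisticRegression(max_iter=1000).fit(X, y)
train_acc = head.score(X, y)
```

Because the base is frozen, only the head's parameters are optimized, which is why this strategy trains quickly even on limited labeled data.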


Rokunuz Jahan Rudro

Using Machine Learning to Classify Driver Behavior from Psychological Features: An Exploratory Study

When & Where:


Eaton Hall, Room 1A

Degree Type:

MS Thesis Defense

Committee Members:

Sumaiya Shomaji, Chair
David Johnson
Zijun Yao
Alexandra Kondyli

Abstract

Driver inattention and human error are the primary causes of traffic crashes, yet little is known about the relationship between driver aggressiveness and safety. Although several studies have grouped drivers into classes based on their driving performance, few have explored how psychological traits are linked to driving behavior. This study aims to link driver profiles, assessed through psychological evaluations, with the likelihood of engaging in risky driving behaviors, as measured in a driving simulation experiment. By incorporating psychological factors into machine learning algorithms, our models successfully related self-reported decision-making and personality characteristics to actual driving actions. Our results hold promise for refining existing models of driver behavior by clarifying the psychological and behavioral characteristics that influence crash risk.


Md Mashfiq Rizvee

Energy Optimization in Multitask Neural Networks through Layer Sharing

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
Han Wang


Abstract

Artificial Intelligence (AI) is widely used in diverse domains such as industrial automation, traffic control, precision agriculture, and smart cities to perform the heavy lifting of data analysis and decision making. However, the AI lifecycle is a major source of greenhouse gas (GHG) emissions with a devastating environmental impact, owing to expensive neural architecture searches, the training of countless models per day across the world, in-field AI processing of data in billions of edge devices, and advanced security measures across the AI lifecycle. Modern applications often involve multitasking: performing a variety of analyses on the same dataset. These tasks are usually executed on resource-limited edge devices, necessitating AI models that are efficient across measures such as power consumption, frame rate, and model size. To address these challenges, we propose Layer Shared Neural Networks, a novel architecture that merges multiple similar AI/NN tasks (with shared layers) into a single model with reduced energy requirements and carbon footprint. The experimental findings reveal competitive accuracy alongside reduced power consumption: the layer-shared model reduces power consumption by 50% during training and 59.10% during inference, cutting CO2 emissions by as much as 84.64% and 87.10%, respectively. 
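The layer-sharing mechanism can be illustrated by counting dense-layer parameters for two separate task networks versus one shared trunk with task-specific heads; the layer sizes below are illustrative, not the thesis's architecture, and the thesis's reported gains are in measured power and emissions rather than parameter count alone:

```python
def mlp_params(sizes: list[int]) -> int:
    """Weight + bias count for a fully connected net with these layer sizes."""
    return sum(i * o + o for i, o in zip(sizes, sizes[1:]))

# Two hypothetical tasks over the same 128-dim input, each with its own
# full network: parameters are duplicated across tasks.
separate = 2 * mlp_params([128, 256, 256, 10])

# Layer-shared variant: one common trunk plus two small task heads.
shared = mlp_params([128, 256, 256]) + 2 * mlp_params([256, 10])

saving = 1 - shared / separate  # fraction of parameters eliminated
```

Fewer parameters mean fewer multiply-accumulate operations per inference, which is the route from layer sharing to the energy and CO2 reductions reported above.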

  


Fairuz Shadmani Shishir

Parameter-Efficient Computational Drug Discovery using Deep Learning

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Sumaiya Shomaji, Chair
Tamzidul Hoque
Hongyang Sun


Abstract

The accurate prediction of small molecule binding affinity and toxicity remains a central challenge in drug discovery, with significant implications for reducing development costs, improving candidate prioritization, and enhancing safety profiles. Traditional computational approaches, such as molecular docking and quantitative structure-activity relationship (QSAR) models, often rely on handcrafted features and require extensive domain knowledge, which can limit scalability and generalization to novel chemical scaffolds. Recent advances in language models (LMs), particularly those adapted to chemical representations such as SMILES (Simplified Molecular Input Line Entry System), have opened new avenues for learning data-driven molecular representations that capture complex structural and functional properties. However, achieving both high binding affinity and low toxicity through a resource-efficient computational pipeline is inherently difficult due to the multi-objective nature of the task. This study presents a novel dual-paradigm approach to critical challenges in drug discovery: predicting small molecules with high binding affinity and low cardiotoxicity profiles. For binding affinity prediction, we implement a specialized graph neural network (GNN) architecture that operates directly on molecular structures represented as graphs, where atoms serve as nodes and bonds as edges. This topology-aware approach enables the model to capture complex spatial arrangements and electronic interactions critical for protein-ligand binding. For toxicity prediction, we leverage chemical language models (CLMs) fine-tuned with Low-Rank Adaptation (LoRA), allowing efficient adaptation of large pre-trained models to specialized toxicological endpoints while maintaining the generalized chemical knowledge embedded in the base model. 
Our hybrid methodology demonstrates significant improvements over existing computational approaches: the GNN component achieves an average area under the ROC curve (AUROC) of 0.92 across three protein targets, and the LoRA-adapted CLM reaches an AUROC of 0.90 with a 60% reduction in trainable parameters for cardiotoxicity prediction. This work establishes a powerful computational framework that accelerates drug discovery by enabling the identification of compounds with both high binding affinity and low toxicity, optimizing efficacy and safety profiles.
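The parameter efficiency of LoRA can be sketched in a few lines: the pretrained weight matrix stays frozen while only two low-rank factors are trained. The dimensions and rank below are illustrative, not those of the CLM in this work:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 512, 8  # hidden size and LoRA rank (illustrative values)

W = rng.normal(size=(d, d))          # frozen pretrained weight
A = rng.normal(size=(d, r)) * 0.01   # trainable down-projection
B = np.zeros((r, d))                 # trainable up-projection, zero-initialized
                                     # so fine-tuning starts from the base model

x = rng.normal(size=d)
# LoRA forward pass: base output plus the low-rank update x @ (A @ B).
y_out = x @ W + (x @ A) @ B

trainable = A.size + B.size          # parameters updated during fine-tuning
full = W.size                        # parameters full fine-tuning would touch
```

With rank 8 at width 512, the trainable fraction is about 3% of the frozen matrix, which is how LoRA-style adaptation achieves large reductions in parameter usage while retaining the base model's chemical knowledge.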