I2S Master's/Doctoral Theses


All students and faculty are welcome to attend the final defense of I2S graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and to post the presentation announcement online.

Upcoming Defense Notices

Sai Karthik Maddirala

Real-Estate Price Analysis and Prediction Using Ensemble Learning

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Morteza Hashemi
Prasad Kulkarni


Abstract

Accurate real-estate price estimation is crucial for buyers, sellers, investors, lenders, and policymakers, yet traditional valuation practices often rely on subjective judgment, inconsistent expertise, and incomplete market information. With the increasing availability of digital property listings, large volumes of structured real-estate data can now be leveraged to build objective, data-driven valuation systems. This project develops a comprehensive analytical framework for predicting prices across different property types using real-world listing data collected from 99acres.com across major Indian cities. The workflow includes automated web scraping, extensive data cleaning, normalization of heterogeneous property attributes, and exploratory data analysis to identify important pricing patterns and structural trends within the dataset. A multi-stage learning pipeline, consisting of feature preparation, hyperparameter tuning, cross-validation, and performance evaluation, is designed to ensure that the final predictive system is both reliable and generalizable. In addition to the core prediction engine, the project proposes a future extension using Retrieval-Augmented Generation (RAG) with Large Language Models (LLMs) to provide transparent, context-aware explanations for each valuation. Overall, this work establishes the foundation for a scalable, interpretable, and data-centric real-estate valuation platform capable of supporting informed decision-making in diverse market contexts.
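
As a rough illustration of the multi-stage pipeline described above (feature preparation, hyperparameter tuning, cross-validation), here is a minimal scikit-learn sketch on stand-in data; the column names and model choice are assumptions, not details from the project.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Tiny stand-in for the cleaned listing data (hypothetical columns).
df = pd.DataFrame({
    "city": ["Mumbai", "Pune", "Delhi", "Mumbai"] * 5,
    "property_type": ["flat", "house", "flat", "flat"] * 5,
    "area_sqft": [650, 1200, 900, 750] * 5,
    "price": [9_500_000, 8_000_000, 7_200_000, 14_000_000] * 5,
})
X, y = df.drop(columns="price"), df["price"]

# Feature preparation: scale numeric attributes, one-hot encode categoricals.
prep = ColumnTransformer([
    ("num", StandardScaler(), ["area_sqft"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city", "property_type"]),
])
pipe = Pipeline([("prep", prep), ("model", GradientBoostingRegressor(random_state=0))])

# Hyperparameter tuning with 5-fold cross-validation.
search = GridSearchCV(
    pipe,
    param_grid={"model__n_estimators": [100, 300], "model__max_depth": [2, 3]},
    cv=5,
    scoring="neg_root_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```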


Ramya Harshitha Bolla

AI Academic Assistant for Summarization and Question Answering

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

The rapid expansion of academic literature has made efficient information extraction increasingly difficult for researchers, leading to substantial time spent manually summarizing documents and identifying key insights. This project presents an AI-powered Academic Assistant designed to streamline academic reading through multi-level summarization, contextual question answering, and source-grounded traceability. The system incorporates a robust preprocessing pipeline including text extraction, artifact removal, noise filtering, and section segmentation to prepare documents for accurate analysis. After assessing the limitations of traditional NLP and transformer-based summarization models, the project adopts a Large Language Model (LLM) approach using the Gemini API, enabling deeper semantic understanding, long-context processing, and flexible summarization. The assistant provides structured short, medium, and long summaries; contextual keyword extraction; and interactive question answering with transparent source highlighting. Limitations include handling complex visual content and occasional API constraints. Overall, this project demonstrates how modern LLMs, combined with tailored prompt engineering and structured preprocessing, can significantly enhance the academic document analysis workflow.
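
As a rough illustration of the section-segmentation step in the preprocessing pipeline described above, here is a toy regex-based segmenter; the heading patterns are assumptions, not the project's actual rules.

```python
import re

# Toy section segmenter: split a paper's plain text on common heading lines.
HEADING = re.compile(
    r"^(abstract|introduction|related work|methods?|results|discussion|conclusion)\b",
    re.IGNORECASE,
)

def segment_sections(text: str) -> dict[str, str]:
    sections, current, buf = {}, "front_matter", []
    for line in text.splitlines():
        if HEADING.match(line.strip()):
            sections[current] = "\n".join(buf).strip()
            current, buf = line.strip().lower(), []
        else:
            buf.append(line)
    sections[current] = "\n".join(buf).strip()
    return sections

doc = "Abstract\nWe study...\nIntroduction\nLarge models..."
print(list(segment_sections(doc)))  # ['front_matter', 'abstract', 'introduction']
```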


Keerthi Sudha Borra

Intellinotes: An AI-Powered Document Understanding Platform

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Han Wang


Abstract

This project presents Intellinotes, an AI-powered platform that transforms educational documents into multiple learning formats to address information-overload challenges in modern education. The system leverages large language models (GPT-4o-mini) to automatically generate four complementary outputs from a single document upload: educational summaries, conversational podcast scripts, hierarchical mind maps, and interactive flashcards.

The platform employs a three-tier architecture built with Next.js, FastAPI, and MongoDB, supporting multiple document formats (PDF, DOCX, PPTX, TXT, images) through a robust parsing pipeline. Comprehensive evaluation on 30 research documents demonstrates exceptional system reliability with a 100% feature success rate across 150 tests (5 features × 30 documents), and strong semantic understanding with a semantic similarity score of 0.72.

While ROUGE scores (ROUGE-1: 0.40, ROUGE-2: 0.09, ROUGE-L: 0.17) indicate moderate lexical overlap typical of abstractive summarization, the high semantic similarity demonstrates that the system effectively captures and conveys the conceptual meaning of source documents—an essential requirement for educational content. This validation of meaning preservation over word matching represents an important contribution to evaluating educational AI systems.
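
To illustrate the two evaluation views contrasted above, the sketch below computes ROUGE overlap and embedding-based semantic similarity for a summary; the library and model choices are assumptions, not necessarily what Intellinotes used.

```python
from rouge_score import rouge_scorer
from sentence_transformers import SentenceTransformer, util

reference = "Transformers enable long-context summarization of research papers."
generated = "Transformer models can summarize long research documents effectively."

# Lexical overlap: ROUGE-1/2/L F-measures.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = {k: v.fmeasure for k, v in scorer.score(reference, generated).items()}

# Semantic similarity: cosine similarity of sentence embeddings.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
emb = model.encode([reference, generated], convert_to_tensor=True)
semantic = util.cos_sim(emb[0], emb[1]).item()

print(rouge, round(semantic, 2))
```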

The system processes documents in approximately 65 seconds with perfect reliability, providing students with comprehensive multi-modal learning materials that cater to diverse learning styles. This work contributes to the growing field of AI-assisted education by demonstrating a practical application of large language models for automated educational content generation supported by validated quality metrics.


Sowmya Ambati

AI-Powered Question Paper Generator

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

Designing a well-balanced exam requires instructors to review extensive course materials, identify key concepts, and write questions that reflect appropriate difficulty and cognitive depth. This project develops an AI-powered Question Paper Generator that automates much of this process while keeping instructors in full control. The system accepts PDFs, Word documents, PPT slides, and text files, extracts their content, and builds a FAISS-based retrieval index using sentence-transformer embeddings. A large language model then generates multiple question types (MCQs, short answers, and true/false), guided by user-selected difficulty levels and Bloom's Taxonomy distributions to ensure meaningful coverage. Each question is evaluated with a grounding score that measures how closely it aligns with the source material, improving transparency and reducing hallucination. A React frontend enables instructors to monitor progress, review questions, toggle answers, and export to PDF or Word, while an ASP.NET Core backend manages processing and metrics. The system reduces exam preparation time and enhances consistency across assessments.
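
The retrieval and grounding steps described above can be sketched with FAISS and sentence-transformer embeddings; the model name and the cosine-similarity scoring rule are assumptions, not the project's exact design.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
chunks = ["Dijkstra's algorithm computes shortest paths from a source node...",
          "A greedy algorithm makes locally optimal choices at each step..."]

# Build the retrieval index over course-material chunks.
emb = model.encode(chunks, normalize_embeddings=True).astype("float32")
index = faiss.IndexFlatIP(emb.shape[1])  # inner product == cosine on unit vectors
index.add(emb)

# Grounding score: similarity between a generated question and its closest chunk.
q = model.encode(["What does Dijkstra's algorithm compute?"],
                 normalize_embeddings=True).astype("float32")
scores, ids = index.search(q, k=1)
print(float(scores[0][0]))  # close to 1.0 => well grounded in the source
```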


George Steven Muvva

Automated Fake Content Detection Using TF-IDF-Based Machine Learning and LSTM-Driven Deep Learning Models

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

The rapid spread of misinformation across online platforms has made automated fake news detection essential. This project develops and compares machine learning (SVM, Decision Tree) and deep learning (LSTM) models to classify news headlines from the GossipCop and PolitiFact datasets as real or fake. After extensive preprocessing, including text cleaning, lemmatization, TF-IDF vectorization, and sequence tokenization, the models are trained and evaluated using standard performance metrics. Results show that SVM provides a strong baseline, but the LSTM model achieves higher accuracy and F1-scores by capturing deeper semantic and contextual patterns in the text. The study highlights the challenges of domain variation and subtle linguistic cues, while demonstrating that context-aware deep learning methods offer superior capability for automated fake content detection.
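
A minimal sketch of the classical TF-IDF branch, assuming a scikit-learn implementation (the project's exact preprocessing and hyperparameters are not shown):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

headlines = ["Celebrity secretly married alien, insider claims",
             "City council approves new transit budget"]
labels = [1, 0]  # 1 = fake, 0 = real (toy examples)

pipe = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english", ngram_range=(1, 2))),
    ("svm", LinearSVC()),
])
pipe.fit(headlines, labels)
print(pipe.predict(["Shocking: star adopts alien baby"]))
```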


Babak Badnava

Joint Communication and Computation for Emerging Applications in Next-Generation Wireless Networks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Dissertation Defense

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Prasad Kulkarni
Taejoon Kim
Shawn Keshmiri

Abstract

Emerging applications in next-generation wireless networks, such as augmented and virtual reality (AR/VR) and autonomous vehicles, demand significant computational and communication resources at the network edge. This PhD research focuses on developing joint communication–computation solutions while incorporating various network-, application-, and user-imposed constraints. In the first thrust, we examine the problem of energy-constrained computation offloading to edge servers in a multi-user, multi-channel wireless network. To develop a decentralized offloading policy for each user, we model the problem as a partially observable Markov decision process (POMDP). Leveraging bandit learning methods, we introduce a decentralized task offloading solution in which edge users offload their computation tasks to nearby edge servers over selected communication channels. 
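
As a toy illustration of bandit-style offloading, the sketch below runs UCB1 over (server, channel) arms; the reward function is a random stand-in, and the dissertation's POMDP formulation is considerably richer.

```python
import math
import random

arms = [("server_A", "ch1"), ("server_A", "ch2"), ("server_B", "ch1")]
counts = [0] * len(arms)
values = [0.0] * len(arms)

def observed_reward(arm):
    # Stand-in for a real offloading outcome, e.g. negative completion delay.
    return random.random()

for t in range(1, 1001):
    ucb = [float("inf") if counts[i] == 0
           else values[i] + math.sqrt(2 * math.log(t) / counts[i])
           for i in range(len(arms))]
    i = ucb.index(max(ucb))                   # pick the most promising arm
    r = observed_reward(arms[i])
    counts[i] += 1
    values[i] += (r - values[i]) / counts[i]  # incremental mean update

print(max(zip(counts, arms)))                 # most frequently chosen arm
```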

The second thrust focuses on user-driven requirements for resource-intensive applications, specifically the Quality of Experience (QoE) in 2D and 3D video streaming. Given the unique characteristics of millimeter-wave (mmWave) networks, we develop a beam alignment and buffer-predictive multi-user scheduling algorithm for 2D video streaming applications. This algorithm balances the trade-off between beam alignment overhead and playback buffer levels for optimal resource allocation across multiple users. We then extend our investigation to develop a joint rate adaptation and computation distribution framework for 3D video streaming in mmWave-based VR systems. Numerical results using real-world mmWave traces and 3D video datasets demonstrate significant improvements in video quality, rebuffering time, and quality variations perceived by users.

Finally, we develop novel edge computing solutions for multi-layer immersive video processing systems. By exploring and exploiting the elastic nature of computation tasks in these systems, we propose a multi-agent reinforcement learning (MARL) framework that incorporates two learning-based methods: the centralized phasic policy gradient (CPPG) and the independent phasic policy gradient (IPPG). IPPG leverages shared information and model parameters to learn edge offloading policies; however, during execution, each user acts independently based only on its local state information. This decentralized execution reduces the communication and computation overhead of centralized decision-making and improves scalability. We leverage real-world 4G, 5G, and WiGig network traces, along with 3D video datasets, to investigate the performance trade-offs of CPPG and IPPG when applied to elastic task computing.


Sri Dakshayani Guntupalli

Customer Churn Prediction for Subscription-Based Businesses

When & Where:


LEEP2, Room 2420

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

Customer churn is a critical challenge for subscription-based businesses, as it directly impacts revenue, profitability, and long-term customer loyalty. Because retaining existing customers is more cost-effective than acquiring new ones, accurate churn prediction is essential for sustainable growth. This work presents a machine learning-based framework for predicting and analyzing customer churn, coupled with an interactive Streamlit web application that supports real-time decision making. Using historical customer data that includes demographic attributes, usage behavior, transaction history, and engagement patterns, the system applies extensive data preprocessing and feature engineering to construct a modeling-ready dataset. Multiple models (Logistic Regression, Random Forest, and XGBoost) are trained and evaluated using the Scikit-Learn framework. Model performance is assessed with metrics such as accuracy, precision, recall, F1-score, and ROC-AUC to identify the most effective predictor of churn. The top-performing models are serialized and deployed within a Streamlit interface that accepts individual customer inputs or batch data files to generate immediate churn predictions and summaries. Overall, this project demonstrates how machine learning can transform raw customer data into actionable business intelligence and provides a scalable approach to proactive customer retention management.
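
A minimal sketch of the evaluate-then-serialize step on synthetic stand-in data; the project's actual features, tuning, and XGBoost variant are not shown here.

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for preprocessed customer features and churn labels.
X, y = make_classification(n_samples=1000, n_features=12, weights=[0.8], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

candidates = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0)]
best = max(candidates, key=lambda m: roc_auc_score(
    y_test, m.fit(X_train, y_train).predict_proba(X_test)[:, 1]))

joblib.dump(best, "churn_model.joblib")  # loaded later by the Streamlit app
```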


QiTao Weng

Anytime Computer Vision for Autonomous Driving

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Heechul Yun, Chair
Drew Davidson
Shawn Keshmiri


Abstract

Latency–accuracy tradeoffs are fundamental in real-time applications of deep neural networks (DNNs) for cyber-physical systems. In autonomous driving, in particular, safety depends on both prediction quality and the end-to-end delay from sensing to actuation. We observe that (1) when latency is accounted for, the latency-optimal network configuration varies with scene context and compute availability; and (2) a single fixed-resolution model becomes suboptimal as conditions change.

We present a multi-resolution, end-to-end deep neural network for the CARLA urban driving challenge using monocular camera input. Our approach employs a convolutional neural network (CNN) that supports multiple input resolutions through per-resolution batch normalization, enabling runtime selection of an ideal input scale under a latency budget, as well as resolution retargeting, which allows multi-resolution training without access to the original training dataset.
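
A minimal PyTorch sketch of the per-resolution batch-normalization idea: convolution weights are shared across scales while each supported resolution keeps its own normalization statistics. Sizes are illustrative, not the thesis architecture.

```python
import torch
import torch.nn as nn

class MultiResBlock(nn.Module):
    def __init__(self, channels: int, resolutions=(128, 192, 256)):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)  # shared weights
        self.bns = nn.ModuleDict({                               # per-scale BN stats
            str(r): nn.BatchNorm2d(channels) for r in resolutions
        })

    def forward(self, x: torch.Tensor, resolution: int) -> torch.Tensor:
        return torch.relu(self.bns[str(resolution)](self.conv(x)))

block = MultiResBlock(16)
out = block(torch.randn(1, 16, 128, 128), resolution=128)  # runtime scale selection
print(out.shape)
```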

We implement and evaluate our multi-resolution end-to-end CNN in CARLA to explore the latency–safety frontier. Results show consistent improvements in per-route safety metrics—lane invasions, red-light infractions, and collisions—relative to fixed-resolution baselines.


Sherwan Jalal Abdullah

A Versatile and Programmable UAV Platform for Integrated Terrestrial and Non-Terrestrial Network Measurements in Rural Areas

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Shawn Keshmiri


Abstract

Reliable cellular connectivity is essential for modern services such as telehealth, precision agriculture, and remote education; yet measuring network performance in rural areas presents significant challenges. Traditional drive testing cannot access large geographic areas between roads, while crowdsourced data provides insufficient spatial resolution in low-population regions. To address these limitations, we develop an open-source UAV-based measurement platform that integrates an onboard computation unit, a commercial cellular modem, and automated flight control to systematically capture Radio Access Network (RAN) signals and end-to-end network performance metrics at different altitudes. Our platform collects synchronized measurements of signal strength (RSRP, RSSI), signal quality (RSRQ, SINR), latency, and bidirectional throughput, with each measurement tagged with GPS coordinates and altitude.

Experimental results from a semi-rural deployment reveal a fundamental altitude-dependent trade-off: received signal power improves at higher altitudes due to enhanced line-of-sight conditions, while signal quality degrades from increased interference with neighboring cells. Our analysis indicates that most of the measurement area maintains acceptable signal quality, along with adequate throughput performance, for both uplink and downlink communications. We further demonstrate that strong radio signal metrics for individual cells do not necessarily translate to spatial coverage dominance: the cell serving the majority of our test area exhibited only moderate performance, while cells with superior metrics contributed minimally to overall coverage.

Next, we develop several machine learning (ML) models to improve the prediction accuracy of signal strength at unmeasured altitudes. Finally, we extend our measurement platform by integrating non-terrestrial network (NTN) user terminals with the UAV components to investigate the performance of Low-Earth Orbit (LEO) satellite networks under UAV mobility. Our measurement results demonstrate that NTN offers a viable fallback option by achieving acceptable latency and throughput performance during flight operations. Overall, this work establishes a reproducible methodology for three-dimensional rural network characterization and provides practical insights for network operators, regulators, and researchers addressing connectivity challenges in underserved areas.
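
As a rough illustration of the signal-strength prediction step, the sketch below fits a regressor to geotagged samples and cross-validates it; the features, model choice, and data are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Columns: latitude, longitude, altitude (m); values are synthetic.
X = rng.uniform([38.9, -95.3, 10], [39.0, -95.2, 120], size=(500, 3))
# Synthetic RSRP (dBm) with a mild altitude dependence plus noise.
y = -80.0 - 0.05 * X[:, 2] + rng.normal(0, 3, 500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
print(cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error"))
```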


Satya Ashok Dowluri

Comparison of Copy-and-Patch and Meta-Tracing Compilation Techniques in the Context of Python

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Hossein Saiedian


Abstract

Python's dynamic nature makes performance enhancement challenging. Recently, a JIT compiler using a novel copy-and-patch compilation approach was implemented in the reference Python implementation, CPython. Our goal in this work is to study and understand the performance properties of CPython's new JIT compiler. To facilitate this study, we compare the quality and performance of the code generated by this new JIT compiler with a more mature and traditional meta-tracing based JIT compiler implemented in PyPy (another Python implementation). Our thorough experimental evaluation reveals that, while it achieves the goal of fast compilation speed, CPython's JIT severely lags in code quality/performance compared with PyPy. While this observation is a known and intentional property of the copy-and-patch approach, it results in the new JIT compiler failing to elevate Python code performance beyond that achieved by the default interpreter, despite significant added code complexity. In this thesis, we report and explain our novel experiments, results, and observations.
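
For context, comparisons of this kind come down to timing identical pure-Python workloads under each implementation; a trivial stand-in (not the thesis's benchmark suite) is shown below. Run the same file under both interpreters, e.g. `python3 bench.py` versus `pypy3 bench.py`, and compare wall times.

```python
import time

def hot_loop(n: int) -> int:
    # Interpreter-bound arithmetic loop; JIT-friendly, allocation-free.
    total = 0
    for i in range(n):
        total += (i * i) % 7
    return total

start = time.perf_counter()
hot_loop(10_000_000)
print(f"elapsed: {time.perf_counter() - start:.3f}s")
```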


Arya Hadizadeh Moghaddam

Learning Personalized and Robust Patient Representations across Graphical and Temporal Structures in Electronic Health Records

When & Where:


Eaton Hall, Room 2001B

Degree Type:

PhD Comprehensive Defense

Committee Members:

Zijun Yao, Chair
Bo Luo
Fengjun Li
Dongjie Wang
Xinmai Yang

Abstract

Recent research in Electronic Health Records (EHRs) has enabled personalized and longitudinal modeling of patient trajectories for health outcome improvement. Despite this progress, existing methods often struggle to capture the dynamic, heterogeneous, and interdependent nature of medical data. Specifically, many representation methods learn a rich set of EHR features independently but overlook the intricate relationships among them. Moreover, data scarcity and bias, such as cold-start scenarios where patients have only a few visits or rare conditions, remain fundamental challenges for real-world clinical decision support. To address these challenges, this dissertation aims to introduce an integrated machine learning framework for sophisticated, interpretable, and adaptive EHR representation modeling. Specifically, the dissertation comprises three thrusts:

  1. A time-aware graph transformer model that dynamically constructs personalized temporal graph representations that capture patient trajectory over different visits.
  2. A contrasted multi-intent recommender system that can disentangle the multiple temporal patterns that coexist in a patient’s long medical history, while considering distinct health profiles.
  3. A few-shot meta-learning framework that can address the patient cold-start issue through a self- and peer-adaptive model enhanced by uncertainty-based filtering.

Together, these contributions advance a data-efficient, generalizable, and interpretable foundation for large-scale clinical EHR mining toward truly personalized medical outcome prediction.


Junyi Zhao

On the Security of Speech-based Machine Translation Systems: Vulnerabilities and Attacks

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Bo Luo, Chair
Fengjun Li
Zijun Yao


Abstract

In light of the rapid advancement of global connectivity and the increasing reliance on multilingual communication, speech-based Machine Translation (MT) systems have emerged as essential technologies for facilitating seamless cross-lingual interaction. These systems enable individuals and organizations to overcome linguistic boundaries by automatically translating spoken language in real time. However, despite their growing ubiquity in various applications such as virtual assistants, international conferencing, and accessibility services, the security and robustness of speech-based MT systems remain underexplored. In particular, limited attention has been given to understanding their vulnerabilities under adversarial conditions, where malicious actors intentionally craft or manipulate speech inputs to mislead or degrade translation performance.

This thesis presents a comprehensive investigation into the security landscape of speech-based machine translation systems from an adversarial perspective. We systematically categorize and analyze potential attack vectors, evaluate their success rates across diverse system architectures and environmental settings, and explore the practical implications of such attacks. Furthermore, through a series of controlled experiments and human-subject evaluations, we demonstrate that adversarial manipulations can significantly distort translation outputs in realistic use cases, thereby posing tangible risks to communication reliability and user trust.

Our findings reveal critical weaknesses in current MT models and underscore the urgent need for developing more resilient defense strategies. We also discuss open research challenges and propose directions for building secure, trustworthy, and ethically responsible speech translation technologies. Ultimately, this work contributes to a deeper understanding of adversarial robustness in multimodal language systems and provides a foundation for advancing the security of next-generation machine translation frameworks.


Kyrian C. Adimora

Machine Learning-Based Multi-Objective Optimization for HPC Workload Scheduling: A GNN-RL Approach

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Comprehensive Defense

Committee Members:

Hongyang Sun, Chair
David Johnson
Prasad Kulkarni
Zijun Yao
Michael J. Murray

Abstract

As high-performance computing (HPC) systems achieve exascale capabilities, traditional single-objective schedulers that optimize solely for performance prove inadequate for environments requiring simultaneous optimization of energy efficiency and system resilience. Current scheduling approaches result in suboptimal resource utilization, excessive energy consumption, and reduced fault tolerance under the demanding requirements of large-scale scientific applications. This dissertation proposes a novel multi-objective optimization framework that integrates graph neural networks (GNNs) with reinforcement learning (RL) to jointly optimize performance, energy efficiency, and system resilience in HPC workload scheduling. The central hypothesis posits that graph-structured representations of workloads and system states, combined with adaptive learning policies, can significantly outperform traditional scheduling methods in complex, dynamic HPC environments. The proposed framework comprises three integrated components: (1) GNN-RL, which combines graph neural networks with reinforcement learning for adaptive policy development; (2) EA-GATSched, an energy-aware scheduler leveraging Graph Attention Networks; and (3) HARMONIC (Holistic Adaptive Resource Management for Optimized Next-generation Interconnected Computing), a probabilistic model for workload uncertainty quantification. The proposed methodology encompasses novel uncertainty modeling techniques, scalable GNN-based scheduling algorithms, and comprehensive empirical evaluation using production supercomputing workload traces. Preliminary results demonstrate 10-19% improvements in energy efficiency while maintaining comparable performance metrics. The framework will be evaluated across makespan reduction, energy consumption, resource utilization efficiency, and fault tolerance in various operational scenarios. This research advances sustainable and resilient HPC resource management, providing critical infrastructure support for next-generation scientific computing applications.


Past Defense Notices


Mohammad Ful Hossain Seikh

AAFIYA: Antenna Analysis in Frequency-domain for Impedance and Yield Assessment

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

Jim Stiles, Chair
Rachel Jarvis
Alessandro Salandrino


Abstract

This project presents AAFIYA (Antenna Analysis in Frequency-domain for Impedance and Yield Assessment), a modular Python toolkit developed to automate and streamline the characterization and analysis of radiofrequency (RF) antennas using both measurement and simulation data. Motivated by the need for reproducible, flexible, and publication-ready workflows in modern antenna research, AAFIYA provides comprehensive support for all major antenna metrics, including S-parameters, impedance, gain and beam patterns, polarization purity, and calibration-based yield estimation. The toolkit features robust data ingestion from standard formats (such as Touchstone files and beam pattern text files), vectorized computation of RF metrics, and high-quality plotting utilities suitable for scientific publication.
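
AAFIYA's own code is not reproduced in this notice; for flavor, the sketch below reads a Touchstone file and extracts S11 using the open-source scikit-rf library, which is an assumed stand-in rather than a confirmed AAFIYA dependency.

```python
import numpy as np
import skrf as rf

ntwk = rf.Network("antenna.s1p")   # hypothetical one-port measurement file
s11 = ntwk.s[:, 0, 0]              # complex reflection coefficient vs. frequency
s11_db = 20 * np.log10(np.abs(s11))

best = np.argmin(s11_db)
print(f"deepest match: {s11_db[best]:.1f} dB at {ntwk.f[best] / 1e6:.0f} MHz")
ntwk.plot_s_db(m=0, n=0)           # publication-style S11 plot
```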

Validation was carried out using measurements from industry-standard electromagnetic anechoic chamber setups involving both Log Periodic Dipole Array (LPDA) reference antennas and Askaryan Radio Array (ARA) Bottom Vertically Polarized (BVPol) antennas, covering a frequency range of 50–1500 MHz. Key performance metrics, such as broadband impedance matching, S11- and S21-related calculations, 3D realized gain patterns, vector effective lengths, and cross-polarization ratio, were extracted and compared against full-wave electromagnetic simulations (using HFSS and WIPL-D). The results demonstrate close agreement between measurement and simulation, confirming the reliability of the workflow and calibration methodology.

AAFIYA’s open-source, extensible design enables rapid adaptation to new experiments and provides a foundation for future integration with machine learning and evolutionary optimization algorithms. This work not only delivers a validated toolkit for antenna research and pedagogy but also sets the stage for next-generation approaches in automated antenna design, optimization, and performance analysis.


Soumya Baddham

Battling Toxicity: A Comparative Analysis of Machine Learning Models for Content Moderation

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

With the exponential growth of user-generated content, online platforms face unprecedented challenges in moderating toxic and harmful comments. As a result, automated content moderation has emerged as a critical application of machine learning, enabling platforms to ensure user safety and maintain community standards. Despite its importance, challenges such as severe class imbalance, contextual ambiguity, and the diverse nature of toxic language often compromise moderation accuracy, leading to biased classification performance.

This project presents a comparative analysis of machine learning approaches for a Multi-Label Toxic Comment Classification System using the Toxic Comment Classification dataset from Kaggle. The study examines the performance of traditional algorithms, such as Logistic Regression, Random Forest, and XGBoost, alongside deep architectures, including Bi-LSTM, CNN-Bi-LSTM, and DistilBERT. The proposed approach utilizes word-level embeddings across all models and examines the effects of architectural enhancements, hyperparameter optimization, and advanced training strategies on model robustness and predictive accuracy.

The study emphasizes the significance of loss function optimization and threshold adjustment strategies in improving the detection of minority classes. The comparative results reveal distinct performance trade-offs across model architectures: transformer models achieve superior contextual understanding at the cost of computational complexity, while deep learning approaches (LSTM models) offer efficiency advantages. These findings establish evidence-based guidelines for model selection in real-world content moderation systems, striking a balance between accuracy requirements and operational constraints.
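
A sketch of the per-label threshold adjustment idea mentioned above: choose, for each label, the probability cutoff that maximizes F1 on a validation split. The grid and metric here are assumptions.

```python
import numpy as np
from sklearn.metrics import f1_score

def tune_thresholds(y_true: np.ndarray, y_prob: np.ndarray) -> np.ndarray:
    """y_true, y_prob: (n_samples, n_labels) arrays; returns one cutoff per label."""
    cutoffs = []
    grid = np.linspace(0.05, 0.95, 19)
    for j in range(y_true.shape[1]):
        f1s = [f1_score(y_true[:, j], y_prob[:, j] >= t, zero_division=0) for t in grid]
        cutoffs.append(grid[int(np.argmax(f1s))])
    return np.array(cutoffs)

# Toy check: a rare positive label typically benefits from a lowered cutoff.
rng = np.random.default_rng(0)
y_true = (rng.random((200, 3)) < 0.1).astype(int)
y_prob = np.clip(y_true * 0.4 + rng.random((200, 3)) * 0.5, 0, 1)
print(tune_thresholds(y_true, y_prob))
```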


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Degree Type:

PhD Dissertation Defense

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow for processing a large amount of information simultaneously. This capability is significant for the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, mainly based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations. These include low accuracy, high execution times, and low scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages the finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our proposed concepts using a number of case studies, including the multidimensional Poisson equation, the multidimensional heat equation, the Black-Scholes equation, and the Navier-Stokes equation for computational fluid dynamics (CFD), achieving promising results. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. This work establishes a practical and effective approach for solving PDEs using quantum computing for engineering and scientific applications.
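
The C2Q encoding itself is the authors' technique and is not reproduced here; as a generic building block, amplitude-encoding a normalized data vector in Qiskit looks like the sketch below.

```python
import numpy as np
from qiskit import QuantumCircuit

data = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0])  # e.g., a discretized field
state = data / np.linalg.norm(data)                         # unit L2 norm required

n_qubits = int(np.log2(len(state)))                         # 8 amplitudes -> 3 qubits
qc = QuantumCircuit(n_qubits)
qc.initialize(state, range(n_qubits))  # synthesizes a state-preparation circuit
print(qc.draw())
```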


Alex Manley

Taming Complexity in Computer Architecture through Modern AI-Assisted Design and Education

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Degree Type:

MS Thesis Defense

Committee Members:

Heechul Yun, Chair
Tamzidul Hoque
Mohammad Alian
Prasad Kulkarni

Abstract

The escalating complexity inherent in modern computer architecture presents significant challenges for both professional hardware designers and students striving to gain foundational understanding. Historically, the steady improvement of computer systems was driven by transistor scaling, predictable performance increases, and relatively straightforward architectural paradigms. However, with the end of traditional scaling laws and the rise of heterogeneous and parallel architectures, designers now face unprecedented intricacies involving power management, thermal constraints, security considerations, and sophisticated software interactions. Prior tools and methodologies, often reliant on complex, command-line driven simulations, exacerbate these challenges by introducing steep learning curves, creating a critical need for more intuitive, accessible, and efficient solutions. To address these challenges, this thesis introduces two innovative, modern tools. 

The first tool, SimScholar, provides an intuitive graphical user interface (GUI) built upon the widely-used gem5 simulator. SimScholar significantly simplifies the simulation process, enabling students and educators to more effectively engage with architectural concepts through a visually guided environment, both reducing complexity and enhancing conceptual understanding. Supporting SimScholar, the gem5 Extended Modules API (gEMA) offers streamlined backend integration with gem5, ensuring efficient communication, modularity, and maintainability.

The second contribution, gem5 Co-Pilot, delivers an advanced framework for architectural design space exploration (DSE). Co-Pilot integrates cycle-accurate simulation via gem5, detailed power and area modeling through McPAT, and intelligent optimization assisted by a large language model (LLM). Central to Co-Pilot is the Design Space Declarative Language (DSDL), a Python-based domain-specific language that facilitates structured, clear specification of design parameters and constraints.

Collectively, these tools constitute a comprehensive approach to taming complexity in computer architecture, offering powerful, user-friendly solutions tailored to both educational and professional settings.


Prashanthi Mallojula

On the Security of Mobile and Auto Companion Apps

When & Where:


Eaton Hall, Room 2001B

Degree Type:

PhD Dissertation Defense

Committee Members:

Bo Luo, Chair
Alex Bardas
Fengjun Li
Hongyang Sun
Huazhen Fang

Abstract

The rapid development of mobile apps on modern smartphone platforms has raised critical concerns regarding user data privacy and the security of app-to-device communications, particularly with companion apps that interface with external IoT or cyber-physical systems (CPS). In this dissertation, we investigate two major aspects of mobile app security: the misuse of permission mechanisms and the security of app-to-device communication in automotive companion apps.


Mobile apps seek user consent for accessing sensitive information such as location and personal data. However, users often blindly accept these permission requests, allowing apps to abuse this mechanism. As long as a permission is requested, state-of-the-art security mechanisms typically treat it as legitimate. This raises a critical question: Are these permission requests always valid? To explore this, we validate permission requests using statistical analysis on permission sets extracted from groups of functionally similar apps. We identify mobile apps with abusive permission access and quantify the risk of information leakage posed by each app. Through a large-scale statistical analysis of permission sets from over 200,000 Android apps, our findings reveal that approximately 10% of the apps exhibit highly risky permission usage. 
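
A toy version of this group-based check: within a set of functionally similar apps, flag permissions that are rare across the group. The rarity cutoff and app groups are assumptions.

```python
from collections import Counter

group = {
    "app_a": {"INTERNET", "ACCESS_FINE_LOCATION"},
    "app_b": {"INTERNET"},
    "app_c": {"INTERNET", "READ_CONTACTS"},  # READ_CONTACTS is unusual here
}

freq = Counter(p for perms in group.values() for p in perms)
n = len(group)
for app, perms in group.items():
    rare = {p for p in perms if freq[p] / n < 0.5}  # assumed rarity cutoff
    if rare:
        print(app, "requests unusual permissions:", rare)
```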


Next, we present a comprehensive study of automotive companion apps, a rapidly growing yet underexplored category of mobile apps. These apps are used for vehicle diagnostics, telemetry, and remote control, and they often interface with in-vehicle networks via OBD-II dongles, exposing users to significant privacy and security risks. Using a hybrid methodology that combines static code analysis, dynamic runtime inspection, and network traffic monitoring, we analyze 154 publicly available Android automotive apps. Our findings uncover a broad range of critical vulnerabilities. Over 74% of the analyzed apps exhibit vulnerabilities that could lead to private information leakage, property theft, or even real-time safety risks while driving. Specifically, 18 apps were found to connect to open OBD-II dongles without requiring any authentication, accept arbitrary CAN bus commands from potentially malicious users, and transmit those commands to the vehicle without validation. 16 apps were found to store driving logs in external storage, enabling attackers to reconstruct trip histories and driving patterns. We demonstrate several real-world attack scenarios that illustrate how insecure data storage and communication practices can compromise user privacy and vehicular safety. Finally, we discuss mitigation strategies and detail the responsible disclosure process undertaken with the affected developers.


Syed Abid Sahdman

Soliton Generation and Pulse Optimization using Nonlinear Transmission Lines

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Alessandro Salandrino, Chair
Shima Fardad
Morteza Hashemi


Abstract

Nonlinear Transmission Lines (NLTLs) have gained significant interest due to their ability to generate ultra-short, high-power RF pulses, which are valuable in applications such as ultrawideband radar, space vehicles, and battlefield communication disruption. The waveforms generated by NLTLs offer frequency diversity not typically observed in High-Power Microwave (HPM) sources based on electron beams. Nonlinearity in lumped element transmission lines is usually introduced using voltage-dependent capacitors due to their simplicity and widespread availability. The periodic structure of these lines introduces dispersion, which broadens pulses. In contrast, nonlinearity causes higher-amplitude regions to propagate faster. The interaction of these effects results in the formation of stable, self-localized waveforms known as solitons.

Soliton propagation in NLTLs can be described by the Korteweg-de Vries (KdV) equation. In this thesis, the Bäcklund Transformation (BT) method has been used to derive both single and two-soliton solutions of the KdV equation. This method links two different partial differential equations (PDEs) and their solutions to produce solutions for nonlinear PDEs. The two-soliton solution is obtained from the single soliton solution using a nonlinear superposition principle known as Bianchi’s Permutability Theorem (BPT). Although the KdV model is suitable for NLTLs where the capacitance-voltage relationship follows that of a reverse-biased p-n junction, it cannot generally represent arbitrary nonlinear capacitance characteristics.
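
For reference, in the standard normalization the KdV equation and its single-soliton solution take the textbook form below; the normalization derived from NLTL parameters in the thesis may differ.

```latex
% KdV equation and single-soliton solution (standard textbook normalization)
u_t + 6\,u\,u_x + u_{xxx} = 0, \qquad
u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - c\,t - x_0)\right)
```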

To address this limitation, a Finite Difference Time Domain (FDTD) method has been developed to numerically solve the NLTL equation for soliton propagation. To demonstrate the pulse sharpening and RF generation capability of a varactor-loaded NLTL, a 12-section lumped element circuit has been designed and simulated using LTspice and verified against the calculated results. In airborne radar systems, operational constraints such as range, accuracy, data rate, environment, and target type require flexible waveform design, including variation in pulse widths and pulse repetition frequencies. A gradient descent optimization technique has been employed to generate pulses with varying amplitudes and frequencies by optimizing the NLTL parameters. This work provides a theoretical analysis and numerical simulation to study soliton propagation in NLTLs and demonstrates the generation of tunable RF pulses through optimized circuit design.


Vinay Kumar Reddy Budideti

NutriBot: An AI-Powered Personalized Nutrition Recommendation Chatbot Using Rasa

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Victor Frost
Prasad Kulkarni


Abstract

In recent years, the intersection of Artificial Intelligence and healthcare has paved the way for intelligent dietary assistance. NutriBot is an AI-powered chatbot developed using the Rasa framework to deliver personalized nutrition recommendations based on user preferences, diet types, and nutritional goals. This full-stack system integrates Rasa NLU, a Flask backend, the Nutritionix API for real-time food data, and a React.js + Tailwind CSS frontend for seamless interaction. The system is containerized using Docker and deployable on cloud platforms like GCP. 

The chatbot supports multi-turn conversations, slot-filling, and remembers user preferences such as dietary restrictions or nutrient focus (e.g., high protein). Evaluation of the system showed perfect intent and entity recognition accuracy, fast API response times, and user-friendly fallback handling. While NutriBot currently lacks persistent user profiles and multilingual support, it offers a highly accurate, scalable framework for future extensions such as fitness tracker integration, multilingual capabilities, and smart assistant deployment.


Arun Kumar Punjala

Deep Learning-Based MRI Brain Tumor Classification: Evaluating Sequential Architectures for Diagnostic Accuracy

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

Accurate classification of brain tumors from MRI scans plays a vital role in assisting clinical diagnosis and treatment planning. This project investigates and compares three deep learning-based classification approaches designed to evaluate the effectiveness of integrating recurrent layers into conventional convolutional architectures. Specifically, a CNN-LSTM model, a CNN-RNN model with GRU units, and a baseline CNN classifier using EfficientNetB0 are developed and assessed on a curated MRI dataset.

The CNN-LSTM model uses ResNet50 as a feature extractor, with spatial features reshaped and passed through stacked LSTM layers to explore sequential learning on static medical images. The CNN-RNN model implements TimeDistributed convolutional layers followed by GRUs, examining the potential benefits of GRU-based modeling. The EfficientNetB0-based CNN model, trained end-to-end without recurrent components, serves as the performance baseline.
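
A minimal Keras sketch of the described ResNet50-to-LSTM layout, where the 7×7 spatial grid of CNN features is read as a 49-step sequence; head sizes and the class count are illustrative.

```python
import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.ResNet50(include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # frozen feature extractor

inputs = tf.keras.Input((224, 224, 3))
feat = base(inputs)                         # (7, 7, 2048) spatial feature map
seq = layers.Reshape((49, 2048))(feat)      # 7x7 grid -> 49-step sequence
x = layers.LSTM(128, return_sequences=True)(seq)
x = layers.LSTM(64)(x)
outputs = layers.Dense(4, activation="softmax")(x)  # e.g., 4 tumor classes

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```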

All three models are evaluated using training accuracy, validation loss, confusion matrices, and class-wise performance metrics. Results show that the CNN-LSTM architecture provides the most balanced performance across tumor types, while the CNN-RNN model suffers from mild overfitting. The EfficientNetB0 baseline offers stable and efficient classification for general benchmarking.


Ganesh Nurukurti

Customer Behavior Analytics and Recommendation System for E-Commerce

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Han Wang


Abstract

In the era of digital commerce, personalized recommendations are pivotal for enhancing user experience and boosting engagement. This project presents a comprehensive recommendation system integrated into an e-commerce web application, designed using Flask and powered by collaborative filtering via Singular Value Decomposition (SVD). The system intelligently predicts and personalizes product suggestions for users based on implicit feedback such as purchases, cart additions, and search behavior.


The foundation of the recommendation engine is built on user-item interaction data, derived from the Brazilian e-commerce Olist dataset. Ratings are simulated using weighted scores for purchases and cart additions, reflecting varying degrees of user intent. These interactions are transformed into a user-product matrix and decomposed using SVD, yielding latent user and product features. The model leverages these latent factors to predict user interest in unseen products, enabling precise and scalable recommendation generation.
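
A minimal sketch of this factorization step, using truncated SVD on a sparse user-product matrix to produce latent factors and predicted affinities; the matrix values and the rank k are simplified relative to the project.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

# Rows: users, columns: products, values: weighted implicit ratings.
R = csr_matrix(np.array([
    [5.0, 0.0, 3.0],
    [4.0, 2.0, 0.0],
    [0.0, 1.0, 4.0],
]))

U, s, Vt = svds(R, k=2)        # latent user/product factors
scores = (U * s) @ Vt          # predicted affinity for every user-product pair
print(np.argsort(-scores[0]))  # ranked product indices for user 0
```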


To further enhance personalization, the system incorporates real-time user activity. Recent search history is stored in an SQLite database and used to prioritize recommendations that align with the user’s current interests. A diversity constraint is also applied to avoid redundancy, limiting the number of recommended products per category.


The web application supports robust user authentication, product exploration by category, cart management, and checkout simulations. It features a visually driven interface with dynamic visualizations for product insights and user interactions. The home page adapts to individual preferences, showing tailored product recommendations and enabling users to explore categories and details.


In summary, this project demonstrates the practical implementation of a hybrid recommendation strategy combining matrix factorization with contextual user behavior. It showcases the importance of latent factor modeling, data preprocessing, and user-centric design in delivering an intelligent retail experience.


Masoud Ghazikor

Distributed Optimization and Control Algorithms for UAV Networks in Unlicensed Spectrum Bands

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

MS Thesis Defense

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Prasad Kulkarni


Abstract

UAVs have emerged as a transformative technology for various applications, including emergency services, delivery, and video streaming. Among these, video streaming services in areas with limited physical infrastructure, such as disaster-affected areas, play a crucial role in public safety. UAVs can be rapidly deployed in search and rescue operations to efficiently cover large areas and provide live video feeds, enabling quick decision-making and resource allocation strategies. However, ensuring reliable and robust UAV communication in such scenarios is challenging, particularly in unlicensed spectrum bands, where interference from other nodes is a significant concern. To address this issue, developing distributed transmission control and video streaming mechanisms is essential to maintaining a high quality of service, especially for UAV networks that rely on delay-sensitive data.

In this MSc thesis, we study the problem of distributed transmission control and video streaming optimization for UAVs operating in unlicensed spectrum bands. We develop a cross-layer framework that jointly considers three inter-dependent factors: (i) in-band interference introduced by ground-aerial nodes at the physical layer, (ii) limited-size queues with delay-constrained packet arrival at the MAC layer, and (iii) video encoding rate at the application layer. This framework is designed to optimize the average throughput and PSNR by adjusting fading thresholds and video encoding rates for an integrated aerial-ground network in unlicensed spectrum bands. Using a consensus-based distributed algorithm and coordinate descent optimization, we develop two algorithms: (i) Distributed Transmission Control (DTC), which dynamically adjusts fading thresholds to maximize the average throughput by mitigating trade-offs between low-SINR transmission errors and queue packet losses, and (ii) Joint Distributed Video Transmission and Encoder Control (JDVT-EC), which optimally balances packet loss probabilities and video distortions by jointly adjusting fading thresholds and video encoding rates. Through extensive numerical analysis, we demonstrate the efficacy of the proposed algorithms under various scenarios.