I2S Master's/Doctoral Theses


All students and faculty are welcome to attend the final defense of I2S graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Sai Karthik Maddirala

Real-Estate Price Analysis and Prediction Using Ensemble Learning

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Morteza Hashemi
Prasad Kulkarni


Abstract

Accurate real-estate price estimation is crucial for buyers, sellers, investors, lenders, and policymakers, yet traditional valuation practices often rely on subjective judgment, inconsistent expertise, and incomplete market information. With the increasing availability of digital property listings, large volumes of structured real-estate data can now be leveraged to build objective, data-driven valuation systems. This project develops a comprehensive analytical framework for predicting prices for different property types using real-world listing data collected from 99acres.com across major Indian cities. The workflow includes automated web scraping, extensive data cleaning, normalization of heterogeneous property attributes, and exploratory data analysis to identify important pricing patterns and structural trends within the dataset. A multi-stage learning pipeline, consisting of feature preparation, hyperparameter tuning, cross-validation, and performance evaluation, is designed to ensure that the final predictive system is both reliable and generalizable. In addition to the core prediction engine, the project proposes a future extension using Retrieval-Augmented Generation (RAG) with Large Language Models (LLMs) to provide transparent, context-aware explanations for each valuation. Overall, this work establishes the foundation for a scalable, interpretable, and data-centric real-estate valuation platform capable of supporting informed decision-making in diverse market contexts.
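
As a rough illustration of the kind of multi-stage pipeline described above (feature preparation, hyperparameter tuning, and cross-validation), the sketch below uses scikit-learn with a gradient-boosting ensemble; the file name, column names, and model choice are assumptions for illustration, not the project's actual configuration.

    # Hypothetical ensemble price-prediction pipeline with cross-validated tuning.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    df = pd.read_csv("listings_cleaned.csv")        # assumed scraped-and-cleaned data
    X, y = df.drop(columns=["price"]), df["price"]

    numeric = ["area_sqft", "bedrooms", "bathrooms", "age_years"]
    categorical = ["city", "property_type", "furnishing"]
    pre = ColumnTransformer([
        ("num", StandardScaler(), numeric),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ])

    pipe = Pipeline([("pre", pre), ("model", GradientBoostingRegressor())])
    search = GridSearchCV(
        pipe,
        param_grid={"model__n_estimators": [200, 400], "model__max_depth": [3, 5]},
        cv=5,
        scoring="neg_mean_absolute_error",
    )
    search.fit(X, y)
    print(search.best_params_, -search.best_score_)  # best settings and CV MAE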


Ramya Harshitha Bolla

AI Academic Assistant for Summarization and Question Answering

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

The rapid expansion of academic literature has made efficient information extraction increasingly difficult for researchers, leading to substantial time spent manually summarizing documents and identifying key insights. This project presents an AI-powered Academic Assistant designed to streamline academic reading through multi-level summarization, contextual question answering, and source-grounded traceability. The system incorporates a robust preprocessing pipeline including text extraction, artifact removal, noise filtering, and section segmentation to prepare documents for accurate analysis. After assessing the limitations of traditional NLP and transformer-based summarization models, the project adopts a Large Language Model (LLM) approach using the Gemini API, enabling deeper semantic understanding, long-context processing, and flexible summarization. The assistant provides structured short, medium, and long summaries; contextual keyword extraction; and interactive question answering with transparent source highlighting. Limitations include handling complex visual content and occasional API constraints. Overall, this project demonstrates how modern LLMs, combined with tailored prompt engineering and structured preprocessing, can significantly enhance the academic document analysis workflow.
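
For readers unfamiliar with the workflow sketched above, a minimal example of multi-level summarization and context-grounded question answering with the Gemini API (google-generativeai package) could look like the following; the model name, prompts, and function names are illustrative assumptions rather than the project's code.

    # Hedged sketch: multi-level summarization and grounded QA via the Gemini API.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-flash")   # assumed model choice

    def summarize(section_text: str, level: str) -> str:
        lengths = {"short": "three sentences", "medium": "one paragraph",
                   "long": "three paragraphs"}
        prompt = (f"Summarize the following paper section in {lengths[level]}. "
                  "Mention the section headings you draw from.\n\n" + section_text)
        return model.generate_content(prompt).text

    def answer(question: str, context: str) -> str:
        # Constrain the answer to the retrieved context so sources can be highlighted.
        prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
        return model.generate_content(prompt).text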


Keerthi Sudha Borra

Intellinotes – AI-Powered Document Understanding Platform

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Han Wang


Abstract

This project presents Intellinotes, an AI-powered platform that transforms educational documents into multiple learning formats to address information-overload challenges in modern education. The system leverages large language models (GPT-4o-mini) to automatically generate four complementary outputs from a single document upload: educational summaries, conversational podcast scripts, hierarchical mind maps, and interactive flashcards.

The platform employs a three-tier architecture built with Next.js, FastAPI, and MongoDB, supporting multiple document formats (PDF, DOCX, PPTX, TXT, images) through a robust parsing pipeline. Comprehensive evaluation on 30 research documents demonstrates exceptional system reliability with a 100% feature success rate across 150 tests (5 features × 30 documents), and strong semantic understanding with a semantic similarity score of 0.72.

While ROUGE scores (ROUGE-1: 0.40, ROUGE-2: 0.09, ROUGE-L: 0.17) indicate moderate lexical overlap typical of abstractive summarization, the high semantic similarity demonstrates that the system effectively captures and conveys the conceptual meaning of source documents—an essential requirement for educational content. This validation of meaning preservation over word matching represents an important contribution to evaluating educational AI systems.
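
A minimal sketch of how these two metric families can be computed, assuming the rouge-score and sentence-transformers packages; the embedding model named here is an illustrative choice, not necessarily the one used in the evaluation.

    # Compute ROUGE F-measures and an embedding-based semantic similarity score.
    from rouge_score import rouge_scorer
    from sentence_transformers import SentenceTransformer, util

    def evaluate_summary(generated: str, reference: str) -> dict:
        scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
        scores = {k: v.fmeasure for k, v in scorer.score(reference, generated).items()}

        embedder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed embedding model
        emb = embedder.encode([reference, generated], convert_to_tensor=True)
        scores["semantic_similarity"] = float(util.cos_sim(emb[0], emb[1]))
        return scores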

The system processes documents in approximately 65 seconds with perfect reliability, providing students with comprehensive multi-modal learning materials that cater to diverse learning styles. This work contributes to the growing field of AI-assisted education by demonstrating a practical application of large language models for automated educational content generation supported by validated quality metrics.


Sowmya Ambati

AI-Powered Question Paper Generator

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

Designing a well-balanced exam requires instructors to review extensive course materials, determine key concepts, and craft questions that reflect appropriate difficulty and cognitive depth. This project develops an AI-powered Question Paper Generator that automates much of this process while keeping instructors in full control. The system accepts PDFs, Word documents, PPT slides, and text files, extracts their content, and builds a FAISS-based retrieval index using sentence-transformer embeddings. A large language model then generates multiple question types (MCQs, short answers, and true/false), guided by user-selected difficulty levels and Bloom’s Taxonomy distributions to ensure meaningful coverage. Each question is evaluated with a grounding score that measures how closely it aligns with the source material, improving transparency and reducing hallucination. A React frontend enables instructors to monitor progress, review questions, toggle answers, and export to PDF or Word, while an ASP.NET Core backend manages processing and metrics. The system reduces exam preparation time and enhances consistency across assessments.
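
A compact sketch of the retrieval index and grounding score described above, assuming FAISS and sentence-transformers; the chunking, embedding model, and scoring rule are illustrative rather than the project's exact design.

    # Build a FAISS index over document chunks and score how well a generated
    # question is grounded in its nearest source chunks (cosine similarity).
    import faiss
    import numpy as np
    from sentence_transformers import SentenceTransformer

    embedder = SentenceTransformer("all-MiniLM-L6-v2")       # assumed embedding model

    def build_index(chunks):
        vecs = embedder.encode(chunks, normalize_embeddings=True).astype("float32")
        index = faiss.IndexFlatIP(vecs.shape[1])             # inner product = cosine on unit vectors
        index.add(vecs)
        return index

    def grounding_score(question, index, k=3):
        q = embedder.encode([question], normalize_embeddings=True).astype("float32")
        scores, _ = index.search(q, k)
        return float(scores[0].mean())                       # closer to 1.0 means better grounded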


Past Defense Notices


Faris El-Katri

Source Separation using Sparse Bayesian Learning

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Patrick McCormick, Chair
Shannon Blunt
James Stiles


Abstract

Wireless communication in recent decades has allowed for a substantial increase in both the speed and volume of information that may be transmitted over large distances. However, given the expanding societal needs coupled with a finite available spectrum, the question arises of how to increase the efficiency by which information may be transmitted. One natural answer to this question lies in spectrum sharing, that is, in allowing multiple noncooperative agents to inhabit the same spectrum bands. In order to achieve this, we must be able to reliably separate the desired signals from those of other agents in the background. However, since our agents are noncooperative, we must develop a model-agnostic approach to tackling this problem. For this work, we will consider cohabitation between radar signals and communication signals, with the former being the desired signal and the latter being the noncooperative agent. In order to approach such problems involving highly underdetermined linear systems, we propose utilizing Sparse Bayesian Learning and present our results on selected problems.
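
As a toy, real-valued illustration of sparse recovery in an underdetermined linear system (not the thesis implementation, which targets radar and communication waveforms), scikit-learn's ARD regression performs a form of sparse Bayesian learning:

    # Recover a sparse x from y = Ax with far fewer measurements than unknowns.
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(0)
    m, n, k = 40, 200, 5                       # 40 measurements, 200 unknowns, 5 active
    A = rng.standard_normal((m, n))
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true + 0.01 * rng.standard_normal(m)

    sbl = ARDRegression(fit_intercept=False)   # sparse Bayesian learning via ARD priors
    sbl.fit(A, y)
    support = np.flatnonzero(np.abs(sbl.coef_) > 1e-3)
    print("estimated support:", support)
    print("true support:     ", np.flatnonzero(x_true))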


Koyel Pramanick

Detect Evidence of Compiler Triggered Security Measures in Binary Code

When & Where:


Eaton Hall, Room 2001B

Degree Type:

PhD Dissertation Defense

Committee Members:

Prasad Kulkarni, Chair
Drew Davidson
Fengjun Li
Bo Luo
John Symons

Abstract

The primary goal of this thesis is to develop and explore techniques to identify security measures added by compilers in software binaries. These measures, added automatically during the build process, include runtime security checks like stack canaries, AddressSanitizer (ASan), and Control Flow Integrity (CFI), which help protect against memory errors, buffer overflows, and control flow attacks. This work also investigates how unresolved compiler warnings, especially those related to security, can be identified in binaries when the source code is unavailable. By studying the patterns and markers left by these compiler features, this thesis provides methods to analyze and verify the security provisions embedded in software binaries. These efforts aim to bridge the gap between compile-time diagnostics and binary-level analysis, offering a way to better understand the security protections applied during software compilation. Ultimately, this work seeks to make software more transparent and give users the tools to independently assess the security measures present in compiled software, fostering greater trust and accountability in software systems.
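
As a simple example of the kind of compiler-added evidence involved, the presence of stack-protector support can be suggested by canary-related symbols in a binary; the sketch below uses pyelftools and is only an indicator check, not the detection techniques developed in the dissertation.

    # Look for the __stack_chk_fail symbol that compilers reference when
    # stack-smashing protection (stack canaries) is enabled.
    from elftools.elf.elffile import ELFFile

    def has_stack_protector(path: str) -> bool:
        with open(path, "rb") as f:
            elf = ELFFile(f)
            for name in (".symtab", ".dynsym"):
                section = elf.get_section_by_name(name)
                if section is None:
                    continue
                for sym in section.iter_symbols():
                    if "__stack_chk_fail" in sym.name:
                        return True
        return False

    print(has_stack_protector("/bin/ls"))   # example binary; result depends on the build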


Srinitha Kale

Automating Symbol Recognition in Spot It: Advancing AI-Powered Detection

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Esam El-Araby
Prasad Kulkarni


Abstract

The "Spot It!" game, featuring 55 cards each with 8 unique symbols, presents a complex challenge of identifying a single matching symbol between any two cards. Addressing this challenge, machine learning has been employed to automate symbol recognition, enhancing gameplay and extending applications into areas like pattern recognition and visual search. Due to the scarcity of available datasets, a comprehensive collection of 57 distinct Spot It symbols was created, with each class consisting of 1,800 augmented images. These images were manipulated through techniques such as scaling, rotation, and resizing to represent various visual scenarios. Then developed a convolutional neural network (CNN) with five convolutional layers, batch normalization, and dropout layers, and employed the Adam optimizer to train model to accurately recognize these symbols. The robust dataset included over 102,600 images, each subject to extensive augmentation to improve the model's ability to generalize across different orientation and scaling conditions. 

The model was evaluated using 55 scanned "Spot It!" cards, where symbols were extracted and preprocessed for prediction. It achieved high accuracy in symbol identification, demonstrating significant resilience to common challenges such as rotations and scaling. This project illustrates the effective integration of data augmentation, deep learning, and computer vision techniques in tackling complex pattern recognition tasks, proving that artificial intelligence can significantly enhance traditional gaming experiences and create new opportunities in various fields. This project delves into the design, implementation, and testing of the CNN, providing a detailed analysis of its performance and highlighting its potential as a transformative tool in image recognition and categorization.


Sudha Chandrika Yadlapalli

BERT-Driven Sentiment Analysis: Automated Course Feedback Classification and Ratings

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Automating the analysis of unstructured textual data, such as student course feedback, is crucial for gaining actionable insights. This project focuses on developing a sentiment analysis system leveraging the DeBERTa-v3-base model, a variant of BERT (Bidirectional Encoder Representations from Transformers), to classify feedback sentiments and generate corresponding ratings on a 1-to-5 scale.

A dataset of 100,000+ student reviews was preprocessed, and the model was fine-tuned on it to handle class imbalances and capture contextual nuances. Training was conducted on high-performance A100 GPUs, which enhanced computational efficiency and significantly reduced training times. The trained BERT sentiment model demonstrated superior performance compared to traditional machine learning models, achieving ~82% accuracy in sentiment classification.
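
A minimal fine-tuning setup for such a model with the Hugging Face transformers library might look like the following; the column name, hyperparameters, and five-class (1-to-5 rating) label scheme are assumptions, and dataset loading is omitted.

    # Fine-tune DeBERTa-v3-base for 5-way sentiment/rating classification.
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    model_name = "microsoft/deberta-v3-base"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=5)

    def tokenize(batch):
        # Map this over a datasets.Dataset of reviews before training.
        return tokenizer(batch["review"], truncation=True, max_length=256)

    args = TrainingArguments(output_dir="deberta-feedback",
                             per_device_train_batch_size=32,
                             num_train_epochs=3,
                             fp16=True)   # mixed precision on the A100s mentioned above
    # trainer = Trainer(model=model, args=args, train_dataset=..., eval_dataset=...)
    # trainer.train()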

The model was seamlessly integrated into a functional web application, providing a streamlined approach to evaluate and visualize course reviews dynamically. Key features include a course ratings dashboard, allowing students to view aggregated ratings for each course, and a review submission functionality where new feedback is analyzed for sentiment in real time. For the department, an admin page provides secure access to detailed analytics, such as the distribution of positive and negative reviews, visualized trends, and access to individual course reviews with their corresponding sentiment scores.

This project includes a comprehensive pipeline, starting from data preprocessing and model training to deploying an end-to-end application. Traditional machine learning models, such as Logistic Regression and Decision Tree, were initially tested but yielded suboptimal results. The adoption of BERT, trained on a large dataset of 100k reviews, significantly improved performance, showcasing the benefits of advanced transformer-based models for sentiment analysis tasks.


Rizwan Khan

Fatigue Crack Segmentation of Steel Bridges Using Deep Learning Models: A Comparative Study

When & Where:


Learned Hall, Room 3131

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Hongyang Sun



Abstract

Structural health monitoring (SHM) is crucial for maintaining the safety and durability of infrastructure. To address the limitations of traditional inspection methods, this study leverages cutting-edge deep learning-based segmentation models for autonomous crack identification. Specifically, we utilized the recently launched YOLOv11 model, alongside the established DeepLabv3+ model for crack segmentation. Mask R-CNN, a widely recognized model in crack segmentation studies, is used as the baseline approach for comparison. Our approach integrates the CREC cropping strategy to optimize dataset preparation and employs post-processing techniques, such as dilation and erosion, to refine segmentation results. Experimental results demonstrate that our method—combining state-of-the-art models, innovative data preparation strategies, and targeted post-processing—achieves superior mean Intersection-over-Union (mIoU) performance compared to the baseline, showcasing its potential for precise and efficient crack detection in SHM systems.
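
The morphological post-processing and mIoU metric mentioned above can be illustrated with OpenCV and NumPy; the kernel size and the two-class (background/crack) layout are assumptions.

    # Refine a binary crack mask and compute mean Intersection-over-Union.
    import cv2
    import numpy as np

    def refine(mask, ksize=3):
        # mask: uint8 binary mask (0 = background, 1 = crack)
        kernel = np.ones((ksize, ksize), np.uint8)
        mask = cv2.dilate(mask, kernel, iterations=1)   # close small gaps along cracks
        return cv2.erode(mask, kernel, iterations=1)    # restore approximate crack width

    def mean_iou(pred, gt, num_classes=2):
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, gt == c).sum()
            union = np.logical_or(pred == c, gt == c).sum()
            if union > 0:
                ious.append(inter / union)
        return float(np.mean(ious))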


Zhaohui Wang

Enhancing Security and Privacy of IoT Systems: Uncovering and Resolving Cross-App Threats

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Degree Type:

PhD Comprehensive Defense

Committee Members:

Fengjun Li, Chair
Alex Bardas
Drew Davidson
Bo Luo
Haiyang Chao

Abstract

The rapid growth of Internet of Things (IoT) technology has brought unprecedented convenience to our daily lives, enabling users to customize automation rules and develop IoT apps to meet their specific needs. However, as IoT devices interact with multiple apps across various platforms, users are exposed to complex security and privacy risks. Even interactions among seemingly harmless apps can introduce unforeseen security and privacy threats.

In this work, we introduce two innovative approaches to uncover and address these concealed threats in IoT environments. The first approach investigates hidden cross-app privacy leakage risks in IoT apps. These risks arise from cross-app chains that are formed among multiple seemingly benign IoT apps. Our analysis reveals that interactions between apps can expose sensitive information such as user identity, location, tracking data, and activity patterns. We quantify these privacy leaks by assigning probability scores to evaluate the risks based on inferences. Additionally, we provide a fine-grained categorization of privacy threats to generate detailed alerts, enabling users to better understand and address specific privacy risks. To systematically detect cross-app interference threats, we propose to apply principles of logical fallacies to formalize conflicts in rule interactions. We identify and categorize cross-app interference by examining relations between events in IoT apps. We define new risk metrics for evaluating the severity of these interferences and use optimization techniques to resolve interference threats efficiently. This approach ensures comprehensive coverage of cross-app interference, offering a systematic solution compared to the ad hoc methods used in previous research.
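
As a highly simplified illustration of one interference pattern that such a formalization covers, consider two automation rules that act on the same device with contradictory actions for the same triggering event; the rule structure below is hypothetical and far simpler than the event relations analyzed in this work.

    # Detect one basic cross-app conflict: same trigger, same device, opposite actions.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        trigger: str      # e.g., "door_opened"
        device: str       # e.g., "camera"
        action: str       # e.g., "on" or "off"

    def conflicting(r1: Rule, r2: Rule) -> bool:
        return (r1.trigger == r2.trigger and r1.device == r2.device
                and r1.action != r2.action)

    rules = [Rule("door_opened", "camera", "off"), Rule("door_opened", "camera", "on")]
    print(conflicting(rules[0], rules[1]))   # True: contradictory actions on one event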

To enhance forensic capabilities within IoT, we integrate blockchain technology to create a secure, immutable framework for digital forensics. This framework enables the identification, tracing, storage, and analysis of forensic information to detect anomalous behavior. Furthermore, we developed a large-scale, manually verified, comprehensive dataset of real-world IoT apps. This clean and diverse benchmark dataset supports the development and validation of IoT security and privacy solutions. Each of these approaches has been evaluated using our dataset of real-world apps, collectively offering valuable insights and tools for enhancing IoT security and privacy against cross-app threats.


Manu Chaudhary

Utilizing Quantum Computing for Solving Multidimensional Partial Differential Equations

When & Where:


Eaton Hall, Room 2001B

Degree Type:

PhD Comprehensive Defense

Committee Members:

Esam El-Araby, Chair
Perry Alexander
Tamzidul Hoque
Prasad Kulkarni
Tyrone Duncan

Abstract

Quantum computing has the potential to revolutionize computational problem-solving by leveraging the quantum mechanical phenomena of superposition and entanglement, which allow for processing large amounts of information simultaneously. This capability is significant in the numerical solution of complex and/or multidimensional partial differential equations (PDEs), which are fundamental to modeling various physical phenomena. Many quantum techniques are currently available for solving PDEs, most of which are based on variational quantum circuits. However, the existing quantum PDE solvers, particularly those based on variational quantum eigensolver (VQE) techniques, suffer from several limitations. These include low accuracy, high execution times, and low scalability on quantum simulators as well as on noisy intermediate-scale quantum (NISQ) devices, especially for multidimensional PDEs.

In this work, we propose an efficient and scalable algorithm for solving multidimensional PDEs. We present two variants of our algorithm: the first leverages finite-difference method (FDM), classical-to-quantum (C2Q) encoding, and numerical instantiation, while the second employs FDM, C2Q, and column-by-column decomposition (CCD). Both variants are designed to enhance accuracy and scalability while reducing execution times. We have validated and evaluated our algorithm using the multidimensional Poisson equation as a case study. Our results demonstrate higher accuracy, higher scalability, and faster execution times compared to VQE-based solvers on noise-free and noisy quantum simulators from IBM. Additionally, we validated our approach on hardware emulators and actual quantum hardware, employing noise mitigation techniques. We will also focus on extending these techniques to PDEs relevant to computational fluid dynamics and financial modeling, further bridging the gap between theoretical quantum algorithms and practical applications.
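
For context, the classical finite-difference discretization that such a solver encodes can be written out directly for the 1-D Poisson equation; the sketch below is a standard reference computation in NumPy, not the proposed quantum algorithm.

    # Discretize -u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 and solve classically.
    import numpy as np

    n = 8                                    # interior grid points (illustrative)
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    f = np.sin(np.pi * x)                    # example right-hand side

    A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f)                # classical solve for comparison
    exact = np.sin(np.pi * x) / np.pi**2     # analytic solution of this test problem
    print(np.max(np.abs(u - exact)))         # discretization error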


Hao Xuan

A Unified Algorithmic Framework for Biological Sequence Alignment

When & Where:


Nichols Hall, Room 250 (Gemini Room)

Degree Type:

PhD Comprehensive Defense

Committee Members:

Cuncong Zhong, Chair
Fengjun Li
Suzanne Shontz
Hongyang Sun
Liang Xu

Abstract

Sequence alignment is pivotal in both homology searches and the mapping of reads from next-generation sequencing (NGS) and third-generation sequencing (TGS) technologies. Currently, the majority of sequence alignment algorithms utilize the “seed-and-extend” paradigm, designed to filter out unrelated or nonhomologous sequences when no highly similar subregions are detected. A well-known implementation of this paradigm is BLAST, one of the most widely used multipurpose aligners. Over time, this paradigm has been optimized in various ways to suit different alignment tasks. However, while these specialized aligners often deliver high performance and efficiency, they are typically restricted to one or a few alignment applications. To the best of our knowledge, no existing aligner can perform all alignment tasks while maintaining superior performance and efficiency.

In this work, we introduce a unified sequence alignment framework to address this limitation. Our alignment framework is built on the seed-and-extend paradigm but incorporates novel designs in its seeding and indexing components to maximize both flexibility and efficiency. The resulting software, the Versatile Alignment Toolkit (VAT), allows users to switch seamlessly between nearly all major alignment tasks through command-line parameter configuration. VAT was rigorously benchmarked against leading aligners for DNA and protein homolog searches, NGS and TGS read mapping, and whole-genome alignment. The results demonstrated VAT’s top-tier performance across all benchmarks, underscoring the feasibility of using a unified algorithmic framework to handle diverse alignment tasks. VAT can simplify and standardize bioinformatic analysis workflows that involve multiple alignment tasks.
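
To make the seed-and-extend paradigm concrete, a toy aligner with exact k-mer seeds and ungapped x-drop extension is sketched below; VAT's actual seeding and indexing designs are far more elaborate.

    # Exact k-mer seeding followed by simple rightward, ungapped x-drop extension.
    def seeds(query, ref, k=11):
        index = {}
        for i in range(len(ref) - k + 1):
            index.setdefault(ref[i:i + k], []).append(i)
        for j in range(len(query) - k + 1):
            for i in index.get(query[j:j + k], []):
                yield j, i                                 # (query pos, ref pos)

    def extend(query, ref, j, i, k=11, x_drop=10):
        score = best = k                                   # +1 match, -1 mismatch
        qi, ri = j + k, i + k
        while qi < len(query) and ri < len(ref) and best - score < x_drop:
            score += 1 if query[qi] == ref[ri] else -1
            best = max(best, score)
            qi, ri = qi + 1, ri + 1
        return best

    ref = "ACGTACGTTTGACCAGGTTACCGTACGT"
    query = "TTGACCAGGTTACC"
    print(max(extend(query, ref, j, i) for j, i in seeds(query, ref)))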


Venkata Sai Krishna Chaitanya Addepalli

A Comprehensive Approach to Facial Emotion Recognition: Integrating Established Techniques with a Tailored Model

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Facial emotion recognition has become a pivotal application of machine learning, enabling advancements in human-computer interaction, behavioral analysis, and mental health monitoring. Despite its potential, challenges such as data imbalance, variation in expressions, and noisy datasets often hinder accurate prediction.

This project presents a novel approach to facial emotion recognition by integrating established techniques like data augmentation and regularization with a tailored convolutional neural network (CNN) architecture. Using the FER2013 dataset, the study explores the impact of incremental architectural improvements, optimized hyperparameters, and dropout layers to enhance model performance.
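
The augmentation and regularization ingredients mentioned above can be sketched with Keras preprocessing layers; the specific transform ranges are assumptions rather than the tuned values of this study.

    # On-the-fly augmentation applied before the convolutional stack;
    # dropout after the dense layers provides the regularization.
    from tensorflow.keras import layers, models

    augment = models.Sequential([
        layers.RandomFlip("horizontal"),
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.1),
    ])
    # Typical placement inside a functional model:
    #   x = augment(inputs); x = conv_stack(x); x = layers.Dropout(0.5)(x)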

The proposed model effectively addresses issues related to data imbalance and overfitting while achieving enhanced accuracy and precision in emotion classification. The study underscores the importance of feature extraction through convolutional layers and optimized fully connected networks for efficient emotion recognition. The results demonstrate improvements in generalization, setting a foundation for future real-time applications in diverse fields.


Tejarsha Arigila

Benchmarking Aggregation Free Federated Learning using Data Condensation: Comparison with Federated Averaging

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

Fengjun Li, Chair
Bo Luo
Sumaiya Shomaji


Abstract

This project investigates the performance of Federated Learning Aggregation-Free (FedAF) compared to traditional federated learning methods under non-independent and identically distributed (non-IID) data conditions, characterized by Dirichlet distribution parameters (alpha = 0.02, 0.05, 0.1). Utilizing the MNIST and CIFAR-10 datasets, the study benchmarks FedAF against Federated Averaging (FedAVG) in terms of accuracy, convergence speed, communication efficiency, and robustness to label and feature skews.  

Traditional federated learning approaches like FedAVG aggregate locally trained models at a central server to form a global model. However, these methods often encounter challenges such as client drift in heterogeneous data environments, which can adversely affect model accuracy and convergence rates. FedAF introduces an innovative aggregation-free strategy wherein clients collaboratively generate a compact set of condensed synthetic data. This data, augmented by soft labels from the clients, is transmitted to the server, which then uses it to train the global model. This approach effectively reduces client drift and enhances resilience to data heterogeneity. Additionally, by compressing the representation of real data into condensed synthetic data, FedAF improves privacy by minimizing the transfer of raw data.  
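
Two of the experimental ingredients described above, the Dirichlet label partition that controls heterogeneity through alpha and plain FedAVG weight averaging, can be sketched as follows; FedAF's data-condensation step is not reproduced here.

    # Dirichlet non-IID split of labels across clients, plus FedAVG aggregation.
    import numpy as np

    def dirichlet_partition(labels, num_clients, alpha, seed=0):
        rng = np.random.default_rng(seed)
        client_idx = [[] for _ in range(num_clients)]
        for c in np.unique(labels):
            idx = rng.permutation(np.flatnonzero(labels == c))
            shares = rng.dirichlet(alpha * np.ones(num_clients))
            cuts = (np.cumsum(shares)[:-1] * len(idx)).astype(int)
            for cid, part in enumerate(np.split(idx, cuts)):
                client_idx[cid].extend(part.tolist())
        return client_idx         # smaller alpha => more skewed label distributions

    def fedavg(client_weights, client_sizes):
        total = sum(client_sizes)
        return [sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
                for i in range(len(client_weights[0]))]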

The experimental results indicate that while FedAF converges faster, it struggles to stabilize in highly heterogeneous environments because the condensed synthetic data has limited capacity to represent the underlying real data.