I2S Master's/Doctoral Theses


All students and faculty are welcome to attend the final defense of I2S graduate students completing their M.S. or Ph.D. degrees. Defense notices for M.S./Ph.D. presentations for this year and several previous years are listed below in reverse chronological order.

Students who are nearing the completion of their M.S./Ph.D. research should schedule their final defenses through the EECS graduate office at least THREE WEEKS PRIOR to their presentation date so that there is time to complete the degree requirements check and post the presentation announcement online.

Upcoming Defense Notices

Sai Karthik Maddirala

Real-Estate Price Analysis and Prediction Using Ensemble Learning

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Morteza Hashemi
Prasad Kulkarni


Abstract

Accurate real-estate price estimation is crucial for buyers, sellers, investors, lenders, and policymakers, yet traditional valuation practices often rely on subjective judgment, inconsistent expertise, and incomplete market information. With the increasing availability of digital property listings, large volumes of structured real-estate data can now be leveraged to build objective, data-driven valuation systems. This project develops a comprehensive analytical framework for predicting prices across different property types using real-world listing data collected from 99acres.com across major Indian cities. The workflow includes automated web scraping, extensive data cleaning, normalization of heterogeneous property attributes, and exploratory data analysis to identify important pricing patterns and structural trends within the dataset. A multi-stage learning pipeline is designed—consisting of feature preparation, hyperparameter tuning, cross-validation, and performance evaluation—to ensure that the final predictive system is both reliable and generalizable. In addition to the core prediction engine, the project proposes a future extension using Retrieval-Augmented Generation (RAG) with Large Language Models (LLMs) to provide transparent, context-aware explanations for each valuation. Overall, this work establishes the foundation for a scalable, interpretable, and data-centric real-estate valuation platform capable of supporting informed decision-making in diverse market contexts.
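
As a rough illustration of the multi-stage pipeline described above, the sketch below tunes a gradient-boosted ensemble with cross-validation; the file name, feature columns, and model choice are assumptions for illustration, not details from the project.

    # Minimal sketch of an ensemble price-prediction pipeline with
    # cross-validated hyperparameter tuning. All names are hypothetical.
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    df = pd.read_csv("listings.csv")          # hypothetical scraped-listings file
    X, y = df.drop(columns=["price"]), df["price"]

    pre = ColumnTransformer([
        ("num", StandardScaler(), ["area_sqft", "bedrooms", "bathrooms"]),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["city", "property_type"]),
    ])
    pipe = Pipeline([("pre", pre), ("model", GradientBoostingRegressor(random_state=0))])

    # Hyperparameter tuning with 5-fold cross-validation.
    search = GridSearchCV(
        pipe,
        {"model__n_estimators": [200, 400], "model__max_depth": [2, 3]},
        cv=5,
        scoring="neg_mean_absolute_error",
    )
    search.fit(X, y)
    print(search.best_params_, -search.best_score_)   # mean absolute error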


Ramya Harshitha Bolla

AI Academic Assistant for Summarization and Question Answering

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

The rapid expansion of academic literature has made efficient information extraction increasingly difficult for researchers, leading to substantial time spent manually summarizing documents and identifying key insights. This project presents an AI-powered Academic Assistant designed to streamline academic reading through multi-level summarization, contextual question answering, and source-grounded traceability. The system incorporates a robust preprocessing pipeline including text extraction, artifact removal, noise filtering, and section segmentation to prepare documents for accurate analysis. After assessing the limitations of traditional NLP and transformer-based summarization models, the project adopts a Large Language Model (LLM) approach using the Gemini API, enabling deeper semantic understanding, long-context processing, and flexible summarization. The assistant provides structured short, medium, and long summaries; contextual keyword extraction; and interactive question answering with transparent source highlighting. Limitations include handling complex visual content and occasional API constraints. Overall, this project demonstrates how modern LLMs, combined with tailored prompt engineering and structured preprocessing, can significantly enhance the academic document analysis workflow.
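
As a rough sketch of the LLM summarization step, the following uses the google-generativeai Python client to request short, medium, and long summaries; the model name, prompt wording, and input file are illustrative assumptions, not the project's actual configuration.

    # Minimal sketch: multi-level summarization through the Gemini API.
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")            # placeholder credential
    model = genai.GenerativeModel("gemini-1.5-flash")  # assumed model name

    def summarize(text: str, level: str) -> str:
        """Ask the model for a summary at the requested length."""
        prompt = (
            f"Summarize the following academic text at {level} length. "
            "Preserve key claims and note the section each claim comes from.\n\n"
            + text
        )
        return model.generate_content(prompt).text

    section = open("paper.txt").read()                 # hypothetical extracted text
    for level in ("short", "medium", "long"):
        print(level.upper(), summarize(section, level), sep="\n")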


Keerthi Sudha Borra

Intellinotes: An AI-Powered Document Understanding Platform

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Han Wang


Abstract

This project presents Intellinotes, an AI-powered platform that transforms educational documents into multiple learning formats to address information-overload challenges in modern education. The system leverages large language models (GPT-4o-mini) to automatically generate four complementary outputs from a single document upload: educational summaries, conversational podcast scripts, hierarchical mind maps, and interactive flashcards.

The platform employs a three-tier architecture built with Next.js, FastAPI, and MongoDB, supporting multiple document formats (PDF, DOCX, PPTX, TXT, images) through a robust parsing pipeline. Comprehensive evaluation on 30 research documents demonstrates exceptional system reliability with a 100% feature success rate across 150 tests (5 features × 30 documents), and strong semantic understanding with a semantic similarity score of 0.72.

While ROUGE scores (ROUGE-1: 0.40, ROUGE-2: 0.09, ROUGE-L: 0.17) indicate moderate lexical overlap typical of abstractive summarization, the high semantic similarity demonstrates that the system effectively captures and conveys the conceptual meaning of source documents—an essential requirement for educational content. This validation of meaning preservation over word matching represents an important contribution to evaluating educational AI systems.
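
The contrast between lexical and semantic metrics can be reproduced with off-the-shelf tools; the sketch below uses the rouge-score and sentence-transformers packages as assumed stand-ins for the project's evaluation code.

    # Minimal sketch: ROUGE (n-gram overlap) versus embedding similarity.
    from rouge_score import rouge_scorer
    from sentence_transformers import SentenceTransformer, util

    reference = "Photosynthesis converts light energy into chemical energy in plants."
    generated = "Plants turn sunlight into stored chemical energy via photosynthesis."

    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    print(scorer.score(reference, generated))      # modest lexical overlap

    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode([reference, generated], convert_to_tensor=True)
    print(float(util.cos_sim(emb[0], emb[1])))     # high semantic similarity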

The system processes documents in approximately 65 seconds with perfect reliability, providing students with comprehensive multi-modal learning materials that cater to diverse learning styles. This work contributes to the growing field of AI-assisted education by demonstrating a practical application of large language models for automated educational content generation supported by validated quality metrics.


Sowmya Ambati

AI-Powered Question Paper Generator

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Prasad Kulkarni
Dongjie Wang


Abstract

Designing a well-balanced exam requires instructors to review extensive course materials, determine key concepts, and design questions that reflect appropriate difficulty and cognitive depth. This project develops an AI-powered Question Paper Generator that automates much of this process while keeping instructors in full control. The system accepts PDFs, Word documents, PPT slides, and text files, extracts their content, and builds a FAISS-based retrieval index using sentence-transformer embeddings. A large language model then generates multiple question types—MCQs, short answers, and true/false—guided by user-selected difficulty levels and Bloom’s Taxonomy distributions to ensure meaningful coverage. Each question is evaluated with a grounding score that measures how closely it aligns with the source material, improving transparency and reducing hallucination. A React frontend enables instructors to monitor progress, review questions, toggle answers, and export to PDF or Word, while an ASP.NET Core backend manages processing and metrics. The system reduces exam preparation time and enhances consistency across assessments.
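
As an illustration of the retrieval and grounding steps named above, the sketch below builds a small FAISS index over course-material chunks and scores a generated question against its best-matching source; the embedding model and the exact scoring scheme are assumptions, not the project's design.

    # Minimal sketch: FAISS retrieval index plus a simple grounding score.
    import faiss
    from sentence_transformers import SentenceTransformer

    chunks = [
        "A binary search tree keeps keys ordered, giving O(log n) lookup.",
        "Hash tables trade ordering for expected O(1) insertion and lookup.",
    ]
    model = SentenceTransformer("all-MiniLM-L6-v2")
    vecs = model.encode(chunks, normalize_embeddings=True).astype("float32")

    index = faiss.IndexFlatIP(vecs.shape[1])   # inner product = cosine (normalized)
    index.add(vecs)

    question = "What lookup complexity does a binary search tree provide?"
    q = model.encode([question], normalize_embeddings=True).astype("float32")
    scores, ids = index.search(q, 1)

    # Grounding score: similarity between the question and its source chunk.
    print(f"grounding={scores[0][0]:.2f}; source: {chunks[ids[0][0]]}")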


George Steven Muvva

Automated Fake Content Detection Using TF-IDF-Based Machine Learning and LSTM-Driven Deep Learning Models

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

The rapid spread of misinformation across online platforms has made automated fake news detection essential. This project develops and compares machine learning (SVM, Decision Tree) and deep learning (LSTM) models to classify news headlines from the GossipCop and PolitiFact datasets as real or fake. After extensive preprocessing—including text cleaning, lemmatization, TF-IDF vectorization, and sequence tokenization—the models are trained and evaluated using standard performance metrics. Results show that SVM provides a strong baseline, but the LSTM model achieves higher accuracy and F1-scores by capturing deeper semantic and contextual patterns in the text. The study highlights the challenges of domain variation and subtle linguistic cues, while demonstrating that context-aware deep learning methods offer superior capability for automated fake content detection.
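
For reference, the TF-IDF plus SVM baseline described above can be sketched in a few lines of scikit-learn; the dataset file and column names here are placeholders, not the GossipCop/PolitiFact files themselves.

    # Minimal sketch of the TF-IDF + SVM baseline for headline classification.
    import pandas as pd
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.svm import LinearSVC

    df = pd.read_csv("headlines.csv")          # hypothetical: columns text, label
    X_tr, X_te, y_tr, y_te = train_test_split(
        df["text"], df["label"], test_size=0.2, stratify=df["label"], random_state=0
    )

    baseline = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
        ("svm", LinearSVC()),
    ])
    baseline.fit(X_tr, y_tr)
    print(classification_report(y_te, baseline.predict(X_te)))  # accuracy, F1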


Babak Badnava

Joint Communication and Computation for Emerging Applications in Next-Generation Wireless Networks

When & Where:


Nichols Hall, Room 246 (Executive Conference Room)

Degree Type:

PhD Dissertation Defense

Committee Members:

Morteza Hashemi, Chair
Victor Frost
Prasad Kulkarni
Taejoon Kim
Shawn Keshmiri

Abstract

Emerging applications in next-generation wireless networks, such as augmented and virtual reality (AR/VR) and autonomous vehicles, demand significant computational and communication resources at the network edge. This PhD research focuses on developing joint communication–computation solutions while incorporating various network-, application-, and user-imposed constraints. In the first thrust, we examine the problem of energy-constrained computation offloading to edge servers in a multi-user, multi-channel wireless network. To develop a decentralized offloading policy for each user, we model the problem as a partially observable Markov decision process (POMDP). Leveraging bandit learning methods, we introduce a decentralized task offloading solution in which edge users offload their computation tasks to nearby edge servers over selected communication channels. 
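
To make the bandit-learning idea concrete, the toy sketch below has a single user run UCB1 over (server, channel) arms with a simulated reward; it illustrates the general technique only and is not the dissertation's POMDP model.

    # Toy UCB1 bandit: one user learns which (server, channel) pair to offload to.
    import math
    import random

    arms = [(s, c) for s in range(2) for c in range(3)]   # (server, channel)
    counts = [0] * len(arms)
    values = [0.0] * len(arms)

    def reward(arm):
        # Stand-in for the negative latency/energy cost of an offloading round.
        base = 0.6 if arm[0] == 0 else 0.4
        return random.gauss(base - 0.05 * arm[1], 0.1)

    for t in range(1, 1001):
        if 0 in counts:
            a = counts.index(0)               # play every arm once first
        else:
            a = max(range(len(arms)),
                    key=lambda i: values[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = reward(arms[a])
        counts[a] += 1
        values[a] += (r - values[a]) / counts[a]   # incremental mean update

    print("best arm:", arms[max(range(len(arms)), key=lambda i: values[i])])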

The second thrust focuses on user-driven requirements for resource-intensive applications, specifically the Quality of Experience (QoE) in 2D and 3D video streaming. Given the unique characteristics of millimeter-wave (mmWave) networks, we develop a beam alignment and buffer-predictive multi-user scheduling algorithm for 2D video streaming applications. This algorithm balances the trade-off between beam alignment overhead and playback buffer levels for optimal resource allocation across multiple users. We then extend our investigation to develop a joint rate adaptation and computation distribution framework for 3D video streaming in mmWave-based VR systems. Numerical results using real-world mmWave traces and 3D video datasets demonstrate significant improvements in video quality, rebuffering time, and quality variations perceived by users.

Finally, we develop novel edge computing solutions for multi-layer immersive video processing systems. By exploring and exploiting the elastic nature of computation tasks in these systems, we propose a multi-agent reinforcement learning (MARL) framework that incorporates two learning-based methods: the centralized phasic policy gradient (CPPG) and the independent phasic policy gradient (IPPG). IPPG leverages shared information and model parameters to learn edge offloading policies; however, during execution, each user acts independently based only on its local state information. This decentralized execution reduces the communication and computation overhead of centralized decision-making and improves scalability. We leverage real-world 4G, 5G, and WiGig network traces, along with 3D video datasets, to investigate the performance trade-offs of CPPG and IPPG when applied to elastic task computing.


Sri Dakshayani Guntupalli

Customer Churn Prediction for Subscription-Based Businesses

When & Where:


LEEP2, Room 2420

Degree Type:

MS Project Defense

Committee Members:

David Johnson, Chair
Rachel Jarvis
Prasad Kulkarni


Abstract

Customer churn is a critical challenge for subscription-based businesses, as it directly impacts revenue, profitability, and long-term customer loyalty. Because retaining existing customers is more cost-effective than acquiring new ones, accurate churn prediction is essential for sustainable growth. This work presents a machine learning-based framework for predicting and analyzing customer churn, coupled with an interactive Streamlit web application that supports real-time decision making. Using historical customer data that includes demographic attributes, usage behavior, transaction history, and engagement patterns, the system applies extensive data preprocessing and feature engineering to construct a modeling-ready dataset. Multiple models (Logistic Regression, Random Forest, and XGBoost) are trained and evaluated using the Scikit-Learn framework. Model performance is assessed with metrics such as accuracy, precision, recall, F1-score, and ROC-AUC to identify the most effective predictor of churn. The top-performing models are serialized and deployed within a Streamlit interface that accepts individual customer inputs or batch data files to generate immediate churn predictions and summaries. Overall, this project demonstrates how machine learning can transform raw customer data into actionable business intelligence and provides a scalable approach to proactive customer retention management.
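
As a sketch of the model-comparison step, the code below trains two of the named classifiers and reports F1 and ROC-AUC; the data file and label column are placeholders, and XGBoost is omitted to keep the example dependency-free.

    # Minimal sketch: compare churn classifiers on held-out data.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score, roc_auc_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("customers.csv")         # hypothetical preprocessed dataset
    X, y = df.drop(columns=["churned"]), df["churned"]   # churned is 0/1
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    models = {
        "logreg": LogisticRegression(max_iter=1000),
        "forest": RandomForestClassifier(n_estimators=300, random_state=0),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        proba = model.predict_proba(X_te)[:, 1]
        print(name,
              "F1:", f1_score(y_te, proba > 0.5),
              "ROC-AUC:", roc_auc_score(y_te, proba))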


QiTao Weng

Anytime Computer Vision for Autonomous Driving

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Thesis Defense

Committee Members:

Heechul Yun, Chair
Drew Davidson
Shawn Keshmiri


Abstract

Latency–accuracy tradeoffs are fundamental in real-time applications of deep neural networks (DNNs) for cyber-physical systems. In autonomous driving, in particular, safety depends on both prediction quality and the end-to-end delay from sensing to actuation. We observe that (1) when latency is accounted for, the latency-optimal network configuration varies with scene context and compute availability; and (2) a single fixed-resolution model becomes suboptimal as conditions change.

We present a multi-resolution, end-to-end deep neural network for the CARLA urban driving challenge using monocular camera input. Our approach employs a convolutional neural network (CNN) that supports multiple input resolutions through per-resolution batch normalization, enabling runtime selection of an ideal input scale under a latency budget, as well as resolution retargeting, which allows multi-resolution training without access to the original training dataset.
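
The per-resolution batch-normalization idea can be sketched as a shared convolution with one BatchNorm per supported input scale; the sketch below uses PyTorch with illustrative layer sizes, not the thesis's actual architecture.

    # Sketch: shared conv weights, separate BatchNorm per input resolution.
    import torch
    import torch.nn as nn

    class MultiResBlock(nn.Module):
        def __init__(self, channels: int, resolutions=(128, 192, 256)):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, 3, padding=1)  # shared
            # One BN per resolution: activation statistics differ across scales.
            self.bns = nn.ModuleDict(
                {str(r): nn.BatchNorm2d(channels) for r in resolutions}
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            bn = self.bns[str(x.shape[-1])]   # pick BN matching input width
            return torch.relu(bn(self.conv(x)))

    block = MultiResBlock(16)
    for r in (128, 192, 256):                 # runtime-selected input scale
        print(r, block(torch.randn(1, 16, r, r)).shape)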

We implement and evaluate our multi-resolution end-to-end CNN in CARLA to explore the latency–safety frontier. Results show consistent improvements in per-route safety metrics—lane invasions, red-light infractions, and collisions—relative to fixed-resolution baselines.


Past Defense Notices

Archana Chalicheemala

A Machine Learning Study using Gene Expression Profiles to Distinguish Patients with Non-Small Cell Lung Cancer

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

Zijun Yao, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

Early diagnosis enables effective treatment of non-small cell lung cancer (NSCLC). Lung cancer cells usually have altered gene expression patterns compared to normal cells, which can be utilized to predict cancer through gene expression tests. This study analyzed gene expression values measured from a 15,227-probe microarray across 290 patients in cancer and control groups to find relations between gene expression features and lung cancer. The study explored k-means clustering, statistical tests, and deep neural networks to obtain optimal feature representations and achieved a highest accuracy of 82%. Furthermore, a bipartite graph was built using the BioGRID database and gene expression values, where probe-to-probe relationships based on gene relevance were leveraged to enhance the prediction performance.
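
As an illustration of the statistical feature-selection step, the sketch below scores each probe with an ANOVA F-test and feeds the top features to a classifier; the data here is synthetic with the dataset's dimensions, not the actual microarray.

    # Minimal sketch: per-probe statistical test, then classification.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(0)
    X = rng.normal(size=(290, 15227))          # 290 patients x 15,227 probes
    y = rng.integers(0, 2, size=290)           # cancer vs. control labels

    pipe = Pipeline([
        ("select", SelectKBest(f_classif, k=200)),   # keep most relevant probes
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    print(cross_val_score(pipe, X, y, cv=5).mean())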


Yoganand Pitta

Insightful Visualization: An Interactive Dashboard Uncovering Disease Patterns in Patient Healthcare Data

When & Where:


Eaton Hall, Room 2001B

Degree Type:

MS Project Defense

Committee Members:

Zijun Yao, Chair
Prasad Kulkarni
Hongyang Sun


Abstract

As Electronic Health Records (EHRs) become more available, there is increasing interest in discovering hidden disease patterns by leveraging cutting-edge data visualization techniques, such as graph-based knowledge representation and interactive graphical user interfaces (GUIs). In this project, we have developed a web-based interactive EHR analytics and visualization tool to provide healthcare professionals with valuable insights that can ultimately improve the quality and cost-efficiency of patient care. Specifically, we have developed two visualization panels: one for insights about individual patients and the other for relationships among diseases. For individual patients, we capture the similarity between them by linking them based on their relatedness in diagnosis. By constructing a graph representation of patients based on this similarity, we can identify patterns and trends in patient data that may not be apparent through traditional methods. For disease relationships, we provide an ontology graph for a specific diagnosis (ICD-10 code), which helps to identify the ancestors and descendants of a particular diagnosis. Through a demonstration of this dashboard, we show that this approach can provide valuable insights for better understanding patient outcomes with an informative and user-friendly web interface.
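
The patient-similarity construction can be sketched as a graph whose edges link patients with overlapping diagnosis sets; the records and threshold below are toy assumptions, not project data.

    # Minimal sketch: patient graph from Jaccard similarity of ICD-10 codes.
    from itertools import combinations
    import networkx as nx

    patients = {
        "p1": {"E11.9", "I10"},               # diagnosis codes per patient
        "p2": {"E11.9", "I10", "N18.3"},
        "p3": {"J45.909"},
    }

    G = nx.Graph()
    G.add_nodes_from(patients)
    for a, b in combinations(patients, 2):
        sim = len(patients[a] & patients[b]) / len(patients[a] | patients[b])
        if sim >= 0.5:                        # similarity threshold (assumed)
            G.add_edge(a, b, weight=sim)

    print(list(G.edges(data=True)))           # p1 and p2 linked by shared codes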


Michael Cooley

Machine Learning for Naval Discharge Review

When & Where:


Eaton Hall, Room 1

Degree Type:

MS Project Defense

Committee Members:

Prasad Kulkarni, Chair
David Johnson
Jerzy Grzymala-Busse


Abstract

This research project aims to predict the outcome of a Naval Discharge Review Board decision for an applicant based on factors in the application, using machine learning techniques. The study explores three popular machine learning algorithms: MLP, AdaBoost, and KNN, with KNN providing the best results. Model training is validated through hyperparameter optimization and k-fold cross-validation.
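
The hyperparameter search with cross-validation can be sketched for the KNN model as follows; the data file, label column, and parameter grid are placeholders, not the study's actual settings.

    # Minimal sketch: cross-validated grid search over KNN hyperparameters.
    import pandas as pd
    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KNeighborsClassifier

    df = pd.read_csv("applications.csv")      # hypothetical numeric features
    X, y = df.drop(columns=["outcome"]), df["outcome"]

    search = GridSearchCV(
        KNeighborsClassifier(),
        {"n_neighbors": [3, 5, 7, 11], "weights": ["uniform", "distance"]},
        cv=5,                                 # 5-fold cross-validation
        scoring="f1_macro",
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)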

Additionally, the study investigates the ability of ChatGPT's API to classify the data that could not be classified manually. Over 8,000 samples were classified by ChatGPT's API, and an MLP model was trained using the same hyperparameters that were found to be optimal for the 3,000-sample manually labeled dataset. The model was then tested on the manual sample. The results show that the model trained on data labeled by ChatGPT performed equivalently, suggesting that ChatGPT's API is a promising tool for labeling in this domain.


Sarah Johnson

Formal Analysis of TPM Key Certification Protocols

When & Where:


Nichols Hall, Room 246

Degree Type:

MS Thesis Defense

Committee Members:

Perry Alexander, Chair
Michael Branicky
Emily Witt


Abstract

Development and deployment of trusted systems often require definitive identification of devices. A remote entity should have confidence that a device is as it claims to be. An ideal method for fulfilling this need is the use of secure device identifiers. A secure device identifier (DevID) is defined as an identifier that is cryptographically bound to a device. A DevID must not be transferable from one device to another, as that would allow distinct devices to be identified as the same. Since the Trusted Platform Module (TPM) is a secure Root of Trust for Storage, it provides the necessary protections for storing these identifiers. Consequently, the Trusted Computing Group (TCG) recommends the use of TPM keys for DevIDs. The TCG's specification TPM 2.0 Keys for Device Identity and Attestation describes several methods for remotely proving a key to be resident in a specific device's TPM. These methods are carefully constructed protocols intended to be performed by a trusted Certificate Authority (CA) in communication with a certificate-requesting device. DevID certificates produced by an OEM's CA at device manufacturing time may be used to provide definitive evidence to a remote entity that a key belongs to a specific device, whereas DevID certificates produced by an Owner/Administrator's CA require a chain of certificates in order to verify a chain of trust to an OEM-provided root certificate. This distinction is due to the differences in the respective protocols prescribed by the TCG's specification. We aim to abstractly model these protocols and formally verify that their resulting assurances on TPM-residency do in fact hold. We choose this goal since the TCG themselves do not provide any proofs or clear justifications for how the protocols might provide these assurances. The resulting TPM-command library and execution relation modeled in Coq may easily be expanded upon to become useful in verifying a wide range of properties regarding DevIDs and TPMs.


Anna Fritz

Negotiating Remote Attestation Protocols

When & Where:


Nichols Hall, Room 246

Degree Type:

PhD Comprehensive Defense

Committee Members:

Perry Alexander, Chair
Alex Bardas
Drew Davidson
Fengjun Li
Emily Witt

Abstract

During remote attestation, a relying party prompts a target to perform some stateful measurement which can be appraised to determine trust in the target's system. In this current framework, requested measurement operations must be provisioned by a knowledgeable system user who may fail to consider situational demands which potentially impact the desired measurement. To solve this problem, we introduce negotiation: a framework that allows the target and relying party to mutually determine an attestation protocol that satisfies both the target's need to protect sensitive information and the relying party's desire for a comprehensive measurement. We designed and verified this negotiation procedure such that, for all negotiations, we can provably produce an executable protocol that satisfies the target's privacy standards. With the remainder of this work, we aim to realize and instantiate protocol orderings, ensuring negotiation produces a protocol sufficient for the relying party. All progress is toward our ultimate goal of producing a working, fully verified negotiation scheme which will be integrated into our current attestation framework for flexible, end-to-end attestations.