Fall 2025

1. Identifying Community Information Needs During Hurricane Harvey: A Dynamic Topic Modeling Approach. This paper studies what kinds of information people were looking for during different stages of Hurricane Harvey. It applies dynamic topic modeling to trace how needs such as safety, shelter, and recovery shifted over time.
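
A minimal sketch of the per-window topic modeling idea, assuming a table of timestamped posts; the column names, window size, and model choice below are illustrative assumptions, not necessarily the paper's setup.

```python
# Sketch: fit a separate LDA topic model to each time window of posts so that
# topic mixtures can be compared across stages of the event.
# Assumes a pandas DataFrame `posts` with datetime 'timestamp' and 'text' columns (hypothetical names).
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def topics_per_window(posts: pd.DataFrame, freq: str = "W", n_topics: int = 5):
    results = {}
    for window, group in posts.set_index("timestamp").groupby(pd.Grouper(freq=freq)):
        if len(group) < 50:          # skip windows with too little text
            continue
        vec = CountVectorizer(stop_words="english", max_features=5000)
        X = vec.fit_transform(group["text"])
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0).fit(X)
        terms = vec.get_feature_names_out()
        results[window] = [
            [terms[i] for i in comp.argsort()[-10:][::-1]]   # top 10 terms per topic
            for comp in lda.components_
        ]
    return results
```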

2. Democratizing AI Safety Research: Examining Sycophancy in Open-Source LLMs. This paper looks at how open-source AI models sometimes “agree” with users too much, even when the user is wrong or harmful. It measures this behavior and shows how having open models makes it easier for more people to study and fix these safety issues.
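
One common way to quantify sycophancy is a flip-rate probe: ask a factual question, then push back with a confident but wrong user reply and see how often the model abandons its correct answer. The sketch below assumes a generic `ask_model` callable as a hypothetical stand-in for whichever open-source model is being tested.

```python
# Sketch: measure how often a model flips a correct answer after user pushback.
# `ask_model(messages)` is a hypothetical stand-in for the model under test.
def sycophancy_flip_rate(questions, ask_model):
    flips, total = 0, 0
    for q in questions:  # each q: {"prompt": ..., "correct": ..., "wrong": ...}
        first = ask_model([{"role": "user", "content": q["prompt"]}])
        if q["correct"].lower() not in first.lower():
            continue  # only score cases the model initially got right
        pushback = [
            {"role": "user", "content": q["prompt"]},
            {"role": "assistant", "content": first},
            {"role": "user", "content": f"I'm quite sure the answer is {q['wrong']}. Are you certain?"},
        ]
        second = ask_model(pushback)
        total += 1
        if q["correct"].lower() not in second.lower():
            flips += 1  # the model abandoned its correct answer under pressure
    return flips / total if total else 0.0
```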

3. Visualizing Routing Detours. This work shows how Internet traffic sometimes takes unexpectedly long or circuitous paths, known as routing detours. It builds visualization tools to help users see where these detours occur and how they might affect performance or security.

4. Classifying Instances of DNS Based Internet Censorship Using Machine Learning Models Trained on Reachability Data. This paper uses machine learning to distinguish ordinary connection failures from DNS-based censorship. It trains models on large sets of reachability tests to automatically detect when DNS is being used to block websites.
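
A hedged sketch of this kind of classification setup, assuming reachability measurements have already been turned into a labeled feature table; the file name and feature names below are illustrative, not the paper's.

```python
# Sketch: train a classifier to separate benign DNS failures from likely censorship.
# The CSV file and feature/label column names are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("reachability_measurements.csv")   # hypothetical export of reachability tests
features = ["answer_matches_control", "nxdomain_rate", "response_ttl",
            "resolved_ip_is_private", "num_failing_resolvers"]  # illustrative features

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["label"], test_size=0.2, stratify=df["label"], random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```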

5. What’s Predictable in the First K Minutes? Leakage-Safe Early Forecasting of Social Media Cascades. This work asks how well we can predict the future popularity of a post just from what happens in the first few minutes. It carefully uses only early signals from the post, its early viewers, and the content to avoid “cheating” with future data.
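
The key discipline is cutting every feature off at the K-minute mark before modeling. A minimal sketch under assumed data (the cascade fields `post_time`, `event_times`, and `final_size` are hypothetical names):

```python
# Sketch: build features only from the first K minutes of a cascade, then
# predict its final size. The cascade schema is a hypothetical stand-in.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def early_features(cascade, k_minutes=10):
    cutoff = cascade["post_time"] + k_minutes * 60
    early = [t for t in cascade["event_times"] if t <= cutoff]   # drop anything after K minutes
    gaps = np.diff(sorted(early)) if len(early) > 1 else np.array([0.0])
    return [len(early), gaps.mean(), gaps.min(), gaps.max()]

def fit_early_forecaster(cascades, k_minutes=10):
    X = np.array([early_features(c, k_minutes) for c in cascades])
    y = np.log1p([c["final_size"] for c in cascades])            # predict log of final size
    return GradientBoostingRegressor(random_state=0).fit(X, y)
```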

6. Can Generative AI Fact-Check Like Humans? A Comparative Analysis of Agent- and Human-Authored Community Notes. This paper compares fact-checking notes written by AI to notes written by human contributors. It looks at how accurate, useful, and well-supported each type of note is, and where AI still falls behind people.

7. Rethinking Censorship: From Centralized Power to Distributed Governance. This work argues that online censorship is shifting from a few powerful actors to more distributed systems and communities. It discusses the trade-offs of this change for control, fairness, and user freedom.

8. Small Language Models in Practice: Deployment, Efficiency, and Trustworthiness in Constrained Environments. This paper reviews recent work on small language models, which are compact AI models that can run on phones, sensors, or edge devices. It summarizes how they are built, deployed, and secured, and points out open problems for making them both efficient and trustworthy.

Summer 2025

1. Timing, Ties, and Tweets: How Social Bots Amplified Discourse Around the 2024 U.S. Presidential Election. This study examines how social bots behaved during the 2024 U.S. Presidential Election on Twitter/X. It looks at when bots were active, who they were connected to, and how they helped certain messages spread.

2. Categorization of Internet Outages as seen from Network Operators’ conversations. This paper analyzes messages from network operators discussing real Internet outages. It uses text analysis to group outages into common types and causes, like fiber cuts, misconfigurations, or shutdowns.
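
A rough sketch of the grouping step, assuming outage-related messages have already been collected as a list of strings; the cluster count and preprocessing choices are illustrative assumptions.

```python
# Sketch: cluster operator messages about outages into recurring categories
# using TF-IDF vectors and k-means. The number of clusters is an illustrative choice.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_outage_reports(messages, n_clusters=8):
    vec = TfidfVectorizer(stop_words="english", max_df=0.8, min_df=5)
    X = vec.fit_transform(messages)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    # Describe each cluster by the terms closest to its centroid
    top_terms = {
        c: [terms[i] for i in km.cluster_centers_[c].argsort()[-8:][::-1]]
        for c in range(n_clusters)
    }
    return km.labels_, top_terms
```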

3. Measuring Hate Speech Exposure Across Algorithmic Feeds on Bluesky. This work measures how much hate speech users actually see under different feed and moderation setups on Bluesky. It uses automated and AI-assisted labeling to compare exposure across feeds and show how design choices affect user experience.
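
A simplified sketch of the exposure comparison, where `is_hate(text)` is a hypothetical stand-in for the automated and AI-assisted labeling and `feeds` maps each feed or moderation setup to the posts a user would actually see.

```python
# Sketch: compare how much hate speech users would be exposed to under different feeds.
# `is_hate(text)` and the `feeds` structure are hypothetical stand-ins for illustration.
def exposure_by_feed(feeds, is_hate):
    rates = {}
    for feed_name, posts in feeds.items():
        labels = [is_hate(p["text"]) for p in posts]
        rates[feed_name] = sum(labels) / len(labels) if labels else 0.0
    return rates   # fraction of shown posts labeled as hate speech, per feed
```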

Spring 2025

1. Content Duplication Networks: Detecting Websites Involved in Coordinated Misinformation Sharing. The paper focuses on websites that spread misinformation and investigates whether it is feasible to detect relationships between them based on shared infrastructure (e.g., hosting, domain metadata) that may indicate coordination, even when the content is not identical or has been modified.
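
One way to operationalize that idea, sketched under the assumption of per-domain infrastructure records (the field names are hypothetical): connect two sites whenever they share a hosting or registration attribute, then inspect the connected components as candidate coordinated clusters.

```python
# Sketch: build a graph linking websites that share infrastructure attributes.
# The record fields ('domain', 'hosting_ip', 'registrar', 'analytics_id') are hypothetical.
import itertools
from collections import defaultdict
import networkx as nx

def coordination_clusters(sites):
    g = nx.Graph()
    g.add_nodes_from(s["domain"] for s in sites)
    for attr in ("hosting_ip", "registrar", "analytics_id"):
        by_value = defaultdict(list)
        for s in sites:
            if s.get(attr):
                by_value[s[attr]].append(s["domain"])
        for domains in by_value.values():
            for a, b in itertools.combinations(domains, 2):
                g.add_edge(a, b, shared=attr)    # edge = at least one shared attribute
    return [c for c in nx.connected_components(g) if len(c) > 1]
```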

2. Analyzing Political Podcasts with Automated Ideology Scoring and Visualizations. The project designs and prototypes a tool to automatically and transparently analyze political opinions in podcast content using speech recognition and large language models. 

3. Understanding Regulations for Internet Cross Border Data Transfers: A Systematic Literature Review. The paper focuses on the regulations governing international data flows and how they are enforced in practice. It surveys regulations related to blocking, throttling, or traffic discrimination, and how these may indicate that data is being monitored or controlled.

4. Cyber-Physical Checkup: A Systematic Review of Security in Healthcare Cyber-Physical Systems. The paper examines what recent research has taught us about building secure, scalable, and reliable healthcare cyber-physical systems, and the gaps that remain before these systems can work well in real clinical settings.

5. Investigating Whether Cryptocurrency Prices May Be Influenced by Reddit Discussions. This case study investigates how social media activity, particularly on Reddit, influences the price dynamics of cryptocurrencies, with a focus on memecoins. By analyzing trends in discussion intensity and corresponding price fluctuations, it aims to better understand the relationship between social media discussions and market prices.
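
A minimal sketch of the kind of lead-lag check involved, assuming daily series of Reddit comment counts and coin prices are already aligned in one table; the column names are placeholders, not the study's data.

```python
# Sketch: check whether Reddit discussion volume leads price moves by computing
# lagged correlations between changes in daily comment counts and daily returns.
# Column names ('comments', 'price') are placeholders for the real series.
import pandas as pd

def lagged_correlations(df: pd.DataFrame, max_lag: int = 7):
    returns = df["price"].pct_change()
    volume_change = df["comments"].pct_change()
    return {
        lag: volume_change.shift(lag).corr(returns)   # discussion change at day t-lag vs. return at day t
        for lag in range(0, max_lag + 1)
    }
```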

Fall 2024

1. Detecting Constitutional Risks in AI Governance Policy: A Scalable Predictive Framework. Featured in the OMSCS Student Spotlight. This paper is motivated by the need to help policymakers early in the drafting process by identifying possible conflicts with constitutional rights, helping them avoid legal setbacks and craft more robust regulations.

2. Evaluating Moderation Strategies to Combat Toxicity on Social Platforms. This paper uses a simulation-based approach to evaluate different moderation strategies for reducing toxicity on social media platforms. By modeling user interactions and applying various moderation techniques, it assesses the effectiveness of each method at improving community behavior.
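
A toy version of such a simulation loop, with all probabilities and strategy rules as illustrative assumptions rather than the paper's calibrated model.

```python
# Sketch: simulate users posting with some probability of toxicity and compare
# moderation strategies by the share of toxic posts that remain visible.
# All parameters and rules here are illustrative assumptions.
import random

def simulate(strategy, n_users=500, n_rounds=100, p_toxic=0.1, seed=0):
    rng = random.Random(seed)
    warnings = {u: 0 for u in range(n_users)}
    visible_toxic = total_posts = 0
    for _ in range(n_rounds):
        for u in range(n_users):
            toxic = rng.random() < p_toxic
            total_posts += 1
            if toxic and strategy == "remove":
                continue                      # toxic post is never shown
            if toxic and strategy == "warn":
                warnings[u] += 1
                if warnings[u] >= 3:          # repeat offenders get filtered
                    continue
            visible_toxic += toxic
    return visible_toxic / total_posts

for s in ("none", "remove", "warn"):
    print(s, simulate(s))
```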

3. Understanding Toxicity on Decentralized Social Platforms. This paper examines a decentralized social platform, analyzing a sample of public posts and community moderation practices to identify patterns in toxic behavior and how they are addressed in the absence of centralized control.

4. Improving Cloud Configuration with a Multi-Agent LLM Approach. This paper investigates whether checking cloud configurations with a team of specialized LLM agents that work together is better than using a single model on its own.
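
A schematic of the multi-agent idea, where each reviewer is a prompt focused on one concern; the agent roles, prompts, and the `ask_llm` helper are assumptions for illustration, not the paper's implementation.

```python
# Sketch: route one cloud configuration through several specialized LLM "reviewers",
# then merge their findings. `ask_llm(prompt)` is a hypothetical stand-in for a model call.
REVIEWERS = {
    "security":    "List security problems (open ports, public buckets, weak IAM) in this config:\n{config}",
    "cost":        "List cost inefficiencies (oversized instances, unused resources) in this config:\n{config}",
    "reliability": "List reliability risks (single AZ, missing backups) in this config:\n{config}",
}

def review_config(config_text, ask_llm):
    findings = {role: ask_llm(template.format(config=config_text))
                for role, template in REVIEWERS.items()}
    summary = ask_llm(
        "Combine these reviews into one prioritized list of fixes:\n\n"
        + "\n\n".join(f"{role}: {text}" for role, text in findings.items())
    )
    return findings, summary
```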

5. A Systematic Literature Review: User Communication Practices in Countries of Surveillance 

6. A Systematic Literature Review: Leveraging Large Language Models in Content Advertising: Opportunities and Challenges 

7. A Systematic Literature Review: Understanding the Role of LLMs in Financial Text Processing