LLMs for Programming Languages

My research explores the use of Large Language Models (LLMs) for programming language tasks, with a particular focus on code translation, generation, and verification. Recent projects show that instruction-tuned models such as ChatGPT can act as effective Java decompilers, producing more readable source code than conventional decompilers, and develop strategies for detecting malicious or incorrect code through cross-model validation. By combining model-based methods with multi-model consensus, this work aims to improve both the quality and the security of AI-generated code.
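
To make the multi-model consensus idea concrete, here is a minimal sketch in Python. The reviewer callables, verdict strings, and quorum threshold below are illustrative assumptions rather than the actual method from the publications; in a real system each reviewer would wrap a call to a different LLM backend.

```python
"""Minimal sketch of cross-model validation for generated code.

Assumptions (illustrative, not from the papers): each "reviewer" is a
callable that returns a verdict string ("SAFE" or "UNSAFE"); in practice
each reviewer would query a separate LLM.
"""
from collections import Counter
from typing import Callable, List

Verdict = str  # "SAFE" or "UNSAFE"


def cross_model_validate(
    code: str,
    reviewers: List[Callable[[str], Verdict]],
    quorum: float = 0.75,
) -> bool:
    """Accept `code` only if at least `quorum` of the independent
    reviewer models judge it safe."""
    verdicts = Counter(reviewer(code) for reviewer in reviewers)
    total = sum(verdicts.values())
    return total > 0 and verdicts["SAFE"] / total >= quorum


def keyword_reviewer(code: str) -> Verdict:
    """Toy stand-in for an LLM reviewer: flags obviously dangerous calls."""
    banned = ("os.system", "subprocess", "eval(", "exec(")
    return "UNSAFE" if any(tok in code for tok in banned) else "SAFE"


if __name__ == "__main__":
    snippet = "import os\nos.system('rm -rf /tmp/scratch')"
    reviewers = [keyword_reviewer] * 4  # imagine four distinct models
    print(cross_model_validate(snippet, reviewers))  # -> False
```

One design choice worth noting: requiring a supermajority rather than unanimity means a single compromised or noisy model can neither unilaterally approve nor unilaterally block a snippet.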

Relevant Publications

Beyond Trusting Trust: Multi-Model Validation for Robust Code Generation

Bradley McDanel
UMBC CODEBOT ’25 Workshop
paper
slides

Designing LLM-Resistant Programming Assignments: Insights and Strategies for CS Educators

Bradley McDanel, Ed Novak
ACM Technical Symposium on Computer Science Education (SIGCSE), 2025
paper
code

ChatGPT as a Java Decompiler

Bradley McDanel, Zhanhao Liu
3rd Workshop on Generation, Evaluation & Metrics (GEM) at EMNLP 2023
paper
code