As a former principal, I often collected surveys, feedback, and comments from students, parents, and staff. The goal was always the same: to better understand the needs, concerns, and priorities of our school community. The data would yield plenty of numbers, such as attendance ratings, program satisfaction scores, and “yes” or “no” responses that were easy enough to chart and discuss.
But alongside those neat columns of percentages sat the long, messy paragraphs of written comments. These were stories, frustrations, suggestions, and moments of gratitude, and they were where the real insight lived.
However, as a busy administrator, sifting through all those comments was daunting. I would try to read them all, highlighter in hand, but I often found myself skimming, stopping when I hit a few memorable quotes, and moving on. Sometimes little was actually done with the data. Valuable patterns and connections were left undiscovered.
With the advent of AI, all this has changed. I can now conduct thematic analysis—spotting and making sense of recurring ideas in text-based data—on large volumes of qualitative data quickly and systematically, ensuring that all those valuable comments don’t go to waste.
Here’s how it works. The process I’m about to explain is adapted from researchers Naeem, Smith, and Thomas’s step-by-step framework for using ChatGPT in thematic analysis, but I’ve modified it for the realities of school leadership.
For this walkthrough, I’ll use a recent set of professional development (PD) feedback from our teachers as the example dataset. Each step will show how AI can help analyze these comments, turning them into meaningful insights for improving future PD sessions.
1. Set the Leadership Question
Before diving into AI tools, I first scan the PD feedback myself to get a feel for the tone, spot recurring concerns, and identify what I most need to learn. This step helps me clarify my leadership question—for example, am I trying to determine if the PD improved classroom practice, or if certain elements (like hands-on activities) resonated more than others?
Once that focus is clear, I then turn to AI to help surface broader patterns or tone summaries that align with my priorities.
Example AI prompt: "You are a school data analyst. Analyze this anonymized teacher feedback from our recent PD sessions. Identify initial impressions, the general tone of the comments, and any stand-out observations. My goal is to determine whether the PD improved classroom practice."
2. Sort Feedback Data into Categories
Once I understand the big picture, I break the feedback into smaller pieces so I can see exactly what teachers are saying. I start coding, which means tagging specific parts of the feedback with short labels that describe what’s being said. For example, if a teacher writes, “We didn’t have enough time to practice the strategies,” I might code it as “time for application.” In a PD context, I might end up with codes for pacing, relevance, collaboration, or follow-up support. With AI, I can get a first draft of these codes in minutes, but I still read and adjust them to reflect my context.
Example AI prompt: "From this anonymized PD feedback, generate a list of initial codes that capture the key ideas being expressed. Include the exact quote for each code so I can verify accuracy."
3. Search for Broader Themes
With dozens of codes in hand, my next move is to look for bigger patterns that tell a broader story about the PD. For example, codes like “sessions too short,” “content rushed,” and “no time for practice” could all fit under a bigger theme like “pacing and application.” AI can suggest these groupings, but I always review them to ensure they make sense for my leadership goals.
Example AI prompt: "Based on these PD feedback codes, group them into broader themes. For each theme, explain what it represents and which codes belong to it."
4. Check Fit and Relevance
Even great-sounding themes need a reality check. Here, I go back to the raw comments to be sure each theme reflects what teachers actually said. In PD feedback, a theme like “content alignment” should be supported by multiple comments from different teachers, not just one person’s opinion. This step keeps the analysis grounded in evidence and prevents overgeneralization.
Example AI prompt: "For each PD feedback theme, list at least two direct teacher quotes from the original dataset that support it. Flag any theme that has limited supporting evidence."
5. Define and Name Themes
When it’s time to share my findings, I give each theme a clear, concise name so my team can understand it immediately. For example, instead of saying “issues with pacing and timing,” I might call it “PD pacing limits application.” The goal is to make themes short enough for a slide deck but precise enough that my team knows exactly what they mean. In PD effectiveness reviews, I might also include why each theme matters for student learning or instructional practice.
Example AI prompt: "For each PD feedback theme, create a concise name (five words or fewer) and a one-sentence definition that explains its meaning and importance for instructional improvement."
6. Turn Themes into Leadership Action
Finally, I translate these themes into concrete leadership actions—changes my team can make to strengthen our PD moving forward. This is the reporting stage. For example, if “practical classroom application” is a theme, I might schedule more live modeling during PD. If “follow-up support” emerges, I might add coaching cycles or PLC discussions. I might use AI for help with these ideas but ultimately shape them with my own goals in mind.
Example AI prompt: "For each PD feedback theme, suggest three realistic leadership actions to address it. Keep them practical, affordable, and aligned with our school’s vision and teachers’ needs."
Guardrails for Data Privacy and Safety
These six steps can unlock powerful insights from PD feedback, but how we handle the data matters just as much as the insights themselves. Before running your own analysis, it’s important to set clear guardrails to protect privacy and ensure ethical use of AI.
Always begin by removing personally identifiable information such as names, ID numbers, and addresses before uploading any file. Work only with aggregated, anonymized responses, and ensure all actions comply with FERPA and your district’s data-sharing policies.
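For leaders (or the data staff supporting them) who are comfortable with a little scripting, part of that de-identification can be automated before anything is uploaded. The sketch below, in Python, is one illustrative approach, not a complete scrubber: the staff names, ID pattern, and placeholder labels are assumptions you would replace with your own roster and district conventions.

```python
import re

# Hypothetical names to redact; in practice, build this from your staff roster.
STAFF_NAMES = ["Ms. Rivera", "Mr. Chen"]

ID_PATTERN = re.compile(r"\b\d{5,}\b")       # long numeric strings (ID numbers)
EMAIL_PATTERN = re.compile(r"\S+@\S+\.\S+")  # email addresses

def scrub(comment: str) -> str:
    """Replace names, ID numbers, and emails with neutral placeholders."""
    for name in STAFF_NAMES:
        comment = comment.replace(name, "[NAME]")
    comment = ID_PATTERN.sub("[ID]", comment)
    comment = EMAIL_PATTERN.sub("[EMAIL]", comment)
    return comment

# Example: scrub each comment before it goes anywhere near an AI tool.
raw = "Ms. Rivera (ID 482913) asked us to email her at rivera@school.org"
print(scrub(raw))
```

Even with a script like this, spot-check the output by hand: pattern matching misses nicknames, role references (“the third-grade team lead”), and other indirect identifiers that can still reveal who wrote a comment.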
Keep a secure, offline copy of the original data, and treat AI outputs as working drafts rather than final answers. Research shows that while ChatGPT can generate initial descriptive codes rapidly, it cannot replace the nuanced interpretation that human leaders provide.
Finally, occasional misquotations or misclassifications can occur, so all AI-generated codes and quotes should be verified against the original dataset and interpreted in the context of your school community. For more ideas on safeguarding student information, the guide Ethically Using AI with Student Data by Graham Clay offers a useful starting point.
The ability to make sense of large volumes of qualitative data is no longer optional for school leaders; it is essential. AI-powered thematic analysis offers a way to honor every voice, uncover patterns that might otherwise be missed, and turn feedback into focused, actionable change. When paired with ethical safeguards and sound leadership judgment, AI becomes more than a convenience; it becomes a strategic partner in shaping your school’s future. Try applying this six-step process to your most recent surveys or feedback, then share your findings with your leadership team. You may be surprised by how much clarity and direction you uncover in the voices you have been hearing all along.


