Using AI to Understand Customer Needs
- Aaron Weinstein
Guest post by Chris Wyczalkowski, Director of Customer Insights, MARTA
Customer Experience (CX) departments are popping up all over North American transit agencies. The goal is simple: treat riders like actual customers and give them what they need. But figuring out exactly what they need? That’s the tricky part.
Can Artificial Intelligence help? Definitely, yes.

Here is a look at how the Metropolitan Atlanta Rapid Transit Authority (MARTA) used AI to tackle some of the mountain of customer feedback we receive.
(Note: These are my own thoughts, mistakes and all! An AI agent was used to improve readability.)
Why "Open Text" is a Superpower (and a Headache)
We all know standard surveys—the ones where you check a box from 1 to 5. They’re the workhorses of data, and although they are easy to analyze, they can be a bit restrictive. Checking a box forces complex feelings and opinions into one set of prescribed options.
Working with MARTA’s Research & Analysis department, the CX Office of Customer Insights wanted to supplement our quantitative surveys with open-ended responses to dig for deeper insights. This lets riders speak, unprompted, in their own words. Whether it’s praise, a specific complaint, or a suggestion, this feedback can tell us why people feel the way they do and what is most important and top of mind.
For example, if someone rates "cleanliness" poorly, a checkbox won't tell you why. But a written comment might say that a certain stairwell always smells like urine or a specific restroom is frequently unavailable. Unprompted comments also can surface new ideas or other items that may have previously escaped scrutiny. These comments can provide the kind of detail agencies need to act.
The catch? Reading thousands of these comments by hand is a nightmare. It’s slow, hard to keep consistent, and requires manpower that most teams just don’t have. This is where AI swoops in to save the day.
The MARTA Experiment
My Office wanted to test whether AI could handle the heavy lifting of categorizing open-ended comments – interpreting the comments, generating themes, and quantifying the results. We ran a pilot program with three off-the-shelf AI tools, using our own data, to see if they could interpret and categorize open-ended rider comments.
The Verdict: Yes, they can!
Real-World Test: For one of the tests of this tool, we put up posters with QR codes asking riders what amenities they wanted in stations. We got about 400 responses and used the AI tools to categorize and quantify them.
The Old Way: It would have taken staff at least 30 hours to categorize these data.
The AI Way: The software categorized and ranked the categories automatically, and staff only needed about one hour to analyze the results.
That is a massive time-saver. We got insights from open-ended comments with almost zero manual grunt work.
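The basic categorize-and-rank step can be sketched in a few lines. The keyword rules and comments below are illustrative assumptions of mine, not Beehive.ai's method or real MARTA data; a real AI tool infers themes from the text instead of matching fixed keywords, but the output shape (themes ranked by comment volume) is the same idea:

```python
from collections import Counter

# Illustrative keyword rules standing in for an AI classifier.
THEME_KEYWORDS = {
    "restrooms": ["restroom", "bathroom"],
    "seating": ["bench", "seat", "seating"],
    "food/retail": ["coffee", "food", "vending"],
}

def categorize(comment):
    """Return every theme whose keywords appear in the comment."""
    text = comment.lower()
    matches = [theme for theme, words in THEME_KEYWORDS.items()
               if any(w in text for w in words)]
    return matches or ["other"]

def rank_themes(comments):
    """Count comments per theme and rank themes by volume."""
    counts = Counter()
    for c in comments:
        counts.update(categorize(c))
    return counts.most_common()

# Hypothetical responses, not actual survey data.
responses = [
    "Please add more benches on the platform",
    "A coffee stand would be great",
    "The restroom is always locked",
    "More seating and working restrooms",
]
print(rank_themes(responses))
```

The ranked list is what turns a pile of free text into something you can put in front of leadership: themes, each with a count.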
After a competitive bidding process, we chose a platform called Beehive.ai. We picked Beehive.ai for three primary reasons:
Consistency: The results were consistent over time.
Trainability and verification: We could teach the model, and easily trace categories to individual comments to confirm accuracy.
Usability: The visual interface was intuitive, provided the functions we needed, and was easy to use.
Seeing the Data
Beehive.ai provides a dynamic dashboard with three columns. The Categorization column on the left side of the dashboard shows categories and the number of comments for each category. The Responses column in the middle shows the actual verbatim comments within any selected category. And the AI Insights column on the right provides an AI-generated summary for any selected category. As different categories or subcategories are selected on the left, the Responses and AI Insights columns automatically adjust to that category or subcategory.
Figure 1: Overview of the three columns

Note that the data source for this comes from a survey question that asked non-riders what would get them to use the train.
Figures 2-5 below zoom in on each Beehive.ai column to show more detail.
Categories: Figure 2 illustrates how open-ended comments were categorized and quantified by Beehive.ai. Figure 3 shows subcategories under the "events" category. It indicates that people going to events often take MARTA to avoid traffic.
Figure 2: Categories

Figure 3: Subcategories

Verbatim Comments: Figure 4 shows individual verbatim comments and how they were categorized. Next to each comment is a bookmark button for flagging comments of interest to use later in presentations or reports.
Figure 4: Verbatim Comments

AI Insights: Figure 5 provides an example of AI insights generated by Beehive.ai. It shows that safety, convenience, and cost are the most important factors overall. The AI insights are dynamic, so they adjust when different categories or subcategories are selected.
Figure 5: AI Insights

Another use case is MARTA’s Voice of the Customer (VOC) survey, which prominently invites open-ended comments (see Figure 6).
Figure 6
The MARTA VOC surveys leave a generous amount of space to invite customers to write detailed comments that give us deep insights into pain points they experience during their journey.

The 2025 MARTA VOC Survey collected well in excess of 10,000 responses, and more than half of those riders took the time to write open-ended comments! Without AI, we could do a word search for anecdotes. With AI, we can do cool new things, like segmentation:
Riders with cars? They mostly talk about personal safety.
Riders without cars? They care most about punctuality.
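Segmentation like this boils down to grouping comments by a rider attribute and tallying themes within each group. A minimal sketch, assuming each response already carries a car-ownership flag and an AI-assigned theme; the records below are hypothetical, not VOC data:

```python
from collections import Counter, defaultdict

# Hypothetical records: (has_car, ai_assigned_theme)
records = [
    (True, "personal safety"),
    (True, "personal safety"),
    (True, "cleanliness"),
    (False, "punctuality"),
    (False, "punctuality"),
    (False, "personal safety"),
]

def top_theme_by_segment(records):
    """Tally themes within each segment and return the top theme per segment."""
    by_segment = defaultdict(Counter)
    for has_car, theme in records:
        segment = "riders with cars" if has_car else "riders without cars"
        by_segment[segment][theme] += 1
    return {seg: counts.most_common(1)[0] for seg, counts in by_segment.items()}

print(top_theme_by_segment(records))
```

The same pattern works for any segment you can attach to a response: home station, trip purpose, frequency of ridership, and so on.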
These use cases are not exhaustive; there are many more sources of open-text data to explore, and just as many types of analysis to run on them. It’s also worth underscoring that at the scale of the VOC survey, a manual analysis of the open-ended text could not practically be carried out at all.
Pro-Tips for Testing an AI tool
After testing AI analysis of qualitative data, here is what we learned:
Use your own data: Don’t just trust the demo; test it with your real messy feedback.
Get hands-on: You need a staff member to help "train" the model to make it smarter and more accurate over time.
Volume matters: AI needs a lot of data to be useful. It works best when you have a high volume of comments.
Trust but verify: One of our favorite features was the ability to "check the math." You can look at a category and see exactly which comments the AI put there to make sure it’s right.
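One way to make that "check the math" step possible in your own pipeline, whatever tool produces the assignments, is to keep the comment-to-category mapping rather than only the final counts. A minimal sketch with made-up assignments (not Beehive.ai output):

```python
from collections import defaultdict

# Made-up (comment, category) assignments as a categorization tool might emit them.
assignments = [
    ("The 2 bus is always late", "punctuality"),
    ("Station stairwell smells bad", "cleanliness"),
    ("Driver was very helpful", "staff praise"),
    ("Trains never run on schedule", "punctuality"),
]

# Invert the assignments so each category can be traced
# back to its verbatim comments.
by_category = defaultdict(list)
for comment, category in assignments:
    by_category[category].append(comment)

# Spot-check: read the verbatims behind a category before trusting its count.
for comment in by_category["punctuality"]:
    print(comment)
```

Reading a sample of verbatims behind each category is the quickest way to catch a model that is miscategorizing.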
From Data to Executive Action
Maybe the best part is that AI translates customers’ shouting into structured data executives can use alongside quantitative survey results. Instead of just sharing anecdotes ("Someone said a certain bus route needs more service"), leaders can see how customers prioritize their needs and how that aligns with strategic priorities.
This could help leadership:
Prioritize action based on real pain points.
Justify changes with evidence.
Track whether a new action is actually working over time.
Final Thought
The integration of AI into CX isn’t just about technology—it’s about culture. It reflects a commitment to listening more closely, responding more effectively, and encouraging riders to speak up when something isn’t working—because they know their voices will be heard.
AI is a tool that helps us do that at scale, but the goal remains human: to build a transit system that works better for the people who rely on it every day. And when we do that, we’re not just improving the rider experience—we’re strengthening the system itself. After all, customers want transit to succeed. Their interests are aligned with ours: a system that is reliable, responsive, user friendly, and resilient.
