A case study on building ScalpShield, an explainable, machine-learning–based anti-scalping system developed as a six-person final project for an E-Commerce Systems Design course, with a focus on problem finding, adoption, and clear technical communication.
In my final semester at CSU Stanislaus, I worked on ScalpShield, a six-person group project for our E-Commerce Systems Design course. Unlike traditional classes that focus primarily on implementation, this course was built around the idea of problem finding. Inspired by Daniel Pink’s New ABCs, the emphasis was on identifying meaningful problems, designing solutions people would actually adopt, and clearly communicating why those solutions mattered. The final deliverable was not just a working system, but a 25-minute presentation where we had to sell the problem and the solution to both the professor and the class.
Our team chose to tackle ticket scalping and purchase fraud, a problem that affects event organizers, ticketing platforms, and customers alike. With automated bots and resale markets becoming more sophisticated, simply building a technically sound system was not enough. We wanted to design something that could realistically be used by a company, trusted by its staff, and integrated into real workflows. ScalpShield was framed as a SaaS product that companies could subscribe to, while still being flexible enough to integrate directly into an existing ticketing backend.
I was responsible for the backend architecture, machine learning integration, system documentation, and serving as the lead technical communicator for the team. I also led the backend portion of our final presentation and ran the live demo. One of my main goals throughout the project was to make sure everyone on the team understood how the system fit together end to end, so frontend, backend, and data work could move forward in parallel without confusion. With only four weeks to build the system, clear communication was just as important as clean code.
System Overview. ScalpShield was built with a Python FastAPI backend and an XGBoost-based machine learning model trained to identify suspicious purchasing behavior. The backend exposes an API that scores transactions in real time and returns both a risk score and an explanation of why a purchase was flagged. We intentionally prioritized explainability, because a system that simply labels something as “fraud” without context is difficult to trust or adopt. The frontend, built as a dashboard, visualizes this data through a live sales feed, flagged transactions, heat maps, and lists of the most suspicious users.
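To make the "risk score plus explanation" idea concrete, here is a minimal sketch of the scoring logic that could sit behind such an endpoint. In the real system this ran behind FastAPI and used the trained model; the feature names, thresholds, and weights below are illustrative assumptions, not the project's actual values.

```python
def score_transaction(txn: dict) -> dict:
    """Return a risk score in [0, 1] plus human-readable reasons.

    A stand-in for the model-backed scorer: each rule mimics one
    engineered signal and contributes both to the score and to the
    explanation shown on the dashboard.
    """
    reasons = []
    score = 0.0

    # Many tickets in a single order is a classic scalping signal.
    if txn.get("ticket_count", 0) > 8:
        score += 0.4
        reasons.append(f"Large order: {txn['ticket_count']} tickets")

    # Rapid repeat purchases from one account suggest automation.
    if txn.get("purchases_last_minute", 0) > 3:
        score += 0.3
        reasons.append("Multiple purchases within one minute")

    # A payment card shared across many accounts is another red flag.
    if txn.get("accounts_on_card", 1) > 2:
        score += 0.3
        reasons.append("Payment card linked to multiple accounts")

    return {
        "risk_score": min(score, 1.0),
        "flagged": score >= 0.5,
        "reasons": reasons,
    }


if __name__ == "__main__":
    suspicious = {
        "ticket_count": 12,
        "purchases_last_minute": 5,
        "accounts_on_card": 1,
    }
    print(score_transaction(suspicious))
```

Returning the `reasons` list alongside the score is what lets the dashboard show a reviewer not just that a purchase was flagged, but why.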
Live Demo and Adoption Focus. During the final presentation, I ran a live demo that simulated how a company using ScalpShield would interact with the system. The demo showed a staff member logging into the dashboard, monitoring a live feed of ticket purchases, and reviewing flagged transactions along with the reasons they were considered suspicious. Rather than focusing on the model in isolation, we framed the demo around how an operations or fraud review team would actually use the tool day to day. This perspective helped reinforce that ScalpShield was designed for people, not just for technical correctness.
Machine Learning and Explainability. The XGBoost model was trained on engineered transaction features that captured patterns common in scalping behavior. While the model flagged transactions with around 95 percent confidence, we were careful not to treat that number as the sole measure of success. Instead, we emphasized that explainability and integration mattered just as much. The backend surfaces why a transaction was flagged, which makes it easier for users to validate decisions, build trust in the system, and take action with confidence.
Working as a Team Under Time Pressure. With a six-person team and a four-week timeline, coordination was one of the biggest challenges. Clear documentation and shared understanding of system boundaries helped us avoid rework and bottlenecks. Acting as the technical point of contact, I made sure design decisions were documented and communicated early, which allowed everyone to work efficiently on their respective components while still building toward a cohesive system.
Outcome and Reflection. The project was a major success. The system worked end to end during the presentation, and the feedback from the professor and class was overwhelmingly positive. Our professor encouraged us to continue developing ScalpShield beyond the course, noting its potential as a real product. For me, the project reinforced an important lesson from the class itself: in a world where tools and AI make coding more accessible, the real value lies in finding the right problem, designing for adoption, and communicating technical systems clearly. ScalpShield was not just a fun technical challenge, but one of the most rewarding team projects I have worked on.