
Judge of the SEC Student Pitch Competition and an expert in enterprise AI implementation with experience at Oracle, HPE, and NTT DATA — on how to distinguish a market-ready product from an attractive technology demo, and which AI startups are truly ready for scale
Student startup competitions have long ceased to be merely educational exercises. Increasingly, they serve as the first real selection point for technologies that may eventually move beyond university labs and find practical application in healthcare, industry, finance, and other sectors. Against the backdrop of rapidly growing interest in AI-driven solutions and applied innovation, the question of which projects are genuinely ready for the next step has become especially relevant.
The victory of Altaris MedTech at the 2025 Southeastern Conference (SEC) Student Pitch Competition, hosted by Vanderbilt University, was notable not only because of its medical focus. It drew attention beyond the technology itself to a broader question: what kinds of innovations are now considered truly market-ready? As startups increasingly operate at the intersection of AI, medicine, industry, and digital services, evaluation criteria have become far more demanding than they were just a few years ago. This year, the jury reviewed dozens of student startups across a wide range of domains — from MedTech and artificial intelligence to automation and applied digital solutions. The winning project stood out by convincingly demonstrating a balance of technological feasibility, market understanding, and a clear path to implementation and growth.
We spoke with Alex Potapov — a judge of the SEC Student Pitch Competition and a senior consultant in enterprise technologies and artificial intelligence — about how judges today distinguish promising ideas from concepts that are not yet ready to scale. Potapov regularly evaluates early-stage startups at the intersection of AI, digital manufacturing, and applied innovation. In this interview, he shares the principles judges use when assessing early-stage innovation, why some projects inspire trust while others do not, and the most common mistakes teams make in their first public pitches.
“Market-ready projects don’t just claim to be better.”
You served as a judge at the 2025 SEC Student Pitch Competition, which brought together teams from all SEC universities, and you regularly evaluate early-stage technology startups in both academic and corporate or consulting contexts. The Altaris MedTech win drew significant attention. Looking more broadly, what key signals do you look for as a judge to understand whether you’re seeing an idea or a product that is truly ready for the market?
When I evaluate a pitch, the first thing I look for is how much real discovery the team has done. I want to understand whether the problem they are solving is grounded in actual customer pain or whether it remains abstract and hypothetical. Teams that have spoken with users, tested assumptions, or observed real workflows stand out very quickly.
From there, I look at the market itself. Even a strong solution struggles if there isn’t a critical mass of potential buyers. I pay close attention to whether the team can clearly articulate who the customer is, how they identified that target audience, and why that specific segment is the right entry point for the product.
Finally, I assess differentiation. I want to see a clear understanding of what already exists on the market and how the proposed solution meaningfully improves on current alternatives. Market-ready projects don’t just claim to be better — they can explain, in concrete terms, why customers would switch and what makes the solution defensible.
The competition featured projects across a wide range of fields — from MedTech and AI to agtech and digital services. Given your experience working with technology solutions in industry, energy, and other highly regulated sectors, why do teams that think beyond a single technology and focus early on application and scale tend to win today?
The projects that perform best tend to understand that technology alone is not the product. What consistently differentiates strong teams is their ability to connect innovation to real-world application and constraints from the very beginning. That includes regulatory realities, user behavior, cost structures, distribution, and adoption friction.
In some cases, especially today, we see AI used primarily as an attention grabber. Because AI is a popular and powerful topic, it can be tempting for teams to position it at the center of the pitch even when it doesn’t materially help solve the problem at hand. Judges are quick to notice when AI is added for appeal rather than for impact.
AI is a tool, not a solution in itself, and it needs to be used thoughtfully. The strongest teams can clearly explain why AI is necessary, what role it plays in the workflow, and how it improves outcomes compared to simpler or more traditional approaches. When technology directly supports the use case — rather than distracting from it — the project feels grounded, credible, and far more likely to succeed in the real world.
“Meaningful use of AI is usually very specific.”
You have spent many years working at the intersection of enterprise technology, sales, and implementation — from Oracle and HPE to consulting at NTT DATA, where you led initiatives with multi-million-dollar commercial potential. How does this experience shape the way you evaluate early-stage startups?
My experience definitely shapes how I evaluate early-stage startups. Having worked across enterprise technology, sales, and consulting, I naturally think about what happens after the proof of concept. Even at an early stage, strong teams show awareness of integration challenges, stakeholder alignment, and operational complexity.
Because I’ve spent a significant part of my career in sales, I also pay close attention to market fit and go-to-market strategy. A product can be technically excellent, but that alone is not enough. Teams need to demonstrate that they understand who will buy the product, how they will reach those customers, and why the market is ready for the solution.
History shows that even great products can fail if a company struggles to tell its story. Clear positioning and storytelling are not marketing extras — they are essential to adoption. When a team can articulate its value in a way that resonates with customers, partners, and investors, it significantly increases the chances that the technology will actually make an impact in the market.
Today, the word “AI” appears in almost every pitch. Given your hands-on experience with generative AI in corporate and industrial contexts, how do judges distinguish meaningful AI use from a superficial “AI label”?
AI appears in a large percentage of startup pitches today, which makes careful evaluation especially important. From a judging perspective, meaningful use of AI is usually very specific and intentional. Strong teams can clearly explain why it is needed, what role it plays in the product, what data it relies on, and what would stop working if that component were removed from the solution.
In contrast, we often see AI used as a broad label rather than a core capability. Because it is such a dominant topic right now, some teams position it as the centerpiece of their pitch even when it contributes little to solving the underlying problem. Judges with hands-on experience can quickly tell when AI is serving as decoration rather than as a necessary tool.
Another signal we look for is realism around limitations. Teams that understand data quality, bias, model maintenance, and operational costs tend to be more credible. AI is powerful, but it introduces complexity and trade-offs. When founders acknowledge those trade-offs and still make a compelling case for why this approach is justified, it signals maturity and thoughtful product design.
Ultimately, the strongest pitches treat AI as an enabler, not the story itself. The story remains the problem, the customer, and the outcome, with AI serving a clear and justified role in making that outcome better.
“Judges don’t expect early-stage teams to have a detailed plan.”
In corporate projects, you have repeatedly worked on the transition from PoC to real-world deployment under tight deadlines, including in environments with high operational complexity and many stakeholders. When you listen to early-stage pitches, what tells you that a team understands this transition — even if they are still far from execution?
That transition matters a great deal. Building a prototype is relatively easy; validation, iteration, and operationalization are much harder. Judges don’t expect early-stage teams to have a detailed plan, but they do want to see awareness of the risks and stages ahead.
Teams that acknowledge complexity and dependencies appear far more reliable than those that present a smooth, obstacle-free path forward. Realism is a sign of maturity, not weakness.
You are also actively involved in other Vanderbilt innovation programs, including The Wond’ry and the Sullivan Family Ideator Program, where you serve as an expert and judge for teams seeking funding. What mistakes do technically strong teams most often make when presenting their ideas?
One of the most common mistakes technically strong teams make is leading with technical sophistication instead of value. Many founders are understandably proud of what they’ve built, but judges are focused on why it matters — the real-world problem it solves and the impact it creates for users. A pitch should prioritize problem relevance, customer pain points, and user outcomes before diving into the technical details. When teams spend too much time on algorithms, features, or complex diagrams upfront, the audience can lose sight of the core value, which diminishes the persuasiveness of the pitch.
Another frequent issue is a lack of focus. Teams sometimes try to address too many use cases, customer segments, or product features at once, thinking breadth shows ambition. In reality, judges respond better when a team defines a narrow, well-understood starting point and demonstrates a clear path to scale from there. A focused approach allows the team to show depth of understanding, validate assumptions effectively, and convey a believable story about adoption and growth.
Finally, I often see teams underestimate storytelling. Even technically brilliant solutions need to be framed in a way that non-specialist judges can quickly grasp. Simplifying without dumbing down, highlighting the problem-solution fit, and connecting the idea to tangible outcomes make a huge difference in how a pitch is received. Teams that master this balance — combining technical credibility with clear value communication — consistently stand out.
“The most technically complex product doesn’t always win.”
In your view, how do university ecosystems like the SEC and Vanderbilt influence the quality of projects that make it to competition finals, especially considering the role of mentors, judges, and industry experts in shaping these teams?
The difference is immediately noticeable. Teams working within strong ecosystems are generally better prepared, more open to feedback, and more resilient under pressure. Access to mentors, researchers, and structured programs allows them to learn faster and avoid common mistakes.
These ecosystems also encourage knowledge exchange between teams from different industries, which often leads to more thoughtful and viable solutions. As a result, strong ecosystems don’t just produce more startups — they produce better ones with stronger chances of success beyond the competition.
If future participants of the SEC Student Pitch Competition were reading this interview, what single piece of advice would you give teams that want judges to see a future business rather than just an interesting idea?
Know your customer better than anyone else in the room. Deep customer understanding is what turns an idea into a business. You need to be able to explain, concretely, why your solution is faster, simpler, or cheaper than the alternatives.
It’s also worth remembering that the most technically complex product doesn’t always win. The market often chooses what is easier to adopt and use. When judges see a clear problem, a viable solution, a path to adoption, and elements of defensibility, they start to view a project as a real business — not just a strong pitch.



