
Judging Philosophy: Build What Matters

The CATS Hackathon evaluation is fundamentally different from traditional hackathon judging. Understanding this philosophy helps teams align their work with what judges value.

Core Principle

“Build What Matters” means:

  • Solutions address real community needs, not assumed problems
  • Decisions are evidence-based, not opinion-based
  • The journey of discovery is valued as much as the final product
  • Community voices guide development, not just technical capability

Traditional vs. CATS Evaluation

Traditional Hackathon Judging

What’s Typically Valued:

  • Polished demo and presentation
  • Technical complexity and innovation
  • Impressive tech stack (AI, blockchain, etc.)
  • Pitch quality and storytelling
  • Team credentials and experience

Problem: Teams often build solutions looking for problems, not the reverse.

CATS “Build What Matters” Judging

What We Value:

  • Authentic community research (voice notes, diverse interviews)
  • Evidence-based insights (pattern analysis, root causes)
  • Clear problem validation (specific, measurable hypothesis)
  • Thoughtful solution exploration (multiple options considered)
  • Fit between problem and solution (does it actually address the validated need?)
  • Community impact (will this help real people?)

Result: Teams build what communities need, validated through systematic research.

The Action-Learning Lens

Why the Four Quadrants Matter

Judges evaluate your complete journey, not just the destination:

Quadrant 1: Ground Truth

  • Did you talk to enough diverse people?
  • Are voice notes authentic and substantive?
  • Did you observe problems in real contexts?

Quadrant 2: Formulate Insight

  • Did you identify meaningful patterns?
  • Did you find root causes, not just symptoms?
  • Are superior interests clearly articulated?

Quadrant 3: Formulate Hypothesis

  • Is your problem statement specific and measurable?
  • Is the root cause from your research evident?
  • Is the hypothesis testable?

Quadrant 4: Define Opportunity

  • Did you explore multiple solution options?
  • Is your selection justified with evidence?
  • Are trade-offs acknowledged?

If any quadrant is weak, it affects your overall score.

Action-Learning Methodology →

What “Good” Looks Like

Scenario: Waste Management Solution

❌ Traditional Approach (Weak)

Thought Process: “Waste is a problem in Lagos. Let’s build an app for waste collection scheduling.”

Evaluation Issues:

  • No evidence of community research
  • Assumed problem without validation
  • Solution before understanding
  • No differentiation from existing services
  • Unclear why target users would adopt

Score: Low across all criteria

✅ CATS Approach (Strong)

Thought Process:

  1. Ground Truth: Interviewed 15 market traders across 3 markets. Voice notes reveal traders lose 30 minutes daily clearing garbage, affecting business setup. Municipal services skip commercial areas due to budget/access constraints.

  2. Insight: Root cause isn’t lack of collection infrastructure—it’s prioritization and coordination. Traders willing to pay but lack collective bargaining power.

  3. Hypothesis: “We believe market traders in Yaba experience daily business disruption due to waste accumulation because municipal services prioritize residential areas and private collectors require coordination traders lack. If we create a collective payment and scheduling system, then 95% of participating traders will have waste removed within 24 hours.”

  4. Opportunity: Explored 5 options (app, SMS, WhatsApp group, physical collection points, USSD). Selected WhatsApp + shared payment wallet because: already familiar interface, no download barrier, works on feature phones.

  5. Solution: Built WhatsApp bot that coordinates daily collection requests, splits costs automatically, sends receipts. Tested with 20 traders. 85% said they’d pay ₦200/day for reliable service.
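To make the cost-splitting in step 5 more concrete, here is a minimal sketch in Python. The even-split rule, function names, trader names, and amounts are illustrative assumptions, not the team's actual bot.

```python
# Minimal sketch of the cost-splitting and receipt step described in point 5.
# The even-split rule, names, and amounts are illustrative assumptions.

def split_collection_cost(total_fee_ngn: int, traders: list[str]) -> dict[str, int]:
    """Split a day's waste-collection fee evenly across participating traders.

    Any remainder from integer division is spread one naira at a time so the
    shares always add up to the total fee.
    """
    if not traders:
        raise ValueError("At least one trader must participate")
    base_share, remainder = divmod(total_fee_ngn, len(traders))
    return {
        name: base_share + (1 if i < remainder else 0)
        for i, name in enumerate(traders)
    }

def format_receipts(shares: dict[str, int], collection_date: str) -> list[str]:
    """Build simple receipt messages a bot could send back over WhatsApp."""
    return [
        f"{collection_date}: {name}, your share of today's collection is NGN {amount}."
        for name, amount in shares.items()
    ]

if __name__ == "__main__":
    shares = split_collection_cost(total_fee_ngn=1000, traders=["Ada", "Bola", "Chinedu"])
    for receipt in format_receipts(shares, "2025-01-15"):
        print(receipt)
```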

Evaluation Strengths:

  • Strong Ground Truth evidence (voice notes, diverse sample)
  • Clear insights and root cause analysis
  • Specific, testable hypothesis
  • Multiple options explored, justified selection
  • Functional solution that fits validated need
  • Community validation of impact

Score: High across all criteria

Red Flags for Judges

Automatic Concerns

❌ No Voice Notes - Questions authenticity of research
❌ Superficial Research - Fewer than 10 interviews or a homogeneous sample
❌ Solution-First Thinking - Idea came before community research
❌ Buzzword Overload - “AI-powered blockchain with ML”
❌ Copying Existing Products - “Uber for X” without differentiation
❌ Ignoring Quadrants - Jumping from problem to solution
❌ Poor Documentation - Incomplete team pages
❌ Unrealistic Claims - “Will solve poverty in Lagos”

Misconceptions to Avoid

Misconception 1: “More complex = better”
Reality: Appropriate technology for the context > complexity

Misconception 2: “Judges want blockchain solutions”
Reality: Use blockchain only if it adds genuine value

Misconception 3: “Perfect demo wins”
Reality: Evidence of authentic research > polished presentation

Misconception 4: “Only one right answer”
Reality: Multiple valid approaches; justification matters

What Judges Ask Themselves

When reviewing each team:

  1. Ground Truth: “Do I believe they actually talked to these people?”
  2. Insight: “Did they understand why the problem exists?”
  3. Hypothesis: “Is this specific enough to be testable?”
  4. Opportunity: “Did they seriously consider alternatives?”
  5. Solution: “Does this actually address the validated problem?”
  6. Impact: “Will this matter to real people?”
  7. Overall: “Did they build what matters, or what’s impressive?”

Comparative Evaluation

How Judges Compare Teams

When two teams have similar problem areas:

Team A:

  • 8 voice notes, similar demographics
  • Surface-level insights
  • Built impressive AI feature
  • Complex tech stack
  • Polished demo
  • Judges think: “Impressive tech, but did they validate the need?”

Team B:

  • 15 voice notes, diverse profiles
  • Root cause identified
  • Simple SMS solution
  • Basic tech stack
  • Functional but plain demo
  • Judges think: “Evidence-based, appropriate solution, likely to be adopted”

Result: Team B scores higher because their process was more rigorous, even if tech is simpler.

Balancing Criteria

The 60-25-15 Rule

Notice the weight distribution:

  • 60% on journey (Ground Truth 25% + Insight 15% + Hypothesis 10% + Opportunity 10%)
  • 25% on solution (Solution Quality 25%)
  • 15% on impact (Impact Potential 15%)

Message: Your research and methodology matter more than your demo.
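As a rough illustration of how these weights combine into an overall score, the sketch below scores the two hypothetical teams from the Comparative Evaluation section. The per-criterion scores are invented for illustration; this is not an official scoring tool.

```python
# Hypothetical illustration of how the criterion weights combine into an
# overall score. Per-criterion scores (0-100) are invented for illustration.

WEIGHTS = {
    "ground_truth": 0.25,
    "insight": 0.15,
    "hypothesis": 0.10,
    "opportunity": 0.10,
    "solution_quality": 0.25,
    "impact_potential": 0.15,
}

def overall_score(criterion_scores: dict[str, float]) -> float:
    """Weighted sum of criterion scores, each on a 0-100 scale."""
    return sum(WEIGHTS[name] * score for name, score in criterion_scores.items())

# Team A: polished tech, thin research (see Comparative Evaluation above).
team_a = {"ground_truth": 50, "insight": 55, "hypothesis": 60,
          "opportunity": 50, "solution_quality": 90, "impact_potential": 60}

# Team B: rigorous research, simpler solution.
team_b = {"ground_truth": 90, "insight": 85, "hypothesis": 85,
          "opportunity": 80, "solution_quality": 70, "impact_potential": 80}

print(f"Team A overall: {overall_score(team_a):.1f}")
print(f"Team B overall: {overall_score(team_b):.1f}")  # Team B ends up higher despite simpler tech.
```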

When Technical Excellence Matters

Solution Quality (25%) rewards:

  • Functionality (does it work?)
  • Fit (does it match the validated need?)
  • Usability (can target users actually use it?)
  • Evidence of testing

It does NOT heavily reward:

  • Complexity for its own sake
  • Impressive but unnecessary features
  • Technology hype
  • Enterprise-grade scalability (you're building an MVP)

Continental-Level Expectations

At the continental level (Top 5), judges look for:

Exceptional Examples of “Build What Matters”:

  • Best demonstration of action-learning methodology
  • Deepest community engagement
  • Most compelling evidence and insights
  • Strongest alignment between problem and solution
  • Greatest potential for scaled impact

Not Just:

  • Best technical implementation
  • Most features
  • Slickest presentation

Advice from Judges

Do:

✅ Show authentic community engagement
✅ Document your journey thoroughly
✅ Be honest about challenges and pivots
✅ Explain your reasoning and trade-offs
✅ Demonstrate you listened to the community
✅ Build something appropriate for the context

Don’t:

❌ Fake research or voice notes
❌ Skip documentation to focus only on demo
❌ Over-promise impact
❌ Use buzzwords without substance
❌ Copy without understanding local context
❌ Ignore feedback from community validation

Final Thought

The CATS Hackathon is not about finding the best programmers.

It’s about finding teams who:

  • Genuinely listened to communities
  • Did rigorous, evidence-based research
  • Built solutions that matter to real people
  • Demonstrated a methodology that can be repeated

Your code might have bugs. Your demo might crash. But if your research is solid and your solution addresses a real, validated need, you’re building what matters.


View Evaluation Criteria →
Learn Action-Learning Methodology →
