case study

Pivoting to solve the right problems so that users can find the right thing.

How I stopped us from building the wrong solutions to the wrong problems.

SLS search redesign case study cover

67% of users preferred the new design. The bigger win was stopping two builds that would have failed to solve users' problems.

SLS is Singapore's national learning platform for students and teachers. Search was broken. The team's plan was semantic search, a year in the making. I came in and challenged it.


New search filters and results UI

Role

Product Designer

Contributions

Research, Problem definition, Stakeholder alignment, Solution validation

Platform

The Singapore Student Learning Space (SLS), serving 500k users

5 failure points. Semantic search would have fixed maybe 1, expensively.

I pulled search logs, reviewed user feedback, ran a quick survey, and read academic literature on child-computer interaction. When direct user access is limited, published research on common behaviours like search fills that gap fast. This shaped everything that followed.

What I found made the semantic search plan hard to justify.

Old search filters UI showing the complicated filtering experience

Our UI forced filtering before searching, the opposite of how users expected search to work.

44%

rated our UI complicated or very complicated

2 vs 167

results depending on whether users searched "interactive digital textbooks" or "IDT"

16

filter combinations a user might need to try just to find the right module


The literature also showed that children's developing motor and literacy skills led to more typos. Our exact keyword matching punished them for it.

Then there was the problem nobody had caught. A national policy shift had moved students to subject-based banding, but our metadata hadn't kept up. Students searching "Secondary 1 Mathematics G3" found only 4 out of 62 modules. Every module was still tagged "Secondary 1 Express." No search algorithm would have fixed this.

The gap between how users searched and how content was tagged was robbing other teams' work of its visibility.

Convincing a team to abandon a year of momentum

Semantic search had been sold to senior management as the answer. When I sat with the team, I realised their concerns matched exactly what I'd found. They knew users were failing to find things. They just didn't have evidence for why.

I mapped every problem against what semantic search could realistically solve and walked through real failing queries. The academic literature reframed typos, acronym matching, and vocabulary mismatches from minor annoyances to foundational issues affecting most users. Getting these basics right would do 80% of the heavy lifting.

"Oh, I see. Semantic search was actually just one small part of it."

— Product Teammate, after my presentation

Placeholder: before/after UI screenshots

I stopped an AI recommender before it cost the team a full sprint

Mid-project, a proposal came in for AI-generated module recommendations. I hypothesised it would fail: we hadn't dug deeply enough into users' journey in selecting resources, or into what content the AI could draw on to make its recommendations.

Rather than flag this without evidence, I embedded the question in my usability study. I asked users what information they needed to evaluate a module, then showed that it was missing from most of our content. There was nothing for the AI to pull from.

That killed the recommender. The same data is now seeding a future build to redesign our search cards, without needing a separate study.

This is one of our first builds where we're measuring whether it actually worked

I made the case for building data hooks directly into search to compare user behaviour before and after launch, and won senior management's support. Most builds go out without this. We're now building the hooks to track click-through rates so we can keep iterating rather than assume success.
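To make the instrumentation concrete, here is a minimal sketch of the kind of click-through hook this implies, written in TypeScript. The event name, fields, and sendEvent function are illustrative assumptions, not the actual SLS analytics schema.

    // Hypothetical click-through hook for search results (illustrative sketch only).
    // Event name, fields, and the analytics transport are assumptions, not the SLS schema.
    interface SearchClickEvent {
      query: string;     // what the user typed into search
      resultId: string;  // the module that was opened
      position: number;  // rank of the result in the list, 1-based
      timestamp: number; // when the click happened
    }

    // Stand-in for whatever analytics pipeline would actually receive the event.
    function sendEvent(name: string, payload: SearchClickEvent): void {
      console.log(name, JSON.stringify(payload));
    }

    // Called by the results UI when a user opens a module from the search results.
    export function trackSearchClick(query: string, resultId: string, position: number): void {
      sendEvent("search_result_click", { query, resultId, position, timestamp: Date.now() });
    }

Comparing click-through rates on events like these, before and after launch, is what turns "we shipped it" into "we know whether it worked."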

Leadership now wants this search revamp as the reference model for all search across the system.

67%

of 30 test users preferred the new design

39%

valued filtering by questions, an existing feature that was buried

1st

build with data hooks to validate success

"Thought it is opportune to convey a special note from [boss]. He says that the revamped search feature is very well done. Keep up the great work!"

— Product Lead

What I'd do differently

Getting users on board for research is genuinely hard. Time is always tight. Knowing how useful it was to add the AI recommender question to my usability study, I'd go further next time. I'd vibe code rough explorations of possible search card redesigns for users to interact with in the same study. Even a loose early sense of which direction to take the cards would give us a stronger foundation going into that redesign.

I'd also have pushed harder on the data hooks. During the build, the team was comfortable assuming that any interaction signal indicated relevance. But clicks mean different things depending on what action was taken. If we had captured more granular data about specific actions on search cards, it would have told us not just whether results were useful, but how users were actually engaging with them. That would have directly informed the card redesign and given us a much sharper picture of what to fix next.