Designing Trust Behind the Scenes


UX Research · Case Study

Operational Ethnography in Mortgage Operations

Role — Mortgage Processing Officer · Duration — 6 months · Volume — High-volume daily servicing operations · Method — Embedded Operational Ethnography

"Employees were the glue connecting systems that should have been designed to connect themselves."

Overview

Financial services often appear seamless to customers. Payments are processed, balances are updated, obligations are met — quietly and reliably. What customers never see is the operational reality that makes that possible: the people, the repetition, the workarounds, and the human effort holding everything together behind the scenes.

Over six months as a Mortgage Processing Officer at a large financial institution, I processed hundreds of mortgage property tax servicing cases using legacy core banking systems. What I found was more than inefficiency — it was a system structurally dependent on human compensation.

Working inside this process revealed how operational friction shapes both employee workflows and customer trust.


Skills Demonstrated

Operational Ethnography · Enterprise UX Research · Systems Thinking · Service Blueprinting · Cognitive Load Analysis · Workflow Analysis · Opportunity Framing · Human-Centered Service Design · Operational Modernisation Strategy · FinTech Domain Expertise

Research Lens — Embedded ethnography + workflow analysis + service blueprinting + operational systems mapping


Operational Scale

~70 — property tax bills posted manually per day, one by one
Hundreds — servicing cases processed over six months
3 — disconnected systems requiring constant switching
0 — batch processing options available

Every bill required its own manual workflow:

Lookup → Verify → Enter → Submit → Reconcile → Repeat

No bulk upload. No auto-fill. No duplicate detection. No safety net.

This was not an edge case. This was the job.


Problem Statement

Mortgage property tax servicing is customer-facing in outcome but operationally invisible in execution.

Customers see: ✓ Tax paid ✓ Balance updated ✓ Account maintained

Behind that sits a manual, fragmented, memory-dependent process involving legacy green-screen interfaces, repetitive data entry, multi-system switching, no audit trail, and unclear error handling.

The system was built to process transactions. It was not built for the humans doing the processing.

How might legacy mortgage servicing systems reduce repetitive operational effort while improving workflow clarity, resilience, and transparency for both employees and customers?


Research Approach

Embedded Operational Ethnography

I did not observe this system from the outside. I worked inside it for six months, processing real cases under real conditions. That position gave me access to something interviews and surveys rarely reach — what the work actually feels like when you are the one doing it, day after day.

Methods included longitudinal observation, processing pattern analysis across hundreds of cases, reflective field notes on workflow friction, system interaction mapping, cognitive load analysis, and service blueprint mapping of frontstage versus backstage realities.

This was observational research grounded in lived operational practice — capturing tacit knowledge, recurring workflow patterns, and system frictions that formal process documentation often misses.


Operational Burden Map


At the centre of three disconnected legacy core banking systems — including green-screen enterprise platforms, servicing systems, and reconciliation tools — was not integration. It was people.

Employees manually transferred context between systems, remembered workflow logic that was never documented, verified inconsistencies the systems could not catch, and built personal workarounds to fill gaps the technology ignored.

Employees were not simply processing work — they were the missing integration layer between disconnected systems.


Key Findings

Finding 01 — The System Had No Memory, So Employees Became the Memory

Workflow logic, command paths, and exception handling were not documented anywhere in the system. They lived in people's heads, passed down verbally through observation and habit.

↳ Real example: When I started, there was no written guide for navigating the core banking workflows. I learned by watching a colleague and writing my own notes. If that person had not been available, I would have had nothing to refer to. When experienced staff leave, that knowledge walks out with them. The system has no way to hold it.

Opportunity → In-system guidance, searchable workflow documentation, embedded onboarding support


Finding 02 — 70 Bills a Day. One at a Time. Every Day.

Property tax processing required paying each individual client's tax bill separately — one per person, one per municipality, one at a time — entered manually into the core banking platform. There was no batch processing. Every bill was its own workflow: look up the client, verify the details, enter the amount, submit, repeat. Approximately 70 times a day.

By the end of the day, the work became mechanical — repetitive to the point where attention dulled and fatigue quietly increased the likelihood of error. A wrong digit. A missed field. A bill posted to the wrong municipality.

The risk was not complexity — it was relentless repetition without safeguards.

↳ Real example: By mid-afternoon the entries became automatic. There was no system check, no duplicate alert, nothing to catch a tired mistake — only human attention worn down by hours of identical actions. One transposition error in an account number meant the payment went to the wrong place entirely.

Opportunity → Batch import, auto-population from verified records, duplicate and anomaly detection
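To make the duplicate-detection direction concrete, here is a minimal sketch of how a servicing system could flag a bill that matches one already posted. Every name and field here is hypothetical, invented for illustration; nothing reflects the real platform.

```python
# Hypothetical sketch: duplicate detection for manual tax-bill entry.
# All identifiers are invented; this shows the design direction only.
from dataclasses import dataclass


@dataclass(frozen=True)
class TaxBill:
    client_id: str
    municipality: str
    tax_year: int
    amount_cents: int


class DuplicateGuard:
    """Warns before posting a bill identical to one already entered."""

    def __init__(self) -> None:
        self._seen: set[TaxBill] = set()

    def check_and_record(self, bill: TaxBill) -> bool:
        """Return True if the bill is new; False if it looks like a duplicate."""
        if bill in self._seen:
            return False  # surface a warning instead of silently posting twice
        self._seen.add(bill)
        return True


guard = DuplicateGuard()
bill = TaxBill("C-1042", "Springfield", 2023, 184_250)
assert guard.check_and_record(bill) is True   # first entry posts normally
assert guard.check_and_record(bill) is False  # identical re-entry is flagged
```

A safeguard this simple would have caught exactly the tired mid-afternoon mistake described above, without asking anything extra of the person at the keyboard.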


Finding 03 — The Reconciliation System Was Irreversible and Left No Trail

Prepayment and arrears processing required working across two systems — a servicing platform for the transaction and a reconciliation system to confirm and close it. A recap sheet tracked the work. Once a transaction closed in the reconciliation system, it could not be edited. There was no system-level audit history to check.

If a mistake surfaced after closing, the only option was a screenshot taken beforehand — if the employee had remembered to take one. The recap sheet helped, but only if it had been saved. Recovery depended entirely on personal documentation habits, not system design.

One of the most stressful moments in the workflow was realising a screenshot had not been captured before closing a transaction — and discovering something had gone wrong afterward. There was no way to explain what happened. No way to prove what had been entered. No undo. Everything had to be reconstructed and redone.

The system assumed every entry was correct. It had no plan for when it was not.

↳ Real example: Tracing a discrepancy in a closed prepayment or arrears entry meant reconstructing the transaction from memory, cross-referencing the recap sheet if it had been saved, and manually checking other records. That is not a system supporting its users. That is a system that handed the problem back to the person.

Opportunity → System-maintained audit trail, pre-close confirmation screen, post-close view-only record, auto-saved recap
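As a sketch of what a system-maintained audit trail could look like, the snippet below models an append-only log that refuses edits after close but keeps the full history readable. The class and method names are my own illustration, not the institution's design.

```python
# Hypothetical sketch: an append-only audit trail that replaces manual
# screenshots. Names are illustrative, not taken from any real system.
import datetime


class AuditTrail:
    """Append-only record of every action taken on a transaction."""

    def __init__(self) -> None:
        self._entries: list[tuple[datetime.datetime, str, str]] = []
        self._closed = False

    def log(self, action: str, detail: str) -> None:
        if self._closed:
            raise RuntimeError("transaction closed: record is view-only")
        now = datetime.datetime.now(datetime.timezone.utc)
        self._entries.append((now, action, detail))

    def close(self) -> None:
        self.log("close", "transaction confirmed and closed")
        self._closed = True

    def history(self) -> list[tuple[datetime.datetime, str, str]]:
        # Always readable, even after close: no reconstruction from memory.
        return list(self._entries)


trail = AuditTrail()
trail.log("enter", "prepayment applied to account A-7731")
trail.close()
assert len(trail.history()) == 2  # both actions are preserved after close
```

The design choice worth noting is that closing does not hide anything; it only freezes the record, so "what was entered, and when" never depends on a screenshot.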


Finding 04 — Errors Were Designed for Machines, Not Humans

When something went wrong, the system told you that it had — but not what, not why, and not what to do next. Error messages were cryptic. Troubleshooting meant asking someone more experienced, if they were available.

↳ Real example: The answer to most error messages was to find a senior colleague. Not because there was a support system — but because the colleague was the support system. That works until they leave.

Opportunity → Human-readable errors with suggested next steps, guided troubleshooting, embedded escalation paths
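A minimal sketch of the human-readable error direction: map each raw code to a plain-language explanation plus a suggested next step. The error codes and wording below are invented for illustration.

```python
# Hypothetical sketch: translating cryptic system error codes into
# plain-language messages with a next step. Codes and text are invented.
ERROR_GUIDE = {
    "ERR-4012": (
        "Municipality code not recognised.",
        "Check the code against the municipality list, then re-enter.",
    ),
    "ERR-7303": (
        "Payment amount exceeds the escrow balance.",
        "Verify the bill amount; if correct, escalate to a supervisor.",
    ),
}


def explain(code: str) -> str:
    """Return a readable explanation and next step for a raw error code."""
    what, next_step = ERROR_GUIDE.get(
        code,
        ("An unknown error occurred.", "Contact support and quote this code."),
    )
    return f"{code}: {what} Next step: {next_step}"


print(explain("ERR-4012"))
```

Even a lookup table this crude would shift troubleshooting from "find the senior colleague" to "read the screen," which is the whole point of the opportunity.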


Finding 05 — Invisible Operations, Visible Consequences

Customers see a confirmed payment. They do not see what it took to get there. When everything goes right, that invisibility feels seamless. When something goes wrong, that invisibility quickly turns into uncertainty and erodes trust.

Customers should not need to call the bank to find out what happened to their money.

Opportunity → Customer-facing servicing milestones, proactive status updates, confirmation at every stage


Cognitive Load Analysis


Current state — the system forces recall

Employees had to remember command paths, workflow sequences, reconciliation logic, documentation habits, and exception handling — entirely from memory, under time pressure, every day.

Future state — the system supports recognition

Modern design means visible steps, guided actions, auto-populated fields, integrated records, and clear status at every point. The system does the remembering. The person does the thinking.

Good systems support recognition. Poor systems force recall.

Much of the role's cognitive effort was consumed by repetitive execution rather than judgment, analysis, or problem-solving. Good design would return that higher-value thinking to the work.


Service Blueprint


Customer Experience (Frontstage)

Bill received → Verification in progress → Payment scheduled → Payment sent → Confirmation issued

Simple. Quiet. Seamless.

Operational Reality (Backstage)

Bill intake → Record lookup → Manual entry ×70/day → Servicing update → Reconciliation → Exception handling → Error correction → Screenshot documentation → Final release

Complex. Repetitive. Human-dependent.

Hidden friction: Between these two experiences is the invisible layer — human vigilance, personal screenshot habits, and institutional memory that lives in people, not systems.


Design Opportunities

These are not tested solutions. They are design directions grounded in six months of observed operational pain.

01 — Unified Servicing Dashboard One workspace replacing multi-system navigation. Customer record, servicing timeline, payment status, exception queue — all visible without switching screens.

02 — Batch Processing & Smart Auto-Fill The system should remember what it already knows. Bulk upload, auto-population from verified records, anomaly detection, shortcut workflows.

03 — System-Level Audit Trail Every action logged automatically. Every entry traceable. No screenshots required. No reconstruction from memory. Pre-close confirmation. Post-close view-only record.

04 — Human-Centered Error Design Errors that tell you what happened and what to do next — not just that something went wrong. Suggested fixes, guided escalation, plain language throughout.

05 — Customer Tax Visibility Layer ✓ Bill received ✓ Verification in progress ✓ Payment scheduled ✓ Payment sent ✓ Confirmation issued


Concept Exploration — ClearServe

A speculative concept grounded in six months of operational observation. Designed as a concept exploration — not as a validated product recommendation — to translate operational insight into future-state service possibilities.

After six months of doing this work manually, one question kept coming back: what if customers could do this themselves — and what if the operations team could spend their time on work that actually required them?

ClearServe explores that question across three layers.

Layer 01 — Self-Serve Portal For customers who want control

A mobile-first portal where customers initiate or schedule property tax servicing through guided digital flows. Upload a bill, schedule a payment, see confirmation, track history. No phone calls. No waiting. No wondering.

Layer 02 — Live Status Tracker For customers who prefer the bank handles it

Not every customer wants to self-serve — and that is a valid choice. For them, ClearServe adds visibility without requiring action.

Bill received → Verified → Scheduled → Sent → Confirmed

Same service as today. Just no longer invisible.

Layer 03 — Operations Oversight Layer For the staff who make it work

Routine cases partially automated through guided workflows. Staff focus on exceptions, anomalies, complex cases, and quality oversight — work that actually requires human judgment.

This is not job replacement. It is job redesign. The mechanical work decreases. The work that requires people increases.




Speculative Impact — If ClearServe Worked

On staff time At approximately 70 entries per day averaging 3 minutes each, that is roughly 3.5 hours of repetitive data entry daily. A 50% reduction in manual input saves about 1.75 hours per day, or approximately 35 hours per month per officer assuming around 20 working days, returned to higher-value work.
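The staff-time figure can be checked with simple arithmetic. Note that the 20-working-day month is my assumption for the calculation, not a number from the study itself.

```python
# Checking the staff-time estimate: ~70 entries/day at ~3 minutes each,
# a 50% reduction in manual input, and an assumed 20 working days/month.
entries_per_day = 70
minutes_per_entry = 3
working_days_per_month = 20  # assumption, not stated in the case study

daily_hours = entries_per_day * minutes_per_entry / 60   # repetitive entry per day
hours_saved_per_day = daily_hours * 0.5                  # with a 50% reduction
hours_saved_per_month = hours_saved_per_day * working_days_per_month

print(daily_hours, hours_saved_per_month)  # 3.5 hours/day, 35.0 hours/month
```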

On error recovery Errors requiring manual reconstruction took an estimated 20–40 minutes without an audit trail. A system log reduces that to minutes.

On customer trust A status tracker requires no behaviour change from customers — just visibility into a process that already exists. Low cost to build. Significant impact on trust.

On staff retention Jobs built entirely around repetitive entry burn people out. Redesigning around judgment and oversight makes the role more sustainable and more worth staying for.

All figures are estimates from six months of operational observation. Real impact depends on implementation and adoption.

Reflection

What surprised me most was not how outdated the systems were. It was how well my colleagues had learned to live with them.

Everyone had their own system. A screenshot routine. A personal checklist. An informal habit that compensated for what the technology failed to provide. From the outside, the operation looked smooth. From the inside, I could see that smoothness was held together by individual effort and knowledge that lived inside people — not inside systems.

That is a fragile kind of reliability. It works until someone leaves. Until someone gets sick. Until someone has a hard day and forgets to take a screenshot before closing.

This experience changed how I think about UX research. The most important operational problems are often invisible because teams adapt to friction until it becomes normalised. Eventually, friction stops being recognised as friction — it simply becomes "how work is done."

Finding those problems means being inside the work. Not studying it from a distance.

Trust is not built only through successful outcomes. It is built through clarity, resilience, and confidence in systems people cannot see.

Reliable systems should not rely on extraordinary employee vigilance. Good design makes reliability systematic — so people can focus on judgment, not compensation for broken systems.

UX Research Portfolio · Operational Ethnography · Service Design · Enterprise UX · FinTech

Note: This case study is based on anonymised operational observation. All proprietary system names, workflows, and identifiable institutional details have been generalised or modified to protect confidentiality while preserving research insight.