Requirements: The art of lowering expectations

Gathering requirements is less about discovering what people need and more about negotiating what they’ll settle for. It’s a delicate social dance—equal parts group therapy and hostage negotiation. Stakeholders dream of an AI system that’s ethical, transparent, unbiased, fast, accurate, and cheap. You gently explain that they may pick two.

At best, requirements documents become a graveyard of forgotten ambitions—“must handle edge cases” (it won’t), “must be explainable” (see below), “must avoid bias” (see way below). At worst, they’re compiled into a 73-page PDF no one reads until six months after deployment, when the system is denying benefits to orphans and no one can remember why.

Success lies not in fulfilling grand visions, but in managing the fallout when you don’t. Lower expectations early, and often.

Ethics & Bias: The minefield no one maps

“We don’t discriminate!” your team insists, while your model quietly redlines half of the city. Bias isn’t just lurking in the dataset—it is the dataset. It arrives pre-installed, like bloatware on a new laptop or raisins in a fruitcake no one actually enjoys. You didn’t ask for it, but here it is.

The usual remedy? Drop the obviously problematic columns like “Race” or “Gender” and call it a day. Never mind that “Postcode” and “Forename” are carrying all that same information like mules with a grudge. If your system flags “Jamal from East Ham” for a security review while fast-tracking “Sebastian from Surrey,” something is clearly off—and it isn’t just the optics.
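
If you want to see how much mileage those "innocent" columns still have, a back-of-the-envelope leakage check will do it. The sketch below is purely illustrative (the file name, column names, and model are all assumptions, not anyone's real pipeline): train a small classifier to predict the dropped attribute from the columns you kept, and see how well it does.

```python
# Minimal proxy-leakage sketch: can the "removed" protected attribute be
# predicted from the columns that stayed? (All names are hypothetical.)
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("applications.csv")                    # hypothetical file

protected = df["ethnicity"]                             # the column you proudly dropped
proxies = pd.get_dummies(df[["postcode", "forename"]])  # the columns you kept

# If this score is well above chance, the protected attribute is still
# within the model's reach: dropping the column changed nothing.
leakage = cross_val_score(
    LogisticRegression(max_iter=1000), proxies, protected, cv=5
).mean()
print(f"Protected attribute recoverable with accuracy ~ {leakage:.2f}")
```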

For maximum points, announce a “fairness audit” you conducted the night before the board meeting by searching “ethical AI best practices” and skimming the first Medium post you found.
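
For contrast, here is roughly what even a one-evening audit could actually compute: selection rates per group, and the gap between the best- and worst-treated groups. Again, a minimal sketch with made-up file and column names, not a substitute for a proper fairness review.

```python
# A one-evening fairness check (hypothetical column names): compare
# approval rates across groups and measure the demographic-parity gap.
import pandas as pd

df = pd.read_csv("decisions.csv")          # hypothetical decision log

rates = df.groupby("postcode_area")["approved"].mean()
print(rates.sort_values())

# Demographic-parity gap: spread between best- and worst-treated groups.
print("Parity gap:", rates.max() - rates.min())
```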

The uncomfortable truth? Bias isn’t a bug. It’s a mirror. And most organisations don’t like what they see.

Explainability: The corporate fig leaf

“Explainable AI” is the fig leaf you slap on the algorithm before it struts into a regulatory meeting. It suggests respectability, while conveniently hiding the fact that no one—no one—actually knows what’s going on under the hood.

You tell users, “The model denied your application because Feature 147 was 0.3 above baseline.” They stare blankly, nod slowly, and silently begin looking for lawyers. Yes, SHAP values and LIME plots are clever. But they don’t help when your end user just wants to know why their mortgage was denied, not why their “cluster proximity to centroid 5” triggered a cascading sigmoid reaction.
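
To be fair to the tooling, SHAP's API will happily produce those per-feature contributions. A rough sketch, assuming a hypothetical tabular dataset and model, looks like this; the output is precise, defensible, and of no comfort whatsoever to the person whose mortgage it explains.

```python
# SHAP sketch on a hypothetical dataset and model: one contribution per
# feature, per prediction -- honest, but rarely comforting.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

X = pd.read_csv("applications.csv")        # hypothetical numeric features
y = X.pop("approved")

model = GradientBoostingClassifier().fit(X, y)

explainer = shap.Explainer(model, X)
explanation = explainer(X)

# "Feature 147 was 0.3 above baseline", rendered as a waterfall plot
# for the first applicant in the dataset.
shap.plots.waterfall(explanation[0])
```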

Executives pretend to understand it. Engineers pretend to care. Regulators pretend it’s working. It’s all very polite, very corporate, and entirely beside the point.

If your “explanation” requires a PhD and three hours of whiteboarding, you might as well admit: “The algorithm works in mysterious ways.” It’s not explainability. It’s performance art.

Regulations: The compliance theatre

Enter stage left: GDPR, HIPAA, the AI Act, and a chorus of acronyms designed to ruin your quarter. Their purpose? To ensure that if you are going to ruin someone’s life with an algorithm, you at least do it with documentation.

You’ll debate whether log files count as personal data (they do), whether storing them in the cloud is legal (depends), and whether anyone will audit you (not until a newspaper does). Compliance isn’t about being ethical—it’s about being auditable. Like writing a diary not for posterity, but for your defence lawyer.

The golden rule? Document everything. Not because it helps build better systems, but because when the regulator turns up with questions, you’ll want a paper trail thick enough to absorb your tears.
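
In practice, the paper trail is less glamorous than it sounds: an append-only log of every automated decision, with enough context to reconstruct it later. The sketch below shows one possible shape for such a record (every field name is an assumption, not a standard), hashing the raw inputs so the log itself does not become yet another pile of personal data.

```python
# One possible audit record per automated decision (field names illustrative).
import datetime
import hashlib
import json

def log_decision(applicant_id: str, features: dict, score: float,
                 outcome: str, model_version: str,
                 path: str = "decision_log.jsonl") -> None:
    """Append one audit record per automated decision to a JSON Lines file."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "applicant_id": applicant_id,
        # Hash the inputs so the record is verifiable without duplicating
        # personal data in yet another file.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "score": score,
        "outcome": outcome,
        "model_version": model_version,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# The record your future defence lawyer will thank you for.
log_decision("A-1042", {"income": 41000, "postcode": "E6"}, 0.31,
             "denied", "credit-model-2.3.1")
```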

In the world of AI governance, it’s not about doing the right thing. It’s about proving, on paper, that you meant to.

