# Transition to operations: Where code goes to die
This phase exists to humiliate you. It’s the corporate equivalent of sending your child to boarding school, only to learn they’ve been expelled for “unforeseen dependencies.”
## The packaging farce
Serialisation sounds simple—until your model’s custom layers turn into pickled gibberish. You’ll Dockerise it, because “containerisation solves everything,” except the 47 ways your model can fail to load at runtime. The QA team, who’ve never seen a confusion matrix, will reject it for reasons like “the API response is 200ms too slow” (never mind that the business logic it’s replacing takes 3 days).
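The “pickled gibberish” failure mode is easy to reproduce. A minimal sketch, using a toy class rather than any real framework: any model object that holds a lambda or locally defined function as a “custom layer” will refuse to serialise, because `pickle` stores functions by reference, not by value.

```python
import pickle


class Model:
    """Toy model whose 'custom layer' is a lambda -- a classic serialisation trap."""

    def __init__(self):
        # Stand-in for a custom layer/activation defined inline
        self.activation = lambda x: max(0.0, x)


try:
    blob = pickle.dumps(Model())
except Exception as exc:
    # Lambdas and local functions can't be pickled by reference,
    # so the whole object graph fails to serialise
    print(f"serialisation failed: {type(exc).__name__}")
```

The usual fix is to define such functions at module level (or implement the layer as a proper class) so `pickle` can find them by import path at load time.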
## CI/CD: The theatre of pain
The pipeline is a Rube Goldberg machine of unit tests, linting, and security scans—all of which pass right up until the deploy fails catastrophically. The logs blame “an upstream dependency,” which could mean anything from a Python version mismatch to a solar flare. Meanwhile, the DevOps team mutters about “immutable infrastructure,” as if anything in ML were immutable except the technical debt.
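One of those “upstream dependency” failures—the Python version mismatch—can at least be caught in the pipeline instead of at deploy. A hedged sketch, assuming a hypothetical manifest your training job writes alongside the model artifact (the manifest name and keys are illustrative, not any standard):

```python
import sys

# Hypothetical manifest recorded at training time; keys are illustrative only
TRAINING_MANIFEST = {"python": "3.11.4"}


def check_interpreter(manifest):
    """Fail the CI job loudly on a Python version mismatch,
    rather than letting the deploy step discover it."""
    trained = tuple(int(p) for p in manifest["python"].split(".")[:2])
    runtime = sys.version_info[:2]
    if runtime != trained:
        raise RuntimeError(
            f"model trained on Python {trained[0]}.{trained[1]}, "
            f"runtime is {runtime[0]}.{runtime[1]}"
        )
```

Run this as the first step of the pipeline; it won’t stop the solar flares, but it shortens the list of suspects.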
## Deployment: The user’s problem now
At last, your model is “served.” Maybe it’s a REST API (latency: “yes”). Maybe it’s batch inference (throughput: “theoretical”). Either way, you’ll celebrate—until the first user hits it and gets back `{"error": "model_warmup_failed"}`. This is when you learn that “production-ready” is a lie told by people who’ve never met production.
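The `model_warmup_failed` moment is avoidable if the server refuses traffic until one dummy inference has actually succeeded. A minimal sketch of that gate—`ModelServer` and `loader` are hypothetical names standing in for your framework’s deserialisation and inference code:

```python
class ModelServer:
    """Minimal warmup gate: don't claim 'ready' until a dummy inference works."""

    def __init__(self, loader):
        self._loader = loader  # callable that deserialises and returns the model
        self._model = None

    def warm_up(self):
        try:
            model = self._loader()
            model(0.0)  # dummy inference: surfaces load-time failures up front
            self._model = model
        except Exception:
            self._model = None  # stay unready; let the health check fail, not the user

    def predict(self, x):
        if self._model is None:
            # the error the first user would otherwise discover for you
            return {"error": "model_warmup_failed"}
        return {"prediction": self._model(x)}
```

Wire `warm_up` into the readiness probe rather than the request path, and the orchestrator—not the first user—gets to find out the model never loaded.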