## Steps
1. Get your solution ready for production (plug into production data inputs, write unit tests, etc.); a unit-test sketch follows this list.
2. Write [monitoring](No%20monitoring.md) code to check your system’s live performance at regular intervals and trigger alerts when it drops (a monitoring sketch follows this list):
- Beware of slow degradation: models tend to “rot” as data evolves.
- Measuring performance may require a human pipeline (e.g., via a crowdsourcing service).
- Also monitor your inputs’ quality (e.g., a malfunctioning sensor sending random values, or another team’s output becoming stale). This is particularly important for online learning systems.
3. Retrain your models regularly on fresh data, automating the process as much as possible (a retraining sketch follows this list).
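
For step 1, part of "production-ready" is unit-testing the prediction pipeline itself. The sketch below is a minimal, hypothetical pytest example against a scikit-learn style pipeline; the model, feature count, and checks are placeholders, not code from the source.

```python
# A minimal unit-test sketch, assuming a scikit-learn style model object;
# the pipeline, feature count, and assertions are illustrative placeholders.
import numpy as np
import pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


@pytest.fixture(scope="module")
def model():
    # Stand-in for loading the trained production model (e.g., via joblib.load).
    X, y = make_classification(n_samples=200, n_features=4, random_state=0)
    return make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)


def test_prediction_shape_and_probabilities(model):
    # One prediction per input row, and class probabilities must sum to 1.
    X_batch = np.random.RandomState(0).randn(8, 4)
    proba = model.predict_proba(X_batch)
    assert proba.shape == (8, 2)
    assert np.allclose(proba.sum(axis=1), 1.0)


def test_handles_extreme_inputs(model):
    # The pipeline should not crash or return NaN on out-of-range values.
    X_extreme = np.full((2, 4), 1e6)
    preds = model.predict(X_extreme)
    assert not np.isnan(preds).any()
```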
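For step 2, a scheduled job can compute live metrics, raise an alert when they fall below a baseline, and sanity-check the input distribution (e.g., a stuck sensor). This is only a minimal sketch: the data-fetching, metric, thresholds, and logging-based alerting are assumptions, not a full monitoring stack.

```python
# A minimal monitoring sketch, assuming you can periodically pull recent
# predictions with (eventually) known true labels, plus recent raw inputs.
# The thresholds and alert channel (logging) are illustrative assumptions.
import logging

import numpy as np

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ml_monitoring")

ACCURACY_ALERT_THRESHOLD = 0.90   # tune to your system's baseline performance
INPUT_STD_MIN = 1e-6              # near-zero spread can mean a stuck sensor


def check_live_performance(y_true, y_pred):
    """Trigger an alert if live accuracy drops below the threshold."""
    accuracy = float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))
    if accuracy < ACCURACY_ALERT_THRESHOLD:
        logger.warning("ALERT: live accuracy dropped to %.3f", accuracy)
    return accuracy


def check_input_quality(X_recent):
    """Flag inputs whose distribution looks degenerate or contains missing values."""
    X_recent = np.asarray(X_recent, dtype=float)
    for i, std in enumerate(X_recent.std(axis=0)):
        if std < INPUT_STD_MIN:
            logger.warning("ALERT: feature %d has near-zero variance", i)
    if np.isnan(X_recent).any():
        logger.warning("ALERT: missing values detected in recent inputs")


if __name__ == "__main__":
    # Illustrative run on fabricated data; in production this would be scheduled
    # (cron, Airflow, etc.) against your real prediction and label stores.
    rng = np.random.default_rng(42)
    check_live_performance(y_true=rng.integers(0, 2, 100),
                           y_pred=rng.integers(0, 2, 100))
    check_input_quality(rng.normal(size=(100, 3)))
```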
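For step 3, the retraining job can train a candidate model on fresh data and promote it only if it matches or beats the currently deployed model on a held-out set. The sketch below is illustrative only; the fresh-data source, model family, `model.joblib` path, and promotion criterion are assumptions.

```python
# A minimal automated-retraining sketch. The data source, model, file path,
# and promotion rule are illustrative; a real pipeline would run on a
# scheduler and pull recently labeled data from your data store.
import joblib
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

MODEL_PATH = "model.joblib"  # hypothetical location of the deployed model


def load_fresh_data():
    # Placeholder for pulling recently labeled production data.
    return make_classification(n_samples=2000, n_features=10, random_state=1)


def retrain_and_maybe_promote():
    X, y = load_fresh_data()
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    candidate = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    candidate_score = accuracy_score(y_test, candidate.predict(X_test))

    try:
        current = joblib.load(MODEL_PATH)
        current_score = accuracy_score(y_test, current.predict(X_test))
    except FileNotFoundError:
        current_score = -np.inf  # no deployed model yet

    # Promote the candidate only if it does at least as well as the current model.
    if candidate_score >= current_score:
        joblib.dump(candidate, MODEL_PATH)
        print(f"Promoted new model (accuracy {candidate_score:.3f})")
    else:
        print(f"Kept current model (accuracy {current_score:.3f})")


if __name__ == "__main__":
    retrain_and_maybe_promote()
```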
## Pitfalls
- *Manual* monitoring.
- Surprising the IT department (deploying or changing the system without involving them early).
- Long or slow chains of approvals.
- Lack of (input and output) data trend monitoring.
- Lack of a uniform company framework for ML deployments.