Statistical models
Inductive learning
Inductive learning makes general rules from specific examples, like a particularly overzealous traffic warden who tickets one illegally parked car and concludes all vehicles must be banned. The algorithm spots patterns in the training data and assumes they apply universally, often with hilarious consequences when reality doesn’t cooperate.
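To make the idea concrete, here’s a minimal sketch of induction in action - assuming scikit-learn is available, with features and data invented purely for illustration. A learner induces a general rule from a handful of specific examples, then applies it to cases it has never seen:

```python
# A toy inductive learner: induce a general rule from specific examples.
# Features (invented): [parked_illegally, vehicle_weight_tonnes]
from sklearn.tree import DecisionTreeClassifier

X_train = [[1, 1.2], [1, 2.5], [0, 1.4], [0, 3.0]]  # specific observed cases
y_train = [1, 1, 0, 0]                               # 1 = ticketed, 0 = left alone

model = DecisionTreeClassifier().fit(X_train, y_train)

# The induced rule ("illegally parked => ticket") now gets applied universally,
# including to vehicles the warden has never seen before.
print(model.predict([[1, 40.0]]))  # -> [1]: a 40-tonne lorry, ticketed anyway
```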
Real-life
Credit scoring systems use this to decide if you’re trustworthy based on where you shop and what you watch on telly. They’ll approve someone who buys organic quinoa and watches BBC Parliament, while rejecting anyone who dares purchase value-brand biscuits and enjoys Love Island.
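A toy version of that kind of scorer might look like the following - a hypothetical sketch with made-up shopping features and applicants, not how any real credit bureau works:

```python
# Hypothetical scoring features (invented): [buys_organic_quinoa, watches_love_island]
from sklearn.linear_model import LogisticRegression

X = [[1, 0], [1, 0], [0, 1], [0, 1]]  # past applicants (made up)
y = [1, 1, 0, 0]                      # 1 = approved, 0 = rejected

scorer = LogisticRegression().fit(X, y)

# A new applicant who enjoys Love Island: low approval probability,
# purely because of who they resemble in the training data.
print(scorer.predict_proba([[0, 1]])[0, 1])
```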
Security &amp; privacy risks (moderate)
These models can inadvertently reveal sensitive correlations learned from the training data - like identifying political affiliations from shopping habits. They’re also vulnerable to adversarial examples - subtly perturbed inputs that completely fool the model.
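Here’s an illustrative sketch of the adversarial-example problem against a hand-rolled linear classifier (weights and inputs invented): a tiny, targeted nudge to the input flips the decision entirely:

```python
# Sketch of an adversarial example against a hand-rolled linear classifier.
import numpy as np

w = np.array([2.0, -1.0])   # invented model weights
b = -0.5
x = np.array([0.4, 0.5])    # a perfectly innocent input

def classify(v):
    return int(w @ v + b > 0)

print(classify(x))  # 0: score = 0.3 - 0.5 = -0.2

# FGSM-style nudge: move each feature a tiny step in the direction that
# most increases the score. The input barely changes; the decision flips.
eps = 0.1
x_adv = x + eps * np.sign(w)
print(classify(x_adv))  # 1: score = 0.6 - 0.5 = +0.1
```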
Deductive learning
Deductive learning follows strict logical rules like a particularly pedantic maths teacher, working from general rules down to specific conclusions - the reverse of induction. If the premises are true, the conclusions must be correct - which would be brilliant if real life wasn’t so messy. It’s the “well actually” of machine learning approaches, great for theoretical problems but hopeless when faced with actual human behaviour.
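Stripped to its bones, deduction is just rule application. Here’s a minimal forward-chaining reasoner as a sketch - rules and facts invented, with nothing learned from data at all:

```python
# A minimal deductive reasoner: apply general rules to specific facts.
rules = [
    ({"is_vehicle", "parked_illegally"}, "gets_ticket"),   # IF premises THEN conclusion
    ({"gets_ticket", "ignores_ticket"}, "gets_fine"),
]
facts = {"is_vehicle", "parked_illegally", "ignores_ticket"}

# Forward chaining: keep firing rules until nothing new can be concluded.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # conclusions are guaranteed, provided the premises are true
```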
Real-life
Tax calculation software uses deductive logic to determine what you owe. It follows rigid rules like “all work-related expenses are deductible” - until you try to claim your Greggs loyalty card as a business expense and the whole system short-circuits.
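A hypothetical sketch of that rigidity (categories and amounts invented): anything outside the rule set simply doesn’t count, no matter how sincerely you believe in it:

```python
# Hypothetical tax rules as rigid deductive logic (categories invented).
DEDUCTIBLE_CATEGORIES = {"travel", "equipment", "training"}

def deductible_total(expenses):
    """Sum only expenses whose category the rules recognise as work-related."""
    return sum(amount for category, amount in expenses
               if category in DEDUCTIBLE_CATEGORIES)

expenses = [("travel", 120.0), ("equipment", 300.0), ("greggs_loyalty", 9.99)]
print(deductible_total(expenses))  # 420.0 - the sausage roll claim is rejected
```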
Security &amp; privacy risks (low)
Since it’s rule-based and learns nothing from data, there’s little risk of training data leaking - there’s no training data to leak. However, the rules themselves might encode biases - like a loan approval system that systematically disadvantages certain postcodes.
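A sketch of how that bias gets baked in (postcodes and threshold invented) - the rule is perfectly deterministic and perfectly unfair:

```python
# How bias gets baked into a rule: the postcode list itself is the problem.
BLOCKED_PREFIXES = {"XX1", "XX2"}  # hypothetical "high-risk" postcodes

def approve_loan(applicant):
    # One hard-coded rule quietly disadvantages everyone in these areas,
    # regardless of their individual circumstances.
    if applicant["postcode"].split()[0] in BLOCKED_PREFIXES:
        return False
    return applicant["income"] > 20000

print(approve_loan({"postcode": "XX1 4AB", "income": 95000}))  # False
```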
Transductive learning
Transductive learning doesn’t bother with general rules - it reasons directly from the specific examples it has memorised to the specific cases it’s been asked about, and has to start from scratch when genuinely new data appears. It’s the machine learning equivalent of cheating on a test by writing answers on your hand, then squinting really hard when the questions change slightly.
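For the curious, here’s a minimal sketch using scikit-learn’s LabelPropagation, a genuinely transductive method (data invented): the points to be labelled are part of the fit itself, and no general model survives to be reused later:

```python
# Label propagation: the unlabelled points are part of the fit itself.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

X = np.array([[0.0], [0.1], [1.0], [1.1], [0.05], [1.05]])
y = np.array([0, 0, 1, 1, -1, -1])   # -1 marks the points we want labelled

model = LabelPropagation().fit(X, y)
print(model.transduction_)           # labels for exactly these six points

# A genuinely new point means re-running the whole procedure with it included.
```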
Real-life
Think of a contact-tracing app along the lines of the NHS COVID app: a transductive approach would memorise specific observed patterns of infection spread rather than trying to understand the underlying biology. This works until someone sneezes differently and the model panics.
Security &amp; privacy risks (high)
Because it effectively memorises training data, there’s significant risk of private information being extracted from the model - this is the territory of membership inference and model inversion attacks. A determined attacker could reconstruct parts of the original dataset just by querying the system enough times.
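A deliberately oversimplified sketch of why memorisation leaks (records invented): with a nearest-neighbour “model”, an exact training record sits at distance zero, which is about as loud a membership signal as you can get. Real membership-inference attacks work from model outputs and confidences, but the root cause is the same:

```python
# A memorising "model" plus a crude membership probe (records invented).
import numpy as np
from sklearn.neighbors import NearestNeighbors

private_X = np.array([[35, 52000], [41, 61000], [29, 38000]])  # training records
index = NearestNeighbors(n_neighbors=1).fit(private_X)

def membership_signal(record):
    # Distance zero to the memorised data is a giveaway that this exact
    # record was in the training set.
    dist, _ = index.kneighbors([record])
    return dist[0, 0]

print(membership_signal([35, 52000]))  # 0.0  -> almost certainly a member
print(membership_signal([36, 52500]))  # large -> probably not
```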