As AI becomes ubiquitous, more and more high-stakes decisions will be made automatically by machine learning models. AI can determine the very future of your business and can make life-or-death decisions for real people.
But as the world changes, an AI system often faces new examples it has never seen before. Without proper guardrails, these automated decisions can quickly turn into catastrophic failures and erode trust in AI. As the stakes get higher, it is critical that AI systems are built to be humble: just like humans, AI should know when it doesn’t know the right answer.
With Humble AI, models that aren’t confident in their predictions can respond accordingly, whether that means defaulting to a “safe” decision, alerting an administrator for human review, or not making a prediction at all.
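To make that pattern concrete, here is a minimal sketch in Python of a confidence-threshold guardrail. It assumes a scikit-learn-style classifier exposing predict_proba and classes_; the humble_predict and alert_admin names, the 0.8 threshold, and the "decline" safe default are illustrative assumptions, not DataRobot’s actual implementation.

```python
import numpy as np

def alert_admin(example, confidence):
    """Hypothetical hook: queue a low-confidence case for human review."""
    print(f"Flagged for review (confidence={confidence:.2f}): {example}")

def humble_predict(model, x, threshold=0.8, safe_default="decline"):
    """Act on the model's prediction only when it clears the confidence bar.

    Below the threshold, return a safe default decision and flag the
    case for a human instead of acting on an uncertain prediction.
    """
    proba = model.predict_proba([x])[0]   # per-class probabilities
    confidence = float(np.max(proba))     # top-class probability
    if confidence >= threshold:
        return model.classes_[int(np.argmax(proba))], confidence
    alert_admin(x, confidence)            # route to human review
    return safe_default, confidence
```

Where to set the threshold is itself a judgment call: a higher bar sends more cases to humans, a lower one lets more uncertain predictions through, so it should be tuned to the relative cost of each.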
Join this latest Data Science Central podcast to learn how to:
- Understand the limitations of your model and when you may need human intervention
- Create a comprehensive set of Humble AI triggers that protect against common failures caused by model overconfidence
- Monitor your model over time for new errors and sources of overconfidence (a rough sketch of one such check follows this list)
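As a rough illustration of the monitoring point above, the hypothetical sketch below watches the error rate of high-confidence predictions as ground-truth labels arrive; the class name and the window, threshold, and tolerance values are assumptions for the example, not a DataRobot API.

```python
from collections import deque

class OverconfidenceMonitor:
    """Track how often confident predictions turn out wrong: a rising
    high-confidence error rate means the model is overconfident on new data."""

    def __init__(self, threshold=0.8, window=500, max_error_rate=0.05):
        self.threshold = threshold             # what counts as "confident"
        self.outcomes = deque(maxlen=window)   # 1 = confident but wrong
        self.max_error_rate = max_error_rate   # tolerated rate before alerting

    def record(self, confidence, was_correct):
        if confidence < self.threshold:
            return None                        # only audit confident predictions
        self.outcomes.append(0 if was_correct else 1)
        rate = sum(self.outcomes) / len(self.outcomes)
        if len(self.outcomes) == self.outcomes.maxlen and rate > self.max_error_rate:
            return f"ALERT: high-confidence error rate {rate:.1%} over last {len(self.outcomes)} cases"
        return None
```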
Speaker:
Jett Oristaglio, Data Science Product Lead, Trusted AI - DataRobot
Hosted by:
Sean Welch, Host and Producer - Data Science Central
