06-19, 13:30–14:15 (Europe/London), Tower Suite 3
New data scientists often struggle to make a major impact on business problems despite impressive technical skills. A core challenge is the gap between how academics think about model performance and what matters for a company. For example, academic work summarizes a model’s receiver operating characteristic (ROC) curve with the area under the curve (AUC). That summary statistic is of little use on its own for business applications, which always come with unique trade-offs and constraints. Optimizing model performance effectively requires understanding the specific business requirements and mapping them to a well-framed data science problem.
In this talk, I will walk through a framework for thinking about model trade-offs in terms of maximizing business utility. Along the way, we will build intuition for what it takes for a model in production to be a success and how to collaborate more effectively with non-technical co-workers.
As demand for data scientists and machine learning engineers has skyrocketed, there has been an explosion of programs that excel at building raw technical skills. These programs work through cutting-edge research and models, which helps develop top-notch technical ability. But all too often there is a gap in knowing how to apply those skills in real-world business settings.
Academic machine learning research focuses on metrics such as F1 score, accuracy, and area under the ROC curve (AUC). These are good general-purpose metrics for comparing modeling techniques on benchmark problems, such as tracking how ImageNet classification accuracy has improved over time. A practitioner at a company, however, faces unique trade-offs and constraints, which call for a different way of thinking about model comparisons.
A useful framework is to quantify the financial benefits and costs of each entry in the confusion matrix. Using this approach, model thresholds can be set to maximize business outcomes (a minimal sketch of the idea follows the list below). The exercise of working through this simple model helps build intuition around
- How to map trade-offs between false positives and false negatives into optimization problems
- Why business expenses like customer acquisition costs don’t impact threshold decisions
- How to anticipate required acceptance rates before doing any modeling
- When a human-in-the-loop can be valuable
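To give a rough sense of how this plays out, here is a minimal sketch in Python, not the exact framework from the talk: it assigns an assumed monetary value to each cell of the confusion matrix, sweeps decision thresholds over synthetic scores, and picks the threshold that maximizes total value. The loan-approval framing, all figures, and all names (`business_value`, `VALUE_TP`, etc.) are illustrative assumptions.

```python
import numpy as np

# Illustrative (made-up) per-decision economics for a loan-approval model.
# Each entry of the confusion matrix gets an assumed monetary value.
VALUE_TP = 100.0   # approve a good applicant: expected profit
COST_FP = -500.0   # approve a bad applicant: expected loss
COST_FN = 0.0      # reject a good applicant: forgone profit, treated as 0 here
VALUE_TN = 0.0     # reject a bad applicant: no money changes hands

def business_value(y_true, y_prob, threshold):
    """Total value of acting on predictions at a given threshold."""
    approve = y_prob >= threshold
    tp = np.sum(approve & (y_true == 1))
    fp = np.sum(approve & (y_true == 0))
    fn = np.sum(~approve & (y_true == 1))
    tn = np.sum(~approve & (y_true == 0))
    return tp * VALUE_TP + fp * COST_FP + fn * COST_FN + tn * VALUE_TN

# Synthetic scores standing in for a real model's predicted probabilities.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=10_000)
y_prob = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, size=10_000), 0, 1)

# Sweep thresholds and keep the one that maximizes total business value.
thresholds = np.linspace(0, 1, 101)
values = [business_value(y_true, y_prob, t) for t in thresholds]
best = thresholds[int(np.argmax(values))]
print(f"best threshold ≈ {best:.2f}, expected value ≈ {max(values):,.0f}")
print(f"implied acceptance rate ≈ {np.mean(y_prob >= best):.1%}")
```

Note that a fixed expense such as a customer acquisition cost would add the same constant to the total at every threshold, so it drops out of the argmax; that is one way to see the second bullet above. The implied acceptance rate in the last line connects to the third bullet: the economics alone can tell you roughly what acceptance rate the business needs before any model is built.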
The target audience for this talk is people who want to transition, or have recently transitioned, into data science work at a company, though the ideas can also be helpful for experienced data scientists.
No previous knowledge expected
Dillon is a data scientist with a passion for working on hard, messy problems. He has worked for companies in energy, agtech, and fintech as both an individual contributor and a manager. Before starting work in data science, he completed his PhD in physics at MIT.