PyData London 2022

Machine Learning 2.0 with Hugging Face
06-18, 13:30–14:15 (Europe/London), Tower Suite 2

In this session, we’ll introduce you to Transformer models and what business problems you can solve with them. Then, we’ll show you how you can simplify and accelerate your machine learning projects end-to-end: experimenting, training, optimizing, and deploying. Along the way, we’ll run some demos to keep things concrete and exciting!


As amazing as state-of-the-art machine learning models are, training, optimizing, and deploying them remains a challenging endeavor that requires significant time, resources, and skill, all the more so when multiple languages are involved. Unfortunately, this complexity prevents most organizations from using these models effectively, if at all. Instead, wouldn’t it be great if we could just start from pre-trained versions and put them to work immediately?

This is the exact challenge that Hugging Face is tackling. Our tools make it easy to add state-of-the-art Transformer models to your applications. Thanks to popular open-source libraries (transformers, tokenizers, and datasets), developers can easily work with over 4,000 datasets and over 38,000 pre-trained models in 160+ languages. In fact, with over 60,000 stars on GitHub, the transformers library has become the de-facto tool for developers and data scientists who need state-of-the-art models for natural language processing, computer vision, and speech.
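As a quick taste of what the talk covers, here is a minimal sketch of using the transformers library's `pipeline` API to run a pre-trained model; the task name and input sentence are illustrative, and the first call downloads the default model for that task from the Hugging Face Hub.

```python
from transformers import pipeline

# Create a sentiment-analysis pipeline; this downloads a default
# pre-trained model from the Hugging Face Hub on first use.
classifier = pipeline("sentiment-analysis")

# Run inference on a single sentence (illustrative input).
result = classifier("Transformers make state-of-the-art NLP easy to use.")
print(result)  # a list with one dict containing 'label' and 'score'
```

The same one-liner pattern works for other tasks (e.g. `"translation"`, `"image-classification"`, `"automatic-speech-recognition"`), which is part of what makes the library broadly useful across modalities.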


Prior Knowledge Expected

No previous knowledge expected

Julien is currently Chief Evangelist at Hugging Face. He recently spent 6 years at Amazon Web Services, where he was the Global Technical Evangelist for AI & Machine Learning. Prior to joining AWS, Julien served for 10 years as CTO/VP of Engineering at large-scale startups.