Building Robust ML Models Using Federated Learning: The Future of AI Deployment


What do deep neural networks learn?

Are they cramming, or are they actually learning?

How can we avoid cramming and move towards learning?

How can we measure whether a model is learning?

Is it sufficient to learn once?

What about the kinds of data the model has not seen during the training phase?

Is it possible to bring all the varieties of training data into one place, just as we can bring all the pictures of dog breeds to one place?

If a false diagnosis is found at one location, shouldn’t the global model learn from it?

What is federated learning?

How is federated learning done in practice?

What about the privacy of data?

What about ownership of the final model?

Existing frameworks (in beta versions): TensorFlow Federated, PySyft (PyTorch)

Live example with one of the frameworks.
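Since both frameworks are still in beta, the core idea behind the live example can be sketched without either of them. Below is a minimal, framework-free simulation of federated averaging (FedAvg) in NumPy; the four "hospital" datasets, the linear model, and all hyperparameters are invented for illustration, and a real deployment would use TensorFlow Federated or PySyft instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four simulated "hospitals", each holding a private slice of data drawn
# from the same underlying relationship y = 3x + 1 + noise.
def make_client_data(n=50):
    x = rng.uniform(-1.0, 1.0, size=(n, 1))
    y = 3.0 * x + 1.0 + 0.1 * rng.normal(size=(n, 1))
    return x, y

clients = [make_client_data() for _ in range(4)]

def local_update(w, b, x, y, lr=0.5, epochs=5):
    """A client refines the global model on its own data; the raw data never leaves it."""
    for _ in range(epochs):
        err = x @ w + b - y                  # prediction error on local data
        w = w - lr * (x.T @ err) / len(x)    # gradient step on the weight
        b = b - lr * err.mean()              # gradient step on the bias
    return w, b

# Federated averaging: the server broadcasts the global weights, each client
# trains locally, and the server averages the returned weights.
w_global, b_global = np.zeros((1, 1)), 0.0
for _ in range(20):
    updates = [local_update(w_global, b_global, x, y) for x, y in clients]
    w_global = np.mean([w for w, _ in updates], axis=0)
    b_global = float(np.mean([b for _, b in updates]))

print(round(w_global.item(), 2), round(b_global, 2))  # close to the true 3 and 1
```

Note that only model weights cross the network; each client's examples stay local, which is the privacy property the later questions probe.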


Understanding how the ML model was built: where did the training data come from? What were the test metrics?

Understanding a model’s overfitting (or lack of generalisability)
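One concrete way to see the cramming-versus-learning distinction is to compare training error against error on held-out data. The toy example below (an invented setup: polynomials fitted to a small noisy sine-curve sample) shows a high-capacity model driving its training error to near zero while its test error stays high; that gap is overfitting.

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy samples from an underlying function y = sin(pi * x).
def sample(n):
    x = rng.uniform(-1.0, 1.0, n)
    y = np.sin(np.pi * x) + 0.1 * rng.normal(size=n)
    return x, y

x_train, y_train = sample(12)   # small training set: easy to "cram"
x_test, y_test = sample(200)    # held-out data from the same distribution

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (3, 11):
    train_mse, test_mse = fit_and_score(degree)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

The degree-11 polynomial has enough parameters to pass through all twelve training points (cramming), yet its held-out error does not improve accordingly; tracking both numbers is the basic measurement of generalisability.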

Active Learning in Medicine

Federated Learning

Sample code / tutorial