Crack the Math Behind ML Model Optimization

May 06, 2026 (Wednesday)
Time: 08:00 AM PDT | 11:00 AM EDT
Duration: 90 Minutes
Id: 212026
Instructor: Mohammed Rizwan Roshan

Overview

This advanced-level webinar dives into the mathematical foundations behind Machine Learning model optimization. Participants will explore how models learn, how performance metrics are calculated, and how optimization algorithms such as Gradient Descent and Stochastic Gradient Descent enable models to minimize error.

The session combines theoretical derivations with practical implementation in Jupyter Notebook, where participants will code optimization algorithms from scratch, observe convergence behavior, and visually compare the performance of different optimization strategies. This webinar is designed for learners who want to move beyond using ML libraries and truly understand the mathematics that powers model training.
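The "from scratch" implementation described above can be sketched roughly as follows. This is a minimal illustration using a one-variable linear model and batch gradient descent on mean squared error; the dataset, function name, and hyperparameters are assumptions for the sketch, not the session's actual code.

```python
import numpy as np

# Toy dataset for illustration (hypothetical; not the session's dataset):
# y = 3x + 2 plus Gaussian noise.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=50)
y = 3.0 * X + 2.0 + rng.normal(0, 0.5, size=50)

def gradient_descent(X, y, lr=0.01, epochs=2000):
    """Fit y = w*x + b by minimizing MSE with batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(X)
    for _ in range(epochs):
        error = (w * X + b) - y                # residuals of current prediction
        grad_w = (2.0 / n) * np.dot(error, X)  # d(MSE)/dw
        grad_b = (2.0 / n) * error.sum()       # d(MSE)/db
        w -= lr * grad_w                       # step against the gradient
        b -= lr * grad_b
    return w, b

w, b = gradient_descent(X, y)
print(f"w = {w:.2f}, b = {b:.2f}")  # should land near w = 3, b = 2
```

Lowering the learning rate slows convergence; raising it too far makes the updates overshoot and diverge, which is the trade-off the session examines.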

Why You Should Attend

Many Machine Learning practitioners use pre-built libraries without understanding how models are actually optimized. Without a deep grasp of optimization techniques like Gradient Descent, model convergence, and training dynamics, you risk being limited to surface-level ML knowledge - unable to debug, tune, or build models beyond standard frameworks.

Areas Covered in the Session

  • How Machine Learning models work at a mathematical level
  • Understanding prediction functions and loss functions
  • How model accuracy and error are calculated
    • Classification metrics (accuracy, loss)
    • Regression metrics (MSE, RMSE)
  • Concept of optimization in ML
  • Gradient Descent:
    • Intuition behind gradients
    • Derivative of loss functions
    • Learning rate and its impact
  • Stochastic Gradient Descent (SGD):
    • Difference between GD and SGD
    • Batch vs mini-batch learning
  • Training dynamics:
    • Epochs
    • Iterations
    • Convergence criteria
    • Stopping conditions
  • Implementing Gradient Descent from scratch in Jupyter Notebook
  • Implementing Stochastic Gradient Descent
  • Visualizing:
    • Loss curves
    • Convergence speed
    • Parameter updates
  • Comparing performance and stability of GD vs SGD
  • Practical insights on tuning optimization algorithms
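The GD-versus-SGD comparison in the list above can be sketched as mini-batch SGD on the same linear-regression MSE objective, recording the loss once per epoch so a loss curve can be plotted. This is a hedged illustration: the `sgd` function, batch size, and learning rate are assumptions, not the session's actual code.

```python
import numpy as np

# Toy dataset (hypothetical): y = 3x + 2 plus noise.
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=200)
y = 3.0 * X + 2.0 + rng.normal(0, 0.5, size=200)

def mse(w, b):
    return np.mean((w * X + b - y) ** 2)

def sgd(lr=0.005, epochs=200, batch_size=16):
    """Mini-batch SGD on MSE; returns fitted params and per-epoch loss curve."""
    w, b = 0.0, 0.0
    losses = []
    idx = np.arange(len(X))
    for _ in range(epochs):
        rng.shuffle(idx)  # reshuffle data each epoch
        for start in range(0, len(X), batch_size):
            batch = idx[start:start + batch_size]
            err = (w * X[batch] + b) - y[batch]
            # Gradient estimated from the mini-batch only (noisy but cheap)
            w -= lr * (2.0 / len(batch)) * np.dot(err, X[batch])
            b -= lr * (2.0 / len(batch)) * err.sum()
        losses.append(mse(w, b))  # one point on the loss curve per epoch
    return w, b, losses

w, b, losses = sgd()
print(f"final loss: {losses[-1]:.3f}")
```

Plotting `losses` (e.g. with matplotlib) shows the noisier, faster-per-update trajectory typical of SGD, in contrast with the smooth monotone curve of full-batch gradient descent.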

Who Will Benefit

  • Advanced ML students
  • AI / Data Science postgraduate students
  • Machine Learning Engineers
  • Data Scientists
  • Research-oriented learners
  • Professionals preparing for technical ML interviews

Speaker Profile

Mohammed Rizwan Roshan is a Computer Science graduate with strong hands-on experience in software development, mobile application development, and Machine Learning. He has worked at Zoho Corporation, contributing to SaaS-based systems and gaining exposure to production-level software development. Beyond enterprise software, he has extensive experience building end-to-end applications, ranging from small-scale prototypes to fully deployed, user-facing production systems. This includes developing cross-platform mobile and web applications, several of which are actively used by organizations and users. He has also worked on multiple Machine Learning projects, applying Python-based ML techniques to real datasets. This practical ML experience is complemented by academic training, as he is currently pursuing a Master's degree in Artificial Intelligence, with exposure to core ML concepts, neural networks, NLP, and data-driven problem solving.

In addition, Rizwan Roshan has experience in Cybersecurity fundamentals and has presented technical papers on Google Firebase and Mobile Application Development at academic events. Having led development teams and participated in national-level competitions, he brings a balanced perspective that connects Computer Science fundamentals, Machine Learning concepts, real-world implementation, and career relevance - making complex AI topics accessible, practical, and industry-oriented.
Access Recorded Version

Unlimited viewing of the recorded version for 6 months (access information will be emailed 24 hours after the completion of the live webinar)