Renzym Education

We intend to upload videos covering various courses as well as tutorials on applying theory in practice. Most of these videos are the collaborative work of Abasyn University Islamabad Campus and RENZYM.

I have tried to sum up the complete Machine Learning course in 5 hours in this video. In addition to the topics covered in Andrew Ng's famous original Coursera course (Linear and Logistic Regression, Regularization, Neural Networks, SVMs, Clustering, PCA, Recommender Systems, Decision Trees), we also covered CNNs (Convolutional Neural Networks), which are part of the Deep Learning Specialization course.

This is especially useful for lazy students who want to cover the course on the last day before the exams 🙂 (I don't intend to promote such behavior). But my main objective was that a 30-lecture course is too big a commitment for most people. Now they can go through a shorter course instead of the complete one, and fall back on specific videos if some topic is unclear; the link to the course playlist is in the video description, along with course material such as slides and assignments. Needless to say, without doing the assignments we can't retain concepts for long. The video is also useful for people preparing for interviews for entry-level Machine Learning jobs.

This video, as well as the course videos, are all time-tagged, i.e. you can jump to the relevant section of a video instead of watching the whole thing by clicking the time tag of that topic in the video description.

#machinelearning #neuralnetworks
We discussed the Face Recognition and Face Verification problems in this video: how we can create a Siamese network and either use the triplet loss function or treat it as a binary classification problem. The ideas are from the famous DeepFace and FaceNet architectures. We followed it up with a Neural Style Transfer example, which can be used to create machine-generated art. Along the way, we discussed how we can visualize what features the deep layers are learning.
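As a rough sketch (not the lecture's actual code), the triplet loss on embedding vectors can be written in a few lines. The function name and margin value here are illustrative, not from the post:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, alpha=0.2):
    """Triplet loss: pull the anchor embedding closer to the positive
    (same identity) than to the negative (different identity), by at
    least a margin alpha. Loss is zero once the margin is satisfied."""
    pos_dist = np.sum((anchor - positive) ** 2)   # squared distance to same person
    neg_dist = np.sum((anchor - negative) ** 2)   # squared distance to different person
    return max(pos_dist - neg_dist + alpha, 0.0)

# Toy 2-D "embeddings": anchor is near the positive, far from the negative,
# so the margin is already satisfied and the loss is zero.
anchor   = np.array([0.0, 0.0])
positive = np.array([0.0, 0.1])
negative = np.array([1.0, 1.0])
loss = triplet_loss(anchor, positive, negative)
```

In FaceNet-style training, the network weights are updated so that this loss goes to zero across many such (anchor, positive, negative) triplets.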
In this week's machine learning lecture we covered the main ideas in the YOLO algorithm for object detection. We started with landmark detection, then discussed the convolutional implementation of sliding windows, which is much faster and computationally less complex. Then we discussed Intersection over Union (IoU) and used it for Non-Max Suppression (removing duplicate detections). Finally, we discussed anchor boxes for detecting objects of different aspect ratios, and put it all together as the YOLO algorithm.
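A minimal sketch of IoU and Non-Max Suppression as described above (box format and threshold are assumptions, not from the lecture):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, xb - xa) * max(0, yb - ya)          # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop any box overlapping it too much,
    then repeat on the remainder. Returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [j for j in order if iou(boxes[best], boxes[j]) < iou_threshold]
    return keep

# Two near-duplicate detections of one object, plus a separate object:
boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = non_max_suppression(boxes, scores)  # the duplicate (index 1) is suppressed
```

Real YOLO implementations apply this per class, but the suppression logic is the same.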
In this week's lecture we covered 1×1 convolutions. The name confused me when I first heard the term, as it sounded like simply a scaling, but actually it is not. We discussed how 1×1 convolutions are used to construct Inception modules, which are in turn used in the GoogLeNet network. This was followed by practical advice on using transfer learning and data augmentation to get going instead of starting from scratch.
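To see why a 1×1 convolution is more than a scaling: at every spatial position it takes a linear combination *across channels*, which is what lets Inception modules shrink the channel dimension cheaply. A small illustrative sketch (function name is mine, not from the lecture):

```python
import numpy as np

def conv_1x1(x, w):
    """1x1 convolution as a pixel-wise matrix multiply.

    x: feature map of shape (H, W, C_in)
    w: filter weights of shape (C_in, C_out)
    Each output pixel is a linear combination of that pixel's input
    channels -- only a scalar scaling when C_in == C_out == 1."""
    H, W, C_in = x.shape
    return (x.reshape(-1, C_in) @ w).reshape(H, W, -1)

# Reduce a 4x4 feature map from 8 channels down to 2 channels:
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 8))
w = rng.standard_normal((8, 2))
y = conv_1x1(x, w)   # shape (4, 4, 2)
```

This channel reduction before the expensive 3×3 and 5×5 convolutions is exactly how Inception keeps GoogLeNet's computation manageable.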
This week in machine learning, we started covering Convolutional Neural Networks. We covered the convolution operation with edge detection as an example, then discussed padding, strided convolution and convolution over volumes. We also covered the architecture of some famous CNNs, such as LeNet-5, AlexNet, ResNets and VGG-16.
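The edge-detection example can be sketched in plain NumPy: convolving an image with a vertical-edge filter produces large responses where pixel intensity changes from left to right. A minimal, unoptimized sketch (no padding, stride 1):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most CNN texts):
    slide the kernel over the image and take elementwise products."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Classic vertical-edge filter: bright-left / dark-right gives a positive response.
vertical_edge = np.array([[1, 0, -1],
                          [1, 0, -1],
                          [1, 0, -1]])

# 6x6 image: left half bright (10), right half dark (0).
image = np.array([[10, 10, 10, 0, 0, 0]] * 6, dtype=float)
edges = conv2d(image, vertical_edge)
# Each row of the 4x4 output is [0, 30, 30, 0]: the filter fires at the edge.
```

In a CNN, the kernel values are not hand-designed like this; they are learned, which is the whole point of the convolutional layer.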
This week in machine learning, we covered Decision Trees, Random Forests and XGBoost. In the first lecture we introduced Decision Trees: the learning model, using entropy to split the data, and then extending it to handle categorical (instead of binary) and continuous features. In the 2nd lecture, we discussed tree ensembles, i.e. bagged decision trees, Random Forests and XGBoost, along with some advice on when to use each.
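The entropy-based splitting criterion mentioned above can be sketched as follows: a split is good if it reduces the entropy of the label distribution. Function names here are illustrative, not from the lecture:

```python
import numpy as np

def entropy(p):
    """Entropy (in bits) of a binary label set with fraction p positive.
    Maximal (1.0) at p = 0.5, zero for a pure node."""
    if p == 0 or p == 1:
        return 0.0
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def information_gain(parent, left, right):
    """Entropy reduction from splitting `parent` labels into `left` and
    `right`; the weights account for uneven split sizes."""
    frac_pos = lambda labels: sum(labels) / len(labels)
    w_left = len(left) / len(parent)
    w_right = len(right) / len(parent)
    return entropy(frac_pos(parent)) - (
        w_left * entropy(frac_pos(left)) + w_right * entropy(frac_pos(right)))

# A perfect split of a 50/50 node yields the maximum gain of 1 bit:
gain = information_gain([1, 1, 1, 0, 0, 0], [1, 1, 1], [0, 0, 0])
```

A decision tree learner simply evaluates this gain for every candidate feature (and threshold, for continuous features) and picks the split with the largest value.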