Differentiation
> [!def] Differentiation
> The process of finding a derivative is known as differentiation.
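
As a small worked example (not part of the original note), the derivative can be computed directly from its limit definition; here it is applied to $f(x) = x^2$:

$$
\begin{aligned}
f'(x) &= \lim_{h \to 0} \frac{f(x+h) - f(x)}{h} \\
      &= \lim_{h \to 0} \frac{(x+h)^2 - x^2}{h}
       = \lim_{h \to 0} \frac{2xh + h^2}{h}
       = \lim_{h \to 0} (2x + h) = 2x.
\end{aligned}
$$

Applying this operation to each input of a multivariable function gives the partial derivatives that make up a Gradient.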