WEBINAR or ON-SITE
This class covers important fundamental topics: data collection, cleaning, encoding, preprocessing, statistics, visualizations, adaptive learning, inductive principles, generalization, overfitting, regularization, complexity control, cross-validation, and error measures.
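As a taste of how regularization and cross-validation work together, here is a minimal sketch using scikit-learn (one of the libraries named below); the synthetic dataset and the choice of ridge regression are illustrative, not part of the course material.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Synthetic regression data: 5 features, known coefficients, small noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=100)

# Ridge regression adds an L2 penalty (regularization / complexity control);
# 5-fold cross-validation estimates how well the model generalizes.
model = Ridge(alpha=1.0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.mean())
```

The mean cross-validated R² score measures generalization rather than fit to the training data, which is the distinction the overfitting topic is about.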
Correlations are useful because they can indicate a predictive relationship that can be exploited in practice when looking for cause-effect relationships, e.g., the causes of excessive variations in process and system outputs. This class covers dependency, correlation, causality, independence, variance, covariance, the correlation coefficient, the coefficient of determination, cross-correlation, the correlation function, and the autocorrelation function.
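Two of these quantities, the Pearson correlation coefficient and the coefficient of determination, can be computed in a few lines of NumPy; the data below are an illustrative sketch, not course data.

```python
import numpy as np

# Two variables with a known linear dependency plus noise.
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.5, size=200)

r = np.corrcoef(x, y)[0, 1]  # Pearson correlation coefficient
r2 = r ** 2                  # coefficient of determination
print(r, r2)
```

A strong correlation like this one suggests a predictive relationship, but as the class emphasizes, correlation alone does not establish causality.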
A neural network is a series of algorithms that attempt to learn the underlying relationships in a data set. Neural networks can adapt to changing input, so the network generates the best possible result. This class covers basic concepts of neural networks, feedforward neural networks (with backpropagation training), radial basis function (RBF) neural networks, self-organizing maps (SOM), random decision forests, deep learning, convolutional neural networks (CNN), and reinforcement learning.
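A feedforward network trained by backpropagation can be sketched with scikit-learn's `MLPClassifier`; the toy two-class problem here is an illustrative assumption.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy 2-class problem: points labeled by which side of a line they fall on.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# One hidden layer of 8 units; fit() trains the weights by backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```

The same model class scales up to the deep architectures covered later in the course, where frameworks such as Keras and TensorFlow take over from scikit-learn.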
PCA (principal component analysis) is a statistical procedure that uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables called principal components. PCA is used for dimensionality reduction and for finding patterns in high-dimensional data. ICA (independent component analysis) is a computational method for separating a multivariate signal into additive subcomponents.
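The dimensionality-reduction use of PCA can be sketched with scikit-learn; the synthetic 3-D data below, constructed to lie near a 2-D plane, are an illustrative assumption.

```python
import numpy as np
from sklearn.decomposition import PCA

# Correlated 3-D observations that actually live near a 2-D plane.
rng = np.random.default_rng(3)
latent = rng.normal(size=(100, 2))
X = latent @ rng.normal(size=(2, 3)) + 0.01 * rng.normal(size=(100, 3))

# Project onto the two principal components (an orthogonal transformation).
pca = PCA(n_components=2)
Z = pca.fit_transform(X)
print(Z.shape, pca.explained_variance_ratio_.sum())
```

Because the data are nearly planar, the first two components capture almost all of the variance, which is exactly the pattern-finding and dimensionality-reduction behavior described above.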
Often it is useful to transform data from the time domain to the frequency domain in order to gain additional knowledge about the process. This class covers the FFT (fast Fourier transform), Parseval's theorem, the STFT (short-time Fourier transform), and continuous & discrete wavelets.
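A minimal sketch of this time-to-frequency transformation with NumPy's FFT, including a numerical check of Parseval's theorem; the two-tone test signal and sampling rate are illustrative assumptions.

```python
import numpy as np

# Test signal: 50 Hz and 120 Hz tones sampled at 1000 Hz for 1 second.
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

# FFT reveals the dominant frequency, hidden in the time-domain samples.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
peak = freqs[np.argmax(np.abs(X))]
print(peak)  # → 50.0

# Parseval's theorem: total energy is the same in both domains.
energy_time = np.sum(x ** 2)
energy_freq = np.sum(np.abs(np.fft.fft(x)) ** 2) / len(x)
print(np.isclose(energy_time, energy_freq))  # → True
```

The STFT and wavelet transforms covered in the class extend this idea to signals whose frequency content changes over time.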
These classes are taught in Jupyter notebooks using Python (with AWS SageMaker, Keras, TensorFlow, scikit-learn, and many other libraries).