Ivy – Aug 18, 2015
Updated – 26th of June 2019
In a previous blog, we covered the use of predictive modelling techniques to predict future outcomes. In this post we cover some of the common statistical models in predictive analytics. The techniques used differ across applications, but a core set of fundamental statistical techniques, mathematical algorithms and neural network systems underpins most predictive modelling. Statistical models use mathematical equations to encode information extracted from the data, and they play a key role in data exploration.
Model Building for Forecasting
Predictive models can be broadly classified into two categories: parametric and non-parametric. Parametric models make stronger, more specific assumptions about the characteristics of the population used in creating the model, while non-parametric models make fewer assumptions about the form of the underlying data.
Some examples of parametric Machine Learning algorithms include: Linear Regression, Logistic Regression and Linear Discriminant Analysis.
Examples of popular nonparametric Machine Learning algorithms are: k-Nearest Neighbors, Decision Trees and Support Vector Machines.
1. Logistic Regression:
Logistic regression models the relationship between a dependent variable and one or more independent (explanatory) variables, and tests how significant that relationship is. It estimates the probability (p) that event “1” occurs rather than event “0”. Once a good fit of the model is obtained, you can plug in the independent variable values for a new observation and predict whether the dependent value will be 0 or 1.
Examples in Predictive Analytics:
Banks – for building scorecards of customers applying for loans. The loan officer identifies characteristics that indicate the probability of loan default and uses these to build a scorecard of good and bad credit risks. Data on past, current and potential customers is used to fit a logistic regression model, which is then used to classify potential customers who have applied for a loan as good or bad credit risks. This uses binary logistic regression, as the dependent variable is dichotomous (loan default OR no default).
Educational institutions – an engineering college might estimate enrolments of fresh students to determine cut-off marks and freeze admissions. A multiple logistic regression model factors in Class 10, Class 12 and related AIJEE scores, distance from the college, demographic information including stream preferences, and historical data on student enrolments to calculate the probability of enrolment. The estimated model has to fit the data adequately to be significant. Calculations can also estimate how a single independent variable affects the likelihood of application.
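The loan scorecard idea above can be sketched in a few lines of Python. The data, feature choices (income and debt ratio) and learning-rate settings below are entirely hypothetical, and the model is fitted from scratch by gradient descent on the log-loss rather than with a statistics package:

```python
import numpy as np

# Hypothetical applicants: columns = [income (10k units), debt ratio];
# label 1 = defaulted on a past loan, 0 = repaid.
X = np.array([[2.0, 0.8], [8.0, 0.1], [3.0, 0.6], [9.0, 0.2],
              [2.5, 0.7], [7.5, 0.15], [1.5, 0.9], [8.5, 0.05]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit weights by simple gradient descent on the mean log-loss.
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1
for _ in range(5000):
    p = sigmoid(X @ w + b)            # predicted default probability
    grad_w = X.T @ (p - y) / len(y)   # gradient w.r.t. weights
    grad_b = np.mean(p - y)           # gradient w.r.t. intercept
    w -= lr * grad_w
    b -= lr * grad_b

# Score a new applicant: low income, high debt ratio -> likely a "bad" risk.
p_new = sigmoid(np.array([2.2, 0.75]) @ w + b)
print(round(p_new, 2))
```

Because the dependent variable is dichotomous (default / no default), the model's output is a probability that can be thresholded at, say, 0.5 to build the good/bad scorecard described above.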
2. Time Series:
The Time Series forecasting model makes predictions of future values based on previously observed (historical) values. The two main goals are identifying the phenomenon represented by the sequence of observations, and forecasting future values of the time series variable. The pattern in the observed time series data is identified, described and integrated with other data, and the identified pattern is then extrapolated to predict future events.
Time Series predictive models are used to make forecasts where the temporal dimension is critical to the analysis. Typical application scenarios are demand prediction for a product during a particular month or period, estimation of inventory costs, forecasting of train passengers for the next financial year, and so on.
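The demand-prediction scenario can be illustrated with one of the simplest time series techniques, simple exponential smoothing, which extrapolates the identified level of the series into a one-step-ahead forecast. The monthly demand figures below are invented for illustration:

```python
# Hypothetical monthly demand for a product (units sold)
demand = [120, 132, 128, 140, 152, 148, 160, 171, 165, 180]

def exponential_smoothing(series, alpha):
    """Simple exponential smoothing: each step blends the newest
    observation with the previous smoothed level; the final level
    serves as the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

forecast = exponential_smoothing(demand, alpha=0.5)
print(round(forecast, 1))  # → 171.7
```

The smoothing constant `alpha` controls how quickly the forecast reacts to recent observations; values near 1 track the latest data closely, values near 0 average over the whole history.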
3. Clustering:
Clusters in the data are used for modelling predictions by grouping ‘like’ objects under a probability distribution. A model is hypothesized for each of the clusters to find the best fit of that model to its cluster. Clusters in customer behaviour may be used for predictive modelling, i.e. behavioural clustering, to predict the behaviour or buying patterns of customers. Clusters in product segmentation may be used to predict which categories of products customers are likely to buy. Algorithms auto-segment the objects based on several variables to derive the cluster DNA, which is then leveraged for predictive insights.
Cluster models are used to predict demand for products (a customer ordering baby clothes is likely to order diapers), predict brand preferences, predict the efficacy of a drug among a certain age group in clinical trials, predict stock market trends, identify groups of car insurance policy holders with a higher average claim cost, and more.
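The auto-segmentation step described above is commonly done with k-means. The sketch below implements it from scratch on invented customer data (average order value and orders per month are assumed features, not from the source), using a deterministic initialisation for simplicity:

```python
import numpy as np

# Hypothetical customers: columns = [avg order value, orders per month]
customers = np.array([[20, 1], [25, 2], [22, 1],    # low spenders
                      [90, 8], [95, 9], [88, 7]])   # high spenders

def kmeans(X, k, iters=20):
    # Initialise centroids from the first k points (simple and deterministic)
    centroids = X[:k].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

labels, centroids = kmeans(customers, k=2)
print(labels)
```

Each resulting centroid summarises its segment (the “cluster DNA”), and new customers can be scored against the centroids to predict which behavioural group, and hence which buying pattern, they are likely to follow.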
4. Decision Trees:
This statistical technique is a tree-like predictive model of decisions and their possible consequences. Based on Boolean tests, specific facts are used to reach general conclusions at decision points represented by nodes. Rules trace a path from the root through the nodes until an action is derived. Problems are structured as a tree whose end nodes (leaves) represent a specific event, scenario, or subjective probability.
A basic decision tree model can, for example, predict how many people buy ice cream because they crave it, even if they don’t have extra money.
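The ice-cream example can be written as a tiny hand-built tree: each `if` is a Boolean test at a node, and each `return` is a leaf. The exact branch structure (in particular the impulse-buy branch for people with spare money but no craving) is an assumption added for illustration:

```python
# A hand-built decision tree for the ice-cream example.
# Root node tests craving; the second node tests spare money.
def will_buy_ice_cream(craving: bool, has_extra_money: bool) -> bool:
    if craving:
        return True    # leaf: a craving wins even without extra money
    elif has_extra_money:
        return True    # leaf (assumed branch): spare cash, impulse buy
    else:
        return False   # leaf: no craving and no money, no sale

print(will_buy_ice_cream(craving=True, has_extra_money=False))  # → True
```

Tracing inputs from the root to a leaf is exactly the rule-following described above; algorithms such as CART learn these tests from data instead of hand-coding them.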
5. Neural Network:
Neural networks are a set of algorithms, modelled loosely on the human brain, that are designed to recognize patterns. A neural network consists of nonlinear information-processing elements called neurons, normally arranged in layers and executed in parallel. Neural networks are increasingly used for prediction and classification, areas where statistical methods have traditionally been used. In fact, according to experts, the most commonly used artificial neural networks, called multilayer perceptrons, are essentially nonlinear regression and discriminant models. A neural network can approximate a wide range of predictive models with minimal demands on model structure and assumptions.
Learn Predictive Analytics with R and earn certification from one of the leading Big Data and Analytics schools in the country. You can even choose a course like the Data Science and Machine Learning Certification and do hands-on projects in both R & SAS. Enroll yourself today!