**Why buy when you can get it for free?**

Here is the first delivery of a fantastic selection of free and widely available business analytics learning content, which has been prepared… just for you.

- **A/B testing** is a way to compare two versions of a single variable, typically by testing a subject’s response to variant A against variant B and determining which of the two is more effective. https://en.wikipedia.org/wiki/A/B_testing

- **Choice modelling** attempts to model the decision process of an individual or segment via revealed or stated preferences made in a particular context or contexts. Typically, it uses discrete choices (A over B; B over A, B & C) to infer the positions of the items (A, B and C) on some relevant latent scale (typically “utility” in economics and various related fields). https://en.wikipedia.org/wiki/Choice_modelling

- **Adaptive control** is the control method used by a controller which must adapt to a controlled system whose parameters vary or are initially uncertain. For example, as an aircraft flies, its mass will slowly decrease as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. https://en.wikipedia.org/wiki/Adaptive_control

- **Multivariate testing.** In marketing, **multivariate testing** or **multi-variable testing** techniques apply statistical hypothesis testing to multi-variable systems, typically consumers on websites, using techniques of multivariate statistics. https://en.wikipedia.org/wiki/Multivariate_testing_in_marketing

- In probability theory, the **multi-armed bandit problem** (sometimes called the *K*- or *N*-armed bandit problem) is a problem in which a gambler at a row of slot machines (sometimes known as “one-armed bandits”) has to decide which machines to play, how many times to play each machine and in which order to play them. https://en.wikipedia.org/wiki/Multi-armed_bandit

- A **Student’s *t*-test** is any statistical hypothesis test in which the test statistic follows a Student’s *t*-distribution if the null hypothesis is supported. https://en.wikipedia.org/wiki/Student%27s_t-test

- **Visual analytics** is an outgrowth of the fields of information visualization and scientific visualization that focuses on analytical reasoning facilitated by interactive visual interfaces. https://en.wikipedia.org/wiki/Visual_analytics

- In statistics, **dependence** is any statistical relationship between two random variables or two sets of data. **Correlation** refers to any of a broad class of statistical relationships involving dependence, though in common usage it most often refers to the extent to which two variables have a linear relationship with each other. Familiar examples of dependent phenomena include the correlation between the physical statures of parents and their offspring, and the correlation between the demand for a product and its price. https://en.wikipedia.org/wiki/Correlation_and_dependence

- **Scenario analysis** is a process of analyzing possible future events by considering alternative possible outcomes (sometimes called “alternative worlds”). Scenario analysis, a main method of projections, does not try to show one exact picture of the future; instead, it consciously presents several alternative future developments. https://en.wikipedia.org/wiki/Scenario_analysis

- **Forecasting** is the process of making predictions of the future based on past and present data and analysis of trends. https://en.wikipedia.org/wiki/Forecasting

- **Time series analysis** comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. **Time series forecasting** is the use of a model to predict future values based on previously observed values. https://en.wikipedia.org/wiki/Time_series

- **Data mining** is an interdisciplinary subfield of computer science. It is the computational process of discovering patterns in large data sets (“big data”) involving methods at the intersection of artificial intelligence, machine learning, statistics and database systems. https://en.wikipedia.org/wiki/Data_mining

- In statistical modeling, **regression analysis** is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables (or “predictors”). https://en.wikipedia.org/wiki/Regression_analysis

- **Text mining**, also referred to as *text data mining* and roughly equivalent to **text analytics**, refers to the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. https://en.wikipedia.org/wiki/Text_mining

- **Sentiment analysis** (also known as **opinion mining**) refers to the use of natural language processing, text analysis and computational linguistics to identify and extract subjective information in source materials. Sentiment analysis is widely applied to reviews and social media for a variety of applications, ranging from marketing to customer service. https://en.wikipedia.org/wiki/Sentiment_analysis

- **Image analysis** is the extraction of meaningful information from images, mainly from digital images by means of digital image processing. Image analysis tasks can be as simple as reading bar-coded tags or as sophisticated as identifying a person from their face. https://en.wikipedia.org/wiki/Image_analysis

- **Video content analysis** (also **video content analytics**, **VCA**) is the capability of automatically analyzing video to detect and determine temporal and spatial events. https://en.wikipedia.org/wiki/Video_content_analysis

- **Speech analytics** is the process of analyzing recorded calls to gather information; it brings structure to customer interactions and exposes information buried in customer contact centre interactions with an enterprise. https://en.wikipedia.org/wiki/Speech_analytics

- **Monte Carlo methods** (or **Monte Carlo experiments**) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other mathematical methods. Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. https://en.wikipedia.org/wiki/Monte_Carlo_method

- **Linear programming** (**LP**; also called **linear optimization**) is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming (mathematical optimization). https://en.wikipedia.org/wiki/Linear_programming

- **Cohort analysis** is a subset of behavioral analytics that takes the data from a given eCommerce platform, web application or online game and, rather than looking at all users as one unit, breaks them into related groups for analysis. These related groups, or cohorts, usually share common characteristics or experiences within a defined time-span. https://en.wikipedia.org/wiki/Cohort_analysis

- **Factor analysis** is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called **factors**. For example, it is possible that variations in, say, six observed variables mainly reflect the variations in two unobserved (underlying) variables. https://en.wikipedia.org/wiki/Factor_analysis

- **Adaptive (or artificial) neural networks.** Like other machine learning methods – systems that learn from data – neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming, including computer vision and speech recognition. https://en.wikipedia.org/wiki/Artificial_neural_network

- **Meta-analysis.** The basic tenet of a **meta-analysis** is that there is a common truth behind all conceptually similar scientific studies, but that it has been measured with a certain error within individual studies. The aim of meta-analysis is then to use approaches from statistics to derive a pooled estimate closest to the unknown common truth, based on how this error is perceived. In essence, all existing methods yield a weighted average of the results of the individual studies; what differs is the manner in which these weights are allocated and the manner in which the uncertainty is computed around the point estimate thus generated. https://en.wikipedia.org/wiki/Meta-analysis
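As a small illustration of the A/B testing and Student’s *t*-test entries above, here is a sketch in plain Python (standard library only) that computes Welch’s two-sample *t* statistic for two variants. The sample data and the `welch_t` helper are invented for illustration, not taken from any real study:

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's two-sample t statistic: difference in sample means
    scaled by the combined standard error (no equal-variance assumption)."""
    se = sqrt(stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b))
    return (mean(a) - mean(b)) / se

# Variant A vs variant B: e.g. time-on-page samples (illustrative numbers)
a = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3]
b = [11.2, 11.5, 11.1, 11.6, 11.3, 11.4]

t = welch_t(a, b)
print(round(t, 2))
```

A large absolute *t* value suggests the difference between the variants is unlikely to be chance; in practice you would compare it against the *t*-distribution to get a p-value.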
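The multi-armed bandit entry can be sketched with the classic epsilon-greedy strategy: mostly exploit the best-looking machine, occasionally explore the others. The arm payout probabilities and the `epsilon_greedy` helper below are illustrative assumptions:

```python
import random

def epsilon_greedy(true_means, pulls=10000, eps=0.1, seed=0):
    """Epsilon-greedy bandit: with probability eps pull a random arm
    (explore), otherwise pull the arm with the best average reward so
    far (exploit). Returns per-arm reward estimates and overall average."""
    rng = random.Random(seed)
    n = [0] * len(true_means)        # pulls per arm
    value = [0.0] * len(true_means)  # running mean reward per arm
    total = 0.0
    for _ in range(pulls):
        if rng.random() < eps:
            arm = rng.randrange(len(true_means))
        else:
            arm = max(range(len(true_means)), key=lambda i: value[i])
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        n[arm] += 1
        value[arm] += (reward - value[arm]) / n[arm]  # incremental mean
        total += reward
    return value, total / pulls

# Three slot machines paying out 20%, 50% and 80% of the time
values, avg = epsilon_greedy([0.2, 0.5, 0.8])
print([round(v, 2) for v in values], round(avg, 2))
```

The gambler’s average reward ends up close to the best arm’s payout rate, minus a small cost paid for exploration.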
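The time series forecasting entry can be illustrated with simple exponential smoothing, one of the most basic forecasting models; the demand figures and the `exp_smooth` helper are made up for the example:

```python
def exp_smooth(series, alpha=0.5):
    """Simple exponential smoothing: each step blends the latest
    observation with the previous forecast, weighted by alpha.
    Returns the one-step-ahead forecast after the last observation."""
    forecast = series[0]
    for value in series[1:]:
        forecast = alpha * value + (1 - alpha) * forecast
    return forecast

# Monthly demand (illustrative numbers)
demand = [120, 132, 118, 140, 135, 150]
print(round(exp_smooth(demand), 1))  # → 141.5
```

Higher `alpha` makes the forecast react faster to recent values; lower `alpha` smooths out noise.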
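For the regression analysis entry, here is a minimal sketch of ordinary least squares with a single predictor, echoing the demand-versus-price example under correlation above. The price/units data and the `fit_line` helper are invented for illustration:

```python
from statistics import mean

def fit_line(xs, ys):
    """Ordinary least squares for one predictor:
    slope = cov(x, y) / var(x); intercept from the means."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

# Independent variable: price; dependent variable: units sold (illustrative)
price = [1.0, 1.5, 2.0, 2.5, 3.0]
units = [100, 92, 81, 74, 63]

slope, intercept = fit_line(price, units)
print(round(slope, 1), round(intercept, 1))  # → -18.4 118.8
```

The fitted line `units ≈ 118.8 − 18.4 × price` quantifies the relationship: each unit of price increase is associated with roughly 18 fewer units sold, in this toy data.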
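And for the Monte Carlo entry, the textbook example of numerical integration by repeated random sampling: estimating π from the fraction of random points in the unit square that fall inside the quarter circle (the `monte_carlo_pi` helper is an illustrative sketch):

```python
import random

def monte_carlo_pi(samples=100_000, seed=42):
    """Estimate pi: the quarter circle covers pi/4 of the unit square,
    so 4 x (fraction of random points inside it) converges to pi."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(monte_carlo_pi())
```

The error shrinks only as 1/√n, which is why Monte Carlo shines on problems where no cheaper analytical method exists.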

I hope you find the content useful. Of course, all thanks should really go to Wikipedia and its unpaid expert contributors.

I will try to get the next part of ‘Free Business Analytics Content’ onto LinkedIn Pulse over the next weekend.

**Many thanks for reading.**

Just a few points before closing.

Firstly, please consider joining The Big Data Contrarians, here on LinkedIn: https://www.linkedin.com/groups/8338976

Secondly, keep in touch. My strategy blog is at http://www.goodstrat.com, and I can be followed on Twitter at @GoodStratTweet. Please also connect on LinkedIn if you wish. If you have any follow-up questions, leave a comment or send me an email at martyn.jones@cambriano.es

Thirdly, you may be interested in some other articles I have written on the subject of Data Warehousing, such as:

Data Warehousing explained to Big Data friends – https://goodstrat.com/2015/07/20/data-warehousing-explained-to-big-data-friends/

Stuff a great data architect should know – https://goodstrat.com/2015/08/16/stuff-a-great-data-architect-should-know-how-to-be-a-professional-expert/

Big Data is not Data Warehousing – https://goodstrat.com/2015/03/06/consider-this-big-data-is-not-data-warehousing/

What can data warehousing do for us now – http://www.computerworld.com/article/3006473/big-data/what-can-data-warehousing-do-for-us-now.html

Looking for your most valuable data? Follow the money – http://www.computerworld.com/article/2982352/big-data/looking-for-your-most-valuable-data-follow-the-money.html