 Why buy when you can get it for free?

Here is the first delivery of a fantastic selection of free and widely available business analytics learning content, which has been prepared… just for you.

1. A/B testing is a way to compare two versions of a single variable, typically by testing a subject’s response to variable A against variable B, and determining which of the two variables is more effective. https://en.wikipedia.org/wiki/A/B_testing
2. Choice modelling attempts to model the decision process of an individual or segment via revealed preferences or stated preferences made in a particular context or contexts. Typically, it attempts to use discrete choices (A over B; B over A, B & C) in order to infer positions of the items (A, B and C) on some relevant latent scale (typically “utility” in economics and various related fields). https://en.wikipedia.org/wiki/Choice_modelling
3. Adaptive control is the control method used by a controller which must adapt to a controlled system with parameters which vary, or are initially uncertain. For example, as an aircraft flies, its mass will slowly decrease as a result of fuel consumption; a control law is needed that adapts itself to such changing conditions. https://en.wikipedia.org/wiki/Adaptive_control
4. Multivariate Testing. In marketing, multivariate testing or multi-variable testing techniques apply statistical hypothesis testing on multi-variable systems, typically consumers on websites. Techniques of multivariate statistics are used. https://en.wikipedia.org/wiki/Multivariate_testing_in_marketing
5. In probability theory, the multi-armed bandit problem (sometimes called the K- or N-armed bandit problem) is a problem in which a gambler at a row of slot machines (sometimes known as “one-armed bandits”) has to decide which machines to play, how many times to play each machine, and in which order to play them. https://en.wikipedia.org/wiki/Multi-armed_bandit
6. A t-test is any statistical hypothesis test in which the test statistic follows a Student’s t-distribution if the null hypothesis is supported. https://en.wikipedia.org/wiki/Student%27s_t-test
7. Visual analytics is an outgrowth of the fields of information visualization and scientific visualization that focuses on analytical reasoning facilitated by interactive visual interfaces. https://en.wikipedia.org/wiki/Visual_analytics
8. In statistics, dependence is any statistical relationship between two random variables or two sets of data. Correlation refers to any of a broad class of statistical relationships involving dependence, though in common usage it most often refers to the extent to which two variables have a linear relationship with each other. Familiar examples of dependent phenomena include the correlation between the physical statures of parents and their offspring, and the correlation between the demand for a product and its price. https://en.wikipedia.org/wiki/Correlation_and_dependence
9. Scenario analysis is a process of analyzing possible future events by considering alternative possible outcomes (sometimes called “alternative worlds”). As a key method of projection, scenario analysis does not try to show one exact picture of the future; instead, it deliberately presents several alternative future developments. https://en.wikipedia.org/wiki/Scenario_analysis
10. Forecasting is the process of making predictions of the future based on past and present data and analysis of trends. https://en.wikipedia.org/wiki/Forecasting
11. Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. https://en.wikipedia.org/wiki/Time_series
12. Data mining is an interdisciplinary subfield of computer science. It is the computational process of discovering patterns in large data sets (“big data”) involving methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. https://en.wikipedia.org/wiki/Data_mining
13. In statistical modeling, regression analysis is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables (or ‘predictors’). https://en.wikipedia.org/wiki/Regression_analysis
14. Text mining, also referred to as text data mining, roughly equivalent to text analytics, refers to the process of deriving high-quality information from text. High-quality information is typically derived through the devising of patterns and trends through means such as statistical pattern learning. https://en.wikipedia.org/wiki/Text_mining
15. Sentiment analysis (also known as opinion mining) refers to the use of natural language processing, text analysis, and computational linguistics to identify and extract subjective information in source materials. Sentiment analysis is widely applied to reviews and social media for a variety of applications, ranging from marketing to customer service. https://en.wikipedia.org/wiki/Sentiment_analysis
16. Image analysis is the extraction of meaningful information from images, mainly from digital images by means of digital image processing. Image analysis tasks can be as simple as reading bar-coded tags or as sophisticated as identifying a person from their face. https://en.wikipedia.org/wiki/Image_analysis
17. Video content analysis (also video content analytics, VCA) is the capability of automatically analyzing video to detect and determine temporal and spatial events. https://en.wikipedia.org/wiki/Video_content_analysis
18. Speech analytics is the process of analyzing recorded calls to gather information; it brings structure to customer interactions and exposes information buried in customer contact-center interactions with an enterprise. https://en.wikipedia.org/wiki/Speech_analytics
19. Monte Carlo methods (or Monte Carlo experiments) are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other mathematical methods. Monte Carlo methods are mainly used in three distinct problem classes: optimization, numerical integration, and generating draws from a probability distribution. https://en.wikipedia.org/wiki/Monte_Carlo_method
20. Linear programming (LP; also called linear optimization) is a method to achieve the best outcome (such as maximum profit or lowest cost) in a mathematical model whose requirements are represented by linear relationships. Linear programming is a special case of mathematical programming (mathematical optimization). https://en.wikipedia.org/wiki/Linear_programming
21. Cohort analysis is a subset of behavioral analytics that takes the data from a given e-commerce platform, web application, or online game and, rather than looking at all users as one unit, breaks them into related groups for analysis. These related groups, or cohorts, usually share common characteristics or experiences within a defined time-span. https://en.wikipedia.org/wiki/Cohort_analysis
22. Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a potentially lower number of unobserved variables called factors. For example, it is possible that variations in, say, six observed variables mainly reflect the variations in two unobserved (underlying) variables. https://en.wikipedia.org/wiki/Factor_analysis
23. Adaptive (or Artificial) Neural Networks. Like other machine learning methods – systems that learn from data – neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming, including computer vision and speech recognition. https://en.wikipedia.org/wiki/Artificial_neural_network
24. Meta Analysis. The basic tenet of a meta-analysis is that there is a common truth behind all conceptually similar scientific studies, but which has been measured with a certain error within individual studies. The aim in meta-analysis then is to use approaches from statistics to derive a pooled estimate closest to the unknown common truth based on how this error is perceived. In essence, all existing methods yield a weighted average from the results of the individual studies and what differs is the manner in which these weights are allocated and also the manner in which the uncertainty is computed around the point estimate thus generated. https://en.wikipedia.org/wiki/Meta-analysis
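
Entries 1 and 6 above go hand in hand: a common way to call an A/B test is a two-sample t-test on the two variants. Here is a minimal, stdlib-only sketch using Welch's t-statistic; the `variant_a` and `variant_b` conversion figures are purely made up for illustration.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic for two independent samples (unequal variances)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical daily conversion rates for page variant A vs. variant B.
variant_a = [0.12, 0.15, 0.11, 0.14, 0.13, 0.16]
variant_b = [0.09, 0.10, 0.08, 0.11, 0.10, 0.09]

t = welch_t(variant_a, variant_b)
```

A large |t| (roughly above 2 for samples this small) suggests the difference between the variants is unlikely to be noise; in practice you would look up a p-value rather than eyeball the statistic.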
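
The correlation entry (number 8) mentions the demand-versus-price example; Pearson's r makes that concrete. A small sketch, with invented price and demand figures, computing r directly from its definition:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from the definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: as price rises, demand falls.
prices = [10, 12, 14, 16, 18]
demand = [100, 90, 82, 70, 63]
r = pearson_r(prices, demand)
```

Here r comes out close to -1, a strong negative linear relationship, which is what the demand/price example in the entry describes.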
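
Regression analysis (entry 13) in its simplest form, one predictor, has a closed-form least-squares solution. A sketch with hypothetical ad-spend and sales numbers, deliberately chosen to lie exactly on the line y = 2x + 1 so the fit is easy to check:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept, one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical ad spend vs. sales (both in thousands), lying on y = 2x + 1.
spend = [1, 2, 3, 4, 5]
sales = [3, 5, 7, 9, 11]
slope, intercept = fit_line(spend, sales)  # → 2.0, 1.0
```

Real data will not sit exactly on a line, of course; the same formulas then give the best-fitting line in the least-squares sense.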
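
Forecasting and time series analysis (entries 10 and 11) can be illustrated with one of the simplest models there is, simple exponential smoothing, where the last smoothed level serves as the one-step-ahead forecast. The `monthly_sales` series below is invented:

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: the final level is the next-period forecast.
    alpha controls how quickly old observations are forgotten."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

monthly_sales = [100, 104, 101, 99, 103, 105]
forecast = ses_forecast(monthly_sales)
```

With alpha = 0.5 the forecast lands between the recent observations, which is exactly the point: smoothing damps noise rather than chasing the last data point.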
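
The Monte Carlo entry (number 19) is easiest to grasp with the classic example: estimating pi by random sampling. Points are thrown uniformly into the unit square, and the fraction landing inside the quarter circle approximates pi/4:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def estimate_pi(n):
    """Monte Carlo estimate of pi: count random points inside the quarter circle."""
    inside = sum(1 for _ in range(n)
                 if random.random() ** 2 + random.random() ** 2 <= 1)
    return 4 * inside / n

pi_hat = estimate_pi(100_000)
```

The estimate wobbles around 3.14 and tightens as n grows, the hallmark of Monte Carlo methods: accuracy bought with more random draws rather than cleverer mathematics.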
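
The multi-armed bandit problem (entry 5) is usually attacked with an explore/exploit strategy; epsilon-greedy is the simplest. A sketch with three hypothetical "machines" whose payout probabilities (`true_rates`) are made up for the example:

```python
import random

random.seed(0)

true_rates = [0.02, 0.05, 0.04]   # hidden payout probability of each machine
counts = [0, 0, 0]                # pulls per arm
values = [0.0, 0.0, 0.0]          # running mean reward per arm
EPSILON = 0.1                     # fraction of pulls spent exploring

for _ in range(20_000):
    if random.random() < EPSILON:                       # explore: random arm
        arm = random.randrange(len(true_rates))
    else:                                               # exploit: best estimate
        arm = max(range(len(true_rates)), key=lambda a: values[a])
    reward = 1 if random.random() < true_rates[arm] else 0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
```

Over many pulls the gambler's play concentrates on the better-paying machines while still occasionally sampling the others, which is precisely the trade-off the bandit problem formalizes.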
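
Linear programming (entry 20) has a key property worth showing: an optimum always occurs at a vertex of the feasible region. For a toy two-variable problem (the profit coefficients and constraints here are invented) we can simply enumerate the corner points by hand instead of running a solver:

```python
# Maximize profit 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# The feasible region is a polygon; an LP optimum always sits at a vertex,
# so for this toy problem it suffices to check the four corners.
vertices = [(0, 0), (2, 0), (2, 2), (0, 4)]

def objective(point):
    x, y = point
    return 3 * x + 2 * y

best = max(vertices, key=objective)  # → (2, 2), profit 10
```

Real problems have far too many vertices to enumerate, which is why simplex and interior-point methods exist, but the geometric picture is the same.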
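
Cohort analysis (entry 21) boils down to a group-by: split users into cohorts, then compute a metric per cohort rather than over all users at once. A sketch with invented signup records:

```python
from collections import defaultdict

# Hypothetical (user, signup_month, purchases) records.
events = [
    ("ann", "2015-01", 3), ("bob", "2015-01", 1),
    ("cat", "2015-02", 4), ("dan", "2015-02", 2), ("eve", "2015-02", 0),
]

# Group purchase counts by signup-month cohort.
cohorts = defaultdict(list)
for user, month, purchases in events:
    cohorts[month].append(purchases)

# Average purchases per cohort instead of one blended figure for all users.
avg_by_cohort = {m: sum(v) / len(v) for m, v in cohorts.items()}
```

The same pattern scales up to retention curves and revenue tables; only the grouping key and the metric change.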

I hope you find the content useful. Of course, all thanks should really go to Wikipedia and its unpaid expert contributors.

I will try to get the next part of ‘Free Business Analytics Content’ onto LinkedIn Pulse over the next weekend.

Just a few points before closing.

Keep in touch. My strategy blog is here: http://www.goodstrat.com, and I can be followed on Twitter at @GoodStratTweet. Please also connect on LinkedIn if you wish. If you have any follow-up questions, leave a comment or send me an email at martyn.jones@cambriano.es

You may also be interested in some other articles I have written on the subject of Data Warehousing:

Data Warehousing explained to Big Data friends – https://goodstrat.com/2015/07/20/data-warehousing-explained-to-big-data-friends/

Stuff a great data architect should know – https://goodstrat.com/2015/08/16/stuff-a-great-data-architect-should-know-how-to-be-a-professional-expert/

Big Data is not Data Warehousing – https://goodstrat.com/2015/03/06/consider-this-big-data-is-not-data-warehousing/

What can data warehousing do for us now – http://www.computerworld.com/article/3006473/big-data/what-can-data-warehousing-do-for-us-now.html