Martyn Rhisiart Jones

Bayes’ theorem (alternatively Bayes’ law or Bayes’ rule, after Thomas Bayes) is a mathematical rule for inverting conditional probabilities, which lets us find the probability of a cause given its effect.[1] For example, we know the risk of developing health problems increases with age. Bayes’ theorem allows us to assess the risk to an individual of a known age more accurately by conditioning the risk on their age, rather than assuming the individual is typical of the population as a whole. Similarly, by Bayes’ law, evaluating what a positive result on an infectious-disease test really means requires weighing both the prevalence of the disease in the population and the error rate of the test; doing so correctly is how we avoid the base-rate fallacy.
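To make the base-rate point concrete, here is a small sketch in Python. The prevalence, sensitivity and false-positive figures are invented purely for illustration:

```python
# Illustrative numbers only: a disease with 1% prevalence, and a test
# with 95% sensitivity and a 5% false-positive rate.
prevalence = 0.01          # P(disease)
sensitivity = 0.95         # P(positive | disease)
false_positive = 0.05      # P(positive | no disease)

# Total probability of testing positive, across both groups.
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)

# Bayes' theorem: P(disease | positive)
p_disease_given_positive = sensitivity * prevalence / p_positive

print(round(p_disease_given_positive, 3))  # roughly 0.161
```

Despite the seemingly accurate test, a positive result here implies only about a 16% chance of disease, because true positives from the rare disease group are swamped by false positives from the much larger healthy group.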
One of the many applications of Bayes’ theorem is Bayesian inference, a particular approach to statistical inference, where it is used to invert the probability of observations given a model configuration (i.e., the likelihood function) to obtain the probability of the model configuration given the observations (i.e., the posterior probability).
http://en.wikipedia.org/wiki/Bayes%27_theorem
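That inversion can be sketched with a toy discrete example: two candidate model configurations (a fair coin and a hypothetical biased one), a prior over them, and a posterior computed as likelihood times prior, normalised. The biases and observations are made up for illustration:

```python
# Two candidate model configurations and a prior belief over them.
priors = {"fair": 0.5, "biased": 0.5}        # P(model)
heads_prob = {"fair": 0.5, "biased": 0.8}    # model parameter (illustrative)

observations = ["H", "H", "T", "H"]          # invented data

def likelihood(model, data):
    """P(data | model): product of per-flip probabilities."""
    p = heads_prob[model]
    result = 1.0
    for flip in data:
        result *= p if flip == "H" else (1 - p)
    return result

# Posterior is proportional to likelihood x prior, normalised over models.
unnormalised = {m: likelihood(m, observations) * priors[m] for m in priors}
total = sum(unnormalised.values())
posterior = {m: v / total for m, v in unnormalised.items()}
```

After three heads in four flips, the posterior shifts towards the biased configuration, which is exactly the likelihood-to-posterior inversion described above.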
Bayesian statistics (/ˈbeɪziən/ BAY-zee-ən or /ˈbeɪʒən/ BAY-zhən)[1] is a theory in the field of statistics based on the Bayesian interpretation of probability, where probability expresses a degree of belief in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of other interpretations of probability, such as the frequentist interpretation, which views probability as the limit of the relative frequency of an event after many trials.[2] More concretely, analysis in Bayesian methods codifies prior knowledge in the form of a prior distribution.
http://en.wikipedia.org/wiki/Bayesian_statistics
Consider this: Bayesian statistics is a statistical framework that applies Bayes’ Theorem to update the probability of a hypothesis as more evidence or data becomes available. It is based on the principle that we can revise our beliefs (probabilities) about a hypothesis after observing new data.
Unlike frequentist statistics, which is based on the concept of fixed probabilities and sampling distributions, Bayesian statistics treats probabilities as degrees of belief about the likelihood of an event or hypothesis. These beliefs can be updated as new data or information is acquired.
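One minimal sketch of this belief-updating, using the standard Beta–Bernoulli conjugate pair (the observation sequence is invented for illustration):

```python
# Beta-Bernoulli conjugate updating: a Beta(a, b) prior over a success
# probability becomes Beta(a + successes, b + failures) after observing
# Bernoulli data. Start from a uniform prior, Beta(1, 1).
a, b = 1.0, 1.0

observations = [1, 0, 1, 1, 0, 1, 1]   # 1 = success, 0 = failure (illustrative)

# Each observation updates the belief; the posterior after one data point
# becomes the prior for the next.
for x in observations:
    a += x
    b += 1 - x

posterior_mean = a / (a + b)   # updated point estimate of the success rate
```

After five successes and two failures the belief moves from a mean of 0.5 under the uniform prior to 6/9 ≈ 0.667, and the same loop would keep revising it as further data arrived.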
Bayesian statistics offers a powerful and flexible framework for analysing uncertainty and updating beliefs as new data become available. It is especially useful in situations where prior knowledge or expert opinion can be incorporated and where uncertainty needs to be quantified. However, its computational complexity and dependence on prior choice require careful application and specialized knowledge.
When to use: Bayesian statistics is beneficial in a variety of settings where traditional frequentist approaches may not be as effective or appropriate.
Bayesian statistics excels in situations that require flexibility, incorporation of prior knowledge, and thorough quantification of uncertainty. It is increasingly used in fields such as machine learning, medicine, economics, and environmental sciences, where its strengths complement or outperform traditional methods.
Examples of the use of Bayesian statistics can be found in numerous settings. Here are just a few:
- Situations where prior information is available.
- Estimating the efficacy of a new drug based on previous studies of similar drugs.
- Early-phase clinical trials or studies of rare diseases.
- Modelling student performance within schools and districts in educational studies.
- Non-linear regression models, mixture models, or models with latent variables.
- Real-time decision-making, such as adaptive clinical trials or spam filtering.
- Predicting the probability of an event (e.g., the likelihood of a stock market crash).
- Risk assessment and management in engineering or finance.
- Handling incomplete survey responses or noisy sensor measurements.
- Selecting the best predictive model in a machine learning context.
- Modelling consumer preferences or decision-making behaviour.
- Analysing small or highly skewed data sets where frequentist p-values or confidence intervals may not be reliable.
When not to use: Bayesian statistics should be avoided if:
- Computational resources are insufficient.
- Priors are unjustifiable or controversial.
- Frequentist methods are simpler and more appropriate.
- Stakeholder expectations or regulatory requirements favour frequentist approaches.
In short, the choice between Bayesian and frequentist methods often depends on the balance between the complexity of the problem, available resources, and public acceptance.
Strengths: Bayesian analysis offers several strengths that make it a powerful approach for statistical inference, modelling, and decision-making. These strengths stem from its probabilistic framework and its flexibility in dealing with uncertainty and incorporating prior knowledge.
Bayesian analysis is especially effective at:
- Incorporating prior knowledge and dynamically updating it with data.
- Quantifying uncertainty using full posterior distributions.
- Handling small, noisy, or hierarchical data sets.
- Supporting complex models and principled decision-making under uncertainty.
These strengths make Bayesian methods invaluable in fields as diverse as healthcare, engineering, finance, machine learning, and others.
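As a sketch of the “full posterior distributions” point above, here is a 95% equal-tailed credible interval computed on a grid for an illustrative Beta(6, 3) posterior (e.g. five successes and two failures observed under a uniform prior):

```python
# Grid approximation of a Beta(6, 3) posterior (illustrative numbers).
# A credible interval summarises the whole posterior, not just a point.
n = 10001
grid = [i / (n - 1) for i in range(n)]
density = [p**5 * (1 - p)**2 for p in grid]   # unnormalised Beta(6, 3)
total = sum(density)

# Build the cumulative distribution over the grid.
cdf, running = [], 0.0
for d in density:
    running += d / total
    cdf.append(running)

# Equal-tailed 95% credible interval: 2.5% and 97.5% quantiles.
lower = next(p for p, c in zip(grid, cdf) if c >= 0.025)
upper = next(p for p, c in zip(grid, cdf) if c >= 0.975)
# [lower, upper] contains 95% of the posterior probability mass.
```

Unlike a single point estimate, the interval makes the remaining uncertainty explicit, and it narrows automatically as more data are folded into the posterior.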
Weaknesses:
Although Bayesian analysis is a powerful and flexible approach, it also has some weaknesses and limitations. These stem from its dependence on priors, its computational complexity, and practical considerations.
Bayesian analysis faces challenges in:
- Prior selection and potential subjectivity.
- Computational intensity and scalability.
- Interpretation and communication of results.
- A steeper learning curve and tool limitations.
Although these shortcomings do not negate its value, they highlight the need to carefully consider context, computing resources, and experience when choosing Bayesian methods.
Passing comments: “[My favourite fellow of the Royal Society is the Reverend Thomas Bayes, an obscure 18th-century Kent clergyman and a brilliant mathematician who] devised a complex equation known as the Bayes theorem, which can be used to work out probability distributions. It had no practical application in his lifetime, but today, thanks to computers, is routinely used in the modelling of climate change, astrophysics and stock-market analysis.” – Bill Bryson