---
type: math
---

### What is Probability?

Probability measures uncertainty and is used to build mathematical models of events with uncertain outcomes. While probability theory focuses on constructing these models, statistics deals with collecting data and comparing it to the models to assess how well they align with reality.

---

### Key Milestones in Probability

Here are some major milestones in the history of probability:

- **Gerolamo Cardano (16th century):** Introduced basic probability concepts in the context of gambling.
- **Blaise Pascal & Pierre de Fermat (17th century):** Developed foundational principles of probability, also inspired by games of chance.
- **Jacob Bernoulli (17th century):** Pioneered ideas in statistical inference and introduced Bernoulli trials (experiments with two outcomes).
- **Abraham de Moivre & Pierre-Simon Laplace (18th century):** Developed the normal distribution and the central limit theorem, critical tools in modern probability and statistics.
- **Thomas Bayes (18th century):** Formulated Bayes' Theorem, a key method for updating beliefs based on new evidence.
- **Andrey Kolmogorov (20th century):** Formalized probability theory using set theory, creating the modern axiomatic framework we use today.

---

### Sample Space and Events

- **Sample Space (Ω):** The set of all possible outcomes of an experiment.
- **Event (ε):** A subset of the sample space, representing a specific outcome or group of outcomes.

**Example:** For a single roll of a die:

- Sample Space: Ω = {1, 2, 3, 4, 5, 6}
- Event (e.g., rolling an even number): ε = {2, 4, 6}

---

### Probability Frameworks

There are three main approaches to defining probability:

1. **Classical Probability:** Assumes all outcomes are equally likely, so the probability of an outcome is $\frac{1}{\text{total outcomes}}$.
   **Example:** Rolling a fair die: each number has a probability of $\frac{1}{6}$.
2. **Frequentist Probability:** Defines probability as the relative frequency of an outcome over many repeated trials.
   **Example:** If you flip a coin many times, the proportion of heads approximates the probability of heads.
3. **Bayesian Probability:** Views probability as a degree of belief, incorporating prior knowledge and updating it as new evidence arrives.
   **Example:** Using weather forecasts and personal experience to estimate the chance of rain tomorrow.

**Limitations:**

- Frequentist methods don't work for one-time events (e.g., predicting the chance of life on Mars).
- Classical probability is unsuitable for infinite sample spaces or outcomes that are not equally likely.

---

### Axioms of Probability

Probability is formally defined by these axioms:

1. **Non-negativity:** $0 \leq P(ε) \leq 1$.
2. **Certainty:** $P(Ω) = 1$.
3. **Additivity:** For mutually exclusive events, $P(ε_1 \cup ε_2) = P(ε_1) + P(ε_2)$.

These axioms ensure that probabilities are consistent and logically sound.

---

### Key Properties of Probability

From the axioms, we can derive several useful properties:

- $P(\emptyset) = 0$: The probability of an impossible event is zero.
- $P(ε^c) = 1 - P(ε)$: The probability of an event not happening is 1 minus the probability that it happens.
- $P(ε_1 \cup ε_2) = P(ε_1) + P(ε_2) - P(ε_1 \cap ε_2)$: The probability that either event occurs accounts for their overlap.
- **Monotonicity:** If $ε_1 \subseteq ε_2$, then $P(ε_1) \leq P(ε_2)$.

---

### Conditional Probability

Conditional probability is the likelihood of an event given that another event has occurred:

$$
P(ε|H) = \frac{P(ε \cap H)}{P(H)}
$$

It satisfies all the axioms of probability and forms the basis of important rules, such as:

- **Law of Total Probability:** Breaks down the probability of an event into parts based on a partition of the sample space.
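The frequentist definition above can be illustrated with a short simulation: the running proportion of heads in repeated fair-coin flips settles near $\frac{1}{2}$ as the number of flips grows. This is a sketch; the helper name `proportion_of_heads` and the seed are our own choices.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def proportion_of_heads(n: int) -> float:
    """Flip a fair coin n times and return the fraction of heads."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    return heads / n

# The relative frequency approaches the probability 0.5 as n grows.
for n in (100, 10_000, 1_000_000):
    print(n, proportion_of_heads(n))
```

With only 100 flips the proportion can stray noticeably from 0.5; with a million flips it is typically within a fraction of a percent, which is exactly the long-run behavior the frequentist view relies on.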
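The axioms and derived properties above can be checked concretely for the die example, treating events as Python sets and using the classical rule $P(E) = |E| / |\Omega|$. A minimal sketch (the helper `P` and the event names are our own):

```python
from fractions import Fraction

# Classical probability on a fair die: P(E) = |E| / |Ω|.
omega = {1, 2, 3, 4, 5, 6}

def P(event: set) -> Fraction:
    return Fraction(len(event & omega), len(omega))

even = {2, 4, 6}
at_most_3 = {1, 2, 3}

# Certainty: P(Ω) = 1
assert P(omega) == 1

# Complement rule: P(ε^c) = 1 - P(ε)
assert P(omega - even) == 1 - P(even)

# Inclusion-exclusion: P(ε1 ∪ ε2) = P(ε1) + P(ε2) - P(ε1 ∩ ε2)
assert P(even | at_most_3) == P(even) + P(at_most_3) - P(even & at_most_3)

# Monotonicity: {2} ⊆ even, so P({2}) ≤ P(even)
assert P({2}) <= P(even)
```

Using `Fraction` keeps the probabilities exact, so the identities hold with `==` rather than approximately.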
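The conditional probability formula can likewise be computed directly on the die's sample space. A sketch, with the hypothetical helper `P_given` implementing $P(ε|H) = P(ε \cap H)/P(H)$:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}

def P(event: set) -> Fraction:
    return Fraction(len(event), len(omega))

def P_given(event: set, given: set) -> Fraction:
    # P(ε | H) = P(ε ∩ H) / P(H)
    return P(event & given) / P(given)

even = {2, 4, 6}
at_least_4 = {4, 5, 6}

# Knowing the roll is at least 4 shrinks the sample space to {4, 5, 6},
# of which {4, 6} are even: P(even | at least 4) = 2/3.
print(P_given(even, at_least_4))  # 2/3
```

Note that conditioning changed the probability of "even" from $\frac{1}{2}$ to $\frac{2}{3}$: the information that the roll is at least 4 is relevant to whether it is even.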
[Conditional Probability Visualization](https://setosa.io/conditional/)

---

### Bayes' Theorem

Bayes' Theorem updates the probability of a hypothesis based on new information:

$$
P(H|ε) = \frac{P(ε|H)P(H)}{P(ε)}
$$

Where:

- $P(H)$: Prior belief about event $H$.
- $P(ε|H)$: Likelihood of observing $ε$ if $H$ is true.
- $P(H|ε)$: Updated belief after observing $ε$.

**Bayes' Theorem Formula Visualization:**

![Bayes' Theorem](Bayes_Theorem-1813835086.gif)

This is particularly useful for analyzing rare events and understanding false positives in testing.

---

### Independence of Events

Events are **independent** if one occurring does not affect the probability of the other. Mathematically:

$$
P(ε_1 \cap ε_2) = P(ε_1)P(ε_2)
$$

This concept extends to multiple events:

- **Pairwise Independence:** Every pair of events in the set is independent.
- **Mutual Independence:** Every combination of events in the set is independent, not just pairs.
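The point about rare events and false positives can be made concrete with a worked Bayes' Theorem calculation. The numbers below are illustrative assumptions (1% prevalence, 99% sensitivity, 5% false-positive rate), and the helper `posterior` is our own; $P(ε)$ in the denominator is expanded via the Law of Total Probability:

```python
def posterior(prior: float, sensitivity: float, false_positive_rate: float) -> float:
    """P(H | ε) for a binary hypothesis H (has condition) and evidence ε (positive test)."""
    # P(ε) = P(ε|H)P(H) + P(ε|¬H)P(¬H)  — Law of Total Probability
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    # Bayes' Theorem: P(H|ε) = P(ε|H)P(H) / P(ε)
    return sensitivity * prior / evidence

p = posterior(prior=0.01, sensitivity=0.99, false_positive_rate=0.05)
print(round(p, 3))  # ≈ 0.167
```

Even with a highly accurate test, the posterior is only about 1 in 6: because the condition is rare, most positive results come from the large healthy population triggering false positives.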
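The product rule for independence can be verified exhaustively on the 36-outcome sample space of two fair dice. A sketch, with event names of our own choosing:

```python
from fractions import Fraction
from itertools import product

# Sample space for two fair dice: all 36 ordered pairs, equally likely.
omega = set(product(range(1, 7), repeat=2))

def P(event: set) -> Fraction:
    return Fraction(len(event), len(omega))

first_even = {(a, b) for a, b in omega if a % 2 == 0}
second_is_6 = {(a, b) for a, b in omega if b == 6}
sum_is_6 = {(a, b) for a, b in omega if a + b == 6}

# Independent: the two dice don't influence each other.
assert P(first_even & second_is_6) == P(first_even) * P(second_is_6)

# Dependent: "sum is 6" constrains the first die, so the product rule fails.
assert P(first_even & sum_is_6) != P(first_even) * P(sum_is_6)
```

The distinction between pairwise and mutual independence matters because a collection of events can satisfy the product rule for every pair while failing it for the full intersection.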