Lecture 7 introduced more examples of using Bayes' theorem to help us make decisions in real life. Sometimes the conclusion seems counterintuitive at first. However, once you grasp the idea of "using new information to update your prior belief", you will find Bayes' rule very natural. We also introduced a new form of Bayes' rule that does not refer to events $A$, $B$, etc.:
$$\mathbb{P}(H_i|E) = \cfrac{\mathbb{P}(H_i)\mathbb{P}(E|H_i)}{\mathbb{P}(E)} = \cfrac{\mathbb{P}(E|H_i)}{\sum_{j=1}^n \mathbb{P}(H_j)\mathbb{P}(E|H_j)} \cdot \mathbb{P}(H_i)$$
Instead of talking about events $A$, $B$, etc., which are confusing and sometimes swapped between textbooks, we think in terms of hypotheses and evidence. Before we observe anything, we have several hypotheses $H_1, \dots, H_n$, each plausible to some degree; these degrees of belief are the priors $\mathbb{P}(H_i)$. Once we receive some new information $E$, we update our beliefs about how likely each hypothesis is to be true; the updated beliefs are the posteriors $\mathbb{P}(H_i|E)$.
In real life, the likelihood $\mathbb{P}(E|H_i)$ is relatively easy to calculate: probability theory is very good at computing the probability of some evidence given that a hypothesis is true; that is what probability theory is all about. The difficult part is actually choosing and estimating the priors $\mathbb{P}(H_i)$, which requires more practice and experience.
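The hypotheses-and-evidence form of the update can be sketched in a few lines of code. The function below is a minimal illustration, not part of the lecture; the coin scenario and all the numbers (the $0.9/0.1$ priors, the "three heads" evidence) are invented for this example.

```python
def bayes_update(priors, likelihoods):
    """Return the posteriors P(H_i | E) from priors P(H_i) and likelihoods P(E | H_i)."""
    # P(E) by the law of total probability: sum_j P(H_j) * P(E | H_j)
    evidence = sum(p * l for p, l in zip(priors, likelihoods))
    return [p * l / evidence for p, l in zip(priors, likelihoods)]

# Two made-up hypotheses: H1 = "the coin is fair", H2 = "the coin is double-headed".
priors = [0.9, 0.1]          # prior beliefs P(H_i), chosen for illustration
likelihoods = [0.5**3, 1.0]  # P(three heads in a row | H_i)

posteriors = bayes_update(priors, likelihoods)
print(posteriors)  # roughly [0.529, 0.471]
```

Note how three heads in a row moves a 10% prior on "double-headed" up to nearly 50%: the evidence is eight times more likely under $H_2$ than under $H_1$, which is exactly the ratio the formula multiplies the prior by.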