PGM Reading Notes, Excerpt (1)

The key property of a declarative representation is the separation of knowledge and reasoning. The representation has its own clear semantics, separate from the algorithms that one can apply to it. Thus we can develop a general suite of algorithms that apply to any model within a broad class, whether in the domain of medical diagnosis or speech recognition. Conversely, we can improve our model for a specific application domain without having to modify our reasoning algorithms constantly.

It turns out that these two perspectives — the graph as a representation of a set of independencies, and the graph as a skeleton for factorizing a distribution — are, in a deep sense, equivalent. The independence properties of the distribution are precisely what allow it to be represented compactly in a factorized form. Conversely, a particular factorization of the distribution guarantees that certain independencies hold.

• representation: what kind of graph constitutes a reasonable assumption about the domain?
• inference: given observations of some variables, how to infer the remaining unknown variables
• learning: given a set of samples, how to identify the parameters of the system

An interesting aspect of BNs is the reasoning patterns they support; we have the following forms:

• causal reasoning: adding information (observations) about upstream variables (ancestors), i.e. the "causes", in order to reason about the probability that some downstream phenomenon occurs
• evidential reasoning (explanation): adding information (observations) about downstream variables, i.e. the "phenomena" or "effects", in order to reason about the probability that some upstream cause occurred
• intercausal reasoning (explaining away): the same phenomenon may be produced by several causes; once the phenomenon is observed, these causes start to interact with one another (even though they may be independent a priori). A common case is that once one cause (one that is very likely to produce the phenomenon) is established, the probability of the other causes drops as a consequence.
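The explaining-away pattern can be seen concretely in a tiny v-structure network. The variables, CPT numbers, and helper names below are all made up for illustration; this is a brute-force enumeration sketch, not an inference algorithm:

```python
import itertools

# Hypothetical network: A and B are independent prior causes,
# E is their common effect (a v-structure A -> E <- B).
pA = {0: 0.9, 1: 0.1}
pB = {0: 0.8, 1: 0.2}
# P(E=1 | A, B): either cause makes the effect likely (noisy-OR-like CPT)
pE1 = {(0, 0): 0.05, (0, 1): 0.6, (1, 0): 0.7, (1, 1): 0.9}

def joint(a, b, e):
    pe = pE1[(a, b)] if e == 1 else 1 - pE1[(a, b)]
    return pA[a] * pB[b] * pe

def prob(query, evidence):
    # P(query | evidence) by brute-force summation over the joint
    num = den = 0.0
    for a, b, e in itertools.product([0, 1], repeat=3):
        assn = {'A': a, 'B': b, 'E': e}
        if all(assn[k] == v for k, v in evidence.items()):
            p = joint(a, b, e)
            den += p
            if all(assn[k] == v for k, v in query.items()):
                num += p
    return num / den

print(prob({'A': 1}, {}))                # prior belief in cause A
print(prob({'A': 1}, {'E': 1}))          # evidential reasoning: belief rises
print(prob({'A': 1}, {'E': 1, 'B': 1}))  # explaining away: B accounts for E,
                                         # so belief in A drops back down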

$\displaystyle \Pr(X_1, \ldots, X_n) = \prod_{i = 1}^n \Pr(X_i \mid \pi (X_i) )$
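This factorization can be checked numerically on a small chain $X_1 \to X_2 \to X_3$. The binary CPDs below are illustrative numbers, not from the text:

```python
import itertools

# Chain X1 -> X2 -> X3 with made-up binary CPDs.
p_x1 = {0: 0.6, 1: 0.4}
p_x2_given_x1 = {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8}  # key: (x2, x1)
p_x3_given_x2 = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}  # key: (x3, x2)

def joint(x1, x2, x3):
    # Pr(x1, x2, x3) = Pr(x1) * Pr(x2 | x1) * Pr(x3 | x2):
    # each factor conditions only on the node's parents pi(X_i).
    return p_x1[x1] * p_x2_given_x1[(x2, x1)] * p_x3_given_x2[(x3, x2)]

# A product of locally normalized CPDs is automatically a valid distribution:
total = sum(joint(*xs) for xs in itertools.product([0, 1], repeat=3))
print(total)  # sums to 1
```

The point of the factorized form is size: the full joint over $n$ binary variables has $2^n - 1$ free parameters, while each CPD here needs only a handful.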

• minimal I-map: given a set of independence assertions $\mathcal{I}$, a graph $G$ is a minimal I-map if $\mathcal{I}(G) \subseteq \mathcal{I}$ and removing any single edge from $G$ destroys this property. Obtaining such a graph is not difficult: after decomposing the joint via the chain rule, for each $X_i$ we choose a minimal set $U$ such that $(X_i \perp \{ X_1, \ldots, X_{i - 1}\} - U \mid U)$ holds, and take $U$ as the parents of $X_i$.
• perfect I-map: given a set of independence assertions $\mathcal{I}$, the corresponding perfect I-map satisfies $\mathcal{I}(G) = \mathcal{I}$. If instead we are given a distribution $P$, then $G$ is a perfect I-map of the distribution if and only if $\mathcal{I}(G) = \mathcal{I}(P)$. Finding a perfect I-map is considerably more involved, and we omit it here.
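The minimal I-map construction above can be sketched directly. Everything here is illustrative: the ordering $X_1, X_2, X_3$ is assumed, the joint is a made-up chain distribution, and the conditional-independence test is a brute-force check against that joint:

```python
import itertools

# Illustrative joint over binary X1, X2, X3, built as a chain X1 -> X2 -> X3.
P = {}
for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    p1 = 0.5
    p2 = 0.8 if x2 == x1 else 0.2
    p3 = 0.7 if x3 == x2 else 0.3
    P[(x1, x2, x3)] = p1 * p2 * p3

names = ['X1', 'X2', 'X3']

def marginal(assn):
    # Pr of a partial assignment {variable index: value}
    return sum(p for xs, p in P.items()
               if all(xs[i] == v for i, v in assn.items()))

def indep(i, rest, cond):
    # test (X_i _|_ rest | cond) via P(x,y,z) P(z) == P(x,z) P(y,z)
    for xs in itertools.product([0, 1], repeat=3):
        c = {j: xs[j] for j in cond}
        r = {j: xs[j] for j in rest}
        lhs = marginal({i: xs[i], **r, **c}) * marginal(c)
        rhs = marginal({i: xs[i], **c}) * marginal({**r, **c})
        if abs(lhs - rhs) > 1e-9:
            return False
    return True

# For each X_i in the ordering, take a smallest U among the predecessors
# with (X_i _|_ {X_1..X_{i-1}} - U | U); U becomes pa(X_i).
parents = {}
for i in range(3):
    preds = list(range(i))
    best = preds
    for size in range(len(preds) + 1):
        hits = [U for U in itertools.combinations(preds, size)
                if indep(i, [j for j in preds if j not in U], list(U))]
        if hits:
            best = list(hits[0])
            break
    parents[names[i]] = [names[j] for j in best]

print(parents)  # recovers the chain: X3's only parent is X2
```

Since the joint really does factorize as a chain, the construction finds $(X_3 \perp X_1 \mid X_2)$ and keeps only the edge from $X_2$, giving a minimal I-map consistent with the assumed ordering.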

——————
And Abraham got up early in the morning to the place where he stood before the LORD: