Advanced Artificial Intelligence Homework 1
Date: 2024-04-07 21:40:29 · Source: 网络cs · Author: 亙句
Homework Exercises
1. In two-player adversarial search, for a MIN node whose MAX ancestor has already achieved α (shown at the upper-left site), we can prune that MIN node (at the lower-right site) and all its children when α is __________ than n. Symmetrically, for a MAX node whose MIN ancestor has achieved β at the upper-left site, we can prune that MAX node (at the lower-right site) and all its children when β is __________ than n.
Answer: Greater, Less. (A MIN node with current value n can be cut off once α ≥ n, since the MAX ancestor already has an option worth at least α; symmetrically, a MAX node is cut off once β ≤ n.)
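As a sketch of how those comparisons trigger cutoffs, here is a minimal alpha-beta search over a hypothetical game tree (nested lists with integer leaves, invented for illustration). The cutoff conditions in the code, v ≤ α at a MIN node and v ≥ β at a MAX node, are the standard ones:

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Minimax with alpha-beta pruning; leaves are ints, internal nodes lists."""
    if isinstance(node, int):          # leaf: static evaluation
        return node
    if maximizing:
        v = -math.inf
        for child in node:
            v = max(v, alphabeta(child, alpha, beta, False))
            if v >= beta:              # beta <= v: prune remaining children
                break
            alpha = max(alpha, v)
    else:
        v = math.inf
        for child in node:
            v = min(v, alphabeta(child, alpha, beta, True))
            if v <= alpha:             # alpha >= v: prune remaining children
                break
            beta = min(beta, v)
    return v
```

On the classic three-branch example tree `[[3, 12, 8], [2, 4, 6], [14, 5, 2]]`, the second MIN node is cut off after its first leaf (v = 2 ≤ α = 3), and the root value is 3.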
2.Consider the following search tree produced after expanding nodes A and B, where each arc is labeled with the cost of the corresponding operator, and the leaves are labeled with the value of a heuristic function, h. For uninformed searches, assume children are expanded left to right. In case of ties, expand in alphabetical order.
Which node will be expanded next by each of the following search methods?
Depth-First search: __________
A* search: __________
Answer: E, G
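The search tree from the figure is not reproduced in this text, so the sketch below runs A* (f = g + h, ties broken alphabetically) on a small hypothetical tree; the node names, edge costs, and heuristic values here are invented purely to illustrate the expansion order:

```python
import heapq

# Hypothetical tree: node -> list of (child, edge_cost); h = heuristic values.
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("D", 3), ("E", 1)],
    "C": [], "D": [], "E": [],
}
h = {"A": 4, "B": 2, "C": 1, "D": 0, "E": 0}

def a_star_expansion_order(start):
    """Return nodes in the order A* expands them (f = g + h).

    The heap entries are (f, node, g); since tuples compare element-wise,
    ties on f are broken alphabetically by node name, as the problem assumes.
    """
    frontier = [(h[start], start, 0)]
    order = []
    while frontier:
        f, node, g = heapq.heappop(frontier)
        order.append(node)
        for child, cost in graph[node]:
            heapq.heappush(frontier, (g + cost + h[child], child, g + cost))
    return order
```

On this toy tree the expansion order is A, B, E, D, C: after B, node E wins with f = 2 + 0 = 2 even though D was generated first.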
3.Consider the following Bayesian Network containing four Boolean random variables.
1)Compute P(¬A, B, ¬C, D)
2)Compute P(A | B, C, D)
Answer:
1) P(¬A) = 1 − P(A) = 0.9; P(¬C | ¬A) = 1 − 0.2 = 0.8;
P(¬A, B, ¬C, D) = P(¬A) · P(B) · P(¬C | ¬A) · P(D | ¬A, B) = 0.9 × 0.5 × 0.8 × 0.6 = 0.216
2) P(A | B, C, D) = P(A, B, C, D) / P(B, C, D)
P(A, B, C, D) = P(A) · P(B) · P(C | A) · P(D | A, B) = 0.1 × 0.5 × 0.7 × 0.9 = 0.0315
P(¬A, B, C, D) = P(¬A) · P(B) · P(C | ¬A) · P(D | ¬A, B) = 0.9 × 0.5 × 0.2 × 0.6 = 0.054
P(B, C, D) = P(A, B, C, D) + P(¬A, B, C, D) = 0.0315 + 0.054 = 0.0855
So P(A | B, C, D) = 0.0315 / 0.0855 ≈ 0.368
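Both computations can be checked numerically. The sketch below hard-codes only the CPT entries the worked answer actually uses (B = true and D = true in every query, so the other CPT entries of the network are not needed):

```python
# CPT entries as used in the worked answer above.
P_A = 0.1                              # P(A)
P_B = 0.5                              # P(B)
P_C = {True: 0.7, False: 0.2}          # P(C = true | A = a)
P_D = {True: 0.9, False: 0.6}          # P(D = true | A = a, B = true)

def joint(a, c):
    """P(A = a, B = true, C = c, D = true) via the network's factorization."""
    pa = P_A if a else 1 - P_A
    pc = P_C[a] if c else 1 - P_C[a]
    return pa * P_B * pc * P_D[a]

p1 = joint(False, False)                                         # P(¬A, B, ¬C, D)
posterior = joint(True, True) / (joint(True, True) + joint(False, True))
```

Running this gives p1 = 0.216 and posterior ≈ 0.368, matching the hand calculation.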
4. If the Mother relation between James and Ann in Figure 2.1 is removed and a Father relation between David and Ann is added, Figure 2.14 is obtained. Use the FOIL algorithm to infer the Mother relation between James and Ann.
Answer:
| Background-knowledge examples | Training examples for the target predicate |
|---|---|
| Sibling(Ann, Mike) | Mother(James, Mike) |
| Couple(James, David) | ¬Mother(James, David) |
| Father(David, Mike) | ¬Mother(David, Mike) |
| Father(David, Ann) | ¬Mother(David, Ann) |
| | ¬Mother(Ann, Mike) |
In this example, m+ = 1 and m− = 4.
If Couple(x, z) is added as a precondition literal, we obtain the candidate rule Couple(x, z) → Mother(x, y). In the background knowledge, Couple(x, z) has only one instance, Couple(James, David), so x = James and z = David; substituting into Mother(x, y) gives Mother(James, y). The training set contains the positive example Mother(James, Mike) and the negative example ¬Mother(James, David), so Couple(x, z) → Mother(x, y) covers 1 positive and 1 negative example, and its information gain is 1 × (log₂(1/2) − log₂(1/5)) ≈ 1.32.
Having obtained the new rule **Couple(x, z) → Mother(x, y)**, we remove the training examples that do not match it: with x = James, the examples ¬Mother(David, Mike), ¬Mother(David, Ann), and ¬Mother(Ann, Mike) cannot be matched and are discarded.
The training set now contains only two examples: the positive example Mother(James, Mike) and the negative example ¬Mother(James, David).
Adding Father(z, y) now yields the largest information gain, so it is added, giving the new rule
Father(z, y) ∧ Couple(x, z) → Mother(x, y).
With x = James, z = David, and y = Mike, this rule covers the positive example Mother(James, Mike) in the training set and covers no negative example, so learning terminates.
Given:
Father(David, Ann)
Couple(James, David)
we conclude: Mother(James, Ann)
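The gain value 1.32 quoted above follows from the standard FOIL gain formula; a minimal numeric check:

```python
import math

def foil_gain(t, p0, n0, p1, n1):
    """FOIL information gain: t * (log2(p1/(p1+n1)) - log2(p0/(p0+n0))).

    t is the number of positive examples still covered after adding the
    candidate literal; (p0, n0) and (p1, n1) are the positive/negative
    counts covered before and after.
    """
    return t * (math.log2(p1 / (p1 + n1)) - math.log2(p0 / (p0 + n0)))

# Adding Couple(x, z): coverage goes from 1 pos / 4 neg to 1 pos / 1 neg.
gain = foil_gain(1, 1, 4, 1, 1)
```

This evaluates to 1 × (−1 − (−2.3219)) ≈ 1.32, as stated.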
5. Figure 2.15 shows the dependencies among the variables. Write out the joint probability of the 8 variables in the causal graph H, and state which variables are endogenous and which are exogenous.
Answer:
P(X1, X2, X3, X4, X5, X6, X7, X8)
= P(X1) · P(X2 | X4, X5) · P(X3 | X1) · P(X4 | X1, X3, X5) · P(X5) · P(X6 | X3) · P(X7 | X4, X6, X8) · P(X8 | X5)
X2, X3, X4, X6, X7, and X8 are endogenous variables; X1 and X5 are exogenous variables.
6. Give five possible conditioning sets Z1 that block node X6 and node X8 in the causal graph H of Figure 2.15.
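No answer for this question appears in the source. As a sketch, the graph structure can be read off the factorization given in question 5 (assuming the figure matches it), and candidate sets Z1 can then be checked with a small path-based d-separation test:

```python
# Graph read off the factorization in question 5: child -> list of parents.
parents = {
    "X1": [], "X5": [],
    "X2": ["X4", "X5"], "X3": ["X1"],
    "X4": ["X1", "X3", "X5"], "X6": ["X3"],
    "X7": ["X4", "X6", "X8"], "X8": ["X5"],
}
children = {v: [c for c, ps in parents.items() if v in ps] for v in parents}

def descendants(v):
    """All strict descendants of v."""
    seen, stack = set(), [v]
    while stack:
        for c in children[stack.pop()]:
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def undirected_paths(a, b):
    """All simple paths between a and b, ignoring edge direction."""
    adj = {v: set(parents[v]) | set(children[v]) for v in parents}
    out, stack = [], [[a]]
    while stack:
        path = stack.pop()
        if path[-1] == b:
            out.append(path)
            continue
        for n in adj[path[-1]]:
            if n not in path:
                stack.append(path + [n])
    return out

def path_blocked(path, Z):
    for i in range(1, len(path) - 1):
        prev, v, nxt = path[i - 1], path[i], path[i + 1]
        collider = prev in parents[v] and nxt in parents[v]
        if collider:
            if v not in Z and not (descendants(v) & Z):
                return True   # closed collider blocks the path
        elif v in Z:
            return True       # conditioned chain/fork node blocks the path
    return False

def d_separated(a, b, Z):
    return all(path_blocked(p, Z) for p in undirected_paths(a, b))
```

Under this structure every subset of {X1, X3, X5} blocks X6 and X8 (so, e.g., ∅, {X1}, {X3}, {X5}, and {X1, X3} would be five valid choices), whereas conditioning on the collider X7, or on X4 or its descendant X2 alone, opens a path and does not block.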
Original link: https://www.kjpai.cn/gushi/2024-04-07/155309.html