Question
BA 3551 (002 & 003) Exam 2
Single-choice question
In a decision tree, variables used closer to the root are generally:
Options
A. Less important features
B. More important features
C. Equally important as all others
D. Only relevant for numeric predictions
Answer
B. More important features
Analysis
Question restatement: In a decision tree, variables used closer to the root are generally:
Option A, "Less important features," is incorrect: features near the root typically have a larger impact on the prediction because they split the data earlier and influence more downstream decisions, so calling them less important contradicts how decision trees prioritize their splits. Greedy tree-growing algorithms such as ID3 and CART choose, at each node, the split with the greatest impurity reduction (information gain or Gini decrease), so the most discriminative features tend to appear closest to the root. The correct answer is therefore B: variables used near the root are generally more important features.
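To make this concrete, here is a minimal illustrative sketch (not part of the original explanation) using scikit-learn on synthetic data; the feature names "noise" and "informative" and all data are invented for this example. The informative feature is chosen for the root split and dominates the importance scores:

```python
# Illustrative sketch: a feature that strongly predicts the label is
# selected at the root, while an unrelated feature is ignored.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 500
informative = rng.integers(0, 2, n)   # strongly drives the label
noise = rng.integers(0, 2, n)         # unrelated to the label
X = np.column_stack([noise, informative])
# Label equals the informative feature, with roughly 10% of rows flipped.
y = (informative ^ (rng.random(n) < 0.1)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["noise", "informative"]))
print(tree.feature_importances_)  # "informative" dominates and sits at the root
```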
Similar questions
Decision trees split data to minimize which metric in classification? (See the impurity sketch after this list.)
Which of the following numbers is closest to the best parameter setting for the Decision Tree algorithm on the income data?
Question 11. We will use the dataset below to learn a decision tree which predicts whether a patient has COVID-19 (Yes or No), based on the Temperature (High, Medium, or Low) and whether the patient has a dry cough (Yes or No).

Temp.  | Cough | COVID-19
-------|-------|---------
Low    | No    | No
Low    | Yes   | Yes
Medium | No    | No
Medium | Yes   | Yes
High   | No    | Yes
High   | Yes   | Yes

Assuming that H(COVID-19) = 0.8, Gain(S, Temperature) = 0.3, and Gain(S, Cough) = 0.5, which one of the following would be the full decision tree learnt for this dataset? (Select one. A worked computation of these entropy and gain figures is sketched after this list.)
Question 4. If you use a decision tree to classify the input matrix X for the output, which of the following splits at the root node gives the highest information gain? Assume f1 denotes the first feature, corresponding to the first column of X, and f2 denotes the second feature, corresponding to the second column. (select one)
- f2 > 5
- f1 > 6
- f2 > 2
- f1 > 7
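The last two similar questions (and the Gini question above) come down to computing impurity and information gain by hand. Below is a minimal plain-Python sketch using the six-row COVID-19 table from Question 11. Note that the exam's assumed figures (H = 0.8, gains of 0.3 and 0.5) are rounded; the exact values computed here differ slightly but give the same ranking:

```python
# Entropy, Gini impurity, and information gain for the COVID-19 table above.
from collections import Counter
from math import log2

rows = [  # (Temperature, Cough, COVID-19)
    ("Low", "No", "No"),     ("Low", "Yes", "Yes"),
    ("Medium", "No", "No"),  ("Medium", "Yes", "Yes"),
    ("High", "No", "Yes"),   ("High", "Yes", "Yes"),
]

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def gini(labels):
    n = len(labels)
    return 1 - sum((c / n) ** 2 for c in Counter(labels).values())

def info_gain(rows, col):
    labels = [r[2] for r in rows]
    remainder = 0.0
    for value in {r[col] for r in rows}:
        subset = [r[2] for r in rows if r[col] == value]
        remainder += len(subset) / len(rows) * entropy(subset)
    return entropy(labels) - remainder

labels = [r[2] for r in rows]
print(f"H(COVID-19)          = {entropy(labels):.3f}")      # ~0.918
print(f"Gini(COVID-19)       = {gini(labels):.3f}")         # ~0.444
print(f"Gain(S, Temperature) = {info_gain(rows, 0):.3f}")   # ~0.251
print(f"Gain(S, Cough)       = {info_gain(rows, 1):.3f}")   # ~0.459
```

Since Gain(S, Cough) > Gain(S, Temperature), Cough becomes the root test: its "Yes" branch is pure (all COVID-19 = Yes), while the "No" branch splits further on Temperature (High leads to Yes; Low and Medium lead to No).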