  • Decision tree model


    • A decision tree is a basic model for classification and regression.
    • During learning, a decision tree is built from the training data according to the principle of loss-function minimization.
    • During prediction, the fitted tree is used to classify new data.
    • Decision tree learning usually involves three steps: feature selection, tree generation, and tree pruning.
    • Common decision tree algorithms include ID3 (information gain), C4.5 (information gain ratio), and CART; a minimal scikit-learn sketch follows below.
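    A minimal sketch, assuming scikit-learn is available: fitting a CART-style classification tree with DecisionTreeClassifier. The criterion parameter switches between Gini impurity (CART's measure) and entropy (the information-gain criterion used by ID3/C4.5); the data set and hyperparameter values here are illustrative only.

    # Minimal sketch: a CART-style classification tree on the iris data.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # criterion="gini" is the default; criterion="entropy" uses information gain instead.
    clf = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))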
  • Decision tree model / naive Bayes model — Decision Trees are one of the highly interpretable models and can perform both classification and regression tasks. As the name suggests, Decision Trees are tree-like structure...

    Decision tree model / naive Bayes model

    Decision Trees are among the most interpretable models and can perform both classification and regression tasks. As the name suggests, a Decision Tree is a tree-like model that resembles an upside-down tree. At this point you might ask: we already have classical machine learning models such as linear regression and logistic regression for regression and classification, so why do we need another model like the Decision Tree? The answer is that classical linear models require the training data to be free of irregularities: missing values and outliers need to be handled and multicollinearity needs to be addressed, so a lot of data preprocessing is required up front. Decision Trees, by contrast, need hardly any preprocessing beforehand; they are robust enough to handle these problems while reaching a decision. They can also handle non-linear data that classical linear models fail to capture, and they are versatile enough to perform both regression and classification. The full set of advantages and disadvantages of Decision Trees is discussed in the latter part of this article. Before that, let's start understanding Decision Trees.

    Decision Trees build the tree by asking the data a series of questions until a decision is reached; in this sense they mimic the human decision process. During tree building, the entire data set is divided into smaller and smaller subsets until a decision can be made. Let's first look at a few terms associated with Decision Trees.

    A few terminologies in Decision Trees:

    Root Node: the topmost node of the tree. All of the data is present at the Root Node, and the arrows in a decision tree generally point away from it.

    Leaf Node (Terminal Node): a node that cannot be split any further. The decisions or predictions are held by the leaf nodes, and the arrows in a decision tree generally point towards them.

    Internal Node (Decision Node): any node between the root node and the leaf nodes. These nodes can be split further into sub-nodes.

    Please refer to the image below for a better understanding of the terminology above.

    [Figure: decision tree terminologies]

    Decision Trees are considered highly interpretable because of their tree-like structure. To interpret a Decision Tree, we traverse down the tree, checking the condition associated with each node until we reach a decision. Here a decision simply means a prediction: a class label for a classification task, or a value for a regression task. By interpreting the tree we learn which features lead to a particular decision, although, unlike with a linear model, we cannot read off the linear relationship between a feature and the target or its directional effect. If interpretability is the main concern, Decision Trees are near the top of the list. On the whole, a Decision Tree can be read as a set of logical IF/ELSE statements, with the AND operator connecting the condition at a node to the conditions of its ancestors; the sketch below shows this rule-based reading.
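    A small sketch, assuming scikit-learn: export_text prints a fitted tree as nested if/else-style rules. The tiny iris tree here is only for illustration.

    # Sketch: printing a fitted tree as IF/ELSE-style rules.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)
    # Each printed branch reads as "IF feature <= threshold AND ... THEN class".
    print(export_text(clf, feature_names=list(iris.feature_names)))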

    Having understood how to interpret a Decision Tree, let's look at the high-level tree-building process.

    The steps involved in the tree-building process are as follows:

    1. Recursively partition the data into multiple subsets.
    2. At each node, identify the variable, and the rule associated with that variable, which gives the best split.
    3. Apply the split at that node using the best variable and the rule defined for it.
    4. Repeat steps 2 and 3 on the sub-nodes.
    5. Repeat this process until a stopping condition is reached.
    6. Assign the decision at each leaf node: the majority class label at that node for a classification task, or the average of the target values at that node for a regression task.

    A schematic version of this recursive loop is sketched below.
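    A schematic sketch of the loop above, not a production implementation: it assumes a hypothetical best_split(X, y) helper that scores candidate splits with an impurity measure and returns (feature_index, threshold) or None.

    # Schematic recursive partitioning (steps 1-6 above).
    from collections import Counter

    def build_tree(X, y, depth=0, max_depth=5, min_samples=2):
        # Stopping conditions: pure node, too few samples, or depth limit reached.
        if len(set(y)) == 1 or len(y) < min_samples or depth >= max_depth:
            return {"leaf": True, "prediction": Counter(y).most_common(1)[0][0]}
        split = best_split(X, y)          # step 2: hypothetical helper picks feature + rule
        if split is None:
            return {"leaf": True, "prediction": Counter(y).most_common(1)[0][0]}
        feature, threshold = split
        left = [i for i, row in enumerate(X) if row[feature] <= threshold]
        right = [i for i, row in enumerate(X) if row[feature] > threshold]
        return {                          # steps 3-4: apply the split, recurse on the sub-nodes
            "leaf": False, "feature": feature, "threshold": threshold,
            "left": build_tree([X[i] for i in left], [y[i] for i in left], depth + 1, max_depth, min_samples),
            "right": build_tree([X[i] for i in right], [y[i] for i in right], depth + 1, max_depth, min_samples),
        }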

    There are different tree-building algorithms such as CART, CHAID, ID3, C4.5, and C5.0. They differ mainly in the criterion used to select the feature that provides the best split: CART uses the Gini Index impurity measure, ID3 uses Information Gain, C4.5 uses the Gain Ratio, and so on for the other algorithms. The overall tree-building procedure, however, remains the same as described above.

    At this point you might have questions such as: how do we select the feature that provides the best split, how do we define the rule associated with that feature, and what is the stopping condition? These questions are answered in the latter part of this article.

    A few things to note about Decision Tree building: Decision Trees follow a top-down approach and are said to be greedy. They are greedy because at every split they only consider the immediate result of that split; they do not consider the effect a split might have two or three nodes further down. One important implication of this greedy approach is that it makes Decision Trees high-variance models: a small change in the input data can result in a completely different tree structure and different final decisions.

    With this high-level understanding of Decision Trees and their model-building process, let's address, one by one, the questions that came up along the way.

    How do we select the feature that provides the best split at a node?

    Before addressing this question we need some understanding of the homogeneity associated with a node in the classification setting; the same notion carries over to the regression setting. As the name suggests, homogeneity refers to things of the same kind, and the definition extends to Decision Trees: in a classification task, a node is said to be homogeneous if all the class labels at that node belong to a single class. In a regression task, we instead speak in terms of the variance associated with a node.

    In classification, the term best split refers to obtaining sub-nodes (child nodes) that are as homogeneous as possible when a parent node is split: ideally, the target labels of the data points in each sub-node all belong to a single class. In regression, the best split is the one that produces low-variance sub-nodes; computing the Mean Squared Error at a node tells us about the variance of the data points there. Let's now focus on how to identify the feature, together with a rule, whose split of a node gives the best result.

    In a classification task, the feature selected for the split is the one that creates the largest difference in impurity (purity gain) upon splitting, i.e. the one that best separates the class labels of the target variable and yields sub-nodes that are as homogeneous as possible. This raises the question of how to quantify the homogeneity of a node. We do so with impurity measures; popular metrics are Classification Error, the Gini Index, and Entropy. Since these metrics measure the impurity of a node, the lower the metric value, the higher the homogeneity of the node. Let's look at the impurity measures in detail.

    Impurity measures for quantifying the homogeneity of a node:

    Classification Error is the error made when every data point at a node is assigned the majority class label. After computing the probability of each class label, we identify the majority class and assign it to all data points at the node; the data points belonging to the lower-probability classes are thereby misclassified.

    The Gini Index measures the chance that a randomly chosen data point at the node would be misclassified. Its value ranges between 0 and 0.5 for two classes. The lower the Gini Index, the smaller the chance that a random data point is misclassified, which helps assign decisions to leaf nodes without ambiguity. If all data points at a node belong to a single class, the Gini Index is 0, since the node is completely homogeneous. If the classes are equally distributed, the Gini Index reaches its maximum of 0.5, since the class labels are completely ambiguous and the node is highly non-homogeneous.

    Entropy, a concept borrowed from thermodynamics and Information Theory, measures the degree of disorder in the class labels at a node. The lower the entropy, the lower the disorder. Its value ranges between 0 and 1 for two classes. If all data points belong to a single class, the entropy takes its minimum value of 0, since there is no disorder in the class labels. If the class labels are equally distributed, the entropy reaches its maximum of 1, since the class labels are in complete disorder and the node is completely non-homogeneous.

    The formulas for Classification Error, the Gini Index, and Entropy are as follows:

    Classification Error = 1 − max_i(p_i)
    Gini Index = 1 − Σ_i p_i²
    Entropy = −Σ_i p_i · log₂(p_i)

    where p_i is the probability of a data point belonging to the i-th class label, and the sums run over the k distinct class labels.
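    A small sketch of the three formulas above, computed from the class probabilities p_i at a node (the example probabilities are illustrative):

    # Sketch: impurity measures from a node's class-probability vector.
    import numpy as np

    def classification_error(p):
        return 1.0 - float(np.max(p))

    def gini_index(p):
        return 1.0 - float(np.sum(np.asarray(p) ** 2))

    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]                      # skip zero probabilities to avoid log2(0)
        return float(-np.sum(p * np.log2(p)))

    probs = [0.5, 0.5]                    # a perfectly mixed two-class node
    print(classification_error(probs))    # 0.5
    print(gini_index(probs))              # 0.5  (maximum for two classes)
    print(entropy(probs))                 # 1.0  (maximum for two classes)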

    The Gini Index and Entropy are used more widely than Classification Error for measuring the homogeneity of a node, because Classification Error is less sensitive than the other two metrics. Numerically, the Gini Index and Entropy behave very similarly: if we plot the scaled entropy (Entropy/2), its curve almost touches the Gini Index curve. See the image below for a comparison:

    [Figure: variation of the impurity measures]

    Hence, to select the feature that provides the best split, we look for the split whose sub-nodes have a low value of one of the impurity measures, or equivalently the split that creates the largest difference between the impurity before and after splitting, i.e. the maximum purity gain. To compute this difference, we first compute the impurity of the node before splitting, then compute the weighted average impurity of the sub-nodes obtained by splitting the node on a candidate feature with its associated rule; the difference between the two is the purity gain. We aim for a high purity gain, since it yields more homogeneous sub-nodes, and among all features we select the one whose split creates the highest purity gain. A small sketch of this computation follows.
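    A minimal sketch of the purity-gain computation described above, using the Gini Index as the impurity measure (the toy labels are illustrative):

    # Sketch: purity gain = impurity(parent) - size-weighted impurity of the children.
    import numpy as np

    def gini_index(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - float(np.sum(p ** 2))

    def purity_gain(parent, children):
        n = len(parent)
        weighted = sum(len(c) / n * gini_index(c) for c in children)
        return gini_index(parent) - weighted

    parent = ["yes"] * 5 + ["no"] * 5                        # Gini 0.5
    children = [["yes"] * 4 + ["no"], ["yes"] + ["no"] * 4]  # each child has Gini 0.32
    print(purity_gain(parent, children))                     # 0.5 - 0.32 = 0.18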

    The same reasoning extends to the regression setting, where we select the feature whose split produces the lowest-variance sub-nodes.

    Let's move on to the next question: how do we identify the rule associated with a feature for splitting a node?

    How do we identify the rule associated with a feature to split a node?

    To answer this question we will restrict the discussion to the CART tree-building algorithm and explore different ways of defining the rule for a feature in order to split a node.

    When building a tree with the CART algorithm, every node is split into two parts, i.e. CART performs a binary split at every node. Another property of CART is that each split uses a univariate condition: only one rule, associated with a single feature, is used to split a node; rules combining multiple features are not used. For multi-way splits there are other popular algorithms such as ID3, C4.5, C5.0, and CHAID.

    Let's come back to the question of how to define the rule, associated with a feature, that provides the best split. If the predictor is a nominal categorical variable with k categories, the number of possible binary splits is 2^(k-1) − 1; among them, the split that results in the most homogeneous sub-nodes is chosen. If the predictor is an ordinal categorical variable with n categories, the number of possible splits is n − 1, and again the split producing the most homogeneous sub-nodes is chosen. If the predictor is continuous or numerical, discretization techniques are used to derive the rule. One such technique is to sort the values in ascending order and evaluate a split at each candidate value, keeping the one that provides the best split, as sketched below. Other discretization methods, such as splitting at the mean or at percentiles, can also be used to define the rule for a numerical feature.
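    A sketch of the numeric-feature discretization just described, under the simplifying assumption that candidate thresholds are the midpoints between consecutive sorted unique values and that splits are scored by Gini purity gain:

    # Sketch: pick the best binary threshold for one numeric feature.
    import numpy as np

    def gini(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - float(np.sum(p ** 2))

    def best_threshold(values, labels):
        values, labels = np.asarray(values, dtype=float), np.asarray(labels)
        uniq = np.unique(values)
        candidates = (uniq[:-1] + uniq[1:]) / 2.0            # midpoints between sorted values
        best_t, best_gain, parent = None, -1.0, gini(labels)
        for t in candidates:
            left, right = labels[values <= t], labels[values > t]
            weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if parent - weighted > best_gain:
                best_t, best_gain = t, parent - weighted
        return best_t, best_gain

    print(best_threshold([1.0, 1.2, 3.1, 3.3], ["a", "a", "b", "b"]))   # (2.15, 0.5)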

    Finally, among all the features, each paired with its best rule, the single feature whose split generates the greatest purity gain is selected to split the node.

    Having understood the tree-building process and the concepts around it, let's answer the last question: what is the stopping condition?

    What is the stopping condition?

    In general, the tree-building process continues until all the features have been exhausted for splitting, or until all the leaf nodes have been formed such that each leaf node contains only a minimal number of training data points.

    If the tree is allowed to grow to its complete logical end, it becomes a high-variance model: it overfits the training data by effectively memorizing every training point. Once that happens, a small change in the training data can alter the entire tree structure, and consequently all the decisions associated with the leaf nodes may change. Such trees cannot be trusted for making decisions.

    Stopping conditions can also be defined by assigning values to hyperparameters via hyperparameter tuning. Hyperparameters are settings chosen by the modeller during model building; the learning algorithm takes them into account before producing the final model, and the model cannot identify them implicitly on its own. To find good values we perform hyperparameter tuning. Some of the hyperparameters that control tree growth (in scikit-learn's naming) are max_depth, max_features, min_samples_leaf, min_samples_split, and criterion. By setting these hyperparameters we can keep the tree from overfitting; a small tuning sketch follows. Let's now look at the methods used to control overfitting of trees.
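    A minimal sketch of tuning the growth-controlling hyperparameters listed above with a plain grid search (the grid values are arbitrary examples):

    # Sketch: grid-searching pre-pruning hyperparameters of a decision tree.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import GridSearchCV
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    param_grid = {
        "max_depth": [2, 3, 4, None],
        "min_samples_leaf": [1, 3, 5],
        "min_samples_split": [2, 5, 10],
        "criterion": ["gini", "entropy"],
    }
    search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5)
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 3))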

    Methods to control overfitting of trees:

    Decision Trees have a high chance of overfitting the training data, which makes them high-variance models. To avoid overfitting we use two families of methods:
    - Tree truncation, or pre-pruning strategies
    - Post-pruning strategies

    Tree truncation: also called pre-pruning, because the tree is kept from overfitting during model building itself. One naive truncation strategy for classification is to define a threshold homogeneity value and compare every node against it before splitting: if the node's homogeneity is below the threshold, the node is split further; if it is above the threshold, the node is converted into a leaf node. Other truncation strategies control overfitting by setting the hyperparameters mentioned above during model building.

    Post-pruning: here we allow the tree to grow to its complete logical end and then prune it from the bottom up. Popular post-pruning methods are Reduced Error Pruning and Cost Complexity Pruning. Nodes are pruned as long as doing so does not reduce the purity gain; a cost-complexity pruning sketch is shown below the figure.

    [Figure: post-pruning]
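    A sketch of post-pruning with scikit-learn's minimal cost-complexity pruning: cost_complexity_pruning_path yields candidate ccp_alpha values, and larger alphas prune more of the tree. The data split and random_state are illustrative.

    # Sketch: cost-complexity (post) pruning of a decision tree.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)
    for alpha in path.ccp_alphas:
        pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_tr, y_tr)
        print(f"alpha={alpha:.4f}  leaves={pruned.get_n_leaves()}  test acc={pruned.score(X_te, y_te):.3f}")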

    Generally, tree truncation strategies are preferred over post-pruning, since post-pruning first grows the full tree and then discards much of it, which can be overkill.

    We have now answered all the questions that came up during the tree-building process. Let's finish this article by discussing the advantages and disadvantages of Decision Trees.

    Advantages of Decision Trees:

    1. Versatile: Decision Trees can be used to build both classification and regression models.

    2. Fast: once the hyperparameters have been chosen by tuning, the tree-building process is significantly fast.

    3. Minimal data preprocessing: little preprocessing such as scaling or outlier treatment is needed.

    4. Easily interpretable: a Decision Tree reads like a flow chart and can be interpreted without any mathematical background.

    5. Able to handle non-linear relationships: a non-linear relationship between a predictor and the target can be captured by segmenting the data into smaller subsets and assigning a single decision to each subset via a leaf node.

    6. Handles multicollinearity: among a group of highly correlated features, a Decision Tree will use only one of them for node splitting, since there is nothing to gain from also considering the others.

    7. Non-parametric model: Decision Trees make no assumptions about the distributions of the predictor variables or of the prediction errors, hence they are said to be non-parametric.

    8. Feature importance: after building a Decision Tree we can obtain feature importances, which tell us which features contributed most to predicting the target variable. Knowing these importances, we can perform model-based dimensionality reduction by keeping only the significant features (see the sketch below).
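    A small sketch of reading impurity-based feature importances from a fitted tree (advantage 8), e.g. as a basis for model-based feature selection:

    # Sketch: feature importances of a fitted decision tree.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    iris = load_iris()
    clf = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)
    for name, score in sorted(zip(iris.feature_names, clf.feature_importances_),
                              key=lambda pair: pair[1], reverse=True):
        print(f"{name}: {score:.3f}")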

    Disadvantages of Decision Trees:

    1. Loss of inference: a Decision Tree tells us the decision associated with a data point and the factors that led to the decision held by a leaf node, but it does not give us the linear relationship between the predictors and the target, so we cannot draw inferences about the population.

    2. Loss of the numerical nature of a variable: when a numerical variable is used in tree building, an entire range of its values is assigned a single prediction, so not all of the information contained in the numerical variable is used.

    3. Overfitting: if the tree is allowed to grow to its complete logical end, it overfits the training data. Overfitting can be controlled with the methods mentioned above, but left uncontrolled it is an inherent problem of Decision Trees.

    Translated from: https://medium.com/@varunimmidi/overview-about-the-decision-tree-model-267c870fa147

  • I. Decision tree model  II. Decision tree model example  III. Decision tree algorithms  IV. Decision tree algorithm example  V. Performance requirements for decision tree algorithms  VI. Building the decision tree model (recursive construction)  VII. Choosing the root attribute



    I. Decision tree model

    1. Decision tree: decisions are made on the basis of a "tree" structure, which mimics the strategy people use when making decisions.

    2. Components of a decision tree: the root node, internal nodes, and leaf nodes; these nodes are defined over the attributes (features) of the data.

    ① Root node: the attribute tested first; its decision region is the entire data set.

    ② Internal node: an intermediate attribute test; its decision region is the subset of the data matching certain feature values.

    ③ Leaf node: a decision result, located at the bottom of the tree; every leaf node is a decision result.

    3. The decision tree modelling process:

    ① Training: use the training data to determine which attributes are used for decisions, i.e. the attribute assignments of the root node, internal nodes, and leaf nodes, thereby training the decision tree model.

    ② Prediction: starting from the root node's attribute, follow the sequence of decisions in the tree downwards until a leaf node is reached.



    II. Decision tree model example

    1. Scenario:

    ① Requirement: an e-commerce site wants to classify its users in order to determine whether a user is likely to buy a certain product, and then push ads for that product to them.

    ② Use of the decision tree: the decision tree model is used to classify the users into different categories.

    2. Data set: during the decision process, the data handled at each node is divided among its child nodes according to the node's feature test; for example, the data set may contain the information of 100 users.


    3. Structure of the decision tree:

    ① Root node decision: the root node tests the age feature; users younger than 30 go into one group, users older than 30 into another.

    ② Internal node decision: among the users younger than 30, a further test splits students into one group and non-students into another.

    ③ Leaf node decisions: students will buy the computer, non-students will not (a small traversal sketch of this tree follows).
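    A minimal sketch of walking such a tree from the root to a leaf; the nested-dict representation and the branches for the other age groups are hypothetical and only illustrate the traversal:

    # Sketch: predicting by traversing a (hypothetical) nested-dict decision tree.
    def predict(node, sample):
        # Internal nodes test one attribute; leaf nodes carry the decision.
        while not node["leaf"]:
            node = node["branches"][sample[node["attribute"]]]
        return node["prediction"]

    tree = {
        "leaf": False, "attribute": "age",
        "branches": {
            "<30":   {"leaf": False, "attribute": "student",
                      "branches": {"yes": {"leaf": True, "prediction": "buys"},
                                   "no":  {"leaf": True, "prediction": "does not buy"}}},
            "31-39": {"leaf": True, "prediction": "buys"},       # assumed outcome for illustration
            ">=40":  {"leaf": True, "prediction": "buys"},       # assumed outcome for illustration
        },
    }
    print(predict(tree, {"age": "<30", "student": "yes"}))       # buys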




    III. Decision tree algorithms

    1. Commonly used decision tree algorithms:

    ① CLS: the first decision tree algorithm, proposed in 1966.

    ② ID3: the algorithm that made decision trees a mainstream machine learning technique, proposed in 1979.

    ③ C4.5: the most commonly used decision tree algorithm, proposed in 1993.

    ④ Differences: the components of these three algorithms are basically the same; the only difference lies in the strategy for choosing the attribute split, i.e. which attribute is placed at the root, which attributes are placed at internal nodes, and at which level of the tree.

    2. Attribute selection strategies:

    ① ID3 uses the information gain strategy.

    ② C4.5 uses the gain ratio strategy.

    3. CART: can be used both for classification tasks (discrete outputs) and for regression tasks (continuous outputs).

    4. RF: the Random Forest algorithm, which uses the ensemble idea from data mining and machine learning: many weak classifiers, each with low accuracy, are combined into an ensemble whose accuracy is high.



    IV. Decision tree algorithm example

    1. Scenario:

    ① Requirement: an e-commerce site wants to classify its users in order to determine whether a user is likely to buy a certain product, and then push ads for that product to them.

    ② Use of the decision tree: the decision tree model classifies the users into two categories, those who will buy and those who will not.

    2. Sample data set: the following data set is the basis for all the calculations below.

    Requirement: predict whether a user will buy the product from age, income level, student status, and credit rating.


    Age          Income   Student   Credit      Buys the product?
    under 30     high     no        fair        no
    under 30     high     no        excellent   no
    31-39        high     no        fair        yes
    40 and over  medium   no        fair        yes
    40 and over  low      yes       fair        yes
    40 and over  low      yes       excellent   no
    31-40        low      no        excellent   yes
    under 30     medium   no        fair        no
    under 30     low      yes       fair        yes
    40 and over  medium   yes       fair        yes
    under 30     medium   yes       excellent   yes
    31-39        medium   no        excellent   yes
    31-39        high     yes       fair        yes
    40 and over  medium   no        excellent   no

    3. The decision tree model:

    Building the model: convert the attributes (features) of the data set above into a tree-shaped model.

    Choosing the root: first we must decide which attribute becomes the root of the tree; this choice has requirements and cannot be an arbitrary feature.

    4. Attribute selection for the decision tree:

    Attribute selection strategy: following some strategy, decide which attribute becomes the root; then, within each subtree, decide which of the remaining attributes becomes the root of that subtree. This is a recursive problem.

    Algorithmic nature of attribute selection: recursion.

    How to decide the root attribute: choose the root of the whole tree, and of each subtree, so that the number of decisions made on the data's attributes (features) is as small as possible.




    V. Performance requirements for decision tree algorithms

    1. Height of the decision tree:

    ① Maximum height: the number of decision attributes (every attribute must be tested once before a prediction can be made).

    ② Minimum height: 1 (only one decision is needed to make a prediction).

    2. Decision tree performance: the shorter the tree, the better, i.e. the fewer decisions needed to make a prediction, the better.

    3. Root attribute: the more important an attribute is, the better it separates the data, so the most important attribute is placed at the root.



    VI. Building the decision tree model (recursive construction)

    1. Building the model: the core of building a decision tree is choosing a suitable root. Place the most important attribute at the root; then, within each subtree, choose the most important remaining attribute as that subtree's root, recursing until the decision results (leaf nodes) are reached.

    2. Tree construction algorithm (recursive): the recursive algorithm consists of a recursive step and stopping conditions.

    3. Recursive step: at each step, first select an attribute, then use the root attribute of the (sub)tree to partition the training set.

    ① Attribute selection: recursively decide the attribute of each node from the top down, constructing the decision tree level by level.

    ② Data set partitioning: at the start of the decision process all the data is at the root; the root attribute partitions the data set.

    ③ Attribute discretization: if an attribute takes continuous values, it must be discretized; for example, on a 100-point scale, scores below 60 are grouped as "fail" and scores of 60 and above as "pass".

    4. Stopping conditions for the recursion:

    ① Subtree classification finished: all the data at a node belongs to the same class; the node is not split further and becomes a leaf node.

    ② All attributes used: every attribute has been assigned, so the height of the tree equals the number of attributes.

    ③ All samples classified: every sample in the data set has been classified.



    VII. Choosing the root attribute

    1. Attribute selection methods: there are many ways to choose the root attribute; a common one, information gain, is introduced here.

    2. Information gain: the larger the information gain of an attribute, the more clearly that attribute separates the data set into classes when used as the root.

    3. Information and entropy: these concepts come from information theory (an information theory course, e.g. on Bilibili, is recommended for background).

    ① Relationship between information and entropy: information removes entropy; entropy represents uncertainty, and information is what removes that uncertainty.

    ② Information gain: an attribute with a large information gain removes the most uncertainty (entropy).


    4. Information gain in decision trees: the larger an attribute's information gain, the better the classification achieved by splitting on it.

    For example, to find the users in a data set who can afford luxury goods, first separating out the high-income group and discarding the low-income users shows that the income attribute (feature) has a large information gain. A worked entropy and information-gain computation on the data set from section IV is sketched below.
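    A small worked sketch of entropy and information gain for the age attribute of the 14-row data set in section IV (9 buyers, 5 non-buyers); the class counts per age group are read off that table:

    # Sketch: entropy of the data set and information gain of the "age" attribute.
    from math import log2

    def entropy(pos, neg):
        total = pos + neg
        result = 0.0
        for count in (pos, neg):
            if count:
                p = count / total
                result -= p * log2(p)
        return result

    H_dataset = entropy(9, 5)                                       # ≈ 0.940 bits
    # age groups: under 30 -> 2 buy / 3 not, 31-39 -> 4 / 0, 40 and over -> 3 / 2
    groups = [(2, 3), (4, 0), (3, 2)]
    H_age = sum((p + n) / 14 * entropy(p, n) for p, n in groups)    # ≈ 0.694 bits
    print("information gain of age:", round(H_dataset - H_age, 3))  # ≈ 0.246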

  • GBDT decision tree model development code; see https://blog.csdn.net/iqdutao/article/details/107698851 for a detailed description.
  • The decision tree is a simple yet classic machine learning model. After many rounds of improvement there are now many mature tree models, such as the early ID3 algorithm and today's C4.5 and CART models. One big advantage of decision trees is their relatively strong interpretability (relative to other models, of course): after training, the model can be plotted so that you can see exactly which attribute is used at each split node. What follows is a small experiment from my master's thesis work, organized here as a learning record; the concrete implementation is below:

    #!/usr/bin/env python
    # encoding: utf-8
    '''
    __Author__: 沂水寒城
    Purpose: use a decision tree model to analyse and predict the iris data,
             and draw the fitted DT model.
    '''
    
    import os
    import csv
    from io import StringIO                      # StringIO object used to hold the dot data
    import pydotplus
    from sklearn.tree import DecisionTreeClassifier, export_graphviz
    from sklearn.model_selection import train_test_split
    from sklearn.datasets import load_iris
    
    # The Graphviz binaries must be on PATH for pydotplus to render the image.
    os.environ["PATH"] += os.pathsep + 'D:/Program Files (x86)/Graphviz2.38/bin/'
    
    iris = load_iris()
    
    
    def read_data(test_data='fake_result/features_cal.csv', n=1, label=1):
        '''
        Load the data.
        n: index of the first feature column
        label: whether the file contains labelled (supervised) samples
        '''
        csv_reader = csv.reader(open(test_data))
        data_list = []
        for one_line in csv_reader:
            data_list.append(one_line)
        x_list = []
        y_list = []
        label_dict = {'setosa': 0, 'versicolor': 1, 'virginica': 2}
        for one_line in data_list[1:]:
            if label == 1:
                biaoqian = label_dict[one_line[-1]]
                y_list.append(int(biaoqian))                      # class label
                one_list = [float(o) for o in one_line[n:-1]]
                x_list.append(one_list)
            else:
                one_list = [float(o) for o in one_line[n:]]
                x_list.append(one_list)
        return x_list, y_list
    
    
    def split_data(data_list, y_list, ratio=0.30):
        '''
        Split the sample data set with the given ratio.
        ratio: fraction of the data used for testing
        '''
        X_train, X_test, y_train, y_test = train_test_split(data_list, y_list, test_size=ratio, random_state=50)
        print('--------------------------------split_data shape-----------------------------------')
        print(len(X_train), len(y_train))
        print(len(X_test), len(y_test))
        return X_train, X_test, y_train, y_test
    
    
    def DT_model(data='XD_new_encoding.csv', rationum=0.20):
        '''
        Train a decision tree model and export the fitted tree as a PNG image.
        '''
        x_list, y_list = read_data(test_data=data, n=1, label=1)
        X_train, X_test, y_train, y_test = split_data(x_list, y_list, ratio=rationum)
        DT = DecisionTreeClassifier()
        DT.fit(X_train, y_train)
        y_predict = DT.predict(X_test)
        print('DT model accuracy: ', DT.score(X_test, y_test))
        dot_data = StringIO()
        export_graphviz(DT, out_file=dot_data, class_names=iris.target_names, feature_names=iris.feature_names,
                        filled=True, rounded=True, special_characters=True)
        graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
        graph.write_png('iris_result.png')
    
    
    if __name__ == '__main__':
        DT_model(data='iris.csv', rationum=0.30)

    The output is:

    --------------------------------split_data shape-----------------------------------
    105 105
    45 45
    DT model accuracy:  0.9555555555555556
    [Finished in 1.6s]

        Here iris.csv is the iris data set from sklearn; how to save it to CSV was covered in one of my earlier posts, which you can check out if you are interested.

        The DT model looks as follows:

        [Figure: the rendered decision tree, saved as iris_result.png]

        It looks quite nice at a glance and is clear enough on close inspection, which is very helpful for analysing the data in detail.

        Below is the raw dot data produced while building the decision tree graph:

    digraph Tree {
    node [shape=box] ;
    0 [label="X[3] <= 0.8\ngini = 0.666\nsamples = 105\nvalue = [36, 33, 36]"] ;
    1 [label="gini = 0.0\nsamples = 36\nvalue = [36, 0, 0]"] ;
    0 -> 1 [labeldistance=2.5, labelangle=45, headlabel="True"] ;
    2 [label="X[3] <= 1.65\ngini = 0.499\nsamples = 69\nvalue = [0, 33, 36]"] ;
    0 -> 2 [labeldistance=2.5, labelangle=-45, headlabel="False"] ;
    3 [label="X[2] <= 5.0\ngini = 0.157\nsamples = 35\nvalue = [0, 32, 3]"] ;
    2 -> 3 ;
    4 [label="gini = 0.0\nsamples = 31\nvalue = [0, 31, 0]"] ;
    3 -> 4 ;
    5 [label="X[0] <= 6.05\ngini = 0.375\nsamples = 4\nvalue = [0, 1, 3]"] ;
    3 -> 5 ;
    6 [label="gini = 0.0\nsamples = 1\nvalue = [0, 1, 0]"] ;
    5 -> 6 ;
    7 [label="gini = 0.0\nsamples = 3\nvalue = [0, 0, 3]"] ;
    5 -> 7 ;
    8 [label="X[2] <= 4.85\ngini = 0.057\nsamples = 34\nvalue = [0, 1, 33]"] ;
    2 -> 8 ;
    9 [label="X[1] <= 3.1\ngini = 0.375\nsamples = 4\nvalue = [0, 1, 3]"] ;
    8 -> 9 ;
    10 [label="gini = 0.0\nsamples = 3\nvalue = [0, 0, 3]"] ;
    9 -> 10 ;
    11 [label="gini = 0.0\nsamples = 1\nvalue = [0, 1, 0]"] ;
    9 -> 11 ;
    12 [label="gini = 0.0\nsamples = 30\nvalue = [0, 0, 30]"] ;
    8 -> 12 ;
    }
    

        A PDF version of the model graph can be generated as well. The generated PDF is attached below (since the file could not be uploaded directly, an image extension was added to it; simply remove the image extension before use).

        

        

  • Supervised learning includes linear models, decision tree models, Bayesian models, and support vector machines.
  • In complex decision situations, after a company makes a decision it may then face n possible states. The decision tree model is a basic mathematical tool: starting from known information and using logical reasoning, it presents the strategies, probabilities, risks, and payoffs of a problem in a tree-like form, and decides each...
  • I chose a classic data set to show how to build a decision tree model: the Iris data set. It contains my code for data preprocessing, analysis, parameter tuning, model training, and the final analysis of the decision tree.
  • Decision tree model analysis

    Decision tree model analysis # 1. Constructing a decision tree # We build a decision tree on the two-dimensional classification data set shown in Figure 2-23. The data set consists of two half-moon shapes, each class containing 50 data points; we call this data set two_moons. # To construct the decision tree, the algorithm searches over...
  • A decision tree model is essentially a tree composed of multiple decision nodes. A test is made at each node of the tree, so that at the end branches (the leaf nodes) the best judgment about the value of the variable of interest can be made. Usually, a decision tree contains one...
  • Visualizing the decision tree model

    Visualizing the decision tree model: # visualize the decision tree model; import pydotplus; from sklearn.tree import DecisionTreeClassifier; from sklearn import datasets; from IPython.display import Image; from sklearn import tree; iris = ...
  • 1. Decision tree model and learning: a decision tree is a basic method for classification and regression. In a classification problem it represents the process of classifying instances based on their features. It can be viewed as a collection of if-then rules, or as a conditional probability distribution defined over the feature space and the class space. Main advantages: the model...
  • python sklearn decision tree model: """ Decision tree model: similar inputs must produce similar outputs """ import sklearn.datasets as sd; import sklearn.utils as su; import sklearn.tree as st; import sklearn.metrics as sm  # load the data...
  • GBDT, an ensemble algorithm built from decision tree models; this document is very accessible and a rare find among good write-ups.
  • To address the difficulty of automatically extracting wetlands by remote sensing, caused by spectral confusion in complex and diverse wetland environments, a decision-tree-based wetland information extraction method is adopted, built on the spectral features of Landsat OLI imagery and on data after the tasseled cap transformation, combined with the environmental and spatial characteristics of different wetland types, ...
  • Decision tree principles: the decision process of a decision tree model resembles a tree, moving step by step from the root node towards the leaf nodes. All data eventually lands on a leaf node, and the model can be used for both classification and regression. Components of a decision tree: root node: the first split point; non-leaf nodes and branches: the intermediate process...
