Decision Tree Example in Python
2023-12-21 05:54:51
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn import metrics
# Load the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target
# Split the dataset into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Create a decision tree classifier
clf = DecisionTreeClassifier(random_state=42)
# Train the classifier on the training set
clf.fit(X_train, y_train)
# Predictions on the training set
y_train_pred = clf.predict(X_train)
# Predictions on the testing set
y_test_pred = clf.predict(X_test)
# Calculate accuracy
accuracy_train = metrics.accuracy_score(y_train, y_train_pred)
accuracy_test = metrics.accuracy_score(y_test, y_test_pred)
# Visualize the decision tree (text representation)
tree_rules = export_text(clf, feature_names=iris.feature_names)
print("Decision Tree Rules:\n", tree_rules)
# Plot the training set (scatter uses only the first two features: sepal length vs. sepal width)
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.scatter(X_train[:, 0], X_train[:, 1], c=y_train_pred, cmap='viridis', edgecolors='k')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.title(f"Decision Tree - Training Accuracy: {accuracy_train:.2f}")
# Plot the testing set (same two features)
plt.subplot(1, 2, 2)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test_pred, cmap='viridis', edgecolors='k')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
plt.title(f"Decision Tree - Testing Accuracy: {accuracy_test:.2f}")
plt.tight_layout()
plt.show()
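An unpruned DecisionTreeClassifier keeps splitting until every training sample is classified, so the training accuracy above is typically 1.00 while test accuracy can be lower. As a minimal sketch (the max_depth value of 3 is an illustrative choice, not part of the original post), limiting tree depth shows how pruning trades training fit for a simpler tree:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, test_size=0.2, random_state=42
)

# Compare an unpruned tree with a depth-limited one
for depth in (None, 3):
    clf = DecisionTreeClassifier(max_depth=depth, random_state=42)
    clf.fit(X_train, y_train)
    train_acc = accuracy_score(y_train, clf.predict(X_train))
    test_acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"max_depth={depth}: train={train_acc:.2f}, test={test_acc:.2f}")
```

The depth-limited tree also produces a much shorter rule listing from export_text, which is easier to interpret.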
The output below is the text representation of the trained decision tree; an explanation follows after it:
|--- petal length (cm) <= 2.45
| |--- class: 0
|--- petal length (cm) > 2.45
| |--- petal length (cm) <= 4.75
| | |--- petal width (cm) <= 1.65
| | | |--- class: 1
| | |--- petal width (cm) > 1.65
| | | |--- class: 2
| |--- petal length (cm) > 4.75
| | |--- petal width (cm) <= 1.75
| | | |--- petal length (cm) <= 4.95
| | | | |--- class: 1
| | | |--- petal length (cm) > 4.95
| | | | |--- petal width (cm) <= 1.55
| | | | | |--- class: 2
| | | | |--- petal width (cm) > 1.55
| | | | | |--- petal length (cm) <= 5.45
| | | | | | |--- class: 1
| | | | | |--- petal length (cm) > 5.45
| | | | | | |--- class: 2
| | |--- petal width (cm) > 1.75
| | | |--- petal length (cm) <= 4.85
| | | | |--- sepal width (cm) <= 3.10
| | | | | |--- class: 2
| | | | |--- sepal width (cm) > 3.10
| | | | | |--- class: 1
| | | |--- petal length (cm) > 4.85
| | | | |--- class: 2
This representation shows the structure of the decision tree: each line is a decision node, and the indentation indicates depth. For example, the first rule says that if petal length is at most 2.45 cm, the predicted class is 0. If petal length is greater than 2.45 cm, the tree branches on the next condition (petal length (cm) <= 4.75), and so on.
The final class predictions (class: X) are the leaf nodes of the tree, where X is the predicted class.
During training, the decision tree learned how to make these classification decisions from the input features.
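To see how these rules work in practice, the root split alone can be checked by hand. A minimal sketch (feature index 2 is "petal length (cm)" in load_iris; the 2.45 cm threshold is taken from the rules above):

```python
from sklearn.datasets import load_iris

iris = load_iris()
petal_length = iris.data[:, 2]  # feature index 2 is "petal length (cm)"

# Root rule from the printed tree: petal length <= 2.45 -> class 0 (setosa)
rule_says_setosa = petal_length <= 2.45
actually_setosa = iris.target == 0
print((rule_says_setosa == actually_setosa).mean())  # fraction the root rule gets right
```

On the full Iris dataset this single threshold separates setosa from the other two species, which is why the tree can assign class 0 immediately at depth 1 while the other classes need deeper splits.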
Source: https://blog.csdn.net/qq_42244167/article/details/135109920