Using a decision tree in Python for feature selection (the commented-out block is the plotting part); the feature ranking is printed at the end:
import numpy as np
import tflearn
from tflearn.layers.core import dropout
from tflearn.layers.normalization import batch_normalization
from tflearn.data_utils import to_categorical
from sklearn.model_selection import train_test_split
import sys
import pandas as pd
from pandas import Series, DataFrame
import matplotlib.pyplot as plt

# load the feature table; the first column is assumed to be the label
data_train = pd.read_csv("feature_with_dnn_todo2.dat")
print(data_train.info())
print(data_train.columns)

# optional per-feature plots, one figure per column (commented out)
"""
for col in data_train.columns[1:]:
    fig = plt.figure(figsize=(20, 16), dpi=8)
    fig.set(alpha=0.2)
    plt.figure()
    data_train[data_train.label == 0.0][col].plot()
    data_train[data_train.label == 1.0][col].plot()
    data_train[data_train.label == 2.0][col].plot()
    data_train[data_train.label == 3.0][col].plot()
    data_train[data_train.label == 4.0][col].plot()
    plt.xlabel(u"sample data id")
    plt.ylabel(col)
    plt.title(col)
    plt.legend((u'white', u'cdn', u'tunnel', u"msad", "todo"), loc='best')
    plt.show()
"""

from sklearn.ensemble import ExtraTreesClassifier

X = data_train.iloc[:, 1:]
y = data_train['label'].tolist()
print(X.columns)
X = X.values.tolist()
print(X[-3:])
print("-------------")
print(y[-3:])

# preprocess data
from sklearn.preprocessing import StandardScaler
#X = StandardScaler().fit_transform(X)
from sklearn.preprocessing import MinMaxScaler
X = MinMaxScaler().fit_transform(X)
from sklearn.preprocessing import Normalizer
#X = Normalizer().fit_transform(X)

# abnormal data process (commented out)
"""
for i, n in enumerate(y):
    if n == 4.0:
        y[i] = 0
"""
import collections
print(collections.Counter(y))

# collapse to a binary problem: class 2.0 vs. everything else
print("just change to 2 classify !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!")
for i, n in enumerate(y):
    if n != 2.0:
        y[i] = 0
    else:
        y[i] = 1
print(collections.Counter(y))

# rebalance the classes with SMOTE oversampling
from imblearn.over_sampling import SMOTE
X, y = SMOTE().fit_sample(X, y)
print(sorted(collections.Counter(y).items()))

# fit extra-trees and rank the features by importance
clf = ExtraTreesClassifier()
print(dir(clf))
clf.fit(X, y)
print(clf.feature_importances_)

names = [u'flow_cnt', u'len(srcip_arr)', u'len(dstip_arr)', u'subdomain_num', u'uniq_subdomain_ratio',
         u'np.average(dns_request_len_arr)', u'np.average(dns_reply_len_arr)', u'np.average(subdomain_tag_num_arr)',
         u'np.average(subdomain_len_arr)', u'np.average(subdomain_weird_len_arr)', u'np.average(subdomain_entropy_arr)',
         u'A_rr_type_ratio', u'incommon_rr_type_rato', u'valid_ipv4_ratio', u'uniq_valid_ipv4_ratio',
         u'request_reply_ratio', u'np.max(dns_request_len_arr)', u'np.max(dns_reply_len_arr)',
         u'np.max(subdomain_tag_num_arr)', u'np.max(subdomain_len_arr)', u'np.max(subdomain_weird_len_arr)',
         u'np.max(subdomain_entropy_arr)', u'avg_distance', u'std_distance']
print("Features sorted by their score:")
print(sorted(zip(clf.feature_importances_, names), reverse=True))
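One caveat (mine, not from the original post): the script uses library APIs that have since been renamed — sklearn.cross_validation became sklearn.model_selection, and imblearn's SMOTE().fit_sample became fit_resample. A minimal sketch of the rebalance-and-score step with the current module paths, assuming imblearn >= 0.4 and a recent scikit-learn, reusing the preprocessed X, y from the script above:

# Hedged sketch: same rebalancing + scoring step with current APIs.
from collections import Counter
from imblearn.over_sampling import SMOTE               # fit_resample replaced fit_sample
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score    # replaces sklearn.cross_validation

X_res, y_res = SMOTE().fit_resample(X, y)              # oversample the minority class
print(sorted(Counter(y_res).items()))                  # class counts are now balanced

clf = ExtraTreesClassifier(n_estimators=100)
scores = cross_val_score(clf, X_res, y_res, cv=5)      # 5-fold cross-validated accuracy
print(scores.mean())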
Here,

from imblearn.over_sampling import SMOTE
X, y = SMOTE().fit_sample(X, y)
print(sorted(collections.Counter(y).items()))

uses the SMOTE algorithm to over-sample and fix the class imbalance. Adding the following code lets you look at the score:

from sklearn.cross_validation import cross_val_score
scores = cross_val_score(clf, X, y)
print(scores.mean())

Official documentation:
1.13. Feature selection
The classes in the sklearn.feature_selection module can be used for feature selection/dimensionality reduction on sample sets, either to improve estimators' accuracy scores or to boost their performance on very high-dimensional datasets.
1.13.1. Removing features with low variance
VarianceThreshold is a simple baseline approach to feature selection. It removes all features whose variance doesn't meet some threshold. By default, it removes all zero-variance features, i.e. features that have the same value in all samples.
As an example, suppose that we have a dataset with boolean features, and we want to remove all features that are either one or zero (on or off) in more than 80% of the samples. Boolean features are Bernoulli random variables, and the variance of such variables is given by Var[X] = p(1 - p), so we can select using the threshold .8 * (1 - .8):
>>> from sklearn.feature_selection import VarianceThreshold
>>> X = [[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]]
>>> sel = VarianceThreshold(threshold=(.8 * (1 - .8)))
>>> sel.fit_transform(X)
array([[0, 1],
       [1, 0],
       [0, 0],
       [1, 1],
       [1, 0],
       [1, 1]])
As expected, VarianceThreshold
has removed the first column, which has a probability p = 5/6 > .8 of containing a zero.
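As a quick sanity check (my own sketch, not part of the documentation), the column variances can be computed directly; only the first column falls below the 0.8 * (1 - 0.8) = 0.16 threshold:

# Sketch: verify the variances behind the VarianceThreshold example above.
import numpy as np

X = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 1, 1], [0, 1, 0], [0, 1, 1]])
print(np.var(X, axis=0))   # ~[0.139, 0.222, 0.25], i.e. p * (1 - p) per Bernoulli column
print(0.8 * (1 - 0.8))     # 0.16 -> only the first column is removed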
1.13.2. Univariate feature selection
Univariate feature selection works by selecting the best features based on univariate statistical tests. It can be seen as a preprocessing step to an estimator. Scikit-learn exposes feature selection routines as objects that implement the transform
method:
- SelectKBest removes all but the k highest scoring features
- SelectPercentile removes all but a user-specified highest scoring percentage of features
- using common univariate statistical tests for each feature: false positive rate SelectFpr, false discovery rate SelectFdr, or family wise error SelectFwe.
- GenericUnivariateSelect allows to perform univariate feature selection with a configurable strategy. This allows to select the best univariate selection strategy with hyper-parameter search estimator.
For instance, we can perform a chi-squared (χ²) test to the samples to retrieve only the two best features as follows:
>>> from sklearn.datasets import load_iris
>>> from sklearn.feature_selection import SelectKBest
>>> from sklearn.feature_selection import chi2
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X.shape
(150, 4)
>>> X_new = SelectKBest(chi2, k=2).fit_transform(X, y)
>>> X_new.shape
(150, 2)
These objects take as input a scoring function that returns univariate scores and p-values (or only scores for SelectKBest and SelectPercentile):
- For regression: f_regression, mutual_info_regression
- For classification: chi2, f_classif, mutual_info_classif
The methods based on F-test estimate the degree of linear dependency between two random variables. On the other hand, mutual information methods can capture any kind of statistical dependency, but being nonparametric, they require more samples for accurate estimation.
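For a concrete feel (my own sketch, not one of the documentation's examples), the two kinds of scoring functions can be computed side by side on the iris data; f_classif returns F-statistics and p-values, while mutual_info_classif returns nonparametric mutual-information estimates:

# Sketch: compare F-test scores with mutual-information scores on the iris features.
from sklearn.datasets import load_iris
from sklearn.feature_selection import f_classif, mutual_info_classif

X, y = load_iris(return_X_y=True)
F, pval = f_classif(X, y)        # measures linear dependency, with p-values
mi = mutual_info_classif(X, y)   # nonparametric, captures any dependency
print(F)
print(mi)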
Feature selection with sparse data
If you use sparse data (i.e. data represented as sparse matrices), chi2, mutual_info_regression, mutual_info_classif will deal with the data without making it dense.
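A minimal sketch (mine; it assumes scipy is available and that the scoring function, here chi2, accepts non-negative sparse input) showing that selection on a sparse matrix stays sparse:

# Sketch: univariate selection on sparse input without densifying it.
from scipy.sparse import csr_matrix
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

X, y = load_iris(return_X_y=True)
X_sparse = csr_matrix(X)                                   # sparse representation of the data
X_new = SelectKBest(chi2, k=2).fit_transform(X_sparse, y)
print(type(X_new), X_new.shape)                            # still a sparse matrix, shape (150, 2)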
Warning
Beware not to use a regression scoring function with a classification problem, you will get useless results.
1.13.3. Recursive feature elimination
Given an external estimator that assigns weights to features (e.g., the coefficients of a linear model), the goal of recursive feature elimination (RFE) is to select features by recursively considering smaller and smaller sets of features. First, the estimator is trained on the initial set of features and the importance of each feature is obtained either through a coef_ attribute or through a feature_importances_ attribute. Then, the least important features are pruned from the current set of features. That procedure is recursively repeated on the pruned set until the desired number of features to select is eventually reached.
RFECV performs RFE in a cross-validation loop to find the optimal number of features.
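A short sketch (my own, using a linear SVM as the external estimator) of RFE with a fixed number of features and of RFECV, which chooses that number by cross-validation:

# Sketch: recursive feature elimination with and without cross-validation.
from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE, RFECV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
estimator = SVC(kernel="linear")          # exposes coef_ after fitting

rfe = RFE(estimator, n_features_to_select=2).fit(X, y)
print(rfe.support_, rfe.ranking_)         # boolean mask and ranking of the 4 iris features

rfecv = RFECV(estimator, cv=5).fit(X, y)  # the number of features is chosen by CV
print(rfecv.n_features_)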
Examples:
- A recursive feature elimination example showing the relevance of pixels in a digit classification task.
- A recursive feature elimination example with automatic tuning of the number of features selected with cross-validation.
1.13.4. Feature selection using SelectFromModel
SelectFromModel is a meta-transformer that can be used along with any estimator that has a coef_ or feature_importances_ attribute after fitting. The features are considered unimportant and removed if the corresponding coef_ or feature_importances_ values are below the provided threshold parameter. Apart from specifying the threshold numerically, there are built-in heuristics for finding a threshold using a string argument. Available heuristics are "mean", "median" and float multiples of these like "0.1*mean".
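A brief sketch (my own illustration, not one of the documentation's examples) of the string heuristics, keeping only the features whose importance is at least the median importance:

# Sketch: SelectFromModel with a string threshold heuristic.
from sklearn.datasets import load_iris
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel

X, y = load_iris(return_X_y=True)
clf = ExtraTreesClassifier(n_estimators=50).fit(X, y)

# keep features whose importance is >= the median of feature_importances_
model = SelectFromModel(clf, prefit=True, threshold="median")
print(model.transform(X).shape)           # typically (150, 2): half of the 4 features survive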
For examples on how it is to be used refer to the sections below.
Examples
- Selecting the two most important features from the Boston dataset without knowing the threshold beforehand.
1.13.4.1. L1-based feature selection
Linear models penalized with the L1 norm have sparse solutions: many of their estimated coefficients are zero. When the goal is to reduce the dimensionality of the data to use with another classifier, they can be used along with SelectFromModel to select the non-zero coefficients. In particular, sparse estimators useful for this purpose are the Lasso for regression, and LogisticRegression and LinearSVC for classification:
>>> from sklearn.svm import LinearSVC
>>> from sklearn.datasets import load_iris
>>> from sklearn.feature_selection import SelectFromModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X.shape
(150, 4)
>>> lsvc = LinearSVC(C=0.01, penalty="l1", dual=False).fit(X, y)
>>> model = SelectFromModel(lsvc, prefit=True)
>>> X_new = model.transform(X)
>>> X_new.shape
(150, 3)
With SVMs and logistic-regression, the parameter C controls the sparsity: the smaller C the fewer features selected. With Lasso, the higher the alpha parameter, the fewer features selected.
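To make that relationship concrete, a small sketch (mine, with an arbitrary grid of C values) counting how many iris features survive L1-based selection as C shrinks:

# Sketch: smaller C -> stronger L1 penalty -> fewer features kept.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)
for C in (1.0, 0.1, 0.01):
    lsvc = LinearSVC(C=C, penalty="l1", dual=False).fit(X, y)
    n_kept = SelectFromModel(lsvc, prefit=True).get_support().sum()
    print(C, n_kept)   # the number of selected features tends to shrink with C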
Examples:
- document_classification_20newsgroups.py: Comparison of different algorithms for document classification including L1-based feature selection.
L1-recovery and compressive sensing
For a good choice of alpha, the Lasso can fully recover the exact set of non-zero variables using only few observations, provided certain specific conditions are met. In particular, the number of samples should be "sufficiently large", or L1 models will perform at random, where "sufficiently large" depends on the number of non-zero coefficients, the logarithm of the number of features, the amount of noise, the smallest absolute value of non-zero coefficients, and the structure of the design matrix X. In addition, the design matrix must display certain specific properties, such as not being too correlated.
There is no general rule to select an alpha parameter for recovery of non-zero coefficients. It can be set by cross-validation (LassoCV or LassoLarsCV), though this may lead to under-penalized models: including a small number of non-relevant variables is not detrimental to prediction score. BIC (LassoLarsIC) tends, on the opposite, to set high values of alpha.
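A compact sketch (synthetic data, arbitrary parameters, my own example) of the cross-validated route: let LassoCV pick alpha, then hand the fitted model to SelectFromModel:

# Sketch: choose alpha by cross-validation, then keep the non-zero coefficients.
from sklearn.datasets import make_regression
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                       noise=1.0, random_state=0)
lasso = LassoCV(cv=5).fit(X, y)
print(lasso.alpha_)                  # alpha picked by cross-validation
model = SelectFromModel(lasso, prefit=True)
print(model.transform(X).shape)      # roughly only the informative features remain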
Reference: Richard G. Baraniuk, "Compressive Sensing", IEEE Signal Processing Magazine [120], July 2007.
1.13.4.2. Tree-based feature selection
Tree-based estimators (see the sklearn.tree module and forests of trees in the sklearn.ensemble module) can be used to compute feature importances, which in turn can be used to discard irrelevant features (when coupled with the SelectFromModel meta-transformer):
>>> from sklearn.ensemble import ExtraTreesClassifier
>>> from sklearn.datasets import load_iris
>>> from sklearn.feature_selection import SelectFromModel
>>> iris = load_iris()
>>> X, y = iris.data, iris.target
>>> X.shape
(150, 4)
>>> clf = ExtraTreesClassifier()
>>> clf = clf.fit(X, y)
>>> clf.feature_importances_
array([ 0.04...,  0.05...,  0.4...,  0.4...])
>>> model = SelectFromModel(clf, prefit=True)
>>> X_new = model.transform(X)
>>> X_new.shape
(150, 2)
Examples:
- An example on synthetic data showing the recovery of the actually meaningful features.
- An example on face recognition data.
Reference: https://blog.csdn.net/code_caq/article/details/74066899
The sklearn implementation looks like this:
from sklearn.datasets import load_boston
from sklearn.ensemble import RandomForestRegressor
import numpy as np

# Load boston housing dataset as an example
boston = load_boston()
X = boston["data"]
print(type(X), X.shape)
Y = boston["target"]
names = boston["feature_names"]
print(names)

rf = RandomForestRegressor()
rf.fit(X, Y)
print("Features sorted by their score:")
print(sorted(zip(map(lambda x: round(x, 4), rf.feature_importances_), names), reverse=True))
The result is as follows:
Features sorted by their score:
[(0.5104, 'RM'), (0.2837, 'LSTAT'), (0.0812, 'DIS'), (0.0303, 'CRIM'), (0.0294, 'NOX'), (0.0176, 'PTRATIO'), (0.0134, 'AGE'), (0.0115, 'B'), (0.0089, 'TAX'), (0.0077, 'INDUS'), (0.0051, 'RAD'), (0.0006, 'ZN'), (0.0004, 'CHAS')]

From: https://blog.csdn.net/lming_08/article/details/39210409
RandomForest algorithm:
There are two classes, handling classification and regression respectively: RandomForestClassifier and RandomForestRegressor. Samples are drawn with replacement (a bootstrap sample), and splits are made over a randomly selected subset of the features rather than all of them. Unlike the voting scheme of the original paper, scikit-learn produces the final result by averaging the probabilistic prediction of each classifier.
Extremely Randomized Trees:
Again there are two classes, handling classification and regression respectively: ExtraTreesClassifier and ExtraTreesRegressor. By default all samples are used, but only a random subset of the features is considered at each split.
A comparison example:
>>> from sklearn.cross_validation import cross_val_score
>>> from sklearn.datasets import make_blobs
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.ensemble import ExtraTreesClassifier
>>> from sklearn.tree import DecisionTreeClassifier
>>> X, y = make_blobs(n_samples=10000, n_features=10, centers=100,
...     random_state=0)
>>> clf = DecisionTreeClassifier(max_depth=None, min_samples_split=1,
...     random_state=0)
>>> scores = cross_val_score(clf, X, y)
>>> scores.mean()
0.97...
>>> clf = RandomForestClassifier(n_estimators=10, max_depth=None,
...     min_samples_split=1, random_state=0)
>>> scores = cross_val_score(clf, X, y)
>>> scores.mean()
0.999...
>>> clf = ExtraTreesClassifier(n_estimators=10, max_depth=None,
...     min_samples_split=1, random_state=0)
>>> scores = cross_val_score(clf, X, y)
>>> scores.mean() > 0.999
True
A few notes:
1) Parameters: the main parameters to tune are n_estimators and max_features. Good empirical defaults are max_features=n_features for regression problems and max_features=sqrt(n_features) for classification problems (n_features being the number of features in the dataset). Setting max_depth=None together with min_samples_split=1 (i.e., fully developing the trees) often gives good results; but remember that the best parameters are still the ones found through cross-validation (see the sketch after this list).
2) Default behaviour: in random forests, bootstrap samples are used by default (bootstrap=True), while the default strategy for extra-trees is to use the whole dataset (bootstrap=False).
3) Parallelism: setting n_jobs=k runs the computation on k cores of the machine; n_jobs=-1 uses all available cores.
4) Feature importance evaluation: in a decision tree, the higher up a node is, the more its feature contributes to the final prediction, in the sense that the fraction of the input samples that node applies to is large. For a single randomized tree, the expected fraction of samples a feature contributes to can thus be used as an estimate of its relative importance; across n_estimators randomized trees, averaging those expected activity rates yields the importance ranking used for feature selection. In practice you never have to compute this yourself: the fitted estimator's feature_importances_ attribute already holds these scores.
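As mentioned in note 1, the safest way to pick n_estimators and max_features is cross-validation. A minimal sketch (arbitrary parameter grid, my own example, assuming a recent scikit-learn) with GridSearchCV:

# Sketch: tune the main forest parameters by cross-validation.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_features": ["sqrt", None],    # sqrt(n_features) vs. all features
}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)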