# Python API Reference

This page provides the Python API reference for xgboost; see the Python package introduction for more information about the Python package. The documentation on this page is generated automatically by sphinx. It is not rendered on GitHub; you can browse it at [http://xgboost.apachecn.org/cn/latest/python/python_api.html](http://xgboost.apachecn.org/cn/latest/python/python_api.html).

## Core Data Structure

Core XGBoost Library.

```
class xgboost.DMatrix(data, label=None, missing=None, weight=None, silent=False, feature_names=None, feature_types=None)
```

Bases: `object`

Data Matrix used in XGBoost. DMatrix is an internal data structure used by XGBoost that is optimized for both memory efficiency and training speed. You can construct a DMatrix from numpy arrays.
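A minimal sketch of constructing a DMatrix from a numpy array; the data below is made up purely for illustration:

```python
import numpy as np
import xgboost as xgb

# Toy data: 5 rows, 3 features, binary labels (illustrative only).
X = np.random.rand(5, 3)
y = np.array([0, 1, 0, 1, 1])

dtrain = xgb.DMatrix(X, label=y)
print(dtrain.num_row(), dtrain.num_col())  # 5 3
```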
```
feature_names
```

Get feature names (column labels).

| Returns: | **feature_names** |
| --- | --- |
| Return type: | list or None |

```
feature_types
```

Get feature types (column types).

| Returns: | **feature_types** |
| --- | --- |
| Return type: | list or None |

```
get_base_margin()
```

Get the base margin of the DMatrix.

| Returns: | **base_margin** |
| --- | --- |
| Return type: | float |

```
get_float_info(field)
```

Get a float property from the DMatrix.

| Parameters: | **field** (_str_) – The field name of the information |
| --- | --- |
| Returns: | **info** – a numpy array of float information of the data |
| Return type: | array |

```
get_label()
```

Get the label of the DMatrix.

| Returns: | **label** |
| --- | --- |
| Return type: | array |

```
get_uint_info(field)
```

Get an unsigned integer property from the DMatrix.

| Parameters: | **field** (_str_) – The field name of the information |
| --- | --- |
| Returns: | **info** – a numpy array of unsigned integer information of the data |
| Return type: | array |

```
get_weight()
```

Get the weight of the DMatrix.

| Returns: | **weight** |
| --- | --- |
| Return type: | array |

```
num_col()
```

Get the number of columns (features) in the DMatrix.

| Returns: | **number of columns** |
| --- | --- |
| Return type: | int |

```
num_row()
```

Get the number of rows in the DMatrix.

| Returns: | **number of rows** |
| --- | --- |
| Return type: | int |

```
save_binary(fname, silent=True)
```

Save the DMatrix to an XGBoost buffer.

| Parameters: | **fname** (_string_) – Name of the output buffer file. |
| --- | --- |
| | **silent** (_bool, optional; default True_) – If set, the output is suppressed. |

```
set_base_margin(margin)
```

Set the base margin of the booster to start from. This can be used to supply the prediction value of an existing model as the base margin. Note that the raw margin is needed, not the transformed prediction; e.g. for logistic regression, supply the value before the logistic transformation. See also example/demo.py.

| Parameters: | **margin** (_array like_) – Prediction margin of each data point |
| --- | --- |

```
set_float_info(field, data)
```

Set a float type property of the DMatrix.

| Parameters: | **field** (_str_) – The field name of the information |
| --- | --- |
| | **data** (_numpy array_) – The array of data to be set |

```
set_group(group)
```

Set the group size of the DMatrix (used for ranking).

| Parameters: | **group** (_array like_) – Group size of each group |
| --- | --- |

```
set_label(label)
```

Set the label of the DMatrix.

| Parameters: | **label** (_array like_) – The label information to be set into the DMatrix |
| --- | --- |

```
set_uint_info(field, data)
```

Set a uint type property of the DMatrix.

| Parameters: | **field** (_str_) – The field name of the information |
| --- | --- |
| | **data** (_numpy array_) – The array of data to be set |

```
set_weight(weight)
```

Set the weight of each instance.

| Parameters: | **weight** (_array like_) – Weight for each data point |
| --- | --- |

```
slice(rindex)
```

Slice the DMatrix and return a new DMatrix that only contains <cite>rindex</cite>.

| Parameters: | **rindex** (_list_) – List of indices to be selected. |
| --- | --- |
| Returns: | **res** – A new DMatrix containing only the selected indices. |
| Return type: | [DMatrix](#xgboost.DMatrix "xgboost.DMatrix") |

```
class xgboost.Booster(params=None, cache=(), model_file=None)
```

Bases: `object`

A Booster of XGBoost. Booster is the model of xgboost and contains the low-level routines for training, prediction and evaluation.

```
attr(key)
```

Get an attribute string from the Booster.

| Parameters: | **key** (_str_) – The key to get the attribute from. |
| --- | --- |
| Returns: | **value** – The attribute value of the key; returns None if the attribute does not exist. |
| Return type: | str |

```
attributes()
```

Get attributes stored in the Booster as a dictionary.

| Returns: | **result** – Returns an empty dict if there are no attributes. |
| --- | --- |
| Return type: | dictionary of attribute_name: attribute_value pairs of strings |

```
boost(dtrain, grad, hess)
```

Boost the booster for one iteration, with customized gradient statistics.

| Parameters: | **dtrain** ([_DMatrix_](#xgboost.DMatrix "xgboost.DMatrix")) – The training DMatrix. |
| --- | --- |
| | **grad** (_list_) – The first order of gradient. |
| | **hess** (_list_) – The second order of gradient. |

```
copy()
```

Copy the booster object.

| Returns: | **booster** – a copied booster model |
| --- | --- |
| Return type: | <cite>Booster</cite> |

```
dump_model(fout, fmap='', with_stats=False)
```

Dump the model into a text file.

| Parameters: | **fout** (_string_) – Output file name. |
| --- | --- |
| | **fmap** (_string, optional_) – Name of the file containing feature map names. |
| | **with_stats** (_bool, optional_) – Controls whether the split statistics are output. |

```
eval(data, name='eval', iteration=0)
```

Evaluate the model on the given data.

| Parameters: | **data** ([_DMatrix_](#xgboost.DMatrix "xgboost.DMatrix")) – The dmatrix storing the input. |
| --- | --- |
| | **name** (_str, optional_) – The name of the dataset. |
| | **iteration** (_int, optional_) – The current iteration number. |
| Returns: | **result** – Evaluation result string. |
| Return type: | str |

```
eval_set(evals, iteration=0, feval=None)
```

Evaluate a set of data.

| Parameters: | **evals** (_list of tuples (DMatrix, string)_) – List of items to be evaluated. |
| --- | --- |
| | **iteration** (_int_) – Current iteration. |
| | **feval** (_function_) – Custom evaluation function. |
| Returns: | **result** – Evaluation result string. |
| Return type: | str |

```
get_dump(fmap='', with_stats=False)
```

Return the dump of the model as a list of strings.

```
get_fscore(fmap='')
```

Get the feature importance of each feature.

| Parameters: | **fmap** (_str, optional_) – The name of the feature map file |
| --- | --- |

```
get_score(fmap='', importance_type='weight')
```

Get the feature importance of each feature. The importance type can be one of:

* ‘weight’ – the number of times a feature is used to split the data across all trees.
* ‘gain’ – the average gain of the feature when it is used in trees.
* ‘cover’ – the average coverage of the feature when it is used in trees.

| Parameters: | **fmap** (_str, optional_) – The name of the feature map file |
| --- | --- |
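A short sketch of querying feature importance from a trained Booster; the toy training step below is assumed only so that there is a model to inspect:

```python
import numpy as np
import xgboost as xgb

# Toy setup (illustrative only): train a small model so there is a Booster to query.
X = np.random.rand(100, 4)
y = (X[:, 0] > 0.5).astype(int)
bst = xgb.train({'objective': 'binary:logistic'},
                xgb.DMatrix(X, label=y), num_boost_round=5)

print(bst.get_score(importance_type='weight'))  # times each feature is used to split
print(bst.get_score(importance_type='gain'))    # average gain when the feature is used
```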
```
get_split_value_histogram(feature, fmap='', bins=None, as_pandas=True)
```

Get the split value histogram of a feature.

| Parameters: | **feature** (_str_) – The name of the feature. |
| --- | --- |
| | **fmap** (_str, optional_) – The name of the feature map file. |
| | **bins** (_int, optional_) – The maximum number of bins. The number of bins equals the number of unique split values n_unique if bins == None or bins > n_unique. |
| | **as_pandas** (_bool, default True_) – Return pd.DataFrame when pandas is installed. If False or pandas is not installed, return a numpy ndarray. |
| Returns: | a histogram of used splitting values for the specified feature, either as a numpy array or a pandas DataFrame |

```
load_model(fname)
```

Load the model from a file.

| Parameters: | **fname** (_string or a memory buffer_) – Input file name or memory buffer (see also save_raw) |
| --- | --- |

```
load_rabit_checkpoint()
```

Initialize the model by loading it from a rabit checkpoint.

| Returns: | **version** – The version number of the model. |
| --- | --- |
| Return type: | integer |

```
predict(data, output_margin=False, ntree_limit=0, pred_leaf=False)
```

Predict with data.

NOTE: This function is not thread safe. For each booster object, predict can only be called from one thread. If you want to run prediction from multiple threads, call bst.copy() to make copies of the model object and then call predict on the copies.

| Parameters: | **data** ([_DMatrix_](#xgboost.DMatrix "xgboost.DMatrix")) – The dmatrix storing the input. |
| --- | --- |
| | **output_margin** (_bool_) – Whether to output the raw untransformed margin value. |
| | **ntree_limit** (_int_) – Limit the number of trees used in the prediction; defaults to 0 (use all trees). |
| | **pred_leaf** (_bool_) – When this option is on, the output will be a matrix of (nsample, ntrees) with each record indicating the predicted leaf index of each sample in each tree. Note that the leaf index of a tree is unique per tree, so you may find leaf 1 in both tree 1 and tree 0. |
| Returns: | **prediction** |
| Return type: | numpy array |

```
save_model(fname)
```

Save the model to a file.

| Parameters: | **fname** (_string_) – Output file name |
| --- | --- |

```
save_rabit_checkpoint()
```

Save the current booster to a rabit checkpoint.

```
save_raw()
```

Save the model to an in-memory buffer representation.

| Returns: | an in-memory buffer representation of the model |
| --- | --- |

```
set_attr(**kwargs)
```

Set attributes of the Booster.

| Parameters: | **\*\*kwargs** – The attributes to set. Setting a value to None deletes an attribute. |
| --- | --- |

```
set_param(params, value=None)
```

Set parameters into the Booster.

| Parameters: | **params** (_dict/list/str_) – list of key, value pairs, dict of key to value, or simply a str key |
| --- | --- |
| | **value** (_optional_) – value of the specified parameter, when params is a str key |

```
update(dtrain, iteration, fobj=None)
```

Update for one iteration, with the objective function calculated internally.

| Parameters: | **dtrain** ([_DMatrix_](#xgboost.DMatrix "xgboost.DMatrix")) – Training data. |
| --- | --- |
| | **iteration** (_int_) – Current iteration number. |
| | **fobj** (_function_) – Customized objective function. |
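A hedged sketch of prediction and model persistence with a Booster; it reuses the Booster `bst` from the feature-importance sketch above, and the file name is illustrative:

```python
import numpy as np
import xgboost as xgb

# Continuation of the sketch above: `bst` is the Booster trained on 4-feature data.
dtest = xgb.DMatrix(np.random.rand(10, 4))

preds = bst.predict(dtest)                          # transformed predictions
margins = bst.predict(dtest, output_margin=True)    # raw, untransformed margins
leaves = bst.predict(dtest, pred_leaf=True)         # (nsample, ntrees) leaf indices

bst.save_model('model.bin')                         # file name is illustrative
bst2 = xgb.Booster(model_file='model.bin')          # reload into a fresh Booster
```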
## Learning API

Training Library containing training routines.

```
xgboost.train(params, dtrain, num_boost_round=10, evals=(), obj=None, feval=None, maximize=False, early_stopping_rounds=None, evals_result=None, verbose_eval=True, learning_rates=None, xgb_model=None, callbacks=None)
```

Train a booster with given parameters.

| Parameters: | **params** (_dict_) – Booster params. |
| --- | --- |
| | **dtrain** ([_DMatrix_](#xgboost.DMatrix "xgboost.DMatrix")) – Data to be trained. |
| | **num_boost_round** (_int_) – Number of boosting iterations. |
| | **evals** (_list of pairs (DMatrix, string)_) – List of items to be evaluated during training; this allows the user to watch performance on the validation set. |
| | **obj** (_function_) – Customized objective function. |
| | **feval** (_function_) – Customized evaluation function. |
| | **maximize** (_bool_) – Whether to maximize feval. |
| | **early_stopping_rounds** (_int_) – Activates early stopping. Validation error needs to decrease at least every <early_stopping_rounds> round(s) to continue training. Requires at least one item in evals. If there is more than one, the last one will be used. Returns the model from the last iteration (not the best one). If early stopping occurs, the model will have three additional fields: bst.best_score, bst.best_iteration and bst.best_ntree_limit. (Use bst.best_ntree_limit to get the correct value if num_parallel_tree and/or num_class appears in the parameters.) |
| | **evals_result** (_dict_) – This dictionary stores the evaluation results of all the items in the watchlist. Example: with a watchlist containing [(dtest, 'eval'), (dtrain, 'train')] and a parameter containing ('eval_metric', 'logloss'), it returns {'train': {'logloss': ['0.48253', '0.35953']}, 'eval': {'logloss': ['0.480385', '0.357756']}}. |
| | **verbose_eval** (_bool or int_) – Requires at least one item in evals. If <cite>verbose_eval</cite> is True then the evaluation metric on the validation set is printed at each boosting stage. If <cite>verbose_eval</cite> is an integer then the evaluation metric on the validation set is printed at every <cite>verbose_eval</cite> boosting stages. The last boosting stage / the boosting stage found by using <cite>early_stopping_rounds</cite> is also printed. Example: with verbose_eval=4 and at least one item in evals, an evaluation metric is printed every 4 boosting stages, instead of every boosting stage. |
| | **learning_rates** (_list or function_) – List of learning rates for each boosting round, or a customized function that calculates eta in terms of the current round and the total number of boosting rounds (e.g. yields learning rate decay): list l: eta = l[boosting round]; function f: eta = f(boosting round, num_boost_round). |
| | **xgb_model** (_file name of stored xgb model or 'Booster' instance_) – Xgb model to be loaded before training (allows training continuation). |
| | **callbacks** (_list of callback functions_) – List of callback functions that are applied at the end of each iteration. |
| Returns: | **booster** |
| Return type: | a trained booster model |
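A minimal training sketch with a watchlist and early stopping; the data and parameter values are made up for illustration:

```python
import numpy as np
import xgboost as xgb

# Synthetic train/validation split, illustrative only.
X_train, X_valid = np.random.rand(200, 5), np.random.rand(50, 5)
y_train = (X_train[:, 0] > 0.5).astype(int)
y_valid = (X_valid[:, 0] > 0.5).astype(int)

dtrain = xgb.DMatrix(X_train, label=y_train)
dvalid = xgb.DMatrix(X_valid, label=y_valid)

params = {'objective': 'binary:logistic', 'eta': 0.1,
          'max_depth': 3, 'eval_metric': 'logloss'}
evals_result = {}
bst = xgb.train(params, dtrain, num_boost_round=50,
                evals=[(dtrain, 'train'), (dvalid, 'eval')],
                early_stopping_rounds=5, evals_result=evals_result,
                verbose_eval=10)

# best_iteration / best_score are set when early_stopping_rounds is used.
print(bst.best_iteration, bst.best_score)
print(evals_result['eval']['logloss'][:3])
```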
```
xgboost.cv(params, dtrain, num_boost_round=10, nfold=3, stratified=False, folds=None, metrics=(), obj=None, feval=None, maximize=False, early_stopping_rounds=None, fpreproc=None, as_pandas=True, verbose_eval=None, show_stdv=True, seed=0, callbacks=None)
```

Cross-validation with given parameters.

| Parameters: | **params** (_dict_) – Booster params. |
| --- | --- |
| | **dtrain** ([_DMatrix_](#xgboost.DMatrix "xgboost.DMatrix")) – Data to be trained. |
| | **num_boost_round** (_int_) – Number of boosting iterations. |
| | **nfold** (_int_) – Number of folds in CV. |
| | **stratified** (_bool_) – Perform stratified sampling. |
| | **folds** (_a KFold or StratifiedKFold instance_) – Sklearn KFolds or StratifiedKFolds. |
| | **metrics** (_string or list of strings_) – Evaluation metrics to be watched in CV. |
| | **obj** (_function_) – Custom objective function. |
| | **feval** (_function_) – Custom evaluation function. |
| | **maximize** (_bool_) – Whether to maximize feval. |
| | **early_stopping_rounds** (_int_) – Activates early stopping. CV error needs to decrease at least every <early_stopping_rounds> round(s) to continue. The last entry in the evaluation history is the one from the best iteration. |
| | **fpreproc** (_function_) – Preprocessing function that takes (dtrain, dtest, param) and returns transformed versions of those. |
| | **as_pandas** (_bool, default True_) – Return pd.DataFrame when pandas is installed. If False or pandas is not installed, return np.ndarray. |
| | **verbose_eval** (_bool, int, or None, default None_) – Whether to display the progress. If None, progress will be displayed when np.ndarray is returned. If True, progress will be displayed at every boosting stage. If an integer is given, progress will be displayed at every <cite>verbose_eval</cite> boosting stages. |
| | **show_stdv** (_bool, default True_) – Whether to display the standard deviation in the progress output. Results are not affected and always contain the std. |
| | **seed** (_int_) – Seed used to generate the folds (passed to numpy.random.seed). |
| | **callbacks** (_list of callback functions_) – List of callback functions that are applied at the end of each iteration. |
| Returns: | **evaluation history** |
| Return type: | list(string) |

## Scikit-Learn API

Scikit-Learn Wrapper interface for XGBoost.

```
class xgboost.XGBRegressor(max_depth=3, learning_rate=0.1, n_estimators=100, silent=True, objective='reg:linear', nthread=-1, gamma=0, min_child_weight=1, max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, base_score=0.5, seed=0, missing=None)
```

Bases: `xgboost.sklearn.XGBModel`, `object`

Implementation of the scikit-learn API for XGBoost regression.

Parameters

```
max_depth : int
```

Maximum tree depth for base learners.

```
learning_rate : float
```

Boosting learning rate (xgb's "eta")

```
n_estimators : int
```

Number of boosted trees to fit.

```
silent : boolean
```

Whether to print messages while running boosting.

```
objective : string or callable
```

Specify the learning task and the corresponding learning objective, or a custom objective function to be used (see note below).

```
nthread : int
```

Number of parallel threads used to run xgboost.

```
gamma : float
```

Minimum loss reduction required to make a further partition on a leaf node of the tree.

```
min_child_weight : int
```

Minimum sum of instance weight (hessian) needed in a child.

```
max_delta_step : int
```

Maximum delta step we allow each tree's weight estimation to be.

```
subsample : float
```

Subsample ratio of the training instances.

```
colsample_bytree : float
```

Subsample ratio of columns when constructing each tree.

```
colsample_bylevel : float
```

Subsample ratio of columns for each split, in each level.

```
reg_alpha : float (xgb's alpha)
```

L1 regularization term on weights

```
reg_lambda : float (xgb's lambda)
```

L2 regularization term on weights

```
scale_pos_weight : float
```

Balancing of positive and negative weights.

```
base_score :
```

The initial prediction score of all instances, global bias.

```
seed : int
```

Random number seed.

```
missing : float, optional
```

Value in the data which is to be treated as a missing value. If None, defaults to np.nan.

Note

A custom objective function can be provided for the `objective` parameter. In this case, it should have the signature `objective(y_true, y_pred) -> grad, hess`:

```
y_true: array_like of shape [n_samples]
```

The target values

```
y_pred: array_like of shape [n_samples]
```

The predicted values

```
grad: array_like of shape [n_samples]
```

The value of the gradient for each sample point.

```
hess: array_like of shape [n_samples]
```

The value of the second derivative for each sample point
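A minimal sketch of the regression wrapper with synthetic data; the hyperparameter values are illustrative only:

```python
import numpy as np
from xgboost import XGBRegressor

# Synthetic regression data, illustrative only.
X = np.random.rand(200, 5)
y = X[:, 0] * 3.0 + np.random.randn(200) * 0.1

reg = XGBRegressor(max_depth=3, learning_rate=0.1, n_estimators=100)
reg.fit(X, y)
print(reg.predict(X[:5]))
```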
```
class xgboost.XGBClassifier(max_depth=3, learning_rate=0.1, n_estimators=100, silent=True, objective='binary:logistic', nthread=-1, gamma=0, min_child_weight=1, max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, base_score=0.5, seed=0, missing=None)
```

Bases: `xgboost.sklearn.XGBModel`, `object`

Implementation of the scikit-learn API for XGBoost classification.

Parameters

```
max_depth : int
```

Maximum tree depth for base learners.

```
learning_rate : float
```

Boosting learning rate (xgb's "eta")

```
n_estimators : int
```

Number of boosted trees to fit.

```
silent : boolean
```

Whether to print messages while running boosting.

```
objective : string or callable
```

Specify the learning task and the corresponding learning objective, or a custom objective function to be used (see note below).

```
nthread : int
```

Number of parallel threads used to run xgboost.

```
gamma : float
```

Minimum loss reduction required to make a further partition on a leaf node of the tree.

```
min_child_weight : int
```

Minimum sum of instance weight (hessian) needed in a child.

```
max_delta_step : int
```

Maximum delta step we allow each tree's weight estimation to be.

```
subsample : float
```

Subsample ratio of the training instances.

```
colsample_bytree : float
```

Subsample ratio of columns when constructing each tree.

```
colsample_bylevel : float
```

Subsample ratio of columns for each split, in each level.

```
reg_alpha : float (xgb's alpha)
```

L1 regularization term on weights

```
reg_lambda : float (xgb's lambda)
```

L2 regularization term on weights

```
scale_pos_weight : float
```

Balancing of positive and negative weights.

```
base_score :
```

The initial prediction score of all instances, global bias.

```
seed : int
```

Random number seed.

```
missing : float, optional
```

Value in the data which is to be treated as a missing value. If None, defaults to np.nan.

Note

A custom objective function can be provided for the `objective` parameter. In this case, it should have the signature `objective(y_true, y_pred) -> grad, hess`:

```
y_true: array_like of shape [n_samples]
```

The target values

```
y_pred: array_like of shape [n_samples]
```

The predicted values

```
grad: array_like of shape [n_samples]
```

The value of the gradient for each sample point.

```
hess: array_like of shape [n_samples]
```

The value of the second derivative for each sample point
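A hedged sketch of a callable objective with the signature described in the note above; it re-implements plain logistic loss purely for illustration, and the data is made up:

```python
import numpy as np
from xgboost import XGBClassifier

# Custom objective: with a callable objective, y_pred holds raw (untransformed) scores.
def logistic_obj(y_true, y_pred):
    prob = 1.0 / (1.0 + np.exp(-y_pred))
    grad = prob - y_true           # first derivative of the logistic loss
    hess = prob * (1.0 - prob)     # second derivative of the logistic loss
    return grad, hess

# Toy data, illustrative only.
X = np.random.rand(100, 4)
y = (X[:, 0] > 0.5).astype(int)

clf = XGBClassifier(n_estimators=10, objective=logistic_obj)
clf.fit(X, y)
```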
```
evals_result()
```

Return the evaluation results.

If eval_set is passed to the <cite>fit</cite> function, you can call evals_result() to get evaluation results for all passed eval_sets. When eval_metric is also passed to the <cite>fit</cite> function, the evals_result will contain the eval_metrics passed to the <cite>fit</cite> function.

| Returns: | **evals_result** |
| --- | --- |
| Return type: | dictionary |

Example:

```
param_dist = {'objective': 'binary:logistic', 'n_estimators': 2}

clf = xgb.XGBClassifier(**param_dist)

clf.fit(X_train, y_train,
        eval_set=[(X_train, y_train), (X_test, y_test)],
        eval_metric='logloss',
        verbose=True)

evals_result = clf.evals_result()
```

The variable evals_result will contain: {'validation_0': {'logloss': ['0.604835', '0.531479']}, 'validation_1': {'logloss': ['0.41965', '0.17686']}}

```
feature_importances_
```

| Returns: | **feature_importances_** |
| --- | --- |
| Return type: | array of shape = [n_features] |

```
fit(X, y, sample_weight=None, eval_set=None, eval_metric=None, early_stopping_rounds=None, verbose=True)
```

Fit gradient boosting classifier.

| Parameters: | **X** (_array_like_) – Feature matrix |
| --- | --- |
| | **y** (_array_like_) – Labels |
| | **sample_weight** (_array_like_) – Weight for each instance |
| | **eval_set** (_list, optional_) – A list of (X, y) pairs to use as a validation set for early stopping |
| | **eval_metric** (_str, callable, optional_) – If a str, should be a built-in evaluation metric to use. See doc/parameter.md. If callable, a custom evaluation metric. The call signature is func(y_predicted, y_true) where y_true will be a DMatrix object, so you may need to call the get_label method. It must return a (str, value) pair where the str is a name for the evaluation and value is the value of the evaluation function. This objective is always minimized. |
| | **early_stopping_rounds** (_int, optional_) – Activates early stopping. Validation error needs to decrease at least every <early_stopping_rounds> round(s) to continue training. Requires at least one item in evals. If there is more than one, the last one will be used. Returns the model from the last iteration (not the best one). If early stopping occurs, the model will have three additional fields: bst.best_score, bst.best_iteration and bst.best_ntree_limit. (Use bst.best_ntree_limit to get the correct value if num_parallel_tree and/or num_class appears in the parameters.) |
| | **verbose** (_bool_) – If <cite>verbose</cite> and an evaluation set is used, writes the evaluation metric measured on the validation set to stderr. |
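A hedged sketch of a callable eval_metric with the signature described above; the data is made up, and whether y_predicted holds probabilities or raw scores can depend on the objective and xgboost version:

```python
import numpy as np
from xgboost import XGBClassifier

# Custom metric: receives predictions and a DMatrix, returns a (name, value) pair
# that xgboost tries to minimize.
def misclassification(y_predicted, dtrain):
    y_true = dtrain.get_label()
    return 'misclass', float(np.mean((y_predicted > 0.5) != y_true))

# Toy data, illustrative only.
X = np.random.rand(200, 4)
y = (X[:, 0] > 0.5).astype(int)

clf = XGBClassifier(n_estimators=20)
clf.fit(X, y, eval_set=[(X, y)], eval_metric=misclassification, verbose=False)
print(clf.evals_result())
```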
## Plotting API

Plotting Library.

```
xgboost.plot_importance(booster, ax=None, height=0.2, xlim=None, ylim=None, title='Feature importance', xlabel='F score', ylabel='Features', importance_type='weight', grid=True, **kwargs)
```

Plot importance based on fitted trees.

| Parameters: | **booster** (_Booster, XGBModel or dict_) – Booster or XGBModel instance, or dict taken by Booster.get_fscore() |
| --- | --- |
| | **ax** (_matplotlib Axes, default None_) – Target axes instance. If None, a new figure and axes will be created. |
| | **importance_type** (_str, default "weight"_) – How the importance is calculated: either "weight", "gain", or "cover". "weight" is the number of times a feature appears in a tree; "gain" is the average gain of splits which use the feature; "cover" is the average coverage of splits which use the feature, where coverage is defined as the number of samples affected by the split. |
| | **height** (_float, default 0.2_) – Bar height, passed to ax.barh() |
| | **xlim** (_tuple, default None_) – Tuple passed to axes.xlim() |
| | **ylim** (_tuple, default None_) – Tuple passed to axes.ylim() |
| | **title** (_str, default "Feature importance"_) – Axes title. To disable, pass None. |
| | **xlabel** (_str, default "F score"_) – X axis title label. To disable, pass None. |
| | **ylabel** (_str, default "Features"_) – Y axis title label. To disable, pass None. |
| | **kwargs** – Other keywords passed to ax.barh() |
| Returns: | **ax** |
| Return type: | matplotlib Axes |

```
xgboost.plot_tree(booster, num_trees=0, rankdir='UT', ax=None, **kwargs)
```

Plot the specified tree.

| Parameters: | **booster** (_Booster, XGBModel_) – Booster or XGBModel instance |
| --- | --- |
| | **num_trees** (_int, default 0_) – Specify the ordinal number of the target tree |
| | **rankdir** (_str, default "UT"_) – Passed to graphviz via graph_attr |
| | **ax** (_matplotlib Axes, default None_) – Target axes instance. If None, a new figure and axes will be created. |
| | **kwargs** – Other keywords passed to to_graphviz |
| Returns: | **ax** |
| Return type: | matplotlib Axes |

```
xgboost.to_graphviz(booster, num_trees=0, rankdir='UT', yes_color='#0000FF', no_color='#FF0000', **kwargs)
```

Convert the specified tree to a graphviz instance. IPython can automatically plot the returned graphviz instance. Otherwise, you should call the .render() method of the returned graphviz instance.

| Parameters: | **booster** (_Booster, XGBModel_) – Booster or XGBModel instance |
| --- | --- |
| | **num_trees** (_int, default 0_) – Specify the ordinal number of the target tree |
| | **rankdir** (_str, default "UT"_) – Passed to graphviz via graph_attr |
| | **yes_color** (_str, default '#0000FF'_) – Edge color when the node condition is met. |
| | **no_color** (_str, default '#FF0000'_) – Edge color when the node condition is not met. |
| | **kwargs** – Other keywords passed to graphviz graph_attr |
| Returns: | **graph** – the converted graphviz instance |
| Return type: | graphviz instance |
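A short sketch of the plotting helpers; it assumes `bst` is a trained Booster (for example from the training sketch in the Learning API section) and that matplotlib and the graphviz package are installed:

```python
import matplotlib.pyplot as plt
import xgboost as xgb

# `bst` is assumed to be a trained Booster.
ax = xgb.plot_importance(bst, importance_type='gain')  # bar chart of importances
xgb.plot_tree(bst, num_trees=0)                        # draw the first tree
plt.show()

graph = xgb.to_graphviz(bst, num_trees=0)
graph.render('tree')  # output file name is illustrative
```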