
    The previous post finished preparing the dataset and screening the candidate variables; this post continues with building the model and creating the scorecard.

    5. Model Training

    Credit scorecards are generally built with logistic regression, a binary classification model; in Python, simply import LogisticRegression from sklearn.linear_model.
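    For completeness, these are the imports assumed by the snippets in this post (numpy, pandas, matplotlib and math are all used further down):

    import math
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt
    from sklearn.linear_model import LogisticRegression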

    # Quantitative and qualitative variables that enter the model
    model_data = data[np.append(quant_model_vars, qual_model_vars)]
    # WOE-transformed features used for modelling
    model_data_WOE = pd.DataFrame()
    model_data_WOE['duration'] = duration_WoE
    model_data_WOE['amount'] = amount_WoE
    model_data_WOE['age'] = age_WoE
    model_data_WOE['installment_rate'] = installment_rate_WoE
    model_data_WOE['status'] = status_WoE
    model_data_WOE['credit_history'] = credit_history_WoE
    model_data_WOE['savings'] = savings_WoE
    model_data_WOE['property'] = property_WoE
    model_data_WOE['employment_duration'] = employment_duration_WoE
    model_data_WOE['purpose'] = purpose_WoE
    #model_data_WOE['credit_risk'] = credit_risk
    # Logistic regression
    model = LogisticRegression()
    model.fit(model_data_WOE, credit_risk)
    coefficients = model.coef_.ravel()
    intercept = model.intercept_[0]

    Note: inspecting a fitted model in Python is less convenient than in R. To see the variables, coefficients and test statistics you have to pull each attribute out and print it one by one, whereas in R a single summary() call shows the whole overview.
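    As a small workaround, the fitted coefficients can at least be lined up with their variable names using pandas (a minimal sketch based on the model and model_data_WOE objects created above):

    # Pair each input variable with its fitted coefficient
    coef_table = pd.Series(model.coef_.ravel(), index=model_data_WOE.columns, name='coefficient')
    print(coef_table)
    print('intercept:', model.intercept_[0])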

    ########### Custom KS functions ###########
    def predict_df(model, data, label, feature=None):
        if feature:
            df_feature = data.loc[:, feature]
        else:
            all_feature = list(data.columns.values)
            all_feature.remove(label)
            df_feature = data.loc[:, all_feature]
        # use the predicted probability of the positive class as the score
        df_prob = model.predict_proba(df_feature)[:, 1]
        df_pred = pd.Series(df_prob).map(lambda x: 1 if x > 0.5 else 0)
        df = pd.DataFrame()
        df['predict'] = df_pred
        df['label'] = data.loc[:, label].values
        df['score'] = df_prob
        return df

    def ks(data, model, label):
        data_df = predict_df(model, data, label)
        KS_data = data_df.sort_values(by='score', ascending=True)
        KS_data['Bad'] = KS_data['label'].cumsum() / KS_data['label'].sum()
        KS_data['Count'] = np.arange(1, len(KS_data['label']) + 1)
        KS_data['Good'] = (KS_data['Count'] - KS_data['label'].cumsum()) / (len(KS_data['label']) - KS_data['label'].sum())
        KS_data.index = KS_data['Count']
        ks = KS_data.iloc[::int(len(KS_data) / 100), :]
        ks.index = np.arange(len(ks))
        return ks

    def ks_plot(ks_df):
        plt.figure(figsize=(6, 5))
        plt.subplot(111)
        plt.plot(ks_df['Bad'], lw=3.5, color='r', label='Bad')    # train_ks['Bad']
        plt.plot(ks_df['Good'], lw=3.5, color='g', label='Good')  # train_ks['Good']
        plt.legend(loc=4)
        plt.grid(True)
        plt.axis('tight')
        plt.title('The KS Curve of data')
        plt.show()

    KS (Kolmogorov-Smirnov): the KS statistic evaluates a model's ability to separate risk. It measures the maximum gap between the cumulative distributions of good and bad samples: the larger the gap, the larger the KS and the stronger the model's discriminating power. As a rule of thumb, KS > 0.2 indicates reasonably good predictive accuracy. For this model the computed KS is 0.35, so the model performs well, as shown below:
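    A minimal sketch of how the helpers above could be used to obtain that KS value (the call itself is assumed; model_data_WOE, model and credit_risk are the objects created earlier):

    # Assemble a frame with the WOE features plus the label, then build the KS table
    ks_input = model_data_WOE.copy()
    ks_input['credit_risk'] = credit_risk
    train_ks = ks(ks_input, model, 'credit_risk')
    ks_plot(train_ks)
    # The KS statistic is the maximum gap between the cumulative bad and good curves
    ks_value = (train_ks['Bad'] - train_ks['Good']).abs().max()
    print('KS =', round(ks_value, 2))   # about 0.35 according to the text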

    6. Scorecard

    Scorecard calculation method from the cited reference:

    The general scorecard formula is: Score = A - B * log(Odds)

    Two assumptions normally need to be set:

    (1) assign a specific expected score to a particular odds value;
    (2) fix the number of points that doubles the odds (PDO).

    Based on the above, assume the score at a particular odds value x is P; then the score at odds 2x should be P - PDO. Substituting into the formula gives the following two equations:

    P = A - B * log(x)

    P - PDO = A - B * log(2x)
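    Subtracting the second equation from the first gives PDO = B * log(2), so B = PDO / log(2) and, from the first equation, A = P + B * log(x). This is exactly what the helper function below computes.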

    In this article, the scorecard coefficients alpha and beta are computed by specifying the score (50) at a particular odds value (the good/bad ratio, 1/20) and the points to double the odds (PDO = 10):

    import math

    def alpha_beta(basepoints, baseodds, pdo):
        beta = pdo / math.log(2)
        alpha = basepoints + beta * math.log(baseodds)
        return alpha, beta
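    A quick sanity check against the numbers quoted below (the call is a sketch, using the values given in the text):

    alpha, beta = alpha_beta(50, 1/20, 10)
    print(round(alpha, 2), round(beta, 2))   # -> 6.78 14.43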

    This yields the scorecard formula: Score = 6.78 - 14.43 * log(Odds). Substituting the WOE-transformed variables of the logistic regression and rearranging gives the final scorecard formula:

    Score = A - B * (b0 + Σi Σj bi * wij * δij)

    where wij is the WOE value of the j-th bin of variable i, bi is the logistic regression coefficient of variable i, b0 is the intercept (all known once the model is fitted), and δij is a 0/1 indicator of whether variable i falls into its j-th bin.

    Based on the table above, the score of each bin of each variable can then be calculated:

    # Base score
    basepoint = round(alpha - beta * intercept)
    # Per-variable scores (the coefficient index follows the column order of model_data_WOE)
    duration_score = np.round(model_data_WOE['duration'] * coefficients[0] * beta)
    amount_score = np.round(model_data_WOE['amount'] * coefficients[1] * beta)
    age_score = np.round(model_data_WOE['age'] * coefficients[2] * beta)
    installment_rate_score = np.round(model_data_WOE['installment_rate'] * coefficients[3] * beta)
    status_score = np.round(model_data_WOE['status'] * coefficients[4] * beta)
    credit_history_score = np.round(model_data_WOE['credit_history'] * coefficients[5] * beta)
    savings_score = np.round(model_data_WOE['savings'] * coefficients[6] * beta)
    property_score = np.round(model_data_WOE['property'] * coefficients[7] * beta)
    employment_duration_score = np.round(model_data_WOE['employment_duration'] * coefficients[8] * beta)
    purpose_score = np.round(model_data_WOE['purpose'] * coefficients[9] * beta)

    # Score table for each variable
    duration_scoreCard = pd.DataFrame(duration_Cutpoint, duration_score).drop_duplicates()
    amount_scoreCard = pd.DataFrame(amount_Cutpoint, amount_score).drop_duplicates()
    age_scoreCard = pd.DataFrame(age_Cutpoint, age_score).drop_duplicates()
    installment_rate_scoreCard = pd.DataFrame(installment_rate_Cutpoint, installment_rate_score).drop_duplicates()
    status_scoreCard = pd.DataFrame(np.array(discrete_data['status']), status_score).drop_duplicates()
    credit_history_scoreCard = pd.DataFrame(np.array(discrete_data['credit_history']), credit_history_score).drop_duplicates()
    savings_scoreCard = pd.DataFrame(np.array(discrete_data['savings']), savings_score).drop_duplicates()
    property_scoreCard = pd.DataFrame(np.array(discrete_data['property']), property_score).drop_duplicates()
    employment_duration_scoreCard = pd.DataFrame(np.array(discrete_data['employment_duration']), employment_duration_score).drop_duplicates()
    purpose_scoreCard = pd.DataFrame(np.array(discrete_data['purpose']), purpose_score).drop_duplicates()
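    To score an individual applicant, the final score is simply the base score plus the sum of that applicant's per-variable scores. A minimal sketch using the per-row score series computed above (the name total_score is introduced here only for illustration):

    # Row-wise total score = base score + sum of the per-variable scores
    total_score = (basepoint
                   + duration_score + amount_score + age_score + installment_rate_score
                   + status_score + credit_history_score + savings_score
                   + property_score + employment_duration_score + purpose_score)
    print(total_score.describe())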

    Reposted from https://blog.csdn.net/kxiaozhuk/article/details/84612632

    This concludes the walkthrough of building a credit scorecard. You are welcome to check out my Python credit scorecard course.

    Python financial risk-control scorecard modelling and data analysis micro-course (videos recorded by the author): http://dwz.date/b9vv


    [Blog]: https://blog.csdn.net/sunyaowu315
    [Blog outline]: https://blog.csdn.net/sunyaowu315/article/details/82905347


    Dataset description:

    The data used in this case study comes from a credit-application review competition organised by PPDai (拍拍贷). It consists of three files:

    • PPD_Training_Master_GBK_3_1_Training_Set.csv: the applicant's declared information on PPDai plus some third-party data, together with the target variable to be predicted.
    • PPD_LogInfo_3_1_Training_Set: the applicants' login records.
    • PPD_Userupdate_Info_3_1_Training_Set: information-update actions of some applicants.

    The modelling task is to process the data in these three files, extract features, and build a suitable model to predict post-loan performance.

    [Logistic regression fundamentals]: https://blog.csdn.net/sunyaowu315/article/details/87866135




    Main program

    import pandas as pd
    import datetime
    import collections
    import numpy as np
    import numbers
    import random
    import sys
    _path = r'C:\Users\A3\Desktop\LR_scorecard'
    sys.path.append(_path)
    import pickle
    from itertools import combinations
    from sklearn.linear_model import LinearRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_curve
    from sklearn.metrics import roc_auc_score
    import statsmodels.api as sm
    from importlib import reload
    from matplotlib import pyplot as plt
    reload(sys)
    #sys.setdefaultencoding( "utf-8")
    import scorecard_functions as sf
    #from scorecard_functions_V3 import *
    from sklearn.linear_model import LogisticRegressionCV
    # -*- coding: utf-8 -*-
    
    ################################
    ######## UDF: user-defined functions ########
    ################################
    ### Count the records that fall within each time window ###
    def TimeWindowSelection(df, daysCol, time_windows):
        '''
        :param df: the dataset containing the variable of days
        :param daysCol: the column of days
        :param time_windows: the list of time windows
        :return: a dict mapping each time window to its cumulative record count
        '''
        freq_tw = {}
        for tw in time_windows:
            freq = sum(df[daysCol].apply(lambda x: int(x<=tw)))
            freq_tw[tw] = freq
        return freq_tw
    
    
    def DeivdedByZero(nominator, denominator):
        '''
        当分母为0时,返回0;否则返回正常值
        '''
        if denominator == 0:
            return 0
        else:
            return nominator*1.0/denominator
    
    
    #对某些统一的字段进行统一
    def ChangeContent(x):
        y = x.upper()
        if y == '_MOBILEPHONE':
            y = '_PHONE'
        return y
    
    def MissingCategorial(df,x):
        missing_vals = df[x].map(lambda x: int(x!=x))
        return sum(missing_vals)*1.0/df.shape[0]
    
    def MissingContinuous(df,x):
        missing_vals = df[x].map(lambda x: int(np.isnan(x)))
        return sum(missing_vals) * 1.0 / df.shape[0]
    
    def MakeupRandom(x, sampledList):
        if x==x:
            return x
        else:
            randIndex = random.randint(0, len(sampledList)-1)
            return sampledList[randIndex]
    
    
    
    ############################################################
    #Step 0: initial work for the analysis: read the data files, check the consistency of user Idx, etc.#
    ############################################################
    
    folderOfData = 'C:/Users/A3/Desktop/scorecard/'
    data1 = pd.read_csv(folderOfData+'PPD_LogInfo_3_1_Training_Set.csv', header = 0)
    data2 = pd.read_csv(folderOfData+'PPD_Training_Master_GBK_3_1_Training_Set.csv', header = 0,encoding = 'gbk')
    data3 = pd.read_csv(folderOfData+'PPD_Userupdate_Info_3_1_Training_Set.csv', header = 0)
    
    #############################################################################################
    # Step 1: derive features from PPD_LogInfo_3_1_Training_Set & PPD_Userupdate_Info_3_1_Training_Set#
    #############################################################################################
    # compare whether the four city variables match
    data2['city_match'] = data2.apply(lambda x: int(x.UserInfo_2 == x.UserInfo_4 == x.UserInfo_8 == x.UserInfo_20),axis = 1)
    del data2['UserInfo_2']
    del data2['UserInfo_4']
    del data2['UserInfo_8']
    del data2['UserInfo_20']
    
    ### 提取申请日期,计算日期差,查看日期差的分布
    data1['logInfo'] = data1['LogInfo3'].map(lambda x: datetime.datetime.strptime(x,'%Y-%m-%d'))
    data1['Listinginfo'] = data1['Listinginfo1'].map(lambda x: datetime.datetime.strptime(x,'%Y-%m-%d'))
    data1['ListingGap'] = data1[['logInfo','Listinginfo']].apply(lambda x: (x[1]-x[0]).days,axis = 1)
    plt.hist(data1['ListingGap'],bins=200)
    plt.title('Days between login date and listing date')
    ListingGap2 = data1['ListingGap'].map(lambda x: min(x,365))
    plt.hist(ListingGap2,bins=200)
    
    timeWindows = TimeWindowSelection(data1, 'ListingGap', range(30,361,30))
    
    '''
    使用180天作为最大的时间窗口计算新特征
    所有可以使用的时间窗口可以有7 days, 30 days, 60 days, 90 days, 120 days, 150 days and 180 days.
    在每个时间窗口内,计算总的登录次数,不同的登录方式,以及每种登录方式的平均次数
    '''
    time_window = [7, 30, 60, 90, 120, 150, 180]
    var_list = ['LogInfo1','LogInfo2']
    data1GroupbyIdx = pd.DataFrame({'Idx':data1['Idx'].drop_duplicates()})
    
    for tw in time_window:
        data1['TruncatedLogInfo'] = data1['Listinginfo'].map(lambda x: x + datetime.timedelta(-tw))
        temp = data1.loc[data1['logInfo'] >= data1['TruncatedLogInfo']]
        for var in var_list:
            #count the frequences of LogInfo1 and LogInfo2
            count_stats = temp.groupby(['Idx'])[var].count().to_dict()
            data1GroupbyIdx[str(var)+'_'+str(tw)+'_count'] = data1GroupbyIdx['Idx'].map(lambda x: count_stats.get(x,0))
    
            # count the distinct value of LogInfo1 and LogInfo2
            Idx_UserupdateInfo1 = temp[['Idx', var]].drop_duplicates()
            uniq_stats = Idx_UserupdateInfo1.groupby(['Idx'])[var].count().to_dict()
            data1GroupbyIdx[str(var) + '_' + str(tw) + '_unique'] = data1GroupbyIdx['Idx'].map(lambda x: uniq_stats.get(x,0))
    
            # calculate the average count of each value in LogInfo1 and LogInfo2
            data1GroupbyIdx[str(var) + '_' + str(tw) + '_avg_count'] = data1GroupbyIdx[[str(var)+'_'+str(tw)+'_count',str(var) + '_' + str(tw) + '_unique']].\
                apply(lambda x: DeivdedByZero(x[0],x[1]), axis=1)
    
    
    data3['ListingInfo'] = data3['ListingInfo1'].map(lambda x: datetime.datetime.strptime(x,'%Y/%m/%d'))
    data3['UserupdateInfo'] = data3['UserupdateInfo2'].map(lambda x: datetime.datetime.strptime(x,'%Y/%m/%d'))
    data3['ListingGap'] = data3[['UserupdateInfo','ListingInfo']].apply(lambda x: (x[1]-x[0]).days,axis = 1)
    collections.Counter(data3['ListingGap'])
    hist_ListingGap = np.histogram(data3['ListingGap'])
    hist_ListingGap = pd.DataFrame({'Freq':hist_ListingGap[0],'gap':hist_ListingGap[1][1:]})
    hist_ListingGap['CumFreq'] = hist_ListingGap['Freq'].cumsum()
    hist_ListingGap['CumPercent'] = hist_ListingGap['CumFreq'].map(lambda x: x*1.0/hist_ListingGap.iloc[-1]['CumFreq'])
    
    '''
    对 QQ和qQ, Idnumber和idNumber,MOBILEPHONE和PHONE 进行统一
    在时间切片内,计算
     (1) 更新的频率
     (2) 每种更新对象的种类个数
     (3) 对重要信息如IDNUMBER,HASBUYCAR, MARRIAGESTATUSID, PHONE的更新
    '''
    data3['UserupdateInfo1'] = data3['UserupdateInfo1'].map(ChangeContent)
    data3GroupbyIdx = pd.DataFrame({'Idx':data3['Idx'].drop_duplicates()})
    
    time_window = [7, 30, 60, 90, 120, 150, 180]
    for tw in time_window:
        data3['TruncatedLogInfo'] = data3['ListingInfo'].map(lambda x: x + datetime.timedelta(-tw))
        temp = data3.loc[data3['UserupdateInfo'] >= data3['TruncatedLogInfo']]
    
        #frequency of updating
        freq_stats = temp.groupby(['Idx'])['UserupdateInfo1'].count().to_dict()
        data3GroupbyIdx['UserupdateInfo_'+str(tw)+'_freq'] = data3GroupbyIdx['Idx'].map(lambda x: freq_stats.get(x,0))
    
        # number of updated types
        Idx_UserupdateInfo1 = temp[['Idx','UserupdateInfo1']].drop_duplicates()
        uniq_stats = Idx_UserupdateInfo1.groupby(['Idx'])['UserupdateInfo1'].count().to_dict()
        data3GroupbyIdx['UserupdateInfo_' + str(tw) + '_unique'] = data3GroupbyIdx['Idx'].map(lambda x: uniq_stats.get(x, 0))
    
        #average count of each type
        data3GroupbyIdx['UserupdateInfo_' + str(tw) + '_avg_count'] = data3GroupbyIdx[['UserupdateInfo_'+str(tw)+'_freq', 'UserupdateInfo_' + str(tw) + '_unique']]. \
            apply(lambda x: DeivdedByZero(x[0], x[1]), axis=1)
    
        #whether the applicant changed items like IDNUMBER,HASBUYCAR, MARRIAGESTATUSID, PHONE
        Idx_UserupdateInfo1['UserupdateInfo1'] = Idx_UserupdateInfo1['UserupdateInfo1'].map(lambda x: [x])
        Idx_UserupdateInfo1_V2 = Idx_UserupdateInfo1.groupby(['Idx'])['UserupdateInfo1'].sum()
        for item in ['_IDNUMBER','_HASBUYCAR','_MARRIAGESTATUSID','_PHONE']:
            item_dict = Idx_UserupdateInfo1_V2.map(lambda x: int(item in x)).to_dict()
            data3GroupbyIdx['UserupdateInfo_' + str(tw) + str(item)] = data3GroupbyIdx['Idx'].map(lambda x: item_dict.get(x, 0))
    
    # Combine the above features with raw features in PPD_Training_Master_GBK_3_1_Training_Set
    allData = pd.concat([data2.set_index('Idx'), data3GroupbyIdx.set_index('Idx'), data1GroupbyIdx.set_index('Idx')],axis= 1)
    allData.to_csv(folderOfData+'allData_0.csv',encoding = 'gbk')
    
    
    
    
    #######################################
    # Step 2: impute missing values for categorical and numerical variables#
    ######################################
    allData = pd.read_csv(folderOfData+'allData_0.csv',header = 0,encoding = 'gbk')
    allFeatures = list(allData.columns)
    allFeatures.remove('target')
    if 'Idx' in allFeatures:
        allFeatures.remove('Idx')
    allFeatures.remove('ListingInfo')
    
    #检查是否有常数型变量,并且检查是类别型还是数值型变量
    numerical_var = []
    for col in list(allFeatures):  # iterate over a copy, since allFeatures is modified inside the loop
        if len(set(allData[col])) == 1:
            print('delete {} from the dataset because it is a constant'.format(col))
            del allData[col]
            allFeatures.remove(col)
        else:
            uniq_valid_vals = [i for i in allData[col] if i == i]
            uniq_valid_vals = list(set(uniq_valid_vals))
            if len(uniq_valid_vals) >= 10 and isinstance(uniq_valid_vals[0], numbers.Real):
                numerical_var.append(col)
    
    categorical_var = [i for i in allFeatures if i not in numerical_var]
    
    
    #检查变量的最多值的占比情况,以及每个变量中占比最大的值
    records_count = allData.shape[0]
    col_most_values,col_large_value = {},{}
    for col in allFeatures:
        value_count = allData[col].groupby(allData[col]).count()
        col_most_values[col] = max(value_count)/records_count
        large_value = value_count[value_count== max(value_count)].index[0]
        col_large_value[col] = large_value
    col_most_values_df = pd.DataFrame.from_dict(col_most_values, orient = 'index')
    col_most_values_df.columns = ['max percent']
    col_most_values_df = col_most_values_df.sort_values(by = 'max percent', ascending = False)
    pcnt = list(col_most_values_df[:500]['max percent'])
    vars = list(col_most_values_df[:500].index)
    plt.bar(range(len(pcnt)), height = pcnt)
    plt.title('Largest Percentage of Single Value in Each Variable')
    
    #计算多数值占比超过90%的字段中,少数值的坏样本率是否会显著高于多数值
    large_percent_cols = list(col_most_values_df[col_most_values_df['max percent']>=0.9].index)
    bad_rate_diff = {}
    for col in large_percent_cols:
        large_value = col_large_value[col]
        temp = allData[[col,'target']]
        temp[col] = temp.apply(lambda x: int(x[col]==large_value),axis=1)
        bad_rate = temp.groupby(col).mean()
        if bad_rate.iloc[0]['target'] == 0:
            bad_rate_diff[col] = 0
            continue
        bad_rate_diff[col] = np.log(bad_rate.iloc[0]['target']/bad_rate.iloc[1]['target'])
    bad_rate_diff_sorted = sorted(bad_rate_diff.items(),key=lambda x: x[1], reverse=True)
    bad_rate_diff_sorted_values = [x[1] for x in bad_rate_diff_sorted]
    plt.bar(x = range(len(bad_rate_diff_sorted_values)), height = bad_rate_diff_sorted_values)
    
    #由于所有的少数值的坏样本率并没有显著高于多数值,意味着这些变量可以直接剔除
    for col in large_percent_cols:
        if col in numerical_var:
            numerical_var.remove(col)
        else:
            categorical_var.remove(col)
        del allData[col]
    
    '''
    对类别型变量,如果缺失超过80%, 就删除,否则当成特殊的状态
    '''
    missing_pcnt_threshould_1 = 0.8
    for col in list(categorical_var):  # iterate over a copy, since categorical_var is modified inside the loop
        missingRate = MissingCategorial(allData,col)
        print('{0} has missing rate as {1}'.format(col,missingRate))
        if missingRate > missing_pcnt_threshould_1:
            categorical_var.remove(col)
            del allData[col]
        if 0 < missingRate < missing_pcnt_threshould_1:
            uniq_valid_vals = [i for i in allData[col] if i == i]
            uniq_valid_vals = list(set(uniq_valid_vals))
            if isinstance(uniq_valid_vals[0], numbers.Real):
                missing_position = allData.loc[allData[col] != allData[col]][col].index
                not_missing_sample = [-1]*len(missing_position)
                allData.loc[missing_position, col] = not_missing_sample
            else:
                # In this way we convert NaN to NAN, which is a string instead of np.nan
                allData[col] = allData[col].map(lambda x: str(x).upper())
    
    allData_bk = allData.copy()
    '''
    检查数值型变量
    '''
    missing_pcnt_threshould_2 = 0.8
    deleted_var = []
    for col in numerical_var:
        missingRate = MissingContinuous(allData, col)
        print('{0} has missing rate as {1}'.format(col, missingRate))
        if missingRate > missing_pcnt_threshould_2:
            deleted_var.append(col)
            print('we delete variable {} because of its high missing rate'.format(col))
        else:
            if missingRate > 0:
                not_missing = allData.loc[allData[col] == allData[col]][col]
                #makeuped = allData[col].map(lambda x: MakeupRandom(x, list(not_missing)))
                missing_position = allData.loc[allData[col] != allData[col]][col].index
                not_missing_sample = random.sample(list(not_missing), len(missing_position))
                allData.loc[missing_position,col] = not_missing_sample
                #del allData[col]
                #allData[col] = makeuped
                missingRate2 = MissingContinuous(allData, col)
                print('missing rate after making up is:{}'.format(str(missingRate2)))
    
    if deleted_var != []:
        for col in deleted_var:
            numerical_var.remove(col)
            del allData[col]
    
    
    allData.to_csv(folderOfData+'allData_1.csv', header=True,encoding='gbk', columns = allData.columns, index=False)
    
    allData = pd.read_csv(folderOfData+'allData_1.csv', header=0,encoding='gbk')
    
    
    
    
    ###################################
    # Step 3: bin the variables with chi-square (ChiMerge) binning#
    ###################################
    '''
    对不同类型的变量,分箱的处理是不同的:
    (1)数值型变量可直接分箱
    (2)取值个数较多的类别型变量,需要用bad rate做编码转换成数值型变量,再分箱
    (3)取值个数较少的类别型变量不需要分箱,但是要检查是否每个类别都有好坏样本。如果有类别只有好或坏,需要合并
    '''
    
    #for each categorical variable, if it has distinct values more than 5, we use the ChiMerge to merge it
    
    trainData = pd.read_csv(folderOfData+'allData_1.csv',header = 0, encoding='gbk')
    #trainData = pd.read_csv(folderOfData+'allData_1.csv',header = 0, encoding='gbk',dtype=object)
    allFeatures = list(trainData.columns)
    allFeatures.remove('ListingInfo')
    allFeatures.remove('target')
    #allFeatures.remove('Idx')
    
    #将特征区分为数值型和类别型
    numerical_var = []
    for var in allFeatures:
        uniq_vals = list(set(trainData[var]))
        if np.nan in uniq_vals:
            uniq_vals.remove( np.nan)
        if len(uniq_vals) >= 10 and isinstance(uniq_vals[0],numbers.Real):
            numerical_var.append(var)
    
    categorical_var = [i for i in allFeatures if i not in numerical_var]
    
    for col in categorical_var:
        #for Chinese character, upper() is not valid
        if col not in ['UserInfo_7','UserInfo_9','UserInfo_19']:
            trainData[col] = trainData[col].map(lambda x: str(x).upper())
    
    
    '''
    对于类别型变量,按照以下方式处理
    1,如果变量的取值个数超过5,计算bad rate进行编码
    2,除此之外,其他任何类别型变量如果有某个取值中,对应的样本全部是坏样本或者是好样本,进行合并。
    '''
    deleted_features = []   #将处理过的变量删除,防止对后面建模的干扰
    encoded_features = {}   #将bad rate编码方式保存下来,在以后的测试和生产环境中需要使用
    merged_features = {}    #将类别型变量合并方案保留下来
    var_IV = {}  #save the IV values for binned features       #将IV值保留和WOE值
    var_WOE = {}
    for col in list(categorical_var):  # iterate over a copy, since categorical_var may be modified inside the loop
        print('we are processing {}'.format(col))
    # =============================================================================
    #     if len(set(trainData[col]))>1000:
    #         continue
    # =============================================================================
        if len(set(trainData[col]))>5:
            print('{} is encoded with bad rate'.format(col))
            col0 = str(col)+'_encoding'
    
            #(1), 计算坏样本率并进行编码
            encoding_result = sf.BadRateEncoding(trainData, col, 'target')
            trainData[col0], br_encoding = encoding_result['encoding'],encoding_result['bad_rate']
    
            #(2), 将(1)中的编码后的变量也加入数值型变量列表中,为后面的卡方分箱做准备
            numerical_var.append(col0)
    
            #(3), 保存编码结果
            encoded_features[col] = [col0, br_encoding]
    
            #(4), 删除原始值
    
            deleted_features.append(col)
        else:
            bad_bin = trainData.groupby([col])['target'].sum()
            #对于类别数少于5个,但是出现0坏样本的特征需要做处理
            if min(bad_bin) == 0:
                print('{} has 0 bad sample!'.format(col))
                col1 = str(col) + '_mergeByBadRate'
                #(1), 找出最优合并方式,使得每一箱同时包含好坏样本
                mergeBin = sf.MergeBad0(trainData, col, 'target')
                #(2), 依照(1)的结果对值进行合并
                trainData[col1] = trainData[col].map(mergeBin)
                maxPcnt = sf.MaximumBinPcnt(trainData, col1)
                #如果合并后导致有箱占比超过90%,就删除。
                if maxPcnt > 0.9:
                    print('{} is deleted because of large percentage of single bin'.format(col))
                    deleted_features.append(col)
                    categorical_var.remove(col)
                    del trainData[col]
                    continue
                #(3) 如果合并后的新的变量满足要求,就保留下来
                merged_features[col] = [col1, mergeBin]
                WOE_IV = sf.CalcWOE(trainData, col1, 'target')
                var_WOE[col1] = WOE_IV['WOE']
                var_IV[col1] = WOE_IV['IV']
                #del trainData[col]
                deleted_features.append(col)
            else:
                WOE_IV = sf.CalcWOE(trainData, col, 'target')
                var_WOE[col] = WOE_IV['WOE']
                var_IV[col] = WOE_IV['IV']
    
    
    '''
    对于连续型变量,处理方式如下:
    1,利用卡方分箱法将变量分成5个箱
    2,检查坏样本率的单调性,如果发现单调性不满足,就进行合并,直到满足单调性
    '''
    var_cutoff = {}
    for col in list(numerical_var):  # iterate over a copy, since numerical_var may be modified inside the loop
        print("{} is in processing".format(col))
        col1 = str(col) + '_Bin'
    
        #(1),用卡方分箱法进行分箱,并且保存每一个分割的端点。例如端点=[10,20,30]表示将变量分为x<10,10<x<20,20<x<30和x>30.
        #特别地,缺失值-1不参与分箱
        if -1 in set(trainData[col]):
            special_attribute = [-1]
        else:
            special_attribute = []
        cutOffPoints = sf.ChiMerge(trainData, col, 'target',special_attribute=special_attribute)
        var_cutoff[col] = cutOffPoints
        trainData[col1] = trainData[col].map(lambda x: sf.AssignBin(x, cutOffPoints,special_attribute=special_attribute))
    
        #(2), check whether the bad rate is monotone
        BRM = sf.BadRateMonotone(trainData, col1, 'target',special_attribute=special_attribute)
        if not BRM:
            if special_attribute == []:
                bin_merged = sf.Monotone_Merge(trainData, 'target', col1)
                removed_index = []
                for bin in bin_merged:
                    if len(bin)>1:
                        indices = [int(b.replace('Bin ','')) for b in bin]
                        removed_index = removed_index+indices[0:-1]
                removed_point = [cutOffPoints[k] for k in removed_index]
                for p in removed_point:
                    cutOffPoints.remove(p)
                var_cutoff[col] = cutOffPoints
                trainData[col1] = trainData[col].map(lambda x: sf.AssignBin(x, cutOffPoints, special_attribute=special_attribute))
            else:
                cutOffPoints2 = [i for i in cutOffPoints if i not in special_attribute]
                temp = trainData.loc[~trainData[col].isin(special_attribute)]
                bin_merged = sf.Monotone_Merge(temp, 'target', col1)
                removed_index = []
                for bin in bin_merged:
                    if len(bin) > 1:
                        indices = [int(b.replace('Bin ', '')) for b in bin]
                        removed_index = removed_index + indices[0:-1]
                removed_point = [cutOffPoints2[k] for k in removed_index]
                for p in removed_point:
                    cutOffPoints2.remove(p)
                cutOffPoints2 = cutOffPoints2 + special_attribute
                var_cutoff[col] = cutOffPoints2
                trainData[col1] = trainData[col].map(lambda x: sf.AssignBin(x, cutOffPoints2, special_attribute=special_attribute))
    
        #(3), 分箱后再次检查是否有单一的值占比超过90%。如果有,删除该变量
        maxPcnt = sf.MaximumBinPcnt(trainData, col1)
        if maxPcnt > 0.9:
            # del trainData[col1]
            deleted_features.append(col)
            numerical_var.remove(col)
            print('we delete {} because the maximum bin occupies more than 90%'.format(col))
            continue
    
        WOE_IV = sf.CalcWOE(trainData, col1, 'target')
        var_IV[col] = WOE_IV['IV']
        var_WOE[col] = WOE_IV['WOE']
        #del trainData[col]
    
    
    
    trainData.to_csv(folderOfData+'allData_2.csv', header=True,encoding='gbk', columns = trainData.columns, index=False)
    
    
    
    with open(folderOfData+'var_WOE.pkl',"wb") as f:
        f.write(pickle.dumps(var_WOE))
    
    with open(folderOfData+'var_IV.pkl',"wb") as f:
        f.write(pickle.dumps(var_IV))
    
    
    with open(folderOfData+'var_cutoff.pkl',"wb") as f:
        f.write(pickle.dumps(var_cutoff))
    
    
    with open(folderOfData+'merged_features.pkl',"wb") as f:
        f.write(pickle.dumps(merged_features))
    
    ########################################
    # Step 4: univariate and multivariate analysis after WOE encoding#
    ########################################
    trainData = pd.read_csv(folderOfData+'allData_2.csv', header=0, encoding='gbk')
    
    
    with open(folderOfData+'var_WOE.pkl',"rb") as f:
        var_WOE = pickle.load(f)
    
    with open(folderOfData+'var_IV.pkl',"rb") as f:
        var_IV = pickle.load(f)
    
    
    with open(folderOfData+'var_cutoff.pkl',"rb") as f:
        var_cutoff = pickle.load(f)
    
    
    with open(folderOfData+'merged_features.pkl',"rb") as f:
        merged_features = pickle.load(f)
    
    #将一些看起来像数值变量实际上是类别变量的字段转换成字符
    num2str = ['SocialNetwork_13','SocialNetwork_12','UserInfo_6','UserInfo_5','UserInfo_10','UserInfo_17']
    for col in num2str:
        trainData[col] = trainData[col].map(lambda x: str(x))
    
    
    for col in var_WOE.keys():
        print(col)
        col2 = str(col)+"_WOE"
        if col in var_cutoff.keys():
            cutOffPoints = var_cutoff[col]
            special_attribute = []
            if -1 in cutOffPoints:
                special_attribute = [-1]
            binValue = trainData[col].map(lambda x: sf.AssignBin(x, cutOffPoints,special_attribute=special_attribute))
            trainData[col2] = binValue.map(lambda x: var_WOE[col][x])
        else:
            print('********************************************************************************************')
            print(col)
            if -1 in set(trainData[col]):
                trainData[col2] = trainData[col].map(lambda x: var_WOE[col][str(x*1.0)])
            else:
                trainData[col2] = trainData[col].map(lambda x: var_WOE[col][x])
    
    trainData.to_csv(folderOfData+'allData_3.csv', header=True,encoding='gbk', columns = trainData.columns, index=False)
    
    
    
    ### (i) 选择IV高于阈值的变量
    trainData = pd.read_csv(folderOfData+'allData_3.csv', header=0,encoding='gbk')
    all_IV = list(var_IV.values())
    all_IV = sorted(all_IV, reverse=True)
    plt.bar(x=range(len(all_IV)), height = all_IV)
    iv_threshould = 0.02
    varByIV = [k for k, v in var_IV.items() if v > iv_threshould]
    
    
    
    ### (ii) 检查WOE编码后的变量的两两线性相关性
    
    var_IV_selected = {k:var_IV[k] for k in varByIV}
    var_IV_sorted = sorted(var_IV_selected.items(), key=lambda d:d[1], reverse = True)
    var_IV_sorted = [i[0] for i in var_IV_sorted]
    
    removed_var  = []
    roh_thresould = 0.6
    for i in range(len(var_IV_sorted)-1):
        if var_IV_sorted[i] not in removed_var:
            x1 = var_IV_sorted[i]+"_WOE"
            for j in range(i+1,len(var_IV_sorted)):
                if var_IV_sorted[j] not in removed_var:
                    x2 = var_IV_sorted[j] + "_WOE"
                    roh = np.corrcoef([trainData[x1], trainData[x2]])[0, 1]
                    if abs(roh) >= roh_thresould:
                        print('the correlation coeffient between {0} and {1} is {2}'.format(x1, x2, str(roh)))
                        if var_IV[var_IV_sorted[i]] > var_IV[var_IV_sorted[j]]:
                            removed_var.append(var_IV_sorted[j])
                        else:
                            removed_var.append(var_IV_sorted[i])
    
    var_IV_sortet_2 = [i for i in var_IV_sorted if i not in removed_var]
    
    ### (iii)检查是否有变量与其他所有变量的VIF > 10
    for i in range(len(var_IV_sortet_2)):
        x0 = trainData[var_IV_sortet_2[i]+'_WOE']
        x0 = np.array(x0)
        X_Col = [k+'_WOE' for k in var_IV_sortet_2 if k != var_IV_sortet_2[i]]
        X = trainData[X_Col]
        X = np.matrix(X)
        regr = LinearRegression()
        clr= regr.fit(X, x0)
        x_pred = clr.predict(X)
        R2 = 1 - ((x_pred - x0) ** 2).sum() / ((x0 - x0.mean()) ** 2).sum()
        vif = 1/(1-R2)
        if vif > 10:
            print("Warning: the vif for {0} is {1}".format(var_IV_sortet_2[i], vif))
    
    
    
    #########################
    # Step 5: fit the logistic regression model#
    #########################
    multi_analysis = [i+'_WOE' for i in var_IV_sortet_2]
    y = trainData['target']
    X = trainData[multi_analysis].copy()
    X['intercept'] = [1]*X.shape[0]
    
    
    LR = sm.Logit(y, X).fit()
    summary = LR.summary2()
    pvals = LR.pvalues.to_dict()
    params = LR.params.to_dict()
    
    #发现有变量不显著,因此需要单独检验显著性
    varLargeP = {k: v for k,v in pvals.items() if v >= 0.1}
    varLargeP = sorted(varLargeP.items(), key=lambda d:d[1], reverse = True)
    varLargeP = [i[0] for i in varLargeP]
    p_value_list = {}
    for var in varLargeP:
        X_temp = trainData[var].copy().to_frame()
        X_temp['intercept'] = [1] * X_temp.shape[0]
        LR = sm.Logit(y, X_temp).fit()
        p_value_list[var] = LR.pvalues[var]
    for k,v in p_value_list.items():
        print("{0} has p-value of {1} in univariate regression".format(k,v))
    
    
    #发现有变量的系数为正,因此需要单独检验正确性
    varPositive = [k for k,v in params.items() if v >= 0]
    coef_list = {}
    for var in varPositive:
        X_temp = trainData[var].copy().to_frame()
        X_temp['intercept'] = [1] * X_temp.shape[0]
        LR = sm.Logit(y, X_temp).fit()
        coef_list[var] = LR.params[var]
    for k,v in coef_list.items():
        print("{0} has coefficient of {1} in univariate regression".format(k,v))
    
    
    selected_var = [multi_analysis[0]]
    for var in multi_analysis[1:]:
        try_vars = selected_var+[var]
        X_temp = trainData[try_vars].copy()
        X_temp['intercept'] = [1] * X_temp.shape[0]
        LR = sm.Logit(y, X_temp).fit()
        #summary = LR.summary2()
        pvals, params = LR.pvalues, LR.params
        del params['intercept']
        if max(pvals)<0.1 and max(params)<0:
            selected_var.append(var)
    
    LR.summary2()
    
    y_pred = LR.predict(X_temp)
    y_result = pd.DataFrame({'y_pred':y_pred, 'y_real':list(trainData['target'])})
    sf.KS(y_result,'y_pred','y_real')
    
    roc_auc_score(trainData['target'], y_pred)
    
    
    
    ################
    # Step 6: scale probabilities into scores#
    ################
    scores = sf.Prob2Score(y_pred,200,100)
    plt.hist(scores,bins=100)
    
    

    Utility functions

    import numpy as np
    import pandas as pd
    
    def SplitData(df, col, numOfSplit, special_attribute=[]):
        '''
        :param df: 按照col排序后的数据集
        :param col: 待分箱的变量
        :param numOfSplit: 切分的组别数
        :param special_attribute: 在切分数据集的时候,某些特殊值需要排除在外
        :return: 在原数据集上增加一列,把原始细粒度的col重新划分成粗粒度的值,便于分箱中的合并处理
        '''
        df2 = df.copy()
        if special_attribute != []:
            df2 = df.loc[~df[col].isin(special_attribute)]
        N = df2.shape[0]
        n = int(N/numOfSplit)
        splitPointIndex = [i*n for i in range(1,numOfSplit)]
        rawValues = sorted(list(df2[col]))
        splitPoint = [rawValues[i] for i in splitPointIndex]
        splitPoint = sorted(list(set(splitPoint)))
        return splitPoint
    
    def MaximumBinPcnt(df,col):
        '''
        :return: 数据集df中,变量col的分布占比
        '''
        N = df.shape[0]
        total = df.groupby([col])[col].count()
        pcnt = total*1.0/N
        return max(pcnt)
    
    
    
    def Chi2(df, total_col, bad_col):
        '''
        :param df: 包含全部样本总计与坏样本总计的数据框
        :param total_col: 全部样本的个数
        :param bad_col: 坏样本的个数
        :return: 卡方值
        '''
        df2 = df.copy()
        # 求出df中,总体的坏样本率和好样本率
        badRate = sum(df2[bad_col])*1.0/sum(df2[total_col])
        # 当全部样本只有好或者坏样本时,卡方值为0
        if badRate in [0,1]:
            return 0
        df2['good'] = df2.apply(lambda x: x[total_col] - x[bad_col], axis = 1)
        goodRate = sum(df2['good']) * 1.0 / sum(df2[total_col])
        # 期望坏(好)样本个数=全部样本个数*平均坏(好)样本占比
        df2['badExpected'] = df[total_col].apply(lambda x: x*badRate)
        df2['goodExpected'] = df[total_col].apply(lambda x: x * goodRate)
        badCombined = zip(df2['badExpected'], df2[bad_col])
        goodCombined = zip(df2['goodExpected'], df2['good'])
        badChi = [(i[0]-i[1])**2/i[0] for i in badCombined]
        goodChi = [(i[0] - i[1]) ** 2 / i[0] for i in goodCombined]
        chi2 = sum(badChi) + sum(goodChi)
        return chi2
    
    
    
    def BinBadRate(df, col, target, grantRateIndicator=0):
        '''
        :param df: 需要计算好坏比率的数据集
        :param col: 需要计算好坏比率的特征
        :param target: 好坏标签
        :param grantRateIndicator: 1返回总体的坏样本率,0不返回
        :return: 每箱的坏样本率,以及总体的坏样本率(当grantRateIndicator==1时)
        '''
        total = df.groupby([col])[target].count()
        total = pd.DataFrame({'total': total})
        bad = df.groupby([col])[target].sum()
        bad = pd.DataFrame({'bad': bad})
        regroup = total.merge(bad, left_index=True, right_index=True, how='left')
        regroup.reset_index(level=0, inplace=True)
        regroup['bad_rate'] = regroup.apply(lambda x: x.bad * 1.0 / x.total, axis=1)
        dicts = dict(zip(regroup[col],regroup['bad_rate']))
        if grantRateIndicator==0:
            return (dicts, regroup)
        N = sum(regroup['total'])
        B = sum(regroup['bad'])
        overallRate = B * 1.0 / N
        return (dicts, regroup, overallRate)
    
    
    
    def AssignGroup(x, bin):
        '''
        :return: 数值x在区间映射下的结果。例如,x=2,bin=[0,3,5], 由于0<x<3,x映射成3
        '''
        N = len(bin)
        if x<=min(bin):
            return min(bin)
        elif x>max(bin):
            return 10e10
        else:
            for i in range(N-1):
                if bin[i] < x <= bin[i+1]:
                    return bin[i+1]
    
    
    def ChiMerge(df, col, target, max_interval=5,special_attribute=[],minBinPcnt=0):
        '''
        :param df: 包含目标变量与分箱属性的数据框
        :param col: 需要分箱的属性
        :param target: 目标变量,取值0或1
        :param max_interval: 最大分箱数。如果原始属性的取值个数低于该参数,不执行这段函数
        :param special_attribute: 不参与分箱的属性取值
        :param minBinPcnt:最小箱的占比,默认为0
        :return: 分箱结果
        '''
        colLevels = sorted(list(set(df[col])))
        N_distinct = len(colLevels)
        if N_distinct <= max_interval:  #如果原始属性的取值个数低于max_interval,不执行这段函数
            print("The number of original levels for {} is less than or equal to max intervals".format(col))
            return colLevels[:-1]
        else:
            if len(special_attribute)>=1:
                df1 = df.loc[df[col].isin(special_attribute)]
                df2 = df.loc[~df[col].isin(special_attribute)]
            else:
                df2 = df.copy()
            N_distinct = len(list(set(df2[col])))
    
            # 步骤一: 通过col对数据集进行分组,求出每组的总样本数与坏样本数
            if N_distinct > 100:
                split_x = SplitData(df2, col, 100)
                df2['temp'] = df2[col].map(lambda x: AssignGroup(x, split_x))
            else:
                df2['temp'] = df2[col]
            # 总体bad rate将被用来计算expected bad count
            (binBadRate, regroup, overallRate) = BinBadRate(df2, 'temp', target, grantRateIndicator=1)
    
            # 首先,每个单独的属性值将被分为单独的一组
            # 对属性值进行排序,然后两两组别进行合并
            colLevels = sorted(list(set(df2['temp'])))
            groupIntervals = [[i] for i in colLevels]
    
            # 步骤二:建立循环,不断合并最优的相邻两个组别,直到:
            # 1,最终分裂出来的分箱数<=预设的最大分箱数
            # 2,每箱的占比不低于预设值(可选)
            # 3,每箱同时包含好坏样本
            # 如果有特殊属性,那么最终分裂出来的分箱数=预设的最大分箱数-特殊属性的个数
            split_intervals = max_interval - len(special_attribute)
            while (len(groupIntervals) > split_intervals):  # 终止条件: 当前分箱数=预设的分箱数
                # 每次循环时, 计算合并相邻组别后的卡方值。具有最小卡方值的合并方案,是最优方案
                chisqList = []
                for k in range(len(groupIntervals)-1):
                    temp_group = groupIntervals[k] + groupIntervals[k+1]
                    df2b = regroup.loc[regroup['temp'].isin(temp_group)]
                    chisq = Chi2(df2b, 'total', 'bad')
                    chisqList.append(chisq)
                best_comnbined = chisqList.index(min(chisqList))
                groupIntervals[best_comnbined] = groupIntervals[best_comnbined] + groupIntervals[best_comnbined+1]
                # 当将最优的相邻的两个变量合并在一起后,需要从原来的列表中将其移除。例如,将[3,4,5] 与[6,7]合并成[3,4,5,6,7]后,需要将[3,4,5] 与[6,7]移除,保留[3,4,5,6,7]
                groupIntervals.remove(groupIntervals[best_comnbined+1])
            groupIntervals = [sorted(i) for i in groupIntervals]
            cutOffPoints = [max(i) for i in groupIntervals[:-1]]
    
            # 检查是否有箱没有好或者坏样本。如果有,需要跟相邻的箱进行合并,直到每箱同时包含好坏样本
            groupedvalues = df2['temp'].apply(lambda x: AssignBin(x, cutOffPoints))
            df2['temp_Bin'] = groupedvalues
            (binBadRate,regroup) = BinBadRate(df2, 'temp_Bin', target)
            [minBadRate, maxBadRate] = [min(binBadRate.values()),max(binBadRate.values())]
            while minBadRate ==0 or maxBadRate == 1:
                # 找出全部为好/坏样本的箱
                indexForBad01 = regroup[regroup['bad_rate'].isin([0,1])].temp_Bin.tolist()
                bin=indexForBad01[0]
                # 如果是最后一箱,则需要和上一个箱进行合并,也就意味着分裂点cutOffPoints中的最后一个需要移除
                if bin == max(regroup.temp_Bin):
                    cutOffPoints = cutOffPoints[:-1]
                # 如果是第一箱,则需要和下一个箱进行合并,也就意味着分裂点cutOffPoints中的第一个需要移除
                elif bin == min(regroup.temp_Bin):
                    cutOffPoints = cutOffPoints[1:]
                # 如果是中间的某一箱,则需要和前后中的一个箱进行合并,依据是较小的卡方值
                else:
                    # 和前一箱进行合并,并且计算卡方值
                    currentIndex = list(regroup.temp_Bin).index(bin)
                    prevIndex = list(regroup.temp_Bin)[currentIndex - 1]
                    df3 = df2.loc[df2['temp_Bin'].isin([prevIndex, bin])]
                    (binBadRate, df2b) = BinBadRate(df3, 'temp_Bin', target)
                    chisq1 = Chi2(df2b, 'total', 'bad')
                    # 和后一箱进行合并,并且计算卡方值
                    laterIndex = list(regroup.temp_Bin)[currentIndex + 1]
                    df3b = df2.loc[df2['temp_Bin'].isin([laterIndex, bin])]
                    (binBadRate, df2b) = BinBadRate(df3b, 'temp_Bin', target)
                    chisq2 = Chi2(df2b, 'total', 'bad')
                    if chisq1 < chisq2:
                        cutOffPoints.remove(cutOffPoints[currentIndex - 1])
                    else:
                        cutOffPoints.remove(cutOffPoints[currentIndex])
                # 完成合并之后,需要再次计算新的分箱准则下,每箱是否同时包含好坏样本
                groupedvalues = df2['temp'].apply(lambda x: AssignBin(x, cutOffPoints))
                df2['temp_Bin'] = groupedvalues
                (binBadRate, regroup) = BinBadRate(df2, 'temp_Bin', target)
                [minBadRate, maxBadRate] = [min(binBadRate.values()), max(binBadRate.values())]
            # 需要检查分箱后的最小占比
            if minBinPcnt > 0:
                groupedvalues = df2['temp'].apply(lambda x: AssignBin(x, cutOffPoints))
                df2['temp_Bin'] = groupedvalues
                valueCounts = groupedvalues.value_counts().to_frame()
                N = sum(valueCounts['temp'])
                valueCounts['pcnt'] = valueCounts['temp'].apply(lambda x: x * 1.0 / N)
                valueCounts = valueCounts.sort_index()
                minPcnt = min(valueCounts['pcnt'])
                while minPcnt < minBinPcnt and len(cutOffPoints) > 2:
                    # 找出占比最小的箱
                    indexForMinPcnt = valueCounts[valueCounts['pcnt'] == minPcnt].index.tolist()[0]
                    # 如果占比最小的箱是最后一箱,则需要和上一个箱进行合并,也就意味着分裂点cutOffPoints中的最后一个需要移除
                    if indexForMinPcnt == max(valueCounts.index):
                        cutOffPoints = cutOffPoints[:-1]
                    # 如果占比最小的箱是第一箱,则需要和下一个箱进行合并,也就意味着分裂点cutOffPoints中的第一个需要移除
                    elif indexForMinPcnt == min(valueCounts.index):
                        cutOffPoints = cutOffPoints[1:]
                    # 如果占比最小的箱是中间的某一箱,则需要和前后中的一个箱进行合并,依据是较小的卡方值
                    else:
                        # 和前一箱进行合并,并且计算卡方值
                        currentIndex = list(valueCounts.index).index(indexForMinPcnt)
                        prevIndex = list(valueCounts.index)[currentIndex - 1]
                        df3 = df2.loc[df2['temp_Bin'].isin([prevIndex, indexForMinPcnt])]
                        (binBadRate, df2b) = BinBadRate(df3, 'temp_Bin', target)
                        chisq1 = Chi2(df2b, 'total', 'bad')
                        # 和后一箱进行合并,并且计算卡方值
                        laterIndex = list(valueCounts.index)[currentIndex + 1]
                        df3b = df2.loc[df2['temp_Bin'].isin([laterIndex, indexForMinPcnt])]
                        (binBadRate, df2b) = BinBadRate(df3b, 'temp_Bin', target)
                        chisq2 = Chi2(df2b, 'total', 'bad')
                        if chisq1 < chisq2:
                            cutOffPoints.remove(cutOffPoints[currentIndex - 1])
                        else:
                            cutOffPoints.remove(cutOffPoints[currentIndex])
                    groupedvalues = df2['temp'].apply(lambda x: AssignBin(x, cutOffPoints))
                    df2['temp_Bin'] = groupedvalues
                    valueCounts = groupedvalues.value_counts().to_frame()
                    valueCounts['pcnt'] = valueCounts['temp'].apply(lambda x: x * 1.0 / N)
                    valueCounts = valueCounts.sort_index()
                    minPcnt = min(valueCounts['pcnt'])
            cutOffPoints = special_attribute + cutOffPoints
            return cutOffPoints
    
    
    
    def BadRateEncoding(df, col, target):
        '''
        :return: 在数据集df中,用坏样本率给col进行编码。target表示坏样本标签
        '''
        regroup = BinBadRate(df, col, target, grantRateIndicator=0)[1]
        br_dict = regroup[[col,'bad_rate']].set_index([col]).to_dict(orient='index')
        for k, v in br_dict.items():
            br_dict[k] = v['bad_rate']
        badRateEnconding = df[col].map(lambda x: br_dict[x])
        return {'encoding':badRateEnconding, 'bad_rate':br_dict}
    
    
    def AssignBin(x, cutOffPoints,special_attribute=[]):
        '''
        :param x: 某个变量的某个取值
        :param cutOffPoints: 上述变量的分箱结果,用切分点表示
        :param special_attribute:  不参与分箱的特殊取值
        :return: 分箱后的对应的第几个箱,从0开始
        例如, cutOffPoints = [10,20,30], 对于 x = 7, 返回 Bin 0;对于x=23,返回Bin 2; 对于x = 35, return Bin 3。
        对于特殊值,返回的序列数前加"-"
        '''
        cutOffPoints2 = [i for i in cutOffPoints if i not in special_attribute]
        numBin = len(cutOffPoints2)
        if x in special_attribute:
            i = special_attribute.index(x)+1
            return 'Bin {}'.format(0-i)
        if x<=cutOffPoints2[0]:
            return 'Bin 0'
        elif x > cutOffPoints2[-1]:
            return 'Bin {}'.format(numBin)
        else:
            for i in range(0,numBin):
                if cutOffPoints2[i] < x <=  cutOffPoints2[i+1]:
                    return 'Bin {}'.format(i+1)
    
    
    
    def CalcWOE(df, col, target):
        '''
        :param df: 包含需要计算WOE的变量和目标变量
        :param col: 需要计算WOE、IV的变量,必须是分箱后的变量,或者不需要分箱的类别型变量
        :param target: 目标变量,0、1表示好、坏
        :return: 返回WOE和IV
        '''
        total = df.groupby([col])[target].count()
        total = pd.DataFrame({'total': total})
        bad = df.groupby([col])[target].sum()
        bad = pd.DataFrame({'bad': bad})
        regroup = total.merge(bad, left_index=True, right_index=True, how='left')
        regroup.reset_index(level=0, inplace=True)
        N = sum(regroup['total'])
        B = sum(regroup['bad'])
        regroup['good'] = regroup['total'] - regroup['bad']
        G = N - B
        regroup['bad_pcnt'] = regroup['bad'].map(lambda x: x*1.0/B)
        regroup['good_pcnt'] = regroup['good'].map(lambda x: x * 1.0 / G)
        regroup['WOE'] = regroup.apply(lambda x: np.log(x.good_pcnt*1.0/x.bad_pcnt),axis = 1)
        WOE_dict = regroup[[col,'WOE']].set_index(col).to_dict(orient='index')
        for k, v in WOE_dict.items():
            WOE_dict[k] = v['WOE']
        IV = regroup.apply(lambda x: (x.good_pcnt-x.bad_pcnt)*np.log(x.good_pcnt*1.0/x.bad_pcnt),axis = 1)
        IV = sum(IV)
        return {"WOE": WOE_dict, 'IV':IV}
    
    
    def FeatureMonotone(x):
        '''
        :return: 返回序列x中有几个元素不满足单调性,以及这些元素的位置。
        例如,x=[1,3,2,5], 元素3比前后两个元素都大,不满足单调性;元素2比前后两个元素都小,也不满足单调性。
        故返回的不满足单调性的元素个数为2,位置为1和2.
        '''
        monotone = [x[i]<x[i+1] and x[i] < x[i-1] or x[i]>x[i+1] and x[i] > x[i-1] for i in range(1,len(x)-1)]
        index_of_nonmonotone = [i+1 for i in range(len(monotone)) if monotone[i]]
        return {'count_of_nonmonotone':monotone.count(True), 'index_of_nonmonotone':index_of_nonmonotone}
    
    ## 判断某变量的坏样本率是否单调
    def BadRateMonotone(df, sortByVar, target,special_attribute = []):
        '''
        :param df: 包含检验坏样本率的变量,和目标变量
        :param sortByVar: 需要检验坏样本率的变量
        :param target: 目标变量,0、1表示好、坏
        :param special_attribute: 不参与检验的特殊值
        :return: 坏样本率单调与否
        '''
        df2 = df.loc[~df[sortByVar].isin(special_attribute)]
        if len(set(df2[sortByVar])) <= 2:
            return True
        regroup = BinBadRate(df2, sortByVar, target)[1]
        combined = zip(regroup['total'],regroup['bad'])
        badRate = [x[1]*1.0/x[0] for x in combined]
        badRateNotMonotone = FeatureMonotone(badRate)['count_of_nonmonotone']
        if badRateNotMonotone > 0:
            return False
        else:
            return True
    
    def MergeBad0(df,col,target, direction='bad'):
        '''
         :param df: 包含检验0%或者100%坏样本率
         :param col: 分箱后的变量或者类别型变量。检验其中是否有一组或者多组没有坏样本或者没有好样本。如果是,则需要进行合并
         :param target: 目标变量,0、1表示好、坏
         :return: 合并方案,使得每个组里同时包含好坏样本
         '''
        regroup = BinBadRate(df, col, target)[1]
        if direction == 'bad':
            # 如果是合并0坏样本率的组,则跟最小的非0坏样本率的组进行合并
            regroup = regroup.sort_values(by  = 'bad_rate')
        else:
            # 如果是合并0好样本率的组,则跟最小的非0好样本率的组进行合并
            regroup = regroup.sort_values(by='bad_rate',ascending=False)
        regroup.index = range(regroup.shape[0])
        col_regroup = [[i] for i in regroup[col]]
        del_index = []
        for i in range(regroup.shape[0]-1):
            col_regroup[i+1] = col_regroup[i] + col_regroup[i+1]
            del_index.append(i)
            if direction == 'bad':
                if regroup['bad_rate'][i+1] > 0:
                    break
            else:
                if regroup['bad_rate'][i+1] < 1:
                    break
        col_regroup2 = [col_regroup[i] for i in range(len(col_regroup)) if i not in del_index]
        newGroup = {}
        for i in range(len(col_regroup2)):
            for g2 in col_regroup2[i]:
                newGroup[g2] = 'Bin '+str(i)
        return newGroup
    
    
    def Monotone_Merge(df, target, col):
        '''
        :return:将数据集df中,不满足坏样本率单调性的变量col进行合并,使得合并后的新的变量中,坏样本率单调,输出合并方案。
        例如,col=[Bin 0, Bin 1, Bin 2, Bin 3, Bin 4]是不满足坏样本率单调性的。合并后的col是:
        [Bin 0&Bin 1, Bin 2, Bin 3, Bin 4].
        合并只能在相邻的箱中进行。
        迭代地寻找最优合并方案。每一步迭代时,都尝试将所有非单调的箱进行合并,每一次尝试的合并都是跟前后箱进行合并再做比较
        '''
        def MergeMatrix(m, i,j,k):
            '''
            :param m: 需要合并行的矩阵
            :param i,j: 合并第i和j行
            :param k: 删除第k行
            :return: 合并后的矩阵
            '''
            m[i, :] = m[i, :] + m[j, :]
            m = np.delete(m, k, axis=0)
            return m
    
        def Merge_adjacent_Rows(i, bad_by_bin_current, bins_list_current, not_monotone_count_current):
            '''
            :param i: 需要将第i行与前、后的行分别进行合并,比较哪种合并方案最佳。判断准则是,合并后非单调性程度减轻,且更加均匀
            :param bad_by_bin_current:合并前的分箱矩阵,包括每一箱的样本个数、坏样本个数和坏样本率
            :param bins_list_current: 合并前的分箱方案
            :param not_monotone_count_current:合并前的非单调性元素个数
            :return:分箱后的分箱矩阵、分箱方案、非单调性元素个数和衡量均匀性的指标balance
            '''
            i_prev = i - 1
            i_next = i + 1
            bins_list = bins_list_current.copy()
            bad_by_bin = bad_by_bin_current.copy()
            not_monotone_count = not_monotone_count_current
            #合并方案a:将第i箱与前一箱进行合并
            bad_by_bin2a = MergeMatrix(bad_by_bin.copy(), i_prev, i, i)
            bad_by_bin2a[i_prev, -1] = bad_by_bin2a[i_prev, -2] / bad_by_bin2a[i_prev, -3]
            not_monotone_count2a = FeatureMonotone(bad_by_bin2a[:, -1])['count_of_nonmonotone']
            # 合并方案b:将第i行与后一行进行合并
            bad_by_bin2b = MergeMatrix(bad_by_bin.copy(), i, i_next, i_next)
            bad_by_bin2b[i, -1] = bad_by_bin2b[i, -2] / bad_by_bin2b[i, -3]
            not_monotone_count2b = FeatureMonotone(bad_by_bin2b[:, -1])['count_of_nonmonotone']
            balance = ((bad_by_bin[:, 1] / N).T * (bad_by_bin[:, 1] / N))[0, 0]
            balance_a = ((bad_by_bin2a[:, 1] / N).T * (bad_by_bin2a[:, 1] / N))[0, 0]
            balance_b = ((bad_by_bin2b[:, 1] / N).T * (bad_by_bin2b[:, 1] / N))[0, 0]
            #满足下述2种情况时返回方案a:(1)方案a能减轻非单调性而方案b不能;(2)方案a和b都能减轻非单调性,但是方案a的样本均匀性优于方案b
            if not_monotone_count2a < not_monotone_count_current and not_monotone_count2b >= not_monotone_count_current or \
                                            not_monotone_count2a < not_monotone_count_current and not_monotone_count2b < not_monotone_count_current and balance_a < balance_b:
                bins_list[i_prev] = bins_list[i_prev] + bins_list[i]
                bins_list.remove(bins_list[i])
                bad_by_bin = bad_by_bin2a
                not_monotone_count = not_monotone_count2a
                balance = balance_a
            # 同样地,满足下述2种情况时返回方案b:(1)方案b能减轻非单调性而方案a不能;(2)方案a和b都能减轻非单调性,但是方案b的样本均匀性优于方案a
            elif not_monotone_count2a >= not_monotone_count_current and not_monotone_count2b < not_monotone_count_current or \
                                            not_monotone_count2a < not_monotone_count_current and not_monotone_count2b < not_monotone_count_current and balance_a > balance_b:
                bins_list[i] = bins_list[i] + bins_list[i_next]
                bins_list.remove(bins_list[i_next])
                bad_by_bin = bad_by_bin2b
                not_monotone_count = not_monotone_count2b
                balance = balance_b
            # If neither plan a nor plan b reduces the non-monotonicity, return the more balanced merge
            else:
                if balance_a < balance_b:
                    # plan a (merge with the previous bin) gives the more balanced result
                    bins_list[i_prev] = bins_list[i_prev] + bins_list[i]
                    bins_list.remove(bins_list[i])
                    bad_by_bin = bad_by_bin2a
                    not_monotone_count = not_monotone_count2a
                    balance = balance_a
                else:
                    bins_list[i] = bins_list[i] + bins_list[i_next]
                    bins_list.remove(bins_list[i_next])
                    bad_by_bin = bad_by_bin2b
                    not_monotone_count = not_monotone_count2b
                    balance = balance_b
            return {'bins_list': bins_list, 'bad_by_bin': bad_by_bin, 'not_monotone_count': not_monotone_count,
                    'balance': balance}
    
    
        N = df.shape[0]
        [badrate_bin, bad_by_bin] = BinBadRate(df, col, target)
        bins = list(bad_by_bin[col])
        bins_list = [[i] for i in bins]
        badRate = sorted(badrate_bin.items(), key=lambda x: x[0])
        badRate = [i[1] for i in badRate]
        not_monotone_count, not_monotone_position = FeatureMonotone(badRate)['count_of_nonmonotone'], FeatureMonotone(badRate)['index_of_nonmonotone']
        #迭代地寻找最优合并方案,终止条件是:当前的坏样本率已经单调,或者当前只有2箱
        while (not_monotone_count > 0 and len(bins_list)>2):
            #当非单调的箱的个数超过1个时,每一次迭代中都尝试每一个箱的最优合并方案
            all_possible_merging = []
            for i in not_monotone_position:
                merge_adjacent_rows = Merge_adjacent_Rows(i, np.mat(bad_by_bin), bins_list, not_monotone_count)
                all_possible_merging.append(merge_adjacent_rows)
            balance_list = [i['balance'] for i in all_possible_merging]
            not_monotone_count_new = [i['not_monotone_count'] for i in all_possible_merging]
            #如果所有的合并方案都不能减轻当前的非单调性,就选择更加均匀的合并方案
            if min(not_monotone_count_new) >= not_monotone_count:
                best_merging_position = balance_list.index(min(balance_list))
            #如果有多个合并方案都能减轻当前的非单调性,也选择更加均匀的合并方案
            else:
                better_merging_index = [i for i in range(len(not_monotone_count_new)) if not_monotone_count_new[i] < not_monotone_count]
                better_balance = [balance_list[i] for i in better_merging_index]
                best_balance_index = better_balance.index(min(better_balance))
                best_merging_position = better_merging_index[best_balance_index]
            bins_list = all_possible_merging[best_merging_position]['bins_list']
            bad_by_bin = all_possible_merging[best_merging_position]['bad_by_bin']
            not_monotone_count = all_possible_merging[best_merging_position]['not_monotone_count']
            not_monotone_position = FeatureMonotone(bad_by_bin[:, 3])['index_of_nonmonotone']
        return bins_list
    
    
    
    
    
    def Prob2Score(prob, basePoint, PDO):
        # convert a predicted bad probability into a positive integer score (higher score = lower risk)
        y = np.log(prob/(1-prob))
        return (basePoint+PDO/np.log(2)*(-y)).map(lambda x: int(x))
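    # --- usage sketch (illustrative values, not from the original article) ---
    # Prob2Score maps a bad probability to a score: prob = 0.02 gives roughly
    # 600 + (20 / ln 2) * ln(0.98 / 0.02) ≈ 712 with a base point of 600 and PDO = 20.
    example_probs = pd.Series([0.02, 0.10, 0.35])
    example_scores = Prob2Score(example_probs, basePoint=600, PDO=20)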
    
    
    
    ### Compute the KS statistic
    def KS(df, score, target):
        '''
        :param df: dataset containing the target variable and the prediction
        :param score: name of the score (or probability) column
        :param target: name of the target column
        :return: the KS value
        '''
        total = df.groupby([score])[target].count()
        bad = df.groupby([score])[target].sum()
        all = pd.DataFrame({'total':total, 'bad':bad})
        all['good'] = all['total'] - all['bad']
        all[score] = all.index
        all = all.sort_values(by=score,ascending=False)
        all.index = range(len(all))
        all['badCumRate'] = all['bad'].cumsum() / all['bad'].sum()
        all['goodCumRate'] = all['good'].cumsum() / all['good'].sum()
        KS = all.apply(lambda x: x.badCumRate - x.goodCumRate, axis=1)
        return max(KS)
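    # --- usage sketch on toy data (illustrative only; the column names are made up) ---
    toy = pd.DataFrame({'prob': [0.9, 0.8, 0.3, 0.2, 0.1], 'label': [1, 1, 0, 0, 1]})
    print(KS(toy, 'prob', 'label'))   # maximum gap between the cumulative bad and good rates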
    
    
    

    The previous article, 基于Python的信用评分卡模型分析(一), covered the data preprocessing, exploratory data analysis, variable binning and variable selection for the credit scorecard model. This part continues with the model implementation and analysis, the credit-scoring method and the automatic scoring system.

    六、Model Analysis

    The Weight of Evidence (WOE) transformation turns a logistic regression model into the standard scorecard format. WOE is not introduced to improve model quality as such; it mainly helps exclude variables that should not enter the model, either because they add no predictive value or because the errors attached to their coefficients are too large. A standard credit scorecard can also be built without the WOE transformation, in which case the logistic regression has to handle a much larger number of independent variables. That increases the complexity of the modelling procedure, but the resulting scorecard is essentially the same.
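    For reference, the WoE of a bin compares the bin's share of all bad samples with its share of all good samples; a minimal sketch with illustrative counts (not taken from the article's data):

    # WOE_i = ln( (bad_i / bad_total) / (good_i / good_total) )
    import math
    bad_i, bad_total, good_i, good_total = 60, 200, 140, 800       # illustrative bin counts
    woe_i = math.log((bad_i / bad_total) / (good_i / good_total))  # ≈ 0.54, i.e. this bin is riskier than average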

    Before fitting the model, the selected variables need to be converted to their WoE values, which also makes the later credit-scoring step straightforward.

    6.1 WOE Transformation

    We already have the binning cut-points and WoE values for every variable, so each raw value only needs to be replaced by the WoE of its bin. The implementation is as follows:

    # function that replaces raw values with the WoE of their bin
    def replace_woe(series, cut, woe):
        woe_list = []
        i = 0
        while i < len(series):
            value = series[i]
            j = len(cut) - 2
            m = len(cut) - 2
            while j >= 0:
                if value >= cut[j]:
                    j = -1
                else:
                    j -= 1
                    m -= 1
            woe_list.append(woe[m])
            i += 1
        return woe_list

    Each variable is replaced by its WoE values, and the result is saved to WoeData.csv:

    # replace raw values with WoE

    data['RevolvingUtilizationOfUnsecuredLines'] = Series(replace_woe(data['RevolvingUtilizationOfUnsecuredLines'], cutx1, woex1))

    data['age'] = Series(replace_woe(data['age'], cutx2, woex2))
    data['NumberOfTime30-59DaysPastDueNotWorse'] = Series(replace_woe(data['NumberOfTime30-59DaysPastDueNotWorse'], cutx3, woex3))
    data['DebtRatio'] = Series(replace_woe(data['DebtRatio'], cutx4, woex4))
    data['MonthlyIncome'] = Series(replace_woe(data['MonthlyIncome'], cutx5, woex5))
    data['NumberOfOpenCreditLinesAndLoans'] = Series(replace_woe(data['NumberOfOpenCreditLinesAndLoans'], cutx6, woex6))
    data['NumberOfTimes90DaysLate'] = Series(replace_woe(data['NumberOfTimes90DaysLate'], cutx7, woex7))
    data['NumberRealEstateLoansOrLines'] = Series(replace_woe(data['NumberRealEstateLoansOrLines'], cutx8, woex8))
    data['NumberOfTime60-89DaysPastDueNotWorse'] = Series(replace_woe(data['NumberOfTime60-89DaysPastDueNotWorse'], cutx9, woex9))
    data['NumberOfDependents'] = Series(replace_woe(data['NumberOfDependents'], cutx10, woex10))
    data.to_csv('WoeData.csv', index=False)

    6.2 Building the Logistic Regression Model

    We call the statsmodels package directly to fit the logistic regression:

    # import the data
    data = pd.read_csv('WoeData.csv')

    # dependent variable
    Y = data['SeriousDlqin2yrs']

    # independent variables; drop the ones whose effect on the target is not significant
    X = data.drop(['SeriousDlqin2yrs', 'DebtRatio', 'MonthlyIncome', 'NumberOfOpenCreditLinesAndLoans', 'NumberRealEstateLoansOrLines', 'NumberOfDependents'], axis=1)
    X1 = sm.add_constant(X)
    logit = sm.Logit(Y, X1)
    result = logit.fit()
    print(result.summary())

    Output:

    Figure 6-1: Logistic regression model summary (image not reproduced)

    As Figure 6-1 shows, every variable in the logistic regression passes the significance test, so the model meets the requirements.

    6.3 Model Validation

    The modelling work is now essentially done, and we need to check how well the model predicts. We use the test data held out at the start of the modelling process and evaluate the fit with the ROC curve and the AUC.

    In Python, sklearn.metrics makes it easy to compare classifiers and computes the ROC curve and AUC automatically.

    Implementation:

    # dependent variable
    Y_test = test['SeriousDlqin2yrs']

    # independent variables; drop the same insignificant variables so the columns match the fitted model
    X_test = test.drop(['SeriousDlqin2yrs', 'DebtRatio', 'MonthlyIncome', 'NumberOfOpenCreditLinesAndLoans', 'NumberRealEstateLoansOrLines', 'NumberOfDependents'], axis=1)
    X3 = sm.add_constant(X_test)
    resu = result.predict(X3)                 # predict on the test set
    fpr, tpr, threshold = roc_curve(Y_test, resu)
    rocauc = auc(fpr, tpr)                    # compute the AUC
    plt.plot(fpr, tpr, 'b', label='AUC = %0.2f' % rocauc)   # plot the ROC curve
    plt.legend(loc='lower right')
    plt.plot([0, 1], [0, 1], 'r--')
    plt.xlim([0, 1])
    plt.ylim([0, 1])
    plt.ylabel('真正率')
    plt.xlabel('假正率')
    plt.show()

    Output:

    Figure 6-2: ROC curve (image not reproduced)

    As the figure shows, the AUC is 0.85, which indicates that the model predicts reasonably well and discriminates with good accuracy.

    七、Credit Scoring

    The modelling work is essentially complete, and the ROC curve has confirmed the model's predictive power. The next step is to convert the logistic regression model into the standard scorecard format.

    7.1 Scoring Scale

    (Score-scaling formulas quoted from the reference paper; images not reproduced)

    From the reference material above we obtain:

    log(odds) = log(p_good / p_bad)

    Score = offset + factor * log(odds)

    Before building the standard scorecard we need to choose a few scorecard parameters: the base score, the PDO (points to double the odds) and the base good:bad odds. Here we take a base score of 600, a PDO of 20 (the good:bad odds double for every extra 20 points) and base odds of 20.

    # base score 600, PDO 20 (odds double every 20 points), base good:bad odds 20
    p = 20 / math.log(2)
    q = 600 - 20 * math.log(20) / math.log(2)
    baseScore = round(q + p * coe[0], 0)
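    As a quick check of these scaling constants (a worked example, not shown in the original article): factor = PDO / ln(2) = 20 / 0.693 ≈ 28.85 and offset = 600 - 28.85 * ln(20) ≈ 513.6, so an applicant with good:bad odds of 20:1 scores about 600, and every additional 20 points doubles those odds.

    # illustrative check of the constants defined above
    factor = 20 / math.log(2)               # ≈ 28.85
    offset = 600 - factor * math.log(20)    # ≈ 513.56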

    Total score for an individual = base score + the score of each component.

    7.2 Component Scores

    Next we compute the score contributed by each variable. The scoring function for each component:

    # convert the WoE values of a variable into score points
    def get_score(coe, woe, factor):
        scores = []
        for w in woe:
            score = round(coe * w * factor, 0)
            scores.append(score)
        return scores

    Compute the scores for each variable:

    # component scores of each variable

    x1 = get_score(coe[1], woex1, p)

    x2 = get_score(coe[2], woex2, p)

    x3 = get_score(coe[3], woex3, p)

    x7 = get_score(coe[4], woex7, p)
    x9 = get_score(coe[5], woex9, p)

    The resulting component scorecards are shown in Figure 7-1:

    Figure 7-1: Scoring scale of each variable (image not reproduced)

    八、Automatic Scoring System

    The score for each variable is computed as follows:

    # compute the score for a variable, given its cut-points and component scores
    def compute_score(series, cut, score):
        score_list = []
        i = 0
        while i < len(series):
            value = series[i]
            j = len(cut) - 2
            m = len(cut) - 2
            while j >= 0:
                if value >= cut[j]:
                    j = -1
                else:
                    j -= 1
                    m -= 1
            score_list.append(score[m])
            i += 1
        return score_list

    Now compute the scores for the test set:

    test1 = pd.read_csv('TestData.csv')

    test1['BaseScore']=Series(np.zeros(len(test1)))+baseScore

    test1['x1'] = Series(compute_score(test1['RevolvingUtilizationOfUnsecuredLines'], cutx1, x1))

    test1['x2'] = Series(compute_score(test1['age'], cutx2, x2))
    test1['x3'] = Series(compute_score(test1['NumberOfTime30-59DaysPastDueNotWorse'], cutx3, x3))
    test1['x7'] = Series(compute_score(test1['NumberOfTimes90DaysLate'], cutx7, x7))
    test1['x9'] = Series(compute_score(test1['NumberOfTime60-89DaysPastDueNotWorse'], cutx9, x9))
    test1['Score'] = test1['x1'] + test1['x2'] + test1['x3'] + test1['x7'] + test1['x9'] + baseScore
    test1.to_csv('ScoreData.csv', index=False)
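    To get a quick feel for the batch results, the distribution of the total scores can be summarized (a small sketch reusing the columns computed above):

    print(test1['Score'].describe())   # min / quartiles / max of the total scores
    test1['Score'].hist(bins=30)       # rough shape of the score distribution
    plt.show()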

    Part of the batch-scoring results:

    Figure 8-1: Partial batch-scoring results (image not reproduced)

    九、Summary and Outlook

    By mining and analysing the Give Me Some Credit data from Kaggle and following the principles behind scorecard construction, this article built a simple credit-scoring system covering data preprocessing, variable selection, modelling and score creation.

    An AI / machine-learning based scorecard system can be made more powerful by dropping stale data (for example, data older than two years), re-building the model automatically, evaluating it and continuously refining the feature variables.



    Python financial risk-control scorecard modelling and data analysis course (videos recorded by the author): http://dwz.date/b9vv

    一、Preface

    I previously came across an article on developing and implementing a standard credit scorecard, which walked through the standard scorecard modelling workflow in R. It was very good, so I wondered whether the same development workflow could be reproduced in Python. After some tinkering I managed to implement it in Python with similar code and packages. Because of differences between the Python and R functions and in the sampling, the results here differ somewhat from that article, which is expected and perfectly normal. The rest of this article walks through the modelling workflow and its code implementation.


    ##### packages used in the code #####
    import numpy as np
    import pandas as pd
    from sklearn.utils import shuffle
    from sklearn.feature_selection import RFE, f_regression
    import scipy.stats as stats
    import matplotlib.pyplot as plt
    from sklearn.linear_model import LogisticRegression
    import math

    二、Preparing the Dataset

    The data are the German credit data from the UC Irvine machine learning repository, a dataset widely used for research on credit risk rating models. It originally ships as GermanCredit in the R package "klaR"; I loaded it in R, exported it to CSV, and finally read the CSV into Python as the modelling dataset.


    ############## R #################

    library(klaR)

    data(GermanCredit, package="klaR")

    write.csv(GermanCredit,"/filePath/GermanCredit.csv")

    The dataset contains 1,000 samples, each with 21 variables (attributes): one default-status variable, "credit_risk", plus 20 explanatory variables comprising 7 quantitative and 13 qualitative indicators.
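    A quick sanity check after loading the CSV (a minimal sketch; the file path is the placeholder used throughout this article):

    df_raw = pd.read_csv('/filePath/GermanCredit.csv')
    print(df_raw.shape)                           # expect (1000, 22): 21 attributes plus the exported row index
    print(df_raw['credit_risk'].value_counts())   # counts of good vs. bad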


    >>> df_raw = pd.read_csv('/filePath/GermanCredit.csv')
    >>> df_raw.dtypes
    Unnamed: 0                  int64
    status                     object
    duration                    int64
    credit_history             object
    purpose                    object
    amount                      int64
    savings                    object
    employment_duration        object
    installment_rate            int64
    personal_status_sex        object
    other_debtors              object
    present_residence           int64
    property                   object
    age                         int64
    other_installment_plans    object
    housing                    object
    number_credits              int64
    job                        object
    people_liable               int64
    telephone                  object
    foreign_worker             object
    credit_risk                object

    Next, the dataset is split into a training set and a test set at a 7:3 ratio, and the default samples are labelled "1" while the regular samples are labelled "0".


    # extract the training and test samples
    def split_data(data, ratio=0.7, seed=None):
        if seed:
            shuffle_data = shuffle(data, random_state=seed)
        else:
            shuffle_data = shuffle(data, random_state=np.random.randint(10000))
        train = shuffle_data.iloc[:int(ratio * len(shuffle_data)), ]
        test = shuffle_data.iloc[int(ratio * len(shuffle_data)):, ]
        return train, test

    # setting a seed guarantees the same split on the next run
    df_train, df_test = split_data(df_raw, ratio=0.7, seed=666)

    # label default samples "1" and regular samples "0"
    credit_risk = [0 if x == 'good' else 1 for x in df_train['credit_risk']]
    #credit_risk = np.where(df_train['credit_risk'] == 'good', 0, 1)
    data = df_train
    data['credit_risk'] = credit_risk

    三、Screening the Quantitative and Qualitative Indicators

    In Python the indicators can be separated into quantitative and qualitative ones by dtype: int64 columns are quantitative and object columns are qualitative. The quantitative indicators are screened with univariate tests via f_regression from sklearn, and the variables entering the model are chosen according to their F statistics and p-values.


    # get the quantitative indicators
    quant_index = np.where(data.dtypes == 'int64')
    quant_vars = np.array(data.columns)[quant_index]
    quant_vars = np.delete(quant_vars, 0)


    df_feature = pd.DataFrame(data, columns=['duration', 'amount', 'installment_rate', 'present_residence', 'age', 'number_credits', 'people_liable'])
    f_regression(df_feature, credit_risk)
    # keep the variables whose p-value from the univariate F test is <= 0.1
    quant_model_vars = ["duration", "amount", "age", "installment_rate"]
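    The p-value filter above is applied by hand; a small sketch of how it could be done programmatically with the objects already defined (the 0.1 threshold is the one used in this article):

    F_values, p_values = f_regression(df_feature, credit_risk)
    selected = [v for v, pv in zip(df_feature.columns, p_values) if pv <= 0.1]
    print(selected)   # should reproduce quant_model_vars up to ordering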

    The qualitative indicators are screened by computing their Information Value (IV) and keeping those whose IV exceeds a chosen threshold; the WOE and IV calculations are implemented by hand here. This article keeps indicators with IV greater than 0.1, which is a fairly high bar; in general an IV above roughly 0.02 already marks a reasonably good variable.


    def woe(bad, good):
        return np.log((bad / bad.sum()) / (good / good.sum()))

    # factor_vars is assumed to hold the names of the qualitative (object-dtype) columns, built analogously to quant_vars
    all_iv = np.empty(len(factor_vars))
    woe_dict = dict()   # kept for later use
    i = 0
    for var in factor_vars:
        data_group = data.groupby(var)['credit_risk'].agg([np.sum, len])
        bad = data_group['sum']
        good = data_group['len'] - bad
        woe_dict[var] = woe(bad, good)
        iv = ((bad / bad.sum() - good / good.sum()) * woe(bad, good)).sum()
        all_iv[i] = iv
        i = i + 1
    high_index = np.where(all_iv > 0.1)
    qual_model_vars = factor_vars[high_index]
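    To inspect the IV values alongside their variable names (a small sketch reusing all_iv and factor_vars from above):

    iv_table = pd.Series(all_iv, index=factor_vars).sort_values(ascending=False)
    print(iv_table)   # the variables with IV > 0.1 are the ones kept in qual_model_vars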

    四、Binning the Continuous Variables and Reducing the Dimensionality of the Discrete Ones

    Next the continuous variables are binned. R has the smbinning package for automatic binning but Python does not, so another approach is needed. I found a Python implementation of monotonic binning online and adapted it, adding sorting of the bins and the WoE calculation, and also supporting WoE calculation for manually specified bins. The code is as follows:


    def binning(Y, X, n=None):
        # fill missings with the median
        X2 = X.fillna(np.median(X))
        if n is None:
            r = 0
            n = 10
            while np.abs(r) < 1:
                #d1 = pd.DataFrame({"X": X2, "Y": Y, "Bucket": pd.qcut(X2, n)})
                d1 = pd.DataFrame(
                    {"X": X2, "Y": Y, "Bucket": pd.qcut(X2.rank(method='first'), n)})
                d2 = d1.groupby('Bucket', as_index=True)
                r, p = stats.spearmanr(d2.mean().X, d2.mean().Y)
                n = n - 1
        else:
            d1 = pd.DataFrame({"X": X2, "Y": Y, "Bucket": pd.qcut(X2.rank(method='first'), n)})
            d2 = d1.groupby('Bucket', as_index=True)
        d3 = pd.DataFrame()
        d3['min'] = d2.min().X
        d3['max'] = d2.max().X
        d3['bad'] = d2.sum().Y
        d3['total'] = d2.count().Y
        d3['bad_rate'] = d2.mean().Y
        d3['woe'] = woe(d3['bad'], d3['total'] - d3['bad'])
        return d3


    # duration
    binning(data['credit_risk'], data['duration'])
    duration_Cutpoint = list()
    duration_WoE = list()
    for x in data['duration']:
        if x <= 12:
            duration_Cutpoint.append('<= 12')
            duration_WoE.append(-0.488031)
        if x > 12 and x <= 24:
            duration_Cutpoint.append('<= 24')
            duration_WoE.append(-0.109072)
        if x > 24:
            duration_Cutpoint.append('> 24')
            duration_WoE.append(0.502560)


    # amount: manual binning into 2 buckets
    binning(data['credit_risk'], data['amount'], 2)
    amount_Cutpoint = list()
    amount_WoE = list()
    for x in data['amount']:
        if x <= 2315:
            amount_Cutpoint.append('<= 2315')
            amount_WoE.append(-0.089829)
        if x > 2315:
            amount_Cutpoint.append('> 2315')
            amount_WoE.append(0.086733)


    # age
    binning(data['credit_risk'], data['age'])
    age_Cutpoint = list()
    age_WoE = list()
    for x in data['age']:
        if x <= 28:
            age_Cutpoint.append('<= 28')
            age_WoE.append(0.279209)
        if x > 28 and x <= 38:
            age_Cutpoint.append('<= 38')
            age_WoE.append(-0.066791)
        if x > 38:
            age_Cutpoint.append('> 38')
            age_WoE.append(-0.241013)


    # installment_rate
    binning(data['credit_risk'], data['installment_rate'])
    installment_rate_Cutpoint = list()
    installment_rate_WoE = list()
    for x in data['installment_rate']:
        if x <= 2:
            installment_rate_Cutpoint.append('<= 2')
            installment_rate_WoE.append(-0.136411)
        if x > 2 and x < 4:
            installment_rate_Cutpoint.append('< 4')
            installment_rate_WoE.append(-0.130511)
        if x >= 4:
            installment_rate_Cutpoint.append('>= 4')
            installment_rate_WoE.append(0.248710)

    Because the discrete variables have differing numbers of levels, their dimensionality is reduced to avoid the "curse of dimensionality". In rating-model development the usual approach is to merge levels with similar behaviour, and this article follows the merging scheme used in the reference article.


    # dimensionality reduction and WoE for the qualitative indicators
    discrete_data = data[qual_model_vars]
    discrete_data['credit_risk'] = data['credit_risk']
    # reduce the dimensionality of the purpose indicator
    pd.value_counts(data['purpose'])
    # merge car (new) and car (used)
    discrete_data['purpose'] = discrete_data['purpose'].replace('car (new)', 'car(new/used)')
    discrete_data['purpose'] = discrete_data['purpose'].replace('car (used)', 'car(new/used)')
    # merge radio/television and furniture/equipment
    discrete_data['purpose'] = discrete_data['purpose'].replace('radio/television', 'radio/television/furniture/equipment')
    discrete_data['purpose'] = discrete_data['purpose'].replace('furniture/equipment', 'radio/television/furniture/equipment')
    # merge others, repairs and business
    discrete_data['purpose'] = discrete_data['purpose'].replace('others', 'others/repairs/business')
    discrete_data['purpose'] = discrete_data['purpose'].replace('repairs', 'others/repairs/business')
    discrete_data['purpose'] = discrete_data['purpose'].replace('business', 'others/repairs/business')
    # merge retraining and education
    discrete_data['purpose'] = discrete_data['purpose'].replace('retraining', 'retraining/education')
    discrete_data['purpose'] = discrete_data['purpose'].replace('education', 'retraining/education')
    data_group = discrete_data.groupby('purpose')['credit_risk'].agg([np.sum, len])
    bad = data_group['sum']
    good = data_group['len'] - bad
    woe_dict['purpose'] = woe(bad, good)

    The groupings and WoE values of all the discrete variables are then stored as follows:


    ## store the WoE of every discrete variable
    # purpose
    purpose_WoE = list()
    for x in discrete_data['purpose']:
        for i in woe_dict['purpose'].index:
            if x == i:
                purpose_WoE.append(woe_dict['purpose'][i])
    # status
    status_WoE = list()
    for x in discrete_data['status']:
        for i in woe_dict['status'].index:
            if x == i:
                status_WoE.append(woe_dict['status'][i])
    # credit_history
    credit_history_WoE = list()
    for x in discrete_data['credit_history']:
        for i in woe_dict['credit_history'].index:
            if x == i:
                credit_history_WoE.append(woe_dict['credit_history'][i])
    # savings
    savings_WoE = list()
    for x in discrete_data['savings']:
        for i in woe_dict['savings'].index:
            if x == i:
                savings_WoE.append(woe_dict['savings'][i])
    # employment_duration
    employment_duration_WoE = list()
    for x in discrete_data['employment_duration']:
        for i in woe_dict['employment_duration'].index:
            if x == i:
                employment_duration_WoE.append(woe_dict['employment_duration'][i])
    # property
    property_WoE = list()
    for x in discrete_data['property']:
        for i in woe_dict['property'].index:
            if x == i:
                property_WoE.append(woe_dict['property'][i])
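    The repeated lookup loops above can be written more compactly with pandas map; a sketch that should give the same lists, assuming each entry of woe_dict is a WoE Series indexed by category level:

    qual_woe = {var: discrete_data[var].map(woe_dict[var]).tolist()
                for var in ['purpose', 'status', 'credit_history', 'savings', 'employment_duration', 'property']}
    # e.g. qual_woe['purpose'] should match purpose_WoE element by element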

    Reposted from https://blog.csdn.net/kxiaozhuk/article/details/84612632

    This completes the dataset preparation, indicator screening, binning and dimensionality reduction; the next step is to build the logistic regression model.
