
    http://blog.csdn.net/pipisorry/article/details/52128222

    scikit-learn: Machine Learning in Python. The scikit-learn library implements many machine learning algorithms.

    scikit-learn is an open-source machine learning toolkit built on NumPy, SciPy, and Matplotlib. It covers classification, regression, and clustering algorithms, such as SVM, logistic regression, naive Bayes, random forests, and k-means. Both the code and the documentation are of high quality, and it is used in many Python projects. For example, NLTK provides a dedicated interface to scikit-learn classifiers, so scikit-learn algorithms and training data can be used to train classifier models from NLTK.

    scikit-learn's functionality falls into six main areas: classification, regression, clustering, dimensionality reduction, model selection, and data preprocessing. See the official website's documentation for details.

     

    Installation

    Note: install NumPy and SciPy first. [Installing Python packages on Linux and Windows, and installing libraries from requirements.txt]

    Installing scikit-learn on Linux

     

    Building scikit-learn with pip

    This is usually the fastest way to install or upgrade to the latest stable release:

    pip install -U scikit-learn

    pip install --user --install-option="--prefix=" -U scikit-learn

    Note: 1. The --user flag asks pip to install scikit-learn in the $HOME/.local folder, therefore not requiring root permission. This flag should make pip ignore any old version of scikit-learn previously installed on the system, while benefiting from system packages for NumPy and SciPy. Those dependencies can be long and complex to build correctly from source.

    2. The --install-option="--prefix=" flag is only required if Python has a distutils.cfg configuration with a predefined prefix= entry.

    [Installing scikit-learn]

    Solving a machine learning problem with scikit-learn

    A concrete machine learning problem can usually be broken into three steps: data preparation and preprocessing, model selection and training, and model validation and parameter tuning.
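    The three steps can be sketched end-to-end on a built-in dataset (the dataset and parameter choices here are illustrative, not from the original post):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# 1. data preparation and preprocessing
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 2. model selection and training
clf = LogisticRegression().fit(X_train, y_train)

# 3. model validation
score = clf.score(X_test, y_test)
print(round(score, 2))
```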

    Logistic regression example

    scikit-learn supports data in several formats, including the classic iris dataset and LibSVM-format data. For convenience, the LibSVM format is recommended; see the LibSVM website for details.

    from sklearn.datasets import load_svmlight_file — importing this function lets you load LibSVM-format data:

    t_X, t_y = load_svmlight_file("filename")

    The model itself is also imported from its own module; logistic regression lives in sklearn.linear_model:

    from sklearn.linear_model import LogisticRegression

    regressionFunc = LogisticRegression(C=10, penalty='l2', tol=0.0001)

    train_sco = regressionFunc.fit(train_X, train_y).score(train_X, train_y)

    test_sco = regressionFunc.score(test_X, test_y)

    That completes training and testing the model.

    To choose a better model you can run cross-validation experiments, or tune parameters with a grid or greedy search. Import the following modules:

    CV:

    from sklearn import cross_validation

    X_train_m, X_test_m, y_train_m, y_test_m = cross_validation.train_test_split(t_X, t_y, test_size=0.5, random_state=seed_i)

    regressionFunc_2.fit(X_train_m, y_train_m)

    sco = regressionFunc_2.score(X_test_m, y_test_m, sample_weight=None)

     

    GridSearch:

    from sklearn.grid_search import GridSearchCV

    tuned_parameters = [{'penalty': ['l1'], 'tol': [1e-3, 1e-4],
                         'C': [1, 10, 100, 1000]},
                        {'penalty': ['l2'], 'tol': [1e-3, 1e-4],
                         'C': [1, 10, 100, 1000]}]

    clf = GridSearchCV(LogisticRegression(), tuned_parameters, cv=5, scoring='precision')
    clf.fit(t_X, t_y)
    print(clf.best_estimator_)

     

    You can also use matplotlib to plot learning curves; import the relevant modules:

    from sklearn.learning_curve import learning_curve, validation_curve

    The core code is as follows; see the official scikit-learn documentation for details:

    train_sizes, train_scores, test_scores = learning_curve(
            estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)

    train_scores, test_scores = validation_curve(
            estimator, X, y, param_name, param_range,
            cv=cv, scoring=scoring, n_jobs=n_jobs)
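    The snippets above are not runnable on their own, and in current scikit-learn these helpers live in sklearn.model_selection rather than sklearn.learning_curve. A self-contained sketch on illustrative built-in data:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
# in older scikit-learn releases this lived in sklearn.learning_curve
from sklearn.model_selection import learning_curve

X, y = load_iris(return_X_y=True)
train_sizes, train_scores, test_scores = learning_curve(
    LogisticRegression(max_iter=1000), X, y, cv=5,
    train_sizes=np.linspace(0.2, 1.0, 5), shuffle=True, random_state=0)
# one row per training-set size, one column per CV fold
print(train_scores.shape, test_scores.shape)
```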

    皮皮blog

     

     

    Preprocessing

    Data Loading

    Assume the input is a feature matrix or a CSV file.
    First, the data must be loaded into memory.
    scikit-learn is implemented on top of NumPy arrays, so we use NumPy to load the CSV file.
    The data below is downloaded from the UCI Machine Learning Repository.

    import numpy as np
    from urllib.request import urlopen  # urllib.urlopen in Python 2
    # url with dataset
    url = "http://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
    # download the file
    raw_data = urlopen(url)
    # load the CSV file as a numpy matrix
    dataset = np.loadtxt(raw_data, delimiter=",")
    # separate the data (columns 0-7) from the target attribute (column 8)
    X = dataset[:,0:8]
    y = dataset[:,8]

    We will use this dataset as the running example, with X as the feature matrix and y as the target variable.

    Data Normalization

    The gradient-based methods used by most machine learning algorithms are sensitive to the scale of the data, so before running an algorithm we should normalize or standardize the features. Note that these are different operations: preprocessing.normalize rescales each sample (row) to unit norm, preprocessing.scale standardizes each feature (column) to zero mean and unit variance, and mapping features into the 0-1 range is done with MinMaxScaler. scikit-learn provides:

    from sklearn import preprocessing
    # normalize the data attributes
    normalized_X = preprocessing.normalize(X)
    # standardize the data attributes
    standardized_X = preprocessing.scale(X)
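    A small sketch of the distinction (the toy matrix is illustrative): normalize works on rows, scale standardizes columns, and MinMaxScaler is the one that actually maps each feature into [0, 1]:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, normalize, scale

X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

X_minmax = MinMaxScaler().fit_transform(X)  # each column mapped to [0, 1]
X_norm = normalize(X)                       # each row rescaled to unit L2 norm
X_std = scale(X)                            # each column: zero mean, unit variance

print(X_minmax.min(axis=0), X_minmax.max(axis=0))
```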

    [Scikit-learn: Preprocessing data]

    Feature Selection

    When solving a real problem, the ability to choose or construct good features is especially important. This is called feature selection, or feature engineering.
    Feature selection is a creative process that relies heavily on intuition and domain knowledge, but many ready-made algorithms exist for it as well.
    The tree-based algorithm below computes the informativeness of each feature:

    from sklearn.ensemble import ExtraTreesClassifier
    model = ExtraTreesClassifier()
    model.fit(X, y)
    # display the relative importance of each attribute
    print(model.feature_importances_)

    Estimators

    What is the relationship between fit, fit_transform, and transform?

    The fit method learns model parameters from a training set; for standardization this includes the mean and standard deviation.

    The transform method applies the fitted model to unseen data.

    fit_transform efficiently combines the two: the training samples are first fit (producing the mean and standard deviation), and those parameters are then used to transform, i.e. standardize, the same training data.

    Test data only needs to be standardized with the mean and standard deviation learned from training, so transform alone suffices.
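    A minimal illustration with StandardScaler (the toy numbers are purely for demonstration): fit_transform on the training data, then transform alone on the test data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0]])
X_test = np.array([[2.0], [4.0]])

scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)  # learns mean/std, then standardizes
X_test_s = scaler.transform(X_test)        # reuses the training mean/std

print(scaler.mean_)  # mean learned from the training data only
print(X_test_s[0])   # 2.0 equals the training mean, so it maps to 0
```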


     

     

    Using scikit-learn's algorithms

    scikit-learn implements most of the fundamental machine learning algorithms; let's take a quick tour.

    Logistic regression

    Most problems can be reduced to binary classification. An advantage of this algorithm is that it can output the probability of each class, not just a label.

    Here we use the Pima Indians Diabetes dataset, which contains health measurements and diabetes status for 768 patients.

    import pandas as pd
    url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data'
    col_names = ['pregnant', 'glucose', 'bp', 'skin', 'insulin', 'bmi', 'pedigree', 'age', 'label']
    pima = pd.read_csv(url, header=None, names=col_names)
    pima.head()
    
       pregnant  glucose  bp  skin  insulin   bmi  pedigree  age  label
    0         6      148  72    35        0  33.6     0.627   50      1
    1         1       85  66    29        0  26.6     0.351   31      0
    2         8      183  64     0        0  23.3     0.672   32      1
    3         1       89  66    23       94  28.1     0.167   21      0
    4         0      137  40    35      168  43.1     2.288   33      1

    In the label column above, 1 means the patient has diabetes and 0 means they do not.

    # define X and y
    feature_cols = ['pregnant', 'insulin', 'bmi', 'age']
    X = pima[feature_cols]
    y = pima.label
    
    # split X and y into training and testing sets
    from sklearn.cross_validation import train_test_split
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    
    # train a logistic regression model on the training set
    from sklearn.linear_model import LogisticRegression
    logreg = LogisticRegression()
    logreg.fit(X_train, y_train)
    
    # fitting returns the estimator itself:
    # LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1,
    #                    max_iter=100, multi_class='ovr', penalty='l2', random_state=None, solver='liblinear', tol=0.0001, verbose=0)
    # make class predictions for the testing set
    y_pred_class = logreg.predict(X_test)
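    With y_pred_class in hand, the natural next step is to score it against y_test, e.g. with metrics.accuracy_score. The sketch below uses toy stand-in labels so it runs without the download:

```python
import numpy as np
from sklearn import metrics

# toy stand-ins for y_test and y_pred_class from the split above
y_test = np.array([0, 0, 0, 1, 1, 0, 1, 0])
y_pred_class = np.array([0, 0, 1, 1, 0, 0, 1, 0])

acc = metrics.accuracy_score(y_test, y_pred_class)
# accuracy is only meaningful against the majority-class baseline
baseline = max(np.mean(y_test), 1 - np.mean(y_test))
print(acc, baseline)
```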
    

     

    from sklearn import metrics
    from sklearn.linear_model import LogisticRegression
    model = LogisticRegression()
    model.fit(X, y)
    print(model)
    # make predictions
    expected = y
    predicted = model.predict(X)
    # summarize the fit of the model
    print(metrics.classification_report(expected, predicted))
    print(metrics.confusion_matrix(expected, predicted))

    Result:

    LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
    intercept_scaling=1, penalty=l2, random_state=None, tol=0.0001)
                 precision    recall  f1-score   support

            0.0       0.79      0.89      0.84       500
            1.0       0.74      0.55      0.63       268

    avg / total       0.77      0.77      0.77       768

    [[447  53]
     [120 148]]

    Naive Bayes

    This is another well-known machine learning algorithm. It works by estimating the distribution density of the training data, and it performs well in multi-class classification.

    from sklearn import metrics
    from sklearn.naive_bayes import GaussianNB
    model = GaussianNB()
    model.fit(X, y)
    print(model)
    # make predictions
    expected = y
    predicted = model.predict(X)
    # summarize the fit of the model
    print(metrics.classification_report(expected, predicted))
    print(metrics.confusion_matrix(expected, predicted))

    Result:

    GaussianNB()
                 precision    recall  f1-score   support

            0.0       0.80      0.86      0.83       500
            1.0       0.69      0.60      0.64       268

    avg / total       0.76      0.77      0.76       768

    [[429  71]
     [108 160]]

    k-nearest neighbors

    The k-nearest neighbor algorithm is often used as part of a classification pipeline; for example, it can be used to evaluate features, so it is also useful in feature selection.

    from sklearn import metrics
    from sklearn.neighbors import KNeighborsClassifier
    # fit a k-nearest neighbor model to the data
    model = KNeighborsClassifier()
    model.fit(X, y)
    print(model)
    # make predictions
    expected = y
    predicted = model.predict(X)
    # summarize the fit of the model
    print(metrics.classification_report(expected, predicted))
    print(metrics.confusion_matrix(expected, predicted))

    Result:

    KNeighborsClassifier(algorithm=auto, leaf_size=30, metric=minkowski,
    n_neighbors=5, p=2, weights=uniform)
                 precision    recall  f1-score   support

            0.0       0.82      0.90      0.86       500
            1.0       0.77      0.63      0.69       268

    avg / total       0.80      0.80      0.80       768

    [[448  52]
     [ 98 170]]

    Decision trees

    Classification and Regression Trees (CART) are often used for classification or regression problems with categorical features, and they are well suited to multi-class problems.

    from sklearn import metrics
    from sklearn.tree import DecisionTreeClassifier
    # fit a CART model to the data
    model = DecisionTreeClassifier()
    model.fit(X, y)
    print(model)
    # make predictions
    expected = y
    predicted = model.predict(X)
    # summarize the fit of the model
    print(metrics.classification_report(expected, predicted))
    print(metrics.confusion_matrix(expected, predicted))

    Result:

    DecisionTreeClassifier(compute_importances=None, criterion=gini,
    max_depth=None, max_features=None, min_density=None,
    min_samples_leaf=1, min_samples_split=2, random_state=None,
    splitter=best)
                 precision    recall  f1-score   support

            0.0       1.00      1.00      1.00       500
            1.0       1.00      1.00      1.00       268

    avg / total       1.00      1.00      1.00       768

    [[500   0]
     [  0 268]]

    (The perfect scores are an artifact of evaluating on the training data itself: an unpruned decision tree can memorize the training set.)
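    To see how optimistic such numbers are, compare the training-set score with a cross-validated one. The sketch below uses the built-in breast_cancer data as a stand-in for the Pima set (which requires a download):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
tree = DecisionTreeClassifier(random_state=0)

train_score = tree.fit(X, y).score(X, y)       # evaluated on the data it memorized
cv_scores = cross_val_score(tree, X, y, cv=5)  # evaluated on held-out folds
print(train_score, round(cv_scores.mean(), 3))
```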

    Support vector machines

    [Scikit-learn: classification: svm]

    Beyond classification and regression algorithms, scikit-learn offers more sophisticated techniques, such as clustering, and it also implements ensemble methods such as Bagging and Boosting.


     

     

    How to tune algorithm parameters

    A harder task is building an effective procedure for choosing the right parameters; we need to search for them. scikit-learn provides functions for exactly this purpose.
    The example below selects the regularization parameter:

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.grid_search import GridSearchCV
    # prepare a range of alpha values to test
    alphas = np.array([1,0.1,0.01,0.001,0.0001,0])
    # create and fit a ridge regression model, testing each alpha
    model = Ridge()
    grid = GridSearchCV(estimator=model, param_grid=dict(alpha=alphas))
    grid.fit(X, y)
    print(grid)
    # summarize the results of the grid search
    print(grid.best_score_)
    print(grid.best_estimator_.alpha)

    Result:

    GridSearchCV(cv=None,
    estimator=Ridge(alpha=1.0, copy_X=True, fit_intercept=True, max_iter=None,
    normalize=False, solver=auto, tol=0.001),
    estimator__alpha=1.0, estimator__copy_X=True,
    estimator__fit_intercept=True, estimator__max_iter=None,
    estimator__normalize=False, estimator__solver=auto,
    estimator__tol=0.001, fit_params={}, iid=True, loss_func=None,
    n_jobs=1,
    param_grid={'alpha': array([ 1.00000e+00, 1.00000e-01, 1.00000e-02, 1.00000e-03,
    1.00000e-04, 0.00000e+00])},
    pre_dispatch=2*n_jobs, refit=True, score_func=None, scoring=None,
    verbose=0)
    0.282118955686
    1.0

    Sometimes it is effective to sample parameters at random from a given interval, evaluate the algorithm for each sample, and keep the best one.

    import numpy as np
    from scipy.stats import uniform as sp_rand
    from sklearn.linear_model import Ridge
    from sklearn.grid_search import RandomizedSearchCV
    # prepare a uniform distribution to sample for the alpha parameter
    param_grid = {'alpha': sp_rand()}
    # create and fit a ridge regression model, testing random alpha values
    model = Ridge()
    rsearch = RandomizedSearchCV(estimator=model, param_distributions=param_grid, n_iter=100)
    rsearch.fit(X, y)
    print(rsearch)
    # summarize the results of the random parameter search
    print(rsearch.best_score_)
    print(rsearch.best_estimator_.alpha)

    Result:

    RandomizedSearchCV(cv=None,
    estimator=Ridge(alpha=1.0, copy_X=True, fit_intercept=True, max_iter=None,
    normalize=False, solver=auto, tol=0.001),
    estimator__alpha=1.0, estimator__copy_X=True,
    estimator__fit_intercept=True, estimator__max_iter=None,
    estimator__normalize=False, estimator__solver=auto,
    estimator__tol=0.001, fit_params={}, iid=True, n_iter=100,
    n_jobs=1,
    param_distributions={'alpha': <scipy.stats.distributions.rv_frozen object at 0x04B86DD0>},
    pre_dispatch=2*n_jobs, random_state=None, refit=True,
    scoring=None, verbose=0)
    0.282118643885
    0.988443794636

    [Jianshu: scikit-learn's main modules and basic usage]

    from: http://blog.csdn.net/pipisorry/article/details/52128222

    ref: [Homepage: scikit-learn Machine Learning in Python]

    [Morvan (莫烦): machine learning with Scikit-learn and Python, Youku video tutorial playlist]

    [scikit-learn User Guide]

    [scikit-learn Tutorials]

    [Scikit-learn user manual, Chinese edition]

    [Fabian Pedregosa, Gael Varoquaux: Scipy Lecture Notes: 2.11. scikit-learn: machine learning in Python]

    [Translation: Scikit Learn: machine learning in Python]

     


    Preface

    Coming to Java syntax with a C background makes it easier to understand, and picking it up is relatively quick.

    Conditional statements:
    The syntax is close to C, so it is not repeated here.
    One thing worth noting (it applies to C as well):

    On line 6 below, there is a semicolon right after the if; a lone semicolon is itself a complete (empty) statement.
    If b is true, the empty statement is executed, and then yes is printed.
    If b is false, the empty statement is skipped, and then yes is printed anyway.
    So yes appears to be printed no matter what.

    public class HelloWorld {
        public static void main(String[] args) {
     
            boolean b = false;
     
            if (b);
                System.out.println("yes");
     
        }
    }
    

    else if handles multiple conditions, in the usual if / else if / else structure.

    A switch statement is an alternative way to express an if/else chain.
    switch works on byte, short, int, char, String, and enum values.

    Note: each case should end with a break;
    Note: switch on String is only supported from Java 7 onward; the compiler switches on the String's hash value, so underneath it is still an integer switch.
    Note: enum is an enumerated type (covered later); a reminder to revisit enums in C as well.
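    A minimal sketch of switch on a String (Java 7+); the class name, method name, and case labels are made up for illustration:

```java
public class SwitchDemo {
    static String kindOf(String day) {
        String kind;
        switch (day) {
            case "SAT":
            case "SUN":          // fall-through: both labels share one branch
                kind = "weekend";
                break;           // without break, execution would fall through
            default:
                kind = "weekday";
                break;
        }
        return kind;
    }

    public static void main(String[] args) {
        System.out.println(kindOf("SUN") + " " + kindOf("MON"));
    }
}
```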

    while and do-while statements
    Their usage is close to C. To reinforce:

    while:
    as long as the while condition holds, the loop body keeps executing.

    do { } while loop:

    unlike while, the body runs once before the condition is first checked.
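    A small sketch of that difference (the names are illustrative): even with a condition that is false from the start, a do-while body runs once:

```java
public class DoWhileDemo {
    static int runsWithFalseCondition() {
        int i = 10;
        int runs = 0;
        do {
            runs++;          // executes once before the condition is checked
        } while (i < 5);     // false on the first check, so the loop ends
        return runs;
    }

    public static void main(String[] args) {
        System.out.println(runsWithFalseCondition());  // 1
    }
}
```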

    for statement:
    a for loop does the same job as a while loop, just expressed differently.

    public class HelloWorld {
        public static void main(String[] args) {

            // print 0 to 4 with while
            int i = 0;
            while (i < 5) {
                System.out.println("while prints " + i);
                i++;
            }

            // print 0 to 4 with for
            for (int j = 0; j < 5; j++) {
                System.out.println("for   prints " + j);
            }
        }
    }
    

    The code examples make this clearer.

  • Scikit-learn: classification

    2016-11-04 14:38:13

    http://blog.csdn.net/pipisorry/article/details/53034340

    Support vector machine (SVM) classification

    There are several different SVM classification algorithms. SVM is a very popular machine learning method, used mainly for classification; like logistic regression, it can handle multi-class problems via a one-vs-rest scheme.

    SVC

    Implementation of a support vector machine classifier using libsvm: the kernel can be non-linear, but its SMO algorithm does not scale to large numbers of samples the way LinearSVC does. Furthermore, SVC's multi-class mode is implemented using a one-vs-one scheme, while LinearSVC uses one-vs-rest. It is possible to do one-vs-rest with SVC by using the sklearn.multiclass.OneVsRestClassifier wrapper. Finally, SVC can fit dense data without a memory copy if the input is C-contiguous; sparse data will still incur a memory copy.

    class sklearn.svm.SVC(C=1.0, kernel='rbf', degree=3, gamma='auto', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape=None, random_state=None)

    Commonly used parameters

    probability : boolean, optional (default=False)

    Whether to enable probability estimates. This must be enabled prior to calling fit, and will slow down that method.

    Commonly used attributes

    coef_ : array, shape = [n_class-1, n_features]

    Commonly used methods

    decision_function(X): Distance of the samples X to the separating hyperplane.
    fit(X, y[, sample_weight]): Fit the SVM model according to the given training data.
    get_params([deep]): Get parameters for this estimator.
    predict(X): Perform classification on samples in X.
    score(X, y[, sample_weight]): Returns the mean accuracy on the given test data and labels.
    set_params(**params): Set the parameters of this estimator.

    If probability=True was set beforehand, the probability-output method can be used:

    predict_proba

    Compute probabilities of possible outcomes for samples in X.

    The model needs to have probability information computed at training time: fit with the attribute probability set to True.

    Parameters:

    X : array-like, shape (n_samples, n_features)

    For kernel="precomputed", the expected shape of X is [n_samples_test, n_samples_train].

    Returns:

    T : array-like, shape (n_samples, n_classes)

    Returns the probability of the sample for each class in the model. The columns correspond to the classes in sorted order, as they appear in the attribute classes_.

    Notes: The probability model is created using cross validation, so the results can be slightly different than those obtained by predict. Also, it will produce meaningless results on very small datasets.

    Usage example

    >>> import numpy as np
    >>> X = np.array([[-1, -1], [-2, -1], [1, 1], [2, 1]])
    >>> y = np.array([1, 1, 2, 2])
    >>> from sklearn.svm import SVC
    >>> clf = SVC()
    >>> clf.fit(X, y) 
    SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
        decision_function_shape=None, degree=3, gamma='auto', kernel='rbf',
        max_iter=-1, probability=False, random_state=None, shrinking=True,
        tol=0.001, verbose=False)
    >>> print(clf.predict([[-0.8, -1]]))
    [1]

    [sklearn.svm.SVC]
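    Extending the usage example above with probability=True (illustrative data, with a few more points per class since the Platt-scaling step cross-validates internally):

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[-2, -1], [-1, -1], [-1, -2], [-3, -2], [-2, -2],
              [1, 1], [2, 1], [1, 2], [3, 2], [2, 2]])
y = np.array([1, 1, 1, 1, 1, 2, 2, 2, 2, 2])

# probability must be enabled before fit, and it slows fitting down
clf = SVC(probability=True, random_state=0).fit(X, y)
proba = clf.predict_proba([[-0.8, -1]])
print(clf.classes_)  # columns of proba follow this order
print(proba)         # one row per sample, one column per class
```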

    LinearSVC

    Scalable linear support vector machine for classification, implemented using liblinear rather than libsvm. It scales better to large numbers of samples; check the See Also section of the SVC documentation for more points of comparison.

    SVR

    Support Vector Machine for Regression implemented using libsvm.

    NuSVR

    Support Vector Machine for regression implemented using libsvm using a parameter to control the number of support vectors.

    LinearSVR

    Scalable Linear Support Vector Machine for regression implemented using liblinear.

    [sklearn.svm]


    from: http://blog.csdn.net/pipisorry/article/details/53034340

  • Let us learn C in Code <11> flowchart while

    2014-05-03 21:34:57

    Many days have passed since the last C tutorial on flowcharts. In this chapter we continue with flowcharts and work through several examples of flowcharts in C programs.

    1) Compare two integer numbers, find the larger one, and print it.

    We know we must declare at least two variables, compare the two integers, and print the larger one to the display. If you have forgotten the symbols used in flowcharts, or don't know how to draw one, you can review them here: 点击打开链接. OK, let us go on.

    Analyzing this problem: we use the input/output symbol to read two integers and output the larger one, the decision symbol to compare them, and the (mandatory) start and end symbols to mark where the program begins and ends. Now, draw the flowchart as below.

    I think a flowchart is really like a map in daily life: when I go somewhere unfamiliar, I check the destination on a map or with the GPS on my phone. A flowchart works the same way. Now let's code it. Oh, I almost forgot: we need an input function, which we have not introduced before. We have already learnt the output function printf(); the input function is called scanf(). It takes at least two arguments: a format string describing what data type you want to read, and the address of the variable that should hold the input; the data type must match the format. For example, scanf("%d", &integer_var); here "%d" is the format saying an integer must be entered, integer_var is an integer variable you defined, and & takes its address, so the entered number is stored into that variable.

    Now we have introduced the input and output functions, printf() and scanf(). They are declared in a standard header named "stdio.h"; in a C program we can use library functions simply by adding the header file at the top, like this: #include <stdio.h>. The standard library provides functions we use often, so we don't need to write them ourselves; we will cover header files later. For now, let's finish the comparison program.

    #include <stdio.h>

    int main(void)
    {
      int a = 0;
      int b = 0;
      printf("Please input one integer number\n");
      scanf("%d", &a);
      printf("Please input the other integer number\n");
      scanf("%d", &b);
      if (a > b)
      {
        printf("Larger one is %d\n", a);
      }
      else
      {
        printf("Larger one is %d\n", b);
      }
      return 0;
    }

    2) Print the Fibonacci sequence below 500 (the first two numbers are 0 and 1).

    In mathematical terms, the sequence Fn of Fibonacci numbers is defined by the recurrence relation Fn = Fn-1 + Fn-2, with F0 = 0 and F1 = 1. Before drawing the flowchart, let's list the sequence:

    0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, ...

    Here we have drawn the flowchart for computing the Fibonacci sequence, but it contains something important that a newbie C programmer has not met yet: the loop, shown by the red arrow in the diagram above. A loop is like our daily routine: "sleep -> wake up -> breakfast -> work -> lunch -> work -> supper -> sleep -> wake up -> breakfast...", doing something again and again. If you follow the direction of the red arrow, you will see the same steps repeat: -0- equation -1- compare -2- print -3- assignment, then again -0- -1- -2- -3- ...

    In C there are three ways to express a loop: the FIRST is while(), the SECOND is do {} while(), and the THIRD is for(;;). Each of them lets your program do (repeat) something again and again. This chapter introduces only the while loop; we will learn do-while and for later.

    The basic while loop is while (condition) { block }; if the condition is true, the block in braces is executed. So here we just keep looping while the condition holds, and the Fibonacci numbers are printed one after another. The while condition works just like the if condition we learnt before. So we simply write while (fr < 500).

    Let us code it as below:

    /* Fibonacci sequence less than 500
     *
     */

    #include <stdio.h>

    int main(void)
    {
      int fb = 0;
      int fa = 1;
      int fr = 0;
      fr = fb + fa;
      while (fr < 500)
      {
         printf("%d\n", fr);
         fb = fa;
         fa = fr;
         fr = fb + fa;
      }
      return 0;
    }


    OK, time is limited; the May 1 holiday has passed and everybody needs a rest. We finish here. Good night!
