  • FaaS

    2020-11-28 21:46:16

    serverless: supposedly, operations staff can now be let go, since no ops work is needed; business developers just write business logic without caring about how the servers are running.

    FaaS: code runs as functions, stateless, turning input into output. To try FaaS you can experiment on Alibaba Cloud or Tencent Cloud; it is currently free. I enabled a free Tencent Cloud Function:

    https://service-m6bddl5w-1304367797.gz.apigw.tencentcs.com/release/hellolishu

    I wrote one simple function and could publish it as a service right away, accessible to everyone. That really is serverless.
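    A cloud function like the one described can be tiny. Below is a minimal sketch, assuming Tencent Cloud Functions' Python runtime, where the entry point is conventionally `main_handler(event, context)`; the greeting logic and the `queryString` field handling are illustrative:

    ```python
    # Minimal sketch of a Tencent Cloud Functions (SCF) Python handler.
    # The main_handler name and (event, context) signature follow SCF's
    # Python runtime convention; the greeting logic is hypothetical.
    import json

    def main_handler(event, context):
        # event carries the API Gateway request; queryString may be absent.
        name = (event.get("queryString") or {}).get("name", "lishu")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": "hello " + name}),
        }
    ```

    Deployed behind a free API Gateway trigger, a GET to a URL like the one above runs this function on demand, with no server to manage.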

  • Note: any statistics you gather about the openfaas/faas repository will be invalid, the faas repo is not representative of the project's activity. Governance OpenFaaS ® is an independent open-...
  • <ol><li>source ./startNomad.sh </li><li>nomad run ./nomad_job_files/faas.hcl</li><li>nomad run ./nomad_job_files/monitoring.hcl</li><li>faas-cli new gofunction -lang go </li><li>update gofunction.yml ...
  • ark get faas-cli yields 404

    2020-11-27 03:52:53
    <div><p>When I try to get faas-cli, I get a 404 error. <h2>Expected Behaviour <p>A 200 response, with faas-cli having been successfully gotten. <h2>Current Behaviour <pre><code> ark get faas-cli ...
  • <div><p>faas_function_suffix is needed when functionNamespace is not empty. Otherwise queue-worker could not call function successfully. <h2>Description <h2>Motivation and Context <ul><li>[x] I ...
  • An Introduction to FaaS

    Viewed 10,000+ times · 2018-05-21 09:42:42

     

    An Introduction to FaaS

    The core of cloud computing is turning capabilities into services, and that requires services that are complete and flexible. As cloud computing has kept evolving, the early Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) have been followed by Database as a Service (DBaaS) and Containers as a Service (CaaS). An even finer-grained kind of service is FaaS, short for Functions as a Service, which can be understood broadly as features-as-a-service or, more literally, functions-as-a-service. With FaaS you only need to care about your business logic, not about server resources, which is why FaaS is closely tied to Serverless, where developers need not think about servers at all. In short, FaaS offers a finer-grained, more abstract service capability.

            

    The Relationship Between FaaS and PaaS

    To understand FaaS, you have to understand how FaaS relates to PaaS (for the history of PaaS, see the author's other post: http://dockone.io/article/635).

    In early PaaS implementations such as GAE and SAE, sandbox packaging of the kind Docker containers later provided did not yet exist, so a PaaS typically offered various language stacks, e.g. Java and PHP; developers deployed by uploading code, and the PaaS handled the code build and the service's lifecycle. Going further, PaaS began to let developers customize the development and build environment: the Buildpack mechanism in Heroku and CloudFoundry, for instance, defines how code is built and run. A buildpack implements three steps:

     

    • detect: check whether the current code is supported; for example, the Tomcat buildpack decides it can run an app when it finds a WEB-INF directory.
    • compile: build the code, e.g. compile Java code into a jar.
    • release: start the application, e.g. run Tomcat's startup.sh.

     

    Note: see https://docs.cloudfoundry.org/buildpacks/ for a detailed description of buildpacks.
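    As a rough illustration of the three-step lifecycle (a toy Python sketch, not CloudFoundry's actual buildpack API, with a Tomcat-style detect rule):

    ```python
    # Illustrative sketch of the buildpack lifecycle; detect/compile/release
    # here are toy stand-ins, not CloudFoundry's real buildpack interface.
    import os

    def detect(app_dir):
        # A Tomcat-style buildpack claims the app if it finds WEB-INF.
        return os.path.isdir(os.path.join(app_dir, "WEB-INF"))

    def compile(app_dir):
        # A real buildpack would build the code here, e.g. package a war.
        return os.path.join(app_dir, "app.war")

    def release(artifact):
        # A real buildpack emits the start command, e.g. Tomcat's startup.sh.
        return "catalina.sh run {}".format(artifact)

    def build(app_dir):
        if not detect(app_dir):
            raise ValueError("no buildpack supports this app")
        return release(compile(app_dir))
    ```

    The platform runs detect against each registered buildpack and hands the app to the first one that claims it.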

            

    In practice, buildpacks still struggled with code dependencies, whereas Docker's consistent container environment plus the Dockerfile delivered powerful sandbox packaging. PaaS platforms therefore adopted Docker containers for code builds. Kubernetes, for example, can focus on container orchestration and service lifecycle management instead of providing code-build capability in its internal flows and components the way CloudFoundry does; that is why Kubernetes is also called a CaaS: it only manages containers as services, leaving code builds to Docker or a DevOps platform above it.

            

    So FaaS was initially fused with PaaS; now PaaS focuses more on service orchestration and resource management, while FaaS has become independent, gradually forming an event-driven architecture centered on code functions. With FaaS, a function can serve as an online service or a remote computation, executed via an API, via email, via IoT, or via a queue.

    Cloud providers have all launched FaaS offerings, such as AWS Lambda, Google Cloud Functions, and Azure Functions, followed by a growing number of open-source FaaS frameworks. Next we introduce AWS Lambda to make the business shape and use cases of FaaS clearer.

     

    AWS Lambda

    Lambda is a compute service AWS launched in 2014. Lambda is a FaaS, so the core concept of the service is the Lambda function (function for short). Around a function you define its context, including the execution environment (language, memory, timeout, IAM role) and any other function this function should trigger.

    After a function is uploaded, the developer can attach it to a specified AWS resource (an S3 bucket, a DynamoDB table, a Kinesis stream), and Lambda establishes the association between that resource and your function. When something changes on the resource side, Lambda allocates resources to execute your function. The allocation and release of the resources that run the function are handled automatically by Lambda; the developer never has to intervene.

    Lambda is an event-driven architecture: an application consists of functions (the carriers of business logic), data (the business-related inputs and outputs), and the interactions between the two, i.e. events (common events include create, update, and delete). Lambda can therefore integrate seamlessly with other services, for example:

    Scenario 1: Image processing with Amazon S3 + Lambda

    Suppose you have a photo-sharing application. Users upload photos, which the application stores in an Amazon S3 bucket. The application then processes the uploaded images: compression, watermarking, and so on. In this scenario, Lambda works together with S3 (Amazon S3 is one of the AWS event sources Lambda supports): S3 can publish object-created events and invoke your Lambda function, and the function code can read the photo object from the S3 bucket, process the image, and save the result back to S3.
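    A minimal sketch of such a handler, following the standard S3 event-notification layout; `add_watermark` and `derived_key` are hypothetical placeholders (real image processing would use a library such as Pillow):

    ```python
    # Sketch of an S3-triggered Lambda: read the uploaded photo, process it,
    # write the result back under a derived key.
    def derived_key(key, suffix="-processed"):
        # "photos/cat.jpg" -> "photos/cat-processed.jpg"
        base, dot, ext = key.rpartition(".")
        return base + suffix + dot + ext if base else key + suffix

    def add_watermark(image_bytes):
        # Placeholder: a real implementation would use Pillow or similar.
        return image_bytes

    def lambda_handler(event, context):
        import boto3  # provided by the AWS Lambda runtime
        s3 = boto3.client("s3")
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            s3.put_object(Bucket=bucket, Key=derived_key(key),
                          Body=add_watermark(body))
    ```

    Writing to a different key (or a different bucket) matters: writing the processed image back under the same key would re-fire the object-created event and loop the function.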

     

    Scenario 2: API calls with Amazon API Gateway + Lambda

    With Amazon API Gateway you can quickly and easily create custom APIs backed by code running in AWS Lambda, and then have the API call EC2 or Lambda. Compared with EC2, a Lambda function is ready on demand: it neither runs nor bills when not invoked, and AWS automatically scales the resources Lambda needs to the size of the request, none of which the developer has to care about.

          

            

     

     

    About the Author

    Wu Longhui works on PaaS research and practice in cloud computing, is the author of 《Kubernetes实战》 (Kubernetes in Action), and is active in the CloudFoundry, Docker, and Kubernetes open-source communities, contributing code and writing technical documentation.
    Email: wlh6666@qq.com

  • <p>This is necessary to run Bolt apps safely on FaaS (Function-as-a-Service). Without this, Events API handlers may unexpectedly be terminated even while they're still running. <h3>What type of ...
  • <p>The faas-netes container uses the wrong imagePullPolicy it should use openFaasImagePullPolicy and not faasnetesd.imagePullPolicy <h2>Motivation and Context <ul><li>[x] I have raised an issue to ...
  • Using --password is insecure, consider using: cat ~/faas_pass.txt | faas-cli login -u user --password-stdin Calling the OpenFaaS server to validate the credentials... WARNING! Communication is not ...
  • Create a Helm Chart for FaaS

    2020-11-29 10:20:42
    <p>Once FaaS on kubernetes is somewhat stable we can then submit our chart to the official repo ... https://github.com/kubernetes/charts</p> <p>I am happy to work on this unless someone else wants to ...
  • <ol><li>faas-cli new --lang python3 hello-python-faas</li><li>faas-cli build -f hello-python-faas.yml</li><li>faas-cli deploy -f hello-python-faas.yml --gateway $(minikube service gateway-external --...
  • <ul><li>FaaS-CLI version ( Full output from: <code>faas-cli version</code> ):</li></ul> <p>version: 0.8.1 <ul><li>Docker version <code>docker version</code> (e.g. Docker 17.0.05 ):</li></ul> <p>Docker...
  • A Simple FaaS Walkthrough

    2018-12-17 08:00:00

     

    FaaS, or serverless, is a cloud computing model whose key characteristic is that users do not rent any virtual machines at all: starting a VM, executing code, returning results, and stopping the VM is an entire process handled by the cloud provider. This can be more cost-effective than other cloud computing models. It also lets developers focus on business logic, since parts of the application are handled by the cloud provider.

     

    To kick off the execution of the code, something must trigger it. The trigger can be a specific event, or a request to an API management system that exposes the code as an API endpoint.

     

    One of the most popular serverless services is AWS Lambda, which integrates with AWS API Gateway to build a serverless REST API.

     

    REST API Configuration


     

    The API configuration is handled by AWS API Gateway: creating routes, handling input and output formats, authentication, and so on, while the actual code is managed by Lambda.

     

    When you open the API Gateway dashboard, create a new API for your site. Then click Actions > Create Resource to create a new URL path in the API. Each resource can support one or more methods (GET, POST, PUT/PATCH, DELETE), which are added via Actions > Create Method.

    For example, we can create a resource named "post" with the path "/posts" and two methods:

    • GET: fetch all posts

    • POST: create a new post

    At this point the screen should look like this:

    [screenshot]

     

     

    We also need to handle displaying a single post, updating a post, and deleting a post. These operations use a different path in the REST API, which means creating a new resource. Since this resource's path is "/posts/{postId}", it is created as a sub-resource. To do this, first click the "posts" resource, then go to Actions > Create Resource. This resource takes a parameter in the path (the post ID), which is expressed by wrapping the parameter name in braces: "/posts/{postId}". After creating the resource, add GET, PUT, and DELETE methods to it.

     

    The API now looks like this:

     

     

    [screenshot]

     

    Each method will execute a corresponding AWS Lambda function. We first create those functions, then map them to the appropriate API methods.

     

    Creating the Lambda Functions


     

    Open AWS Lambda and click "Create a Lambda function". The next screen lets you choose a programming language (Node.js or Python) and one of the predefined templates. Choose microservice-http-endpoint, then pick the API name on the following page. You can also choose a blank function and write it without any pre-written code.

     

    Finally you land on the page where the code goes. You can write the function directly on the page, or upload it as a zip archive (required if it includes custom libraries). Since we used a predefined template, the function is generated automatically and looks like this:

    from __future__ import print_function

    import boto3
    import json

    print('Loading function')


    def respond(err, res=None):
        return {
            'statusCode': '400' if err else '200',
            'body': err.message if err else json.dumps(res),
            'headers': {
                'Content-Type': 'application/json',
            },
        }


    def lambda_handler(event, context):
        '''Demonstrates a simple HTTP endpoint using API Gateway. You have full
        access to the request and response payload, including headers and
        status code.

        To scan a DynamoDB table, make a GET request with the TableName as a
        query string parameter. To put, update, or delete an item, make a POST,
        PUT, or DELETE request respectively, passing in the payload to the
        DynamoDB API as a JSON body.
        '''
        # print("Received event: " + json.dumps(event, indent=2))

        operations = {
            'DELETE': lambda dynamo, x: dynamo.delete_item(**x),
            'GET': lambda dynamo, x: dynamo.scan(**x),
            'POST': lambda dynamo, x: dynamo.put_item(**x),
            'PUT': lambda dynamo, x: dynamo.update_item(**x),
        }

        operation = event['httpMethod']
        if operation in operations:
            payload = event['queryStringParameters'] if operation == 'GET' else json.loads(event['body'])
            dynamo = boto3.resource('dynamodb').Table(payload['TableName'])
            return respond(None, operations[operation](dynamo, payload))
        else:
            return respond(ValueError('Unsupported method "{}"'.format(operation)))

     

    Although in most cases you won't need much of this code (many people will use a relational database rather than the NoSQL DynamoDB), it is a good example of how to access HTTP request parameters and how to shape the response.

     

    Another thing to note when creating a Lambda function is the handler field. It tells Lambda which function to execute and which file that function lives in. For example, if the file main.py contains a function named "myfunction", the handler value is "main.myfunction". Once the functions are created, they can be mapped to the corresponding API endpoints.
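    To illustrate (with hypothetical file and function names): the handler string "main.myfunction" names a module and an attribute, and the runtime loads the attribute and calls it with (event, context). A sketch of that lookup:

    ```python
    # main.py -- with the handler set to "main.myfunction", Lambda imports
    # the module "main" and calls its attribute "myfunction".
    import json

    def myfunction(event, context):
        return {"statusCode": "200", "body": json.dumps({"ok": True})}

    # Roughly what the runtime does with a "module.function" handler string
    # (the real runtime uses importlib to load the module first):
    def resolve_handler(handler, namespace):
        module, _, func = handler.partition(".")
        return namespace[func]
    ```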

     



     

     

    To make the API invoke a Lambda function, click an API method and go to Integration Request. On that page, set the integration type to Lambda Function and enter your AWS region and the name of the desired function. Do this for every API method.

     

    The API can be tested before deploying. Each API method has a Test button that executes it and shows the output.

     

     

    [screenshot]

    Once everything is ready, go to Actions > Deploy API to deploy your REST API. The first time, you need to create a new stage (it might be called prod or production, for example), which acts like a deployment environment. You can have multiple stages, each with its own base URL and configuration. The stages can be found under the API's Stages section on the left of the screen. Click a stage's name to get the API's public URL, along with other configuration options such as caching and throttling.

     

    This has shown a basic example: a serverless REST API built with AWS API Gateway and Lambda. It demonstrates how easily a REST API can be created without having to develop common API-management features such as authentication, routing, caching, and rate limiting.

     

    Going further, the rapid adoption of IoT is supported by companies transforming their business through technology innovation; manufacturers offer both low-cost and high-end devices and IoT platforms that make device integration and management possible. IoT should move toward flexible, reliable, and cost-effective platforms that require minimal investment in infrastructure, software, knowledge, and staff.

    A Serverless Architecture for IoT

    How do you build an IoT solution from scratch with zero infrastructure and maintenance cost and only minimal operating cost? To realize this idea, we can use AWS cloud capabilities: for example, create a device simulator that reports telemetry data in real time, and access that information in real time via an API.

    The AWS IoT platform is a powerful IoT framework. It supports the MQTT protocol, one of the most widely used communication protocols. The services chosen for persisting and processing the data were also selected based on their pricing and maintenance cost.

    The AWS components used are:

    • AWS IoT: data collection and device management,

    • DynamoDB: a document store to persist the data readings,

    • AWS Lambda: serverless data processing,

    • S3: block storage used for static website hosting,

    • API Gateway: REST access to the data

    [architecture diagram]

    The overall data flow works as follows:

    1. Devices send small amounts of data to AWS IoT (every 5 seconds),

    2. AWS IoT stores the data into a DynamoDB table,

    3. Lambda functions are triggered every minute and every hour to analyze the data and store the results back into DynamoDB,

    4. API Gateway exposes the DynamoDB data through a REST API,

    5. A static HTML website hosted on S3 uses the REST API to display real-time data charts and analytics.

    The second point may look a little odd at first, since DynamoDB is arguably not the best choice for storing raw time-series data. However, it is used here for demonstration purposes. You could consider Firehose as a delivery stream from IoT to S3/Redshift with an EMR cluster for data processing, but for this simple exercise it is just a pragmatic shortcut.
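    The per-minute analysis in step 3 could be as simple as averaging the latest readings per device. A pure-Python sketch of such an aggregation (the reading format and field names are assumptions, and the DynamoDB reads/writes a scheduled Lambda would do are left out):

    ```python
    # Sketch of the periodic aggregation a scheduled Lambda might run:
    # collapse raw telemetry readings into one average per device.
    from collections import defaultdict

    def aggregate(readings):
        """readings: [{"device_id": str, "temperature": float}, ...]
        Returns {device_id: mean temperature} for the window."""
        sums = defaultdict(float)
        counts = defaultdict(int)
        for r in readings:
            sums[r["device_id"]] += r["temperature"]
            counts[r["device_id"]] += 1
        return {d: sums[d] / counts[d] for d in sums}
    ```

    Storing only these per-window aggregates back into DynamoDB is what keeps the table small even though the raw readings arrive every few seconds.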

    The architecture has the following key properties:

    • Free if no devices report any data; on top of that, the AWS free tier provides a small amount of resources at no cost

    • Highly scalable, by the nature of each chosen component, straight out of AWS

    • Only basic knowledge is needed to get started: you just define rules and write the logic in a very popular language: JavaScript, Python, or Java

    Cost Analysis of the Serverless IoT Architecture

    Suppose the backend only needs to handle a few requests per minute, meaning your CPU is idle most of the time, and suppose you don't want to pay for idle time. That is what the serverless architecture proposed here offers.

    Assume 10,000 devices each report a small piece of data every 15 minutes. With an average of 730 hours per month, that comes to about 29.2 million requests per month. AWS IoT costs $5 per million requests, and DynamoDB costs $0.0065 for the roughly 50 writes per second needed.

    With AWS IoT you would pay around $146 per month, plus $14 for the minimal storage capacity running in DynamoDB, about $160 in total, which works out to $0.02 per device per month, or $0.000005 per request. This does not account for Lambda, storage, and API Gateway usage, but those are in practice only a small fraction of these numbers and can be omitted.
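    The request count and the AWS IoT figure can be checked with a little arithmetic (the $5-per-million price, the 15-minute interval, and the $14 DynamoDB figure are as stated above):

    ```python
    # Back-of-the-envelope check of the monthly cost figures above.
    devices = 10_000
    reports_per_hour = 4          # one report every 15 minutes
    hours_per_month = 730

    requests = devices * reports_per_hour * hours_per_month  # 29,200,000
    iot_cost = requests / 1_000_000 * 5                      # $5 per million -> $146
    total = iot_cost + 14                                    # + minimal DynamoDB storage
    per_device = total / devices                             # ~$0.02 per device per month
    per_request = total / requests                           # ~$0.000005 per request
    ```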

    That is impressive: an IoT solution connecting thousands of devices for less than $200 per month. But now imagine an enterprise whose devices report critical data every second (rather than every 15 minutes), and there are tens of thousands of them. How much would you still be willing to pay for FaaS?

    If 10,000 devices each send one message per second, the monthly bill exceeds $13,600. With 100,000 devices, the monthly cost per device rises to $13.61, which is quite expensive.

     

    Pros and Cons of the Serverless IoT Approach

    All these numbers mean that optimizing the request rate translates immediately, and almost linearly, into a lower monthly bill. That leads to a second important conclusion that must be considered: total cost of ownership. There is a virtual threshold beyond which a careless approach becomes very expensive and may no longer be effective.

    For example, the implementation cost of a traditional architecture may depend not so much on the number of devices or requests per second as on the additional operating expenses; using open-source solutions can also bring the cost down.

    Serverless architecture undoubtedly has many advantages:

    • It turns capital expenditure into operating expenditure and usually lowers operating costs;

    • You no longer have to think about internal system-administration processes;

    • It cuts development and deployment costs and time frames (faster time to market);

    • It is scalable and fault-tolerant

    The first factor to consider is the project's needs. If cloud lock-in doesn't worry you, and you are a startup that needs to validate an idea quickly or has a short time to market, or your solution doesn't need to transfer data from devices to the cloud frequently, you can keep the per-device cost relatively low.

    On the other hand, if you are building a cloud-agnostic, highly customizable solution that operates on real-time data, consider a custom or open-source IoT solution.


     

     

    「This article was compiled from:

    http://www.devx.com/enterprise/creating-a-serverless-api-using-api-gateway-and-lambda.html

    https://www.dataart.com/downloads/dataart_white_paper_art_of_low_cost_iot_solution.pdf」

  • Installing FaaS

    2019-10-08 20:35:10
    [root@localhost ~]#
    [root@localhost ~]# new OS:centos-7
    [root@localhost ~]#
    [root@localhost ~]# vim /etc/hosts
    # cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    
    10.239.85.153       fission-master
    10.239.85.167       fission-node1
    10.239.85.107       fission-node2
    [root@localhost ~]#
    [root@localhost ~]# reboot
    [root@fission-master ~]#
    [root@fission-master ~]# vim ~/.bashrc
    # .bashrc
    
    # User specific aliases and functions
    
    alias rm='rm -i'
    alias cp='cp -i'
    alias mv='mv -i'
    
    # Source global definitions
    if [ -f /etc/bashrc ]; then
            . /etc/bashrc
    fi
    
    export http_proxy="http://child-prc.intel.com:913"
    export https_proxy="http://child-prc.intel.com:913"
    export HTTP_PROXY="http://child-prc.intel.com:913"
    export HTTPS_PROXY="http://child-prc.intel.com:913"
    [root@fission-master ~]#
    [root@fission-master ~]# vim /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    sslverify=0
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    [root@fission-master ~]#
    [root@fission-master ~]# sudo setenforce 0
    [root@fission-master ~]# yum install socat-1.7.3.2 kubernetes-cni-0.6.0 kubelet-1.11.3 kubeadm-1.12.1 kubectl-1.12.1
    [root@fission-master ~]#
    [root@fission-master ~]# systemctl enable --now kubelet
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /etc/systemd/system/kubelet.service.
    [root@fission-master ~]#
    [root@fission-master ~]# vim /etc/yum.repos.d/docker-ce.repo
    [dockerrepo]
    name=Docker Repository
    baseurl=https://yum.dockerproject.org/repo/main/centos/7
    enabled=1
    gpgcheck=1
    gpgkey=https://yum.dockerproject.org/gpg
    [root@fission-master ~]#
    [root@fission-master ~]# yum install -y yum-utils-1.1.31 device-mapper-persistent-data-0.7.3  lvm2-2.02.180
    ================================================================================================================================================================================================================================
     Package                                                            Arch                                        Version                                                      Repository                                    Size
    ================================================================================================================================================================================================================================
    Installing:
     yum-utils                                                          noarch                                      1.1.31-50.el7                                                base                                         121 k
    Updating:
     device-mapper-persistent-data                                      x86_64                                      0.7.3-3.el7                                                  base                                         405 k
     lvm2                                                               x86_64                                      7:2.02.180-10.el7_6.3                                        updates                                      1.3 M
    Installing for dependencies:
     libxml2-python                                                     x86_64                                      2.9.1-6.el7_2.3                                              base                                         247 k
     python-chardet                                                     noarch                                      2.2.1-1.el7_1                                                base                                         227 k
     python-kitchen                                                     noarch                                      1.1.1-5.el7                                                  base                                         267 k
    Updating for dependencies:
     device-mapper                                                      x86_64                                      7:1.02.149-10.el7_6.3                                        updates                                      292 k
     device-mapper-event                                                x86_64                                      7:1.02.149-10.el7_6.3                                        updates                                      188 k
     device-mapper-event-libs                                           x86_64                                      7:1.02.149-10.el7_6.3                                        updates                                      188 k
     device-mapper-libs                                                 x86_64                                      7:1.02.149-10.el7_6.3                                        updates                                      320 k
     lvm2-libs                                                          x86_64                                      7:2.02.180-10.el7_6.3                                        updates                                      1.1 M
    
    Transaction Summary
    ================================================================================================================================================================================================================================
    [root@fission-master ~]#
    [root@fission-master ~]# yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
    Loaded plugins: fastestmirror
    adding repo from: https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
    grabbing file https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
    repo saved to /etc/yum.repos.d/docker-ce.repo
    [root@fission-master ~]#
    [root@fission-master ~]# yum install docker-ce-18.06.1.ce-3.el7
    ================================================================================================================================================================================================================================
     Package                                                      Arch                                        Version                                                   Repository                                             Size
    ================================================================================================================================================================================================================================
    Installing:
     docker-ce                                                    x86_64                                      18.06.1.ce-3.el7                                          docker-ce-stable                                       41 M
    Installing for dependencies:
     audit-libs-python                                            x86_64                                      2.8.4-4.el7                                               base                                                   76 k
     checkpolicy                                                  x86_64                                      2.5-8.el7                                                 base                                                  295 k
     container-selinux                                            noarch                                      2:2.74-1.el7                                              extras                                                 38 k
     libcgroup                                                    x86_64                                      0.41-20.el7                                               base                                                   66 k
     libsemanage-python                                           x86_64                                      2.5-14.el7                                                base                                                  113 k
     libtool-ltdl                                                 x86_64                                      2.4.2-22.el7_3                                            base                                                   49 k
     policycoreutils-python                                       x86_64                                      2.5-29.el7_6.1                                            updates                                               456 k
     python-IPy                                                   noarch                                      0.75-6.el7                                                base                                                   32 k
     setools-libs                                                 x86_64                                      3.3.8-4.el7                                               base                                                  620 k
    Updating for dependencies:
     audit                                                        x86_64                                      2.8.4-4.el7                                               base                                                  250 k
     audit-libs                                                   x86_64                                      2.8.4-4.el7                                               base                                                  100 k
     libselinux                                                   x86_64                                      2.5-14.1.el7                                              base                                                  162 k
     libselinux-python                                            x86_64                                      2.5-14.1.el7                                              base                                                  235 k
     libselinux-utils                                             x86_64                                      2.5-14.1.el7                                              base                                                  151 k
     libsemanage                                                  x86_64                                      2.5-14.el7                                                base                                                  151 k
     libsepol                                                     x86_64                                      2.5-10.el7                                                base                                                  297 k
     policycoreutils                                              x86_64                                      2.5-29.el7_6.1                                            updates                                               916 k
     selinux-policy                                               noarch                                      3.13.1-229.el7_6.9                                        updates                                               483 k
     selinux-policy-targeted                                      noarch                                      3.13.1-229.el7_6.9                                        updates                                               6.9 M
    
    Transaction Summary
    ================================================================================================================================================================================================================================
    # If `yum install docker-ce-18.06.1.ce-3.el7` fails, do the following wget, rpm, and yum install instead:
    [root@fission-master ~]#
    [root@fission-master ~]# yum install wget
    [root@fission-master ~]# wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-18.06.1.ce-3.el7.x86_64.rpm
    [root@fission-master ~]# yum install audit-libs-python-2.8.4-4.el7 checkpolicy-2.5-8.el7 container-selinux-2.74-1.el7 libcgroup-0.41-20.el7 libsemanage-python-2.5-14.el7 libtool-ltdl-2.4.2-22.el7_3 policycoreutils-python-2.5-29.el7_6.1 python-IPy-0.75-6.el7 setools-libs-3.3.8-4.el7 libseccomp-2.3.1-3.el7
    ================================================================================================================================================================================
     Package                                            Arch                              Version                                          Repository                          Size
    ================================================================================================================================================================================
    Installing:
     audit-libs-python                                  x86_64                            2.8.4-4.el7                                      base                                76 k
     checkpolicy                                        x86_64                            2.5-8.el7                                        base                               295 k
     container-selinux                                  noarch                            2:2.74-1.el7                                     extras                              38 k
     libcgroup                                          x86_64                            0.41-20.el7                                      base                                66 k
     libsemanage-python                                 x86_64                            2.5-14.el7                                       base                               113 k
     libtool-ltdl                                       x86_64                            2.4.2-22.el7_3                                   base                                49 k
     policycoreutils-python                             x86_64                            2.5-29.el7_6.1                                   updates                            456 k
     python-IPy                                         noarch                            0.75-6.el7                                       base                                32 k
     setools-libs                                       x86_64                            3.3.8-4.el7                                      base                               620 k
    Updating for dependencies:
     audit                                              x86_64                            2.8.4-4.el7                                      base                               250 k
     audit-libs                                         x86_64                            2.8.4-4.el7                                      base                               100 k
     libselinux                                         x86_64                            2.5-14.1.el7                                     base                               162 k
     libselinux-python                                  x86_64                            2.5-14.1.el7                                     base                               235 k
     libselinux-utils                                   x86_64                            2.5-14.1.el7                                     base                               151 k
     libsemanage                                        x86_64                            2.5-14.el7                                       base                               151 k
     libsepol                                           x86_64                            2.5-10.el7                                       base                               297 k
     policycoreutils                                    x86_64                            2.5-29.el7_6.1                                   updates                            916 k
     selinux-policy                                     noarch                            3.13.1-229.el7_6.9                               updates                            483 k
     selinux-policy-targeted                            noarch                            3.13.1-229.el7_6.9                               updates                            6.9 M
    
    Transaction Summary
    ================================================================================================================================================================================
    [root@fission-master ~]#
    [root@fission-master ~]# rpm -ivh docker-ce-18.06.1.ce-3.el7.x86_64.rpm
    [root@fission-master ~]#
    [root@fission-master ~]# docker version
    Client:
     Version:           18.06.1-ce
     API version:       1.38
     Go version:        go1.10.3
     Git commit:        e68fc7a
     Built:             Tue Aug 21 17:23:03 2018
     OS/Arch:           linux/amd64
     Experimental:      false
    Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
    [root@fission-master ~]#
    [root@fission-master ~]# systemctl enable docker
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    [root@fission-master ~]# systemctl start docker
    [root@fission-master ~]#
    [root@fission-master ~]# docker version
    Client:
     Version:           18.06.1-ce
     API version:       1.38
     Go version:        go1.10.3
     Git commit:        e68fc7a
     Built:             Tue Aug 21 17:23:03 2018
     OS/Arch:           linux/amd64
     Experimental:      false
    
    Server:
     Engine:
      Version:          18.06.1-ce
      API version:      1.38 (minimum version 1.12)
      Go version:       go1.10.3
      Git commit:       e68fc7a
      Built:            Tue Aug 21 17:25:29 2018
      OS/Arch:          linux/amd64
      Experimental:     false
    [root@fission-master ~]#
    [root@fission-master ~]#
    [root@fission-master ~]# kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:43:08Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
    [root@fission-master ~]#
    [root@fission-master ~]# kubectl version
    Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
    The connection to the server localhost:8080 was refused - did you specify the right host or port?
    [root@fission-master ~]#
    [root@fission-master ~]# kubelet --version
    Kubernetes v1.11.3
    [root@fission-master ~]#
    [root@fission-master ~]#
    [root@fission-master ~]# docker info |grep -i cgroup
    Cgroup Driver: cgroupfs
    [root@fission-master ~]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # Note: This dropin only works with kubeadm and kubelet v1.11+
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
    Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
    # This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
    # the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
    EnvironmentFile=-/etc/sysconfig/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
    [root@fission-master ~]#
    [root@fission-master ~]#
    [root@fission-master ~]# cd /etc/systemd/system/
    [root@fission-master system]#
    [root@fission-master system]# mkdir docker.service.d
    [root@fission-master system]# vim /etc/systemd/system/docker.service.d/http-proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://child-prc.intel.com:913" "HTTPS_PROXY=http://child-prc.intel.com:913" "NO_PROXY=localhost,127.0.0.1,10.239.85.0/24,*.intel.com,loadbalancer,gateway1,gateway2,gateway3"
    [root@fission-master system]#
    [root@fission-master system]# cd
    [root@fission-master ~]#
    [root@fission-master ~]# systemctl daemon-reload
    [root@fission-master ~]# systemctl restart kubelet
    [root@fission-master ~]# systemctl restart docker
    [root@fission-master ~]#
    [root@fission-master ~]# systemctl status docker
    ● docker.service - Docker Application Container Engine
       Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
      Drop-In: /etc/systemd/system/docker.service.d
               └─http-proxy.conf
       Active: active (running) since Tue 2019-03-12 17:40:40 EDT; 6s ago
    [root@fission-master ~]#
    [root@fission-master ~]# swapoff -a
    [root@fission-master ~]# systemctl stop firewalld
    [root@fission-master ~]# systemctl disable firewalld
    Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
    Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
    [root@fission-master ~]#
    [root@fission-master ~]# docker images
    REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
    [root@fission-master ~]#
    [root@fission-master ~]# docker pull gcr.io/google_containers/kube-apiserver-amd64:v1.9.3
    v1.9.3: Pulling from google_containers/kube-apiserver-amd64
    57310166fe88: Pull complete
    1cfb1cc5f88e: Pull complete
    Digest: sha256:a5382344aa373a90bc87d3baa4eda5402507e8df5b8bfbbad392c4fff715f043
    Status: Downloaded newer image for gcr.io/google_containers/kube-apiserver-amd64:v1.9.3
    [root@fission-master ~]# docker pull gcr.io/google_containers/kube-controller-manager-amd64:v1.9.3
    v1.9.3: Pulling from google_containers/kube-controller-manager-amd64
    57310166fe88: Already exists
    a1a3a0835d92: Pull complete
    Digest: sha256:3ac295ae3e78af5c9f88164ae95097c2d7af03caddf067cb35599769d0b7251e
    Status: Downloaded newer image for gcr.io/google_containers/kube-controller-manager-amd64:v1.9.3
    [root@fission-master ~]# docker pull gcr.io/google_containers/kube-scheduler-amd64:v1.9.3
    v1.9.3: Pulling from google_containers/kube-scheduler-amd64
    57310166fe88: Already exists
    4122df38b6ef: Pull complete
    Digest: sha256:2c17e637c8e4f9202300bd5fc26bc98a7099f49559ca0a8921cf692ffd4a1675
    Status: Downloaded newer image for gcr.io/google_containers/kube-scheduler-amd64:v1.9.3
    [root@fission-master ~]#
    [root@fission-master ~]# docker images
    REPOSITORY                                               TAG                 IMAGE ID            CREATED             SIZE
    gcr.io/google_containers/kube-apiserver-amd64            v1.9.3              360d55f91cbf        13 months ago       210MB
    gcr.io/google_containers/kube-controller-manager-amd64   v1.9.3              83dbda6ee810        13 months ago       138MB
    gcr.io/google_containers/kube-scheduler-amd64            v1.9.3              d3534b539b76        13 months ago       62.7MB
    [root@fission-master ~]#
    [root@fission-master ~]# scp root@10.239.85.167:/root/fission-env/kubeadm.yaml ./
    [root@fission-master ~]#
    [root@fission-master ~]# export
    declare -x HISTCONTROL="ignoredups"
    declare -x HISTSIZE="1000"
    declare -x HOME="/root"
    declare -x HOSTNAME="fission-master"
    declare -x HTTPS_PROXY="http://child-prc.intel.com:913"
    declare -x HTTP_PROXY="http://child-prc.intel.com:913"
    declare -x LANG="en_US.UTF-8"
    declare -x LESSOPEN="||/usr/bin/lesspipe.sh %s"
    declare -x LOGNAME="root"
    declare -x LS_COLORS="rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:"
    declare -x MAIL="/var/spool/mail/root"
    declare -x OLDPWD
    declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin"
    declare -x PWD="/root"
    declare -x SELINUX_LEVEL_REQUESTED=""
    declare -x SELINUX_ROLE_REQUESTED=""
    declare -x SELINUX_USE_CURRENT_RANGE=""
    declare -x SHELL="/bin/bash"
    declare -x SHLVL="1"
    declare -x SSH_CLIENT="10.255.30.117 50706 22"
    declare -x SSH_CONNECTION="10.255.30.117 50706 10.239.85.153 22"
    declare -x SSH_TTY="/dev/pts/0"
    declare -x TERM="xterm"
    declare -x USER="root"
    declare -x XDG_RUNTIME_DIR="/run/user/0"
    declare -x XDG_SESSION_ID="1"
    declare -x http_proxy="http://child-prc.intel.com:913"
    declare -x https_proxy="http://child-prc.intel.com:913"
    [root@fission-master ~]#
    [root@fission-master ~]#
    [root@fission-master ~]# vim kubeadm.yaml
    apiVersion: kubeadm.k8s.io/v1alpha3
    kind: InitConfiguration
    apiEndpoint:
      advertiseAddress: "10.239.85.153"
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    maxPods: 4000
    ---
    apiVersion: kubeadm.k8s.io/v1alpha3
    kind: ClusterConfiguration
    kubernetesVersion: stable-1.11
    networking:
      podSubnet: 10.244.0.0/16
    controllerManagerExtraArgs:
      node-cidr-mask-size: "20"
    [root@fission-master ~]#
    [root@fission-master ~]# sysctl net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-iptables = 1
    [root@fission-master ~]#
    [root@fission-master ~]# kubeadm init --config kubeadm.yaml
    [init] using Kubernetes version: v1.11.8
      kubeadm join 10.239.85.153:6443 --token z0tohm.ui8ukoll4qfnmuck --discovery-token-ca-cert-hash sha256:e5896f7b83f543633ff32938a78a53fbdcd3f7588b0bc5c8bc20f50cbe5bd243
    [root@fission-master ~]#
    [root@fission-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
    [root@fission-master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" | tee -a ~/.bashrc
    export KUBECONFIG=/etc/kubernetes/admin.conf
    [root@fission-master ~]#
    [root@fission-master ~]# kubectl version
    Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.8", GitCommit:"4e209c9383fa00631d124c8adcc011d617339b3c", GitTreeState:"clean", BuildDate:"2019-02-28T18:40:05Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
    [root@fission-master ~]#
    [root@fission-master ~]# kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')
    serviceaccount/weave-net created
    clusterrole.rbac.authorization.k8s.io/weave-net created
    clusterrolebinding.rbac.authorization.k8s.io/weave-net created
    role.rbac.authorization.k8s.io/weave-net created
    rolebinding.rbac.authorization.k8s.io/weave-net created
    daemonset.extensions/weave-net created
    [root@fission-master ~]#
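    The `k8s-version` query parameter in the Weave Net URL above is simply the output of `kubectl version`, base64-encoded with newlines stripped so it fits in a URL. A self-contained sketch of the same encoding (the version text below is made up, no cluster needed):

    ```shell
    # Encode a made-up `kubectl version` output the same way the Weave URL does:
    ver='Client Version: v1.12.1
    Server Version: v1.11.8'
    enc=$(printf '%s\n' "$ver" | base64 | tr -d '\n')
    echo "$enc"

    # Round-trip to confirm the encoding is lossless:
    echo "$enc" | base64 -d
    ```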
    [root@fission-master ~]# kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:43:08Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
    [root@fission-master ~]#
    [root@fission-master ~]# kubectl version
    Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.8", GitCommit:"4e209c9383fa00631d124c8adcc011d617339b3c", GitTreeState:"clean", BuildDate:"2019-02-28T18:40:05Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
    [root@fission-master ~]#
    [root@fission-master ~]# kubelet --version
    Kubernetes v1.11.3
    [root@fission-master ~]# # run the "kubeadm join ..." command printed above on each worker node
    [root@fission-master ~]#
    [root@fission-master ~]# kubectl proxy &
    [root@fission-master ~]# kubectl get nodes
    F0312 18:23:41.499993   18291 proxy.go:158] listen tcp 127.0.0.1:8001: bind: address already in use
    NAME             STATUS   ROLES    AGE   VERSION
    fission-master   Ready    master   7m    v1.11.3
    fission-node1    Ready    <none>   2m    v1.11.3
    [root@fission-master ~]# watch -d kubectl -n kube-system get pods -o=wide
    [root@fission-master ~]# helm version
    -bash: helm: command not found
    [root@fission-master ~]#
    [root@fission-master ~]# curl -LO https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 18.2M  100 18.2M    0     0   287k      0  0:01:04  0:01:04 --:--:--  570k
    [root@fission-master ~]#
    [root@fission-master ~]# tar -zxvf helm-v2.11.0-linux-amd64.tar.gz
    [root@fission-master ~]# mv linux-amd64/helm /usr/local/bin
    [root@fission-master ~]# helm version
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Error: could not find tiller
    [root@fission-master ~]#
    [root@fission-master ~]# helm init
    Creating /root/.helm
    Creating /root/.helm/repository
    Creating /root/.helm/repository/cache
    Creating /root/.helm/repository/local
    Creating /root/.helm/plugins
    Creating /root/.helm/starters
    Creating /root/.helm/cache/archive
    Creating /root/.helm/repository/repositories.yaml
    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
    Adding local repo with URL: http://127.0.0.1:8879/charts
    $HELM_HOME has been configured at /root/.helm.
    
    Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
    
    Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
    To prevent this, run `helm init` with the --tiller-tls-verify flag.
    For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
    Happy Helming!
    [root@fission-master ~]#
    [root@fission-master ~]# watch -d kubectl -n kube-system get pods -o=wide
    [root@fission-master ~]#
    [root@fission-master ~]# kubectl get svc --namespace=kube-system
    NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
    kube-dns        ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   10m
    tiller-deploy   ClusterIP   10.97.176.234   <none>        44134/TCP       16s
    [root@fission-master ~]#
    [root@fission-master ~]#
    [root@fission-master ~]# kubectl create serviceaccount --namespace kube-system tiller
    [root@fission-master ~]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
    [root@fission-master ~]# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
    [root@fission-master ~]#
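    The JSON payload in the `kubectl patch` call above is hard to read inline; pretty-printed, it shows the patch does nothing but set the pod template's service account to `tiller`:

    ```shell
    # Pretty-print the strategic-merge patch used above (stdlib json.tool only):
    patch='{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
    echo "$patch" | python3 -m json.tool
    ```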
    [root@fission-master ~]# kubectl delete svc tiller-deploy --namespace=kube-system
    [root@fission-master ~]#
    [root@fission-master ~]# kubectl get pods --all-namespaces -o=wide
    NAMESPACE     NAME                                     READY   STATUS    RESTARTS   AGE   IP              NODE             NOMINATED NODE
    kube-system   coredns-99b9bb8bd-8hfjv                  1/1     Running   0          11m   10.40.0.2       fission-node1    <none>
    kube-system   coredns-99b9bb8bd-95k5w                  1/1     Running   0          11m   10.40.0.1       fission-node1    <none>
    kube-system   etcd-fission-master                      1/1     Running   0          6m    10.239.85.153   fission-master   <none>
    kube-system   kube-apiserver-fission-master            1/1     Running   0          6m    10.239.85.153   fission-master   <none>
    kube-system   kube-controller-manager-fission-master   1/1     Running   0          6m    10.239.85.153   fission-master   <none>
    kube-system   kube-proxy-cwkr8                         1/1     Running   0          11m   10.239.85.153   fission-master   <none>
    kube-system   kube-proxy-fmkrp                         1/1     Running   0          6m    10.239.85.167   fission-node1    <none>
    kube-system   kube-scheduler-fission-master            1/1     Running   0          6m    10.239.85.153   fission-master   <none>
    kube-system   tiller-deploy-57f988f854-w5m86           1/1     Running   0          47s   10.40.0.3       fission-node1    <none>
    kube-system   weave-net-phhvt                          2/2     Running   0          10m   10.239.85.153   fission-master   <none>
    kube-system   weave-net-tl49r                          2/2     Running   0          6m    10.239.85.167   fission-node1    <none>
    [root@fission-master ~]#
    [root@fission-master ~]# kubectl expose pod tiller-deploy-57f988f854-w5m86 --external-ip=10.239.85.167 --namespace=kube-system --name tiller-deploy
    service/tiller-deploy exposed
    [root@fission-master ~]#
    [root@fission-master ~]# kubectl get svc --namespace=kube-system
    NAME            TYPE        CLUSTER-IP       EXTERNAL-IP     PORT(S)               AGE
    kube-dns        ClusterIP   10.96.0.10       <none>          53/UDP,53/TCP         12m
    tiller-deploy   ClusterIP   10.106.173.127   10.239.85.167   44134/TCP,44135/TCP   9s
    [root@fission-master ~]# export HELM_HOST=10.239.85.167:44134
    [root@fission-master ~]# helm version
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Error: cannot connect to Tiller
    [root@fission-master ~]#
    [root@fission-master ~]# unset HELM_HOST
    [root@fission-master ~]#
    [root@fission-master ~]# helm version
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    [root@fission-master ~]#
    [root@fission-master ~]# watch -d kubectl -n kube-system get pods -o=wide
    [root@fission-master ~]#
    [root@fission-master ~]# wget https://github.com/fission/fission/releases/download/0.12.0/fission-all-0.12.0.tgz
    [root@fission-master ~]#
    [root@fission-master ~]# watch -d kubectl get pods --all-namespaces -o=wide
    [root@fission-master ~]#
    [root@fission-master ~]#
    [root@fission-master ~]# export
    declare -x HISTCONTROL="ignoredups"
    declare -x HISTSIZE="1000"
    declare -x HOME="/root"
    declare -x HOSTNAME="fission-master"
    declare -x HTTPS_PROXY="http://child-prc.intel.com:913"
    declare -x HTTP_PROXY="http://child-prc.intel.com:913"
    declare -x KUBECONFIG="/etc/kubernetes/admin.conf"
    declare -x LANG="en_US.UTF-8"
    declare -x LESSOPEN="||/usr/bin/lesspipe.sh %s"
    declare -x LOGNAME="root"
    declare -x LS_COLORS="rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:"
    declare -x MAIL="/var/spool/mail/root"
    declare -x OLDPWD
    declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin"
    declare -x PWD="/root"
    declare -x SELINUX_LEVEL_REQUESTED=""
    declare -x SELINUX_ROLE_REQUESTED=""
    declare -x SELINUX_USE_CURRENT_RANGE=""
    declare -x SHELL="/bin/bash"
    declare -x SHLVL="1"
    declare -x SSH_CLIENT="10.255.30.117 51411 22"
    declare -x SSH_CONNECTION="10.255.30.117 51411 10.239.85.153 22"
    declare -x SSH_TTY="/dev/pts/1"
    declare -x TERM="xterm"
    declare -x USER="root"
    declare -x XDG_RUNTIME_DIR="/run/user/0"
    declare -x XDG_SESSION_ID="4"
    declare -x http_proxy="http://child-prc.intel.com:913"
    declare -x https_proxy="http://child-prc.intel.com:913"
    [root@fission-master ~]#
    [root@fission-master ~]#
    [root@fission-master ~]# scp root@10.239.85.167:/root/fission-env/pv-volume.yaml ./
    [root@fission-master ~]# scp root@10.239.85.167:/root/fission-env/pvc-volume.yaml ./
    [root@fission-master ~]#
    [root@fission-master ~]# kubectl create -f pv-volume.yaml
    [root@fission-master ~]#
    [root@fission-master ~]#
    [root@fission-master ~]# scp root@10.239.85.167:/root/fission-env/fission-all-0.12.0.tgz ./
    [root@fission-master ~]#
    [root@fission-master ~]# helm version
    Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
    [root@fission-master ~]#
    [root@fission-master ~]# helm install --name fission --namespace fission --set serviceType=NodePort ./fission-all-0.12.0.tgz
    [root@fission-master ~]#
    [root@fission-master ~]# curl -Lo fission https://github.com/fission/fission/releases/download/0.12.0/fission-cli-linux && chmod +x fission && sudo mv fission /usr/local/bin/
    [root@fission-master ~]#
    [root@fission-master ~]# fission
    VERSION:
       0.12.0
    [root@fission-master ~]#
    [root@fission-master ~]# fission -v
    client:
      fission/core:
        gitcommit: 7350cf7b196fb410d6510597caa1d7ce0bd4de9f
        builddate: 2018-11-01T20:12:13Z
        version: 0.12.0
    server:
      fission/core:
        gitcommit: 7350cf7b196fb410d6510597caa1d7ce0bd4de9f
        builddate: 2018-11-01T20:12:13Z
        version: 0.12.0
    [root@fission-master ~]#
    [root@fission-master ~]# fission env list
    NAME UID IMAGE POOLSIZE MINCPU MAXCPU MINMEMORY MAXMEMORY EXTNET GRACETIME
    [root@fission-master ~]#
    [root@fission-master ~]# kubectl get service router --namespace fission | grep router | awk '{print $5}' | sed 's/.*://g' | sed 's/\/.*//g'
    31704
    [root@fission-master ~]# export FISSION_ROUTER=10.239.85.153:31704
    
    Unset ENV
    unset KUBERNETES_HTTP_PROXY
    unset KUBERNETES_HTTPS_PROXY
    unset all_proxy
    unset ALL_PROXY
    unset socks_proxy
    unset no_proxy
    unset NO_PROXY
    unset HTTP_PROXY
    unset HTTPS_PROXY
    unset FTP_PROXY
    unset KUBECONFIG
    unset FISSION_ROUTER
    unset FISSION_URL
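    The block of `unset` lines above can be collapsed into a single loop over the proxy-variable family; a small self-contained sketch:

    ```shell
    # Set a throwaway proxy var, then clear the whole family in one loop:
    export HTTP_PROXY=http://child-prc.intel.com:913
    for v in http_proxy https_proxy HTTP_PROXY HTTPS_PROXY FTP_PROXY \
             no_proxy NO_PROXY all_proxy ALL_PROXY socks_proxy; do
      unset "$v"
    done
    [ -z "${HTTP_PROXY:-}" ] && echo "proxies cleared"    # prints "proxies cleared"
    ```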
    [root@fmx217 fission-env]# export
    declare -x DISPLAY="localhost:10.0"
    declare -x HISTCONTROL="ignoredups"
    declare -x HISTSIZE="1000"
    declare -x HOME="/root"
    declare -x HOSTNAME="fmx217"
    declare -x KUBECONFIG="/etc/kubernetes/admin.conf"
    declare -x LANG="en_US.UTF-8"
    declare -x LESSOPEN="||/usr/bin/lesspipe.sh %s"
    declare -x LOGNAME="root"
    declare -x LS_COLORS="rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:"
    declare -x MAIL="/var/spool/mail/root"
    declare -x OLDPWD="/root"
    declare -x PATH="/root/bin:/root/bin:/root/bin:/root/bin:/root/bin:/root/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin"
    declare -x PWD="/root/changqing/fission-env"
    declare -x SELINUX_LEVEL_REQUESTED=""
    declare -x SELINUX_ROLE_REQUESTED=""
    declare -x SELINUX_USE_CURRENT_RANGE=""
    declare -x SHELL="/bin/bash"
    declare -x SHLVL="1"
    declare -x SSH_CLIENT="10.239.205.88 52473 22"
    declare -x SSH_CONNECTION="10.239.205.88 52473 10.239.85.153 22"
    declare -x SSH_TTY="/dev/pts/0"
    declare -x TERM="xterm"
    declare -x TZ="Asia/Shanghai"
    declare -x USER="root"
    declare -x XDG_DATA_DIRS="/root/.local/share/flatpak/exports/share/:/var/lib/flatpak/exports/share/:/usr/local/share/:/usr/share/"
    declare -x XDG_RUNTIME_DIR="/run/user/0"
    declare -x XDG_SESSION_ID="73"
    declare -x ftp_proxy="http://child-prc.intel.com:913"
    declare -x http_proxy="http://child-prc.intel.com:913"
    declare -x https_proxy="http://child-prc.intel.com:913"
    
    
    
    
    Problem: unable to get URL "https://dl.k8s.io/release/stable-1.11.txt"
    [root@fmx217 fission-env]# kubeadm init --config kubeadm.yaml
    I0417 10:25:42.921706   48578 version.go:89] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.11.txt": Get https://dl.k8s.io/release/stable-1.11.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    I0417 10:25:42.921869   48578 version.go:94] falling back to the local client version: v1.12.1
    [init] using Kubernetes version: v1.12.1
    [root@fmx217 fission-env]# 
    [root@fmx217 fission-env]# kubeadm config images list
    I0417 10:26:48.559512   48969 version.go:89] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    I0417 10:26:48.559666   48969 version.go:94] falling back to the local client version: v1.12.1
    k8s.gcr.io/kube-apiserver:v1.12.1
    k8s.gcr.io/kube-controller-manager:v1.12.1
    k8s.gcr.io/kube-scheduler:v1.12.1
    k8s.gcr.io/kube-proxy:v1.12.1
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.2.24
    k8s.gcr.io/coredns:1.2.2
    [root@fmx217 fission-env]#
    [root@fmx217 fission-env]# kubeadm config print-defaults --api-objects ClusterConfiguration > kubeadm.conf
    [root@fmx217 fission-env]#
    [root@fmx217 fission-env]# kubeadm config images list --config kubeadm.conf
    k8s.gcr.io/kube-apiserver:v1.12.0
    k8s.gcr.io/kube-controller-manager:v1.12.0
    k8s.gcr.io/kube-scheduler:v1.12.0
    k8s.gcr.io/kube-proxy:v1.12.0
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.2.24
    k8s.gcr.io/coredns:1.2.2
    [root@fmx217 fission-env]#
    [root@fmx217 fission-env]# vim kubeadm.conf
    kubernetesVersion: v1.11.8
    [root@fmx217 fission-env]#
    [root@fmx217 fission-env]# kubeadm config images pull --config kubeadm.conf
    [config/images] Pulled k8s.gcr.io/kube-apiserver:v1.11.8
    [config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.11.8
    [config/images] Pulled k8s.gcr.io/kube-scheduler:v1.11.8
    [config/images] Pulled k8s.gcr.io/kube-proxy:v1.11.8
    [config/images] Pulled k8s.gcr.io/pause:3.1
    [config/images] Pulled k8s.gcr.io/etcd:3.2.18
    [config/images] Pulled k8s.gcr.io/coredns:1.2.2
    [root@fmx217 fission-env]#
    [root@fmx217 fission-env]# kubeadm init --config kubeadm.conf
    [init] using Kubernetes version: v1.11.8
      kubeadm join 10.239.85.153:6443 --token fjkcom.rzk0bx01qetra2ha --discovery-token-ca-cert-hash sha256:82f58287058bf3bda071eedec1b3dfca9ed79b5b2f6cca6e19efa7d83fa82a99
    [root@fmx217 fission-env]#
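    The `--discovery-token-ca-cert-hash` in the join command above is the SHA-256 of the cluster CA's public key. If the hash is lost, it can be recomputed from `/etc/kubernetes/pki/ca.crt` with the openssl pipeline kubeadm documents. The sketch below generates a throwaway self-signed cert as a stand-in so it runs without a cluster (the `/tmp` paths are ours):

    ```shell
    # Generate a throwaway CA cert (stand-in for /etc/kubernetes/pki/ca.crt):
    openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo-ca' -days 1 \
      -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

    # Recompute the discovery hash: SHA-256 over the DER-encoded public key
    hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | awk '{print $NF}')
    echo "sha256:$hash"
    ```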
    [root@fmx217 fission-env]# export KUBECONFIG=/etc/kubernetes/admin.conf
    [root@fmx217 fission-env]# kubectl version
    Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.8", GitCommit:"4e209c9383fa00631d124c8adcc011d617339b3c", GitTreeState:"clean", BuildDate:"2019-02-28T18:40:05Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
    
    Delete fission env
    helm ls --all
    helm delete --purge fission
    helm reset
    kubectl delete svc tiller-deploy --namespace=kube-system
    kubectl delete deployment tiller-deploy -n kube-system
    helm init
    
    Kubelet error
    # kubectl version
    Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
    The connection to the server 10.239.85.153:6443 was refused - did you specify the right host or port?
    # setenforce 0; systemctl enable --now kubelet; systemctl restart kubelet; swapoff -a; systemctl stop firewalld; systemctl disable firewalld; kubectl version
    
    
    Clear k8s
    echo y | sudo -S kubeadm reset
    unset KUBECONFIG
    unset HELM_HOST
    rm -rf $HOME/.kube
    rm -rf $HOME/.helm
    # rm -rf /usr/local/bin/kubectl
    # yum erase kubeadm kubectl kubelet -y
    # yum install kubeadm kubelet -y
    yum install kubeadm-1.12.1 kubelet-1.11.3 kubectl-1.13.4 -y
    # yum install kubeadm-1.11.8 kubectl-1.11.8 kubelet-1.11.8 -y
    kubeadm version
    kubectl version
    kubelet --version
    
    
    docker build . --tag classification:v0.2
    docker tag classification:v0.2 10.239.85.153:5000/classification:v0.2
    docker push 10.239.85.153:5000/classification:v0.2
    docker image rm 10.239.85.153:5000/classification:v0.2
    # docker pull 10.239.85.153:5000/stream-223mb:latest
    
    Dockerfile: 
    FROM centos
    
    ENV http_proxy=http://child-prc.intel.com:913
    ENV ftp_proxy=http://child-prc.intel.com:913
    ENV FTP_PROXY=http://child-prc.intel.com:913
    ENV socks_proxy=http://proxy-shz.intel.com:1080
    ENV HTTPS_PROXY=http://child-prc.intel.com:913
    ENV https_proxy=http://child-prc.intel.com:913
    ENV HTTP_PROXY=http://child-prc.intel.com:913
    
    ENV PATH=$PATH:/usr/bin:/usr/local/bin:/app
    ENV PYTHONPATH=$PYTHONPATH:/app
    
    RUN yum install -y https://centos7.iuscommunity.org/ius-release.rpm
    RUN yum update -y
    RUN yum install -y python36u python36u-libs python36u-devel python36u-pip stress-ng bc numactl time
    RUN ln -s /usr/bin/pip3.6 /usr/bin/pip3
    RUN ln -s /usr/bin/python3.6 /usr/bin/python3
    RUN yum install -y gcc libev
    RUN yum install -y libev-devel
    RUN pip3 install --upgrade pip
    RUN rm -r /root/.cache
    
    COPY . /app
    WORKDIR /app
    RUN pip3 install -r requirements.txt
    RUN pip3 install ./tensorflow-1.8.0-cp36-cp36m-manylinux1_x86_64.whl
    
    ENTRYPOINT ["python3"]
    CMD ["server.py"]
    
    
    
    # ubuntu 16/18 install docker
    sudo apt update
    sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common
    curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
    sudo apt-get update
    sudo apt-cache madison docker-ce
    sudo apt-get install -y docker-ce=18.06.3~ce~3-0~ubuntu
     

    Reposted from: https://www.cnblogs.com/qccz123456/p/11610117.html
