
    I. Lab environment:

    • Three hosts running a CentOS 7 minimal install, each with 64 GB RAM, 800 GB + 1 TB × 3 disks (the 1 TB disks are reserved for the later Ceph deployment), and four gigabit NICs:

      | Purpose | NIC | IP range |
      | --- | --- | --- |
      | control network | enp2s0f0 | 192.168.118.0/24 |
      | OpenStack external | enp2s0f1 | no IP |
      | Neutron VXLAN tunnel | enp2s0f2 | 10.0.1.0/24 |
      | Ceph cluster backend | enp2s0f3 | 10.0.0.0/24 |
    • Host network plan:

      | Host | IP address | Remark |
      | --- | --- | --- |
      | controller203 | 192.168.118.203 | 1 |
      | compute204 | 192.168.118.204 | 2 |
      | compute205 | 192.168.118.205 | 3 |
      | kolla | 192.168.118.212 | |
      | virtual IP | 192.168.118.209 | |
      | virtual address pool | 192.168.118.216-220 | |

      (Network topology diagram)

    II. Controller and compute node initialization:

    • Use the following script to initialize each host (the kolla host is number 0). Run `sh initnode.sh n`, where n is the host's index.
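
      The script reads its host list from /openstack/nodes. A sketch of the assumed file format, inferred from the sed expressions in the script and using the addresses from the plan above:

      # /openstack/nodes
      kolla=192.168.118.212
      nodes=192.168.118.203,192.168.118.204,192.168.118.205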

      #!/usr/bin/bash
      
      # abort if the node list file is missing
      if ! test -f /openstack/nodes
      then
          exit 1
      fi
      
      systemctl stop firewalld && systemctl disable firewalld
      yum update -y
      yum install -y wget vim net-tools
      wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
      yum install -y docker-ce
      mkdir -pv /etc/docker
      systemctl restart docker && systemctl status docker
      
      # set hostname
      kolla=`sed '/^kolla=/!d;s/.*=//' /openstack/nodes`
      if [ $1 -eq 0 ] ; then
          hostname kolla
          echo "kolla" > /etc/hostname
          echo -e "\n$kolla\tkolla" >> /etc/hosts
      elif [ $1 -lt 3 ] ; then
          hostname controller0${1}
          echo "controller0${1}" > /etc/hostname
      else
          name=$(printf "%03d" $1)
          hostname compute${name}
          echo "compute${name}" > /etc/hostname
      fi
      
      # set hosts
      nodes=`sed '/^nodes=/!d;s/.*=//' /openstack/nodes`
      array=(${nodes//,/ })
      i=1
      for var in ${array[@]}
      do
          if [ $i -lt 4 ]; then
              echo -e "\n$var\tcontroller0$i" >> /etc/hosts
          else
              name=$(printf "%03d" $i)
              echo -e "\n$var\tcompute$name" >> /etc/hosts
          fi
          i=$((i+1))
      done
      
      reboot
      
      
    • What the initialization covers on each node:

      • Configure the network interfaces (see the sketch after this list)
      • Disable the firewall
      • Install Docker
      • Set the hostname and add the hosts entries
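
      The NIC configuration itself is not shown in the script above; a minimal sketch of a static configuration for the management interface, assuming the controller203 addressing from the plan (the gateway and DNS values are placeholders, adjust them to your network):

      # /etc/sysconfig/network-scripts/ifcfg-enp2s0f0 (example for controller203)
      TYPE=Ethernet
      BOOTPROTO=static
      NAME=enp2s0f0
      DEVICE=enp2s0f0
      ONBOOT=yes
      IPADDR=192.168.118.203
      PREFIX=24
      GATEWAY=192.168.118.1
      DNS1=114.114.114.114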

    III. Kolla host configuration

    Set up passwordless SSH between the nodes

    Generate the key pair and collect the public keys

    ssh-keygen
    pub_key=`cat ~/.ssh/id_rsa.pub`
    echo "$pub_key root@kolla" >> ~/.ssh/authorized_keys
    echo "$pub_key root@controller01" >> ~/.ssh/authorized_keys
    echo "$pub_key root@controller02" >> ~/.ssh/authorized_keys
    echo "$pub_key root@controller03" >> ~/.ssh/authorized_keys
    #echo "$pub_key root@compute001" >> ~/.ssh/authorized_keys
    #echo "$pub_key root@compute002" >> ~/.ssh/authorized_keys
    

    Distribute the authorized_keys file to each host's ~/.ssh/ directory

    scp  ~/.ssh/authorized_keys  root@controller01:~/.ssh/
    scp  ~/.ssh/authorized_keys  root@controller02:~/.ssh/
    scp  ~/.ssh/authorized_keys  root@controller03:~/.ssh/
    #scp  ~/.ssh/authorized_keys  root@compute001:~/.ssh/
    #scp  ~/.ssh/authorized_keys  root@compute002:~/.ssh/
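    
    Alternatively (assuming password authentication is still enabled on the target hosts), ssh-copy-id achieves the same result:
    
    for h in controller01 controller02 controller03; do
    	ssh-copy-id root@$h
    done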
    

    Configure the Docker registry:

    Configure domestic (China) registry mirrors:

    [root@kolla ~]# mkdir -p /etc/docker
    [root@kolla ~]# vim /etc/docker/daemon.json
    {
    	"registry-mirrors": [
        "https://registry.docker-cn.com",
        "https://docker.mirrors.ustc.edu.cn",
        "http://hub-mirror.c.163.com",
        "https://cr.console.aliyun.com/",
        "http://f2d6cb40.m.daocloud.io"
       	 ]
    }
    

    Start Docker

    [root@kolla ~]# systemctl daemon-reload && systemctl enable docker && systemctl restart docker
    

    Verify that the mirror configuration works

    [root@kolla ~]# docker pull hello-world
    

    Install dependencies

    Install and upgrade pip

    [root@kolla ~]# yum install epel-release -y
    [root@kolla ~]# yum install python-pip -y
    [root@kolla ~]# pip install -U pip
    

    Change the pip index to a mirror

    [root@kolla ~]# mkdir ~/.pip
    [root@kolla ~]# vim ~/.pip/pip.conf
    [global]
    trusted-host = pypi.douban.com
    index-url = http://pypi.douban.com/simple
    

    Install the remaining dependency packages

    [root@kolla ~]# yum install python-devel libffi-devel gcc openssl-devel libselinux-python -y
    

    Install and configure Ansible:

    Install with pip first and then with yum, to avoid some Python packages being too old

    [root@kolla ~]# pip install ansible
    [root@kolla ~]# yum install ansible -y
    

    Add the following to the /etc/ansible/ansible.cfg configuration file:

    [defaults]
    host_key_checking=False
    pipelining=True
    forks=100
    

    Install and configure kolla-ansible:

    Install kolla-ansible with pip:

    pip install kolla-ansible
    

    Copy the globals.yml and passwords.yml files to the /etc/kolla directory:

    cp -r /usr/share/kolla-ansible/etc_examples/kolla /etc/kolla/
    

    Copy the all-in-one and multinode files to the current working directory:

    cp /usr/share/kolla-ansible/ansible/inventory/* .
    

    Modify the globals.yml file

    [globals.yml](http://paste.ubuntu.org.cn/4360073)

    ---
    # You can use this file to override _any_ variable throughout Kolla.
    # Additional options can be found in the
    # 'kolla-ansible/ansible/group_vars/all.yml' file. Default value of all the
    # commented parameters are shown here, To override the default value uncomment
    # the parameter and change its value.
     
    ###############
    # Kolla options
    ###############
    # Valid options are [ COPY_ONCE, COPY_ALWAYS ]
    #config_strategy: "COPY_ALWAYS"
     
    # Valid options are ['centos', 'debian', 'oraclelinux', 'rhel', 'ubuntu']
    kolla_base_distro: "centos"
     
    # Valid options are [ binary, source ]
    kolla_install_type: "source"
     
    # Valid option is Docker repository tag
    openstack_release: "queens"
     
    # Location of configuration overrides
    #node_custom_config: "/etc/kolla/config"
     
    # This should be a VIP, an unused IP on your network that will float between
    # the hosts running keepalived for high-availability. If you want to run an
    # All-In-One without haproxy and keepalived, you can set enable_haproxy to no
    # in "OpenStack options" section, and set this value to the IP of your
    # 'network_interface' as set in the Networking section below.
    kolla_internal_vip_address: "192.168.118.209"
     
    # This is the DNS name that maps to the kolla_internal_vip_address VIP. By
    # default it is the same as kolla_internal_vip_address.
    #kolla_internal_fqdn: "{{ kolla_internal_vip_address }}"
     
    # This should be a VIP, an unused IP on your network that will float between
    # the hosts running keepalived for high-availability. It defaults to the
    # kolla_internal_vip_address, allowing internal and external communication to
    # share the same address.  Specify a kolla_external_vip_address to separate
    # internal and external requests between two VIPs.
    #kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
     
    # The Public address used to communicate with OpenStack as set in the public_url
    # for the endpoints that will be created. This DNS name should map to
    # kolla_external_vip_address.
    #kolla_external_fqdn: "{{ kolla_external_vip_address }}"
     
    ################
    # Docker options
    ################
    # Below is an example of a private repository with authentication. Note the
    # Docker registry password can also be set in the passwords.yml file.
     
    docker_registry: "192.168.118.212:4000"
    #docker_namespace: "companyname"
    #docker_registry_username: "sam"
    #docker_registry_password: "correcthorsebatterystaple"
     
    ###################
    # Messaging options
    ###################
    # Below is an example of an separate backend that provides brokerless
    # messaging for oslo.messaging RPC communications
     
    #om_rpc_transport: "amqp"
    #om_rpc_user: "{{ qdrouterd_user }}"
    #om_rpc_password: "{{ qdrouterd_password }}"
    #om_rpc_port: "{{ qdrouterd_port }}"
    #om_rpc_group: "qdrouterd"
     
     
    ##############################
    # Neutron - Networking Options
    ##############################
    # This interface is what all your api services will be bound to by default.
    # Additionally, all vxlan/tunnel and storage network traffic will go over this
    # interface by default. This interface must contain an IPv4 address.
    # It is possible for hosts to have non-matching names of interfaces - these can
    # be set in an inventory file per host or per group or stored separately, see
    #     http://docs.ansible.com/ansible/intro_inventory.html
    # Yet another way to workaround the naming problem is to create a bond for the
    # interface on all hosts and give the bond name here. Similar strategy can be
    # followed for other types of interfaces.
    network_interface: "enp0s31f6"
     
    # These can be adjusted for even more customization. The default is the same as
    # the 'network_interface'. These interfaces must contain an IPv4 address.
    #kolla_external_vip_interface: "{{ network_interface }}"
    #api_interface: "{{ network_interface }}"
    #storage_interface: "{{ network_interface }}"
    #cluster_interface: "{{ network_interface }}"
    #tunnel_interface: "{{ network_interface }}"
    #dns_interface: "{{ network_interface }}"
     
    # This is the raw interface given to neutron as its external network port. Even
    # though an IP address can exist on this interface, it will be unusable in most
    # configurations. It is recommended this interface not be configured with any IP
    # addresses for that reason.
    #neutron_external_interface: "eth1"
     
    # Valid options are [ openvswitch, linuxbridge, vmware_nsxv, vmware_dvs, opendaylight ]
    #neutron_plugin_agent: "openvswitch"
     
     
    ####################
    # keepalived options
    ####################
    # Arbitrary unique number from 0..255
    #keepalived_virtual_router_id: "51"
     
     
    #############
    # TLS options
    #############
    # To provide encryption and authentication on the kolla_external_vip_interface,
    # TLS can be enabled.  When TLS is enabled, certificates must be provided to
    # allow clients to perform authentication.
    #kolla_enable_tls_external: "no"
    #kolla_external_fqdn_cert: "{{ node_config_directory }}/certificates/haproxy.pem"
     
     
    ##############
    # OpenDaylight
    ##############
    #enable_opendaylight_qos: "no"
    #enable_opendaylight_l3: "yes"
     
    ###################
    # OpenStack options
    ###################
    # Use these options to set the various log levels across all OpenStack projects
    # Valid options are [ True, False ]
    #openstack_logging_debug: "False"
     
    # Valid options are [ none, novnc, spice, rdp ]
    #nova_console: "novnc"
     
    # OpenStack services can be enabled or disabled with these options
    enable_aodh: "yes"
    enable_barbican: "yes"
    enable_blazar: "yes"
    enable_ceilometer: "yes"
    enable_central_logging: "yes"
    enable_ceph: "yes"
    enable_ceph_mds: "no"
    enable_ceph_rgw: "no"
    enable_ceph_nfs: "no"
    enable_chrony: "yes"
    enable_cinder: "yes"
    enable_cinder_backup: "yes"
    enable_cinder_backend_hnas_iscsi: "no"
    enable_cinder_backend_hnas_nfs: "no"
    enable_cinder_backend_iscsi: "no"
    enable_cinder_backend_lvm: "no"
    enable_cinder_backend_nfs: "no"
    enable_cloudkitty: "yes"
    enable_collectd: "yes"
    enable_congress: "yes"
    enable_designate: "yes"
    enable_destroy_images: "yes"
    enable_etcd: "yes"
    enable_fluentd: "yes"
    enable_freezer: "yes"
    enable_gnocchi: "yes"
    enable_grafana: "yes"
    enable_haproxy: "yes"
    enable_heat: "yes"
    enable_horizon: "yes"
    enable_horizon_cloudkitty: "{{ enable_cloudkitty | bool }}"
    enable_horizon_designate: "{{ enable_designate | bool }}"
    enable_horizon_freezer: "{{ enable_freezer | bool }}"
    enable_horizon_ironic: "{{ enable_ironic | bool }}"
    enable_horizon_karbor: "{{ enable_karbor | bool }}"
    enable_horizon_magnum: "{{ enable_magnum | bool }}"
    enable_horizon_manila: "{{ enable_manila | bool }}"
    enable_horizon_mistral: "{{ enable_mistral | bool }}"
    enable_horizon_murano: "{{ enable_murano | bool }}"
    enable_horizon_neutron_lbaas: "{{ enable_neutron_lbaas | bool }}"
    enable_horizon_sahara: "{{ enable_sahara | bool }}"
    enable_horizon_searchlight: "{{ enable_searchlight | bool }}"
    enable_horizon_senlin: "{{ enable_senlin | bool }}"
    enable_horizon_solum: "{{ enable_solum | bool }}"
    enable_horizon_tacker: "{{ enable_tacker | bool }}"
    enable_horizon_trove: "{{ enable_trove | bool }}"
    enable_horizon_watcher: "{{ enable_watcher | bool }}"
    enable_horizon_zun: "{{ enable_zun | bool }}"
    enable_hyperv: "yes"
    enable_influxdb: "yes"
    enable_ironic: "yes"
    enable_ironic_pxe_uefi: "yes"
    enable_karbor: "yes"
    enable_kuryr: "yes"
    enable_magnum: "yes"
    enable_manila: "yes"
    enable_manila_backend_generic: "yes"
    enable_manila_backend_hnas: "yes"
    enable_manila_backend_cephfs_native: "yes"
    enable_manila_backend_cephfs_nfs: "yes"
    enable_mistral: "yes"
    enable_mongodb: "yes"
    enable_murano: "yes"
    enable_multipathd: "yes"
    enable_neutron_bgp_dragent: "yes"
    enable_neutron_dvr: "yes"
    enable_neutron_lbaas: "yes"
    enable_neutron_fwaas: "yes"
    enable_neutron_qos: "yes"
    enable_neutron_agent_ha: "yes"
    enable_neutron_vpnaas: "yes"
    enable_neutron_sriov: "yes"
    enable_neutron_sfc: "yes"
    enable_nova_fake: "yes"
    enable_nova_serialconsole_proxy: "yes"
    enable_octavia: "yes"
    enable_opendaylight: "yes"
    enable_openvswitch: "{{ neutron_plugin_agent != 'linuxbridge' }}"
    enable_ovs_dpdk: "no"
    enable_osprofiler: "yes"
    enable_panko: "yes"
    enable_qdrouterd: "yes"
    enable_rally: "yes"
    enable_redis: "yes"
    enable_sahara: "yes"
    enable_searchlight: "yes"
    enable_senlin: "yes"
    enable_skydive: "yes"
    enable_solum: "yes"
    enable_swift: "no"
    enable_telegraf: "yes"
    enable_tacker: "yes"
    enable_tempest: "yes"
    enable_trove: "yes"
    enable_vitrage: "yes"
    enable_vmtp: "yes"
    enable_watcher: "yes"
    enable_zun: "no"
     
    ##############
    # Ceph options
    ##############
    # Ceph can be setup with a caching to improve performance. To use the cache you
    # must provide separate disks than those for the OSDs
    #ceph_enable_cache: "no"
     
    # Set to no if using external Ceph without cephx.
    #external_ceph_cephx_enabled: "yes"
     
    # Ceph is not able to determine the size of a cache pool automatically,
    # so the configuration on the absolute size is required here, otherwise the flush/evict will not work.
    #ceph_target_max_bytes: ""
    #ceph_target_max_objects: ""
     
    # Valid options are [ forward, none, writeback ]
    #ceph_cache_mode: "writeback"
     
    # A requirement for using the erasure-coded pools is you must setup a cache tier
    # Valid options are [ erasure, replicated ]
    #ceph_pool_type: "replicated"
     
    # Integrate ceph rados object gateway with openstack keystone
    #enable_ceph_rgw_keystone: "no"
     
    # Set the pgs and pgps for pool
    #ceph_pool_pg_num: 128
    #ceph_pool_pgp_num: 128
     
    #############################
    # Keystone - Identity Options
    #############################
     
    # Valid options are [ uuid, fernet ]
    #keystone_token_provider: 'uuid'
     
    # Interval to rotate fernet keys by (in seconds). Must be an interval of
    # 60(1 min), 120(2 min), 180(3 min), 240(4 min), 300(5 min), 360(6 min),
    # 600(10 min), 720(12 min), 900(15 min), 1200(20 min), 1800(30 min),
    # 3600(1 hour), 7200(2 hour), 10800(3 hour), 14400(4 hour), 21600(6 hour),
    # 28800(8 hour), 43200(12 hour), 86400(1 day), 604800(1 week).
    #fernet_token_expiry: 86400
     
     
    ########################
    # Glance - Image Options
    ########################
    # Configure image backend.
    #glance_backend_file: "yes"
    #glance_backend_ceph: "no"
    #glance_backend_vmware: "no"
    #glance_backend_swift: "no"
     
     
    ##################
    # Barbican options
    ##################
    # Valid options are [ simple_crypto, p11_crypto ]
    #barbican_crypto_plugin: "simple_crypto"
    #barbican_library_path: "/usr/lib/libCryptoki2_64.so"
     
    ################
    ## Panko options
    ################
    # Valid options are [ mongodb, mysql ]
    #panko_database_type: "mysql"
     
    #################
    # Gnocchi options
    #################
    # Valid options are [ file, ceph ]
    #gnocchi_backend_storage: "{{ 'ceph' if enable_ceph|bool else 'file' }}"
     
     
    ################################
    # Cinder - Block Storage Options
    ################################
    # Enable / disable Cinder backends
    #cinder_backend_ceph: "{{ enable_ceph }}"
    #cinder_backend_vmwarevc_vmdk: "no"
    #cinder_volume_group: "cinder-volumes"
     
    # Valid options are [ nfs, swift, ceph ]
    #cinder_backup_driver: "ceph"
    #cinder_backup_share: ""
    #cinder_backup_mount_options_nfs: ""
     
     
    ###################
    # Designate options
    ###################
    # Valid options are [ bind9 ]
    #designate_backend: "bind9"
    #designate_ns_record: "sample.openstack.org"
     
    ########################
    # Nova - Compute Options
    ########################
    #nova_backend_ceph: "{{ enable_ceph }}"
     
    # Valid options are [ qemu, kvm, vmware, xenapi ]
    #nova_compute_virt_type: "kvm"
     
    # The number of fake driver per compute node
    #num_nova_fake_per_node: 5
     
    #################
    # Hyper-V options
    #################
    # Hyper-V can be used as hypervisor
    #hyperv_username: "user"
    #hyperv_password: "password"
    #vswitch_name: "vswitch"
    # URL from which Nova Hyper-V MSI is downloaded
    #nova_msi_url: "https://www.cloudbase.it/downloads/HyperVNovaCompute_Beta.msi"
     
    #############################
    # Horizon - Dashboard Options
    #############################
    #horizon_backend_database: "{{ enable_murano | bool }}"
     
    #############################
    # Ironic options
    #############################
    #ironic_dnsmasq_dhcp_range:
     
    ######################################
    # Manila - Shared File Systems Options
    ######################################
    # HNAS backend configuration
    #hnas_ip:
    #hnas_user:
    #hnas_password:
    #hnas_evs_id:
    #hnas_evs_ip:
    #hnas_file_system_name:
     
    ################################
    # Swift - Object Storage Options
    ################################
    # Swift expects block devices to be available for storage. Two types of storage
    # are supported: 1 - storage device with a special partition name and filesystem
    # label, 2 - unpartitioned disk  with a filesystem. The label of this filesystem
    # is used to detect the disk which Swift will be using.
     
    # Swift support two matching modes, valid options are [ prefix, strict ]
    #swift_devices_match_mode: "strict"
     
    # This parameter defines matching pattern: if "strict" mode was selected,
    # for swift_devices_match_mode then swift_device_name should specify the name of
    # the special swift partition for example: "KOLLA_SWIFT_DATA", if "prefix" mode was
    # selected then swift_devices_name should specify a pattern which would match to
    # filesystems' labels prepared for swift.
    #swift_devices_name: "KOLLA_SWIFT_DATA"
     
     
    ################################################
    # Tempest - The OpenStack Integration Test Suite
    ################################################
    # following value must be set when enable tempest
    tempest_image_id:
    tempest_flavor_ref_id:
    tempest_public_network_id:
    tempest_floating_network_name:
     
    # tempest_image_alt_id: "{{ tempest_image_id }}"
    # tempest_flavor_ref_alt_id: "{{ tempest_flavor_ref_id }}"
     
    ###################################
    # VMware - OpenStack VMware support
    ###################################
    #vmware_vcenter_host_ip:
    #vmware_vcenter_host_username:
    #vmware_vcenter_host_password:
    #vmware_datastore_name:
    #vmware_vcenter_name:
    #vmware_vcenter_cluster_name:
     
    #######################################
    # XenAPI - Support XenAPI for XenServer
    #######################################
    # XenAPI driver use HIMN(Host Internal Management Network)
    # to communicate with XenServer host.
    #xenserver_himn_ip:
    #xenserver_username:
    #xenserver_connect_protocol:
    

    Pull the images

    kolla-ansible pull -vvv
    

    Modify globals.yml again (the images pulled with the previous configuration were missing nova-compute and others)

    globals.yml

    # Location of configuration overrides
    node_custom_config: "/etc/kolla/config"
     
    # This should be a VIP, an unused IP on your network that will float between
    # the hosts running keepalived for high-availability. If you want to run an
    # All-In-One without haproxy and keepalived, you can set enable_haproxy to no
    # in "OpenStack options" section, and set this value to the IP of your
    # 'network_interface' as set in the Networking section below.
    kolla_internal_vip_address: "192.168.216.160"
     
    ################
    # Docker options
    ################
    # Below is an example of a private repository with authentication. Note the
    # Docker registry password can also be set in the passwords.yml file.
     
    #docker_registry: "kolla:4000"
    #docker_namespace: "kolla"
    #docker_registry_username: "sam"
    #docker_registry_password: "correcthorsebatterystaple"
     
    ##############################
    # Neutron - Networking Options
    ##############################
     
    # This is the raw interface given to neutron as its external network port. Even
    # though an IP address can exist on this interface, it will be unusable in most
    # configurations. It is recommended this interface not be configured with any IP
    # addresses for that reason.
    #neutron_external_interface: "ens35"
     
    # OpenStack services can be enabled or disabled with these options
    #enable_aodh: "no"
    #enable_barbican: "no"
    #enable_blazar: "no"
    enable_ceilometer: "yes"
    #enable_central_logging: "no"
    #enable_ceph: "no"
    #enable_ceph_mds: "no"
    #enable_ceph_rgw: "no"
    #enable_ceph_nfs: "no"
    enable_chrony: "yes"
    enable_cinder: "yes"
    #enable_cinder_backup: "yes"
    #enable_cinder_backend_hnas_iscsi: "no"
    #enable_cinder_backend_hnas_nfs: "no"
    #enable_cinder_backend_iscsi: "no"
    enable_cinder_backend_lvm: "yes"
    #enable_cinder_backend_nfs: "no"
    #enable_cloudkitty: "no"
    #enable_collectd: "no"
    #enable_congress: "no"
    #enable_designate: "no"
    #enable_destroy_images: "no"
    #enable_etcd: "no"
    #enable_fluentd: "yes"
    #enable_freezer: "no"
    enable_gnocchi: "yes"
    #enable_grafana: "no"
    #enable_haproxy: "yes"
    #enable_heat: "yes"
    #enable_horizon: "yes"
    #enable_horizon_cloudkitty: "{{ enable_cloudkitty | bool }}"
    #enable_horizon_designate: "{{ enable_designate | bool }}"
    #enable_horizon_freezer: "{{ enable_freezer | bool }}"
    #enable_horizon_ironic: "{{ enable_ironic | bool }}"
    #enable_horizon_karbor: "{{ enable_karbor | bool }}"
    #enable_horizon_magnum: "{{ enable_magnum | bool }}"
    #enable_horizon_manila: "{{ enable_manila | bool }}"
    #enable_horizon_mistral: "{{ enable_mistral | bool }}"
    #enable_horizon_murano: "{{ enable_murano | bool }}"
    #enable_horizon_neutron_lbaas: "{{ enable_neutron_lbaas | bool }}"
    #enable_horizon_sahara: "{{ enable_sahara | bool }}"
    #enable_horizon_searchlight: "{{ enable_searchlight | bool }}"
    #enable_horizon_senlin: "{{ enable_senlin | bool }}"
    #enable_horizon_solum: "{{ enable_solum | bool }}"
    #enable_horizon_tacker: "{{ enable_tacker | bool }}"
    #enable_horizon_trove: "{{ enable_trove | bool }}"
    #enable_horizon_watcher: "{{ enable_watcher | bool }}"
    #enable_horizon_zun: "{{ enable_zun | bool }}"
    #enable_hyperv: "no"
    #enable_influxdb: "no"
    #enable_ironic: "no"
    #enable_ironic_pxe_uefi: "no"
    #enable_karbor: "no"
    #enable_kuryr: "no"
    #enable_magnum: "no"
    #enable_manila: "no"
    #enable_manila_backend_generic: "no"
    #enable_manila_backend_hnas: "no"
    #enable_manila_backend_cephfs_native: "no"
    #enable_manila_backend_cephfs_nfs: "no"
    #enable_mistral: "no"
    #enable_mongodb: "no"
    #enable_murano: "no"
    #enable_multipathd: "no"
    #enable_neutron_bgp_dragent: "no"
    #enable_neutron_dvr: "no"
    #enable_neutron_lbaas: "no"
    #enable_neutron_fwaas: "no"
    #enable_neutron_qos: "no"
    #enable_neutron_agent_ha: "no"
    #enable_neutron_vpnaas: "no"
    #enable_neutron_sriov: "no"
    #enable_neutron_sfc: "no"
    #enable_nova_fake: "no"
    #enable_nova_serialconsole_proxy: "no"
    #enable_octavia: "no"
    #enable_opendaylight: "no"
    #enable_openvswitch: "{{ neutron_plugin_agent != 'linuxbridge' }}"
    #enable_ovs_dpdk: "no"
    #enable_osprofiler: "no"
    #enable_panko: "no"
    #enable_qdrouterd: "no"
    #enable_rally: "no"
    #enable_redis: "no"
    #enable_sahara: "no"
    #enable_searchlight: "no"
    #enable_senlin: "no"
    #enable_skydive: "no"
    #enable_solum: "no"
    #enable_swift: "no"
    #enable_telegraf: "no"
    #enable_tacker: "no"
    #enable_tempest: "no"
    #enable_trove: "no"
    #enable_vitrage: "no"
    #enable_vmtp: "no"
    #enable_watcher: "no"
    #enable_zun: "no"
     
    ########################
    # Nova - Compute Options
    ########################
    #nova_backend_ceph: "{{ enable_ceph }}"
     
    # Valid options are [ qemu, kvm, vmware, xenapi ]
    nova_compute_virt_type: "kvm"
    

    Pull the images

     kolla-ansible pull -vvv
    

    Push the images to the local registry:

    Configure Docker shared mounts:

    [root@kolla ~]# mkdir -p /etc/systemd/system/docker.service.d
    [root@kolla ~]# vim /etc/systemd/system/docker.service.d/kolla.conf
    [Service]
    MountFlags=shared
    [root@kolla ~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
    

    Start the registry container and map it to port 4000

    [root@kolla /]# docker run -d --name registry --restart=always -p 4000:5000 -v /opt/registry:/var/lib/registry registry:2.6.2
    

    Modify the Docker service configuration so that it trusts the local registry

    [root@kolla /]# vim /usr/lib/systemd/system/docker.service
    ExecStart=/usr/bin/dockerd --insecure-registry kolla:4000
    

    Restart the Docker service

    systemctl daemon-reload && systemctl restart docker
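    
    Equivalently, the same trust can be declared in /etc/docker/daemon.json instead of editing the unit file (a sketch; merge it with the registry-mirrors entries configured earlier):
    
    {
    	"insecure-registries": ["kolla:4000"]
    }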
    

    Check that the registry service is working:

    [root@kolla ~]# curl -X GET http://kolla:4000/v2/_catalog
    {"repositories":[]}
    

    Retag the images:

    # retag every kolla image (skipping the registry image and the header line) to point at the local registry
    for i in `docker images|grep -v registry|grep -v REPOSITORY|awk '{print $1}'`;
    do 
    	docker image tag $i:queens kolla:4000/$i:queens;
    done
    

    Push to the local registry

    # push every retagged kolla:4000/* image to the local registry
    for i in `docker images|grep kolla:4000|awk '{print $1}'`;
    do
    	docker push $i:queens;
    done
    

    Verify that the images were uploaded:

    curl -XGET http://kolla:4000/v2/_catalog
    {
    	"repositories": [
    	"kolla/centos-source-aodh-api",
    	"kolla/centos-source-aodh-evaluator",
    	"kolla/centos-source-aodh-listener",
    	"kolla/centos-source-aodh-notifier",
    	"kolla/centos-source-barbican-api",
    	"kolla/centos-source-barbican-keystone-listener",
    	"kolla/centos-source-barbican-worker",
    	"kolla/centos-source-blazar-api",
    	"kolla/centos-source-blazar-manager",
    	"kolla/centos-source-ceilometer-central",
    	"kolla/centos-source-ceilometer-compute",
    	"kolla/centos-source-ceilometer-notification",
    	"kolla/centos-source-ceph-mds",
    	"kolla/centos-source-ceph-mgr",
    	"kolla/centos-source-ceph-mon",
    	"kolla/centos-source-ceph-nfs",
    	"kolla/centos-source-ceph-osd",
    	"kolla/centos-source-ceph-rgw",
    	"kolla/centos-source-chrony",
    	"kolla/centos-source-cinder-api",
    	"kolla/centos-source-cinder-backup",
    	"kolla/centos-source-cinder-scheduler",
    	"kolla/centos-source-cinder-volume",
    	"kolla/centos-source-cloudkitty-api",
    	"kolla/centos-source-cloudkitty-processor",
    	"kolla/centos-source-collectd",
    	"kolla/centos-source-congress-api",
    	"kolla/centos-source-congress-datasource",
    	"kolla/centos-source-congress-policy-engine",
    	"kolla/centos-source-cron",
    	"kolla/centos-source-designate-api",
    	"kolla/centos-source-designate-backend-bind9",
    	"kolla/centos-source-designate-central",
    	"kolla/centos-source-designate-mdns",
    	"kolla/centos-source-designate-producer",
    	"kolla/centos-source-designate-sink",
    	"kolla/centos-source-designate-worker",
    	"kolla/centos-source-dnsmasq",
    	"kolla/centos-source-elasticsearch",
    	"kolla/centos-source-etcd",
    	"kolla/centos-source-fluentd",
    	"kolla/centos-source-freezer-api",
    	"kolla/centos-source-glance-api",
    	"kolla/centos-source-gnocchi-api",
    	"kolla/centos-source-gnocchi-metricd",
    	"kolla/centos-source-gnocchi-statsd",
    	"kolla/centos-source-grafana",
    	"kolla/centos-source-haproxy",
    	"kolla/centos-source-heat-api",
    	"kolla/centos-source-heat-api-cfn",
    	"kolla/centos-source-heat-engine",
    	"kolla/centos-source-horizon",
    	"kolla/centos-source-influxdb",
    	"kolla/centos-source-ironic-api",
    	"kolla/centos-source-ironic-conductor",
    	"kolla/centos-source-ironic-inspector",
    	"kolla/centos-source-ironic-pxe",
    	"kolla/centos-source-iscsid",
    	"kolla/centos-source-karbor-api",
    	"kolla/centos-source-karbor-operationengine",
    	"kolla/centos-source-karbor-protection",
    	"kolla/centos-source-keepalived",
    	"kolla/centos-source-keystone",
    	"kolla/centos-source-kibana",
    	"kolla/centos-source-kolla-toolbox",
    	"kolla/centos-source-kuryr-libnetwork",
    	"kolla/centos-source-magnum-api",
    	"kolla/centos-source-magnum-conductor",
    	"kolla/centos-source-manila-api",
    	"kolla/centos-source-manila-data",
    	"kolla/centos-source-manila-scheduler",
    	"kolla/centos-source-manila-share",
    	"kolla/centos-source-mariadb",
    	"kolla/centos-source-memcached",
    	"kolla/centos-source-mistral-api",
    	"kolla/centos-source-mistral-engine",
    	"kolla/centos-source-mistral-executor",
    	"kolla/centos-source-mongodb",
    	"kolla/centos-source-multipathd",
    	"kolla/centos-source-murano-api",
    	"kolla/centos-source-murano-engine",
    	"kolla/centos-source-neutron-bgp-dragent",
    	"kolla/centos-source-neutron-dhcp-agent",
    	"kolla/centos-source-neutron-l3-agent",
    	"kolla/centos-source-neutron-lbaas-agent",
    	"kolla/centos-source-neutron-metadata-agent",
    	"kolla/centos-source-neutron-openvswitch-agent",
    	"kolla/centos-source-neutron-server",
    	"kolla/centos-source-neutron-server-opendaylight",
    	"kolla/centos-source-neutron-sriov-agent",
    	"kolla/centos-source-neutron-vpnaas-agent",
    	"kolla/centos-source-nova-api",
    	"kolla/centos-source-nova-compute",
    	"kolla/centos-source-nova-compute-ironic",
    	"kolla/centos-source-nova-conductor",
    	"kolla/centos-source-nova-consoleauth",
    	"kolla/centos-source-nova-libvirt",
    	"kolla/centos-source-nova-novncproxy",
    	"kolla/centos-source-nova-placement-api",
    	"kolla/centos-source-nova-scheduler"]
    }
    

    Modify the deployment inventory

    Edit the multinode file in the current directory: multinode

    # These initial groups are the only groups required to be modified. The
    # additional groups are for more control of the environment.
    [control]
    # These hostname must be resolvable from your deployment host
    controller01
    controller02
    controller03
     
    # The above can also be specified as follows:
    #control[01:03]     ansible_user=kolla
     
    # The network nodes are where your l3-agent and loadbalancers will run
    # This can be the same as a host in the control group
    [network]
    controller01
    controller02
    controller03
     
    # inner-compute is the groups of compute nodes which do not have
    # external reachability
    [inner-compute]
     
    # external-compute is the groups of compute nodes which can reach
    # outside
    [external-compute]
    compute01
    compute02
     
    [compute:children]
    inner-compute
    external-compute
     
    [monitoring]
    controller01
     
    # When compute nodes and control nodes use different interfaces,
    # you need to comment out "api_interface" and other interfaces from the globals.yml
    # and specify like below:
    #compute01 neutron_external_interface=eth0 api_interface=em1 storage_interface=em1 tunnel_interface=em1
     
    [storage]
    compute01
    compute02
     
    [deployment]
    localhost       ansible_connection=local
     
    [baremetal:children]
    control
    network
    compute
    storage
    monitoring
     
    # You can explicitly specify which hosts run each project by updating the
    # groups in the sections below. Common services are grouped together.
    [chrony-server:children]
    haproxy
     
    [chrony:children]
    control
    network
    compute
    storage
    monitoring
     
    [collectd:children]
    compute
     
    [grafana:children]
    monitoring
     
    [etcd:children]
    control
    compute
     
    [influxdb:children]
    monitoring
     
    [karbor:children]
    control
     
    [kibana:children]
    control
     
    [telegraf:children]
    compute
    control
    monitoring
    network
    storage
     
    [elasticsearch:children]
    control
     
    [haproxy:children]
    network
     
    [hyperv]
    #hyperv_host
     
    [hyperv:vars]
    #ansible_user=user
    #ansible_password=password
    #ansible_port=5986
    #ansible_connection=winrm
    #ansible_winrm_server_cert_validation=ignore
     
    [mariadb:children]
    control
     
    [rabbitmq:children]
    control
     
    [outward-rabbitmq:children]
    control
     
    [qdrouterd:children]
    control
     
    [mongodb:children]
    control
     
    [keystone:children]
    control
     
    [glance:children]
    control
     
    [nova:children]
    control
     
    [neutron:children]
    network
     
    [openvswitch:children]
    network
    compute
    manila-share
     
    [opendaylight:children]
    network
     
    [cinder:children]
    control
     
    [cloudkitty:children]
    control
     
    [freezer:children]
    control
     
    [memcached:children]
    control
     
    [horizon:children]
    control
     
    [swift:children]
    control
     
    [barbican:children]
    control
     
    [heat:children]
    control
     
    [murano:children]
    control
     
    [solum:children]
    control
     
    [ironic:children]
    control
     
    [ceph:children]
    control
     
    [magnum:children]
    control
     
    [sahara:children]
    control
     
    [mistral:children]
    control
     
    [manila:children]
    control
     
    [ceilometer:children]
    control
     
    [aodh:children]
    control
     
    [congress:children]
    control
     
    [panko:children]
    control
     
    [gnocchi:children]
    control
     
    [tacker:children]
    control
     
    [trove:children]
    control
     
    # Tempest
    [tempest:children]
    control
     
    [senlin:children]
    control
     
    [vmtp:children]
    control
     
    [vitrage:children]
    control
     
    [watcher:children]
    control
     
    [rally:children]
    control
     
    [searchlight:children]
    control
     
    [octavia:children]
    control
     
    [designate:children]
    control
     
    [placement:children]
    control
     
    [bifrost:children]
    deployment
     
    [zun:children]
    control
     
    [skydive:children]
    monitoring
     
    [redis:children]
    control
     
    [blazar:children]
    control
     
    # Additional control implemented here. These groups allow you to control which
    # services run on which hosts at a per-service level.
    #
    # Word of caution: Some services are required to run on the same host to
    # function appropriately. For example, neutron-metadata-agent must run on the
    # same host as the l3-agent and (depending on configuration) the dhcp-agent.
     
    # Glance
    [glance-api:children]
    glance
     
    [glance-registry:children]
    glance
     
    # Nova
    [nova-api:children]
    nova
     
    [nova-conductor:children]
    nova
     
    [nova-consoleauth:children]
    nova
     
    [nova-novncproxy:children]
    nova
     
    [nova-scheduler:children]
    nova
     
    [nova-spicehtml5proxy:children]
    nova
     
    [nova-compute-ironic:children]
    nova
     
    [nova-serialproxy:children]
    nova
     
    # Neutron
    [neutron-server:children]
    control
     
    [neutron-dhcp-agent:children]
    neutron
     
    [neutron-l3-agent:children]
    neutron
     
    [neutron-lbaas-agent:children]
    neutron
     
    [neutron-metadata-agent:children]
    neutron
     
    [neutron-vpnaas-agent:children]
    neutron
     
    [neutron-bgp-dragent:children]
    neutron
     
    # Ceph
    [ceph-mds:children]
    ceph
     
    [ceph-mgr:children]
    ceph
     
    [ceph-nfs:children]
    ceph
     
    [ceph-mon:children]
    ceph
     
    [ceph-rgw:children]
    ceph
     
    [ceph-osd:children]
    storage
     
    # Cinder
    [cinder-api:children]
    cinder
     
    [cinder-backup:children]
    storage
     
    [cinder-scheduler:children]
    cinder
     
    [cinder-volume:children]
    storage
     
    # Cloudkitty
    [cloudkitty-api:children]
    cloudkitty
     
    [cloudkitty-processor:children]
    cloudkitty
     
    # Freezer
    [freezer-api:children]
    freezer
     
    # iSCSI
    [iscsid:children]
    compute
    storage
    ironic
     
    [tgtd:children]
    storage
     
    # Karbor
    [karbor-api:children]
    karbor
     
    [karbor-protection:children]
    karbor
     
    [karbor-operationengine:children]
    karbor
     
    # Manila
    [manila-api:children]
    manila
     
    [manila-scheduler:children]
    manila
     
    [manila-share:children]
    network
     
    [manila-data:children]
    manila
     
    # Swift
    [swift-proxy-server:children]
    swift
     
    [swift-account-server:children]
    storage
     
    [swift-container-server:children]
    storage
     
    [swift-object-server:children]
    storage
     
    # Barbican
    [barbican-api:children]
    barbican
     
    [barbican-keystone-listener:children]
    barbican
     
    [barbican-worker:children]
    barbican
     
    # Heat
    [heat-api:children]
    heat
     
    [heat-api-cfn:children]
    heat
     
    [heat-engine:children]
    heat
     
    # Murano
    [murano-api:children]
    murano
     
    [murano-engine:children]
    murano
     
    # Ironic
    [ironic-api:children]
    ironic
     
    [ironic-conductor:children]
    ironic
     
    [ironic-inspector:children]
    ironic
     
    [ironic-pxe:children]
    ironic
     
    # Magnum
    [magnum-api:children]
    magnum
     
    [magnum-conductor:children]
    magnum
     
    # Sahara
    [sahara-api:children]
    sahara
     
    [sahara-engine:children]
    sahara
     
    # Solum
    [solum-api:children]
    solum
     
    [solum-worker:children]
    solum
     
    [solum-deployer:children]
    solum
     
    [solum-conductor:children]
    solum
     
    # Mistral
    [mistral-api:children]
    mistral
     
    [mistral-executor:children]
    mistral
     
    [mistral-engine:children]
    mistral
     
    # Ceilometer
    [ceilometer-central:children]
    ceilometer
     
    [ceilometer-notification:children]
    ceilometer
     
    [ceilometer-compute:children]
    compute
     
    # Aodh
    [aodh-api:children]
    aodh
     
    [aodh-evaluator:children]
    aodh
     
    [aodh-listener:children]
    aodh
     
    [aodh-notifier:children]
    aodh
     
    # Congress
    [congress-api:children]
    congress
     
    [congress-datasource:children]
    congress
     
    [congress-policy-engine:children]
    congress
     
    # Panko
    [panko-api:children]
    panko
     
    # Gnocchi
    [gnocchi-api:children]
    gnocchi
     
    [gnocchi-statsd:children]
    gnocchi
     
    [gnocchi-metricd:children]
    gnocchi
     
    # Trove
    [trove-api:children]
    trove
     
    [trove-conductor:children]
    trove
     
    [trove-taskmanager:children]
    trove
     
    # Multipathd
    [multipathd:children]
    compute
     
    # Watcher
    [watcher-api:children]
    watcher
     
    [watcher-engine:children]
    watcher
     
    [watcher-applier:children]
    watcher
     
    # Senlin
    [senlin-api:children]
    senlin
     
    [senlin-engine:children]
    senlin
     
    # Searchlight
    [searchlight-api:children]
    searchlight
     
    [searchlight-listener:children]
    searchlight
     
    # Octavia
    [octavia-api:children]
    octavia
     
    [octavia-health-manager:children]
    octavia
     
    [octavia-housekeeping:children]
    octavia
     
    [octavia-worker:children]
    octavia
     
    # Designate
    [designate-api:children]
    designate
     
    [designate-central:children]
    designate
     
    [designate-producer:children]
    designate
     
    [designate-mdns:children]
    network
     
    [designate-worker:children]
    designate
     
    [designate-sink:children]
    designate
     
    [designate-backend-bind9:children]
    designate
     
    # Placement
    [placement-api:children]
    placement
     
    # Zun
    [zun-api:children]
    zun
     
    [zun-compute:children]
    compute
     
    # Skydive
    [skydive-analyzer:children]
    skydive
     
    [skydive-agent:children]
    compute
    network
     
    # Tacker
    [tacker-server:children]
    tacker
     
    [tacker-conductor:children]
    tacker
     
    # Vitrage
    [vitrage-api:children]
    vitrage
     
    [vitrage-notifier:children]
    vitrage
     
    [vitrage-graph:children]
    vitrage
     
    [vitrage-collector:children]
    vitrage
     
    [vitrage-ml:children]
    vitrage
     
    # Blazar
    [blazar-api:children]
    blazar
     
    [blazar-manager:children]
    blazar
     
    The corresponding globals.yml settings:

    # Location of configuration overrides
    node_custom_config: "/etc/kolla/config"
     
    # This should be a VIP, an unused IP on your network that will float between
    # the hosts running keepalived for high-availability. If you want to run an
    # All-In-One without haproxy and keepalived, you can set enable_haproxy to no
    # in "OpenStack options" section, and set this value to the IP of your
    # 'network_interface' as set in the Networking section below.
     
    ################
    # Docker options
    ################
    # Below is an example of a private repository with authentication. Note the
    # Docker registry password can also be set in the passwords.yml file.
     
    docker_registry: "kolla:4000"
    docker_namespace: "kolla"
    #docker_registry_username: "sam"
    #docker_registry_password: "correcthorsebatterystaple"
     
    ##############################
    # Neutron - Networking Options
    ##############################
    # This interface is what all your api services will be bound to by default.
    # Additionally, all vxlan/tunnel and storage network traffic will go over this
    # interface by default. This interface must contain an IPv4 address.
    # It is possible for hosts to have non-matching names of interfaces - these can
    # be set in an inventory file per host or per group or stored separately, see
    #     http://docs.ansible.com/ansible/intro_inventory.html
    # Yet another way to workaround the naming problem is to create a bond for the
    # interface on all hosts and give the bond name here. Similar strategy can be
    # followed for other types of interfaces.
    network_interface: "enp0s31f6"
     
    # This is the raw interface given to neutron as its external network port. Even
    # though an IP address can exist on this interface, it will be unusable in most
    # configurations. It is recommended this interface not be configured with any IP
    # addresses for that reason.
     
    # Valid options are [ openvswitch, linuxbridge, vmware_nsxv, vmware_dvs, opendaylight ]
    neutron_plugin_agent: "openvswitch"
     
     
    ####################
    # keepalived options
    ####################
    # Arbitrary unique number from 0..255
    #keepalived_virtual_router_id: "51"
     
     
    # Valid options are [ none, novnc, spice, rdp ]
    #nova_console: "novnc"
     
    # OpenStack services can be enabled or disabled with these options
    #enable_aodh: "no"
    #enable_barbican: "no"
    #enable_blazar: "no"
    enable_ceilometer: "yes"
    enable_central_logging: "yes"
    #enable_ceph: "no"
    #enable_ceph_mds: "no"
    #enable_ceph_rgw: "no"
    #enable_ceph_nfs: "no"
    enable_chrony: "yes"
    enable_cinder: "yes"
    #enable_cinder_backup: "yes"
    #enable_cinder_backend_hnas_iscsi: "no"
    #enable_cinder_backend_hnas_nfs: "no"
    #enable_cinder_backend_iscsi: "no"
    enable_cinder_backend_lvm: "yes"
    #enable_cinder_backend_nfs: "no"
    #enable_cloudkitty: "no"
    #enable_collectd: "no"
    #enable_congress: "no"
    #enable_designate: "no"
    #enable_destroy_images: "no"
    #enable_etcd: "no"
    #enable_fluentd: "yes"
    #enable_freezer: "no"
    enable_gnocchi: "yes"
    #enable_grafana: "no"
    #enable_haproxy: "yes"
    #enable_heat: "yes"
    #enable_horizon: "yes"
    #enable_hyperv: "no"
    #enable_influxdb: "no"
    #enable_ironic: "no"
    #enable_ironic_pxe_uefi: "no"
    #enable_karbor: "no"
    #enable_kuryr: "no"
    #enable_magnum: "no"
    #enable_manila: "no"
    #enable_manila_backend_generic: "no"
    #enable_manila_backend_hnas: "no"
    #enable_manila_backend_cephfs_native: "no"
    #enable_manila_backend_cephfs_nfs: "no"
    #enable_mistral: "no"
    #enable_mongodb: "no"
    #enable_murano: "no"
    #enable_multipathd: "no"
    #enable_neutron_bgp_dragent: "no"
    #enable_neutron_dvr: "no"
    #enable_neutron_lbaas: "no"
    #enable_neutron_fwaas: "no"
    #enable_neutron_qos: "no"
    #enable_neutron_agent_ha: "no"
    #enable_neutron_vpnaas: "no"
    #enable_neutron_sriov: "no"
    #enable_neutron_sfc: "no"
    #enable_nova_fake: "no"
    #enable_nova_serialconsole_proxy: "no"
    #enable_octavia: "no"
    #enable_opendaylight: "no"
    #enable_openvswitch: "{{ neutron_plugin_agent != 'linuxbridge' }}"
    #enable_ovs_dpdk: "no"
    #enable_osprofiler: "no"
    #enable_panko: "no"
    #enable_qdrouterd: "no"
    #enable_rally: "no"
    #enable_redis: "no"
    #enable_sahara: "no"
    #enable_searchlight: "no"
    #enable_senlin: "no"
    #enable_skydive: "no"
    #enable_solum: "no"
    #enable_swift: "no"
    #enable_telegraf: "no"
    #enable_tacker: "no"
    #enable_tempest: "no"
    #enable_trove: "no"
    #enable_vitrage: "no"
    #enable_vmtp: "no"
    #enable_watcher: "no"
    #enable_zun: "no"
     
    ########################
    # Glance - Image Options
    ########################
    # Configure image backend.
    #glance_backend_file: "no"
    glance_backend_ceph: "yes"
    #glance_backend_vmware: "no"
    #glance_backend_swift: "no"
     
     
    # Nova - Compute Options
    ########################
    #nova_backend_ceph: "{{ enable_ceph }}"
     
    # Valid options are [ qemu, kvm, vmware, xenapi ]
    nova_compute_virt_type: "kvm"
     
    

    Deployment:

    Generate the random passwords file:

    kolla-genpwd
    

    Change the admin password used to log in to the Horizon dashboard:

    [root@kolla ~]# vim /etc/kolla/passwords.yml
    keepalived_password: mFbTVxF6XyrrT8NqaN5UpFB098GEXuZ9oQyfQI14
    keystone_admin_password: 123  # change this value
    keystone_database_password: C4EzIx0zhoFjsG9dA9TBRaZfbFIdT3f9sCe7jGyg
    

    Bootstrap the nodes with their dependencies:

    kolla-ansible -i ./multinode bootstrap-servers
    PLAY RECAP *************************************************************************************************************************************************************
    compute01                  : ok=38   changed=7    unreachable=0    failed=0   
    compute02                  : ok=38   changed=7    unreachable=0    failed=0   
    controller01               : ok=38   changed=7    unreachable=0    failed=0   
    controller02               : ok=39   changed=17   unreachable=0    failed=0   
    controller03               : ok=38   changed=7    unreachable=0    failed=0   
    localhost                  : ok=1    changed=0    unreachable=0    failed=0 
    

    Run the pre-deployment checks:

    kolla-ansible -i ./multinode prechecks
    PLAY RECAP ************************************************************************************************************************************************************
    compute01                  : ok=26   changed=1    unreachable=0    failed=0   
    compute02                  : ok=26   changed=1    unreachable=0    failed=0   
    controller01               : ok=91   changed=1    unreachable=0    failed=0   
    controller02               : ok=87   changed=1    unreachable=0    failed=0   
    controller03               : ok=87   changed=1    unreachable=0    failed=0   
    localhost                  : ok=6    changed=1    unreachable=0    failed=0 
    

    A Cinder error occurs during the prechecks

    TASK [cinder : Checking LVM volume group exists for Cinder] ***********************************************************************************************************
    skipping: [controller01]
    skipping: [controller02]
    skipping: [controller03]
    [DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|failed` use `result is failed`. This feature will be removed in version 2.9. 
    Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
    fatal: [compute01]: FAILED! => {"changed": false, "cmd": ["vgs", "cinder-volumes"], "delta": "0:00:00.009794", "end": "2018-10-13 18:33:13.868282", "failed_when_result": true, "msg": "non-zero return code", "rc": 5, "start": "2018-10-13 18:33:13.858488", "stderr": "  Volume group \"cinder-volumes\" not found\n  Cannot process volume group cinder-volumes", "stderr_lines": ["  Volume group \"cinder-volumes\" not found", "  Cannot process volume group cinder-volumes"], "stdout": "", "stdout_lines": []}
    [DEPRECATION WARNING]: Using tests as filters is deprecated. Instead of using `result|failed` use `result is failed`. This feature will be removed in version 2.9. 
    Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
    fatal: [compute02]: FAILED! => {"changed": false, "cmd": ["vgs", "cinder-volumes"], "delta": "0:00:00.010114", "end": "2018-10-13 18:33:13.860281", "failed_when_result": true, "msg": "non-zero return code", "rc": 5, "start": "2018-10-13 18:33:13.850167", "stderr": "  Volume group \"cinder-volumes\" not found\n  Cannot process volume group cinder-volumes", "stderr_lines": ["  Volume group \"cinder-volumes\" not found", "  Cannot process volume group cinder-volumes"], "stdout": "", "stdout_lines": []}
    

    Solution:

    [root@compute02 .ssh]# vgdisplay
      --- Volume group ---
      VG Name               centos
      System ID             
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  4
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                3
      Open LV               3
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               <1.82 TiB
      PE Size               4.00 MiB
      Total PE              476806
      Alloc PE / Size       476806 / <1.82 TiB
      Free  PE / Size       0 / 0   
      VG UUID               FEgDXH-SBlh-x29N-qU0f-Wajd-2sJ6-rbUre5
       
    [root@compute02 .ssh]# dd if=/dev/zero of=./disk.img count=200 bs=512MB
    200+0 records in
    200+0 records out
    102400000000 bytes (102 GB) copied, 509.072 s, 201 MB/s
    [root@compute02 .ssh]# losetup -f
    /dev/loop0
    [root@compute02 .ssh]# losetup /dev/loop0 disk.img
    [root@compute02 .ssh]# pvcreate /dev/loop0
      Physical volume "/dev/loop0" successfully created.
    [root@compute02 .ssh]# vgcreate cinder-volumes /dev/loop0
      Volume group "cinder-volumes" successfully created
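    
    Note: a loop device created this way does not persist across reboots. A minimal sketch of re-attaching it at boot via rc.local (the disk.img path is the one created above; adjust it if you placed the file elsewhere):
    
    # /etc/rc.d/rc.local (make sure it is executable: chmod +x /etc/rc.d/rc.local)
    losetup /dev/loop0 /root/.ssh/disk.img
    vgchange -ay cinder-volumes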
    

    Run the full deployment:

    kolla-ansible -i ./multinode deploy
    

    IV. Initialize OpenStack

    Remove the ipaddress Python package and reinstall it

    The installed version is too old and the client installation in the next step will fail. Because ipaddress was originally pulled in as a dependency of other packages, pip cannot remove or upgrade it, so it has to be deleted manually and then the latest version installed:

    [root@kolla ~]# cd /usr/lib/python2.7/site-packages/
    [root@kolla site-packages]# rm -rf ipaddress*
    [root@kolla site-packages]# pip install ipaddress
    

    Install the OpenStack CLI clients:

    [root@kolla site-packages]# pip install python-openstackclient python-glanceclient python-neutronclient
    

    Set the environment variables:

    [root@kolla site-packages]# . /etc/kolla/admin-openrc.sh 
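    
    A quick sanity check that the credentials and clients work (a sketch):
    
    [root@kolla site-packages]# openstack service list
    [root@kolla site-packages]# openstack endpoint list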
    

    Edit the network settings in the init script:

    [root@kolla ~]# vim /usr/share/kolla-ansible/init-runonce
    EXT_NET_CIDR='10.132.226.0/24'
    EXT_NET_RANGE='start=10.132.226.130,end=10.132.226.169'
    EXT_NET_GATEWAY='10.132.226.254'
    

    Run the init script:

    [root@kolla ~]# . /usr/share/kolla-ansible/init-runonce
    Checking for locally available cirros image.
    None found, downloading cirros image.
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
    100 12.1M  100 12.1M    0     0  2040k      0  0:00:06  0:00:06 --:--:-- 2716k
    Creating glance image.
    ······
    Done.
    
    To deploy a demo instance, run:
    
    openstack server create \
        --image cirros \
        --flavor m1.tiny \
        --key-name mykey \
        --nic net-id=89a1f674-e89f-4e6d-b96d-2875446adc1e \
        demo1
    
  • Integrating Kolla with an external Ceph cluster


    I. Overview

    In real-world deliveries you may, for various reasons, not want kolla to deploy Ceph but instead integrate an existing Ceph cluster, which is a perfectly reasonable requirement. In that case, part of the kolla configuration needs to be adjusted.

    II. Environment preparation

    1. An existing Ceph cluster (for Ceph installation and deployment see http://blog.csdn.net/dylloveyou/article/details/79054120)

    2. The pools needed by OpenStack have already been created; after integration, each project's data is stored in its corresponding pool (creation commands are sketched after this list):

    • cinder-volumes (Cinder-volume)
    • glance-images (Glance)
    • cinder-backups (Cinder-Backup)
    • nova-vms (Nova)
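
    A sketch of creating these pools on the Ceph cluster (the pg_num values are assumptions; size them for your cluster):

    ceph osd pool create glance-images 128
    ceph osd pool create cinder-volumes 128
    ceph osd pool create cinder-backups 128
    ceph osd pool create nova-vms 128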

    III. Enable external Ceph

    Using external Ceph means Ceph is not deployed by kolla, so the Ceph component must be disabled in the global configuration. Edit /etc/kolla/globals.yml as follows:

    enable_ceph: "no"
    
    glance_backend_ceph: "yes"
    cinder_backend_ceph: "yes"
    nova_backend_ceph: "yes"

    Setting enable_ceph: "no" together with <service>_backend_ceph: "yes" is what triggers kolla's external Ceph integration.

    IV. Configure Kolla

    Glance

    1. Configure the RBD backend in glance-api.conf

    Edit the /etc/kolla/config/glance/glance-api.conf file and add the following:

    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = glance-images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf

    Kolla automatically merges this configuration into Glance's configuration file.

    2. Copy the Ceph cluster configuration file (/etc/ceph/ceph.conf) to /etc/kolla/config/glance/ceph.conf

    cat /etc/kolla/config/glance/ceph.conf
    
    [global]
    fsid = 9c424511-ade9-45e3-be88-24d72232dd7a
    mon_initial_members = ceph-node01, ceph-node02, ceph-node03
    mon_host = 11.10.37.85,11.10.37.86,11.10.37.87
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
    
    osd_journal_size = 10000
    osd_pool_default_size = 2
    
    osd_pool_default_pg_num = 512
    osd_pool_default_pgp_num = 512
    rbd_default_features = 3

    3. Generate the ceph.client.glance.keyring file and save it in the /etc/kolla/config/glance directory

    Run the following command on a Ceph mon node:

    ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=glance-images'

    This generates the keyring for glance; save the command output as /etc/kolla/config/glance/ceph.client.glance.keyring

    cat /etc/kolla/config/glance/ceph.client.glance.keyring
    [client.glance]
        key = AQD6gVRasWreLRAAPSlTc1LPIayGjPtvuK1FCw==
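
    As a convenience, ceph auth get-or-create can also write the keyring straight to a file with -o, if you run it on a host that has both Ceph admin access and the kolla config directory (a sketch):

    ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=glance-images' -o /etc/kolla/config/glance/ceph.client.glance.keyring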

    Kolla copies every file named ceph* into the /etc/ceph directory of the corresponding container.

    Cinder

    1. Edit /etc/kolla/config/cinder/cinder-volume.conf and add the following:

    [DEFAULT]
    enabled_backends=rbd-1
    
    [rbd-1]
    rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=cinder
    backend_host=storage01
    rbd_pool=cinder-volumes
    volume_backend_name=rbd-1
    volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_secret_uuid = {{ cinder_rbd_secret_uuid}}

    Note: cinder_rbd_secret_uuid is defined in /etc/kolla/passwords.yml and is used to work around a bug with booting VMs from volumes.
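
    To check the value that will be injected (a sketch):

    grep cinder_rbd_secret_uuid /etc/kolla/passwords.yml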

    2. Edit /etc/kolla/config/cinder/cinder-backup.conf and add the following:

    [DEFAULT]
    backup_ceph_conf=/etc/ceph/ceph.conf
    backup_ceph_user=cinder-backup
    backup_ceph_chunk_size = 134217728
    backup_ceph_pool=cinder-backups
    backup_driver = cinder.backup.drivers.ceph
    backup_ceph_stripe_unit = 0
    backup_ceph_stripe_count = 0
    restore_discard_excess_bytes = true

    3. Copy the Ceph configuration file (/etc/ceph/ceph.conf) to /etc/kolla/config/cinder/ceph.conf

    4. 生成 ceph.client.cinder.keyring 文件

    在 ceph mon 节点运行:

    ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder-volumes, allow rwx pool=nova-vms, allow rx pool=glance-images'

    为 cinder 生成 cephx keyring,并将回显结果分别保存为 /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring 和 /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder.keyring

    在ceph mon节点继续运行:

    ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=cinder-backups'

    为 cinder-backup 生成 cephx keyring,将回显结果保存在 /etc/kolla/config/cinder/cinder-backup/ceph.client.cinder-backup.keyring

    注: cinder-backup 需要两个 keyring 去连接 cinder-volumes 和 cinder-backups pool
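
    整理完成后,/etc/kolla/config/cinder 目录的布局大致如下(仅为示意,具体以实际环境为准):

    /etc/kolla/config/cinder/
    ├── ceph.conf
    ├── cinder-volume.conf
    ├── cinder-backup.conf
    ├── cinder-volume/
    │   └── ceph.client.cinder.keyring
    └── cinder-backup/
        ├── ceph.client.cinder.keyring
        └── ceph.client.cinder-backup.keyring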

    Nova

    1. 编辑 /etc/kolla/config/nova/nova-compute.conf ,配置如下内容:

    [libvirt]
    images_rbd_pool=nova-vms
    images_type=rbd
    images_rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=nova

    2. 生成 ceph.client.nova.keyring 文件

    ceph auth get-or-create client.nova mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=nova-vms'

    拷贝ceph.conf, nova client keyring, 和cinder client keyring 到 /etc/kolla/config/nova
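
    拷贝动作可以参考下面的命令(假设两个keyring文件已保存在当前目录):

    cp /etc/ceph/ceph.conf /etc/kolla/config/nova/
    cp ceph.client.nova.keyring ceph.client.cinder.keyring /etc/kolla/config/nova/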

    ls /etc/kolla/config/nova
    ceph.client.cinder.keyring  ceph.client.nova.keyring  ceph.conf  nova-compute.conf

    五、部署

    kolla使用外接ceph,意味着没有储存节点,而默认情况下cinder-volume和cinder-backup运行在存储节点,外接ceph存储需要指定主机去运行cinder-volume和cinder-backup容器。

    编辑multinode文件,修改配置如下(这里把cinder-volume和cinder-backup安装到controller节点):

    [storage]
    controller01

    执行kolla-ansible命令,完成部署

    kolla-ansible -i /home/multinode deploy
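
    部署完成后可以做一次最小化验证,确认cinder-volume/cinder-backup跑在controller节点、并且卷确实落到了ceph的pool里(命令仅为示意,需要先用post-deploy生成并source admin-openrc.sh):

    kolla-ansible post-deploy
    source /etc/kolla/admin-openrc.sh
    openstack volume service list          # 查看cinder-volume/cinder-backup所在主机
    openstack volume create --size 1 test-rbd-vol
    rbd ls cinder-volumes                  # 在ceph侧(mon节点)确认卷对象已生成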

    参考自 九州云 微信公众号。

  • openstack 环境部署

    2017-08-15 13:37:47
    22.1 了解云计算

    人类基于千年的物种衍变基础,在这个世纪终于有了爆发式的科技成果,尤其这二十年内互联网的发展,更像是一种催化剂,让原本已经热闹的地球更加的沸腾,互联网经济泡沫破灭后的科技研发却变得更加卖力,一次次的突破着传统研究中对人类脑力、科技最终式的定义,把“来自未来”的产品带到用户面前,那么到底互联网未来会变成什么样子,人类最终的归宿会是怎么样,我们不得而知,但可以肯定的是科技研发一直是由人类需求来驱动的。

    众所周知Google谷歌是一家致力于互联网搜索、云计算、广告技术等领域的科技企业,一直在努力为全球无数的用户提供着大量基于互联网的产品与服务,而Amazon亚马逊则是全美国最大的网络电子商务公司,销售内容涉及方方面面,业务范围更是遍布全球,对于这种互联网巨头企业自然少不了庞大的基础设施的支撑,但是传统的硬件设施一旦投入就要一大笔钱,并且在业务的淡季也要一直的空闲,这样无疑产生了资源和资金的巨大浪费,所以最初的云计算便是由Google与Amazon分别提出的,核心理念之一就是通过云计算服务降低用户对资源拥有的成本。

    当用户能够通过互联网方便的获取到计算、存储等服务时,我们比喻自己使用到了“云计算”,云计算并不能被称为是一种计算技术,而更像是一种服务模式,云计算服务好像拥有无穷的力量,能够预测气候变化、还能够模拟核弹爆炸,好像只要你需要,“云”就可以为你提供每秒万亿次的计算服务,满足你的一切需求,每个运维人员心里都有一个对云计算的理解,而最普遍接受的是NIST(美国国家标准与技术研究院)的定义:

    云计算是一种按使用量付费的服务模式,这是一种能够提供可用的、便捷的、按需求的网络访问模式,计算共享池能够快速的为用户提供网络、服务器、存储、应用软件及其他服务,并且只需要花费很少的管理时间。

    NIST还针对于云计算的服务模式提出了3个服务层次:
    IaaS:提供给用户的是云计算基础设施,包括CPU、内存、存储、网络等资源服务,用户不需要控制存储与网络等基础设施。
    PaaS:提供给用户的是云计算中的开发和分发应用的解决方案,用户能够部署应用程序,也可以控制相关的托管环境,比如云服务器及操作系统,但用户不需要接触到云计算中的基础设施。
    SaaS:提供给用户的是云计算基础设施上的应用程序,用户只需要在客户端界面访问即可使用到所需资源,而接触不到云计算的基础设施。

    云计算服务类型示意图(原图略)
    22.2 Openstack项目

    Openstack最初是由NASA和Rackspace共同发起的云端计算服务项目,该项目以Apache许可证授权的方式成为了一款开源产品,目的是将多个组件整合后从而实现一个开源的云计算平台,目前Openstack项目正在被红帽、IBM、AMD、Intel、戴尔、思科、微软等超过一百家厂商共同研发,并已经支持了几乎所有的常见云计算环境,拥有了良好的可扩展性,而且部署搭建Openstack服务也变得十分简单,目前国内对于云计算的需求也逐渐增加,华胜天成、高德地图、京东、阿里巴巴、百度、中兴、华为等中国企业也加入到了Openstack项目研发当中,Openstack项目也正在随着全球内得到了众多厂商的参与支持而快速成熟。


    Open是开放,Stack则是堆砌之意,合起来就是将众多的功能服务堆积起来的集合,让人们通过Openstack云计算项目,能够将诸如计算能力、存储、网络和软件等资源抽象成服务,以便让用户可以通过互联网远程来享用,付费的形式也变得因需而定,调整方便,拥有极强的虚拟可扩展性,是公共和私有云的建设与管理软件中的优秀开源项目。

    Openstack作为一个云平台的管理项目,其功能组件覆盖了网络、虚拟化、操作系统、服务器等多个方面,每个功能组件交由不同的项目委员会来研发和管理,目前核心的项目包括有:
    功能 项目名称 描述
    计算服务 Nova 负责虚拟机的创建、开关机、挂起、迁移、调整CPU、内存等规则。
    对象存储 Swift 用于在大规模可扩展系统中通过内置的冗余及高容差机制实现对象存储的系统。
    镜像服务 Glance 用于创建、上传、删除、编辑镜像信息的虚拟机镜像查找及索引系统。
    身份服务 Keystone 为其他的功能服务提供身份验证、服务规则及服务令牌的功能。
    网络管理 Neutron 用于为其他服务提供云计算的网络虚拟化技术,可自定义各种网络规则,支持主流的网络厂商技术。
    块存储 Cinder 为虚拟机实例提供稳定的数据块存储的创建、删除、挂载、卸载、管理等服务。
    图形界面 Horizon 为用户提供简单易用的Web管理界面,降低用户对功能服务的操作难度。
    测量服务 Ceilometer 收集项目内所有的事件,用于监控、计费或为其他服务提供数据支撑。
    部署编排 Heat 实现通过模板方式进行自动化的资源环境部署服务。
    数据库服务 Trove 为用户提供可扩展的关系或非关系性数据库服务。

    Openstack项目的版本按照ABCDEFG……的顺序发布,每6个月更新一次,Openstack版本发布历史:
    版本名称 发布时间
    Liberty 2015年10月15日
    Kilo 2015年4月30日
    Juno 2014年10月16日
    Icehouse 2014年4月17日
    Havana 2013年10月17日
    Grizzly 2013年4月4日
    Folsom 2012年9月27日
    Essex 2012年4月5日
    Diablo 2011年9月22日
    Cactus 2011年4月15日
    Bexar 2011年2月3日
    Austin 2010年10月21日

    开源社区成员和Linux技术爱好者可以选择使用Openstack RDO版本,RDO版本允许用户以免费授权的方式来获取openstack软件的使用资格,但是从安装开始便较为复杂(需要自行解决诸多的软件依赖关系),而且没有官方给予的保障及售后服务,请读者们仔细的按实验步骤安装,就一定没有问题的~
    22.3 服务模块组件详解

    Openstack是一个云计算的平台,也像是部署云操作系统的工具集,可以通过调取不同的组件来构建虚拟计算及云计算服务,比较重要的包括有计算(compute)、对象存储(Objectstorage)、认证(Identity)、仪表板(Dashboard)、块存储(Block Storage)、网络(Network)和镜像服务(image service),Openstack服务组件协同工作拓扑:


    Nova提供计算服务

    Nova可以称作是Openstack云计算平台中最核心的服务组件了,它作为计算的弹性控制器来管理虚拟化、网络及存储等资源,为Openstack的云主机实例提供可靠的支撑,其功能由不同的API来提供。

    Nova-api(API服务器):

    API服务器用于提供云计算设施与外界交互的接口,也是用户对云计算设施进行管理的唯一通道,用户通过网页来调用各种API接口,再由API服务器通过消息队列把请求传递至目标设置进行处理。

    Rabbit MQ Server(消息队列):

    Openstack在遵循AMQP高级消息队列协议的基础之上采用了消息队列进行通信,异步通信的方式更能减少用户的等待时间,让整个平台变得更有效率。

    Nova-compute(运算工作站):

    运算工作站通过消息队列接收用户的请求并执行,负责处理主机实例整个生命周期中的各种操作,一般会架设多台计算工作站,并根据调度算法将实例部署到其中任意一个计算工作站上。

    Nova-network(网络控制器):

    用于处理主机的网络配置,例如分配IP地址,配置项目VLAN,设定安全群组及为计算节点配置网络。

    Nova-Volume(卷工作站):

    基于LVM的实例卷能够为一个主机实例创建、删除、附加卷或从主机中分离卷。

    Nova-scheduler(调度器)

    调度器以名为"nova-scheduler"的守护进程方式运行,根据对比CPU架构及负载、内存占用率、子节点的远近等因素,使用调度算法从可用的资源池中选择运算服务器。

    Glance提供镜像服务

    Openstack镜像服务是一套用于主机实例来发现、注册、索引的系统,功能相比较也很简单,具有基于组件的架构、高可用、容错性、开发标准等优良特性,虚拟机的镜像可以被放置到多种存储上。

    Swift提供存储服务

    Swift模块是一种分布式、持续虚拟对象存储,具有跨节点百级对象的存储能力,并且支持内建冗余和失效备援的功能,同时还能够处理数据归档和媒体流,对于超大数据和多对象数量非常高效。
    Swift代理服务器:

    用户通过Swift-API与代理服务器进行交互,代理服务器能够检查实例位置并路由相关的请求,当实例失效或被转移后则自动进行故障切换,减少重复的路由请求。

    Swift对象服务器:

    用于处理本地存储中对象数据的存储、索引和删除操作。

    Swift容器服务器:

    用于统计容器内包含的对象数量及容量存储空间使用率,默认对象列表将存储为SQLite或者MYSQL文件。

    Swift帐户服务器:

    与容器服务器类似,用于列出帐户中包含的容器。

    Ring索引环:

    用于记录Swift中物理存储对象位置的信息,作为真实物理存储位置的虚拟映射,能够查找及定位不同集群中实体的真实物理位置,上述的代理、对象、容器、帐户服务都拥有自己的Ring索引环。

    Keystone提供认证服务
    Keystone模块依赖于自身的Identity API系统,基于对动作消息来源者请求合法性的判断,为Openstack中的Swift、Glance、Nova等各个组件提供认证和访问策略服务。

    Horizon提供管理服务

    Horizon是一个用于管理、控制Openstack云计算平台服务器的Web控制面板,用户能够在网页中管理主机实例、镜像、创建密钥对、管理实例卷、操作Swift容器等操作。

    Quantum提供网络服务

    重要的网络管理组件。

    Cinder提供存储管理服务

    用于管理主机实例中的存储资源。

    Heat提供软件部署服务

    用于在主机实例创建后简化配置操作。

     

    22.4 安装Openstack软件

    此刻我写这段话的时候,Openstack Liberty版本刚刚发布几周,企业中的生产环境会以稳定性为核心标准,所以还需要较长一段时间才能接受并正式使用这个新版本的产品。为了能够让读者学完即用,本篇内容会以Juno版本来做实验。为了让云计算平台发挥出最好的性能,我们需要开启虚拟机的虚拟化功能,内存至少为4GB(推荐8GB以上),并添加额外的一块硬盘(20GB以上)。

    主机名称 IP地址/子网 DNS地址
    openstack.linuxprobe.com 192.168.10.10/24 192.168.10.10


    设置服务器的主机名称:

    [root@openstack ~]# vim /etc/hostname
    openstack.linuxprobe.com
    

    使用vim编辑器写入主机名(域名)与IP地址的映射文件:

    [root@openstack ~]# vim /etc/hosts
    127.0.0.1      localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1            localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.10.10  openstack.linuxprobe.com openstack
    

    将服务器网卡IP地址配置成"192.168.10.10"后测试主机连通状态:

    [root@openstack ~]# ping $HOSTNAME
    PING openstack.linuxprobe.com (192.168.10.10) 56(84) bytes of data.
    64 bytes from openstack.linuxprobe.com (192.168.10.10): icmp_seq=1 ttl=64 time=0.099 ms
    64 bytes from openstack.linuxprobe.com (192.168.10.10): icmp_seq=2 ttl=64 time=0.107 ms
    64 bytes from openstack.linuxprobe.com (192.168.10.10): icmp_seq=3 ttl=64 time=0.070 ms
    64 bytes from openstack.linuxprobe.com (192.168.10.10): icmp_seq=4 ttl=64 time=0.075 ms
    ^C
    --- openstack.linuxprobe.com ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3001ms
    rtt min/avg/max/mdev = 0.070/0.087/0.107/0.019 ms
    

    创建系统镜像的挂载目录:

    [root@openstack ~]# mkdir -p /media/cdrom
    

    写入镜像与挂载点的信息:

    [root@openstack ~]# vim /etc/fstab
    # HEADER: This file was autogenerated at 2016-01-28 00:57:19 +0800
    # HEADER: by puppet.  While it can still be managed manually, it
    # HEADER: is definitely not recommended.
    
    #
    # /etc/fstab
    # Created by anaconda on Wed Jan 27 15:24:00 2016
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/rhel-root   /       xfs     defaults        1       1
    UUID=c738dff6-b025-4333-9673-61b10eaf2268       /boot   xfs     defaults        1       2
    /dev/mapper/rhel-swap   swap    swap    defaults        0       0
    /dev/cdrom      /media/cdrom    iso9660 defaults        0       0
    

    挂载系统镜像设备:

    [root@openstack ~]# mount -a
    mount: /dev/sr0 is write-protected, mounting read-only
    

    写入基本的yum仓库配置信息:

    [root@openstack ~]# vim /etc/yum.repos.d/rhel.repo
    [base]
    name=base
    baseurl=file:///media/cdrom
    enabled=1
    gpgcheck=0
    

    您可以从下面的地址下载EPEL仓库源以及Openstack-juno的软件安装包,并上传至服务器的/media目录中:

    软件资源下载地址:http://www.linuxprobe.com/tools/

    Openstack Juno——云计算平台软件

    Openstack云计算软件能够将诸如计算能力、存储、网络和软件等资源抽象成服务,以便让用户可以通过互联网远程来享用,付费的形式也变得因需而定,拥有极强的虚拟可扩展性。

    EPEL——系统的软件源仓库

    EPEL(Extra Packages for Enterprise Linux)是为企业版Linux额外提供的软件仓库,收录了系统默认源中不提供的软件安装包。

    Cirros——精简的操作系统

    Cirros是一款极为精简的操作系统,一般作为测试镜像上传到Openstack服务平台中使用。

    [root@openstack ~]# cd /media
    [root@openstack media]# ls
    cdrom epel.tar.bz2 openstack-juno.tar.bz2
    

    分别解压文件:

    [root@openstack media]# tar xjf epel.tar.bz2
    [root@openstack media]# tar xjf openstack-juno.tar.bz2
    

    分别写入EPEL与openstack的yum仓库源信息:

    [root@openstack media]# vim /etc/yum.repos.d/openstack.repo
    [openstack]
    name=openstack
    baseurl=file:///media/openstack-juno
    enabled=1
    gpgcheck=0
    [root@openstack media]# vim /etc/yum.repos.d/epel.repo
    [epel]
    name=epel
    baseurl=file:///media/EPEL
    enabled=1
    gpgcheck=0
    

    将/dev/sdb创建成逻辑卷,卷组名称为cinder-volumes:

    [root@openstack media]# pvcreate /dev/sdb
    Physical volume "/dev/sdb" successfully created
    [root@openstack media]# vgcreate cinder-volumes /dev/sdb
    Volume group "cinder-volumes" successfully created
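
    可以顺手确认一下物理卷和卷组的状态(仅为验证示例):

    [root@openstack media]# pvs /dev/sdb
    [root@openstack media]# vgs cinder-volumes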
    

    重启系统:

    [root@openstack media]# reboot
    

    安装Openstack的应答文件:

    [root@openstack ~]# yum install openstack-packstack
    ………………省略部分安装过程………………
    Installing:
    openstack-packstack noarch 2014.2-0.4.dev1266.g63d9c50.el7.centos openstack 210 k
    Installing for dependencies:
    libyaml x86_64 0.1.4-10.el7 base 55 k
    openstack-packstack-puppet noarch 2014.2-0.4.dev1266.g63d9c50.el7.centos openstack 43 k
    openstack-puppet-modules noarch 2014.2.1-0.5.el7.centos openstack 1.3 M
    perl x86_64 4:5.16.3-283.el7 base 8.0 M
    perl-Carp noarch 1.26-244.el7 base 19 k
    perl-Encode x86_64 2.51-7.el7 base 1.5 M
    perl-Exporter noarch 5.68-3.el7 base 28 k
    perl-File-Path noarch 2.09-2.el7 base 27 k
    perl-File-Temp noarch 0.23.01-3.el7 base 56 k
    perl-Filter x86_64 1.49-3.el7 base 76 k
    perl-Getopt-Long noarch 2.40-2.el7 base 56 k
    perl-HTTP-Tiny noarch 0.033-3.el7 base 38 k
    perl-PathTools x86_64 3.40-5.el7 base 83 k
    perl-Pod-Escapes noarch 1:1.04-283.el7 base 50 k
    perl-Pod-Perldoc noarch 3.20-4.el7 base 87 k
    perl-Pod-Simple noarch 1:3.28-4.el7 base 216 k
    perl-Pod-Usage noarch 1.63-3.el7 base 27 k
    perl-Scalar-List-Utils x86_64 1.27-248.el7 base 36 k
    perl-Socket x86_64 2.010-3.el7 base 49 k
    perl-Storable x86_64 2.45-3.el7 base 77 k
    perl-Text-ParseWords noarch 3.29-4.el7 base 14 k
    perl-Time-Local noarch 1.2300-2.el7 base 24 k
    perl-constant noarch 1.27-2.el7 base 19 k
    perl-libs x86_64 4:5.16.3-283.el7 base 686 k
    perl-macros x86_64 4:5.16.3-283.el7 base 42 k
    perl-parent noarch 1:0.225-244.el7 base 12 k
    perl-podlators noarch 2.5.1-3.el7 base 112 k
    perl-threads x86_64 1.87-4.el7 base 49 k
    perl-threads-shared x86_64 1.43-6.el7 base 39 k
    python-netaddr noarch 0.7.12-1.el7.centos openstack 1.3 M
    ruby x86_64 2.0.0.353-20.el7 base 66 k
    ruby-irb noarch 2.0.0.353-20.el7 base 87 k
    ruby-libs x86_64 2.0.0.353-20.el7 base 2.8 M
    rubygem-bigdecimal x86_64 1.2.0-20.el7 base 78 k
    rubygem-io-console x86_64 0.4.2-20.el7 base 49 k
    rubygem-json x86_64 1.7.7-20.el7 base 74 k
    rubygem-psych x86_64 2.0.0-20.el7 base 76 k
    rubygem-rdoc noarch 4.0.0-20.el7 base 317 k
    rubygems noarch 2.0.14-20.el7 base 211 k
    ………………省略部分安装过程………………
    Complete!
    

    安装openstack服务程序:

    [root@openstack ~]# packstack --allinone --provision-demo=n --nagios-install=n
    Welcome to Installer setup utility
    Packstack changed given value to required value /root/.ssh/id_rsa.pub
    Installing:
    Clean Up [ DONE ]
    Setting up ssh keys [ DONE ]
    Discovering hosts' details [ DONE ]
    Adding pre install manifest entries [ DONE ]
    Preparing servers [ DONE ]
    Adding AMQP manifest entries [ DONE ]
    Adding MySQL manifest entries [ DONE ]
    Adding Keystone manifest entries [ DONE ]
    Adding Glance Keystone manifest entries [ DONE ]
    Adding Glance manifest entries [ DONE ]
    Adding Cinder Keystone manifest entries [ DONE ]
    Adding Cinder manifest entries [ DONE ]
    Checking if the Cinder server has a cinder-volumes vg[ DONE ]
    Adding Nova API manifest entries [ DONE ]
    Adding Nova Keystone manifest entries [ DONE ]
    Adding Nova Cert manifest entries [ DONE ]
    Adding Nova Conductor manifest entries [ DONE ]
    Creating ssh keys for Nova migration [ DONE ]
    Gathering ssh host keys for Nova migration [ DONE ]
    Adding Nova Compute manifest entries [ DONE ]
    Adding Nova Scheduler manifest entries [ DONE ]
    Adding Nova VNC Proxy manifest entries [ DONE ]
    Adding Openstack Network-related Nova manifest entries[ DONE ]
    Adding Nova Common manifest entries [ DONE ]
    Adding Neutron API manifest entries [ DONE ]
    Adding Neutron Keystone manifest entries [ DONE ]
    Adding Neutron L3 manifest entries [ DONE ]
    Adding Neutron L2 Agent manifest entries [ DONE ]
    Adding Neutron DHCP Agent manifest entries [ DONE ]
    Adding Neutron LBaaS Agent manifest entries [ DONE ]
    Adding Neutron Metering Agent manifest entries [ DONE ]
    Adding Neutron Metadata Agent manifest entries [ DONE ]
    Checking if NetworkManager is enabled and running [ DONE ]
    Adding OpenStack Client manifest entries [ DONE ]
    Adding Horizon manifest entries [ DONE ]
    Adding Swift Keystone manifest entries [ DONE ]
    Adding Swift builder manifest entries [ DONE ]
    Adding Swift proxy manifest entries [ DONE ]
    Adding Swift storage manifest entries [ DONE ]
    Adding Swift common manifest entries [ DONE ]
    Adding MongoDB manifest entries [ DONE ]
    Adding Ceilometer manifest entries [ DONE ]
    Adding Ceilometer Keystone manifest entries [ DONE ]
    Adding post install manifest entries [ DONE ]
    Installing Dependencies [ DONE ]
    Copying Puppet modules and manifests [ DONE ]
    Applying 192.168.10.10_prescript.pp
    192.168.10.10_prescript.pp: [ DONE ]
    Applying 192.168.10.10_amqp.pp
    Applying 192.168.10.10_mysql.pp
    192.168.10.10_amqp.pp: [ DONE ]
    192.168.10.10_mysql.pp: [ DONE ]
    Applying 192.168.10.10_keystone.pp
    Applying 192.168.10.10_glance.pp
    Applying 192.168.10.10_cinder.pp
    192.168.10.10_keystone.pp: [ DONE ]
    192.168.10.10_cinder.pp: [ DONE ]
    192.168.10.10_glance.pp: [ DONE ]
    Applying 192.168.10.10_api_nova.pp
    192.168.10.10_api_nova.pp: [ DONE ]
    Applying 192.168.10.10_nova.pp
    192.168.10.10_nova.pp: [ DONE ]
    Applying 192.168.10.10_neutron.pp
    192.168.10.10_neutron.pp: [ DONE ]
    Applying 192.168.10.10_neutron_fwaas.pp
    Applying 192.168.10.10_osclient.pp
    Applying 192.168.10.10_horizon.pp
    192.168.10.10_neutron_fwaas.pp: [ DONE ]
    192.168.10.10_osclient.pp: [ DONE ]
    192.168.10.10_horizon.pp: [ DONE ]
    Applying 192.168.10.10_ring_swift.pp
    192.168.10.10_ring_swift.pp: [ DONE ]
    Applying 192.168.10.10_swift.pp
    192.168.10.10_swift.pp: [ DONE ]
    Applying 192.168.10.10_mongodb.pp
    192.168.10.10_mongodb.pp: [ DONE ]
    Applying 192.168.10.10_ceilometer.pp
    192.168.10.10_ceilometer.pp: [ DONE ]
    Applying 192.168.10.10_postscript.pp
    192.168.10.10_postscript.pp: [ DONE ]
    Applying Puppet manifests [ DONE ]
    Finalizing [ DONE ]
    
    **** Installation completed successfully ******
    Additional information:
    * A new answerfile was created in: /root/packstack-answers-20160128-004334.txt
    * Time synchronization installation was skipped. Please note that unsynchronized time on server instances might be problem for some OpenStack components.
    * Did not create a cinder volume group, one already existed
    * File /root/keystonerc_admin has been created on OpenStack client host 192.168.10.10. To use the command line tools you need to source the file.
    * To access the OpenStack Dashboard browse to http://192.168.10.10/dashboard .
    Please, find your login credentials stored in the keystonerc_admin in your home directory.
    * Because of the kernel update the host 192.168.10.10 requires reboot.
    * The installation log file is available at: /var/tmp/packstack/20160128-004334-tNBVhA/openstack-setup.log
    * The generated manifests are available at: /var/tmp/packstack/20160128-004334-tNBVhA/manifests
    

    创建云平台的网卡配置文件:

    [root@openstack ~]# vim /etc/sysconfig/network-scripts/ifcfg-br-ex
    DEVICE=br-ex
    IPADDR=192.168.10.10
    NETMASK=255.255.255.0
    BOOTPROTO=static
    DNS1=192.168.10.1
    GATEWAY=192.168.10.1
    BROADCAST=192.168.10.255
    NM_CONTROLLED=no
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=yes
    IPV6INIT=no
    ONBOOT=yes
    DEVICETYPE=ovs
    TYPE="OVSIntPort"
    OVS_BRIDGE=br-ex
    

    修改网卡参数信息为:

    [root@openstack ~]# vim /etc/sysconfig/network-scripts/ifcfg-eno16777728 
    DEVICE="eno16777728"
    ONBOOT=yes
    TYPE=OVSPort
    DEVICETYPE=ovs
    OVS_BRIDGE=br-ex
    NM_CONTROLLED=no
    IPV6INIT=no
    

    将网卡设备添加到OVS网络中:

    [root@openstack ~]# ovs-vsctl add-port br-ex eno16777728 
    [root@openstack ~]# ovs-vsctl show
    55501ff1-856c-46f1-8a00-5c61e48bb64d
        Bridge br-ex
            Port br-ex
                Interface br-ex
                    type: internal
            Port "eno16777728"
                Interface "eno16777728"
        Bridge br-int
            fail_mode: secure
            Port br-int
                Interface br-int
                    type: internal
            Port patch-tun
                Interface patch-tun
                    type: patch
                    options: {peer=patch-int}
        Bridge br-tun
            Port patch-int
                Interface patch-int
                    type: patch
                    options: {peer=patch-tun}
            Port br-tun
                Interface br-tun
                    type: internal
        ovs_version: "2.1.3"
    

    重启系统让网络设备同步:

    [root@openstack ~]# reboot

    执行身份认证脚本:

    [root@openstack ~]# source keystonerc_admin
    [root@openstack ~(keystone_admin)]# openstack-status
    == Nova services ==
    openstack-nova-api: active
    openstack-nova-cert: active
    openstack-nova-compute: active
    openstack-nova-network: inactive (disabled on boot)
    openstack-nova-scheduler: active
    openstack-nova-volume: inactive (disabled on boot)
    openstack-nova-conductor: active
    == Glance services ==
    openstack-glance-api: active
    openstack-glance-registry: active
    == Keystone service ==
    openstack-keystone: active
    == Horizon service ==
    openstack-dashboard: active
    == neutron services ==
    neutron-server: active
    neutron-dhcp-agent: active
    neutron-l3-agent: active
    neutron-metadata-agent: active
    neutron-lbaas-agent: inactive (disabled on boot)
    neutron-openvswitch-agent: active
    neutron-linuxbridge-agent: inactive (disabled on boot)
    neutron-ryu-agent: inactive (disabled on boot)
    neutron-nec-agent: inactive (disabled on boot)
    neutron-mlnx-agent: inactive (disabled on boot)
    == Swift services ==
    openstack-swift-proxy: active
    openstack-swift-account: active
    openstack-swift-container: active
    openstack-swift-object: active
    == Cinder services ==
    openstack-cinder-api: active
    openstack-cinder-scheduler: active
    openstack-cinder-volume: active
    openstack-cinder-backup: active
    == Ceilometer services ==
    openstack-ceilometer-api: active
    openstack-ceilometer-central: active
    openstack-ceilometer-compute: active
    openstack-ceilometer-collector: active
    openstack-ceilometer-alarm-notifier: active
    openstack-ceilometer-alarm-evaluator: active
    == Support services ==
    libvirtd: active
    openvswitch: active
    dbus: active
    tgtd: inactive (disabled on boot)
    rabbitmq-server: active
    memcached: active
    == Keystone users ==
    +----------------------------------+------------+---------+----------------------+
    | id | name | enabled | email |
    +----------------------------------+------------+---------+----------------------+
    | 7f1f43a0002e4fb9a04b9b1480294e08   | admin        | True | test@test.com             |
    | c7570a0d3e264f0191d8108359100cdd  | ceilometer | True | ceilometer@localhost |
    | 9d3d1b46599341638771c33bcebe17fc    | cinder         | True | cinder@localhost        |
    | 52a803edcc4e479ea147e69ca2966f46    | glance         | True | glance@localhost        |
    | 8b0bcd19b11f49059bc100d260f39d50  | neutron      | True | neutron@localhost     |
    | 953e01b228ef480db551dd05d43eb6d1 | nova            | True | nova@localhost          |
    | 16ced2f73c034e58a0951e46f22eddc8    | swift            | True | swift@localhost          |
    +----------------------------------+------------+---------+----------------------+
    == Glance images ==
    +----+------+-------------+------------------+------+--------+
    | ID | Name | Disk Format | Container Format | Size | Status |
    +----+------+-------------+------------------+------+--------+
    +----+------+-------------+------------------+------+--------+
    == Nova managed services ==
    +----+------------------+--------------------------+----------+---------+-------+----------------------------+-----------------+
    | Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
    +----+------------------+--------------------------+----------+---------+-------+----------------------------+-----------------+
    | 1 | nova-consoleauth   | openstack.linuxprobe.com | internal | enabled | up | 2016-01-29T04:36:20.000000 | - |
    | 2 | nova-scheduler      | openstack.linuxprobe.com | internal | enabled | up | 2016-01-29T04:36:20.000000 | - |
    | 3 | nova-conductor     | openstack.linuxprobe.com | internal | enabled  | up | 2016-01-29T04:36:20.000000 | - |
    | 4 | nova-compute       | openstack.linuxprobe.com | nova      | enabled  | up | 2016-01-29T04:36:16.000000 | - |
    | 5 | nova-cert           | openstack.linuxprobe.com | internal | enabled  | up | 2016-01-29T04:36:20.000000 | - |
    +----+------------------+--------------------------+----------+---------+-------+----------------------------+-----------------+
    == Nova networks ==
    +----+-------+------+
    | ID | Label | Cidr |
    +----+-------+------+
    +----+-------+------+
    == Nova instance flavors ==
    +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
    | ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
    +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
    | 1 | m1.tiny            | 512       | 1    | 0 | | 1 | 1.0 | True |
    | 2 | m1.small         | 2048    | 20  | 0 | | 1 | 1.0 | True |
    | 3 | m1.medium    | 4096    | 40  | 0 | | 2 | 1.0 | True |
    | 4 | m1.large          | 8192    | 80  | 0 | | 4 | 1.0 | True |
    | 5 | m1.xlarge        | 16384 | 160 | 0 | | 8 | 1.0 | True |
    +----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
    == Nova instances ==
    +----+------+--------+------------+-------------+----------+
    | ID | Name | Status | Task State | Power State | Networks |
    +----+------+--------+------------+-------------+----------+
    +----+------+--------+------------+-------------+----------+

    打开浏览器访问 http://192.168.10.10/dashboard,即可看到Openstack的登录页面(截图略)。
    查看登录的帐号密码:

    [root@openstack ~]# cat keystonerc_admin 
    export OS_USERNAME=admin
    export OS_TENANT_NAME=admin
    export OS_PASSWORD=14ad1e723132440c
    export OS_AUTH_URL=http://192.168.10.10:5000/v2.0/
    export PS1='[\u@\h \W(keystone_admin)]\$ '
    

    输入帐号密码后即可进入到Openstack管理中心(界面截图略)。

    22.5 使用Openstack服务
    22.5.1 配置虚拟网络

    要想让云平台中的虚拟实例机能够互相通信,并且让外部的用户访问到里面的数据,我们首先就必需配置好云平台中的网络环境。

    在Dashboard中依次完成以下网络配置操作(原文此处均配有截图,已省略):
    1. 创建网络并编辑网络配置;
    2. 点击创建子网,填写子网信息与子网详情(DHCP地址池中的IP地址用逗号间隔),确认子网详情;
    3. 创建私有网络:再次创建网络,填写网络信息并设置网络详情,然后查看网络信息;
    4. 添加路由信息,填写路由名称,并设置路由的网关;
    5. 在网络拓扑中为路由添加接口,填写接口信息;
    6. 查看路由的接口信息(需要等待几秒钟,内部接口的状态才会变成ACTIVE)。

    22.5.2 创建云主机类型

    我们可以预先设置多个云主机类型的模板,这样可以灵活的满足用户的需求,先来创建云主机类型:
    在Dashboard中创建云主机类型并填写云主机的基本信息,然后创建并上传镜像(原文此处配有截图,已省略)。

    Cirros是一款极为精简的Linux系统镜像,体积小巧、启动速度极快,一般在搭建好Openstack之后用来验证云计算平台的可用性。上传Cirros镜像后,即可在镜像列表中查看到它(由于体积很小,上传速度非常快)。

    22.5.3 创建主机实例

    在Dashboard中创建云主机实例:填写云主机的详情(云主机类型可以选择前面自定义创建的),查看访问与安全规则,将私有网络网卡添加到云主机,并确认安装后的脚本数据与磁盘的分区方式(原文此处均配有截图,已省略)。

    主机实例的孵化过程大约需要10-30秒,之后即可查看已经运行的实例,以及实例主机的网络拓扑(当前仅在内网中)。

    接下来为实例主机绑定浮动IP地址:为主机实例添加浮动IP,选择要绑定的IP地址,并将主机实例与该IP地址关联。此时再查看实例的信息,IP地址一栏就多出了一个浮动IP(192.168.10.51)。

    尝试从外部ping云主机实例(结果是失败的):

    [root@openstack ~]# ping 192.168.10.51
    PING 192.168.10.51 (192.168.10.51) 56(84) bytes of data.
    From 192.168.10.10 icmp_seq=1 Destination Host Unreachable
    From 192.168.10.10 icmp_seq=2 Destination Host Unreachable
    From 192.168.10.10 icmp_seq=3 Destination Host Unreachable
    From 192.168.10.10 icmp_seq=4 Destination Host Unreachable
    ^C
    --- 192.168.10.51 ping statistics ---
    6 packets transmitted, 0 received, +4 errors, 100% packet loss, time 5001ms
    pipe 4
    

    原因是我们还没有设置安全组规则,需要允许外部流量进入到主机实例中:在Dashboard中创建安全组,填写安全组的名称与描述;进入管理安全组规则的页面,添加安全规则,允许所有的ICMP数据包流入(根据实际业务有时还需要放行TCP或UDP协议,此处仅为验证网络连通性);最后编辑实例的安全组,将新建的安全组应用到主机实例上(原文此处均配有截图,已省略)。

    再次尝试从外部ping虚拟实例主机:

    [root@openstack ~]# ping 192.168.10.51
    PING 192.168.10.51 (192.168.10.51) 56(84) bytes of data.
    64 bytes from 192.168.10.51: icmp_seq=1 ttl=63 time=2.47 ms
    64 bytes from 192.168.10.51: icmp_seq=2 ttl=63 time=0.764 ms
    64 bytes from 192.168.10.51: icmp_seq=3 ttl=63 time=1.44 ms
    64 bytes from 192.168.10.51: icmp_seq=4 ttl=63 time=1.30 ms
    ^C
    --- 192.168.10.51 ping statistics ---
    4 packets transmitted, 4 received, 0% packet loss, time 3004ms
    rtt min/avg/max/mdev = 0.764/1.497/2.479/0.622 ms
    
    22.5.5 添加云硬盘

    云计算平台的特性就是能够灵活、弹性地调整主机实例使用的资源,我们可以为主机实例多挂载一块云硬盘:首先创建云硬盘设备并填写云硬盘的信息(以10GB为例),然后编辑挂载设备,将云硬盘挂载到主机实例中,最后在云主机实例中查看硬盘信息(原文此处均配有截图,已省略)。

    22.6 控制云主机实例

    经过上面的一系列配置,我们已经创建出了一台能够交付给用户使用的云主机实例,可以在Dashboard中查看云平台的概况。接着编辑安全策略,分别添加允许TCP和UDP协议的规则,让相应的数据流量能够进入云主机实例(原文截图略)。
    成功登录到云主机实例中(默认帐号为"cirros",密码为:"cubswin:)"):

    [root@openstack ~]# ssh cirros@192.168.10.52
    The authenticity of host '192.168.10.52 (192.168.10.52)' can't be established.
    RSA key fingerprint is 12:ef:c7:fb:57:70:fc:60:88:8c:96:13:38:b1:f6:65.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '192.168.10.52' (RSA) to the list of known hosts.
    cirros@192.168.10.52's password: 
    $
    

    查看云主机实例的网络情况:

    $ ip a 
    1: lo:  mtu 16436 qdisc noqueue 
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0:  mtu 1500 qdisc pfifo_fast qlen 1000
        link/ether fa:16:3e:4f:1c:97 brd ff:ff:ff:ff:ff:ff
        inet 10.10.10.51/24 brd 10.10.10.255 scope global eth0
        inet6 fe80::f816:3eff:fe4f:1c97/64 scope link 
           valid_lft forever preferred_lft forever
    

    挂载刚刚创建的云硬盘设备:

    $ df -h
    Filesystem                Size      Used Available Use% Mounted on
    /dev                    494.3M         0    494.3M   0% /dev
    /dev/vda1                23.2M     18.0M      4.0M  82% /
    tmpfs                   497.8M         0    497.8M   0% /dev/shm
    tmpfs                   200.0K     68.0K    132.0K  34% /run
    $ mkdir disk
    $ sudo mkfs.ext4 /dev/vdb
    mke2fs 1.42.2 (27-Mar-2012)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    655360 inodes, 2621440 blocks
    131072 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2684354560
    80 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks: 
    	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
    
    Allocating group tables: done                            
    Writing inode tables: done                            
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done 
    $ sudo mount /dev/vdb disk/
    $ df -h
    Filesystem                Size      Used Available Use% Mounted on
    /dev                    494.3M         0    494.3M   0% /dev
    /dev/vda1                23.2M     18.0M      4.0M  82% /
    tmpfs                   497.8M         0    497.8M   0% /dev/shm
    tmpfs                   200.0K     68.0K    132.0K  34% /run
    /dev/vdb                  9.8G    150.5M      9.2G   2% /home/cirros/disk
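
    如果希望实例重启后这块云硬盘仍然自动挂载,可以把挂载信息追加到实例内的/etc/fstab(是否生效取决于镜像本身的初始化方式,这里仅为一个示意):

    $ echo '/dev/vdb /home/cirros/disk ext4 defaults 0 0' | sudo tee -a /etc/fstab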
  • 用shell脚本一键安装openstack Swift对象存储服务(controller与compute节点)

    注意:接与上一篇博客内容 进行更新
    由于整个开源openstack安装过程过于繁琐,命令太长,太繁琐,于是把整个安装命令写成shell脚本。

    脚本数量内容过多,已经推送至我的github源码托管中心了。
    这是本篇脚本内容所在的github仓库位置
    controller节点 compute节点
    脚本内容介绍
    controller

    #!/bin/bash
    source /etc/xiandian/openrc.sh
    source /etc/keystone/admin-openrc.sh
    yum install openstack-swift-proxy python-swiftclient python-keystoneclient python-keystonemiddleware memcached -y
    
    openstack user create --domain $DOMAIN_NAME --password $SWIFT_PASS swift
    openstack role add --project service --user swift admin
    openstack service create --name swift --description "OpenStack Object Storage" object-store
    openstack endpoint create --region RegionOne object-store public http://$HOST_NAME:8080/v1/AUTH_%\(tenant_id\)s
    openstack endpoint create --region RegionOne object-store internal http://$HOST_NAME:8080/v1/AUTH_%\(tenant_id\)s
    openstack endpoint create --region RegionOne object-store admin http://$HOST_NAME:8080/v1
    
    cat <<EOF > /etc/swift/proxy-server.conf
    [DEFAULT]
    bind_port = 8080
    swift_dir = /etc/swift
    user = swift
    [pipeline:main]
    pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
    [app:proxy-server]
    use = egg:swift#proxy
    account_autocreate = True
    [filter:tempauth]
    use = egg:swift#tempauth
    user_admin_admin = admin .admin .reseller_admin
    user_test_tester = testing .admin
    user_test2_tester2 = testing2 .admin
    user_test_tester3 = testing3
    user_test5_tester5 = testing5 service
    [filter:authtoken]
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    auth_uri = http://$HOST_NAME:5000
    auth_url = http://$HOST_NAME:35357
    memcached_servers = $HOST_NAME:11211
    auth_type = password
    project_domain_name = $DOMAIN_NAME
    user_domain_name = $DOMAIN_NAME
    

    compute 节点脚本介绍

    #!/bin/bash
    source /etc/xiandian/openrc.sh
    yum install xfsprogs rsync openstack-swift-account openstack-swift-container openstack-swift-object -y
    mkfs.xfs -i size=1024 -f /dev/$OBJECT_DISK
    sed -i '/nodiratime/d' /etc/fstab
    echo "/dev/$OBJECT_DISK /swift/node xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
    mkdir -p /swift/node
    mount /dev/$OBJECT_DISK /swift/node
    scp $HOST_NAME:/etc/swift/*.ring.gz /etc/swift/
    
    cat <<EOF > /etc/rsyncd.conf
    pid file = /var/run/rsyncd.pid
    log file = /var/log/rsyncd.log
    uid = swift
    gid = swift
    address = 127.0.0.1
    [account]
    path            = /swift/node
    read only       = false
    write only      = no
    list            = yes
    incoming chmod  = 0644
    outgoing chmod  = 0644
    max connections = 25
    lock file =     /var/lock/account.lock
    [container]
    path            = /swift/node
    read only       = false
    write only      = no
    list            = yes
    incoming chmod  = 0644
    outgoing chmod  = 0644
    max connections = 25
    lock file =     /var/lock/container.lock
    [object]
    path            = /swift/node
    read only       = false
    write only      = no
    

    8 安装Swift对象存储服务
    首先一定要source一下,使我们的环境变量生效,然后再去执行脚本或者去熟悉我们的配置文件
    #Controller节点
    #source admin-openrc.sh

    8.1通过脚本安装Swift服务

    8.2-8.12对象存储服务的操作命令已经编写成shell脚本,通过脚本进行一键安装。如下:
    #Controller
    执行脚本iaas-install-swift-controller.sh进行安装
    #Compute节点
    执行脚本iaas-install-swift-compute.sh进行安装
    执行过程中需要确认登录controller节点和输入controller节点root用户密码。
    
    

    下面是我们的详细配置命令和步骤

    8.2创建用户

    openstack user create --domain default --password 000000 swift
    openstack role add --project service --user swift admin
    
    

    8.3创建Endpoint和API端点

    openstack service create --name swift --description "OpenStack Object Storage" object-store
    openstack endpoint create --region RegionOne object-store public http://controller:8080/v1/AUTH_%\(tenant_id\)s
    openstack endpoint create --region RegionOne object-store internal http://controller:8080/v1/AUTH_%\(tenant_id\)s
    openstack endpoint create --region RegionOne object-store admin http://controller:8080/v1 
    
    

    8.4 编辑/etc/swift/proxy-server.conf

    编辑配置文件如下
    [DEFAULT]
    bind_port = 8080
    swift_dir = /etc/swift
    user = swift
    [pipeline:main]
    pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server
    [app:proxy-server]
    use = egg:swift#proxy
    account_autocreate = True
    [filter:tempauth]
    use = egg:swift#tempauth
    user_admin_admin = admin .admin .reseller_admin
    user_test_tester = testing .admin
    user_test2_tester2 = testing2 .admin
    user_test_tester3 = testing3
    user_test5_tester5 = testing5 service
    [filter:authtoken]
    paste.filter_factory = keystonemiddleware.auth_token:filter_factory
    auth_uri = http://controller:5000
    auth_url = http://controller:35357
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = swift
    password = 000000
    delay_auth_decision = True
    [filter:keystoneauth]
    use = egg:swift#keystoneauth
    operator_roles = admin,user
    [filter:healthcheck]
    use = egg:swift#healthcheck
    [filter:cache]
    memcache_servers = controller:11211
    use = egg:swift#memcache
    [filter:ratelimit]
    use = egg:swift#ratelimit
    [filter:domain_remap]
    use = egg:swift#domain_remap
    [filter:catch_errors]
    use = egg:swift#catch_errors
    [filter:cname_lookup]
    use = egg:swift#cname_lookup
    [filter:staticweb]
    use = egg:swift#staticweb
    [filter:tempurl]
    use = egg:swift#tempurl
    [filter:formpost]
    use = egg:swift#formpost
    [filter:name_check]
    use = egg:swift#name_check
    [filter:list-endpoints]
    use = egg:swift#list_endpoints
    [filter:proxy-logging]
    use = egg:swift#proxy_logging
    [filter:bulk]
    use = egg:swift#bulk
    [filter:slo]
    use = egg:swift#slo
    [filter:dlo]
    use = egg:swift#dlo
    [filter:container-quotas]
    use = egg:swift#container_quotas
    [filter:account-quotas]
    use = egg:swift#account_quotas
    [filter:gatekeeper]
    use = egg:swift#gatekeeper
    [filter:container_sync]
    use = egg:swift#container_sync
    [filter:xprofile]
    use = egg:swift#xprofile
    [filter:versioned_writes]
    use = egg:swift#versioned_writes
    
    

    8.5 创建账号、容器、对象

    存储节点存储磁盘名称以sdb为例
    swift-ring-builder account.builder create 18 1 1
    swift-ring-builder account.builder add --region 1 --zone 1 --ip 20.0.0.20 --port 6002 --device sdb --weight 100
    swift-ring-builder account.builder
    swift-ring-builder account.builder rebalance
    swift-ring-builder container.builder create 10 1 1
    swift-ring-builder container.builder add --region 1 --zone 1 --ip 20.0.0.20 --port 6001 --device sdb --weight 100
    swift-ring-builder container.builder
    swift-ring-builder container.builder rebalance
    swift-ring-builder object.builder create 10 1 1
    swift-ring-builder object.builder  add --region 1 --zone 1 --ip 20.0.0.20 --port 6000 --device sdb --weight 100  
    swift-ring-builder object.builder
    swift-ring-builder object.builder rebalance
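
    create 后面的三个参数依次是 part_power(分区数为2的该次幂)、副本数和 min_part_hours;添加设备并rebalance之后,可以直接运行builder文件名来查看ring的当前状态(仅为查看示例):

    swift-ring-builder account.builder
    swift-ring-builder container.builder
    swift-ring-builder object.builder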
    
    

    8.6 编辑/etc/swift/swift.conf文件

    编辑如下
    [swift-hash]
    swift_hash_path_suffix = changeme
    swift_hash_path_prefix = changeme
    [storage-policy:0]
    name = Policy-0
    default = yes
    aliases = yellow, orange
    [swift-constraints] 
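
    注意 swift_hash_path_suffix 和 swift_hash_path_prefix 必须在所有节点上保持一致,生产环境不要使用changeme这样的默认值,可以换成随机串(生成方式仅为示例):

    openssl rand -hex 16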
    
    

    8.7 启动服务和赋予权限

    chown -R root:swift /etc/swift
    systemctl enable openstack-swift-proxy.service memcached.service
    systemctl restart openstack-swift-proxy.service memcached.service
    
    

    #Compute节点
    8.8 安装软件包

    存储节点存储磁盘名称以sdb为例
    # yum install xfsprogs rsync openstack-swift-account openstack-swift-container openstack-swift-object -y
    # mkfs.xfs -i size=1024 -f /dev/sdb
    # echo "/dev/sdb /swift/node xfs loop,noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
    # mkdir -p /swift/node
    # mount /dev/sdb /swift/node
    # scp controller:/etc/swift/*.ring.gz /etc/swift/
    
    

    8.9 配置rsync

    (1)编辑/etc/rsyncd.conf文件如下
    pid file = /var/run/rsyncd.pid
    log file = /var/log/rsyncd.log
    uid = swift
    gid = swift
    address = 127.0.0.1
    [account]
    path            = /swift/node
    read only       = false
    write only      = no
    list            = yes
    incoming chmod  = 0644
    outgoing chmod  = 0644
    max connections = 25
    lock file =     /var/lock/account.lock
    [container]
    path            = /swift/node
    read only       = false
    write only      = no
    list            = yes
    incoming chmod  = 0644
    outgoing chmod  = 0644
    max connections = 25
    lock file =     /var/lock/container.lock
    [object]
    path            = /swift/node
    read only       = false
    write only      = no
    list            = yes
    incoming chmod  = 0644
    outgoing chmod  = 0644
    max connections = 25
    lock file =     /var/lock/object.lock
    [swift_server]
    path            = /etc/swift
    read only       = true
    write only      = no
    list            = yes
    incoming chmod  = 0644
    outgoing chmod  = 0644
    max connections = 5
    lock file =     /var/lock/swift_server.lock
    (2)启动服务
    systemctl enable rsyncd.service
    systemctl restart rsyncd.service
    
    

    8.10 配置账号、容器和对象

    (1)修改/etc/swift/account-server.conf配置文件
    [DEFAULT]
    bind_port = 6002
    user = swift
    swift_dir = /etc/swift
    devices = /swift/node
    mount_check = false
    [pipeline:main]
    pipeline = healthcheck recon account-server
    [app:account-server]
    use = egg:swift#account
    [filter:healthcheck]
    use = egg:swift#healthcheck
    [filter:recon]
    use = egg:swift#recon
    recon_cache_path = /var/cache/swift
    [account-replicator]
    [account-auditor]
    [account-reaper]
    [filter:xprofile]
    use = egg:swift#xprofile
    (2)修改/etc/swift/container-server.conf配置文件
    [DEFAULT]
    bind_port = 6001
    user = swift
    swift_dir = /etc/swift
    devices = /swift/node
    mount_check = false
    [pipeline:main]
    pipeline = healthcheck recon container-server
    [app:container-server]
    use = egg:swift#container
    [filter:healthcheck]
    use = egg:swift#healthcheck
    [filter:recon]
    use = egg:swift#recon
    recon_cache_path = /var/cache/swift
    [container-replicator]
    [container-updater]
    [container-auditor]
    [container-sync]
    [filter:xprofile]
    use = egg:swift#xprofile
    (3)修改/etc/swift/object-server.conf配置文件
    [DEFAULT]
    bind_port = 6000
    user = swift
    swift_dir = /etc/swift
    devices = /swift/node
    mount_check = false
    [pipeline:main]
    pipeline = healthcheck recon object-server
    [app:object-server]
    use = egg:swift#object
    [filter:healthcheck]
    use = egg:swift#healthcheck
    [filter:recon]
    use = egg:swift#recon
    recon_cache_path = /var/cache/swift
    recon_lock_path = /var/lock
    [object-replicator]
    [object-reconstructor]
    [object-updater]
    [object-auditor]
    [filter:xprofile]
    use = egg:swift#xprofile
    
    

    8.11 修改Swift配置文件

    修改/etc/swift/swift.conf
    [swift-hash]
    swift_hash_path_suffix = changeme
    swift_hash_path_prefix = changeme
    [storage-policy:0]
    name = Policy-0
    default = yes
    aliases = yellow, orange
    [swift-constraints]
    
    
    

    8.12 重启服务和赋予权限

    
    chown -R swift:swift /swift/node
    mkdir -p /var/cache/swift
    chown -R root:swift /var/cache/swift
    chmod -R 775 /var/cache/swift
    chown -R root:swift /etc/swift
    
    systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    systemctl restart openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service
    systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    systemctl restart openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service
    systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
    systemctl restart openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service
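
    全部服务启动后,可以回到controller节点做一次最简单的功能验证(需要先source环境变量,命令仅为示意):

    source /etc/keystone/admin-openrc.sh
    swift stat                              # 查看帐户状态
    swift upload test-container /etc/hosts  # 上传一个测试对象
    swift list test-container               # 列出容器中的对象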
    
    

    下面一篇博客将会更新我们的Trove服务组件的安装

  • SLES11系统下搭建Swift对象存储集群(1个Proxy节点 + 3个Storage节点)

    1、机器

    192.168.1.211    Proxy Node

    192.168.1.212    Storage Node

    192.168.1.213    Storage Node

    192.168.1.214    Storage Node


    系统为SLES11sp1


    2、配置软件源

    因为公司服务器无法连外网,所以配置局域网源和本地源来搭建环境


    上传ISO镜像文件到各台机器

    SLES-11-SP4-DVD-x86_64-GM-DVD1.iso


    每台机器挂载镜像,配置本地源

    # mkdir /mnt/cdrom

    # mount -o loop SLES-11-SP4-DVD-x86_64-GM-DVD1.iso /mnt/cdrom

    # mount

    /home/SLES-11-SP4-DVD-x86_64-GM-DVD1.iso on /mnt/cdrom type iso9660 (ro)


    # vi /etc/zypp/repos.d/Local-iso.repo 

    [local-iso]

    name=local iso repo

    enabled=1

    autorefresh=1

    baseurl=file:/mnt/cdrom/

    type=yast2

    gpgcheck=0


    http局域网源

    /etc/zypp/repos.d # cat Icehouse.repo 

    [Icehouse]

    name=Icehouse

    enabled=1

    autorefresh=1

    baseurl=http://192.168.1.206:8080/download.opensuse.org/repositories/Cloud/OpenStack/Icehouse/SLE_11_SP3/

    type=rpm-md


    3、创建swift用户并配置权限

    创建组

    # groupadd swift

    创建用户

    # useradd -g swift -d /home/swift -s /bin/bash -m swift

    为swift赋予sudo

    # vi /etc/sudoers

    在root ALL=(ALL) ALL一行下添加

    swift ALL=(ALL) NOPASSWD:ALL


    # passwd swift


    5、基础配置

    注意关闭SElinux和防火墙


    分别在四台机器上安装依赖

    # su - swift

    sudo zypper install curl gcc memcached rsync sqlite3 xfsprogs Git-core libffi-dev python-setuptools

    sudo zypper install python-coverage python-dev python-nose python-simplejson python-xattr python-eventlet python-greenlet python-pastedeploy python-netifaces python-pip python-dnspython python-mock python-swiftclient openstack-swift


    注:

    python-swiftclient是客户端工具,可以在服务器上安装使服务器只充当客户端。


    每个节点上配置

    ~> sudo chown -R swift:swift /etc/swift


    ~> /etc/swift> cat swift.conf

    [swift-hash]

    # random unique strings that can never change (DO NOT LOSE)

    swift_hash_path_prefix = 'od -t x8 -N 8 -An < /dev/random'

    swift_hash_path_suffix = 'od -t x8 -N 8 -An < /dev/random'
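
    这两个值应当填入od命令生成的随机串本身,而不是命令字符串;可以用命令替换的方式先生成,再写入swift.conf(仅为示意):

    echo "swift_hash_path_prefix = $(od -t x8 -N 8 -An < /dev/random | tr -d ' ')"
    echo "swift_hash_path_suffix = $(od -t x8 -N 8 -An < /dev/random | tr -d ' ')"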


    6、安装配置proxy

    sudo zypper install openstack-swift-proxy memcached


    启动memcached

    /usr/sbin # ./memcached -d -m 10 -u swift -l 192.168.1.211 -p 11211 -c 256 -P /tmp/memcached/pid


    注:

    memcached之所以这样手动启动,而不是把参数写入配置文件,是因为我发现写入配置文件后再启动服务并没有生效。


    /usr/sbin # service memcached status

    Checking for service memcached    

                                                            running


    /usr/sbin # netstat -an | grep 11211

    tcp        0      0 192.168.1.211:11211    0.0.0.0:*               LISTEN      

    udp        0      0 192.168.1.211:11211    0.0.0.0:*                           

    查看配置后的文件

    /etc/swift> grep -v "^#" /etc/swift/proxy-server.conf | grep -v "^$"

    [DEFAULT]

    user = swift

    bind_port = 8090

    workers = 8

    [pipeline:main]

    pipeline= healthcheck proxy-logging cache tempauth proxy-logging proxy-server

    [app:proxy-server]

    use = egg:swift#proxy

    allow_account_management = true

    account_autocreate = true

    [filter:tempauth]

    use = egg:swift#tempauth

    user_system_root= testpass .admin http://192.168.1.211:8090/v1/AUTH_system

    user_admin_admin = admin .admin .reseller_admin

    user_test_tester = testing .admin

    user_test2_tester2 = testing2 .admin

    user_test_tester3 = testing3

    [filter:healthcheck]

    use = egg:swift#healthcheck

    [filter:cache]

    use = egg:swift#memcache

    memcache_servers = 192.168.1.211:11211

    [filter:ratelimit]

    use = egg:swift#ratelimit

    [filter:domain_remap]

    use = egg:swift#domain_remap

    [filter:catch_errors]

    use = egg:swift#catch_errors

    [filter:cname_lookup]

    use = egg:swift#cname_lookup

    [filter:staticweb]

    use = egg:swift#staticweb

    [filter:tempurl]

    use = egg:swift#tempurl

    [filter:formpost]

    use = egg:swift#formpost

    [filter:name_check]

    use = egg:swift#name_check

    [filter:list-endpoints]

    use = egg:swift#list_endpoints

    [filter:proxy-logging]

    use = egg:swift#proxy_logging

    [filter:bulk]

    use = egg:swift#bulk

    [filter:container-quotas]

    use = egg:swift#container_quotas

    [filter:slo]

    use = egg:swift#slo

    [filter:dlo]

    use = egg:swift#dlo

    [filter:account-quotas]

    use = egg:swift#account_quotas

    [filter:gatekeeper]

    use = egg:swift#gatekeeper

    [filter:container_sync]

    use = egg:swift#container_sync


    创建account、container以及object rings

    > cd /etc/swift

    /etc/swift> sudo swift-ring-builder account.builder create 18 3 1

    /etc/swift> sudo swift-ring-builder container.builder create 18 3 1

    /etc/swift> sudo swift-ring-builder object.builder create 18 3 1


    注:

    18代表2的18次幂,这个数字取决于你希望一个ring中会有多少个partition,3代表object的副本数,1代表至少一个小时后才能被移动。
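
    也就是说每个ring会有 2^18 = 262144 个partition,这与后面rebalance输出里的"Reassigned 262144"正好对应,可以简单验证:

    echo $((2**18))    # 输出 262144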


    让ring记录每个storage存储设备

    /etc/swift> export ZONE=1

    /etc/swift> export STORAGE_LOCAL_NET_IP=192.168.1.212

    /etc/swift> export WEIGHT=100

    /etc/swift> export DEVICE=sdb1


    /etc/swift> sudo swift-ring-builder account.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6002/$DEVICE $WEIGHT

    WARNING: No region specified for z1-192.168.1.212:6002/sdb1. Defaulting to region 1.

    Device d4r1z1-192.168.1.212:6002R192.168.1.212:6002/sdb1_"" with 100.0 weight got id 4


    /etc/swift> sudo swift-ring-builder container.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6001/$DEVICE $WEIGHT

    WARNING: No region specified for z1-192.168.1.212:6001/sdb1. Defaulting to region 1.

    Device d0r1z1-192.168.1.212:6001R192.168.1.212:6001/sdb1_"" with 100.0 weight got id 0


    /etc/swift> sudo swift-ring-builder object.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6000/$DEVICE $WEIGHT

    WARNING: No region specified for z1-192.168.1.212:6000/sdb1. Defaulting to region 1.

    Device d0r1z1-192.168.1.212:6000R192.168.1.212:6000/sdb1_"" with 100.0 weight got id 0


    /etc/swift> export ZONE=2

    /etc/swift> export STORAGE_LOCAL_NET_IP=192.168.1.213


    /etc/swift> sudo swift-ring-builder account.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6002/$DEVICE $WEIGHT

    WARNING: No region specified for z2-192.168.1.213:6002/sdb1. Defaulting to region 1.

    Device d1r1z2-192.168.1.213:6002R192.168.1.213:6002/sdb1_"" with 100.0 weight got id 1


    /etc/swift> sudo swift-ring-builder container.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6001/$DEVICE $WEIGHT

    WARNING: No region specified for z2-192.168.1.213:6001/sdb1. Defaulting to region 1.

    Device d1r1z2-192.168.1.213:6001R192.168.1.213:6001/sdb1_"" with 100.0 weight got id 1


    /etc/swift> sudo swift-ring-builder object.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6000/$DEVICE $WEIGHT

    WARNING: No region specified for z2-192.168.1.213:6000/sdb1. Defaulting to region 1.

    Device d1r1z2-192.168.1.213:6000R192.168.1.213:6000/sdb1_"" with 100.0 weight got id 1


    /etc/swift> export ZONE=3

    /etc/swift> export STORAGE_LOCAL_NET_IP=192.168.1.214


    /etc/swift> sudo swift-ring-builder account.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6002/$DEVICE $WEIGHT

    WARNING: No region specified for z3-192.168.1.214:6002/sdb1. Defaulting to region 1.

    Device d12r1z3-192.168.1.214:6002R192.168.1.214:6002/sdb1_"" with 100.0 weight got id 12


    /etc/swift> sudo swift-ring-builder container.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6001/$DEVICE $WEIGHT

    WARNING: No region specified for z3-192.168.1.214:6001/sdb1. Defaulting to region 1.

    Device d2r1z3-192.168.1.214:6001R192.168.1.214:6001/sdb1_"" with 100.0 weight got id 2


    /etc/swift> sudo swift-ring-builder object.builder add z$ZONE-$STORAGE_LOCAL_NET_IP:6000/$DEVICE $WEIGHT

    WARNING: No region specified for z3-192.168.1.214:6000/sdb1. Defaulting to region 1.

    Device d2r1z3-192.168.1.214:6000R192.168.1.214:6000/sdb1_"" with 100.0 weight got id 2


    平衡ring

    /etc/swift> sudo swift-ring-builder account.builder rebalance

    Reassigned 262144 (100.00%) partitions. Balance is now 0.00.

    /etc/swift> sudo swift-ring-builder container.builder rebalance

    Reassigned 262144 (100.00%) partitions. Balance is now 0.00. 

    /etc/swift> sudo swift-ring-builder object.builder rebalance

    Reassigned 262144 (100.00%) partitions. Balance is now 0.00.


    如果操作失误也可以移除

    /etc/swift> sudo swift-ring-builder account.builder remove z1-192.168.1.212:6020

    d0r1z1-192.168.1.212:6020R192.168.1.212:6020/sdb1_"" marked for removal and will be removed next rebalance.


    不过我发现最后一个设备无法直接删除,只能再新建另外一个,然后删除想要删除的那个。


    删除后需要rebalance

    /etc/swift> sudo swift-ring-builder account.builder rebalance

    Reassigned 262144 (100.00%) partitions. Balance is now 0.00.


    当然也可以加入设备名

    /etc/swift> sudo swift-ring-builder account.builder remove z1-192.168.1.212:6020/sdb1

    d0r1z1-192.168.1.212:6020R192.168.1.212:6020/sdb1_"" marked for removal and will be removed next rebalance.


    拷贝ring到其他所有节点

    所有节点

    sudo chown -R swift:swift /etc/swift


    scp swift/*.ring.gz swift@192.168.1.212:/etc/swift

    scp swift/*.ring.gz swift@192.168.1.213:/etc/swift

    scp swift/*.ring.gz swift@192.168.1.214:/etc/swift


    开启proxy

    /etc/swift> sudo swift-init proxy start


    需要注意的是对于account.builder 、container.builder以及object.builder的命名一定要严格按照小写,并且名字不能修改,不然会出现类似下面的错误

    sudo swift-init proxy start

    Starting proxy-server...(/etc/swift/proxy-server.conf)

    WARNING: SSL should only be enabled for testing purposes. Use external SSL termination for a production deployment.

    Traceback (most recent call last):

      File "/usr/bin/swift-proxy-server", line 23, in <module>

        sys.exit(run_wsgi(conf_file, 'proxy-server', default_port=8080, **options))

    ......


    IOError: [Errno 2] No such file or directory: '/etc/swift/container.ring.gz'


    因为/etc/swift/目录下是Container.ring.gz


    7、安装配置storage节点(三个节点安装配置)

    sudo zypper install openstack-swift-account openstack-swift-container openstack-swift-object python-xml

    sudo mkdir -p /srv/node/sdb1

    sudo chown swift:swift /srv/node/sdb1


    设备配置xfs卷

    机器为底层lvm逻辑卷格式

    # lvcreate -n swiftlv -L 10G vg0


    # mkfs.xfs /dev/mapper/vg0-swiftlv


    把挂载信息写入配置文件

    # vi /etc/fstab

    /dev/mapper/vg0-swiftlv /srv/node/sdb1    xfs   noatime,nodiratime,nobarrier,logbufs=8 0 0


    重新加载配置文件

    # mount -a 


    查看

    # mount

    /dev/mapper/vg0-swiftlv on /srv/node/sdb1 type xfs (rw,noatime,nodiratime,nobarrier,logbufs=8)


    按实际情况修改rsync配置文件

    /etc> sudo vi rsyncd.conf

    uid = swift

    gid = swift

    log file = /var/log/rsyncd.log

    pid file = /var/run/rsyncd.pid

    address = 192.168.1.212


    [account]

    max connections = 2

    path = /srv/node/

    read only = False

    lock file = /var/lock/account.lock


    [container]

    max connections = 2

    path = /srv/node/

    read only = False

    lock file = /var/lock/container.lock


    [object]

    max connections = 2

    path = /srv/node/

    read only = False

    lock file = /var/lock/object.lock


    On the other two nodes, change the address accordingly:

    address = 192.168.1.213

    address = 192.168.1.214


    /etc> sudo vi default/rsync

    RSYNC_ENABLE=true


    # service rsyncd start


    /etc> rsync rsync://pub@192.168.1.212

    account        

    container      

    object  


    /etc> rsync rsync://pub@192.168.1.213

    account        

    container      

    object   


    /etc> rsync rsync://pub@192.168.1.214

    account        

    container      

    object 


    Modify the account, container, and object server configuration files

    ~> grep -v "^#" /etc/swift/account-server.conf |grep -v "^$"

    [DEFAULT]

    user = swift

    bind_ip = 192.168.1.212

    bind_port = 6002

    swift_dir = /etc/swift

    devices = /srv/node

    workers = 2

    [pipeline:main]

    pipeline = account-server

    [app:account-server]

    use = egg:swift#account

    [filter:healthcheck]

    use = egg:swift#healthcheck

    [filter:recon]

    use = egg:swift#recon

    [account-replicator]

    [account-auditor]

    [account-reaper]


    ~> grep -v "^#" /etc/swift/container-server.conf |grep -v "^$"

    [DEFAULT]

    user = swift

    bind_ip = 192.168.1.212

    bind_port = 6001

    swift_dir = /etc/swift

    devices = /srv/node

    workers = 2

    [pipeline:main]

    pipeline = container-server

    [app:container-server]

    use = egg:swift#container

    [filter:healthcheck]

    use = egg:swift#healthcheck

    [filter:recon]

    use = egg:swift#recon

    [container-replicator]

    [container-updater]

    [container-auditor]

    [container-sync]


    ~> grep -v "^#" /etc/swift/object-server.conf |grep -v "^$"

    [DEFAULT]

    user = swift

    bind_ip = 192.168.1.212

    bind_port = 6000

    swift_dir = /etc/swift

    devices = /srv/node

    workers = 2

    [pipeline:main]

    pipeline = object-server

    [app:object-server]

    use = egg:swift#object

    [filter:healthcheck]

    use = egg:swift#healthcheck

    [filter:recon]

    use = egg:swift#recon

    [object-replicator]

    [object-updater]

    [object-auditor]


    Copy the configuration files to the other two nodes, remembering to change bind_ip to each node's own IP.
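
    A rough sketch of that copy step, assuming the other two storage nodes are 192.168.1.213 and 192.168.1.214 and that only bind_ip differs between nodes (sed rewrites it in place after the copy):

    # Distribute the *-server.conf files and fix up bind_ip on each target node.
    for IP in 192.168.1.213 192.168.1.214; do
        for conf in account-server.conf container-server.conf object-server.conf; do
            scp /etc/swift/${conf} root@${IP}:/etc/swift/
            ssh root@${IP} "sed -i 's/^bind_ip = .*/bind_ip = ${IP}/' /etc/swift/${conf}"
        done
    done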


    Start the storage services

    /etc/swift> sudo swift-init all start

    Starting container-updater...(/etc/swift/container-server.conf)

    Starting account-auditor...(/etc/swift/account-server.conf)

    Starting object-replicator...(/etc/swift/object-server.conf)

    Unable to locate config for proxy-server

    Starting container-replicator...(/etc/swift/container-server.conf)

    Starting object-auditor...(/etc/swift/object-server.conf)

    Starting object-expirer...(/etc/swift/object-expirer.conf)

    Starting container-auditor...(/etc/swift/container-server.conf)

    Starting container-server...(/etc/swift/container-server.conf)

    Starting account-server...(/etc/swift/account-server.conf)

    Starting account-reaper...(/etc/swift/account-server.conf)

    Starting container-sync...(/etc/swift/container-server.conf)

    Starting account-replicator...(/etc/swift/account-server.conf)

    Starting object-updater...(/etc/swift/object-server.conf)

    Starting object-server...(/etc/swift/object-server.conf)
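
    The "Unable to locate config for proxy-server" line is expected on a storage node, since only the proxy node carries proxy-server.conf. A quick way to confirm that the account, container, and object servers are actually up is to check that they are listening on their ports, for example with netstat (or ss):

    # Each storage node should be listening on 6000 (object), 6001 (container) and 6002 (account).
    netstat -tlnp | grep -E ':(6000|6001|6002) '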


    You can interact with the storage service using curl:

    curl -k -v -H'X-Storage-User: system:root' -H 'X-Storage-Pass: testpass' https://192.168.1.211:8080/auth/v1.0

    curl -k -v -H 'X-Storage-User: system:root' -H 'X-Storage-Pass: testpass' http://192.168.1.211:8080/auth/v1.0


    If curl fails with an error like this:

    swift@x-shcs-creditcard-v01:~> curl -k -v -H'X-Storage-User: system:root' -H 'X-Storage-Pass: testpass' https://192.168.1.211:8080/auth/v1.0

    * Protocol "https" not supported or disabled in libcurl

    * Closing connection -1

    curl: (1) Protocol "https" not supported or disabled in libcurl


    then you need to build curl from source with SSL support:

    # tar jxf curl-7.19.0.tar.bz2

    # cd curl-7.19.0/

    # ./configure --prefix=/usr/local/curl --with-ssl=/usr/local/openssl

      curl version:    7.19.0

      Host setup:      x86_64-unknown-linux-gnu

      Install prefix:  /usr/local

      Compiler:        gcc

      SSL support:     enabled (OpenSSL)

      SSH support:     no      (--with-libssh2)

      zlib support:    enabled

      krb4 support:    no      (--with-krb4*)

      GSSAPI support:  no      (--with-gssapi)

      SPNEGO support:  no      (--with-spnego)

      c-ares support:  no      (--enable-ares)

      ipv6 support:    enabled

      IDN support:     no      (--with-libidn)

      Build libcurl:   Shared=yes, Static=yes

      Built-in manual: no      (--enable-manual)

      Verbose errors:  enabled (--disable-verbose)

      SSPI support:    no      (--enable-sspi)

      ca cert bundle:  no

      ca cert path:    /etc/ssl/certs/

      LDAP support:    no      (--enable-ldap / --with-ldap-lib / --with-lber-lib)

      LDAPS support:   no      (--enable-ldaps)


    # make && make install


    /usr/local/bin # mv curl curl.bak

    /usr/bin # mv curl curl.bak


    # ln -s /usr/local/curl/bin/curl /usr/bin/curl
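
    After relinking, verify that the new binary really has SSL support before retrying the request:

    # The Features line should now include SSL, and https should appear among the supported protocols.
    curl -V
    curl -k -v -H 'X-Storage-User: system:root' -H 'X-Storage-Pass: testpass' https://192.168.1.211:8080/auth/v1.0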


    Check whether Swift is working properly; output like the following means everything is fine:

    /etc/swift> swift -Ahttp://192.168.1.211:8090/auth/v1.0 -U system:root -K testpass stat

           Account: AUTH_system

        Containers: 0

           Objects: 0

             Bytes: 0

      Content-Type: text/plain; charset=utf-8

       X-Timestamp: 1490690568.46981

        X-Trans-Id: tx59301de1e9244b70a2065-0058da2208

    X-Put-Timestamp: 1490690568.46981


    Create a container (a top-level directory in the store):

    swift -Ahttp://192.168.1.211:8090/auth/v1.0 -U system:root -K testpass post container1


    Upload files to the storage container:

    swift -Ahttp://192.168.1.211:8090/auth/v1.0 -U system:root -K testpass upload container1 /etc/swift/*.ring.gz


    List the containers and objects in storage:

    swift -Ahttp://192.168.1.211:8090/auth/v1.0 -U system:root -K testpass list


    Download the files in a container to the local machine:

    swift -Ahttp://192.168.1.211:8090/auth/v1.0 -U system:root -K testpass download container1 
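
    Afterwards, re-running stat should report non-zero container and object counts, and the downloaded ring files should appear in the current directory:

    swift -Ahttp://192.168.1.211:8090/auth/v1.0 -U system:root -K testpass stat
    ls *.ring.gz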
