  • OpenStack Cloud Platform Setup


    Preface

    OpenStack

    Introduction:
    A project initiated jointly by NASA and Rackspace, OpenStack is an IaaS solution and an open-source cloud computing management platform, released under the Apache License.
    Versions: releases from 2010.10 through 2018.2, each identified by an alphabetical codename.

    Component Overview

    	1. Horizon (Apache + Python)
    		Provides the web dashboard for the OpenStack services; used to manage instances and images, create key pairs, attach volumes to instances, operate Swift containers, and so on.
    	  Features:
    		Instance management: launch and terminate instances, view console logs, connect via VNC, attach volumes, etc.
    		Access and security management: create security groups, manage key pairs, assign floating IPs, etc.
    		Image management: edit or delete images
    		Manage users, quotas, and project usage
    	2. Keystone
    		Identity component
    		Provides centralized identity management, authentication, and authorization for the other services
    		Provides a central service catalog
    		Supports multiple authentication modes, such as password, token, and AWS (Amazon Web Services)-style logins
    		Provides SSO for users and for the other services
    	3. Nova --- vlan (1-4094) --> vxlan
    		Compute component
    		The service that manages virtual machines on each node
    		Nova is a distributed service; it talks to Keystone for authentication
    		and to Glance for image management
    		Nova is designed to scale out horizontally on standard hardware
    		When an instance is launched, the image is downloaded to the compute node if it is not already there
    	4. Glance
    		Image management component
    		Acts as the registry for virtual machine images
    		Allows users to upload and store copies of server images
    		Images can serve as templates for new virtual machines
    	5. Swift (a distributed object store)
    		Object storage component
    		Not required for most users
    		Only needed once the volume of unstructured data to store reaches a certain scale
    	6. Neutron
    		Network management component
    		A software-defined networking service
    		Used to create networks, subnets, and routers, and to manage floating IP addresses
    		Can implement virtual switches and virtual routers
    		Can be used to create VPNs within a project
    	7. Cinder
    		Volume management component
    		The service that manages storage volumes for virtual machines
    		Provides persistent block storage to instances running under Nova
    		Data can be backed up via snapshots
    		Commonly used for instance storage workloads such as databases
    

    I. OpenStack Installation

    Environment Preparation

    Hostname     IP Address       Minimum Resources
    openstack    192.168.1.10     2 CPU, 8 GB RAM
    nova01       192.168.1.11     2 CPU, 3 GB RAM
    nova02       192.168.1.12     2 CPU, 3 GB RAM
    repo         192.168.1.250    2 CPU, 1 GB RAM

    Upload RHEL7-extras.iso and RHEL7OSP-10.iso to the utility server [repo].

    II. Utility Server Installation and Configuration

    1. Time (NTP) Server

    [root@repo ~]# yum install -y chrony
    [root@repo ~]# vim /etc/chrony.conf
    # comment out every line starting with "server", then add:
    server ntp.aliyun.com iburst
    bindacqaddress 0.0.0.0
    allow 0/0
    local stratum 10
    [root@repo ~]# systemctl enable chronyd
    [root@repo ~]# systemctl restart chronyd
    [root@repo ~]# ss -ltun  # check that port 123 is now listening
    

    2. Network Yum Repository Server

    [root@repo ~]# yum install -y vsftpd
    [root@repo ~]# systemctl enable --now vsftpd
    [root@repo ~]# mkdir -p /var/ftp/{extras,openstack}
    [root@repo ~]# cd /var/iso
    [root@repo ~]# mount -t iso9660 -o ro,loop RHEL7-extras.iso /var/ftp/extras
    [root@repo ~]# mount -t iso9660 -o ro,loop RHEL7OSP-10.iso /var/ftp/openstack
    # verify from the openstack node
    [root@openstack ~]# curl ftp://192.168.1.250/extras/
    [root@openstack ~]# curl ftp://192.168.1.250/openstack/
    
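    The two ISO mounts above do not survive a reboot. A minimal sketch of making them persistent via /etc/fstab (assuming the ISO files stay under /var/iso):

    [root@repo ~]# vim /etc/fstab
    # loop-mount the ISOs read-only at boot
    /var/iso/RHEL7-extras.iso    /var/ftp/extras     iso9660  loop,ro  0 0
    /var/iso/RHEL7OSP-10.iso     /var/ftp/openstack  iso9660  loop,ro  0 0
    [root@repo ~]# mount -a   # confirm the entries mount cleanly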

    III. OpenStack Lab Architecture

    [Architecture diagram: the management node (openstack) plus two compute nodes (nova01 and nova02); each compute node runs nova-compute and libvirtd and hosts its VMs behind a br-ex bridge attached to eth0; all nodes are reachable from the Windows host machine.]

    1. OpenStack System Environment Configuration

    Perform the following steps on both openstack and nova01.

    [root@openstack ~]# vim /etc/selinux/config
    # change to SELINUX=disabled
    [root@openstack ~]# yum -y remove firewalld-*
    [root@openstack ~]# reboot
    # verify after the reboot
    [root@openstack ~]# sestatus 
    SELinux status:                 disabled
    [root@openstack ~]# rpm -qa |grep -i firewalld
    [root@openstack ~]# 
    

    Uninstall NetworkManager

    [root@openstack ~]# systemctl stop NetworkManager
    [root@openstack ~]# yum remove -y NetworkManager
    [root@openstack ~]# systemctl enable --now network
    

    Network Interface Configuration File Annotations

    * # Generated by dracut initrd     # comment line
    * DEVICE="eth0"                    # device name, matching the name shown by ifconfig
    * ONBOOT="yes"                     # bring the interface up at boot
    * NM_CONTROLLED="no"               # not managed by NetworkManager
    * TYPE="Ethernet"                  # interface type
    * BOOTPROTO="static"               # addressing protocol (dhcp|static|none)
    * IPADDR="192.168.1.10"            # IP address
    * NETMASK="255.255.255.0"          # netmask
    * GATEWAY="192.168.1.254"          # default gateway
    

    2. Yum Repository Configuration

    Confirm that the total package count is 10,670.

    [root@openstack ~]# vim /etc/yum.repos.d/openstack.repo 
    [local_extras]
    name=CentOS-$releasever - Extras
    baseurl="ftp://192.168.1.250/extras"
    enabled=1
    gpgcheck=0
    
    [local_openstack]
    name=CentOS-$releasever - OpenStack
    baseurl="ftp://192.168.1.250/openstack/rhel-7-server-openstack-10-rpms"
    enabled=1
    gpgcheck=0
    
    [local_openstack_devtools]
    name=CentOS-$releasever - Openstack devtools
    baseurl="ftp://192.168.1.250/openstack/rhel-7-server-openstack-10-devtools-rpms"
    enabled=1
    gpgcheck=0
    [root@openstack ~]# yum makecache
    [root@openstack ~]# yum repolist
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
    repo id                    repo name                           status
    CentOS-Base                CentOS-7 - Base                     9,911
    local_extras               CentOS-7 - Extras                   76
    local_openstack            CentOS-7 - OpenStack                680
    local_openstack_devtools   CentOS-7 - Openstack devtools       3
    repolist: 10,670
    

    3. Time Synchronization Configuration

    [root@openstack ~]# vim /etc/chrony.conf
    # comment out every line starting with "server", then add:
    server 192.168.1.250 iburst
    [root@openstack ~]# systemctl restart chronyd
    [root@openstack ~]# chronyc sources -v  # a ^* entry means synchronization succeeded
    

    4. Hostname and DNS Configuration

    [root@openstack ~]# vim /etc/hosts
    192.168.1.10    openstack
    192.168.1.11    nova01
    192.168.1.12    nova02
    192.168.1.250   repo
    # remove every line that starts with "search"
    [root@openstack ~]# sed '/^search /d' -i /etc/resolv.conf
    

    5. Nova Virtualization Environment Installation

    Install on nova01

    [root@nova01 ~]# yum install -y qemu-kvm libvirt-daemon libvirt-daemon-driver-qemu libvirt-client python-setuptools
    [root@nova01 ~]# systemctl enable --now libvirtd
    [root@nova01 ~]# virsh version # verify
    

    6. Packstack Tool Installation

    Only needs to be installed on the openstack node.

    [root@openstack ~]# yum install -y python-setuptools openstack-packstack
    

    7. OpenStack Installation

    Installed with an answer file; run only on the openstack node.

    # generate the answer file
    [root@openstack ~]# packstack --gen-answer-file=answer.ini
    # edit the answer file
    42:   CONFIG_SWIFT_INSTALL=n                              // object storage component
    45:   CONFIG_CEILOMETER_INSTALL=n                         // metering module
    49:   CONFIG_AODH_INSTALL=n                               // metering module
    53:   CONFIG_GNOCCHI_INSTALL=n                            // metering module
    75:   CONFIG_NTP_SERVERS=192.168.1.250                    // NTP server
    98:   CONFIG_COMPUTE_HOSTS=192.168.1.11                   // compute node IP
    102:  CONFIG_NETWORK_HOSTS=192.168.1.10,192.168.1.11      // network node IPs
    333:  CONFIG_KEYSTONE_ADMIN_PW=a                          // admin password
    840:  CONFIG_NEUTRON_ML2_TYPE_DRIVERS=flat,vxlan          // supported type drivers
    910:  CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex   // bridge mapping
    921:  CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0         // external NIC attached to the bridge
    1179: CONFIG_PROVISION_DEMO=n                             // demo provisioning
    
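    The line numbers above differ between packstack releases, so matching on the key names is safer. A minimal sketch (assuming GNU sed) of applying the same changes non-interactively:

    [root@openstack ~]# sed -i \
        -e 's/^CONFIG_SWIFT_INSTALL=.*/CONFIG_SWIFT_INSTALL=n/' \
        -e 's/^CONFIG_CEILOMETER_INSTALL=.*/CONFIG_CEILOMETER_INSTALL=n/' \
        -e 's/^CONFIG_AODH_INSTALL=.*/CONFIG_AODH_INSTALL=n/' \
        -e 's/^CONFIG_GNOCCHI_INSTALL=.*/CONFIG_GNOCCHI_INSTALL=n/' \
        -e 's/^CONFIG_NTP_SERVERS=.*/CONFIG_NTP_SERVERS=192.168.1.250/' \
        -e 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.1.11/' \
        -e 's/^CONFIG_NETWORK_HOSTS=.*/CONFIG_NETWORK_HOSTS=192.168.1.10,192.168.1.11/' \
        -e 's/^CONFIG_KEYSTONE_ADMIN_PW=.*/CONFIG_KEYSTONE_ADMIN_PW=a/' \
        -e 's/^CONFIG_NEUTRON_ML2_TYPE_DRIVERS=.*/CONFIG_NEUTRON_ML2_TYPE_DRIVERS=flat,vxlan/' \
        -e 's/^CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=.*/CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex/' \
        -e 's/^CONFIG_NEUTRON_OVS_BRIDGE_IFACES=.*/CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:eth0/' \
        -e 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' \
        answer.ini
    # spot-check the result
    [root@openstack ~]# grep -E '^CONFIG_(NTP_SERVERS|COMPUTE_HOSTS|KEYSTONE_ADMIN_PW)=' answer.ini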

    Take a snapshot!!!
    Take a snapshot!!!
    Take a snapshot!!!

    The installation takes roughly 10 to 30 minutes.

    [root@openstack ~]# packstack --answer-file=answer.ini
    

    IV. Web UI Login

    Modify the Apache configuration

    [root@openstack ~]# vim /etc/httpd/conf.d/15-horizon_vhost.conf 
    # add this on the third line from the end of the file
    WSGIApplicationGroup %{GLOBAL}
    [root@openstack ~]# systemctl reload httpd

    Open http://192.168.1.10/ in a browser.

    Command-line login to OpenStack
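    Packstack writes the admin credentials to /root/keystonerc_admin; a minimal sketch of a CLI session (the file name is an assumption and may differ between packstack releases):

    [root@openstack ~]# source /root/keystonerc_admin
    [root@openstack ~]# openstack user list              # Keystone users
    [root@openstack ~]# openstack compute service list   # nova services on each node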

  • OpenStack Cloud Platform Setup (Train)


    Basic Environment Configuration

    [root@server1 ~]#  hostnamectl set-hostname ct
    [root@server1 ~]#  su
    
    [root@server2 ~]#  hostnamectl set-hostname c1
    [root@server2 ~]#  su
    
    [root@server3 ~]#  hostnamectl set-hostname c2
    [root@server3 ~]#  su
    

    Environment Dependency Packages

     [root@ct ~]# yum -y install net-tools bash-completion vim gcc gcc-c++ make pcre  pcre-devel expat-devel cmake  bzip2 
     [root@ct ~]# yum -y install centos-release-openstack-train python-openstackclient openstack-selinux openstack-utils
    

    Set the NIC Parameters

    Configure the NAT NIC
    [root@ct etcd]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
    TYPE=Ethernet
    PROXY_METHOD=none
    BROWSER_ONLY=no
    BOOTPROTO=static
    IPADDR=20.0.0.10
    NETMASK=255.255.255.0
    GATEWAY=20.0.0.2
    DNS=20.0.0.2
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    IPV6_ADDR_GEN_MODE=stable-privacy
    NAME=ens33
    UUID=633054e7-1f23-4fd7-9007-24c491adff63
    DEVICE=ens33
    ONBOOT=yes
    
    Configure the VM1 NIC
    [root@ct etcd]# cp /etc/sysconfig/network-scripts/ifcfg-ens33 /etc/sysconfig/network-scripts/ifcfg-ens37
    [root@ct etcd]# vi /etc/sysconfig/network-scripts/ifcfg-ens37
    TYPE=Ethernet
    PROXY_METHOD=none
    BROWSER_ONLY=no
    BOOTPROTO=static
    IPADDR=192.168.100.100
    NETMASK=255.255.255.0
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=yes
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_FAILURE_FATAL=no
    IPV6_ADDR_GEN_MODE=stable-privacy
    NAME=ens37
    DEVICE=ens37
    ONBOOT=yes
    
    Restart the network and check
    [root@ct ~]# systemctl restart network
    [root@ct ~]# ip addr
    

    Configure Hosts

    [root@ct ~]# vi /etc/hosts
    192.168.100.100  ct
    192.168.100.101  c1
    192.168.100.102  c2
    
    [root@ct ~]# systemctl stop firewalld
    [root@ct ~]# systemctl disable firewalld
    [root@ct ~]# setenforce 0
    [root@ct ~]# vim /etc/sysconfig/selinux
    SELINUX=disabled
    

    Passwordless SSH

    Generate an asymmetric key pair and copy it to each node
     [root@ct ~]#  ssh-keygen -t rsa          
     [root@ct ~]#  ssh-copy-id ct
     [root@ct ~]#  ssh-copy-id c1
     [root@ct ~]#  ssh-copy-id c2
    

    Time Synchronization on the Control Node (ct)

    [root@ct ~]# yum install chrony -y
    [root@ct ~]# vim /etc/chrony.conf
    # comment out lines 4-7
    # on ct, add these two lines at line 8
    server ntp6.aliyun.com iburst
    allow 192.168.100.0/24
    
    # on the other two nodes, add this single line at line 8
    server ct iburst
    
    [root@ct ~]# systemctl enable chronyd
    [root@ct ~]# systemctl restart chronyd
    
    Use the chronyc sources command to check the time synchronization status
    [root@ct ~]# chronyc sources
    210 Number of sources = 1
    MS Name/IP address         Stratum Poll Reach LastRx Last sample
    ===============================================================================
    ^* 203.107.6.88                  2   9   377    73  -2993us[-4411us] +/-   19ms
    
    Set up a periodic cron job
    [root@ct ~]# crontab -e
    */30 * * * * /usr/bin/chronyc sources >> /var/log/chronyc.log
    [root@ct ~]# crontab -l
    */2 * * * * /usr/bin/chronyc sources >> /var/log/chronyc.log
    

    Service Configuration (Control Node)

    Install and Configure MariaDB

    [root@ct ~]# yum -y install mariadb mariadb-server python2-PyMySQL
    
    # this package provides the module the OpenStack control node needs to connect to MySQL; without it the database cannot be reached. Install it on the control node only.
    [root@ct ~]# yum -y install libibverbs
    
    Add a MySQL drop-in configuration file with the following content
    [root@ct ~]# vim /etc/my.cnf.d/openstack.cnf
    [mysqld]
    bind-address = 192.168.100.100         # control node's LAN address
    default-storage-engine = innodb        # default storage engine
    innodb_file_per_table = on             # one tablespace file per table
    max_connections = 4096                 # maximum number of connections
    collation-server = utf8_general_ci     # default collation
    character-set-server = utf8            # default character set
    
    Enable at boot and start the service
    [root@ct my.cnf.d]# systemctl enable mariadb
    [root@ct my.cnf.d]# systemctl start mariadb
    
    Run the MariaDB hardening script
    [root@ct my.cnf.d]# mysql_secure_installation
    Enter current password for root (enter for none):             # press Enter
    OK, successfully used password, moving on...
    Set root password? [Y/n] Y
    Remove anonymous users? [Y/n] Y
     ... Success!
    Disallow root login remotely? [Y/n] N
     ... skipping.
    Remove test database and access to it? [Y/n] Y
    Reload privilege tables now? [Y/n] Y
    
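    A quick check (sketch) that the settings in openstack.cnf took effect and that MariaDB is listening on the management address:

    [root@ct ~]# mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections';"
    [root@ct ~]# ss -lnt | grep 3306        # should show 192.168.100.100:3306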

    Install RabbitMQ

    Every VM-creation request is published by the control node to RabbitMQ, and the compute nodes listen on RabbitMQ.

    [root@ct ~]# yum -y install rabbitmq-server
    
    Start the RabbitMQ service and enable it at boot
    [root@ct ~]# systemctl enable rabbitmq-server.service
    [root@ct ~]# systemctl start rabbitmq-server.service
    
    Create the message-queue user that the controller and compute nodes use to authenticate to RabbitMQ
    [root@ct ~]# rabbitmqctl add_user openstack RABBIT_PASS
    Creating user "openstack"
    
    Grant the openstack user its permissions (regular expressions for configure, write, and read)
    [root@ct ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
    Setting permissions for user "openstack" in vhost "/"
    
    List the RabbitMQ plugins
    [root@ct ~]# rabbitmq-plugins list
     Configured: E = explicitly enabled; e = implicitly enabled
     | Status:   [failed to contact rabbit@ct - status not shown]
     |/
    [e ] amqp_client                       3.6.16
    [e ] cowboy                            1.0.4
    [e ] cowlib                            1.0.2
    [  ] rabbitmq_amqp1_0                  3.6.16
    [  ] rabbitmq_auth_backend_ldap        3.6.16
    [  ] rabbitmq_auth_mechanism_ssl       3.6.16
    [  ] rabbitmq_consistent_hash_exchange 3.6.16
    [  ] rabbitmq_event_exchange           3.6.16
    [  ] rabbitmq_federation               3.6.16
    [  ] rabbitmq_federation_management    3.6.16
    [  ] rabbitmq_jms_topic_exchange       3.6.16
    [E ] rabbitmq_management               3.6.16
    [e ] rabbitmq_management_agent         3.6.16
    [  ] rabbitmq_management_visualiser    3.6.16
    [  ] rabbitmq_mqtt                     3.6.16
    [  ] rabbitmq_random_exchange          3.6.16
    [  ] rabbitmq_recent_history_exchange  3.6.16
    [  ] rabbitmq_sharding                 3.6.16
    [  ] rabbitmq_shovel                   3.6.16
    [  ] rabbitmq_shovel_management        3.6.16
    [  ] rabbitmq_stomp                    3.6.16
    [  ] rabbitmq_top                      3.6.16
    [  ] rabbitmq_tracing                  3.6.16
    [  ] rabbitmq_trust_store              3.6.16
    [e ] rabbitmq_web_dispatch             3.6.16
    [  ] rabbitmq_web_mqtt                 3.6.16
    [  ] rabbitmq_web_mqtt_examples        3.6.16
    [  ] rabbitmq_web_stomp                3.6.16
    [  ] rabbitmq_web_stomp_examples       3.6.16
    [  ] sockjs                            0.3.4
    
    Enable the RabbitMQ web management plugin; it listens on port 15672
    [root@ct ~]# rabbitmq-plugins enable rabbitmq_management
    The following plugins have been enabled:
    mochiweb
    webmachine
    rabbitmq_web_dispatch
    amqp_client
    rabbitmq_management_agent
    rabbitmq_management
    
    Applying plugin configuration to rabbit@likeadmin... started 6 plugins.
    
    Check the ports (25672, 5672, 15672)
    [root@ct ~]# ss -anpt | grep 5672
    LISTEN     0      128          *:25672                    *:*                   users:(("beam.smp",pid=35087,fd=46))
    LISTEN     0      128          *:15672                    *:*                   users:(("beam.smp",pid=35087,fd=57))
    LISTEN     0      128         :::5672                    :::*                   users:(("beam.smp",pid=35087,fd=55))
    

    Browse to http://20.0.0.10:15672; the default username and password are both guest.
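    The openstack user created above has no management tag, so by default it cannot log in to this web UI. If you want to use it there instead of guest, a sketch:

    [root@ct ~]# rabbitmqctl set_user_tags openstack administrator
    [root@ct ~]# rabbitmqctl list_users        # openstack should now show [administrator]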

    Install memcached

    Purpose
    memcached stores session data, and the identity service uses it to cache tokens. When you log in to the OpenStack dashboard, the session information that is generated is kept in memcached.

    [root@ct ~]# yum install -y memcached python-memcached
    # python-memcached is the client module the OpenStack services use to talk to memcached
    
    Edit the memcached configuration file
    [root@ct ~]# cat /etc/sysconfig/memcached
    PORT="11211"
    USER="memcached"
    MAXCONN="1024"
    CACHESIZE="64"
    OPTIONS="-l 127.0.0.1,::1,ct"
    
    [root@ct ~]# systemctl enable memcached
    [root@ct ~]# systemctl start memcached
    
    [root@ct ~]# netstat -nautp | grep 11211
    
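    A minimal sketch of confirming that memcached answers on 11211 (memcached-tool ships with the memcached package):

    [root@ct ~]# memcached-tool 127.0.0.1:11211 stats | head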

    Install etcd

    [root@ct ~]# yum -y install etcd
    
    Edit the etcd configuration file
    [root@ct ~]# cd /etc/etcd/
    [root@ct etcd]# ls
    etcd.conf
    [root@ct etcd]# vim etcd.conf
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"                      # data directory
    ETCD_LISTEN_PEER_URLS="http://192.168.100.100:2380"             # URL for peer traffic between etcd members (port 2380; hostnames are not valid here)
    ETCD_LISTEN_CLIENT_URLS="http://192.168.100.100:2379"           # URL serving client requests (port 2379)
    ETCD_NAME="ct"                                                  # this member's name in the cluster
    ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.100:2380"
    ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.100:2379"
    ETCD_INITIAL_CLUSTER="ct=http://192.168.100.100:2380"           # initial cluster members (peer URLs, port 2380)
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"                    # unique cluster token
    ETCD_INITIAL_CLUSTER_STATE="new"                                # "new" for a fresh cluster; "existing" means this member will try to join an already running cluster
    
    Enable at boot, start the service, and check the ports
    [root@ct ~]# systemctl enable etcd.service
    [root@ct ~]# systemctl start etcd.service
    [root@ct ~]# netstat -anutp |grep 2379
    [root@ct ~]# netstat -anutp |grep 2380
    
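    Beyond the port check, a sketch of verifying the member registration with etcdctl (assuming the v2 etcdctl API that the CentOS etcd package defaults to):

    [root@ct ~]# etcdctl --endpoints=http://192.168.100.100:2379 member list
    [root@ct ~]# etcdctl --endpoints=http://192.168.100.100:2379 cluster-health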

    Note: the base environment for the OpenStack platform is now complete.

  • XianDian OpenStack Cloud Platform Setup [very detailed] [images included]


    Preface

    I studied OpenStack in the first semester of my sophomore year and struggled because Baidu and CSDN had no tutorial for this exact version. Here I am writing up my OpenStack cloud platform setup for other beginners.

    Preparation:

    VMware Workstation Pro (I used version 15.5.2 build-15785246)

    CentOS-7-x86_64-DVD-1511.iso

    XianDian-IaaS-v2.2.iso

    The two required images:

    Link: https://pan.baidu.com/s/1RUzNN4j8myJhMlFerny7uw 
    Extraction code: bxae 

    Virtual machine configuration (identical for controller and compute):

    Memory: 3 GB

    Processors: 2

    Disk: 50 GB

    CD/DVD: CentOS-7-x86_64-DVD-1511.iso

    Network adapter: VMnet1

    Network adapter 2: VMnet2

    controller and compute network configuration

    Hostname      VMnet1           VMnet2
    controller    192.168.28.10    192.168.128.10
    compute       192.168.28.20    192.168.128.20

    On controller: power on the VM, change the hostname, stop the firewall, and set up the host mappings

    hostnamectl set-hostname controller
    systemctl stop firewalld
    systemctl disable firewalld
    setenforce 0
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    cat >>/etc/hosts<<eof
    192.168.28.10 controller
    192.168.28.20 compute
    eof

    compute

    hostnamectl set-hostname compute
    systemctl stop firewalld
    systemctl disable firewalld
    setenforce 0
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    cat >>/etc/hosts<<eof
    192.168.28.10 controller
    192.168.28.20 compute
    eof

    controller

    Next, go into the /opt directory, create the centos and iaas mount points, and use SecureCRT to upload the XianDian image into /opt

    cd /opt
    mkdir centos iaas

    Set the image files to mount automatically at boot

    cat >>/etc/fstab<<eof
    /dev/cdrom      /opt/centos     iso9660 defaults        0       0
    /opt/XianDian-IaaS-v2.2.iso     /opt/iaas   iso9660    defaults        0       0
    eof

    Apply the mounts immediately

    mount -a    
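    A quick sketch to confirm that both ISOs are actually mounted before configuring the repositories:

    mount | grep iso9660
    df -h | grep /opt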

    Go into /etc/yum.repos.d and delete or move the files whose names start with a capital C (otherwise they cause cache errors later), then create the repo file

    cd /etc/yum.repos.d
    mkdir bk
    mv C* bk
    cat >>/etc/yum.repos.d/local.repo<<eof
    [centos]
    name=centos
    baseurl=file:///opt/centos
    gpgcheck=0
    enabled=1
    
    [iaas]
    name=iaas
    baseurl=file:///opt/iaas/iaas-repo
    gpgcheck=0
    enabled=1
    eof

    Clean and rebuild the yum cache

    yum clean all
    yum makecache

    Output like the following means it worked (it does not need to match mine exactly; different ISO versions produce different cache contents)

    [root@controller ~]# yum makecache
    已加载插件:fastestmirror
    centos                                                                                                                                  | 3.6 kB  00:00:00     
    iaas                                                                                                                                    | 2.9 kB  00:00:00     
    (1/7): centos/filelists_db                                                                                                              | 2.9 MB  00:00:00     
    (2/7): centos/group_gz                                                                                                                  | 155 kB  00:00:00     
    (3/7): iaas/filelists_db                                                                                                                | 1.9 MB  00:00:00     
    (4/7): iaas/primary_db                                                                                                                  | 2.3 MB  00:00:00     
    (5/7): centos/primary_db                                                                                                                | 2.8 MB  00:00:00     
    (6/7): iaas/other_db                                                                                                                    | 692 kB  00:00:00     
    (7/7): centos/other_db                                                                                                                  | 1.2 MB  00:00:00     
    Determining fastest mirrors
    元数据缓存已建立

    Note: if you get the message shown below, one of the earlier steps went wrong. Either the yum repo is misconfigured or the ISO is not mounted properly; this is a common pitfall.

    [root@compute yum.repos.d]# yum makecache
    已加载插件:fastestmirror
    
    
     One of the configured repositories failed (未知),
     and yum doesn't have enough cached data to continue. At this point the only
     safe thing yum can do is fail. There are a few ways to work "fix" this:
    
         1. Contact the upstream for the repository and get them to fix the problem.
    
         2. Reconfigure the baseurl/etc. for the repository, to point to a working
            upstream. This is most often useful if you are using a newer
            distribution release than is supported by the repository (and the
            packages for the previous distribution release still work).
    
         3. Disable the repository, so yum won't use it by default. Yum will then
            just ignore the repository until you permanently enable it again or use
            --enablerepo for temporary usage:
    
                yum-config-manager --disable <repoid>
    
         4. Configure the failing repository to be skipped, if it is unavailable.
            Note that yum will try to contact the repo. when it runs most commands,
            so will have to try and fail each time (and thus. yum will be be much
            slower). If it is a very temporary problem though, this is often a nice
            compromise:
    
                yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
    
    Cannot find a valid baseurl for repo: centos

    If instead you get a message about /var/run/yum.pid,

    just run the following command:

    rm -rf /var/run/yum.pid

    Next, install vsftpd and iaas-xiandian

    yum install iaas-xiandian vsftpd -y

    Install the FTP (file transfer) service on controller
    and allow anonymous access to /opt

    cat >>/etc/vsftpd/vsftpd.conf<<eof
    anon_root=/opt
    eof
    systemctl restart network
    systemctl start vsftpd
    systemctl enable vsftpd

    Next, configure XianDian's openrc.sh answer file (every character must be exact)

    sed -i 's/.//' /etc/xiandian/openrc.sh
    sed -i 's/PASS=/PASS=000000/g' /etc/xiandian/openrc.sh
    sed -i 's/HOST_IP=/HOST_IP=192.168.28.10/g' /etc/xiandian/openrc.sh
    sed -i 's/HOST_NAME=/HOST_NAME=controller/g' /etc/xiandian/openrc.sh
    sed -i 's/HOST_IP_NODE=/HOST_IP_NODE=192.168.28.20/g' /etc/xiandian/openrc.sh
    sed -i 's/HOST_NAME_NODE=/HOST_NAME_NODE=compute/g' /etc/xiandian/openrc.sh
    sed -i 's/RABBIT_USER=/RABBIT_USER=openstack/g' /etc/xiandian/openrc.sh
    sed -i 's/DOMAIN_NAME=/DOMAIN_NAME=demo/g' /etc/xiandian/openrc.sh
    sed -i 's/METADATA_SECRET=/METADATA_SECRET=000000/g' /etc/xiandian/openrc.sh
    sed -i 's/INTERFACE_NAME=/INTERFACE_NAME=ens34/g' /etc/xiandian/openrc.sh
    
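    Before moving on, it is worth checking (a sketch) which variables in openrc.sh are still empty, since the installer scripts fail on unset passwords:

    grep -nE '^[A-Za-z_]+=$' /etc/xiandian/openrc.sh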

    Then inspect the file with   cat /etc/xiandian/openrc.sh   ; after configuration it should look like the following

    #--------------------system Config--------------------##
    #Controller Server Manager IP. example:x.x.x.x
    HOST_IP=192.168.28.10
    
    #Controller Server hostname. example:controller
    HOST_NAME=controller
    
    #Compute Node Manager IP. example:x.x.x.x
    HOST_IP_NODE=192.168.28.20
    
    #Compute Node hostname. example:compute
    HOST_NAME_NODE=compute
    
    #--------------------Rabbit Config ------------------##
    #user for rabbit. example:openstack
    RABBIT_USER=openstack
    
    #Password for rabbit user .example:000000
    RABBIT_PASS=000000
    
    #--------------------MySQL Config---------------------##
    #Password for MySQL root user . exmaple:000000
    DB_PASS=000000
    
    #--------------------Keystone Config------------------##
    #Password for Keystore admin user. exmaple:000000
    DOMAIN_NAME=demo
    ADMIN_PASS=000000
    DEMO_PASS=000000
    
    #Password for Mysql keystore user. exmaple:000000
    KEYSTONE_DBPASS=000000
    
    #--------------------Glance Config--------------------##
    #Password for Mysql glance user. exmaple:000000
    GLANCE_DBPASS=000000
    
    #Password for Keystore glance user. exmaple:000000
    GLANCE_PASS=000000
    
    #--------------------Nova Config----------------------##
    #Password for Mysql nova user. exmaple:000000
    NOVA_DBPASS=000000
    
    #Password for Keystore nova user. exmaple:000000
    NOVA_PASS=000000
    
    #--------------------Neturon Config-------------------##
    #Password for Mysql neutron user. exmaple:000000
    NEUTRON_DBPASS=000000
    
    #Password for Keystore neutron user. exmaple:000000
    NEUTRON_PASS=000000
    
    #metadata secret for neutron. exmaple:000000
    METADATA_SECRET=000000
    
    #External Network Interface. example:eth1
    INTERFACE_NAME=ens37
    
    #First Vlan ID in VLAN RANGE for VLAN Network. exmaple:101
    minvlan=
    
    #Last Vlan ID in VLAN RANGE for VLAN Network. example:200
    maxvlan=
    
    #--------------------Cinder Config--------------------##
    #Password for Mysql cinder user. exmaple:000000
    CINDER_DBPASS=
    
    #Password for Keystore cinder user. exmaple:000000
    CINDER_PASS=
    
    #Cinder Block Disk. example:md126p3
    BLOCK_DISK=
    
    #--------------------Trove Config--------------------##
    #Password for Mysql Trove User. exmaple:000000
    TROVE_DBPASS=
    
    #Password for Keystore Trove User. exmaple:000000
    TROVE_PASS=
    
    #--------------------Swift Config---------------------##
    #Password for Keystore swift user. exmaple:000000
    SWIFT_PASS=
    
    #The NODE Object Disk for Swift. example:md126p4.
    OBJECT_DISK=
    
    #The NODE IP for Swift Storage Network. example:x.x.x.x.
    STORAGE_LOCAL_NET_IP=
    
    #--------------------Heat Config----------------------##
    #Password for Mysql heat user. exmaple:000000
    HEAT_DBPASS=
    
    #Password for Keystore heat user. exmaple:000000
    HEAT_PASS=
    
    #--------------------Ceilometer Config----------------##
    #Password for Mysql ceilometer user. exmaple:000000
    CEILOMETER_DBPASS=
    
    #Password for Keystore ceilometer user. exmaple:000000
    CEILOMETER_PASS=
    
    #--------------------AODH Config----------------##
    #Password for Mysql AODH user. exmaple:000000
    AODH_DBPASS=
    
    #Password for Keystore AODH user. exmaple:000000
    AODH_PASS=

    That is it for controller for now; switch to configuring compute

    compute

    Check that this node can reach the files mounted under /opt on controller

    [root@compute yum.repos.d]# curl ftp://192.168.28.10
    -rw-r--r--    1 0        0        2851502080 Jun 04  2020 XianDian-IaaS-v2.2.iso
    dr-xr-xr-x    8 0        0            2048 Dec 09  2015 centos
    drwxr-xr-x    4 0        0            2048 Nov 06  2017 iaas

    Go into /opt, create the centos and iaas directories, move or delete the capital-C files, and create an ftp.repo file

    cd /opt
    mkdir centos iaas
    cd /etc/yum.repos.d/
    mkdir bk
    mv C* bk
    touch ftp.repo

    Write the following into /etc/yum.repos.d/ftp.repo

    cat >>/etc/yum.repos.d/ftp.repo<<eof
    [centos]
    name=centos
    baseurl=ftp://192.168.28.10/centos
    gpgcheck=0
    enabled=1
    
    [iaas]
    name=iaas
    baseurl=ftp://192.168.28.10/iaas/iaas-repo
    gpgcheck=0
    enabled=1
    eof

    Clean and rebuild the yum cache

    yum clean all
    yum makecache

    Output like the following means it succeeded

    [root@compute yum.repos.d]# yum makecache
    已加载插件:fastestmirror
    centos                                                                                                                                  | 3.6 kB  00:00:00     
    iaas                                                                                                                                    | 2.9 kB  00:00:00     
    (1/7): centos/group_gz                                                                                                                  | 155 kB  00:00:00     
    (2/7): centos/filelists_db                                                                                                              | 2.9 MB  00:00:00     
    (3/7): centos/primary_db                                                                                                                | 2.8 MB  00:00:00     
    (4/7): centos/other_db                                                                                                                  | 1.2 MB  00:00:00     
    (5/7): iaas/filelists_db                                                                                                                | 1.9 MB  00:00:00     
    (6/7): iaas/primary_db                                                                                                                  | 2.3 MB  00:00:00     
    (7/7): iaas/other_db                                                                                                                    | 692 kB  00:00:00     
    Determining fastest mirrors
    元数据缓存已建立

    Install iaas-xiandian, go into /etc/xiandian, and rename openrc.sh to openrc.sh.bk as a backup in case something goes wrong.

    Then copy /etc/xiandian/openrc.sh from controller (192.168.28.10) to this machine.

    yum -y install iaas-xiandian
    cd /etc/xiandian
    mv openrc.sh openrc.sh.bk
    scp 192.168.28.10:/etc/xiandian/openrc.sh openrc.sh

     

    Run the Installation Scripts

    Run the initialization script on both nodes:

    iaas-pre-host.sh

    Note: this takes a while; do not press Enter or anything else while it runs. When the [reboot] prompt appears you must reboot, otherwise instances cannot be launched later and you will have serious problems!!!

    Reboot both machines

    reboot

    Restart the virtual machines

    Control node installation (controller)

    cd /usr/local/bin
    cat >>/usr/local/bin/all-in-one.sh<<eof
    iaas-install-mysql.sh 
    iaas-install-keystone.sh 
    iaas-install-glance.sh 
    iaas-install-nova-controller.sh
    iaas-install-neutron-controller.sh
    iaas-install-neutron-controller-gre.sh
    iaas-install-dashboard.sh
    eof
    source all-in-one.sh

    Compute node installation (compute)

    cd /usr/local/bin
    cat >>/usr/local/bin/all-in-one.sh<<eof
    iaas-install-nova-compute.sh
    iaas-install-neutron-compute.sh
    iaas-install-neutron-compute-gre.sh
    eof
    source all-in-one.sh

     

    The installation takes quite a long time; please be patient.

    Once it finishes,

    use the Chrome browser for access (other browsers do not work well and may fail to load the page):

    http://<controller IP>/dashboard

    My controller IP is 192.168.28.10,

    so I visit

    http://192.168.28.10/dashboard

    On the login page:

    Domain: demo

    Username: admin

    Password: 000000

    Log in.

    That completes the OpenStack cloud platform installation.

     

  • OpenStack cloud platform setup notes: comparison with CloudStack, build process, and troubleshooting

    Table of Contents

     

    Preface

    1. Comparison with CloudStack

    2. Build Process

    3. Problems and Solutions

    3.1. Problem 1: neutron verification fails (solved)

    3.2. Problem 2: instance will not boot (solved)

    3.3. Problem 3: error while verifying the block storage service (solved)

    3.4. Problem 4: error attaching a volume to an instance (solved)

    4. Summary

    5. References


    Preface

    After two days of experimenting I have an initial OpenStack cloud platform running, with all of the basic functionality in place.

    1. Comparison with CloudStack

    Compared with CloudStack, OpenStack is clearly more complex to set up.

    CloudStack only needs the management server and the client installed.

    OpenStack, on the other hand, has many components to install: the identity service Keystone, the image service Glance, the placement service Placement, the compute service Nova, the networking service Neutron, the web UI Horizon, and the block storage service Cinder.

     

    2. Build Process

    There is plenty of documentation for OpenStack; for the detailed installation steps see:

    Official documentation: https://docs.openstack.org/install-guide/openstack-services.html

    A recent tutorial: https://blog.csdn.net/chengyinwu/category_9242444.html

    Results:

    [Screenshot: the instance running]

    [Screenshot: logged in to the instance]

    An interesting observation: although the OpenStack setup is more involved than CloudStack's, comparing the two experiences, the OpenStack build actually went much more smoothly and was much faster.

    Likely reasons:

    1. The OpenStack community is much more active; a quick search turns up plenty of installation tutorials, including fairly recent ones, whereas CloudStack tutorials are scarcer and older.

    2. CloudStack's installation is more automated, but when something goes wrong you may have no idea which part failed. Each OpenStack service is verified right after it is installed, so troubleshooting is more targeted.

    Another likely reason is that I had already studied CloudStack; even though the platforms differ, the general approach carries over. Also, OpenStack is written in Python while CloudStack is in Java, and since I am more comfortable with Python it is easier to find the cause of errors in the logs.

     

    3. Problems and Solutions

    Although the OpenStack build went relatively smoothly, I still ran into some problems; some are solved, others are still being worked on.

    3.1. Problem 1: neutron verification fails (solved)

    This is the expected verification result

    $ openstack network agent list
    
    +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
    | ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
    +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
    | 0400c2f6-4d3b-44bc-89fa-99093432f3bf | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
    | 83cf853d-a2f2-450a-99d7-e9c6fc08f4c3 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
    | ec302e51-6101-43cf-9f19-88a78613cbee | Linux bridge agent | compute    | None              | True  | UP    | neutron-linuxbridge-agent |
    | fcb9bc6e-22b1-43bc-9054-272dd517d025 | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
    +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

     

    This is the failing result

    [root@controller ~]# openstack network agent list
    +--------------------------------------+----------------+------------+-------------------+-------+-------+------------------------+
    | ID                                   | Agent Type     | Host       | Availability Zone | Alive | State | Binary                 |
    +--------------------------------------+----------------+------------+-------------------+-------+-------+------------------------+
    | 3a004b63-a091-4b14-acdc-efac793065f2 | DHCP agent     | controller | nova              | :-)   | UP    | neutron-dhcp-agent     |
    | 71a74d59-26f7-4a32-8505-4d88a485bcf6 | Metadata agent | controller | None              | :-)   | UP    | neutron-metadata-agent |
    +--------------------------------------+----------------+------------+-------------------+-------+-------+------------------------+
    

     

    You can see that the Linux bridge agent is missing on both controller and computer1

     

    Check the status of neutron-linuxbridge-agent

    [root@controller ~]# systemctl status neutron-linuxbridge-agent
    ● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
       Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
       Active: activating (auto-restart) (Result: exit-code) since Thu 2020-04-02 08:23:53 CST; 151ms ago
      Process: 4685 ExecStart=/usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-linuxbridge-agent --log-file /var/log/neutron/linuxbridge-agent.log (code=exited, status=1/FAILURE)
      Process: 4679 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
     Main PID: 4685 (code=exited, status=1/FAILURE)
    

    The log file is at /var/log/neutron/linuxbridge-agent.log

     

    Searching the log file turns up this error message

    2020-04-02 08:24:17.914 5032 ERROR neutron.plugins.ml2.drivers.linuxbridge.agent
    .linuxbridge_neutron_agent [-] Interface eth0 for physical network provider 
    does not exist. Agent terminated!
    

    The error says that the physical interface eth0 does not exist.

    Check the network interfaces

    [root@controller ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 00:0c:29:60:42:1a brd ff:ff:ff:ff:ff:ff
        inet 10.0.0.11/24 brd 10.0.0.255 scope global ens33
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe60:421a/64 scope link 
           valid_lft forever preferred_lft forever
    

    The controller's NIC is actually ens33. Either renaming the interface or changing the linuxbridge_agent configuration will fix this;

    I chose to change linuxbridge_agent.ini, which currently maps the provider network to eth0:

    [root@controller ~]# cat /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    [DEFAULT]
    
    
    [linux_bridge]
    physical_interface_mappings = provider:eth0
    
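    The actual fix is to point the mapping at the interface that exists (ens33 here) and restart the agent; a sketch:

    [root@controller ~]# sed -i 's/provider:eth0/provider:ens33/' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    [root@controller ~]# systemctl restart neutron-linuxbridge-agent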

    Check whether neutron-linuxbridge-agent now starts properly

    [root@controller ~]# systemctl status neutron-linuxbridge-agent
    ● neutron-linuxbridge-agent.service - OpenStack Neutron Linux Bridge Agent
       Loaded: loaded (/usr/lib/systemd/system/neutron-linuxbridge-agent.service; enabled; vendor preset: disabled)
       Active: active (running) since Thu 2020-04-02 08:37:36 CST; 7s ago
      Process: 14381 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
     Main PID: 14387 (/usr/bin/python)
       CGroup: /system.slice/neutron-linuxbridge-agent.service
               └─14387 /usr/bin/python2 /usr/bin/neutron-linuxbridge-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file ...
    
    

    Problem solved

    [root@controller ~]#  openstack network agent list
    +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
    | ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
    +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
    | 3a004b63-a091-4b14-acdc-efac793065f2 | DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
    | 71a74d59-26f7-4a32-8505-4d88a485bcf6 | Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent    |
    | c56d1eff-d6bf-45e6-b06a-33f7c2efaa4f | Linux bridge agent | controller | None              | :-)   | UP    | neutron-linuxbridge-agent |
    | cbd5b8bd-4dfc-4af8-8dab-f551f761f39c | Linux bridge agent | computer1  | None              | :-)   | UP    | neutron-linuxbridge-agent |
    +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
    
    

    3.2. Problem 2: instance will not boot (solved)

    After creating an instance and connecting to its console, the error is:

    Booting from hard disk
    Boot failed. not a bootable disk
    

    The instance is created successfully but cannot be logged into.

    Searching turned up the same problem:

    https://ask.openstack.org/en/question/4117/boot-failed-not-a-bootable-disk/

    According to the discussion there, the image is probably at fault.

    Going back to the CirrOS image step and checking the original download site,

    the official site lists the file as:

          cirros-0.4.0-x86_64-disk.img                       2017-11-19 19:59   12M

    while the file I downloaded earlier is only 273 bytes (I did notice at download time that the size looked off, but did not pay much attention):

    [root@controller download]# ll
    total 4
    -rw-r--r--. 1 root root 273 Apr  1 13:01 cirros-0.4.0-x86_64-disk.img
    
    

    The sizes are clearly different. Comparing, the download command in the tutorial was

    wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

    My VM did not have wget installed, and to save resources I downloaded with curl instead:

    curl -O http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img

    The result.....

    I used to use the two commands interchangeably; it turns out there is a real difference here.

    Note:

    curl can download files, but by default it does not follow HTTP redirects; -O/-o simply save whatever the server returns for the requested URL, so here it saved the 302 redirect page.

    wget, whose basic job is downloading files, follows redirects by default.
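    In other words, adding -L so that curl follows the redirect would also have worked; a sketch:

    curl -L -O http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img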
    Downloading with wget instead

    wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    --2020-04-02 15:28:17--  http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    Resolving download.cirros-cloud.net (download.cirros-cloud.net)... 64.90.42.85, 2607:f298:6:a036::bd6:a72a
    Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|64.90.42.85|:80... connected.
    HTTP request sent, awaiting response... 302 Found
    Location: https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img [following]
    --2020-04-02 15:28:18--  https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img
    Resolving github.com (github.com)... 13.250.177.223
    Connecting to github.com (github.com)|13.250.177.223|:443... failed: Connection refused.
    

    The link redirects and ultimately downloads from GitHub. Suspecting what had happened, I immediately checked the file downloaded earlier with curl.

    [root@controller download]# cat cirros-0.4.0-x86_64-disk.img 
    <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>302 Found</title>
    </head><body>
    <h1>Found</h1>
    <p>The document has moved <a href="https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img">here</a>.</p>
    </body></html>
    

    Sure enough, what I had downloaded before was just the redirect page.

    With the network up, download it again with wget

    [root@controller test]# wget http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    --2020-04-02 15:30:39--  http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    Resolving download.cirros-cloud.net (download.cirros-cloud.net)... 64.90.42.85, 2607:f298:6:a036::bd6:a72a
    Connecting to download.cirros-cloud.net (download.cirros-cloud.net)|64.90.42.85|:80... connected.
    HTTP request sent, awaiting response... 302 Found
    Location: https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img [following]
    --2020-04-02 15:30:39--  https://github.com/cirros-dev/cirros/releases/download/0.4.0/cirros-0.4.0-x86_64-disk.img
    Resolving github.com (github.com)... 52.74.223.119
    Connecting to github.com (github.com)|52.74.223.119|:443... connected.
    HTTP request sent, awaiting response... 302 Found
    Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/219785102/b2074f00-411a-11ea-9620-afb551cf9af3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200402%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200402T072953Z&X-Amz-Expires=300&X-Amz-Signature=723552189308fe93947e74365620857df661dbe4536d96181d2db1ef5aca0f1b&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dcirros-0.4.0-x86_64-disk.img&response-content-type=application%2Foctet-stream [following]
    --2020-04-02 15:30:40--  https://github-production-release-asset-2e65be.s3.amazonaws.com/219785102/b2074f00-411a-11ea-9620-afb551cf9af3?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20200402%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20200402T072953Z&X-Amz-Expires=300&X-Amz-Signature=723552189308fe93947e74365620857df661dbe4536d96181d2db1ef5aca0f1b&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dcirros-0.4.0-x86_64-disk.img&response-content-type=application%2Foctet-stream
    Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.143.188
    Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.143.188|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 12716032 (12M) [application/octet-stream]
    Saving to: ‘cirros-0.4.0-x86_64-disk.img’
    
    100%[==============================================================================>] 12,716,032  65.2KB/s   in 3m 49s 
    
    2020-04-02 15:34:30 (54.2 KB/s) - ‘cirros-0.4.0-x86_64-disk.img’ saved [12716032/12716032]
    

    Re-create the Glance image from this file and the problem is gone.
    Log in again

     

    3.3. Problem 3: error while verifying the block storage service (solved)

    [root@controller test]# cinder service-list
    ERROR: Unable to establish connection to http://controller:8776/: HTTPConnectionPool(host='controller', port=8776): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f134fa0bb50>: Failed to establish a new connection: [Errno 111] Connection refused',))
    

    Check whether the service is listening

    [root@controller test]# ss -nplt | grep 8776
    LISTEN     0      128          *:8776                     *:*                   users:(("cinder-api",pid=46305,fd=7),("cinder-api",pid=46304,fd=7),("cinder-api",pid=46303,fd=7),("cinder-api",pid=46302,fd=7),("cinder-api",pid=46233,fd=7))
    

    Check again

    [root@controller test]# cinder service-list
    +------------------+------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
    | Binary           | Host       | Zone | Status  | State | Updated_at                 | Cluster | Disabled Reason | Backend State |
    +------------------+------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
    | cinder-scheduler | controller | nova | enabled | up    | 2020-04-02T08:44:20.000000 | -       | -               |               |
    +------------------+------------+------+---------+-------+----------------------------+---------+-----------------+---------------+
    [root@controller test]# 

    The service had probably not finished starting. The controller node may have too few resources allocated: memory usage was above 90% and 1.8 GB of swap was in use. Waiting a while was enough.

    [root@controller test]# free -h
                  total        used        free      shared  buff/cache   available
    Mem:           2.7G        2.5G        116M        2.1M         80M         45M
    Swap:          3.9G        1.8G        2.1G

    3.4. Problem 4: error attaching a volume to an instance (solved)

    [root@controller test]# openstack server add volume instance volume1
    Invalid input received: Invalid volume: Volume attachments can not be created if the volume is in an error state. The Volume 27e02eb4-e33f-49c1-bf69-e4cbfbb6145a currently has a status of: error  (HTTP 400) (Request-ID: req-04b8eaef-0cde-45d7-b022-7b8986b7755c) (HTTP 400) (Request-ID: req-7b09f9a9-4ea1-48cc-82c0-b0ae55b0fee0)
    
    

    Check the log

    [root@computer1 ~]# cat /var/log/cinder/volume.log 
    2020-04-02 17:25:03.628 2330 INFO cinder.rpc [req-b42ed688-f013-46cf-8fa6-31b088eb9c74 - - - - -] Automatically selected cinder-scheduler objects version 1.38 as minimum service version.
    2020-04-02 17:25:03.637 2330 INFO cinder.rpc [req-b42ed688-f013-46cf-8fa6-31b088eb9c74 - - - - -] Automatically selected cinder-scheduler RPC version 3.11 as minimum service version.
    2020-04-02 17:25:03.708 2330 INFO cinder.volume.manager [req-b42ed688-f013-46cf-8fa6-31b088eb9c74 - - - - -] Determined volume DB was empty at startup.
    

    Check the services

    [root@controller test]# openstack volume service list
    +------------------+---------------+------+---------+-------+----------------------------+
    | Binary           | Host          | Zone | Status  | State | Updated At                 |
    +------------------+---------------+------+---------+-------+----------------------------+
    | cinder-scheduler | controller    | nova | enabled | up    | 2020-04-02T09:47:11.000000 |
    | cinder-volume    | computer1@lvm | nova | enabled | down  | 2020-04-02T09:25:04.000000 |
    +------------------+---------------+------+---------+-------+----------------------------+
    [root@controller test]# 
    

    The State of computer1@lvm here is down.

    Searching for information:

    https://blog.csdn.net/zhaihaifei/article/details/79636930

    It may be caused by controller and computer1 being out of time sync.

    After adjustment the clocks were synchronized;

    running date on both nodes returned the same time.

    But openstack volume service list still showed the service as down.

    If cinder-scheduler and cinder-volume are restarted and openstack volume service list is run again, both services show State up with only a small gap between their Updated At timestamps.

    However, running openstack volume service list again after a while shows the gap growing; once it exceeds 60 seconds, cinder-volume's State becomes down.

    My initial guess was that the machine was under-provisioned,

    so I tried increasing computer1's CPU cores and memory; further investigation is pending.



    Update, May 3, 2020

    Solved for now, though I do not fully understand why the fix works.

    [root@controller rabbitmq]# openstack volume service list
    +------------------+--------------+------+---------+-------+----------------------------+
    | Binary           | Host         | Zone | Status  | State | Updated At                 |
    +------------------+--------------+------+---------+-------+----------------------------+
    | cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T05:32:06.000000 |
    | cinder-volume    | computer@lvm | nova | enabled | up    | 2020-05-03T05:32:05.000000 |
    +------------------+--------------+------+---------+-------+----------------------------+
    [root@controller rabbitmq]# openstack volume service list
    +------------------+--------------+------+---------+-------+----------------------------+
    | Binary           | Host         | Zone | Status  | State | Updated At                 |
    +------------------+--------------+------+---------+-------+----------------------------+
    | cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T05:33:16.000000 |
    | cinder-volume    | computer@lvm | nova | enabled | up    | 2020-05-03T05:33:15.000000 |
    +------------------+--------------+------+---------+-------+----------------------------+
    [root@controller rabbitmq]# openstack volume create --size 1 volume1
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | attachments         | []                                   |
    | availability_zone   | nova                                 |
    | bootable            | false                                |
    | consistencygroup_id | None                                 |
    | created_at          | 2020-05-03T05:37:50.000000           |
    | description         | None                                 |
    | encrypted           | False                                |
    | id                  | 629842c9-c107-424f-b24d-344ca0b25710 |
    | migration_status    | None                                 |
    | multiattach         | False                                |
    | name                | volume1                              |
    | properties          |                                      |
    | replication_status  | None                                 |
    | size                | 1                                    |
    | snapshot_id         | None                                 |
    | source_volid        | None                                 |
    | status              | creating                             |
    | type                | __DEFAULT__                          |
    | updated_at          | None                                 |
    | user_id             | 2bd00838cd344078982969448d50bd5b     |
    +---------------------+--------------------------------------+
    [root@controller rabbitmq]# openstack volume list
    +--------------------------------------+---------+-----------+------+-------------+
    | ID                                   | Name    | Status    | Size | Attached to |
    +--------------------------------------+---------+-----------+------+-------------+
    | 629842c9-c107-424f-b24d-344ca0b25710 | volume1 | available |    1 |             |
    +--------------------------------------+---------+-----------+------+-------------+
    [root@controller rabbitmq]#
    

    Solution:

    Set up a separate storage node.

    The odd part is that the configuration is exactly the same as before; it works with a standalone storage node but not when the roles are combined.

    It is worth noting that the cinder log file on the compute node

    contained

    2020-05-03 10:06:52.308 3546 ERROR oslo.messaging._drivers.impl_rabbit [req-46ccce17-f2b4-4e8c-920a-71c6f3137f0f - - - - -] Connection failed: failed to resolve broker hostname (retrying in 32.0 seconds): error: failed to resolve broker hostname
    

    an error saying the broker hostname could not be resolved.
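    "failed to resolve broker hostname" usually means the RabbitMQ host named in transport_url (controller here) cannot be resolved from the node running cinder-volume. A quick check, as a sketch:

    [root@computer1 ~]# grep controller /etc/hosts                   # should list the controller's IP
    [root@computer1 ~]# grep transport_url /etc/cinder/cinder.conf
    [root@computer1 ~]# ping -c1 controller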

    The difference between the two runs is this:

    The first time, the log kept reporting this error, and running openstack volume service list on the control node showed that the compute node's openstack-cinder-volume Updated At field never advanced until its State turned to down. Restarting openstack-cinder-volume on the compute node brought the State back to up, but on refresh the Updated At field still never changed.

    # clearly the second row's Updated At never changes while the first row's does; once the gap exceeds 60 seconds the State becomes down
    [root@controller ~]# openstack volume service list
    +------------------+--------------+------+---------+-------+----------------------------+
    | Binary           | Host         | Zone | Status  | State | Updated At                 |
    +------------------+--------------+------+---------+-------+----------------------------+
    | cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T08:18:01.000000 |
    | cinder-volume    | computer@lvm | nova | enabled | down  | 2020-05-03T08:16:32.000000 |
    +------------------+--------------+------+---------+-------+----------------------------+
    [root@controller ~]# openstack volume service list
    +------------------+--------------+------+---------+-------+----------------------------+
    | Binary           | Host         | Zone | Status  | State | Updated At                 |
    +------------------+--------------+------+---------+-------+----------------------------+
    | cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T08:18:11.000000 |
    | cinder-volume    | computer@lvm | nova | enabled | down  | 2020-05-03T08:16:32.000000 |
    +------------------+--------------+------+---------+-------+----------------------------+
    [root@controller ~]# openstack volume service list
    +------------------+--------------+------+---------+-------+----------------------------+
    | Binary           | Host         | Zone | Status  | State | Updated At                 |
    +------------------+--------------+------+---------+-------+----------------------------+
    | cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T08:18:21.000000 |
    | cinder-volume    | computer@lvm | nova | enabled | down  | 2020-05-03T08:16:32.000000 |
    +------------------+--------------+------+---------+-------+----------------------------+
    

    After the storage role was split onto its own node, the cinder log still reported the same error, but after three or four occurrences it recovered:

    2020-05-03 13:35:28.522 1597 ERROR oslo.messaging._drivers.impl_rabbit [req-e03f1c93-c88b-4271-aa0c-4a61c6d9dc6c - - - - -] Connection failed: failed to resolve broker hostname (retrying in 32.0 seconds): error: failed to resolve broker hostname
    2020-05-03 13:36:05.305 1597 ERROR oslo.messaging._drivers.impl_rabbit [req-e03f1c93-c88b-4271-aa0c-4a61c6d9dc6c - - - - -] Connection failed: failed to resolve broker hostname (retrying in 32.0 seconds): error: failed to resolve broker hostname
    2020-05-03 13:36:41.427 1597 ERROR oslo.messaging._drivers.impl_rabbit [req-e03f1c93-c88b-4271-aa0c-4a61c6d9dc6c - - - - -] Connection failed: failed to resolve broker hostname (retrying in 32.0 seconds): error: failed to resolve broker hostname
    2020-05-03 13:37:16.573 1597 ERROR oslo.messaging._drivers.impl_rabbit [req-e03f1c93-c88b-4271-aa0c-4a61c6d9dc6c - - - - -] Connection failed: failed to resolve broker hostname (retrying in 32.0 seconds): error: failed to resolve broker hostname
    2020-05-03 13:37:51.318 14438 INFO cinder.rpc [req-c350e9c6-b4b7-48df-940a-62891dba18d7 2bd00838cd344078982969448d50bd5b c9e71421e9264b43964133376091ce1b - default default] Automatically selected cinder-backup objects version 1.38 as minimum service version.
    2020-05-03 13:37:51.324 14438 INFO cinder.rpc [req-c350e9c6-b4b7-48df-940a-62891dba18d7 2bd00838cd344078982969448d50bd5b c9e71421e9264b43964133376091ce1b - default default] Automatically selected cinder-backup RPC version 2.1 as minimum service version.
    2020-05-03 13:37:51.335 14438 INFO cinder.rpc [req-c350e9c6-b4b7-48df-940a-62891dba18d7 2bd00838cd344078982969448d50bd5b c9e71421e9264b43964133376091ce1b - default default] Automatically selected cinder-volume objects version 1.38 as minimum service version.
    2020-05-03 13:37:51.340 14438 INFO cinder.rpc [req-c350e9c6-b4b7-48df-940a-62891dba18d7 2bd00838cd344078982969448d50bd5b c9e71421e9264b43964133376091ce1b - default default] Automatically selected cinder-volume RPC version 3.16 as minimum service version.
    2020-05-03 13:37:51.438 14438 INFO cinder.volume.flows.manager.create_volume [req-c350e9c6-b4b7-48df-940a-62891dba18d7 2bd00838cd344078982969448d50bd5b c9e71421e9264b43964133376091ce1b - default default] Volume 629842c9-c107-424f-b24d-344ca0b25710: being created as raw with specification: {'status': u'creating', 'volume_size': 1, 'volume_name': u'volume-629842c9-c107-424f-b24d-344ca0b25710'}
    2020-05-03 13:37:51.692 1597 ERROR oslo.messaging._drivers.impl_rabbit [req-e03f1c93-c88b-4271-aa0c-4a61c6d9dc6c - - - - -] Connection failed: failed to resolve broker hostname (retrying in 32.0 seconds): error: failed to resolve broker hostname
    2020-05-03 13:37:52.011 14438 INFO cinder.volume.flows.manager.create_volume [req-c350e9c6-b4b7-48df-940a-62891dba18d7 2bd00838cd344078982969448d50bd5b c9e71421e9264b43964133376091ce1b - default default] Volume volume-629842c9-c107-424f-b24d-344ca0b25710 (629842c9-c107-424f-b24d-344ca0b25710): created successfully
    @
    

    At that point, checking from the control node shows that the openstack-cinder-volume Updated At field does advance.

    [root@controller rabbitmq]# openstack volume service list
    +------------------+--------------+------+---------+-------+----------------------------+
    | Binary           | Host         | Zone | Status  | State | Updated At                 |
    +------------------+--------------+------+---------+-------+----------------------------+
    | cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T05:32:06.000000 |
    | cinder-volume    | computer@lvm | nova | enabled | up    | 2020-05-03T05:32:05.000000 |
    +------------------+--------------+------+---------+-------+----------------------------+
    [root@controller rabbitmq]# openstack volume service list
    +------------------+--------------+------+---------+-------+----------------------------+
    | Binary           | Host         | Zone | Status  | State | Updated At                 |
    +------------------+--------------+------+---------+-------+----------------------------+
    | cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T05:33:16.000000 |
    | cinder-volume    | computer@lvm | nova | enabled | up    | 2020-05-03T05:33:15.000000 |
    +------------------+--------------+------+---------+-------+----------------------------+
    [root@controller ~]# openstack volume service list
    +------------------+--------------+------+---------+-------+----------------------------+                                
    | Binary           | Host         | Zone | Status  | State | Updated At                                                    |
    +------------------+--------------+------+---------+-------+----------------------------+                                   
    | cinder-scheduler | controller   | nova | enabled | up    | 2020-05-03T08:14:01.000000 |                                   
    | cinder-volume    | computer@lvm | nova | enabled | up    | 2020-05-03T08:14:01.000000 |                                   
    +------------------+--------------+------+---------+-------+----------------------------+                                   
    

    4. Summary

    Overall, the OpenStack build is essentially complete; instances can be created and connected to. Although OpenStack needs many services, the setup steps are largely repetitive, and the detailed documentation reduces the difficulty considerably.

    OpenStack does put some demands on the hardware: when using the dashboard, the web server is noticeably slow to respond. I plan to upgrade the machine and experiment further.

    5. References

    Official documentation: https://docs.openstack.org/install-guide/openstack-services.html

    A recent tutorial: https://blog.csdn.net/chengyinwu/category_9242444.html

     

  • Fuel is a tool designed for end-to-end, "one-click" deployment of OpenStack. Its features cover automated PXE-based OS installation, DHCP, orchestration, and Puppet configuration management, plus health checks of key OpenStack services and real-time log viewing...
  • Openstack云平台搭建.docx
  • openstack (cloud platform setup)

    Cloud platform setup material. OpenStack is an open-source cloud computing management platform project made up of several main components that work together. OpenStack supports almost every type of cloud environment; the project's goal is a cloud management platform that is simple to implement, massively scalable, feature-rich, and standardized...
  • Create two CentOS 7 virtual machines in VMware as the two nodes of the cloud platform, configured as follows: 1. The first VM, the control node: 2 CPUs, 3 GB or more of RAM, a 50 GB disk, one NAT network adapter and one host-only adapter. VM partitioning: 200 MB boot partition, swap partition...
  • Install the OpenStack prerequisite package, which prevents higher-priority software from being overwritten by lower-priority software: [root@controller ~]# yum install -y yum-plugin-priorities Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile base ...
  • OpenStack云平台实战》课程测试试卷参考答案-1(20210628214350).pdf
  • An OpenStack test paper, usable for school exams; download it if you need it. ...OpenStack provides scalable, elastic cloud computing services for private and public clouds. The project's goal is a cloud management platform that is simple to implement, massively scalable, feature-rich, and standardized.
  • i "s/#/ /g" openrc.sh sed -i "s/ /##/" openrc.sh Then edit openrc.sh by hand for the remaining values: line 3 controller IP, line 6 controller hostname, line 9 compute node IP, line 12 compute node hostname, line 16 openstack, line 27 demo (domain name), line 56 password, line 59 the second NIC...
  • ... to achieve rapid deployment of an OpenStack platform. Kolla-Ansible aims to replace OpenStack's rigid, painful, resource-intensive deployment process with a flexible, painless, and inexpensive one. Small businesses...
  • OpenStack cloud platform setup and deployment; Keywords: Openstack, Cloud Computing, IaaS; 1. Introduction ...
  • Hands-on: building an OpenStack cloud platform step by step

    Today we build an OpenStack cloud platform for a friend, deploying the Stein release with Kolla. Kolla is a project for automated OpenStack deployment based on Docker and Ansible; Docker handles image building and container management, while...
  • Building a cloud platform based on OpenStack

    Teaches you how to build a cloud platform based on OpenStack.
  • Cloud computing has become one of the most frequently heard buzzwords in IT. In just a few years it has gone from a concept to products woven into our daily lives, a striking illustration of how strong this wave of cloud computing is... OpenStack provides an operating platform, or toolkit, for orchestrating clouds.
  • XianDian OpenStack Cloud Platform (Setup and Operations)

    XianDian OpenStack cloud platform (setup and operations): IaaS platform construction and IaaS platform operations.
  • To improve the virtual network performance of the OpenStack cloud platform, this paper studies in depth how the virtual network in an OpenStack cloud environment is composed, analyzes the causes of virtual network bottlenecks, and gives concrete methods for improving the platform's virtual network performance. Finally, stock OpenStack is compared with the optimized...
  • Analyzes the security characteristics of the OpenStack cloud platform against both the general security requirements and the cloud computing security extension requirements, proposes a secure OpenStack cloud platform solution, implements and tests it, and produces an "OpenStack Cloud Security Deployment Guide". (Platform version...
