  • openstack virtual machine migration

    2018-06-05 21:35:00

    Migration. If your cloud uses shared storage, the nova live-migration command is all you need. First, get the list of instances that need to be migrated:

    # nova list --host c01.example.com --all-tenants

    Next, move them off one at a time:

    # nova live-migration <uuid> c02.example.com

    If you are not using shared storage, add the --block-migrate option:

    # nova live-migration --block-migrate <uuid> c02.example.com
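    The drain loop described above (list the host's instances, then live-migrate them one by one) can be sketched in Python by driving the nova CLI. This is a minimal sketch, assuming the host names from the examples; the helper functions are illustrative and not part of nova:

```python
import subprocess

def migration_command(uuid, dest_host, block_migrate=False):
    """Build one `nova live-migration` invocation for an instance."""
    cmd = ["nova", "live-migration"]
    if block_migrate:
        cmd.append("--block-migrate")  # needed when storage is not shared
    cmd += [uuid, dest_host]
    return cmd

def drain_host(uuids, dest_host, block_migrate=False):
    """Live-migrate every instance in `uuids` to `dest_host`, one at a time."""
    for uuid in uuids:
        subprocess.check_call(migration_command(uuid, dest_host, block_migrate))
```

    The instance UUIDs would come from parsing `nova list --host ... --all-tenants`.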

     

    Reposted from: https://www.cnblogs.com/hixiaowei/p/9142055.html

  • openstack virtual machine migration, a walkthrough

    2017-08-02 09:25:35
     ***************When you find that your talent cannot yet support your ambition, quiet down and study***************
    

    Reposted from: http://www.cnblogs.com/kevingrace/p/6018676.html (author: 散尽浮华)

    Notes from an openstack virtual machine migration

    Environment:
    Compute node linux-node1.openstack: 192.168.1.8
    Compute node linux-node2.openstack: 192.168.1.17

    Both compute nodes sit under the same controller (192.168.1.8 is both the controller and one of the compute nodes). The goal is to migrate the VM kvm-server005 from linux-node1.openstack to linux-node2.openstack.

    I. Offline migration of an openstack VM ("cold migration": the VM is shut down before the move)

    The steps were recorded as follows:

    On linux-node1.openstack:

    1) List the virtual machines

    [root@linux-node1 src]# source admin-openrc.sh
    
    
    [root@linux-node1 src]# nova list
    


    2) Stop the VM to be migrated, kvm-server005

    [root@linux-node1 src]# nova stop 3483d9f1-4015-48d9-9837-b67ca82dd54d
    Request to stop server 3483d9f1-4015-48d9-9837-b67ca82dd54d has been accepted.
    [root@linux-node1 src]# nova list
    


    3) Find the hypervisor hosting kvm-server005

    [root@linux-node1 src]# nova show 3483d9f1-4015-48d9-9837-b67ca82dd54d | grep 'OS-EXT-SRV-ATTR:hos'
    | OS-EXT-SRV-ATTR:host                      | linux-node1.openstack
    

    4) On the source host linux-node1.openstack, copy kvm-server005's instance data over to the destination host linux-node2.openstack

    [root@linux-node1 src]# cd /var/lib/nova/instances
    [root@linux-node1 instances]# ls
    
    30e5ba3e-3942-4119-9ba6-7523cf865b6f  5ec50ae5-a1f9-4425-b509-cfeb5ef62ca3  a5863e46-ef75-4601-a9df-505da5db58ed  compute_nodes
    3483d9f1-4015-48d9-9837-b67ca82dd54d  9acdb28b-02c2-41bb-87c4-5f3a8fa008ab  b6a4738d-7e01-4068-a09b-7008b612d126  locks
    377c536e-4d27-4447-8d9d-24c2686a73f6  a2893208-3ec9-4606-ab82-d7a870206cb9  _base                                 snapshots
    
    [root@linux-node1 instances]# rsync -e "ssh -p22" -avpgolr 3483d9f1-4015-48d9-9837-b67ca82dd54d 192.168.1.17:/var/lib/nova/instances/
    sending incremental file list
    3483d9f1-4015-48d9-9837-b67ca82dd54d/
    3483d9f1-4015-48d9-9837-b67ca82dd54d/console.log
    3483d9f1-4015-48d9-9837-b67ca82dd54d/disk
    3483d9f1-4015-48d9-9837-b67ca82dd54d/disk.info
    3483d9f1-4015-48d9-9837-b67ca82dd54d/disk.swap
    3483d9f1-4015-48d9-9837-b67ca82dd54d/libvirt.xml
    
    sent 381469737 bytes  received 111 bytes  69358154.18 bytes/sec
    total size is 381422781  speedup is 1.00
    

    On the destination node linux-node2.openstack:

    1) Confirm kvm-server005's data arrived, then fix the ownership

    [root@linux-node2 instances]# pwd
    /var/lib/nova/instances
    
    [root@linux-node2 instances]# ll
    total 12
    drwxr-xr-x. 2 nova nova   85 Oct 31 14:54 0944254c-1c75-4523-9751-2389d677d59c
    drwxr-xr-x. 2 nova nova   85 Sep  6 12:59 3483d9f1-4015-48d9-9837-b67ca82dd54d
    drwxr-xr-x. 2 nova nova   85 Oct 31 17:29 946b340a-28bc-492d-8b3a-59d2fea1b464
    drwxr-xr-x. 2 nova nova 4096 Oct 31 17:17 _base
    -rw-r--r--. 1 nova nova   44 Nov  1 10:53 compute_nodes
    drwxr-xr-x. 2 nova nova   85 Oct 31 17:23 f6be1cb3-a694-4492-b2db-55ff9f09d843
    drwxr-xr-x. 2 nova nova 4096 Oct 31 17:14 locks
    
    [root@linux-node2 instances]# chown -R nova.nova 3483d9f1-4015-48d9-9837-b67ca82dd54d/
    
    [root@linux-node2 instances]# ll 3483d9f1-4015-48d9-9837-b67ca82dd54d/
    total 372492
    -rw-rw----. 1 nova nova     65214 Sep  8 13:58 console.log
    -rw-r--r--. 1 nova nova 381157376 Nov  1 10:59 disk
    -rw-r--r--. 1 nova nova       162 Sep  6 12:59 disk.info
    -rw-r--r--. 1 nova nova    197120 Sep  6 12:59 disk.swap
    -rw-r--r--. 1 nova nova      2909 Sep  6 12:59 libvirt.xml
    

    2) Log in to the database and update the host and node fields in MySQL to the new hypervisor's hostname

    [root@linux-node2 instances]# mysql -p
    Enter password:
    Welcome to the MariaDB monitor.  Commands end with ; or \g.
    Your MariaDB connection id is 4063
    Server version: 10.1.17-MariaDB MariaDB Server
    
    Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    MariaDB [(none)]> use nova;                                                                                                                   
    Database changed
    MariaDB [nova]> update instances set host='linux-node2.openstack', node='linux-node2.openstack' where uuid='3483d9f1-4015-48d9-9837-b67ca82dd54d';
    Query OK, 1 rows affected (0.01 sec)
    Rows matched: 1  Changed: 1  Warnings: 0
    
    MariaDB [nova]>
    
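    The UPDATE above can also be issued as a parameterized query, which avoids quoting mistakes around the uuid. A minimal sketch, run here against an in-memory SQLite stand-in for the real MariaDB nova.instances table so it is self-contained:

```python
import sqlite3

def repoint_instance(conn, uuid, new_host):
    """Point both host and node at the new hypervisor, keyed by instance uuid."""
    cur = conn.execute(
        "UPDATE instances SET host = ?, node = ? WHERE uuid = ?",
        (new_host, new_host, uuid),
    )
    conn.commit()
    return cur.rowcount  # should be exactly 1 for a valid uuid

# Demo against a stand-in table:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (uuid TEXT, host TEXT, node TEXT)")
conn.execute("INSERT INTO instances VALUES "
             "('3483d9f1-4015-48d9-9837-b67ca82dd54d', "
             "'linux-node1.openstack', 'linux-node1.openstack')")
changed = repoint_instance(conn, '3483d9f1-4015-48d9-9837-b67ca82dd54d',
                           'linux-node2.openstack')
```

    Checking that exactly one row changed catches a mistyped uuid before the VM is started on the wrong node.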

    3) Start the VM on the destination node linux-node2.openstack; restart that node's compute service first

    [root@linux-node2 instances]# systemctl restart  openstack-nova-compute
    [root@linux-node2 instances]# nova start 3483d9f1-4015-48d9-9837-b67ca82dd54d
    

    4) Verify which hypervisor the VM now runs on

    [root@linux-node2 src]# nova list
    


    [root@linux-node2 src]# nova show 3483d9f1-4015-48d9-9837-b67ca82dd54d | grep 'OS-EXT-SRV-ATTR:hos'
    | OS-EXT-SRV-ATTR:host                 | linux-node2.openstack
    

    II. Online migration of an openstack VM ("live migration": the VM keeps running while it moves)

    Run everything on the controller node

    1) List the virtual machines

    [root@linux-node1 src]# source admin-openrc.sh
    [root@linux-node1 src]# nova list
    


    2) Find the hypervisor hosting kvm-server005

    [root@linux-node1 src]# nova show 3483d9f1-4015-48d9-9837-b67ca82dd54d | grep 'OS-EXT-SRV-ATTR:hos'
    | OS-EXT-SRV-ATTR:host                 | linux-node1.openstack
    

    3) List the available compute nodes

    [root@linux-node1 src]# nova-manage service list
    No handlers could be found for logger "oslo_config.cfg"
    Binary           Host                                 Zone             Status     State Updated_At
    .........
    nova-compute     linux-node1.openstack                nova             enabled    :-)   2016-11-01 05:12:56
    nova-compute     linux-node2.openstack                nova             enabled    XXX   2016-10-31 05:55:24
    

    4) Check the resources on the target compute node linux-node2.openstack

    [root@linux-node1 src]# nova-manage service describe_resource linux-node2.openstack
    No handlers could be found for logger "oslo_config.cfg"
    HOST                              PROJECT     cpu mem(mb)     hdd
    linux-node2.openstack(total)                        32   64211     149
    linux-node2.openstack(used_now)                      6   14848      75
    linux-node2.openstack(used_max)                      6   14336      75
    linux-node2.openstack    0cd3632df93d48d6b2c24c67f70e56b8       6   14336      75
    

    5) Live-migrate the VM to compute node linux-node2.openstack

    [root@linux-node1 src]# nova live-migration 3483d9f1-4015-48d9-9837-b67ca82dd54d linux-node2.openstack
    

    6) Verify that kvm-server005's hypervisor has changed

    [root@linux-node1 src]# nova show 3483d9f1-4015-48d9-9837-b67ca82dd54d | grep 'OS-EXT-SRV-ATTR:hos'
    | OS-EXT-SRV-ATTR:host                 | linux-node2.openstack
    

    Notes:

    1) Populate /etc/hosts in advance so the nodes can ping each other by hostname.
    2) Run id nova on the controller, note nova's uid and gid, and check that both compute nodes use the same values. If they differ, fix them with:
    usermod -u "<controller's nova uid>" nova
    groupmod -g "<controller's nova gid>" nova
    Run both commands on every compute node, then make sure all nova-owned files pick up the new uid and gid.
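    Note 2) can be checked mechanically: run id nova on each node, collect the (uid, gid) pairs, and compare them against the controller. A small sketch; the numeric ids below are illustrative, not from the original post:

```python
def mismatched_nova_ids(controller_ids, node_ids):
    """controller_ids is the controller's (uid, gid) for nova; node_ids maps
    hostname -> (uid, gid). Returns hosts that need usermod/groupmod."""
    return sorted(h for h, ids in node_ids.items() if ids != controller_ids)

bad = mismatched_nova_ids(
    (162, 162),  # controller's `id nova` result (illustrative)
    {"linux-node1.openstack": (162, 162),
     "linux-node2.openstack": (1001, 1001)},
)
```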

  • openstack virtual machine migration

    2012-12-26 13:35:58

    openstack ships with a built-in VM migration feature

     

    Migration moves a running VM instance from one compute node to another

    First, run nova list to see all running instances and pick the one to migrate

    Then run nova show <instanceid> to inspect that VM

    You can see its configuration and the node it lives on

    Then nova-manage service list shows information about every host

    including hostname, status, type, state, and so on

    Then nova-manage service describe_resource <hostname>

    shows that host's resource usage

    Migrating the VM:

    nova live-migration <instanceid> hostc
    Migration of <instancename> initiated

    This migrates the VM onto host c

    While the live-migration runs, the VM is suspended

    Finally, run nova list to confirm the migration succeeded; if something looks wrong, check the logs
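    Checking where the VM landed boils down to reading the OS-EXT-SRV-ATTR:host row out of the nova show table. A small parser sketch; the sample line mirrors the outputs shown earlier:

```python
def host_of(nova_show_output):
    """Return the OS-EXT-SRV-ATTR:host value from `nova show` table output,
    or None if the row is missing."""
    for line in nova_show_output.splitlines():
        if "OS-EXT-SRV-ATTR:host" in line:
            # A table row looks like: | OS-EXT-SRV-ATTR:host | <hostname> |
            parts = [p.strip() for p in line.split("|")]
            return parts[2] if len(parts) > 2 else None
    return None

sample = "| OS-EXT-SRV-ATTR:host                 | linux-node2.openstack |"
```

    Polling this value after issuing the live-migration tells you when (and whether) the move completed.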

  • Migrating openstack virtual machines to an ovirt-engine environment

    1. Background:

    The VMs on an old openstack platform had to be migrated to an ovirt-engine environment. ovirt's built-in openstack integration kept failing with cinder authentication errors; investigation showed the versions were incompatible, leaving manual backup and import into the ovirt platform as the only option.

    Approach: collect VM information from the openstack environment, read the migration spreadsheet, export each VM as qcow2, then upload it to the ovirt platform, create the VM, and attach its disks.

    2. Collect VM information from the openstack compute nodes

    Get the instance list with virsh list, then fetch each VM's definition with virsh dumpxml and parse out the needed fields with the xml module

        def get_vms_list(self):
            # print("virsh list --all --name ...")
            list_cmd = "virsh list --all --name".split(" ")
            stdout, stderr = self.cmd_process(list_cmd)
            instance_names = list(filter(None, stdout.split("\n")))
            return instance_names
    
        def get_dumpxml_info(self, instance_name):
            # print("virsh dumpxml...")
            info_cmd = "virsh dumpxml %s" % instance_name
            stdout, _ = self.cmd_process(info_cmd.split(" "))
            return stdout
    
        def get_vms_info(self):
            instance_names = self.get_vms_list()
            vms_info = []
            for instance_name in instance_names:
                xml_str = self.get_dumpxml_info(instance_name)
                DOMTree = xml.dom.minidom.parseString(xml_str)
                collection = DOMTree.documentElement
                vm_info = {}
                data_volume = []
                vm_info["instance_name"] = collection.getElementsByTagName("name")[0].childNodes[0].data
                vm_info["vm_name"] = collection.getElementsByTagName("metadata")[0].getElementsByTagName("nova:instance")[
                    0].getElementsByTagName("nova:name")[0].childNodes[0].data
                vm_info["vm_flavor"] = collection.getElementsByTagName("metadata")[0].getElementsByTagName("nova:instance")[
                        0].getElementsByTagName("nova:flavor")[0].getAttribute("name")
                vm_info["instance_uuid"] = collection.getElementsByTagName("uuid")[0].childNodes[0].data
                vm_info["instance_vcpu"] = collection.getElementsByTagName("vcpu")[0].childNodes[0].data
                vm_info["instance_memory"] = collection.getElementsByTagName("memory")[0].childNodes[0].data
                for disk_info in collection.getElementsByTagName("devices")[0].getElementsByTagName("disk"):
                    dev = disk_info.getElementsByTagName("target")[0].getAttribute("dev")
                    if dev == "vda" or dev == "hda":
                        vm_info["system_volume"] = disk_info.getElementsByTagName("source")[0].getAttribute("name")
                    else:
                        data_volume.append(disk_info.getElementsByTagName("source")[0].getAttribute("name"))
                vm_info["data_volume"] = data_volume
                vms_info.append(vm_info)
            return vms_info
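    The minidom extraction at the heart of get_vms_info() can be exercised on its own; the XML below is a trimmed, hypothetical virsh dumpxml result, not taken from a real node:

```python
import xml.dom.minidom

SAMPLE = """<domain type='kvm'>
  <name>instance-0000000a</name>
  <uuid>3483d9f1-4015-48d9-9837-b67ca82dd54d</uuid>
  <vcpu>2</vcpu>
  <memory>4194304</memory>
</domain>"""

def parse_domain(xml_str):
    """Pull out the same basic fields that get_vms_info() collects."""
    root = xml.dom.minidom.parseString(xml_str).documentElement
    def text(tag):
        return root.getElementsByTagName(tag)[0].childNodes[0].data
    return {
        "instance_name": text("name"),
        "instance_uuid": text("uuid"),
        "instance_vcpu": text("vcpu"),
        "instance_memory": text("memory"),
    }

info = parse_domain(SAMPLE)
```

    The nova:name / nova:flavor metadata lookups work the same way, since minidom matches qualified tag names literally.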

    Run this on every compute node, then gather the results on the node that runs the driver script and merge them (note: the system-volume vs. data-volume test should be adapted to the actual environment)

        run_first_time = False
        if run_first_time:
            for host in HOST_LIST:
                print("** get the vm info from %s" % host)
                os.system("scp get_vm_info.py root@%s:/home" % host)
                sshclient_execmd(host, "python /home/get_vm_info.py > /home/%s_vm_info.txt" % host)
                time.sleep(1)
                sshclient_execmd(host, "scp  /home/%s_vm_info.txt root@cmp014:/home/test/info" % host)
                time.sleep(1)
        print("** update vm info successful")
    
        print("** read the info file")
        info_path = "/home/test/info"
        files = os.listdir(info_path)
        for file_name in files:
            print("** read the file : %s" % file_name)
            with open(info_path + "/" + file_name) as f:
                content = f.read()
            all_vms_info.extend(ast.literal_eval(content))

     

    3. Export the volumes and convert the format

        def rbd_export(self, volume_path, volume_name):
            rbd_cmd_str = "rbd export %s %s.qcow2" % (volume_path, volume_name)
            self.cmd_process(rbd_cmd_str.split(" "))
            print ("**%s exported %s successfully." % (volume_path, volume_name))
    
        def img_covert(self, qcow2_path, qcow2_name):
            img_covert_str = "qemu-img convert -O qcow2 %s %s" % (qcow2_path + ".qcow2", qcow2_name + ".qcow2")
            self.cmd_process(img_covert_str.split(" "))
            print ("**%s coverted successfully." % qcow2_name)
            os.remove(qcow2_path + ".qcow2")
            print ("**remove tmp file:%s" % (qcow2_path + ".qcow2"))
    
        def clean_disks(self, qcow2_name):
            if os.path.exists(qcow2_name + ".qcow2"):
                os.remove(qcow2_name + ".qcow2")
                print ("** remove disk %s" % qcow2_name)
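    The two commands wrapped by rbd_export() and img_covert() can also be built as argument lists, which survives paths containing spaces; a sketch of the same export-then-convert pipeline (the paths and the .tmp suffix for the intermediate file are this sketch's conventions, not the original script's):

```python
def export_convert_cmds(volume_path, volume_name):
    """Commands for: rbd export -> intermediate file -> qemu-img convert to qcow2."""
    tmp = "%s.tmp" % volume_name
    export_cmd = ["rbd", "export", volume_path, tmp]
    convert_cmd = ["qemu-img", "convert", "-O", "qcow2",
                   tmp, "%s.qcow2" % volume_name]
    return export_cmd, convert_cmd

exp, conv = export_convert_cmds("volumes/volume-1234", "vm01-system")
```

    Each list can be handed to subprocess.check_call() directly, with no shell quoting to get wrong.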

    4. After converting the disks, use the ovirt SDK to upload the system and data disks, create the VM, attach the disks to it, add a NIC, and so on

        def upload_disk(self):
            print ("--Step01:upload disk....")
            for volume_path in self.volumes_list:
                # Get image info using qemu-img
                image_info = self.get_image_info(volume_path)
                new_disk_format = self.get_disk_format(image_info)
    
                print("**Uploaded image format: %s" % image_info["format"])
                print("**Disk content type: %s" % image_info["content_type"])
                print("**Disk format: %s" % new_disk_format)
                print("**Transfer format: %s" % image_info["transfer_format"])
    
                # This example will connect to the server and create a new `floating`
                # disk, one that isn't attached to any virtual machine.
                # Then using transfer service it will transfer disk data from local
                # qcow2 disk to the newly created disk in server.
                print("**Creating disk...")
                image_size = os.path.getsize(volume_path)
                disks_service = self.system_service.disks_service()
                disk = disks_service.add(
                    disk=types.Disk(
                        name=os.path.basename(volume_path[:-6]),
                        content_type=image_info["content_type"],
                        description='Uploaded disk',
                        format=new_disk_format,
                        initial_size=image_size,
                        provisioned_size=image_info["virtual-size"],
                        sparse=new_disk_format == types.DiskFormat.COW,
                        storage_domains=[
                            types.StorageDomain(
                                name=self.args.sd_name
                            )
                        ]
                    )
                )
    
                # Wait till the disk is up, as the transfer can't start if the
                # disk is locked:
                disk_service = disks_service.disk_service(disk.id)
                while True:
                    time.sleep(5)
                    disk = disk_service.get()
                    if disk.status == types.DiskStatus.OK:
                        break
    
                print("**Creating transfer session...")
    
                # Get a reference to the service that manages the image
                # transfer that was added in the previous step:
            transfers_service = self.system_service.image_transfers_service()
    
                # Add a new image transfer:
                transfer = transfers_service.add(
                    types.ImageTransfer(
                        image=types.Image(
                            id=disk.id
                        ),
                        # 'format' can be used only for ovirt-engine 4.3 or above
                        format=image_info["transfer_format"],
                    )
                )
    
                # Get reference to the created transfer service:
                transfer_service = transfers_service.image_transfer_service(transfer.id)
    
                # After adding a new transfer for the disk, the transfer's status will be INITIALIZING.
                # Wait until the init phase is over. The actual transfer can start when its status is "Transferring".
                while transfer.phase == types.ImageTransferPhase.INITIALIZING:
                    time.sleep(1)
                    transfer = transfer_service.get()
    
                print("**Uploading image...")
    
                # At this stage, the SDK granted the permission to start transferring the disk, and the
                # user should choose its preferred tool for doing it - regardless of the SDK.
                # In this example, we will use Python's httplib.HTTPSConnection for transferring the data.
                if args.direct:
                    if transfer.transfer_url is not None:
                        destination_url = transfer.transfer_url
                    else:
                        print("**Direct upload to host not supported (requires ovirt-engine 4.2 or above).")
                        sys.exit(1)
                else:
                    destination_url = transfer.proxy_url
    
                with ui.ProgressBar(image_size) as pb:
                    client.upload(
                        volume_path,
                        destination_url,
                        self.args.cafile,
                        secure=self.args.secure,
                        progress=pb.update)
    
                print("**Finalizing transfer session...")
                # Successful cleanup
                transfer_service.finalize()
                print("**Upload %s completed successfully" % volume_path)
    
        def attach_disk(self, disk_attachments_service, disk_name, vm_name, bootable=True):
            disks_service = self.system_service.disks_service()
            while True:
                time.sleep(5)
                my_disk = disks_service.list(search='name=%s' % disk_name)[0]
                if my_disk.status == types.DiskStatus.OK:
                    print ("**DiskStatus is ok")
                    break
            disk_attachment = disk_attachments_service.add(
                types.DiskAttachment(
                    disk=my_disk,
                    interface=types.DiskInterface.VIRTIO,
                    bootable=bootable,
                    active=True,
                ),
            )
            # Wait until the disk status is OK:
            disk_service = disks_service.disk_service(disk_attachment.disk.id)
            while True:
                time.sleep(5)
                disk = disk_service.get()
                if disk.status == types.DiskStatus.OK:
                    print("**Disk '%s' added to '%s'." % (disk.name, vm_name))
                    break
    
        def attach_nics(self, vm_name):
            vms_service = self.system_service.vms_service()
            vm = vms_service.list(search='name=%s' % vm_name)[0]
            cluster = self.system_service.clusters_service().cluster_service(vm.cluster.id).get()
            dcs_service = self.system_service.data_centers_service()
            dc = dcs_service.list(search='Clusters.name=%s' % cluster.name)[0]
            networks_service = dcs_service.service(dc.id).networks_service()
            network = next(
                (n for n in networks_service.list()
                 if n.name == 'Ext-Net'),
                None
            )
            profiles_service = self.system_service.vnic_profiles_service()
            profile_id = None
            nic_name = random.choice(NIC_LIST)
            for profile in profiles_service.list():
                if profile.name == nic_name:
                    print ("**add nic : %s" % profile.name)
                    profile_id = profile.id
                    break
    
            # Locate the service that manages the network interface cards of the
            # virtual machine:
            nics_service = vms_service.vm_service(vm.id).nics_service()
    
            # Use the "add" method of the network interface cards service to add the
            # new network interface card:
            nics_service.add(
                types.Nic(
                    name='mynic',
                    description='My network interface card',
                    vnic_profile=types.VnicProfile(
                        id=profile_id,
                    ),
                ),
            )
            print("**Nic added successfully")
    
        def assign_permission(self, permissions_service, user, group):
            if user:
                print("**assign user permission.")
                permissions_service.add(
                    types.Permission(
                        user=types.User(
                            id=user.id,
                        ),
                        role=types.Role(
                            name=USER_ROLENAME,
                        ),
                    ),
                )
            else:
                print ("**user is none, pass the user assign permission")
            if group:
                print ("**assign group permission.")
                permissions_service.add(
                    types.Permission(
                        group=types.Group(
                            id=group.id,
                        ),
                        role=types.Role(
                            name=GROUP_ROLENAME,
                        ),
                    ),
                )
            else:
                print ("**group is none, pass the user assign permission")
    
        def create_vm(self):
            print ("--Step02:begin create vm")
            vms_service = self.system_service.vms_service()
            users_service = connection.system_service().users_service()
            groups_service = connection.system_service().groups_service()
            try:
                user = users_service.list(search='usrname=%s' % op.user_name)[0]
            except:
                user = None
            try:
                group = groups_service.list(search='name=%s' % op.group_name)[0]
            except:
                group = None
            vm = vms_service.add(
                types.Vm(
                    name=op.vm_name,
                    description='migrate from openstack',
                    comment=group.name if group else None,
                    cluster=types.Cluster(
                        name='Default',
                    ),
                    cpu=types.Cpu(
                        topology=types.CpuTopology(
                            cores=1,
                            sockets=int(self.vm_info.get("instance_vcpu")),
                            threads=1
                        )
                    ),
                    template=types.Template(
                        name='Blank',
                    ),
                    high_availability=types.HighAvailability(
                        enabled=True,
                        priority=1
                    ),
                    memory=int(self.vm_info.get("instance_memory"))*1024,
                ),
            )
            # Find the service that manages the virtual machine:
            vm_service = vms_service.vm_service(vm.id)
    
            # Wait till the virtual machine is down, which means that it is
            # completely created:
            while True:
                time.sleep(5)
                vm = vm_service.get()
                if vm.status == types.VmStatus.DOWN:
                    print("**The vm %s is created successfully." % self.vm_info.get("vm_name"))
                    break
            print ("--Step03:begin to attach disk.")
            disk_attachments_service = vm_service.disk_attachments_service()
            for volume_path in self.volumes_list:
                print ("**begin to attach disk: %s" % volume_path)
                self.attach_disk(disk_attachments_service, str(volume_path[:-6]), vm.name, bootable=("system" in volume_path))
            print ("--Step04:begin to attach nic.")
            self.attach_nics(vm.name)
            print ("--Step05:assign permission to vms.")
            permissions_service = vm_service.permissions_service()
            self.assign_permission(permissions_service, user, group)

    5. Results

  • I. A live-migration troubleshooting session. First check the CPU model; /proc/cpuinfo shows every CPU is the same model and version. Migrating with nova live-migration --debug <instance-xxx> <node-xxxx> surfaces the error message. Via...
  • A security analysis of the OpenStack VM migration mechanism

    2015-05-14 18:48:15
    Today's cloud platforms offer two migration types: live migration and block migration. Live migration requires instances to live on NFS shared storage, and mainly transfers the instance's memory state... Live migration of a VM means moving it from one physical machine to another while the VM
  • A summary of the openstack VM migration process

    2014-07-15 17:44:35
    The migration falls into three phases: pre-migration (pre_live_migration), migration (live_migration), and post-migration (post_live_migration)
  • Fixing the ssh permission problem: #usermod -s /bin/bash nova #su nova bash-4.1$ mkdir -p /var/lib/nova/.ssh bash-4.1$ exit #cp .ssh/* /var/lib/nova/.ssh/ — or generate your own keys
  • http://blog.csdn.net/weiyuanke/article/details/8562123 1) hostname and the hosts file (2) libvirt configuration and permissions ...def migrate_disk_and_power_off(self, context, instance, dest,
  • Edit /etc/libvirt/libvirtd.conf (note: it is libvirtd.conf, not libvirt.conf). Before: #listen_tls = 0 After: listen_tls = 0. Before: #listen_tcp = 1 After: listen_tcp = 1. Add: auth_tcp = "none". Edit /etc...
  • In the openstack I release, if the scheduler places a migration on the current compute node, the VM ends up stuck in the resize_prep state root@controller:/var/log/nova# nova list +--------------------------------------+-----------+--------+-------------+--------...
  • openstack VM live migration explained, PDF
  • Research paper: an improved scheme for OpenStack VM live migration based on hybrid migration (PDF). To make VM backups (runtime state, storage, configuration) more efficient, improve VM stability, and balance load across physical machines...
  • I. An analysis of VM migration. openstack VM migration comes in two flavors: cold and live. 1.1 Cold migration: cold migration, also called static migration, migrates a powered-off VM; it can optionally move the associated disks from one datastore to...
  • 1. Live migration needs shared storage: add shared storage such as NFS on the cinder node and put the VMs' volumes on it. 2. Configure the libvirtd service on every nova compute node: [root@node1 ~]# vim /etc/libvirt/libvirtd.conf listen_tls = 0 listen_tcp = 1 ...
  • Migrating a VM from computer03 to computer02. computer03's log changes as follows; note "memory 92% remaining": memory data is being collected and copied over to computer02. [root@computer03 ~]# tailf /var/log/nova/nova-...
  • OpenStack virtual machine migration

    2015-05-18 11:08:01
    VM migration: 1. On every compute node, edit /etc/nova/nova.conf and add: allow_resize_to_same_host=true scheduler_default_filters=AllHostsFilter. Restart the services on all compute nodes and the controller's openstack-...
  • OpenStack VM migration flow diagrams

    2019-10-26 15:33:46
    pre_live_migration phase: the preparation stage before a live migration; the VM's resources are set up in advance on the destination compute node, including networking, e.g. creating the VM's NIC and plugging it into the OvS br-int bridge. If this phase fails, it is rolled back. Memory migr...
  • After reworking the live-migration code across several openstack releases, the author concluded that openstack itself can do little to raise the migration success rate; it is an upper layer, constrained by the underlying memory-copy mechanism and the network. But since the task is...
  • Migrating Vmware virtual machines to Openstack

    2019-10-30 11:04:53
    The lab environment is a centos 7 VM built with vmware; production needs the VM running on openstack. First, export the VM from vmware in ovf format. The result is really an archive; extract the CentOS7-... inside it with tar -xvf
  • Contents: earlier posts in the series; scenarios for VM migration; the VM data that must move; storage scenarios (file storage, block storage, non-shared storage); migration types; migration methods; performing a cold migration; cold-migration log analysis; performing a live migration; live-migration log analysis; references ...
  • Contents: earlier posts; cold-migration code analysis (based on Newton); how Nova live migration works; live-migration code analysis; sending the live-migration command to libvirtd; polling libvirtd's ... "OpenStack VM cold/live migration in practice". Cold-migration code analysis (based on ...
  • A hands-on guide to creating OpenStack virtual machines with Python
  • Additional compute-node configuration required by the VM migration feature in the OpenStack Newton release
