  • rebuild (a walkthrough of the nova source)

    rebuild: reset a VM to its initial state, or rebuild it from a different image.

    1. API

    The API entry point is in nova/api/openstack/compute/servers.py:

        @wsgi.action('rebuild')
        @validation.schema(schema_server_rebuild_v20, '2.0', '2.0')
        @validation.schema(schema_server_rebuild, '2.1', '2.18')
        @validation.schema(schema_server_rebuild_v219, '2.19')
        def _action_rebuild(self, req, id, body):
            """Rebuild an instance with the given attributes."""
            rebuild_dict = body['rebuild']

            # The image to rebuild from
            image_href = rebuild_dict["imageRef"]

            # The admin password requested for the rebuilt instance
            password = self._get_server_admin_password(rebuild_dict)

            # Look up the instance and check the rebuild policy
            context = req.environ['nova.context']
            instance = self._get_server(context, req, id)
            context.can(server_policies.SERVERS % 'rebuild',
                        target={'user_id': instance.user_id,
                                'project_id': instance.project_id})
            attr_map = {
                'name': 'display_name',
                'description': 'display_description',
                'metadata': 'metadata',
            }

            kwargs = {}

            helpers.translate_attributes(helpers.REBUILD, rebuild_dict, kwargs)

            for request_attribute, instance_attribute in attr_map.items():
                try:
                    if request_attribute == 'name':
                        kwargs[instance_attribute] = common.normalize_name(
                            rebuild_dict[request_attribute])
                    else:
                        kwargs[instance_attribute] = rebuild_dict[
                            request_attribute]
                except (KeyError, TypeError):
                    pass

            try:
                # Do the rebuild
                self.compute_api.rebuild(context,
                                         instance,
                                         image_href,
                                         password,
                                         **kwargs)
            except exception.InstanceIsLocked as e:
                raise exc.HTTPConflict(explanation=e.format_message())
            except exception.InstanceInvalidState as state_error:
                common.raise_http_conflict_for_instance_invalid_state(state_error,
                        'rebuild', id)

            # ... further exception handlers elided

            instance = self._get_server(context, req, id, is_detail=True)

            view = self._view_builder.show(req, instance, extend_address=False)

            # Add on the admin_password attribute since the view doesn't do it
            # unless instance passwords are disabled
            if CONF.api.enable_instance_password:
                view['server']['adminPass'] = password

            robj = wsgi.ResponseObject(view)
            return self._add_location(robj)
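    For reference, a minimal request body as this handler expects it might look like the sketch below. Only imageRef is required; adminPass, name, and metadata are optional, and the values here are invented for illustration.

        # Hypothetical body for POST /servers/{id}/action (values invented)
        body = {
            "rebuild": {
                "imageRef": "1f7f5763-33a1-4282-92b3-53366bf7c695",  # required
                "adminPass": "newpass123",    # optional; generated if omitted
                "name": "rebuilt-server",     # mapped to display_name via attr_map
                "metadata": {"role": "web"},  # copied through
            }
        }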
    
    

    2. compute rebuild

    compute_api.rebuild is defined in nova/compute/api.py.

    It is a long function, so we will look at it in pieces.

        @check_instance_lock
        @check_instance_cell
        # Only allow rebuild while the VM is in one of the listed states
        @check_instance_state(vm_state=[vm_states.ACTIVE, vm_states.STOPPED,
                                        vm_states.ERROR])
        def rebuild(self, context, instance, image_href, admin_password,
                    files_to_inject=None, **kwargs):

            # Fetch and validate the parameters
            files_to_inject = files_to_inject or []
            metadata = kwargs.get('metadata', {})
            preserve_ephemeral = kwargs.get('preserve_ephemeral', False)
            auto_disk_config = kwargs.get('auto_disk_config')

            image_id, image = self._get_image(context, image_href)
            self._check_auto_disk_config(image=image, **kwargs)

            flavor = instance.get_flavor()

            # Fetch the block device mappings for this instance
            bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
                context, instance.uuid)
            root_bdm = compute_utils.get_root_bdm(context, instance, bdms)

            # Check to see if the image is changing and we have a volume-backed
            # server.
            is_volume_backed = compute_utils.is_volume_backed_instance(
                context, instance, bdms)
            if is_volume_backed:
                # For boot from volume, instance.image_ref is empty, so we need to
                # query the image from the volume.
                if root_bdm is None:
                    # This shouldn't happen and is an error, we need to fail. This
                    # is not the users fault, it's an internal error. Without a
                    # root BDM we have no way of knowing the backing volume (or
                    # image in that volume) for this instance.
                    raise exception.NovaException(
                        _('Unable to find root block device mapping for '
                          'volume-backed instance.'))

                volume = self.volume_api.get(context, root_bdm.volume_id)
                volume_image_metadata = volume.get('volume_image_metadata', {})
                orig_image_ref = volume_image_metadata.get('image_id')
            else:
                orig_image_ref = instance.image_ref

            # Validate the parameters, including the files to inject
            self._checks_for_create_and_rebuild(context, image_id, image,
                    flavor, metadata, files_to_inject, root_bdm)

            kernel_id, ramdisk_id = self._handle_kernel_and_ramdisk(
                    context, None, None, image)
    
    
    

    Reset the image metadata stored on the instance, removing the stale entries:

            def _reset_image_metadata():
                """Remove old image properties that we're storing as instance
                system metadata.  These properties start with 'image_'.
                Then add the properties for the new image.
                """
                # FIXME(comstud): There's a race condition here in that if
                # the system_metadata for this instance is updated after
                # we do the previous save() and before we update.. those
                # other updates will be lost. Since this problem exists in
                # a lot of other places, I think it should be addressed in
                # a DB layer overhaul.
    
                orig_sys_metadata = dict(instance.system_metadata)
                # Remove the old keys
                for key in list(instance.system_metadata.keys()):
                    if key.startswith(utils.SM_IMAGE_PROP_PREFIX):
                        del instance.system_metadata[key]
    
                # Add the new ones
                new_sys_metadata = utils.get_system_metadata_from_image(
                    image, flavor)
    
                instance.system_metadata.update(new_sys_metadata)
                instance.save()
                return orig_sys_metadata
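    To make the effect concrete, here is a hypothetical before/after of instance.system_metadata, assuming SM_IMAGE_PROP_PREFIX is 'image_'; the keys and values are invented for illustration.

        # Invented system_metadata, for illustration only
        before = {
            'image_os_type': 'linux',          # old image property: removed
            'image_min_ram': '512',            # old image property: removed
            'instance_type_name': 'm1.small',  # not an image_* key: kept
        }
        # After _reset_image_metadata(), the image_* keys reflect the new image:
        after = {
            'image_os_type': 'windows',
            'image_min_ram': '2048',
            'instance_type_name': 'm1.small',
        }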
    

    Update the instance state:

        # Since image might have changed, we may have new values for
        # os_type, vm_mode, etc
        options_from_image = self._inherit_properties_from_image(
                image, auto_disk_config)
        instance.update(options_from_image)

        instance.task_state = task_states.REBUILDING
        instance.image_ref = image_href
        instance.kernel_id = kernel_id or ""
        instance.ramdisk_id = ramdisk_id or ""
        instance.progress = 0
        instance.update(kwargs)
        instance.save(expected_task_state=[None])

        orig_sys_metadata = _reset_image_metadata()

        self._record_action_start(context, instance, instance_actions.REBUILD)

        # Fetch the request_spec and adjust the scheduling parameters if needed
        host = instance.host
        try:
            request_spec = objects.RequestSpec.get_by_instance_uuid(
                context, instance.uuid)
            # If a new image is provided on rebuild, we will need to run
            # through the scheduler again, but we want the instance to be
            # rebuilt on the same host it's already on.
            if orig_image_ref != image_href:
                # We have to modify the request spec that goes to the scheduler
                # to contain the new image. We persist this since we've already
                # changed the instance.image_ref above so we're being
                # consistent.
                request_spec.image = objects.ImageMeta.from_dict(image)
                request_spec.save()
                if 'scheduler_hints' not in request_spec:
                    request_spec.scheduler_hints = {}
                # Nuke the id on this so we can't accidentally save
                # this hint hack later
                del request_spec.id

                # NOTE(danms): Passing host=None tells conductor to
                # call the scheduler. The _nova_check_type hint
                # requires that the scheduler returns only the same
                # host that we are currently on and only checks
                # rebuild-related filters.

                # A new image means another pass through the scheduler,
                # but the forced host keeps the instance on its current
                # hypervisor.
                request_spec.scheduler_hints['_nova_check_type'] = ['rebuild']
                request_spec.force_hosts = [instance.host]
                request_spec.force_nodes = [instance.node]
                host = None
        except exception.RequestSpecNotFound:
            # Some old instances can still have no RequestSpec object attached
            # to them, we need to support the old way
            request_spec = None
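    The instance.save(expected_task_state=[None]) call above is an optimistic concurrency guard: the save succeeds only if the task_state stored in the database still matches an expected value. A minimal sketch of that compare-and-swap idea (not Nova's actual implementation):

        class UnexpectedTaskStateError(Exception):
            pass

        def guarded_save(db_row, updates, expected_task_state):
            """Persist updates only if the stored task_state is still one
            of the expected values, so concurrent state transitions cannot
            silently overwrite each other."""
            if db_row["task_state"] not in expected_task_state:
                raise UnexpectedTaskStateError(
                    "expected %s, got %s" % (expected_task_state,
                                             db_row["task_state"]))
            db_row.update(updates)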
    
    

    Kick off the rebuild:

            self.compute_task_api.rebuild_instance(context, instance=instance,
                    new_pass=admin_password, injected_files=files_to_inject,
                    image_ref=image_href, orig_image_ref=orig_image_ref,
                    orig_sys_metadata=orig_sys_metadata, bdms=bdms,
                    preserve_ephemeral=preserve_ephemeral, host=host,
                    request_spec=request_spec,
                    kwargs=kwargs)
    
    
    

    This in turn calls the conductor's rebuild_instance.


    3. conductor rebuild

    rebuild_instance is defined in nova/conductor/manager.py:

        def rebuild_instance(self, context, instance, orig_image_ref, image_ref,
                             injected_files, new_pass, orig_sys_metadata,
                             bdms, recreate, on_shared_storage,
                             preserve_ephemeral=False, host=None,
                             request_spec=None):

            with compute_utils.EventReporter(context, 'rebuild_server',
                                             instance.uuid):
                node = limits = None

                try:
                    migration = objects.Migration.get_by_instance_and_status(
                        context, instance.uuid, 'accepted')
                except exception.MigrationNotFoundByStatus:
                    LOG.debug("No migration record for the rebuild/evacuate "
                              "request.", instance=instance)
                    migration = None

                # If no host was specified, schedule again
                if not host:
                    if not request_spec:
                        image_meta = nova_object.obj_to_primitive(
                            instance.image_meta)
                        request_spec = scheduler_utils.build_request_spec(
                                context, image_meta, [instance])
                    elif recreate:
                        request_spec.ignore_hosts = request_spec.ignore_hosts or []
                        request_spec.ignore_hosts.append(instance.host)

                        request_spec.reset_forced_destinations()
                        filter_properties = request_spec.\
                            to_legacy_filter_properties_dict()
                        request_spec = request_spec.to_legacy_request_spec_dict()
                    else:
                        filter_properties = request_spec. \
                            to_legacy_filter_properties_dict()
                        request_spec = request_spec.to_legacy_request_spec_dict()
                    try:
                        # Pick a host via the scheduler; because force_hosts was
                        # set earlier, this still lands on the original host
                        hosts = self._schedule_instances(
                                context, request_spec, filter_properties)
                        host_dict = hosts.pop(0)
                        host, node, limits = (host_dict['host'],
                                              host_dict['nodename'],
                                              host_dict['limits'])
                    except exception.NoValidHost as ex:
                        ...  # error handling elided

                compute_utils.notify_about_instance_usage(
                    self.notifier, context, instance, "rebuild.scheduled")

                # RPC-call rebuild_instance on the target host
                self.compute_rpcapi.rebuild_instance(context,
                        instance=instance,
                        new_pass=new_pass,
                        injected_files=injected_files,
                        image_ref=image_ref,
                        orig_image_ref=orig_image_ref,
                        orig_sys_metadata=orig_sys_metadata,
                        bdms=bdms,
                        recreate=recreate,
                        on_shared_storage=on_shared_storage,
                        preserve_ephemeral=preserve_ephemeral,
                        migration=migration,
                        host=host, node=node, limits=limits)
    
    

    4. compute rebuild_instance

    The RPC call is handled by rebuild_instance in nova/compute/manager.py.

    Definition:

        @wrap_instance_event(prefix='compute')
        @wrap_instance_fault
        def rebuild_instance(self, context, instance, orig_image_ref, image_ref,
                             injected_files, new_pass, orig_sys_metadata,
                             bdms, recreate, on_shared_storage=None,
                             preserve_ephemeral=False, migration=None,
                             scheduled_node=None, limits=None):
            context = context.elevated()

            LOG.info(_LI("Rebuilding instance"), instance=instance)

            # recreate is the evacuate path; it is False for a plain rebuild,
            # so this branch can be ignored here
            if recreate:
                rt = self._get_resource_tracker()
                rebuild_claim = rt.rebuild_claim
            else:
                rebuild_claim = claims.NopClaim

            image_meta = {}
            if image_ref:
                image_meta = self.image_api.get(context, image_ref)

            # Determine the target node; if none was scheduled, reuse the
            # instance's current node so the hypervisor does not change
            if not scheduled_node:
                if recreate:
                    try:
                        compute_node = self._get_compute_info(context, self.host)
                        scheduled_node = compute_node.hypervisor_hostname
                    except exception.ComputeHostNotFound:
                        LOG.exception(_LE('Failed to get compute_info for %s'),
                                      self.host)
                else:
                    scheduled_node = instance.node

            with self._error_out_instance_on_exception(context, instance):
                try:
                    # Claim resources and rebuild
                    claim_ctxt = rebuild_claim(
                        context, instance, scheduled_node,
                        limits=limits, image_meta=image_meta,
                        migration=migration)
                    self._do_rebuild_instance_with_claim(
                        claim_ctxt, context, instance, orig_image_ref,
                        image_ref, injected_files, new_pass, orig_sys_metadata,
                        bdms, recreate, on_shared_storage, preserve_ephemeral,
                        migration)
                except exception.ComputeResourcesUnavailable as e:
                    ...  # error handling elided
                else:
                    instance.apply_migration_context()
                    # NOTE (ndipanov): This save will now update the host and node
                    # attributes making sure that next RT pass is consistent since
                    # it will be based on the instance and not the migration DB
                    # entry.
                    instance.host = self.host
                    instance.node = scheduled_node
                    instance.save()
                    instance.drop_migration_context()

                    # NOTE (ndipanov): Mark the migration as done only after we
                    # mark the instance as belonging to this host.
                    self._set_migration_status(migration, 'done')
    

    _do_rebuild_instance_with_claim is defined as:

        def _do_rebuild_instance_with_claim(self, claim_context, *args, **kwargs):
            """Helper to avoid deep nesting in the top-level method."""
            with claim_context:
                self._do_rebuild_instance(*args, **kwargs)
    
    
    
        def _do_rebuild_instance(self, context, instance, orig_image_ref,
                                 image_ref, injected_files, new_pass,
                                 orig_sys_metadata, bdms, recreate,
                                 on_shared_storage, preserve_ephemeral,
                                 migration):
            orig_vm_state = instance.vm_state

            # recreate (evacuate) path; skipped for a plain rebuild
            if recreate:
                ...

            if image_ref:
                image_meta = objects.ImageMeta.from_image_ref(
                    context, self.image_api, image_ref)
            else:
                image_meta = instance.image_meta

            # Emit a usage notification for the original image
            orig_image_ref_url = glance.generate_image_url(orig_image_ref)
            extra_usage_info = {'image_ref_url': orig_image_ref_url}
            compute_utils.notify_usage_exists(
                    self.notifier, context, instance,
                    current_period=True, system_metadata=orig_sys_metadata,
                    extra_usage_info=extra_usage_info)

            # This message should contain the new image_ref
            extra_usage_info = {'image_name': self._get_image_name(image_meta)}
            self._notify_about_instance_usage(context, instance,
                    "rebuild.start", extra_usage_info=extra_usage_info)

            # Update the VM state
            instance.power_state = self._get_power_state(context, instance)
            instance.task_state = task_states.REBUILDING
            instance.save(expected_task_state=[task_states.REBUILDING])

            if recreate:
                ...
            else:
                network_info = compute_utils.get_nw_info_for_instance(instance)

            if bdms is None:
                bdms = objects.BlockDeviceMappingList.get_by_instance_uuid(
                        context, instance.uuid)

            # Fetch the block device information
            block_device_info = \
                self._get_instance_block_device_info(
                        context, instance, bdms=bdms)

            def detach_block_devices(context, bdms):
                for bdm in bdms:
                    if bdm.is_volume:
                        self._detach_volume(context, bdm.volume_id, instance,
                                            destroy_bdm=False)

            files = self._decode_files(injected_files)

            kwargs = dict(
                context=context,
                instance=instance,
                image_meta=image_meta,
                injected_files=files,
                admin_password=new_pass,
                bdms=bdms,
                detach_block_devices=detach_block_devices,
                attach_block_devices=self._prep_block_device,
                block_device_info=block_device_info,
                network_info=network_info,
                preserve_ephemeral=preserve_ephemeral,
                recreate=recreate)
            try:
                # Ask the driver (libvirt) to rebuild
                with instance.mutated_migration_context():
                    self.driver.rebuild(**kwargs)
            except NotImplementedError:
                # NOTE(rpodolyaka): driver doesn't provide specialized version
                # of rebuild, fall back to the default implementation
                # (this is the path actually taken with libvirt)
                self._rebuild_default_impl(**kwargs)

            # Update the instance state and emit events
            self._update_instance_after_spawn(context, instance)
            instance.save(expected_task_state=[task_states.REBUILD_SPAWNING])

            if orig_vm_state == vm_states.STOPPED:
                LOG.info(_LI("bringing vm to original state: '%s'"),
                            orig_vm_state, instance=instance)
                instance.vm_state = vm_states.ACTIVE
                instance.task_state = task_states.POWERING_OFF
                instance.progress = 0
                instance.save()
                self.stop_instance(context, instance, False)
            self._update_scheduler_instance_info(context, instance)
            self._notify_about_instance_usage(
                    context, instance, "rebuild.end",
                    network_info=network_info,
                    extra_usage_info=extra_usage_info)
    

    Note:

    The libvirt driver does not provide a rebuild method, so self.driver.rebuild(**kwargs) raises NotImplementedError and self._rebuild_default_impl(**kwargs) is what actually runs.
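    The fallback works because the base driver interface raises NotImplementedError, which the manager catches above. Roughly (a sketch of the pattern, not the exact nova/virt/driver.py code):

        class ComputeDriver(object):
            """Sketch of the virt driver interface."""

            def rebuild(self, context, instance, image_meta, injected_files,
                        admin_password, bdms, detach_block_devices,
                        attach_block_devices, network_info=None,
                        recreate=False, block_device_info=None,
                        preserve_ephemeral=False):
                # Drivers without a specialized rebuild simply do not override
                # this, so the manager falls back to _rebuild_default_impl()
                raise NotImplementedError()

    The default implementation itself: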

        def _rebuild_default_impl(self, context, instance, image_meta,
                                  injected_files, admin_password, bdms,
                                  detach_block_devices, attach_block_devices,
                                  network_info=None,
                                  recreate=False, block_device_info=None,
                                  preserve_ephemeral=False):
            if preserve_ephemeral:
                # The default code path does not support preserving ephemeral
                # partitions.
                raise exception.PreserveEphemeralNotSupported()

            if recreate:
                detach_block_devices(context, bdms)
            else:
                # Power off the instance
                self._power_off_instance(context, instance, clean_shutdown=True)
                # Detach the block devices
                detach_block_devices(context, bdms)
                # Destroy the guest
                self.driver.destroy(context, instance,
                                    network_info=network_info,
                                    block_device_info=block_device_info)

            # Update the instance's task state
            instance.task_state = task_states.REBUILD_BLOCK_DEVICE_MAPPING
            instance.save(expected_task_state=[task_states.REBUILDING])

            new_block_device_info = attach_block_devices(context, instance, bdms)

            instance.task_state = task_states.REBUILD_SPAWNING
            instance.save(
                expected_task_state=[task_states.REBUILD_BLOCK_DEVICE_MAPPING])

            # Create the guest with spawn
            with instance.mutated_migration_context():
                self.driver.spawn(context, instance, image_meta, injected_files,
                                  admin_password, network_info=network_info,
                                  block_device_info=new_block_device_info)
    
    

    5. spawn: creating the VM

    driver.spawn is the same path used for normal VM creation; the source is in nova/virt/libvirt/driver.py.

    The creation flow is defined as follows:

        # NOTE(ilyaalekseyev): Implementation like in multinics
        # for xenapi(tr3buchet)
        def spawn(self, context, instance, image_meta, injected_files,
                  admin_password, network_info=None, block_device_info=None):
            disk_info = blockinfo.get_disk_info(CONF.libvirt.virt_type,
                                                instance,
                                                image_meta,
                                                block_device_info)
            injection_info = InjectionInfo(network_info=network_info,
                                           files=injected_files,
                                           admin_pass=admin_password)
            gen_confdrive = functools.partial(self._create_configdrive,
                                              context, instance,
                                              injection_info)
            self._create_image(context, instance, disk_info['mapping'],
                               injection_info=injection_info,
                               block_device_info=block_device_info)

            # Required by Quobyte CI
            self._ensure_console_log_for_instance(instance)

            xml = self._get_guest_xml(context, instance, network_info,
                                      disk_info, image_meta,
                                      block_device_info=block_device_info)
            self._create_domain_and_network(
                context, xml, instance, network_info, disk_info,
                block_device_info=block_device_info,
                post_xml_callback=gen_confdrive,
                destroy_disks_on_failure=True)
            LOG.debug("Instance is running", instance=instance)

            def _wait_for_boot():
                """Called at an interval until the VM is running."""
                state = self.get_info(instance).state

                if state == power_state.RUNNING:
                    LOG.info(_LI("Instance spawned successfully."),
                             instance=instance)
                    raise loopingcall.LoopingCallDone()

            timer = loopingcall.FixedIntervalLoopingCall(_wait_for_boot)
            timer.start(interval=0.5).wait()
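    The boot wait uses oslo.service's FixedIntervalLoopingCall: the callback runs every 0.5 seconds until it raises LoopingCallDone. A self-contained sketch of the same pattern in plain Python (the names here are local stand-ins, not the oslo API):

        import time

        class LoopingCallDone(Exception):
            """Raised by the callback to stop the loop."""

        def fixed_interval_loop(callback, interval=0.5, timeout=60.0):
            """Run callback every `interval` seconds until it raises
            LoopingCallDone, or fail once the timeout expires."""
            deadline = time.monotonic() + timeout
            while time.monotonic() < deadline:
                try:
                    callback()
                except LoopingCallDone:
                    return
                time.sleep(interval)
            raise TimeoutError("VM did not reach RUNNING in time")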
    
    

    This part is identical to the normal VM-creation flow, so it is not analyzed in more depth here.

  • electron-rebuild

    This executable rebuilds native Node.js modules against the version of Node.js your Electron project is using. This lets you use native Node.js modules in an Electron app without your system's Node.js version matching exactly (which is often not the case, and sometimes not even possible).

    How does it work?

    Install the package with --save-dev:

    npm install --save-dev electron-rebuild

    Then, whenever you install a new npm package, rerun electron-rebuild:

    $(npm bin)/electron-rebuild

    Or if you're on Windows:

    .\node_modules\.bin\electron-rebuild.cmd

    If you have a good node-gyp config but you see an error about a missing element on Windows like Could not load the Visual C++ component "VCBuild.exe", try to launch electron-rebuild in an npm script:

    "scripts": {
      "rebuild": "electron-rebuild -f -w yourmodule"
    }

    and then

    npm run rebuild

    What are the requirements?

    Node v10.12.0 or higher is required. Building the native modules from source uses node-gyp, refer to the link for its installation/runtime requirements.

    CLI Arguments

    Usage: electron-rebuild --version [version] --module-dir [path]
    
    Options:
      -h, --help                   Show help                               [boolean]
      -v, --version                The version of Electron to build against
      -f, --force                  Force rebuilding modules, even if we would skip
                                   it otherwise
      -a, --arch                   Override the target architecture to something
                                   other than your system's
      -m, --module-dir             The path to the app directory to rebuild
      -w, --which-module           A specific module to build, or comma separated
                                   list of modules. Modules will only be rebuilt if they 
                                   also match the types of dependencies being rebuilt
                                   (see --types).
      -e, --electron-prebuilt-dir  The path to electron-prebuilt
      -d, --dist-url               Custom header tarball URL
      -t, --types                  The types of dependencies to rebuild.  Comma
                                   separated list of "prod", "dev" and "optional".
                                   Default is "prod,optional"
      -p, --parallel               Rebuild in parallel, this is enabled by default
                                   on macOS and Linux
      -s, --sequential             Rebuild modules sequentially, this is enabled by
                                   default on Windows
      -o, --only                   Only build specified module, or comma separated
                                   list of modules. All others are ignored.
      -b, --debug                  Build debug version of modules
      --prebuild-tag-prefix        GitHub tag prefix passed to prebuild-install.
                                   Default is "v"
    
    Copyright 2016
    

     

  • rebuild vs. evacuate

    How the operations differ

    rebuild: if you are tired of a VM running XP and want a Linux OS instead, use rebuild.
    evacuate: when the host a VM runs on goes down, evacuate can bring the VM back up on another host; paired with a host-monitoring tool, this interface can provide VM HA.
    The two are discussed together because, at the bottom layer, both interfaces map to the same operation: spawn.

    1. rebuild
    From the official API documentation (figure not reproduced here):
    Under the hood, rebuild destroys the existing VM on its current host and creates a new one from the new image. The system disk is replaced while the user disk's data is preserved (installed software and configuration are lost), and the VM's network information is unchanged. The accessIPv4 and accessIPv6 API parameters have no effect when Quantum is used.

    Currently rebuild only supports VMs in the active and stopped states. Also, for volume-backed VMs, the system disk does not change after a rebuild; see the experiments below.

    2. evacuate
    From the official API documentation (figure not reproduced here):
    This interface may only be used when the VM's host is down.
    The onSharedStorage parameter makes the caller state whether the compute nodes use shared storage. The compute node can actually determine this itself (and indeed checks again), so presumably the parameter exists so that the decision can be made at the API layer.
    Only with shared storage is this true HA, where the VM's software and data survive; otherwise only the user-disk data survives, and the system disk is brand new.

    As mentioned above, rebuild and evacuate share the same underlying implementation. That makes sense: both must re-create the VM. The only differences are:
    1. rebuild has to delete the existing VM first, while evacuate creates the new VM directly.
    2. rebuild uses the new image given in the API call (which may equal the old one), while evacuate uses the VM's original image.

    The evacuate flow chart on the wiki is also worth a look (figure not reproduced here).
    evacuate has some shortcomings, though. If the system could pick the target host automatically, the user experience would likely be better. Also, like rebuild, evacuate currently only supports VMs in the active and stopped states; other states (paused, suspended, and so on) are not supported, which means VMs in those states cannot be recovered when their host fails.
    Incidentally, only the libvirt driver currently supports rebuild and evacuate.
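    For completeness, here is a hedged sketch of driving both operations from python-novaclient (the session object and target host name are assumptions, and the exact keyword arguments vary across novaclient versions):

        from novaclient import client as nova_client

        # my_keystone_session is assumed to be an authenticated keystoneauth1
        # session for your deployment.
        nova = nova_client.Client('2', session=my_keystone_session)

        server = nova.servers.get('03774415-d9ce-4b34-b012-6891d248b767')

        # rebuild: stays on the same host, swaps the system disk to the image
        nova.servers.rebuild(server, '1f7f5763-33a1-4282-92b3-53366bf7c695',
                             password='newpass123')

        # evacuate: only valid when the server's host is down; the target
        # host 'controller232' is invented for this example
        nova.servers.evacuate(server, host='controller232',
                              on_shared_storage=True)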

    Hands-on

    1. rebuild
    The steps:
    1. Create a cirros VM with a keypair and associate a floating IP. Once it is up, ssh in and confirm it works.
    root@controller231:~# nova show rebuild-test2
    +-------------------------------------+-----------------------------------------------------------+
    | Property | Value |
    +-------------------------------------+-----------------------------------------------------------+
    | status | ACTIVE |
    | updated | 2013-06-24T08:14:45Z |
    | OS-EXT-STS:task_state | None |
    | OS-EXT-SRV-ATTR:host | controller231 |
    | key_name | mykey |
    | image | cirros (4851d2f2-ef75-4a80-91c6-f0fcbcd7276a) |
    | hostId | 083729f2f8f664fffd4cffb8c3e76615d7abc1e11efc993528dd88b9 |
    | OS-EXT-STS:vm_state | active |
    | OS-EXT-SRV-ATTR:instance_name | instance-0000000e |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | controller231.openstack.org |
    | flavor | m1.small (2) |
    | id | 03774415-d9ce-4b34-b012-6891d248b767 |
    | security_groups | [{u'name': u'default'}] |
    | user_id | f882feb345064e7d9392440a0f397c25 |
    | name | rebuild-test2 |
    | created | 2013-06-24T08:14:38Z |
    | tenant_id | 6fbe9263116a4b68818cf1edce16bc4f |
    | OS-DCF:diskConfig | MANUAL |
    | metadata | {} |
    | accessIPv4 | |
    | accessIPv6 | |
    | testnet01 network | 10.1.1.20, 192.150.73.3 |
    | progress | 0 |
    | OS-EXT-STS:power_state | 1 |
    | OS-EXT-AZ:availability_zone | nova |
    | config_drive | |
    +-------------------------------------+-----------------------------------------------------------+
    root@network232:~# ssh -i mykey.pem -l cirros 192.150.73.3
    OpenSSH_5.9p1 Debian-5ubuntu1.1, OpenSSL 1.0.1 14 Mar 2012
    Authenticated to 192.150.73.3 ([192.150.73.3]:22).
    $ sudo passwd
    Changing password for root
    New password:
    Retype password:
    Password for root changed by root
    2. Run rebuild from the command line, specifying the ubuntu image; note that the VM's image has already changed:
    root@controller231:~# nova rebuild rebuild-test2 1f7f5763-33a1-4282-92b3-53366bf7c695
    +-------------------------------------+--------------------------------------------------------------------+
    | Property | Value |
    +-------------------------------------+--------------------------------------------------------------------+
    | status | REBUILD |
    | updated | 2013-06-24T08:34:47Z |
    | OS-EXT-STS:task_state | rebuilding |
    | OS-EXT-SRV-ATTR:host | controller231 |
    | key_name | mykey |
    | image | Ubuntu 12.04 cloudimg i386 (1f7f5763-33a1-4282-92b3-53366bf7c695) |
    | hostId | 083729f2f8f664fffd4cffb8c3e76615d7abc1e11efc993528dd88b9 |
    | OS-EXT-STS:vm_state | active |
    | OS-EXT-SRV-ATTR:instance_name | instance-0000000e |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | controller231.openstack.org |
    | flavor | m1.small (2) |
    | id | 03774415-d9ce-4b34-b012-6891d248b767 |
    | security_groups | [{u'name': u'default'}] |
    | user_id | f882feb345064e7d9392440a0f397c25 |
    | name | rebuild-test2 |
    | created | 2013-06-24T08:14:38Z |
    | tenant_id | 6fbe9263116a4b68818cf1edce16bc4f |
    | OS-DCF:diskConfig | MANUAL |
    | metadata | {} |
    | accessIPv4 | |
    | accessIPv6 | |
    | testnet01 network | 10.1.1.20, 192.150.73.3 |
    | progress | 0 |
    | OS-EXT-STS:power_state | 1 |
    | OS-EXT-AZ:availability_zone | nova |
    | config_drive | |
    +-------------------------------------+--------------------------------------------------------------------+
    3. Wait for the VM to return to ACTIVE, then log in again:
    root@network232:~# ssh -i mykey.pem 192.150.73.3
    Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-35-virtual i686)

    • Documentation: https://help.ubuntu.com/

    System information as of Mon Jun 24 08:47:49 UTC 2013

    System load: 0.0 Processes: 60
    Usage of /: 2.9% of 19.67GB Users logged in: 0
    Memory usage: 1% IP address for eth0: 10.1.1.20
    Swap usage: 0%

    Graph this data and manage this system at https://landscape.canonical.com/

    0 packages can be updated.
    0 updates are security updates.

    Get cloud support with Ubuntu Advantage Cloud Guest
    http://www.ubuntu.com/business/services/cloud
    Last login: Mon Jun 24 08:46:09 2013 from 192.168.82.232
    root@rebuild-test2:~#
    The system disk has become Ubuntu.

    4. Rebuilding a volume-backed VM
    Take a VM booted from a volume, where the backing volume holds a cirros image:
    root@controller231:~# nova show kong2
    +-------------------------------------+-----------------------------------------------------------+
    | Property | Value |
    +-------------------------------------+-----------------------------------------------------------+
    | status | ACTIVE |
    | updated | 2013-06-26T10:01:29Z |
    | OS-EXT-STS:task_state | None |
    | OS-EXT-SRV-ATTR:host | controller231 |
    | key_name | mykey |
    | image | Attempt to boot from volume - no image supplied |
    | hostId | 083729f2f8f664fffd4cffb8c3e76615d7abc1e11efc993528dd88b9 |
    | OS-EXT-STS:vm_state | active |
    | OS-EXT-SRV-ATTR:instance_name | instance-00000021 |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | controller231.openstack.org |
    | flavor | kong_flavor (6) |
    | id | 8989a10b-5a89-4f87-9b59-83578eabb997 |
    | security_groups | [{u'name': u'default'}] |
    | user_id | f882feb345064e7d9392440a0f397c25 |
    | name | kong2 |
    | created | 2013-06-26T10:00:51Z |
    | tenant_id | 6fbe9263116a4b68818cf1edce16bc4f |
    | OS-DCF:diskConfig | MANUAL |
    | metadata | {} |
    | accessIPv4 | |
    | accessIPv6 | |
    | testnet01 network | 10.1.1.6 |
    | progress | 0 |
    | OS-EXT-STS:power_state | 1 |
    | OS-EXT-AZ:availability_zone | nova |
    | config_drive | |
    +-------------------------------------+-----------------------------------------------------------+
    Note that the image field shows the VM booted from a volume.
    Rebuild this VM, specifying the ubuntu image:
    root@controller231:~# nova rebuild kong2 1f7f5763-33a1-4282-92b3-53366bf7c695
    +-------------------------------------+--------------------------------------------------------------------+
    | Property | Value |
    +-------------------------------------+--------------------------------------------------------------------+
    | status | REBUILD |
    | updated | 2013-06-26T10:25:03Z |
    | OS-EXT-STS:task_state | rebuilding |
    | OS-EXT-SRV-ATTR:host | controller231 |
    | key_name | mykey |
    | image | Ubuntu 12.04 cloudimg i386 (1f7f5763-33a1-4282-92b3-53366bf7c695) |
    | hostId | 083729f2f8f664fffd4cffb8c3e76615d7abc1e11efc993528dd88b9 |
    | OS-EXT-STS:vm_state | active |
    | OS-EXT-SRV-ATTR:instance_name | instance-00000021 |
    | OS-EXT-SRV-ATTR:hypervisor_hostname | controller231.openstack.org |
    | flavor | kong_flavor (6) |
    | id | 8989a10b-5a89-4f87-9b59-83578eabb997 |
    | security_groups | [{u'name': u'default'}] |
    | user_id | f882feb345064e7d9392440a0f397c25 |
    | name | kong2 |
    | created | 2013-06-26T10:00:51Z |
    | tenant_id | 6fbe9263116a4b68818cf1edce16bc4f |
    | OS-DCF:diskConfig | MANUAL |
    | metadata | {} |
    | accessIPv4 | |
    | accessIPv6 | |
    | testnet01 network | 10.1.1.6, 192.150.73.16 |
    | progress | 0 |
    | OS-EXT-STS:power_state | 1 |
    | OS-EXT-AZ:availability_zone | nova |
    | config_drive | |
    +-------------------------------------+--------------------------------------------------------------------+
    Once the VM is active again, logging in over VNC shows that nothing changed; it is still cirros.
    That is because, at the nova driver layer, rebuild still calls spawn to create the new VM, and a volume-backed VM never talks to glance: it simply reattaches its system disk.

  • Difference between Rebuild and Clean + Build in Visual Studio

    Translated from: Difference between Rebuild and Clean + Build in Visual Studio (Stack Overflow).

    What is the difference between just a Rebuild and doing a Clean + Build in Visual Studio 2008? Is Clean + Build different from doing Clean + Rebuild?




    #2

    Rebuild = Clean + Build (usually)

    Notable details:

    1. For a multi-project solution, "rebuild solution" does a "clean" followed by a "build" for each project (possibly in parallel). Whereas a "clean solution" followed by a "build solution" first cleans all projects (possibly in parallel) and then builds all projects (possibly in parallel). This difference in the sequencing of events can become significant when inter-project dependencies come into play.

    2. All three actions correspond to MSBuild targets, so a project can override the Rebuild action to do something completely different.


    #3

    Earl is correct that 99% of the time Rebuild = Clean + Build.

    But they are not guaranteed to be the same. The three actions (rebuild, build, clean) represent different MSBuild targets, each of which can be overridden by any project file to do custom actions. So it is entirely possible for someone to override rebuild to do several actions before initiating a clean + build (or to remove them entirely).

    Very much a corner case, but worth pointing out given the comment discussions.


    #4

    From http://www.cs.tufts.edu/r/graphics/resources/vs_getting_started/vs_getting_started.htm (just googled it):

    Build means compile and link only the source files that have changed since the last build, while Rebuild means compile and link all source files regardless of whether they changed or not. Build is the normal thing to do and is faster. Sometimes the versions of project target components can get out of sync and a rebuild is necessary to make the build successful. In practice, you never need to Clean.

    Build or Rebuild Solution builds or rebuilds all projects in your solution, while Build or Rebuild builds or rebuilds only the StartUp project. To set the StartUp project, right-click the desired project name in the Solution Explorer tab and select Set as StartUp project; the project name then appears in bold. Since the homework solutions typically have only one project, Build or Rebuild Solution is effectively the same as Build or Rebuild.

    Compile just compiles the source file currently being edited. It is useful for quickly checking for errors while the rest of your source files are in an incomplete state that would prevent a successful build of the entire project. Ctrl-F7 is the shortcut key for Compile.


    #5

    From this blog post, which the author linked as a comment on this question:

    Actually, no! They are not equal.

    The difference is in the sequence in which projects get cleaned and built. Say we have two projects in a solution. Clean and then Build cleans both projects first, and then each project is built; with Rebuild, project A is cleaned and then built, after which project B is cleaned and then built, and so on.


    #6

    1. Per project, Rebuild project = (Clean project + Build project).

    2. Per solution, Rebuild Sln = foreach project (Clean project + Build project) != Clean Sln + Build Sln

    Say you have a Sln containing proj1, proj2, and proj3.

    Rebuild Sln = (Clean proj1 -> Build proj1) + (Clean proj2 -> Build proj2) + (Clean proj3 -> Build proj3)

    Clean Sln + Build Sln = (Clean proj1 + Clean proj2 + Clean proj3) -> (Build proj1 + Build proj2 + Build proj3)

    -> means serial, + means concurrent

    So if you submit a lot of code changes without having the project dependencies configured correctly, Rebuild Sln can leave some projects linked against a stale lib, because not all builds are guaranteed to happen after all cleans. (In that case, Clean Sln + Build Sln gives a link error and tells you immediately, instead of handing you an app with odd behavior.)
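    The ordering difference is easy to model. A small illustrative script (project names invented) that prints the two sequences:

        def rebuild_solution(projects):
            # Rebuild Sln: clean + build per project, in order
            for p in projects:
                print("clean", p)
                print("build", p)

        def clean_then_build_solution(projects):
            # Clean Sln + Build Sln: all cleans first, then all builds
            for p in projects:
                print("clean", p)
            for p in projects:
                print("build", p)

        projects = ["proj1", "proj2", "proj3"]
        rebuild_solution(projects)            # clean proj1, build proj1, clean proj2, ...
        clean_then_build_solution(projects)   # clean proj1..3, then build proj1..3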
