  • Sparse code fusion for enhanced context in night-vision surveillance
  • Recovering a colleague's SVN code deleted during a local rollback

    Once, while rolling back my own local code, I accidentally deleted SVN code a colleague had committed. Here is how to recover from that situation:

    1. Right-click your project folder.

    2. In the TortoiseSVN menu, choose Show log to view the history.

    In this view, the leftmost column is the revision number, and the icons in the middle are self-explanatory. The key part is the pane at the bottom, which lists each file as Modified, Added, or Deleted. Double-click an entry to see the diff, then Revert whatever you need.

    The recovered code may not appear in the editor right away; rebuild the project, and both the missing files and the differing ones will be back.


  • Image fusion code

    2019-02-20 14:52:48
    Classic image fusion code, covering pyramid-family and wavelet-family fusion algorithms; very practical.
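    Since the entry above ships only as a download, here is a minimal, hypothetical sketch of the pyramid-family idea: fuse two grayscale images by merging their Laplacian pyramids with a max-absolute selection rule. The function name and the 4-level depth are illustrative choices, not taken from the uploaded code.

    ```python
    import cv2
    import numpy as np

    def laplacian_pyramid_fuse(a, b, levels=4):
        """Fuse two equal-size grayscale images by merging their
        Laplacian pyramids level by level (max-absolute rule)."""
        def gaussian_pyr(img):
            pyr = [img.astype(np.float32)]
            for _ in range(levels):
                pyr.append(cv2.pyrDown(pyr[-1]))
            return pyr

        def laplacian_pyr(gp):
            lp = []
            for i in range(levels):
                up = cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
                lp.append(gp[i] - up)
            lp.append(gp[-1])  # keep the coarsest Gaussian level as-is
            return lp

        lpa = laplacian_pyr(gaussian_pyr(a))
        lpb = laplacian_pyr(gaussian_pyr(b))
        # At each level, keep the coefficient with the larger magnitude
        fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
                 for la, lb in zip(lpa, lpb)]
        # Collapse the fused pyramid back to full resolution
        out = fused[-1]
        for i in range(levels - 1, -1, -1):
            out = cv2.pyrUp(out, dstsize=(fused[i].shape[1], fused[i].shape[0])) + fused[i]
        return np.clip(out, 0, 255).astype(np.uint8)
    ```

    Wavelet-family fusion follows the same select-and-reconstruct pattern, just with a wavelet decomposition in place of the pyramid.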
  • D-S fusion code

    2015-05-16 20:35:51
    Hands-on code for Dempster-Shafer evidence theory, implementing multi-frame image-sequence fusion for target recognition.
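    For context, Dempster's rule of combination, which code like this typically implements, can be sketched in a few lines. Representing mass functions as dicts from frozenset (focal element) to mass is an illustrative choice here, not taken from the uploaded code.

    ```python
    def dempster_combine(m1, m2):
        """Combine two mass functions by Dempster's rule: intersect
        focal elements, multiply masses, and renormalize by 1 - K,
        where K is the total conflicting mass."""
        combined, conflict = {}, 0.0
        for A, x in m1.items():
            for B, y in m2.items():
                inter = A & B
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + x * y
                else:
                    conflict += x * y  # mass landing on the empty set
        k = 1.0 - conflict
        return {A: v / k for A, v in combined.items()}

    # Two sensors over the frame {'a', 'b'}; both lean toward target 'a'
    m1 = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
    m2 = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.3, frozenset({'a', 'b'}): 0.2}
    fused = dempster_combine(m1, m2)
    ```

    For a multi-frame sequence, the fused result of frames 1..n can simply be combined again with frame n+1's mass function, since the rule is associative.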
  • Projection blending code

    2018-08-13 20:39:57
    Projector edge-blending code based on the openFrameworks (ofx) framework; quite helpful for anyone working on projection blending.
  • Wavelet-transform image fusion code with a GUI; good reference material for a graduation project.
  • PCA fusion code

    2015-05-02 21:49:15
    A working PCA fusion program (principal component analysis) that can be run directly.
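    As a rough illustration of what PCA-based fusion does (the download itself is a separate program, so this is a hypothetical NumPy sketch): the two source images are weighted by the components of the leading eigenvector of their joint covariance matrix, so the image carrying more variance contributes more to the result.

    ```python
    import numpy as np

    def pca_fuse(a, b):
        """Fuse two grayscale images by weighting each with the
        components of the leading eigenvector of their covariance."""
        data = np.stack([a.ravel(), b.ravel()]).astype(np.float64)
        cov = np.cov(data)                # 2x2 covariance of the two images
        vals, vecs = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
        v = np.abs(vecs[:, -1])           # leading eigenvector
        w = v / v.sum()                   # normalized fusion weights
        return w[0] * a + w[1] * b
    ```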
  • Image fusion tutorial and code - 图像融合.pdf. A tutorial on image fusion; hopefully useful to everyone. Moderators, please delete this if it is a duplicate, thanks.
  • This algorithm handles multi-frame exposure fusion where moving objects appear across frames. Compared with traditional HDR, it largely avoids the ghosting artifacts caused by moving objects, has strong adaptive denoising, and is well suited to production engineering.
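    The upload itself is not shown, but the baseline it improves on can be sketched with OpenCV's built-in Mertens exposure fusion, which merges a bracketed sequence without building an HDR radiance map. Note this baseline has none of the motion/ghost handling the entry describes.

    ```python
    import cv2
    import numpy as np

    def fuse_exposures(images):
        """Fuse a list of 8-bit bracketed exposures (Mertens et al.)
        and return an 8-bit LDR result; no radiance map is built."""
        merge = cv2.createMergeMertens()
        fused = merge.process(images)  # float32 output, roughly in [0, 1]
        return np.clip(fused * 255, 0, 255).astype(np.uint8)
    ```

    Ghost-aware variants additionally detect pixels that move between frames and down-weight them before merging.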
  • Data fusion code - ESTARFM

    2020-06-24 15:58:13
    ESTARFM is a classic algorithm in the data fusion field. This GPU-based implementation includes both the code and test data. Worth downloading if your research touches this area; personally tested and working.
  • Image fusion MATLAB code

    2015-10-23 17:58:08
    Image fusion code written mainly in MATLAB, following the surveying and mapping textbooks of Wuhan University.
  • Understanding feature fusion code

    2020-11-23 18:32:47

    Understanding feature fusion code

    Ensemble methods are used constantly in these competitions; to place well, your ensembling has to be done well. Briefly: we used two models, each also trained and run with different hyperparameter settings, which gives us many predicted mask images. We then fuse them by majority vote per pixel: for each pixel position, every result image votes for a class, and the class with the most votes wins. As the saying goes, "three cobblers together beat Zhuge Liang": this ensemble removes many obviously misclassified pixels and substantially improves prediction quality.

    Code for the majority-vote strategy:

    import numpy as np
    import cv2

    # Directories holding each model's predicted masks
    RESULT_PREFIXX = ['./result1/', './result2/', './result3/']
    # each mask has 5 classes: 0~4
    NUM_CLASSES = 5

    def vote_per_image(image_id):
        # Load this image's predicted mask from every result directory
        result_list = []
        for prefix in RESULT_PREFIXX:
            im = cv2.imread(prefix + str(image_id) + '.png', 0)
            result_list.append(im)

        # Majority vote, pixel by pixel
        height, width = result_list[0].shape
        vote_mask = np.zeros((height, width), dtype=np.uint8)
        for h in range(height):
            for w in range(width):
                record = np.zeros(NUM_CLASSES)
                for mask in result_list:
                    record[mask[h, w]] += 1
                # The class with the most votes wins this pixel
                vote_mask[h, w] = record.argmax()

        cv2.imwrite('vote_mask' + str(image_id) + '.png', vote_mask)

    vote_per_image(3)
    
  • Image fusion code

    2013-01-15 20:37:06
    MATLAB source code containing several image fusion methods.
  • PANET code: feature fusion

    2020-06-10 21:00:25

    ################################
    # Multi-scale feature fusion
    # Fusing ROI features from different levels
    ###############################
    x = KL.Add(name="mrcnn_mask_add_2_3")([x2, x3])
    x = KL.Add(name="mrcnn_mask_add_2_4")([x, x4])
    x = KL.Add(name="mrcnn_mask_add_2_5")([x, x5])

    def fpn_classifier_graph(rois, feature_maps, image_meta,
                             pool_size, num_classes, train_bn=True,
                             fc_layers_size=1024):
        """Builds the computation graph of the feature pyramid network classifier
        and regressor heads.
        rois: [batch, num_rois, (y1, x1, y2, x2)] Proposal boxes in normalized
              coordinates.
        feature_maps: List of feature maps from different layers of the pyramid,
                      [P2, P3, P4, P5]. Each has a different resolution.
        image_meta: [batch, (meta data)] Image details. See compose_image_meta()
        pool_size: The width of the square feature map generated from ROI Pooling.
        num_classes: number of classes, which determines the depth of the results
        train_bn: Boolean. Train or freeze Batch Norm layers
        fc_layers_size: Size of the 2 FC layers
        Returns:
            logits: [batch, num_rois, NUM_CLASSES] classifier logits (before softmax)
            probs: [batch, num_rois, NUM_CLASSES] classifier probabilities
            bbox_deltas: [batch, num_rois, NUM_CLASSES, (dy, dx, log(dh), log(dw))] Deltas to apply to
                         proposal boxes
        """
        # ROI Pooling
        # Shape: [batch, num_rois, POOL_SIZE, POOL_SIZE, channels]
        x2 = PyramidROIAlign_AFN([pool_size, pool_size],2,
                            name="roi_align_classifier_2")([rois, image_meta] + feature_maps)
    
        # Two 1024 FC layers (implemented with Conv2D for consistency)
        x2 = KL.TimeDistributed(KL.Conv2D(fc_layers_size, (pool_size, pool_size), padding="valid"),
                               name="mrcnn_class_conv1_2")(x2)
        x2 = KL.TimeDistributed(BatchNorm(), name='mrcnn_class_bn1_2')(x2, training=train_bn)
        x2 = KL.Activation('relu')(x2)
        #3
        x3 = PyramidROIAlign_AFN([pool_size, pool_size], 3,
                              name="roi_align_classifier_3")([rois, image_meta] + feature_maps)
    
        # Two 1024 FC layers (implemented with Conv2D for consistency)
        x3 = KL.TimeDistributed(KL.Conv2D(fc_layers_size, (pool_size, pool_size), padding="valid"),
                                name="mrcnn_class_conv1_3")(x3)
        x3 = KL.TimeDistributed(BatchNorm(), name='mrcnn_class_bn1_3')(x3, training=train_bn)
        x3 = KL.Activation('relu')(x3)
        #4
        x4 = PyramidROIAlign_AFN([pool_size, pool_size], 4,
                             name="roi_align_classifier_4")([rois, image_meta] + feature_maps)
    
        # Two 1024 FC layers (implemented with Conv2D for consistency)
        x4 = KL.TimeDistributed(KL.Conv2D(fc_layers_size, (pool_size, pool_size), padding="valid"),
                                name="mrcnn_class_conv1_4")(x4)
        x4 = KL.TimeDistributed(BatchNorm(), name='mrcnn_class_bn1_4')(x4, training=train_bn)
        x4 = KL.Activation('relu')(x4)
        #5
        x5 = PyramidROIAlign_AFN([pool_size, pool_size], 5,
                              name="roi_align_classifier_5")([rois, image_meta] + feature_maps)
    
        # Two 1024 FC layers (implemented with Conv2D for consistency)
        x5 = KL.TimeDistributed(KL.Conv2D(fc_layers_size, (pool_size, pool_size), padding="valid"),
                                name="mrcnn_class_conv1_5")(x5)
        x5 = KL.TimeDistributed(BatchNorm(), name='mrcnn_class_bn1_5')(x5, training=train_bn)
        x5 = KL.Activation('relu')(x5)
        ################################
        # Multi-scale feature fusion
        # Fusing ROI features from different levels
        ###############################
        x = KL.Add(name="mrcnn_mask_add_2_3")([x2, x3])
        x = KL.Add(name="mrcnn_mask_add_2_4")([x, x4])
        x = KL.Add(name="mrcnn_mask_add_2_5")([x, x5])
    
        x = KL.TimeDistributed(KL.Conv2D(fc_layers_size, (1, 1)),
                               name="mrcnn_class_conv2")(x)
        x = KL.TimeDistributed(BatchNorm(), name='mrcnn_class_bn2')(x, training=train_bn)
        x = KL.Activation('relu')(x)
    
        shared = KL.Lambda(lambda x: K.squeeze(K.squeeze(x, 3), 2),
                           name="pool_squeeze")(x)
    
        # Classifier head
        mrcnn_class_logits = KL.TimeDistributed(KL.Dense(num_classes),
                                                name='mrcnn_class_logits')(shared)
        mrcnn_probs = KL.TimeDistributed(KL.Activation("softmax"),
                                         name="mrcnn_class")(mrcnn_class_logits)
    
        # BBox head
        # [batch, num_rois, NUM_CLASSES * (dy, dx, log(dh), log(dw))]
        x = KL.TimeDistributed(KL.Dense(num_classes * 4, activation='linear'),
                               name='mrcnn_bbox_fc')(shared)
        # Reshape to [batch, num_rois, NUM_CLASSES, (dy, dx, log(dh), log(dw))]
        s = K.int_shape(x)
        mrcnn_bbox = KL.Reshape((s[1], num_classes, 4), name="mrcnn_bbox")(x)
    
        return mrcnn_class_logits, mrcnn_probs, mrcnn_bbox
    
      for i, level in enumerate(range(2, 6)):
                ix = tf.where(tf.equal(roi_level, level))
                level_boxes = tf.gather_nd(boxes, ix)
    
                # Box indices for crop_and_resize.
                box_indices = tf.cast(ix[:, 0], tf.int32)
    
                # Keep track of which box is mapped to which level
                box_to_level.append(ix)
    
            # Stop gradient propagation to ROI proposals
                level_boxes = tf.stop_gradient(level_boxes)
                box_indices = tf.stop_gradient(box_indices)
    
                # Crop and Resize
                # From Mask R-CNN paper: "We sample four regular locations, so
                # that we can evaluate either max or average pooling. In fact,
                # interpolating only a single value at each bin center (without
                # pooling) is nearly as effective."
                #
                # Here we use the simplified approach of a single value per bin,
                # which is how it's done in tf.crop_and_resize()
                # Result: [batch * num_boxes, pool_height, pool_width, channels]
                pooled.append(tf.image.crop_and_resize(
                    feature_maps[i], level_boxes, box_indices, self.pool_shape,
                    method="bilinear"))
    
            # Pack pooled features into one tensor
            pooled = tf.concat(pooled, axis=0)
    
    


    def build_fpn_mask_graph(rois, feature_maps, image_meta,
                             pool_size, num_classes, train_bn=True):
        """Builds the computation graph of the mask head of Feature Pyramid Network.
        rois: [batch, num_rois, (y1, x1, y2, x2)] Proposal boxes in normalized
              coordinates.
        feature_maps: List of feature maps from different layers of the pyramid,
                      [P2, P3, P4, P5]. Each has a different resolution.
        image_meta: [batch, (meta data)] Image details. See compose_image_meta()
        pool_size: The width of the square feature map generated from ROI Pooling.
        num_classes: number of classes, which determines the depth of the results
        train_bn: Boolean. Train or freeze Batch Norm layers
        Returns: Masks [batch, num_rois, MASK_POOL_SIZE, MASK_POOL_SIZE, NUM_CLASSES]
        """
        # ROI Pooling
        # Shape: [batch, num_rois, MASK_POOL_SIZE, MASK_POOL_SIZE, channels]
        x = PyramidROIAlign([pool_size, pool_size],
                            name="roi_align_mask")([rois, image_meta] + feature_maps)
    
        # Conv layers
        x = KL.TimeDistributed(KL.Conv2D(256, (3, 3), padding="same"),
                               name="mrcnn_mask_conv1")(x)
        x = KL.TimeDistributed(BatchNorm(),
                               name='mrcnn_mask_bn1')(x, training=train_bn)
        x = KL.Activation('relu')(x)
    
        x = KL.TimeDistributed(KL.Conv2D(256, (3, 3), padding="same"),
                               name="mrcnn_mask_conv2")(x)
        x = KL.TimeDistributed(BatchNorm(),
                               name='mrcnn_mask_bn2')(x, training=train_bn)
        x = KL.Activation('relu')(x)
    
        x = KL.TimeDistributed(KL.Conv2D(256, (3, 3), padding="same"),
                               name="mrcnn_mask_conv3")(x)
        x = KL.TimeDistributed(BatchNorm(),
                               name='mrcnn_mask_bn3')(x, training=train_bn)
        x = KL.Activation('relu')(x)
    
        x1 = KL.TimeDistributed(KL.Conv2D(256, (3, 3), padding="same"),
                               name="mrcnn_mask_conv4_fc")(x)
        x1 = KL.TimeDistributed(BatchNorm(),
                               name='mrcnn_mask_conv4bn')(x1, training=train_bn)
        x1 = KL.Activation('relu')(x1)
    
        x1 = KL.TimeDistributed(KL.Conv2D(256, (3, 3),strides=(2,2), padding="same"),
                               name="mrcnn_mask_conv5_fc")(x1)
        x1 = KL.TimeDistributed(BatchNorm(),
                                name='mrcnn_mask_conv5bn')(x1, training=train_bn)
        x1 = KL.Activation('relu')(x1)
    
        #x1 = KL.TimeDistributed(KL.Dense(256*4*4,activation="sigmoid"),
        #                       name="mrcnn_mask_fc")(x1)
        x1 = KL.TimeDistributed(KL.Flatten())(x1)
        x1 = KL.TimeDistributed(KL.Dense(28*28*num_classes),name='mrcnn_mask_fc_logits')(x1)
    
        x1 = KL.Activation("softmax",name="mrcnn_class_fc")(x1)
    
    
    
        s = K.int_shape(x1)
        x1 = KL.Reshape(( s[1],28,28, num_classes), name="mrcnn_mask_fc_reshape")(x1)
        #x1 = KL.TimeDistributed(KL.Reshape((14,14)),name="mrcnn_mask_fc_reshape")(x1)
    
        x = KL.TimeDistributed(KL.Conv2D(256, (3, 3), padding="same"),
                               name="mrcnn_mask_conv4")(x)
        x = KL.TimeDistributed(BatchNorm(),
                               name='mrcnn_mask_bn4')(x, training=train_bn)
        x = KL.Activation('relu')(x)
    
        x = KL.TimeDistributed(KL.Conv2DTranspose(256, (2, 2), strides=2, activation="relu"),
                               name="mrcnn_mask_deconv")(x)
    
    
        x = KL.TimeDistributed(KL.Conv2D(num_classes, (1, 1), strides=1, activation="softmax"),
                               name="mrcnn_mask")(x)
        x = KL.Add(name="mrcnn_mask_add")([x, x1])
        x = KL.Activation('tanh',name="mrcnn_masksoftmax")(x)
    
    
        return x
    
  • Wavelet fusion code

    2012-08-06 16:05:46
    Image-processing code for wavelet-based fusion; MATLAB files.
  • Common image fusion code

    2013-01-15 15:29:23
    MATLAB source code for three common fusion algorithms: IHS, PCA, and weighted averaging.
  • Image fusion code, MATLAB

    2009-02-27 10:49:04
    Image fusion code in MATLAB.
  • Multi-band blending / Laplacian pyramid fusion, OpenCV C++ code
  • Poisson blending code

    2020-09-27 21:49:09
        import cv2

        src = cv2.imread("/data/h201908021056/program/pt/class_practice/0010.jpg")
        mask = cv2.imread("/data/h201908021056/program/pt/class_practice/mask0010.jpg")
        dst = cv2.imread("/data/h201908021056/program/pt/class_practice/b1.jpg")
        # (700, 300) is where the blend is centered in dst; the last flag
        # picks the mode: cv2.NORMAL_CLONE, cv2.MIXED_CLONE or cv2.MONOCHROME_TRANSFER
        output = cv2.seamlessClone(src, dst, mask, (700, 300), cv2.NORMAL_CLONE)
        cv2.imwrite("/data/h201908021056/program/pt/class_practice/1536.png", output)

     

  • Generalized evidence theory fusion code, distinguished from classic D-S evidence theory code, with special cases discussed; highly generalizable, and the core code ports easily.
  • 图像融合代码.rar

    2020-07-03 22:25:11
    Image stitching-and-fusion approach: first convert both images to grayscale and run Harris corner detection; match features based on the detected corners; then warp one image onto the other with a homography; finally blend the overlapping pixels into a single image. The files include the experiment...
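    The pipeline described above can be sketched as follows. This is a minimal illustration, not the archive's code: it substitutes ORB keypoints (which come with descriptors for matching) for raw Harris corners, uses RANSAC to estimate the homography, and handles the overlap naively by letting the second image win rather than blending.

    ```python
    import cv2
    import numpy as np

    def stitch(img1, img2):
        """Warp img1 onto img2's plane via a homography estimated
        from matched keypoints, then paste img2 over the overlap."""
        orb = cv2.ORB_create()
        k1, d1 = orb.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
        k2, d2 = orb.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # Robustly fit the homography, discarding bad matches as outliers
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        h, w = img2.shape[:2]
        out = cv2.warpPerspective(img1, H, (w * 2, h))
        out[0:h, 0:w] = img2  # naive overlap handling: img2 wins
        return out
    ```

    A real implementation would feather or multi-band blend the overlap region rather than overwrite it.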
  • Multi-focus image fusion code

    2018-03-19 21:45:10
    A MATLAB implementation of a multi-focus image fusion algorithm, with detailed example images.
