  • Introduction to Multi-Object Tracking (Part 3): The Hungarian Algorithm & the KM Algorithm

    2019-07-02 19:30:41

    The Hungarian Algorithm and the KM Algorithm (Kuhn-Munkres Algorithm) are two algorithms that people working on multi-object tracking constantly run into in papers. Both are used to solve the data-association problem in multi-object tracking.

    Readers with no interest in the theory can skip this article for now, move on to the next one, and simply treat the Hungarian algorithm as a black box until the theory becomes necessary. Personally, though, I recommend at least understanding what these algorithms are for and roughly how they work.

    If you search for the names of these two algorithms, the first term you will run into is the bipartite graph. Both the Hungarian algorithm and the KM algorithm solve the maximum matching problem on bipartite graphs.

     

    Bipartite graphs

    There is a special kind of graph called a bipartite graph. What makes it special? Its vertices can be split into two sets, U and V: vertices inside U are never connected to each other and only connect to vertices in V, and likewise vertices inside V are never connected to each other and only connect to vertices in U. Such a graph is called bipartite.

    You can picture the bipartite graph as the detection boxes of two consecutive video frames: the set of all detection boxes in the first frame is U, and the set in the second frame is V. Boxes within the same frame cannot belong to the same target, so they never need to be associated with each other, while boxes in adjacent frames do need to be associated, and the goal is to pair up the boxes of the two frames as cleanly as possible. Solving this matching problem optimally is exactly what the Hungarian algorithm and the KM algorithm are for.

    1. The Hungarian Algorithm

    Let's start with the Hungarian algorithm. Quoting Baidu Baike: the Hungarian algorithm is a combinatorial optimization algorithm that solves the assignment problem in polynomial time, and it anticipated later primal-dual methods. The American mathematician Harold Kuhn proposed it in 1955; it is called the Hungarian algorithm because large parts of it build on earlier work by the Hungarian mathematicians Dénes Kőnig and Jenő Egerváry.

    https://blog.csdn.net/dark_scope/article/details/8880547#commentBox

    That blog post gives a very simple and clear explanation of the Hungarian algorithm, and I will borrow it here to walk through the algorithm's procedure.

    Take the figure above as an example: suppose the four images on the left are the targets detected in frame N (set U), and the four on the right are the targets detected in frame N+1 (set V). The images joined by red lines are the pairs that our model considers likely to be the same pedestrian. Since no model is ideal, a clean one-to-one correspondence is not guaranteed: one-to-two, one-to-many, and even many-to-many situations happen all the time. How do we then obtain the final one-to-one tracking result? Let's see what the Hungarian algorithm does.

    Step 1.

    First we match left 1. The first node connected to it, right 1, is still unmatched, so we pair them and draw a blue edge.

    Step 2.

    Next we match left 2. The first node connected to it, right 2, is still unmatched, so we pair them.

    Step 3.

    Now comes left 3. Its preferred target, right 1, is already taken. What do we do?

    We try to find a different partner for right 1's current partner, left 1.

    (A yellow edge indicates an edge that has been temporarily removed.)

    The second candidate for left 1 is right 2, but right 2 already has a partner as well. Now what?

    We then try to find another partner for right 2's current partner, left 2 (note that this step is the same operation as before: the process is recursive).

    At this point we find that left 2 can still be matched to right 3, so the earlier conflicts resolve themselves and we backtrack.

    Left 2 goes with right 3, left 1 with right 2, and left 3 with right 1.

    So the result at the end of step 3 is:

    Step 4.

    Finally, left 4. Unfortunately, following the procedure of step 3 we cannot free up a partner for left 4, so we have to give up on matching it, and the Hungarian algorithm ends here. The blue edges are the final matching result. We have found a maximum matching of this bipartite graph.
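
    To make this augmenting-path procedure concrete, here is a minimal Python sketch of it. It is only illustrative: the adjacency dict graph, mapping each left-hand box to its candidate partners on the right, is a made-up input (in a tracker it would come from thresholded IoU or appearance scores), and the function names are hypothetical.

    def try_augment(u, graph, match_right, visited):
        """Try to find an augmenting path starting from left node u."""
        for v in graph[u]:                      # candidate right nodes for u
            if v in visited:
                continue
            visited.add(v)
            # v is free, or v's current partner can be re-matched elsewhere
            if match_right.get(v) is None or try_augment(match_right[v], graph, match_right, visited):
                match_right[v] = u
                return True
        return False

    def max_bipartite_matching(graph):
        """graph: dict mapping each left node to the right nodes it may match."""
        match_right = {}                        # right node -> matched left node
        for u in graph:
            try_augment(u, graph, match_right, set())
        return {u: v for v, u in match_right.items()}   # left -> right

    # Toy graph in the spirit of the walkthrough above: L4 cannot be matched.
    graph = {"L1": ["R1", "R2"], "L2": ["R2", "R3"], "L3": ["R1"], "L4": ["R3"]}
    print(max_bipartite_matching(graph))        # {'L3': 'R1', 'L1': 'R2', 'L2': 'R3'}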

    Thanks again to Dark_Scope for the original write-up; I have merely swapped his example for a real multi-object-tracking scenario to make it easier to follow.

    The final result is three matched pairs. Because the candidate red edges contain many incorrect associations, the matching accuracy is not high. Clearly the Hungarian algorithm is very sensitive to the quality of those candidate edges: the motion model, the appearance model and the other components have to make fairly accurate predictions, or we have to set a high threshold and feed only high-confidence edges into the Hungarian algorithm, in order to obtain good results.
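
    As a small illustration of that pre-filtering idea, the sketch below keeps only high-confidence pairs as candidate edges before handing them to the matcher; the score matrix and the 0.5 threshold are made-up values, and candidate_edges is a hypothetical helper, not part of any library.

    import numpy as np

    def candidate_edges(score, threshold=0.5):
        """Keep only high-confidence (row, col) pairs as candidate edges for the matcher."""
        n_prev, n_curr = score.shape
        return {i: [j for j in range(n_curr) if score[i, j] >= threshold]
                for i in range(n_prev)}

    score = np.array([[0.9, 0.4, 0.1],      # rows: boxes in frame N
                      [0.3, 0.8, 0.6]])     # cols: boxes in frame N+1
    print(candidate_edges(score))           # {0: [0], 1: [1, 2]}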

    Having seen the procedure, you have probably also noticed an obvious problem: the maximum matching it finds is often not the matching we really want. The Hungarian algorithm treats every candidate partner as equally good and simply maximizes the number of matches. That does not quite fit multi-object tracking, where the candidates are never equally plausible: there is always one true target that is the best match, and that target should carry a larger weight so that the final matching comes closer to reality.

    The KM algorithm handles this much better, so let's look at it next.

    2. The KM Algorithm (Kuhn-Munkres Algorithm)

    The KM algorithm solves the optimal (maximum-weight) matching problem on a weighted bipartite graph.

    We use the same figure as before, but this time every edge carries a weight: the confidence score produced by the other modules of our tracker.

    The KM algorithm proceeds as follows.

    Step 1

    First, assign a value to every vertex, called its label: each left vertex is labelled with the largest weight among its incident edges, and each right vertex is labelled 0.

    Step 2: start matching

    The matching rule is to match only along edges whose weight equals the left vertex's score (its label). If no such edge can be found, subtract d from the labels of all left vertices on the conflicting path and add d to the labels of all right vertices on it. Here we take d = 0.1.

    For left 1, the edge whose weight equals its label is marked blue.

    Then left 2: the edge whose weight equals its label is marked blue.

    Then left 3: we find that right 1 is already paired with left 1. The first idea is to let left 3 switch partners, but by the matching rule only an edge with weight at least 0.9 + 0 = 0.9 (left label plus right label) is admissible, so left 3 cannot switch. Can left 1 switch instead? For left 1 only an edge with weight at least 0.8 + 0 = 0.8 is admissible, so it cannot switch either. At this point the KM algorithm adjusts the labels of the vertices on the conflicting edges: subtract 0.1 from the left labels and add 0.1 to the right labels. The result is shown in the figure below.

    Matching again, left 3 now has one more admissible edge, because matching left 3 to right 2 now only requires a weight of at least 0.8 + 0, so left 3 is matched to right 2!

    Finally we match left 4. Its only candidate, right 3, has already been taken by left 2, so there is a conflict. After one round of label adjustment and re-matching, left 4 still fails; after two rounds its label has dropped to 0, and we give up on matching left 4.

    The KM algorithm ends here with three pairs successfully matched, and the correct match was found even though the prediction for left 3 was not very accurate. Once weights are introduced, the matching success rate clearly improves a great deal.
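
    For readers who want to see the labelling mechanism in code, here is a compact Python sketch of the KM procedure. It is a generic illustration rather than the exact example above: weights is a hypothetical square confidence matrix, and instead of a fixed step d = 0.1 it uses the textbook rule of taking d as the smallest label slack, which is what guarantees termination.

    import numpy as np

    def km_match(weights):
        """Maximum-weight perfect matching on a square weight matrix (rows = left, cols = right)."""
        n = weights.shape[0]
        lx = weights.max(axis=1).astype(float)    # left labels: max incident weight
        ly = np.zeros(n)                          # right labels: 0
        match_y = -np.ones(n, dtype=int)          # right j -> matched left i (-1 = free)

        def find_path(i, vis_x, vis_y):
            vis_x[i] = True
            for j in range(n):
                # only edges of the equality subgraph (weight == sum of labels) are admissible
                if not vis_y[j] and abs(lx[i] + ly[j] - weights[i, j]) < 1e-9:
                    vis_y[j] = True
                    if match_y[j] == -1 or find_path(match_y[j], vis_x, vis_y):
                        match_y[j] = i
                        return True
            return False

        for i in range(n):
            while True:
                vis_x = np.zeros(n, dtype=bool)
                vis_y = np.zeros(n, dtype=bool)
                if find_path(i, vis_x, vis_y):
                    break
                # no augmenting path: lower visited left labels, raise visited right labels by d
                d = min(lx[x] + ly[y] - weights[x, y]
                        for x in range(n) if vis_x[x]
                        for y in range(n) if not vis_y[y])
                lx[vis_x] -= d
                ly[vis_y] += d
        return {int(match_y[j]): j for j in range(n)}     # left i -> right j

    weights = np.array([[0.9, 0.6, 0.0],
                        [0.0, 0.8, 0.7],
                        [0.8, 0.0, 0.0]])
    print(km_match(weights))    # {2: 0, 0: 1, 1: 2}: total weight 0.8 + 0.6 + 0.7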

    This article packs in a fair amount of information and takes some effort to digest. I have tried to avoid heavy theoretical derivations and to explain both algorithms through concrete examples. Readers interested in the underlying theory will find plenty of material online, and with an intuitive picture in mind the theory becomes much easier to absorb.

    One last point worth noting: the maximum matching returned by the Hungarian algorithm is not unique; different initial matches or a different matching order can produce several distinct maximum matchings. This suggests an alternative to the KM algorithm: use the Hungarian algorithm to enumerate all maximum matchings, compare their total weights, and keep the heaviest one, which would also give a result closer to reality. However, the time complexity of this approach is high and grows rapidly as the number of targets increases, so it is not recommended in practice.
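
    For intuition about why that brute-force route does not scale, here is a sketch that simply enumerates every one-to-one assignment and keeps the heaviest; the weight matrix is the same made-up one used in the KM sketch above, and the running time grows as N!.

    from itertools import permutations
    import numpy as np

    def brute_force_best_matching(weights):
        """Enumerate all N! one-to-one assignments and return the heaviest."""
        n = weights.shape[0]
        best_perm, best_score = None, float("-inf")
        for perm in permutations(range(n)):             # every left -> right assignment
            score = sum(weights[i, j] for i, j in enumerate(perm))
            if score > best_score:
                best_perm, best_score = perm, score
        return best_perm, best_score

    weights = np.array([[0.9, 0.6, 0.0],
                        [0.0, 0.8, 0.7],
                        [0.8, 0.0, 0.0]])
    print(brute_force_best_matching(weights))   # ((1, 2, 0), ~2.1): same answer as KM, at N! cost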

  • Object detection and multi-object tracking implemented in C/C++ for VS2015; the multi-object tracking uses the Hungarian algorithm and Kalman filtering. Create a new project in VS2015, configure the OpenCV settings, and it can be used directly. Note: if you want better results, the detector can be replaced with a deep-learning-based one, ...
  • An implementation of a multi-object tracker using the Hungarian algorithm and a Kalman filter
  • Multitarget (multiple objects) tracker:

    New videos!

    Vehicles speed calculation with YOLO v4 (Thanks Sam Blake for great idea!)

    Traffic counting with YOLO v3

    First step to ADAS with YOLO v4

    Multitarget (multiple objects) tracker

    1. An object detector can be created with the function CreateDetector using different values of detectorType:

    1.1. Based on background subtraction: built-in ViBe (tracking::Motion_VIBE), SuBSENSE (tracking::Motion_SuBSENSE) and LOBSTER (tracking::Motion_LOBSTER); MOG2 (tracking::Motion_MOG2) from opencv; MOG (tracking::Motion_MOG), GMG (tracking::Motion_GMG) and CNT (tracking::Motion_CNT) from opencv_contrib. Foreground segmentation uses contours from OpenCV, with the result returned as cv::RotatedRect

    1.2. Haar face detector from OpenCV (tracking::Face_HAAR)

    1.3. HOG pedestrian detector from OpenCV (tracking::Pedestrian_HOG) and C4 pedestrian detector from sturkmen72 (tracking::Pedestrian_C4)

    1.4. MobileNet SSD detector (tracking::SSD_MobileNet) with opencv_dnn inference and pretrained models from chuanqi305

    1.5. YOLO detector (tracking::Yolo_OCV) with opencv_dnn inference and pretrained models from pjreddie

    1.6. YOLO detector (tracking::Yolo_Darknet) with darknet inference from AlexeyAB and pretrained models from pjreddie

    1.7. YOLO detector (tracking::Yolo_TensorRT) with NVidia TensorRT inference from enazoe and pretrained models from pjreddie

    1.8. You can use a custom detector with bounding or rotated rectangles as output.

    2. Matching, i.e. solving the assignment problem:

    2.1. Hungarian algorithm (tracking::MatchHungrian) with cubic time O(N^3), where N is the number of objects

    2.2. An algorithm based on weighted bipartite graphs (tracking::MatchBipart) from rdmpage with time O(M * N^2), where N is the number of objects and M is the number of connections between detections in the frame and tracked objects. It can be faster than the Hungarian algorithm

    2.3. Distances between detections and objects: Euclidean distance in pixels between centers (tracking::DistCenters), Euclidean distance in pixels between rectangles (tracking::DistRects), and Jaccard (IoU) distance from 0 to 1 (tracking::DistJaccard)
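
    (An aside that is not part of the original README: a minimal Python sketch of the Jaccard/IoU distance idea behind tracking::DistJaccard, assuming axis-aligned boxes given as (x, y, w, h) tuples; the boxes in the example are made up.)

    def iou_distance(box_a, box_b):
        """Jaccard (IoU) distance in [0, 1] between two (x, y, w, h) boxes: 0 = identical, 1 = disjoint."""
        ax, ay, aw, ah = box_a
        bx, by, bw, bh = box_b
        ix = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))   # overlap width
        iy = max(0.0, min(ay + ah, by + bh) - max(ay, by))   # overlap height
        inter = ix * iy
        union = aw * ah + bw * bh - inter
        return 1.0 - inter / union if union > 0 else 1.0

    print(iou_distance((0, 0, 10, 10), (5, 0, 10, 10)))      # 0.666... (IoU = 1/3)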

    3. Smoothing trajectories and predicting missed objects:

    3.1. Linear Kalman filter from OpenCV (tracking::KalmanLinear)

    3.2. Unscented Kalman filter from OpenCV (tracking::KalmanUnscented) with constant velocity or constant acceleration models

    3.3. The Kalman filter target can be coordinates only (tracking::FilterCenter) or coordinates and size (tracking::FilterRect)

    4. Advanced visual search for objects if they have not been detected:

    4.1. No search (tracking::TrackNone)

    4.2. Built-in DAT (tracking::TrackDAT) from foolwood, STAPLE (tracking::TrackSTAPLE) from xuduo35 or LDES (tracking::TrackLDES) from yfji; KCF (tracking::TrackKCF), MIL (tracking::TrackMIL), MedianFlow (tracking::TrackMedianFlow), GOTURN (tracking::TrackGOTURN), MOSSE (tracking::TrackMOSSE) or CSRT (tracking::TrackCSRT) from opencv_contrib

    With this option, tracking can work much slower but with higher accuracy.

    5. Pipeline

    get frame from capture device;

    decoding;

    objects detection (1);

    tracking (2-4);

    show result.

    This pipeline works well if all algorithms are fast and run in less than the time between two frames (40 ms for a 25 fps device), or if only one CPU core is available for everything (no parallelization).

    5.2. Pipeline with 2 threads:

    The 1st thread takes frame t and performs capture, decoding and object detection;

    The 2nd thread takes frame t-1 together with the results from the first thread and performs tracking and result presentation (this is the main thread).

    So we have a latency of one frame, but with two free CPU cores the throughput can roughly double.

    5.3. A fully asynchronous pipeline can be used if the object detector runs at a low fps and we have 2 free CPU cores. In this case we use 4 threads:

    The 1st (main) thread is not busy and is used for the GUI and result presentation;

    The 2nd thread performs capture and decoding and puts frames into a thread-safe queue;

    The 3rd thread runs object detection on the newest frame from the queue;

    The 4th thread runs object tracking: it waits for a frame with detections from the 3rd thread and applies advanced visual search (4) to the intermediate frames from the queue until it gets a frame with detections.

    This pipeline can be used with a slow but accurate DNN: objects in the intermediate frames are tracked in real time without additional latency.
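
    (Another aside not from the README: the project itself is C++, but the producer/consumer structure of this asynchronous pipeline can be sketched in a few lines of Python with thread-safe queues. Every name below is illustrative, the sleeps stand in for decoding and a slow detector, and the sketch deliberately ignores the "newest frame" and visual-search details.)

    import queue
    import threading
    import time

    frames = queue.Queue(maxsize=8)        # filled by the capture/decoding thread
    detections = queue.Queue()             # filled by the (slow) detector thread

    def capture():                         # 2nd thread: capture + decoding
        for frame_id in range(10):
            frames.put(frame_id)
            time.sleep(0.01)               # pretend ~100 fps capture
        frames.put(None)                   # end-of-stream marker

    def detect():                          # 3rd thread: slow detector
        while True:
            frame = frames.get()
            if frame is None:
                detections.put(None)
                break
            time.sleep(0.05)               # pretend a slow DNN
            detections.put((frame, "boxes for frame %d" % frame))

    def track():                           # 4th thread: tracker consumes detector output
        while True:
            item = detections.get()
            if item is None:
                break
            frame, boxes = item
            print("track update from frame", frame, "->", boxes)

    threads = [threading.Thread(target=f) for f in (capture, detect, track)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()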

    You can also read the Wiki in Russian.

    Demo Videos

    MobileNet SSD and tracking for low resolution and low quality videos from car DVR:

    Mouse tracking:

    Motion Detection and tracking:

    Multiple Faces tracking:

    Simple Abandoned detector:

    Tested Platforms

    Ubuntu Linux 18.04 with x86 processors

    Ubuntu Linux 18.04 with Nvidia Jetson Nano (YOLO + darknet on GPU works!)

    Windows 10 (x64 and x32 builds)

    Build

    Download project sources

    Install CMake

    Configure the project: in CMakeLists.txt set OpenCV_DIR (-DOpenCV_DIR=/path/to/opencv/build).

    If opencv_contrib is not installed, disable the options: USE_OCV_BGFG=OFF, USE_OCV_KCF=OFF and USE_OCV_UKF=OFF

    If you want to use the native darknet YOLO detector with CUDA + cuDNN, set BUILD_YOLO_LIB=ON (install the CUDA and cuDNN libraries from Nvidia first)

    If you want to use the YOLO detector with TensorRT, set BUILD_YOLO_TENSORRT=ON (install the TensorRT library from Nvidia first)

    To build the example with a low-fps detector (currently the native darknet YOLO detector) and a tracker that runs on every frame: BUILD_ASYNC_DETECTOR=ON

    To build the example with line-crossing detection (car counting): BUILD_CARS_COUNTING=ON

    Go to the build directory and run make

    Full build:

    git clone https://github.com/Smorodov/Multitarget-tracker.git

    cd Multitarget-tracker

    mkdir build

    cd build

    cmake .. -DUSE_OCV_BGFG=ON -DUSE_OCV_KCF=ON -DUSE_OCV_UKF=ON -DBUILD_YOLO_LIB=ON -DBUILD_YOLO_TENSORRT=ON -DBUILD_ASYNC_DETECTOR=ON -DBUILD_CARS_COUNTING=ON

    make -j

    How to run cmake on Windows for Visual Studio 15 2017 Win64: example. You need to add the directory containing cmake.exe to PATH and change the build parameters in cmake.bat

    Usage:


    ./MultitargetTracker [--example]= [--start_frame]= [--end_frame]= [--end_delay]= [--out]= [--show_logs]= [--gpu]= [--async]=

    ./MultitargetTracker ../data/atrium.avi -e=1 -o=../data/atrium_motion.avi

    Press:

    * 'm' key to change mode: play|pause. When the video is paused you can press any key to get the next frame.

    * Esc key to exit the video

    Params:

    1. Movie file, for example ../data/atrium.avi

    2. [Optional] Number of example: 0 - MouseTracking, 1 - MotionDetector, 2 - FaceDetector, 3 - PedestrianDetector, 4 - MobileNet SSD detector, 5 - YOLO OpenCV detector, 6 - Yolo Darknet detector, 7 - YOLO TensorRT Detector

    -e=0 or --example=1

    3. [Optional] Frame number to start a video from this position

    -sf=0 or --start_frame=1500

    4. [Optional] Play a video to this position (if 0 then played to the end of file)

    -ef=0 or --end_frame=200

    5. [Optional] Delay in milliseconds after video ending

    -ed=0 or --end_delay=1000

    6. [Optional] Name of result video file

    -o=out.avi or --out=result.mp4

    7. [Optional] Show Trackers logs in terminal

    -sl=1 or --show_logs=0

    8. [Optional] Use built-in OpenCL

    -g=1 or --gpu=0

    9. [Optional] Use 2 threads for processing pipeline

    -a=1 or --async=0

    More details here: How to run examples.

    Thirdparty libraries

    License

    Project citations

    Jeroen PROVOOST "Camera gebaseerde analyse van de verkeersstromen aan een kruispunt", 2014 ( https://iiw.kuleuven.be/onderzoek/eavise/mastertheses/provoost.pdf )

    Wenda Qin, Tian Zhang, Junhe Chen "Traffic Monitoring By Video: Vehicles Tracking and Vehicle Data Analysing", 2016 ( http://cs-people.bu.edu/wdqin/FinalProject/CS585%20FinalProjectReport.html )

    Ipek BARIS "CLASSIFICATION AND TRACKING OF VEHICLES WITH HYBRID CAMERA SYSTEMS", 2016 ( http://cvrg.iyte.edu.tr/publications/IpekBaris_MScThesis.pdf )

    Cheng-Ta Lee, Albert Y. Chen, Cheng-Yi Chang "In-building Coverage of Automated External Defibrillators Considering Pedestrian Flow", 2016 ( http://www.see.eng.osaka-u.ac.jp/seeit/icccbe2016/Proceedings/Full_Papers/092-132.pdf )

    Omid Noorshams "Automated systems to assess weights and activity in group-housed mice", 2017 ( https://pdfs.semanticscholar.org/e5ff/f04b4200c149fb39d56f171ba7056ab798d3.pdf )

    RADEK VOPÁLENSKÝ "DETECTION,TRACKING AND CLASSIFICATION OF VEHICLES", 2018 ( https://www.vutbr.cz/www_base/zav_prace_soubor_verejne.php?file_id=181063 )

    Márk Rátosi, Gyula Simon "Real-Time Localization and Tracking using Visible Light Communication", 2018 ( https://ieeexplore.ieee.org/abstract/document/8533800 )

    Thi Nha Ngo, Kung-Chin Wu, En-Cheng Yang, Ta-Te Lin "A real-time imaging system for multiple honey bee tracking and activity monitoring", 2019 ( https://www.sciencedirect.com/science/article/pii/S0168169919301498 )

  • ZihaoZhao: Introduction to Multi-Object Tracking (Part 3): The Hungarian Algorithm & the KM Algorithm
  • The assignment problem and the Hungarian method in SORT multi-object tracking

    There is a rather entertaining intuitive explanation of it that may help readers understand.

    1. Overview of the assignment problem

    There are n different tasks and n people, each of whom must complete exactly one task, and the time each person needs for each task differs. The question is how to assign the tasks so that the total time spent is minimal.

    In plain terms: in an n*n matrix, pick n elements, one from each row and each column, so that their sum is minimal, as shown in the figure:

    2. A property of the assignment problem

    The optimal solution has the following property: if the minimum element of each row (column) is subtracted from that row (column), the optimal assignment of the resulting reduced matrix is the same as that of the original matrix.
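
    A quick numeric check of this property with a made-up 3x3 cost matrix:

    import numpy as np

    cost = np.array([[4., 1., 3.],
                     [2., 0., 5.],
                     [3., 2., 2.]])

    reduced = cost - cost.min(axis=1, keepdims=True)   # subtract each row's minimum
    reduced -= reduced.min(axis=0, keepdims=True)      # then each column's minimum
    print(reduced)
    # [[2. 0. 2.]
    #  [1. 0. 5.]
    #  [0. 0. 0.]]
    # The optimal assignment is unchanged: row 0 -> col 1, row 1 -> col 0, row 2 -> col 2
    # (total cost 1 + 2 + 2 = 5 in the original matrix, 0 + 1 + 0 = 1 in the reduced one).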

    3. Flow chart of the Hungarian method

    4. Code implementation

    In multi-object tracking we obtain the IoU matrix between the bounding boxes of the previous frame and those of the current frame, and need to find the index pairs that maximize the total IoU; this is where the Hungarian algorithm is used.

    import numpy as np
    
    
    def linear_assignment(X):
        """
        使用匈牙利算法解决线性指派问题。
        """
    
        indices = _hungarian(X).tolist()
        indices.sort()
        # Re-force dtype to ints in case of empty list
        indices = np.array(indices, dtype=int)
        # Make sure the array is 2D with 2 columns.
        # This is needed when dealing with an empty list
        indices.shape = (-1, 2)
        return indices
    
    
    class _HungarianState(object):
        """
        执行匈牙利算法的状态.
    
        """
    
        def __init__(self, cost_matrix):
            cost_matrix = np.atleast_2d(cost_matrix)    # ensure a 2-D matrix

            # The algorithm does not work correctly when there are more rows
            # than columns, so transpose in that case.
            transposed = (cost_matrix.shape[1] < cost_matrix.shape[0])
            if transposed:
                self.C = (cost_matrix.T).copy()
            else:
                self.C = cost_matrix.copy()
            self.transposed = transposed  # remember whether the input was transposed

            # At this point n <= m.
            n, m = self.C.shape
            self.row_uncovered = np.ones(n, dtype=bool)
            self.col_uncovered = np.ones(m, dtype=bool)
            self.Z0_r = 0
            self.Z0_c = 0
            self.path = np.zeros((n + m, 2), dtype=int)
            self.marked = np.zeros((n, m), dtype=int)
    
        def _find_prime_in_row(self, row):
            """
            Find the first prime element in the specified row. Returns
            the column index, or -1 if no starred element was found.
            """
            col = np.argmax(self.marked[row] == 2)
            if self.marked[row, col] != 2:
                col = -1
            return col
    
        def _clear_covers(self):
            """Clear all covered matrix cells"""
            self.row_uncovered[:] = True
            self.col_uncovered[:] = True
    
    
    def _hungarian(cost_matrix):
        """
        匈牙利算法。
        解决经典指派问题的Munkres解答,返回最低成的索引对。
        """
    
        state = _HungarianState(cost_matrix)
    
        # If either dimension is 0, there is nothing to do.
        step = None if 0 in cost_matrix.shape else _step1
    
        while step is not None:
            step = step(state)
    
        # Look for the starred columns
        results = np.array(np.where(state.marked == 1)).T
    
        # We need to swap the columns because we originally
        # did a transpose on the input cost matrix.
        if state.transposed:
            results = results[:, ::-1]
    
        return results
    
    
    # Each step function returns the next step function to call.
    
    def _step1(state):
    
        # Step 1: subtract the smallest element of each row from that row.
        state.C -= state.C.min(axis=1)[:, np.newaxis]

        # Step 2: star a zero whose row and column contain no starred zero yet.
        for i, j in zip(*np.where(state.C == 0)):
            if state.col_uncovered[j] and state.row_uncovered[i]:
                state.marked[i, j] = 1
                state.col_uncovered[j] = False
                state.row_uncovered[i] = False
    
        state._clear_covers()
        return _step3
    
    
    def _step3(state):
        """
          如果有一行没对应到0,则转到 _step4
        """
        marked = (state.marked == 1)
        state.col_uncovered[np.any(marked, axis=0)] = False
    
        if marked.sum() < state.C.shape[0]:
            return _step4
    
    
    def _step4(state):
        """
        Find a noncovered zero and prime it. If there is no starred zero
        in the row containing this primed zero, Go to Step 5. Otherwise,
        cover this row and uncover the column containing the starred
        zero. Continue in this manner until there are no uncovered zeros
        left. Save the smallest uncovered value and Go to Step 6.
        """
        # We convert to int as numpy operations are faster on int
        C = (state.C == 0).astype(int)
        covered_C = C * state.row_uncovered[:, np.newaxis]
        covered_C *= state.col_uncovered.astype(dtype=int, copy=False)
        n = state.C.shape[0]
        m = state.C.shape[1]
        while True:
            # Find an uncovered zero
            row, col = np.unravel_index(np.argmax(covered_C), (n, m))
            if covered_C[row, col] == 0:
                return _step6
            else:
                state.marked[row, col] = 2
                # Find the first starred element in the row
                star_col = np.argmax(state.marked[row] == 1)
                if not state.marked[row, star_col] == 1:
                    # Could not find one
                    state.Z0_r = row
                    state.Z0_c = col
                    return _step5
                else:
                    col = star_col
                    state.row_uncovered[row] = False
                    state.col_uncovered[col] = True
                    covered_C[:, col] = C[:, col] * (
                        state.row_uncovered.astype(dtype=int, copy=False))
                    covered_C[row] = 0
    
    
    def _step5(state):
        """
        Construct a series of alternating primed and starred zeros as follows.
        Let Z0 represent the uncovered primed zero found in Step 4.
        Let Z1 denote the starred zero in the column of Z0 (if any).
        Let Z2 denote the primed zero in the row of Z1 (there will always be one).
        Continue until the series terminates at a primed zero that has no starred
        zero in its column. Unstar each starred zero of the series, star each
        primed zero of the series, erase all primes and uncover every line in the
        matrix. Return to Step 3
        """
        count = 0
        path = state.path
        path[count, 0] = state.Z0_r
        path[count, 1] = state.Z0_c
    
        while True:
            # Find the first starred element in the col defined by
            # the path.
            row = np.argmax(state.marked[:, path[count, 1]] == 1)
            if not state.marked[row, path[count, 1]] == 1:
                # Could not find one
                break
            else:
                count += 1
                path[count, 0] = row
                path[count, 1] = path[count - 1, 1]
    
            # Find the first prime element in the row defined by the
            # first path step
            col = np.argmax(state.marked[path[count, 0]] == 2)
            if state.marked[row, col] != 2:
                col = -1
            count += 1
            path[count, 0] = path[count - 1, 0]
            path[count, 1] = col
    
        # Convert paths
        for i in range(count + 1):
            if state.marked[path[i, 0], path[i, 1]] == 1:
                state.marked[path[i, 0], path[i, 1]] = 0
            else:
                state.marked[path[i, 0], path[i, 1]] = 1
    
        state._clear_covers()
        # Erase all prime markings
        state.marked[state.marked == 2] = 0
        return _step3
    
    
    def _step6(state):
        """
        Add the value found in Step 4 to every element of each covered row,
        and subtract it from every element of each uncovered column.
        Return to Step 4 without altering any stars, primes, or covered lines.
        """
        # the smallest uncovered value in the matrix
        if np.any(state.row_uncovered) and np.any(state.col_uncovered):
            minval = np.min(state.C[state.row_uncovered], axis=0)
            minval = np.min(minval[state.col_uncovered])
            state.C[np.logical_not(state.row_uncovered)] += minval
            state.C[:, state.col_uncovered] -= minval
        return _step4
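
    A small usage sketch for the code above; the 2x3 IoU matrix is made up. Because linear_assignment() minimizes total cost, the IoU matrix is negated so that maximizing overlap becomes minimizing cost. (On current SciPy versions, scipy.optimize.linear_sum_assignment solves the same problem.)

    # Hypothetical IoU matrix: rows = tracked boxes from the previous frame,
    # columns = detected boxes in the current frame.
    iou = np.array([[0.8, 0.1, 0.0],
                    [0.2, 0.6, 0.3]])

    matches = linear_assignment(-iou)    # negate: maximize IoU == minimize cost
    print(matches)
    # [[0 0]
    #  [1 1]]  -> track 0 <-> detection 0, track 1 <-> detection 1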
    
  • Hungarian algorithm (KM algorithm): associates the tracking boxes (tracks) from the previous frame with the detection boxes (detections) of the current frame, computing the cost matrix from appearance information, Mahalanobis distance, or IoU. Kalman filter: based on the sensor measurements (in object tracking, the object detector) and the tracker's ...
  • Latest changes: the new Apache 2.0 license replaces GPLv3; a new "batch size" parameter was added for detection on several consecutive frames at once, which can improve ... on a powerful GPU. MOG (tracking::Motion_MOG), GMG (tracking::Motion_GMG) and CNT (tracking::Motion_CNT
  • matlab trajectory-tracking code: fast partial tracking. Matlab code that fully implements the partial tracking method published in the following paper: J. Neri and P. Depalle, "", in Proceedings of the 21st International Conference on Digital Audio Effects (DAFx-18), Aveiro, Portugal, pp. 326-333, September 2018. ...
  • The Hungarian algorithm

    2020-05-08 00:30:40
    日萌社 Artificial Intelligence (AI): Keras PyTorch MXNet TensorFlow ... The Hungarian Algorithm and the KM Algorithm (Kuhn-Munkres Algorithm) are used to solve the data-association problem in multi-object tracking; both the Hungarian algorithm and the KM algorithm are designed to solve bi...
  • Steps of the Hungarian algorithm; recommended blog posts on the Hungarian algorithm; steps of the KM algorithm; the labels (vertex labeling) introduced by the KM algorithm; a detailed KM walkthrough; recommended blog posts on the KM algorithm; greedy algorithm; English annotations; maximum matching and perfect matching of an unweighted bipartite graph...
  • A playful write-up of the Hungarian algorithm

    2018-01-13 17:21:55
    The author describes the Hungarian algorithm in a vivid and entertaining way; well worth reading. Original: http://blog.csdn.net/dark_scope/article/details/8880547 [The algorithms in textbooks are often explained in a very complicated way; a friend and I planned to describe the algorithms with some simple, plain examples...]
  • The basic principle of the Hungarian algorithm and a Python implementation

    2019-07-09 15:59:01
    1. Problem description: N people are assigned N tasks... Before explaining how the Hungarian algorithm solves the assignment problem, let's look at a few concrete examples. Take 3 workers and 3 tasks; the figure below shows the pay table and the cost matrix derived from it. Using the simplest method (...
  • Pedestrian detection and tracking (pedestrain_track)

    2018-11-01 15:43:49
    1. Foreground segmentation (foreground_segmentation) 2. Kalman filter (kalman_filter) ... 3. Hungarian tracking (hungarian). Foreground segmentation: several images with the same background are collected and the grayscale value at each pixel is taken ...
  • Solving the weighted bipartite-graph problem with the Hungarian algorithm

    2019-04-14 18:48:32
    The Hungarian Algorithm for Weighted Bipartite ... Maximum matching, perfect matching and the Hungarian algorithm for bipartite graphs; The Optimal Assignment Problem; assignment-problem-and-hungarian-algorithm; The Hungarian Algorithm for Weight...
  • Notes on the SORT multi-object tracking algorithm

    2019-04-21 15:20:46
    Detections are associated with targets via the Hungarian algorithm; a probationary period is used to weed out false detections; Faster R-CNN is used, showing that with good detections, tracking can be simple. Technical approach: the method takes detection as the key component, propagates target states into future frames, associates current detections with existing targets, and...
  • Bipartite matching: the Hungarian algorithm

    2019-05-31 16:57:03
    The augmenting-path matching algorithm was described by Edmonds in 1965; the name "Hungarian algorithm" honors the earlier Hungarian mathematicians whose work it builds on. It is based on the idea behind the sufficiency proof of Hall's theorem and is the most common algorithm for bipartite matching: its core is the search for augmenting paths, and it computes the maximum matching of a bipartite graph via augmenting paths.
  • 1. The Hungarian algorithm. Assignment Problem: suppose there are N people and N tasks, any task can be assigned to any person, and the cost of each person performing each task is known and differs; how should the tasks be assigned so that the total cost is minimal? The Hungarian algorithm (also called...
  • The Hungarian algorithm: an illustrated walkthrough

    2012-08-17 13:53:00
    The Hungarian algorithm: an illustrated walkthrough (reposted, 2007-08-13 01:48:06). The following algorithm can extend any matching M of G to a maximum matching. It was proposed by Edmonds in 1965 and is known as the Hungarian algorithm; its steps are as follows: ...
  • MATLAB code for the Hungarian algorithm

    2017-09-16 20:11:43
    The MATLAB code for the Hungarian algorithm is as follows (with a worked example):
  • The Hungarian algorithm is an assignment algorithm for maximum matching on bipartite graphs and is commonly used in multi-object tracking. An engineering-style version of its steps: Step 1. Arrange the elements into a matrix, with the columns being the objects to be assigned and the rows the matching weights. Step 2. Pad the matrix into an M*M square matrix, the padded elements being the original...
  • HDU3605: Hungarian matching

    2016-11-06 15:11:34
    Various loop optimizations plus a predecessor record; an application of Hungarian matching. #include #include #include using namespace std; int n, m; int line[100002][12]; bool used[12]; int girl[12]; int ok[12][100002]; int cnt[12]; ...
  • The Hungarian matching algorithm can be used for object tracking: from the IoU between the box predicted by the prediction algorithm and the box of the previous frame, we can decide whether a detection is the same target as in the previous frame. It is also a commonly used bipartite matching algorithm. Concept: a matching of a graph G is a set of edges, no two of which share an endpoint (so it contains no cycles). ...
  • This project was part of my internship. It contains an SSD detector trained to detect 20 object classes but used only for people, in order to generate tracks; it performs Kalman filtering for each person, ID generation for each person, and association via the Hungarian method. Usage: python2 main.py --prototxt ...
  • Kalman was a Hungarian-born mathematician; the Kalman filter originates from his doctoral dissertation and from his 1960 paper "A New Approach to Linear Filtering and Prediction Problems".
  • Contents: the concepts of object tracking, single-object tracking and multi-object tracking; Euclidean distance, Mahalanobis distance and cosine distance; the principle of the SORT algorithm; the Hungarian matching in the SORT algorithm; the Hungarian algorithm for the assignment problem. Object tracking, single-object tracking and multi-object tracking: object...
