  • UCF101 is provided by the University of Central Florida. UCF101_TrainTestSplits-DetectionTask_datasets.zip UCF101_TrainTestSplits-RecognitionTask_datasets.zip
  • unzip ucf101

    2020-08-28 13:53:39
    wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_tvl1_flow.zip.001
    wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_tvl1_flow.zip.002
    wget http://ftp.tugraz.at/pub/feichtenhofer/tsfusion/data/ucf101_tvl1_flow.zip.003
    cat ucf101_tvl1_flow.zip* > ucf101_tvl1_flow.zip
    unzip ucf101_tvl1_flow.zip


    Batch unzip command:

    find . -name '*.zip' -exec unzip {} \;
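    The same batch unzip can also be done from Python; a minimal sketch (assuming the split parts have already been concatenated into complete .zip files as above):

    import zipfile
    from pathlib import Path

    # Walk the current directory tree and extract every .zip next to where it sits.
    for zip_path in Path('.').rglob('*.zip'):
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(zip_path.parent)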
  • A trimmed-down version of the UCF101 dataset

    2021-03-26 10:42:49
    A trimmed-down version of the UCF101 dataset
  • result in ucf101

    2020-12-02 11:16:54
    I trained the model on ucf101; after 70 epochs, the top-1 accuracy is 73.3% and the top-5 accuracy is 95.7% on the training data. Is this a normal result? (This question comes from the open-source project r1ch88/...)
  • ucf101 train from scratch

    2020-12-26 15:27:13
    Using train_c3d_ucf101.py to train on the ucf101 dataset from scratch, I can only get ~30% accuracy, which is much lower than the reported ~45%. Can anyone get higher accuracy? ...
  • Processing the UCF101 dataset

    2021-02-24 18:33:07

    Processing the UCF101 dataset


    When reproducing action-recognition algorithms, a dataset is often needed; ucf101 is one of them.

    The ucf101 dataset used in an earlier reproduction had the original videos already converted into frame images, with the following directory layout:
    UCF101/ApplyEyeMakeup/v_ApplyEyeMakeup_g01_c01/img_00001.jpg
    (At that point the dataset information is read from two text files, train.txt and test.txt. Each file has three columns: the first is the path, the second is the number of frames in the video, and the third is the video's class.)
    Contents of train.txt:
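    For illustration only, a few hypothetical lines in that format (the actual paths, frame counts, and class values come from the dataset itself; the script below splits each line on commas):

    ApplyEyeMakeup/v_ApplyEyeMakeup_g01_c01,165,0
    ApplyEyeMakeup/v_ApplyEyeMakeup_g01_c02,148,0
    Archery/v_Archery_g01_c01,121,1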

    For this reproduction, the whole dataset has to be split into two parts, a training set and a test set, so the directory needs to be rearranged into the following form:
    UCF101/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g01_c01/img_00001.jpg
    UCF101/test/ApplyEyeMakeup/v_ApplyEyeMakeup_g01_c02/img_00001.jpg

    Copying the folders by hand is too tedious, so a Python script is used instead. The idea is to read the train.txt and test.txt files mentioned above and move the corresponding folders under the train and test directories. The code is as follows:

    #coding:utf-8
    import os,shutil
    # Move all files under one folder (recursively) into another folder.
    def move_file(origin_path,moved_path):
        dir_files=os.listdir(origin_path)   # all entries in this folder
        for file in dir_files:
            file_path = os.path.join(origin_path,file) # build the full path
            if os.path.isfile(file_path):  # plain file: move it if it is a frame image
                if file.endswith(".jpg"):
                    if os.path.exists(os.path.join(moved_path,file)):
                        print("Duplicate file, skipping")
                        continue
                    else:
                        shutil.move(file_path, moved_path)
            if os.path.isdir(file_path): # directory: recurse into it
                # use a separate variable here; reassigning moved_path itself
                # would corrupt the destination for the remaining entries
                sub_moved_path=os.path.join(moved_path,file)
                if not os.path.exists(sub_moved_path):
                    os.mkdir(sub_moved_path)
                move_file(file_path,sub_moved_path)
        print("Files moved successfully!")
    
    list_file='yourpath/ucf101test02.txt' # path to the train/test .txt list file
    root='yourpath/UCF101/origin' # directory of the original dataset
    destinate='yourpath/UCF101/test'  # target directory
    tmp = [x.strip().split(',') for x in open(list_file)] # split each line of the .txt on commas
    
    class_ = sorted(os.listdir(root))
    
    for c in class_:
        class_path = os.path.join(destinate, c)
        # Create the class-level folders under the target directory first,
        # e.g. /mnt/data/public/UCF101/test/ApplyEyeMakeup, because mkdir
        # cannot create a deeper directory whose parent does not exist yet.
        if not os.path.exists(class_path):
            os.mkdir(class_path)
    
    for i in range(0,len(tmp)):
        root_path=os.path.join(root,tmp[i][0])   # source path
        dest_path=os.path.join(destinate,tmp[i][0])  # target path
        if not os.path.exists(dest_path):  # create the target path if it does not exist
            os.mkdir(dest_path)
        move_file(root_path,dest_path)  # move files from the source into the target
    

    With this script, the original UCF101 dataset is split into a training set and a test set.
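    As a quick sanity check after moving everything (a minimal sketch; the paths are placeholders following the layout above):

    import os

    # Count the class folders and the clip folders under each split.
    for split in ('train', 'test'):
        split_dir = os.path.join('yourpath/UCF101', split)
        classes = sorted(os.listdir(split_dir))
        clips = sum(len(os.listdir(os.path.join(split_dir, c))) for c in classes)
        print(split, len(classes), 'classes,', clips, 'clips')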

  • train on UCF101 split 1

    2021-01-06 21:31:16
    Hi, thank you for your code. I tried to use it to train on UCF101 from scratch, but the following problem occurs: RuntimeError: size mismatch, m1: [128 x 2048], m2: [512 x 101] at /pytorch/...
  • [data request] UCF101

    2021-01-06 17:51:57
    UCF101 is an action recognition data set of realistic action videos, collected from YouTube, having 101 action categories. This data set is an extension of UCF50 data set which has 50 action ...
  • Evaluate on ucf101 dataset

    2020-12-08 19:38:33
    Hi, I am trying to reproduce the results from your paper on the ucf101 dataset. I got the trained model via get_models.sh. The rgb stream works well, about 82.7%, as your answer ...
  • I have a question about your results on UCF101 split 1. I've evaluated your pretrained weight "resnext-101-kinetics-ucf101_split1.pth" on UCF101 split 1 and got an accuracy of ~85.99....
  • Hi mmaction2, I have met this problem several times when training on the UCF101 dataset with the default config file: $ CUDA_VISIBLE_DEVICES=1 python tools/train.py configs/recognition/tsn/tsn_...
  • tensorflow-C3D-ucf101 network

    2018-10-17 11:33:33
    A 3D-CNN action recognition network architecture with a softmax layer, for the ucf101 dataset.
  • The UCF101 action recognition dataset

    2020-06-12 16:18:33

    UCF101 is an action recognition dataset of realistic action videos with 101 action categories, collected from YouTube.

    UCF101 has 13,320 videos from 101 action categories. It offers the largest diversity in terms of actions, with large variations in camera motion, object appearance and pose, object scale, viewpoint, cluttered background, illumination conditions and so on, which makes it a challenging dataset to date. Since most available action recognition datasets are not realistic and are staged by actors, UCF101 aims to encourage further research on action recognition by learning and exploring new, realistic action categories.

    The videos in the 101 action categories are grouped into 25 groups, where each group can contain 4-7 videos of an action. Videos from the same group may share some common features, such as a similar background, a similar viewpoint, and so on.

    The action categories can be divided into five types: 1) human-object interaction, 2) body motion only, 3) human-human interaction, 4) playing musical instruments, and 5) sports.

    The specific categories:
    The action categories for UCF101 data set are: Apply Eye Makeup, Apply Lipstick, Archery, Baby Crawling, Balance Beam, Band Marching, Baseball Pitch, Basketball Shooting, Basketball Dunk, Bench Press, Biking, Billiards Shot, Blow Dry Hair, Blowing Candles, Body Weight Squats, Bowling, Boxing Punching Bag, Boxing Speed Bag, Breaststroke, Brushing Teeth, Clean and Jerk, Cliff Diving, Cricket Bowling, Cricket Shot, Cutting In Kitchen, Diving, Drumming, Fencing, Field Hockey Penalty, Floor Gymnastics, Frisbee Catch, Front Crawl, Golf Swing, Haircut, Hammer Throw, Hammering, Handstand Pushups, Handstand Walking, Head Massage, High Jump, Horse Race, Horse Riding, Hula Hoop, Ice Dancing, Javelin Throw, Juggling Balls, Jump Rope, Jumping Jack, Kayaking, Knitting, Long Jump, Lunges, Military Parade, Mixing Batter, Mopping Floor, Nun chucks, Parallel Bars, Pizza Tossing, Playing Guitar, Playing Piano, Playing Tabla, Playing Violin, Playing Cello, Playing Daf, Playing Dhol, Playing Flute, Playing Sitar, Pole Vault, Pommel Horse, Pull Ups, Punch, Push Ups, Rafting, Rock Climbing Indoor, Rope Climbing, Rowing, Salsa Spins, Shaving Beard, Shotput, Skate Boarding, Skiing, Skijet, Sky Diving, Soccer Juggling, Soccer Penalty, Still Rings, Sumo Wrestling, Surfing, Swing, Table Tennis Shot, Tai Chi, Tennis Swing, Throw Discus, Trampoline Jumping, Typing, Uneven Bars, Volleyball Spiking, Walking with a dog, Wall Pushups, Writing On Board, Yo Yo.

    Download links:

    UCF101 data set: https://www.crcv.ucf.edu/data/UCF101/UCF101.rar

    Revised annotations
    http://www.thumos.info/download.html

    Train/Test Splits for Action Recognition
    https://www.crcv.ucf.edu/data/UCF101/UCF101TrainTestSplits-RecognitionTask.zip

    Train/Test Splits for Action Detection
    https://www.crcv.ucf.edu/data/UCF101/UCF101TrainTestSplits-DetectionTask.zip

    STIP Features for UCF101 data
    part1:
    https://www.crcv.ucf.edu/data/UCF101/UCF101_STIP_Part1.rar
    part2:
    https://www.crcv.ucf.edu/data/UCF101/UCF101_STIP_Part2.rar
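    The split archives can also be fetched from a script; a minimal sketch using only the standard library (assuming the URLs above are directly reachable from your machine):

    import urllib.request

    # Download the two train/test split archives listed above into the current directory.
    urls = [
        'https://www.crcv.ucf.edu/data/UCF101/UCF101TrainTestSplits-RecognitionTask.zip',
        'https://www.crcv.ucf.edu/data/UCF101/UCF101TrainTestSplits-DetectionTask.zip',
    ]
    for url in urls:
        filename = url.rsplit('/', 1)[-1]
        print('downloading', filename)
        urllib.request.urlretrieve(url, filename)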

  • and finetune it on UCF101-split1 using the command below: python main.py ucf101 RGB \ --arch resnet50 --num_segments 8 \ --gd 20 --lr 0.001 --lr_steps 10 20 --epochs 25 \ --batch-...
  • Inception Score on UCF101

    2020-12-28 22:33:36
    I am trying to reproduce the Inception Score results on the UCF101 dataset. Could you please point out which model and parameters (number of generated videos, splits) were used for the stated result? Did ...
  • Data: UCF101 split1 testlist01.txt Code: python evaluate_video_ucf101_split1.py --task-name ./../exps/models/MFNet3D_UCF-101_Split-1_96.3.pth I test the model on that data, but get a lower top-1 accuracy:...
  • A problem when splitting ucf101

    2020-12-09 14:28:10
    Traceback (most recent call last): File "gen_dataset_lists.py"... Has anyone else hit problems when splitting the ucf101 dataset? I have looked at this for two days and still cannot find the cause. (This question comes from the open-source project zhang-can/ECO-pytorch)
  • About finetuning on UCF101

    2020-12-25 20:19:36
    Recently I have also tried finetuning the P3D199 Kinetics models on UCF101, but only got about 85% accuracy on the action recognition task. I just wonder, do you have any other tricks, or did I make some ...
  • Hello, can you tell me how to achieve the accuracy of 94.5% on UCF101 using Resnext101? I use your code, the same network architecture (Resnext101) and your pretrained parameters...
  • Hi, I want to do some experiments on the UCF101 dataset. I have finished training stage 1 of the rgb stream and got about 0.78 mean accuracy. So: 1. is this a reasonable result? And when ...
  • Fail to finetune on ucf101

    2020-12-27 06:54:36
    I'm trying to finetune the pretrained model on ucf101 but I only get 78% accuracy. I wonder, have you tried similar settings? (This question comes from the open-source project piergiaj/pytorch-i3d)
  • Processing and loading the UCF101 video dataset

    2019-12-17 14:25:53

    This article is a record of how the UCF101 video dataset was processed and loaded; the same approach also applies to other video datasets.

     

    2 The UCF101 dataset

    A brief introduction to the UCF101 dataset:

    • Contains 13,320 short videos
    • Video source: YouTube
    • Video categories: 101
    • Mainly covers these five broad types of actions: human-object interaction, body motion only, human-human interaction, playing musical instruments, and various sports

    ......

    3 Implementation approach

    1 Dataset preparation

    1. Download the UCF101 dataset UCF101.zip and unzip it;
    2. Download the annotation files and the train/test list files, The Train/Test Splits for Action Recognition on UCF101 data set:
      They contain:


    Both of the above can be downloaded from the official UCF dataset website.

     

    2 Preprocessing

    • Reference code: two-stream-action-recognition
    • Preprocessing mainly consists of two steps: decomposing the videos into frames, and counting the number of frames in each video.
    • The code for both steps is provided in the reference above; just download the video_jpg_ucf101_hmdb51.py and n_frames_ucf101_hmdb51.py sources.

    Here is how to use them and what they produce (a rough sketch of the same two steps follows the list below):

    1. Decompose every video in UCF101 into frame images, keeping the directory structure unchanged.
      python utils_fyq/video_jpg_ucf101_hmdb51.py /home/hl/Desktop/lovelyqian/CV_Learning/UCF101 /home/hl/Desktop/lovelyqian/CV_Learning/UCF101_jpg
      Every video is decomposed frame by frame into images with the directory structure preserved; the number of frames differs per video (around 150), and each image is 320*240.

    2. Count the number of frames (images) of each video.
      python utils_fyq/n_frames_ucf101_hmdb51.py /home/hl/Desktop/lovelyqian/CV_Learning/UCF101_jpg
      The result is that every video's frame folder contains an n_frames file recording the number of frames of that video.
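    A rough sketch of what those two scripts do (this is not the original code; it assumes ffmpeg is installed and the UCF101/&lt;class&gt;/&lt;video&gt;.avi layout described above):

    import os
    import subprocess

    src_root, dst_root = 'UCF101', 'UCF101_jpg'

    for class_name in sorted(os.listdir(src_root)):
        for video in sorted(os.listdir(os.path.join(src_root, class_name))):
            video_path = os.path.join(src_root, class_name, video)
            dst_dir = os.path.join(dst_root, class_name, os.path.splitext(video)[0])
            os.makedirs(dst_dir, exist_ok=True)
            # step 1: dump every frame as image_00001.jpg, image_00002.jpg, ...
            subprocess.call(['ffmpeg', '-i', video_path,
                             os.path.join(dst_dir, 'image_%05d.jpg')])
            # step 2: record the frame count in an n_frames file
            n_frames = len([f for f in os.listdir(dst_dir) if f.endswith('.jpg')])
            with open(os.path.join(dst_dir, 'n_frames'), 'w') as f:
                f.write(str(n_frames))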

    3 Subsequent processing

    A UCF101 class is defined, with these goals (a small shape-check sketch follows the list):

    • train_x: [batch_size,16,3,160,160]
    • test_x : [batch_size,16,3,160,160]
    • 16 consecutive frames are sampled from a random position in each video
    • Images have 3 channels and are resized to (160,160)
    • There are 101 classes in total, so label values run from 0 to 100
    • train_y: [batch_size] returns the corresponding label values;
    • test_y_label: [batch_size] returns the label derived from the video name, used for comparison against predictions.
    • classNames[101]: the index is the label and the value is the class name, e.g. classNames[0]='ApplyEyeMakeup'
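    For context, the permuted batch layout matches what a PyTorch 3D convolution expects; a minimal sketch (the Conv3d here is only an illustrative layer, not part of the original model):

    import torch
    import torch.nn as nn

    # Batches come back as [batch, 16, 3, 160, 160] and are permuted by the class
    # below to [batch, 3, 16, 160, 160], i.e. (N, C, T, H, W) for 3D convolutions.
    x = torch.randn(8, 16, 3, 160, 160).permute(0, 2, 1, 3, 4)
    conv = nn.Conv3d(in_channels=3, out_channels=64, kernel_size=(3, 3, 3), padding=1)
    print(conv(x).shape)   # torch.Size([8, 64, 16, 160, 160])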

     

    ......

    Usage

        myUCF101=UCF101()
    
       # get classNames
        className=myUCF101.get_className()
    
        # train
        batch_num=myUCF101.set_mode('train')
        for batch_index in range(batch_num):
            train_x,train_y=myUCF101[batch_index]
            print (train_x,train_y)
            print ("train batch:",batch_index)
        
        #TEST
        batch_num=myUCF101.set_mode('test')
        for batch_index in range(batch_num):
            test_x,test_y_label=myUCF101[batch_index]
            print (test_x,test_y_label)
            print ("test batch: " ,batch_index)

     

     

     

    Full code

    
    from PIL import Image
    import random
    from skimage import io, color, exposure
    from skimage.transform import resize
    import os
    import numpy as np
    import pandas as pd
    import torch
    
    
    class UCF101:
        def __init__(self,mode='train'):
            self.videos_path='/home/hl/Desktop/lovelyqian/CV_Learning/UCF101_jpg'
            self.csv_dir_path='/home/hl/Desktop/lovelyqian/CV_Learning/UCF101_TrainTestlist/'
            self.label_csv_path = os.path.join(self.csv_dir_path, 'classInd.txt')
            # self.batch_size=128
            self.batch_size=8
            self.mode= mode
    
            self.get_train()
            self.get_test()
    
            
        def get_className(self):
            data = pd.read_csv(self.label_csv_path, delimiter=' ', header=None)
            labels = []
            # labels.append("0")
            for i in range(data.shape[0]):
                labels.append(data.iloc[i, 1])  # .ix was removed in newer pandas; use .iloc
            return labels
    
        def get_train(self):
            train_x_path = []
            train_y = []
            for index in range(1,4):
                tmp_path='trainlist0'+str(index)+'.txt'
                train_csv_path = os.path.join(self.csv_dir_path, tmp_path)
                # print (train_csv_path)
    
                data = pd.read_csv(train_csv_path, delimiter=' ', header=None)
                for i in range(data.shape[0]):
                    train_x_path.append(data.iloc[i,0])
                    # labels in trainlist0x.txt are 1-based; shift them to 0-100
                    train_y.append(data.iloc[i,1]-1)
        
            self.train_num=len(train_x_path)
            self.train_x_path=train_x_path
            self.train_y=train_y
            return train_x_path,train_y
    
    
        def get_test(self):
            test_x_path=[]
            test_y_label=[]
            for index in range(1,4):
                temp_path='testlist0'+str(index)+'.txt'
                test_csv_path=os.path.join(self.csv_dir_path,temp_path)
                # print (test_csv_path)
    
                data=pd.read_csv(test_csv_path,delimiter=' ',header=None)
                for i in range(data.shape[0]):
                    test_x_path.append(data.iloc[i,0])
                    label=self.get_label(data.iloc[i,0])
                    test_y_label.append(label)
            self.test_num=len(test_x_path)
            self.test_x_path=test_x_path
            self.test_y_label=test_y_label
            return test_x_path,test_y_label
    
    
        def get_label(self,video_path):
            slash_rows = video_path.split('/')
            class_name = slash_rows[0]
            return class_name
        
    
        def get_single_image(self,image_path):
            image=resize(io.imread(image_path),output_shape=(160,160),preserve_range= True)    #240,320,3--160,160,3
            # io.imshow(image.astype(np.uint8))
            # io.show()
            image =image.transpose(2, 0, 1)              #3,160,160
            return torch.from_numpy(image)               #range[0,255]
    
        def get_single_video_x(self,train_x_path):
            slash_rows=train_x_path.split('.')
            dir_name=slash_rows[0]
            video_jpgs_path=os.path.join(self.videos_path,dir_name)
            ##get the random 16 frame
            data=pd.read_csv(os.path.join(video_jpgs_path,'n_frames'),delimiter=' ',header=None)
            frame_count=data[0][0]
            train_x=torch.Tensor(16,3,160,160)
    
            # random start index so that 16 consecutive frames fit (assumes frame_count > 17)
            image_start=random.randint(1,frame_count-17)
            image_id=image_start
            for i in range(16):
                s="%05d" % image_id
                image_name='image_'+s+'.jpg'
                image_path=os.path.join(video_jpgs_path,image_name)
                single_image=self.get_single_image(image_path)
                train_x[i,:,:,:]=single_image
                image_id+=1
            return train_x
    
        
        def get_minibatches_index(self, shuffle=True):
            """
            :param n: len of data
            :param minibatch_size: minibatch size of data
            :param shuffle: shuffle the data
            :return: len of minibatches and minibatches
            """
            if self.mode=='train':
                n=self.train_num
            elif self.mode=='test':
                n=self.test_num
    
            minibatch_size=self.batch_size
            
            index_list = np.arange(n, dtype="int32")
     
            # shuffle
            if shuffle:
                random.shuffle(index_list)
     
            # segment
            minibatches = []
            minibatch_start = 0
            for i in range(n // minibatch_size):
                minibatches.append(index_list[minibatch_start:minibatch_start + minibatch_size])
                minibatch_start += minibatch_size
     
            # processing the last batch
            if (minibatch_start != n):
                minibatches.append(index_list[minibatch_start:])
            
            if self.mode=='train':
                self.minibatches_train=minibatches
            elif self.mode=='test':
                self.minibatches_test=minibatches
            return 
    
    
        
        def __getitem__(self, index):
            if self.mode=='train':
                batches=self.minibatches_train[index]
                N=batches.shape[0]
                train_x=torch.Tensor(N,16,3,160,160)
                train_y=torch.Tensor(N)
                for i in range (N):
                    tmp_index=batches[i]
                    tmp_video_path=self.train_x_path[tmp_index]
                    tmp_train_x= self.get_single_video_x(tmp_video_path)
                    tmp_train_y=self.train_y[tmp_index]
                    train_x[i,:,:,:]=tmp_train_x
                    train_y[i]=tmp_train_y
                train_x=train_x.permute(0,2,1,3,4)
                return train_x,train_y
            elif self.mode=='test':
                batches=self.minibatches_test[index]
                N=batches.shape[0]
                test_x=torch.Tensor(N,16,3,160,160)
                test_y_label=[]
                for i in range (N):
                    tmp_index=batches[i]
                    tmp_video_path=self.test_x_path[tmp_index]
                    tmp_test_x= self.get_single_video_x(tmp_video_path)
                    tmp_test_y=self.test_y_label[tmp_index]
                    test_x[i,:,:,:]=tmp_test_x
                    test_y_label.append(tmp_test_y)
                test_x=test_x.permute(0,2,1,3,4)
                return test_x,test_y_label
        
        def set_mode(self,mode):
            self.mode=mode
            if mode=='train':
                self.get_minibatches_index()
                return self.train_num // self.batch_size
            elif mode=='test':
                self.get_minibatches_index()
                return self.test_num // self.batch_size
    
    
    
    
    
    ##  usage 
    
    if __name__=="__main__":
        myUCF101=UCF101()
       
        className=myUCF101.get_className()
    
    
        
        # train
        batch_num=myUCF101.set_mode('train')
        for batch_index in range(batch_num):
            train_x,train_y=myUCF101[batch_index]
            print (train_x,train_y)
            print ("train batch:",batch_index)
        
        #TEST
        batch_num=myUCF101.set_mode('test')
        for batch_index in range(batch_num):
            test_x,test_y_label=myUCF101[batch_index]
            print (test_x,test_y_label)
            print ("test batch: " ,batch_index)
    

     

     

     

     

  • We try to train the i3d model on ucf101 from scratch, but it converges much more slowly, with a final validation accuracy around 60%. Can you offer some suggestions on training the i3d model without imagenet ...
  • UCF101 dataset train error

    2020-12-09 11:33:18
    The HMDB51 dataset worked fine; to train on the UCF101 dataset I just changed the train part: print("Preprocessing train data ...") train_data = globals(); opt.split, train = 0, opt &...
  • I downloaded the ResNet-101 network pretrained on Kinetics and fine-tuned it on UCF101 following the example script. However, I can only get 82.5 by averaging the three splits. In the paper, the...
  • How to use eval_ucf101.py

    2021-01-06 21:30:36
    Hi, I trained my model and now I want to evaluate its performance on ucf101, so I thought of using the "eval_ucf101.py" file, but I couldn't understand how to use "...
  • Error when finetune UCF101

    2020-11-22 09:53:51
    finetune_ucf101.sh". I follow the tutorial "https://github.com/facebookresearch/R2Plus1D/blob/master/tutorials/hmdb51_finetune.md". The errors follow: Traceback (most recent call last): ...
  • In the C3D User Guide you provide the C3D model fine-tuned on UCF101 at https://www.dropbox.com/s/mkc9q7g4wnqnmcv/c3d_ucf101_finetune_whole_iter_20000, but I can't open this page. Is this website...
  • Has anybody tried to train and evaluate the C3D model on split 01 of UCF101? I gave it a try, where the validation set is the same as the test set, and got the following results: ...
  • Cuda out of memory (UCF101)

    2020-11-26 05:09:54
    Hi, I am getting the following error while training on UCF101. Below are the details ... can someone guide me? ...
