
    A detailed look at the nn.Embedding error "index out of range in self"

    Error details

    ---------------------------------------------------------------------------
    IndexError                                Traceback (most recent call last)
    <ipython-input-383-d67388d2e4cc> in <module>
          1 output_emb = myEmbed(total_words = total_words, embedding_dim = 8)
          2 word_vector = torch.tensor(word_vector, dtype=torch.long).clone().detach()
    ----> 3 output = output_emb(word_vector)
          4 print(output)
          5 # word_vector
    
    /opt/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
        720             result = self._slow_forward(*input, **kwargs)
        721         else:
    --> 722             result = self.forward(*input, **kwargs)
        723         for hook in itertools.chain(
        724                 _global_forward_hooks.values(),
    
    <ipython-input-382-10f2ec94e0ae> in forward(self, sentences_idx)
          4         self.embed = nn.Embedding(total_words,embedding_dim)
          5     def forward(self,sentences_idx):
    ----> 6         return self.embed(sentences_idx).clone().detach()
    
    /opt/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
        720             result = self._slow_forward(*input, **kwargs)
        721         else:
    --> 722             result = self.forward(*input, **kwargs)
        723         for hook in itertools.chain(
        724                 _global_forward_hooks.values(),
    
    /opt/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
        124         return F.embedding(
        125             input, self.weight, self.padding_idx, self.max_norm,
    --> 126             self.norm_type, self.scale_grad_by_freq, self.sparse)
        127 
        128     def extra_repr(self) -> str:
    
    /opt/anaconda3/envs/py36/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
       1812         # remove once script supports set_grad_enabled
       1813         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
    -> 1814     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
       1815 
       1816 
    
    IndexError: index out of range in self
    

    Code that reproduces the error

    1. Data preprocessing: count the distinct words and map them to index dictionaries;
    sentences = ['It is a good day.','how are you?','I want to study the nn.embedding.','I want to elmate my pox.','the experience that I have done today is my favriate experience.']
    sentences = [sentence.split() for sentence in sentences]
    all_words = []
    total_words = 0
    for sentence in sentences:
        all_words += [ words for words in sentence ]
    no_repeat_words = set(all_words)
    total_words = len(no_repeat_words)  
    word_to_idx = {word: i+1 for i, word in enumerate(no_repeat_words)}
    word_to_idx['<unk>'] = 0
    idx_to_word = {i+1: word for i, word in enumerate(no_repeat_words)}
    print('all_words:',all_words)
    print('no_repeat_words:',no_repeat_words)
    print('idx_to_word:',idx_to_word)
    print('word_to_idx:',word_to_idx)
    print('total_words',total_words)
    
    
    >>>all_words: ['It', 'is', 'a', 'good', 'day.', 'how', 'are', 'you?', 'I', 'want', 'to', 'study', 'the', 'nn.embedding.', 'I', 'want', 'to', 'elmate', 'my', 'pox.', 'the', 'experience', 'that', 'I', 'have', 'done', 'today', 'is', 'my', 'favriate', 'experience.']
    >>>no_repeat_words: {'a', 'want', 'nn.embedding.', 'It', 'experience.', 'my', 'today', 'study', 'favriate', 'is', 'have', 'I', 'day.', 'you?', 'how', 'elmate', 'experience', 'to', 'pox.', 'the', 'that', 'good', 'done', 'are'}
    >>>idx_to_word: {1: 'a', 2: 'want', 3: 'nn.embedding.', 4: 'It', 5: 'experience.', 6: 'my', 7: 'today', 8: 'study', 9: 'favriate', 10: 'is', 11: 'have', 12: 'I', 13: 'day.', 14: 'you?', 15: 'how', 16: 'elmate', 17: 'experience', 18: 'to', 19: 'pox.', 20: 'the', 21: 'that', 22: 'good', 23: 'done', 24: 'are'}
    >>>word_to_idx: {'a': 1, 'want': 2, 'nn.embedding.': 3, 'It': 4, 'experience.': 5, 'my': 6, 'today': 7, 'study': 8, 'favriate': 9, 'is': 10, 'have': 11, 'I': 12, 'day.': 13, 'you?': 14, 'how': 15, 'elmate': 16, 'experience': 17, 'to': 18, 'pox.': 19, 'the': 20, 'that': 21, 'good': 22, 'done': 23, 'are': 24, '<unk>': 0}
    >>>total_words: 24
    
    2. Word to vector: convert each sentence into a vector of indices
    word_vector = []
    sentences_pad = []
    print('Sentences before padding:', sentences)
    max_len = max([len(sentence) for sentence in sentences])

    for sentence in sentences:
        if len(sentence) < max_len:
            # pad in place with '<unk>' up to the longest sentence
            sentence.extend("<unk>" for _ in range(max_len - len(sentence)))
        sentences_pad += [sentence]
    for sentence in sentences:
        word_vector += [[word_to_idx[word] for word in sentence]]
    print('Sentences after padding:', sentences_pad)
    print('Sentences as index vectors:', word_vector)
    
    >>>Sentences before padding: [['It', 'is', 'a', 'good', 'day.', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>'], ['how', 'are', 'you?', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>'], ['I', 'want', 'to', 'study', 'the', 'nn.embedding.', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>'], ['I', 'want', 'to', 'elmate', 'my', 'pox.', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>'], ['the', 'experience', 'that', 'I', 'have', 'done', 'today', 'is', 'my', 'favriate', 'experience.']]
    >>>Sentences after padding: [['It', 'is', 'a', 'good', 'day.', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>'], ['how', 'are', 'you?', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>'], ['I', 'want', 'to', 'study', 'the', 'nn.embedding.', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>'], ['I', 'want', 'to', 'elmate', 'my', 'pox.', '<unk>', '<unk>', '<unk>', '<unk>', '<unk>'], ['the', 'experience', 'that', 'I', 'have', 'done', 'today', 'is', 'my', 'favriate', 'experience.']]
    >>>Sentences as index vectors: [[4, 10, 1, 22, 13, 0, 0, 0, 0, 0, 0], [15, 24, 14, 0, 0, 0, 0, 0, 0, 0, 0], [12, 2, 18, 8, 20, 3, 0, 0, 0, 0, 0], [12, 2, 18, 16, 6, 19, 0, 0, 0, 0, 0], [20, 17, 21, 12, 11, 23, 7, 10, 6, 9, 5]]
    
    3. Pass word_vector into nn.Embedding()
    class myEmbed(nn.Module):
        def __init__(self,total_words,embedding_dim):
            super(myEmbed,self).__init__()
            self.embed = nn.Embedding(total_words,embedding_dim)
        def forward(self,sentences_idx):
            return self.embed(sentences_idx).clone().detach()
    output_emb = myEmbed(total_words = total_words, embedding_dim = 8)
    word_vector = torch.tensor(word_vector, dtype=torch.long).clone().detach()
    output = output_emb(word_vector)
    print(output)
    
    >>> The error shown under 'Error details' above is raised
    

    Cause of the error

    The failure happens at step '3. Pass word_vector into nn.Embedding()'. The total_words passed to the module is smaller than the number of entries in the dictionary that produced word_vector: total_words = len(no_repeat_words) is 24, but word_to_idx also maps '<unk>' to 0, so the indices run from 0 to 24, i.e. 25 distinct values. An embedding table built with num_embeddings = 24 only accepts indices 0 to 23, so index 24 overflows and nn.Embedding() raises the IndexError. In short, num_embeddings was set too small; it must be at least the total number of entries in the dictionary, special tokens included (a sanity-check sketch follows the parameter list below). The full signature is:

    class torch.nn.Embedding(num_embeddings, embedding_dim,
                             padding_idx=None, max_norm=None,
                             norm_type=2.0, scale_grad_by_freq=False,
                             sparse=False, _weight=None)
    

    A simple lookup table that stores embeddings of a fixed dictionary and size. This module is often used to store word embeddings and retrieve them by index: the input is a list of indices, and the output is the corresponding word embeddings.

    1. num_embeddings (int) – the size of the dictionary, i.e. the number of distinct words after deduplication (including special tokens such as '<unk>');
    2. embedding_dim (int) – the size of each embedding vector;
    3. padding_idx (int, optional) – if given, output positions whose index equals padding_idx are filled with the embedding vector at that index (initialized to zeros);
    4. max_norm (float, optional) – if given, each embedding vector whose norm exceeds max_norm is renormalized to have norm max_norm;
    5. norm_type (float, optional) – the p of the p-norm used for the max_norm option, default 2;
    6. scale_grad_by_freq (boolean, optional) – if given, gradients are scaled by the inverse frequency of the words in the mini-batch, default False;
    7. sparse (bool, optional) – if True, the gradient w.r.t. the weight matrix will be a sparse tensor; see the notes on sparse gradients for details.
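
    Before constructing the layer, a quick sanity check makes the size mismatch obvious. This is a minimal sketch that reuses the word_to_idx dictionary and the word_vector list built above:

    import torch
    import torch.nn as nn

    # Sanity check: every index must be a valid row of the embedding table.
    total_words = len(word_to_idx)               # 25 = 24 distinct words + '<unk>'
    word_vector = torch.tensor(word_vector, dtype=torch.long)

    assert word_vector.min() >= 0
    assert word_vector.max() < total_words       # valid indices: 0 .. num_embeddings - 1

    embed = nn.Embedding(num_embeddings=total_words, embedding_dim=8)
    output = embed(word_vector)                  # shape: (num_sentences, max_len, 8)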
    

    Corrected code

    # The earlier steps are copied unchanged
    # 1. Data preprocessing
    sentences = ['It is a good day.','how are you?','I want to study the nn.embedding.','I want to elmate my pox.','the experience that I have done today is my favriate experience.']
    sentences = [sentence.split() for sentence in sentences]
    all_words = []
    total_words = 0
    for sentence in sentences:
        all_words += [ words for words in sentence ]
    no_repeat_words = set(all_words)
    total_words = len(no_repeat_words)  
    word_to_idx = {word: i+1 for i, word in enumerate(no_repeat_words)}
    word_to_idx['<unk>'] = 0
    idx_to_word = {i+1: word for i, word in enumerate(no_repeat_words)}
    
    # 2. Word to vector: convert sentences into index vectors
    word_vector = []
    sentences_pad = []
    max_len = max([len(sentence) for sentence in sentences])
    for sentence in sentences:
        if len(sentence) < max_len:
            # pad in place with '<unk>' up to the longest sentence
            sentence.extend("<unk>" for _ in range(max_len - len(sentence)))
        sentences_pad += [sentence]
    for sentence in sentences:
        word_vector += [[ word_to_idx[word] for word in sentence]]
    
    # 3. Pass the vectorized sentences in to produce the word embeddings
    total_words = len(word_to_idx)  # 25 entries: the 24 distinct words plus '<unk>'
    class myEmbed(nn.Module):
        def __init__(self,total_words,embedding_dim):
            super(myEmbed,self).__init__()
            self.embed = nn.Embedding(total_words,embedding_dim)
        def forward(self,sentences_idx):
            return self.embed(sentences_idx).clone().detach()
    output_emb = myEmbed(total_words = total_words, embedding_dim = 8)
    word_vector = torch.tensor(word_vector, dtype=torch.long).clone().detach()
    output = output_emb(word_vector)
    print(output)
    

    Output

    tensor([[[-0.9028, -1.0990,  1.0646,  1.4747,  1.2577,  0.6634,  0.0188,
               0.6545],
             [-0.2176,  0.5252,  0.2574,  1.2822, -0.8745, -1.2112,  0.0584,
              -0.5189],
             [ 0.5240, -0.8862, -1.3594, -1.1795, -0.8441,  0.7830,  0.9485,
               0.5734],
             [ 1.6141,  0.2254, -0.1457,  0.7620, -1.8222,  0.4634, -0.8187,
               0.3283],
             [-0.3710,  0.8392, -0.6133,  0.6381, -1.7941,  0.2950,  0.3148,
               2.2896],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190]],
    
            [[-0.1860,  1.8636, -0.6865, -0.3979,  1.1691,  1.2467,  1.5026,
               0.2586],
             [-0.9084,  0.0882, -0.0631,  0.0667,  0.9071,  1.6767, -0.1515,
               1.1327],
             [-2.6057,  0.6494,  0.0483,  0.5032,  0.5448,  0.7419,  0.8697,
              -0.4805],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190]],
    
            [[-0.2740,  0.7465,  0.7614, -1.3599, -0.7212,  0.0880,  0.9135,
               1.8307],
             [ 0.3974, -0.0467, -0.8352,  0.2649,  1.9399, -2.1667,  0.3023,
              -1.7938],
             [-0.8383, -0.6372, -0.1922,  0.5328,  0.5292, -0.8630, -0.0764,
              -1.4630],
             [ 0.2232, -0.2855, -0.5257, -1.4286, -1.3177, -0.5152, -1.1457,
               0.3720],
             [-0.6988, -0.3652, -0.9142,  0.5403,  0.1923, -1.6566,  0.8366,
              -1.1495],
             [-0.1142, -1.0301,  1.1789,  0.4901, -0.2576,  0.4898,  0.4154,
               1.1342],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190]],
    
            [[-0.2740,  0.7465,  0.7614, -1.3599, -0.7212,  0.0880,  0.9135,
               1.8307],
             [ 0.3974, -0.0467, -0.8352,  0.2649,  1.9399, -2.1667,  0.3023,
              -1.7938],
             [-0.8383, -0.6372, -0.1922,  0.5328,  0.5292, -0.8630, -0.0764,
              -1.4630],
             [-1.1177, -0.8047,  0.2185, -0.3761,  0.8753,  2.1269,  1.4648,
              -0.1830],
             [ 0.4993,  0.5043, -0.4541, -0.2609,  2.4289,  1.5842, -1.9878,
               1.4654],
             [ 1.8740, -0.1214,  0.6446, -0.4646,  0.3363, -0.3854, -0.4768,
               0.7824],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190],
             [ 1.0500, -0.7410,  1.4759, -0.9487,  1.4232,  0.1392,  0.8788,
              -0.7190]],
    
            [[-0.6988, -0.3652, -0.9142,  0.5403,  0.1923, -1.6566,  0.8366,
              -1.1495],
             [ 0.4606,  0.2213, -0.6970, -0.1618, -1.8748, -0.4962,  0.5517,
              -0.4841],
             [ 0.0738,  0.8394, -1.1480, -0.3829, -0.0931,  1.1793,  0.2737,
              -0.9046],
             [-0.2740,  0.7465,  0.7614, -1.3599, -0.7212,  0.0880,  0.9135,
               1.8307],
             [ 1.2459,  0.6663,  1.6969, -0.2072, -1.9603, -1.4282,  0.8382,
              -0.3569],
             [-1.6661,  0.0275,  0.5090,  0.4771, -0.7955,  0.9199,  0.9401,
               0.8285],
             [ 0.2445,  0.0742,  1.6497, -0.0338,  1.8325,  0.1709,  0.7659,
              -0.7233],
             [-0.2176,  0.5252,  0.2574,  1.2822, -0.8745, -1.2112,  0.0584,
              -0.5189],
             [ 0.4993,  0.5043, -0.4541, -0.2609,  2.4289,  1.5842, -1.9878,
               1.4654],
             [ 0.1651, -0.1232,  1.1650, -1.3531,  0.1082,  0.1277, -1.0091,
              -1.3470],
             [-0.2381,  1.7149,  1.0614, -1.1837, -0.5192,  0.9356, -0.1343,
               0.9358]]])
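
    Note that the many identical rows in this output (the repeated [1.0500, -0.7410, ...] row) are simply the embedding of index 0, i.e. the '<unk>' padding token, looked up over and over. If you would rather have those positions stay all-zero and receive no gradient updates, one optional variant (an assumption, not part of the original code) is to declare index 0 as the padding entry:

    import torch.nn as nn

    # Optional variant: mark index 0 ('<unk>'/padding) as padding_idx so its row
    # is initialized to zeros and is never updated during training.
    embed = nn.Embedding(num_embeddings=25, embedding_dim=8, padding_idx=0)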
    

    References

    1. The official PyTorch documentation for nn.Embedding()
    2. "Word embedding in PyTorch is implemented through a single module, nn.Embedding" (the blog post "深度学习1", which discusses how the word embedding layer actually works)
    3. nn.Embedding() in PyTorch
    4. PyTorch embedding layer error: index out of range in self


    When using PyTorch, the following error was raised as the data passed through the embedding layer:

    Traceback (most recent call last):
      File "C:/Users/gaosiqi/PycharmProjects/DeepFM/main.py", line 68, in <module>
        out = model(train_data)
      File "C:\Anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "C:/Users/gaosiqi/PycharmProjects/DeepFM/main.py", line 26, in forward
        embedding = self.word_embedding(x)
      File "C:\Anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
        result = self.forward(*input, **kwargs)
      File "C:\Anaconda3\envs\tensorflow\lib\site-packages\torch\nn\modules\sparse.py", line 126, in forward
        self.norm_type, self.scale_grad_by_freq, self.sparse)
      File "C:\Anaconda3\envs\tensorflow\lib\site-packages\torch\nn\functional.py", line 1814, in embedding
        return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
    IndexError: index out of range in self
    

    The cause: the data fed into the embedding layer had not been mapped through the vocabulary; it was still the raw data, so the tensor contained values outside the embedding layer's legal range.

    Before the embedding layer, check the actual values inside the tensor and make sure they fall within the valid range [0, num_embeddings - 1]. In this case the maximum value in the tensor was over 30000 and the minimum was -2, so both the overly large and the negative values caused the problem.

    Another example:

    train_data = [[1,-1,1,1,2,2,2,3,4,23,2,3,1,2,2,2],
                  [4,3,2,5,3,2,8,9,3,66,7,7,4,3,2,3]]
    

    print(train_data.max())  # (after converting train_data to a tensor)
    print(train_data.min())
    

    to check the minimum and maximum values inside the tensor. With this data the model raises the error above. If we replace the values that are greater than num_embeddings and the values below 0 and try again:

    train_data = [[1,1,1,1,2,2,2,3,4,2,2,3,1,2,2,2],
                  [4,3,2,5,3,2,8,9,3,6,7,7,4,3,2,3]]
    

    Now the data passes through the embedding layer successfully and we get the embedding layer's output.
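
    Editing the data by hand works for a toy example; more generally, if a tensor might contain out-of-range values, a minimal guard (a sketch under the assumption that out-of-range values can simply be clipped, which is not always the right modelling choice; the table size 10 below is made up) is to clamp the indices before the lookup:

    import torch
    import torch.nn as nn

    num_embeddings = 10                       # assumed table size for this toy example
    embed = nn.Embedding(num_embeddings, 4)

    raw = torch.tensor([[1, -1, 1, 23],
                        [4, 66, 7, 3]])
    safe = raw.clamp(0, num_embeddings - 1)   # force every index into [0, num_embeddings-1]
    out = embed(safe)
    print(out.shape)                          # torch.Size([2, 4, 4])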

    An example of using the embedding layer correctly:

    from collections import Counter
    import torch.nn as nn
    
    # Let's say you have 2 sentences(lowercased, punctuations removed) :
    sentences = "i am new to PyTorch i am having fun"
    words = sentences.split(' ')
    
    vocab = Counter(words)  # create a dictionary
    vocab = sorted(vocab, key=vocab.get, reverse=True)
    vocab_size = len(vocab)
    
    # map words to unique indices
    word2idx = {word: ind for ind, word in enumerate(vocab)}
    
    # word2idx = {'i': 0, 'am': 1, 'new': 2, 'to': 3, 'pytorch': 4, 'having': 5, 'fun': 6}
    
    encoded_sentences = [word2idx[word] for word in words]
    
    # encoded_sentences = [0, 1, 2, 3, 4, 0, 1, 5, 6]
    print(encoded_sentences)
    # let's say you want embedding dimension to be 3
    emb_dim = 3 
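    
    # The excerpt is cut off here; a minimal completion (an assumption, not taken
    # from the original post) builds the table and performs the actual lookup:
    import torch
    
    emb_layer = nn.Embedding(vocab_size, emb_dim)             # 7 rows, 3 dims each
    word_vectors = emb_layer(torch.LongTensor(encoded_sentences))
    print(word_vectors.shape)                                  # torch.Size([9, 3])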
    

    IndexError: index out of range in self

    This error means that the tensor fed into the embedding layer contains values outside the legal range. The valid input values for an embedding layer are integers in [0, num_embeddings - 1]; anything larger (or negative) raises the error.
    So first print the layer's upper bound:

    print(self.embedding.num_embeddings)
    

    Then check the range of the tensor that is about to go into the embedding layer:

    print(input_tensor.min())
    print(input_tensor.max())
    

    This kind of error usually shows up in the token-mapping step (the word-to-index mapping). The full signature of the embedding layer is:

     class torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None)
    

    See the official PyTorch documentation for the details:
    the official nn.Embedding documentation
    Solution:
    TODO
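
    The original post leaves the solution as TODO. A common generic remedy (stated here as an assumption, not as the author's fix) is to validate the indices right before the lookup, so the failure points at the bad data rather than at torch.embedding:

    import torch
    import torch.nn as nn

    def safe_embed(embedding: nn.Embedding, indices: torch.Tensor) -> torch.Tensor:
        # Fail early with a readable message if any index is out of range.
        if indices.min() < 0 or indices.max() >= embedding.num_embeddings:
            raise ValueError(
                f"indices must lie in [0, {embedding.num_embeddings - 1}], "
                f"got min={int(indices.min())}, max={int(indices.max())}"
            )
        return embedding(indices)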

    Reference post 1
    Reference post 2



    PyTorch embedding layer error: index out of range in self

    Fix: the values fed into nn.Embedding must lie in [0, num_embeddings - 1].
    In my case the input was raw gearbox vibration data containing both positive and negative values, so it first had to be normalized into [0, 1] and mapped to valid indices.
    The layer was finally created as nn.Embedding(500, 3, padding_idx=0).
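
    The post does not show its preprocessing code; below is a minimal sketch of one way to turn a continuous signal into valid indices (min-max normalize, then bucketize) for a table of the size mentioned above. The input shape and the bucketing scheme are assumptions:

    import torch
    import torch.nn as nn

    num_embeddings = 500                      # table size mentioned in the post
    signal = torch.randn(3, 64)               # stand-in for raw vibration data (pos/neg values)

    # Min-max normalize to [0, 1], then discretize into integer bins 0 .. 499.
    normed = (signal - signal.min()) / (signal.max() - signal.min())
    idx = (normed * (num_embeddings - 1)).long()

    embed = nn.Embedding(num_embeddings, 3, padding_idx=0)
    out = embed(idx)                          # shape: (3, 64, 3)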
