• A Survey of Natural Language Processing


    Weren't we all surprised the first time a smart device understood what we were saying? And it even answered in the friendliest manner, didn't it? Assistants like Apple’s Siri and Amazon’s Alexa understand us when we ask about the weather, ask for directions, or ask to play a certain genre of music. Ever since then I have wondered how these computers understand our language. That long-overdue curiosity finally got the better of me, and I decided to write this blog as a newcomer to the field.


    In this article, I will be using a popular NLP library called NLTK. The Natural Language Toolkit, or NLTK, is one of the most powerful and probably the most popular natural language processing libraries. Not only does it offer one of the most comprehensive Python-based toolkits, it also supports one of the largest sets of human languages.


    What is Natural Language Processing?


    Natural language processing (NLP) is a subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human languages, in particular how to train computers to process and analyze large amounts of natural language data.


    Why is handling unstructured data so important?


    With every tick of the clock, the world generates an overwhelming amount of data. Yeah, it is mind-boggling! And the majority of that data is unstructured. Formats such as text, audio, video, and images are classic examples of unstructured data. Unlike the traditional row-and-column structure of relational databases, unstructured data has no fixed dimensions or schema, so it is harder to analyze and not easily searchable. Even so, business organizations must find ways to address these challenges and embrace the opportunity to derive insights if they want to prosper in highly competitive environments. With the help of natural language processing and machine learning, this is changing fast.


    Are computers confused by our natural language?


    Human language is one of our most powerful tools of communication. The words, the tone, the sentences, and the gestures we use all convey information. There are countless ways of assembling words into a phrase, and words can carry many shades of meaning, so comprehending human language with its intended meaning is a challenge. A linguistic paradox is a phrase or sentence that contradicts itself, for example, “oh, this is my open secret” or “can you please act naturally”. Though these sound pointedly foolish, we humans understand and use them in everyday speech; for machines, however, natural language’s ambiguity and imprecision are hurdles that are hard to sail past.



    Most used NLP Libraries


    In the past, only pioneers with superior knowledge of mathematics, machine learning, and linguistics could be part of NLP projects. Now developers can use ready-made libraries that simplify the pre-processing of text so they can concentrate on building machine learning models. These libraries enable text comprehension, interpretation, and sentiment analysis with only a few lines of code. The most popular NLP libraries are:


    Spark NLP, NLTK, PyTorch-Transformers, TextBlob, spaCy, Stanford CoreNLP, Apache OpenNLP, AllenNLP, Gensim, NLP Architect, scikit-learn.


    The question is: where should we start, and how?


    Have you ever observed how kids start to understand and learn a language? Yeah, by picking up each word and then forming sentences, right? Making computers understand our language works in much the same way.


    Pre-processing Steps:


    1. Sentence Tokenization

    2. Word Tokenization

    3. Text Lemmatization and Stemming

    4. Stop Words

    5. POS Tagging

    6. Chunking

    7. Wordnet

    8. Bag-of-Words

    9. TF-IDF

    1. Sentence Tokenization (Sentence Segmentation)

    To make computers understand natural language, the first step is to break paragraphs into sentences. Punctuation marks are an easy way to split the sentences apart.


    import nltk
    nltk.download('punkt')

    text = "Home Farm is one of the biggest junior football clubs in Ireland and their senior team, from 1970 up to the late 1990s, played in the League of Ireland. However, the link between Home Farm and the senior team was severed in the late 1990s. The senior side was briefly known as Home Farm Fingal in an effort to identify it with the north Dublin area."

    sentences = nltk.sent_tokenize(text)
    print("The number of sentences in the paragraph:", len(sentences))
    for sentence in sentences:
        print(sentence)

    OUTPUT:
    The number of sentences in the paragraph: 3
    Home Farm is one of the biggest junior football clubs in Ireland and their senior team, from 1970 up to the late 1990s, played in the League of Ireland.
    However, the link between Home Farm and the senior team was severed in the late 1990s.
    The senior side was briefly known as Home Farm Fingal in an effort to identify it with the north Dublin area.

    2. Word Tokenization (Word Segmentation)

    By now we have the sentences separated; the next step is to break the sentences into words, which are often called tokens.


    Just as creating a bit of space in one's own life can help for the better, the spaces between words help us break a phrase apart into words. We can treat punctuation marks as separate tokens too, since punctuation has a purpose of its own.


    for sentence in sentences:
        words = nltk.word_tokenize(sentence)
        print("The number of words in a sentence:", len(words))
        print(words)

    OUTPUT:
    The number of words in a sentence: 32
    ['Home', 'Farm', 'is', 'one', 'of', 'the', 'biggest', 'junior', 'football', 'clubs', 'in', 'Ireland', 'and', 'their', 'senior', 'team', ',', 'from', '1970', 'up', 'to', 'the', 'late', '1990s', ',', 'played', 'in', 'the', 'League', 'of', 'Ireland', '.']
    The number of words in a sentence: 18
    ['However', ',', 'the', 'link', 'between', 'Home', 'Farm', 'and', 'the', 'senior', 'team', 'was', 'severed', 'in', 'the', 'late', '1990s', '.']
    The number of words in a sentence: 22
    ['The', 'senior', 'side', 'was', 'briefly', 'known', 'as', 'Home', 'Farm', 'Fingal', 'in', 'an', 'effort', 'to', 'identify', 'it', 'with', 'the', 'north', 'Dublin', 'area', '.']

    As a prerequisite for using the word_tokenize() or sent_tokenize() functions, we need to have the punkt package downloaded.


    3. Stemming and Text Lemmatization


    In every text document, we usually come across different forms of a word, like write, writes, and writing, with alike meanings and the same base word. But how do we make a computer analyze such words? That's where text lemmatization and stemming come into the picture.


    Stemming and lemmatization are normalization techniques built around the same idea: chopping a word's ending down to its core. While both aim to solve the same problem, they go about it in entirely different ways. Stemming is usually a crude heuristic process, whereas lemmatization uses a vocabulary and morphological analysis to find the base word. Let's take a closer look!


    Stemming - Words are reduced to their stem. A word stem need not be the same as the dictionary-based morphological root (the smallest meaningful unit); it is just an equal or smaller form of the word.


    from nltk.stem import PorterStemmer

    # create an object of class PorterStemmer
    porter = PorterStemmer()

    # A list of words to be stemmed
    word_list = ['running', ',', 'driving', 'sung', 'between', 'lasted', 'was', 'paticipated', 'before', 'severed', '1990s', '.']
    print("{0:20}{1:20}".format("Word", "Porter Stemmer"))
    for word in word_list:
        print("{0:20}{1:20}".format(word, porter.stem(word)))

    OUTPUT:
    Word                Porter Stemmer
    running             run
    ,                   ,
    driving             drive
    sung                sung
    between             between
    lasted              last
    was                 wa
    paticipated         paticip
    before              befor
    severed             sever
    1990s               1990
    .                   .

    Stemming is not as easy as it looks :( We can run into two issues: under-stemming and over-stemming of a word.

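    To see what those two pitfalls look like in practice, here is a tiny illustration with the same PorterStemmer (the example words are my own picks, not from the text above):

```python
from nltk.stem import PorterStemmer

porter = PorterStemmer()

# Over-stemming: two words with different meanings collapse to the same stem.
print(porter.stem("universal"), porter.stem("university"))  # univers univers

# Under-stemming: two related words end up with different stems.
print(porter.stem("alumnus"), porter.stem("alumni"))  # alumnu alumni
```

    Because the stemmer works purely on surface form, it can neither tell apart words that merely look alike nor unify words that are related but spelled differently.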

    Lemmatization - While stemming is a best-guess method that snips a word based on how it appears, lemmatization prunes the word in a far more planned way. It resolves words through a dictionary process; indeed, a word's lemma is its dictionary or canonical form.


    import nltk
    from nltk.stem import WordNetLemmatizer

    nltk.download('wordnet')  # the lemmatizer needs the WordNet corpus

    wordnet_lemmatizer = WordNetLemmatizer()

    # A list of words to lemmatize
    word_list = ['running', ',', 'drives', 'sung', 'between', 'lasted', 'was', 'paticipated', 'before', 'severed', '1990s', '.']
    print("{0:20}{1:20}".format("Word", "Lemma"))
    for word in word_list:
        print("{0:20}{1:20}".format(word, wordnet_lemmatizer.lemmatize(word)))

    OUTPUT:
    Word                Lemma
    running             running
    ,                   ,
    drives              drive
    sung                sung
    between             between
    lasted              lasted
    was                 wa
    paticipated         paticipated
    before              before
    severed             severed
    1990s               1990s
    .                   .

    If speed is what you need, stemming is the better choice; when accuracy matters, lemmatization is preferred.


    4. Stop Words

    Words such as ‘in’, ‘at’, ‘on’, and ‘so’ are considered stop words. Stop words carry little meaning on their own in NLP, but removing them plays an important role in tasks such as sentiment analysis.


    NLTK comes with stop word lists for 16 different languages.


    import nltk
    from nltk.corpus import stopwords
    from nltk.tokenize import word_tokenize

    nltk.download('stopwords')

    stop_words = set(stopwords.words('english'))
    print("The stop words in NLTK lib are:", stop_words)

    para = """Home Farm is one of the biggest junior football clubs in Ireland and their senior team, from 1970 up to the late 1990s, played in the League of Ireland. However, the link between Home Farm and the senior team was severed in the late 1990s. The senior side was briefly known as Home Farm Fingal in an effort to identify it with the north Dublin area."""
    tokenized_para = word_tokenize(para)
    modified_token_list = [word for word in tokenized_para if word not in stop_words]
    print("After removing the stop words in the sentence:")
    print(modified_token_list)

    OUTPUT:
    The stop words in NLTK lib are: {'about', 'ma', "shouldn't", 's', 'does', 't', 'our', 'mightn', 'doing', 'while', 'ourselves', 'themselves', 'will', 'some', 'you', "aren't", 'by', "needn't", 'in', 'can', 'he', 'into', 'as', 'being', 'between', 'very', 'after', 'couldn', 'himself', 'herself', 'had', 'its', 've', 'him', 'll', "isn't", 'through', 'should', 'was', 'now', 'them', "you'll", 'again', 'who', 'don', 'been', 'they', 'weren', "you're", 'both', 'd', 'me', 'didn', "won't", "you'd", 'only', 'itself', 'hadn', "should've", 'than', 'how', 'few', 're', 'down', 'these', 'y', "haven't", "mightn't", 'won', "hadn't", 'other', 'above', 'all', "doesn't", 'isn', "that'll", 'not', 'yourselves', 'at', 'mustn', "it's", 'on', 'the', 'for', "didn't", 'what', "mustn't", 'his', 'haven', 'doesn', "you've", 'are', 'out', 'hers', 'with', 'has', 'she', 'most', 'ain', 'those', 'when', 'myself', 'before', 'their', 'during', 'there', 'or', 'until', 'that', 'more', "hasn't", 'o', 'we', 'and', "shan't", 'which', 'because', "don't", 'why', 'shan', 'an', 'my', 'if', 'did', 'having', "couldn't", 'your', 'theirs', 'aren', 'just', 'further', 'here', 'of', "wouldn't", 'be', 'too', 'her', 'no', 'same', 'it', 'is', 'were', 'yourself', 'have', 'off', 'this', 'needn', 'once', "wasn't", 'against', 'wouldn', 'up', 'a', 'i', 'below', "weren't", 'over', 'own', 'then', 'so', 'do', 'from', 'shouldn', 'am', 'under', 'any', 'yours', 'ours', 'hasn', 'such', 'nor', 'wasn', 'to', 'where', 'm', "she's", 'each', 'whom', 'but'}
    After removing the stop words in the sentence:
    ['Home', 'Farm', 'one', 'biggest', 'junior', 'football', 'clubs', 'Ireland', 'senior', 'team', ',', '1970', 'late', '1990s', ',', 'played', 'League', 'Ireland', '.', 'However', ',', 'link', 'Home', 'Farm', 'senior', 'team', 'severed', 'late', '1990s', '.', 'The', 'senior', 'side', 'briefly', 'known', 'Home', 'Farm', 'Fingal', 'effort', 'identify', 'north', 'Dublin', 'area', '.']

    5. POS Tagging

    Down memory lane to our early English grammar classes: remember how our teachers gave us instruction on the basic parts of speech so we could communicate effectively? Yeah, the good old days! Let's teach parts of speech to our computers too. :)


    The eight parts of speech are nouns, verbs, pronouns, adjectives, adverbs, prepositions, conjunctions, and interjections.


    POS tagging is the ability to identify the words in a sentence and assign each one a part of speech. There are different tag sets to choose from, but we will be using the universal tagset.


    # tag each sentence's tokens (requires the averaged_perceptron_tagger
    # and universal_tagset resources)
    pos_tags = [nltk.pos_tag(nltk.word_tokenize(sentence), tagset="universal") for sentence in sentences]
    print(pos_tags)

    OUTPUT (first sentence shown):
    [[('Home', 'NOUN'), ('Farm', 'NOUN'), ('is', 'VERB'), ('one', 'NUM'), ('of', 'ADP'), ('the', 'DET'), ('biggest', 'ADJ'), ('junior', 'NOUN'), ('football', 'NOUN'), ('clubs', 'NOUN'), ('in', 'ADP'), ('Ireland', 'NOUN'), ('and', 'CONJ'), ('their', 'PRON'), ('senior', 'ADJ'), ('team', 'NOUN'), (',', '.'), ('from', 'ADP'), ('1970', 'NUM'), ('up', 'ADP'), ('to', 'PRT'), ('the', 'DET'), ('late', 'ADJ'), ('1990s', 'NUM'), (',', '.'), ('played', 'VERB'), ('in', 'ADP'), ('the', 'DET'), ('League', 'NOUN'), ('of', 'ADP'), ('Ireland', 'NOUN'), ('.', '.')], ...]

    One application of POS tagging is analyzing the qualities of a product from feedback: by pulling out the adjectives in customers' reviews, we can evaluate the sentiment of the feedback. For example: how was your shopping with us?

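    To make that concrete, once a review has been POS-tagged, pulling out the adjectives is a one-liner. The review and its universal-tagset tags below are made-up illustration data:

```python
# A customer review that has already been POS-tagged (hypothetical data,
# in the same (word, tag) format that nltk.pos_tag returns)
tagged_review = [('The', 'DET'), ('delivery', 'NOUN'), ('was', 'VERB'),
                 ('quick', 'ADJ'), ('and', 'CONJ'), ('the', 'DET'),
                 ('packaging', 'NOUN'), ('was', 'VERB'), ('excellent', 'ADJ')]

# keep only the words tagged as adjectives
adjectives = [word for word, tag in tagged_review if tag == 'ADJ']
print(adjectives)  # ['quick', 'excellent']
```

    Feeding words like these into a sentiment lexicon or classifier gives a quick read on how happy the customer was.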

    6. Chunking

    Chunking adds more structure to a sentence by grouping words based on their part-of-speech (POS) tags; it is also called shallow parsing. The resulting word groups are called “chunks.” There are no predefined rules for chunking; you define your own patterns.


    Phrase structure conventions:


    • S(Sentence) → NP VP.

    • NP → {Determiner, Noun, Pronoun, Proper name}.

    • VP → V (NP)(PP)(Adverb).

    • PP → Preposition (NP).

    • AP → Adjective (PP).
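    These conventions can be played with directly in NLTK, which ships a small context-free-grammar parser that needs no extra downloads. Here is a toy grammar I made up following the S → NP VP pattern (the vocabulary is invented for illustration):

```python
import nltk

# A toy grammar following the phrase-structure conventions above
grammar = nltk.CFG.fromstring("""
  S -> NP VP
  NP -> Det N | 'John'
  VP -> V NP | V NP PP
  PP -> P NP
  Det -> 'the' | 'a'
  N -> 'dog' | 'park'
  V -> 'saw' | 'walked'
  P -> 'in'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("John walked the dog in the park".split()):
    print(tree)
```

    The printed tree shows the sentence decomposed into an NP, a VP, and a PP, exactly as the conventions describe.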


    I never had a good time with complex regular expressions; I used to stay as far away from them as I could. But lately I've realized how important it is to have a grip on regular expressions in data science. Let's start by understanding a simple instance.

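    As a warm-up before the chunking rules, here is a plain re example on our running text; the pattern below simply grabs year-like tokens:

```python
import re

text = "Played in the League of Ireland from 1970 up to the late 1990s."

# find four-digit years, optionally followed by an 's' (e.g. "1990s")
years = re.findall(r"\b\d{4}s?\b", text)
print(years)  # ['1970', '1990s']
```

    Chunking rules use the same regular-expression syntax, except that the symbols being matched are POS tags rather than raw characters.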

    Say we need to tag nouns, verbs (past tense), adjectives, and coordinating conjunctions from the sentence. You can use the rule below:



    chunk: {<NN.?>*<VBD.?>*<JJ.?>*<CC>?}

    import nltk
    from nltk import pos_tag, RegexpParser
    from nltk.tokenize import word_tokenize

    content = "Home Farm is one of the biggest junior football clubs in Ireland and their senior team, from 1970 up to the late 1990s, played in the League of Ireland. However, the link between Home Farm and the senior team was severed in the late 1990s. The senior side was briefly known as Home Farm Fingal in an effort to identify it with the north Dublin area."
    tokenized_text = word_tokenize(content)
    print("After Split:", tokenized_text)

    tokens_tag = pos_tag(tokenized_text)
    print("After Token:", tokens_tag)

    patterns = """mychunk:{<NN.?>*<VBD.?>*<JJ.?>*<CC>?}"""
    chunker = RegexpParser(patterns)
    print("After Regex:", chunker)

    output = chunker.parse(tokens_tag)
    print("After Chunking:", output)

    OUTPUT:
    After Regex: chunk.RegexpParser with 1 stages:
    RegexpChunkParser with 1 rules:
    <ChunkRule: '<NN.?>*<VBD.?>*<JJ.?>*<CC>?'>
    After Chunking:
    (S (mychunk Home/NN Farm/NN) is/VBZ one/CD of/IN the/DT
      (mychunk biggest/JJS)
      (mychunk junior/NN football/NN clubs/NNS) in/IN
      (mychunk Ireland/NNP and/CC) their/PRP$
      (mychunk senior/JJ)
      (mychunk team/NN) ,/, from/IN 1970/CD up/IN to/TO the/DT
      (mychunk late/JJ) 1990s/CD ,/, played/VBN in/IN the/DT
      (mychunk League/NNP) of/IN (mychunk Ireland/NNP) ./.)

    7. Wordnet


    WordNet is an NLTK corpus reader, a lexical database for English. It can be used to find synonyms and antonyms of words.


    from nltk.corpus import wordnet

    synonyms = []
    antonyms = []

    for syn in wordnet.synsets("active"):
        for lemma in syn.lemmas():
            synonyms.append(lemma.name())

    for syn in wordnet.synsets("active"):
        for lemma in syn.lemmas():
            if lemma.antonyms():
                antonyms.append(lemma.antonyms()[0].name())

    print("Synonyms are:", synonyms)
    print("Antonyms are:", antonyms)

    OUTPUT:
    Synonyms are: ['active_agent', 'active', 'active_voice', 'active', 'active', 'active', 'active', 'combat-ready', 'fighting', 'active', 'active', 'participating', 'active', 'active', 'active', 'active', 'alive', 'active', 'active', 'active', 'dynamic', 'active', 'active', 'active']
    Antonyms are: ['passive_voice', 'inactive', 'passive', 'inactive', 'inactive', 'inactive', 'quiet', 'passive', 'stative', 'extinct', 'dormant', 'inactive']

    8. Bag of Words

    A bag-of-words model turns raw text into its constituent words and counts the frequency of each word in the text.


    import nltk
    import re  # to match regular expressions

    text = "Home Farm is one of the biggest junior football clubs in Ireland and their senior team, from 1970 up to the late 1990s, played in the League of Ireland. However, the link between Home Farm and the senior team was severed in the late 1990s. The senior side was briefly known as Home Farm Fingal in an effort to identify it with the north Dublin area."

    sentences = nltk.sent_tokenize(text)
    for i in range(len(sentences)):
        sentences[i] = sentences[i].lower()
        sentences[i] = re.sub(r'\W', ' ', sentences[i])
        sentences[i] = re.sub(r'\s+', ' ', sentences[i])

    bag_of_words = {}
    for sentence in sentences:
        words = nltk.word_tokenize(sentence)
        for word in words:
            if word not in bag_of_words:
                bag_of_words[word] = 1
            else:
                bag_of_words[word] += 1
    print(bag_of_words)

    OUTPUT:
    {'home': 3, 'farm': 3, 'is': 1, 'one': 1, 'of': 2, 'the': 8, 'biggest': 1, 'junior': 1, 'football': 1, 'clubs': 1, 'in': 4, 'ireland': 2, 'and': 2, 'their': 1, 'senior': 3, 'team': 2, 'from': 1, '1970': 1, 'up': 1, 'to': 2, 'late': 2, '1990s': 2, 'played': 1, 'league': 1, 'however': 1, 'link': 1, 'between': 1, 'was': 2, 'severed': 1, 'side': 1, 'briefly': 1, 'known': 1, 'as': 1, 'fingal': 1, 'an': 1, 'effort': 1, 'identify': 1, 'it': 1, 'with': 1, 'north': 1, 'dublin': 1, 'area': 1}
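    For what it's worth, Python's standard library can build the same kind of bag of words in a couple of lines with collections.Counter; a minimal sketch on the first sentence of our text:

```python
import re
from collections import Counter

text = ("Home Farm is one of the biggest junior football clubs in Ireland "
        "and their senior team, from 1970 up to the late 1990s, played in "
        "the League of Ireland.")

# lowercase the text, keep only word-like tokens, then count them
words = re.findall(r"\w+", text.lower())
bag_of_words = Counter(words)
print(bag_of_words.most_common(3))
```

    Counter gives you sorting (most_common) and counting for free; the NLTK version above is still useful when you want proper tokenization rather than a regex split.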

    9. TF-IDF


    TF-IDF stands for Term Frequency - Inverse Document Frequency.


    Text data needs to be converted into a numerical format in which each word is represented in matrix form. The encoding of a given word is a vector in which the corresponding element is set to one and all other elements are zero. Because of this, the TF-IDF technique is also referred to as a form of word embedding.

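    The one-hot encoding just described is easy to sketch by hand; the tiny vocabulary below is invented purely for illustration:

```python
# A toy vocabulary; each word's encoding is a vector with a single 1
vocab = ['farm', 'home', 'ireland', 'senior']

def one_hot(word, vocab):
    # 1 in the position matching the word, 0 everywhere else
    return [1 if v == word else 0 for v in vocab]

print(one_hot('home', vocab))  # [0, 1, 0, 0]
```

    TF-IDF replaces those bare 0/1 entries with weights that reflect how informative each word is, which is what the next two formulas define.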

    TF-IDF works on two concepts:


    TF(t) = (Number of times term t appears in a document) / (Total number of terms in the document)


    IDF(t) = log_e(Total number of documents / Number of documents with term t in it)

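    Before reaching for scikit-learn, the two formulas above can be computed by hand on a toy corpus (the three mini-documents below are made up from our running text):

```python
import math

docs = [["home", "farm", "senior", "team"],
        ["home", "farm", "fingal"],
        ["senior", "side", "dublin"]]

def tf(term, doc):
    # (times term t appears in the document) / (total terms in the document)
    return doc.count(term) / len(doc)

def idf(term, docs):
    # log_e(total documents / documents containing term t)
    matches = sum(1 for doc in docs if term in doc)
    return math.log(len(docs) / matches)

def tf_idf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

print(tf_idf("fingal", docs[1], docs))  # rare word -> higher score
print(tf_idf("home", docs[1], docs))    # common word -> lower score
```

    Note that scikit-learn's TfidfTransformer below uses a smoothed variant of the IDF formula and normalizes each document vector, so its numbers differ slightly from this bare-bones version, but the intuition is the same.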

    from sklearn.feature_extraction.text import TfidfTransformer
    from sklearn.feature_extraction.text import CountVectorizer
    import pandas as pd

    docs = ["Home Farm is one of the biggest junior football clubs in Ireland and their senior team, from 1970 up to the late 1990s, played in the League of Ireland",
            "However, the link between Home Farm and the senior team was severed in the late 1990s",
            "The senior side was briefly known as Home Farm Fingal in an effort to identify it with the north Dublin area"]

    # instantiate CountVectorizer()
    cv = CountVectorizer()

    # this step generates word counts for the words in the docs
    word_count_vector = cv.fit_transform(docs)

    # fit the IDF weights on the word counts
    tfidf_transformer = TfidfTransformer(smooth_idf=True, use_idf=True)
    tfidf_transformer.fit(word_count_vector)

    # print idf values, sorted ascending
    df_idf = pd.DataFrame(tfidf_transformer.idf_, index=cv.get_feature_names(), columns=["idf_weights"])
    df_idf.sort_values(by=['idf_weights'])

    # count matrix
    count_vector = cv.transform(docs)
    # tf-idf scores
    tf_idf_vector = tfidf_transformer.transform(count_vector)

    feature_names = cv.get_feature_names()
    # get the tf-idf vector for the first document
    first_document_vector = tf_idf_vector[0]
    # print the scores
    df = pd.DataFrame(first_document_vector.T.todense(), index=feature_names, columns=["tfidf"])
    print(df.sort_values(by=["tfidf"], ascending=False))

    OUTPUT:
                 tfidf
    of        0.374810
    ireland   0.374810
    the       0.332054
    in        0.221369
    1970      0.187405
    football  0.187405
    up        0.187405
    as        0.000000
    an        0.000000
    and so on..

    What are these scores telling us? The more common a word is across documents, the lower its score; the more unique a word is, the higher its score.


    So far, we have learned the steps for cleaning and preprocessing text. What can we do with the processed data after all this? We could use it for sentiment analysis, chatbots, or market intelligence, or perhaps build a recommender system based on user purchases or item reviews, or perform customer segmentation with clustering.


    Computers are still not as accurate with human language as they are with numbers. With the massive amount of text data generated every day, NLP is becoming ever more significant for making sense of that data and is being used in many applications. There are endless ways to explore NLP.


    Translated from: https://medium.com/analytics-vidhya/natural-language-processing-bedb2e1c8ceb


  • 自然语言处理综述

    2019-08-30 16:56:56
  • 哈工大与科大讯飞撰写,自然语言处理国际前沿动态综述,提供最新的自然语言处理研究动向和学术成果,可以窥见新的行业变化
  • 【JMBook】Daniel Jurafsky and James H. Martin,2008. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech ...自然语言处理综述的中文版。
  • 自然语言处理,是指用计算机对自然语言的形、音、义等信息进行处理,即对字、词、句、篇章的输入、输出、识别、分析、理解、生成等的操作和加工。实现人机间的信息交流,是人工智能界、计算机科学和语言学界所共同...


          自然语言处理,是指用计算机对自然语言的形、音、 义等信息进行处理,即对字、词、句、篇章的输入、输出、识别、分析、理解、生成等的操作和加工。实现人机间的信息交流,是人工智能界、计算机科学和语言学界所共同关注的重要问题。自然语言处理的具体表现形式包括机器翻译、文本摘要、文本分类、文本校对、信息抽取、语音合成、语音识别等。可以说,自然语言处理就是要计算机理解自然语言,自然语言处理机制涉及两个流程,包括自然语言理解和自然语言生成。


  • 来自:程序媛驿站跨语言自然语言处理笔记作者:匿名侠| 排版:葡萄媛01 摘要跨语言自然语言处理是当下研究的热点。其中,跨语言词向量(Cross-lingual Word Embeddi...



    作者:匿名侠 | 排版:葡萄媛

    01 摘要

    跨语言自然语言处理是当下研究的热点。其中,跨语言词向量(Cross-lingual Word Embedding)可以帮助比较词语在不同语言下的含义,同时也为模型在不同语言之间进行迁移提供了桥梁。[Ruder et al., 2017] 详细描述了跨语言词向量学习方法和分类体系,将跨语言词向量按照对齐方式分为了基于词对齐、基于句子对齐、基于文档对齐的方法。其中基于词对齐的方法是所有方法的核心和基础。在基于词对齐的方法中,又有基于平行语料的方法,基于无监督的方法等。近些年,无监督方法成为研究热点。本文主要记录一些跨语言词向量的相关论文。

    02 单词语词向量

    常用的单语词向量有 Word2Vec, GloVe, fastText 等。下面主要介绍一下 Word2Vec[Mikolovet al., 2013c,a],Word2Vec 基于分布式假设(Distributional hypothesis):拥有相似上下文(context)的词语通常拥有相似的含义。其算法分为 Skip-gram 和 Continuous Bag of Words(CBOW)。Skipgram 根据中心词预测周围的词, CBOW 根据周围的词预测中心的词语,如图1

    一种常见的方法为 Skip-gram + Negative Sampling。简单来说,该算法构造两个向量矩阵,一个 Embedding 矩阵,一个 Context 矩阵。利用 Skip-gram 来构建训练正例,使用 Negative sampling来构建负例,如图2

    训练完成以后(教程可参考The Illustrated Word2vecVector Semantics),每个词语对应两个向量,一个 Embedding 矩阵中的表示,一个 Context 矩阵中的表示,最终表示可以直接使用 Embedding 矩阵作为词向量,或者可以将两个矩阵相加得到词向量,或者可以将两个矩阵拼接得到词向量。

    03 基于词语映射的方法

    [Ruder et al., 2017] 将基于词映射的方法根据映射方法(mapping method)、种子词语的选择(seed lexicon)、映射的改进(refnement)、最近邻词语的检索方法(retrieval)进行了分类。下面简单介绍其中的一些经典工作。

    [Mikolov et al., 2013b] 观察发现,不同语言的词向量在向量空间中有着相似的几何排列。如图3

    左图为英语,右图为西班牙语(利用 PCA 进行词向量的降维)。发现,不论是数字还是动物,英语和西班牙语词向量的分布非常相似。基于这一观察,提出了一种简单地线性映射的方法来完成源语言向量空间到目标语言向量空间的转换。该方法的目标在于学习一个从源语言到目标语言的线性映射矩阵(linear transformation matrix)

    首先从源语言中选择 n = 5000 个频率最高的词语以及其对应的

    翻译作为种子词语,用于学习线性映射。使用随机梯度下降来最小化均方误差(mean squared error, MSE)。学习好映射矩阵之后,将源语言映射到目标语言空间,根据 cosine similarity 来寻找翻译。

    [Xing et al., 2015] 发现上述方法有几处不一致。词向量学习的时候使用的是内积(inner product),但是在选择词语的时候却是根据 cosine similarity,学习映射矩阵时,使用的是均方误差(mean square error),这些导致了几处不匹配。因此首先将词向量的长度限制为单位长度。这样相当于所有的向量都会在高维空间落在一个超球面上,如图4。这样就使得两个向量的内积和 cosine similarity 是一致的。然后将目标函数从以均方误差为目标修改为以 cosine similarity 为目标:之前的方法对映射矩阵是没有限制的,这里将映射矩阵限制为正交矩阵(Orthogonal transform),使得其满其实际求解是使用奇异值分解(SVD)来完成,其中为源语言向量矩阵,为目标语言向量矩阵。实验证明,该方法的实际效果更好。[Xing et al., 2015, Ruder et al., 2017]。

    04 基于无监督的方法

    Conneau et al., 2017] 提出了一种完全无监督的词级别的翻译(对齐)方法,首先使用对抗训练将两种语义空间对齐,然后使用迭代的方式来一步步更新学习到的映射矩阵,并提出了一种 CSLS方法来检索最近的翻译词语。如图5

    由于没有对齐信号,所以有一个基本的前提条件是两种语言的词汇处于同一内容空间(碎碎念:FAIR 的无监督机器翻译),这样两种语言的向量空间几何排列才是相似的,才有可能通过映射完成两个空间的对齐,不然是完全没有任何对齐信号的。首先使用对抗训练的方式使得判别器无法区分映射之后的源语言向量和目标语言向量,相当于要求将源语言映射到目标语言语义空间下。判别器的学习目标为尽可能区分映射后的源语言与目标语言:


    在得到映射矩阵以后,有一个迭代调整的过程,根据学习到的映射,选择互为最近邻的词语作为词典来学习映射,可以迭代这个过程。作者还提出了一种新的相似性度量方式,因为在高维空间中存在一种现象叫做 Hubness,即向量空间中存在密集区域,其中的一些点会是很多点的最近邻。之前的方式采用 cosine similarity 来选择最近邻,作者设计了一种 Cross-Domain Similarity Local Scaling(CSLS) 的度量方式:

    ,为和其 K 个目标语言最近邻的平均余弦距离。 

    基于上述工作, [Lample et al., 2017] 在没有对齐语料的情况下,仅使用单语语料来完成无监督机器翻译。该方法可以很好地泛化到其他语言,并且为有监督的方法提供了性能下限。其 baseline模型如 [Johnson et al., 2017]。首先使用上述无监督方法得到的翻译词典来初始化翻译模型。接着使用降噪自编码器训练,跨领域训练和对抗训练得到最终模型,如图6

    降噪自编码器部分,首先从数据集中采样一条数据 x,然后给输入数据引入噪

    使用编码器对该噪音输入进行编 ,接着使用解码器进行解码得到输出。其损失函数为:

    其中为交叉熵损失。其中噪音模型有两种方式,一种是以一定的概率丢弃每个词语。第二种是打乱输入,但是在文中限制了新的位置距离原本的位置不能超过 k,如图7。

    第二部分是跨领域训练,这部分是得到翻译模型的关键。利用到了 back translation,首先从语言中采样一个句子,使用当前翻译模型翻译到语言下,然后给加噪声使作为训练对来训练模型,其损失函数为:



    对于选择模型的的超参,论文提出了代理准则(surrogate criterion),如公式1,即输入和重构的输入之间的 BLEU 分数。还有一些细节【decoder 如何判断当前生成的语种?在多语言翻译中,通常通过在解码端添加翻译方向的标志位来控制解码方向。但是在本文的假设中,只有非此即彼的两个语种,并且 encoder 对它们一视同仁的。因此,作者只是将两者的解码起始符 <s> 加以区分,各自维护一个。

    两个训练过程是如何共享同一套 Seq2Seq 框架的?作者所谓的“同一个 encoder 和 decoder”,其实是针对隐层部分而言的。每个语种有自己的embedding 层和 pre-softmax 层,在模型训练中进行 look-up 来获取各自的参数矩阵。此外,分成“源语言”和“目标语言“是为了便于描述,实际上两者并不区别。最终训练得到的模型,可以在这两种语言中做任意方向的翻译。(碎碎念:FAIR 的无监督机器翻译)】

    [Lample et al., 2018] 指出了 [Lample et al., 2017, Artetxe et al., 2017] 几点特点:使用无监督方法推理出来的词典来初始化系统,使用了基于 Seq2Seq 的降噪自编码器模型,使用 back translation来将无监督问题转换为有监督问题。同时使用了对抗训练来将不同语言编码到同一空间。本文总结了无监督机器翻译的三个核心点。第一点,初始化,初始化可以帮助模型具有一定的先验知识。第二点,语言模型,根据大规模的单语语料可以学习到好的语言模型。第三点,迭代的反向翻译,该方法可以将无监督转换为有监督,可以完成翻译任务的学习。如图9

    对于初始化,本文使用源语言和目标语言的单语语料来共同学习 BPE,学习完成以后用来初始化编码器和解码器的向量查找表。对于语言模型,使用降噪自编码器来学习语言模型。对于反向翻译,使用迭代的反向翻译来完成翻译模型的学习。该模型同时共享了编码器和解码器的参数,期望学习到共享的语义空间表示。 

    05 基于虚拟双语语料库的方法

    [Xiao and Guo, 2014] 利用 Wikitionary 作为两种语言之间的桥梁,构建了统一的双语词典。首先构建源语言词典,然后利用 Wikitionary 找到其所有的翻译。删除满足以下条件的翻译:一个源语言词语有多个目标语言翻译、一个目标语言词语有多个源语言翻译、源语言的目标语言翻译词语在目标语言数据集中没有出现。经过以上三步处理,可以得到一个一对一的双语词典。将源语言和目标语言建立统一的双语词表 V ,利用构建好的双语词典,在词表 V 中属于词典映射关系的两个词语将会被映射到相同的词向量空间。然后利用神经网络来学习词向量表示。其任务是一个二分类问题,输入是一个子句,通过替换正例中的词语来构建负例。最终会学习到统一双语词典的向量表示,以此作为双语空间的桥梁。其模型如图10。这种方法对齐词语有同一表示。 

    [Gouws and Søgaard, 2015] 构建了一种真实的虚拟双语语料库,混合了不同的语言。针对不同的任务可以定义不同的对应等价方法,例如根据翻译,可以定义英语 house 和法语 maison 是等价的,根据词性标注,可以定义英语 car 和法语 maison 都是名词是等价的。因此这里的对齐方式不一定是翻译,可以根据具体的任务来定义,然后利用这种对齐关系来构造双语伪语料。首先将源语言和目标语言数据混合打乱。对于统一语料库中一句话的每一个词语,如果存在于对齐关系中,以一定概率来替换为另一种语言的词语。通过该方法可以构建得到真实的双语语料库。例如根据翻译关系,原始句子 build the house 经过构建可以得到 build the maison,就是将 house 替换为了 maison。利用构建好的全部语料来使用 CBOW 算法学习词向量,由于替换以后的词语有相似的上下文,因此会得到相似的表示。对于那些没有对齐关系的词语,例如“我吃苹果”和“I eat apple”,吃和 eat没有对齐关系,但如果我和 I、苹果和 apple 有对齐关系,根据构造出来的语料“I 吃 apple”也可以完成吃和 eat 的隐式对齐。这种方法对齐词语有相似表示。 

    [Ammar et al., 2016] 提出了一种将上述方法扩展到多种语言上的方法 multiCluster。借助双语词典,将词语划分为多个集合,每个集合中是相同语义的词语。然后将所有语言的单语语料库拼接,对于其中的一句话,如果词语在集合中,那就替换为集合中其他语言的词语。得到新的多语语料库以后,使用 skip-gram 来训练得到词向量表示。

    [Duong et al., 2016] 提出的方法与上述方法类似,区别在于,只在使用 CBOW 算法学习词向量的时候替换目标词语。而非预先利用词典构造多语语料库。在学习的时候会同时预测源语言目标词语及其对应的替换后的目标词语作为联合训练目标。除此以外,之前的方法都没有处理一词多义的问题,例如 bank 可能有两种意思:river bank 或者 fnancial bank,对应在意大利语中的翻译就是 sponda 和 banca。因此作者利用上下文词汇表示结合中心词汇表示的方式来选择最合适的翻译词语。通常来说,在 CBOW 算法中,会有两个矩阵,一个 context 矩阵 V ,一个 word 矩阵 U。作者指出,使用这种方式训练的词向量, V 矩阵更倾向于单语表示, U 矩阵更倾向于双语表示。其过程如图11

    06 基于预训练的方法

    [Devlin et al., 2018] 提出了 Multilingual BERT,与单语 BERT 结构一样,使用共享的 Wordpiece 表示,使用了 104 中语言进行训练。训练时,无输入语言标记,也没有强制对齐的语料有相同的表示。[Pires et al., 2019] 分析了 Multilingual BERT 的多语言表征能力,得出了几点结论:
    1.Multilingual BERT 的多语言表征能力不仅仅依赖于共享的词表,对于没有重叠(overlap)词汇语言的 zero-shot 任务,也可以完成的很好;语言越相似,效果越好;

    2.对于语言顺序(主谓宾或者形容词名词)不同的语言,效果不是很好;Multilingual BERT 的表示同时包含了多种语言共有的表示,同时也包含了语言特定的表示,这一结论, [Wu and Dredze, 2019] 在语言分类任务中也指出,Multilingual BERT 由于需要完成语言模型任务,所以需要保持一定的语言特定的表示来在词表中选择特定语言词语。

    [Lample and Conneau, 2019] 提出了基于多种语言预训练的模型 XLMs,首先从单语语料库中采样一些句子,对于资源稀少的语言可以增加数量,对于资源丰富的语言可以减少数量,将所有语言使用统一 BPE 进行表示。使用三种语言模型目标来完成学习。前两个是基于单语语料库的,最后一个是基于双语对齐数据的。第一种是 Causal Language Modeling (CLM),根据之前的词语预测下一个词语。第二个是 Masked Language Modeling (MLM),和 BERT 类似,但是使用一个词语流,而非句子对。第三种是 Translation Language Modeling (TLM),可以随机 mask 掉其中一些两种语言中的一些词语,然后进行预测。其模型如图12

    07 多语言机器翻译

    [Johnson et al., 2017] 使用一个模型来完成多种语言的机器翻译任务。唯一的不同是输入的开始需要拼接一个特殊的指示符,代表目标语言。例如 How are you? -> ¿Cómo estás? 需要修改为<2es> How are you? -> ¿Cómo estás?,代表该句将被翻译为西班牙语。另一个核心点在于使用共享的 Wordpiece,利用 BPE 来完成。模型在训练的时候,一个 mini-batch 中混合多个语言的平行数据。该模型的优点在于:简单,只需要修改输入数据就可以;可以提升资源稀缺数据的翻译效果;支持直接的 zero-shot 翻译任务。

    [Escolano et al., 2019] 利用不同语言之间共有的词表来作为知识迁移的桥梁,提出了两种方法,progAdapt 和 progGrow。第一种方法 progAdapt 将一种语言对的翻译任务迁移到另一种翻译任务上,保留词表中共享的部分,添加新任务的词语,词表大小保持不变,并使用新任务的数据。第二种方法 progGrow 利用递增的方式来学习一个多语言的机器翻译模型,将新语言的词表添加到旧词表上,并使用新旧任务一起的数据。如图13

    [Pires et al., 2019] 指出 [Johnson et al., 2017, Escolano et al., 2019] 的问题在于当语言的词表有显著的不同时,例如中文,词表会变得很大。因此提出了一种方法,每一种语言有自己的特定的编码器和解码器,编码器和解码器之间不共享参数。对于一个翻译对 X-Y,会完成自编码任务(X-X, Y-Y)和翻译任务(X-Y, Y-X),同时会要求编码器得到的两种表示相近。新来一种语言以后 Z,假设目前有 Z-X 的平行语料,只需要添加 Z 语言的编码器,然后固定住 X 语言的解码器参数来进行训练,这个过程只更新 Z 编码器的参数。如图14。 

    [Kim et al., 2019] 也认为,训练一个共享的多语言机器翻译模型一方面需要语言之间相关,以此来构建一个共享的词表,另一方面当增加一种语言时,如果该语言的词汇不在现有此表中,词表需要更新,模型需要重新训练。因此在多语言机器翻译或者迁移学习的设定下,距离较远的语言词表不匹配(vocabulary mismatch)是一个急需解决的问题。因此提出了一种在向量空间完成隐式翻译的方法,本质上是使用了跨语言词向量。当需要添加一种新的语言 t 时,首先训练语言 t 的单语词向量,然后将已经训练好的机器翻译模型的词向量参数矩阵取出,在两者之间学习一个线性映射W,用于将新的语言 t 转换到模型的语义空间下,该方法不需要重新更新词表或者重新训练模型,由于在向量空间完成了隐式对齐,当新的语言句子输入以后,会首先通过 W 矩阵来把单语向量空间映射到模型的语义空间,然后接着训练。这种方法虽然确实没有显式的两个词表对齐、增加、替换的过程。但实际上在学习完映射矩阵 W 以后,将新语言的词向量经过映射替换到训练好的模型中,实际上已经隐式的完成了词表的替换,这个映射过后的向量参数矩阵也会随着训练来更新。除此以外,新的语言和原来的语言可能语序不同,因此在训练原机器翻译模型时,会在输入端通过随机插入、删除,交换来引入一些噪音。例如 Ich arbeite hier 通过交换以后变为 Ich hier arbeite。同时由于新语言往往是低资源语言,这里没有使用 back translation 来构建新的语料。而是原来语言数据和新语言数据词表重合的部分保留,其他替换为 unk 来构建伪语料。例如德语数据 Hallo,John!会变为巴斯克语数据 <unk>,John! 保留了共有部分 John。 

    [Vázquez et al., 2019] use a self-attention mechanism shared across languages (an attention bridge) to encode different languages into the same space. Encoders and decoders are not shared across languages; after an LSTM produces a language-specific representation, the shared attention bridge produces a language-independent representation, which is used to initialize the decoder's initial state.
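The bridge's role is to squeeze variable-length, language-specific encoder states into a fixed number of language-independent vectors. A minimal numpy sketch (the query count, dimensions, and dot-product scoring are illustrative assumptions; the paper's bridge is learned jointly with the model):

```python
import numpy as np

def attention_bridge(hidden_states, bridge_queries):
    """Shared attention bridge (sketch): k shared query vectors attend over a
    language-specific encoder's hidden states, returning k fixed-size vectors
    regardless of the input language or sentence length."""
    scores = bridge_queries @ hidden_states.T            # (k, seq_len)
    scores -= scores.max(axis=1, keepdims=True)          # stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ hidden_states                       # (k, d)

rng = np.random.default_rng(0)
queries = rng.normal(size=(4, 16))  # shared across ALL languages
bridge_a = attention_bridge(rng.normal(size=(7, 16)), queries)   # length 7
bridge_b = attention_bridge(rng.normal(size=(12, 16)), queries)  # length 12
```

Because the output shape is fixed, any language's decoder can be initialized from it, which is what makes the bridge a language-independent interface.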

    08 Related Papers

    [Liu et al., 2019] use shared-private word embeddings to model the relationship between source-language and target-language embeddings and to reduce the number of model parameters. The core idea is that part of each word embedding is language-independent and shared, while the rest is language-specific and private. Three sharing relations are proposed: words with similar meaning, words with identical surface form, and unrelated words. See Figure 15. fast-align is first used, with a threshold, to find semantically aligned words. Concretely, taking the source-language embedding matrix as an example, the matrix consists of three parts corresponding to the three sharing relations; each word belongs to exactly one relation, and words are ordered by the priority listed above. Each relation's embeddings consist of a shared part and a private part — in the similar-meaning part, for instance, one sub-part is shared between the source and target languages and the other is private to the source language. The whole construction is implemented by matrix concatenation.
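The matrix-concatenation construction can be sketched directly. This is a simplified illustration of one sharing relation only (the function name, dimensions, and random initialization are our assumptions): the shared columns are the same array for both languages, so they are stored and trained once, which is where the parameter savings come from.

```python
import numpy as np

def shared_private_embeddings(n_words, d_shared, d_private, seed=0):
    """Shared-private embeddings (sketch): the first d_shared columns are
    tied between source and target languages; the remaining d_private
    columns are language-specific. Tables are built by concatenation."""
    rng = np.random.default_rng(seed)
    shared = rng.normal(size=(n_words, d_shared))        # one copy, tied
    src_private = rng.normal(size=(n_words, d_private))  # source-only
    tgt_private = rng.normal(size=(n_words, d_private))  # target-only
    src_emb = np.concatenate([shared, src_private], axis=1)
    tgt_emb = np.concatenate([shared, tgt_private], axis=1)
    return src_emb, tgt_emb

src, tgt = shared_private_embeddings(n_words=10, d_shared=4, d_private=2)
```

Storing the shared block once means the two tables cost n_words * (d_shared + 2 * d_private) parameters instead of 2 * n_words * (d_shared + d_private).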

    [Kumar et al., 2019] use a high-resource language to aid question generation in a low-resource language; the task takes a sentence as input and outputs a question. They also build HiQuAD, a new Hindi question-generation dataset. Concretely: a denoising autoencoder (DAE) and back-translation are first used to pretrain the model; then, in the supervised stage, each language is trained on its own data. Parts of the encoder and decoder parameters are shared. The model is shown in Figure 16.

    [Duan et al., 2019, Shen et al., 2018] combine knowledge distillation with machine translation for cross-lingual sentence summarization. The core idea is to train a teacher model on an existing sentence-summarization dataset to provide supervision signals for the cross-lingual summarization model. The target-side input sentence additionally serves as an intermediate bridge, with attention weights from both directions guiding generation. The basic pipeline is shown in Figure 17.


    Waleed Ammar, George Mulcaire, Yulia Tsvetkov, Guillaume Lample, Chris Dyer, and Noah A. Smith. Massively multilingual word embeddings. arXiv preprint arXiv:1602.01925, 2016.
    Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. Unsupervised neural machine translation. arXiv preprint arXiv:1710.11041, 2017.
    Alexis Conneau, Guillaume Lample, Marc'Aurelio Ranzato, Ludovic Denoyer, and Hervé Jégou. Word translation without parallel data. arXiv preprint arXiv:1710.04087, 2017.
    Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
    Xiangyu Duan, Mingming Yin, Min Zhang, Boxing Chen, and Weihua Luo. Zero-shot cross-lingual abstractive sentence summarization through teaching generation and attention. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 3162–3172, Florence, Italy, July 2019. Association for Computational Linguistics. URL https://www.aclweb.org/
    Long Duong, Hiroshi Kanayama, Tengfei Ma, Steven Bird, and Trevor Cohn. Learning crosslingual word embeddings without bilingual corpora. arXiv preprint arXiv:1606.09403, 2016.
    Carlos Escolano, Marta R Costa-Jussà, and José AR Fonollosa. From bilingual to multilingual neural machine translation by incremental training. arXiv preprint arXiv:1907.00735, 2019.
    Stephan Gouws and Anders Søgaard. Simple task-specific bilingual word embeddings. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1386–1390, 2015.
    Melvin Johnson, Mike Schuster, Quoc V Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, et al. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339–351, 2017.
    Yunsu Kim, Yingbo Gao, and Hermann Ney. Effective cross-lingual transfer of neural machine translation models without shared vocabularies. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1246–1257, Florence, Italy, July 2019. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P19-1120.
    Vishwajeet Kumar, Nitish Joshi, Arijit Mukherjee, Ganesh Ramakrishnan, and Preethi Jyothi. Cross-lingual training for automatic question generation. arXiv preprint arXiv:1906.02525, 2019.
    Guillaume Lample and Alexis Conneau. Cross-lingual language model pretraining. arXiv preprint arXiv:1901.07291, 2019.
    Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc’Aurelio Ranzato. Unsupervised machine translation using monolingual corpora only. arXiv preprint arXiv:1711.00043, 2017.
    Guillaume Lample, Myle Ott, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. Phrase-based & neural unsupervised machine translation. arXiv preprint arXiv:1804.07755, 2018.
    Xuebo Liu, Derek F. Wong, Yang Liu, Lidia S. Chao, Tong Xiao, and Jingbo Zhu. Shared-private bilingual word embeddings for neural machine translation. In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 3613–3622, Florence, Italy, July 2019. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/P19-1352.
    Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013a.
    Tomas Mikolov, Quoc V Le, and Ilya Sutskever. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168, 2013b.
    Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013c.
    Telmo Pires, Eva Schlinger, and Dan Garrette. How multilingual is multilingual BERT? In Proceedings of the 57th Conference of the Association for Computational Linguistics, pages 4996–5001, Florence, Italy, July 2019. Association for Computational Linguistics.
    Sebastian Ruder, Ivan Vulić, and Anders Søgaard. A survey of cross-lingual word embedding models. arXiv preprint arXiv:1706.04902, 2017.
    Shi-qi Shen, Yun Chen, Cheng Yang, Zhi-yuan Liu, and Mao-song Sun. Zero-shot cross-lingual neural headline generation. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 26(12):2319–2327, 2018.
    Raúl Vázquez, Alessandro Raganato, Jörg Tiedemann, and Mathias Creutz. Multilingual NMT with a language-independent attention bridge. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 33–39, Florence, Italy, August 2019. Association for Computational Linguistics. URL https://www.aclweb.org/anthology/W19-4305.
    Shijie Wu and Mark Dredze. Beto, bentz, becas: The surprising cross-lingual effectiveness of BERT. arXiv preprint arXiv:1904.09077, 2019.
    Min Xiao and Yuhong Guo. Distributed word representation learning for cross-lingual dependency parsing. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 119–129, 2014.
    Chao Xing, Dong Wang, Chao Liu, and Yiye Lin. Normalized word embedding and orthogonal transform for bilingual word translation. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1006–1011, 2015.

