  • Markov models


    Foreword

    It has been a while since the last post on the hit-road blog, so this time here is a more theoretical one.

    Getting started

    Understanding Markov models

    A Markov model can be seen as a graph whose nodes are states and whose edge weights are probabilities. It is well suited to fairly discrete systems, and its defining feature is that the next state depends only on the current state, not on any earlier history. Thanks to these properties it is widely used in applications including, but not limited to, entertainment[1] and generating low-quality spam messages and emails.

    [State-transition diagram over the states sunny, rainy and cloudy; the transition probabilities are the ones listed in the walkthrough below.]
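
    For readers who prefer code, here is a minimal sketch (not part of the original post) of the same model written as a nested dict; the numbers are the transition probabilities used in the walkthrough below.

    # The weather Markov model as a nested dict: outer key = current state,
    # inner key = next state, value = transition probability.
    weather_model = {
        "sunny":  {"sunny": 0.50, "rainy": 0.15, "cloudy": 0.35},
        "cloudy": {"cloudy": 0.10, "rainy": 0.45, "sunny": 0.45},
        "rainy":  {"rainy": 0.30, "sunny": 0.30, "cloudy": 0.40},
    }
    # Each row sums to 1, as a distribution over next states must.
    assert all(abs(sum(row.values()) - 1.0) < 1e-9 for row in weather_model.values())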

    Generating data from a Markov model

    Generating data from a Markov model can be understood informally as walking the graph step by step, a bit like a DFS except that at every step one outgoing edge is chosen at random according to its probability. Generating data from the Markov model below proceeds as follows:

    [The same sunny/rainy/cloudy state-transition diagram as above; the original notes that it is a little hard to read.]

    Start from a sunny day.
    There is a 50% chance the next day is sunny again, a 15% chance it is rainy, and a 35% chance it is cloudy.
    Day 2 is sunny again.
    There is a 50% chance of sunny, a 15% chance of rainy, and a 35% chance of cloudy.
    Day 3 is cloudy.
    There is a 10% chance it stays cloudy, a 45% chance of rainy, and a 45% chance of sunny.
    Day 4 is rainy.
    There is a 30% chance it stays rainy, a 30% chance of sunny, and a 40% chance of cloudy.
    Day 5 is sunny.
    There is a 50% chance of sunny, a 15% chance of rainy, and a 35% chance of cloudy.
    Day 6 is rainy.
    There is a 30% chance it stays rainy, a 30% chance of sunny, and a 40% chance of cloudy.
    Day 7 is cloudy.
    …
    See the pseudocode:

    Read the data and store it in data
    Read the desired output length and store it in len
    Read the initial state and store it in s
    Create an empty list res
    Define a function Marcov with parameters len, S:
    	if len is 1:
    		return a value chosen from data[S] according to its probabilities
    	else:
    		choose a value from data[S] according to its probabilities and store it in s
    		return s concatenated with the result of calling Marcov with len - 1 and s
    Print the result of calling Marcov with len and s
    

    See the Python code:

    import random
    # the model: {state: {next_state: count, ...}, ...}
    data = eval(input())
    # length of the sequence to generate
    len = int(input())
    # initial state
    s = input()
    # pick a key of dataset at random, weighted by its count
    def rand(dataset):
    	ls = []
    	for v,n in dataset.items():
    		ls += [v] * n
    	return random.choice(ls)
    # pick any key of dt uniformly at random
    def rchoice(dt):
    	keys = list(dt.keys())
    	return random.choice(keys)
    def Marcov(len,S,data):
    	if(S not in data):
    		# unknown state: continue from a randomly chosen known state
    		return S + Marcov(len - 1,rchoice(data),data)
    	elif(len == 1):
    		return rand(data[S])
    	else:
    		s = rand(data[S])
    		return s + Marcov(len - 1,s,data)
    print(Marcov(len,s,data))
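
    For reference, a hypothetical run of the script above (the model dict and the other input values are invented for illustration and are not from the original post):

    # example stdin, one line per input() call:
    #   {'a': {'b': 2, 'c': 1}, 'b': {'a': 1}, 'c': {'a': 1}}
    #   10
    #   a
    # The script then prints 10 characters generated after the initial state 'a',
    # for example: bacabababa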
    

    Building a Markov model from data

    No step-by-step demonstration for this one.
    See the pseudocode:

    Read the data and store it in data
    Create an empty list dt
    Iterate over data starting from its 2nd element, with the current index in i:
    	create an empty list it
    	append the (i - 1)-th element of data to it
    	append the i-th element of data to it
    	append it to dt
    Create an empty map m
    Iterate over dt, unpacking each pair into i, j:
    	if i does not appear in m yet:
    		create m[i] and set m[i][j] to 1
    	else if j does not appear in m[i] yet:
    		set m[i][j] to 1
    	else:
    		increase m[i][j] by 1
    Print m
    

    See the Python code:

    d = input()
    data = d.split(',')
    # count, for every element, how often each other element follows it
    def gen(data):
    	dt = []
    	for i in range(1,len(data)):
    		# collect every adjacent pair (previous element, current element)
    		it = [data[i - 1],data[i]]
    		dt.append(it)
    	m = {}
    	for i,j in dt:
    		if(i not in m):
    			m[i] = {}
    			m[i][j] = 1
    		elif(j not in m[i]):
    			m[i][j] = 1
    		else:
    			m[i][j] += 1
    	return m
    print(gen(data))
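
    As a quick sanity check (the input line is invented for illustration), feeding the script the line a,b,a,c yields the pairs (a,b), (b,a), (a,c), and it should print the following (on Python 3.7+, where dicts keep insertion order):

    # stdin:  a,b,a,c
    # output: {'a': {'b': 1, 'c': 1}, 'b': {'a': 1}}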
    

    A small example

    A natural-language generator.
    It is actually super "simple". I am so excited I could fly!

    import random
    listlen = len
    def rand(dataset):
    	ls = []
    	for v,n in dataset.items():
    		ls += [v] * n
    	return random.choice(ls)
    def rchoice(dt):
    	keys = list(dt.keys())
    	return random.choice(keys)
    def Marcov(len,S,data):
    	if(S not in data):
    		return S + Marcov(len - 1,rchoice(data),data)
    	elif(len == 1):
    		return rand(data[S])
    	else:
    		s = rand(data[S])
    		return s + Marcov(len - 1,s,data)
    def gen(data):
    	dt = []
    	for i in range(1,listlen(data)):
    		it = [data[i - 1],data[i]]
    		dt.append(it)
    	m = {}
    	for i,j in dt:
    		if(i not in m):
    			m[i] = {}
    			m[i][j] = 1
    		elif(j not in m[i]):
    			m[i][j] = 1
    		else:
    			m[i][j] += 1
    	return m
    def dict_comb(d1,d2):
    	res = dict(d1)
    	for k,v in d2.items():
    		if(k in res):
    			res[k] += d2[k]
    		else:
    			res[k] = d2[k]
    	return res
    fnamer = input()
    fnamew = input()
    splitter = input()
    len = int(input())
    s = input()
    print('generating marcov model')
    marcov = {}
    with open(fnamer,'r') as f:
    	line = f.readline()[:-1]
    	if(splitter):
    		line = line.split(splitter)
    	else:
    		line = list(line)
    	m = gen(line)
    	marcov = dict_comb(marcov,m)
    print('marcov model all set')
    print('start generating unreadable rubbish text')
    with open(fnamew,'w') as f:
    	marc = Marcov(len,s,marcov)
    	f.write(splitter.join([str(i) for i in marc]))
    print('unreadable rubbish text generated')
    

    The inputs are: the path of the corpus file, the path of the output file, the separator used in the corpus file, the length to generate, and the initial character.
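
    A hypothetical interactive run might look like this (all file names and values below are made up for illustration):

    corpus.txt   <- corpus file to read
    out.txt      <- file to write the generated text to
                 <- third input left empty: no separator, so the corpus line is treated as a character sequence
    100          <- number of characters to generate
    天           <- initial character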

    Walking through the example code

    import random
    listlen = len
    def rand(dataset):
    	ls = []
    	for v,n in dataset.items():
    		ls += [v] * n
    	return random.choice(ls)
    def rchoice(dt):
    	keys = list(dt.keys())
    	return random.choice(keys)
    

    Import the library, keep a copy of the built-in len function (it will be shadowed by a variable later), and define the weighted random-choice function and the function that picks a key from a dict uniformly at random.

    def rand(dataset):
    	ls = []
    	for v,n in dataset.items():
    		ls += [v] * n
    	return random.choice(ls)
    

    This function randomly picks one of the "a follows b" cases found in the corpus, weighted by how often each case occurred.
    It expands the dict of cases and counts into a list (for example {1:2, 2:3} expands to [1,1,2,2,2])
    and then uses random.choice to pick one element.

    def Marcov(len,S,data):
    	if(S not in data):
    		return S + Marcov(len - 1,rchoice(data),data)
    	elif(len == 1):
    		return rand(data[S])
    	else:
    		s = rand(data[S])
    		return s + Marcov(len - 1,s,data)
    def gen(data):
    	dt = []
    	for i in range(1,listlen(data)):
    		it = [data[i - 1],data[i]]
    		dt.append(it)
    	m = {}
    	for i,j in dt:
    		if(i not in m):
    			m[i] = {}
    			m[i][j] = 1
    		elif(j not in m[i]):
    			m[i][j] = 1
    		else:
    			m[i][j] += 1
    	return m
    

    For this part, see the sections "Generating data from a Markov model" and "Building a Markov model from data" above, which explain it with pseudocode.

    def dict_comb(d1,d2):
    	res = dict(d1)
    	for k,v in d2.items():
    		if(k in res):
    			res[k] += d2[k]
    		else:
    			res[k] = d2[k]
    	return res
    

    Merge two dicts, adding up the values of keys that appear in both. (With nested dicts as values the += branch would fail, but in this script marcov starts out empty, so that branch is never taken.)

    fnamer = input()
    fnamew = input()
    splitter = input()
    len = int(input())
    s = input()
    print('generating marcov model')
    

    Read the inputs and print a progress message.

    marcov = {}
    with open(fnamer,'r') as f:
    	line = f.readline()[:-1]
    	if(splitter):
    		line = line.split(splitter)
    	else:
    		line = list(line)
    	m = gen(line)
    	marcov = dict_comb(marcov,m)
    print('marcov model all set')
    print('start generating unreadable rubbish text')
    

    Read a line of the corpus, build a Markov model from it, merge it into the overall model marcov, and print progress messages. (As written, only the first line of the corpus file is read.)

    with open(fnamew,'w') as f:
    	marc = Marcov(len,s,marcov)
    	f.write(splitter.join([str(i) for i in marc]))
    print('unreadable rubbish text generated')
    

    Generate the nonsense text, write it to the output file, and print a progress message.

    The example on GitHub

    GitHub link

    Author

    hit-road

    Bye, class dismissed!



    1. Here this means the gibberish produced by natural-language generation. ↩︎

  • Markov models and hidden Markov models


    A Markov model is a probabilistic model that predicts the future state of a system by finding regularities in how its states evolve; it assumes that the current state depends only on the previous n states.

    The one we study most often is the first-order Markov model, which has two main properties:

    1. The current state depends only on the previous state.
    2. The observation at any moment depends only on the current state.

    In my view the main things to focus on are the current state and its transition matrix (the current state might be, say, "the probability that today is rainy is 0.5", while the transition matrix gives entries such as "the probability that tomorrow is sunny given that today is rainy is 0.8").

    If you keep multiplying the state distribution by the transition matrix, after enough steps it converges to a steady state.
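
    A minimal sketch of this convergence using numpy (the transition matrix below is invented purely for illustration):

    import numpy as np

    # rows = current state (rainy, sunny); columns = next state (rainy, sunny)
    # the numbers are illustrative, not taken from the article
    P = np.array([[0.2, 0.8],
                  [0.5, 0.5]])

    state = np.array([0.5, 0.5])   # today's distribution over (rainy, sunny)
    for _ in range(50):
        state = state @ P          # one step of the chain
    print(state)                   # converges to the stationary distribution, about [0.385, 0.615]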

    A Hidden Markov Model (HMM) is a statistical model that describes a Markov process containing hidden, unknown parameters. The difficulty lies in determining the hidden parameters of the process from the observable ones.

    In other words, in an HMM some factors of the model are not directly visible and have to be inferred from other, observable quantities. For example, I might have to decide whether each day was sunny or rainy by observing the morning dew, because the sunny/rainy state itself cannot be observed directly.

    [Image 隐式马尔可夫模型.jpg: a much-shared hand-written worked example of a hidden Markov model, taken from a Zhihu answer.]

  • Markov models and hidden Markov models

    http://www.cnblogs.com/baiboy/p/hmm2.html


    Reposted from: 白宁超


    Abstract: I first came across the definition of the Markov model in Wu Jun's book 数学之美 (The Beauty of Mathematics). Only when doing natural language processing did I actually use the hidden Markov model (HMM) and come to appreciate how useful it is. The material I consulted at first came mostly from blogs, and many articles introducing Markov models are reposts of one another that never explain the underlying problems thoroughly, so I began to study this topic systematically myself and to publish the results as a series of shared articles. Markov models are powerful for sequence-labelling tasks such as part-of-speech tagging, speech recognition, sentence segmentation, grapheme-to-phoneme conversion, partial parsing, chunking, named-entity recognition and information extraction, and they are also widely applied in the natural sciences, engineering, biotechnology, public utilities, channel coding and many other fields. The article is organised as follows: chapter 1 gives a short biography of Markov; chapter 2 introduces Markov chains (sequence labellers, Markov processes, stochastic processes, the description of a Markov chain and an application example); chapter 3 covers Markov chains (visible Markov models) and hidden Markov models, a worked HMM case study, the three fundamental HMM problems (likelihood, decoding and parameter learning) and practical uses of HMMs; chapter 4 introduces the forward algorithm for the likelihood problem; chapter 5 introduces the Viterbi algorithm for the decoding problem; chapter 6 sketches the forward-backward algorithm for the learning problem; finally some supplementary notes on HMMs are given. This article is original; when reposting, please credit the source (马尔可夫模型与隐马尔可夫模型).

    Contents


    [NLP: Markov models (1)]: First steps with Markov and Markov chains

    [NLP: Markov models (2)]: Markov models and hidden Markov models

     

    A short biography of Markov


    Andrey Markov was a Russian mathematician, a doctor of physics and mathematics, a member of the St. Petersburg Academy of Sciences and a leading figure of the St. Petersburg school of mathematics, known for his work in number theory and probability theory; his main works include The Calculus of Probabilities. He was awarded a gold medal in 1878 and the title of merited professor in 1905. In number theory he studied continued fractions and the theory of indefinite quadratic forms and solved many difficult problems. In probability theory he developed the method of moments and extended the range of application of the law of large numbers and the central limit theorem. His most important work, carried out between 1906 and 1912, was to propose and study a general scheme that lets natural processes be studied with the tools of mathematical analysis: the Markov chain. At the same time he initiated the study of a class of memoryless stochastic processes, the Markov processes. Through repeated observation and experiment Markov found that in a system's sequence of state transitions, the state obtained at the n-th transition often depends on the outcome of the previous, (n-1)-th, trial. After further study he concluded that when such a system moves from one state to another there is a transition probability, and that this probability can be computed from the immediately preceding state alone, independently of the system's original state and of the earlier history of the process. Markov chain theory and methods are now widely applied in the natural sciences, engineering and public utilities.

    1  Introduction


    Markov chains are useful when we need to compute the probability of a time series that can be observed directly in the real world (such as the weather events above). But many of the events we deal with cannot be observed directly and are hidden behind the observations, for example part-of-speech tagging (from the concrete words, i.e. the word sequence we actually see, we must label the correct part of speech, and the tags themselves are hidden) or speech recognition (from the acoustic events we must infer the words hidden behind them). A Markov chain cannot solve such problems directly, and this is where the hidden Markov model comes into play.

    A hidden Markov model involves both observed events (for example, the word sequence we see in the input when tagging) and hidden events (the tags assigned to the words); in the probabilistic model, the hidden events are regarded as the driving factors.

    2 Hidden Markov models


    2.1 Jason Eisner's description of the hidden Markov model

    Applying the hidden Markov model to a real-world problem:

    Suppose that a thousand years after 2016, in 3016, we want to know what the weather was like in Chengdu, China, during some period of 2016 (simplified to hot or cold, H or C), and it so happens that no record of hot/cold weather in Chengdu in 2016 survives. The only clue is a passage in Xiao Ming's diary from that summer recording how many ice creams he ate each day (the observed states). With a hidden Markov model we can use this ice-cream record to infer the Chengdu weather (for example hot = 0.8 | cold = 0.2, i.e. the hidden states). How exactly is this done?

    (1) A hidden Markov chain with an initial state and a final state is described as follows: [figure not reproduced]

    (2) A hidden Markov chain without an initial state or a final state is described as follows: [figure not reproduced]

    (3) The two assumptions of a first-order HMM: the hidden state at each step depends only on the previous hidden state, and the observation at each step depends only on the current hidden state.

     

    2.2 Rabiner's three fundamental problems for hidden Markov models

    Problem 1 (likelihood): given an HMM λ = (A, B) and an observation sequence O, determine the likelihood P(O|λ) of the observation sequence.

    Problem 2 (decoding): given an observation sequence O and an HMM λ = (A, B), find the best hidden state sequence Q.

    Problem 3 (learning): given an observation sequence O and the set of states of an HMM, automatically learn the HMM parameters A and B.

    The ice-cream scenario above is a typical instance of problem 1, the likelihood problem, solved by the forward algorithm. Part-of-speech tagging is a typical instance of problem 2, the decoding problem, solved by the Viterbi algorithm. Problem 3 is a machine-learning problem, solved by the forward-backward algorithm. Each of the three problems is introduced in a later chapter.

    3 A worked example of a hidden Markov model

    The figure below (not reproduced here) describes the hidden Markov model relating the number of ice creams Xiao Ming eats (the observed values) and the hot/cold weather (the hidden values H or C).

     

    The emission probabilities from the hot state (hot1) are:

    P(1|hot) = 0.2    the probability of eating 1 ice cream on a hot summer day

    P(2|hot) = 0.4    the probability of eating 2 ice creams on a hot summer day

    P(3|hot) = 0.4    the probability of eating 3 ice creams on a hot summer day

    The emission probabilities from the cold state (cold2) are:

    P(1|cold) = 0.5    the probability of eating 1 ice cream on a cold summer day

    P(2|cold) = 0.4    the probability of eating 2 ice creams on a cold summer day

    P(3|cold) = 0.1    the probability of eating 3 ice creams on a cold summer day

    The observed values (numbers of ice creams eaten):

    O={1,2,3}

    The hidden values (the weather states):

    S={H,C}

    Fully connected (ergodic) HMM: there is a non-zero transition probability between any two states.

    Non-fully-connected HMM: the states form a one-way, left-to-right sequence, as when modelling a speech process.

    The problem of inferring the weather states from Xiao Ming's observed ice-cream counts is left for the next section, where it is solved with the hidden Markov model. A short, simple example is used here so that the reader grasps the principle; later articles will apply it in depth to real problems.
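
    As a preview of how problem 1 is solved, here is a minimal sketch of the forward algorithm applied to this example. The emission probabilities are the ones listed above; the transition and initial probabilities belong to the figure that is not reproduced here, so the values used below are assumptions chosen only for illustration.

    # Forward algorithm: compute P(O | λ) for the ice-cream HMM.
    states = ["H", "C"]
    emit = {"H": {1: 0.2, 2: 0.4, 3: 0.4},      # from the text above
            "C": {1: 0.5, 2: 0.4, 3: 0.1}}
    trans = {"H": {"H": 0.6, "C": 0.4},         # assumed values
             "C": {"H": 0.5, "C": 0.5}}         # assumed values
    start = {"H": 0.8, "C": 0.2}                # assumed values

    def forward(obs):
        # alpha[s] = P(observations so far, current hidden state = s)
        alpha = {s: start[s] * emit[s][obs[0]] for s in states}
        for o in obs[1:]:
            alpha = {s: sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                     for s in states}
        return sum(alpha.values())              # P(O | λ)

    print(forward([3, 1, 3]))   # likelihood of observing 3, 1, 3 ice creams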

    4  References


    【1】Foundations of Statistical Natural Language Processing, Christopher Manning et al.; Chinese translation by 宛春法 et al.

    【2】自然语言处理简明教程 (A Concise Tutorial on Natural Language Processing), 冯志伟

    【3】数学之美 (The Beauty of Mathematics), 吴军

    【4】An article analysing the Viterbi algorithm, 王亚强

    http://www.cnblogs.com/baiboy
  • Markov models and the hidden Markov model

    An introduction to part-of-speech tagging and the Hidden Markov Model

    by Sachin Malhotra and Divya Godayal

    Let’s go back into the times when we had no language to communicate. The only way we had was sign language. That’s how we usually communicate with our dog at home, right? When we tell him, “We love you, Jimmy,” he responds by wagging his tail. This doesn’t mean he knows what we are actually saying. Instead, his response is simply because he understands the language of emotions and gestures more than words.

    We as humans have developed an understanding of a lot of nuances of the natural language more than any animal on this planet. That is why when we say “I LOVE you, honey” vs when we say “Lets make LOVE, honey” we mean different things. Since we understand the basic difference between the two phrases, our responses are very different. It is these very intricacies in natural language understanding that we want to teach to a machine.

    What this could mean is when your future robot dog hears “I love you, Jimmy”, he would know LOVE is a Verb. He would also realize that it’s an emotion that we are expressing to which he would respond in a certain way. And maybe when you are telling your partner “Lets make LOVE”, the dog would just stay out of your business.

    This is just an example of how teaching a robot to communicate in a language known to us can make things easier.

    The primary use case being highlighted in this example is how important it is to understand the difference in the usage of the word LOVE, in different contexts.

    Part-of-Speech Tagging

    From a very small age, we have been made accustomed to identifying part of speech tags. For example, reading a sentence and being able to identify what words act as nouns, pronouns, verbs, adverbs, and so on. All these are referred to as the part of speech tags.

    Let’s look at the Wikipedia definition for them:

    In corpus linguistics, part-of-speech tagging (POS tagging or PoS tagging or POST), also called grammatical tagging or word-category disambiguation, is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context — i.e., its relationship with adjacent and related words in a phrase, sentence, or paragraph. A simplified form of this is commonly taught to school-age children, in the identification of words as nouns, verbs, adjectives, adverbs, etc.

    Identifying part of speech tags is much more complicated than simply mapping words to their part of speech tags. This is because POS tagging is not something that is generic. It is quite possible for a single word to have a different part of speech tag in different sentences based on different contexts. That is why it is impossible to have a generic mapping for POS tags.

    As you can see, it is not possible to manually find out different part-of-speech tags for a given corpus. New types of contexts and new words keep coming up in dictionaries in various languages, and manual POS tagging is not scalable in itself. That is why we rely on machine-based POS tagging.

    Before proceeding further and looking at how part-of-speech tagging is done, we should look at why POS tagging is necessary and where it can be used.

    Why Part-of-Speech tagging?

    Part-of-Speech tagging in itself may not be the solution to any particular NLP problem. It is however something that is done as a pre-requisite to simplify a lot of different problems. Let us consider a few applications of POS tagging in various NLP tasks.

    Text to Speech Conversion

    Let us look at the following sentence:

    They refuse to permit us to obtain the refuse permit.

    The word refuse is being used twice in this sentence and has two different meanings here. refUSE (/rəˈfyo͞oz/)is a verb meaning “deny,” while REFuse(/ˈrefˌyo͞os/) is a noun meaning “trash” (that is, they are not homophones). Thus, we need to know which word is being used in order to pronounce the text correctly. (For this reason, text-to-speech systems usually perform POS-tagging.)

    Have a look at the part-of-speech tags generated for this very sentence by the NLTK package.

    >>> text = word_tokenize("They refuse to permit us to obtain the refuse permit")
    >>> nltk.pos_tag(text)
    [('They', 'PRP'), ('refuse', 'VBP'), ('to', 'TO'), ('permit', 'VB'), ('us', 'PRP'),
    ('to', 'TO'), ('obtain', 'VB'), ('the', 'DT'), ('refuse', 'NN'), ('permit', 'NN')]

    As we can see from the results provided by the NLTK package, POS tags for both refUSE and REFuse are different. Using these two different POS tags for our text to speech converter can come up with a different set of sounds.

    Similarly, let us look at yet another classical application of POS tagging: word sense disambiguation.

    Word Sense Disambiguation

    Let’s talk about this kid called Peter. Since his mother is a neurological scientist, she didn’t send him to school. His life was devoid of science and math.

    One day she conducted an experiment, and made him sit for a math class. Even though he didn’t have any prior subject knowledge, Peter thought he aced his first test. His mother then took an example from the test and published it as below. (Kudos to her!)

    Words often occur in different senses as different parts of speech. For example:

    • She saw a bear.

    • Your efforts will bear fruit.

    The word bear in the above sentences has completely different senses, but more importantly one is a noun and other is a verb. Rudimentary word sense disambiguation is possible if you can tag words with their POS tags.

    Word-sense disambiguation (WSD) is identifying which sense of a word (that is, which meaning) is used in a sentence, when the word has multiple meanings.

    Try to think of the multiple meanings for this sentence:

    Time flies like an arrow

    Here are the various interpretations of the given sentence. The meaning and hence the part-of-speech might vary for each word.

    As we can clearly see, there are multiple interpretations possible for the given sentence. Different interpretations yield different kinds of part of speech tags for the words. This information, if available to us, can help us find out the exact version / interpretation of the sentence and then we can proceed from there.

    The above example shows us that a single sentence can have three different POS tag sequences assigned to it that are equally likely. That means that it is very important to know what specific meaning is being conveyed by the given sentence whenever it’s appearing. This is word sense disambiguation, as we are trying to find out THE sequence.

    These are just two of the numerous applications where we would require POS tagging. There are other applications as well which require POS tagging, like Question Answering, Speech Recognition, Machine Translation, and so on.

    Now that we have a basic knowledge of different applications of POS tagging, let us look at how we can go about actually assigning POS tags to all the words in our corpus.

    Types of POS taggers

    POS-tagging algorithms fall into two distinctive groups:

    • Rule-Based POS Taggers

    • Stochastic POS Taggers

    E. Brill’s tagger, one of the first and most widely used English POS-taggers, employs rule-based algorithms. Let us first look at a very brief overview of what rule-based tagging is all about.

    Rule-Based Tagging

    Automatic part of speech tagging is an area of natural language processing where statistical techniques have been more successful than rule-based methods.

    Typical rule-based approaches use contextual information to assign tags to unknown or ambiguous words. Disambiguation is done by analyzing the linguistic features of the word, its preceding word, its following word, and other aspects.

    For example, if the preceding word is an article, then the word in question must be a noun. This information is coded in the form of rules.

    Example of a rule:

    If an ambiguous/unknown word X is preceded by a determiner and followed by a noun, tag it as an adjective.
    Defining a set of rules manually is an extremely cumbersome process and is not scalable at all. So we need some automatic way of doing this.

    The Brill’s tagger is a rule-based tagger that goes through the training data and finds out the set of tagging rules that best define the data and minimize POS tagging errors. The most important point to note here about Brill’s tagger is that the rules are not hand-crafted, but are instead found out using the corpus provided. The only feature engineering required is a set of rule templates that the model can use to come up with new features.

    Let’s move ahead now and look at Stochastic POS tagging.

    Stochastic Part-of-Speech Tagging

    The term ‘stochastic tagger’ can refer to any number of different approaches to the problem of POS tagging. Any model which somehow incorporates frequency or probability may be properly labelled stochastic.

    The simplest stochastic taggers disambiguate words based solely on the probability that a word occurs with a particular tag. In other words, the tag encountered most frequently in the training set with the word is the one assigned to an ambiguous instance of that word. The problem with this approach is that while it may yield a valid tag for a given word, it can also yield inadmissible sequences of tags.
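
    A minimal sketch of such a frequency-based tagger (the toy tagged corpus below is invented for illustration):

    from collections import Counter, defaultdict

    # toy tagged corpus: (word, tag) pairs, invented for illustration
    tagged = [("the", "DT"), ("refuse", "NN"), ("permit", "NN"),
              ("they", "PRP"), ("refuse", "VBP"), ("to", "TO"), ("permit", "VB"),
              ("the", "DT"), ("refuse", "NN")]

    counts = defaultdict(Counter)
    for word, tag in tagged:
        counts[word][tag] += 1

    def most_frequent_tag(word):
        # assign the tag seen most often with this word in training
        return counts[word].most_common(1)[0][0]

    print(most_frequent_tag("refuse"))   # NN (seen twice as NN, once as VBP)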

    An alternative to the word frequency approach is to calculate the probability of a given sequence of tags occurring. This is sometimes referred to as the n-gram approach, referring to the fact that the best tag for a given word is determined by the probability that it occurs with the n previous tags. This approach makes much more sense than the one defined before, because it considers the tags for individual words based on context.

    The next level of complexity that can be introduced into a stochastic tagger combines the previous two approaches, using both tag sequence probabilities and word frequency measurements. This is known as the Hidden Markov Model (HMM).

    Before proceeding with what is a Hidden Markov Model, let us first look at what is a Markov Model. That will better help understand the meaning of the term Hidden in HMMs.

    Markov Model

    Say that there are only three kinds of weather conditions, namely

    • Rainy

    • Sunny

    • Cloudy

    Now, since our young friend we introduced above, Peter, is a small kid, he loves to play outside. He loves it when the weather is sunny, because all his friends come out to play in the sunny conditions.

    He hates the rainy weather for obvious reasons.

    Every day, his mother observes the weather in the morning (that is when he usually goes out to play) and, like always, Peter comes up to her right after getting up and asks her to tell him what the weather is going to be like. Since she is a responsible parent, she wants to answer that question as accurately as possible. But the only thing she has is a set of observations taken over multiple days as to how the weather has been.

    How does she make a prediction of the weather for today based on what the weather has been for the past N days?

    Say you have a sequence. Something like this:

    Sunny, Rainy, Cloudy, Cloudy, Sunny, Sunny, Sunny, Rainy

    So, the weather for any given day can be in any of the three states.

    Let’s say we decide to use a Markov Chain Model to solve this problem. Now using the data that we have, we can construct the following state diagram with the labelled probabilities.

    In order to compute the probability of today’s weather given N previous observations, we will use the Markovian Property.

    Markov Chain is essentially the simplest known Markov model, that is it obeys the Markov property.

    The Markov property suggests that the distribution for a random variable in the future depends solely on its distribution in the current state, and none of the previous states have any impact on the future states.

    For a much more detailed explanation of the working of Markov chains, refer to this link.

    Also, have a look at the following example just to see how probability of the current state can be computed using the formula above, taking into account the Markovian Property.

    Apply the Markov property in the following example.

    We can clearly see that as per the Markov property, the probability of tomorrow's weather being Sunny depends solely on today's weather and not on yesterday's.
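
    A minimal sketch (not from the article) of what this property buys us: the probability of a whole weather sequence can be scored using only one-step transition probabilities. The numbers below are invented, since the article's state diagram is not reproduced here.

    # P(sequence) under a Markov chain: chain the one-step transition probabilities.
    trans = {"Sunny":  {"Sunny": 0.6, "Rainy": 0.1, "Cloudy": 0.3},   # invented numbers
             "Rainy":  {"Sunny": 0.3, "Rainy": 0.4, "Cloudy": 0.3},
             "Cloudy": {"Sunny": 0.4, "Rainy": 0.3, "Cloudy": 0.3}}

    def sequence_probability(seq, start_prob):
        p = start_prob[seq[0]]
        for prev, cur in zip(seq, seq[1:]):
            p *= trans[prev][cur]      # only the previous day matters (Markov property)
        return p

    print(sequence_probability(["Sunny", "Rainy", "Cloudy", "Cloudy"],
                               {"Sunny": 1/3, "Rainy": 1/3, "Cloudy": 1/3}))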

    Let us now proceed and see what is hidden in the Hidden Markov Models.

    Hidden Markov Model

    It’s the small kid Peter again, and this time he’s gonna pester his new caretaker — which is you. (Ooopsy!!)

    As a caretaker, one of the most important tasks for you is to tuck Peter into bed and make sure he is sound asleep. Once you’ve tucked him in, you want to make sure he’s actually asleep and not up to some mischief.

    You cannot, however, enter the room again, as that would surely wake Peter up. So all you have to decide are the noises that might come from the room. Either the room is quiet or there is noise coming from the room. These are your states.

    Peter’s mother, before leaving you to this nightmare, said:

    May the sound be with you :)
    His mother has given you the following state diagram. The diagram has some states, observations, and probabilities.

    Note that there is no direct correlation between sound from the room and Peter being asleep.

    There are two kinds of probabilities that we can see from the state diagram.

    • One is the emission probabilities, which represent the probabilities of making certain observations given a particular state. For example, we have P(noise | awake) = 0.5 . This is an emission probability.

    • The other one is the transition probabilities, which represent the probability of transitioning to another state given a particular state. For example, we have P(asleep | awake) = 0.4. This is a transition probability.

    The Markovian property applies in this model as well. So do not complicate things too much. Markov, your savior said:

    Don’t go too much into the history…
    The Markov property, as would be applicable to the example we have considered here, would be that the probability of Peter being in a state depends ONLY on the previous state.

    But there is a clear flaw in the Markov property. If Peter has been awake for an hour, then the probability of him falling asleep is higher than if he has been awake for just 5 minutes. So, history matters. Therefore, the Markov state machine-based model is not completely correct. It’s merely a simplification.

    The Markov property, although wrong, makes this problem very tractable.

    We usually observe longer stretches of the child being awake and being asleep. If Peter is awake now, the probability of him staying awake is higher than of him going to sleep. Hence, the 0.6 and 0.4 in the above diagram: P(awake | awake) = 0.6 and P(asleep | awake) = 0.4
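
    To make the two kinds of probabilities concrete, here is a minimal sketch of the model's parameters as dicts. Only P(noise | awake) = 0.5, P(awake | awake) = 0.6 and P(asleep | awake) = 0.4 are stated in the article; the remaining numbers are filled in as assumptions for illustration (the awake emissions must sum to 1, the rest are simply guesses).

    states = ["awake", "asleep"]
    observations = ["noise", "quiet"]

    emission = {"awake":  {"noise": 0.5, "quiet": 0.5},    # P(noise | awake) = 0.5 from the article
                "asleep": {"noise": 0.1, "quiet": 0.9}}    # assumed
    transition = {"awake":  {"awake": 0.6, "asleep": 0.4}, # from the article
                  "asleep": {"awake": 0.2, "asleep": 0.8}} # assumed

    print(emission["awake"]["noise"], transition["awake"]["asleep"])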

    Before actually trying to solve the problem at hand using HMMs, let’s relate this model to the task of Part of Speech Tagging.

    HMMs for Part of Speech Tagging

    We know that to model any problem using a Hidden Markov Model we need a set of observations and a set of possible states. The states in an HMM are hidden.

    In the part of speech tagging problem, the observations are the words themselves in the given sequence.

    As for the states, which are hidden, these would be the POS tags for the words.

    The transition probabilities would be somewhat like P(VP | NP) that is, what is the probability of the current word having a tag of Verb Phrase given that the previous tag was a Noun Phrase.

    Emission probabilities would be P(john | NP) or P(will | VP) that is, what is the probability that the word is, say, John given that the tag is a Noun Phrase.

    Note that this is just an informal modeling of the problem to provide a very basic understanding of how the Part of Speech tagging problem can be modeled using an HMM.
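
    A minimal sketch of how such transition and emission probabilities could be estimated by simple counting over a tagged corpus (the two toy tagged sentences below are invented for illustration):

    from collections import Counter, defaultdict

    # toy tagged sentences, invented for illustration
    sentences = [[("john", "NP"), ("will", "VP"), ("see", "VP"), ("mary", "NP")],
                 [("mary", "NP"), ("will", "VP"), ("call", "VP"), ("john", "NP")]]

    trans_counts = defaultdict(Counter)   # tag -> next tag -> count
    emit_counts = defaultdict(Counter)    # tag -> word -> count
    for sent in sentences:
        for (_, tag), (_, next_tag) in zip(sent, sent[1:]):
            trans_counts[tag][next_tag] += 1
        for word, tag in sent:
            emit_counts[tag][word] += 1

    def prob(counter, key):
        return counter[key] / sum(counter.values())

    print(prob(trans_counts["NP"], "VP"))   # estimate of P(VP | NP)
    print(prob(emit_counts["NP"], "john"))  # estimate of P(john | NP)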

    How do we solve this?

    Coming back to our problem of taking care of Peter.

    Irritated, are we?

    Our problem here was that we have an initial state: Peter was awake when you tucked him into bed. After that, you recorded a sequence of observations, namely noise or quiet, at different time-steps. Using these set of observations and the initial state, you want to find out whether Peter would be awake or asleep after say N time steps.

    We draw all possible transitions starting from the initial state. There’s an exponential number of branches that come out as we keep moving forward. So the model grows exponentially after a few time steps. Even without considering any observations. Have a look at the model expanding exponentially below.

    If we had a set of states, we could calculate the probability of the sequence. But we don’t have the states. All we have are a sequence of observations. This is why this model is referred to as the Hidden Markov Model — because the actual states over time are hidden.

    So, caretaker, if you’ve come this far it means that you have at least a fairly good understanding of how the problem is to be structured. All that is left now is to use some algorithm / technique to actually solve the problem. For now, Congratulations on Leveling up!

    In the next article of this two-part series, we will see how we can use a well defined algorithm known as the Viterbi Algorithm to decode the given sequence of observations given the model. See you there!

    Translated from: https://www.freecodecamp.org/news/an-introduction-to-part-of-speech-tagging-and-the-hidden-markov-model-953d45338f24/


  • The hidden Markov model is just about the hardest model I have met while studying. Working through 统计学习方法 (Statistical Learning Methods) together with notes found online, this post implements in Python: generating an observation sequence from an HMM, the forward-backward algorithm, unsupervised Baum-Welch training, and the Viterbi algorithm. Fairly clearly...
  • Markov models and hidden Markov models: a Markov model (named after the mathematician Andrey Markov) is used to make predictions about systems that change randomly. Markov's insight was that, in such cases, good predictions can be made from only the most recent occurrence of an event, ignoring everything that happened before it...
