  • Pacman AI (download entry)
    2020-12-04 00:57:30

    File name: reinforcement

    Language: Python

    File size: 204 KB

    Uploaded: 2013-10-13

    Downloads: 12

    Uploader: uhauha

    Description: Pacman - a Python implementation of the classic game, packaged as the environment and support library for the Berkeley reinforcement-learning project.

    File list:

    reinforcement/
        value.png
        graphicsUtils.py
        keyboardAgents.py
        crawler.png
        game.py
        valueIterationAgents.py
        capsule.png
        textGridworldDisplay.py
        docs/
            graphicsGridworldDisplay.html
            learningAgents.html
            keyboardAgents.html
            featureExtractors.html
            textGridworldDisplay.html
            crawler.html
            game.html
            mdp.html
            graphicsCrawlerDisplay.html
            analysis.html
            qlearningAgents.html
            pacmanAgents.html
            environment.html
            gridworld.html
            graphicsUtils.html
            pacman.html
            ghostAgents.html
            layout.html
            graphicsDisplay.html
            util.html
            valueIterationAgents.html
            textDisplay.html
        graphicsDisplay.py
        graphicsGridworldDisplay.py
        commands.txt
        define-eqn2.png
        learningAgents.py
        crawler.py
        environment.py
        textDisplay.py
        layouts/
            testClassic.lay
            mediumClassic.lay
            trickyClassic.lay
            capsuleClassic.lay
            trappedClassic.lay
            minimaxClassic.lay
            mediumGrid.lay
            contestClassic.lay
            originalClassic.lay
            smallClassic.lay
            openClassic.lay
            smallGrid.lay
        reinforcement.html
        layout.py
        projects.css
        util.py
        graphicsCrawlerDisplay.py
        analysis.py
        define-eqn1.png
        pacman.py
        ghostAgents.py
        featureExtractors.py
        pacman.png
        gridworld.py
        mdp.py
        qlearningAgents.py
        pacmanAgents.py

  • For UC Berkeley's classic CS188 Pacman AI assignment: the code for Project 1, in plain-text format, comprising the files search.py and searchAgent.py; it passes the autograder with 26/25 (1 bonus point)
  • Python source code for the classic UC Berkeley Pacman AI assignment

    Introduction

    Problem background

    This assignment comes from the Spring 2021 offering of UC Berkeley's CS188 Artificial Intelligence course, Project 2. The full project description is here: UC Berkeley Spring 2021, Project 2: Multi-Agent Search

    File overview

    (figure: overview of the project files; image not preserved)

    Preliminaries

    This write-up covers only Problems 2-5 of Project 2; for Problem 1, please see other write-ups.

    Project environment: Python 3.9 + VS Code

    If Python is not installed on your machine, or the installed version is old, run python in cmd to check.
    On a machine without Python, cmd will redirect you to the Python install page (Python 3.9); after installation, running python in cmd prints the version banner.
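
    As a quick sanity check of the interpreter (a hypothetical snippet, not part of the project code), you can print the version from Python itself:

    import sys
    # Expect something like "3.9.x ..." if the environment matches this write-up
    print(sys.version)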

    Problem 2: Minimax

    Minimax is a central algorithm in adversarial search. Following the assignment, we recursively compute, for each of our options, the best outcome the opponent can force, and pick the option whose worst case is least bad (minimizing the maximum possible loss).
    In this project Pacman (agent index 0) plays Max, maximizing its own utility, while the ghosts play Min, minimizing Pacman's utility.
    The standard pseudocode translates directly into the code we need. Note that in the Spring 2021 version of Project 2, the old generateSuccessor function was renamed getNextState, which is more descriptive.
    (figure: minimax pseudocode; image not preserved)
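
    Since the pseudocode figure did not survive, here is a minimal, self-contained minimax sketch on a hand-built toy tree (a hypothetical example, not the Pacman API): internal nodes are lists of children, leaves are utilities, and MAX and MIN alternate by level.

    def minimax(node, maximizing=True):
        if not isinstance(node, list):            # leaf: return its utility
            return node
        values = [minimax(child, not maximizing) for child in node]
        return max(values) if maximizing else min(values)

    # MAX chooses between two MIN nodes: min(3, 5) = 3 and min(2, 9) = 2,
    # so the root value is max(3, 2) = 3.
    print(minimax([[3, 5], [2, 9]]))              # -> 3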

    class MinimaxAgent(MultiAgentSearchAgent):
        """
        Your minimax agent (question 2)
        """
    
        def getAction(self, gameState):
            """
            Returns the minimax action from the current gameState using self.depth
            and self.evaluationFunction.
    
            Here are some method calls that might be useful when implementing minimax.
    
            gameState.getLegalActions(agentIndex):
            Returns a list of legal actions for an agent
            agentIndex=0 means Pacman, ghosts are >= 1
    
            gameState.getNextState(agentIndex, action):
            Returns the child game state after an agent takes an action
    
            gameState.getNumAgents():
            Returns the total number of agents in the game
    
            gameState.isWin():
            Returns whether or not the game state is a winning state
    
            gameState.isLose():
            Returns whether or not the game state is a losing state
            """
            "*** YOUR CODE HERE ***"
            
            # Step 1: the 'min-value' part (ghost turns)
            def min_value(gameState,depth,agentIndex):
                # initialize v to positive infinity
                v=float('inf')
                # if the game is already over, stop recursing
                if gameState.isWin() or gameState.isLose():
                    # the evaluation function returns a number; higher is better
                    return self.evaluationFunction(gameState)
                # for each successor of the state, take the minimum value
                for legalAction in gameState.getLegalActions(agentIndex):
                    if agentIndex==gameState.getNumAgents()-1:
                        #v=min(v,max_value(successor))
                        v=min(v,max_value(gameState.getNextState(agentIndex,legalAction),depth))
                    else:
                        #go on searching the next ghost
                        v=min(v,min_value(gameState.getNextState(agentIndex,legalAction),depth,agentIndex+1))
                return v
            # Step 2: the 'max-value' part (Pacman's turn)
            def max_value(gameState,depth):
                # initialize v to negative infinity
                v=float('-inf')
                # each Pacman turn starts a new ply, so increase the depth
                depth=depth+1
                # stop at the depth limit or when the game is over
                if depth==self.depth or gameState.isLose() or gameState.isWin():
                    return self.evaluationFunction(gameState)
                #for each successor of state:get max-value
                for legalAction in gameState.getLegalActions(0):
                    #v=max(v,min_value(successor))
                    v=max(v,min_value(gameState.getNextState(0,legalAction),depth,1))
                return v
    
            nextAction=gameState.getLegalActions(0)
            Max=float('-inf')
            Result=None

            # root level: try each legal action and keep the best one
            for nAction in nextAction:
                # skip the Stop action (legal actions are capitalized: 'Stop')
                if nAction!='Stop':
                    depth=0
                    value=min_value(gameState.getNextState(0,nAction),depth,1)
                    if value>Max:
                        Max=value
                        Result=nAction
            return Result
    

    Run it:

    python autograder.py -q q2 --no-graphics
    


    Problem 3: Alpha-Beta pruning

    Plain minimax wastes time exploring branches that cannot affect the final decision, so we prune them.
    Alpha-beta pruning cuts off the parts of the search tree that do not need to be searched, improving efficiency.
    My approach here differs from my minimax solution and improves on it in a few ways. For example, I added a dispatcher function getValue() to simplify the bookkeeping during comparisons. The biggest change is that max_value and min_value take the extra parameters alpha and beta and compare against them to decide when to return v early.
    (figure: alpha-beta pseudocode; image not preserved)
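
    Since the alpha-beta figure is also missing, here is the same toy-tree search with pruning added (a sketch, not the AlphaBetaAgent below): alpha tracks the best value MAX can already guarantee on the path to the root, beta the best MIN can guarantee, and a branch is abandoned as soon as v falls outside (alpha, beta).

    def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
        if not isinstance(node, list):            # leaf: return its utility
            return node
        if maximizing:
            v = float('-inf')
            for child in node:
                v = max(v, alphabeta(child, alpha, beta, False))
                if v > beta:                      # MIN above will never allow this
                    return v
                alpha = max(alpha, v)
        else:
            v = float('inf')
            for child in node:
                v = min(v, alphabeta(child, alpha, beta, True))
                if v < alpha:                     # MAX above will never allow this
                    return v
                beta = min(beta, v)
        return v

    print(alphabeta([[3, 5], [2, 9]]))            # -> 3, and the 9 leaf is pruned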

    class AlphaBetaAgent(MultiAgentSearchAgent):
        """
        Your minimax agent with alpha-beta pruning (question 3)
        """
        def getAction(self, gameState):
            """
            Returns the minimax action using self.depth and self.evaluationFunction
            """
            "*** YOUR CODE HERE ***"
            alpha = float('-inf')
            beta = float('inf')
            v = float('-inf')
            bestAction = None
            for legalAction in gameState.getLegalActions(0):
                value = self.getValue(gameState.getNextState(0, legalAction),1,0,alpha,beta)
                if value is not None and value>v:
                    v = value
                    bestAction = legalAction
                #update new alpha
                alpha=max(alpha,v)
            return bestAction
    
        def getValue(self, gameState, agentIndex, depth, alpha, beta):
            legalActions = gameState.getLegalActions(agentIndex)
            if len(legalActions)==0:
                return self.evaluationFunction(gameState)
            #decide from agentIndex whether the next mover is Pacman or a ghost
            if agentIndex==0:
                #a full round of agents has passed: go one ply deeper
                depth=depth+1
                if depth == self.depth:
                    return self.evaluationFunction(gameState)
                else:
                    return self.max_value(gameState, agentIndex, depth, alpha, beta)
            elif agentIndex>0:
                return self.min_value(gameState, agentIndex, depth, alpha, beta)
    
        def max_value(self, gameState, agentIndex, depth, alpha, beta):
            #Initialize v= negatively infinity
            v = float('-inf')
            #for each successor of state:get max-value
            for legalAction in gameState.getLegalActions(agentIndex):
                value = self.getValue(gameState.getNextState(agentIndex, legalAction),
                    (agentIndex+1)%gameState.getNumAgents(), depth, alpha, beta)
                #same best-value bookkeeping as in Problem 2
                if value is not None and value > v:
                    v=value
                #if v>beta return v
                if v>beta:
                    return v
                #update new alpha
                alpha=max(alpha,v)
            return v
    
        def min_value(self, gameState, agentIndex, depth, alpha, beta):
            #Initialize v= positively infinity
            v = float('inf')
            #for each successor of state:get min-value
            for legalAction in gameState.getLegalActions(agentIndex):
                value = self.getValue(gameState.getNextState(agentIndex, legalAction),
                    (agentIndex+1)%gameState.getNumAgents(), depth, alpha, beta)
                if value is not None and value < v:
                    v=value
                #if v<alpha return v
                if v<alpha:
                    return v
                #update new beta
                beta=min(beta,v)
            return v
    

    Run it:

    python autograder.py -q q3 --no-graphics
    


    Problem 4: Expectimax

    Expectimax recursively computes an expected value at each chance node and takes the maximum at each of Pacman's nodes. As before, we start from Pacman, iterate over all legal next moves, and take the action with the best utility as bestAction.
    Compared with alpha-beta pruning, the new ingredient is exp-value, the expected-value computation: iterate over the legal actions, query each successor state, and return the average of their values as the result, which models ghosts that choose uniformly at random.
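
    Before the full agent below, the same toy tree from the Minimax section makes the difference concrete (a sketch, not the project API): replacing MIN nodes with averaging chance nodes can flip the decision.

    def expectimax(node, maximizing=True):
        if not isinstance(node, list):            # leaf: return its utility
            return node
        values = [expectimax(child, not maximizing) for child in node]
        # chance nodes return the mean, modeling a uniformly random ghost
        return max(values) if maximizing else sum(values) / float(len(values))

    # The chance nodes average to 4.0 and 5.5, so expectimax prefers the
    # branch that minimax rejected (minimax valued it at min(2, 9) = 2).
    print(expectimax([[3, 5], [2, 9]]))           # -> 5.5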

    class ExpectimaxAgent(MultiAgentSearchAgent):
        """
          Your expectimax agent (question 4)
        """
        def getAction(self, gameState): 
            """
            Returns the expectimax action using self.depth and self.evaluationFunction
    
            All ghosts should be modeled as choosing uniformly at random from their
            legal moves.
            """
            "*** YOUR CODE HERE ***"
            maxVal = float('-inf')
            bestAction = None
            for action in gameState.getLegalActions(agentIndex=0):
                value = self.getValue(gameState.getNextState(agentIndex=0, action=action), agentIndex=1, depth=0)
                if value is not None and value>maxVal:
                    maxVal = value
                    bestAction = action
            return bestAction
    
        def getValue(self, gameState, agentIndex, depth):
            legalActions = gameState.getLegalActions(agentIndex)
            if len(legalActions)==0:
                return self.evaluationFunction(gameState)
            if agentIndex==0:
                depth += 1
                if depth == self.depth:
                    return self.evaluationFunction(gameState)
                else:
                    return self.max_value(gameState, agentIndex, depth)
            elif agentIndex>0:
                return self.exp_value(gameState, agentIndex, depth)
    
        def max_value(self, gameState, agentIndex, depth):
            maxVal = -float('inf')
            legalActions = gameState.getLegalActions(agentIndex)
            for action in legalActions:
                value = self.getValue(gameState.getNextState(agentIndex, action), (agentIndex+1)%gameState.getNumAgents(), depth)
                if value is not None and value > maxVal:
                    maxVal = value
            return maxVal
    
        def exp_value(self, gameState, agentIndex, depth):
            legalActions = gameState.getLegalActions(agentIndex)
            total = 0
            for action in legalActions:
                value = self.getValue(gameState.getNextState(agentIndex, action), (agentIndex+1)%gameState.getNumAgents(), depth)
                if value is not None:
                    total += value
            return total/(len(legalActions))
    

    Run it:

    python autograder.py -q q4
    

    "You should now observe a more cavalier approach in close quarters with ghosts. In particular, if Pacman perceives that he could be trapped but might escape to grab a few more pieces of food, he'll at least try. Investigate the results of these two scenarios:"

    python pacman.py -p AlphaBetaAgent -l trappedClassic -a depth=3 -q -n 10
    


    python pacman.py -p ExpectimaxAgent -l trappedClassic -a depth=3 -q -n 10
    

    As expected, ExpectimaxAgent wins about half of the time, while AlphaBetaAgent always loses.

    Problem 5: Evaluation Function

    This problem asks us to improve the Reflex Agent's code. Note in particular that the function's parameters have changed: we can now only observe the current state and know nothing about successor states. We compute totalScaredTimes, the total remaining time during which the ghosts stay edible; since eating a power capsule on the map produces this positive feedback, Pacman will consider eating nearby capsules. Finally, the computed heuristic terms are added to the game score and returned.

    def betterEvaluationFunction(currentGameState):
        """
        Your extreme ghost-hunting, pellet-nabbing, food-gobbling, unstoppable
        evaluation function (question 5).
    
        DESCRIPTION: <write something here so we know what you did>
        """
        "*** YOUR CODE HERE ***"
        #we can only observe the current state, not successor states
        #gather the information we will use
        Pos = currentGameState.getPacmanPosition() #current position
        Food = currentGameState.getFood()          #current food
        GhostStates = currentGameState.getGhostStates()     #ghost state
        ScaredTimes = [ghostState.scaredTimer for ghostState in GhostStates]
        #find the nearest food and compute a positive reward term
        if len(Food.asList())>0:
            nearestFood = (min([manhattanDistance(Pos, food) for food in Food.asList()]))
            foodScore = 9/nearestFood
        else:
            foodScore = 0
        #find the nearest ghost and compute a negative danger term
        nearestGhost = min([manhattanDistance(Pos,ghostState.configuration.pos) for ghostState in GhostStates])
        dangerScore = -10/nearestGhost if nearestGhost!=0 else 0
        #total remaining time the ghosts stay scared (edible)
        totalScaredTimes = sum(ScaredTimes)
        #return sum of all value
        return currentGameState.getScore() + foodScore + dangerScore + totalScaredTimes
    
    # Abbreviation
    better = betterEvaluationFunction
    
    

    Run it:

    python autograder.py -q q5 --no-graphics
    


    Takeaways

    In this lab I was able to complete and implement the game on the basis of understanding other people's code and the theory from class, and doing so consolidated my knowledge of adversarial search.


  • Pac-Man project introduction: back in 2011 I took the original online AI course taught by Peter Norvig and Sebastian Thrun. I loved all the AI theory we learned, but I badly wanted to apply it to real problems. That was when I discovered ... Project 1: ...
  • Pacman game. Demo: https://passer-by.com/pacman/ Copyright: this game was made by passer-by.com; please respect the author and credit the source when citing. Features: map rendering, player control, NPCs that path-find toward the player's coordinates in real time, dot eating ...
  • Requires Python 2.7 (preferably installed via Anaconda; installing directly from the MSI can cause problems)
  • AI lab: search strategies (Pacman)

    2019-07-04 20:07:33

    Contents

    Problem 1: Depth-first search

    Problem 2: Breadth-first search

    Problem 3: Varying the cost function

    Problem 4: A* search

    Problem 5: Finding all the corners

    Problem 6: Corners problem: heuristic

    Problem 7: Eating all the dots

    Problem 8: Suboptimal search

    Object overview

    Full code


    The full project is at

    https://github.com/chunxi-alpc/gcx_pacman

    Problem 1: Depth-first search

    Implement depth-first search in the depthFirstSearch function in search.py.

    In cmd, run: Python2 pacman.py -l mediumMaze -p SearchAgent -a fn=dfs

     

    [SearchAgent] using function dfs

    [SearchAgent] using problem type PositionSearchProblem

    Path found with total cost of 130 in 0.0 seconds

    Search nodes expanded: 146

    Pacman emerges victorious! Score: 380

    Average Score: 380.0

    Scores:        380.0

    Win Rate:      1/1 (1.00)

    Record:        Win

    For states that have already been explored, the Pacman board displays an overlay and shows the order in which they were visited (red, dark to light). On its way to the goal, Pacman does not visit every square; a single found path is displayed.

    Using the Stack data structure, the DFS solution for mediumMaze should have length 130 (assuming you push successors in the order returned by getSuccessors; pushing them in reverse order may give 244). This may not be the shortest path: for time and efficiency we only search until some state passes the goal test, and since DFS expands the deepest node first, the first action sequence that passes the goal test is not necessarily the shortest one.
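
    To see the push-order effect concretely, here is a self-contained toy illustration (a made-up graph, not the Pacman maze): both orders reach the goal 'G', but the returned solutions differ in length, mirroring the 130-versus-244 observation above.

    graph = {'S': ['A', 'B'], 'A': ['G'], 'B': ['C'], 'C': ['G'], 'G': []}

    def dfs(start, goal, reverse=False):
        stack, visited = [(start, [start])], set()
        while stack:
            state, path = stack.pop()
            if state == goal:
                return path
            if state not in visited:
                visited.add(state)
                succ = list(reversed(graph[state])) if reverse else graph[state]
                for nxt in succ:
                    if nxt not in visited:
                        stack.append((nxt, path + [nxt]))

    print(dfs('S', 'G'))                # -> ['S', 'B', 'C', 'G'] (last pushed is explored first)
    print(dfs('S', 'G', reverse=True))  # -> ['S', 'A', 'G']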

    Problem 2: Breadth-first search

    Implement the breadth-first search (BFS) algorithm in the breadthFirstSearch function in search.py.

    In cmd, run: Python2 pacman.py -l mediumMaze -p SearchAgent -a fn=bfs

    [SearchAgent] using function bfs

    [SearchAgent] using problem type PositionSearchProblem

    Path found with total cost of 68 in 0.0 seconds

    Search nodes expanded: 269

    Pacman emerges victorious! Score: 442

    Average Score: 442.0

    Scores:        442.0

    Win Rate:      1/1 (1.00)

    Record:        Win

    Hint: if Pacman moves too slowly for you, try the option --frameTime 0

    Note: if your search code is generic, it will also work on the eight-puzzle search problem without any changes.

    Problem 3: Varying the cost function

    By changing the cost function, we can encourage Pacman to find different paths. For example, we can charge more per step in ghost-ridden areas and less per step in food-rich areas, and a rational Pacman should adjust its behavior accordingly.

    Implement the uniform-cost graph search algorithm in the uniformCostSearch function in search.py. util.py contains some data structures that may be useful for your implementation. You should now observe successful behavior in all three of the following layouts; the agents below are all UCS (uniform-cost search) agents that differ only in the cost function they use (the agents and cost functions are already written for you):
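
    The core of UCS fits in a few lines. Here is a minimal sketch on a toy weighted graph (using the standard heapq module rather than the project's util.PriorityQueue): the fringe is ordered by path cost g(n), so the first time the goal is popped, its path is optimal.

    import heapq

    graph = {'S': [('A', 1), ('B', 5)], 'A': [('B', 1)], 'B': []}

    def ucs(start, goal):
        fringe, visited = [(0, start, [start])], set()
        while fringe:
            cost, state, path = heapq.heappop(fringe)
            if state == goal:
                return cost, path
            if state not in visited:
                visited.add(state)
                for nxt, step in graph[state]:
                    if nxt not in visited:
                        heapq.heappush(fringe, (cost + step, nxt, path + [nxt]))

    print(ucs('S', 'B'))   # -> (2, ['S', 'A', 'B']), cheaper than the direct 5-cost edge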

    Python2 pacman.py -l mediumMaze -p SearchAgent -a fn=ucs

    Python2 pacman.py -l mediumDottedMaze -p StayEastSearchAgent

    Python2 pacman.py -l mediumScaryMaze -p StayWestSearchAgent

    Note: because of their exponential cost functions, you will see a very low path cost (total cost) for StayEastSearchAgent and a very high one for StayWestSearchAgent (see searchAgents.py for details).

    [SearchAgent] using function ucs

    [SearchAgent] using problem type PositionSearchProblem

    Path found with total cost of 68 in 0.0 seconds

    Search nodes expanded: 269

    Pacman emerges victorious! Score: 442

    Average Score: 442.0

    Scores:        442.0

    Win Rate:      1/1 (1.00)

    Record:        Win

    Problem 4: A* search

    Implement A* graph search in the aStarSearch function in search.py. A* takes a heuristic function as an argument. A heuristic takes two inputs: a search-problem state (the main argument) and the problem itself (for reference information). The nullHeuristic function in search.py is a trivial example. You can test your A* implementation on the original problem of finding a path through a maze to a fixed position, using the Manhattan-distance heuristic (already implemented as manhattanHeuristic in searchAgents.py).
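
    A* is the same loop with a different priority. Here is a minimal sketch on a tiny grid (a made-up 3x3 layout, not one of the project mazes): nodes are popped by f(n) = g(n) + h(n), and with h = 0 it reduces exactly to UCS.

    import heapq

    def manhattan(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def astar(start, goal, walls):
        fringe = [(manhattan(start, goal), 0, start, [start])]
        visited = set()
        while fringe:
            f, g, state, path = heapq.heappop(fringe)
            if state == goal:
                return path
            if state not in visited:
                visited.add(state)
                x, y = state
                for nxt in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
                    if nxt not in walls and nxt not in visited:
                        heapq.heappush(fringe, (g + 1 + manhattan(nxt, goal),
                                                g + 1, nxt, path + [nxt]))

    # 3x3 open grid with a wall in the center; everything off-grid is a wall.
    walls = {(1, 1)} | set((x, y) for x in range(-1, 4) for y in range(-1, 4)
                           if x in (-1, 3) or y in (-1, 3))
    print(astar((0, 0), (2, 2), walls))   # one of the 4-move shortest paths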

    Python2 pacman.py -l bigMaze -z .5 -p SearchAgent -a fn=astar,heuristic=manhattanHeuristic

    [SearchAgent] using function astar and heuristic manhattanHeuristic

    [SearchAgent] using problem type PositionSearchProblem

    Path found with total cost of 210 in 0.1 seconds

    Search nodes expanded: 549

    Pacman emerges victorious! Score: 300

    Average Score: 300.0

    Scores:        300.0

    Win Rate:      1/1 (1.00)

    Record:        Win

    Problem 5: Finding all the corners

    Note: make sure you have completed Problem 2 before attempting Problem 5, because Problem 5 builds on your answer to Problem 2.

    The real power of A* only becomes apparent on more challenging problems. Next, we construct a new problem and then design a heuristic for it.

    In corner mazes there is one dot in each of the four corners. Our new search problem is to find the shortest path through the maze that touches all four corners (whether or not the maze actually has food there). Note that for mazes like tinyCorners, the shortest path does not always go to the closest food first! Hint: the shortest path through tinyCorners takes 28 steps.

    Implement the CornersProblem search problem in searchAgents.py. You will need to choose a state representation that encodes all the information necessary to determine whether all four corners have been reached. The search agent should now solve:

    Python2 pacman.py -l tinyCorners -p SearchAgent -a fn=bfs,prob=CornersProblem

    Python2 pacman.py -l mediumCorners -p SearchAgent -a fn=bfs,prob=CornersProblem

    Further, define an abstract state representation that does not encode irrelevant information (such as ghost positions or the positions of other food). In particular, do not use Pacman's full GameState as a search state; if you do, your code will be very, very slow (and wrong). A sketch of one workable encoding follows the hint below.

    Hint: the only game-state information you need to access in your implementation is Pacman's starting position and the locations of the four corners.
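
    As a sketch of such a representation (hypothetical coordinates, not tied to a specific layout): the state is Pacman's position plus a 4-tuple of booleans recording which corners have been visited, and nothing else.

    corners = ((1, 1), (1, 9), (9, 1), (9, 9))

    def visit(state):
        # when Pacman stands on a corner, flip the matching flag
        position, visited = state
        if position in corners:
            i = corners.index(position)
            visited = visited[:i] + (True,) + visited[i + 1:]
        return (position, visited)

    start_state = ((5, 5), (False, False, False, False))
    print(visit(((1, 1), (False,) * 4)))
    # -> ((1, 1), (True, False, False, False))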

    [SearchAgent] using function bfs

    [SearchAgent] using problem type CornersProblem

    Path found with total cost of 106 in 0.2 seconds

    Search nodes expanded: 1966

    Pacman emerges victorious! Score: 434

    Average Score: 434.0

    Scores:        434.0

    Win Rate:      1/1 (1.00)

    Record:        Win

    Problem 6: Corners problem: heuristic

    Note: make sure you have completed Problem 4 before attempting Problem 6, because Problem 6 builds on your answer to Problem 4.

    Implement a heuristic, cornersHeuristic, for the CornersProblem. Add the necessary comments before your implementation.
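
    One simple admissible choice (a sketch; the greedy-chaining version actually used appears in the code section below) is the Manhattan distance to the farthest unvisited corner: Pacman must walk at least that far, so the estimate never overestimates.

    def farthest_corner_heuristic(position, unvisited_corners):
        if not unvisited_corners:
            return 0
        return max(abs(position[0] - cx) + abs(position[1] - cy)
                   for cx, cy in unvisited_corners)

    print(farthest_corner_heuristic((1, 1), [(1, 9), (9, 9)]))   # -> 16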

    Python2 pacman.py -l mediumCorners -p AStarCornersAgent -z 0.5

    Note: AStarCornersAgent is shorthand for -p SearchAgent -a fn=aStarSearch,prob=CornersProblem,heuristic=cornersHeuristic.

    Path found with total cost of 106 in 0.1 seconds

    Search nodes expanded: 692

    Pacman emerges victorious! Score: 434

    Average Score: 434.0

    Scores:        434.0

    Win Rate:      1/1 (1.00)

    Record:        Win

     

     

    Problem 7: Eating all the dots

    Next we solve a hard search problem: eating all the Pacman food in as few steps as possible. For this we need a new search problem definition that formalizes the food-clearing problem: FoodSearchProblem in searchAgents.py (already implemented). A solution is defined as a path that collects all of the food in the Pacman world. For the present project, solutions do not take ghosts or power pellets into account; solutions depend only on the placement of walls and regular food (of course ghosts can ruin the execution of a solution!). If you have written your generic search methods correctly, A* with a null heuristic (equivalent to uniform-cost search) will quickly find an optimal solution to testSearch with no code change on your part (total cost of 7).

    Python2 pacman.py -l testSearch -p AStarFoodSearchAgent

    Note: AStarFoodSearchAgent is shorthand for -p SearchAgent -a fn=astar,prob=FoodSearchProblem,heuristic=foodHeuristic.

    You should find that UCS starts to slow down even for the seemingly simple tinySearch.

    Note: make sure you have completed Problem 4 before attempting Problem 7, because Problem 7 builds on your answer to Problem 4.

    Fill in foodHeuristic in searchAgents.py with a consistent heuristic for the FoodSearchProblem. Add the necessary comments at the top of the function describing your heuristic. Test your agent:

    Python2 pacman.py -l trickySearch -p AStarFoodSearchAgent

    Problem 8: Suboptimal search

    Sometimes, even with A* and a good heuristic, finding the optimal path through all the dots is hard. In such cases we would still like to find a reasonably good path, quickly. In this section you will write an agent that always greedily eats the closest dot. ClosestDotSearchAgent is implemented for you in searchAgents.py, but it is missing a key function: one that finds a path to the closest dot.

    Implement the findPathToClosestDot function in searchAgents.py.

    python pacman.py -l bigSearch -p ClosestDotSearchAgent -z .5

    Hint: the quickest way to complete findPathToClosestDot is to fill in AnyFoodSearchProblem, which is missing its goal test; then solve that problem with an appropriate search function. The solution should be very short!

    Your ClosestDotSearchAgent will not always find the shortest possible path through the maze. In fact, you can do better than it if you try.

    Object overview

    Here is an overview of the important objects in the codebase that relate to search problems, for your reference:

    SearchProblem (search.py)

    A SearchProblem is an abstract object that represents the state space, step costs, and goal state of a problem. You will interact with a SearchProblem only through the methods defined at the top of search.py.

    PositionSearchProblem (searchAgents.py)

    A specific type of SearchProblem that you deal with -- it corresponds to searching for a single pellet in a maze.

    CornersProblem (searchAgents.py)

    A specific type of SearchProblem that you define -- the goal is to find a path that reaches all four corners of a maze.

    FoodSearchProblem (searchAgents.py)

    A specific SearchProblem to solve -- finding a path that collects all of the food.

    Search Function

    A search function is a function that takes an instance of SearchProblem as input, runs some algorithm, and returns a list of actions that reach the goal. Examples of search functions are depthFirstSearch and breadthFirstSearch, which you have to write. We provide tinyMazeSearch, a very bad search function that only gets the correct result for tinyMaze.

    SearchAgent

    SearchAgent is a class that implements an Agent (it interacts with the world) and plans through a search function. The SearchAgent first uses the supplied search function to plan a sequence of actions that reach the goal state, then executes the actions one at a time.

    Full code

    # coding=UTF-8
    # search.py
    
    """
    In search.py, you will implement generic search algorithms which are called by
    Pacman agents (in searchAgents.py).
    """
    
    import util
    
    class SearchProblem:
        """
        This class outlines the structure of a search problem, but doesn't implement
        any of the methods (in object-oriented terminology: an abstract class).
    
        You do not need to change anything in this class, ever.
        """
    
        def getStartState(self):
            """
            Returns the start state for the search problem.
            """
            util.raiseNotDefined()
    
        def isGoalState(self, state):
            """
              state: Search state
    
            Returns True if and only if the state is a valid goal state.
            """
            util.raiseNotDefined()
    
        def getSuccessors(self, state):
            """
              state: Search state
    
            For a given state, this should return a list of triples, (successor,
            action, stepCost), where 'successor' is a successor to the current
            state, 'action' is the action required to get there, and 'stepCost' is
            the incremental cost of expanding to that successor.
            """
            util.raiseNotDefined()
    
        def getCostOfActions(self, actions):
            """
             actions: A list of actions to take
    
            This method returns the total cost of a particular sequence of actions.
            The sequence must be composed of legal moves.
            """
            util.raiseNotDefined()
    
    def tinyMazeSearch(problem):
        """
        Returns a sequence of moves that solves tinyMaze.  For any other
        maze, the sequence of moves will be incorrect, so only use this for tinyMaze
        """
        from game import Directions
        s = Directions.SOUTH
        w = Directions.WEST
        return  [s,s,w,s,w,w,s,w]
    
    def depthFirstSearch(problem):
        #start state
        s = problem.getStartState()
        #list of states already expanded
        exstates = []
        #DFS uses a LIFO stack as the fringe
        states = util.Stack()
        states.push((s, []))
        while not states.isEmpty():
            state, actions = states.pop()
            #goal test when the node is popped
            if problem.isGoalState(state):
                return actions
            #skip states that were already expanded
            if state not in exstates:
                exstates.append(state)
                for node in problem.getSuccessors(state):
                    coordinates = node[0]
                    direction = node[1]
                    if coordinates not in exstates:
                        states.push((coordinates, actions + [direction]))
        #no solution found
        return []
    
    def breadthFirstSearch(problem):
        #start state
        s = problem.getStartState()
        #list of states already expanded
        exstates = []
        #BFS uses a FIFO queue as the fringe
        states = util.Queue()
        states.push((s, []))
        while not states.isEmpty():
            state, action = states.pop()
            #goal test
            if problem.isGoalState(state):
                return action
            #skip repeated states
            if state not in exstates:
                successor = problem.getSuccessors(state)
                exstates.append(state)
                #push the successors onto the queue
                for node in successor:
                    coordinates = node[0]
                    direction = node[1]
                    if coordinates not in exstates:
                        states.push((coordinates, action + [direction]))
        #no solution found
        return []
    
    def uniformCostSearch(problem):
        #start state
        start = problem.getStartState()
        #list of states already expanded
        exstates = []
        #UCS uses a priority queue ordered by path cost
        states = util.PriorityQueue()
        states.push((start, []) ,0)
        while not states.isEmpty():
            state, actions = states.pop()
            #goal test
            if problem.isGoalState(state):
                return actions
            #skip repeated states
            if state not in exstates:
                #expand
                successors = problem.getSuccessors(state)
                for node in successors:
                    coordinate = node[0]
                    direction = node[1]
                    if coordinate not in exstates:
                        newActions = actions + [direction]
                        #unlike BFS, getCostOfActions sets the expansion priority
                        states.push((coordinate, newActions), problem.getCostOfActions(newActions))
            exstates.append(state)
        #no solution found
        return []
    
    def nullHeuristic(state, problem=None):
        """
        A heuristic function estimates the cost from the current state to the nearest
        goal in the provided SearchProblem.  This heuristic is trivial.
        A heuristic takes two inputs: a search state (the main argument)
        and the problem itself (for reference information).
        """
        return 0
    
    def aStarSearch(problem, heuristic=nullHeuristic):
        "Search the node that has the lowest combined cost f(n) = g(n) + h(n) first."
        start = problem.getStartState()
        exstates = []
        #priority queue: always expand the node with the lowest f-value
        states = util.PriorityQueue()
        states.push((start, []), nullHeuristic(start, problem))
        while not states.isEmpty():
            state, actions = states.pop()
            #goal test
            if problem.isGoalState(state):
                return actions
            #skip repeated states
            if state not in exstates:
                #expand
                successors = problem.getSuccessors(state)
                for node in successors:
                    coordinate = node[0]
                    direction = node[1]
                    if coordinate not in exstates:
                        newActions = actions + [direction]
                        #f = path cost so far plus the heuristic estimate
                        newCost = problem.getCostOfActions(newActions) + heuristic(coordinate, problem)
                        states.push((coordinate, newActions), newCost)
            exstates.append(state)
        #no solution found
        return []
    
    # Abbreviations
    bfs = breadthFirstSearch
    dfs = depthFirstSearch
    astar = aStarSearch
    ucs = uniformCostSearch
    
    # coding=UTF-8
    # searchAgents.py
    # ---------------
    
    
    
    """
    This file contains all of the agents that can be selected to control Pacman.  To
    select an agent, use the '-p' option when running pacman.py.  Arguments can be
    passed to your agent using '-a'.  For example, to load a SearchAgent that uses
    depth first search (dfs), run the following command:
    
    > python pacman.py -p SearchAgent -a fn=depthFirstSearch
    
    Commands to invoke other search strategies can be found in the project
    description.
    
    Please only change the parts of the file you are asked to.  Look for the lines
    that say
    
    "*** YOUR CODE HERE ***"
    
    The parts you fill in start about 3/4 of the way down.  Follow the project
    description for details.
    
    Good luck and happy searching!
    """
    
    from game import Directions
    from game import Agent
    from game import Actions
    import util
    import time
    import search
    import sys
    
    class GoWestAgent(Agent):
        "An agent that goes West until it can't."
    
        def getAction(self, state):
            "The agent receives a GameState (defined in pacman.py)."
            if Directions.WEST in state.getLegalPacmanActions():
                return Directions.WEST
            else:
                return Directions.STOP
    
    #######################################################
    # This portion is written for you, but will only work #
    #       after you fill in parts of search.py          #
    #######################################################
    
    class SearchAgent(Agent):
        """
        This very general search agent finds a path using a supplied search
        algorithm for a supplied search problem, then returns actions to follow that
        path.
    
        As a default, this agent runs DFS on a PositionSearchProblem to find
        location (1,1)
    
        Options for fn include:
          depthFirstSearch or dfs
          breadthFirstSearch or bfs
    
    
        Note: You should NOT change any code in SearchAgent
        """
    
        def __init__(self, fn='uniformCostSearch', prob='PositionSearchProblem', heuristic='nullHeuristic'):
            # Warning: some advanced Python magic is employed below to find the right functions and problems
    
            # Get the search function from the name and heuristic
            if fn not in dir(search):
                raise AttributeError, fn + ' is not a search function in search.py.'
            func = getattr(search, fn)
            if 'heuristic' not in func.func_code.co_varnames:
                print('[SearchAgent] using function ' + fn)
                self.searchFunction = func
            else:
                if heuristic in globals().keys():
                    heur = globals()[heuristic]
                elif heuristic in dir(search):
                    heur = getattr(search, heuristic)
                else:
                    raise AttributeError, heuristic + ' is not a function in searchAgents.py or search.py.'
                print('[SearchAgent] using function %s and heuristic %s' % (fn, heuristic))
                # Note: this bit of Python trickery combines the search algorithm and the heuristic
                self.searchFunction = lambda x: func(x, heuristic=heur)
    
            # Get the search problem type from the name
            if prob not in globals().keys() or not prob.endswith('Problem'):
                raise AttributeError, prob + ' is not a search problem type in SearchAgents.py.'
            self.searchType = globals()[prob]
            print('[SearchAgent] using problem type ' + prob)
    
        def registerInitialState(self, state):
            """
            This is the first time that the agent sees the layout of the game
            board. Here, we choose a path to the goal. In this phase, the agent
            should compute the path to the goal and store it in a local variable.
            All of the work is done in this method!
    
            state: a GameState object (pacman.py)
            """
            if self.searchFunction == None: raise Exception, "No search function provided for SearchAgent"
            starttime = time.time()
            problem = self.searchType(state) # Makes a new search problem
            self.actions  = self.searchFunction(problem) # Find a path
            totalCost = problem.getCostOfActions(self.actions)
            print('Path found with total cost of %d in %.1f seconds' % (totalCost, time.time() - starttime))
            if '_expanded' in dir(problem): print('Search nodes expanded: %d' % problem._expanded)
    
        def getAction(self, state):
            """
            Returns the next action in the path chosen earlier (in
            registerInitialState).  Return Directions.STOP if there is no further
            action to take.
    
            state: a GameState object (pacman.py)
            """
            if 'actionIndex' not in dir(self): self.actionIndex = 0
            i = self.actionIndex
            self.actionIndex += 1
            if i < len(self.actions):
                return self.actions[i]
            else:
                return Directions.STOP
    
    class PositionSearchProblem(search.SearchProblem):
        """
        A search problem defines the state space, start state, goal test, successor
        function and cost function.  This search problem can be used to find paths
        to a particular point on the pacman board.
    
        The state space consists of (x,y) positions in a pacman game.
    
        Note: this search problem is fully specified; you should NOT change it.
        """
    
        def __init__(self, gameState, costFn = lambda x: 1, goal=(1,1), start=None, warn=True, visualize=True):
            """
            Stores the start and goal.
    
            gameState: A GameState object (pacman.py)
            costFn: A function from a search state (tuple) to a non-negative number
            goal: A position in the gameState
            """
            self.walls = gameState.getWalls()
            self.startState = gameState.getPacmanPosition()
            if start != None: self.startState = start
            self.goal = goal
            self.costFn = costFn
            self.visualize = visualize
            if warn and (gameState.getNumFood() != 1 or not gameState.hasFood(*goal)):
                print 'Warning: this does not look like a regular search maze'
            self._visited, self._visitedlist, self._expanded = {}, [], 0
    
        def getStartState(self):
            return self.startState
    
        def isGoalState(self, state):
            isGoal = state == self.goal
    
            # For display purposes only
            if isGoal and self.visualize:
                self._visitedlist.append(state)
                import __main__
                if '_display' in dir(__main__):
                    if 'drawExpandedCells' in dir(__main__._display): #@UndefinedVariable
                        __main__._display.drawExpandedCells(self._visitedlist) #@UndefinedVariable
    
            return isGoal
    
        def getSuccessors(self, state):
            """
            Returns successor states, the actions they require, and a cost of 1.
    
             As noted in search.py:
                 For a given state, this should return a list of triples,
             (successor, action, stepCost), where 'successor' is a
             successor to the current state, 'action' is the action
             required to get there, and 'stepCost' is the incremental
             cost of expanding to that successor
            """
    
            successors = []
            for action in [Directions.NORTH, Directions.SOUTH, Directions.EAST, Directions.WEST]:
                x,y = state
                dx, dy = Actions.directionToVector(action)
                nextx, nexty = int(x + dx), int(y + dy)
                if not self.walls[nextx][nexty]:
                    nextState = (nextx, nexty)
                    cost = self.costFn(nextState)
                    successors.append( ( nextState, action, cost) )
    
            # Bookkeeping for display purposes
            self._expanded += 1 # DO NOT CHANGE
            if state not in self._visited:
                self._visited[state] = True
                self._visitedlist.append(state)
    
            return successors
    
        def getCostOfActions(self, actions):
            """
            Returns the cost of a particular sequence of actions. If those actions
            include an illegal move, return 999999.
            """
            if actions == None: return 999999
            x,y= self.getStartState()
            cost = 0
            for action in actions:
                # Check figure out the next state and see whether its' legal
                dx, dy = Actions.directionToVector(action)
                x, y = int(x + dx), int(y + dy)
                if self.walls[x][y]: return 999999
                cost += self.costFn((x,y))
            return cost
    
    class StayEastSearchAgent(SearchAgent):
        """
        An agent for position search with a cost function that penalizes being in
        positions on the West side of the board.
    
        The cost function for stepping into a position (x,y) is 1/2^x.
        """
        def __init__(self):
            self.searchFunction = search.uniformCostSearch
            costFn = lambda pos: .5 ** pos[0]
            self.searchType = lambda state: PositionSearchProblem(state, costFn, (1, 1), None, False)
    
    class StayWestSearchAgent(SearchAgent):
        """
        An agent for position search with a cost function that penalizes being in
        positions on the East side of the board.
    
        The cost function for stepping into a position (x,y) is 2^x.
        """
        def __init__(self):
            self.searchFunction = search.uniformCostSearch
            costFn = lambda pos: 2 ** pos[0]
            self.searchType = lambda state: PositionSearchProblem(state, costFn)
    
    def manhattanHeuristic(position, problem, info={}):
        "The Manhattan distance heuristic for a PositionSearchProblem"
        xy1 = position
        xy2 = problem.goal
        return abs(xy1[0] - xy2[0]) + abs(xy1[1] - xy2[1])
    
    def euclideanHeuristic(position, problem, info={}):
        "The Euclidean distance heuristic for a PositionSearchProblem"
        xy1 = position
        xy2 = problem.goal
        return ( (xy1[0] - xy2[0]) ** 2 + (xy1[1] - xy2[1]) ** 2 ) ** 0.5
    
    #####################################################
    # This portion is incomplete.  Time to write code!  #
    #####################################################
    
    class CornersProblem(search.SearchProblem):
        """
        This search problem finds paths through all four corners of a layout.
        You must select a suitable state space and successor function
        """
    
        def __init__(self, startingGameState):
            """
            Stores the walls, pacman's starting position and corners.
            """
            self.walls = startingGameState.getWalls()
            self.startingPosition = startingGameState.getPacmanPosition()
            top, right = self.walls.height-2, self.walls.width-2
            self.corners = ((1,1), (1,top), (right, 1), (right, top))
            for corner in self.corners:
                if not startingGameState.hasFood(*corner):
                    print 'Warning: no food in corner ' + str(corner)
            self._expanded = 0 # Number of search nodes expanded
            # Please add any code here which you would like to use
            # in initializing the problem
            "*** YOUR CODE HERE ***"
            self.right = right
            self.top = top
            
        def getStartState(self):
            """
            Returns the start state (in your state space, not the full Pacman state
            space)
            """
            "*** YOUR CODE HERE ***"
            #start node: (starting position, which corners have been visited)
            allCorners = (False, False, False, False)
            start = (self.startingPosition, allCorners)
            return start
    
        def isGoalState(self, state):
            """
            Returns whether this search state is a goal state of the problem.
            """
            "*** YOUR CODE HERE ***"
            #goal test: all four corners have been visited
            corners = state[1]
            return corners[0] and corners[1] and corners[2] and corners[3]
    
        def getSuccessors(self, state):
            """
            Returns successor states, the actions they require, and a cost of 1.
    
             As noted in search.py:
                For a given state, this should return a list of triples, (successor,
                action, stepCost), where 'successor' is a successor to the current
                state, 'action' is the action required to get there, and 'stepCost'
                is the incremental cost of expanding to that successor
            """
            successors = []
            #iterate over the candidate moves
            for action in [Directions.NORTH, Directions.SOUTH, Directions.EAST, Directions.WEST]:
                # Add a successor state to the successor list if the action is legal
                "*** YOUR CODE HERE ***"
                x,y = state[0]
                holdCorners = state[1]
                dx, dy = Actions.directionToVector(action)
                nextx, nexty = int(x + dx), int(y + dy)
                hitsWall = self.walls[nextx][nexty]
                newCorners = ()
                nextState = (nextx, nexty)
                #only consider moves that do not hit a wall
                if not hitsWall:
                    #if we reach a corner, set the corresponding flag
                    if nextState in self.corners:
                        if nextState == (self.right, 1):
                            newCorners = (True, holdCorners[1], holdCorners[2], holdCorners[3])
                        elif nextState == (self.right, self.top):
                            newCorners = (holdCorners[0], True, holdCorners[2], holdCorners[3])
                        elif nextState == (1, self.top):
                            newCorners = (holdCorners[0], holdCorners[1], True, holdCorners[3])
                        elif nextState == (1,1):
                            newCorners = (holdCorners[0], holdCorners[1], holdCorners[2], True)
                        successor = ((nextState, newCorners), action,  1)
                    #still on the way to a corner: flags unchanged
                    else:
                        successor = ((nextState, holdCorners), action, 1)
                    successors.append(successor)
            self._expanded += 1 
            return successors
    
        def getCostOfActions(self, actions):
            """
            Returns the cost of a particular sequence of actions.  If those actions
            include an illegal move, return 999999.  This is implemented for you.
            """
            if actions == None: return 999999
            x,y= self.startingPosition
            for action in actions:
                dx, dy = Actions.directionToVector(action)
                x, y = int(x + dx), int(y + dy)
                if self.walls[x][y]: return 999999
            return len(actions)
    
    
    def cornersHeuristic(state, problem):
        """
        A heuristic for the CornersProblem that you defined.
    
          state:   The current search state
                   (a data structure you chose in your search problem)
    
          problem: The CornersProblem instance for this layout.
    
        This function should always return a number that is a lower bound on the
        shortest path from the state to a goal of the problem; i.e.  it should be
        admissible (as well as consistent).
        """
        corners = problem.corners # These are the corner coordinates
        walls = problem.walls # These are the walls of the maze, as a Grid (game.py)
    
        "*** YOUR CODE HERE ***"
        position = state[0]
        stateCorners = state[1]
        corners = problem.corners
        top = problem.walls.height-2
        right = problem.walls.width-2
        node = []
        for c in corners:
            if c == (1,1):
                if not stateCorners[3]:
                    node.append(c)
            if c == (1, top):
                if not stateCorners[2]:
                    node.append(c)
            if c == (right, top):
                if not stateCorners[1]:
                    node.append(c)
            if c == (right, 1):
                if not stateCorners[0]:
                    node.append(c)
        cost = 0
        currPosition = position
        while len(node) > 0:
            distArr= []
            for i in range(0, len(node)):
                dist = util.manhattanDistance(currPosition, node[i])
                distArr.append(dist)
            mindist = min(distArr)
            cost += mindist
            minDistI= distArr.index(mindist)
            currPosition = node[minDistI]
            del node[minDistI]
        return cost
    
    class AStarCornersAgent(SearchAgent):
        "A SearchAgent for FoodSearchProblem using A* and your foodHeuristic"
        def __init__(self):
            self.searchFunction = lambda prob: search.aStarSearch(prob, cornersHeuristic)
            self.searchType = CornersProblem
    
    class FoodSearchProblem:
        """
        A search problem associated with finding the a path that collects all of the
        food (dots) in a Pacman game.
    
        A search state in this problem is a tuple ( pacmanPosition, foodGrid ) where
          pacmanPosition: a tuple (x,y) of integers specifying Pacman's position
          foodGrid:       a Grid (see game.py) of either True or False, specifying remaining food
        """
        def __init__(self, startingGameState):
            self.start = (startingGameState.getPacmanPosition(), startingGameState.getFood())
            self.walls = startingGameState.getWalls()
            self.startingGameState = startingGameState
            self._expanded = 0 # DO NOT CHANGE
            self.heuristicInfo = {} # A dictionary for the heuristic to store information
    
        def getStartState(self):
            return self.start
    
        def isGoalState(self, state):
            return state[1].count() == 0
    
        def getSuccessors(self, state):
            "Returns successor states, the actions they require, and a cost of 1."
            successors = []
            self._expanded += 1 # DO NOT CHANGE
            for direction in [Directions.NORTH, Directions.SOUTH, Directions.EAST, Directions.WEST]:
                x,y = state[0]
                dx, dy = Actions.directionToVector(direction)
                nextx, nexty = int(x + dx), int(y + dy)
                if not self.walls[nextx][nexty]:
                    nextFood = state[1].copy()
                    nextFood[nextx][nexty] = False
                    successors.append( ( ((nextx, nexty), nextFood), direction, 1) )
            return successors
    
        def getCostOfActions(self, actions):
            """Returns the cost of a particular sequence of actions.  If those actions
            include an illegal move, return 999999"""
            x,y= self.getStartState()[0]
            cost = 0
            for action in actions:
                # figure out the next state and see whether it's legal
                dx, dy = Actions.directionToVector(action)
                x, y = int(x + dx), int(y + dy)
                if self.walls[x][y]:
                    return 999999
                cost += 1
            return cost
    
    class AStarFoodSearchAgent(SearchAgent):
        "A SearchAgent for FoodSearchProblem using A* and your foodHeuristic"
        def __init__(self):
            self.searchFunction = lambda prob: search.aStarSearch(prob, foodHeuristic)
            self.searchType = FoodSearchProblem
            
    def foodHeuristic(state, problem):
        """
        The state is a tuple (pacmanPosition, foodGrid) where foodGrid is a
        Grid (see game.py). Call foodGrid.asList() to get a list of food
        coordinates.

        If you want to store information to be reused across calls to the
        heuristic, you can use a dictionary called problem.heuristicInfo.
        For example, if you only want to count the walls once and store that
        value, try: problem.heuristicInfo['wallCount'] = problem.walls.count()
        Subsequent calls to this heuristic can then access
        problem.heuristicInfo['wallCount']
        """
        position, foodGrid = state
        "*** YOUR CODE HERE ***"
        hvalue = 0
        food_available = []
        #collect the positions of all remaining food
        for i in range(0,foodGrid.width):
            for j in range(0,foodGrid.height):
                if (foodGrid[i][j] == True):
                    food_location = (i,j)
                    food_available.append(food_location)
        #no food left: nothing to estimate
        if (len(food_available) == 0):
            return 0
        #track the farthest-apart food pair as (current_food, select_food, distance)
        max_distance=((0,0),(0,0),0)
        for current_food in food_available:
            for select_food in food_available:
                if(current_food==select_food):
                    pass
                else:
                    #Manhattan distance between the two foods
                    distance = util.manhattanDistance(current_food,select_food)
                    if(max_distance[2] < distance):
                        max_distance = (current_food,select_food,distance)
        #connect the start position to the nearer end of the farthest pair;
        #special-case a single remaining food
        if(max_distance[0]==(0,0) and max_distance[1]==(0,0)):
            hvalue = util.manhattanDistance(position,food_available[0])
        else: 
            d1 = util.manhattanDistance(position,max_distance[0])
            d2 = util.manhattanDistance(position,max_distance[1])
            hvalue = max_distance[2] + min(d1,d2)
        
        return hvalue
    
    class ClosestDotSearchAgent(SearchAgent):
        "Search for all food using a sequence of searches"
        def registerInitialState(self, state):
            self.actions = []
            currentState = state
            while(currentState.getFood().count() > 0):
                nextPathSegment = self.findPathToClosestDot(currentState) # The missing piece
                self.actions += nextPathSegment
                for action in nextPathSegment:
                    legal = currentState.getLegalActions()
                    if action not in legal:
                        t = (str(action), str(currentState))
                        raise Exception, 'findPathToClosestDot returned an illegal move: %s!\n%s' % t
                    currentState = currentState.generateSuccessor(0, action)
            self.actionIndex = 0
            print 'Path found with cost %d.' % len(self.actions)
    
        def findPathToClosestDot(self, gameState):
            """
            Returns a path (a list of actions) to the closest dot, starting from
            gameState.
            """
            # Here are some useful elements of the startState
            startPosition = gameState.getPacmanPosition()
            food = gameState.getFood()
            walls = gameState.getWalls()
            problem = AnyFoodSearchProblem(gameState)
    
            "*** YOUR CODE HERE ***"
            return search.aStarSearch(problem)
            util.raiseNotDefined()
    
    class AnyFoodSearchProblem(PositionSearchProblem):
        """
        A search problem for finding a path to any food.
    
        This search problem is just like the PositionSearchProblem, but has a
        different goal test, which you need to fill in below.  The state space and
        successor function do not need to be changed.
    
        The class definition above, AnyFoodSearchProblem(PositionSearchProblem),
        inherits the methods of the PositionSearchProblem.
    
        You can use this search problem to help you fill in the findPathToClosestDot
        method.
        """
    
        def __init__(self, gameState):
            "Stores information from the gameState.  You don't need to change this."
            # Store the food for later reference
            self.food = gameState.getFood()
    
            # Store info for the PositionSearchProblem (no need to change this)
            self.walls = gameState.getWalls()
            self.startState = gameState.getPacmanPosition()
            self.costFn = lambda x: 1
            self._visited, self._visitedlist, self._expanded = {}, [], 0
            # DO NOT CHANGE
    
        def isGoalState(self, state):
            """
            The state is Pacman's position. Fill this in with a goal test that will
            complete the problem definition.
            """
            "*** YOUR CODE HERE ***"
            #the goal is any position that still contains food
            x,y = state
            return self.food[x][y]
    
            
    ##################
    # Mini-contest 1 #
    ##################
    class ApproximateSearchAgent(Agent):
        def registerInitialState(self, state):
            self.walls = state.getWalls()
            self.mark = 0
            self.curPos = state.getPacmanPosition()
            self.path = []
            self.path.append(self.curPos)
            self.starttime = time.time()
            self.path = ApproximateSearchAgent.findPath2(self,state)
            self.cost = 0
            self.DisRecord = {}
            self.mark = 0
            self.disTime = 0
            self.pT = 0
            self.m = [[0 for col in range(450)] for row in range(450)]
            ApproximateSearchAgent.initFloyed(self)
            # print ApproximateSearchAgent.getTotalDistance(self,self.path,state)
            # print self.path
            # print len(self.path)
            self.path = ApproximateSearchAgent.TwoOpt(self,state)
            # print ApproximateSearchAgent.brutalDis(self,self.path,state)
            # print time.time() - self.starttime
            
        def initFloyed(self):
            # Build an adjacency matrix over the tour's cells (cost 1 between
            # orthogonal neighbours), then run Floyd-Warshall to obtain
            # all-pairs shortest distances.  A cell (x, y) is indexed x + y*30.
            size = len(self.path)
            for i in range(0, size):
                x, y = self.path[i]
                if (x+1, y) in self.path:
                    self.m[x+y*30][x+1+y*30] = 1
                    self.m[x+1+y*30][x+y*30] = 1
                if (x-1, y) in self.path:
                    self.m[x+y*30][x-1+y*30] = 1
                    self.m[x-1+y*30][x+y*30] = 1
                if (x, y+1) in self.path:
                    self.m[x+(y+1)*30][x+y*30] = 1
                    self.m[x+y*30][x+(y+1)*30] = 1
                if (x, y-1) in self.path:
                    self.m[x+(y-1)*30][x+y*30] = 1
                    self.m[x+y*30][x+(y-1)*30] = 1
            for k in range(0, size):
                for i in range(0, size):
                    if not (i == k):
                        for j in range(0, size):
                            if not (i == j) and not (j == k):
                                tx, ty = self.path[k]
                                pk = tx + ty*30
                                tx, ty = self.path[i]
                                pi = tx + ty*30
                                tx, ty = self.path[j]
                                pj = tx + ty*30
                                # 0 means "no known path"; relax via i -> k -> j
                                if not (self.m[pi][pk] == 0) and not (self.m[pk][pj] == 0):
                                    if self.m[pi][pj] == 0 or self.m[pi][pk] + self.m[pk][pj] < self.m[pi][pj]:
                                        self.m[pi][pj] = self.m[pi][pk] + self.m[pk][pj]
                                        self.m[pj][pi] = self.m[pi][pj]
    
        # Greedy tour using Manhattan distance only (unused; findPath2 below
        # refines the choice with true maze distances).
        def findPath(self, state):
            originPath = []
            foodMap = state.getFood().asList()
            curPos = state.getPacmanPosition()
            originPath.append(curPos)
            nextpos = curPos
            while len(foodMap) > 0:
                minDis = 9999999
                for pos in foodMap:
                    t = util.manhattanDistance(curPos, pos)
                    if t < minDis:
                        minDis = t
                        nextpos = pos
                originPath.append(nextpos)
                foodMap.remove(nextpos)
                curPos = nextpos
            return originPath
    
        # Greedy tour: repeatedly head for the nearest unvisited food
        # (screened by Manhattan distance first, then by true maze distance),
        # advancing one BFS step at a time and collecting any food walked over.
        def findPath2(self, state):
            from game import Directions
            s = Directions.SOUTH
            w = Directions.WEST
            n = Directions.NORTH
            e = Directions.EAST
            originPath = []
            unvisited = state.getFood().asList()
            curPos = state.getPacmanPosition()
            originPath.append(curPos)
            while len(unvisited) > 0:
                minDis = 999999
                minMD = 999999
                for pos in unvisited:
                    t = util.manhattanDistance(curPos, pos)
                    if t < minDis:
                        tt = mazeDistance(curPos, pos, state)
                        if tt < minMD:
                            minDis = t
                            minMD = tt
                            nextpos = pos

                prob = PositionSearchProblem(state, start=curPos, goal=nextpos, warn=False, visualize=False)
                move = search.bfs(prob)[0]
                x, y = curPos
                if move == s:
                    y -= 1
                if move == w:
                    x -= 1
                if move == n:
                    y += 1
                if move == e:
                    x += 1
                curPos = (x, y)
                if curPos in unvisited:
                    unvisited.remove(curPos)
                    originPath.append(curPos)
            return originPath
    
    
        # 2-opt local search: keep reversing path segments while that shortens
        # the tour, stopping after 20 full passes without improvement.
        def TwoOpt(self, state):
            size = len(self.path)
            improve = 0
            bestDis = ApproximateSearchAgent.getTotalDistance(self, self.path, state)
            while improve < 20:
                for i in range(1, size - 1):
                    for k in range(i + 1, min(i + size // 2, size)):
                        newPath = ApproximateSearchAgent.TwoOptSwap(self, i, k)
                        newDis = ApproximateSearchAgent.newTotalDistance(self, i, k, self.path, state, bestDis)
                        if newDis <= 285:
                            # Good enough for the mini-contest; stop early.
                            self.path = newPath
                            return newPath
                        if newDis < bestDis:
                            improve = 0
                            self.path = newPath
                            bestDis = newDis
                improve += 1
            # Return the best tour found, not the last candidate tried.
            return self.path
    
        # Reverse the segment path[i..k], keeping the rest of the tour in order.
        def TwoOptSwap(self, i, k):
            size = len(self.path)
            ansPath = list(self.path[0:i])
            rev = list(self.path[i:k+1])
            rev.reverse()
            end = list(self.path[k+1:size])
            return ansPath + rev + end
    
        # Incremental tour length after a 2-opt swap: only the two edges at
        # the segment boundaries change, so patch those instead of re-summing
        # the whole path.
        def newTotalDistance(self, i, k, thispath, state, oldDis):
            newDis = (oldDis
                      + ApproximateSearchAgent.getDis(self, i - 1, k, state, thispath)
                      + ApproximateSearchAgent.getDis(self, i, k + 1, state, thispath)
                      - ApproximateSearchAgent.getDis(self, i - 1, i, state, thispath)
                      - ApproximateSearchAgent.getDis(self, k, k + 1, state, thispath))
            return newDis
    
    
    
        # Maze distance between tour positions start and end, read from the
        # precomputed Floyd-Warshall table.
        def getDis(self, start, end, state, thispath):
            if end >= len(thispath):
                return 0
            tx, ty = thispath[start]
            p1 = tx + ty*30
            tx, ty = thispath[end]
            p2 = tx + ty*30
            return self.m[p1][p2]
    
        # Total tour length looked up from the precomputed distance table.
        def getTotalDistance(self, thispath, state):
            totalDis = 0
            for i in range(len(thispath) - 1):
                tx, ty = thispath[i]
                p1 = tx + ty*30
                tx, ty = thispath[i+1]
                p2 = tx + ty*30
                totalDis += self.m[p1][p2]
            return totalDis
    
        # Exact tour length via repeated BFS; slow, kept only as a sanity check.
        def brutalDis(self, thispath, state):
            totalDis = 0
            for i in range(len(thispath) - 1):
                totalDis += mazeDistance(thispath[i],thispath[i+1],state)
            return totalDis
    
        def getAction(self, state):
            # Follow the precomputed tour: once the current waypoint is
            # reached, advance to the next one, then take the first BFS step
            # toward it.
            curPos = state.getPacmanPosition()
            if self.path[self.mark] == curPos:
                self.mark += 1
            nextpos = self.path[self.mark]
            prob = PositionSearchProblem(state, start=curPos, goal=nextpos, warn=False, visualize=False)
            move = search.bfs(prob)[0]
            self.cost += 1
            return move
            
    def mazeDistance(point1, point2, gameState):
        """
        Returns the maze distance between any two points, using the search
        functions you have already built.  The gameState can be any game state --
        Pacman's position in that state is ignored.

        Example usage: mazeDistance( (2,4), (5,6), gameState)

        This might be a useful helper function for your ApproximateSearchAgent.
        """
        x1, y1 = point1
        x2, y2 = point2
        walls = gameState.getWalls()
        assert not walls[x1][y1], 'point1 is a wall: ' + str(point1)
        assert not walls[x2][y2], 'point2 is a wall: ' + str(point2)
        prob = PositionSearchProblem(gameState, start=point1, goal=point2, warn=False, visualize=False)
        return len(search.bfs(prob))
    

     

  • pacman: a Pac-Man game for codenjoy

    2021-05-17 10:35:43
    Build your own Codenjoy game. Codenjoy is a developer framework from CodingDojo, aimed at fun team-building events and/or coding practice; you can write another game of your own on top of it. Setting up the dev environment: all you need to develop a game is jdk7, maven3, a git client and Idea ...
  • An implementation of the SearchAgent algorithms for project1 of the AI Pac-Man assignment
  • Pacman - artificial intelligence labs

    2020-11-21 16:38:27
    AI lab coursework in three parts: search, multiAgent and reinforcement.
  • Unity project - Pac-Man

    2019-09-26 07:03:07

    Project showcase

    [screenshots: gameplay still and demo GIF]

    GitHub repo: Pacman
    Playable download: Pacman (extraction code: brkv)

    Topics covered

    • Sprite slicing and Animation clips
    • Animator state machine setup, Any State transitions, override controllers
    • Key input driving whole-unit rigidbody movement
    • Raycast-based collision checks
    • Render-order issues
    • Single- and multi-route enemy paths
    • Coroutine delays
    • Button UI hookup

    Preparation

    Pixels Per Unit: how many pixels correspond to one Unity unit; the Maze sprite is 232x256.

    Pivot: the sprite's origin point; set it to Bottom Left.

    Physics: for the walls, use Import Package -> Custom Package to bring in walls that already have their colliders configured.

    Slicing the Pacman sheet into animation frames: set Sprite Mode -> Multiple and Pixels Per Unit = 8, then open the Sprite Editor. Choose Slice with Type set to Grid By Cell Count and slice 3 rows x 4 columns; after Apply, the 12 sliced frames appear under the Pacman sprite.

    Building the clips: every 3 of the 12 frames form one action - right, left, up and down. Drag each group of 3 into the Hierarchy and save it under the Animations folder with its own name. The Animations folder then contains 4 animation clips (each save turns 3 frames into one clip) plus 4 animator controllers (only one is needed; the rest can be deleted).

    Initial animator setup

    State machine: add an Animator component to Pacman and assign the controller kept above. Opening the Animator window shows the 4 clips; by default, entering the state machine transitions to PacmanRight. Drag the other 3 states into the Animator window as well.

    Movement analysis: Pacman only moves along x and y; while a direction key is held, he moves 1 unit every 0.3 s.

    Any State: wire transitions from Any State to all 4 states. Clicking a transition shows its meaning: from any state, as soon as the transition's condition is met, switch to the target state.

    Any State conditions: in Parameters, add float values DirX and DirY for the tests (held keys produce floats). For PacmanRight, for example, test DirX > 0.1 (floats are imprecise, so leave some margin). Also untick Can Transition To Self in the transition Settings (to avoid frame-rate issues and restarting the playing clip), and set the 2D blend duration to 0.

    Tip: a clip's speed can be adjusted to change how fast that action plays.

    Pacman

    Making Pacman physical

    • Add a Circle Collider 2D, add a Rigidbody2D, and set its gravity to 0.

    Ways to move Pacman

    1. Change transform directly - teleporting by rewriting coordinates, mostly used for spawn positions.
    2. Move the Rigidbody2D - physics-driven movement, the recommended option.

    Implementing the movement (a complete sketch follows the Valid() helper below):

    1. Call Vector2.MoveTowards(transform.position, dest, speed), which returns an intermediate point between the start and the target; store it in temp, then move the rigidbody with GetComponent<Rigidbody2D>().MovePosition(temp);
      • Vector2.MoveTowards(transform.position, dest, speed): move from the start transform.position toward the target dest at the float speed, returning the point in between.
    2. Initially transform.position == dest, so nothing moves; key input has to change dest:
      • Input.GetKey(KeyCode.UpArrow) or Input.GetKey(KeyCode.W) reads the key
      • then assign dest = (Vector2)transform.position + Vector2.up;

    Per-unit movement: (Vector2)transform.position + Vector2.up means the current position plus one unit upward; each time the direction keys are read, destination = current position + one unit in that direction.

    Problem 1: moving like this makes Pacman rotate.
    Cause: collisions with walls change Pacman's Z rotation.
    Fix: freeze it - Rigidbody2D -> Constraints -> Freeze Rotation Z.

    Problem 2: movement between lanes easily jams and becomes irregular.
    Cause: dest changes on every key press, so temp = Vector2.MoveTowards(...) keeps changing mid-move.
    Fix: only read a new key once the previous dest has been reached: if ((Vector2)transform.position == dest)

    Problem 3: after the first wall hit, Pacman can no longer move.
    Cause: at a wall, the key press puts dest beyond the wall; transform.position == dest can never become true again, so keys are never read.
    Fix: check that the destination is legal first:

    //check whether the destination is legal; dir is the direction value (the Vector2.XXX above)
    private bool Valid(Vector2 dir)
    {
        //pos holds the current position (a legal spot inside the maze)
        Vector2 pos = transform.position;

        //cast a line from pos + dir back to Pacman's current position pos
        RaycastHit2D hit = Physics2D.Linecast(pos + dir, pos);

        //is the collider the line hits Pacman's own collider?
        //if the line starts inside a wall (an illegal spot), hit.collider is the wall's, not Pacman's: return false
        //if it starts on open ground (a legal spot), hit.collider is Pacman's: return true
        return (hit.collider == GetComponent<Collider2D>());
    }
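
    Putting the pieces together, here is a minimal sketch of the whole movement script under the rules above; the class name PacmanMove and the speed value are illustrative assumptions, while dest, temp and Valid() follow the text:

    using UnityEngine;

    public class PacmanMove : MonoBehaviour
    {
        public float speed = 0.3f;      // assumed value; tune to taste
        private Vector2 dest;

        void Start()
        {
            dest = transform.position;  // start at rest: position == dest
        }

        void FixedUpdate()
        {
            // one physics step toward the current destination
            Vector2 temp = Vector2.MoveTowards(transform.position, dest, speed);
            GetComponent<Rigidbody2D>().MovePosition(temp);

            // fix for problem 2: accept new input only on arrival
            if ((Vector2)transform.position == dest)
            {
                // fix for problem 3: only accept legal destinations
                if (Input.GetKey(KeyCode.UpArrow) && Valid(Vector2.up))
                    dest = (Vector2)transform.position + Vector2.up;
                if (Input.GetKey(KeyCode.DownArrow) && Valid(Vector2.down))
                    dest = (Vector2)transform.position + Vector2.down;
                if (Input.GetKey(KeyCode.LeftArrow) && Valid(Vector2.left))
                    dest = (Vector2)transform.position + Vector2.left;
                if (Input.GetKey(KeyCode.RightArrow) && Valid(Vector2.right))
                    dest = (Vector2)transform.position + Vector2.right;
            }
        }

        private bool Valid(Vector2 dir)
        {
            Vector2 pos = transform.position;
            RaycastHit2D hit = Physics2D.Linecast(pos + dir, pos);
            return (hit.collider == GetComponent<Collider2D>());
        }
    }

    Remember that the fix for problem 1 (Freeze Rotation Z) is an Inspector setting, not code.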

    Switching the state machine

    Driving the different animation states:

    • Get the movement direction: Vector2 dir = dest - (Vector2)transform.position;
    • Feed it to the animator: GetComponent<Animator>().SetFloat("DirX", dir.x);

    [screenshot: animator parameters]
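
    In practice both parameters are updated together each physics step; a small fragment (placing it right after the MovePosition call in the sketch above is an assumption consistent with the text):

    // inside PacmanMove.FixedUpdate(), after the MovePosition call:
    Vector2 dir = dest - (Vector2)transform.position;
    GetComponent<Animator>().SetFloat("DirX", dir.x);
    GetComponent<Animator>().SetFloat("DirY", dir.y);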

    2D Z-axis issues: sprites on different Z levels stop colliding; sprites on the same level raise the render-order question of who covers whom.

    Render order: Sprite Renderer -> Order in Layer

    • Lower values render first, higher values render later: Maze 0, pacdot 1, enemies 2-5, Pacman 6
    • Lower values are covered, higher values cover: what renders first sits on the bottom layer, what renders later sits on top (like layers in a slide deck)

    [screenshots: Order in Layer settings]

    Dots and enemies: creation and movement

    Dots:

    1. Pacdot is the dot sprite; drag it into the scene to create the object.
    2. Add a Box Collider 2D and mark it as a trigger.
    3. Attach a Pacdot.cs script to every dot (see the sketch below).
      • The script's OnTriggerEnter2D(Collider2D collision) callback checks whether the object touching the dot is Pacman, and if so destroys the dot.
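
    A minimal sketch of that script; identifying Pacman by object name is an assumption (a tag check works just as well):

    using UnityEngine;

    public class Pacdot : MonoBehaviour
    {
        private void OnTriggerEnter2D(Collider2D collision)
        {
            // only Pacman may eat the dot
            if (collision.gameObject.name == "Pacman")
            {
                Destroy(gameObject);
            }
        }
    }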

    Creating the enemies:

    1. Repeat the slicing, clip creation, layer setup and placement.
    2. For the state machine, use an override controller:
      • In the Animations folder, Create -> Animator Override Controller, and point its Controller reference at Pacman's. Original shows Pacman's clips; fill Override with each enemy's own clips. (Note: deleting the original controller leaves the object's Animator without one, so assign the override controller to it.)
    3. Add a Rigidbody2D and a Circle Collider 2D (set as a Trigger this time, not a plain collider; radius 0.8).

    [screenshots: override controller and enemy collider setup]

    Enemy movement (single route):

    1. Build a closed route whose points match dot coordinates and whose first and last points coincide (empty GameObjects work as path points); store them together under a "way" object as one route.
    2. Write EnemyMove.cs (sketched below):
      • Keep the path points in an array Transform[] wayPoint used as a circular queue, plus an index marking which point the enemy is heading for;
      • In FixedUpdate(), if the enemy has not reached the target point, keep moving toward it (the same as Pacman's movement, but without input detection); on arrival, index++ and head for the next point. Animation state detection and switching are also the same as Pacman's;
      • In the trigger callback, if the touching object is Pacman, destroy Pacman.
    3. In the Unity editor, drag all the path points onto the script's Way Point field to fill the array.

    [screenshot: waypoint assignment]
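
    A minimal sketch of that script under the description above; the speed value is an assumption, and the animator lines are omitted for brevity:

    using UnityEngine;

    public class EnemyMove : MonoBehaviour
    {
        public Transform[] wayPoint;   // closed route, assigned in the Inspector
        public float speed = 0.1f;     // assumed value
        private int index = 0;

        void FixedUpdate()
        {
            // head for the current path point; on arrival, wrap to the next one
            Vector2 dest = wayPoint[index].position;
            Vector2 temp = Vector2.MoveTowards(transform.position, dest, speed);
            GetComponent<Rigidbody2D>().MovePosition(temp);
            if ((Vector2)transform.position == dest)
                index = (index + 1) % wayPoint.Length;
        }

        private void OnTriggerEnter2D(Collider2D collision)
        {
            // an ordinary ghost eliminates Pacman on contact
            if (collision.gameObject.name == "Pacman")
                Destroy(collision.gameObject);
        }
    }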

    Enemy movement (multiple routes): the advanced approach would be AI; this project instead assigns one of several fixed routes at random.

    1. Create a wayPointsGo object to receive the route prefab Way.
    2. Create a wayPoints list; in Start(), foreach over wayPointsGo's children, take each child's Transform t and append t.position to wayPoints in order (FixedUpdate() must change accordingly: wayPoints[index] is now a stored position, so drop the .position access, and wayPoints.Length becomes wayPoints.Count). This reproduces a single route;
    3. Change wayPointsGo in EnemyMove.cs into an array so several routes can be stored;
    4. Following the way the Way prefab was built, make several more route prefabs.

    private void LoadApath(GameObject go)
    {
        //take each path point (child Transform) of the chosen route and append its position to wayPoints
        //(the original version indexed wayPointsGo[Random.Range(0, 4)] here directly, ignoring the go
        // parameter; using go lets the caller pick the route, which the random-assignment fix below needs)
        foreach (Transform t in go.transform)
        {
            wayPoints.Add(t.position);
        }
    }

    • The LoadApath function: first clear the previous route's leftovers from the List, then foreach() the route's points into it; in Start(), pick one route at random and pass it in (see the sketch below). Note that Random.Range's integer overload excludes its upper bound, so Random.Range(0, 4) draws from the four routes 0-3.
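
    A self-contained fragment showing how the multi-route fields and Start() fit together; the class name EnemyMoveMultiRoute is illustrative (the text keeps the name EnemyMove), and the movement and trigger logic stay as in the single-route sketch, with wayPoints[index] in place of wayPoint[index].position:

    using System.Collections.Generic;
    using UnityEngine;

    public class EnemyMoveMultiRoute : MonoBehaviour
    {
        public GameObject[] wayPointsGo;                        // one object per route
        private List<Vector3> wayPoints = new List<Vector3>();

        void Start()
        {
            // pick one of the stored routes at random and load its points
            LoadApath(wayPointsGo[Random.Range(0, wayPointsGo.Length)]);
        }

        private void LoadApath(GameObject go)
        {
            wayPoints.Clear();                                  // drop any previous route (see problem 1 below)
            foreach (Transform t in go.transform)
                wayPoints.Add(t.position);
        }
    }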

    Problem 1: every ghost leaves the pen through the same point as Blinky (the red enemy).
    Cause: when the prefabs way1, way2, ..., wayn were built, every route's first and last points sit 3 units above Blinky, so the other enemies begin by clipping through walls to that point.
    Fix: in EnemyMove.cs, add a position field startPos holding this enemy's own first/last route point; initialize it in Start() as the enemy's spawn position plus 3 units up; then in the foreach() insert that point at the head of the List and append it at the tail. Remember that every LoadApath() call must clear the previous route first:

    //clear the previous route's points from the List
    wayPoints.Clear();

    //add the first and last path points to the List
    wayPoints.Insert(0, startPos);
    wayPoints.Add(startPos);

    Problem 2: even with random routes, different ghosts can pick the same one.
    Cause: each enemy calls Random.Range(0, n) independently, so duplicate draws are possible.
    Fix: add a GameManager.cs that deals out the route indices without repetition:

    public class GameManager : MonoBehaviour
    {
        private static GameManager _instance;
        public static GameManager Instance
        {
            get
            {
                return _instance;
            }
        }

        public List<int> usingIndex = new List<int>();
        public List<int> rawIndex = new List<int> { 0, 1, 2, 3 };

        private void Awake()
        {
            _instance = this;
            int tempCount = rawIndex.Count;

            // shuffle: repeatedly draw from rawIndex without replacement,
            // so usingIndex becomes a random permutation of 0..3
            for (int i = 0; i < tempCount; i++)
            {
                int tempIndex = Random.Range(0, rawIndex.Count);
                usingIndex.Add(rawIndex[tempIndex]);
                rawIndex.RemoveAt(tempIndex);
            }
        }
    }

    //then, in EnemyMove's Start(), each enemy takes the route its render order assigns it
    //(enemies use Order in Layer 2-5, hence the sortingOrder - 2 offset):
    LoadApath(wayPointsGo[GameManager.Instance.usingIndex[GetComponent<SpriteRenderer>().sortingOrder - 2]]);

    The power pellet

    Spawning the power pellet:

    • In GameManager.cs:
      1. Create pacdotGos and store every dot in it with a foreach();
      2. Spawn the power pellet in CreateSuperPacdot();
      3. Add a bool isSuperPacman (initially false);
    • In Pacdot.cs:
      • Adjust the trigger test to branch on the super state: if (super-Pacman state) { ... } else handle the dot normally;
    • In EnemyMove.cs:
      • Adjust the trigger test: if (the Pacman touched is super) { ... } else eliminate Pacman

    What the power pellet grants - a super-Pacman state in which enemies freeze and can be eaten (a sketch of the whole flow follows the coroutine notes below):

    • Pacman's super state: OnEatSuperPacdot()
      • GameManager.cs holds the bool isSuperPacman marking whether the super state is active
      • Eating the power pellet calls this function, which flips the flag: isSuperPacman = true
      • It calls the enemy-freezing function FreezeEnemy() (explained below)
      • The super state lasts 4 s via a coroutine delay: StartCoroutine(Recover()); (explained below)
      • When the 4 s are up, the state is cleared (also inside the coroutine Recover())
    • Freezing the enemies: FreezeEnemy()
      • Blinky.GetComponent<EnemyMove>().enabled = false; disables the ghost's movement script (its update loop)
      • Blinky.GetComponent<SpriteRenderer>().color = new Color(0.7f, 0.7f, 0.7f, 0.7f); dims the ghost's sprite
    • Eating an enemy:
      • EnemyMove.cs checks in its trigger callback: if the Pacman it touched is super (GameManager.Instance.isSuperPacman), the ghost returns to its spawn point

    Coroutine delay:

    • What the coroutine does: when OnEatSuperPacdot() switches on the super state, it starts StartCoroutine(Recover()); the coroutine runs in parallel with OnEatSuperPacdot() rather than blocking it
    • Effect: timing starts the instant the power pellet is eaten; when the timer ends, enemy freezing is undone via Dis_FreezeEnemy() and Pacman's state is restored with isSuperPacman = false
    • The delay itself: yield return new WaitForSeconds(4f);
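
    A minimal sketch of this flow in one place. The enemies array and the body of Dis_FreezeEnemy() are assumptions (the text does not show them; Dis_FreezeEnemy here simply reverses FreezeEnemy), and the partial keyword marks this as a fragment of the GameManager class shown earlier (which would also need partial for both to compile together):

    using System.Collections;
    using UnityEngine;

    public partial class GameManager : MonoBehaviour
    {
        public bool isSuperPacman = false;
        public GameObject[] enemies;                 // Blinky etc., assigned in the Inspector

        public void OnEatSuperPacdot()
        {
            isSuperPacman = true;
            FreezeEnemy();
            StartCoroutine(Recover());               // runs alongside, does not block
        }

        private void FreezeEnemy()
        {
            foreach (GameObject enemy in enemies)
            {
                enemy.GetComponent<EnemyMove>().enabled = false;  // stop patrolling
                enemy.GetComponent<SpriteRenderer>().color = new Color(0.7f, 0.7f, 0.7f, 0.7f); // dim the sprite
            }
        }

        private void Dis_FreezeEnemy()
        {
            foreach (GameObject enemy in enemies)
            {
                enemy.GetComponent<EnemyMove>().enabled = true;
                enemy.GetComponent<SpriteRenderer>().color = Color.white;
            }
        }

        private IEnumerator Recover()
        {
            yield return new WaitForSeconds(4f);     // hold the super state for 4 s
            Dis_FreezeEnemy();
            isSuperPacman = false;
        }
    }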

    The eating flow in short:

    • A while after the game starts, a power pellet spawns
    • When a dot is eaten it is removed from the list and its object destroyed; after a 10 s delay the next power pellet is prepared, while the super state flips on (isSuperPacman = true) - the two run in parallel without interfering
    • If Pacman eats an enemy during the super state, that enemy returns to its start position
    • After 4 s of the super state, enemy freezing is lifted and the recovery function runs

    UI design

    Start and Exit:

    1. Create UI -> Canvas as the UI workspace;
    2. Add an Image as the logo; create an empty object StartPanel holding two UI -> Text items, start and exit; adjust fonts and positions
    3. Create an empty object GamePanel holding three UI -> Text items: remain, eaten and score; adjust fonts
    4. The 3-2-1 countdown animation: slice the Start sprite sheet as before, build the clip, and set each frame's interval to 1 s (Animation -> Samples)
    5. GameManager.cs holds references to the UI panels, the animation and the audio

    Wiring Button jumps:

    1. Add a Button (Script) component to the Start and Exit UI items and set Target Graphic to Start / Exit respectively
    2. Add OnStartButton() and OnExitButton() in GameManager.cs to back the buttons, as in the code below:
    3. Enable the buttons: on each UI item add On Click -> GameManager -> OnStartButton to hook up the function
    4. Notes: buttons and all other UI must live inside the Canvas; below the Canvas sits a Panel; below that sit the individual UI widgets

    [screenshot: Button OnClick wiring]
    //when Start is clicked
    public void OnStartButton()
    {
        //runs in parallel with the click handling
        StartCoroutine(PlayStartCountDown());

        //play the start sound, with the audio source at the origin
        AudioSource.PlayClipAtPoint(startClip, Vector3.zero);

        //hide the start panel
        startPanel.SetActive(false);
    }


    //clicking Exit quits the game
    public void OnExitButton()
    {
        Application.Quit();
    }

    Miscellaneous

    Opening a web page:
    Application.OpenURL("https://www.cnblogs.com/SouthBegonia/"); does the job; it can be hooked to a Button or to any other trigger event.

    Reposted from: https://www.cnblogs.com/SouthBegonia/p/10927598.html

  • Pacman AI agents - AI agents that play Pac-Man. The goal of this work is to develop an agent that can skillfully play Pacman, the arcade game popular in the last century. Several agents compete for the top score against the ghosts implemented by the professor and other contributors. ...
  • Berkeley AI Pac-Man assignment in Python, with a complete project source tree attached, including an Eclipse-python workspace note (Eclipse-python工作环境.txt)
  • Pacman: a simple 2D Pac-Man

    2021-04-05 00:59:01
    A simple 2D Pac-Man. All maps are initialized from points in InitPathmap; pointers to shared pointers; a constants class; inline compilation with C++17; a component system for sprites so they need not be defined in entity subclasses; an EntityCollection added for entity management...
  • Our AI course final project: a Pac-Man implementation, tested at full marks. The code is commented and easy to follow - feel free to download it and learn from it; it took a long time to track down, so it is shared here in the hope it helps.
  • A hands-on walkthrough of the classic reinforcement-learning Pacman project (Python 2.7), training with Q-Learning and covering the practical details: how to design the Reward function, how to update the Q-Value for each (State, Action) pair, and more...
  • Source code and solutions for the Berkeley AI course Pac-Man assignments.
