  • An effect similar to the Alipay AR scan animation demo; the color of the scan border and the rotation speed of the animation can be customized
  • A detailed walkthrough of implementing the Alipay AR scan animation effect on Android, with some reference value for interested readers
  • We wanted the tools and workflows around AR blend shape animation to be easier to use and more accessible than current motion-capture methods. With Facial Remote we could build tooling to iterate on blend shapes in the editor without creating a new build just to check mesh changes on the phone. ...

    With the release of ARKit and the iPhone X paired with Unity, developers have an easy-to-use set of tools to create beautiful and expressive characters. This opens up exploring the magic of real-time puppeteering for the upcoming “Windup” animated short, directed by Yibing Jiang.


    Unity Labs and the team behind “Windup” have come together to see how far we could push Unity’s ability to capture facial animation in real time on a cinematic character. We also enlisted the help of Roja Huchez of Beast House FX for modeling and rigging of the blend shapes to help bring the character expressions to life.


    What the team created is Facial AR Remote, a low-overhead way to capture performance using a connected device directly into the Unity editor. We found using the Remote’s workflow is useful not just for animation authoring, but also for character and blend shape modeling and rigging, creating a streamlined way to build your own animoji or memoji type interactions in Unity. This allows developers to be able to iterate on the model in the editor without needing to build to the device, removing time-consuming steps in the process.


    Why build the Facial AR Remote

    We saw an opportunity to build new animation tools for film projects opening up a future of real-time animation in Unity. There was also a “cool factor” in using AR tools for authoring and an opportunity to continue to push Unity’s real-time rendering. As soon as we had the basics working with data coming from the phone to the editor, our team and everyone around our desks could not stop having fun puppeteering our character. We saw huge potential for this kind of technology. What started as an experiment quickly proved itself both fun and useful. The project quickly expanded into the current Facial AR Remote and feature set.


    The team set out expanding the project with Unity’s goal of democratizing development in mind. We wanted the tools and workflows around AR blend shape animation to be easier to use and more accessible than traditional motion-capture methods. The Facial Remote let us build out tooling for iterating on blend shapes within the editor without needing to create a new build just to check mesh changes on the phone. This means a user can capture an actor’s face and record it in Unity, and that capture can serve as a fixed point for iterating on the character model or re-targeting the animation to another character without redoing capture sessions with the actor. We found this workflow very useful for dialing in expressions on our character and refining the individual blend shapes.


    How the Facial AR Remote works

    The remote is made up of a client phone app, with a stream reader acting as the server in Unity’s editor. The client is a light app that makes use of the latest additions to ARKit and sends that data over the network to the Network Stream Source on the Stream Reader GameObject. Using a simple TCP/IP socket and fixed-size byte stream, we send every frame of blendshape, camera and head pose data from the device to the editor. The editor then decodes the stream and updates the rigged character in real time. To smooth out jitter due to network latency, the stream reader keeps a tunable buffer of historic frames for when the editor inevitably lags behind the phone. We found this to be a crucial feature for preserving a smooth look on the preview character while staying as close as possible to the real actor’s current pose. In poor network conditions, the preview will sometimes drop frames to catch up, but all data is still recorded with the original timestamps from the device.

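A wire format like the one described, fixed-size frames of floats over a TCP socket, can be decoded with plain buffer arithmetic. Below is a minimal sketch; the exact layout (52 ARKit blendshape coefficients followed by a 7-float head pose, little-endian) is an assumption for illustration, not the Remote's actual frame format.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class FrameDecoder {
    // Hypothetical layout: 52 blendshape floats, then 7 pose floats
    // (position xyz + rotation quaternion), all little-endian.
    static final int BLENDSHAPE_COUNT = 52;
    static final int POSE_FLOATS = 7;
    static final int FRAME_BYTES = (BLENDSHAPE_COUNT + POSE_FLOATS) * Float.BYTES;

    // Pull the blendshape coefficients out of one complete frame.
    public static float[] decodeBlendshapes(byte[] frame) {
        ByteBuffer buf = ByteBuffer.wrap(frame).order(ByteOrder.LITTLE_ENDIAN);
        float[] weights = new float[BLENDSHAPE_COUNT];
        for (int i = 0; i < BLENDSHAPE_COUNT; i++) {
            weights[i] = buf.getFloat();
        }
        return weights;
    }

    public static void main(String[] args) {
        // Round trip: encode a fake frame, then decode it.
        ByteBuffer buf = ByteBuffer.allocate(FRAME_BYTES).order(ByteOrder.LITTLE_ENDIAN);
        for (int i = 0; i < BLENDSHAPE_COUNT + POSE_FLOATS; i++) {
            buf.putFloat(i * 0.5f);
        }
        float[] w = decodeBlendshapes(buf.array());
        System.out.println(w.length + " " + w[2]); // 52 1.0
    }
}
```

Because the frames are fixed-size, the reader can buffer incoming bytes and slice complete frames off the front, which is also what makes a tunable buffer of historic frames straightforward to keep.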

    On the editor side, we use the stream data to drive the character for preview as well as for baking animation clips. Since we save the raw stream from the phone to disk, we can continue to play back this data on a character as we refine the blend shapes. And since the saved data is just a raw stream from the phone, we can even re-target the motion to different characters. Once you have captured a stream you’re happy with, you can bake it to an animation clip on a character. This is great since you can use that clip like any other animation in Unity to drive a character through Mecanim, Timeline or any of the other ways animation is used.


    The Windup animation demo

    With the Windup rendering tech demo previously completed, the team was able to use those high-quality assets to start our animation exploration. Since we got a baseline up and running rather quickly, we had a lot of time to iterate on the blend shapes using the tools we were developing. Jitter, smoothing and shape tuning quickly became the major areas of focus for the project. Jitter was reduced by working out the connection between frame rate and lag in frame processing, and by removing camera movement from playback. Removing the ability to move the camera focused users on capturing the blend shapes, and let us mount the phone in a stand.


    Understanding the blend shapes and getting the most out of the blend shape anchors in ARKit is what required the most iteration. It is difficult to understand the minutiae of the different shapes from the documentation. So much of the final expression comes from the stylization of the character and how the shapes combine in expected ways. We found that shapes like the eye/cheek squint and mouth stretch were improved by limiting the influence of the blend shape changes to specific areas of the face. For example, the cheek squint should have little to no effect on the lower eyelid, and the lower eyelid in the squint should have little to no effect on the cheek. It also did not help that we initially missed that the mouthClosed shape is a corrective pose that brings the lips closed while the jawOpen shape is at 100%.

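As a toy illustration of that corrective relationship: additive blendshapes sum weighted deltas onto the neutral mesh, and a corrective shape like mouthClosed only contributes in proportion to the driver it corrects (jawOpen). The scaling-by-driver rule below sketches the idea and is not ARKit's actual evaluation.

```java
public class BlendCombine {
    // Additive blendshapes: result = neutral + sum(weight_i * delta_i).
    public static float[] evaluate(float[] neutral, float[][] deltas, float[] weights) {
        float[] out = neutral.clone();
        for (int s = 0; s < deltas.length; s++)
            for (int v = 0; v < out.length; v++)
                out[v] += weights[s] * deltas[s][v];
        return out;
    }

    public static void main(String[] args) {
        float[] neutral = {0f, 0f};           // two toy "vertex" coordinates
        float[][] deltas = {{1f, 0f},         // jawOpen: moves the jaw down
                            {0f, -1f}};       // mouthClosed: corrective, pulls lips together
        float jawOpen = 1f, mouthClosed = 0.5f;
        // Scale the corrective shape by the driver it corrects:
        float[] w = {jawOpen, mouthClosed * jawOpen};
        float[] pose = evaluate(neutral, deltas, w);
        System.out.println(pose[0] + " " + pose[1]); // 1.0 -0.5
    }
}
```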

    Using information from the Skinned Mesh Renderer to look at the values that made up an expression on any frame, then under- or over-driving those values, really helped to dial in the blend shapes. We could quickly over- or under-drive the current blend shapes and determine whether any needed to be modified, and by how much. This helped with one of the hardest tasks: getting the character into a key pose, like the way we wanted the little girl to smile. It really helped to see which shapes make up a given pose; in this case it was the amount of left and right mouth stretch working with the smile that gave the final shape. We found it helps to think of the shapes the phone provides as little building blocks, not as face poses a human could make in isolation.

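Under- and over-driving a coefficient can be sketched as multiplying the captured weight by a gain and clamping the result back into the valid range (the function name and the 0..1 convention here are illustrative, not a Unity API):

```java
public class BlendShapeTuner {
    // Scale a captured coefficient (0..1) by a gain and clamp the result.
    // gain > 1 over-drives the shape, gain < 1 under-drives it.
    public static float drive(float weight, float gain) {
        float v = weight * gain;
        return Math.max(0f, Math.min(1f, v));
    }

    public static void main(String[] args) {
        System.out.println(drive(0.9f, 1.5f)); // clamps: prints 1.0
        System.out.println(drive(0.8f, 0.5f)); // under-driven: prints 0.4
    }
}
```

Sweeping the gain up and down while watching the mesh is exactly the "over- or under-drive and see what needs modifying" loop described above.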

    At the very end of art production on the demo, we wanted to try an experiment to improve some of the animation on the character. Armed with our collective understanding of the blend shapes from ARKit, we tried modifying the base neutral pose of the character. Due to the stylization of the little girl, we felt the base pose had the eyes too wide and a little too much smile. This left too small a delta between eyes-wide and the base pose, and too wide a delta between base and closed. The effect of the squint blend shapes also needed to be better accounted for: for the people we tested on, the squint turns out to sit at roughly 60-70% whenever someone closes their eyes. The change to the neutral pose paid off and, along with all the other work, makes for the expressive and dynamic character you see in the demo.


    The future

    Combining Facial AR Remote with the rest of the tools in Unity, there is no limit to the amazing animations you can create! Soon anyone will be able to puppeteer digital characters, be it kids acting out and recording their favorite characters to share with friends and family, game streamers adding extra life to their avatars, or professionals and hobbyists finding new avenues for making animated content for broadcast. Get started by downloading Unity 2018 and checking out the setup instructions on Facial AR Remote’s github. The team and the rest of Unity look forward to the artistic and creative uses of Facial AR Remote our users will create.


    Translated from: https://blogs.unity3d.com/2018/08/13/facial-ar-remote-animating-with-ar/

  • Implementing the Alipay AR scan animation effect

    Posted 2018-05-25 09:49:45

    Implementing the Alipay AR scan animation effect

    A while back a reader asked for a demo of the Alipay scan animation, so I spent some time building it. First, the result:
    (effect screenshot)
    The idea: the outermost blue ring is drawn as two arcs 180° apart; inside it are two red arcs, then a red circle; the innermost white ring is made of four evenly spaced white arcs. In short, it is a simple stack of shapes. The rotation is animated by advancing each arc's start angle on every draw. The red scan line is a gradient bitmap, translated vertically to produce the sweep.
    The code for this custom View:

    package cn.com.hadon.scanner;
    
    import android.content.Context;
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.graphics.Rect;
    import android.graphics.RectF;
    import android.support.annotation.Nullable;
    import android.util.AttributeSet;
    import android.view.View;
    
    /**
     * Created by Mr.Wang on 2017/5/8.
     */
    
    public class ScanView extends View {
        private Paint blueCirclePaint;  //paint for the blue arcs
        private Paint redCirclePaint;   //paint for the red circle and arcs
        private Paint whiteCirclePaint; //paint for the white arcs
    
        public static final int STATE_READY = 1;
        public static final int STATE_SCANING = 2;
        public static final int STATE_SUCCESS = 3;
    
        //stroke widths of the rings
        private static final int BLUE_CIRCLE_BORDER_WIDTH = 8;
        private static final int INSIDER_RED_CIRCLE_BORDER_WIDTH = 20;
        private static final int OUTSIDER_CIRCLE_BORDER_WIDTH = 20;
        private static final int WHITE_CIRCLE_BORDER_WIDTH = 20;
    
        private int minLength;//diameter of the largest centered circle
        private int radius;//radius of the largest centered circle
        private int centerX;//X coordinate of the center
        private int centerY;//Y coordinate of the center

        private Bitmap scanerbitmap;//gradient bitmap for the scan line
        private int curState = STATE_SCANING;//initial state
    
        public ScanView(Context context, @Nullable AttributeSet attrs) {
            super(context, attrs);
            //initialize the paints and the scan-line bitmap
            scanerbitmap = BitmapFactory.decodeResource(getResources(), R.mipmap.scaner);
            blueCirclePaint = new Paint();
            redCirclePaint = new Paint();
            whiteCirclePaint = new Paint();
    
            blueCirclePaint.setColor(Color.BLUE);
            blueCirclePaint.setAntiAlias(true);
            blueCirclePaint.setStyle(Paint.Style.STROKE);
            blueCirclePaint.setStrokeWidth(BLUE_CIRCLE_BORDER_WIDTH);
    
            redCirclePaint.setColor(Color.RED);
            redCirclePaint.setAntiAlias(true);
            redCirclePaint.setStyle(Paint.Style.STROKE);
            redCirclePaint.setStrokeWidth(INSIDER_RED_CIRCLE_BORDER_WIDTH);
    
            whiteCirclePaint.setColor(Color.WHITE);
            whiteCirclePaint.setAntiAlias(true);
            whiteCirclePaint.setStyle(Paint.Style.STROKE);
            whiteCirclePaint.setStrokeWidth(WHITE_CIRCLE_BORDER_WIDTH);
    
        }
    
        @Override
        protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
            super.onMeasure(widthMeasureSpec, heightMeasureSpec);
            int width = MeasureSpec.getSize(widthMeasureSpec);
            int height = MeasureSpec.getSize(heightMeasureSpec);
            centerX = width / 2;
            centerY = height / 2;
            //cache the diameter, radius and center point for later drawing
            minLength = Math.min(width, height);
            radius = minLength / 2;
    
        }
    
    
        /**
         * Public setter for the current state.
         * @param state one of STATE_READY, STATE_SCANING, STATE_SUCCESS
         */
        public void setState(int state) {
            this.curState = state;
        }
    
    
        @Override
        protected void onDraw(Canvas canvas) {
            super.onDraw(canvas);
            canvas.drawColor(Color.TRANSPARENT);
            switch (curState) {
                case STATE_READY:
                    drawWhiteCircle(canvas);
                    drawInsiderRedCircle(canvas);
                    drawOutsiderRedCircle(canvas);
                    break;
                case STATE_SCANING:
                    drawWhiteCircle(canvas);
                    drawBlueCircle(canvas);
                    drawInsiderRedCircle(canvas);
                    drawOutsiderRedCircle(canvas);
                    drawScaner(canvas);
                    break;
                case STATE_SUCCESS:
                    drawWhiteCircle(canvas);
                    drawInsiderRedCircle(canvas);
                    break;
            }
            updateValues(); //advance the animation variables
            invalidate();   //schedule the next frame (a crude but simple animation loop)
        }
    
    
        private int blueStartAngle = 0;//start angle of the blue arcs
        private int blueCircleSpace = BLUE_CIRCLE_BORDER_WIDTH;//inset of the blue arcs from the shortest edge, used to compute their radius
        private static final int BLUE_CIRCLE_SWEP_ANGLE = 20;//sweep angle of the blue arcs
    
        /**
         * Draw the two blue arcs, 180° apart.
         * @param canvas
         */
        private void drawBlueCircle(Canvas canvas) {
            canvas.drawArc(centerX - radius + blueCircleSpace, blueCircleSpace, centerX + radius - blueCircleSpace, minLength - blueCircleSpace, blueStartAngle, BLUE_CIRCLE_SWEP_ANGLE, false, blueCirclePaint);
            canvas.drawArc(centerX - radius + blueCircleSpace, blueCircleSpace, centerX + radius - blueCircleSpace, minLength - blueCircleSpace, blueStartAngle + 180, BLUE_CIRCLE_SWEP_ANGLE, false, blueCirclePaint);
        }
    
        /**
         * Advance the animation variables according to the current state.
         */
        private void updateValues() {
            switch (curState) {
                case STATE_READY:
                    if (insideRedCircleSpace >= BLUE_CIRCLE_BORDER_WIDTH + INSIDER_RED_CIRCLE_BORDER_WIDTH+ OUTSIDER_CIRCLE_BORDER_WIDTH) {
                        insideRedCircleSpace -= 2;
                    }
                    whiteStartAngle = 5 / 2;
                    outsiderRedCircleStartAndle = -OUTSIDER_RED_CIRCLE_SWEP_ANGLE / 2;
                    break;
                case STATE_SCANING:
                    if (insideRedCircleSpace >= BLUE_CIRCLE_BORDER_WIDTH + INSIDER_RED_CIRCLE_BORDER_WIDTH+ OUTSIDER_CIRCLE_BORDER_WIDTH) {
                        insideRedCircleSpace -= 2;
                    }
                    blueStartAngle += 4;
                    outsiderRedCircleStartAndle += 2;
                    if (is2Max) {
                        if (whiteStartAngle == 30) {
                            is2Max = false;
                        } else {
                            whiteStartAngle++;
                        }
                    } else {
                        if (whiteStartAngle == -30) {
                            is2Max = true;
                        } else {
                            whiteStartAngle--;
                        }
                    }
                    scanerY += 6;
                    if (scanerY > minLength) {
                        scanerY = 0;
                    }
                    break;
                case STATE_SUCCESS:
                    whiteStartAngle = 5 / 2;
                    if (insideRedCircleSpace < whiteCircleSpace + INSIDER_RED_CIRCLE_BORDER_WIDTH) {
                        insideRedCircleSpace += 2;
                    }
                    break;
            }
        }
    
        private int insideRedCircleSpace = BLUE_CIRCLE_BORDER_WIDTH + INSIDER_RED_CIRCLE_BORDER_WIDTH + OUTSIDER_CIRCLE_BORDER_WIDTH;
        private int outsiderRedCircleSpace = BLUE_CIRCLE_BORDER_WIDTH + INSIDER_RED_CIRCLE_BORDER_WIDTH + OUTSIDER_CIRCLE_BORDER_WIDTH / 2;
        private static final int OUTSIDER_RED_CIRCLE_SWEP_ANGLE = 30;
        private int outsiderRedCircleStartAndle = -OUTSIDER_RED_CIRCLE_SWEP_ANGLE / 2;
    
        /**
         * Draw the inner red circle.
         * @param canvas
         */
        private void drawInsiderRedCircle(Canvas canvas) {
            canvas.drawCircle(centerX, centerY, radius - insideRedCircleSpace, redCirclePaint);
        }
    
        /**
         * Draw the two outer red arcs.
         * @param canvas
         */
        private void drawOutsiderRedCircle(Canvas canvas) {
            canvas.drawArc(centerX - radius + outsiderRedCircleSpace, outsiderRedCircleSpace, centerX + radius - outsiderRedCircleSpace, minLength - outsiderRedCircleSpace, outsiderRedCircleStartAndle, OUTSIDER_RED_CIRCLE_SWEP_ANGLE, false, redCirclePaint);
            canvas.drawArc(centerX - radius + outsiderRedCircleSpace, outsiderRedCircleSpace, centerX + radius - outsiderRedCircleSpace, minLength - outsiderRedCircleSpace, outsiderRedCircleStartAndle + 180, OUTSIDER_RED_CIRCLE_SWEP_ANGLE, false, redCirclePaint);
        }
    
        private int whiteStartAngle = 0;
        private static final int WHITE_CIRCLE_SWEP_ANGLE = 85;
        private int whiteCircleSpace = BLUE_CIRCLE_BORDER_WIDTH + INSIDER_RED_CIRCLE_BORDER_WIDTH + OUTSIDER_CIRCLE_BORDER_WIDTH + WHITE_CIRCLE_BORDER_WIDTH;
        private RectF whiteCircleRect;
        private boolean is2Max = true;
    
        /**
         * Draw the four white arcs that make up the inner ring.
         * @param canvas
         */
        private void drawWhiteCircle(Canvas canvas) {
            if (whiteCircleRect == null) {
                whiteCircleRect = new RectF(centerX - radius + whiteCircleSpace, whiteCircleSpace, centerX + radius - whiteCircleSpace, minLength - whiteCircleSpace);
            }
    
            canvas.drawArc(whiteCircleRect, whiteStartAngle, WHITE_CIRCLE_SWEP_ANGLE, false, whiteCirclePaint);
            canvas.drawArc(whiteCircleRect, whiteStartAngle + 90, WHITE_CIRCLE_SWEP_ANGLE, false, whiteCirclePaint);
            canvas.drawArc(whiteCircleRect, whiteStartAngle + 180, WHITE_CIRCLE_SWEP_ANGLE, false, whiteCirclePaint);
            canvas.drawArc(whiteCircleRect, whiteStartAngle + 270, WHITE_CIRCLE_SWEP_ANGLE, false, whiteCirclePaint);
    
        }
    
        private int scanerY = 0;
    
        /**
         * Draw the scan-line bitmap.
         * @param canvas
         */
        private void drawScaner(Canvas canvas) {
    
            int dy = scanerY - radius; //vertical offset of the scan line from the center
            int hw = (int) Math.sqrt(radius * radius - dy * dy); //half chord width at this height
    
            Rect rect = new Rect(centerX - hw, scanerY - 10, centerX + hw, scanerY + 10);
            canvas.drawBitmap(scanerbitmap, null, rect, null);
    
        }
    }
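The half-width computed in drawScaner is the chord half-width of the circle at the scan line's height: at vertical offset dy from the center, it is sqrt(r*r - dy*dy). As a standalone function:

```java
public class Chord {
    // Half-width of a circle of radius r at a scan line y pixels from the top.
    public static int halfWidth(int radius, int y) {
        int dy = y - radius; // offset from the circle center
        return (int) Math.sqrt((double) radius * radius - (double) dy * dy);
    }

    public static void main(String[] args) {
        System.out.println(halfWidth(5, 5)); // at the center: prints 5 (full radius)
        System.out.println(halfWidth(5, 0)); // at the top edge: prints 0
    }
}
```

This is what keeps the scan line clipped to the circle instead of sweeping across the full width of the view.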
    

    My GitHub address is below

    Full demo address

    Copyright notice: this is an original article by the author and may not be reposted without permission. https://blog.csdn.net/wlj644920158/article/details/72865180
  • AR_painting: a test of an AR.js application that displays an animation over an image.
  • WebGL 3D AR VR plum blossom, skeletal animation, particle system

    Posted 2017-06-03 21:31:21

    A WebGL demo, welcome to try it:
    WebGL 3D AR VR plum blossom, skeletal animation, particle system

    https://s.h5tu.com/jinli/hs/theplumblossom/



  • Automatic looping animation. 1. Lay out waypoints: create N waypoints, create a Gizmos folder containing a PNG named Point, and add the following script to each waypoint: using UnityEngine; public class PathNode : MonoBehaviour { public PathNode P_...

    Automatic looping animation

    1. Laying out waypoints

    Create N waypoints, create a Gizmos folder containing a PNG image file named Point, and add the following script to each waypoint

    using UnityEngine;
    
    public class PathNode : MonoBehaviour {
    
        public PathNode P_Paretn;
        public PathNode P_Next;
    
        public void SetNext(PathNode node) {
            if (P_Next != null) {
                P_Next.P_Paretn = null; //detach the old successor first
            }
            P_Next = node;
            node.P_Paretn = this;
        }
    
        void OnDrawGizmos()
        {
            Gizmos.DrawIcon(this.transform.position, "Point.png");
        }
    }

    P_Paretn is the parent waypoint and P_Next the next one. Leave them unassigned if there is none

     

    2. The script controlling the dinosaur's movement animation

    using UnityEngine;
    
    public class Din_Auto : MonoBehaviour {
    
        public PathNode m_currentNode=null;
        public float speed = 20.0f;
        //movement speed
    
        public Din_Auto inst;
        public Animator V_Ani;
    
        public bool Move_Bl = true;
        //whether the dinosaur is currently walking
    
        // Use this for initialization
        void Start () {
            inst = this;
            m_currentNode = GameObject.Find("P_A_01").GetComponent<PathNode>();
            Invoke("Eating", 1.8f);
            //make the dinosaur start eating after 1.8 s
            Invoke("Moving", 8.8f);
            //resume walking after 8.8 s (once the eating clip has finished)
        }
    	
    	void Update () {
            if (Move_Bl)
            {
                RotateTo();
                MoveTo();
            }
    	}
    
        //trigger the eating animation
        public void Eating() {
            Move_Bl = false;
            V_Ani.SetTrigger("Eating");
        }
    
        public void Moving() {
            Move_Bl =true;
            V_Ani.SetTrigger("Moving");
        }
    
    
        public void RotateTo() {
            float current = transform.eulerAngles.y;
            this.transform.LookAt(m_currentNode.transform);
            Vector3 target = transform.eulerAngles;
            float next = Mathf.MoveTowardsAngle(current,target.y,20*Time.deltaTime);
            this.transform.eulerAngles = new Vector3(0,next,0);
        }
    
        public void MoveTo() {
            Vector3 pos1 = transform.position;
            Vector3 pos2 = m_currentNode.transform.position;
            float dist = Vector2.Distance(new Vector2(pos1.x,pos1.z),new Vector2(pos2.x,pos2.z));
            if (dist<2f) {
                if (m_currentNode.P_Next == null)
                {
                    Destroy(gameObject);
                }
                else {
                    m_currentNode = m_currentNode.P_Next;
                }
                
            }
            transform.Translate(new Vector3(0, 0, speed * Time.deltaTime));
            
        }
    } 

    a. From the code above, Mathf.MoveTowardsAngle(current, target.y, speed) in RotateTo() controls the turn between two waypoints

    float next = Mathf.MoveTowardsAngle(current,target.y,20*Time.deltaTime); 

    It takes the starting angle, the target angle and the turn speed. Clearly there is no turning between the first and second waypoints; only after passing the second waypoint, while turning toward the third, does the model's heading change over time

    b. Translate() inside MoveTo() applies the model's positional increment

    There are of course more precise strategies, such as computing the time needed between two waypoints and distributing the turn angle evenly, or, more realistically, turning only within a certain distance of a waypoint

    c. MoveTo() also switches to the next waypoint and checks whether the current one is the last
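The behavior of Mathf.MoveTowardsAngle is easy to reproduce outside Unity: wrap the signed angular difference into (-180, 180], then step by at most maxDelta degrees. A sketch, not Unity's actual source (edge cases such as an exact 180° difference may differ):

```java
public class AngleUtil {
    // Signed shortest difference from current to target, in (-180, 180].
    public static float deltaAngle(float current, float target) {
        float d = (target - current) % 360f;
        if (d > 180f) d -= 360f;
        if (d <= -180f) d += 360f;
        return d;
    }

    // Move current toward target by at most maxDelta degrees, the short way around.
    public static float moveTowardsAngle(float current, float target, float maxDelta) {
        float d = deltaAngle(current, target);
        if (Math.abs(d) <= maxDelta) return target;
        return current + Math.signum(d) * maxDelta;
    }

    public static void main(String[] args) {
        System.out.println(moveTowardsAngle(350f, 10f, 5f));  // crosses 360: prints 355.0
        System.out.println(moveTowardsAngle(350f, 10f, 30f)); // within reach: prints 10.0
    }
}
```

The wrap-around is why the dinosaur never spins the long way round when a waypoint sits just across the 0°/360° boundary.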

     

    3. Spawning dinosaurs

    Attach the following script to the first waypoint; every N seconds it spawns a new dinosaur

    More precise control strategies are left for you to try

    using UnityEngine;
    
    public class Ins_Din : MonoBehaviour {
    
        public GameObject Din_Pre;
        private int Frame_Count;
    
        //count physics steps and spawn a dinosaur every 1000 of them
        void FixedUpdate() {
            Frame_Count++;
    
            if (Frame_Count>1000) {
                Ins();
                Frame_Count = 0;
            }
        }
    
        void Ins() {
            Instantiate(Din_Pre, gameObject.transform.position, gameObject.transform.rotation);
        }
    }
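The 1000-step threshold in FixedUpdate above maps to wall-clock time through the physics timestep: at Unity's default fixed timestep of 0.02 s it is roughly 20 seconds. A small helper to pick the threshold for a desired interval (the default-timestep value is the only Unity-specific assumption):

```java
public class SpawnTiming {
    // Number of FixedUpdate calls that span `seconds` at a given fixed timestep.
    public static int fixedStepsFor(double seconds, double fixedDeltaTime) {
        return (int) Math.round(seconds / fixedDeltaTime);
    }

    public static void main(String[] args) {
        System.out.println(fixedStepsFor(20.0, 0.02)); // prints 1000
    }
}
```

Accumulating Time.deltaTime in Update would avoid depending on the timestep at all; the frame counter is just the simplest thing that works.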

     

    Implementing the occlusion effect

    The effect uses a shader, "Mask" (Appendix 1), and a script, "SetRenderQueue" (Appendix 2), which together produce the occlusion

    a. Add the shader to a material and assign that material to the model that should occlude the dinosaur

    b. Add the script to the Raptor layer of the dinosaur prefab

     

    Appendix 1

    shader: Mask

    Shader "Masked/Mask" {
    	
    	SubShader {
    		// Render the mask after regular geometry, but before masked geometry and
    		// transparent things.
    		
    		Tags {"Queue" = "Geometry+10" }
    		
    		// Don't draw in the RGBA channels; just the depth buffer
    		
    		ColorMask 0
    		ZWrite On
    		
    		// Do nothing specific in the pass:
    		
    		Pass {}
    	}
    }

     

    Appendix 2

    SetRenderQueue

    /*
    	SetRenderQueue.cs
    	
    	Sets the RenderQueue of an object's materials on Awake. This will instance
    	the materials, so the script won't interfere with other renderers that
    	reference the same materials.
    */
    
    using UnityEngine;
    
    [AddComponentMenu("Rendering/SetRenderQueue")]
    
    public class SetRenderQueue : MonoBehaviour {
    	
    	[SerializeField]
    	protected int[] m_queues = new int[]{3000};
    	
    	protected void Awake() {
    		Material[] materials = GetComponent<Renderer>().materials;
    		for (int i = 0; i < materials.Length && i < m_queues.Length; ++i) {
    			materials[i].renderQueue = m_queues[i];
    		}
    	}
    }

     

  • Table of contents: preface, skeletal animation, notes, principle. Preface: OpenCV version 4.0.1, OpenCV aruco version 4.0.1, OpenGL via glad and glfw, model import with Assimp, marker-based AR, Visual Studio 2017. Principle: ...
  • [Recommended course] Unity3D game AR/VR advanced course: the animation system
  • Curated iOS source: a live-stream player (TV stations, LeTV live, Panda TV, TV-series carousel, movies, etc.) ... a solar-system AR demo with clear logic ... AR screen detection with ARKit ... particle animations, loading animations, a 3D photo album as a rotatable cube, and more
  • The previous article showed how to load your own model with ARCore, but although the model loads, its animation does not play. Why? Because in Unity an imported model's animation does not play by default; you need an Animator Controller...
  • 1. Scorpion animation basics: first you need a model with the animations to control (a scorpion here). The scorpion has four animations; idle plays by default, and three buttons trigger attack, walk and run. 2. Implementing it with an Animation Controller...
  • Curated iOS source: a date-time picker in Swift; Space Battle, a SpriteKit game; LLDebugTool, a convenient iOS debugging tool (newly...); animations, Bézier curves, a Youku-style play button, a skippable animation, a marquee supporting mixed text and images, MUCheckbox, a mini...
  • Open 视+ and tap the middle button to open the camera. The view shows an effect that spreads outward intermittently from the center of the screen, tracing whatever it reaches with white outlines. It looks like a filter. Does anyone know how it is done? Any pointers would be appreciated, thanks.
  • MP-EasyAR-3DModels-Animations. For Instance: WeChat Official Accounts platform, WeChat DevTools, EasyAR, project practice, home page recognition, recognition.js, recognition.wxml, recognition.wxss. WeChat DevTools && EasyAR-MP ...
  • Source code for the presentation "Bridging GIS and 3D Animation Using Python", given at the 2013 AR GIS User Forum on September 11, 2013 in Rogers, USA. Uses Reveal.js
  • I am drawing an animated 3D model with Android OpenGL, but after the camera scans, the model does not appear even though OpenGL keeps drawing. I have been stuck on this for over a week; I would be very grateful if someone could explain.
  • The major web company has released ARCore 1.7, a new version of its Android augmented-reality SDK, adding tools for AR selfies and animation and improving the user interface. ARCore 1.7 provides the Augmented Faces API, which supports the front camera and delivers a high-quality 468-point 3D mesh so that people...
  • Also, AR effects combined with ××× are a nice choice! The short-video developer creativity contest has been popular with developers since its launch in December; we have already collected some entries, and the showcase and popularity-voting pages will go live soon. Users are welcome to...
  • 1.1 Overall functionality: the AR Scene animations are divided into a navigation-arrow animation (Navigation) and a safety-warning animation (Safety Warning). The Update method of the Main class in the ArHud.Logic.Navi namespace has two main jobs: 1. updating the navigation state...