  • Introduction to Vector CAPL Programming

    2019-01-16 09:11:42
    1. CAPL is the programming language for CAN bus access. 2. Creating, modifying, and compiling CAPL programs.
  • Vector CANoe 9.0 official documentation (CHM format), in English, extracted from the software.
  • Development documentation for vector database v3.5, an up-and-coming in-line database; a vertical development handbook.
  • AUTOSAR in Detail (Vector document)

    Hot discussion 2013-05-09 18:54:59
    Detailed material introducing AUTOSAR, produced by Vector; simple and easy to understand, suitable for beginners in the automotive industry and for engineers who want to learn about AUTOSAR.
  • Official Vector help documentation and configuration manual. Starting from creating a new DaVinci project, it explains step by step how to configure the project, how to compile and generate C code, and how to import CDD, DBC, and other files. The manual is detailed enough to count as hands-on, step-by-step teaching.
  • With this plugin, you can score Elasticsearch documents against embedding vectors using dot product or cosine similarity. In general: this plugin was inspired by ... and ..., and processes documents about 10x faster than the original. Give it a try. By using the Lucene index directly, I achieved a significant speed improvement. For my ...
  • To cope with complex electronic system design, vehicle manufacturers, component suppliers, and other electronics, semiconductor, and software companies worldwide jointly established the AUTomotive Open System ARchitecture (AUTOSAR). As an open, standardized ... for automotive electronics software development ...
  • CAPL Language Reference Manual

    Hot discussion 2014-09-22 09:48:14
    CAPL is the language used by Vector's CANalyzer to analyze CAN protocol packets; it lets users make more flexible use of, and exert finer control over, simulated CAN communication. This document introduces the CAPL language in detail and includes good examples.
  • Introduction to VectorCAST

    2019-04-22 15:25:40
    The document describes VectorCAST's solution for automated testing of embedded code.
  • The latest 2019 AUTOSAR training material from VECTOR, with a fairly detailed introduction to AUTOSAR.
  • Vector2.js: a 2D vector JS library

    2019-03-16 15:03:54
    Vector2.js is a 2D vector JS library containing many computation functions for 2D vectors. For a detailed introduction, see: https://blog.csdn.net/tianseyiwan008/article/details/88595210
  • With this plugin, you can score documents against arbitrary raw vectors using dot product or cosine similarity. Releases: the master branch targets Elasticsearch 5.4. Note that 5.5+ is not supported, because Elasticsearch changed its plugin mechanism. An update for 5.5+ will be developed soon...
  • Cozmo AI Robot SDK Usage Notes (X): Summary

    |AI Fundamentals (Primary and Secondary School Edition) Practice Platform|

    https://blog.csdn.net/ZhangRelay/article/details/86675539


    Simulation:

     


    ROS Melodic: Lost and Redemption: https://blog.csdn.net/column/details/28058.html

    GitChat: One reason to be hooked on robot operating systems, and four "immortal cultivation" secrets

    Official site: www.anki.com. There are four main toy robots; language support is as follows:

    Only English, French, German, and Japanese are supported, so what if you want to read the documentation in Chinese?

    Google Chrome is recommended (right-click -> Translate to Chinese (Simplified)):

    Most of the translated text can be read and understood without problems.

    Common questions can all be found on the Customer Care pages.

    ----

    Some specific problems and solutions
    Applicable to iOS, Android, and Fire

    Problem: "Incorrect password" message.

    Solution: Make sure the password is entered correctly! Cozmo password characteristics: 15 characters, with 2 hyphens and 12 uppercase letters (excluding B, O, or U).
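
    To sanity-check a password against those stated characteristics, a small script like the following works (a minimal sketch based only on the description above; looks_like_cozmo_password is a hypothetical helper, not an official Anki tool):

    def looks_like_cozmo_password(pw: str) -> bool:
        # Uppercase letters excluding B, O, and U, per the characteristics above.
        allowed = set("ACDEFGHIJKLMNPQRSTVWXYZ")
        letters = [c for c in pw if c != "-"]
        return (
            pw.count("-") == 2                      # two hyphens
            and len(letters) == 12                  # twelve uppercase letters
            and all(c in allowed for c in letters)  # none of B, O, or U
        )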

    Problem: "You must update the Cozmo app to continue" message.

    Solution: Update the app to the latest version available on all of your devices (Android, iOS, and Amazon Fire).

    Problem: When I place Cozmo on the charging platform, Cozmo's screen does not display the Wi-Fi SSID or PSK.

    Solution: Manually raise and lower Cozmo's lift arm.

    Problem: "Unable to join the network 'Cozmo_xxxxxx'" message.

    Solution: Follow this troubleshooting guide.

    Problem: "Searching for IP address" message.

    Solution: Contact Customer Care.

    Problem: Cozmo will not power on.

    Solution: Use a power supply rated at least 5V 2A.

    iOS-related

    Problem: "No internet" message on an iOS device.

    Solution: Since Cozmo does not use the internet, this is normal, expected behavior rather than a problem.

    Problem: "Security recommendation" message on an iOS device.

    Solution: Follow this troubleshooting guide.

    Android-related

    Problem: "No internet" message (or a message of this kind) on an Android device.

    Solution: Since Cozmo does not use the internet, this is normal, expected behavior rather than a problem. In the notification dialog, select "Don't ask again for this network", then select "Yes".

    Problem: "Screen overlay detected" message on an Android device.

    Solution: Close all background apps and restart the app.

    Problem: "Authentication problem" on an Android device.

    Solution: This message indicates a temporary issue with your device. Follow this troubleshooting guide.

    ----

    Mobile apps:

    On Apple devices, download directly from the App Store: Anki Cozmo (3.0.0, 488 MB) and Anki Vector Robot (1.1.0, 68.9 MB).

    On Android, download the offline package and copy it to the phone to install; the XAPK installer tool is recommended. Cozmo 3.0.0 is 412.80 MB; Vector 1.1.1 is 31.96 MB.

    Be sure to check device compatibility! Software: iOS 9 / Android 5 (Lollipop) / Fire OS 5

    Hardware: on some Android phones, Wi-Fi drops frequently and the Bluetooth connection is unstable. The officially tested devices are listed below:

    Is your device compatible?
    The easiest way to find out whether your device works with Anki's Cozmo is to search for, find, and download the Cozmo app in the Apple App Store, Google Play Store, or Amazon Appstore.
    If your device cannot find and download the Cozmo app, your device does not support Anki's Cozmo.
    Devices tested with Cozmo
    All of our test devices run at least iOS 9.3.5, Android 5 (Lollipop), or Fire OS 5.6.
    AMAZON KINDLE*
    Fire HD 8 (7th Generation)
    Fire HD 10 (7th Generation)
    Fire (7th Generation)
    Fire HD 8 (6th Generation)
    Fire HD 10 (5th Generation)
    Fire HD 8 (5th Generation)
    APPLE
    iPhone X
    iPhone 8
    iPhone 8 plus
    iPhone 7
    iPhone 7 plus
    iPhone SE
    iPhone 6s
    iPhone 6
    iPhone 6 plus
    iPhone 5s
    iPhone 5c
    iPhone 5
    iPad Pro (12.9-inch)
    iPad Pro (10.5-inch)
    iPad Pro (9.7-inch)
    iPad Air 2
    iPad Air
    iPad (6th generation)
    iPad (5th generation)
    iPad (4th generation)
    iPad mini 4
    iPad mini 3
    iPad mini 2
    iPad mini
    iPod Touch (6th Generation)
    iPod Touch (5th Generation)
    GOOGLE
    Pixel 2
    Pixel 2 XL
    Pixel C
    Pixel
    Pixel XL
    Nexus 5X
    Nexus 6P
    Nexus 9
    Nexus 6
    Nexus 5
    HTC
    One (M9)
    One (M8)
    One (M7)
    LG
    LG G6
    LG G5
    LG G4
    LG G3
    MOTOROLA*
    Moto G5
    Moto X Pure Edition
    ONEPLUS*
    OnePlus 2
    SAMSUNG*
    Galaxy S9+
    Galaxy S9
    Galaxy S8+
    Galaxy S8
    Galaxy S7 Edge
    Galaxy S7
    Galaxy S6 Edge
    Galaxy S6
    Galaxy S5
    Galaxy Note 10.1 (2014 Edition)
    Galaxy Note5
    Galaxy Note 4
    Galaxy Tab S3 (2017 Edition)
    Galaxy Tab A 10.1" (2016 Edition)
    Galaxy Tab A 8.0" (2016 Edition)

    SDK Information: Developer Documentation

    Driven by Anki's passion for providing easy access to our technology and advancing the state of robotics, we have released a free Software Development Kit ("SDK") built on Cozmo's capabilities for owners!

    With the SDK, you can program Cozmo with single lines of code that once would have required PhD-level robotics and AI expertise to implement.

    This means that, through the SDK, you can use Cozmo's face-tracking module to recognize faces and act on them, use computer vision and path planning to observe obstacles in his environment, or use the animation system to choreograph complex motion commands. And this is just the beginning! (The possibilities are endless.)
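
    As a minimal illustration of that "single line of code" claim (a sketch based on the Cozmo hello-world example shown later in this post):

    import cozmo

    # A complete Cozmo program: connect, say a phrase, and wait for the action to finish.
    cozmo.run_program(lambda robot: robot.say_text("Hello World").wait_for_completed())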

    If this sounds like fun, and you own a Cozmo, you can visit the Cozmo SDK page for more information and a complete list of requirements. For any questions about the SDK, contact us at cozmoSDK [at] anki.com. Note that the Cozmo SDK is available in English only.

    For a more approachable programming experience, check out Cozmo's Code Lab.

    English:

    Chinese:

    ----

    Fin

    ----

  • Vector Diagnostics Solutions

    2019-04-22 16:02:07
    The document introduces Vector's various tools in the diagnostics domain and solutions for using these tools together.
    http://www.cplusplus.com/reference/vector/vector/ C++ vector reference; https://blog.csdn.net/weixin_41743247/article/details/90635931 C++ vector usage (detailed: functions, implementation) ...
  • The archive contains two documents: one introduces the chips and compilers supported by Vector software; the other covers the functionality and principles of the Flash Bootloader, along with the flashing concepts of the vFlash flashing tool.
  • This is a documentation prototype for the Project Vector Documentation Wiki. The prototype should be viewable at ... for review; the Project Vector Documentation Wiki is built via ... . Scripts can be written to pull changes from GitHub or other git repositories. It is enough to sketch out the idea, but it is not yet complete. After phasing out ...
  • Introduction to CAN FD (Vector)

    2019-04-22 15:30:40
    CAN FD is a new-generation bus jointly developed by Bosch and Vector; this document is Vector's detailed introduction to CAN FD.
  • VectorCAST

    2013-11-19 11:01:34
    The VectorCAST software testing solution provides users with automated testing and quality assurance, and is especially suitable for embedded applications with high safety and reliability requirements. Its features cover unit testing, integration testing, coverage analysis, regression testing, and static ...
  • GoVector version 1.2.0: go_vector is a container data structure modeled on C++ std::vector. go_vector brings functional programming and removes boilerplate code. Code examples are snippets taken from docs/examples/main.go. Construction ...
  • The European measurement and calibration protocol XCP: documentation and Vector source code, with an example on a Freescale 32-bit machine.
  • Vector vFlash flashing tool

    2019-08-20 16:22:35
    The document is a PPT that introduces the features and working principles of the vFlash flashing tool in detail.
  • Explanation from the API documentation: the Vector class implements a growable array of objects. Like an array, it contains components that can be accessed using an integer index. However, the size of a Vector can grow or shrink as needed to accommodate adding and removing items after the Vector has been created. ...
  • vector_space_modelling: NLP and document classification with vector space modelling in Python
  • Vector's internal Chinese journal

    2019-04-22 15:46:31
    The document is an internal journal published by Vector China, introducing Vector's current cutting-edge technology and the implementation of various solutions.
  • Startup_Vector_SLP4.pdf

    2020-03-09 17:34:56
    Official DaVinci Configurator help documentation and configuration manual; detailed enough to count as hands-on teaching. Contents: User Manual: startup with Vector SLP4; 1.5 DaVinci Configurator Pro project; 2 STEP 2: Define Project ...
  • Vector AI Robot SDK Usage Notes

    1000+ reads 2019-02-03 07:16:55

    Cozmo launched in 2016; two years later, in 2018, Vector came to market with a voice assistant and more features, its component count rising from 300+ to 700+.

    The Vector SDK documentation is at developer.anki.com/vector/docs/. It is currently a test release.

    Vector AI Robot SDK Usage Notes


    First, download the Vector SDK (from GitHub):

    docs holds the documentation and examples the sample programs, alongside some readme files, installation scripts, and so on.

    The SDK supports Windows, Linux, and macOS; see the official site for the exact installation and setup steps.
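
    For reference, installation at the time boiled down to roughly the following commands (assuming Python 3 with pip; versions and steps may have changed, so treat this as a sketch and defer to the official docs):

    python3 -m pip install --user anki_vector
    python3 -m anki_vector.configure   # one-time authentication/pairing with the robot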

    Since this is a test release, there are still relatively few examples, not as rich a set as Cozmo's:

    face_images holds jpg/png images that can be displayed on Vector's face; you can add your own (see example 1.6 below).

    apps contains some more complete applications; tutorials contains the tutorials.

    1. tutorials

    The tutorials are not divided into chapters; there are 13 in total:

     1.1 hello world

    """Hello World
    
    Make Vector say 'Hello World' in this simple Vector SDK example program.
    """
    
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            print("Say 'Hello World'...")
            robot.say_text("Hello World")
    
    
    if __name__ == "__main__":
        main()

    Compare with Cozmo's hello world:

    '''Hello World
    
    Make Cozmo say 'Hello World' in this simple Cozmo SDK example program.
    '''
    
    import cozmo
    
    
    def cozmo_program(robot: cozmo.robot.Robot):
        robot.say_text("Hello World").wait_for_completed()
    
    
    cozmo.run_program(cozmo_program)

    Note the differences: the Vector version parses command-line arguments and connects through the anki_vector.Robot context manager, while the Cozmo version hands a function to cozmo.run_program; Cozmo's say_text also returns an action that must be waited on explicitly with wait_for_completed().

    1.2 drive square

    """Make Vector drive in a square.
    
    Make Vector drive in a square by going forward and turning left 4 times in a row.
    """
    
    import anki_vector
    from anki_vector.util import degrees, distance_mm, speed_mmps
    
    
    def main():
        args = anki_vector.util.parse_command_args()
    
        # Drive straight and then turn left, repeated four times to trace a square
        with anki_vector.Robot(args.serial) as robot:
            robot.behavior.drive_off_charger()
    
            # Use a "for loop" to repeat the indented code 4 times
            # Note: the _ variable name can be used when you don't need the value
            for _ in range(4):
                print("Drive Vector straight...")
                robot.behavior.drive_straight(distance_mm(200), speed_mmps(50))
    
                print("Turn Vector in place...")
                robot.behavior.turn_in_place(degrees(90))
    
    
    if __name__ == "__main__":
        main()

    1.3 motors

    """Drive Vector's wheels, lift and head motors directly
    
    This is an example of how you can also have low-level control of Vector's motors
    (wheels, lift and head) for fine-grained control and ease of controlling
    multiple things at once.
    """
    
    import time
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            robot.behavior.drive_off_charger()
    
            # Tell the head motor to start lowering the head (at 5 radians per second)
            print("Lower Vector's head...")
            robot.motors.set_head_motor(-5.0)
    
            # Tell the lift motor to start lowering the lift (at 5 radians per second)
            print("Lower Vector's lift...")
            robot.motors.set_lift_motor(-5.0)
    
            # Tell Vector to drive the left wheel at 25 mmps (millimeters per second),
            # and the right wheel at 50 mmps (so Vector will drive Forwards while also
            # turning to the left)
            print("Set Vector's wheel motors...")
            robot.motors.set_wheel_motors(25, 50)
    
            # wait for 3 seconds (the head, lift and wheels will move while we wait)
            time.sleep(3)
    
            # Tell the head motor to start raising the head (at 5 radians per second)
            print("Raise Vector's head...")
            robot.motors.set_head_motor(5)
    
            # Tell the lift motor to start raising the lift (at 5 radians per second)
            print("Raise Vector's lift...")
            robot.motors.set_lift_motor(5)
    
            # Tell Vector to drive the left wheel at 50 mmps (millimeters per second),
            # and the right wheel at -50 mmps (so Vector will turn in-place to the right)
            print("Set Vector's wheel motors...")
            robot.motors.set_wheel_motors(50, -50)
    
            # Wait for 3 seconds (the head, lift and wheels will move while we wait)
            time.sleep(3)
    
            # Stop the motors, which unlocks the tracks
            robot.motors.set_wheel_motors(0, 0)
            robot.motors.set_lift_motor(0)
            robot.motors.set_head_motor(0)
    
    
    if __name__ == "__main__":
        main()

    1.4 animation

    """Play an animation on Vector
    """
    
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            robot.behavior.drive_off_charger()
    
            # Play an animation via its name.
            #
            # Warning: Future versions of the app might change these, so for future-proofing
            # we recommend using play_animation_trigger when it becomes available.
            #
            # See the remote_control.py example in apps for an easy way to see
            # the available animations.
            animation = 'anim_pounce_success_02'
            print("Playing animation by name: " + animation)
            robot.anim.play_animation(animation)
    
    
    if __name__ == "__main__":
        main()

    1.5 play behaviors

    """Tell Vector to drive on and off the charger.
    """
    
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
    
        with anki_vector.Robot(args.serial) as robot:
            print("Drive Vector onto charger...")
            robot.behavior.drive_on_charger()
    
            print("Drive Vector off of charger...")
            robot.behavior.drive_off_charger()
    
    
    if __name__ == '__main__':
        main()

    1.6 face image

    Displays an image (png or jpg) from the face_images folder on Vector's screen:

    import os
    import sys
    import time
    
    try:
        from PIL import Image
    except ImportError:
        sys.exit("Cannot import from PIL: Do `pip3 install --user Pillow` to install")
    
    import anki_vector
    from anki_vector.util import degrees
    
    
    def main():
        args = anki_vector.util.parse_command_args()
    
        with anki_vector.Robot(args.serial) as robot:
            # If necessary, move Vector's Head and Lift to make it easy to see his face
            robot.behavior.set_head_angle(degrees(45.0))
            robot.behavior.set_lift_height(0.0)
    
            current_directory = os.path.dirname(os.path.realpath(__file__))
            image_path = os.path.join(current_directory, "..", "face_images", "cozmo_image.jpg")
    
            # Load an image
            image_file = Image.open(image_path)
    
            # Convert the image to the format used by the Screen
            print("Display image on Vector's face...")
            screen_data = anki_vector.screen.convert_image_to_screen_data(image_file)
            robot.screen.set_screen_with_image_data(screen_data, 4.0)
            time.sleep(5)
    
    
    if __name__ == "__main__":
        main()

    1.7 dock with cube

    """Tell Vector to drive up to a seen cube.
    
    This example demonstrates Vector driving to and docking with a cube, without
    picking it up.  Vector will line his arm hooks up with the cube so that they are
    inserted into the cube's corners.
    
    You must place a cube in front of Vector so that he can see it.
    """
    
    import anki_vector
    from anki_vector.util import degrees
    
    
    def main():
        args = anki_vector.util.parse_command_args()
    
        docking_result = None
        with anki_vector.Robot(args.serial) as robot:
            robot.behavior.drive_off_charger()
    
            # If necessary, move Vector's Head and Lift down
            robot.behavior.set_head_angle(degrees(-5.0))
            robot.behavior.set_lift_height(0.0)
    
            print("Connecting to a cube...")
            robot.world.connect_cube()
    
            if robot.world.connected_light_cube:
                print("Begin cube docking...")
                dock_response = robot.behavior.dock_with_cube(
                    robot.world.connected_light_cube,
                    num_retries=3)
                if dock_response:
                    docking_result = dock_response.result
    
                robot.world.disconnect_cube()
    
        if docking_result:
            if docking_result.code != anki_vector.messaging.protocol.ActionResult.ACTION_RESULT_SUCCESS:
                print("Cube docking failed with code {0} ({1})".format(str(docking_result).rstrip('\n\r'), docking_result.code))
        else:
            print("Cube docking failed.")
    
    
    if __name__ == "__main__":
        main()

    1.8 drive to cliff and back up

    """Make Vector drive to a cliff and back up.
    
    Place the robot about a foot from a "cliff" (such as a tabletop edge),
    then run this script.
    
    This tutorial is an advanced example that shows the SDK's integration
    with the Vector behavior system.
    
    The Vector behavior system uses an order of prioritizations to determine
    what the robot will do next. The highest priorities in the behavior
    system include the following:
    * When Vector reaches a cliff, he will back up to avoid falling.
    * When Vector is low on battery, he will start searching for his charger
    and self-dock.
    
    When the SDK is running at a lower priority level than high priorities
    like cliff and low battery, an SDK program can lose its ability to
    control the robot when a cliff is reached or when the battery is low.
    
    This example shows how, after reaching a cliff, the SDK program can
    re-request control so it can continue controlling the robot after
    reaching the cliff.
    """
    
    import anki_vector
    from anki_vector.util import distance_mm, speed_mmps
    
    
    def main():
        args = anki_vector.util.parse_command_args()
    
        with anki_vector.Robot(args.serial) as robot:
            print("Vector SDK has behavior control...")
            robot.behavior.drive_off_charger()
    
            print("Drive Vector straight until he reaches cliff...")
            # Once robot reaches cliff, he will play his typical cliff reactions.
            robot.behavior.drive_straight(distance_mm(5000), speed_mmps(100))
    
            robot.conn.run_coroutine(robot.conn.control_lost_event.wait()).result()
    
            print("Lost SDK behavior control. Request SDK behavior control again...")
            robot.conn.request_control()
    
            print("Drive Vector backward away from the cliff...")
            robot.behavior.drive_straight(distance_mm(-300), speed_mmps(100))
    
    
    if __name__ == "__main__":
        main()

    1.9 show photo

    """Show a photo taken by Vector.
    
    Grabs the pictures off of Vector and opens them via PIL.
    
    Before running this script, please make sure you have successfully
    had Vector take a photo by saying, "Hey Vector! Take a photo."
    """
    
    import io
    import sys
    
    try:
        from PIL import Image
    except ImportError:
        sys.exit("Cannot import from PIL: Do `pip3 install --user Pillow` to install")
    
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            if len(robot.photos.photo_info) == 0:
                print('\n\nNo photos found on Vector. Ask him to take a photo first by saying, "Hey Vector! Take a photo."\n\n')
                return
            for photo in robot.photos.photo_info:
                print(f"Opening photo {photo.photo_id}")
                val = robot.photos.get_photo(photo.photo_id)
                image = Image.open(io.BytesIO(val.image))
                image.show()
    
    
    if __name__ == "__main__":
        main()

    1.10 eye color

    """Set Vector's eye color.
    """
    
    import time
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
    
        with anki_vector.Robot(args.serial) as robot:
            print("Set Vector's eye color to purple...")
            robot.behavior.set_eye_color(hue=0.83, saturation=0.76)
    
            print("Sleep 5 seconds...")
            time.sleep(5)
    
    
    if __name__ == '__main__':
        main()

    1.11 face event subscription 

    """Wait for Vector to see a face, and then print output to the console.
    
    This script demonstrates how to set up a listener for an event. It
    subscribes to event 'robot_observed_face'. When that event is dispatched,
    method 'on_robot_observed_face' is called, which prints text to the console.
    Vector will also say "I see a face" one time, and the program will exit when
    he finishes speaking.
    """
    
    import functools
    import threading
    
    import anki_vector
    from anki_vector.events import Events
    from anki_vector.util import degrees
    
    said_text = False
    
    
    def main():
        evt = threading.Event()
    
        def on_robot_observed_face(robot, event_type, event):
            print("Vector sees a face")
            global said_text
            if not said_text:
                said_text = True
                robot.say_text("I see a face!")
                evt.set()
    
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial, enable_face_detection=True) as robot:
    
            # If necessary, move Vector's Head and Lift to make it easy to see his face
            robot.behavior.set_head_angle(degrees(45.0))
            robot.behavior.set_lift_height(0.0)
    
            on_robot_observed_face = functools.partial(on_robot_observed_face, robot)
            robot.events.subscribe(on_robot_observed_face, Events.robot_observed_face)
    
            print("------ waiting for face events, press ctrl+c to exit early ------")
    
            try:
                if not evt.wait(timeout=5):
                    print("------ Vector never saw your face! ------")
            except KeyboardInterrupt:
                pass
    
        robot.events.unsubscribe(on_robot_observed_face, Events.robot_observed_face)
    
    
    if __name__ == '__main__':
        main()

    1.12 wake word subscription 

    """Wait for Vector to hear "Hey Vector!" and then play an animation.
    
    The wake_word event is only dispatched when the SDK program has
    not requested behavior control. After the robot hears "Hey Vector!"
    and the event is received, you can then request behavior control
    and control the robot. See the 'requires_behavior_control' method in
    connection.py for more information.
    """
    
    import functools
    import threading
    
    import anki_vector
    from anki_vector.events import Events
    
    wake_word_heard = False
    
    
    def main():
        evt = threading.Event()
    
        def on_wake_word(robot, event_type, event):
            robot.conn.request_control()
    
            global wake_word_heard
            if not wake_word_heard:
                wake_word_heard = True
                robot.say_text("Hello")
                evt.set()
    
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial, requires_behavior_control=False, cache_animation_list=False) as robot:
            on_wake_word = functools.partial(on_wake_word, robot)
            robot.events.subscribe(on_wake_word, Events.wake_word)
    
            print('------ Vector is waiting to hear "Hey Vector!" Press ctrl+c to exit early ------')
    
            try:
                if not evt.wait(timeout=10):
                    print('------ Vector never heard "Hey Vector!" ------')
            except KeyboardInterrupt:
                pass
    
    
    if __name__ == '__main__':
        main()

    1.13 custom objects

    """This example demonstrates how you can define custom objects.
    
    The example defines several custom objects (2 cubes, a wall and a box). When
    Vector sees the markers for those objects he will report that he observed an
    object of that size and shape there.
    
    You can adjust the markers, marker sizes, and object sizes to fit whatever
    object you have and the exact size of the markers that you print out.
    """
    
    import time
    
    import anki_vector
    from anki_vector.objects import CustomObjectMarkers, CustomObjectTypes
    
    
    def handle_object_appeared(event_type, event):
        # This will be called whenever an EvtObjectAppeared is dispatched -
        # whenever an Object comes into view.
        print(f"--------- Vector started seeing an object --------- \n{event.obj}")
    
    
    def handle_object_disappeared(event_type, event):
        # This will be called whenever an EvtObjectDisappeared is dispatched -
        # whenever an Object goes out of view.
        print(f"--------- Vector stopped seeing an object --------- \n{event.obj}")
    
    
    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial,
                               default_logging=False,
                               show_viewer=True,
                               show_3d_viewer=True,
                               enable_camera_feed=True,
                               enable_custom_object_detection=True,
                               enable_nav_map_feed=True) as robot:
            # Add event handlers for whenever Vector sees a new object
            robot.events.subscribe(handle_object_appeared, anki_vector.events.Events.object_appeared)
            robot.events.subscribe(handle_object_disappeared, anki_vector.events.Events.object_disappeared)
    
            # define a unique cube (44mm x 44mm x 44mm) (approximately the same size as Vector's light cube)
            # with a 50mm x 50mm Circles2 image on every face. Note that marker_width_mm and marker_height_mm
            # parameter values must match the dimensions of the printed marker.
            cube_obj = robot.world.define_custom_cube(custom_object_type=CustomObjectTypes.CustomType00,
                                                      marker=CustomObjectMarkers.Circles2,
                                                      size_mm=44.0,
                                                      marker_width_mm=50.0,
                                                      marker_height_mm=50.0,
                                                      is_unique=True)
    
            # define a unique cube (88mm x 88mm x 88mm) (approximately 2x the size of Vector's light cube)
            # with a 50mm x 50mm Circles3 image on every face.
            big_cube_obj = robot.world.define_custom_cube(custom_object_type=CustomObjectTypes.CustomType01,
                                                          marker=CustomObjectMarkers.Circles3,
                                                          size_mm=88.0,
                                                          marker_width_mm=50.0,
                                                          marker_height_mm=50.0,
                                                          is_unique=True)
    
            # define a unique wall (150mm x 120mm, and 10mm thick for all walls)
            # with a 50mm x 30mm Triangles2 image on front and back
            wall_obj = robot.world.define_custom_wall(custom_object_type=CustomObjectTypes.CustomType02,
                                                      marker=CustomObjectMarkers.Triangles2,
                                                      width_mm=150,
                                                      height_mm=120,
                                                      marker_width_mm=50,
                                                      marker_height_mm=30,
                                                      is_unique=True)
    
            # define a unique box (20mm deep x 20mm wide x 20mm tall)
            # with a different 50mm x 50mm image on each of the 6 faces
            box_obj = robot.world.define_custom_box(custom_object_type=CustomObjectTypes.CustomType03,
                                                    marker_front=CustomObjectMarkers.Diamonds2,   # front
                                                    marker_back=CustomObjectMarkers.Hexagons2,    # back
                                                    marker_top=CustomObjectMarkers.Hexagons3,     # top
                                                    marker_bottom=CustomObjectMarkers.Hexagons4,  # bottom
                                                    marker_left=CustomObjectMarkers.Triangles3,   # left
                                                    marker_right=CustomObjectMarkers.Triangles4,  # right
                                                    depth_mm=20.0,
                                                    width_mm=20.0,
                                                    height_mm=20.0,
                                                    marker_width_mm=50.0,
                                                    marker_height_mm=50.0,
                                                    is_unique=True)
    
            if ((cube_obj is not None) and (big_cube_obj is not None) and
                    (wall_obj is not None) and (box_obj is not None)):
                print("All objects defined successfully!")
            else:
                print("One or more object definitions failed!")
                return
    
            print("\n\nShow a marker specified in the Python script to Vector and you will see the related 3d objects\n"
                  "display in Vector's 3d_viewer window. You will also see messages print every time a custom object\n"
                  "enters or exits Vector's view. Markers can be found from the docs under CustomObjectMarkers.\n\n")
    
            try:
                while True:
                    time.sleep(0.5)
            except KeyboardInterrupt:
                pass
    
    
    if __name__ == "__main__":
        main()

     

    2. apps

    These are divided into four parts, each containing a single example:

    2.1 3d viewer

    """3d Viewer example, with remote control.
    
    This is an example of how you can use the 3D viewer with a program, and the
    3D Viewer and controls will work automatically.
    """
    
    import time
    
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial,
                               show_viewer=True,
                               enable_camera_feed=True,
                               show_3d_viewer=True,
                               enable_face_detection=True,
                               enable_custom_object_detection=True,
                               enable_nav_map_feed=True):
            print("Starting 3D Viewer. Use Ctrl+C to quit.")
            try:
                while True:
                    time.sleep(0.5)
            except KeyboardInterrupt:
                pass
    
    
    if __name__ == "__main__":
        main()

    2.2 interactive shell

    """Command Line Interface for Vector
    
    This is an example of integrating Vector with an ipython-based command line interface.
    """
    
    import sys
    
    try:
        from IPython.terminal.embed import InteractiveShellEmbed
    except ImportError:
        sys.exit('Cannot import from ipython: Do `pip3 install ipython` to install')
    
    import anki_vector
    
    usage = """Use the [tab] key to auto-complete commands, and see all available methods and properties.
    
    For example, type 'robot.' then press the [tab] key and you'll see all the robot functions.
    Keep pressing tab to cycle through all of the available options.
    
    All IPython commands work as usual.
    Here's some useful syntax:
      robot?   -> Details about 'robot'.
      robot??  -> More detailed information including code for 'robot'.
    These commands will work on all objects inside of the shell.
    
    You can even call the functions that send messages to Vector, and he'll respond just like he would in a script.
    Try it out! Type:
        robot.anim.play_animation('anim_pounce_success_02')
    """
    
    args = anki_vector.util.parse_command_args()
    
    ipyshell = InteractiveShellEmbed(banner1='\nWelcome to the Vector Interactive Shell!',
                                     exit_msg='Goodbye\n')
    
    if __name__ == "__main__":
        with anki_vector.Robot(args.serial,
                               enable_camera_feed=True,
                               show_viewer=True) as robot:
            # Invoke the ipython shell while connected to Vector
            ipyshell(usage)

    2.3 proximity mapper

    """Maps a region around Vector using the proximity sensor.
    
    Vector will turn in place and use his sensor to detect walls in his
    local environment.  These walls are displayed in the 3D Viewer.  The
    visualizer does not affect the robot's internal state or behavior.
    
    Vector expects this environment to be static - if objects are moved
    he will have no knowledge of them.
    """
    
    import asyncio
    import concurrent
    from math import cos, sin, inf, acos
    import os
    import sys
    
    sys.path.append(os.path.join(os.path.dirname(__file__), 'lib'))
    from proximity_mapper_state import ClearedTerritory, MapState, Wall, WallSegment   # pylint: disable=wrong-import-position
    
    import anki_vector   # pylint: disable=wrong-import-position
    from anki_vector.util import parse_command_args, radians, degrees, distance_mm, speed_mmps, Vector3  # pylint: disable=wrong-import-position
    
    # Constants
    
    #: The maximum distance (in millimeters) the scan considers valid for a proximity response.
    #: Wall detection past this threshold will be disregarded, and an 'open' node will
    #: be created at this distance instead.  Increasing this value may degrade the
    #: reliability of this program, see note below:
    #:
    #: NOTE: The proximity sensor works by sending a light pulse, and seeing how long that pulse takes
    #: to reflect and return to the sensor.  The proximity sensor does not specifically have a maximum
    #: range, but will get unreliable results below a certain return signal strength.  This return signal
    #: is impacted by environmental conditions (such as the orientation and material of the detected obstacle)
    #: as well as the distance.  Additionally, increasing this radius will reduce the resolution of contact
    #: points, necessitating changes to PROXIMITY_SCAN_SAMPLE_FREQUENCY_HZ and PROXIMITY_SCAN_BIND_THRESHOLD_MM
    #: to maintain effective wall prediction.
    PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM = 300
    
    #: The distance (in millimeters) to place an open node if no proximity results are detected along
    #: a given line of sight.  This should be smaller than the distance threshold, since these nodes
    #: indicate safe points for the robot to drive to, and the robot's size should be taken into account
    #: when estimating a maximum safe driving distance
    PROXIMITY_SCAN_OPEN_NODE_DISTANCE_MM = 230
    
    #: How frequently (in hertz) the robot checks proximity data while doing a scan.
    PROXIMITY_SCAN_SAMPLE_FREQUENCY_HZ = 15.0
    
    #: How long (in seconds) the robot spends doing its 360 degree scan.
    PROXIMITY_SCAN_TURN_DURATION_S = 10.0
    
    #: How close (in millimeters) together two detected contact points need to be for the robot to
    #: consider them part of a continuous wall.
    PROXIMITY_SCAN_BIND_THRESHOLD_MM = 30.0
    
    #: A delay (in seconds) the program waits after the scan finishes before shutting down.
    #: This allows the user time to explore the mapped 3d environment in the viewer and can be
    #: tuned to any desired length.  A value of 0.0 will prevent the viewer from closing.
    PROXIMITY_EXPLORATION_SHUTDOWN_DELAY_S = 8.0
    
    
    # @TODO enable when testing shows it is ready to go
    #: ACTIVELY_EXPLORE_SPACE can be activated to allow the robot to move
    #: into an open space after scanning, and continue the process until all open
    #: spaces are explored.
    ACTIVELY_EXPLORE_SPACE = True
    #: The speed (in millimeters/second) the robot drives while exploring.
    EXPLORE_DRIVE_SPEED_MMPS = 40.0
    #: The speed (in degrees/second) the robot turns while exploring.
    EXPLORE_TURN_SPEED_DPS = 90.0
    
    
    #: Takes a position in 3d space where a contact was detected, and adds it to the map state
    #: by either creating a wall, adding to wall or storing a loose contact point.
    async def add_proximity_contact_to_state(node_position: Vector3, state: MapState):
    
        # Comparison function for sorting points by distance.
        def compare_distance(elem):
            return (elem - node_position).magnitude_squared
    
        # Comparison function for sorting walls by distance using their head as a reference point.
        def compare_head_distance(elem):
            return (elem.vertices[0] - node_position).magnitude_squared
    
        # Comparison function for sorting walls by distance using their tail as a reference point.
        def compare_tail_distance(elem):
            return (elem.vertices[-1] - node_position).magnitude_squared
    
        # Sort all the loose contact nodes not yet incorporated into walls by
        # their distance to our reading position.  If the nearest one is within
        # our binding threshold - store it as a viable wall creation partner.
        # (infinity is used as a standin for 'nothing')
        closest_contact_distance = inf
        if state.contact_nodes:
            state.contact_nodes.sort(key=compare_distance)
            closest_contact_distance = (state.contact_nodes[0] - node_position).magnitude
            if closest_contact_distance > PROXIMITY_SCAN_BIND_THRESHOLD_MM:
                closest_contact_distance = inf
    
        # Sort all the walls both by head and tail distance from our sample
        # if either of the results are within our binding threshold, store them
        # as potential wall extension candidates for our sample.
        # (infinity is used as a standin for 'nothing')
        closest_head_distance = inf
        closest_tail_distance = inf
        if state.walls:
            state.walls.sort(key=compare_tail_distance)
            closest_tail_distance = (state.walls[0].vertices[-1] - node_position).magnitude
            if closest_tail_distance > PROXIMITY_SCAN_BIND_THRESHOLD_MM:
                closest_tail_distance = inf
    
            state.walls.sort(key=compare_head_distance)
            closest_head_distance = (state.walls[0].vertices[0] - node_position).magnitude
            if closest_head_distance > PROXIMITY_SCAN_BIND_THRESHOLD_MM:
                closest_head_distance = inf
    
        # Create a new wall if a loose contact node is in bind range and
        # is closer than any existing wall.  The contact node will be removed.
        if closest_contact_distance <= PROXIMITY_SCAN_BIND_THRESHOLD_MM and closest_contact_distance < closest_head_distance and closest_contact_distance < closest_tail_distance:
            state.walls.append(Wall(WallSegment(state.contact_nodes[0], node_position)))
            state.contact_nodes.pop(0)
    
        # Extend a wall if its head is within bind range and is closer than
        # any loose contacts or wall tails.
        elif closest_head_distance <= PROXIMITY_SCAN_BIND_THRESHOLD_MM and closest_head_distance < closest_contact_distance and closest_head_distance < closest_tail_distance:
            state.walls[0].insert_head(node_position)
    
        # Extend a wall if its tail is within bind range and is closer than
        # any loose contacts or wall heads.
        elif closest_tail_distance <= PROXIMITY_SCAN_BIND_THRESHOLD_MM and closest_tail_distance < closest_contact_distance and closest_tail_distance < closest_head_distance:
            state.walls.sort(key=compare_tail_distance)
            state.walls[0].insert_tail(node_position)
    
        # If nothing was found to bind with, store the sample as a loose contact node.
        else:
            state.contact_nodes.append(node_position)
    
    
    #: Takes a position in 3d space and adds it to the map state as an open node
    async def add_proximity_non_contact_to_state(node_position: Vector3, state: MapState):
        # Check to see if the uncontacted sample is inside of any area considered already explored.
        is_open_unexplored = True
        for ct in state.cleared_territories:
            if (node_position - ct.center).magnitude < ct.radius:
                is_open_unexplored = False
    
        # If the uncontacted sample is in unfamiliar ground, store it as an open node.
        if is_open_unexplored:
            state.open_nodes.append(node_position)
    
    
    #: Modifies the map state with the details of a proximity reading
    async def analyze_proximity_sample(reading: anki_vector.proximity.ProximitySensorData, robot: anki_vector.robot.Robot, state: MapState):
        # Check if the reading meets the engine's metrics for validity, and that it's within our specified distance threshold.
        reading_contacted = reading.is_valid and reading.distance.distance_mm < PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM
    
        if reading_contacted:
            # The distance will either be the reading data, or our threshold distance if the reading is considered uncontacted.
            reading_distance = reading.distance.distance_mm if reading_contacted else PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM
    
            # Convert the distance to a 3d position in worldspace.
            reading_position = Vector3(
                robot.pose.position.x + cos(robot.pose_angle_rad) * reading_distance,
                robot.pose.position.y + sin(robot.pose_angle_rad) * reading_distance,
                robot.pose.position.z)
    
            await add_proximity_contact_to_state(reading_position, state)
        else:
            # Convert the distance to a 3d position in worldspace.
            safe_driving_position = Vector3(
                robot.pose.position.x + cos(robot.pose_angle_rad) * PROXIMITY_SCAN_OPEN_NODE_DISTANCE_MM,
                robot.pose.position.y + sin(robot.pose_angle_rad) * PROXIMITY_SCAN_OPEN_NODE_DISTANCE_MM,
                robot.pose.position.z)
    
            await add_proximity_non_contact_to_state(safe_driving_position, state)
    
    
    #: Repeatedly collects proximity data samples and converts them to nodes and walls for the map state
    async def collect_proximity_data_loop(robot: anki_vector.robot.Robot, future: concurrent.futures.Future, state: MapState):
        try:
            scan_interval = 1.0 / PROXIMITY_SCAN_SAMPLE_FREQUENCY_HZ
    
            # Runs until the collection_active flag is cleared.
            # This flag is cleared external to this function.
            while state.collection_active:
                # Collect proximity data from the sensor.
                reading = robot.proximity.last_sensor_reading
                if reading is not None:
                    await analyze_proximity_sample(reading, robot, state)
                robot.viewer_3d.user_data_queue.put(state)
                await asyncio.sleep(scan_interval)
    
        # Exceptions raised in this process are ignored, unless we set them on the future, and then run future.result() at a later time
        except Exception as e:    # pylint: disable=broad-except
            future.set_exception(e)
        finally:
            future.set_result(state)
    
    
    #: Updates the map state by rotating 360 degrees and collecting/applying proximity data samples.
    async def scan_area(robot: anki_vector.robot.Robot, state: MapState):
        collect_future = concurrent.futures.Future()
    
        # The collect_proximity_data task relies on this external trigger to know when it's finished.
        state.collection_active = True
    
        # Activate the collection task while the robot turns in place.
        collect_task = robot.conn.loop.create_task(collect_proximity_data_loop(robot, collect_future, state))
    
        # Turn around in place, then send the signal to kill the collection task.
        robot.behavior.turn_in_place(angle=degrees(360.0), speed=degrees(360.0 / PROXIMITY_SCAN_TURN_DURATION_S))
        state.collection_active = False
    
        # Wait for the collection task to finish.
        robot.conn.run_coroutine(collect_task)
        # While the result of the task is not used, this call will propagate any exceptions that
    # occurred in the task, allowing for debug visibility.
        collect_future.result()
    
    
    #: Top level call to perform exploration and environment mapping
    async def map_explorer(robot: anki_vector.robot.Robot):
        # Drop the lift, so that it does not block the proximity sensor
        robot.behavior.set_lift_height(0.0)
    
        # Create the map state, and add its rendering function to the viewer's render pipeline
        state = MapState()
        robot.viewer_3d.add_render_call(state.render)
    
        # Comparison function used for sorting which open nodes are the furthest from all existing
        # walls and loose contacts.
        # (Using 1/r^2 to respond strongly to a small number of close contacts and weakly to many distant contacts)
        def open_point_sort_func(position: Vector3):
            proximity_sum = 0
            for p in state.contact_nodes:
                proximity_sum = proximity_sum + 1 / (p - position).magnitude_squared
            for c in state.walls:
                for p in c.vertices:
                    proximity_sum = proximity_sum + 1 / (p - position).magnitude_squared
            return proximity_sum
    
        # Loop until running out of open samples to navigate to,
        # or if the process has yet to start (indicated by a lack of cleared_territories).
        while (state.open_nodes and ACTIVELY_EXPLORE_SPACE) or not state.cleared_territories:
            if robot.pose:
            # Delete any open samples within range of the robot.
                state.open_nodes = [position for position in state.open_nodes if (position - robot.pose.position).magnitude > PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM]
    
                # Collect map data for the robot's current location.
                await scan_area(robot, state)
    
                # Add where the robot is to the map's cleared territories.
                state.cleared_territories.append(ClearedTerritory(robot.pose.position, PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM))
    
                # @TODO: This whole block should ideally be replaced with the go_to_pose actions when that is ready to go.
                # Alternatively, the turn&drive commands can be modified to respond to collisions by cancelling.  After
                # either change, ACTIVELY_EXPLORE_SPACE should be defaulted True
                if ACTIVELY_EXPLORE_SPACE and state.open_nodes:
                    # Sort the open nodes and find our next navigation point.
                    state.open_nodes.sort(key=open_point_sort_func)
                    nav_point = state.open_nodes[0]
    
                    # Calculate the distance and direction of this next navigation point.
                    nav_point_delta = Vector3(
                        nav_point.x - robot.pose.position.x,
                        nav_point.y - robot.pose.position.y,
                        0)
                    nav_distance = nav_point_delta.magnitude
                    nav_direction = nav_point_delta.normalized
    
                    # Convert the nav_direction into a turn angle relative to the robot's current facing.
                    robot_forward = Vector3(*robot.pose.rotation.to_matrix().forward_xyz).normalized
                    turn_angle = acos(nav_direction.dot(robot_forward))
                    if nav_direction.cross(robot_forward).z > 0:
                        turn_angle = -turn_angle
    
                    # Turn toward the nav point, and drive to it.
                    robot.behavior.turn_in_place(angle=radians(turn_angle), speed=degrees(EXPLORE_TURN_SPEED_DPS))
                    robot.behavior.drive_straight(distance=distance_mm(nav_distance), speed=speed_mmps(EXPLORE_DRIVE_SPEED_MMPS))
    
        if PROXIMITY_EXPLORATION_SHUTDOWN_DELAY_S == 0.0:
            while True:
                await asyncio.sleep(1.0)
        else:
            print('finished exploring - waiting an additional {0} seconds, then shutting down'.format(PROXIMITY_EXPLORATION_SHUTDOWN_DELAY_S))
            await asyncio.sleep(PROXIMITY_EXPLORATION_SHUTDOWN_DELAY_S)
    
    
    if __name__ == '__main__':
        # Connect to the robot
        args = parse_command_args()
        with anki_vector.Robot(args.serial,
                               enable_camera_feed=True,
                               show_viewer=True,
                               enable_nav_map_feed=False,
                               show_3d_viewer=True) as robotInstance:
            robotInstance.behavior.drive_off_charger()
            loop = asyncio.get_event_loop()
            loop.run_until_complete(map_explorer(robotInstance))

    2.4 remote control

    """Control Vector using a webpage on your computer.
    
    This example lets you control Vector by Remote Control, using a webpage served by Flask.
    """
    
    import io
    import json
    import sys
    import time
    from lib import flask_helpers
    
    import anki_vector
    from anki_vector import util
    
    
    try:
        from flask import Flask, request
    except ImportError:
        sys.exit("Cannot import from flask: Do `pip3 install --user flask` to install")
    
    try:
        from PIL import Image
    except ImportError:
        sys.exit("Cannot import from PIL: Do `pip3 install --user Pillow` to install")
    
    
    def create_default_image(image_width, image_height, do_gradient=False):
        """Create a place-holder PIL image to use until we have a live feed from Vector"""
        image_bytes = bytearray([0x70, 0x70, 0x70]) * image_width * image_height
    
        if do_gradient:
            i = 0
            for y in range(image_height):
                for x in range(image_width):
                    image_bytes[i] = int(255.0 * (x / image_width))   # R
                    image_bytes[i + 1] = int(255.0 * (y / image_height))  # G
                    image_bytes[i + 2] = 0                                # B
                    i += 3
    
        image = Image.frombytes('RGB', (image_width, image_height), bytes(image_bytes))
        return image
    
    
    flask_app = Flask(__name__)
    _default_camera_image = create_default_image(320, 240)
    _is_mouse_look_enabled_by_default = False
    
    
    def remap_to_range(x, x_min, x_max, out_min, out_max):
        """convert x (in x_min..x_max range) to out_min..out_max range"""
        if x < x_min:
            return out_min
        if x > x_max:
            return out_max
        ratio = (x - x_min) / (x_max - x_min)
        return out_min + ratio * (out_max - out_min)
    
    
    class RemoteControlVector:
    
        def __init__(self, robot):
            self.vector = robot
    
            self.drive_forwards = 0
            self.drive_back = 0
            self.turn_left = 0
            self.turn_right = 0
            self.lift_up = 0
            self.lift_down = 0
            self.head_up = 0
            self.head_down = 0
    
            self.go_fast = 0
            self.go_slow = 0
    
            self.is_mouse_look_enabled = _is_mouse_look_enabled_by_default
            self.mouse_dir = 0
    
            all_anim_names = self.vector.anim.anim_list
            all_anim_names.sort()
            self.anim_names = []
    
            # Hide a few specific test animations that don't behave well
            bad_anim_names = [
                "ANIMATION_TEST",
                "soundTestAnim"]
    
            for anim_name in all_anim_names:
                if anim_name not in bad_anim_names:
                    self.anim_names.append(anim_name)
    
            default_anims_for_keys = ["anim_turn_left_01",  # 0
                                      "anim_blackjack_victorwin_01",  # 1
                                      "anim_pounce_success_02",  # 2
                                      "anim_feedback_shutup_01",  # 3
                                      "anim_knowledgegraph_success_01",  # 4
                                      "anim_wakeword_groggyeyes_listenloop_01",  # 5
                                      "anim_fistbump_success_01",  # 6
                                      "anim_reacttoface_unidentified_01",  # 7
                                      "anim_rtpickup_loop_10",  # 8
                                      "anim_volume_stage_05"]  # 9
    
            self.anim_index_for_key = [0] * 10
            kI = 0
            for default_key in default_anims_for_keys:
                try:
                    anim_idx = self.anim_names.index(default_key)
                except ValueError:
                    print("Error: default_anim %s is not in the list of animations" % default_key)
                    anim_idx = kI
                self.anim_index_for_key[kI] = anim_idx
                kI += 1
    
            self.action_queue = []
            self.text_to_say = "Hi I'm Vector"
    
        def set_anim(self, key_index, anim_index):
            self.anim_index_for_key[key_index] = anim_index
    
        def handle_mouse(self, mouse_x, mouse_y):
            """Called whenever mouse moves
            mouse_x, mouse_y are in the 0..1 range (0,0 = top left, 1,1 = bottom right of window)
            """
            if self.is_mouse_look_enabled:
                mouse_sensitivity = 1.5  # higher = more twitchy
                self.mouse_dir = remap_to_range(mouse_x, 0.0, 1.0, -mouse_sensitivity, mouse_sensitivity)
                self.update_mouse_driving()
    
                desired_head_angle = remap_to_range(mouse_y, 0.0, 1.0, 45, -25)
                head_angle_delta = desired_head_angle - util.radians(self.vector.head_angle_rad).degrees
                head_vel = head_angle_delta * 0.03
                self.vector.motors.set_head_motor(head_vel)
    
        def set_mouse_look_enabled(self, is_mouse_look_enabled):
            was_mouse_look_enabled = self.is_mouse_look_enabled
            self.is_mouse_look_enabled = is_mouse_look_enabled
            if not is_mouse_look_enabled:
                # cancel any current mouse-look turning
                self.mouse_dir = 0
                if was_mouse_look_enabled:
                    self.update_mouse_driving()
                    self.update_head()
    
        def update_drive_state(self, key_code, is_key_down, speed_changed):
            """Update state of driving intent from keyboard, and if anything changed then call update_driving"""
            update_driving = True
            if key_code == ord('W'):
                self.drive_forwards = is_key_down
            elif key_code == ord('S'):
                self.drive_back = is_key_down
            elif key_code == ord('A'):
                self.turn_left = is_key_down
            elif key_code == ord('D'):
                self.turn_right = is_key_down
            else:
                if not speed_changed:
                    update_driving = False
            return update_driving
    
        def update_lift_state(self, key_code, is_key_down, speed_changed):
            """Update state of lift move intent from keyboard, and if anything changed then call update_lift"""
            update_lift = True
            if key_code == ord('R'):
                self.lift_up = is_key_down
            elif key_code == ord('F'):
                self.lift_down = is_key_down
            else:
                if not speed_changed:
                    update_lift = False
            return update_lift
    
        def update_head_state(self, key_code, is_key_down, speed_changed):
            """Update state of head move intent from keyboard, and if anything changed then call update_head"""
            update_head = True
            if key_code == ord('T'):
                self.head_up = is_key_down
            elif key_code == ord('G'):
                self.head_down = is_key_down
            else:
                if not speed_changed:
                    update_head = False
            return update_head
    
        def handle_key(self, key_code, is_shift_down, is_alt_down, is_key_down):
            """Called on any key press or release
               Holding a key down may result in repeated handle_key calls with is_key_down==True
            """
    
            # Update desired speed / fidelity of actions based on shift/alt being held
            was_go_fast = self.go_fast
            was_go_slow = self.go_slow
    
            self.go_fast = is_shift_down
            self.go_slow = is_alt_down
    
            speed_changed = (was_go_fast != self.go_fast) or (was_go_slow != self.go_slow)
    
            update_driving = self.update_drive_state(key_code, is_key_down, speed_changed)
    
            update_lift = self.update_lift_state(key_code, is_key_down, speed_changed)
    
            update_head = self.update_head_state(key_code, is_key_down, speed_changed)
    
            # Update driving, head and lift as appropriate
            if update_driving:
                self.update_mouse_driving()
            if update_head:
                self.update_head()
            if update_lift:
                self.update_lift()
    
            # Handle any keys being released (e.g. the end of a key-click)
            if not is_key_down:
                if ord('9') >= key_code >= ord('0'):
                    anim_name = self.key_code_to_anim_name(key_code)
                    self.queue_action((self.vector.anim.play_animation, anim_name))
                elif key_code == ord(' '):
                    self.queue_action((self.vector.say_text, self.text_to_say))
    
        def key_code_to_anim_name(self, key_code):
            key_num = key_code - ord('0')
            anim_num = self.anim_index_for_key[key_num]
            anim_name = self.anim_names[anim_num]
            return anim_name
    
        def func_to_name(self, func):
            if func == self.vector.say_text:
                return "say_text"
            if func == self.vector.anim.play_animation:
                return "play_anim"
            return "UNKNOWN"
    
        def action_to_text(self, action):
            func, args = action
            return self.func_to_name(func) + "( " + str(args) + " )"
    
        def action_queue_to_text(self, action_queue):
            out_text = ""
            i = 0
            for action in action_queue:
                out_text += "[" + str(i) + "] " + self.action_to_text(action)
                i += 1
            return out_text
    
        def queue_action(self, new_action):
            if len(self.action_queue) > 10:
                self.action_queue.pop(0)
            self.action_queue.append(new_action)
    
        def update(self):
            """Try and execute the next queued action"""
            if self.action_queue:
                queued_action, action_args = self.action_queue[0]
                if queued_action(action_args):
                    self.action_queue.pop(0)
    
        def pick_speed(self, fast_speed, mid_speed, slow_speed):
            if self.go_fast:
                if not self.go_slow:
                    return fast_speed
            elif self.go_slow:
                return slow_speed
            return mid_speed
    
        def update_lift(self):
            lift_speed = self.pick_speed(8, 4, 2)
            lift_vel = (self.lift_up - self.lift_down) * lift_speed
            self.vector.motors.set_lift_motor(lift_vel)
    
        def update_head(self):
            if not self.is_mouse_look_enabled:
                head_speed = self.pick_speed(2, 1, 0.5)
                head_vel = (self.head_up - self.head_down) * head_speed
                self.vector.motors.set_head_motor(head_vel)
    
        def update_mouse_driving(self):
            drive_dir = (self.drive_forwards - self.drive_back)
    
            turn_dir = (self.turn_right - self.turn_left) + self.mouse_dir
            if drive_dir < 0:
                # It feels more natural to turn the opposite way when reversing
                turn_dir = -turn_dir
    
            forward_speed = self.pick_speed(150, 75, 50)
            turn_speed = self.pick_speed(100, 50, 30)
    
            l_wheel_speed = (drive_dir * forward_speed) + (turn_speed * turn_dir)
            r_wheel_speed = (drive_dir * forward_speed) - (turn_speed * turn_dir)
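            # Differential drive: the shared drive term sets forward speed, the
            # opposing turn terms steer. set_wheel_motors takes the two wheel
            # speeds (mm/s) followed by per-wheel accelerations (mm/s^2).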
    
            self.vector.motors.set_wheel_motors(l_wheel_speed, r_wheel_speed, l_wheel_speed * 4, r_wheel_speed * 4)
    
    
    def get_anim_sel_drop_down(selectorIndex):
        html_text = """<select onchange="handleDropDownSelect(this)" name="animSelector""" + str(selectorIndex) + """">"""
        i = 0
        for anim_name in flask_app.remote_control_vector.anim_names:
            is_selected_item = (i == flask_app.remote_control_vector.anim_index_for_key[selectorIndex])
            selected_text = ''' selected="selected"''' if is_selected_item else ""
            html_text += """<option value=""" + str(i) + selected_text + """>""" + anim_name + """</option>"""
            i += 1
        html_text += """</select>"""
        return html_text
    
    
    def get_anim_sel_drop_downs():
        html_text = ""
        for i in range(10):
            # list keys 1..9,0 as that's the layout on the keyboard
            key = i + 1 if (i < 9) else 0
            html_text += str(key) + """: """ + get_anim_sel_drop_down(key) + """<br>"""
        return html_text
    
    
    def to_js_bool_string(bool_value):
        return "true" if bool_value else "false"
    
    
    @flask_app.route("/")
    def handle_index_page():
        return """
        <html>
            <head>
                <title>remote_control_vector.py display</title>
            </head>
            <body>
                <h1>Remote Control Vector</h1>
                <table>
                    <tr>
                        <td valign = top>
                            <div id="vectorImageMicrosoftWarning" style="display: none;color: #ff9900; text-align: center;">Video feed performance is better in Chrome or Firefox due to mjpeg limitations in this browser</div>
                            <img src="vectorImage" id="vectorImageId" width=640 height=480>
                            <div id="DebugInfoId"></div>
                        </td>
                        <td width=30></td>
                        <td valign=top>
                            <h2>Controls:</h2>
    
                            <h3>Driving:</h3>
    
                            <b>W A S D</b> : Drive Forwards / Left / Back / Right<br><br>
                            <b>Q</b> : Toggle Mouse Look: <button id="mouseLookId" onClick=onMouseLookButtonClicked(this) style="font-size: 14px">Default</button><br>
                            <b>Mouse</b> : Move in browser window to aim<br>
                            (steer and head angle)<br>
                            (similar to an FPS game)<br>
    
                            <h3>Head:</h3>
                            <b>T</b> : Move Head Up<br>
                            <b>G</b> : Move Head Down<br>
    
                            <h3>Lift:</h3>
                            <b>R</b> : Move Lift Up<br>
                            <b>F</b> : Move Lift Down<br>
                            <h3>General:</h3>
                            <b>Shift</b> : Hold to Move Faster (Driving, Head and Lift)<br>
                            <b>Alt</b> : Hold to Move Slower (Driving, Head and Lift)<br>
                            <b>P</b> : Toggle Free Play mode: <button id="freeplayId" onClick=onFreeplayButtonClicked(this) style="font-size: 14px">Default</button><br>
                            <h3>Play Animations</h3>
                            <b>0 .. 9</b> : Play Animation mapped to that key<br>
                            <h3>Talk</h3>
                            <b>Space</b> : Say <input type="text" name="sayText" id="sayTextId" value="""" + flask_app.remote_control_vector.text_to_say + """" onchange=handleTextInput(this)>
                        </td>
                        <td width=30></td>
                        <td valign=top>
                        <h2>Animation key mappings:</h2>
                        """ + get_anim_sel_drop_downs() + """<br>
                        </td>
                    </tr>
                </table>
    
                <script type="text/javascript">
                    var gLastClientX = -1
                    var gLastClientY = -1
                    var gIsMouseLookEnabled = """ + to_js_bool_string(_is_mouse_look_enabled_by_default) + """
                    var gIsFreeplayEnabled = false
                    var gUserAgent = window.navigator.userAgent;
                    var gIsMicrosoftBrowser = gUserAgent.indexOf('MSIE ') > 0 || gUserAgent.indexOf('Trident/') > 0 || gUserAgent.indexOf('Edge/') > 0;
                    var gSkipFrame = false;
    
                    if (gIsMicrosoftBrowser) {
                        document.getElementById("vectorImageMicrosoftWarning").style.display = "block";
                    }
    
                    function postHttpRequest(url, dataSet)
                    {
                        var xhr = new XMLHttpRequest();
                        xhr.open("POST", url, true);
                        xhr.send( JSON.stringify( dataSet ) );
                    }
    
                    function updateVector()
                    {
                        console.log("Updating log")
                        if (gIsMicrosoftBrowser && !gSkipFrame) {
                            // IE doesn't support MJPEG, so we need to ping the server for more images.
                            // Though, if this happens too frequently, the controls will be unresponsive.
                            gSkipFrame = true;
                            document.getElementById("vectorImageId").src="vectorImage?" + (new Date()).getTime();
                        } else if (gSkipFrame) {
                            gSkipFrame = false;
                        }
                        var xhr = new XMLHttpRequest();
                        xhr.onreadystatechange = function() {
                            if (xhr.readyState == XMLHttpRequest.DONE) {
                                document.getElementById("DebugInfoId").innerHTML = xhr.responseText
                            }
                        }
    
                        xhr.open("POST", "updateVector", true);
                        xhr.send( null );
                    }
                    setInterval(updateVector , 60);
    
                    function updateButtonEnabledText(button, isEnabled)
                    {
                        button.firstChild.data = isEnabled ? "Enabled" : "Disabled";
                    }
    
                    function onMouseLookButtonClicked(button)
                    {
                        gIsMouseLookEnabled = !gIsMouseLookEnabled;
                        updateButtonEnabledText(button, gIsMouseLookEnabled);
                        isMouseLookEnabled = gIsMouseLookEnabled
                        postHttpRequest("setMouseLookEnabled", {isMouseLookEnabled})
                    }
    
                    function onFreeplayButtonClicked(button)
                    {
                        gIsFreeplayEnabled = !gIsFreeplayEnabled;
                        updateButtonEnabledText(button, gIsFreeplayEnabled);
                        isFreeplayEnabled = gIsFreeplayEnabled
                        postHttpRequest("setFreeplayEnabled", {isFreeplayEnabled})
                    }
    
                    updateButtonEnabledText(document.getElementById("mouseLookId"), gIsMouseLookEnabled);
                    updateButtonEnabledText(document.getElementById("freeplayId"), gIsFreeplayEnabled);
    
                    function handleDropDownSelect(selectObject)
                    {
                        selectedIndex = selectObject.selectedIndex
                        itemName = selectObject.name
                        postHttpRequest("dropDownSelect", {selectedIndex, itemName});
                    }
    
                    function handleKeyActivity (e, actionType)
                    {
                        var keyCode  = (e.keyCode ? e.keyCode : e.which);
                        var hasShift = (e.shiftKey ? 1 : 0)
                        var hasCtrl  = (e.ctrlKey  ? 1 : 0)
                        var hasAlt   = (e.altKey   ? 1 : 0)
    
                        if (actionType=="keyup")
                        {
                            if (keyCode == 80) // 'P'
                            {
                                // Simulate a click of the freeplay button
                                onFreeplayButtonClicked(document.getElementById("freeplayId"))
                            }
                            else if (keyCode == 81) // 'Q'
                            {
                                // Simulate a click of the mouse look button
                                onMouseLookButtonClicked(document.getElementById("mouseLookId"))
                            }
                        }
    
                        postHttpRequest(actionType, {keyCode, hasShift, hasCtrl, hasAlt})
                    }
    
                    function handleMouseActivity (e, actionType)
                    {
                        var clientX = e.clientX / document.body.clientWidth  // 0..1 (left..right)
                        var clientY = e.clientY / document.body.clientHeight // 0..1 (top..bottom)
                        var isButtonDown = e.which && (e.which != 0) ? 1 : 0
                        var deltaX = (gLastClientX >= 0) ? (clientX - gLastClientX) : 0.0
                        var deltaY = (gLastClientY >= 0) ? (clientY - gLastClientY) : 0.0
                        gLastClientX = clientX
                        gLastClientY = clientY
    
                        postHttpRequest(actionType, {clientX, clientY, isButtonDown, deltaX, deltaY})
                    }
    
                    function handleTextInput(textField)
                    {
                        textEntered = textField.value
                        postHttpRequest("sayText", {textEntered})
                    }
    
                    document.addEventListener("keydown", function(e) { handleKeyActivity(e, "keydown") } );
                    document.addEventListener("keyup",   function(e) { handleKeyActivity(e, "keyup") } );
    
                    document.addEventListener("mousemove",   function(e) { handleMouseActivity(e, "mousemove") } );
    
                    function stopEventPropagation(event)
                    {
                        if (event.stopPropagation)
                        {
                            event.stopPropagation();
                        }
                        else
                        {
                            event.cancelBubble = true
                        }
                    }
    
                    document.getElementById("sayTextId").addEventListener("keydown", function(event) {
                        stopEventPropagation(event);
                    } );
                    document.getElementById("sayTextId").addEventListener("keyup", function(event) {
                        stopEventPropagation(event);
                    } );
                </script>
    
            </body>
        </html>
        """
    
    
    def get_annotated_image():
        # TODO: Update to use annotated image (add annotate module)
        image = flask_app.remote_control_vector.vector.camera.latest_image
        if image is None:
            return _default_camera_image
    
        return image
    
    
    def streaming_video():
        """Video streaming generator function"""
        while True:
            if flask_app.remote_control_vector:
                image = get_annotated_image()
    
                img_io = io.BytesIO()
                image.save(img_io, 'PNG')
                img_io.seek(0)
                yield (b'--frame\r\n'
                       b'Content-Type: image/png\r\n\r\n' + img_io.getvalue() + b'\r\n')
            else:
                time.sleep(.1)
    
    
    def serve_single_image():
        if flask_app.remote_control_vector:
            image = get_annotated_image()
            if image:
                return flask_helpers.serve_pil_image(image)
    
        return flask_helpers.serve_pil_image(_default_camera_image)
    
    
    def is_microsoft_browser(req):
        agent = req.user_agent.string
        return 'Edge/' in agent or 'MSIE ' in agent or 'Trident/' in agent
    
    
    @flask_app.route("/vectorImage")
    def handle_vectorImage():
        if is_microsoft_browser(request):
            return serve_single_image()
        return flask_helpers.stream_video(streaming_video)
    
    
    def handle_key_event(key_request, is_key_down):
        message = json.loads(key_request.data.decode("utf-8"))
        if flask_app.remote_control_vector:
            flask_app.remote_control_vector.handle_key(key_code=(message['keyCode']), is_shift_down=message['hasShift'],
                                                       is_alt_down=message['hasAlt'], is_key_down=is_key_down)
        return ""
    
    
    @flask_app.route('/mousemove', methods=['POST'])
    def handle_mousemove():
        """Called from Javascript whenever mouse moves"""
        message = json.loads(request.data.decode("utf-8"))
        if flask_app.remote_control_vector:
            flask_app.remote_control_vector.handle_mouse(mouse_x=(message['clientX']), mouse_y=message['clientY'])
        return ""
    
    
    @flask_app.route('/setMouseLookEnabled', methods=['POST'])
    def handle_setMouseLookEnabled():
        """Called from Javascript whenever mouse-look mode is toggled"""
        message = json.loads(request.data.decode("utf-8"))
        if flask_app.remote_control_vector:
            flask_app.remote_control_vector.set_mouse_look_enabled(is_mouse_look_enabled=message['isMouseLookEnabled'])
        return ""
    
    
    @flask_app.route('/setFreeplayEnabled', methods=['POST'])
    def handle_setFreeplayEnabled():
        """Called from Javascript whenever freeplay mode is toggled on/off"""
        message = json.loads(request.data.decode("utf-8"))
        if flask_app.remote_control_vector:
            isFreeplayEnabled = message['isFreeplayEnabled']
            connection = flask_app.remote_control_vector.vector.conn
            connection.request_control(enable=(not isFreeplayEnabled))
        return ""
    
    
    @flask_app.route('/keydown', methods=['POST'])
    def handle_keydown():
        """Called from Javascript whenever a key is down (note: can generate repeat calls if held down)"""
        return handle_key_event(request, is_key_down=True)
    
    
    @flask_app.route('/keyup', methods=['POST'])
    def handle_keyup():
        """Called from Javascript whenever a key is released"""
        return handle_key_event(request, is_key_down=False)
    
    
    @flask_app.route('/dropDownSelect', methods=['POST'])
    def handle_dropDownSelect():
        """Called from Javascript whenever an animSelector dropdown menu is selected (i.e. modified)"""
        message = json.loads(request.data.decode("utf-8"))
    
        item_name_prefix = "animSelector"
        item_name = message['itemName']
    
        if flask_app.remote_control_vector and item_name.startswith(item_name_prefix):
            item_name_index = int(item_name[len(item_name_prefix):])
            flask_app.remote_control_vector.set_anim(item_name_index, message['selectedIndex'])
    
        return ""
    
    
    @flask_app.route('/sayText', methods=['POST'])
    def handle_sayText():
        """Called from Javascript whenever the saytext text field is modified"""
        message = json.loads(request.data.decode("utf-8"))
        if flask_app.remote_control_vector:
            flask_app.remote_control_vector.text_to_say = message['textEntered']
        return ""
    
    
    @flask_app.route('/updateVector', methods=['POST'])
    def handle_updateVector():
        if flask_app.remote_control_vector:
            flask_app.remote_control_vector.update()
            action_queue_text = ""
            i = 1
            for action in flask_app.remote_control_vector.action_queue:
                action_queue_text += str(i) + ": " + flask_app.remote_control_vector.action_to_text(action) + "<br>"
                i += 1
    
            return "Action Queue:<br>" + action_queue_text + "\n"
        return ""
    
    
    def run():
        args = util.parse_command_args()
    
        with anki_vector.AsyncRobot(args.serial, enable_camera_feed=True) as robot:
            flask_app.remote_control_vector = RemoteControlVector(robot)
    
            robot.behavior.drive_off_charger()
    
            flask_helpers.run_flask(flask_app)
    
    
    if __name__ == '__main__':
        try:
            run()
        except KeyboardInterrupt as e:
            pass
        except anki_vector.exceptions.VectorConnectionException as e:
            sys.exit("A connection error occurred: %s" % e)


     

C++ STL Container Summary: vector (Comprehensive Edition)


    1. Introduction to vector

    vector, literally "vector" as in mathematics, is a sequence container in the C++ STL. Like the array provided by the C++ language itself, it stores its elements contiguously, so vector operations closely resemble array operations. Unlike an array, however, whose storage is static and cannot change once allocated, a vector's storage is dynamic: as elements are added, its internal mechanism automatically expands the space to accommodate them, which is why it is also called a variable-length array.
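
    The difference is easy to see in a few lines. The following is a minimal sketch in plain standard C++ (an illustration added here, not part of the original text): the array's element count is fixed at compile time, while the vector grows on demand.

    #include <iostream>
    #include <vector>
    using namespace std;

    int main() {
        int arr[4] = {1, 2, 3, 4};        // static: 4 slots, fixed once allocated
        vector<int> vec = {1, 2, 3, 4};   // dynamic: starts with 4 elements
        vec.push_back(5);                 // grows automatically to hold a 5th
        cout << sizeof(arr) / sizeof(arr[0]) << endl;  // prints 4
        cout << vec.size() << endl;                    // prints 5
        return 0;
    }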

    2. vector storage mechanism

    The dynamic storage of a vector is laid out as shown below.
    [Figure: vector storage layout, working space followed by spare space]
    To reduce the time cost of repeated allocations, a vector usually allocates more space than was actually requested. Its total storage is therefore divided into two parts as in the figure: the working space covers the requested size, while the spare space is the extra allocation.

    When more room is needed, the vector uses the spare space first. For example, to append 6 at the end, it stores 6 in the slot the finish iterator points to, then adjusts the position of finish.
    [Figure: appending 6 into the spare space]
    When the spare space cannot hold the elements being inserted, the vector allocates a new block in some larger free region, typically twice the original size (it cannot simply extend in place, since there is no guarantee of free memory right after the original block), moves the existing data over in order, and finally frees the original storage. The process can be summarized as reallocate, move the data, free the old space.

    Below, "inserting a new element 3 after the spare space is exhausted" is used as an example; the figure describes the storage before and after.
    [Figure: reallocation when inserting 3 with no spare space left]
    Since all of the above is transparent to the user, one point deserves special attention: once a vector's storage is reallocated, every iterator pointing into the original vector is invalidated!
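
    Both the over-allocation and the iterator invalidation can be observed with a small experiment. The sketch below assumes nothing beyond the standard library; note that the growth factor is implementation-defined (libstdc++ typically doubles the capacity, while MSVC grows it by about 1.5x), so the printed capacities may differ from the doubling described above.

    #include <iostream>
    #include <vector>
    using namespace std;

    int main() {
        vector<int> vec;
        size_t last_cap = vec.capacity();
        for (int i = 0; i < 100; i++) {
            vec.push_back(i);
            if (vec.capacity() != last_cap) {
                // a reallocation just happened
                last_cap = vec.capacity();
                cout << "size=" << vec.size() << " capacity=" << last_cap << endl;
            }
        }

        vector<int>::iterator it = vec.begin();
        vec.reserve(vec.capacity() * 2);  // forces a reallocation
        // *it would now be undefined behavior: it points into the freed block.
        it = vec.begin();                 // re-acquire iterators after reallocation
        cout << *it << endl;              // safe again
        return 0;
    }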

    3. Creating and initializing a vector

    Initialization is straightforward; pick whichever of the following forms fits your needs.

    //the vector header
    #include <vector>
    //create an empty vector
    vector<int> vec1;
    //create a vector holding n elements, all initialized to 0
    vector<int> vec2(n);
    //create a vector holding n elements, all initialized to x
    vector<int> vec3(n, x);
    //copy vec3 into vec4: same size, same element values
    vector<int> vec4(vec3);
    

    4. vector basic operations in detail

    Many people do not see why they should learn how a container is implemented, believing that knowing how to use it is enough. In fact, understanding the implementation gives a much deeper grasp of the container's operations, and often some insight into the underlying algorithmic ideas as well.

    Next we analyze the basic vector operations against the underlying storage diagram; a complete runnable example combining these operations follows at the end of this section.
    [Figure: vector layout with the start and finish iterators]

    • Iterators
      Two iterator types appearing below may be less familiar:
      • reverse_iterator, a reverse iterator whose increment direction is the opposite of the usual iterator; for example, rit++ moves the position one step to the left.
      • const_iterator, which cannot change the value of the element it points to, although it can be re-pointed at other elements; this is exactly the reverse of an iterator declared with the const keyword.
    //iterator to the first element, i.e. start in the diagram
    vec.begin();

    //iterator one past the last element, i.e. finish in the diagram
    vec.end();

    //returns a vector<T>::reverse_iterator pointing to finish-1
    vec.rbegin();

    //returns a vector<T>::reverse_iterator pointing to the position before start
    vec.rend();

    //C++11: returns a vector<T>::const_iterator, same position as start in the diagram
    vec.cbegin();

    //C++11: returns a vector<T>::const_iterator, same position as finish in the diagram
    vec.cend();
    
    • Capacity
    //number of elements currently stored
    vec.size();

    //total capacity, including the spare space
    vec.capacity();

    //maximum capacity, i.e. the most elements of this type that can ever be stored
    vec.max_size();

    //check whether the vector is empty, i.e. whether start == finish
    vec.empty();

    //change the number of elements in the vector to n
    vec.resize(n);
    
    • Element access
      Because a vector is also stored contiguously, random access to its elements runs in O(1) time.
    //random access by index, like an array, e.g. vec[0]
    operator []

    //same as above, with bounds checking: throws out_of_range when out of bounds
    vec.at(index);

    //returns the first element, i.e. the one the start iterator points to
    vec.front();

    //returns the last element, i.e. the one finish-1 points to
    vec.back();
    
    • Modifying operations
      Because the storage is contiguous, inserting or erasing at an arbitrary position shifts the elements after it (operations on the last element are the exception), so both take O(n) time. The figure below sketches the underlying steps of an erase, which helps in understanding; as it also shows, erasing does not change the total allocated space.
      [Figure: elements shifting left during erase]
    //assignment operations (similar to initialization)
    //assign n copies of x to vec
    vec.assign(n, x);
    //assign the elements in the range [first, last) to vec
    vec.assign(first, last);
    //given a raw array pointer, assign the elements of array[0, n) to vec
    vec.assign(array, array + n);

    //store x in the slot finish points to, then advance finish; O(1)
    vec.push_back(x);

    //erase the element at finish-1; O(1)
    vec.pop_back();

    //O(n); position is an iterator marking the insertion point
    //insert a new element with value x at position
    vec.insert(position, x);
    //insert n new elements, all with value x, at position
    vec.insert(position, n, x);

    //O(n); the parameters below are all iterators
    //erase the element at one specific position
    vec.erase(position);
    //erase all elements in [first, last)
    vec.erase(first, last);

    //exchange all elements of vec and vec2
    vec.swap(vec2);

    //erase all elements; internally this calls erase(begin(), end())
    vec.clear();
    
    • Traversal
    //forward traversal
    //the most common way
    for (int i = 0; i < vec.size(); i++) cout << vec[i] << endl;
    //using an iterator
    vector<int>::iterator it;
    for (it = vec.begin(); it != vec.end(); it++) cout << *it << endl;
    //C++11 range-based for
    for (auto x : vec) cout << x << endl;

    //reverse traversal
    //the common way
    for (int i = vec.size() - 1; i >= 0; i--) cout << vec[i] << endl;
    //using a reverse iterator
    vector<int>::reverse_iterator rit;
    for (rit = vec.rbegin(); rit != vec.rend(); rit++) cout << *rit << endl;
    
    • Searching
      Searching mainly relies on the find function provided by the STL.
    #include <algorithm>
    //find requires <algorithm>; search for x in [first, last), returning an iterator
    vector<int>::iterator it = find(vec.begin(), vec.end(), x);
    //print YES if x was found, NO otherwise
    if (it != vec.end()) cout << "YES" << endl;
    else cout << "NO" << endl;
    
    • Sorting and reversing
      These mainly rely on the sort and reverse functions provided by the STL.
    #include <algorithm>
    //ascending sort (the default)
    sort(vec.begin(), vec.end());
    sort(vec.begin(), vec.end(), less<int>());
    //descending sort
    sort(vec.begin(), vec.end(), greater<int>());

    //reverse the elements
    reverse(vec.begin(), vec.end());
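
    As promised in the section introduction, here is a short self-contained program tying the operations together. It is only a sketch built from the calls listed above, combining insertion, sorting, searching, erasure, and traversal:

    #include <algorithm>
    #include <iostream>
    #include <vector>
    using namespace std;

    int main() {
        vector<int> vec;
        for (int i = 5; i >= 1; i--) vec.push_back(i);      // 5 4 3 2 1

        sort(vec.begin(), vec.end());                       // 1 2 3 4 5

        vector<int>::iterator it = find(vec.begin(), vec.end(), 3);
        if (it != vec.end()) vec.erase(it);                 // 1 2 4 5

        vec.insert(vec.begin(), 0);                         // 0 1 2 4 5

        for (auto x : vec) cout << x << ' ';                // prints: 0 1 2 4 5
        cout << endl;
        return 0;
    }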
    

    5. References

    1. The official C++ vector documentation
    2. The Annotated STL Sources (《STL源码剖析》)
    3. The author's own (admittedly messy) experiment code

    Since you have read all the way to here (and I have finally written my way to here), I hope you can do the following three things:

    • Give it a like; it is the lowest-cost way to encourage an author's hard work.
    • Promise me you will actually learn this! (Do not just toss it into your bookmarks to gather dust.)
    • Consider following; the STL series of articles will be updated continuously.