    Notes on Using the Vector AI Robot SDK

    Cozmo was released in 2016; two years later, in 2018, Vector came to market with a voice assistant and more features, the component count rising from 300+ to 700+.

    The Vector SDK documentation is at developer.anki.com/vector/docs/. It is currently a beta release.

    First, download the Vector SDK from GitHub:

    docs holds the documentation and examples the sample programs, along with some readme files and installation scripts.

    The SDK supports Windows, Linux, and macOS; see the official site for the detailed installation and setup steps.
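
    In outline, setup looks roughly like the following, per the official docs for this beta (the exact commands may differ between SDK versions): install the package with pip3, then run the SDK's configuration script to pair with and authenticate against your robot.

    pip3 install --user anki_vector
    python3 -m anki_vector.configure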

    Since this is a beta, the examples are still fairly sparse, not as rich as Cozmo's:

    face_images holds jpg/png images that can be shown on Vector's face; you can add your own.

    apps contains some more complete applications; tutorials contains the tutorials.

    1. tutorials

    The tutorials are not grouped into chapters; there are 13 in all:

    1.1 hello world

    """Hello World
    
    Make Vector say 'Hello World' in this simple Vector SDK example program.
    """
    
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            print("Say 'Hello World'...")
            robot.say_text("Hello World")
    
    
    if __name__ == "__main__":
        main()

    Compare Cozmo's hello world:

    '''Hello World
    
    Make Cozmo say 'Hello World' in this simple Cozmo SDK example program.
    '''
    
    import cozmo
    
    
    def cozmo_program(robot: cozmo.robot.Robot):
        robot.say_text("Hello World").wait_for_completed()
    
    
    cozmo.run_program(cozmo_program)

    Note the differences: Vector connects through a Robot context manager keyed by the robot's serial number, while Cozmo hands a connected robot to a callback via cozmo.run_program; and Cozmo's say_text returns an action that must be awaited with wait_for_completed, whereas Vector's call blocks until the speech finishes.
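
    If you prefer Cozmo-style explicit waiting, the SDK also ships an AsyncRobot variant. A minimal sketch, assuming anki_vector.AsyncRobot behaves as described in the SDK docs (its calls return concurrent.futures.Future objects):

    import anki_vector


    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.AsyncRobot(args.serial) as robot:
            # Each call returns a future; block on .result() to wait for completion
            say_future = robot.say_text("Hello World")
            say_future.result()


    if __name__ == "__main__":
        main()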

    1.2 drive square

    """Make Vector drive in a square.
    
    Make Vector drive in a square by going forward and turning left 4 times in a row.
    """
    
    import anki_vector
    from anki_vector.util import degrees, distance_mm, speed_mmps
    
    
    def main():
        args = anki_vector.util.parse_command_args()
    
        # The robot drives in a square: forward, then a 90 degree left turn, four times over
        with anki_vector.Robot(args.serial) as robot:
            robot.behavior.drive_off_charger()
    
            # Use a "for loop" to repeat the indented code 4 times
            # Note: the _ variable name can be used when you don't need the value
            for _ in range(4):
                print("Drive Vector straight...")
                robot.behavior.drive_straight(distance_mm(200), speed_mmps(50))
    
                print("Turn Vector in place...")
                robot.behavior.turn_in_place(degrees(90))
    
    
    if __name__ == "__main__":
        main()
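
    The same two calls generalize to any regular polygon; for example, a triangle, using only the APIs from the example above:

    import anki_vector
    from anki_vector.util import degrees, distance_mm, speed_mmps


    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            robot.behavior.drive_off_charger()

            # An equilateral triangle: 3 sides, turning the exterior angle (120 degrees) each time
            for _ in range(3):
                robot.behavior.drive_straight(distance_mm(200), speed_mmps(50))
                robot.behavior.turn_in_place(degrees(120))


    if __name__ == "__main__":
        main()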

    1.3 motors

    """Drive Vector's wheels, lift and head motors directly
    
    This is an example of how you can also have low-level control of Vector's motors
    (wheels, lift and head) for fine-grained control and ease of controlling
    multiple things at once.
    """
    
    import time
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            robot.behavior.drive_off_charger()
    
            # Tell the head motor to start lowering the head (at 5 radians per second)
            print("Lower Vector's head...")
            robot.motors.set_head_motor(-5.0)
    
            # Tell the lift motor to start lowering the lift (at 5 radians per second)
            print("Lower Vector's lift...")
            robot.motors.set_lift_motor(-5.0)
    
            # Tell Vector to drive the left wheel at 25 mmps (millimeters per second),
            # and the right wheel at 50 mmps (so Vector will drive forwards while also
            # turning to the left)
            print("Set Vector's wheel motors...")
            robot.motors.set_wheel_motors(25, 50)
    
            # wait for 3 seconds (the head, lift and wheels will move while we wait)
            time.sleep(3)
    
            # Tell the head motor to start raising the head (at 5 radians per second)
            print("Raise Vector's head...")
            robot.motors.set_head_motor(5)
    
            # Tell the lift motor to start raising the lift (at 5 radians per second)
            print("Raise Vector's lift...")
            robot.motors.set_lift_motor(5)
    
            # Tell Vector to drive the left wheel at 50 mmps (millimeters per second),
            # and the right wheel at -50 mmps (so Vector will turn in-place to the right)
            print("Set Vector's wheel motors...")
            robot.motors.set_wheel_motors(50, -50)
    
            # Wait for 3 seconds (the head, lift and wheels will move while we wait)
            time.sleep(3)
    
            # Stop the motors, which unlocks the tracks
            robot.motors.set_wheel_motors(0, 0)
            robot.motors.set_lift_motor(0)
            robot.motors.set_head_motor(0)
    
    
    if __name__ == "__main__":
        main()

    1.4 animation

    """Play an animation on Vector
    """
    
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            robot.behavior.drive_off_charger()
    
            # Play an animation via its name.
            #
            # Warning: Future versions of the app might change these, so for future-proofing
            # we recommend using play_animation_trigger when it becomes available.
            #
            # See the remote_control.py example in apps for an easy way to see
            # the available animations.
            animation = 'anim_pounce_success_02'
            print("Playing animation by name: " + animation)
            robot.anim.play_animation(animation)
    
    
    if __name__ == "__main__":
        main()
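
    To see which animation names are available without running the remote control app, you can list them from the robot; robot.anim.anim_list is the same property the remote_control.py app below uses:

    import anki_vector


    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            # anim_list holds the animation names the robot knows about
            for name in robot.anim.anim_list:
                print(name)


    if __name__ == "__main__":
        main()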

    1.5 play behaviors

    """Tell Vector to drive on and off the charger.
    """
    
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
    
        with anki_vector.Robot(args.serial) as robot:
            print("Drive Vector onto charger...")
            robot.behavior.drive_on_charger()
    
            print("Drive Vector off of charger...")
            robot.behavior.drive_off_charger()
    
    
    if __name__ == '__main__':
        main()

    1.6 face image

    This example loads a png/jpg file (from the face_images folder mentioned earlier) and displays it on Vector's face:

    import os
    import sys
    import time
    
    try:
        from PIL import Image
    except ImportError:
        sys.exit("Cannot import from PIL: Do `pip3 install --user Pillow` to install")
    
    import anki_vector
    from anki_vector.util import degrees
    
    
    def main():
        args = anki_vector.util.parse_command_args()
    
        with anki_vector.Robot(args.serial) as robot:
            # If necessary, move Vector's Head and Lift to make it easy to see his face
            robot.behavior.set_head_angle(degrees(45.0))
            robot.behavior.set_lift_height(0.0)
    
            current_directory = os.path.dirname(os.path.realpath(__file__))
            image_path = os.path.join(current_directory, "..", "face_images", "cozmo_image.jpg")
    
            # Load an image
            image_file = Image.open(image_path)
    
            # Convert the image to the format used by the Screen
            print("Display image on Vector's face...")
            screen_data = anki_vector.screen.convert_image_to_screen_data(image_file)
            robot.screen.set_screen_with_image_data(screen_data, 4.0)
            time.sleep(5)
    
    
    if __name__ == "__main__":
        main()
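
    Any PIL image works once converted, not just files on disk. A hypothetical sketch that renders text onto an in-memory image instead (assuming Vector's 184x96-pixel display; convert_image_to_screen_data and set_screen_with_image_data are taken from the example above):

    import time

    from PIL import Image, ImageDraw

    import anki_vector


    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            # Draw onto a black image sized to Vector's screen (assumed 184x96)
            image = Image.new("RGB", (184, 96), color=(0, 0, 0))
            draw = ImageDraw.Draw(image)
            draw.text((10, 40), "Hi from the SDK!", fill=(0, 255, 0))

            screen_data = anki_vector.screen.convert_image_to_screen_data(image)
            robot.screen.set_screen_with_image_data(screen_data, 4.0)
            time.sleep(5)


    if __name__ == "__main__":
        main()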

    1.7 dock with cube

    """Tell Vector to drive up to a seen cube.
    
    This example demonstrates Vector driving to and docking with a cube, without
    picking it up.  Vector will line his arm hooks up with the cube so that they are
    inserted into the cube's corners.
    
    You must place a cube in front of Vector so that he can see it.
    """
    
    import anki_vector
    from anki_vector.util import degrees
    
    
    def main():
        args = anki_vector.util.parse_command_args()
    
        docking_result = None
        with anki_vector.Robot(args.serial) as robot:
            robot.behavior.drive_off_charger()
    
            # If necessary, move Vector's Head and Lift down
            robot.behavior.set_head_angle(degrees(-5.0))
            robot.behavior.set_lift_height(0.0)
    
            print("Connecting to a cube...")
            robot.world.connect_cube()
    
            if robot.world.connected_light_cube:
                print("Begin cube docking...")
                dock_response = robot.behavior.dock_with_cube(
                    robot.world.connected_light_cube,
                    num_retries=3)
                if dock_response:
                    docking_result = dock_response.result
    
                robot.world.disconnect_cube()
    
        if docking_result:
            if docking_result.code != anki_vector.messaging.protocol.ActionResult.ACTION_RESULT_SUCCESS:
                print("Cube docking failed with code {0} ({1})".format(str(docking_result).rstrip('\n\r'), docking_result.code))
        else:
            print("Cube docking failed.")
    
    
    if __name__ == "__main__":
        main()

    1.8 drive to cliff and back up

    """Make Vector drive to a cliff and back up.
    
    Place the robot about a foot from a "cliff" (such as a tabletop edge),
    then run this script.
    
    This tutorial is an advanced example that shows the SDK's integration
    with the Vector behavior system.
    
    The Vector behavior system uses an order of prioritizations to determine
    what the robot will do next. The highest priorities in the behavior
    system include the following:
    * When Vector reaches a cliff, he will back up to avoid falling.
    * When Vector is low on battery, he will start searching for his charger
    and self-dock.
    
    When the SDK is running at a lower priority level than high priorities
    like cliff and low battery, an SDK program can lose its ability to
    control the robot when a cliff is reached or when battery is low.
    
    This example shows how, after reaching a cliff, the SDK program can
    re-request control so it can continue controlling the robot after
    reaching the cliff.
    """
    
    import anki_vector
    from anki_vector.util import distance_mm, speed_mmps
    
    
    def main():
        args = anki_vector.util.parse_command_args()
    
        with anki_vector.Robot(args.serial) as robot:
            print("Vector SDK has behavior control...")
            robot.behavior.drive_off_charger()
    
            print("Drive Vector straight until he reaches cliff...")
            # Once robot reaches cliff, he will play his typical cliff reactions.
            robot.behavior.drive_straight(distance_mm(5000), speed_mmps(100))
    
            robot.conn.run_coroutine(robot.conn.control_lost_event.wait()).result()
    
            print("Lost SDK behavior control. Request SDK behavior control again...")
            robot.conn.request_control()
    
            print("Drive Vector backward away from the cliff...")
            robot.behavior.drive_straight(distance_mm(-300), speed_mmps(100))
    
    
    if __name__ == "__main__":
        main()

    1.9 show photo

    """Show a photo taken by Vector.
    
    Grabs the pictures off of Vector and open them via PIL.
    
    Before running this script, please make sure you have successfully
    had Vector take a photo by saying, "Hey Vector! Take a photo."
    """
    
    import io
    import sys
    
    try:
        from PIL import Image
    except ImportError:
        sys.exit("Cannot import from PIL: Do `pip3 install --user Pillow` to install")
    
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            if len(robot.photos.photo_info) == 0:
                print('\n\nNo photos found on Vector. Ask him to take a photo first by saying, "Hey Vector! Take a photo."\n\n')
                return
            for photo in robot.photos.photo_info:
                print(f"Opening photo {photo.photo_id}")
                val = robot.photos.get_photo(photo.photo_id)
                image = Image.open(io.BytesIO(val.image))
                image.show()
    
    
    if __name__ == "__main__":
        main()
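
    Since get_photo returns the raw image bytes, saving the photos to disk instead of opening a viewer is a small change; everything here is standard PIL plus the calls from the example above:

    import io

    from PIL import Image

    import anki_vector


    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            for photo in robot.photos.photo_info:
                val = robot.photos.get_photo(photo.photo_id)
                image = Image.open(io.BytesIO(val.image))
                # Save to the working directory rather than calling image.show()
                image.save(f"vector_photo_{photo.photo_id}.jpg")


    if __name__ == "__main__":
        main()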

    1.10 eye color

    """Set Vector's eye color.
    """
    
    import time
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
    
        with anki_vector.Robot(args.serial) as robot:
            print("Set Vector's eye color to purple...")
            robot.behavior.set_eye_color(hue=0.83, saturation=0.76)
    
            print("Sleep 5 seconds...")
            time.sleep(5)
    
    
    if __name__ == '__main__':
        main()
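
    hue and saturation are ordinary HSV values in the 0.0 to 1.0 range, so other colors are easy to derive (roughly 0.0 red, 0.33 green, 0.58 blue, 0.83 purple). A small sketch that cycles a few hues using only set_eye_color from the example above:

    import time

    import anki_vector


    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            # Approximate HSV hues: red, green, blue, purple
            for hue in (0.0, 0.33, 0.58, 0.83):
                robot.behavior.set_eye_color(hue=hue, saturation=0.76)
                time.sleep(2)


    if __name__ == "__main__":
        main()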

    1.11 face event subscription 

    """Wait for Vector to see a face, and then print output to the console.
    
    This script demonstrates how to set up a listener for an event. It
    subscribes to event 'robot_observed_face'. When that event is dispatched,
    method 'on_robot_observed_face' is called, which prints text to the console.
    Vector will also say "I see a face" one time, and the program will exit when
    he finishes speaking.
    """
    
    import functools
    import threading
    
    import anki_vector
    from anki_vector.events import Events
    from anki_vector.util import degrees
    
    said_text = False
    
    
    def main():
        evt = threading.Event()
    
        def on_robot_observed_face(robot, event_type, event):
            print("Vector sees a face")
            global said_text
            if not said_text:
                said_text = True
                robot.say_text("I see a face!")
                evt.set()
    
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial, enable_face_detection=True) as robot:
    
            # If necessary, move Vector's Head and Lift to make it easy to see his face
            robot.behavior.set_head_angle(degrees(45.0))
            robot.behavior.set_lift_height(0.0)
    
            on_robot_observed_face = functools.partial(on_robot_observed_face, robot)
            robot.events.subscribe(on_robot_observed_face, Events.robot_observed_face)
    
            print("------ waiting for face events, press ctrl+c to exit early ------")
    
            try:
                if not evt.wait(timeout=5):
                    print("------ Vector never saw your face! ------")
            except KeyboardInterrupt:
                pass
    
            robot.events.unsubscribe(on_robot_observed_face, Events.robot_observed_face)
    
    
    if __name__ == '__main__':
        main()

    1.12 wake word subscription 

    """Wait for Vector to hear "Hey Vector!" and then play an animation.
    
    The wake_word event only is dispatched when the SDK program has
    not requested behavior control. After the robot hears "Hey Vector!"
    and the event is received, you can then request behavior control
    and control the robot. See the 'requires_behavior_control' method in
    connection.py for more information.
    """
    
    import functools
    import threading
    
    import anki_vector
    from anki_vector.events import Events
    
    wake_word_heard = False
    
    
    def main():
        evt = threading.Event()
    
        def on_wake_word(robot, event_type, event):
            robot.conn.request_control()
    
            global wake_word_heard
            if not wake_word_heard:
                wake_word_heard = True
                robot.say_text("Hello")
                evt.set()
    
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial, requires_behavior_control=False, cache_animation_list=False) as robot:
            on_wake_word = functools.partial(on_wake_word, robot)
            robot.events.subscribe(on_wake_word, Events.wake_word)
    
            print('------ Vector is waiting to hear "Hey Vector!" Press ctrl+c to exit early ------')
    
            try:
                if not evt.wait(timeout=10):
                    print('------ Vector never heard "Hey Vector!" ------')
            except KeyboardInterrupt:
                pass
    
    
    if __name__ == '__main__':
        main()

    1.13 custom objects

    """This example demonstrates how you can define custom objects.
    
    The example defines several custom objects (2 cubes, a wall and a box). When
    Vector sees the markers for those objects he will report that he observed an
    object of that size and shape there.
    
    You can adjust the markers, marker sizes, and object sizes to fit whatever
    object you have and the exact size of the markers that you print out.
    """
    
    import time
    
    import anki_vector
    from anki_vector.objects import CustomObjectMarkers, CustomObjectTypes
    
    
    def handle_object_appeared(event_type, event):
        # This will be called whenever an EvtObjectAppeared is dispatched -
        # whenever an Object comes into view.
        print(f"--------- Vector started seeing an object --------- \n{event.obj}")
    
    
    def handle_object_disappeared(event_type, event):
        # This will be called whenever an EvtObjectDisappeared is dispatched -
        # whenever an Object goes out of view.
        print(f"--------- Vector stopped seeing an object --------- \n{event.obj}")
    
    
    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial,
                               default_logging=False,
                               show_viewer=True,
                               show_3d_viewer=True,
                               enable_camera_feed=True,
                               enable_custom_object_detection=True,
                               enable_nav_map_feed=True) as robot:
            # Add event handlers for whenever Vector sees a new object
            robot.events.subscribe(handle_object_appeared, anki_vector.events.Events.object_appeared)
            robot.events.subscribe(handle_object_disappeared, anki_vector.events.Events.object_disappeared)
    
            # define a unique cube (44mm x 44mm x 44mm) (approximately the same size as Vector's light cube)
            # with a 50mm x 50mm Circles2 image on every face. Note that marker_width_mm and marker_height_mm
            # parameter values must match the dimensions of the printed marker.
            cube_obj = robot.world.define_custom_cube(custom_object_type=CustomObjectTypes.CustomType00,
                                                      marker=CustomObjectMarkers.Circles2,
                                                      size_mm=44.0,
                                                      marker_width_mm=50.0,
                                                      marker_height_mm=50.0,
                                                      is_unique=True)
    
            # define a unique cube (88mm x 88mm x 88mm) (approximately 2x the size of Vector's light cube)
            # with a 50mm x 50mm Circles3 image on every face.
            big_cube_obj = robot.world.define_custom_cube(custom_object_type=CustomObjectTypes.CustomType01,
                                                          marker=CustomObjectMarkers.Circles3,
                                                          size_mm=88.0,
                                                          marker_width_mm=50.0,
                                                          marker_height_mm=50.0,
                                                          is_unique=True)
    
            # define a unique wall (150mm x 120mm, 10mm thick for all walls)
            # with a 50mm x 30mm Triangles2 image on front and back
            wall_obj = robot.world.define_custom_wall(custom_object_type=CustomObjectTypes.CustomType02,
                                                      marker=CustomObjectMarkers.Triangles2,
                                                      width_mm=150,
                                                      height_mm=120,
                                                      marker_width_mm=50,
                                                      marker_height_mm=30,
                                                      is_unique=True)
    
            # define a unique box (20mm deep x 20mm wide x 20mm tall)
            # with a different 50mm x 50mm image on each of the 6 faces
            box_obj = robot.world.define_custom_box(custom_object_type=CustomObjectTypes.CustomType03,
                                                    marker_front=CustomObjectMarkers.Diamonds2,   # front
                                                    marker_back=CustomObjectMarkers.Hexagons2,    # back
                                                    marker_top=CustomObjectMarkers.Hexagons3,     # top
                                                    marker_bottom=CustomObjectMarkers.Hexagons4,  # bottom
                                                    marker_left=CustomObjectMarkers.Triangles3,   # left
                                                    marker_right=CustomObjectMarkers.Triangles4,  # right
                                                    depth_mm=20.0,
                                                    width_mm=20.0,
                                                    height_mm=20.0,
                                                    marker_width_mm=50.0,
                                                    marker_height_mm=50.0,
                                                    is_unique=True)
    
            if ((cube_obj is not None) and (big_cube_obj is not None) and
                    (wall_obj is not None) and (box_obj is not None)):
                print("All objects defined successfully!")
            else:
                print("One or more object definitions failed!")
                return
    
            print("\n\nShow a marker specified in the Python script to Vector and you will see the related 3d objects\n"
                  "display in Vector's 3d_viewer window. You will also see messages print every time a custom object\n"
                  "enters or exits Vector's view. Markers can be found from the docs under CustomObjectMarkers.\n\n")
    
            try:
                while True:
                    time.sleep(0.5)
            except KeyboardInterrupt:
                pass
    
    
    if __name__ == "__main__":
        main()

     

    2. apps

    There are four parts, each with a single example:

    2.1 3d viewer

    """3d Viewer example, with remote control.
    
    This is an example of how you can use the 3D viewer with a program, and the
    3D Viewer and controls will work automatically.
    """
    
    import time
    
    import anki_vector
    
    
    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial,
                               show_viewer=True,
                               enable_camera_feed=True,
                               show_3d_viewer=True,
                               enable_face_detection=True,
                               enable_custom_object_detection=True,
                               enable_nav_map_feed=True):
            print("Starting 3D Viewer. Use Ctrl+C to quit.")
            try:
                while True:
                    time.sleep(0.5)
            except KeyboardInterrupt:
                pass
    
    
    if __name__ == "__main__":
        main()

    2.2 interactive shell

    """Command Line Interface for Vector
    
    This is an example of integrating Vector with an ipython-based command line interface.
    """
    
    import sys
    
    try:
        from IPython.terminal.embed import InteractiveShellEmbed
    except ImportError:
        sys.exit('Cannot import from ipython: Do `pip3 install ipython` to install')
    
    import anki_vector
    
    usage = """Use the [tab] key to auto-complete commands, and see all available methods and properties.
    
    For example, type 'robot.' then press the [tab] key and you'll see all the robot functions.
    Keep pressing tab to cycle through all of the available options.
    
    All IPython commands work as usual.
    Here's some useful syntax:
      robot?   -> Details about 'robot'.
      robot??  -> More detailed information including code for 'robot'.
    These commands will work on all objects inside of the shell.
    
    You can even call the functions that send messages to Vector, and he'll respond just like he would in a script.
    Try it out! Type:
        robot.anim.play_animation('anim_pounce_success_02')
    """
    
    args = anki_vector.util.parse_command_args()
    
    ipyshell = InteractiveShellEmbed(banner1='\nWelcome to the Vector Interactive Shell!',
                                     exit_msg='Goodbye\n')
    
    if __name__ == "__main__":
        with anki_vector.Robot(args.serial,
                               enable_camera_feed=True,
                               show_viewer=True) as robot:
            # Invoke the ipython shell while connected to Vector
            ipyshell(usage)

    2.3 proximity mapper

    """Maps a region around Vector using the proximity sensor.
    
    Vector will turn in place and use his sensor to detect walls in his
    local environment.  These walls are displayed in the 3D Viewer.  The
    visualizer does not affect the robot's internal state or behavior.
    
    Vector expects this environment to be static - if objects are moved
    he will have no knowledge of them.
    """
    
    import asyncio
    import concurrent
    from math import cos, sin, inf, acos
    import os
    import sys
    
    sys.path.append(os.path.join(os.path.dirname(__file__), 'lib'))
    from proximity_mapper_state import ClearedTerritory, MapState, Wall, WallSegment   # pylint: disable=wrong-import-position
    
    import anki_vector   # pylint: disable=wrong-import-position
    from anki_vector.util import parse_command_args, radians, degrees, distance_mm, speed_mmps, Vector3  # pylint: disable=wrong-import-position
    
    # Constants
    
    #: The maximum distance (in millimeters) the scan considers valid for a proximity response.
    #: Wall detection past this threshold will be disregarded, and an 'open' node will
    #: be created at this distance instead.  Increasing this value may degrade the
    #: reliability of this program, see note below:
    #:
    #: NOTE: The proximity sensor works by sending a light pulse, and seeing how long that pulse takes
    #: to reflect and return to the sensor.  The proximity sensor does not specifically have a maximum
    #: range, but will get unreliable results below a certain return signal strength.  This return signal
    #: is impacted by environmental conditions (such as the orientation and material of the detected obstacle)
    #: as well as the distance.  Additionally, increasing this radius will reduce the resolution of contact
    #: points, necessitating changes to PROXIMITY_SCAN_SAMPLE_FREQUENCY_HZ and PROXIMITY_SCAN_BIND_THRESHOLD_MM
    #: to maintain effective wall prediction.
    PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM = 300
    
    #: The distance (in millimeters) to place an open node if no proximity results are detected along
    #: a given line of sight.  This should be smaller than the distance threshold, since these nodes
    #: indicate safe points for the robot to drive to, and the robot's size should be taken into account
    #: when estimating a maximum safe driving distance
    PROXIMITY_SCAN_OPEN_NODE_DISTANCE_MM = 230
    
    #: How frequently (in hertz) the robot checks proximity data while doing a scan.
    PROXIMITY_SCAN_SAMPLE_FREQUENCY_HZ = 15.0
    
    #: How long (in seconds) the robot spends doing its 360 degree scan.
    PROXIMITY_SCAN_TURN_DURATION_S = 10.0
    
    #: How close (in millimeters) together two detected contact points need to be for the robot to
    #: consider them part of a continuous wall.
    PROXIMITY_SCAN_BIND_THRESHOLD_MM = 30.0
    
    #: A delay (in seconds) the program waits after the scan finishes before shutting down.
    #: This allows the user time to explore the mapped 3d environment in the viewer and can be
    #: tuned to any desired length.  A value of 0.0 will prevent the viewer from closing.
    PROXIMITY_EXPLORATION_SHUTDOWN_DELAY_S = 8.0
    
    
    # @TODO enable when testing shows it is ready to go
    #: ACTIVELY_EXPLORE_SPACE can be activated to allow the robot to move
    #: into an open space after scanning, and continue the process until all open
    #: spaces are explored.
    ACTIVELY_EXPLORE_SPACE = True
    #: The speed (in millimeters/second) the robot drives while exploring.
    EXPLORE_DRIVE_SPEED_MMPS = 40.0
    #: The speed (in degrees/second) the robot turns while exploring.
    EXPLORE_TURN_SPEED_DPS = 90.0
    
    
    #: Takes a position in 3d space where a contact was detected, and adds it to the map state
    #: by either creating a wall, adding to a wall, or storing a loose contact point.
    async def add_proximity_contact_to_state(node_position: Vector3, state: MapState):
    
        # Comparison function for sorting points by distance.
        def compare_distance(elem):
            return (elem - node_position).magnitude_squared
    
        # Comparison function for sorting walls by distance using their head as a reference point.
        def compare_head_distance(elem):
            return (elem.vertices[0] - node_position).magnitude_squared
    
        # Comparison function for sorting walls by distance using their tail as a reference point.
        def compare_tail_distance(elem):
            return (elem.vertices[-1] - node_position).magnitude_squared
    
        # Sort all the loose contact nodes not yet incorporated into walls by
        # their distance to our reading position.  If the nearest one is within
        # our binding threshold - store it as a viable wall creation partner.
        # (infinity is used as a standin for 'nothing')
        closest_contact_distance = inf
        if state.contact_nodes:
            state.contact_nodes.sort(key=compare_distance)
            closest_contact_distance = (state.contact_nodes[0] - node_position).magnitude
            if closest_contact_distance > PROXIMITY_SCAN_BIND_THRESHOLD_MM:
                closest_contact_distance = inf
    
        # Sort all the walls both by head and tail distance from our sample
        # if either of the results are within our binding threshold, store them
        # as potential wall extension candidates for our sample.
        # (infinity is used as a standin for 'nothing')
        closest_head_distance = inf
        closest_tail_distance = inf
        if state.walls:
            state.walls.sort(key=compare_tail_distance)
            closest_tail_distance = (state.walls[0].vertices[-1] - node_position).magnitude
            if closest_tail_distance > PROXIMITY_SCAN_BIND_THRESHOLD_MM:
                closest_tail_distance = inf
    
            state.walls.sort(key=compare_head_distance)
            closest_head_distance = (state.walls[0].vertices[0] - node_position).magnitude
            if closest_head_distance > PROXIMITY_SCAN_BIND_THRESHOLD_MM:
                closest_head_distance = inf
    
        # Create a new wall if a loose contact node is in bind range and
        # is closer than any existing wall.  The contact node will be removed.
        if closest_contact_distance <= PROXIMITY_SCAN_BIND_THRESHOLD_MM and closest_contact_distance < closest_head_distance and closest_contact_distance < closest_tail_distance:
            state.walls.append(Wall(WallSegment(state.contact_nodes[0], node_position)))
            state.contact_nodes.pop(0)
    
        # Extend a wall if its head is within bind range and is closer than
        # any loose contacts or wall tails.
        elif closest_head_distance <= PROXIMITY_SCAN_BIND_THRESHOLD_MM and closest_head_distance < closest_contact_distance and closest_head_distance < closest_tail_distance:
            state.walls[0].insert_head(node_position)
    
        # Extend a wall if its tail is within bind range and is closer than
        # any loose contacts or wall heads.
        elif closest_tail_distance <= PROXIMITY_SCAN_BIND_THRESHOLD_MM and closest_tail_distance < closest_contact_distance and closest_tail_distance < closest_head_distance:
            state.walls.sort(key=compare_tail_distance)
            state.walls[0].insert_tail(node_position)
    
        # If nothing was found to bind with, store the sample as a loose contact node.
        else:
            state.contact_nodes.append(node_position)
    
    
    #: Takes a position in 3d space and adds it to the map state as an open node
    async def add_proximity_non_contact_to_state(node_position: Vector3, state: MapState):
        # Check to see if the uncontacted sample is inside of any area considered already explored.
        is_open_unexplored = True
        for ct in state.cleared_territories:
            if (node_position - ct.center).magnitude < ct.radius:
                is_open_unexplored = False
    
        # If the uncontacted sample is in unfamiliar ground, store it as an open node.
        if is_open_unexplored:
            state.open_nodes.append(node_position)
    
    
    #: Modifies the map state with the details of a proximity reading
    async def analyze_proximity_sample(reading: anki_vector.proximity.ProximitySensorData, robot: anki_vector.robot.Robot, state: MapState):
        # Check if the reading meets the engine's metrics for validity, and that it's within our specified distance threshold.
        reading_contacted = reading.is_valid and reading.distance.distance_mm < PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM
    
        if reading_contacted:
            # The distance will either be the reading data, or our threshold distance if the reading is considered uncontacted.
            reading_distance = reading.distance.distance_mm if reading_contacted else PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM
    
            # Convert the distance to a 3d position in worldspace.
            reading_position = Vector3(
                robot.pose.position.x + cos(robot.pose_angle_rad) * reading_distance,
                robot.pose.position.y + sin(robot.pose_angle_rad) * reading_distance,
                robot.pose.position.z)
    
            await add_proximity_contact_to_state(reading_position, state)
        else:
            # Convert the distance to a 3d position in worldspace.
            safe_driving_position = Vector3(
                robot.pose.position.x + cos(robot.pose_angle_rad) * PROXIMITY_SCAN_OPEN_NODE_DISTANCE_MM,
                robot.pose.position.y + sin(robot.pose_angle_rad) * PROXIMITY_SCAN_OPEN_NODE_DISTANCE_MM,
                robot.pose.position.z)
    
            await add_proximity_non_contact_to_state(safe_driving_position, state)
    
    
    #: Repeatedly collects proximity data samples and converts them to nodes and walls for the map state
    async def collect_proximity_data_loop(robot: anki_vector.robot.Robot, future: concurrent.futures.Future, state: MapState):
        try:
            scan_interval = 1.0 / PROXIMITY_SCAN_SAMPLE_FREQUENCY_HZ
    
            # Runs until the collection_active flag is cleared.
            # This flag is cleared external to this function.
            while state.collection_active:
                # Collect proximity data from the sensor.
                reading = robot.proximity.last_sensor_reading
                if reading is not None:
                    await analyze_proximity_sample(reading, robot, state)
                robot.viewer_3d.user_data_queue.put(state)
                await asyncio.sleep(scan_interval)
    
        # Exceptions raised in this process are ignored, unless we set them on the future, and then run future.result() at a later time
        except Exception as e:    # pylint: disable=broad-except
            future.set_exception(e)
        finally:
            future.set_result(state)
    
    
    #: Updates the map state by rotating 360 degrees and collecting/applying proximity data samples.
    async def scan_area(robot: anki_vector.robot.Robot, state: MapState):
        collect_future = concurrent.futures.Future()
    
        # The collect_proximity_data task relies on this external trigger to know when it is finished.
        state.collection_active = True
    
        # Activate the collection task while the robot turns in place.
        collect_task = robot.conn.loop.create_task(collect_proximity_data_loop(robot, collect_future, state))
    
        # Turn around in place, then send the signal to kill the collection task.
        robot.behavior.turn_in_place(angle=degrees(360.0), speed=degrees(360.0 / PROXIMITY_SCAN_TURN_DURATION_S))
        state.collection_active = False
    
        # Wait for the collection task to finish.
        robot.conn.run_coroutine(collect_task)
        # While the result of the task is not used, this call will propagate any exceptions that
        # occurred in the task, allowing for debug visibility.
        collect_future.result()
    
    
    #: Top level call to perform exploration and environment mapping
    async def map_explorer(robot: anki_vector.robot.Robot):
        # Drop the lift, so that it does not block the proximity sensor
        robot.behavior.set_lift_height(0.0)
    
        # Create the map state, and add its rendering function to the viewer's render pipeline
        state = MapState()
        robot.viewer_3d.add_render_call(state.render)
    
        # Comparison function used for sorting which open nodes are the furthest from all existing
        # walls and loose contacts.
        # (Using 1/r^2 to respond strongly to small numbers of close contacts and weakly to many distant contacts)
        def open_point_sort_func(position: Vector3):
            proximity_sum = 0
            for p in state.contact_nodes:
                proximity_sum = proximity_sum + 1 / (p - position).magnitude_squared
            for c in state.walls:
                for p in c.vertices:
                    proximity_sum = proximity_sum + 1 / (p - position).magnitude_squared
            return proximity_sum
    
        # Loop until running out of open samples to navigate to,
        # or if the process has yet to start (indicated by a lack of cleared_territories).
        while (state.open_nodes and ACTIVELY_EXPLORE_SPACE) or not state.cleared_territories:
            if robot.pose:
                # Delete any open samples within range of the robot.
                state.open_nodes = [position for position in state.open_nodes if (position - robot.pose.position).magnitude > PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM]
    
                # Collect map data for the robot's current location.
                await scan_area(robot, state)
    
                # Add where the robot is to the map's cleared territories.
                state.cleared_territories.append(ClearedTerritory(robot.pose.position, PROXIMITY_SCAN_DISTANCE_THRESHOLD_MM))
    
                # @TODO: This whole block should ideally be replaced with the go_to_pose actions when that is ready to go.
                # Alternatively, the turn&drive commands can be modified to respond to collisions by cancelling.  After
                # either change, ACTIVELY_EXPLORE_SPACE should be defaulted True
                if ACTIVELY_EXPLORE_SPACE and state.open_nodes:
                    # Sort the open nodes and find our next navigation point.
                    state.open_nodes.sort(key=open_point_sort_func)
                    nav_point = state.open_nodes[0]
    
                    # Calculate the distance and direction of this next navigation point.
                    nav_point_delta = Vector3(
                        nav_point.x - robot.pose.position.x,
                        nav_point.y - robot.pose.position.y,
                        0)
                    nav_distance = nav_point_delta.magnitude
                    nav_direction = nav_point_delta.normalized
    
                    # Convert the nav_direction into a turn angle relative to the robot's current facing.
                    robot_forward = Vector3(*robot.pose.rotation.to_matrix().forward_xyz).normalized
                    turn_angle = acos(nav_direction.dot(robot_forward))
                    if nav_direction.cross(robot_forward).z > 0:
                        turn_angle = -turn_angle
    
                    # Turn toward the nav point, and drive to it.
                    robot.behavior.turn_in_place(angle=radians(turn_angle), speed=degrees(EXPLORE_TURN_SPEED_DPS))
                    robot.behavior.drive_straight(distance=distance_mm(nav_distance), speed=speed_mmps(EXPLORE_DRIVE_SPEED_MMPS))
    
        if PROXIMITY_EXPLORATION_SHUTDOWN_DELAY_S == 0.0:
            while True:
                await asyncio.sleep(1.0)
        else:
            print('finished exploring - waiting an additional {0} seconds, then shutting down'.format(PROXIMITY_EXPLORATION_SHUTDOWN_DELAY_S))
            await asyncio.sleep(PROXIMITY_EXPLORATION_SHUTDOWN_DELAY_S)
    
    
    if __name__ == '__main__':
        # Connect to the robot
        args = parse_command_args()
        with anki_vector.Robot(args.serial,
                               enable_camera_feed=True,
                               show_viewer=True,
                               enable_nav_map_feed=False,
                               show_3d_viewer=True) as robotInstance:
            robotInstance.behavior.drive_off_charger()
            loop = asyncio.get_event_loop()
            loop.run_until_complete(map_explorer(robotInstance))

    2.4 remote control

    """Control Vector using a webpage on your computer.
    
    This example lets you control Vector by Remote Control, using a webpage served by Flask.
    """
    
    import io
    import json
    import sys
    import time
    from lib import flask_helpers
    
    import anki_vector
    from anki_vector import util
    
    
    try:
        from flask import Flask, request
    except ImportError:
        sys.exit("Cannot import from flask: Do `pip3 install --user flask` to install")
    
    try:
        from PIL import Image
    except ImportError:
        sys.exit("Cannot import from PIL: Do `pip3 install --user Pillow` to install")
    
    
    def create_default_image(image_width, image_height, do_gradient=False):
        """Create a place-holder PIL image to use until we have a live feed from Vector"""
        image_bytes = bytearray([0x70, 0x70, 0x70]) * image_width * image_height
    
        if do_gradient:
            i = 0
            for y in range(image_height):
                for x in range(image_width):
                    image_bytes[i] = int(255.0 * (x / image_width))   # R
                    image_bytes[i + 1] = int(255.0 * (y / image_height))  # G
                    image_bytes[i + 2] = 0                                # B
                    i += 3
    
        image = Image.frombytes('RGB', (image_width, image_height), bytes(image_bytes))
        return image
    
    
    flask_app = Flask(__name__)
    _default_camera_image = create_default_image(320, 240)
    _is_mouse_look_enabled_by_default = False
    
    
    def remap_to_range(x, x_min, x_max, out_min, out_max):
        """convert x (in x_min..x_max range) to out_min..out_max range"""
        if x < x_min:
            return out_min
        if x > x_max:
            return out_max
        ratio = (x - x_min) / (x_max - x_min)
        return out_min + ratio * (out_max - out_min)
    
    
    class RemoteControlVector:
    
        def __init__(self, robot):
            self.vector = robot
    
            self.drive_forwards = 0
            self.drive_back = 0
            self.turn_left = 0
            self.turn_right = 0
            self.lift_up = 0
            self.lift_down = 0
            self.head_up = 0
            self.head_down = 0
    
            self.go_fast = 0
            self.go_slow = 0
    
            self.is_mouse_look_enabled = _is_mouse_look_enabled_by_default
            self.mouse_dir = 0
    
            all_anim_names = self.vector.anim.anim_list
            all_anim_names.sort()
            self.anim_names = []
    
            # Hide a few specific test animations that don't behave well
            bad_anim_names = [
                "ANIMATION_TEST",
                "soundTestAnim"]
    
            for anim_name in all_anim_names:
                if anim_name not in bad_anim_names:
                    self.anim_names.append(anim_name)
    
            default_anims_for_keys = ["anim_turn_left_01",  # 0
                                      "anim_blackjack_victorwin_01",  # 1
                                      "anim_pounce_success_02",  # 2
                                      "anim_feedback_shutup_01",  # 3
                                      "anim_knowledgegraph_success_01",  # 4
                                      "anim_wakeword_groggyeyes_listenloop_01",  # 5
                                      "anim_fistbump_success_01",  # 6
                                      "anim_reacttoface_unidentified_01",  # 7
                                      "anim_rtpickup_loop_10",  # 8
                                      "anim_volume_stage_05"]  # 9
    
            self.anim_index_for_key = [0] * 10
            kI = 0
            for default_key in default_anims_for_keys:
                try:
                    anim_idx = self.anim_names.index(default_key)
                except ValueError:
                    print("Error: default_anim %s is not in the list of animations" % default_key)
                    anim_idx = kI
                self.anim_index_for_key[kI] = anim_idx
                kI += 1
    
            self.action_queue = []
            self.text_to_say = "Hi I'm Vector"
    
        def set_anim(self, key_index, anim_index):
            self.anim_index_for_key[key_index] = anim_index
    
        def handle_mouse(self, mouse_x, mouse_y):
            """Called whenever mouse moves
            mouse_x, mouse_y are in the 0..1 range (0,0 = top left, 1,1 = bottom right of window)
            """
            if self.is_mouse_look_enabled:
                mouse_sensitivity = 1.5  # higher = more twitchy
                self.mouse_dir = remap_to_range(mouse_x, 0.0, 1.0, -mouse_sensitivity, mouse_sensitivity)
                self.update_mouse_driving()
    
                desired_head_angle = remap_to_range(mouse_y, 0.0, 1.0, 45, -25)
                head_angle_delta = desired_head_angle - util.radians(self.vector.head_angle_rad).degrees
                head_vel = head_angle_delta * 0.03
                self.vector.motors.set_head_motor(head_vel)
    
        def set_mouse_look_enabled(self, is_mouse_look_enabled):
            was_mouse_look_enabled = self.is_mouse_look_enabled
            self.is_mouse_look_enabled = is_mouse_look_enabled
            if not is_mouse_look_enabled:
                # cancel any current mouse-look turning
                self.mouse_dir = 0
                if was_mouse_look_enabled:
                    self.update_mouse_driving()
                    self.update_head()
    
        def update_drive_state(self, key_code, is_key_down, speed_changed):
            """Update state of driving intent from keyboard, and if anything changed then call update_driving"""
            update_driving = True
            if key_code == ord('W'):
                self.drive_forwards = is_key_down
            elif key_code == ord('S'):
                self.drive_back = is_key_down
            elif key_code == ord('A'):
                self.turn_left = is_key_down
            elif key_code == ord('D'):
                self.turn_right = is_key_down
            else:
                if not speed_changed:
                    update_driving = False
            return update_driving
    
        def update_lift_state(self, key_code, is_key_down, speed_changed):
            """Update state of lift move intent from keyboard, and if anything changed then call update_lift"""
            update_lift = True
            if key_code == ord('R'):
                self.lift_up = is_key_down
            elif key_code == ord('F'):
                self.lift_down = is_key_down
            else:
                if not speed_changed:
                    update_lift = False
            return update_lift
    
        def update_head_state(self, key_code, is_key_down, speed_changed):
            """Update state of head move intent from keyboard, and if anything changed then call update_head"""
            update_head = True
            if key_code == ord('T'):
                self.head_up = is_key_down
            elif key_code == ord('G'):
                self.head_down = is_key_down
            else:
                if not speed_changed:
                    update_head = False
            return update_head
    
        def handle_key(self, key_code, is_shift_down, is_alt_down, is_key_down):
            """Called on any key press or release
               Holding a key down may result in repeated handle_key calls with is_key_down==True
            """
    
            # Update desired speed / fidelity of actions based on shift/alt being held
            was_go_fast = self.go_fast
            was_go_slow = self.go_slow
    
            self.go_fast = is_shift_down
            self.go_slow = is_alt_down
    
            speed_changed = (was_go_fast != self.go_fast) or (was_go_slow != self.go_slow)
    
            update_driving = self.update_drive_state(key_code, is_key_down, speed_changed)
    
            update_lift = self.update_lift_state(key_code, is_key_down, speed_changed)
    
            update_head = self.update_head_state(key_code, is_key_down, speed_changed)
    
            # Update driving, head and lift as appropriate
            if update_driving:
                self.update_mouse_driving()
            if update_head:
                self.update_head()
            if update_lift:
                self.update_lift()
    
            # Handle any keys being released (e.g. the end of a key-click)
            if not is_key_down:
                if ord('9') >= key_code >= ord('0'):
                    anim_name = self.key_code_to_anim_name(key_code)
                    self.queue_action((self.vector.anim.play_animation, anim_name))
                elif key_code == ord(' '):
                    self.queue_action((self.vector.say_text, self.text_to_say))
    
        def key_code_to_anim_name(self, key_code):
            key_num = key_code - ord('0')
            anim_num = self.anim_index_for_key[key_num]
            anim_name = self.anim_names[anim_num]
            return anim_name
    
        def func_to_name(self, func):
            if func == self.vector.say_text:
                return "say_text"
            if func == self.vector.anim.play_animation:
                return "play_anim"
            return "UNKNOWN"
    
        def action_to_text(self, action):
            func, args = action
            return self.func_to_name(func) + "( " + str(args) + " )"
    
        def action_queue_to_text(self, action_queue):
            out_text = ""
            i = 0
            for action in action_queue:
                out_text += "[" + str(i) + "] " + self.action_to_text(action)
                i += 1
            return out_text
    
        def queue_action(self, new_action):
            if len(self.action_queue) > 10:
                self.action_queue.pop(0)
            self.action_queue.append(new_action)
    
        def update(self):
            """Try and execute the next queued action"""
            if self.action_queue:
                queued_action, action_args = self.action_queue[0]
                if queued_action(action_args):
                    self.action_queue.pop(0)
    
        def pick_speed(self, fast_speed, mid_speed, slow_speed):
            if self.go_fast:
                if not self.go_slow:
                    return fast_speed
            elif self.go_slow:
                return slow_speed
            return mid_speed
    
        def update_lift(self):
            lift_speed = self.pick_speed(8, 4, 2)
            lift_vel = (self.lift_up - self.lift_down) * lift_speed
            self.vector.motors.set_lift_motor(lift_vel)
    
        def update_head(self):
            if not self.is_mouse_look_enabled:
                head_speed = self.pick_speed(2, 1, 0.5)
                head_vel = (self.head_up - self.head_down) * head_speed
                self.vector.motors.set_head_motor(head_vel)
    
        def update_mouse_driving(self):
            drive_dir = (self.drive_forwards - self.drive_back)
    
            turn_dir = (self.turn_right - self.turn_left) + self.mouse_dir
            if drive_dir < 0:
                # It feels more natural to turn the opposite way when reversing
                turn_dir = -turn_dir
    
            forward_speed = self.pick_speed(150, 75, 50)
            turn_speed = self.pick_speed(100, 50, 30)
    
            l_wheel_speed = (drive_dir * forward_speed) + (turn_speed * turn_dir)
            r_wheel_speed = (drive_dir * forward_speed) - (turn_speed * turn_dir)
    
            self.vector.motors.set_wheel_motors(l_wheel_speed, r_wheel_speed, l_wheel_speed * 4, r_wheel_speed * 4)
    
    
    def get_anim_sel_drop_down(selectorIndex):
        html_text = """<select onchange="handleDropDownSelect(this)" name="animSelector""" + str(selectorIndex) + """">"""
        i = 0
        for anim_name in flask_app.remote_control_vector.anim_names:
            is_selected_item = (i == flask_app.remote_control_vector.anim_index_for_key[selectorIndex])
            selected_text = ''' selected="selected"''' if is_selected_item else ""
            html_text += """<option value=""" + str(i) + selected_text + """>""" + anim_name + """</option>"""
            i += 1
        html_text += """</select>"""
        return html_text
    
    
    def get_anim_sel_drop_downs():
        html_text = ""
        for i in range(10):
            # list keys 1..9,0 as that's the layout on the keyboard
            key = i + 1 if (i < 9) else 0
            html_text += str(key) + """: """ + get_anim_sel_drop_down(key) + """<br>"""
        return html_text
    
    
    def to_js_bool_string(bool_value):
        return "true" if bool_value else "false"
    
    
    @flask_app.route("/")
    def handle_index_page():
        return """
        <html>
            <head>
                <title>remote_control_vector.py display</title>
            </head>
            <body>
                <h1>Remote Control Vector</h1>
                <table>
                    <tr>
                        <td valign = top>
                            <div id="vectorImageMicrosoftWarning" style="display: none;color: #ff9900; text-align: center;">Video feed performance is better in Chrome or Firefox due to mjpeg limitations in this browser</div>
                            <img src="vectorImage" id="vectorImageId" width=640 height=480>
                            <div id="DebugInfoId"></div>
                        </td>
                        <td width=30></td>
                        <td valign=top>
                            <h2>Controls:</h2>
    
                            <h3>Driving:</h3>
    
                            <b>W A S D</b> : Drive Forwards / Left / Back / Right<br><br>
                            <b>Q</b> : Toggle Mouse Look: <button id="mouseLookId" onClick=onMouseLookButtonClicked(this) style="font-size: 14px">Default</button><br>
                            <b>Mouse</b> : Move in browser window to aim<br>
                            (steer and head angle)<br>
                            (similar to an FPS game)<br>
    
                            <h3>Head:</h3>
                            <b>T</b> : Move Head Up<br>
                            <b>G</b> : Move Head Down<br>
    
                            <h3>Lift:</h3>
                            <b>R</b> : Move Lift Up<br>
                            <b>F</b>: Move Lift Down<br>
                            <h3>General:</h3>
                            <b>Shift</b> : Hold to Move Faster (Driving, Head and Lift)<br>
                            <b>Alt</b> : Hold to Move Slower (Driving, Head and Lift)<br>
                            <b>P</b> : Toggle Free Play mode: <button id="freeplayId" onClick=onFreeplayButtonClicked(this) style="font-size: 14px">Default</button><br>
                            <h3>Play Animations</h3>
                            <b>0 .. 9</b> : Play Animation mapped to that key<br>
                            <h3>Talk</h3>
                            <b>Space</b> : Say <input type="text" name="sayText" id="sayTextId" value="""" + flask_app.remote_control_vector.text_to_say + """" onchange=handleTextInput(this)>
                        </td>
                        <td width=30></td>
                        <td valign=top>
                        <h2>Animation key mappings:</h2>
                        """ + get_anim_sel_drop_downs() + """<br>
                        </td>
                    </tr>
                </table>
    
                <script type="text/javascript">
                    var gLastClientX = -1
                    var gLastClientY = -1
                    var gIsMouseLookEnabled = """ + to_js_bool_string(_is_mouse_look_enabled_by_default) + """
                    var gIsFreeplayEnabled = false
                    var gUserAgent = window.navigator.userAgent;
                    var gIsMicrosoftBrowser = gUserAgent.indexOf('MSIE ') > 0 || gUserAgent.indexOf('Trident/') > 0 || gUserAgent.indexOf('Edge/') > 0;
                    var gSkipFrame = false;
    
                    if (gIsMicrosoftBrowser) {
                        document.getElementById("vectorImageMicrosoftWarning").style.display = "block";
                    }
    
                    function postHttpRequest(url, dataSet)
                    {
                        var xhr = new XMLHttpRequest();
                        xhr.open("POST", url, true);
                        xhr.send( JSON.stringify( dataSet ) );
                    }
    
                    function updateVector()
                    {
                        if (gIsMicrosoftBrowser && !gSkipFrame) {
                            // IE doesn't support MJPEG, so we need to ping the server for more images.
                            // Though, if this happens too frequently, the controls will be unresponsive.
                            gSkipFrame = true;
                            document.getElementById("vectorImageId").src="vectorImage?" + (new Date()).getTime();
                        } else if (gSkipFrame) {
                            gSkipFrame = false;
                        }
                        var xhr = new XMLHttpRequest();
                        xhr.onreadystatechange = function() {
                            if (xhr.readyState == XMLHttpRequest.DONE) {
                                document.getElementById("DebugInfoId").innerHTML = xhr.responseText
                            }
                        }
    
                        xhr.open("POST", "updateVector", true);
                        xhr.send( null );
                    }
                    setInterval(updateVector , 60);
    
                    function updateButtonEnabledText(button, isEnabled)
                    {
                        button.firstChild.data = isEnabled ? "Enabled" : "Disabled";
                    }
    
                    function onMouseLookButtonClicked(button)
                    {
                        gIsMouseLookEnabled = !gIsMouseLookEnabled;
                        updateButtonEnabledText(button, gIsMouseLookEnabled);
                        isMouseLookEnabled = gIsMouseLookEnabled
                        postHttpRequest("setMouseLookEnabled", {isMouseLookEnabled})
                    }
    
                    function onFreeplayButtonClicked(button)
                    {
                        gIsFreeplayEnabled = !gIsFreeplayEnabled;
                        updateButtonEnabledText(button, gIsFreeplayEnabled);
                        isFreeplayEnabled = gIsFreeplayEnabled
                        postHttpRequest("setFreeplayEnabled", {isFreeplayEnabled})
                    }
    
                    updateButtonEnabledText(document.getElementById("mouseLookId"), gIsMouseLookEnabled);
                    updateButtonEnabledText(document.getElementById("freeplayId"), gIsFreeplayEnabled);
    
                    function handleDropDownSelect(selectObject)
                    {
                        selectedIndex = selectObject.selectedIndex
                        itemName = selectObject.name
                        postHttpRequest("dropDownSelect", {selectedIndex, itemName});
                    }
    
                    function handleKeyActivity (e, actionType)
                    {
                        var keyCode  = (e.keyCode ? e.keyCode : e.which);
                        var hasShift = (e.shiftKey ? 1 : 0)
                        var hasCtrl  = (e.ctrlKey  ? 1 : 0)
                        var hasAlt   = (e.altKey   ? 1 : 0)
    
                        if (actionType=="keyup")
                        {
                            if (keyCode == 80) // 'P'
                            {
                                // Simulate a click of the freeplay button
                                onFreeplayButtonClicked(document.getElementById("freeplayId"))
                            }
                            else if (keyCode == 81) // 'Q'
                            {
                                // Simulate a click of the mouse look button
                                onMouseLookButtonClicked(document.getElementById("mouseLookId"))
                            }
                        }
    
                        postHttpRequest(actionType, {keyCode, hasShift, hasCtrl, hasAlt})
                    }
    
                    function handleMouseActivity (e, actionType)
                    {
                        var clientX = e.clientX / document.body.clientWidth  // 0..1 (left..right)
                        var clientY = e.clientY / document.body.clientHeight // 0..1 (top..bottom)
                        var isButtonDown = e.which && (e.which != 0) ? 1 : 0
                        var deltaX = (gLastClientX >= 0) ? (clientX - gLastClientX) : 0.0
                        var deltaY = (gLastClientY >= 0) ? (clientY - gLastClientY) : 0.0
                        gLastClientX = clientX
                        gLastClientY = clientY
    
                        postHttpRequest(actionType, {clientX, clientY, isButtonDown, deltaX, deltaY})
                    }
    
                    function handleTextInput(textField)
                    {
                        textEntered = textField.value
                        postHttpRequest("sayText", {textEntered})
                    }
    
                    document.addEventListener("keydown", function(e) { handleKeyActivity(e, "keydown") } );
                    document.addEventListener("keyup",   function(e) { handleKeyActivity(e, "keyup") } );
    
                    document.addEventListener("mousemove",   function(e) { handleMouseActivity(e, "mousemove") } );
    
                    function stopEventPropagation(event)
                    {
                        if (event.stopPropagation)
                        {
                            event.stopPropagation();
                        }
                        else
                        {
                            event.cancelBubble = true
                        }
                    }
    
                    document.getElementById("sayTextId").addEventListener("keydown", function(event) {
                        stopEventPropagation(event);
                    } );
                    document.getElementById("sayTextId").addEventListener("keyup", function(event) {
                        stopEventPropagation(event);
                    } );
                </script>
    
            </body>
        </html>
        """
    
    
    def get_annotated_image():
        # TODO: Update to use annotated image (add annotate module)
        image = flask_app.remote_control_vector.vector.camera.latest_image
        if image is None:
            return _default_camera_image
    
        return image
    
    
    def streaming_video():
        """Video streaming generator function"""
        while True:
            if flask_app.remote_control_vector:
                image = get_annotated_image()
    
                img_io = io.BytesIO()
                image.save(img_io, 'PNG')
                img_io.seek(0)
                yield (b'--frame\r\n'
                       b'Content-Type: image/png\r\n\r\n' + img_io.getvalue() + b'\r\n')
            else:
                time.sleep(.1)
    
    
    def serve_single_image():
        if flask_app.remote_control_vector:
            image = get_annotated_image()
            if image:
                return flask_helpers.serve_pil_image(image)
    
        return flask_helpers.serve_pil_image(_default_camera_image)
    
    
    def is_microsoft_browser(req):
        agent = req.user_agent.string
        return 'Edge/' in agent or 'MSIE ' in agent or 'Trident/' in agent
    
    
    @flask_app.route("/vectorImage")
    def handle_vectorImage():
        if is_microsoft_browser(request):
            return serve_single_image()
        return flask_helpers.stream_video(streaming_video)
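    # A note on flask_helpers.stream_video: that helper lives in the example's shared
    # "lib" folder and is not shown here. Assuming it simply wraps the generator in a
    # multipart response (an assumption, not the verified implementation), a minimal
    # plain-Flask equivalent could look like this:
    #
    #     from flask import Response
    #
    #     def stream_video_minimal(video_generator):
    #         # Each yielded "--frame" part replaces the previous image in the
    #         # browser (MJPEG-style streaming)
    #         return Response(video_generator(),
    #                         mimetype='multipart/x-mixed-replace; boundary=frame')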
    
    
    def handle_key_event(key_request, is_key_down):
        message = json.loads(key_request.data.decode("utf-8"))
        if flask_app.remote_control_vector:
            flask_app.remote_control_vector.handle_key(key_code=(message['keyCode']), is_shift_down=message['hasShift'],
                                                       is_alt_down=message['hasAlt'], is_key_down=is_key_down)
        return ""
    
    
    @flask_app.route('/mousemove', methods=['POST'])
    def handle_mousemove():
        """Called from Javascript whenever mouse moves"""
        message = json.loads(request.data.decode("utf-8"))
        if flask_app.remote_control_vector:
            flask_app.remote_control_vector.handle_mouse(mouse_x=(message['clientX']), mouse_y=message['clientY'])
        return ""
    
    
    @flask_app.route('/setMouseLookEnabled', methods=['POST'])
    def handle_setMouseLookEnabled():
        """Called from Javascript whenever mouse-look mode is toggled"""
        message = json.loads(request.data.decode("utf-8"))
        if flask_app.remote_control_vector:
            flask_app.remote_control_vector.set_mouse_look_enabled(is_mouse_look_enabled=message['isMouseLookEnabled'])
        return ""
    
    
    @flask_app.route('/setFreeplayEnabled', methods=['POST'])
    def handle_setFreeplayEnabled():
        """Called from Javascript whenever freeplay mode is toggled on/off"""
        message = json.loads(request.data.decode("utf-8"))
        if flask_app.remote_control_vector:
            isFreeplayEnabled = message['isFreeplayEnabled']
            connection = flask_app.remote_control_vector.vector.conn
            connection.request_control(enable=(not isFreeplayEnabled))
        return ""
    
    
    @flask_app.route('/keydown', methods=['POST'])
    def handle_keydown():
        """Called from Javascript whenever a key is down (note: can generate repeat calls if held down)"""
        return handle_key_event(request, is_key_down=True)
    
    
    @flask_app.route('/keyup', methods=['POST'])
    def handle_keyup():
        """Called from Javascript whenever a key is released"""
        return handle_key_event(request, is_key_down=False)
    
    
    @flask_app.route('/dropDownSelect', methods=['POST'])
    def handle_dropDownSelect():
        """Called from Javascript whenever an animSelector dropdown menu is selected (i.e. modified)"""
        message = json.loads(request.data.decode("utf-8"))
    
        item_name_prefix = "animSelector"
        item_name = message['itemName']
    
        if flask_app.remote_control_vector and item_name.startswith(item_name_prefix):
            item_name_index = int(item_name[len(item_name_prefix):])
            flask_app.remote_control_vector.set_anim(item_name_index, message['selectedIndex'])
    
        return ""
    
    
    @flask_app.route('/sayText', methods=['POST'])
    def handle_sayText():
        """Called from Javascript whenever the saytext text field is modified"""
        message = json.loads(request.data.decode("utf-8"))
        if flask_app.remote_control_vector:
            flask_app.remote_control_vector.text_to_say = message['textEntered']
        return ""
    
    
    @flask_app.route('/updateVector', methods=['POST'])
    def handle_updateVector():
        if flask_app.remote_control_vector:
            flask_app.remote_control_vector.update()
            action_queue_text = ""
            i = 1
            for action in flask_app.remote_control_vector.action_queue:
                action_queue_text += str(i) + ": " + flask_app.remote_control_vector.action_to_text(action) + "<br>"
                i += 1
    
            return "Action Queue:<br>" + action_queue_text + "\n"
        return ""
    
    
    def run():
        args = util.parse_command_args()
    
        with anki_vector.AsyncRobot(args.serial, enable_camera_feed=True) as robot:
            flask_app.remote_control_vector = RemoteControlVector(robot)
    
            robot.behavior.drive_off_charger()
    
            flask_helpers.run_flask(flask_app)
    
    
    if __name__ == '__main__':
        try:
            run()
        except KeyboardInterrupt as e:
            pass
        except anki_vector.exceptions.VectorConnectionException as e:
            sys.exit("A connection error occurred: %s" % e)


     

  • Vector AI Emotional Robot SDK: Release and Notes

    Vector AI Emotional Robot SDK: Release and Notes

    Vector is Anki's second-generation AI emotional robot (the first generation was Cozmo), and its SDK developer tools have now been released.


    Anki has always worked to bring advanced, meaningful, relevant robotics and AI technology to everyone. Part of that comes from the innovative experiences our multidisciplinary team has crafted over countless days and nights. But we also believe it is important to open up our technologies, so that developers, researchers and educators like you can use them in your own work.

    Today we are pleased to announce the public availability of the Vector SDK alpha. It gives you access to many of Vector's hardware and software technologies, including:

    • HD color camera stream
    • Infrared laser scanner
    • Face and emotion recognition
    • High-resolution color IPS display
    • Capacitive touch sensor
    • Four cliff (drop-off) sensors
    • Hundreds of unique animations
    • Six-axis inertial measurement unit (IMU)
    • Custom vision markers

    The Vector SDK uses Python, a language found everywhere from machine learning to introductory computer science courses. Vector can also use thousands of third-party libraries, letting you extend its capabilities however you need. We recommend visiting the SDK Showcase forum to see how others are using the SDK and to share your own projects.

    The SDK ships with a 3D viewer that renders Vector's understanding of the world.

    Finally, unlike the Cozmo SDK, which launched as a beta, we first released a pre-alpha version of this SDK to the people who backed our Kickstarter campaign. This alpha release would not have been possible without all of your insights and feedback, so a big thank-you to everyone.

     

    The included remote-control example program lets you manually drive Vector's hardware and trigger its animations.

    More SDK news is coming in the new year, so be sure to check the official Anki developer forums for announcements.


    • SDK documentation - installation instructions, the API reference and example programs.

    • Vector SDK FAQ - get up to speed quickly by browsing the frequently asked questions.

    • Getting Started - must-read articles with handy tips on installing and using the SDK.

    • Anki Developer YouTube - subscribe to see how others are using our robotics and AI technology.


    developer.anki.com/vector/docs/index.html


    Welcome to the Vector SDK Alpha


    The Vector SDK gives you direct access to Vector's unprecedented set of advanced sensors, AI capabilities and robotics technologies, including computer vision, intelligent mapping and navigation, and a groundbreaking collection of expressive animations.

    It's powerful yet easy to use, complex yet not complicated, and versatile enough to be used across a wide range of domains, including enterprise, research and entertainment.

    Note that this is an alpha version of the Vector SDK: it is not yet feature-complete, but it already exposes many of Vector's hardware and software capabilities. Visit the official Anki forums for more details.
     

    Installation

    Downloads

    • SDK examples
    • GitHub

    Getting Started

    • Getting Started With the Vector SDK
    • Anki developer forums
    • Prerequisites
    • Starting the SDK
    • Example programs

    API Reference

    • The API

    Indices and tables




    ----

  • anki_vector SDK Source Code Walkthrough (Tutorial)

    1K+ views  2018-12-25 10:13:00
    Part 1: Anki recently opened up the Python SDK for the Vector robot; the moment I heard, I went straight online to look it up. First, the important official links:
    The Python API reference, environment setup and more:
    Anki's GitHub account and the anki_vector SDK source code, useful for understanding the PC-to-Vector communication protocol; includes tutorial examples:
    Web tools for Cozmo and Vector programming that let you watch the robot's camera feed in a browser:
    The Anki programming community:
    The Cozmo online manual:
    The Cozmo SDK source code and tutorial example sources:
    There are no books on this yet; Anki isn't a first-tier company like Google. I do C++ development, am a beginner at Python, and my study of Vector programming relies entirely on the links above.
     
      First, you need a Vector and a computer you can program on (any operating system works, which is nice), with both on the same LAN (just connect them to your home Wi-Fi). I won't go into installing Python and setting up the environment; the online docs already cover that in detail.
     Please credit the source when reposting: https://www.cnblogs.com/xjjsk/p/10159946.html
    Part 2: The next sections walk through a few simple official demos.
      Source directory: vector-python-sdk-master/examples/tutorials/
     1 #01_hello_world.py
     2 
     3 import anki_vector
     4 
     5 def main():
     6   args = anki_vector.util.parse_command_args()
     7   with anki_vector.Robot(args.serial) as robot:
     8     print("Say 'Hello World'...")
     9     robot.say_text("Hello World")
    10 
    11 if __name__ == "__main__":
    12   main()

    First, line 3 imports the anki_vector module, which is really just a folder named anki_vector. For now, importing this one module gives a program every control feature Vector has.

     Line 6 parses the command-line arguments; just leave it as written, we don't need any arguments yet.
    Line 7 passes the parsed arguments to the Robot class, creating a Robot object named robot.
    From line 8 onward you operate on robot, and through it you can perform every operation the Vector robot supports.
    For example, line 9 makes your robot say "Hello World". Chinese isn't supported yet, but you can fake it with hanyu pinyin, haha.
    Lines 11 and 12: if this file is imported by another file, main is just an ordinary function; otherwise main() is executed.
    In fact, every program is written like this: copy the code above and replace lines 8 and 9 with the logic you want to implement; all control goes through the robot object.
    Of course, importing anki_vector is all about obtaining the robot object; besides this module you can bring in any other Python library, even heavyweights like ROS or OpenCV, for AI programming.
    If you're not sure which operations robot supports, browse the other examples and Vector's online API docs (a few common calls are sketched right below). If you hit a really strange bug, raise it in the Vector community and discuss it with other developers.
     
      Honestly, I find the anki_vector interface exceptionally well wrapped: just the example above plus the API docs is enough to explore every feature.
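    As a taste of what the robot object exposes, here is a minimal hedged sketch (the animation name and the angle values are illustrative choices, not the only options; check the API reference for your SDK version):

    import anki_vector
    from anki_vector.util import degrees

    def main():
        args = anki_vector.util.parse_command_args()
        with anki_vector.Robot(args.serial) as robot:
            robot.behavior.set_head_angle(degrees(30.0))  # tilt the head up
            robot.behavior.set_lift_height(0.0)  # lower the lift fully
            robot.anim.play_animation('anim_pounce_success_02')  # play one built-in animation

    if __name__ == "__main__":
        main()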
     
    anki_vector.Robot(args.serial): this line creates a robot object and connects it to your Vector; nearly every program starts with it. The Robot constructor itself takes many parameters; in this example only one is supplied and the rest use their defaults. Listed below are all the parameters of this important constructor and their default values:
    class Robot:
      def __init__(self,
        serial: str = None,  # Vector's serial number (e.g. 00e20100), found on Vector's underside or on Vector's debug screen; determines which Vector configuration to load.
        ip: str = None,  # Vector's IP address. (optional)
        config: dict = None,  # a custom dict that overrides values in Vector's configuration. (optional)
        default_logging: bool = True,  # enable default logging.
        behavior_activation_timeout: int = 10,  # connection timeout.
        cache_animation_list: bool = True,  # fetch the list of available animations at startup.
        enable_face_detection: bool = False,  # face detection on/off.
        enable_camera_feed: bool = False,  # camera feed on/off.
        enable_audio_feed: bool = False,  # audio feed on/off.
        enable_custom_object_detection: bool = False,  # custom object detection on/off.
        enable_nav_map_feed: bool = None,  # nav map feed on/off.
        show_viewer: bool = False,  # camera viewer window on/off.
        show_3d_viewer: bool = False,  # 3D viewer on/off.
        requires_behavior_control: bool = True):  # whether to take control of Vector's behavior system.
        pass

    Fill these in selectively; in the normal case Vector's serial number is all you need (a fuller call is sketched below).
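    For example, a hedged sketch that also turns on the camera feed and the live viewer while connecting (treat the keyword arguments as illustrative; their availability varies slightly across SDK versions):

    import anki_vector

    args = anki_vector.util.parse_command_args()
    # serial plus two optional switches; everything else keeps its default
    with anki_vector.Robot(args.serial,
                           enable_camera_feed=True,
                           show_viewer=True) as robot:
        robot.say_text("The camera viewer is on")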

     

    Part 3: So next, I'll attempt a walkthrough of the SDK source code and see whether other languages could implement the same control features.
       Below is the directory layout of the freshly downloaded source:

    The anki_vector directory holds the SDK library source; it is the most important part, and it's what I'll read next.

    The examples directory holds sample applications, i.e. programs that call the anki_vector module, including the 01_hello_world.py example discussed above; if you're still unsure how to call anki_vector, spend more time on this part.

    The rest are incidental files; flip through them if you're curious.

    Now open the anki_vector directory:

    There are many files, but it isn't complicated: no deep nesting, just one flat file list. Every .py file in it implements one Vector control feature (for example, the backpack lights are driven by lights.py and the camera by camera.py), while a few .py files provide base functionality or tie everything together (for example, robot.py aggregates all the Vector control features; when using the library you create one Robot object and perform everything else indirectly through it).

    Most importantly, these files correspond almost one-to-one with the official API; that is, each file wraps one feature class.

    Below are the APIs from the online docs; compare them with the library directory above, and the description of each API is also a description of each file:

    To see which methods each class implements, click into the corresponding online API doc, or read the source directly.

    Having surveyed the anki_vector library, let's go back to the hello world program from the start and see what actually happens inside those calls.

     

    Part 4: Inside hello world

     In this program, the two lines that are hardest to understand are really these:

    1 args = anki_vector.util.parse_command_args()
    2 with anki_vector.Robot(args.serial) as robot:
    3   robot.say_text("Hello World")
    4   pass

    Following the call chain, we find the util module inside the anki_vector module, and in util the implementation of parse_command_args, which is called here without arguments:

     parser is the function's parameter; in the example above we passed nothing, so it defaults to None. argparse is a commonly used Python library; all that ultimately happens is that line 87 fills in a few default arguments and line 88 returns them.
    Back in the hello_world example, the returned args are passed straight into the Robot constructor.
    The with ... as ... statement creates a Robot object named robot and implicitly calls the Robot class's __enter__ method; when the with ... as ... block ends, __exit__ is implicitly called. All these two do is call the connect and disconnect member functions, which you can loosely think of as connecting to and disconnecting from your Vector robot. connect is a little long, but its logic is simple: it initializes all the feature-class objects. I won't expand it here (an explicit connect/disconnect sketch follows):
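    If you would rather not use the context manager, the same lifecycle can be driven by hand; a minimal sketch using the SDK's public connect/disconnect methods:

    import anki_vector

    args = anki_vector.util.parse_command_args()
    robot = anki_vector.Robot(args.serial)
    robot.connect()  # what __enter__ does under the hood
    try:
        robot.say_text("Hello World")
    finally:
        robot.disconnect()  # what __exit__ does under the hood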

    Now look at the say_text function: it creates a protobuf message object defined by the protocol, fills in its fields, and sends it to Vector via gRPC. gRPC is a widely used general-purpose RPC framework built on protobuf and HTTP/2 that makes it easy to generate server and client code; from this we can also tell that Vector runs a gRPC server internally.
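    Paraphrased from the SDK source (names such as SayTextRequest and grpc_interface.SayText come from the version I read and may differ in yours, so treat this as a sketch rather than the verbatim implementation), the body of say_text boils down to roughly:

    from anki_vector.messaging import protocol

    # Sketch of the method inside the Robot class:
    def say_text(self, text, use_vector_voice=True, duration_scalar=1.0):
        # Build the protobuf request defined under anki_vector/messaging
        request = protocol.SayTextRequest(text=text,
                                          use_vector_voice=use_vector_voice,
                                          duration_scalar=duration_scalar)
        # Hand it to the generated gRPC stub, which ships it to Vector over HTTP/2
        return self.conn.grpc_interface.SayText(request)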

    After reading this, robot.py's role becomes clear: the module does no concrete work itself; it simply pulls the features of the other base modules together and offers external users one unified interface. You can see this from the head of the file, which imports almost every module in the same directory.

     

     Part 5: The feature modules in detail

    Look carefully at the anki_vector directory again: besides the Python source files it contains three subdirectories: messaging, opengl and configure.
    Open messaging, and the file extensions immediately tell you this is gRPC, Google's open-source communication framework built on protobuf: efficient, convenient and widely used, and evidently responsible for all communication between the PC and Vector. There are many files, but when coding you only write the .proto files, which describe the message formats and RPC rules; after writing a proto file, run Google's tool and it generates two .py files of the same name. For example, writing *.proto and running the tool produces *_pb2.py and *_pb2_grpc.py, and the source then simply uses those two generated files (a minimal sketch follows below). Note that it uses protobuf syntax version 2, even though version 3 already exists.
    The opengl directory needs no explanation: it calls OpenGL, presumably to process and render the images Vector sees, or to feed images to Vector.
    The configure directory contains only a __main__.py, used to register your own Vector's details so your computer is authorized to connect to that particular robot.
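    To make the .proto workflow concrete, here is a self-contained toy definition (the file name, the messages and the DemoInterface service are invented for illustration; they are not Anki's actual protocol files):

    // example.proto -- toy service in the proto2 syntax mentioned above
    syntax = "proto2";

    message SayTextRequest {
      required string text = 1;
    }

    message SayTextResponse {
      optional bool success = 1;
    }

    service DemoInterface {
      rpc SayText (SayTextRequest) returns (SayTextResponse);
    }

    // Generate the Python bindings (needs the grpcio-tools package):
    //   python -m grpc_tools.protoc -I. --python_out=. --grpc_python_out=. example.proto
    // This produces example_pb2.py (messages) and example_pb2_grpc.py (client/server stubs).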

     With this, you have a global picture of the SDK source. Later I'll pull out a few concrete features and explain them; every feature follows roughly the same implementation flow.

    ---- Continuously updated ----
    ---- To be continued ----
    ---- Please credit the source (this page's URL) when reposting ----
    anki vector robot SDK python beginner programming tutorial

    Reposted from: https://www.cnblogs.com/xjjsk/p/10159946.html

  • Android Vector (vector drawables and vector animations)

    1K+ views  2018-03-29 15:09:58

    Android Vector (vector drawables and vector animations)


    Written by Luzhuo; please keep this notice when reposting.
    Original: https://blog.csdn.net/Rozol/article/details/79743079


    AppCompat 23.2 added full-version compatibility for Vector (vector drawables):
    static Vector works on Android 2.1+;
    animated Vector works on Android 3.0+ (property animations require Android 3.0+, api >= 11);
    some animated features (path morphing) are not compatible below Android L (5.0).

    Vector drawables in brief

    Differences between PNG (bitmap), SVG (vector image) and Vector

    • SVG:
    • Vector: used on Android
      • Tools:
        • SVG2Android (online SVG-to-Vector converter): inloop.github.io/svg2android
        • Android Studio Vector Asset: right-click drawable -> New -> Vector Asset -> Material Icon (built-in SVG icons) / Local file (a local SVG)
    • Vector implements only the Path tag of the SVG syntax
    • Size comparison: 5,755 B (png) > 2,696 B (svg) > 1,626 B (vector)

    Common Vector syntax:

    <vector xmlns:android="http://schemas.android.com/apk/res/android"
            android:width="24dp"
            android:height="24dp"
            android:viewportWidth="24.0"
            android:viewportHeight="24.0">
        <path
            android:name="color"
            android:fillColor="#FF000000"
            android:pathData="M19.35,10.04C18.67,6.59 15.64,4 12,4 9.11,4 6.6,5.64 5.35,8.04 2.34,8.36 0,10.91 0,14c0,3.31 2.69,6 6,6h13c2.76,0 5,-2.24 5,-5 0,-2.64 -2.05,-4.78 -4.65,-4.96z"/>
    </vector>
    • Attributes (the first four go on <vector>, the rest on <path>):

      • width: the Vector's width
      • height: the Vector's height
      • viewportWidth: how many units the width is divided into
      • viewportHeight: how many units the height is divided into
      • name: a name for the path
      • fillColor: fill color
      • strokeAlpha: stroke alpha
      • strokeColor: stroke color
      • strokeWidth: stroke width
      • strokeLineCap: line cap, round/square
      • pathData: the path, drawn according to the path drawing syntax below
    • Path drawing syntax (a small worked example follows this list):

      • M = moveto (M X,Y): move the pen to the given position
      • L = lineto (L X,Y): draw a straight line to the given position
      • Z = closepath (): close the path
      • H = horizontal lineto (H X): draw a horizontal line to the given X coordinate
      • V = vertical lineto (V Y): draw a vertical line to the given Y coordinate
      • Q = quadratic Bezier curve (Q X,Y,ENDX,ENDY): quadratic Bezier curve
      • C = curveto (C X1,Y1,X2,Y2,ENDX,ENDY): cubic Bezier curve

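    For example, a minimal hand-written drawable (the coordinates are arbitrary illustrative values) that uses M, L and Z to fill a triangle on the standard 24x24 grid:

      <vector xmlns:android="http://schemas.android.com/apk/res/android"
              android:width="24dp"
              android:height="24dp"
              android:viewportWidth="24.0"
              android:viewportHeight="24.0">
          <!-- Move to the top vertex, draw lines to the two bottom corners, then close the path -->
          <path
              android:fillColor="#FF000000"
              android:pathData="M12,4 L20,20 L4,20 Z"/>
      </vector>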
    VectorDrawable

    • Compatibility:

      • Natively compatible only with minSDK >= 21 (Android L 5.0)
      • Gradle Plugin 1.5 added compatibility below 5.0:
        • api >= 21 uses the native implementation
        • api < 21 converts the Vector into PNGs at build time
      • AppCompat 23.2 added full-version Vector compatibility:

        • static Vector works on Android 2.1+
        • animated Vector works on Android 3.0+ (property animations require Android 3.0+, api >= 11)
      • Compatibility limits of animated VectorDrawable (the only parts that can't be backported):

        • path morphing (shape-change animations, square -> circle)
          • on Android L and above it needs code-side setup (the start code in the demo differs)
          • below Android L it cannot be used
        • path interpolation (path interpolators)
          • on Android L only system interpolators can be used (no custom ones)
        • reading pathData from strings.xml is not supported

    When to use VectorDrawable

    • Vector (vector image) vs Bitmap:
      • When the Vector is simple it is efficient; when the Vector is very complex, Bitmap is more efficient
      • Vector suits small assets such as icons, buttons and ImageView glyphs, or animation effects; Bitmap has a GPU redraw cache (Vector doesn't) and handles frequent redraws (Vector can't)
      • Vector loads faster than PNG but renders slower than PNG

    Static usage

    Setup

    1. Configure the module's build.gradle file:

      android {
          // ...
          defaultConfig{
              // ...
              vectorDrawables.useSupportLibrary = true
          }
      }
      dependencies{
          // ...
          // > 23.2
          compile 'com.android.support:appcompat-v7:24.2.1'
      }
      
    2. Configure the project's build.gradle file:

      buildscript{
          dependencies{
              //  > 2.1
              classpath 'com.android.tools.build:gradle:2.2.0'
          }
      }
      
    3. Create the vector drawable and put it in the drawable directory.

      • (write the vector yourself, or import an external SVG; see the import steps under "Vector: used on Android" above)
    4. Reference it in an ImageView via app:srcCompat="@drawable/cards"

      • For widgets with state attributes (such as Button), reference it via android:background + a selector
    5. An Activity whose layout file contains a vector drawable also needs this snippet added:

      static {
          AppCompatDelegate.setCompatVectorFromResourcesEnabled(true);
      }
      

    Scaling a Vector up (scaling down works the same)

    • Layout: one ImageView uses the default size (24dp * 24dp), the other a fixed size

          <ImageView
              android:layout_width="wrap_content"
              android:layout_height="wrap_content"
              app:srcCompat="@drawable/static_cards" />
      
          <ImageView
              android:layout_width="100dp"
              android:layout_height="100dp"
              app:srcCompat="@drawable/static_cards" />
    • Result: (screenshot omitted)

    • Why is the default size 24dp * 24dp? Because that is the size the vector itself declares:

      <vector xmlns:android="http://schemas.android.com/apk/res/android"
              android:width="24dp"
              android:height="24dp"
              android:viewportWidth="24.0"
              android:viewportHeight="24.0">
          <!-- ... -->
      </vector>

    Setting a Button's image through a selector

    • Layout: implemented via android:background + a selector

      <Button
          android:layout_width="100dp"
          android:layout_height="100dp"
          android:background="@drawable/static_bg_btn"/>
    • Selector: give the selector's default and pressed states two different vector drawables

      <?xml version="1.0" encoding="utf-8"?>
      <selector xmlns:android="http://schemas.android.com/apk/res/android">
      
          <item android:drawable="@drawable/static_image" android:state_pressed="true" />
          <item android:drawable="@drawable/static_cards" />
      
      </selector>
      <!-- the selector -->
    • Result: (screenshot omitted)

    Dynamic usage

    Setup

    • Same as for static usage; omitted

    Vector + property animations

    • Layout: give the ImageView an animated-vector "glue" drawable

      <ImageView
          android:onClick="anim"
          android:layout_width="100dp"
          android:layout_height="100dp"
          app:srcCompat="@drawable/dynamic_move_anim"/>
    • The animated-vector glue (in the drawable directory): binds property animations to a vector drawable. The IDE warns that this needs api > 21; ignore the warning, it works on lower versions too

      • android:drawable points at the vector drawable
      • each target selects a vector path (or group) by name and glues a property animation to it via android:animation

        <?xml version="1.0" encoding="utf-8"?>
        <animated-vector xmlns:android="http://schemas.android.com/apk/res/android"
            android:drawable="@drawable/dynamic_move" >
        
            <target
                android:animation="@animator/dynamic_move_left"
                android:name="left" />
        
            <target
                android:animation="@animator/dynamic_move_right"
                android:name="right" />
        
        </animated-vector>
    • The vector drawable: if you want to glue animations onto several paths in one vector, you must group them with group tags

      <vector xmlns:android="http://schemas.android.com/apk/res/android"
          android:width="24dp"
          android:height="24dp"
          android:viewportHeight="24.0"
          android:viewportWidth="24.0">
      
          <group android:name="left">
              <path
                  android:fillColor="#FF000000"
                  android:pathData="M9.01,14L2,14v2h7.01v3L13,15l-3.99,-4v3z" />
          </group>
      
          <group android:name="right">
              <path
                  android:fillColor="#FF000000"
                  android:pathData="M14.99,13v-3L22,10L22,8h-7.01L14.99,5L11,9l3.99,4z" />
          </group>
          <!-- group tags can group path tags; group has attributes that path lacks, and only paths wrapped in a group can run these animations -->
      
      </vector>
    • Animations: the property animations translate along the X axis and use the overshoot interpolator

      <?xml version="1.0" encoding="utf-8"?>
      <objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
          android:propertyName="translateX"
          android:valueFrom="0"
          android:valueTo="10"
          android:duration="1000"
          android:repeatCount="infinite"
          android:repeatMode="reverse"
          android:interpolator="@android:interpolator/overshoot" />
      
      <?xml version="1.0" encoding="utf-8"?>
      <objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
          android:propertyName="translateX"
          android:valueFrom="0"
          android:valueTo="-10"
          android:duration="1000"
          android:repeatCount="infinite"
          android:repeatMode="reverse"
          android:interpolator="@android:interpolator/overshoot" />
    • Run the property animation in the Activity:

      public void anim(View view){
          ImageView imageView = (ImageView) view;
          Drawable drawable = imageView.getDrawable();
          if(drawable instanceof Animatable){
              ((Animatable)drawable).start();
          }
      }
    • Result: (animation omitted)

    • The above is a translate animation; for a color-change animation, just configure the color-related properties in the property animation (the IDE hints this needs api >= 14; ignore it, lower versions work too):

      <?xml version="1.0" encoding="utf-8"?>
      <!-- strokeColor: stroke color; fillColor: fill color -->
      <objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
          android:propertyName="fillColor"
          android:valueFrom="@android:color/holo_red_dark"
          android:valueTo="@android:color/darker_gray"
          android:duration="5000"
          android:interpolator="@android:interpolator/overshoot"
          android:valueType="intType"/>
    • There is also a trim animation:

      <?xml version="1.0" encoding="utf-8"?>
      <!-- trimPathStart trims from the start, trimPathEnd trims from the end -->
      <objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
          android:duration="1000"
          android:propertyName="trimPathStart"
          android:valueFrom="1"
          android:valueTo="0"/>
    • Combined animation sets are of course supported too:

      <?xml version="1.0" encoding="utf-8"?>
      <set xmlns:android="http://schemas.android.com/apk/res/android">
      
          <objectAnimator
              android:propertyName="trimPathStart"
              android:valueFrom="1"
              android:valueTo="0"
              android:duration="10000"
              android:repeatCount="infinite"
              android:repeatMode="reverse"
              android:valueType="floatType" />
      
          <objectAnimator
              android:propertyName="strokeColor"
              android:valueFrom="@android:color/holo_red_dark"
              android:valueTo="@android:color/darker_gray"
              android:duration="10000"
              android:repeatCount="infinite"
              android:repeatMode="reverse"
              android:valueType="intType" />
      
      </set>
    • Finally there are path (morphing) animations, but they only work on Android 5.0+ (api >= 21):

      <?xml version="1.0" encoding="utf-8"?>
      <!-- property animation: path morphing ( square -> circle ) -->
      <objectAnimator xmlns:android="http://schemas.android.com/apk/res/android"
          android:propertyName="pathData"
          android:valueFrom="M 48,54 L 31,42 15,54 21,35 6,23 25,23 32,4 40,23 58,23 42,35 z"
          android:valueTo="M 48,54 L 31,54 15,54 10,35 6,23 25,10 32,4 40,10 58,23 54,35 z"
          android:valueType="pathType"
          android:duration="3000"/>
      
          <!-- animated VectorDrawable cannot read <pathData> from strings.xml, so the data is copied here (a compatibility limitation) -->
      • One more caveat with this animation: the Activity code that starts it is also different. (Everything else is the same.)

        @TargetApi(Build.VERSION_CODES.LOLLIPOP)
        public void anim(View view){
            ImageView imageView = (ImageView) view;
        
            // this is the part that differs
            AnimatedVectorDrawable drawable = (AnimatedVectorDrawable) getDrawable(R.drawable.dynamic_pathchange_anim);
            imageView.setImageDrawable(drawable);
            if(drawable != null) drawable.start();
        }
      • valueFrom in the animation is copied from the vector's pathData; it tells the animation where the morph starts. (If you omit it, the timing is off and you'll notice the animation runs much faster.)

      • Result: (animation omitted)
