  • Robot Drawing System, Wang Hai
  • 1. Foreword: a record of one case. Introduction: Using robots to draw is an emerging artform. To expand this field, a method for sketching any image, on a dry erase board, using an ABB IRB 1600 industrial robot, ...



    Using robots to draw is an emerging artform. To expand this field, a method was developed for sketching any image on a dry erase board using an ABB IRB 1600 industrial robot. The four tasks required to complete this goal were: developing the mechanical fixturing for the marker, writing path perception algorithms, porting the computer vision data to the robot's proprietary controller, and performing full system integration and test. The provided 3-pronged radial gripper with a custom 3D-printed finger was used to grasp an Expo brand, bullet-tipped, dry erase marker. An 18 x 24 inch whiteboard in the workspace served as the drawing surface.

    To perceive the curves to sketch from the image, two main procedures from MATLAB's Image Processing Toolbox were used. First, Canny edge detection was performed on the reduced grayscale image. Then a breadth-first search algorithm was developed to generate a collection of vectors representing a list of paths, each path being a grouping of coordinates. The next step was to import the paths into ABB's proprietary software, RobotStudio, and its RAPID programming language. To accomplish this, the path data was written to a text file using RAPID syntax and pasted into RobotStudio. The final system was largely successful in sketching the image. Mechanically, there were issues with line-thickness consistency: the board table is not entirely level, and the marker was held very rigidly in the end-of-arm tool. Future iterations may include improved marker fixturing, further algorithm refinement, and multiple colors.
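The edge-grouping step described above can be sketched as follows. This is a minimal Python stand-in (the project used MATLAB); `extract_paths` and the toy edge map are invented for illustration, and it groups 8-connected edge pixels into components with BFS, each component becoming one pen path:

```python
from collections import deque

def extract_paths(edges):
    """Group edge pixels of a binary image (list of 0/1 rows) into
    connected components with BFS; each component becomes one pen path,
    returned as a list of (row, col) coordinates in visit order."""
    h, w = len(edges), len(edges[0])
    seen = [[False] * w for _ in range(h)]
    paths = []
    for r in range(h):
        for c in range(w):
            if edges[r][c] and not seen[r][c]:
                path, q = [], deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    path.append((y, x))
                    for dy in (-1, 0, 1):          # 8-connected neighbours
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w \
                               and edges[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                q.append((ny, nx))
                paths.append(path)
    return paths

# A tiny edge map with two separate strokes:
img = [[1, 1, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 1]]
print(extract_paths(img))   # → [[(0, 0), (0, 1)], [(1, 3), (2, 3)]]
```

Each returned path can then be emitted as a sequence of move targets in RAPID syntax, with a pen-lift between paths.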


  • Robot GDI drawing

    2015-12-16 16:18:14
  • Wall-drawing robot If you were asked to draw a picture of several people in ski gear, standing in the snow, chances are you’d start with an outline of three or four people reasonably positioned in the ...


    If you were asked to draw a picture of several people in ski gear, standing in the snow, chances are you’d start with an outline of three or four people reasonably positioned in the center of the canvas, then sketch in the skis under their feet. Though it was not specified, you might decide to add a backpack to each of the skiers to jibe with expectations of what skiers would be sporting. Next, you’d carefully fill in the details, perhaps painting their clothes blue, scarves pink, all against a white background, rendering these people more realistic and ensuring that their surroundings match the description. Finally, to make the scene more vivid, you might even sketch in some brown stones protruding through the snow to suggest that these skiers are in the mountains.


    Now there’s a bot that can do all that.


    New AI technology being developed at Microsoft Research AI can understand a natural language description, sketch a layout of the image, synthesize the image, and then refine details based on the layout and the individual words provided. In other words, this bot can generate images from caption-like text descriptions of everyday scenes. This deliberate mechanism produced a significant boost in generated image quality compared to the earlier state-of-the-art technique for text-to-image generation for complicated everyday scenes, according to results on industry standard tests reported in “Object-driven Text-to-Image Synthesis via Adversarial Training”, to be published this month in Long Beach, California at the 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2019). This is a collaborative project among Pengchuan Zhang, Qiuyuan Huang and Jianfeng Gao of Microsoft Research AI; Lei Zhang of Microsoft; Xiaodong He of JD AI Research; and Wenbo Li and Siwei Lyu of the University at Albany, SUNY (Wenbo Li worked on it while an intern at Microsoft Research AI).


    There are two main challenges intrinsic to the description-based drawing bot problem. The first is that many kinds of objects can appear in everyday scenes and the bot should be able to understand and draw all of them. Previous text-to-image generation methods use image-caption pairs that only provide a very coarse-grained supervising signal for generating individual objects, limiting their object generation quality. In this new technology, the researchers make use of the COCO dataset that contains labels and segmentation maps for 1.5 million object instances across 80 common object classes, enabling the bot to learn both concept and appearance of these objects. This fine-grained supervised signal for object generation significantly improves generation quality for these common object classes.


    The second challenge lies in the understanding and generation of the relationships between multiple objects in one scene. Great success has been achieved in generating images that only contain one main object for several specific domains, such as faces, birds, and common objects. However, generating more complex scenes containing multiple objects with semantically meaningful relationships across those objects remains a significant challenge in text-to-image generation technology. This new drawing bot learned to generate layout of objects from co-occurrence patterns in the COCO dataset to then generate an image conditioned on the pre-generated layout.


    Object-driven attentive image generation

    At the core of Microsoft Research AI’s drawing bot is a technology known as the Generative Adversarial Network, or GAN. The GAN consists of two machine learning models—a generator that generates images from text descriptions, and a discriminator that uses text descriptions to judge the authenticity of generated images. The generator attempts to get fake pictures past the discriminator; the discriminator on the other hand never wants to be fooled. Working together, the discriminator pushes the generator toward perfection.
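The adversarial setup can be made concrete with a deliberately tiny numerical sketch. This is a generic 1-D toy in NumPy, not the ObjGAN model; every name and distribution in it is invented for illustration. Only the discriminator is trained here, to show how it learns to tell real samples from generated ones:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-np.clip(x, -30, 30)))

# "Real images" are samples near 4; the (untrained) generator maps noise
# z ~ N(0, 1) to samples near 0, so real and fake start out easy to separate.
def generator(z, w=0.1, b=0.0):
    return w * z + b

# Discriminator: logistic regression d(x) = sigmoid(a*x + c),
# trained to output 1 on real samples and 0 on generated ones.
a, c, lr = 0.0, 0.0, 0.1

def d_loss(real, fake):
    # binary cross-entropy: -log d(real) - log(1 - d(fake))
    return (-np.mean(np.log(sigmoid(a * real + c)))
            - np.mean(np.log(1.0 - sigmoid(a * fake + c))))

real = rng.normal(4.0, 0.5, 256)
fake = generator(rng.normal(0.0, 1.0, 256))

loss_before = d_loss(real, fake)
for _ in range(200):   # gradient ascent on the log-likelihood objective
    dr, df = sigmoid(a * real + c), sigmoid(a * fake + c)
    a += lr * (np.mean((1 - dr) * real) - np.mean(df * fake))
    c += lr * (np.mean(1 - dr) - np.mean(df))
loss_after = d_loss(real, fake)

# The generator's samples are now confidently rejected; that rejection is
# exactly the signal that would drive generator updates in a full GAN.
print(loss_before, "->", loss_after)
```

In a full GAN the generator would then be updated in turn to raise `d(fake)`, and the two models alternate until the discriminator can no longer tell the distributions apart.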


    The drawing bot was trained on a dataset of 100,000 images, each with salient object labels and segmentation maps and five different captions, allowing the models to conceive of individual objects and the semantic relations between them. The GAN, for example, learns what a dog should look like by comparing images with and without dog descriptions.


    Figure 1: A complex scene with multiple objects and relationships.



    GANs work well when generating images containing only one salient object, such as a human face, bird, or dog, but quality stagnates with more complex everyday scenes, such as a scene described as “A woman wearing a helmet is riding a horse” (see Figure 1). This is because such scenes contain multiple objects (woman, helmet, horse) and rich semantic relations between them (woman wears helmet, woman rides horse). The bot must first understand these concepts and place them in the image with a meaningful layout. After that, a stronger supervising signal capable of teaching both object generation and layout generation is required to fulfill this language-understanding-and-image-generation task.


    As humans draw these complicated scenes, we first decide on the main objects to draw and make a layout by placing bounding boxes for these objects on the canvas. Then we focus on each object, by repeatedly checking the corresponding words that describe this object. To capture this human trait, the researchers created what they called an Object-driven attentive GAN, or ObjGAN, to mathematically model the human behavior of object centered attention. ObjGAN does this by breaking up the input text into individual words and matching those words to specific objects in the image.


    Humans typically check two aspects to refine the drawing: the realism of individual objects and the quality of image patches. ObjGAN mimics this behavior as well by introducing two discriminators—one object-wise discriminator and one patch-wise discriminator. The object-wise discriminator is trying to determine whether the generated object is realistic or not and whether the object is consistent with the sentence description. The patch-wise discriminator is trying to determine whether this patch is realistic or not and whether this patch is consistent with the sentence description.


    Related work: Story visualization

    State-of-the-art text-to-image generation models can generate realistic bird images based on a single-sentence description. However, text-to-image generation can go far beyond synthesis of a single image based on one sentence. In “StoryGAN: A Sequential Conditional GAN for Story Visualization”, Jianfeng Gao of Microsoft Research, along with Zhe Gan, Jingjing Liu and Yu Cheng of Microsoft Dynamics 365 AI Research, Yitong Li, David Carlson and Lawrence Carin of Duke University, Yelong Shen of Tencent AI Research and Yuexin Wu of Carnegie Mellon University go a step further and propose a new task, called Story Visualization. Given a multi-sentence paragraph, a full story can be visualized, generating a sequence of images, one for each sentence. This is a challenging task: the drawing bot is not only required to imagine a scenario that fits the story and model the interactions between the characters appearing in it, but must also maintain global consistency across dynamic scenes and characters. This challenge has not been addressed by any single image or video generation method.


    Figure 2: Story visualization vs. simple image generation.



    The researchers came up with a new story-to-image-sequence generation model, StoryGAN, based on the sequential conditional GAN framework. This model is unique in that it consists of a deep Context Encoder that dynamically tracks the story flow, and two discriminators at the story and image levels to enhance the image quality and the consistency of the generated sequences. StoryGAN also can be naturally extended for interactive image editing, where an input image can be edited sequentially based on the text instructions. In this case, a sequence of user instructions will serve as the “story” input. Accordingly, the researchers modified existing datasets to create the CLEVR-SV and Pororo-SV datasets, as shown in the Figure 2.


    Practical applications: a real story

    Text-to-image generation technology could find practical applications acting as a sort of sketch assistant to painters and interior designers, or as a tool for voice-activated photo editing. With more computing power, the researchers imagine the technology generating animated films based on screenplays, augmenting the work that animated filmmakers do by removing some of the manual labor involved.


    For now, the generated images are still far from photorealistic. Individual objects almost always reveal flaws, such as blurred faces or buses with distorted shapes. These flaws are a clear indication that a computer, not a human, created the images. Nevertheless, the quality of the ObjGAN images is significantly better than that of previous best-in-class GAN images and serves as a milestone on the road toward a generic, human-like intelligence that augments human capabilities.


    For AIs and humans to share the same world, each must have a way to interact with the other. Language and vision are the two most important modalities for humans and machines to interact with each other. Text-to-image generation is one important task that advances language-vision multi-modal intelligence research.


    The researchers who created this exciting work look forward to sharing these findings with attendees at CVPR in Long Beach and hearing what you think. In the meantime, please feel free to check out their open-source code for ObjGAN and StoryGAN on GitHub.


    Translated from: https://habr.com/en/company/microsoft/blog/457200/


  • To address the shortcomings of current drawing robots (large size, small drawing area, working only on horizontal surfaces), a design method for a suspended drawing-robot system driven by servo motors is proposed. The system adopts a suspended design and consists of a motor-drive module, an image-processing module and a WiFi transmission module. Based on the system ...
  • We will also learn how to install and use the Polargraph software for robot control. Hardware used: Arduino Uno, L293D motor shield, L293D motor-driver IC, Nema17 stepper motors, servo motor, 16-tooth GT2 pulley, GT2 rubber belt (5 m), power adapter, lead weight, jumper wires
  • Experience notes on building a drawing-type craft robot
  • With a drawing robot using ordinary drawing pigments, pencils and paper, the drawn image can be rendered before your eyes automatically, accurately and in real time, with different colors selected as needed. Sounds incredible? It is actually simple: just an EBB control board and two motors, each motor moving one linkage arm of the drawing robot ...
  • STM32-based drawing robot design

    2020-08-05 16:57:36
    STM32-based drawing robot design. After a period of tinkering, the drawing robot is finally finished. Introduction: this is my undergraduate graduation project; plain and simple is best, and this design exists only to perform simple drawing work: a simple image-processing algorithm and a simple electromechanical control algorithm. Thanks here to ...





    Thanks here to @Vicent_Chen, author of the blog post “Understanding Bresenham's line algorithm”.






    In the current era of rapid development of information technology, artificial intelligence and big data have become hot topics, and intelligent robots in particular fascinate many people. The rapid development of computer technology has let robots penetrate ever more fields and take over more and more kinds of human work. Yet this fast development, heavy demand, and broad scope tend to make people focus on having robots replace simple human labor, while forgetting work that was originally done by large machines, such as drawing. Drawing technology is used in many areas of design, such as fashion design, industrial design, education and teaching, and engineering design. However, as far as I can determine, much of this work is still painted by professionals and then reproduced by printers or color printing, which is not a suitable choice for ordinary designers. Therefore, we designed a low-cost plotter that places no special requirements on the paper.
    The design adopts an STM32 microcontroller as the core processing and control unit. Combined with control software on the host computer, a bitmap is converted through image processing into path information for the pen, and this information is sent to the STM32 controller over the serial port. The controller uses the Bresenham line algorithm to compute the coordinates of the points on the line between each pair of consecutive coordinates, and converts each point's position into the pulse count and direction for the stepper motors, finally controlling the movement of the pen tip. The drawing scheme is to draw a contour map. The design is reliable and low cost, and has good drawing accuracy. The robot can be applied to sketching and other fields.
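The point-by-point conversion described above follows the classic integer Bresenham algorithm. A minimal sketch, in Python rather than the thesis's C firmware (the function and variable names are illustrative): each yielded grid point corresponds to one stepper pulse, with the sign of `sx`/`sy` giving the direction signal for each motor.

```python
def bresenham(x0, y0, x1, y1):
    """Integer Bresenham line: returns every grid point from (x0, y0)
    to (x1, y1) inclusive, using only integer additions and comparisons."""
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1     # pulse direction, X motor
    sy = 1 if y0 < y1 else -1     # pulse direction, Y motor
    err = dx + dy                 # running error term
    points = []
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            return points
        e2 = 2 * err
        if e2 >= dy:              # step in X
            err += dy
            x0 += sx
        if e2 <= dx:              # step in Y
            err += dx
            y0 += sy

print(bresenham(0, 0, 5, 2))
# → [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```

Because the inner loop needs no multiplication or floating point, the same logic fits comfortably in the STM32's interrupt-driven step generation.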

    Key words: STM32; Bresenham; Image processing; Drafting robot
    Chapter 1: Introduction

    Purpose of the research and design of the STM32-based drawing robot
    Design functions of the STM32-based drawing robot
    Chapter 2: Overall system design
    Chapter 3: Hardware design
    Hardware circuit resource usage
    1.1 STM32F103RBT6 on-chip resources
    1.2 Peripheral chip resources
    Mechanical structure design
    Communication principle
    Chapter 4: Software design
    Host-computer program design
    1.1 Binarization algorithm
    1.2 Contour-extraction algorithm
    1.3 Stroke-tracing algorithm
    1.4 Serial-communication data format
    Firmware program design
    2.1 Buffer algorithm
    2.2 Bresenham line algorithm
    2.3 Determining the coordinate axes
    Chapter 5: System debugging and experimental results
    System debugging
    Experimental results
    References
    Acknowledgements
    Appendix


    #include "stm32f10x.h"
    #include "bsp.h"                // board support package (includes bitband.h)
    #include "stm32iic.h"
    #include "codetab.h"
    #include "LQ12864.h"
    #include "function.h"

    u16 x, y;                       // current coordinates
    u16 R[10];
    u8  Rok = 0;                    // receive-complete flag
    u16 x1, y1;                     // received coordinates
    u16 pwmcount;                   // pulse count
    u16 t = 0;                      // execution count
    u8  Flag_Send = 0;              // 0 = nothing sent yet
    u8  init_flag = 0;              // must initialize before printing starts
    u8  Flag_MotST = 1;             // hold-on state (1)
    u8  TB = 1;                     // pen-up flag
    buf uart_buf[bufsize];          // serial receive ring buffer
    u16 front = 0;
    u16 rear = 0;

    int main(void)
    {
        OLED_Init();
        MOT_GPIO_Config();          // motor PWM / GPIO initialization
        DJ_PWM_Config(50000);       // TIM4-CH4, 50 Hz (pen servo)
        Pen_Up;
        TIM_CDISEN;                 // stop timer counting
        TOP_SDISEN;                 // disable top stepper channel
        BOT_SDISEN;                 // disable bottom stepper channel
        LOC_Init();
        OLED_CLS();
        OLED_P8x16Str(0, 2, (u8 *)"Completed");
        LOOSEN;                     // release the motors

        while (1)
        {
            if (init_flag == 0)
            {
                LOC_Init();
                OLED_CLS();
                OLED_P8x16Str(0, 2, (u8 *)"Waiting...");
                init_flag = 1;      // do not initialize again
            }
            /* ... the rest of the loop was elided in the source excerpt; it
               consumes uart_buf[rear].data_y, uart_buf[rear].data_act, ...
               from the ring buffer ... */
        }
    }

    Addendum, 2020-08-04 20:00:59 ---------

  • Canvas drawing: an Android robot

    2013-12-09 11:31:20
    Tonight I was just fiddling around and, with nothing better to do, drew a robot: the Android mascot. It wasted a whole evening. Drawing this guy was really not easy; over those coordinates I nearly smashed the keyboard, but in the end it came out looking decent, so I can relax. Now let's see what is needed to draw such a robot: mainly ...













    canvas.drawRoundRect(RectF, float, float, Paint) draws a rounded rectangle; the first parameter is the display area of the shape, and the second and third parameters are the horizontal and vertical corner radii.

    canvas.drawLine(startX, startY, stopX, stopY, paint): the first four parameters are float, the last is Paint; it draws a straight line with the brush paint from (startX, startY) to (stopX, stopY).

    canvas.drawArc(oval, startAngle, sweepAngle, useCenter, paint): oval is a RectF giving the arc's bounding area; startAngle and sweepAngle are floats for the arc's start angle and sweep, with 0 degrees at the 3 o'clock position; useCenter (boolean) sets whether the center is included (drawing a wedge); paint is the brush.

    canvas.drawCircle(float, float, float, Paint) draws a circle; the first two parameters are the center coordinates, the third is the radius, and the fourth is the brush.



    Rect(int left, int top, int right, int bottom)










    The figure below (borrowed from some blog, I forget which) illustrates this; Rect(150, 75, 260, 120) should make it clear at a glance.





    package com.scgm.android.drawable;

    import android.graphics.Canvas;

    public interface drawGraphics {
        public void draw(Canvas canvas);
    }


    package com.scgm.android.drawable;

    import android.content.Context;
    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.view.View;

    public class GameView extends View implements Runnable {

        private Paint mPaint = null;
        private drawGraphics drawGraphics = null;

        public GameView(Context context) {
            super(context);
            mPaint = new Paint();
            new Thread(this).start();
        }

        public void onDraw(Canvas canvas) {
            // draw each part of the robot in turn through the common interface
            drawGraphics = new DrawCircle();
            drawGraphics.draw(canvas);
            drawGraphics = new DrawLine();
            drawGraphics.draw(canvas);
            drawGraphics = new DrawRect();
            drawGraphics.draw(canvas);
        }

        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(100);   // redraw interval (the original value was elided)
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                // postInvalidate() can update the view directly from a worker thread
                postInvalidate();
            }
        }
    }



    package com.scgm.android.drawable;

    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.graphics.RectF;

    public class DrawRect implements drawGraphics {

        private Paint paint = null;

        public DrawRect() {
            paint = new Paint();
            // the paint configuration (color, style) was elided in the source
        }

        public void draw(Canvas canvas) {
            // body, arms and legs of the robot as rounded rectangles
            RectF rectF1 = new RectF(120, 170, 370, 500);
            RectF rectF2 = new RectF(40, 150, 90, 400);
            RectF rectF3 = new RectF(390, 150, 440, 400);
            RectF rectF4 = new RectF(140, 520, 200, 650);
            RectF rectF5 = new RectF(290, 520, 350, 650);

            canvas.drawRoundRect(rectF1, 20, 20, paint);
            canvas.drawRoundRect(rectF2, 20, 20, paint);
            canvas.drawRoundRect(rectF3, 20, 20, paint);
            canvas.drawRoundRect(rectF4, 20, 20, paint);
            canvas.drawRoundRect(rectF5, 20, 20, paint);
        }
    }


    package com.scgm.android.drawable;

    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;

    public class DrawLine implements drawGraphics {

        private Paint paint = null;

        public DrawLine() {
            paint = new Paint();
        }

        public void draw(Canvas canvas) {
            // the drawLine calls (the robot's antennae) were elided in the source
        }
    }


    package com.scgm.android.drawable;

    import android.graphics.Canvas;
    import android.graphics.Color;
    import android.graphics.Paint;
    import android.graphics.RectF;

    public class DrawCircle implements drawGraphics {

        private Paint paint = null;
        private Paint paint_eye = null;

        public DrawCircle() {
            paint = new Paint();
            paint_eye = new Paint();
            // the paint configuration was elided in the source
        }

        public void draw(Canvas canvas) {
            // the head is the upper half of this oval; the eyes are two small circles
            RectF rectF = new RectF(120, 60, 370, 240);

            canvas.drawCircle(190, 110, 18, paint_eye);
            canvas.drawCircle(300, 110, 18, paint_eye);
            canvas.drawArc(rectF, 180, 180, true, paint);
        }
    }

    package com.scgm.android.drawable;

    import android.app.Activity;
    import android.os.Bundle;

    public class GameStart extends Activity {

        private GameView mGameView = null;

        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // the onCreate body was elided in the source; creating the view
            // and setting it as the content is the conventional completion
            mGameView = new GameView(this);
            setContentView(mGameView);
        }
    }


  • Josef – a drawing robot driven by an artificial neural network
  • robopaint, the friendly drawing-robot toolkit software! RoboPaint! Drawing-robot software for your friendly painting-robot kit, the WaterColorBot. Download/install: click here to download the current beta. This release has many improvements that improve RoboPaint's ...
  • Scribit wall-drawing robot. Reportedly, the architecture and design firm Carlo Ratti Associati (CRA) will officially launch the Scribit wall-drawing robot at the Milan Furniture Fair on April 16. A few years ago, CRA demonstrated a system that used spray-painting drones to draw images on walls. This ...
  • Graduation project: STM32-based drawing robot design, with program source code and thesis. Includes: source code, thesis
  • To address the shortcomings of current domestic-service robots in visual positioning, a scheme is proposed that loads the house floor plan into a drawing-board program on the host computer; by drawing a route on the host, the robot's position is controlled in real time. In this system, the host transmits route information over the RS-232 protocol to a wireless ...
  • Awesome Plotters: a curated list of code and resources for computer-controlled drawing machines and other visual-art robots
  • Drawbot for skribbl.io, Gartic Phone, etc. Requires NodeJS 14.16.0 and Python 3.9, plus requests (pip install requests). To initialize, open a command prompt in ...
  • Drawbot for skribbl.io, Gartic Phone ... After pressing draw, quickly move the mouse to the top-left of the drawing canvas. If things go wrong, you can press ESC to abort the drawing. Additional information: in the Image URL field you can also put a file path; note that the path starts from the home folder
  • In this low-cost project you can learn fun things such as laser cutting, coding, mechanics and trigonometry. No Arduino code required!
  • This robot consists of three stepper motors and their driver chips. The pulse and direction signals for the motor-driver chips are output from the computer's parallel (printer) port, and the program is written in VB!
  • Not long ago I came across a plotter called AxiDraw in a tech news item, and in the DIY spirit of an engineering student I decided to build one myself. Here I list the problems met along the way and their solutions, hoping to give a reference to friends with the same idea or the same confusion. ...
  • The robot control board contains an ATmega328P microcontroller and an L293D motor driver. It is of course not much different from an Arduino Uno board, but it is more useful because it needs no extra shield to drive the motors! It is immune to jumper-wire interference and is easily programmed via the CH340G. ...
  • Four short example scripts. Each script runs the sequence: define the symbolic equations of motion, position(angle); find the inverse-kinematics equations, angle(position); define an array of target points; for each target point, solve for the angles and plot. A clip of "drawing_robot_3d_bonus.m" and the following ...
  • This code generates G-code for an automatic plotter, polar plotter or other vertical plotter. It takes a raw image, manipulates it, and generates a drawing path that somewhat resembles the original. The code is written specifically for use with multiple Copic markers, and is meant to be heavily modified to produce different and unique ...
  • MATLAB plotting with fitted code: a multi-sensor robot-calibration framework and toolbox. The toolbox provides a solution for multi-chain calibration of general robots by combining multiple calibration methods. Users can define an arbitrary robot, choose the calibration method, and set the optimization solver and calibration parameters. It can save ...
  • From Leiphone AI Technology Review: the paper-acceptance results of AAAI 2018, the 33rd top-tier international AI conference, have been announced, and they include a paper from Tongji University's Intelligent Big Data Visualization Lab on their drawing robot AI-Sketcher. AI-Sketcher is a tool that can ...
  • Handwriting robot

    2020-12-27 13:54:10
    The overall system design framework of the drawing robot is as follows. 6.1 Host-computer program design: the host program is written in C++ and uses the OpenCV vision library for image processing. There are four drawing methods in total: (1) binarize the image and then extract contours directly; (2) alternatively, apply the Canny operator to the binarized ...
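Method (1) above, binarization followed by contour extraction, can be sketched as follows. This is plain Python rather than the C++/OpenCV host program described; the threshold value and function names are assumptions for illustration. A contour pixel is taken to be any ink pixel with at least one background 4-neighbour:

```python
def binarize(gray, thresh=128):
    """Fixed-threshold binarization: 1 = ink (dark pixel), 0 = paper."""
    return [[1 if p < thresh else 0 for p in row] for row in gray]

def contour(bw):
    """Keep only foreground pixels that touch the background on at least
    one 4-neighbour side, i.e. the outline the pen must trace."""
    h, w = len(bw), len(bw[0])
    def bg(y, x):   # out-of-bounds counts as background
        return y < 0 or y >= h or x < 0 or x >= w or bw[y][x] == 0
    return [[1 if bw[y][x] and (bg(y - 1, x) or bg(y + 1, x) or
                                bg(y, x - 1) or bg(y, x + 1)) else 0
             for x in range(w)] for y in range(h)]

gray = [[0] * 4 for _ in range(4)]   # a solid dark 4x4 square
out = contour(binarize(gray))
# the 12 border pixels remain; the 4 interior pixels are erased
print(out)
```

The resulting outline pixels are what the stroke-tracing step then orders into continuous pen paths for the robot.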
  • Awesome crypto trading bots. About: an awesome list of crypto trading bots, including open-source bots, technical analysis and market ... It contains backtesting, plotting and money-management tools, as well as strategy optimization via machine learning. [Deprecated] Gekko is a bot that connects to popular Bitcoin exchanges ...


