2017-07-31 11:54:52 u014084081 Views: 186

iOS Drawing-Images

The content below comes from [Addison.Wesley.IOS.Drawing.Sep.2014.ISBN.1502345307]; these are notes on the important points.

Basics

Creating a context

CGBitmapContextCreate() creates a bitmap graphics context. Its prototype is:

CGContextRef CGBitmapContextCreate(void *data, size_t width, size_t height, size_t bitsPerComponent, size_t bytesPerRow, CGColorSpaceRef space, uint32_t bitmapInfo);

Parameters:

  • data - if not NULL, it must point to a block of memory at least bytesPerRow * height bytes in size; if NULL, the system allocates and releases the required memory for you, so you can usually just pass NULL
  • width - the width of the image in pixels; size_t is defined as unsigned long on iOS
  • height - the height of the image in pixels
  • bitsPerComponent - the number of bits used for each color component of a pixel; in an RGB color space, specify 8. A component is a single channel: ARGB data uses four components per pixel, while grayscale data uses one (no alpha channel) or two (with an alpha channel)
  • bytesPerRow - the number of bytes per row of the bitmap, at least width * bytes per pixel. Use width * 4 for ARGB and simply width for grayscale without alpha
  • colorspace - the color space used by the bitmap context
  • bitmapInfo - specifies how the bitmap handles its alpha channel; use kCGImageAlphaPremultipliedFirst for color images and kCGImageAlphaNone for grayscale
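
For example, following these parameters, an 8-bit grayscale context without alpha could be created as in this minimal sketch (the 256-by-256 size is arbitrary):

size_t width = 256, height = 256;
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                             8,       // bitsPerComponent
                                             width,   // bytesPerRow: 1 byte per pixel, no alpha
                                             gray,
                                             (CGBitmapInfo)kCGImageAlphaNone);
CGColorSpaceRelease(gray);
// ... draw into context, then call CGBitmapContextCreateImage() and CGContextRelease() when done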

Getting the image data

You can get a PNG representation of an image with UIImagePNGRepresentation() or a JPEG representation with UIImageJPEGRepresentation(), but the data these functions return is meant for storing the image as a file: it includes file headers, marker data, internal chunks, and compression. When you want to process the image's pixels, you need the raw byte array from a context.

Use CGBitmapContextGetData() to retrieve the source bytes. The following function copies those bytes into an NSData instance and returns it to the caller.

Extracting the byte data

#define BITS_PER_COMPONENT  8
#define ARGB_COUNT 4
NSData *BytesFromRGBImage(UIImage *sourceImage)
{
    if (!sourceImage) return nil;

    // Create the color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        NSLog(@"Error creating RGB color space");
        return nil;
    }

    // Create the context
    int width = sourceImage.size.width;
    int height = sourceImage.size.height;
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, BITS_PER_COMPONENT, width * ARGB_COUNT, colorSpace, (CGBitmapInfo) kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL)
    {
        NSLog(@"Error creating context");
        return nil;
    }

    // Draw the source image into the context
    CGRect rect = (CGRect){.size = sourceImage.size};
    CGContextDrawImage(context, rect, sourceImage.CGImage);

    // Create an NSData instance from the context's bytes
    NSData *data = [NSData dataWithBytes:CGBitmapContextGetData(context) length:(width * height * ARGB_COUNT)];
    CGContextRelease(context);

    return data;
}

Now that we can get byte data from an image, how do we create an image from byte data?
Again this uses CGBitmapContextCreate(), but this time the bytes are passed as the first parameter, which tells the function to use the supplied buffer rather than allocate its own, as follows:
Turning bytes into an image

UIImage *ImageFromRGBBytes(NSData *data, CGSize targetSize)
{
    // Check the data
    int width = targetSize.width;
    int height = targetSize.height;
    if (data.length < (width * height * ARGB_COUNT))
    {
        NSLog(@"Error: Not enough RGB data provided. Got %lu bytes. Expected %d bytes", (unsigned long)data.length, width * height * ARGB_COUNT);
        return nil;
    }

    // Create the color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (colorSpace == NULL)
    {
        NSLog(@"Error creating RGB colorspace");
        return nil;
    }

    // Create the bitmap context
    Byte *bytes = (Byte *) data.bytes;
    CGContextRef context = CGBitmapContextCreate(bytes, width, height, BITS_PER_COMPONENT, width * ARGB_COUNT, colorSpace, (CGBitmapInfo) kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (context == NULL)
    {
        NSLog(@"Error. Could not create context");
        return nil;
    }

    // Convert to an image
    CGImageRef imageRef = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:imageRef];

    // Clean up
    CGContextRelease(context);
    CFRelease(imageRef);

    return image;
}
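
A quick way to exercise these two functions is to round-trip an image through its raw bytes. This is a minimal sketch; the image name stands in for whatever UIImage you already have:

UIImage *sourceImage = [UIImage imageNamed:@"pronghorn.jpg"];
NSData *rawBytes = BytesFromRGBImage(sourceImage);
UIImage *rebuilt = ImageFromRGBBytes(rawBytes, sourceImage.size);
// rebuilt should match the original, since the bytes describe the same ARGB pixels the context rendered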

Some basic uses

Creating your own images. The following builds an image from a custom color and size:

UIImage *BuildSwatchWithColor(UIColor *color, CGFloat side)
{
    // Create an image context
    UIGraphicsBeginImageContextWithOptions(
                                           CGSizeMake(side, side), YES,
                                           0.0);

    [color setFill];
    UIRectFill(CGRectMake(0, 0, side, side));

    // Fetch the image
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

Building thumbnails
Thumbnails are drawn with the drawInRect: method; pay attention to the aspect ratio. The geometry helpers used here are sketched after the listing.

UIImage *BuildThumbnail(UIImage *sourceImage, CGSize targetSize, BOOL useFitting)
{
    CGRect targetRect = SizeMakeRect(targetSize);
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);

    CGRect naturalRect = (CGRect){.size = sourceImage.size};
    CGRect destinationRect = useFitting ? RectByFittingRect(naturalRect, targetRect) : RectByFillingRect(naturalRect, targetRect);
    [sourceImage drawInRect:destinationRect];

    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}
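
The listings in this post lean on a few geometry helpers (SizeMakeRect, RectByFittingRect, RectByFillingRect, RectCenteredInRect, RectGetCenter) that the book defines elsewhere. A minimal sketch of what they might look like, assuming standard aspect-fit and aspect-fill semantics:

CGRect SizeMakeRect(CGSize size)
{
    return (CGRect){.size = size};
}

CGPoint RectGetCenter(CGRect rect)
{
    return CGPointMake(CGRectGetMidX(rect), CGRectGetMidY(rect));
}

CGRect RectCenteredInRect(CGRect rect, CGRect mainRect)
{
    CGFloat dx = CGRectGetMidX(mainRect) - CGRectGetMidX(rect);
    CGFloat dy = CGRectGetMidY(mainRect) - CGRectGetMidY(rect);
    return CGRectOffset(rect, dx, dy);
}

// Aspect-scale sourceRect so it fits entirely inside destinationRect (letterboxing)
CGRect RectByFittingRect(CGRect sourceRect, CGRect destinationRect)
{
    CGFloat scale = MIN(destinationRect.size.width / sourceRect.size.width,
                        destinationRect.size.height / sourceRect.size.height);
    CGSize targetSize = CGSizeMake(sourceRect.size.width * scale,
                                   sourceRect.size.height * scale);
    return RectCenteredInRect(SizeMakeRect(targetSize), destinationRect);
}

// Aspect-scale sourceRect so it completely covers destinationRect (cropping the overflow)
CGRect RectByFillingRect(CGRect sourceRect, CGRect destinationRect)
{
    CGFloat scale = MAX(destinationRect.size.width / sourceRect.size.width,
                        destinationRect.size.height / sourceRect.size.height);
    CGSize targetSize = CGSizeMake(sourceRect.size.width * scale,
                                   sourceRect.size.height * scale);
    return RectCenteredInRect(SizeMakeRect(targetSize), destinationRect);
}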

Other uses

Extracting a subimage

Use CGImageRef CGImageCreateWithImageInRect(CGImageRef image, CGRect rect) to extract a subimage. This function creates a bitmap image from the data contained within a sub-region of an existing bitmap image.

Two ways to extract a subimage are shown below.
First approach:

UIImage *ExtractRectFromImage(UIImage *sourceImage, CGRect subRect)
{
    // Extract image
    CGImageRef imageRef = CGImageCreateWithImageInRect(sourceImage.CGImage, subRect);
    if (imageRef != NULL)
    {
        UIImage *output = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
        return output;
    }

    NSLog(@"Error: Unable to extract subimage");
    return nil;
}

Second approach:

UIImage *ExtractSubimageFromRect(UIImage *sourceImage, CGRect rect)
{
    UIGraphicsBeginImageContextWithOptions(rect.size, NO, 1);
    CGRect destRect = CGRectMake(-rect.origin.x, -rect.origin.y,
                                 sourceImage.size.width, sourceImage.size.height);
    [sourceImage drawInRect:destRect];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}

Watermarking

Adding a watermark simply means drawing something extra on top of the image.
The following method does this:

- (UIImage *) buildWatermarking: (CGSize) targetSize
{
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
    CGContextRef context = UIGraphicsGetCurrentContext();

    // Draw the original image into the context
    CGRect targetRect = SizeMakeRect(targetSize);
    UIImage *sourceImage = [UIImage imageNamed:@"pronghorn.jpg"];
    CGRect imgRect = RectByFillingRect(SizeMakeRect(sourceImage.size), targetRect);
    [sourceImage drawInRect:imgRect];

    // Create the watermark string
    NSString *watermark = @"watermark";
    UIFont *font =
    [UIFont fontWithName:@"HelveticaNeue" size:48];
    CGSize size = [watermark sizeWithAttributes:@{NSFontAttributeName:font}];
    CGRect stringRect = RectCenteredInRect(SizeMakeRect(size), targetRect);

    // Rotate the context
    CGPoint center = RectGetCenter(targetRect);
    CGContextTranslateCTM(context, center.x, center.y);
    CGContextRotateCTM(context, M_PI_4);
    CGContextTranslateCTM(context, -center.x, -center.y);

    // Draw the string using a blend mode
    CGContextSetBlendMode(context, kCGBlendModeDifference);
    [watermark drawInRect:stringRect withAttributes:@{NSFontAttributeName:font, NSForegroundColorAttributeName:[UIColor whiteColor]}];

    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return image;
}

The result looks like this:


2013-10-23 09:56:52 u010013695 Views: 1943

1 Introduction

  Starting with this section we begin to study drawing-related techniques. This section briefly introduces drawing and printing in iOS, the technologies involved, and how they are used.

  Please credit the source when reposting: http://blog.csdn.net/developer_zhang

2 Details

2.1 Original text

This document covers three related subjects:


  Drawing custom UI views. Custom UI views allow you to draw content that cannot easily be drawn with standard UI elements. For example, a drawing program might use a custom view for the user’s drawing, or an arcade game might use a custom view into which it draws sprites.
  Drawing into offscreen bitmap and PDF content. Whether you plan to display the images later, export them to a file, or print the images to an AirPrint-enabled printer, offscreen drawing lets you do so without interrupting the user’s workflow.
  Adding AirPrint support to your app. The iOS printing system lets you draw your content differently to fit on the page.
Figure I-1  You can combine custom views with standard views, and even draw things offscreen.


At a Glance

  The iOS native graphics system combines three major technologies: UIKit, Core Graphics, and Core Animation. UIKit provides views and some high-level drawing functionality within those views, Core Graphics provides additional (lower-level) drawing support within UIKit views, and Core Animation provides the ability to apply transformations and animation to UIKit views. Core Animation is also responsible for view compositing.

Custom UI Views Allow Greater Drawing Flexibility


  This document describes how to draw into custom UI views using native drawing technologies. These technologies, which include the Core Graphics and UIKit frameworks, support 2D drawing.

  Before you consider using a custom UI view, you should make certain that you really need to do so. Native drawing is suitable for handling more complex 2D layout needs. However, because custom views are processor-intensive, you should limit the amount of drawing you do using native drawing technologies.

  As an alternative to custom drawing, an iOS app can draw things onscreen in several other ways.

Using standard (built-in) views. Standard views let you draw common user-interface primitives, including lists, collections, alerts, images, progress bars, tables, and so on without the need to explicitly draw anything yourself. Using built-in views not only ensures a consistent user experience between iOS apps, but also saves you programming effort. If built-in views meet your needs, you should read View Programming Guide for iOS.
Using Core Animation layers. Core Animation lets you create complex, layered 2D views with animation and transformations. Core Animation is a good choice for animating standard views, or for combining views in complex ways to present the illusion of depth, and can be combined with custom-drawn views as described in this document. To learn more about Core Animation, read Core Animation Overview.
Using OpenGL ES in a GLKit view or a custom view. The OpenGL ES framework provides a set of open-standard graphics libraries geared primarily toward game development or apps that require high frame rates, such as virtual prototyping apps and mechanical and architectural design apps. It conforms to the OpenGL ES 2.0 and OpenGL ES v1.1 specifications. To learn more about OpenGL drawing, read OpenGL ES Programming Guide for iOS.
Using web content. The UIWebView class lets you display web-based user interfaces in an iOS app. To learn more about displaying web content in a web view, read Using UIWebView to display select document types and UIWebView Class Reference.
Depending on the type of app you are creating, it may be possible to use little or no custom drawing code. Although immersive apps typically make extensive use of custom drawing code, utility and productivity apps can often use standard views and controls to display their content.

The use of custom drawing code should be limited to situations where the content you display needs to change dynamically. For example, a drawing app typically needs to use custom drawing code to track the user’s drawing commands, and an arcade-style game may need to update the screen constantly to reflect the changing game environment. In those situations, you should choose an appropriate drawing technology and create a custom view class to handle events and update the display appropriately.

On the other hand, if the bulk of your app’s interface is fixed, you can render the interface in advance to one or more image files and display those images at runtime using the UIImageView class. You can layer image views with other content as needed to build your interface. You can also use the UILabel class to display configurable text and include buttons or other controls to provide interactivity. For example, an electronic version of a board game can often be created with little or no custom drawing code.

  Because custom views are generally more processor-intensive (with less help from the GPU), if you can do what you need to do using standard views, you should always do so. Also, you should make your custom views as small as possible, containing only content that you cannot draw in any other way, and use standard views for everything else. If you need to combine standard UI elements with custom drawing, consider using a Core Animation layer to superimpose a custom view with a standard view so that you draw as little as possible.

A Few Key Concepts Underpin Drawing With the Native Technologies

  When you draw content with UIKit and Core Graphics, you should be familiar with a few concepts in addition to the view drawing cycle.

  For the drawRect: method, UIKit creates a graphics context for rendering to the display. This graphics context contains the information the drawing system needs to perform drawing commands, including attributes such as fill and stroke color, the font, the clipping area, and line width. You can also create and draw into custom graphics contexts for bitmap images and PDF content.
  UIKit has a default coordinate system where the origin of drawing is at the top-left of a view; positive values extend downward and to the right of that origin. You can change the size, orientation, and position of the default coordinate system relative to the underlying view or window by modifying the current transformation matrix, which maps a view’s coordinate space to the device screen.
  In iOS, the logical coordinate space, which measures distances in points, is not equal to the device coordinate space, which measures in pixels. For greater precision, points are expressed in floating-point values.
Relevant Chapter: “iOS Drawing Concepts”
UIKit, Core Graphics, and Core Animation Give Your App Many Tools For Drawing

UIKit and Core Graphics have many complementary graphics capabilities that encompass graphics contexts, Bézier paths, images, bitmaps, transparency layers, colors, fonts, PDF content, and drawing rectangles and clipping areas. In addition, Core Graphics has functions related to line attributes, color spaces, pattern colors, gradients, shadings, and image masks. The Core Animation framework enables you to create fluid animations by manipulating and displaying content created with other technologies.

Relevant Chapters: “iOS Drawing Concepts,” “Drawing Shapes Using Bézier Paths,” “Drawing and Creating Images,” “Generating PDF Content”
Apps Can Draw Into Offscreen Bitmaps or PDFs
It is often useful for an app to draw content offscreen:

Offscreen bitmap contexts are often used when scaling down photographs for upload, rendering content into an image file for storage purposes, or using Core Graphics to generate complex images for display.
Offscreen PDF contexts are often used when drawing user-generated content for printing purposes.
  After you create an offscreen context, you can draw into it just as you would draw within the drawRect: method of a custom view.
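
As a minimal sketch of that idea (the size and the circle are arbitrary), UIKit’s offscreen bitmap API looks like this:

UIGraphicsBeginImageContextWithOptions(CGSizeMake(200, 200), NO, 0.0);  // 0.0 = use the screen’s scale
[[UIColor redColor] setFill];
[[UIBezierPath bezierPathWithOvalInRect:CGRectMake(20, 20, 160, 160)] fill];
UIImage *offscreenImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
// offscreenImage can now be displayed, written to a file, or handed to a print job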

Relevant Chapters: “Drawing and Creating Images,” “Generating PDF Content”


Apps Have a Range of Options for Printing Content


  As of iOS 4.2, apps can print content wirelessly to supported printers using AirPrint. When assembling a print job, they have three ways to give UIKit the content to print:

  They can give the framework one or more objects that are directly printable; such objects require minimal app involvement. These are instances of the NSData, NSURL, UIImage, or ALAsset classes containing or referencing image data or PDF content.
  They can assign a print formatter to the print job. A print formatter is an object that can lay out content of a certain type (such as plain text or HTML) over multiple pages.
  They can assign a page renderer to the print job. A page renderer is usually an instance of a custom subclass of UIPrintPageRenderer that draws the content to be printed in part or in full. A page renderer can use one or more print formatters to help it draw and format its printable content.
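
As a hedged sketch of the second option (the job name and text are placeholders), handing a print formatter to the shared print controller looks roughly like this:

UIPrintInteractionController *controller = [UIPrintInteractionController sharedPrintController];

UIPrintInfo *printInfo = [UIPrintInfo printInfo];
printInfo.outputType = UIPrintInfoOutputGeneral;
printInfo.jobName = @"Example Job";   // hypothetical job name
controller.printInfo = printInfo;

// A print formatter lays out content of a certain type over multiple pages
controller.printFormatter = [[UISimpleTextPrintFormatter alloc] initWithText:@"Hello, AirPrint"];

[controller presentAnimated:YES completionHandler:
    ^(UIPrintInteractionController *printController, BOOL completed, NSError *error) {
        if (!completed && error) NSLog(@"Printing failed: %@", error);
    }];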


Relevant Chapter: “Printing”


It’s Easy to Update Your App for High-Resolution Screens


  Some iOS devices feature high-resolution screens, so your app must be prepared to run on these devices and on devices with lower-resolution screens. iOS handles much of the work required to handle the different resolutions, but your app must do the rest. Your tasks include providing specially named high-resolution images and modifying your layer- and image-related code to take the current scale factor into account.

Relevant Appendix: “Supporting High-Resolution Screens In Views”

2.2 Vocabulary

subjects: n. disciplines; subjects (plural of subject); (library science) topics; test subjects
arcade: n. arcade (architecture); amusement arcade (with coin-operated video games); arcaded street
sprite: n. sprite, elf; goblin; imp
offscreen: adv. offscreen, behind the scenes; in one's private life
bitmap: n. (computing) bitmap, bit image
AirPrint: wireless printing
interrupt: n. interrupt; interruption
figure: n. number; person; figure, shape; price; (a person's) build; portrait
additional: adj. additional, extra
transformation: n. transformation; conversion; reform; deformation
compositing: vt. compositing; blending (present participle of composite)
flexibility: n. flexibility; elasticity; adaptability
consider: vt. to consider; to regard as; to take into account; to ponder
suitable: adj. appropriate; matching
processor: n. (computing) processor; handler; one who processes
intensive: adj. intensified; concentrated; thorough
alternative: adj. alternative; optional; alternating
primitive: adj. primitive, ancient; simple, crude
explicitly: adv. explicitly; clearly
ensure: vt. to guarantee, to ensure; to make safe
consistent: adj. consistent, uniform; persistent
effort: n. effort; achievement
layered: adj. layered; stratified
animating: adj. lively; inspiring
illusion: n. illusion; mistaken idea or belief
geared: v. connected by gears; prepared (past tense of gear); adj. gear-driven; geared toward
primarily: adv. first of all; primarily, fundamentally
toward: prep. toward; regarding; for; approaching
rates: n. prices; (math) ratios; grades (plural of rate)
virtual: adj. (computing) virtual; effective; de facto
prototype: n. prototype; standard, model
mechanical: adj. mechanical; relating to mechanics; rigid
architectural: adj. architectural
conform: vi. to conform; to comply; to adapt
specification: n. specification; manual; detailed description
immersive: immersive; a sense of immersion
extensive: adj. extensive; large amounts of; broad
utility: n. practicality; utility; public facilities; function
productivity: n. productivity; production rate; productive capacity
dynamically: adv. dynamically; energetically; ever-changingly
track: vt. to track; to trace; to follow
constantly: adv. constantly; frequently
appropriate: adj. appropriate
bulk: n. volume, capacity; the majority; a large mass
fixed: adj. fixed; set; prepared; definite
render: vt. to cause to become; to present; to carry out; to render (color); to repay
in advance: adv. beforehand, in advance
interactivity: n. interactivity
electronic: adj. electronic
board: n. board of directors; plank; deck; board (meals)
generally: adv. usually; generally
superimpose: vt. to add; to overlap; to attach; to superimpose
few: adj. very few; hardly any
concept: n. idea, concept
underpin: vt. to consolidate; to support; to support from below; to strengthen the foundation of
be familiar with: to be familiar with, to know
rendering: n. translation; expression; performance; depiction; (architectural) rendering
perform: vt. to execute; to complete; to perform (music)
stroke: vt. to stroke; to strike; to row; to cross out
clipping: n. clipping, cutting; newspaper clipping; something cut off
origin: n. origin; origin point; background; beginning
positive: n. positive number; (photography) positive print
extend: vt. to extend; to expand; to promote; to stretch out; to give
downward: adj. downward, descending
orientation: n. direction; orientation; adaptation; briefing
underlying: v. lying beneath; forming the basis of; taking precedence over (present participle of underlie)
matrix: n. (math) matrix; mold; (biology/geology) matrix; womb
logical: adj. logical, reasonable; relating to logic
measure: vt. to measure; to estimate; to weigh
pixel: n. pixel (picture element)
precision: n. precision; accuracy
expressed: v. expressed (past tense and past participle of express); represented (with symbols)
complementary: adj. complementary
capability: n. capability; capacity
encompass: vt. to encompass; to surround; to include
Bézier: Bézier curve. A further note: the commonly used vector curve types are Bézier curves and non-uniform rational B-splines (NURBS); the former appear mostly in 2D vector drawing software, the latter in 3D graphics software.
transparency: n. transparency; slide; patterned glass
gradient: n. (math/physics) gradient; slope; inclination
mask: n. mask; face mask; disguise
fluid: adj. flowing; fluent; not fixed
manipulating: v. manipulating; fabricating (present participle of manipulate)
scale down: to reduce proportionally; to scale down
photographs: n. photographs; lifelike depictions (plural of photograph)
storage: n. storage; warehouse; repository
relevant: adj. relevant; to the point; pertinent; significant
wireless: adj. wireless; radio
assemble: to gather; to assemble
minimal: adj. lowest; minimal
involvement: n. involvement; inclusion; entanglement
assign: vt. to allocate; to designate; (computing/math) to assign a value
renderer: n. renderer; drawing device
resolution: n. (display) resolution; resolution (decision); solution; determination
rest: n. rest, stillness; break; remainder; support
scale: n. scale; proportion; (fish) scale; graduation; balance; range of values
factor: n. factor; element; (math) factor; agent

3 Conclusion

  That is all of the content; I hope it is helpful to everyone.

2019-04-19 19:21:47 qfeung Views: 46

iOS Drawing Concepts

High-quality graphics are an important part of your app’s user interface. Providing high-quality graphics not only makes your app look good, but it also makes your app look like a natural extension to the rest of the system. iOS provides two primary paths for creating high-quality graphics in your system: OpenGL or native rendering using Quartz, Core Animation, and UIKit. This document describes native rendering. (To learn about OpenGL drawing, see OpenGL ES Programming Guide.)

Quartz is the main drawing interface, providing support for path-based drawing, anti-aliased rendering, gradient fill patterns, images, colors, coordinate-space transformations, and PDF document creation, display, and parsing. UIKit provides Objective-C wrappers for line art, Quartz images, and color manipulations. Core Animation provides the underlying support for animating changes in many UIKit view properties and can also be used to implement custom animations.

This chapter provides an overview of the drawing process for iOS apps, along with specific drawing techniques for each of the supported drawing technologies. You will also find tips and guidance on how to optimize your drawing code for the iOS platform.

Important: Not all UIKit classes are thread safe. Be sure to check the documentation before performing drawing-related operations on threads other than your app’s main thread.

The UIKit Graphics System

In iOS, all drawing to the screen—regardless of whether it involves OpenGL, Quartz, UIKit, or Core Animation—occurs within the confines of an instance of the UIView class or a subclass thereof. Views define the portion of the screen in which drawing occurs. If you use system-provided views, this drawing is handled for you automatically. If you define custom views, however, you must provide the drawing code yourself. If you use Quartz, Core Animation, and UIKit to draw, you use the drawing concepts described in the following sections.

In addition to drawing directly to the screen, UIKit also allows you to draw into offscreen bitmap and PDF graphics contexts. When you draw in an offscreen context, you are not drawing in a view, which means that concepts such as the view drawing cycle do not apply (unless you then obtain that image and draw it in an image view or similar).

The View Drawing Cycle

The basic drawing model for subclasses of the UIView class involves updating content on demand. The UIView class makes the update process easier and more efficient; however, by gathering the update requests you make and delivering them to your drawing code at the most appropriate time.

When a view is first shown or when a portion of the view needs to be redrawn, iOS asks the view to draw its content by calling the view’s drawRect: method.

There are several actions that can trigger a view update:

  • Moving or removing another view that was partially obscuring your view
  • Making a previously hidden view visible again by setting its hidden property to NO
  • Scrolling a view off of the screen and then back onto the screen
  • Explicitly calling the setNeedsDisplay or setNeedsDisplayInRect: method of your view

System views are redrawn automatically. For custom views, you must override the drawRect: method and perform all your drawing inside it. Inside your drawRect: method, use the native drawing technologies to draw shapes, text, images, gradients, or any other visual content you want. The first time your view becomes visible, iOS passes a rectangle to the view’s drawRect: method that contains your view’s entire visible area. During subsequent calls, the rectangle includes only the portion of the view that actually needs to be redrawn. For maximum performance, you should redraw only affected content.

After calling your drawRect: method, the view marks itself as updated and waits for new actions to arrive and trigger another update cycle. If your view displays static content, then all you need to do is respond to changes in your view’s visibility caused by scrolling and the presence of other views.

If you want to change the contents of the view, however, you must tell your view to redraw its contents. To do this, call the setNeedsDisplay or setNeedsDisplayInRect: method to trigger an update. For example, if you were updating content several times a second, you might want to set up a timer to update your view. You might also update your view in response to user interactions or the creation of new content in your view.
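
As a minimal sketch of that cycle (the class, property, and drawing here are made up for illustration), a custom view might look like this:

@interface ProgressRingView : UIView   // hypothetical custom view
@property (nonatomic) CGFloat progress;   // 0.0 through 1.0
@end

@implementation ProgressRingView

- (void)setProgress:(CGFloat)progress
{
    _progress = progress;
    [self setNeedsDisplay];   // ask UIKit to schedule a redraw; never call drawRect: directly
}

- (void)drawRect:(CGRect)rect
{
    // Called by UIKit whenever the view (or part of it) needs to be redrawn
    CGPoint center = CGPointMake(CGRectGetMidX(self.bounds), CGRectGetMidY(self.bounds));
    CGFloat radius = MIN(self.bounds.size.width, self.bounds.size.height) / 2 - 4;
    UIBezierPath *arc = [UIBezierPath bezierPathWithArcCenter:center
                                                       radius:radius
                                                   startAngle:-M_PI_2
                                                     endAngle:(-M_PI_2 + self.progress * 2 * M_PI)
                                                    clockwise:YES];
    arc.lineWidth = 3;
    [[UIColor blueColor] setStroke];
    [arc stroke];
}

@end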

Important: Do not call your view’s drawRect: method yourself. That method should be called only by code built into iOS during a screen repaint. At other times, no graphics context exists, so drawing is not possible. (Graphics contexts are explained in the next section.)

Coordinate Systems and Drawing in iOS

When an app draws something in iOS, it has to locate the drawn content in a two-dimensional space defined by a coordinate system. This notion might seem straightforward at first glance, but it isn’t. Apps in iOS sometimes have to deal with different coordinate systems when drawing.

In iOS, all drawing occurs in a graphics context. Conceptually, a graphics context is an object that describes where and how drawing should occur, including basic drawing attributes such as the colors to use when drawing, the clipping area, line width and style information, font information, compositing options, and so on.

In addition, as shown in Figure 1-1, each graphics context has a coordinate system. More precisely, each graphics context has three coordinate systems:

  • The drawing (user) coordinate system. This coordinate system is used when you issue drawing commands.

  • The view coordinate system (base space). This coordinate system is a fixed coordinate system relative to the view.

  • The (physical) device coordinate system. This coordinate system represents pixels on the physical screen.

Figure 1-1 The relationship between drawing coordinates, view coordinates, and hardware coordinates
The drawing frameworks of iOS create graphics contexts for drawing to specific destinations—the screen, bitmaps, PDF content, and so on—and these graphics contexts establish the initial drawing coordinate system for that destination. This initial drawing coordinate system is known as the default coordinate system, and is a 1:1 mapping onto the view’s underlying coordinate system.

Each view also has a current transformation matrix (CTM), a mathematical matrix that maps the points in the current drawing coordinate system to the (fixed) view coordinate system. The app can modify this matrix (as described later) to change the behavior of future drawing operations.

Each of the drawing frameworks of iOS establishes a default coordinate system based on the current graphics context. In iOS, there are two main types of coordinate systems:

An upper-left-origin coordinate system (ULO), in which the origin of drawing operations is at the upper-left corner of the drawing area, with positive values extending downward and to the right. The default coordinate system used by the UIKit and Core Animation frameworks is ULO-based.
A lower-left-origin coordinate system (LLO), in which the origin of drawing operations is at the lower-left corner of the drawing area, with positive values extending upward and to the right. The default coordinate system used by Core Graphics framework is LLO-based.

These coordinate systems are shown in Figure 1-2.
Figure 1-2 Default coordinate systems in iOS

Note: The default coordinate system in OS X is LLO-based. Although the drawing functions and methods of the Core Graphics and AppKit frameworks are perfectly suited to this default coordinate system, AppKit provides programmatic support for flipping the drawing coordinate system to have an upper-left origin.

Before calling your view’s drawRect: method, UIKit establishes the default coordinate system for drawing to the screen by making a graphics context available for drawing operations. Within a view’s drawRect: method, an app can set graphics-state parameters (such as fill color) and draw to the current graphics context without needing to refer to the graphics context explicitly. This implicit graphics context establishes a ULO default coordinate system.

Points Versus Pixels

In iOS there is a distinction between the coordinates you specify in your drawing code and the pixels of the underlying device. When using native drawing technologies such as Quartz, UIKit, and Core Animation, the drawing coordinate space and the view’s coordinate space are both logical coordinate spaces, with distances measured in points. These logical coordinate systems are decoupled from the device coordinate space used by the system frameworks to manage the pixels onscreen.

The system automatically maps points in the view’s coordinate space to pixels in the device coordinate space, but this mapping is not always one-to-one. This behavior leads to an important fact that you should always remember:

One point does not necessarily correspond to one physical pixel.

The purpose of using points (and the logical coordinate system) is to provide a consistent size of output that is device independent. For most purposes, the actual size of a point is irrelevant. The goal of points is to provide a relatively consistent scale that you can use in your code to specify the size and position of views and rendered content. How points are actually mapped to pixels is a detail that is handled by the system frameworks. For example, on a device with a high-resolution screen, a line that is one point wide may actually result in a line that is two physical pixels wide. The result is that if you draw the same content on two similar devices, with only one of them having a high-resolution screen, the content appears to be about the same size on both devices.

Note: In the context of PDF rendering and printing, Core Graphics defines “point” using the industry standard mapping of one point to 1/72 of an inch.

In iOS, the UIScreen, UIView, UIImage, and CALayer classes provide properties to obtain (and, in some cases, set) a scale factor that describes the relationship between points and pixels for that particular object. For example, every UIKit view has a contentScaleFactor property. On a standard-resolution screen, the scale factor is typically 1.0. On a high-resolution screen, the scale factor is typically 2.0. In the future, other scale factors may also be possible. (In iOS prior to version 4, you should assume a scale factor of 1.0.)
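
A small hedged sketch of reading those scale factors (someView and someImage stand in for objects you already have):

CGFloat screenScale = [UIScreen mainScreen].scale;    // typically 1.0 on standard, 2.0 on high-resolution screens
CGFloat viewScale   = someView.contentScaleFactor;    // per-view points-to-pixels factor
CGFloat imageScale  = someImage.scale;                // scale baked into a UIImage
NSLog(@"screen %.1f, view %.1f, image %.1f", screenScale, viewScale, imageScale);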

Native drawing technologies, such as Core Graphics, take the current scale factor into account for you. For example, if one of your views implements a drawRect: method, UIKit automatically sets the scale factor for that view to the screen’s scale factor. In addition, UIKit automatically modifies the current transformation matrix of any graphics contexts used during drawing to take into account the view’s scale factor. Thus, any content you draw in your drawRect: method is scaled appropriately for the underlying device’s screen.

Because of this automatic mapping, when writing drawing code, pixels usually don’t matter. However, there are times when you might need to change your app’s drawing behavior depending on how points are mapped to pixels—to download higher-resolution images on devices with high-resolution screens or to avoid scaling artifacts when drawing on a low-resolution screen, for example.

In iOS, when you draw things onscreen, the graphics subsystem uses a technique called antialiasing to approximate a higher-resolution image on a lower-resolution screen. The best way to explain this technique is by example. When you draw a black vertical line on a solid white background, if that line falls exactly on a pixel, it appears as a series of black pixels in a field of white. If it appears exactly between two pixels, however, it appears as two grey pixels side-by-side, as shown in Figure 1-3.

Figure 1-3 A one-point line centered at a whole-numbered point value


Positions defined by whole-numbered points fall at the midpoint between pixels. For example, if you draw a one-pixel-wide vertical line from (1.0, 1.0) to (1.0, 10.0), you get a fuzzy grey line. If you draw a two-pixel-wide line, you get a solid black line because it fully covers two pixels (one on either side of the specified point). As a rule, lines that are an odd number of physical pixels wide appear softer than lines with widths measured in even numbers of physical pixels unless you adjust their position to make them cover pixels fully.

Where the scale factor comes into play is when determining how many pixels are covered by a one-point-wide line.

On a low-resolution display (with a scale factor of 1.0), a one-point-wide line is one pixel wide. To avoid antialiasing when you draw a one-point-wide horizontal or vertical line, if the line is an odd number of pixels in width, you must offset the position by 0.5 points to either side of a whole-numbered position. If the line is an even number of points in width, to avoid a fuzzy line, you must not do so.
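
For instance, inside drawRect: on a 1.0-scale screen, a crisp one-pixel vertical hairline could be drawn by shifting it to the pixel midpoint (a hedged sketch; the coordinates are arbitrary):

CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context, 1.0);                 // an odd number of pixels wide at scale 1.0
CGContextSetRGBStrokeColor(context, 0.0, 0.0, 0.0, 1.0);
CGContextMoveToPoint(context, 10.5, 20.0);           // the 0.5-point offset centers the line on a pixel
CGContextAddLineToPoint(context, 10.5, 120.0);
CGContextStrokePath(context);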

Figure 1-4 Appearance of one-point-wide lines on standard and retina displays

On a high-resolution display (with a scale factor of 2.0), a line that is one point wide is not antialiased at all because it occupies two full pixels (from -0.5 to +0.5). To draw a line that covers only a single physical pixel, you would need to make it 0.5 points in thickness and offset its position by 0.25 points. A comparison between the two types of screens is shown in Figure 1-4.

Of course, changing drawing characteristics based on scale factor may have unexpected consequences. A 1-pixel-wide line might look nice on some devices but on a high-resolution device might be so thin that it is difficult to see clearly. It is up to you to determine whether to make such a change.

2013-10-18 12:31:17 qq469236803 Views: 859

iOS Drawing Concepts


https://developer.apple.com/library/ios/documentation/2ddrawing/conceptual/drawingprintingios/graphicsdrawingoverview/graphicsdrawingoverview.html


iOS Drawing Concepts

High-quality graphics are an important part of your app’s user interface. Providing high-quality graphics not only makes your app look good, but it also makes your app look like a natural extension to the rest of the system. iOS provides two primary paths for creating high-quality graphics in your system: OpenGL or native rendering using Quartz, Core Animation, and UIKit. This document describes native rendering. (To learn about OpenGL drawing, see OpenGL ES Programming Guide for iOS.)

Quartz is the main drawing interface, providing support for path-based drawing, anti-aliased rendering, gradient fill patterns, images, colors, coordinate-space transformations, and PDF document creation, display, and parsing. UIKit provides Objective-C wrappers for line art, Quartz images, and color manipulations. Core Animation provides the underlying support for animating changes in many UIKit view properties and can also be used to implement custom animations.

This chapter provides an overview of the drawing process for iOS apps, along with specific drawing techniques for each of the supported drawing technologies. You will also find tips and guidance on how to optimize your drawing code for the iOS platform.

Important: Not all UIKit classes are thread safe. Be sure to check the documentation before performing drawing-related operations on threads other than your app’s main thread.

The UIKit Graphics System

In iOS, all drawing to the screen—regardless of whether it involves OpenGL, Quartz, UIKit, or Core Animation—occurs within the confines of an instance of the UIView class or a subclass thereof. Views define the portion of the screen in which drawing occurs. If you use system-provided views, this drawing is handled for you automatically. If you define custom views, however, you must provide the drawing code yourself. If you use Quartz, Core Animation, and UIKit to draw, you use the drawing concepts described in the following sections.

In addition to drawing directly to the screen, UIKit also allows you to draw into offscreen bitmap and PDF graphics contexts. When you draw in an offscreen context, you are not drawing in a view, which means that concepts such as the view drawing cycle do not apply (unless you then obtain that image and draw it in an image view or similar).

The View Drawing Cycle

The basic drawing model for subclasses of the UIView class involves updating content on demand. The UIView class makes the update process easier and more efficient; however, by gathering the update requests you make and delivering them to your drawing code at the most appropriate time.

When a view is first shown or when a portion of the view needs to be redrawn, iOS asks the view to draw its content by calling the view’s drawRect: method.

There are several actions that can trigger a view update:

  • Moving or removing another view that was partially obscuring your view

  • Making a previously hidden view visible again by setting its hidden property to NO

  • Scrolling a view off of the screen and then back onto the screen

  • Explicitly calling the setNeedsDisplay or setNeedsDisplayInRect: method of your view

System views are redrawn automatically. For custom views, you must override the drawRect: method and perform all your drawing inside it. Inside your drawRect: method, use the native drawing technologies to draw shapes, text, images, gradients, or any other visual content you want. The first time your view becomes visible, iOS passes a rectangle to the view’s drawRect: method that contains your view’s entire visible area. During subsequent calls, the rectangle includes only the portion of the view that actually needs to be redrawn. For maximum performance, you should redraw only affected content.

After calling your drawRect: method, the view marks itself as updated and waits for new actions to arrive and trigger another update cycle. If your view displays static content, then all you need to do is respond to changes in your view’s visibility caused by scrolling and the presence of other views.

If you want to change the contents of the view, however, you must tell your view to redraw its contents. To do this, call the setNeedsDisplay or setNeedsDisplayInRect: method to trigger an update. For example, if you were updating content several times a second, you might want to set up a timer to update your view. You might also update your view in response to user interactions or the creation of new content in your view.

Important: Do not call your view’s drawRect: method yourself. That method should be called only by code built into iOS during a screen repaint. At other times, no graphics context exists, so drawing is not possible. (Graphics contexts are explained in the next section.)

Coordinate Systems and Drawing in iOS

When an app draws something in iOS, it has to locate the drawn content in a two-dimensional space defined by a coordinate system. This notion might seem straightforward at first glance, but it isn’t. Apps in iOS sometimes have to deal with different coordinate systems when drawing.

In iOS, all drawing occurs in a graphics context. Conceptually, a graphics context is an object that describes where and how drawing should occur, including basic drawing attributes such as the colors to use when drawing, the clipping area, line width and style information, font information, compositing options, and so on.

In addition, as shown in Figure 1-1, each graphics context has a coordinate system. More precisely, each graphics context has three coordinate systems:

  • The drawing (user) coordinate system. This coordinate system is used when you issue drawing commands.

  • The view coordinate system (base space). This coordinate system is a fixed coordinate system relative to the view.

  • The (physical) device coordinate system. This coordinate system represents pixels on the physical screen.

Figure 1-1  The relationship between drawing coordinates, view coordinates, and hardware coordinates

The drawing frameworks of iOS create graphics contexts for drawing to specific destinations—the screen, bitmaps, PDF content, and so on—and these graphics contexts establish the initial drawing coordinate system for that destination. This initial drawing coordinate system is known as the default coordinate system, and is a 1:1 mapping onto the view’s underlying coordinate system.

Each view also has a current transformation matrix (CTM), a mathematical matrix that maps the points in the current drawing coordinate system to the (fixed) view coordinate system. The app can modify this matrix (as described later) to change the behavior of future drawing operations.

Each of the drawing frameworks of iOS establishes a default coordinate system based on the current graphics context. In iOS, there are two main types of coordinate systems:

  • An upper-left-origin coordinate system (ULO), in which the origin of drawing operations is at the upper-left corner of the drawing area, with positive values extending downward and to the right. The default coordinate system used by the UIKit and Core Animation frameworks is ULO-based.

  • A lower-left-origin coordinate system (LLO), in which the origin of drawing operations is at the lower-left corner of the drawing area, with positive values extending upward and to the right. The default coordinate system used by Core Graphics framework is LLO-based.

These coordinate systems are shown in Figure 1-2.

Figure 1-2  Default coordinate systems in iOS

Note: The default coordinate system in OS X is LLO-based. Although the drawing functions and methods of the Core Graphics and AppKit frameworks are perfectly suited to this default coordinate system, AppKit provides programmatic support for flipping the drawing coordinate system to have an upper-left origin.

Before calling your view’s drawRect: method, UIKit establishes the default coordinate system for drawing to the screen by making a graphics context available for drawing operations. Within a view’s drawRect: method, an app can set graphics-state parameters (such as fill color) and draw to the current graphics context without needing to refer to the graphics context explicitly. This implicit graphics context establishes a ULO default coordinate system.

Points Versus Pixels

In iOS there is a distinction between the coordinates you specify in your drawing code and the pixels of the underlying device. When using native drawing technologies such as Quartz, UIKit, and Core Animation, the drawing coordinate space and the view’s coordinate space are both logical coordinate spaces, with distances measured in points. These logical coordinate systems are decoupled from the device coordinate space used by the system frameworks to manage the pixels onscreen.

The system automatically maps points in the view’s coordinate space to pixels in the device coordinate space, but this mapping is not always one-to-one. This behavior leads to an important fact that you should always remember:

  • One point does not necessarily correspond to one physical pixel.

The purpose of using points (and the logical coordinate system) is to provide a consistent size of output that is device independent. For most purposes, the actual size of a point is irrelevant. The goal of points is to provide a relatively consistent scale that you can use in your code to specify the size and position of views and rendered content. How points are actually mapped to pixels is a detail that is handled by the system frameworks. For example, on a device with a high-resolution screen, a line that is one point wide may actually result in a line that is two physical pixels wide. The result is that if you draw the same content on two similar devices, with only one of them having a high-resolution screen, the content appears to be about the same size on both devices.

In iOS, the UIScreen, UIView, UIImage, and CALayer classes provide properties to obtain (and, in some cases, set) a scale factor that describes the relationship between points and pixels for that particular object. For example, every UIKit view has a contentScaleFactor property. On a standard-resolution screen, the scale factor is typically 1.0. On a high-resolution screen, the scale factor is typically 2.0. In the future, other scale factors may also be possible. (In iOS prior to version 4, you should assume a scale factor of 1.0.)

Native drawing technologies, such as Core Graphics, take the current scale factor into account for you. For example, if one of your views implements a drawRect: method, UIKit automatically sets the scale factor for that view to the screen’s scale factor. In addition, UIKit automatically modifies the current transformation matrix of any graphics contexts used during drawing to take into account the view’s scale factor. Thus, any content you draw in your drawRect: method is scaled appropriately for the underlying device’s screen.

Because of this automatic mapping, when writing drawing code, pixels usually don’t matter. However, there are times when you might need to change your app’s drawing behavior depending on how points are mapped to pixels—to download higher-resolution images on devices with high-resolution screens or to avoid scaling artifacts when drawing on a low-resolution screen, for example.

In iOS, when you draw things onscreen, the graphics subsystem uses a technique called antialiasing to approximate a higher-resolution image on a lower-resolution screen. The best way to explain this technique is by example. When you draw a black vertical line on a solid white background, if that line falls exactly on a pixel, it appears as a series of black pixels in a field of white. If it appears exactly between two pixels, however, it appears as two grey pixels side-by-side, as shown in Figure 1-3.

Figure 1-3  A one-point line centered at a whole-numbered point value

Positions defined by whole-numbered points fall at the midpoint between pixels. For example, if you draw a one-pixel-wide vertical line from (1.0, 1.0) to (1.0, 10.0), you get a fuzzy grey line. If you draw a two-pixel-wide line, you get a solid black line because it fully covers two pixels (one on either side of the specified point). As a rule, lines that are an odd number of physical pixels wide appear softer than lines with widths measured in even numbers of physical pixels unless you adjust their position to make them cover pixels fully.

Where the scale factor comes into play is when determining how many pixels are covered by a one-point-wide line.

On a low-resolution display (with a scale factor of 1.0), a one-point-wide line is one pixel wide. To avoid antialiasing when you draw a one-point-wide horizontal or vertical line, if the line is an odd number of pixels in width, you must offset the position by 0.5 points to either side of a whole-numbered position. If the line is an even number of points in width, to avoid a fuzzy line, you must not do so.

Figure 1-4  Appearance of one-point-wide lines on standard and retina displays

On a high-resolution display (with a scale factor of 2.0), a line that is one point wide is not antialiased at all because it occupies two full pixels (from -0.5 to +0.5). To draw a line that covers only a single physical pixel, you would need to make it 0.5 points in thickness and offset its position by 0.25 points. A comparison between the two types of screens is shown in Figure 1-4.

Of course, changing drawing characteristics based on scale factor may have unexpected consequences. A 1-pixel-wide line might look nice on some devices but on a high-resolution device might be so thin that it is difficult to see clearly. It is up to you to determine whether to make such a change.

Obtaining Graphics Contexts

Most of the time, graphics contexts are configured for you. Each view object automatically creates a graphics context so that your code can start drawing immediately as soon as your custom drawRect: method is called. As part of this configuration, the underlying UIView class creates a graphics context (a CGContextRef opaque type) for the current drawing environment.

If you want to draw somewhere other than your view (for example, to capture a series of drawing operations in a PDF or bitmap file), or if you need to call Core Graphics functions that require a context object, you must take additional steps to obtain a graphics context object. The sections below explain how.

For more information about graphics contexts, modifying the graphics state information, and using graphics contexts to create custom content, see Quartz 2D Programming Guide. For a list of functions used in conjunction with graphics contexts, see CGContext Reference, CGBitmapContext Reference, and CGPDFContext Reference.

Drawing to the Screen

If you use Core Graphics functions to draw to a view, either in the drawRect: method or elsewhere, you’ll need a graphics context for drawing. (The first parameter of many of these functions must be a CGContextRef object.) You can call the function UIGraphicsGetCurrentContext to get an explicit version of the same graphics context that’s made implicit in drawRect:. Because it’s the same graphics context, the drawing functions should also make reference to a ULO default coordinate system.

If you want to use Core Graphics functions to draw in a UIKit view, you should use the ULO coordinate system of UIKit for drawing operations. Alternatively, you can apply a flip transform to the CTM and then draw an object in the UIKit view using Core Graphics native LLO coordinate system. “Flipping the Default Coordinate System” discusses flip transforms in detail.

The UIGraphicsGetCurrentContext function always returns the graphics context currently in effect. For example, if you create a PDF context and then call UIGraphicsGetCurrentContext, you’d receive that PDF context. You must use the graphics context returned by UIGraphicsGetCurrentContext if you use Core Graphics functions to draw to a view.
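
A hedged sketch of fetching the implicit context inside drawRect: and handing it to Core Graphics (the fill color and ellipse are arbitrary):

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();   // the same implicit context drawRect: draws into
    CGContextSetRGBFillColor(context, 0.0, 0.5, 1.0, 1.0);
    CGContextFillEllipseInRect(context, CGRectInset(self.bounds, 10, 10));
    // Coordinates here follow UIKit’s upper-left-origin (ULO) system
}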

Note: The UIPrintPageRenderer class declares several methods for drawing printable content. In a manner similar to drawRect:, UIKit installs an implicit graphics context for implementations of these methods. This graphics context establishes a ULO default coordinate system.

Drawing to Bitmap Contexts and PDF Contexts

UIKit provides functions for rendering images in a bitmap graphics context and for generating PDF content by drawing in a PDF graphics context. Both of these approaches require that you first call a function that creates a graphics context—a bitmap context or a PDF context, respectively. The returned object serves as the current (and implicit) graphics context for subsequent drawing and state-setting calls. When you finish drawing in the context, you call another function to close the context.

Both the bitmap context and the PDF context provided by UIKit establish a ULO default coordinate system. Core Graphics has corresponding functions for rendering in a bitmap graphics context and for drawing in a PDF graphics context. The context that an app directly creates through Core Graphics, however, establishes a LLO default coordinate system.
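
As a hedged sketch of the PDF side of that API (the file name, page size, and text are arbitrary):

NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"sketch.pdf"];
UIGraphicsBeginPDFContextToFile(path, CGRectMake(0, 0, 612, 792), nil);   // US Letter page, in points
UIGraphicsBeginPDFPage();                                                 // start page 1; ULO coordinates
[@"Offscreen PDF drawing" drawAtPoint:CGPointMake(72, 72)
                       withAttributes:@{NSFontAttributeName: [UIFont systemFontOfSize:24]}];
UIGraphicsEndPDFContext();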

Note: In iOS, it is recommended that you use the UIKit functions for drawing to bitmap contexts and PDF contexts. However, if you do use the Core Graphics alternatives and intend to display the rendered results, you will have to adjust your code to compensate for the difference in default coordinate systems. See “Flipping the Default Coordinate System” for more information.

For details, see “Drawing and Creating Images” (for drawing to bitmap contexts) and “Generating PDF Content” (for drawing to PDF contexts).

Color and Color Spaces

iOS supports the full range of color spaces available in Quartz; however, most apps should need only the RGB color space. Because iOS is designed to run on embedded hardware and display graphics onscreen, the RGB color space is the most appropriate one to use.

The UIColor object provides convenience methods for specifying color values using RGB, HSB, and grayscale values. When creating colors in this way, you never need to specify the color space. It is determined for you automatically by the UIColor object.

You can also use the CGContextSetRGBStrokeColor and CGContextSetRGBFillColor functions in the Core Graphics framework to create and set colors. Although the Core Graphics framework includes support for creating colors using other color spaces, and for creating custom color spaces, using those colors in your drawing code is not recommended. Your drawing code should always use RGB colors.
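For example, the two ways of setting the same RGB fill color look like this (the color values are arbitrary):

CGContextRef context = UIGraphicsGetCurrentContext();

// UIKit: UIColor chooses the color space for you.
[[UIColor colorWithRed:0.2 green:0.4 blue:0.8 alpha:1.0] setFill];

// Core Graphics: the same fill color set directly on the context.
CGContextSetRGBFillColor(context, 0.2, 0.4, 0.8, 1.0);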

Drawing with Quartz and UIKit

Quartz is the general name for the native drawing technology in iOS. The Core Graphics framework is at the heart of Quartz, and is the primary interface you use for drawing content. This framework provides data types and functions for manipulating the following:

  • Graphics contexts

  • Paths

  • Images and bitmaps

  • Transparency layers

  • Colors, pattern colors, and color spaces

  • Gradients and shadings

  • Fonts

  • PDF content

UIKit builds on the basic features of Quartz by providing a focused set of classes for graphics-related operations. The UIKit graphics classes are not intended as a comprehensive set of drawing tools—Core Graphics already provides that. Instead, they provide drawing support for other UIKit classes. UIKit support includes the following classes and functions:

  • UIImage, which implements an immutable class for displaying images

  • UIColor, which provides basic support for device colors

  • UIFont, which provides font information for classes that need it

  • UIScreen, which provides basic information about the screen

  • UIBezierPath, which enables your app to draw lines, arcs, ovals, and other shapes.

  • Functions for generating a JPEG or PNG representation of a UIImage object

  • Functions for drawing to a bitmap graphics context

  • Functions for generating PDF data by drawing to a PDF graphics context

  • Functions for drawing rectangles and clipping the drawing area

  • Functions for changing and getting the current graphics context

For information about the classes and methods that comprise UIKit, see UIKit Framework Reference. For more information about the opaque types and functions that comprise the Core Graphics framework, see Core Graphics Framework Reference.

Configuring the Graphics Context

Before calling your drawRect: method, the view object creates a graphics context and sets it as the current context. This context exists only for the lifetime of the drawRect: call. You can retrieve a pointer to this graphics context by calling the UIGraphicsGetCurrentContext function. This function returns a reference of type CGContextRef, which you pass to Core Graphics functions to modify the current graphics state. Table 1-1 lists the main functions you use to set different aspects of the graphics state. For a complete list of functions, see CGContext Reference. This table also lists UIKit alternatives where they exist.

Table 1-1  Core Graphics functions for modifying graphics state

| Graphics state | Core Graphics functions | UIKit alternatives |
| --- | --- | --- |
| Current transformation matrix (CTM) | CGContextRotateCTM, CGContextScaleCTM, CGContextTranslateCTM, CGContextConcatCTM | None |
| Clipping area | CGContextClipToRect | UIRectClip function |
| Line: width, join, cap, dash, miter limit | CGContextSetLineWidth, CGContextSetLineJoin, CGContextSetLineCap, CGContextSetLineDash, CGContextSetMiterLimit | None |
| Accuracy of curve estimation | CGContextSetFlatness | None |
| Anti-aliasing setting | CGContextSetAllowsAntialiasing | None |
| Color: fill and stroke settings | CGContextSetRGBFillColor, CGContextSetRGBStrokeColor | UIColor class |
| Alpha global value (transparency) | CGContextSetAlpha | None |
| Rendering intent | CGContextSetRenderingIntent | None |
| Color space: fill and stroke settings | CGContextSetFillColorSpace, CGContextSetStrokeColorSpace | UIColor class |
| Text: font, font size, character spacing, text drawing mode | CGContextSetFont, CGContextSetFontSize, CGContextSetCharacterSpacing | UIFont class |
| Blend mode | CGContextSetBlendMode | The UIImage class and various drawing functions let you specify which blend mode to use. |

The graphics context contains a stack of saved graphics states. When Quartz creates a graphics context, the stack is empty. Using the CGContextSaveGState function pushes a copy of the current graphics state onto the stack. Thereafter, modifications you make to the graphics state affect subsequent drawing operations but do not affect the copy stored on the stack. When you are done making modifications, you can return to the previous graphics state by popping the saved state off the top of the stack using the CGContextRestoreGState function. Pushing and popping graphics states in this manner is a fast way to return to a previous state and eliminates the need to undo each state change individually. It is also the only way to restore some aspects of the state, such as the clipping path, back to their original settings.
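A minimal sketch of that push/pop pattern, assuming some current context (for example, inside drawRect:):

CGContextRef context = UIGraphicsGetCurrentContext();

CGContextSaveGState(context);               // push a copy of the current state

CGContextSetLineWidth(context, 4.0);        // temporary state changes
CGContextSetAlpha(context, 0.5);
CGContextClipToRect(context, CGRectMake(0.0, 0.0, 100.0, 100.0));
// ... draw with the modified state ...

CGContextRestoreGState(context);            // line width, alpha, and the clipping path revert to their saved values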

For general information about graphics contexts and using them to configure the drawing environment, see “Graphics Contexts” in Quartz 2D Programming Guide.

Creating and Drawing Paths

A path is a vector-based shape created from a sequence of lines and Bézier curves. UIKit includes the UIRectFrame and UIRectFill functions (among others) for drawing simple paths such as rectangles in your views. Core Graphics also includes convenience functions for creating simple paths such as rectangles and ellipses.

For more complex paths, you must create the path yourself using the UIBezierPath class of UIKit, or using the functions that operate on the CGPathRef opaque type in the Core Graphics framework. Although you can construct a path without a graphics context using either API, the points in the path still must refer to the current coordinate system (which either has a ULO or LLO orientation), and you still need a graphics context to actually render the path.

When drawing a path, you must have a current context set. This context can be a custom view’s context (in drawRect:), a bitmap context, or a PDF context. The coordinate system determines how the path is rendered. UIBezierPath assumes a ULO coordinate system. Thus, if your view is flipped (to use LLO coordinates), the resulting shape may render differently than intended. For best results, you should always specify points relative to the origin of the current coordinate system of the graphics context used for rendering.

Note: Arcs are an aspect of paths that require additional work even if this “rule” is followed. If you create a path using Core Graphics functions that locate points in a ULO coordinate system, and then render the path in a UIKit view, the direction an arc “points” is different. See “Side Effects of Drawing with Different Coordinate Systems” for more on this subject.

For creating paths in iOS, it is recommended that you use UIBezierPath instead of CGPath functions unless you need some of the capabilities that only Core Graphics provides, such as adding ellipses to paths. For more on creating and rendering paths in UIKit, see “Drawing Shapes Using Bézier Paths.”
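For example, a small UIBezierPath sketch that builds and fills a triangle in ULO coordinates (the points are arbitrary):

UIBezierPath *path = [UIBezierPath bezierPath];
[path moveToPoint:CGPointMake(50.0, 10.0)];
[path addLineToPoint:CGPointMake(90.0, 90.0)];
[path addLineToPoint:CGPointMake(10.0, 90.0)];
[path closePath];

[[UIColor orangeColor] setFill];
[path fill];    // renders into the current (ULO) graphics context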

For information on using UIBezierPath to draw paths, see “Drawing Shapes Using Bézier Paths.” For information on how to draw paths using Core Graphics, including information about how you specify the points for complex path elements, see “Paths” in Quartz 2D Programming Guide. For information on the functions you use to create paths, see CGContext Reference and CGPath Reference.

Creating Patterns, Gradients, and Shadings

The Core Graphics framework includes additional functions for creating patterns, gradients, and shadings. You use these types to create non-monochrome colors and use them to fill the paths you create. Patterns are created from repeating images or content. Gradients and shadings provide different ways to create smooth transitions from one color to another.
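As a sketch, a simple axial (linear) gradient drawn with Core Graphics, assuming a current context is set; the colors and end points are illustrative:

CGContextRef context = UIGraphicsGetCurrentContext();

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSArray *colors = @[(__bridge id)[UIColor whiteColor].CGColor,
                    (__bridge id)[UIColor blueColor].CGColor];
CGGradientRef gradient = CGGradientCreateWithColors(colorSpace, (__bridge CFArrayRef)colors, NULL);

// Paint from the top to the bottom of a 200-point-tall area.
CGContextDrawLinearGradient(context, gradient,
                            CGPointMake(0.0, 0.0),
                            CGPointMake(0.0, 200.0),
                            0);

CGGradientRelease(gradient);
CGColorSpaceRelease(colorSpace);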

The details for creating and using patterns, gradients, and shadings are all covered in Quartz 2D Programming Guide.

Customizing the Coordinate Space

By default, UIKit creates a straightforward current transformation matrix that maps points onto pixels. Although you can do all of your drawing without modifying that matrix, sometimes it can be convenient to do so.

When your view’s drawRect: method is first called, the CTM is configured so that the origin of the coordinate system matches your view’s origin, its positive X axis extends to the right, and its positive Y axis extends down. However, you can change the CTM by adding scaling, rotation, and translation factors to it and thereby change the size, orientation, and position of the default coordinate system relative to the underlying view or window.

Using Coordinate Transforms to Improve Drawing Performance

Modifying the CTM is a standard technique for drawing content in a view because it allows you to reuse paths, which potentially reduces the amount of computation required while drawing. For example, if you want to draw a square starting at the point (20, 20), you could create a path that moves to (20, 20) and then draws the needed set of lines to complete the square. However, if you later decide to move that square to the point (10, 10), you would have to recreate the path with the new starting point. Because creating paths is a relatively expensive operation, it is preferable to create a square whose origin is at (0, 0) and to modify the CTM so that the square is drawn at the desired origin.
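A sketch of that idea, reusing one origin-based rectangle at two positions (the geometry is illustrative):

CGContextRef context = UIGraphicsGetCurrentContext();

// One reusable square whose origin is (0, 0).
CGRect square = CGRectMake(0.0, 0.0, 40.0, 40.0);

// Draw it at (20, 20).
CGContextSaveGState(context);
CGContextTranslateCTM(context, 20.0, 20.0);
CGContextFillRect(context, square);
CGContextRestoreGState(context);

// Draw the same square at (10, 10) without rebuilding any geometry.
CGContextSaveGState(context);
CGContextTranslateCTM(context, 10.0, 10.0);
CGContextFillRect(context, square);
CGContextRestoreGState(context);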

In the Core Graphics framework, there are two ways to modify the CTM. You can modify the CTM directly using the CTM manipulation functions defined in CGContext Reference. You can also create a CGAffineTransform structure, apply any transformations you want, and then concatenate that transform onto the CTM. Using an affine transform lets you group transformations and then apply them to the CTM all at once. You can also evaluate and invert affine transforms and use them to modify point, size, and rectangle values in your code. For more information on using affine transforms, see Quartz 2D Programming Guide and CGAffineTransform Reference.
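For example, grouping a translation and a rotation into a single affine transform and concatenating it onto the CTM in one step (the values are arbitrary):

CGContextRef context = UIGraphicsGetCurrentContext();

CGAffineTransform transform = CGAffineTransformMakeTranslation(50.0, 50.0);
transform = CGAffineTransformRotate(transform, M_PI_4);   // rotate by 45 degrees

CGContextSaveGState(context);
CGContextConcatCTM(context, transform);    // both changes applied to the CTM at once
// ... drawing here is translated and rotated ...
CGContextRestoreGState(context);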

Flipping the Default Coordinate System

Flipping in UIKit drawing modifies the backing CALayer to align a drawing environment that has a LLO coordinate system with the default coordinate system of UIKit. If you only use UIKit methods and functions for drawing, you shouldn’t need to flip the CTM. However, if you mix Core Graphics or Image I/O function calls with UIKit calls, flipping the CTM might be necessary.

Specifically, if you draw an image or PDF document by calling Core Graphics functions directly, the object is rendered upside-down in the view’s context. You must flip the CTM to display the image and pages correctly.

To flip an object drawn to a Core Graphics context so that it appears correctly when displayed in a UIKit view, you must modify the CTM in two steps. You translate the origin to the upper-left corner of the drawing area, and then you apply a scale transform, multiplying the y scale by -1. The code for doing this looks similar to the following:

CGContextSaveGState(graphicsContext);
CGContextTranslateCTM(graphicsContext, 0.0, imageHeight);
CGContextScaleCTM(graphicsContext, 1.0, -1.0);
CGContextDrawImage(graphicsContext, CGRectMake(0, 0, imageWidth, imageHeight), image);
CGContextRestoreGState(graphicsContext);

If you create a UIImage object initialized with a Core Graphics image object, UIKit performs the flip transform for you. Every UIImage object is backed by a CGImageRef opaque type. You can access the Core Graphics object through the CGImage property and do some work with the image. (Core Graphics has image-related facilities not available in UIKit.) When you are finished, you can recreate the UIImage object from the modified CGImageRef object.
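A short sketch of that round trip, using cropping as the Core Graphics-only operation (the helper name and crop rectangle are illustrative):

// Crop a UIImage by working on its backing CGImageRef, then rewrap the result.
// Note: cropRect is specified in the pixel coordinates of the underlying CGImage.
UIImage *CroppedImage(UIImage *sourceImage, CGRect cropRect)
{
    CGImageRef croppedRef = CGImageCreateWithImageInRect(sourceImage.CGImage, cropRect);
    if (croppedRef == NULL) return nil;

    UIImage *cropped = [UIImage imageWithCGImage:croppedRef];
    CGImageRelease(croppedRef);
    return cropped;
}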

Note: You can use the Core Graphics function CGContextDrawImage to draw an image to any rendering destination. This function takes three parameters: a graphics context, a rectangle that defines both the size of the image and its location in a drawing surface such as a view, and the image to draw. When drawing an image with CGContextDrawImage, if you don’t adjust the current coordinate system to a LLO orientation, the image appears inverted in a UIKit view. Additionally, the origin of the rectangle passed into this function is relative to the origin of the coordinate system that is current when the function is called.

Side Effects of Drawing with Different Coordinate Systems

Some rendering oddities are brought to light when you draw an object with reference to the default coordinate system of one drawing technology and then render it in a graphics context of the other. You may want to adjust your code to account for these side effects.

Arcs and Rotations

If you draw a path with functions such as CGContextAddArc and CGPathAddArc and assume a LLO coordinate system, then you need to flip the CTM to render the arc correctly in a UIKit view. However, if you use the same function to create an arc with points located in a ULO coordinate system and then render the path in a UIKit view, you’ll notice that the arc is an altered version of its original. The terminating endpoint of the arc now points in the opposite direction from where it would point had the arc been created using the UIBezierPath class. For example, a downward-pointing arrow now points upward (as shown in Figure 1-5), and the direction in which the arc “bends” is also different. You must change the direction of Core Graphics-drawn arcs to account for the ULO-based coordinate system; this direction is controlled by the startAngle and endAngle parameters of those functions.

Figure 1-5  Arc rendering in Core Graphics versus UIKit

You can observe the same kind of mirroring effect if you rotate an object (for example, by calling CGContextRotateCTM). If you rotate an object using Core Graphics calls that make reference to a ULO coordinate system, the direction of the object when rendered in UIKit is reversed. You must account for the different directions of rotation in your code; with CGContextRotateCTM, you do this by inverting the sign of the angle parameter (so, for example, a negative value becomes a positive value).
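As a sketch of the arc adjustment, the two calls below are intended to produce the same on-screen quarter arc in a standard (ULO) UIKit view; note the opposite values of the clockwise argument:

CGContextRef context = UIGraphicsGetCurrentContext();
CGPoint center = CGPointMake(100.0, 100.0);

// UIKit: clockwise is interpreted in the view's ULO coordinate system.
UIBezierPath *arc = [UIBezierPath bezierPathWithArcCenter:center
                                                   radius:50.0
                                               startAngle:0.0
                                                 endAngle:M_PI_2
                                                clockwise:YES];
[arc stroke];

// Core Graphics: the clockwise flag is defined for a LLO space, so passing 0
// (counterclockwise) yields the same visual direction in the flipped context.
CGContextAddArc(context, center.x, center.y, 50.0, 0.0, M_PI_2, 0);
CGContextStrokePath(context);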

Shadows

The direction a shadow falls from its object is specified by an offset value, and the meaning of that offset is a convention of a drawing framework. In UIKit, positive x and y offsets make a shadow go down and to the right of an object. In Core Graphics, positive x and y offsets make a shadow go up and to the right of an object. Flipping the CTM to align an object with the default coordinate system of UIKit does not affect the object’s shadow, and so a shadow does not correctly track its object. To get it to track correctly, you must modify the offset values appropriately for the current coordinate system.

Note: Prior to iOS 3.2, Core Graphics and UIKit shared the same convention for shadow direction: positive offset values make the shadow go down and to the right of an object.

Applying Core Animation Effects

Core Animation is an Objective-C framework that provides infrastructure for creating fluid, real-time animations quickly and easily. Core Animation is not a drawing technology itself, in the sense that it does not provide primitive routines for creating shapes, images, or other types of content. Instead, it is a technology for manipulating and displaying content that you created using other technologies.

Most apps can benefit from using Core Animation in some form in iOS. Animations provide feedback to the user about what is happening. For example, when the user navigates through the Settings app, screens slide in and out of view based on whether the user is navigating further down the preferences hierarchy or back up to the root node. This kind of feedback is important and provides contextual information for the user. It also enhances the visual style of an app.

In most cases, you may be able to reap the benefits of Core Animation with very little effort. For example, several properties of the UIView class (including the view’s frame, center, color, and opacity—among others) can be configured to trigger animations when their values change. You have to do some work to let UIKit know that you want these animations performed, but the animations themselves are created and run automatically for you. For information about how to trigger the built-in view animations, see “Animating Views” in UIView Class Reference.
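A minimal sketch of triggering one of those built-in animations (myView and the target values are placeholders):

[UIView animateWithDuration:0.3 animations:^{
    // Changing animatable properties inside the block animates them automatically.
    myView.center = CGPointMake(200.0, 300.0);
    myView.alpha  = 0.5;
}];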

When you go beyond the basic animations, you must interact more directly with Core Animation classes and methods. The following sections provide information about Core Animation and show you how to work with its classes and methods to create typical animations in iOS. For additional information about Core Animation and how to use it, see Core Animation Programming Guide.

About Layers

The key technology in Core Animation is the layer object. Layers are lightweight objects that are similar in nature to views, but that are actually model objects that encapsulate geometry, timing, and visual properties for the content you want to display. The content itself is provided in one of three ways:

  • You can assign a CGImageRef to the contents property of the layer object.

  • You can assign a delegate to the layer and let the delegate handle the drawing.

  • You can subclass CALayer and override one of the display methods.

When you manipulate a layer object’s properties, what you are actually manipulating is the model-level data that determines how the associated content should be displayed. The actual rendering of that content is handled separately from your code and is heavily optimized to ensure it is fast. All you must do is set the layer content, configure the animation properties, and then let Core Animation take over.
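For example, a sketch of the first approach, assigning an image’s backing CGImageRef directly to a standalone layer (the asset name, frame, and host view are placeholders):

CALayer *imageLayer = [CALayer layer];                  // requires <QuartzCore/QuartzCore.h>
imageLayer.frame = CGRectMake(0.0, 0.0, 100.0, 100.0);

UIImage *image = [UIImage imageNamed:@"photo"];         // placeholder asset name
imageLayer.contents = (__bridge id)image.CGImage;

[someView.layer addSublayer:imageLayer];                // someView is a placeholder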

For more information about layers and how they are used, see Core Animation Programming Guide.

About Animations

When it comes to animating layers, Core Animation uses separate animation objects to control the timing and behavior of the animation. The CAAnimation class and its subclasses provide different types of animation behaviors that you can use in your code. You can create simple animations that migrate a property from one value to another, or you can create complex keyframe animations that track the animation through the set of values and timing functions you provide.

Core Animation also lets you group multiple animations together into a single unit, called a transaction. The CATransaction object manages the group of animations as a unit. You can also use the methods of this class to set the duration of the animation.
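As a sketch, a single-property animation wrapped in an explicit transaction (imageLayer and the values are placeholders):

[CATransaction begin];
[CATransaction setAnimationDuration:1.0];          // duration for the whole group

CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
fade.fromValue = @1.0;
fade.toValue   = @0.0;
[imageLayer addAnimation:fade forKey:@"fade"];     // imageLayer is a placeholder

// Update the model value too, so the layer stays transparent after the animation ends.
imageLayer.opacity = 0.0;

[CATransaction commit];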

For examples of how to create custom animations, see Animation Types and Timing Programming Guide.

Accounting for Scale Factors in Core Animation Layers

Apps that use Core Animation layers directly to provide content may need to adjust their drawing code to account for scale factors. Normally, when you draw in your view’s drawRect: method, or in the drawLayer:inContext: method of the layer’s delegate, the system automatically adjusts the graphics context to account for scale factors. However, knowing or changing that scale factor might still be necessary when your view does one of the following:

  • Creates additional Core Animation layers with different scale factors and composites them into its own content

  • Sets the contents property of a Core Animation layer directly

Core Animation’s compositing engine looks at the contentsScale property of each layer to determine whether the contents of that layer need to be scaled during compositing. If your app creates layers without an associated view, each new layer object’s scale factor is initially set to 1.0. If you do not change that scale factor, and if you subsequently draw the layer on a high-resolution screen, the layer’s contents are scaled automatically to compensate for the difference in scale factors. If you do not want the contents to be scaled, you can change the layer’s scale factor to 2.0 by setting a new value for the contentsScale property, but if you do so without providing high-resolution content, your existing content may appear smaller than you were expecting. To fix that problem, you need to provide higher-resolution content for your layer.

Important: The contentsGravity property of the layer plays a role in determining whether standard-resolution layer content is scaled on a high-resolution screen. This property is set to the value kCAGravityResize by default, which causes the layer content to be scaled to fit the layer’s bounds. Changing the gravity to a nonresizing option eliminates the automatic scaling that would otherwise occur. In such a situation, you may need to adjust your content or the scale factor accordingly.

Adjusting the content of your layer to accommodate different scale factors is most appropriate when you set the contents property of a layer directly. Quartz images have no notion of scale factors and therefore work directly with pixels. Therefore, before creating the CGImageRef object you plan to use for the layer’s contents, check the scale factor and adjust the size of your image accordingly. Specifically, load an appropriately sized image from your app bundle or use the UIGraphicsBeginImageContextWithOptions function to create an image whose scale factor matches the scale factor of your layer. If you do not create a high-resolution bitmap, the existing bitmap may be scaled as discussed previously.
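A sketch of that approach for a standalone layer, matching both the layer’s scale factor and the bitmap’s scale factor to the screen (the frame and fill are illustrative):

CALayer *layer = [CALayer layer];
layer.frame = CGRectMake(0.0, 0.0, 100.0, 100.0);
layer.contentsScale = [UIScreen mainScreen].scale;     // e.g. 2.0 on a Retina display

// Render content into a bitmap whose scale factor matches the layer's.
UIGraphicsBeginImageContextWithOptions(layer.bounds.size, NO, layer.contentsScale);
[[UIColor redColor] setFill];
UIRectFill(layer.bounds);
UIImage *content = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

layer.contents = (__bridge id)content.CGImage;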

For informa

