  • Apache Drill 1.0 Released

    2015-05-21 01:37:55

    https://img-my.csdn.net/uploads/201505/21/1432166589_4214.jpg

    Although big data advocates often treat relational databases as a target, in real production environments most of the daily work on big data platforms such as Hadoop and Spark still consists of serving SQL queries. SQL on Hadoop has therefore become one of the most fiercely contested technology areas.

    On May 19, the Apache Software Foundation announced Drill 1.0, a schema-free SQL query engine for Hadoop, NoSQL (MongoDB and HBase), and cloud storage (Amazon S3, Google Cloud Storage, Azure Blob Storage, Swift).

    Tomer Shiran, a member of the project's PMC, said:

    This is the result of nearly three years of development by dozens of engineers from many companies. Apache Drill's flexibility and ease of use have already attracted thousands of users, and the enterprise-grade reliability, security, and performance of the 1.0 release will further accelerate adoption.

    Important improvements over 0.9 listed in the release announcement include:

    • Substantial improvements in stability, memory handling and performance
    • Improvements in Drill CLI experience with addition of convenience shortcuts and improved colors/alignment
    • Substantial additions to documentation including coverage of troubleshooting, performance tuning and many additions to the SQL reference
    • Enhancements in join planning to facilitate high speed planning of large and complicated joins
    • Add support for new context functions including CURRENTUSER and CURRENTSCHEMA
    • Ability to treat all numbers as approximate decimals when reading JSON
    • Enhancements in Drill's text and CSV handling to support first row skipping, configurable field/line delimiters and configurable quoting
    • Improved JDBC compatibility (and tracing proxy for easy debugging).
    • Ability to do JDBC connections with direct urls (avoiding ZooKeeper)
    • Automatic selection of spooling or back-pressure exchange semantics to avoid distributed deadlocks in complex sort-heavy queries
    • Improvements in query profile reporting
    • Addition of ILIKE(VARCHAR, PATTERN) and SUBSTR(VARCHAR, REGEX) functions
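    The last item's two new functions can be illustrated outside Drill. The sketch below approximates their semantics in Python (an approximation for illustration, not Drill's implementation: ILIKE is a case-insensitive LIKE, and SUBSTR with a regex presumably returns the first matching substring):

    ```python
    import re

    def ilike(value, pattern):
        """Case-insensitive SQL LIKE: % matches any run of characters,
        _ matches exactly one character."""
        parts = []
        for ch in pattern:
            if ch == "%":
                parts.append(".*")
            elif ch == "_":
                parts.append(".")
            else:
                parts.append(re.escape(ch))
        return re.fullmatch("".join(parts), value, re.IGNORECASE) is not None

    def substr_regex(value, regex):
        """First substring of value matching the regular expression,
        or None when nothing matches."""
        m = re.search(regex, value)
        return m.group(0) if m else None

    print(ilike("Apache Drill", "apache%"))        # True
    print(substr_regex("Drill 1.0", r"\d+\.\d+"))  # 1.0
    ```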

    For more details, see the official website: http://drill.apache.org

    Drill is in fact led by MapR; the project lead and most of the core developers come from MapR. It is one of many SQL-on-Hadoop engines, which also include:

    • Hive, native to Hadoop
    • Stinger, the Hive evolution project led by Hortonworks
    • Impala, led by Cloudera
    • Apache Drill, led by MapR
    • Presto, from Facebook
    • Greenplum, from Pivotal
    • Apache Phoenix, originally developed at Salesforce
    • Apache Tajo, from South Korea (modeled on Google Tenzing)
    • Spark SQL, from the Spark community
    • Splice Machine

    Logically speaking, apart from Spark SQL, which will gain some advantage by riding Spark's momentum, the most noteworthy contenders remain Impala, Stinger, and Drill, each backed by one of the three major Hadoop vendors. Both Impala and Drill were inspired by Google Dremel. Impala previously seemed to have the stronger momentum, but Drill now appears to be catching up.

    With this official release, Drill's FAQ makes a direct comparison:

    https://img-my.csdn.net/uploads/201505/21/1432147140_2909.png

    More Drill resources on CSDN: http://www.csdn.net/tag/drill

    Discussion on Hacker News: https://news.ycombinator.com/item?id=9571780

    Quora: http://www.quora.com/Apache-Drill

  • Portal: Select Application pattern

    http://www.ibm.com/developerworks/patterns/portal/select-application-topology.html

     

    Portal: Select Application pattern

    Overview
    After identifying the Business and Integration patterns that comprise the Portal composite pattern, the next step in planning an e-business application is to choose the Application pattern(s) that apply to the business drivers and objectives. An Application pattern shows the principal layout of the application, focusing on the shape of the application, the application logic, and the associated data.

    The selection of an Application pattern is based on the selected Business patterns, Integration patterns, and Composite patterns. The Application patterns use logical tiers to illustrate various ways to configure the interaction between users, applications and data.

    The building of each solution requires that specific business drivers be satisfied. There is a high probability that each solution will share characteristics with many of the Application patterns mentioned; however, the Application patterns available range from simple to more complex. Therefore, choose the simplest Application pattern that satisfies the requirements of your business objectives.

    Each Application pattern has associated Business and IT drivers. The architect should review each of the Business and IT drivers with the associated Application pattern to determine the best fit for the requirements.

    Recall that the Business and Integration patterns identified earlier as the mandatory building blocks of the Portal composite pattern were:

    • Access Integration
    • Self-Service
    • Collaboration
    • Information Aggregation
    • Application Integration


    The figure below shows the specific Application patterns that can be used to enable various types of functionality found in each Business and Integration pattern. Note that some of these Application patterns are mandatory (blue type) for a properly functioning Portal, and some are optional (red type).

    Portal application patterns
    This figure differs slightly from the original redbook (SG24-6087) because the Dec 2004 re-engineering of the Information Aggregation and Data Integration patterns changed some of the Application pattern names.

    Application patterns
    Though a complete Portal solution requires multiple Application patterns, each Application pattern can be analyzed individually in terms of the functionality it brings to the overall solution, and the business and IT drivers it satisfies. Review the following Application patterns and select those for which you would like to see additional Runtime information.

    Access Integration::Web Single Sign-On application pattern

    The Web Single Sign-On application pattern (as part of the Access Integration pattern) provides a framework for seamless application access through unified authentication services.

    Business and IT Drivers

    • Provide single sign on across multiple applications
    • Reduce Total Cost of Ownership (TCO)
    • Reduce user administration cost


    The primary business driver for choosing this Application pattern is to provide seamless access to multiple applications with a single sign-on while continuing to protect the security of enterprise information and applications.

    Simplification and increased efficiency of user profile management is the main IT driver for Single Sign-On.

    Benefits

    • Users can access their application portfolio easily and securely
    • User profile information is centralized in a common directory, simplifying profile management and reducing costs
    • Application development cost is reduced by providing a standard security solution

    Limitations
    Many existing applications are not capable of accepting a standard set of user credentials as a substitute for local authentication. Integration with such systems can be difficult or even impossible.

    The Portal composite pattern
    A fundamental characteristic of a portal implementation is that of information aggregation. In order to enhance this experience for the user, a Single Sign-On (SSO) solution makes sense. This allows the user to more quickly access the information and avoid worrying about which application they are accessing. It also allows for easier maintainability by the organization sponsoring the portal.

    Single Sign-On functionality requires more than just making sure that the applications that already exist in an enterprise support a central authentication capability. The existing processes must be changed to accommodate this new method of validating a user’s access. An analysis of the existing profiling mechanisms and overall security policies in an organization is the starting point for this type of effort.

    For more information, please see the IBM Redpaper A Secure Portal Extended With Single Sign-On, REDP-3743.
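    The central idea of Web Single Sign-On — one authentication service issues a credential that every participating application can verify without running its own login — can be sketched with an HMAC-signed token. This is a minimal illustration using only the Python standard library; the shared key and token format are hypothetical, and real SSO products use richer protocols (e.g. LTPA tokens in the IBM stack):

    ```python
    import hashlib
    import hmac

    SHARED_KEY = b"portal-sso-demo-key"   # hypothetical shared secret

    def issue_token(user):
        """The central authentication service signs the user identity once."""
        sig = hmac.new(SHARED_KEY, user.encode(), hashlib.sha256).hexdigest()
        return f"{user}:{sig}"

    def verify_token(token):
        """Any participating application verifies the credential
        without re-authenticating the user."""
        user, _, sig = token.partition(":")
        expected = hmac.new(SHARED_KEY, user.encode(), hashlib.sha256).hexdigest()
        return user if hmac.compare_digest(sig, expected) else None

    token = issue_token("alice")          # sign on once...
    print(verify_token(token))            # ...every application accepts it
    print(verify_token("alice:forged"))   # tampered credential is rejected
    ```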

    Access Integration::Pervasive Device Access application pattern

    The Access Integration pattern is used to provide consistent access to various applications using multiple device types. In order to provide pervasive device access to an existing Business pattern, the Pervasive Device Access application pattern adds a new tier to the architecture. This tier is responsible for the pervasive extensions to the original application. The function of this tier is to convert the HTML issued by the application presentation logic into a format appropriate for the pervasive device. In this way, the Pervasive Device Access application pattern provides a structure for extending the reach of individual applications from browsers and fat clients to pervasive devices such as PDAs and mobile phones.

    Business and IT Drivers

    • Provide universal access to information and services
    • Time to Market
    • Reduce Total Cost of Ownership (TCO)


    Striving to provide universal access to information and applications is often the primary business driver for choosing this Application pattern. The primary IT driver for choosing this Application pattern is to quickly extend the reach of applications to new device types without having to modify every individual application to enable its use by additional device types.

    The Portal composite pattern
    The Portal composite pattern supports the use of pervasive device access. In fact, "any type of device" access is supported through the use of templates in the pervasive access device tier. At this tier, the session data containing the type of device is known and the properly formatted content can be delivered. This formatted content can be transcoded in content management or datasource nodes, or it can be transcoded "dynamically" when requested by a specific type of client. This will depend on the frequency of updates to the data.

    It is architecturally sound to separate the storage, management, and transcoding of content from the presentation of that content. In practice, this means letting the content management system do the transcoding, reducing the Web server tier's job to serving the already-formatted content.
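    The device-access tier described above — render the same content through a template chosen by device type — can be sketched as template selection keyed by the device recorded in the session (device names and templates are hypothetical):

    ```python
    # Hypothetical templates keyed by device type; the access tier picks
    # one based on the device recorded in the session data.
    TEMPLATES = {
        "browser": "<html><body><h1>{title}</h1><p>{body}</p></body></html>",
        "wap": '<wml><card title="{title}"><p>{body}</p></card></wml>',
        "sms": "{title}: {body}",
    }

    def render_for_device(device_type, content):
        """Transcode one piece of content for the requesting device,
        falling back to full browser markup for unknown devices."""
        template = TEMPLATES.get(device_type, TEMPLATES["browser"])
        return template.format(**content)

    news = {"title": "Drill 1.0", "body": "Released May 19"}
    print(render_for_device("sms", news))   # Drill 1.0: Released May 19
    ```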

    Access Integration::Personalized Delivery application pattern

    The Access Integration::Personalized Delivery application pattern provides a framework for giving access to applications and information tailored to the interests and roles of a specific user or group. This Application pattern extends basic user management by collecting rich profile data that can be kept current up to the user’s current session. Data collected can be related to application, business, personal, interaction, or access device-specific preferences.

    Business and IT Drivers
    The primary business driver for choosing this Application pattern is to increase usability and improve the efficiency of Web applications by tailoring their presentation to the user’s role, interests, habits and/or preferences.

    Benefits

    • Users’ interaction with the site benefits from an increased perception of control and efficiency
    • The enterprise gains fine-grained control over users’ access to applications, according to role and preferences
    • Adapting the complexity and detail of content to a user’s skill level improves user effectiveness


    Limitations
    Personalized Delivery can be very complex and expensive to fully implement.

    The Portal composite pattern
    This Application pattern supports the separation of the business logic, business rules, and presentation. Each one of these has part of the responsibility for providing the personalized experience to the user of the portal. The application server handles business logic that implements the business rules meta-data contained in the personalization server node. Once presentation of the personalized data is required, the presentation server node will access the correctly formatted and/or aggregated data for display to the portal user.
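    The rule-driven selection described above — business rules decide what a user sees, and the presentation tier then formats it — can be sketched as a table of rules applied to a user profile (the rules, roles, and content ids are hypothetical):

    ```python
    # Hypothetical personalization rules: (predicate on the profile,
    # content id to show when the predicate holds).
    RULES = [
        (lambda p: p["role"] == "manager", "sales-dashboard"),
        (lambda p: "golf" in p["interests"], "golf-news"),
        (lambda p: True, "company-news"),   # default content for everyone
    ]

    def personalize(profile):
        """Return the content ids this user's portal page should display."""
        return [content for rule, content in RULES if rule(profile)]

    print(personalize({"role": "manager", "interests": ["golf"]}))
    # ['sales-dashboard', 'golf-news', 'company-news']
    ```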

    Self-Service::Directly Integrated Single Channel application pattern

    The Directly Integrated Single Channel application pattern (from the Self-Service business pattern) provides a structure for applications that need one or more point-to-point connections with back-end applications but only need to focus on one delivery channel. This Application pattern can also be used to implement any one of the delivery channels.

    Business and IT Drivers

    • Improve the organizational efficiency
    • Reduce the latency of business events
    • Leverage existing skills
    • Leverage legacy investment
    • Back end application integration


    The primary business driver for choosing this Application pattern is to reduce the latency of business events by providing real-time access to back-end applications and data from Web applications.

    The IT Driver for choosing this Application pattern is to leverage legacy investments and existing skills.

    The Portal composite pattern
    The Portal composite pattern is involved in the direct connection between the portal user and a back-end application (e.g. Lotus Sametime or CICS based application). Once the portal user is authenticated via the directory and security services node and the session level security in the application server node, the WPS Portlet API will pass authentication credential information to the back-end application or datasource. Once complete, the user will now have a direct connection to that application and the portal system will not generally broker the communication. This works in most implementations.

    As shown in the figure below, the Directly Integrated Single Channel application pattern can be used in a variety of ways. For example, the portal application can be more than just green-screen scraping with a Host On-Demand/Host Publisher portlet; it can also be a collaboration portlet using collaboration services, or a roll-your-own portlet that consumes Web services.

    Directly Integrated Single Channel application pattern

    Collaboration::Store and Retrieve application pattern

    The Store and Retrieve application pattern (as part of the Collaboration business pattern) allows users to collaborate with others on the network interactively. Unlike the Point-to-Point application pattern, this pattern does not require both partners to be online at the same time. It also does not require the client to know the physical or direct address of other users of the solution.

    A common implementation of this pattern is content management. Content management allows two or more users to interact on a single piece of "content" (e.g. images, text, or other data) via the content management mechanism.

    Business and IT Drivers

    • Time to market
    • Improve the organizational efficiency
    • Reduce the latency of business events
    • Easy to adapt during mergers and acquisitions
    • Require deferred collaboration
    • Many users
    • Leverage existing skills
    • Network addressing independence
    • Managed service
    • Maintainability


    Guidelines for use
    This Application pattern should be used when:

    • The physical or direct addresses of other clients on the network are not known
    • Both synchronous and asynchronous communication must be supported; this pattern can cover a wide range of solutions, from bulletin boards and workrooms to interactive chat rooms
    • A server can be set up that will allow multiple clients to log in and share information with other users by posting messages on (or sending e-mail to) the server for later retrieval


    Benefits

    • This Application pattern is simple to implement
    • Since this Application pattern does not require that a client know the direct address of the destination, it is ideal for solutions where the network addresses are not published or where these addresses change frequently
    • Most of the functions of this pattern can be implemented using commercially available collaboration solutions
    • This pattern requires very minimal custom code and is cost effective to maintain


    Limitations

    • This pattern calls for the implementation of server software and associated hardware to support new users, which adds to the overall complexity of the solution
    • The nature and type of collaboration supported by this pattern are simple; for more complex communications, later Application patterns are more appropriate


    The Portal composite pattern
    The Portal composite pattern supports this through the use of the content management and collaboration nodes. Content Management can provide asynchronous collaboration on content assets or documents and the collaboration can be in the form of threaded discussion forums or teamrooms where information is shared in a common space.
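    The store-and-retrieve interaction itself — clients post to a shared server and other clients fetch later, with no direct addressing and no need to be online at the same time — can be sketched as follows (an illustration of the pattern, not any product's API):

    ```python
    class MessageBoard:
        """A minimal store-and-retrieve server: clients post by topic and
        retrieve later; no client ever needs another client's address."""

        def __init__(self):
            self.topics = {}

        def post(self, topic, author, text):
            """Store a message on the server for later retrieval."""
            self.topics.setdefault(topic, []).append((author, text))

        def retrieve(self, topic):
            """Fetch everything posted to a topic, whenever convenient."""
            return self.topics.get(topic, [])

    board = MessageBoard()
    board.post("portal-design", "ada", "Draft ready for review")
    # Another user comes online later and retrieves the message:
    print(board.retrieve("portal-design"))
    ```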

    Collaboration::Directed Collaboration application pattern

    The Collaboration::Directed Collaboration application pattern allows users to collaborate with others on the network interactively. This Application pattern requires the two interacting users to be online simultaneously. It also requires users to register with a server. In this pattern all of the users are peers and there are no client-server or master-slave relationships between the tiers in the pattern.

    Business and IT Drivers

    • Time to market
    • Improve organizational efficiency
    • Reduce the latency of business events
    • Easy to adapt during mergers and acquisitions
    • Require instantaneous collaboration
    • Many users
    • Leverage existing skills
    • Network addressing independence
    • Managed service
    • Maintainability
    • Complex data types
    • Significant network bandwidth


    This approach can be used to quickly establish collaboration between users of a solution without having to go through the process of developing a lot of custom code. It allows users to simultaneously and interactively modify shared applications and data.

    This pattern requires all the users to register with the server. The user’s profile, preferences and security privileges are stored on a server directory. This means that the client does not need to know the physical or direct address of other clients. It also allows us to implement different security levels, and implement more complex collaboration styles that include sharing applications and complex data types.

    This is the ideal Application pattern to choose if the current focus is to establish synchronous sophisticated collaboration functions within a solution. This solution is also applicable when the clients have permanent and preferably high-speed network connections. The solution is also cost-effective to develop because many of these functions are available in off-the-shelf products.

    This pattern is not a good fit for solutions where there are limitations on the processing power of the clients.

    The Portal composite pattern
    Collaboration in the case of the Portal composite pattern is usually enabled through this type of collaboration. It is generally in the form of instant messaging because communication is essentially a brokered real-time interaction.

    Information Aggregation::User Information Access application pattern

    Business and IT Drivers

    • Require specialized derived data (e.g. subset, point-in-time, correlated data, targeted to a user group, etc.)
    • Distill meaningful information from a vast amount of structured and unstructured data
    • Require read-only access to derived or aggregated data, allowing data manipulation under user control
    • Require the option to drill through to source data
    • Require reliable, extended availability of the data
    • Optimized for future access performance
    • Require protection of operational system performance


    Benefits
    The use of read-only data provides for maximum consistency in a multi-user analysis or reporting environment.

    Limitations
    As mentioned, the vast majority of access to data in the UIA pattern is read-only. However, this is really a convention, since UIA products and the data access methods they use are fully open to read/write access as well. As shown in the figure above, read/write access, when allowed, should be against data sources that are not owned or managed by applications. This reduces the risk to data integrity somewhat, but does not eliminate it entirely, depending on how the data source is maintained.

    The Portal composite pattern
    The Portal composite pattern supports this through the use of the Search & Indexing node. For more information on this application pattern, please see the IBM Redbook Patterns: Information Aggregation and Data Integration with DB2 Information Integrator, SG24-7101.

    Application Integration::Population application pattern

    The Application Integration::Population application pattern structures the population of a data-store with data that requires minimal transformation and restructuring. The Population application pattern is a preparatory step and is not documented to the Runtime pattern level for the Portal composite pattern.

    Business and IT Drivers

    • Improve organizational efficiency
    • Reduce the latency of business events
    • Distill meaningful information from a vast amount of structured data
    • Minimize total cost of ownership (TCO)
    • Promote consistency of Operational Data
    • Maintainability


    The primary business driver for choosing Population is to copy data from the source data store to a target data store with minimal transformation. The main reason for creating a copy of the data is to avoid manipulating the primary source of a company’s operational data often maintained by Operational Systems.

    The Portal composite pattern
    The Portal composite pattern supports this no-transformation population through a centralized database server. However, in many installations, data will be transformed before reaching its final destination (e.g. database or file system for serving to the web).

    For more information on this application pattern, please see the IBM Redbook Patterns: Information Aggregation and Data Integration with DB2 Information Integrator, SG24-7101.
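    The no-transformation copy at the heart of the Population pattern — read from the operational source, write to the target, and leave the operational data untouched — can be sketched as follows (field names are hypothetical):

    ```python
    def populate(source_rows, transform=None):
        """Copy rows into a target store with minimal (or no)
        transformation, never mutating the operational source."""
        transform = transform or (lambda row: dict(row))
        return [transform(row) for row in source_rows]

    operational = [{"id": 1, "qty": 3}, {"id": 2, "qty": 5}]

    # The Population pattern's default case: a plain copy.
    target = populate(operational)

    # A minimal transformation applied on the way in, if one is needed.
    renamed = populate(operational, lambda r: {"id": r["id"], "quantity": r["qty"]})
    ```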

    Application Integration::Population: Multi Step application pattern

    The Application Integration::Population: Multi Step application pattern structures the population of a data-store with structured data that requires extensive reconciliation, transformation, and restructuring.

    Business and IT Drivers

    • Improve organizational efficiency
    • Reduce the latency of business events
    • Distill meaningful information from vast amounts of structured data
    • Extensive reconciliation, transformation, and restructuring of structured data
    • Minimize total cost of ownership (TCO)
    • Promote consistency of Operational Data
    • Maintainability


    The primary business driver for choosing Population: Multi Step is to reconcile data from multiple data sources and to transform and restructure it extensively to enable efficient access to information.

    This Application pattern is best suited for the aggregation and distillation of meaningful information from structured data.

    Benefits
    This is the ideal architecture when the complex transformation of structured data between the source and target data store is required.

    Limitations
    Reconciling data from multiple sources is often a complex undertaking and requires a considerable amount of effort, time, and resources. This is especially true when different systems use different semantics.

    The Portal composite pattern
    The Portal composite pattern supports the iterative transformation of data. This data transformation can take place in the datasource tier, in the content management node, in the presentation server node, or at the application server node (although this is not a suggested route). This pattern supports all of these options.

    Application Integration::Population: Multi Step Gather application pattern

    The Population: Multi Step Gather application pattern provides a structure for applications that retrieve and parse documents and create an index of the relevant documents that match specified selection criteria. This design is actually a specific instance of the Population application pattern described above. In practice, this design may also extend the Population: Multi Step application pattern, when transformation of data is required. In either case, the crawling and discovery mechanisms of this design aggregate a set of unstructured data. This pattern is also useful for solutions where there is a need to discover content expertise within the organization.

    The Population: Multi Step Gather application pattern is a preparatory step and is not documented to the Runtime pattern level for the Portal composite pattern.

    Business and IT Drivers

    • Improve organizational efficiency
    • Reduce the latency of business events
    • Provide easier access to vast amounts of unstructured data through indexing and categorization
    • Reduce information overload
    • Identify the experts for collaboration to improve decision cycle times
    • Reduce knowledge loss from personnel turnover
    • Help new employees to reduce the learning curve
    • Minimize total cost of ownership
    • Promote consistency of Operational Data
    • Maintainability


    The primary business driver for choosing Population: Multi Step Gather is to select relevant documents from a vast set of documents based on specified selection criteria. The objective is to provide quick access to useful information instead of bombarding the user with too much information.

    Search engines that crawl the World Wide Web implement this Application pattern. It is best suited for selecting useful information from a huge collection of unstructured text data. A variation of this Application pattern can be used for working with other forms of unstructured data such as images, audio, and video files.

    The Portal composite pattern
    In any portal implementation, the ability to locate data and information as it is updated in the system is vital. The whole value proposition depends, in part, on a portal user’s ability to locate the information they need. The Portal composite pattern supports this through the Search and Indexing node. This represents both the ability to "free-text" search or navigate the content (but only that content that should be available to the user) and to index the content as it is updated.
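    The gather step — crawl a set of documents, keep those matching the selection criteria, and build an index for quick lookup — can be sketched as follows (an illustration of the idea, not a search product):

    ```python
    def gather_index(documents, criteria):
        """Index only the documents that match the selection criteria,
        mapping each term to the list of matching document ids."""
        index = {}
        for doc_id, text in documents.items():
            if not criteria(text):
                continue
            for term in set(text.lower().split()):
                index.setdefault(term, []).append(doc_id)
        return index

    docs = {
        "d1": "portal single sign on",
        "d2": "quarterly cafeteria menu",
        "d3": "portal search indexing",
    }
    # Selection criterion: only documents mentioning the portal.
    index = gather_index(docs, lambda text: "portal" in text)
    print(sorted(index["portal"]))   # ['d1', 'd3']
    ```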

     

  • A simple anti-Design Pattern example

    1. A simple anti-Design Pattern example

      The database contains a table with the following contents:

    ProductId   ProductName   RRP        SellingPrice
    1           Drill         109.9900   99.9900
    2           Hammer        0.9900     7.9900
    3           Shovel        9.9900     9.9900

      The page should display the product name, the recommended retail price (RRP), and the selling price; in addition, when the selling price is lower than the RRP, it should show how much money the selling price saves over the RRP, and the percentage saved.

      Above the table there should also be a discount control with two options: no discount, and a 5% discount. After a discount is applied, the discounted price is shown as the SellingPrice, and the savings relative to the RRP must be recalculated for the discounted SellingPrice.

      Therefore, to display new columns that do not exist in the database table, the GridView dropped onto the page is modified as follows:

        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False"
            DataSourceID="SqlDataSource1"
            DataKeyNames="ProductId"
            EmptyDataText="There are no data records to display."
            OnRowDataBound="GridView1_RowDataBound">
            <Columns>
                <asp:BoundField DataField="ProductId" HeaderText="ProductId"
                    ReadOnly="True" SortExpression="ProductId" />
                <asp:BoundField DataField="ProductName" HeaderText="ProductName"
                    SortExpression="ProductName" />
                <asp:BoundField DataField="RRP" HeaderText="RRP" SortExpression="RRP"
                    DataFormatString="{0:C}" />
                <asp:TemplateField HeaderText="SellingPrice" SortExpression="SellingPrice">
                    <ItemTemplate>
                        <asp:Label ID="lblSellingPrice" runat="server" Text='<%# Bind("SellingPrice") %>'></asp:Label>
                    </ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Discount">
                    <ItemTemplate>
                        <asp:Label runat="server" ID="lblDiscount"></asp:Label>
                    </ItemTemplate>
                </asp:TemplateField>
                <asp:TemplateField HeaderText="Savings">
                    <ItemTemplate>
                        <asp:Label runat="server" ID="lblSavings"></asp:Label>
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>

    2. What do BoundField and TemplateField do?

      A BoundField displays a column of the grid by binding it directly to the corresponding field of the database row. If a column is bound directly through a BoundField, the code-behind has no way to modify what is shown in it. To let the code-behind modify the displayed data, each cell must become a container for one or more controls, and that is exactly the role a TemplateField plays.

      Notice that in the SellingPrice column, Text='<%# Bind("SellingPrice") %>'. It is written this way because the <TemplateField> tag cannot bind to the database by itself, so a Bind expression must be placed inside the control to tie the Label's text to the SellingPrice field. In fact the binding is not even required here, because the code-behind replaces the Label's text automatically.

    3. How should OnRowDataBound be understood?

      OnRowDataBound is an event fired each time the GridView finishes binding one row of data from the database table. The code-behind is:

      

            protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
            {          
                if (e.Row.RowType == DataControlRowType.DataRow)
                {
                    decimal RRP = decimal.Parse(((System.Data.DataRowView)e.Row.DataItem)["RRP"].ToString());
                    decimal SellingPrice = decimal.Parse(((System.Data.DataRowView)e.Row.DataItem)["SellingPrice"].ToString());

                    Label lblSellingPrice = (Label)e.Row.FindControl("lblSellingPrice");
                    Label lblSavings = (Label)e.Row.FindControl("lblSavings");
                    Label lblDiscount = (Label)e.Row.FindControl("lblDiscount");

                    lblSavings.Text = DisplaySavings(RRP, ApplyExtraDiscountsTo(SellingPrice));
                    lblDiscount.Text = DisplayDiscount(RRP, ApplyExtraDiscountsTo(SellingPrice));
                    lblSellingPrice.Text =  String.Format("{0:C}", ApplyExtraDiscountsTo(SellingPrice));
                }
            }

      As you can see, the parameter e carries the values of every field in the bound data row. You simply look the values up by name, process them as needed, and assign the results to the Labels for display. This completes the customization of the GridView.
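      The code-behind above calls three helpers (ApplyExtraDiscountsTo, DisplaySavings, DisplayDiscount) that the excerpt does not show. A sketch of what they presumably compute, written in Python for brevity (the function names mirror the C# helpers; the exact formatting rules are assumptions):

    ```python
    # A sketch of the per-row calculation, assuming the C# helpers apply
    # an optional extra discount and derive savings versus the RRP.

    def apply_extra_discounts_to(selling_price, discount_rate=1.0):
        """discount_rate=0.95 models the '5% off' option."""
        return round(selling_price * discount_rate, 2)

    def display_savings(rrp, selling_price):
        """Absolute savings versus the RRP, shown only when cheaper."""
        savings = rrp - selling_price
        return f"{savings:.2f}" if savings > 0 else ""

    def display_discount(rrp, selling_price):
        """Percentage saved versus the RRP, shown only when cheaper."""
        if rrp <= 0 or selling_price >= rrp:
            return ""
        return f"{(rrp - selling_price) / rrp:.0%}"

    price = apply_extra_discounts_to(99.99, discount_rate=0.95)  # 5% off
    print(price)                           # 94.99
    print(display_savings(109.99, price))  # 15.00
    print(display_discount(109.99, price)) # 14%
    ```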

    Reposted from: https://www.cnblogs.com/charrli/archive/2011/01/23/1942579.html

  • MongoDB data modeling course notes (M320)

    These are notes on data modeling; the course introduction can be found here.

    Chapter 1: Introduction to Data Modeling

    Prerequisite knowledge

    MongoDB Concepts and Vocabulary

    Database and Collection in MongoDB
    Performing joins with $lookup

    Relational Database Concepts and Vocabulary

    Table (Wikipedia Definition)
    Table (Textbook Definition)
    Entity Relationship Model (Wikipedia Definition)
    The Entity Relationship Data Model
    Crow’s Foot Notation and ERD
    Crow’s Foot Notation Definition

    General Database Concepts and Definitions

    Database (Wikipedia Definition)
    Schema (Wikipedia Definition)
    Schema Short Definition
    Database Transactions (Wikipedia Definition)
    Database Transactions Short Description
    Throughput vs Latency
    NoSQL Databases

    MongoDB Compass and Atlas

    Download Compass Here
    More About Atlas Here

    Data Modeling in MongoDB

    The first misconception about MongoDB is that it is schemaless. MongoDB does have schemas; they are just more flexible and adapt better as the application changes. With such a flexible data model, the design can still be captured with UML or ERDs (entity-relationship diagrams).

    A good design needs to consider:

    • usage patterns
    • how the data is accessed
    • which queries matter to the application
    • the read/write ratio

    The second misconception is storing all the information in a single document.

    The third misconception is that joins are impossible; $lookup can perform a join.

    All in all, designing a good schema is not easy.
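Since $lookup behaves like a left outer join, here is a rough Python sketch of its semantics (the collections, documents and field names below are made up for illustration; this is not MongoDB's implementation):

```python
# Simulate {$lookup: {from, localField, foreignField, as}} on plain dicts:
# for each local document, attach the matching foreign documents as an array.

def lookup(local_docs, foreign_docs, local_field, foreign_field, as_field):
    # index the foreign collection by the join key, like a hash join
    index = {}
    for f in foreign_docs:
        index.setdefault(f.get(foreign_field), []).append(f)
    # attach the matches (possibly an empty array) to each local document
    return [{**d, as_field: index.get(d.get(local_field), [])} for d in local_docs]

orders = [{"_id": 1, "cust": 42}, {"_id": 2, "cust": 99}]
customers = [{"_id": 42, "name": "Ada"}]
joined = lookup(orders, customers, "cust", "_id", "customer_docs")
# order 1 gets Ada attached; order 2 gets an empty array, as in a left outer join
```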

    Data Modeling in MongoDB

    A database consists of collections (similar to tables); a collection consists of documents (similar to rows). A document is JSON, made up of fields and values (similar to columns); a value can be a scalar, an array, or an embedded document. Internally, JSON is stored as BSON.

    Document Structure in MongoDB

    Supported Datatypes in MongoDB

    Constraints in Data Modeling

    The constraints on an application include:

    • Hardware: RAM, storage performance and cost
    • Data: size, security, data sovereignty
    • Application: network latency
    • Database: atomicity of updates, the 16 MB document size limit

    Working set: the data the application touches in typical operations, such as frequently accessed data and indexes.

    Best practices:

    • Keep frequently used data in RAM
    • Keep indexes in RAM
    • Prefer SSDs over spinning disks
    • Infrequently used data can live on disk

    When the constraints change, the application design must change accordingly.

    The Data Modeling Methodology

    The methodology has three phases:

    1. Describe the workload: the data size and the important read and write operations
    2. Describe the relationships between entities: separate documents or embedded
    3. Apply patterns

    The course diagram omits phase 3, which is exactly what this course covers.

    Model for Simplicity or Performance

    You need to strike a balance between simplicity and performance.

    Identifying the Workload

    This lecture analyzes an IoT scenario; the analysis process and results are in M320-workload-IOT.xlsx.

    A couple of small takeaways:

    • Each read or write operation can use its own Write Concern and Read Concern
    • A hidden secondary can be dedicated to reporting and analytics

    Chapter 2: Relationships

    Introduction to Relationships

    NoSQL databases are relational in nature too: they also have entities and relationships.

    In MongoDB you must decide between embedding and referencing. In a relational database, both are handled through joins.

    Relationship Types and Cardinality

    The common relationships are 1-1, 1-N and N-N. For example, customer ID to customer name is one-to-one, customer to invoice is one-to-many, and invoice to product is many-to-many.

    For a 1-N relationship, the cardinality of N also matters: the number of an ordinary person's children and the number of a celebrity's fans differ enormously. A [min, likely, max] triple describes it more precisely.

    1-to-zillions is common in big data scenarios.

    1-to-Many Relationship

    The credit cards a person holds and the blog posts a person writes are both 1-N relationships.

    To implement a 1-N relationship with embedding, embed on the side that is queried most often; with references, the reference usually goes on the N side. Embedding is simpler than referencing, and if the related information is small, embedding is worth considering, since a single query then returns everything.

    If the most common query does not always need the related information, consider references.

    For example:

    • Embed on the 1 side: documents and reviewers; since there are few reviewers, embed them in the document.
    • Embed on the N side: orders and shipping addresses; embed the shipping address in the order, because the N side is queried more often. Compared with the relational model the address is duplicated, but duplication has benefits too.
    • Reference on the 1 side: zip codes and stores; the zip code document holds the IDs of all its stores.
    • Reference on the N side: the same example, but each store holds its zip code, which seems more reasonable.

    Many-to-Many Relationship

    For example, stores and items form an N-N relationship that is easily mistaken for 1-N.

    Movies and actors are also N-N. An N-N relationship must be looked at from both ends at once; it is really two 1-N relationships.

    People and phone numbers are N-N as well, since several people may share a home phone. But the relationship can be turned into 1-N by copying the home number into each person's phone list; then when the family moves, every person's home number must be updated. In cases like this, some duplication may be the better choice.

    An N-N relationship can be implemented with embedding or with references.

    For example, embed the less-queried N side into the other N side (that is, into the side queried more often). The less-queried N side keeps its own collection; there is duplication, but this guarantees that if the embedded copy is deleted, the source data still exists. The embedded copy need not be complete, either.

    The embedded information should ideally change rarely.

    Alternatively, add a reference on the main side, usually as an array. The reference can also go on the other side; it depends on what you query.

    The advantage of references over embedding is that they avoid duplication.

    One-to-One Relationship

    A 1-1 relationship can be implemented by embedding, either as fields at the same level or as fields in a subdocument. The benefit is simplicity and readability.

    Implementing it with references effectively splits the information behind one ID in two, with the halves linked by that ID, usually to optimize access.

    One-to-Zillions Relationship

    This is a variant of the 1-N relationship where N is extremely large.

    There is only one way to implement it: put the reference on the zillions side.


    Chapter 3: Patterns (Part 1)

    A pattern is not a complete solution but a reusable piece of one, rather like a single move in martial arts.

    The course recommends the Gang of Four book Design Patterns: Elements of Reusable Object-Oriented Software.

    Guide to Homework Validation

    This lecture introduces the homework-validation tool used in the course; you need to download an executable:

    $ ./validate_m320 --version
    validate_m320 version 02.01
    
    # when the answer is incorrect
    
    $ cat answer_schema.json
    {
      "_id": "<objectId>",
      "title": "<string>",
      "artist": "<string>",
      "room": "<string>",
      "spot": "<string>",
      "on_display": "<bool>",
      "in_house": "<int>",
      "events": [{
        "k": "<string>",
        "v": "<date>"
      }]
    }
    
    $ ./validate_m320 example --file answer_schema.json
    
    The solution is incorrect, use --verbose if you prefer getting some hints
    
    $ ./validate_m320 example --file answer_schema.json --verbose
    Answer Filename: /home/vagrant/M320/answer_schema.json
    
    Errors:
    in_house: in_house must be one of the following: <bool>
    
    # when the answer is correct
    $ cat answer_schema.json
    {
      "_id": "<objectId>",
      "title": "<string>",
      "artist": "<string>",
      "room": "<string>",
      "spot": "<string>",
      "on_display": "<bool>",
      "in_house": "<bool>",
      "events": [{
        "k": "<string>",
        "v": "<date>"
      }]
    }
    
    $ ./validate_m320 example --file answer_schema.json --verbose
    Answer Filename: /home/vagrant/M320/answer_schema.json
    The document passes validation
    
    Congratulations - here is your validation code: 5d124f9bd971a774b97b5fc7
    

    Note that this is not a JSON validator. JSON syntax is described here.

    Handling Duplication, Staleness and Integrity

    Handling Duplication

    Duplication arises from embedding information to provide fast access; its main problem is consistency.

    Sometimes duplication is the better option. For example, embedding the customer information or shipping address in an order: those values are fixed when the order is placed, and even if the customer's address changes later, the order keeps the address it was shipped to, so embedding them together is reasonable.
    The same goes for the many-to-many relationship between movies and actors: once a movie is released, that information no longer changes, so using embedding rather than references in both the movies and actors collections makes more sense, and a little duplication has very little impact.

    In the cases above, the duplicated information embedded in a document never changes as far as that document is concerned, and later changes do not affect documents already written.

    Another kind of duplication comes from precomputation, e.g. a movie's total box office; the application is responsible for updating it, though the update frequency is the topic of the next section.

    Because of the duplication, if the duplicated information ever does need to change, a bulk update is required.

    Handling Staleness

    Staleness arises when data changes too fast. Always seeing the latest data cannot be guaranteed; what matters is how much staleness the user can tolerate. A data warehouse, for instance, always reflects some point in the past.

    To keep data reasonably fresh, you can generally replicate changes in real time or apply them in batches, the so-called trickle approach.

    Handling Referential Integrity

    Referential integrity links two documents. MongoDB does not support cascading deletes (since it has no foreign-key constraints).

    Consistency problems can be handled with multi-document transactions or change streams, or simply by keeping everything in one document.

    Attribute Pattern

    Let us look at two examples first.
    Example 1: product documents share many common attributes, such as manufacturer, brand and price, plus a small number of attributes that differ from product to product. If those differing attributes must be queryable, far too many indexes would be needed. So we convert them into subdocuments in which k holds the attribute name and v the attribute value. For example, from

    {
    ...
    "input" : "5V/1300 mA",
    "output" : "5V/1A",
    "capacity" : "4200 mAh"
    }
    to
    {
    ...
    "add_specs" : [
    	{ "k" : "input", "v" : "5V/1300 mA"},
    	{ "k" : "output", "v" : "5V/1A"},
    	{ "k" : "capacity", "v" : "4200 mAh"}
    ]
    }
    

    Then build an index on the subdocuments:

    db.products.createIndex({"add_specs.k":1, "add_specs.v":1})
    

    Queries like the following can then be served efficiently:

    db.products.find({"add_specs" : {"$elemMatch" : {"k" : "capacity", "v" : "4200 mAh"}}})
    

    Example 2: some fields in a document have different names but values with the same meaning, e.g. a movie's release dates in different countries:

    {
    ...
    "release_USA":"2020/01/01",
    "release_UK":"2020/02/01",
    "release_China":"2020/03/01"
    }
    

    As in example 1, this can be converted into an array:

    "release_date" :
    [
    {"k":"release_USA", "v":"2020/01/01"},
    {"k":"release_UK", "v":"2020/02/01"},
    {"k":"release_China", "v":"2020/03/01"}
    ]
    

    So the problems the Attribute Pattern solves are:

    • a modest number of similar attributes
    • the need to query on those attributes

    The solution is to turn each field:value pair into a subdocument of the form:

    { "k" : <field name>, "v" : <field value> }

    Typical cases are product characteristics, or groups of fields that share the same value type.

    The benefits: indexes are easy to build and fewer are needed; fields added in the future are handled automatically; and the original attribute relationships remain easy to understand. It is a good way to organize fields with common traits, or fields that cannot be enumerated in advance.
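The field-to-k/v conversion above can be sketched in Python (a hedged illustration; the function name, the `add_specs` target field and the charger document are my own, not from the course):

```python
# Attribute Pattern: move a chosen set of fields into a single k/v array so
# that one compound index on "add_specs.k"/"add_specs.v" covers all of them.

def to_attribute_pattern(doc, fields, target="add_specs"):
    """Move the given fields of `doc` into a list of {k, v} subdocuments."""
    out = {key: val for key, val in doc.items() if key not in fields}
    out[target] = [{"k": f, "v": doc[f]} for f in fields if f in doc]
    return out

charger = {
    "name": "USB charger",
    "input": "5V/1300 mA",
    "output": "5V/1A",
    "capacity": "4200 mAh",
}
converted = to_attribute_pattern(charger, ["input", "output", "capacity"])
# converted["add_specs"][0] == {"k": "input", "v": "5V/1300 mA"}
```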

    The Attribute Pattern is orthogonal to polymorphism; I think of it as a 90-degree transpose, like turning row-oriented storage into column-oriented storage.

    The problems the Attribute Pattern solves can also be addressed with Wildcard Indexes, a new feature in MongoDB 4.2.

    When doing the exercises, be sure to read the question carefully; you can also get error details with --verbose, for example:

    $ ./validate_m320 pattern_attribute --file pattern_attribute.json --verbose
    Answer Filename: /home/vagrant/M320/pattern_attribute.json
    
    Errors:
    (root): Additional property date_acquisition is not allowed
    events.0.k: events.0.k must be one of the following: <string>
    events.1.k: events.1.k must be one of the following: <string>
    

    This output means that concrete values must not appear in the answer:

      "in_house": "<bool>",
      "events": [
    	{ "k": "moma", 
    	  "v": "<date>"},
    	{ "k": "louvres", 
    	  "v": "<date>"},
    	{ "k": "date_acquisition", 
    	  "v": "<date>"}
      ]
    

    It should instead take the following form:

    {
      "events": [
            { "k": "<string>",
              "v": "<date>"},
            { "k": "<string>",
              "v": "<date>"}
      ]
    }
    
    

    Extended Reference Pattern

    Joins are easy in a relational database. In MongoDB, a join can be done:

    • in the application
    • with a lookup stage, i.e. $lookup or $graphLookup
    • or avoided altogether by embedding documents

    Extended Reference means that in a 1-N relationship, the N side embeds part of the 1 side's data. For customers and orders, an order can embed some of the customer's data: only the fields that are joined frequently. In the end there are still two documents.

    The embedded data exists on both sides, so there is duplication. To limit consistency problems, the chosen fields should rarely change, and there should be as few of them as possible.

    Extended Reference avoids many joins by embedding the frequently queried data in the main document. Use cases include catalogs, mobile applications and real-time analytics. It gives faster reads and fewer joins; the drawback is duplicated data.
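A small Python sketch of the Extended Reference idea for the customer/order example (the field names and the choice of which fields to copy are assumptions for illustration):

```python
# Extended Reference Pattern: when creating an order, copy only the few,
# rarely-changing customer fields that order queries always need, plus the
# reference itself. Volatile fields stay in the customer document only.

def make_order(customer, items):
    return {
        "customer_id": customer["_id"],            # the reference
        "customer": {                              # the extended (embedded) part
            "name": customer["name"],
            "shipping_address": customer["shipping_address"],
        },
        "items": items,
    }

customer = {"_id": 42, "name": "Ada", "shipping_address": "1 Main St",
            "loyalty_points": 130, "birthday": "1990-01-01"}
order = make_order(customer, ["book"])
# the order carries name/address, but not volatile fields like loyalty_points
```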

    Subset Pattern

    The working set should fit entirely in RAM; if it does not, performance suffers.

    If the working set is too big, you can add memory by scaling up or scaling out (sharding), or shrink the working set itself.

    The Subset Pattern moves rarely used fields into another document to shrink the working set. For a 1-1 field, simply move it to the other document. For a 1-N field such as a movie's cast of, say, 1000 actors, keep only the 20 top-billed actors in the frequently used document and the full cast in the other document; this does duplicate data.

    The problem the Subset Pattern solves: documents so large they no longer fit in memory, most of whose content is rarely queried. The solution is to split each into a frequently used document and a rarely used one.

    Use cases include product or article reviews and movie casts. The benefit is a smaller working set and faster access; the costs are the space taken by duplication, and the occasional operation that now needs several round trips from the application.
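The movie/cast split above can be sketched as follows (a hedged example; the `keep=20` cutoff and field names are illustrative, not prescribed by the course):

```python
# Subset Pattern: split a movie document into a main document holding only
# the top-billed cast (kept in the working set) and a companion document
# holding the full cast.

def split_movie(movie, keep=20):
    main = dict(movie)
    main["cast"] = movie["cast"][:keep]                     # duplicated subset
    extra = {"movie_id": movie["_id"], "full_cast": movie["cast"]}
    return main, extra

movie = {"_id": 7, "title": "Example", "cast": [f"actor{i}" for i in range(1000)]}
main_doc, extra_doc = split_movie(movie)
# main_doc keeps 20 actors; extra_doc keeps all 1000
```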

    Chapter 4: Patterns (Part 2)

    Computed Pattern

    Computation is expensive, so reducing it helps performance. For example:

    • Mathematical computations. For statistics such as a running sum, recomputing on every insert is wasteful; by caching the previous result, many additions collapse into a single one. Write the intermediate result to another document and let a background process keep it updated, which also separates reads from writes.
    • Fan-out operations. One logical operation expands into many: a fan-out read gathers data from many places, a fan-out write pushes data to many places. Pushing the data to every place that needs it trades space for performance.
    • Roll-up operations, the opposite of drill-down: hourly and daily statistics can keep rolling up into coarser aggregates.

    Use the Computed Pattern when compute resources are strained or you want to reduce read latency.

    The problem it solves: expensive computations performed repeatedly on the same data and producing the same results. The solution: store the computed result in another document. Use cases include IoT, event sourcing, time-series data and the aggregation framework. The benefits are faster reads and savings in CPU and disk; the difficulty is that the requirements are hard to pin down.
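The running-sum case can be sketched in a few lines of Python (illustrative only; the summary-document fields are invented, and in a real system the summary would live in its own collection and be updated by a background process):

```python
# Computed Pattern: instead of re-summing all ticket sales on every read,
# keep a running total in a summary document and fold each new sale into it
# at write time.

def record_sale(summary, amount):
    """Update the precomputed totals in place."""
    summary["total_revenue"] += amount
    summary["num_sales"] += 1
    return summary

summary = {"movie_id": 1, "total_revenue": 0, "num_sales": 0}
for amount in (12, 8, 15):
    record_sale(summary, amount)
# summary now holds total_revenue == 35 over num_sales == 3
```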

    Bucket Pattern

    The Bucket Pattern is a compromise between one document per piece of information and one document holding everything. It is still a 1-N relationship, but N is not the total: it is Total/M (when 1-to-Total no longer fits). Choosing M requires a good understanding of the workload; a bucket might hold a day's or an hour's worth of data. It resembles database partitioning, except that you must pick the partition size yourself.

    Inside the document, the bucketed field's values are stored in an array, a bit like columnar storage.

    Use cases include IoT, data warehousing, and any object with too much attached information.

    The benefits: a balance between the data returned and the space used; the data is easier to manage, and deleting it is easy.

    The drawbacks: unfriendly to BI tools, and poor query performance if the buckets are badly designed.
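A minimal sketch of hourly bucketing for IoT readings (the tuple shape, field names and one-hour bucket size are assumptions for the example):

```python
# Bucket Pattern: instead of one document per measurement (too many) or one
# per sensor (too big), group measurements into one document per sensor per
# hour.

from collections import defaultdict

def bucket_readings(readings, bucket_seconds=3600):
    """Group (sensor_id, timestamp, value) tuples into hourly bucket docs."""
    buckets = defaultdict(list)
    for sensor_id, ts, value in readings:
        bucket_start = ts - ts % bucket_seconds      # floor to the hour
        buckets[(sensor_id, bucket_start)].append({"ts": ts, "v": value})
    return [
        {"sensor_id": s, "bucket_start": b, "measurements": ms}
        for (s, b), ms in sorted(buckets.items())
    ]

docs = bucket_readings([(1, 10, 20.5), (1, 3700, 21.0), (1, 3900, 21.2)])
# two documents: hour 0 holds one reading, hour 3600 holds two
```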

    Schema Versioning Pattern

    The ALTER TABLE nightmare: I went through it myself just recently, and it really is inconvenient. All the more reason to design as precisely as possible up front, to avoid later schema changes.

    Changing a schema in a relational database may require rewriting data and downtime, and a failed migration is hard to roll back.

    The essence of the Schema Versioning Pattern is to add a version field to every document; the application recognizes the version and handles each shape accordingly. So you upgrade the application and migrate the documents gradually.

    The problems it avoids: downtime, the time spent updating documents, and having to update every document at once. Use cases include heavily used applications and applications with a lot of legacy data.

    The benefits: no downtime, a migration you control, and no technical debt.
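A hedged sketch of how the application side might dispatch on the version field (the contact schema, field names and version numbers are invented for illustration):

```python
# Schema Versioning Pattern: each document carries a schema_version field,
# and the application branches on it, so old and new document shapes can
# coexist during a gradual migration.

def normalize_contact(doc):
    version = doc.get("schema_version", 1)   # documents without the field are v1
    if version == 1:
        # v1 stored a single phone number in a flat "phone" field
        return {"name": doc["name"], "phones": [doc["phone"]], "schema_version": 2}
    return doc  # already the latest shape

old = {"name": "Ada", "phone": "555-0100"}
new = {"name": "Bob", "phones": ["555-0101"], "schema_version": 2}
migrated = normalize_contact(old)
# migrated["phones"] == ["555-0100"]; v2 documents pass through unchanged
```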

    Tree Patterns

    A tree is a hierarchical structure: an org chart, a directory tree, and so on. Hierarchies frequently require operations such as tracing ancestry.

    Tree patterns comprise the following four sub-patterns, which can be combined:
    A. Parent References: store the parent; there is only one, so updates are easy.
    B. Child References: store the children, one or more, in an array.
    C. Array of Ancestors: an array holding every node from this node up to and including the root, so you can trace all the way up.
    D. Materialized Paths: store all ancestors as a string ".root.x.y.z…", which regular expressions can then match.

    These sub-patterns answer the following four kinds of questions:

    1. The ancestors of X: C, D
    2. Who reports to Y: A, C
    3. All nodes under Z: B, C?
    4. Move all children of N under P: A

    Use cases include org charts and product categories.
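Two of the sub-patterns can be sketched on a tiny category tree (the sample data is made up; the regex mirrors a MongoDB query like `{path: /^\.books\.tech\./}`):

```python
# Array of Ancestors answers "all ancestors of X" directly; Materialized
# Paths answers "everything under Z" with a prefix/regex match on the path.

import re

categories = [
    {"_id": "books",   "ancestors": [],                "path": ".books"},
    {"_id": "tech",    "ancestors": ["books"],         "path": ".books.tech"},
    {"_id": "mongodb", "ancestors": ["books", "tech"], "path": ".books.tech.mongodb"},
]

def ancestors_of(node_id):
    return next(c["ancestors"] for c in categories if c["_id"] == node_id)

def descendants_of(node_id):
    node_path = next(c["path"] for c in categories if c["_id"] == node_id)
    pattern = re.compile("^" + re.escape(node_path) + r"\.")
    return [c["_id"] for c in categories if pattern.match(c["path"])]

# ancestors_of("mongodb") returns ["books", "tech"]
# descendants_of("books") returns ["tech", "mongodb"]
```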

    Polymorphic Pattern

    The Polymorphic Pattern applies when documents share most of their attributes and can be grouped into one kind, e.g. products.

    The pattern also suits subdocuments, e.g. storing addresses for different countries: mostly the same fields, but some countries have provinces while others have states.

    The Schema Versioning Pattern also relies on this pattern, via the version number, so this is a foundational pattern.

    It is implemented with a field that tracks the shape of the documents in the collection; the application then branches on that field.

    The benefits: it is easy to implement and provides a single view. Related objects can live in one collection, so a single collection can serve all the queries.

    Use cases include single customer views, product catalogs and content management.

    Summary of Patterns

    Approximation Patterns

    Approximation means reducing the writes from the application to MongoDB, e.g. updating every 10 seconds instead of every second.

    It reduces computation, provided the business can tolerate imprecision.

    In practice it means fewer writes, each possibly carrying a larger payload; at the very least it saves round trips.

    Use cases include web page counters, any counter that may be imprecise (but statistically sound), and metrics.

    The drawbacks: imprecision, and the application has to implement it.
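The page-counter case can be sketched as follows (an illustrative in-memory model; `persisted` stands in for a database update that a real application would issue on each flush):

```python
# Approximation Pattern: accumulate hits in memory and flush to the
# database only every N hits, trading exactness for far fewer writes.

class ApproximateCounter:
    def __init__(self, every=10):
        self.every = every
        self.pending = 0     # hits not yet written
        self.persisted = 0   # value the database would hold
        self.writes = 0      # number of database writes issued

    def hit(self):
        self.pending += 1
        if self.pending >= self.every:
            self.persisted += self.pending   # one write covers N hits
            self.pending = 0
            self.writes += 1

c = ApproximateCounter(every=10)
for _ in range(95):
    c.hit()
# 9 writes persisted 90 hits; 5 hits are still pending (the imprecision)
```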

    Outlier Pattern

    A handful of outlier documents can drag down the performance of an entire query.

    The implementation optimizes for the fields most documents actually use; outlier documents are flagged and handled separately by the application. The main document carries a marker, and the overflow lives elsewhere and is referenced.

    Use cases include social networks and similar workloads.

    The drawback: it is not well suited to aggregations in ad-hoc queries.
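A rough sketch of the flag-and-overflow idea (the follower limit of 3, the has_extras marker and the overflow list are all invented for the example; in practice the overflow would be a separate collection):

```python
# Outlier Pattern: most users' followers fit inline in the user document;
# for the rare account that exceeds the limit, set a marker and push the
# overflow to a separate place the application knows to consult.

def add_follower(user, overflow_docs, follower, limit=3):
    if len(user["followers"]) < limit:
        user["followers"].append(follower)
    else:
        user["has_extras"] = True                      # marker in the main doc
        overflow_docs.append({"user_id": user["_id"], "follower": follower})

user = {"_id": 1, "followers": []}
overflow = []
for f in ["a", "b", "c", "d", "e"]:
    add_follower(user, overflow, f)
# the first 3 followers stay inline; the last 2 spill into the overflow
```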

    Chapter 5: Conclusion

    This chapter is the final exam, which has some tricky questions.
