
Originally published on Jianshu: http://www.jianshu.com/p/badf412db4e7

lua-cmsgpack is an open-source MessagePack implementation: a pure C library with no other dependencies that, once compiled, can be called directly from Lua. It currently supports Lua 5.1/5.2/5.3.

1. What is MessagePack?

-----------

The official description:

```
It's like JSON.
but fast and small.
```

It is very similar to JSON, but faster and smaller. Here is the example from the official site:

[figure: official MessagePack vs. JSON size comparison]

Translating the official explanation: MessagePack is an efficient binary serialization format. It lets you exchange data among multiple languages, like JSON, but it is faster and smaller: small integers are encoded into a single byte, and a typical short string requires only one extra byte in addition to the string itself.

Nearly every popular development language has a MessagePack implementation; the official site is http://msgpack.org/. For Lua, MessagePack also provides an official open-source library, lua-MessagePack, at https://github.com/fperrad/lua-MessagePack/.

This article, however, uses lua-cmsgpack. I haven't compared which of the two performs better; I simply found lua-cmsgpack first, and judging from the README files their usage is nearly identical, so treat either as a reference.

2. Building lua-cmsgpack

---------

Both lua-cmsgpack and the official lua-MessagePack have to be compiled yourself, presumably because there are too many platforms for prebuilt binaries to be offered. The GitHub address of lua-cmsgpack is https://github.com/antirez/lua-cmsgpack

After git cloning it you need the cmake tool installed. On macOS, run this in the project directory:

```
cmake .
make
```

and that's it. Naturally, Lua (version 5.1 or later) must already be installed.

The main thing worth mentioning is a cmake problem that can occur on CentOS. If cmake fails with errors like:

```
Could NOT find Lua51 (missing: LUA_INCLUDE_DIR)
...
CMake Error at CMakeLists.txt:1 (cmake_minimum_required):
CMake 2.8 or higher is required.  You are running version 2.6.4

Configuring incomplete, errors occurred!
```

you need to install some Lua dependency packages yourself. Usually

```
yum -y install lua lua-devel
```

is enough. If that still doesn't work, also try:

```
yum install ncurses-devel gcc gcc-c++ make
```

The build produces a cmsgpack.so file; to use it, just require it.

3. A Lua example

---------

```lua
local cmsgpack = require "cmsgpack"

local tba = {1, 2, 3}

local tbb = {
    a = 1,
    b = 3
}

local msgpack = cmsgpack.pack(tba, tbb)

local res1, res2 = cmsgpack.unpack(msgpack)

for k, v in pairs(res1) do
    print(k, v)
end

for i, v in pairs(res2) do
    print(i, v)
end
```

Running it gives:

```
# lua test_table.lua
1	1
2	2
3	3
a	1
b	3
```

cmsgpack.pack() can serialize several Lua objects into a single binary msgpack value, and deserializing returns the corresponding number of Lua objects, which is very convenient.

4. Storing serialized msgpack in Redis

---------

Interestingly, Redis also supports MessagePack, so Lua plus lua-cmsgpack makes for good chemistry. Here is a simple example (using OpenResty):

```lua
local cmsgpack = require "cmsgpack"
local redis    = require "resty.redis"

local red = redis:new()
local ok, err = red:connect("127.0.0.1", 6379)
if not ok then
    ngx.say("failed to connect: ", err)
    return
end

local lua_table = {
    a = 1,
    b = 3
}

local msgpack = cmsgpack.pack(lua_table)

local ok, err = red:set("msg", msgpack)
if not ok then
    ngx.say("failed to set msg: ", err)
    return
end

local ret_pack  = red:get("msg")
local ret_table = cmsgpack.unpack(ret_pack)
ngx.say(ret_table.a + ret_table.b)
```

The test returns:

```
4
```

There are quite a few scenarios where this pattern is genuinely useful.


I am getting the warning message given in the title. I want to understand it and remove it. I found that there are already some answers to this question, but I can't follow them because they are overloaded with technical jargon. Could someone explain the problem in simple words?

P.S. I know what OOP is. I know what objects, classes, methods, fields, and instantiation are.

P.P.S. If anyone needs my code, here it is:

import java.awt.*;
import javax.swing.*;

public class HelloWorldSwing extends JFrame {

    JTextArea m_resultArea = new JTextArea(6, 30);

    //====================================================== constructor
    public HelloWorldSwing() {
        //... Set initial text, scrolling, and border.
        m_resultArea.setText("Enter more text to see scrollbars");
        JScrollPane scrollingArea = new JScrollPane(m_resultArea);
        scrollingArea.setBorder(BorderFactory.createEmptyBorder(10, 5, 10, 5));

        // Get the content pane, set layout, add to center
        Container content = this.getContentPane();
        content.setLayout(new BorderLayout());
        content.add(scrollingArea, BorderLayout.CENTER);
        this.pack();
    }

    public static void createAndViewJFrame() {
        JFrame win = new HelloWorldSwing();
        win.setTitle("TextAreaDemo");
        win.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        win.setVisible(true);
    }

    //============================================================= main
    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                createAndViewJFrame();
            }
        });
    }
}
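Assuming the warning is the usual Swing/IDE one, "The serializable class HelloWorldSwing does not declare a static final serialVersionUID field of type long" (JFrame implements java.io.Serializable, so every subclass inherits that contract), a minimal sketch of the standard fix is simply to declare the field; the 1L value below is the conventional placeholder, not anything from the original post:

    import javax.swing.JFrame;

    public class HelloWorldSwing extends JFrame {
        // JFrame implements java.io.Serializable, so subclasses are serializable too.
        // Declaring serialVersionUID explicitly silences the warning and pins the
        // version stamp Java's serialization machinery would otherwise compute.
        private static final long serialVersionUID = 1L;

        // ... rest of the class exactly as above ...
    }

Alternatively, annotating the class with @SuppressWarnings("serial") hides the warning without declaring the field.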


Foreword: hold yourself to at least this much: do your best at everything you take on! — Qiu Bubai

The content is as the title says. This is only a personal exploration; all the code below has been tested and passes.

A note from an interview: the interviewer asked what the main difference between Serializable and Parcelable is. I recited what I knew: efficiency, Serializable is part of the Java API while Parcelable is Android-specific, and the typical scenarios for each. He said that was wrong; Serializable, he said, serializes to disk while Parcelable serializes to memory. Thoroughly puzzled, I decided to study it properly.

From the material I found and from the code Android Studio auto-generates, Parcelable's serialization and deserialization work like this: a Parcel wraps the data to be serialized. Serializing means writing the fields with the Parcel's write methods; deserialization is done through CREATOR, which obtains the current thread's context class loader and reads the fields back with the Parcel's read methods. The code is as follows.

The Person entity class

    /**
     * Desc:
     *
     * @author: RedRose
     * Date: 2019/5/3
     * Email: yinsxi@163.com
     */
    
    public class Person implements Parcelable {
        private int age;
        private String name;
    
        public Person(int age, String name) {
            this.age = age;
            this.name = name;
        }
    
        private Person(Parcel in) {
            age = in.readInt();
            name = in.readString();
        }
    
        public static final Creator<Person> CREATOR = new Creator<Person>() {
            @Override
            public Person createFromParcel(Parcel in) {
                return new Person(in);
            }
    
            @Override
            public Person[] newArray(int size) {
                return new Person[size];
            }
        };
    
        @Override
        public int describeContents() {
            return 0;
        }
    
        @Override
        public void writeToParcel(Parcel dest, int flags) {
            dest.writeInt(age);
            dest.writeString(name);
        }
    
        @Override
        public String toString() {
            return "Person{" +
                    "age=" + age +
                    ", name='" + name + '\'' +
                    '}';
        }
    }
    

The actual serialization and deserialization

            // Cases from a click handler switch; PathUtil and LogUtils are the author's project helpers.
            // Serialize the Parcelable object to disk.
            case R.id.testParcelable:
                File file = new File(PathUtil.getAppDocPath() + "testParcelable.txt");
                try {
                    if (!file.exists()) {
                        file.createNewFile();
                    }
                    Person p = new Person(1, "XueQin");
                    FileOutputStream out = new FileOutputStream(file);
                    BufferedOutputStream bos = new BufferedOutputStream(out);
                    Parcel parcel = Parcel.obtain();
                    parcel.writeParcelable(p, 0);
                    bos.write(parcel.marshall()); // marshall() returns the Parcel's raw bytes
                    bos.flush();
                    bos.close();
                    out.flush();
                    out.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
                break;

            // Deserialize the Parcelable object.
            case R.id.readParcelable:
                try {
                    File file2 = new File(PathUtil.getAppDocPath() + "testParcelable.txt");
                    FileInputStream in = new FileInputStream(file2);
                    byte[] bytes = new byte[in.available()];
                    in.read(bytes);
                    Parcel parcel = Parcel.obtain();
                    parcel.unmarshall(bytes, 0, bytes.length);
                    parcel.setDataPosition(0); // rewind before reading
                    Person person = parcel.readParcelable(Thread.currentThread().getContextClassLoader());
                    in.close();
                    LogUtils.e(TAG, person.toString());
                } catch (IOException e) {
                    e.printStackTrace();
                }
                break;
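For contrast, here is a minimal sketch of the same round trip done with plain java.io.Serializable, the mechanism the interviewer tied to disk storage; it assumes a Person variant that implements Serializable rather than Parcelable, and the file name is illustrative:

    import java.io.*;

    public class SerializableDemo {
        public static void main(String[] args) throws IOException, ClassNotFoundException {
            File file = new File("person.ser"); // illustrative path

            // Serialize to disk with the standard Java mechanism.
            try (ObjectOutputStream oos = new ObjectOutputStream(new FileOutputStream(file))) {
                oos.writeObject(new Person(1, "XueQin"));
            }

            // Deserialize from disk.
            try (ObjectInputStream ois = new ObjectInputStream(new FileInputStream(file))) {
                Person person = (Person) ois.readObject();
                System.out.println(person);
            }
        }
    }

Note that the experiment above still stands: Parcel.marshall()/unmarshall() do let you push a Parcelable to disk; it is just not what Parcel is designed for, since the marshalled byte format is not guaranteed to stay stable across Android versions.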

You can also refer to the source of Parcel's readParcelable(), which obtains the CREATOR object via the class loader passed in (here, the current thread's context class loader):

    @SuppressWarnings("unchecked")
    public final <T extends Parcelable> T readParcelable(ClassLoader loader) {
        Parcelable.Creator<?> creator = readParcelableCreator(loader);
        if (creator == null) {
            return null;
        }
        if (creator instanceof Parcelable.ClassLoaderCreator<?>) {
            Parcelable.ClassLoaderCreator<?> classLoaderCreator =
                (Parcelable.ClassLoaderCreator<?>) creator;
            return (T) classLoaderCreator.createFromParcel(this, loader);
        }
        return (T) creator.createFromParcel(this);
    }
    

To wrap up: knowledge is a vast ocean. Let's keep at it, together.


When you want to store data, objects say, to a file, or send it over the network, you face the serialization problem.
Every language of course ships its own facility for this, e.g. Java serialization, Ruby's marshal, or Python's pickle.

That is all fine until you need to cross platforms and languages, at which point you can use JSON or XML.
And if you cannot stand JSON's or XML's verbosity and parsing cost, a problem appears; you could, of course, try to invent some binary encoding for JSON.

There is no need to reinvent that wheel, though: Thrift, Protocol Buffers and Avro all provide efficient, cross-language serialization of data using a schema, and code generation for the Java folks.

    So you have some data that you want to store in a file or send over the network. You may find yourself going through several phases of evolution:

    1. Using your programming language’s built-in serialization, such as Java serialization, Ruby’s marshal, or Python’s pickle. Or maybe you even invent your own format.
    2. Then you realise that being locked into one programming language sucks, so you move to using a widely supported, language-agnostic format like JSON (or XML if you like to party like it’s 1999).
    3. Then you decide that JSON is too verbose and too slow to parse, you’re annoyed that it doesn’t differentiate integers from floating point, and think that you’d quite like binary strings as well as Unicode strings. So you invent some sort of binary format that’s kinda like JSON, but binary (1, 2, 3, 4, 5, 6).
    4. Then you find that people are stuffing all sorts of random fields into their objects, using inconsistent types, and you’d quite like a schema and some documentation, thank you very much. Perhaps you’re also using a statically typed programming language and want to generate model classes from a schema. Also you realize that your binary JSON-lookalike actually isn’t all that compact, because you’re still storing field names over and over again; hey, if you had a schema, you could avoid storing objects’ field names, and you could save some more bytes!

    Once you get to the fourth stage, your options are typically Thrift, Protocol Buffers or Avro. All three provide efficient, cross-language serialization of data using a schema, and code generation for the Java folks.

     

In real use, data is always changing, so the schema is always evolving. Thrift, Protobuf and Avro all support this, ensuring that a schema change on the client or the server disturbs normal service as little as possible.

    In real life, data is always in flux. The moment you think you have finalised a schema, someone will come up with a use case that wasn’t anticipated, and wants to “just quickly add a field”. Fortunately Thrift, Protobuf and Avro all support schema evolution: you can change the schema, you can have producers and consumers with different versions of the schema at the same time, and it all continues to work. That is an extremely valuable feature when you’re dealing with a big production system, because it allows you to update different components of the system independently, at different times, without worrying about compatibility.

     

The point of this post is to compare exactly how Thrift, Protobuf and Avro serialize data to binary while supporting schema evolution.

    The example I will use is a little object describing a person. In JSON I would write it like this:

    {
        "userName": "Martin",
        "favouriteNumber": 1337,
        "interests": ["daydreaming", "hacking"]
    }
    

    This JSON encoding can be our baseline. If I remove all the whitespace it consumes 82 bytes.

    Protocol Buffers

    The Protocol Buffers schema for the person object might look something like this:

    message Person {
        required string user_name        = 1;
        optional int64  favourite_number = 2;
        repeated string interests        = 3;
    }
First, PB uses an IDL to express the Person schema.
Every field has a unique tag as its identifier, so = 1, = 2, = 3 are not assignments; they declare each field's tag.
Each field is also marked optional, required or repeated.

    When we encode the data above using this schema, it uses 33 bytes, as follows:

[figure: byte-by-byte layout of the 33-byte Protocol Buffers encoding]

The figure above shows clearly how the 82-byte JSON becomes a 33-byte binary encoding.
Serialization records only each field's tag, never its name, so field names can be changed freely, while the tag must never change.
The first byte packs the tag and the wire type; the actual data follows, and strings are additionally prefixed with their length.

Notice that the encoding makes no special record of optional, required or repeated.
When decoding, required fields get a validation check, while optional and repeated fields may be entirely absent from the encoded data.
So optional and repeated fields can simply be removed from the schema, for instance on the client, as long as the removed field's tag is never reused.
Changing a required field, however, can cause trouble: if the client drops a required field, the server's validation check will fail.

Adding a field is no problem at all as long as it gets a fresh tag. A sketch of this key/varint encoding follows.
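To make the figure concrete, here is a minimal Java sketch (mine, not from the original post) of how Protobuf builds the leading key byte and varint-encodes values; the wire-type constants follow the Protobuf encoding spec:

    import java.io.ByteArrayOutputStream;

    // Minimal sketch of the Protobuf wire format's field key + varint.
    public class PbKeySketch {
        // Wire types from the Protobuf encoding spec.
        static final int WIRETYPE_VARINT = 0;           // int32/int64/bool/enum
        static final int WIRETYPE_LENGTH_DELIMITED = 2; // string/bytes/messages

        // The key byte combines the field tag and the wire type.
        static int makeKey(int fieldTag, int wireType) {
            return (fieldTag << 3) | wireType;
        }

        // Varint: 7 data bits per byte, high bit set on all but the last byte.
        static void writeVarint(ByteArrayOutputStream out, long value) {
            while ((value & ~0x7FL) != 0) {
                out.write((int) ((value & 0x7F) | 0x80));
                value >>>= 7;
            }
            out.write((int) value);
        }

        public static void main(String[] args) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            // favourite_number = 1337 (tag 2, varint).
            writeVarint(out, makeKey(2, WIRETYPE_VARINT));
            writeVarint(out, 1337);
            for (byte b : out.toByteArray()) System.out.printf("%02x ", b);
        }
    }

Running it prints 10 b9 0a: key byte 0x10 (tag 2, wire type 0) followed by 1337 as a two-byte varint, matching the figure.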

     

    Thrift

    Thrift is a much bigger project than Avro or Protocol Buffers, as it’s not just a data serialization library, but also an entire RPC framework.
    It also has a somewhat different culture: whereas Avro and Protobuf standardize a single binary encoding, Thrift embraces a whole variety of different serialization formats (which it calls “protocols”).

Thrift is more ambitious: it is not just a data serialization library but an entire RPC framework with a complete protocol stack.
And its protocol abstraction means it is not tied to one binary encoding; other encodings can be plugged in as different protocols.

Thrift's IDL looks much like PB's; the differences are that field tags are written 1: (rather than = 1) and that there is no repeated qualifier, containers such as list being used instead.
    All the encodings share the same schema definition, in Thrift IDL:

    struct Person {
      1: string       userName,
      2: optional i64 favouriteNumber,
      3: list<string> interests
    }

    The BinaryProtocol encoding is very straightforward, but also fairly wasteful (it takes 59 bytes to encode our example record):

[figure: Thrift BinaryProtocol encoding, 59 bytes]

    The CompactProtocol encoding is semantically equivalent, but uses variable-length integers and bit packing to reduce the size to 34 bytes:

[figure: Thrift CompactProtocol encoding, 34 bytes]

As said above, Thrift can wrap different encodings behind its protocol abstraction, and for binary there are two choices.
The first is the plain binary encoding, with no space optimization at all; as you can see it wastes a lot of space, taking 59 bytes.
The second is the compact binary encoding, quite similar to PB's, except that Thrift is more flexible: it supports containers directly, such as the list here, whereas PB can only build simple structures through repeated (Thrift defines an explicit list type rather than Protobuf's repeated field approach). A sketch of the variable-length integer trick used here follows.
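CompactProtocol's variable-length integers are zigzag-encoded so that small negative numbers also stay small on the wire, the same trick Protobuf's sint types use; a minimal sketch:

    // Zigzag encoding maps signed integers onto unsigned ones so that
    // values near zero (positive or negative) get short varints:
    // 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ...
    public class ZigzagSketch {
        static long encode(long n) {
            return (n << 1) ^ (n >> 63); // arithmetic shift propagates the sign
        }

        static long decode(long z) {
            return (z >>> 1) ^ -(z & 1);
        }

        public static void main(String[] args) {
            for (long n : new long[]{0, -1, 1, -2, 1337}) {
                long z = encode(n);
                System.out.println(n + " -> " + z + " -> " + decode(z));
            }
        }
    }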

    Avro

    Avro schemas can be written in two ways, either in a JSON format:

    {
        "type": "record",
        "name": "Person",
        "fields": [
            {"name": "userName",        "type": "string"},
            {"name": "favouriteNumber", "type": ["null", "long"]},
            {"name": "interests",       "type": {"type": "array", "items": "string"}}
        ]
    }
    

    …or in an IDL:

    record Person {
        string               userName;
        union { null, long } favouriteNumber;
        array<string>        interests;
    }

    Notice that there are no tag numbers in the schema! So how does it work?

    Here is the same example data encoded in just 32 bytes:

[figure: Avro encoding of the example record, 32 bytes]

Avro is the newest of the three and still has relatively few users, mainly in the Hadoop world. Its design is also quite distinctive compared with Thrift and PB.
First, the schema can be defined either in an IDL or in JSON, and notice that the binary encoding stores neither field tags nor field types.
That means:
1. a reader parsing the data must have the matching schema file;
2. with no field tags, field names are the only identifiers. Avro does support renaming a field, but all readers must be told first, as follows:

    Because fields are matched by name, changing the name of a field is tricky. You need to first update all readers of the data to use the new field name, while keeping the old name as an alias (since the name matching uses aliases from the reader’s schema). Then you can update the writer’s schema to use the new field name.

3. Data is read back in the order the schema declares the fields, so optional fields need special handling; the example uses union { null, long }:

    if you want to be able to leave out a value, you can use a union type, like union { null, long } above. This is encoded as a byte to tell the parser which of the possible union types to use, followed by the value itself. By making a union with the null type (which is simply encoded as zero bytes) you can make a field optional.

4. The schema can be written in JSON, whereas Thrift and PB only turn an IDL schema into generated code. Avro therefore permits generic clients and servers: when the schema changes, you only change the JSON, with nothing to recompile.

When a schema evolves, Avro's procedure is simpler: just hand the new schema to every reader.

With Thrift or PB, a schema change means regenerating and recompiling client and server code, although mismatched versions on the two sides are also handled fairly well.
5. The writer's schema and the reader's schema need not match exactly; the Avro parser uses resolution rules to translate the data (see the sketch after this list).

    So how does Avro support schema evolution?
    Well, although you need to know the exact schema with which the data was written (the writer’s schema), that doesn’t have to be the same as the schema the consumer is expecting (the reader’s schema). You can actually give two different schemas to the Avro parser, and it uses resolution rules to translate data from the writer schema into the reader schema.

6. Adding or removing a field is straightforward:

    You can add a field to a record, provided that you also give it a default value (e.g. null if the field’s type is a union with null). The default is necessary so that when a reader using the new schema parses a record written with the old schema (and hence lacking the field), it can fill in the default instead.

    Conversely, you can remove a field from a record, provided that it previously had a default value. (This is a good reason to give all your fields default values if possible.) This is so that when a reader using the old schema parses a record written with the new schema, it can fall back to the default.
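A minimal sketch of points 5 and 6 with the Apache Avro Java library (assuming org.apache.avro is on the classpath): the record is written with one schema and read back with a reader schema that adds a defaulted field, and the resolution rules fill in the default.

    import org.apache.avro.Schema;
    import org.apache.avro.generic.*;
    import org.apache.avro.io.*;
    import java.io.ByteArrayOutputStream;

    public class AvroEvolutionSketch {
        public static void main(String[] args) throws Exception {
            Schema writer = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Person\",\"fields\":[" +
                "{\"name\":\"userName\",\"type\":\"string\"}]}");
            // Reader schema adds a field with a default, so old data still parses.
            Schema reader = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Person\",\"fields\":[" +
                "{\"name\":\"userName\",\"type\":\"string\"}," +
                "{\"name\":\"favouriteNumber\",\"type\":\"long\",\"default\":0}]}");

            // Write with the writer schema.
            GenericRecord rec = new GenericData.Record(writer);
            rec.put("userName", "Martin");
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder enc = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(writer).write(rec, enc);
            enc.flush();

            // Read with both schemas; the resolution rules supply the default.
            BinaryDecoder dec = DecoderFactory.get().binaryDecoder(out.toByteArray(), null);
            GenericRecord back = new GenericDatumReader<GenericRecord>(writer, reader).read(null, dec);
            System.out.println(back); // {"userName": "Martin", "favouriteNumber": 0}
        }
    }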

One important question remains undiscussed: Avro depends on its JSON schema, so when and how is the schema passed between client and server?

The answer: it varies by scenario... in a file header, during the connection handshake, and so on:

    This leaves us with the problem of knowing the exact schema with which a given record was written.
    The best solution depends on the context in which your data is being used:

    • In Hadoop you typically have large files containing millions of records, all encoded with the same schema. Object container files handle this case: they just include the schema once at the beginning of the file, and the rest of the file can be decoded with that schema.
    • In an RPC context, it’s probably too much overhead to send the schema with every request and response. But if your RPC framework uses long-lived connections, it can negotiate the schema once at the start of the connection, and amortize that overhead over many requests.
    • If you’re storing records in a database one-by-one, you may end up with different schema versions written at different times, and so you have to annotate each record with its schema version. If storing the schema itself is too much overhead, you can use a hash of the schema, or a sequential schema version number. You then need a schema registry where you can look up the exact schema definition for a given version number.

Compared with Thrift and PB, Avro is more complex and harder to use, but it does bring some distinct advantages:

    At first glance it may seem that Avro’s approach suffers from greater complexity, because you need to go to the additional effort of distributing schemas.
    However, I am beginning to think that Avro’s approach also has some distinct advantages:

    • Object container files are wonderfully self-describing: the writer schema embedded in the file contains all the field names and types, and even documentation strings (if the author of the schema bothered to write some). This means you can load these files directly into interactive tools like Pig, and it Just Works™ without any configuration.
    • As Avro schemas are JSON, you can add your own metadata to them, e.g. describing application-level semantics for a field. And as you distribute schemas, that metadata automatically gets distributed too.
    • A schema registry is probably a good thing in any case, serving as documentation and helping you to find and reuse data. And because you simply can’t parse Avro data without the schema, the schema registry is guaranteed to be up-to-date. Of course you can set up a protobuf schema registry too, but since it’s not required for operation, it’ll end up being on a best-effort basis.