
    How to fix the Navicat error: 1045 - Access denied for user root@localhost (using password: YES)

    Preface

    • I hadn't used the database in a long time. One day I opened Navicat and the connection failed with: 1045 - Access denied for user root@localhost (using password: YES). I was baffled, since I hadn't changed anything, but there was nothing to do but dig in.
    • It turns out this problem is quite common. I spent several days on it and tried many proposed fixes, but none solved my problem; it even got worse, which was very frustrating. I set it aside for a month, then finally solved it yesterday, so I am recording it here in the hope that it helps some fellow CSDNers.
    • Having read many write-ups, the causes of this error boil down to:
    1. Two MySQL installations: check whether an old MySQL was not uninstalled cleanly.
    2. A root privilege problem.

    Solution

    The fix is to reset the root password. Many articles online say to add skip-grant-tables to the my.ini configuration file, but that did not work for me, and you may not even find a my.ini file. To spare you the pitfalls I hit, here is the full walkthrough.

    1. Delete the MySQL service

    • Run cmd as administrator, change to MySQL's bin directory, and run:
      sc delete MySql
    • "MySql" must match your service name, which you can check under Computer - Properties - Services (mine had already been renamed, so it looks different). Once the service is deleted it disappears from the Services list; if it is still visible, right-click it and choose "Stop", and it will go away.

    2. Create a new my.ini configuration file

    • The MySQL directory does not originally contain a my.ini configuration file. In recent MySQL versions, my.ini lives under ProgramData on the C: drive by default. You can copy it into the MySQL root directory, but remember to change basedir and datadir in my.ini to your own correct paths.
    • If you cannot find the file, create a blank my.ini yourself and paste in the following (again, basedir and datadir must be adjusted accordingly):
    # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html
    # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the
    # *** default location during install, and will be replaced if you
    # *** upgrade to a newer version of MySQL.
    [client]
    default-character-set = utf8mb4
    [mysql]
    default-character-set = utf8mb4
    [mysqld]
    character-set-client-handshake = FALSE
    character-set-server = utf8mb4
    collation-server = utf8mb4_bin
    init_connect='SET NAMES utf8mb4'
    # Remove leading # and set to the amount of RAM for the most important data
    # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
    innodb_buffer_pool_size = 128M
    # Remove leading # to turn on a very important data integrity option: logging
    # changes to the binary log between backups.
    # log_bin
    # These are commonly set, remove the # and set as required.
    basedir = D:\MySQL
    datadir = D:\MySQL\data
    port = 3306
    # server_id = .....
    # Remove leading # to set options mainly useful for reporting servers.
    # The server defaults are faster for transactions and fast SELECTs.
    # Adjust sizes as needed, experiment to find the optimal values.
    join_buffer_size = 128M
    sort_buffer_size = 16M
    read_rnd_buffer_size = 16M 
    sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES
    

    3. Regenerate the data directory

    • Delete the data directory under MySQL (back up any important tables first). Then regenerate it from cmd:
      D:\MySql\bin>mysqld --initialize-insecure --user=mysql
      When the command finishes, a new data directory appears in the MySQL folder.

    4. Reinstall the MySQL service and bind the my.ini configuration file

    In cmd, run:
    D:\MySql\bin>mysqld --install "MySql80" --defaults-file="d:/mysql/my.ini"
    "MySql80" is the service name and can be changed; "…\my.ini" is the location of the new configuration file, and it can also be written as the absolute path "D:\MySql\my.ini".
    If it reports success, open the Windows "Services" window and you will find the newly added MySql80 service.
    Start MySQL by running in cmd: D:\MySql\bin>net start mysql80.
    If it fails to start, some setting in the my.ini configuration file is probably wrong. Edit the file and run through the steps again from the beginning.

    5. Reset the password

    After deleting the data directory and the service, the old password no longer works, so the password must be reset. In cmd, run: D:\MySql\bin>mysql -u root -p — the password is now empty, so just press Enter at the password prompt.

    6. Change the root user password

    • In versions before MySQL 8.0, the command to change the root password is:
      update mysql.user set authentication_string=password("your_password") where user="root";
    • From MySQL 8.0 onward, the command is:
      ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your_password';
      Then exit MySQL and log back in with the password you just set, running in turn:
      mysql> exit
      mysql -u root -p
      That's it: my.ini is now bound, and opening Navicat again connects successfully. Yeah~

    Postscript

    This post mainly draws on the article below; many thanks, it solved my problem. I hope this post helps you too.

    Reference: what to do when there is no my.ini configuration file after installation

  • Snap7 Client

    Posted 2018-08-31 11:37:12

    A PLC client is the most well-known object: almost all PLC communication drivers on the market are clients.

    In the S7 world, LibNoDave, Prodave and SAPI-S7 (the Simatic Net library) are clients.

    Even an OPC server, despite its name, acts as a client toward the PLC.

    Finally, Snap7Client is a client.

    It implements the S7 protocol almost completely: you can read/write the whole PLC memory (Inputs/Outputs/DBs/Merkers/Timers/Counters), perform block operations (upload/download), control the PLC (Run/Stop/Compress...), manage the security level (Set Password/Clear Password) and use almost all the functions that Simatic Manager or TIA Portal allow.

    Its functions and their use are explained in detail in the Client API reference of the Snap7 manual.

    What I think is important to highlight are its advanced characteristics.

    The Snap7 library is designed with large, time-critical industrial data transfers in mind, involving networks with dozens of PLCs.

    To meet this, Snap7Client exposes three interesting features: PDU independence, SmartConnect and asynchronous data transfer.

     

    PDU independence

    As said, every data packet exchanged with a PLC must fit in a PDU, whose size is fixed for a given connection and ranges from 240 up to 960 bytes.

    All Snap7 functions completely hide this concept: the amount of data that you can transfer in a single call depends only on the size of the available data.

    If the data size exceeds the PDU size, the packet is automatically split across multiple subsequent transfers.
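The splitting can be pictured as turning one request into a list of (offset, size) transfers based on the negotiated PDU payload. Here is a minimal Python sketch of the idea; the function name and the payload figure are illustrative, not part of Snap7:

```python
def plan_transfers(total_size, pdu_payload):
    """Split one user request of total_size bytes into PDU-sized
    (offset, size) transfers. Illustrative only: real S7 telegrams
    also subtract per-telegram header overhead from each PDU."""
    transfers = []
    offset = 0
    while offset < total_size:
        chunk = min(pdu_payload, total_size - offset)
        transfers.append((offset, chunk))
        offset += chunk
    return transfers

# A 500-byte read with a 222-byte usable payload needs three transfers:
# plan_transfers(500, 222) -> [(0, 222), (222, 222), (444, 56)]
```

The caller sees a single read or write; the library iterates over the planned transfers internally.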

     

    SmartConnect

    When we try to connect to a generic server, basically two requirements must be met.

    1.  The hardware must be powered on.

    2.  Server software listening for our connection must be running.

    If the server is PC-based, the first condition does not always imply the second.
    But for firmware-based specialist hardware such as a PLC, things are different: a few seconds after power-on, all the services are running.

    That said, if we can ping a PLC we can be almost sure that it will accept our connections.

    The SmartConnect feature relies on this principle to avoid the TCP connection timeout when a PLC is off or the network cable is unplugged.
    Unlike the TCP connection timeout, the ping time is fixed, and we can decide how long it should be.

     

    When we call Cli_ConnectTo(), or when an active Snap7Partner needs to connect, first the PLC is “pinged”, then, if the ping result was ok, the TCP connection is performed.

     

    Snap7 uses two different ways to do this, depending on the platform:

    Windows

    The system library iphlpapi.dll is used, but it is loaded dynamically because it is not officially supported by Microsoft (even though it is present on all platforms and is now fully documented on MSDN).

    If loading it fails (a very rare case), an ICMP socket is created to perform the ping. We use this as plan B because administrative privileges are needed to create RAW sockets in Vista/Windows 7/Windows 8.

    Unix (Linux/BSD/Solaris)

    From 1.3.0 an asynchronous TCP connection (with timeout) is used, so root rights are no longer needed.
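The same idea can be sketched in a few lines of Python: a TCP connect with a short, fixed timeout plays the role of the ping. This mirrors the principle, not Snap7's actual code; `plc_reachable` is a hypothetical name, and 102 is the ISO-TSAP port used by S7 PLCs.

```python
import socket

def plc_reachable(host, port=102, timeout=1.5):
    """Reachability probe in the spirit of SmartConnect's Unix path:
    a TCP connect with a fixed, caller-chosen timeout instead of an
    ICMP ping, so no root privileges are needed."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Only if the probe succeeds would the real S7 connection be attempted, so an unplugged cable costs at most `timeout` seconds instead of a full TCP connection timeout.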

     

    During initialization, the library checks whether the ping can be performed by trying the methods above.

    If they all fail, SmartConnect is disabled and all the clients (or active partners) created will try to connect directly.

     

     

    Now let's see how to take full advantage of this feature.

     

    Let's suppose that we have a client that cyclically exchanges data in a thread, and we want fast recovery in case of network problems or PLC power loss.

     

    In the thread body we could write something like this:

     

     

     

    C++

     

    while (!TerminateCondition())
    {
        if (Client->Connected())
        {
            PerformDataExchange();
            sleep(TimeRate); // exchange time interval
        }
        else
            if (Client->Connect()!=0)
                sleep(10); // small wait recovery time
    }

     

    // Supposing that TerminateCondition() is a bool function that returns true when the thread needs to be terminated.

     

    // In Unix you have to use nanosleep() instead of sleep(), or copy SysSleep() from snap_sysutils.cpp.

     

     

     

    Pascal

     

    while not TerminateCondition do
    begin
        if Client.Connected then
        begin
            PerformDataExchange;
            Sleep(TimeRate); // exchange time interval
        end
        else
            if Client.Connect <> 0 then
                Sleep(10); // small wait recovery time
    end;

     

    // Supposing that TerminateCondition is a boolean function that returns true when the thread needs to be terminated.

     

     

     

     

    The examples use the C++ and Pascal classes that you find in the wrappers.

     

     

    Asynchronous data transfer

    A synchronous function is executed in the caller's thread, i.e. it returns only when its job is complete. Synchronous functions are often called blocking functions because they block the execution of the caller program.

    An asynchronous function, by contrast, consists of two parts: the first, executed in the caller's thread, prepares the data (if any), triggers the second part and returns immediately.
    The second part is executed in a separate thread and performs the body of the requested job, simultaneously with the execution of the caller program.

    Such a function is also called non-blocking.

    The choice between the two models essentially depends on two factors:

    1.   How granular the parallel job is compared with the activity of the caller.

    2.   How much the job execution time exceeds the overhead introduced by the synchronization.

    A S7 protocol job consists of:

    ·Data preparation.

    ·Data transmission.

    ·Waiting for the response.

    ·Decoding of the reply telegram.

    Each block transmitted is called a PDU (protocol data unit), which is the largest block that can be handled per transmission.

    The “max PDU size” concept belongs to the IsoTCP protocol and is negotiated during the S7 connection.

    So, if our data size (plus header size) is greater than the max PDU size, we need to split our packets and repeat the transmission and waiting tasks.

    “Waiting for the response” is the worst of these, since it is the longest and the CPU is idle in the meantime.

    So an S7 job is definitely granular and could benefit from asynchronous execution.

    “Could”, because the advantage is nullified by the synchronization overhead if the job consists of a single PDU.

    The Snap7 Client supports both data flow models via two different sets of functions that can be mixed in the same session:

     
    Cli_<function name> 
    and Cli_As<function name>.

    The example in the figure shows a call of Cli_DBRead that extends over several PDUs; during its execution the caller is blocked.

     

    End of Job Completion

    The asynchronous model in computer communications, however, has a great Achilles' heel: completion.

    To understand:

    The function is called, the job thread is triggered, the function returns, and the caller works simultaneously with the thread.
    At the end, we need to join the two execution flows, and to do this we need some form of synchronization.

    An inappropriate completion model can completely nullify the advantage of asynchronous execution.

    Basically there are three completion models:

    ·Polling

    ·Idle wait

    ·Callback

    None is better than the others; it depends on the context.

     

    Snap7 Client supports all three models, or a combination of them, if wanted.

    Polling is the simplest: after starting the process, we check the client until the job is finished.

    To do this we use the function Cli_CheckAsCompletion(); when called, it returns immediately with the status of the job: finished or in progress.

    We can use it to keep our program responsive during large data transfers.

     

     

     

    The idle-wait completion waits until the job is completed or a timeout expires. During this time our program is blocked, but the CPU is free to perform other tasks.

    Specific OS primitives (events, semaphores...) are used to accomplish this.

    The function delegated to this is Cli_WaitAsCompletion().
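Both completion models reduce to a worker thread plus an event. A minimal Python sketch of the idea follows; AsyncJob, check_completion and wait_completion are hypothetical stand-ins for the Cli_CheckAsCompletion/Cli_WaitAsCompletion pair, not the Snap7 API:

```python
import threading

class AsyncJob:
    """One asynchronous job: the body runs in its own thread and an
    event records completion."""
    def __init__(self, work):
        self._done = threading.Event()
        threading.Thread(target=self._run, args=(work,)).start()

    def _run(self, work):
        work()                    # the job body, e.g. a multi-PDU transfer
        self._done.set()

    def check_completion(self):
        # Polling model: returns immediately with the job status.
        return self._done.is_set()

    def wait_completion(self, timeout):
        # Idle-wait model: blocks up to timeout seconds; the CPU stays free.
        return self._done.wait(timeout)
```

A GUI program would typically poll check_completion() from its event loop, while a worker thread can simply block on wait_completion().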

     

     

     

     

     

    The callback method is the most complex:

    When the job terminates, a user function (the so-called callback) is invoked.

    To use it, we must instruct the client about the callback (using Cli_SetAsCallback()) and write the synchronization code inside it.

    If it is used properly, this method can solve problems that cannot be solved with other libraries (as we will see speaking about the Snap7Partner).

    In the picture we have several PLCs and we need to make a "type changeover" in a production line: to do this we need to transfer a new, large set of working parameters to each PLC.

     

    Though the callback resides in the user's program, it is called in the client's thread, so be aware that calling another client function inside the callback could lead to a stack overflow.

     

    Note

    InterlockedDecrement is a synchronization primitive present in Windows/Unix that performs an atomic decrement by one on a variable.
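The whole scenario — one callback per client, a shared counter, the last completion releasing the waiter — can be sketched as follows. All names are hypothetical; a Python lock plus counter stands in for InterlockedDecrement, and this is not the Snap7 API:

```python
import threading

class BroadcastCompletion:
    """Tracks a parameter download to several PLCs: each client's
    completion callback decrements a shared counter, and the callback
    that brings it to zero releases the waiting thread."""
    def __init__(self, plc_count):
        self._pending = plc_count
        self._lock = threading.Lock()   # plays the role of InterlockedDecrement
        self.all_done = threading.Event()

    def on_job_done(self):
        # Runs in a client's worker thread: keep it short, and do not
        # call other client functions from here.
        with self._lock:
            self._pending -= 1
            if self._pending == 0:
                self.all_done.set()
```

The main thread would register on_job_done as each client's callback, start all the transfers, then block on all_done.wait() until every PLC has received its parameters.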

     

    Target Compatibility

    As said, the S7 protocol is the backbone of Siemens communications.

    Many hardware components equipped with an Ethernet port can communicate via the S7 protocol; obviously the whole protocol is not supported by all of them (it would seem strange to download an FC into a CP343).

     

    S7 300/400/WinAC CPU

    They fully support the S7 Protocol.

    S7 1200/1500 CPU

    They use a modified S7 protocol with an extended telegram; the 1500 series has advanced security functions (such as encrypted telegrams). However, they can work in 300/400 compatibility mode and some functions can be executed; see also the S7 1200/1500 notes.

    S7 200/LOGO 0BA7

    Supported from Snap7 1.1.0.
    These PLCs take a different approach; see the S7 200 and LOGO 0BA7 chapters for a detailed description of their use with Snap7.

    SINAMICS Drives

    It is possible to communicate with the internal CPU and, for some models (the G120, for example), also to change drive parameters.

    A way to know what is possible with a given model is to check what is possible with an HMI panel/SCADA, since Snap7 can do the same things.

    CP (Communication processor)

    It is possible to communicate with them and see their internal SDBs, although this is not particularly useful; you can also use SZL information for debugging purposes.

     

    S7 Protocol partial compatibility list (See also § LOGO and S7200)

     

                              |------------------------- CPU --------------------------|      CP      |  DRIVE
    Function                    300   400   WinAC   Snap7S   1200   1500     343/443/IE    SINAMICS
    DB Read/Write                O     O      O       O       O     O(3)         -             O
    EB Read/Write                O     O      O       O       O      O           -             O
    AB Read/Write                O     O      O       O       O      O           -             O
    MK Read/Write                O     O      O       O       O      O           -             -
    TM Read/Write                O     O      O       O       -      -           -             -
    CT Read/Write                O     O      O       O       -      -           -             -
    Read SZL                     O     O      O       O       O      O           O             O
    Multi Read/Write             O     O      O       O       O      O           -             O
    Directory                    O     O      O       O       -      -           O            (2)
    Date and Time                O     O      O       O       -      -           -             O
    Control Run/Stop             O     O      O       O       -      -          (1)            O
    Security                     O     O      O       O       -      -           -             -
    Block Upload/Down/Delete     O     O      O       -       -      -           O             O

     

    Snap7S = Snap7Server

     

    (1)   After the “Stop” command, the connection is lost; a Stop/Run CPU sequence is needed.

    (2)   Though DBs are present and accessible, the directory shows only SDBs.

    (3)   See S71200/1500 notes.

     

     S7 1200/1500 Notes

    External equipment can access an S7 1200/1500 CPU using the “base” S7 protocol only by working as an HMI, i.e. only basic data transfers are allowed.

    All other PG operations (control/directory/etc.) must follow the extended protocol, which is not (yet) covered by Snap7.

    In particular, to access a DB in an S7 1500, some additional settings on the PLC side are needed.

    1.    Only global DBs can be accessed.

    2.    The optimized block access must be turned off.

    3.    The access level must be “full” and the “connection mechanism” must allow GET/PUT.

    Let’s see these settings in TIA Portal V12

    DB property

    Select the DB in the left pane under “Program blocks” and press Alt-Enter (or select “Properties…” in the context menu).

    Uncheck “Optimized block access”; by default it is checked.

     

     

    Protection

    Select the CPU project in the left pane and press Alt-Enter (or select “Properties…” in the context menu).

    Under Protection, select “Full access” and check “Permit access with PUT/GET ….” as in the figure.

     

     

    Snap7 MicroClient

    In the Snap7 project, TSnap7MicroClient is the ancestor of TSnap7Client class.

    It is not exported, i.e. you cannot reference it from outside the library; the only way to use it is to embed it in your C++ source code.

    TSnap7MicroClient implements the body of all S7 Client jobs and the synchronous interface functions.

    The exported TSnap7Client only adds the remaining asynchronous functions; it does not introduce any new S7 behavior.

    Why are we speaking about an internal object?

    The micro client is thread-independent and relies only on the sockets layer, i.e. you would embed it in your source code if:

    ·Your application will run in a micro-OS that has no threads layer.

    ·Your application will run on a real-time OS (such as QNX) or on an OS that does not have a standard threads layer (neither Windows nor POSIX). In this case you can create a native thread and use the micro client inside it.

     

    Micro client “extrapolation” is provided by design: there is a well-defined group of independent files to use.

    See the chapter Embedding Snap7 Microclient for further information.

  • NEST .NET client: brief usage notes

    Posted 2014-11-06 17:44:26
    Nest .net client
    

    --Quick Start--

    NEST is a high level elasticsearch client that still maps very closely to the original elasticsearch API. Requests and Responses have been mapped to CLR objects and NEST also comes with a powerful strongly typed query dsl.

    Installing
    From the package manager console inside visual studio

    PM> Install-Package NEST -PreRelease
    Or search for NEST in the Package Manager UI and go from there.

    Connecting
    Assuming Elasticsearch is already installed and running on your machine, go to http://localhost:9200 in your browser. You should see a response similar to this:
    {
      "status" : 200,
      "name" : "Sin-Eater",
      "version" : {
        "number" : "1.0.0",
        "build_hash" : "a46900e9c72c0a623d71b54016357d5f94c8ea32",
        "build_timestamp" : "2014-02-12T16:18:34Z",
        "build_snapshot" : false,
        "lucene_version" : "4.6"
      },
      "tagline" : "You Know, for Search"
    }

    To connect to your local node using NEST, simply:

    var node = new Uri("http://localhost:9200");

    var settings = new ConnectionSettings(
        node, 
        defaultIndex: "my-application"
    );

    var client = new ElasticClient(settings);
    Here we create a new connection to our node and specify a default index to use when we don't explicitly specify one. This can greatly reduce the number of places a magic string or constant has to be used.

    NOTE: specifying defaultIndex is optional but NEST might throw an exception later on if no index is specified. In fact a simple new ElasticClient() is sufficient to chat with http://localhost:9200 but explicitly specifying connection settings is recommended.

    node here is a Uri but can also be an IConnectionPool see the Elasticsearch.net section on connecting

    Indexing
    Now imagine we have a Person POCO

    public class Person
    {
        public string Id { get; set; }
        public string Firstname { get; set; }
        public string Lastname { get; set; }
    }

    that we would like to index in Elasticsearch. Indexing is now as simple as calling:
    var person = new Person
    {
        Id = "1",
        Firstname = "Martijn",
        Lastname = "Laarman"
    };

    var index = client.Index(person);
    This will index the object to /my-application/person/1. NEST is smart enough to infer the index and type name for the Person CLR type. It was also able to get the id of 1 through convention, by looking for an Id property on the specified object. Which property holds the Id can also be specified using the ElasticType attribute.

    The default index and type names are configurable per type. See the NEST section on connecting.

    Imagine you want to override all the defaults for this one call; you can do this with NEST. NEST's inference is very powerful, but if you want to pass explicit values you can always do so.

    var index = client.Index(person, i=>i
        .Index("another-index")
        .Type("another-type")
        .Id("1-should-not-be-the-id")
        .Refresh()
        .Ttl("1m")
    );

    This will index the document using /another-index/another-type/1-should-not-be-the-id?refresh=true&ttl=1m as the URL.

    Searching
    Now that we have indexed some documents we can begin to search for them.

    var searchResults = client.Search<Person>(s=>s
        .From(0)
        .Size(10)
        .Query(q=>q
             .Term(p=>p.Firstname, "martijn")
        )
    );

    searchResults.Documents now holds the first 10 people whose first name is Martijn.

    Please see the section on writing queries for details on how NEST helps you write terse elasticsearch queries.

    Again, the same inference rules apply, as this will hit /my-application/person/_search; and likewise, the inference can be overridden.

    // uses /other-index/other-type/_search
    var searchResults = client.Search<Person>(s=>s
        .Index("other-index")
        .OtherType("other-type")
    );


    // uses /_all/person/_search
    var searchResults = client.Search<Person>(s=>s
       .AllIndices()
    );

    // uses /_search
    var searchResults = client.Search<Person>(s=>s
        .AllIndices()
        .AllTypes() 
    );

    Object Initializer Syntax
    As you can see from the previous examples, NEST provides a terse, fluent syntax for constructing API calls to Elasticsearch. However, fear not if lambdas aren't your thing, you can now use the new object initializer syntax (OIS) introduced in 1.0.

    The OIS is an alternative to the familiar fluent syntax of NEST and works on all API endpoints. Anything that can be done with the fluent syntax can also be done using the OIS.

    For example, the earlier indexing example above can be re-written as:

    var indexRequest = new IndexRequest<Person>(person)
    {
        Index = "another-index",
        Type = "another-type",
        Id = "1-should-not-be-the-id",
        Refresh = true,
        Ttl = "1m"
    };


    var index = client.Index(indexRequest);
    And searching...

    QueryContainer query = new TermQuery
    {
        Field = "firstName",
        Value = "martijn"
    };

    var searchRequest = new SearchRequest
    {
        From = 0,
        Size = 10,
        Query = query
    };

    var searchResults = Client.Search<Person>(searchRequest);
    Many of the examples throughout this documentation will be written in both forms.

    --Connecting--

    This section describes how to instantiate a client and have it connect to the server.

    Choosing the right connection strategy
    NEST follows pretty much the same design as Elasticsearch.Net when it comes to choosing the right connection strategy.

    new ElasticClient();
    will create a non-failover client that talks to http://localhost:9200.

    var uri = new Uri("http://mynode.somewhere.com/");
    var settings = new ConnectionSettings(uri, defaultIndex: "my-application");
    This will create a non-failover client that talks with http://mynode.somewhere.com and uses the default index name my-application for calls which do not explicitly state an index name. Specifying a default index is optional but very handy.

    If you want a failover client, instead of passing a Uri, pass an IConnectionPool. See the Elasticsearch.Net documentation on cluster failover. All of its implementations can also be used with NEST.

    Changing the underlying connection
    By default NEST uses HTTP to talk to Elasticsearch; an alternative implementation of the transport layer can be injected using the constructor's optional second parameter:

    var client = new ElasticClient(settings, new ThriftConnection(settings));
    NEST comes with an HTTP connection (HttpConnection), a Thrift connection (ThriftConnection) and an in-memory connection (InMemoryConnection) that never hits Elasticsearch.

    You can also roll your own connection if desired by implementing the IConnection interface.

    Subclassing existing connection implementations
    In addition to implementing your own IConnection, the existing HttpConnection and ThriftConnection are extensible and can be subclassed in order to modify or extend their behavior.

    For instance, a common use case is the ability to add client certificates to web requests. You can subclass HttpConnection and override the CreateHttpWebRequest method that creates the web request, and add certificates to it like so:

    public class SignedHttpConnection : HttpConnection
    {
        private readonly X509CertificateCollection _certificates;

        public SignedHttpConnection(IConnectionConfigurationValues settings, X509CertificateCollection certificates)
            : base(settings)
        {
            _certificates = certificates;
        }

        protected override HttpWebRequest CreateHttpWebRequest(Uri uri, string method, byte[] data, IRequestConfiguration requestSpecificConfig)
        {
            var request = base.CreateHttpWebRequest(uri, method, data, requestSpecificConfig);
            request.ClientCertificates = _certificates;
            return request;
        }
    }

    Settings
    The NEST client can be configured by passing in an IConnectionSettingsValues object, which is a sub-interface of Elasticsearch.Net's IConnectionConfigurationValues. Therefore all of the settings that can be used to configure Elasticsearch.Net also apply here, including the cluster failover settings.

    The easiest way to pass IConnectionSettingsValues is to instantiate ConnectionSettings:

    var settings = new ConnectionSettings(
            myConnectionPool, 
            defaultIndex: "my-application"
        )
        .PluralizeTypeNames();

    IConnectionSettingsValues has the following options:

    AddContractJsonConverters

    Allows you to add a custom JsonConverter to the built-in JSON serialization by passing in a predicate for a type. This way they will be part of the cached Json.NET contract for a type.

    settings.AddContractJsonConverters(t => 
        typeof (Enum).IsAssignableFrom(t) 
            ? new StringEnumConverter() 
            : null);

    MapDefaultTypeIndices

    Maps a CLR type to Elasticsearch indices. Takes precedence over SetDefaultIndex.

    MapDefaultTypeNames

    Maps Elasticsearch type names to a CLR type. Takes priority over the global SetDefaultTypeNameInferrer.

    PluralizeTypeNames

    This calls SetDefaultTypeNameInferrer with an implementation that will pluralize type names. This used to be the default prior to NEST 1.0.

    SetDefaultIndex

    Sets the index to default to when no index is specified.

    SetDefaultPropertyNameInferrer

    By default NEST camelCases property names (EmailAddress => emailAddress) that do not have an explicit property name, either via an ElasticProperty attribute or because they are part of a dictionary whose keys should be treated verbatim. Here you can register a function that transforms property names (default casing, pre- or suffixing).

    SetDefaultTypeNameInferrer

    Allows you to override how type names should be represented. The default will call .ToLowerInvariant() on the type's name.

    SetJsonSerializerSettingsModifier

    Allows you to update the internal Json.NET serializer settings to your liking. Do not use this to add custom JSON converters; use AddContractJsonConverters instead.

    --Inference--

    Imagine we have a Person POCO
    public class Person
    {
        public string Id { get; set; }
        public string Firstname { get; set; }
        public string Lastname { get; set; }
    }

    That we would like to index in Elasticsearch

    var person = new Person
    {
        Id = "1",
        Firstname = "Martijn",
        Lastname = "Laarman"
    };


    var index = client.Index(person);
    This will index the object to /my-default-index/person/1.

    NEST is smart enough to infer the index and type name for the Person CLR type. It was also able to get the id of 1 by the convention of looking for an Id property on the specified object. Which property holds the Id can be specified using the ElasticType attribute.

    As noted in the quick start you can always pass explicit values for inferred ones.

    var index = client.Index(person, i=>i
        .Index("another-index")
        .Type("another-type")
        .Id("1-should-not-be-the-id")
        .Refresh()
        .Ttl("1m")
    );

    This will index the document using /another-index/another-type/1-should-not-be-the-id?refresh=true&ttl=1m as the URL.

    There are a couple of places within NEST where inference comes into play...

    Index Name Inference
    Whenever an explicit index name is not provided, NEST will look to see if the type has its own default index name on the connection settings.

     settings.MapDefaultTypeIndices(d=>d
        .Add(typeof(MyType), "my-type-index")
     );

     client = new ElasticClient(settings, defaultIndex: "my-default-index");

     // searches in /my-type-index/mytype/_search
     client.Search<MyType>()

     // searches in /my-default-index/person/_search
     client.Search<Person>()

    MyType defaults to my-type-index because it is explicitly configured, but Person will default to the global fallback my-default-index.

    Type Name Inference
    Whenever NEST needs a type name but is not given one explicitly, it will use the given CLR type to infer its Elasticsearch type name.

    settings.MapDefaultTypeNames(d=>d
        .Add(typeof(MyType), "MY_TYPO")
    );

    // searches in /inferred-index/MY_TYPO/_search
    client.Search<MyType>();


    // searches in /inferred-index/person/_search
    client.Search<Person>();

    Another way of setting an explicit inferred value for a type is through setting an attribute:

    [ElasticType(Name="automobile")]
    public class Car {} 
    As you can also see in the search example, NEST by default lowercases type names that do not have a configured inferred value.

    settings.SetDefaultTypeNameInferrer(t=>t.Name.ToUpperInvariant());
    Now all type names that have not been explicitly specified or explicitly configured will be uppercased.

    Prior to NEST 1.0 type names were by default lowercased AND pluralized, if you want this behavior back use:

    settings.PluralizeTypeNames();
    Property Name Inference
    In many places NEST allows you to pass property names and JSON paths as C# expressions, e.g.:

    .Query(q=>q
        .Term(p=>p.Followers.First().FirstName, "martijn"))

    NEST by default will camelCase properties. So the FirstName property above will be translated to "followers.firstName".
    This can be configured by setting

    settings.SetDefaultPropertyNameInferrer(p=>p);
    This will leave property names untouched.

    Properties marked with [ElasticProperty(Name="")] or [JsonProperty(Name="")] will pass the configured name verbatim.
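    As a rough illustration of the default inference rule (camelCase every segment of a property path), here is a small standalone sketch. It is written in Java rather than C# so it can be run as-is, and it is not NEST's actual implementation:

```java
class PropertyNameInference {
    // Default inferrer: camelCase every dot-separated segment of a property path,
    // e.g. "Followers.FirstName" -> "followers.firstName".
    static String infer(String path) {
        StringBuilder out = new StringBuilder();
        for (String segment : path.split("\\.")) {
            if (out.length() > 0) out.append('.');
            out.append(Character.toLowerCase(segment.charAt(0)))
               .append(segment.substring(1));
        }
        return out.toString();
    }
}
```

    Configuring SetDefaultPropertyNameInferrer(p=>p), as shown above, is the equivalent of replacing this function with the identity.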

    Id Inference
    Whenever an object is passed that needs to specify an id (e.g. index and bulk operations), the object is inspected to see if it has an Id property; if so, that value will be used.

    This inspection happens once per type. The function that returns the id for an object of type T is cached; therefore, it is resolved only once per type T throughout the application's lifetime.

    An example of this is at the top of this documentation where the Index() call could figure out the object's id was 1.

    You can control which property holds the Id:

    [ElasticType(IdProperty="CrazyGuid")]
    public class Car 
    {
        public Guid CrazyGuid { get; set; }
    }

    This will cause the id inference to happen on CrazyGuid instead of Id.
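    The once-per-type caching described above can be sketched as a reflective lookup memoized per class. This is an illustrative Java sketch with hypothetical names; NEST's real inferrer also handles attributes and configured overrides:

```java
import java.lang.reflect.Method;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

class IdInferrer {
    // One resolved accessor per class, computed on first use and reused afterwards.
    private static final Map<Class<?>, Function<Object, Object>> CACHE = new ConcurrentHashMap<>();

    static Object idOf(Object o) {
        return CACHE.computeIfAbsent(o.getClass(), IdInferrer::resolve).apply(o);
    }

    private static Function<Object, Object> resolve(Class<?> type) {
        try {
            // Convention: look for a public getId() (NEST looks for an Id property).
            Method getter = type.getMethod("getId");
            return obj -> {
                try { return getter.invoke(obj); }
                catch (ReflectiveOperationException e) { throw new IllegalStateException(e); }
            };
        } catch (NoSuchMethodException e) {
            return obj -> null; // no Id property: caller must supply one explicitly
        }
    }
}

class Person {
    private final int id;
    Person(int id) { this.id = id; }
    public int getId() { return id; }
}
```

    The reflective lookup runs once per class; every later call for the same class reuses the cached accessor.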

    --Handling Responses--

    All the return objects from API calls in NEST implement:
    public interface IResponse
    {
        bool IsValid { get; }
        IElasticsearchResponse ConnectionStatus { get; }
        ElasticInferrer Infer { get; }
    }

    IsValid will return whether a response is valid or not. A response is usually only valid when an HTTP status code in the 200 range was returned. Some calls also accept 404 as a valid response, however.

    Even if Elasticsearch returns a 200, the response will sometimes contain more information on the validity of the call. It's highly recommended to read the Elasticsearch documentation for a call and check for these properties on the response as well.

    ConnectionStatus is the response as it was returned by Elasticsearch.Net. Its section on handling responses applies here as well.

    --Writing Queries--

    One of the most important things to grasp when using NEST is how to write queries. NEST offers you several possibilities.
    All the examples in this section are assumed to be wrapped in:

    var result = client.Search<ElasticSearchProject>(s=>s
        .From(0)
        .Size(10)
        // Query here
    );

    Or if using the object initializer syntax:

    var result = new SearchRequest
    {
        From = 0,
        Size = 10,
        // Query here

    };

    Raw Strings
    Although not preferred, many folks like to build their own JSON strings and just pass that along:

    .QueryRaw("\"match_all\" : { }")
    .FilterRaw("\"match_all\" : { }")

    NEST does not modify this in any way and writes it straight into the JSON output.

    Query DSL
    The preferred way to write queries, since it gives you a lot of cool features.

    Lambda Expressions

    .Query(q=>q
        .Term(p=>p.Name, "NEST")
    )
    Here you'll see we can use expressions to address properties in a type-safe manner. This also works for IEnumerable types, e.g.

    .Query(q=>q
        .Term(p=>p.Followers.First().FirstName, "NEST")
    )

    Because these property lookups are expressions you don't have to do any null checks. The previous would expand to the followers.firstName property name.

    Of course if you need to pass the property name as string NEST will allow you to do so:

    .Query(q=>q
        .Term("followers.firstName", "martijn")
    )

    This can be alternatively written using the object initializer syntax:

    QueryContainer query = new TermQuery
    {
        Field = "followers.firstName",
        Value = "NEST"
    };

    Query = query
    Static Query/Filter Generator

    Sometimes you'll need to reuse a filter or query often. To aid with this you can also write queries like this:

    var termQuery = Query<ElasticSearchProject>
        .Term(p=>p
            .Followers.First().FirstName, "martijn");


    .Query(q=>q
        .Bool(bq=>bq
            .Must(
                mq=>mq.MatchAll()
                , termQuery
            )
        )
    )

    Similarly Filter<T>.[Filter]() methods exist for filters.

    Boolean Queries

    As can be seen in the previous example writing out boolean queries can turn into a really tedious and verbose effort. Luckily, NEST supports bitwise operators, so we can rewrite the previous as such:

    .Query(q=>q.MatchAll() && termQuery)
    Note how we are mixing and matching the lambda and static queries here.
    We can also do the same thing using the OIS:

    QueryContainer query = new MatchAllQuery() && new TermQuery
    {
        Field = "firstName",
        Value = "martijn"
    };

    Similarly, an OR looks like this:

    Fluent:

    .Query(q=>
        q.Term("name", "Nest")
        || q.Term("name", "Elastica")
    )

    Or using the object initializer syntax:

    QueryContainer query1 = new TermQuery
    {
        Field = "name",
        Value = "Nest"
    };
    QueryContainer query2 = new TermQuery
    {
        Field = "name",
        Value = "Elastica"
    };

    Query = query1 || query2
    NOTs are also supported:


    .Query(q=>
        q.Term("language", "php")
        && !q.Term("name", "Elastica")
    )


    Query = query1 && !query2
    This will query for all the php clients except Elastica.

    You can mix and match this to any level of complexity until it satisfies your query requirements.

    .Query(q=>
        (q.Term("language", "php")
            && !q.Term("name", "Elastica")
        )
        || q.Term("name", "NEST")
    )



    Query = (query1 && !query2) || query3
    This will query for all php clients except Elastica, or where the name equals NEST.

    Clean Output Support

    Normally writing three boolean must clauses looks like this (pseudocode):

    must
        clause1
        clause2
        clause3
    A naive implementation of the bitwise operators would make all the queries sent to Elasticsearch look like:

    must
        must
            clause1
            clause2
        clause3
    This degrades rather rapidly and makes inspecting generated queries quite a chore. NEST does its best to detect these cases and will always write them in the first, cleaner form.

    Conditionless Queries
    Writing complex boolean queries is one thing, but more often than not you'll want to make decisions on how to query based on user input.

    public class UserInput
    {
        public string Name { get; set; }
        public string FirstName { get; set; }
        public int? LOC { get; set; }
    }

    and then

    .Query(q=> {
            QueryContainer query = null;
            if (!string.IsNullOrEmpty(userInput.Name))
                query &= q.Term(p=>p.Name, userInput.Name);
            if (!string.IsNullOrEmpty(userInput.FirstName))
                query &= q.Term("followers.firstName", userInput.FirstName);
            if (userInput.LOC.HasValue)
                query &= q.Range(r=>r.OnField(p=>p.Loc).From(userInput.LOC.Value));
            return query;
        })

    This again becomes tedious and verbose rather quickly as well. Therefore, NEST allows you to write the previous query as:

    .Query(q=>
        q.Term(p=>p.Name, userInput.Name)
        && q.Term("followers.firstName", userInput.FirstName)
        && q.Range(r=>r.OnField(p=>p.Loc).From(userInput.LOC))
    )

    If any of the queries would result in an empty query, they won't be sent to Elasticsearch.
    So if all the terms on userInput are null (or empty strings) except userInput.LOC, NEST won't even wrap the range query in a boolean query but will issue a plain range query.
    If all of them are empty it will result in a match_all query.
    This conditionless behavior is turned on by default but can be turned off like so:

     var result = client.Search<ElasticSearchProject>(s=>s
        .From(0)
        .Size(10)
        .Strict() //disable conditionless queries by default
        ...
    );

    However, individual queries can opt back in or out.

    .Query(q=>
        q.Strict().Term(p=>p.Name, userInput.Name)
        && q.Term("followers.firstName", userInput.FirstName)
        && q.Strict(false).Range(r=>r.OnField(p=>p.Loc).From(userInput.LOC))
    )

    In this example, if userInput.Name is null or empty it will result in a DslException. The range query will use conditionless logic whether the SearchDescriptor uses .Strict() or not.

    Please note that conditionless query logic propagates:

    q.Strict().Term(p=>p.Name, userInput.Name)
    && q.Term("followers.firstName", userInput.FirstName)
    && q.Filtered(fq => fq
        .Query(qff => 
            qff.Terms(p => p.Country, userInput.Countries)
            && qff.Terms(p => p.Loc, userInput.LOC)
        )
    )

    If both userInput.Countries and userInput.LOC are null or empty, the entire filtered query will not be issued.
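    The conditionless rules above — drop empty clauses, avoid a redundant bool wrapper around a single clause, and fall back to match_all when everything is empty — can be sketched independently of NEST. The clause strings below are made up purely for illustration:

```java
import java.util.ArrayList;
import java.util.List;

class Conditionless {
    // AND-combine clauses, silently dropping null/empty ones
    // (the conditionless behavior described in the text).
    static String and(String... clauses) {
        List<String> kept = new ArrayList<>();
        for (String c : clauses) {
            if (c != null && !c.isEmpty()) kept.add(c);
        }
        if (kept.isEmpty()) return "match_all";    // nothing usable: match everything
        if (kept.size() == 1) return kept.get(0);  // single clause: no bool wrapper
        return "bool(must: " + String.join(", ", kept) + ")";
    }
}
```

    This mirrors the three outcomes described above: only the range term set yields a bare range query, all-empty input yields match_all, and two or more surviving clauses get one flat bool wrapper.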

    --Tips & Tricks--

    This page lists some general tips and tricks provided by the community.

    Posting Elasticsearch queries from JavaScript
    Consider a scenario where you are using client-side libraries like elasticjs
    but want security to be provided by server-side business logic. Here is an example using Web API.

    NOTE: make sure dynamic scripting is turned off if you decide to open the full query DSL to the client!

        [RoutePrefix("api/Search")]
        public class SearchController : ApiController
        {
            [ActionName("_search")]
            public IHttpActionResult Post([FromBody]SearchDescriptor<dynamic> query)
            {
                var client = new ElasticClient();

                //Your server side security goes here

                var result = client.Search(q => query);
                return Ok(result);
            }
        }

    The fragments [RoutePrefix("api/Search")] and [ActionName("_search")] will change your Elasticsearch URL from http://localhost:9200/_search to http://yourwebsite/api/Search/_search and let things work as normal. The fragment [FromBody]SearchDescriptor<dynamic> query will convert the JSON query into a NEST SearchDescriptor. The fragment client.Search(q => query) will execute the query.

    --
    The official site's navigation has a core section with documentation on settings, index, and mapping.

    --Official site--

    from http://nest.azurewebsites.net/nest/writing-queries.html

    1. Overview

    This article is an introduction to Jedis, a client library in Java for Redis – the popular in-memory data structure store that can persist on disk as well. It is driven by a keystore-based data structure to persist data and can be used as a database, cache, message broker, etc.

    First, we are going to explain in which kind of situations Jedis is useful and what it is about.

    In the subsequent sections we are elaborating on the various data structures and explaining transactions, pipelining and the publish/subscribe feature. We conclude with connection pooling and Redis Cluster.


    2. Why Jedis?

    Redis lists the most well-known client libraries on their official site. There are multiple alternatives to Jedis, but only two others currently merit their recommendation star: lettuce and Redisson.

    These two clients do have some unique features, like thread safety, transparent reconnection handling, and an asynchronous API, all of which Jedis lacks.

    However, it is small and considerably faster than the other two. Besides, it is the client library of choice of the Spring Framework developers, and it has the biggest community of all three.

    3. Maven Dependencies

    Let’s start by declaring the only dependency we will need in the pom.xml:

    <dependency>
        <groupId>redis.clients</groupId>
        <artifactId>jedis</artifactId>
        <version>2.8.1</version>
    </dependency>

    If you’re looking for the latest version of the library, check out this page.

    4. Redis Installation

    You will need to install and fire up one of the latest versions of Redis. We are running the latest stable version at this moment (3.2.1), but any post 3.x version should be okay.

    Find here more information about Redis for Linux and Macintosh; they have very similar basic installation steps. Windows is not officially supported, but this port is well maintained.

    After that we can directly dive in and connect to it from our Java code:

    Jedis jedis = new Jedis();

    The default constructor will work just fine unless you have started the service on a non-default port or a remote machine, in which case you can configure it correctly by passing the correct values as parameters into the constructor.

    5. Redis Data Structures

    Most of the native operation commands are supported and, conveniently enough, they normally share the same method name.

    5.1. Strings

    Strings are the most basic kind of Redis value, useful for when you need to persist simple key-value data types:

    jedis.set("events/city/rome", "32,15,223,828");
    String cachedResponse = jedis.get("events/city/rome");

    The variable cachedResponse will hold the value 32,15,223,828. Coupled with expiration support, discussed later, it can work as a lightning-fast, simple-to-use cache layer for HTTP requests received by your web application and for other caching requirements.

    5.2. Lists

    Redis Lists are simply lists of strings sorted by insertion order, which makes them an ideal tool to implement, for instance, message queues:

    jedis.lpush("queue#tasks", "firstTask");
    jedis.lpush("queue#tasks", "secondTask");

    String task = jedis.rpop("queue#tasks");

    The variable task will hold the value firstTask. Remember that you can serialize any object and persist it as a string, so messages in the queue can carry more complex data when required.
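    One way to carry a complex object in a list entry, sketched here with only the JDK (Java serialization plus Base64; in practice you might prefer JSON). The resulting string could be pushed with jedis.lpush exactly like the plain strings above:

```java
import java.io.*;
import java.util.Base64;

class QueueCodec {
    // Serialize any task object into a plain string that fits a Redis list entry.
    static String encode(Serializable task) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
                out.writeObject(task);
            }
            return Base64.getEncoder().encodeToString(buf.toByteArray());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Reverse the encoding on the consumer side, after rpop.
    static Object decode(String payload) {
        byte[] bytes = Base64.getDecoder().decode(payload);
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return in.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

    With this in place, jedis.lpush("queue#tasks", QueueCodec.encode(task)) would enqueue the object and QueueCodec.decode(jedis.rpop("queue#tasks")) would recover it.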

    5.3. Sets

    Redis Sets are an unordered collection of Strings that come in handy when you want to exclude repeated members:

    jedis.sadd("nicknames", "nickname#1");
    jedis.sadd("nicknames", "nickname#2");
    jedis.sadd("nicknames", "nickname#1");

    Set<String> nicknames = jedis.smembers("nicknames");
    boolean exists = jedis.sismember("nicknames", "nickname#1");

    The Java Set nicknames will have a size of 2, since the second addition of nickname#1 was ignored. Also, the exists variable will have a value of true; the method sismember lets you check for the existence of a particular member quickly.

    5.4. Hashes

    Redis Hashes are mapping between String fields and String values:

    jedis.hset("user#1", "name", "Peter");
    jedis.hset("user#1", "job", "politician");

    String name = jedis.hget("user#1", "name");

    Map<String, String> fields = jedis.hgetAll("user#1");
    String job = fields.get("job");

    As you can see, hashes are a very convenient data type when you want to access an object's properties individually, since you do not need to retrieve the whole object.

    5.5. Sorted Sets

    Sorted Sets are like a Set in which each member has an associated ranking that is used for sorting them:

    Map<String, Double> scores = new HashMap<>();

    scores.put("PlayerOne", 3000.0);
    scores.put("PlayerTwo", 1500.0);
    scores.put("PlayerThree", 8200.0);

    scores.keySet().forEach(player -> {
        jedis.zadd("ranking", scores.get(player), player);
    });

    String player = jedis.zrevrange("ranking", 0, 1).iterator().next();
    long rank = jedis.zrevrank("ranking", "PlayerOne");

    The variable player will hold the value PlayerThree because we are retrieving the top 1 player and he is the one with the highest score. The rank variable will have a value of 1 because PlayerOne is the second in the ranking and the ranking is zero-based.
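    The zero-based reverse ranking can be double-checked with a plain-Java model of the same score map. This is an illustration of the semantics only, not how Redis stores sorted sets:

```java
import java.util.Collections;
import java.util.Map;

class Ranking {
    // Reverse rank, the way ZREVRANK reports it: 0 for the highest score.
    static long zrevrank(Map<String, Double> scores, String member) {
        double own = scores.get(member);
        return scores.values().stream().filter(s -> s > own).count();
    }

    // The member ZREVRANGE(0, 0) would return first: the highest score.
    static String top(Map<String, Double> scores) {
        return Collections.max(scores.entrySet(), Map.Entry.comparingByValue()).getKey();
    }
}
```

    Feeding in the article's scores gives top = PlayerThree and zrevrank("PlayerOne") = 1, matching the Jedis result above.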

    6. Transactions

    Transactions guarantee atomic and thread-safe operations, which means that requests from other clients will never be handled concurrently during Redis transactions:

    String friendsPrefix = "friends#";
    String userOneId = "4352523";
    String userTwoId = "5552321";

    Transaction t = jedis.multi();
    t.sadd(friendsPrefix + userOneId, userTwoId);
    t.sadd(friendsPrefix + userTwoId, userOneId);
    t.exec();

    You can even make a transaction success dependent on a specific key by “watching” it right before you instantiate your Transaction:

    jedis.watch("friends#deleted#" + userOneId);

    If the value of that key changes before the transaction is executed, the transaction will not be completed successfully.

    7. Pipelining

    When we have to send multiple commands, we can pack them together in one request and save connection overhead by using pipelines; this is essentially a network optimization. As long as the operations are mutually independent, we can take advantage of this technique:

    String userOneId = "4352523";
    String userTwoId = "4849888";

    Pipeline p = jedis.pipelined();
    p.sadd("searched#" + userOneId, "paris");
    p.zadd("ranking", 126, userOneId);
    p.zadd("ranking", 325, userTwoId);
    Response<Boolean> pipeExists = p.sismember("searched#" + userOneId, "paris");
    Response<Set<String>> pipeRanking = p.zrange("ranking", 0, -1);
    p.sync();

    Boolean exists = pipeExists.get();
    Set<String> ranking = pipeRanking.get();

    Notice we do not get direct access to the command responses; instead, we're given a Response instance from which we can request the underlying result after the pipeline has been synced.
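    The deferred-Response pattern can be modeled in a few lines of plain Java: queued commands only produce values when sync() runs, and reading a Response early is an error. This toy model only mirrors the API shape; it is not Jedis's implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// A deferred result: reading it before the pipeline is synced is a programming error.
class Response<T> {
    private T value;
    private boolean ready;

    void complete(T value) { this.value = value; this.ready = true; }

    T get() {
        if (!ready) throw new IllegalStateException("pipeline not synced yet");
        return value;
    }
}

class Pipeline {
    private final List<Runnable> pending = new ArrayList<>();

    // Queue a command; nothing executes until sync().
    <T> Response<T> queue(Supplier<T> command) {
        Response<T> r = new Response<>();
        pending.add(() -> r.complete(command.get()));
        return r;
    }

    void sync() {
        pending.forEach(Runnable::run); // one round-trip in real Jedis
        pending.clear();
    }
}
```

    The same flow as in the Jedis snippet applies: queue, sync, then read each Response.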

    8. Publish/Subscribe

    We can use the Redis messaging broker functionality to send messages between the different components of our system. Make sure the subscriber and publisher threads do not share the same Jedis connection.

    8.1. Subscriber

    Subscribe and listen to messages sent to a channel:

    Jedis jSubscriber = new Jedis();
    jSubscriber.subscribe(new JedisPubSub() {
        @Override
        public void onMessage(String channel, String message) {
            // handle message
        }
    }, "channel");

    Subscribe is a blocking method, so you will need to unsubscribe from the JedisPubSub explicitly. We have overridden the onMessage method, but there are many more useful methods available to override.

    8.2. Publisher

    Then simply send messages to that same channel from the publisher’s thread:

    Jedis jPublisher = new Jedis();
    jPublisher.publish("channel", "test message");

    9. Connection Pooling

    It is important to know that the way we have been dealing with our Jedis instance is naive. In a real-world scenario, you do not want to use a single instance in a multi-threaded environment as a single instance is not thread-safe.

    Luckily enough we can easily create a pool of connections to Redis for us to reuse on demand, a pool that is thread safe and reliable as long as you return the resource to the pool when you are done with it.

    Let’s create the JedisPool:

    final JedisPoolConfig poolConfig = buildPoolConfig();
    JedisPool jedisPool = new JedisPool(poolConfig, "localhost");

    private JedisPoolConfig buildPoolConfig() {
        final JedisPoolConfig poolConfig = new JedisPoolConfig();
        poolConfig.setMaxTotal(128);
        poolConfig.setMaxIdle(128);
        poolConfig.setMinIdle(16);
        poolConfig.setTestOnBorrow(true);
        poolConfig.setTestOnReturn(true);
        poolConfig.setTestWhileIdle(true);
        poolConfig.setMinEvictableIdleTimeMillis(Duration.ofSeconds(60).toMillis());
        poolConfig.setTimeBetweenEvictionRunsMillis(Duration.ofSeconds(30).toMillis());
        poolConfig.setNumTestsPerEvictionRun(3);
        poolConfig.setBlockWhenExhausted(true);
        return poolConfig;
    }

    Since the pool instance is thread-safe, you can store it somewhere statically, but you should take care of destroying the pool to avoid leaks when the application is shut down.

    Now we can make use of our pool from anywhere in the application when needed:

    try (Jedis jedis = jedisPool.getResource()) {
        // do operations with jedis resource
    }

    We used the Java try-with-resources statement to avoid having to close the Jedis resource manually, but if you cannot use this statement you can also close the resource manually in a finally clause.
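    Spelled out without try-with-resources, the pattern looks like this. CountingResource is a stand-in for a pooled Jedis handle so the sketch can run without a Redis server; in real code you would call jedisPool.getResource() and jedis.close() in the same positions:

```java
// Equivalent of the try-with-resources form above, written with an explicit finally.
class CountingResource implements AutoCloseable {
    static int closed = 0;
    @Override public void close() { closed++; } // a pooled Jedis returns itself to the pool here
    String ping() { return "PONG"; }
}

class Demo {
    static String useResource() {
        CountingResource res = new CountingResource(); // jedisPool.getResource() in real code
        try {
            return res.ping();                         // do operations with the resource
        } finally {
            res.close();                               // always runs, even on exception
        }
    }
}
```

    Either form guarantees the resource goes back to the pool; forgetting the finally block is exactly how pools get exhausted.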

    Make sure you use a pool like we have described in your application if you do not want to face nasty multi-threading issues. You can obviously play with the pool configuration parameters to adapt it to the best setup in your system.

    10. Redis Cluster

    This Redis implementation provides easy scalability and high availability, we encourage you to read their official specification if you are not familiar with it. We will not cover Redis cluster setup since that is a bit out of the scope for this article, but you should have no problems in doing so when you are done with its documentation.

    Once we have that ready, we can start using it from our application:

    try (JedisCluster jedisCluster = new JedisCluster(new HostAndPort("localhost", 6379))) {
        // use the jedisCluster resource as if it was a normal Jedis resource
    } catch (IOException e) {}

    We only need to provide the host and port details from one of our master instances, it will auto-discover the rest of the instances in the cluster.

    This is certainly a very powerful feature, but it is not a silver bullet. When using Redis Cluster you cannot perform transactions or use pipelines, two important features on which many applications rely for ensuring data integrity.

    Transactions are disabled because, in a clustered environment, keys will be persisted across multiple instances. Operation atomicity and thread safety cannot be guaranteed for operations that involve command execution in different instances.

    Some advanced key-creation strategies can ensure that data you need to keep together gets persisted on the same instance. In theory, that should enable you to perform transactions successfully using one of the underlying Jedis instances of the Redis Cluster.

    Unfortunately, you currently cannot find out through Jedis which Redis instance holds a particular key (something Redis itself supports natively), so you do not know on which instance you must perform the transaction. If you are interested in this, you can find more information here.

    11. Conclusion

    The vast majority of the features from Redis are already available in Jedis and its development moves forward at a good pace.

    It gives you the ability to integrate a powerful in-memory storage engine in your application with very little hassle, just do not forget to set up connection pooling to avoid thread safety issues.

    You can find code samples in the GitHub project.
