  • Setting up a ZooKeeper cluster and a ClickHouse cluster


    Step 1: set up the ZooKeeper cluster
    stack.yml

    version: '3.1'
    
    services:
      zoo1:
        image: zookeeper
        restart: always
        hostname: zoo1
        ports:
          - 2191:2181
        environment:
          ZOO_MY_ID: 1
          ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    
      zoo2:
        image: zookeeper
        restart: always
        hostname: zoo2
        ports:
          - 2192:2181
        environment:
          ZOO_MY_ID: 2
          ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181
    
      zoo3:
        image: zookeeper
        restart: always
        hostname: zoo3
        ports:
          - 2193:2181
        environment:
          ZOO_MY_ID: 3
          ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
    

    Start the services: docker-compose -f stack.yml up
    Check a container's IP: docker inspect zoo1 | grep '"IPAddress"'
    Enter a container (e.g. docker exec -it zoo1 /bin/bash) and run zkServer.sh status to check the ZooKeeper state.
    Run zkCli.sh to open the client (occasionally a node does not start properly and reports an error).
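
    To confirm the whole ensemble is healthy without attaching to each container by hand, a loop like the following can help (a minimal sketch; it assumes the container names zoo1, zoo2 and zoo3 from the compose file above, and that zkServer.sh is on the PATH of the official zookeeper image, as the commands above suggest):

    #!/bin/bash
    # Print the role (leader/follower) reported by each ZooKeeper container.
    for c in zoo1 zoo2 zoo3; do
      echo "== $c =="
      docker exec "$c" zkServer.sh status
    done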

    Step 2: set up the ClickHouse cluster

     docker run -d --name ch-server1 --ulimit nofile=262144:262144 yandex/clickhouse-server
    
     docker run -d --name ch-server2 --ulimit nofile=262144:262144 yandex/clickhouse-server
    
     docker run -d --name ch-server3 --ulimit nofile=262144:262144 yandex/clickhouse-server
    
    

    Copy the configuration files out of the container, edit them locally, and copy them back:

    docker cp ch-server1:/etc/clickhouse-server/ ./etc/clickhouse-server/
    docker cp ./etc/clickhouse-server/clickhouse-server/ ch-server1:/etc/
    docker cp ./etc/clickhouse-server/clickhouse-server/ ch-server2:/etc/
    docker cp ./etc/clickhouse-server/clickhouse-server/ ch-server3:/etc/
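
    One step the copy commands above do not show is restarting each server so that the edited config (and the per-container <replica> value in metrika.xml, which must be unique on every host) actually takes effect. A minimal sketch, assuming the three container names used above:

    #!/bin/bash
    # Restart each ClickHouse container so the edited config is re-read,
    # then wait briefly and confirm the server answers queries again.
    for c in ch-server1 ch-server2 ch-server3; do
      docker restart "$c"
      sleep 5
      docker exec "$c" clickhouse-client --query "SELECT 1"
    done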
    

    Main configuration: config.xml

    <?xml version="1.0"?>
    <!--
      NOTE: User and query level settings are set up in "users.xml" file.
    -->
    <yandex>
        <logger>
            <!-- Possible levels: https://github.com/pocoproject/poco/blob/develop/Foundation/include/Poco/Logger.h#L105 -->
            <level>trace</level>
            <log>/var/log/clickhouse-server/clickhouse-server.log</log>
            <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
            <size>1000M</size>
            <count>10</count>
            <!-- <console>1</console> --> <!-- Default behavior is autodetection (log to console if not daemon mode and is tty) -->
        </logger>
        <!--display_name>production</display_name--> <!-- It is the name that will be shown in the client -->
        <http_port>8123</http_port>
        <tcp_port>9000</tcp_port>
    
        <!-- For HTTPS and SSL over native protocol. -->
        <!--
        <https_port>8443</https_port>
        <tcp_port_secure>9440</tcp_port_secure>
        -->
    
        <!-- Used with https_port and tcp_port_secure. Full ssl options list: https://github.com/ClickHouse-Extras/poco/blob/master/NetSSL_OpenSSL/include/Poco/Net/SSLManager.h#L71 -->
        <openSSL>
            <server> <!-- Used for https server AND secure tcp port -->
                <!-- openssl req -subj "/CN=localhost" -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt -->
                <certificateFile>/etc/clickhouse-server/server.crt</certificateFile>
                <privateKeyFile>/etc/clickhouse-server/server.key</privateKeyFile>
                <!-- openssl dhparam -out /etc/clickhouse-server/dhparam.pem 4096 -->
                <dhParamsFile>/etc/clickhouse-server/dhparam.pem</dhParamsFile>
                <verificationMode>none</verificationMode>
                <loadDefaultCAFile>true</loadDefaultCAFile>
                <cacheSessions>true</cacheSessions>
                <disableProtocols>sslv2,sslv3</disableProtocols>
                <preferServerCiphers>true</preferServerCiphers>
            </server>
    
            <client> <!-- Used for connecting to https dictionary source -->
                <loadDefaultCAFile>true</loadDefaultCAFile>
                <cacheSessions>true</cacheSessions>
                <disableProtocols>sslv2,sslv3</disableProtocols>
                <preferServerCiphers>true</preferServerCiphers>
                <!-- Use for self-signed: <verificationMode>none</verificationMode> -->
                <invalidCertificateHandler>
                    <!-- Use for self-signed: <name>AcceptCertificateHandler</name> -->
                    <name>RejectCertificateHandler</name>
                </invalidCertificateHandler>
            </client>
        </openSSL>
    
        <!-- Default root page on http[s] server. For example load UI from https://tabix.io/ when opening http://localhost:8123 -->
        <!--
        <http_server_default_response><![CDATA[<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>]]></http_server_default_response>
        -->
    
        <!-- Port for communication between replicas. Used for data exchange. -->
        <interserver_http_port>9009</interserver_http_port>
    
        <!-- Hostname that is used by other replicas to request this server.
             If not specified, than it is determined analoguous to 'hostname -f' command.
             This setting could be used to switch replication to another network interface.
          -->
        <!--
        <interserver_http_host>example.yandex.ru</interserver_http_host>
        -->
    
        <!-- Listen specified host. use :: (wildcard IPv6 address), if you want to accept connections both with IPv4 and IPv6 from everywhere. -->
        <!-- <listen_host>::</listen_host> -->
        <!-- Same for hosts with disabled ipv6: -->
        <!-- <listen_host>0.0.0.0</listen_host> -->
    
        <!-- Default values - try listen localhost on ipv4 and ipv6: -->
        <!--
        <listen_host>::1</listen_host>
        <listen_host>127.0.0.1</listen_host>
        -->
        <!-- Don't exit if ipv6 or ipv4 unavailable, but listen_host with this protocol specified -->
        <!-- <listen_try>0</listen_try> -->
    
        <!-- Allow listen on same address:port -->
        <!-- <listen_reuse_port>0</listen_reuse_port> -->
    
        <!-- <listen_backlog>64</listen_backlog> -->
    
        <max_connections>4096</max_connections>
        <keep_alive_timeout>3</keep_alive_timeout>
    
        <!-- Maximum number of concurrent queries. -->
        <max_concurrent_queries>100</max_concurrent_queries>
    
        <!-- Set limit on number of open files (default: maximum). This setting makes sense on Mac OS X because getrlimit() fails to retrieve
             correct maximum value. -->
        <!-- <max_open_files>262144</max_open_files> -->
    
        <!-- Size of cache of uncompressed blocks of data, used in tables of MergeTree family.
             In bytes. Cache is single for server. Memory is allocated only on demand.
             Cache is used when 'use_uncompressed_cache' user setting turned on (off by default).
             Uncompressed cache is advantageous only for very short queries and in rare cases.
          -->
        <uncompressed_cache_size>8589934592</uncompressed_cache_size>
    
        <!-- Approximate size of mark cache, used in tables of MergeTree family.
             In bytes. Cache is single for server. Memory is allocated only on demand.
             You should not lower this value.
          -->
        <mark_cache_size>5368709120</mark_cache_size>
    
    
        <!-- Path to data directory, with trailing slash. -->
        <path>/var/lib/clickhouse/</path>
    
        <!-- Path to temporary data for processing hard queries. -->
        <tmp_path>/var/lib/clickhouse/tmp/</tmp_path>
    
        <!-- Directory with user provided files that are accessible by 'file' table function. -->
        <user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
    
        <!-- Path to configuration file with users, access rights, profiles of settings, quotas. -->
        <users_config>users.xml</users_config>
    
        <!-- Default profile of settings. -->
        <default_profile>default</default_profile>
    
        <!-- System profile of settings. This settings are used by internal processes (Buffer storage, Distibuted DDL worker and so on). -->
        <!-- <system_profile>default</system_profile> -->
    
        <!-- Default database. -->
        <default_database>default</default_database>
    
        <!-- Server time zone could be set here.
    
             Time zone is used when converting between String and DateTime types,
              when printing DateTime in text formats and parsing DateTime from text,
              it is used in date and time related functions, if specific time zone was not passed as an argument.
    
             Time zone is specified as identifier from IANA time zone database, like UTC or Africa/Abidjan.
             If not specified, system time zone at server startup is used.
    
             Please note, that server could display time zone alias instead of specified name.
             Example: W-SU is an alias for Europe/Moscow and Zulu is an alias for UTC.
        -->
        <!-- <timezone>Europe/Moscow</timezone> -->
    
        <!-- You can specify umask here (see "man umask"). Server will apply it on startup.
             Number is always parsed as octal. Default umask is 027 (other users cannot read logs, data files, etc; group can only read).
        -->
        <!-- <umask>022</umask> -->
    
        <!-- Perform mlockall after startup to lower first queries latency
              and to prevent clickhouse executable from being paged out under high IO load.
             Enabling this option is recommended but will lead to increased startup time for up to a few seconds.
        -->
        <mlock_executable>false</mlock_executable>
    
        <!-- Configuration of clusters that could be used in Distributed tables.
             https://clickhouse.yandex/docs/en/table_engines/distributed/
          -->
        <remote_servers incl="clickhouse_remote_servers" >
            <!-- Test only shard config for testing distributed storage -->
    
           <!-- <cluster_2s_1r>
    
            (data shard 1)
                <shard>
                    <internal_replication>true</internal_replication>
                    <replica>
                        <host>172.17.0.3</host>
                        <port>9000</port>
                        <user>default</user>
                        <password></password>
                    </replica>
                </shard>
    
            (data shard 2)
                <shard>
                    <internal_replication>true</internal_replication>
                    <replica>
                        <host>172.17.0.4</host>
                        <port>9000</port>
                        <user>default</user>
                        <password></password>
                    </replica>
                </shard>
    
            </cluster_2s_1r>-->
    
        </remote_servers>
    
    
        <!-- If element has 'incl' attribute, then for it's value will be used corresponding substitution from another file.
             By default, path to file with substitutions is /etc/metrika.xml. It could be changed in config in 'include_from' element.
             Values for substitutions are specified in /yandex/name_of_substitution elements in that file.
          -->
    
        <!-- ZooKeeper is used to store metadata about replicas, when using Replicated tables.
             Optional. If you don't use replicated tables, you could omit that.
    
             See https://clickhouse.yandex/docs/en/table_engines/replication/
          -->
        <zookeeper incl="zookeeper-servers" optional="true" />
    
    
        <!-- Substitutions for parameters of replicated tables.
              Optional. If you don't use replicated tables, you could omit that.
    
             See https://clickhouse.yandex/docs/en/table_engines/replication/#creating-replicated-tables
          -->
        <macros incl="macros" optional="true" />
    
        <timezone>Asia/Shanghai</timezone>
        <!-- Reloading interval for embedded dictionaries, in seconds. Default: 3600. -->
        <builtin_dictionaries_reload_interval>3600</builtin_dictionaries_reload_interval>
    
    
        <!-- Maximum session timeout, in seconds. Default: 3600. -->
        <max_session_timeout>3600</max_session_timeout>
    
        <!-- Default session timeout, in seconds. Default: 60. -->
        <default_session_timeout>60</default_session_timeout>
    
        <!-- Sending data to Graphite for monitoring. Several sections can be defined. -->
        <!--
            interval - send every X second
            root_path - prefix for keys
            hostname_in_path - append hostname to root_path (default = true)
            metrics - send data from table system.metrics
            events - send data from table system.events
            asynchronous_metrics - send data from table system.asynchronous_metrics
        -->
        <!--
        <graphite>
            <host>localhost</host>
            <port>42000</port>
            <timeout>0.1</timeout>
            <interval>60</interval>
            <root_path>one_min</root_path>
            <hostname_in_path>true</hostname_in_path>
    
            <metrics>true</metrics>
            <events>true</events>
            <asynchronous_metrics>true</asynchronous_metrics>
        </graphite>
        <graphite>
            <host>localhost</host>
            <port>42000</port>
            <timeout>0.1</timeout>
            <interval>1</interval>
            <root_path>one_sec</root_path>
    
            <metrics>true</metrics>
            <events>true</events>
            <asynchronous_metrics>false</asynchronous_metrics>
        </graphite>
        -->
    
    
        <!-- Query log. Used only for queries with setting log_queries = 1. -->
        <query_log>
            <!-- What table to insert data. If table is not exist, it will be created.
                 When query log structure is changed after system update,
                  then old table will be renamed and new table will be created automatically.
            -->
            <database>system</database>
            <table>query_log</table>
            <!--
                PARTITION BY expr https://clickhouse.yandex/docs/en/table_engines/custom_partitioning_key/
                Example:
                    event_date
                    toMonday(event_date)
                    toYYYYMM(event_date)
                    toStartOfHour(event_time)
            -->
            <partition_by>toYYYYMM(event_date)</partition_by>
            <!-- Interval of flushing data. -->
            <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        </query_log>
    
        <!-- Query thread log. Has information about all threads participated in query execution.
             Used only for queries with setting log_query_threads = 1. -->
        <query_thread_log>
            <database>system</database>
            <table>query_thread_log</table>
            <partition_by>toYYYYMM(event_date)</partition_by>
            <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        </query_thread_log>
    
        <!-- Uncomment if use part log.
             Part log contains information about all actions with parts in MergeTree tables (creation, deletion, merges, downloads).
        <part_log>
            <database>system</database>
            <table>part_log</table>
            <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        </part_log>
        -->
    
    
        <!-- Parameters for embedded dictionaries, used in Yandex.Metrica.
             See https://clickhouse.yandex/docs/en/dicts/internal_dicts/
        -->
    
        <!-- Path to file with region hierarchy. -->
        <!-- <path_to_regions_hierarchy_file>/opt/geo/regions_hierarchy.txt</path_to_regions_hierarchy_file> -->
    
        <!-- Path to directory with files containing names of regions -->
        <!-- <path_to_regions_names_files>/opt/geo/</path_to_regions_names_files> -->
    
    
        <!-- Configuration of external dictionaries. See:
             https://clickhouse.yandex/docs/en/dicts/external_dicts/
        -->
        <dictionaries_config>*_dictionary.xml</dictionaries_config>
    
        <!-- Uncomment if you want data to be compressed 30-100% better.
             Don't do that if you just started using ClickHouse.
          -->
        <compression incl="clickhouse_compression">
        <!--
            <!- - Set of variants. Checked in order. Last matching case wins. If nothing matches, lz4 will be used. - ->
            <case>
    
                <!- - Conditions. All must be satisfied. Some conditions may be omitted. - ->
                <min_part_size>10000000000</min_part_size>        <!- - Min part size in bytes. - ->
                <min_part_size_ratio>0.01</min_part_size_ratio>   <!- - Min size of part relative to whole table size. - ->
    
                <!- - What compression method to use. - ->
                <method>zstd</method>
            </case>
        -->
        </compression>
    
        <!-- Allow to execute distributed DDL queries (CREATE, DROP, ALTER, RENAME) on cluster.
             Works only if ZooKeeper is enabled. Comment it if such functionality isn't required. -->
        <distributed_ddl>
            <!-- Path in ZooKeeper to queue with DDL queries -->
            <path>/clickhouse/task_queue/ddl</path>
    
            <!-- Settings from this profile will be used to execute DDL queries -->
            <!-- <profile>default</profile> -->
        </distributed_ddl>
    
        <!-- Settings to fine tune MergeTree tables. See documentation in source code, in MergeTreeSettings.h -->
        <!--
        <merge_tree>
            <max_suspicious_broken_parts>5</max_suspicious_broken_parts>
        </merge_tree>
        -->
    
        <!-- Protection from accidental DROP.
             If size of a MergeTree table is greater than max_table_size_to_drop (in bytes) than table could not be dropped with any DROP query.
             If you want do delete one table and don't want to restart clickhouse-server, you could create special file <clickhouse-path>/flags/force_drop_table and make DROP once.
             By default max_table_size_to_drop is 50GB; max_table_size_to_drop=0 allows to DROP any tables.
             The same for max_partition_size_to_drop.
             Uncomment to disable protection.
        -->
        <!-- <max_table_size_to_drop>0</max_table_size_to_drop> -->
        <!-- <max_partition_size_to_drop>0</max_partition_size_to_drop> -->
    
        <!-- Example of parameters for GraphiteMergeTree table engine -->
        <graphite_rollup_example>
            <pattern>
                <regexp>click_cost</regexp>
                <function>any</function>
                <retention>
                    <age>0</age>
                    <precision>3600</precision>
                </retention>
                <retention>
                    <age>86400</age>
                    <precision>60</precision>
                </retention>
            </pattern>
            <default>
                <function>max</function>
                <retention>
                    <age>0</age>
                    <precision>60</precision>
                </retention>
                <retention>
                    <age>3600</age>
                    <precision>300</precision>
                </retention>
                <retention>
                    <age>86400</age>
                    <precision>3600</precision>
                </retention>
            </default>
        </graphite_rollup_example>
    
        <!-- Directory in <clickhouse-path> containing schema files for various input formats.
             The directory will be created if it doesn't exist.
          -->
        <format_schema_path>/var/lib/clickhouse/format_schemas/</format_schema_path>
        <include_from>/etc/clickhouse-server/metrika.xml</include_from>
        <!-- Uncomment to disable ClickHouse internal DNS caching. -->
        <!-- <disable_internal_dns_cache>1</disable_internal_dns_cache> -->
    </yandex>
    
    

    Cluster configuration: metrika.xml

    <yandex>
    
        <!-- 集群配置 -->
        <clickhouse_remote_servers>
            <cluster_1s_3r>
    
            <!-- data shard 1 -->
                <shard>
                    <internal_replication>true</internal_replication>
                    <replica>
                        <host>172.17.0.3</host>
                        <port>9000</port>
                        <user>default</user>
                        <password></password>
                    </replica>
                    <replica>
                        <host>172.17.0.4</host>
                        <port>9000</port>
                        <user>default</user>
                        <password></password>
                    </replica>
                    <replica>
                        <host>172.17.0.5</host>
                        <port>9000</port>
                        <user>default</user>
                        <password></password>
                    </replica>
    
                </shard>
    
            <!-- data shard 2 -->
           <!-- <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>172.17.0.4</host>  (container IP)
                        <port>9000</port>
                        <user>default</user>
                        <password></password>
                    </replica>
                </shard>
                 <shard>
                    <internal_replication>true</internal_replication>
                    <replica>
                        <host>172.17.0.5</host>
                        <port>9000</port>
                        <user>default</user>
                        <password></password>
                    </replica>
                </shard>-->
    
    
            </cluster_1s_3r>
        </clickhouse_remote_servers>
    
    
        <macros>
            <replica>172.17.0.5</replica> <!-- this ClickHouse instance's IP; in fact any value works as long as it is unique, so set a different value on each server -->
        </macros>
    
    
        <!-- ZooKeeper -->
        <zookeeper-servers>
            <node>
                <host>172.16.9.139</host>
                <port>2191</port>
            </node>
            <node>
                <host>172.16.9.139</host>   <!-- must be the Docker host's address plus the mapped port, not the ZooKeeper container IP (here 172.22.0.3); a big pitfall -->
                <port>2192</port>
            </node>
             <node>
                <host>172.16.9.139</host>   <!-- not the container IP 172.26.0.4 -->
                <port>2193</port>
            </node>
        </zookeeper-servers>
    
        <networks>
            <ip>::/0</ip>
        </networks>
    
        <!-- data compression -->
        <clickhouse_compression>
            <case>
                <min_part_size>10000000000</min_part_size>
                <min_part_size_ratio>0.01</min_part_size_ratio>
                <method>lz4</method>
            </case>
        </clickhouse_compression>
    
    </yandex>
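
    Once the servers have been restarted with this metrika.xml in place, it is worth checking from inside ClickHouse that the ZooKeeper section and the macro substitutions were actually picked up. A small sketch, assuming clickhouse-client is available inside the yandex/clickhouse-server containers:

    #!/bin/bash
    # Verify that each ClickHouse server sees ZooKeeper and its own macro values.
    for c in ch-server1 ch-server2 ch-server3; do
      echo "== $c =="
      docker exec "$c" clickhouse-client --query "SELECT * FROM system.macros"
      docker exec "$c" clickhouse-client --query "SELECT name FROM system.zookeeper WHERE path = '/'"
    done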
    
    

    Connect a ClickHouse client to ch-server2:

    docker run -it \
    --rm \
    --add-host ch-server1:172.17.0.3 \
    --add-host ch-server2:172.17.0.4 \
    --add-host ch-server3:172.17.0.5 \
    yandex/clickhouse-client \
    --host ch-server2 \
    --port 9000
    

    Check the cluster status:

    select * from system.clusters
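
    The same check can be run non-interactively, which is handier for scripting. A sketch that reuses the replica IPs from metrika.xml (adjust them to your own containers):

    #!/bin/bash
    # Ask one replica how it sees the cluster defined in metrika.xml.
    docker run --rm yandex/clickhouse-client \
      --host 172.17.0.4 --port 9000 \
      --query "SELECT cluster, shard_num, replica_num, host_address, port FROM system.clusters WHERE cluster = 'cluster_1s_3r'"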
    

    Create tables and insert data

    
    CREATE TABLE t_product
    (
        `EventDate` DateTime,
        `CounterID` UInt32,
        `UserID` UInt32
    )
    ENGINE = ReplicatedMergeTree('/clickhouse/tables/t_product', '{replica}')
    PARTITION BY toYYYYMM(EventDate)
    ORDER BY (CounterID, EventDate, intHash32(UserID))
    SAMPLE BY intHash32(UserID);
    
    CREATE TABLE t_product_all AS t_product ENGINE = Distributed(cluster_1s_3r, default, t_product, rand());
    
    insert into default.t_product (EventDate,CounterID,UserID)values('2002-10-01 00:00:00',1,1);
     
    insert into default.t_product (EventDate,CounterID,UserID)values('2002-10-01 00:00:00',2,1);
    insert into default.t_product (EventDate,CounterID,UserID)values('2002-10-01 00:00:00',3,1);
    
    
    insert into default.t_product (EventDate,CounterID,UserID)values('2002-10-01 00:00:00',1,2);
    insert into default.t_product (EventDate,CounterID,UserID)values('2002-10-01 00:00:00',1,3);
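
    Because t_product is a ReplicatedMergeTree table in a one-shard, three-replica cluster, every inserted row should show up on all three nodes. A quick check (a sketch using the container IPs from metrika.xml):

    #!/bin/bash
    # Each replica should report the same count for the replicated table,
    # and the Distributed table should see the same total.
    for ip in 172.17.0.3 172.17.0.4 172.17.0.5; do
      echo -n "$ip t_product: "
      docker run --rm yandex/clickhouse-client --host "$ip" \
        --query "SELECT count() FROM default.t_product"
    done
    docker run --rm yandex/clickhouse-client --host 172.17.0.3 \
      --query "SELECT count() FROM default.t_product_all"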
    

    Table 2

    CREATE TABLE default.image_label
    (
        `label_id` UInt32,
        `label_name` String,
        `insert_time` Date
    )
    ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/image_label', '{replica}', insert_time, (label_id, insert_time), 8192);
    
    CREATE TABLE image_label_all AS image_label ENGINE = Distributed(cluster_1s_3r, default, image_label, rand());
    
    insert into default.image_label (label_id,label_name,insert_time)values(1,'lable-1','2002-10-12');
    
    insert into default.image_label (label_id,label_name,insert_time)values(2,'lable-2','2002-10-12');
    insert into default.image_label (label_id,label_name,insert_time)values(3,'lable-3','2002-10-12');
    
    insert into default.image_label (label_id,label_name,insert_time)values(8,'lable-8','2002-10-12');
    insert into default.image_label (label_id,label_name,insert_time)values(9,'lable-9','2002-10-12');
    

    Finally, inspect the znodes that were created in ZooKeeper (screenshot omitted).
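
    The same thing can be seen from the ZooKeeper side by listing the znodes with zkCli.sh; a sketch, assuming the zoo1 container from the compose file above:

    #!/bin/bash
    # List the znodes that ReplicatedMergeTree created for the tables.
    docker exec zoo1 zkCli.sh -server localhost:2181 ls /clickhouse/tables
    docker exec zoo1 zkCli.sh -server localhost:2181 ls /clickhouse/tables/t_product/replicas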

  • Setting up a ZooKeeper cluster with Docker containers

    For convenience, this ZooKeeper cluster is set up inside Docker containers for demonstration.

    1. Create three Docker containers and install JDK 8

    docker run --privileged -dit --name zk1 --hostname zk1 docker.io/centos:7.6.1810 /usr/sbin/init
    docker run --privileged -dit --name zk2 --hostname zk2 docker.io/centos:7.6.1810 /usr/sbin/init
    docker run --privileged -dit --name zk3 --hostname zk3 docker.io/centos:7.6.1810 /usr/sbin/init
    #for convenience, install the JDK directly with yum
    docker exec -ti zk1 /bin/bash -c "yum -y install java-1.8.0-openjdk.x86_64"
    docker exec -ti zk2 /bin/bash -c "yum -y install java-1.8.0-openjdk.x86_64"
    docker exec -ti zk3 /bin/bash -c "yum -y install java-1.8.0-openjdk.x86_64"
    

    2. Find each container's IP (assigning IPs at creation time also works)

    [root@wfw zk]# docker inspect -f='{{.NetworkSettings.IPAddress}} {{.Config.Hostname}}' $(sudo docker ps -a -q) > hostname
    [root@wfw zk]# ll
    total 4
    -rw-r--r-- 1 root root 45 Jan 11 23:30 hostname
    [root@wfw zk]# cat hostname 
    172.17.0.4 zk3
    172.17.0.3 zk2
    172.17.0.2 zk1
    

    3. Configure each container's hosts file

    #copy the hostname file into /home of each container
    docker cp hostname zk1:/home/hostname
    docker cp hostname zk2:/home/hostname
    docker cp hostname zk3:/home/hostname
    
    #append the hostname entries to each container's /etc/hosts
    docker exec -ti zk1 /bin/bash -c "cat /home/hostname >> /etc/hosts"
    docker exec -ti zk2 /bin/bash -c "cat /home/hostname >> /etc/hosts"
    docker exec -ti zk3 /bin/bash -c "cat /home/hostname >> /etc/hosts"
    

    4. Download ZooKeeper and copy it into each container

    wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
    
    #copy the ZooKeeper tarball into /home of each container
    docker cp zookeeper-3.4.10.tar.gz zk1:/home/zookeeper-3.4.10.tar.gz
    docker cp zookeeper-3.4.10.tar.gz zk2:/home/zookeeper-3.4.10.tar.gz
    docker cp zookeeper-3.4.10.tar.gz zk3:/home/zookeeper-3.4.10.tar.gz
    
    #extract the ZooKeeper tarball in every container
    docker exec -ti zk1 /bin/bash -c "tar zxvf /home/zookeeper-3.4.10.tar.gz -C /home"
    docker exec -ti zk2 /bin/bash -c "tar zxvf /home/zookeeper-3.4.10.tar.gz -C /home"
    docker exec -ti zk3 /bin/bash -c "tar zxvf /home/zookeeper-3.4.10.tar.gz -C /home"
    

    5. Log in to zk1 and create the ZooKeeper configuration file zoo.cfg

    [root@zk1 home]# cd /home/zookeeper-3.4.10/conf/
    [root@zk1 conf]# touch zoo.cfg
    [root@zk1 conf]# vi zoo.cfg 
    [root@zk1 conf]# cat zoo.cfg 
    tickTime=2000
    initLimit=10
    syncLimit=5
    dataLogDir=/opt/zookeeper/logs
    dataDir=/opt/zookeeper/data
    clientPort=2181
    autopurge.snapRetainCount=500
    autopurge.purgeInterval=24
    server.1=zk1:2888:3888
    server.2=zk2:2888:3888 
    server.3=zk3:2888:3888
    

    6. Copy zoo.cfg to the same path in the other two containers; for convenience you can also create the file on the host and copy it into all three containers as above

    docker cp zoo.cfg zk1:/home/zookeeper-3.4.10/conf/zoo.cfg
    docker cp zoo.cfg zk2:/home/zookeeper-3.4.10/conf/zoo.cfg
    docker cp zoo.cfg zk3:/home/zookeeper-3.4.10/conf/zoo.cfg
    

    7. Create the data directories and the myid files

    docker exec -ti zk1 /bin/bash -c "mkdir -p /opt/zookeeper/{logs,data}; echo "1" > /opt/zookeeper/data/myid"
    docker exec -ti zk2 /bin/bash -c "mkdir -p /opt/zookeeper/{logs,data}; echo "2" > /opt/zookeeper/data/myid"
    docker exec -ti zk3 /bin/bash -c "mkdir -p /opt/zookeeper/{logs,data}; echo "3" > /opt/zookeeper/data/myid"
    

    8. Start the ZooKeeper process in each container

    [root@wfw zk]# docker exec -ti zk3 /bin/bash -c "/home/zookeeper-3.4.10/bin/zkServer.sh start"
    ZooKeeper JMX enabled by default
    Using config: /home/zookeeper-3.4.10/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    [root@wfw zk]# docker exec -ti zk2 /bin/bash -c "/home/zookeeper-3.4.10/bin/zkServer.sh start"
    ZooKeeper JMX enabled by default
    Using config: /home/zookeeper-3.4.10/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    [root@wfw zk]# docker exec -ti zk1 /bin/bash -c "/home/zookeeper-3.4.10/bin/zkServer.sh start"
    ZooKeeper JMX enabled by default
    Using config: /home/zookeeper-3.4.10/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    

    9. Check the ZooKeeper status

    [root@wfw zk]# docker exec -ti zk3 /bin/bash -c "/home/zookeeper-3.4.10/bin/zkServer.sh status"
    ZooKeeper JMX enabled by default
    Using config: /home/zookeeper-3.4.10/bin/../conf/zoo.cfg
    Mode: follower
    [root@wfw zk]# docker exec -ti zk2 /bin/bash -c "/home/zookeeper-3.4.10/bin/zkServer.sh status"
    ZooKeeper JMX enabled by default
    Using config: /home/zookeeper-3.4.10/bin/../conf/zoo.cfg
    Mode: leader
    [root@wfw zk]# docker exec -ti zk1 /bin/bash -c "/home/zookeeper-3.4.10/bin/zkServer.sh status"
    ZooKeeper JMX enabled by default
    Using config: /home/zookeeper-3.4.10/bin/../conf/zoo.cfg
    Mode: follower
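
    Besides zkServer.sh status, the ensemble can also be probed with ZooKeeper's four-letter-word commands from the Docker host; a sketch, assuming nc is installed on the host (yum -y install nc) and reusing the IPs in the hostname file:

    #!/bin/bash
    # 'srvr' prints the version, mode (leader/follower) and connection stats of each node.
    for ip in $(awk '{print $1}' ./hostname); do
      echo "== $ip =="
      echo srvr | nc "$ip" 2181
    done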
    

    Appendix:

    If passwordless SSH has already been configured between the three containers, the ZooKeeper processes of the whole cluster can also be started from a single container:

    #!/bin/bash
    usage="Usage: zkRun.sh [start|stop|status|restart]"
    if [ $# -ne 1 ]; then
      echo $usage
      exit 1
    fi
    case $1 in
      (start)
        cmd="start"
      ;;
      (stop)
        cmd="stop"
      ;;
      (status)
        cmd="status"
      ;;
      (restart)
        cmd="restart"
      ;;
      (*)
        echo $usage
        exit 1
        ;;
    esac
    docker_hostname=$(cat ./hostname|awk '{print $2}')
    for slave in $docker_hostname ; do
        # ssh into each node in turn (not the whole list at once) and run the requested command
        ssh $slave "/home/zookeeper-3.4.10/bin/zkServer.sh $cmd"
    done
    

    The hostname file was generated at the very beginning straight from the details of the Docker container list; the zoo.cfg configuration can likewise be generated by a script, and the process is not complicated:

    #!/bin/bash
    # zoo_gen.sh
    # generate zoo.cfg
    echo "tickTime=2000" >> zoo.cfg
    echo "initLimit=10" >> zoo.cfg
    echo "syncLimit=5" >> zoo.cfg
    echo "dataLogDir=/opt/zookeeper/logs" >> zoo.cfg
    echo "dataDir=/opt/zookeeper/data" >> zoo.cfg
    echo "clientPort=2181" >> zoo.cfg
    echo "autopurge.snapRetainCount=500" >> zoo.cfg
    echo "autopurge.purgeInterval=24" >> zoo.cfg
    
    id=0
    for hostname in $(cat ./hostname|awk '{print $2}')
      do
        ((id++))
        echo "server.$id=$hostname:2888:3888" >> zoo.cfg
      done
    

    Automatically generating and pushing the myid files is not hard either; it just needs passwordless SSH and is slightly more involved, so it is skipped here. If you are interested, try writing it yourself; there are many ways to do it.

    zk_auto.sh

    #!/bin/sh
    #docker pull openjdk:8u242-jdk
    #wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.10/zookeeper-3.4.10.tar.gz
    
    docker stop zk1 && docker rm zk1
    docker stop zk2 && docker rm zk2
    docker stop zk3 && docker rm zk3
    
    #create the containers
    image=mamohr/centos-java:latest
    docker run --privileged -dit --name zk3 --hostname zk3 ${image} /usr/sbin/init
    docker run --privileged -dit --name zk2 --hostname zk2 ${image} /usr/sbin/init
    docker run --privileged -dit --name zk1 --hostname zk1 ${image} /usr/sbin/init
    
    #generate the hostname file
    docker inspect -f='{{.NetworkSettings.IPAddress}} {{.Config.Hostname}}' $(sudo docker ps -a -q) | grep zk > hostname
    
    #copy the hostname file into /home of each container
    docker cp hostname zk1:/home/hostname
    docker cp hostname zk2:/home/hostname
    docker cp hostname zk3:/home/hostname
    
    #append the hostname entries to each container's /etc/hosts
    docker exec -ti zk1 /bin/bash -c "cat /home/hostname >> /etc/hosts"
    docker exec -ti zk2 /bin/bash -c "cat /home/hostname >> /etc/hosts"
    docker exec -ti zk3 /bin/bash -c "cat /home/hostname >> /etc/hosts"
    
    #copy the ZooKeeper tarball into /home of each container
    docker cp zookeeper-3.4.10.tar.gz zk1:/home/zookeeper-3.4.10.tar.gz
    docker cp zookeeper-3.4.10.tar.gz zk2:/home/zookeeper-3.4.10.tar.gz
    docker cp zookeeper-3.4.10.tar.gz zk3:/home/zookeeper-3.4.10.tar.gz
    
    #extract the ZooKeeper tarball in every container
    docker exec -ti zk1 /bin/bash -c "tar zxf /home/zookeeper-3.4.10.tar.gz -C /home"
    docker exec -ti zk2 /bin/bash -c "tar zxf /home/zookeeper-3.4.10.tar.gz -C /home"
    docker exec -ti zk3 /bin/bash -c "tar zxf /home/zookeeper-3.4.10.tar.gz -C /home"
    
    #generate zoo.cfg
    echo "tickTime=2000" > zoo.cfg
    echo "initLimit=10" >> zoo.cfg
    echo "syncLimit=5" >> zoo.cfg
    echo "dataLogDir=/opt/zookeeper/logs" >> zoo.cfg
    echo "dataDir=/opt/zookeeper/data" >> zoo.cfg
    echo "clientPort=2181" >> zoo.cfg
    echo "autopurge.snapRetainCount=500" >> zoo.cfg
    echo "autopurge.purgeInterval=24" >> zoo.cfg
    
    id=0
    for hostname in $(cat ./hostname|awk '{print $2}')
      do
        ((id++))
        echo "server.$id=$hostname:2888:3888" >> zoo.cfg
      done
    
    docker cp zoo.cfg zk1:/home/zookeeper-3.4.10/conf/zoo.cfg
    docker cp zoo.cfg zk2:/home/zookeeper-3.4.10/conf/zoo.cfg
    docker cp zoo.cfg zk3:/home/zookeeper-3.4.10/conf/zoo.cfg
    
    docker exec -ti zk1 /bin/bash -c "mkdir -p /opt/zookeeper/{logs,data}; echo "1" > /opt/zookeeper/data/myid"
    docker exec -ti zk2 /bin/bash -c "mkdir -p /opt/zookeeper/{logs,data}; echo "2" > /opt/zookeeper/data/myid"
    docker exec -ti zk3 /bin/bash -c "mkdir -p /opt/zookeeper/{logs,data}; echo "3" > /opt/zookeeper/data/myid"
    
    docker exec -ti zk1 /bin/bash -c "/home/zookeeper-3.4.10/bin/zkServer.sh start"
    docker exec -ti zk2 /bin/bash -c "/home/zookeeper-3.4.10/bin/zkServer.sh start"
    docker exec -ti zk3 /bin/bash -c "/home/zookeeper-3.4.10/bin/zkServer.sh start"
    
    docker exec -ti zk1 /bin/bash -c "/home/zookeeper-3.4.10/bin/zkServer.sh status" 
    docker exec -ti zk2 /bin/bash -c "/home/zookeeper-3.4.10/bin/zkServer.sh status"
    docker exec -ti zk3 /bin/bash -c "/home/zookeeper-3.4.10/bin/zkServer.sh status"
    
    
  • Building a ZooKeeper cluster in the cloud (Alibaba Cloud ECS + Kubernetes)

    Cloud native is hot 🔥, but I had never touched it before. Recent projects are all about moving to the cloud, so however unfamiliar it is, it has to be studied.

    Some documentation first, to get a rough picture:
    Official reference: the Kubernetes documentation
    Cloud-native open source community docs: a beginner's guide to cloud native
    Related Alibaba Cloud developer community articles: "K8s application management: stateful services" and "The evolution of message middleware in the cloud-native era"


    Now to the main topic. There may be a lot of descriptive text, along with some notes on pitfalls.

    Building the ZooKeeper cluster

    1. Buying machines (for ZooKeeper)

    Kubernetes cannot use machines across Regions, but it does support spanning availability zones; for stability and reliability, put machines in every availability zone.
    First, the ECS purchase link.

    Basic ECS configuration:

    • Billing method: subscription (monthly/yearly)
    • Region: North China 3 (Zhangjiakou), availability zones A/B/C
    • Instance type: your choice
    • Image: your choice
    • Storage, system disk: your choice
    • Data disk: if the data volume is small you can skip it and buy a larger system disk instead
    • Snapshot service: not selected
    • Shared NAS: not selected (not purchased here, though it is needed for cloud-native persistent storage)

    Network and security group:

    • Network: the VPC and vSwitch of the current availability zone
    • Public IP: none
    • Security group: read the descriptions and choose as needed
    • Elastic network interface: not selected

    System configuration

    • Login credential: custom password
      Login name: ***
      Login password: ******
    • Instance names: customized, e.g. aaa-zk-A[0,3]ecs, bbb-zk-B[0,3]ecs, ccc-zk-C[0,3]ecs for the machines in zones A/B/C
    • Hostnames: customized with the same pattern, e.g. aaa-zk-A[0,3]ecs, bbb-zk-B[0,3]ecs, ccc-zk-C[0,3]ecs
    • Nothing else was set

    Grouping settings

    These are optional, so nothing was selected.


    2. Creating the ZooKeeper image

    Docker Store: https://store.docker.com/editions/community/docker-ce-desktop-mac
    Download and install it.
    Open a terminal and run docker info; output along the expected lines means Docker installed and started successfully (screenshot omitted).

    1. Create the Dockerfile

    Some commands we will need are installed into the image here as well.

    FROM hyperledger/fabric-zookeeper:latest
    COPY zookeeper-entrypoint.sh /
    
    #make zookeeper-entrypoint.sh executable
    RUN chmod +x /zookeeper-entrypoint.sh
    
    RUN apt update
    RUN apt-get -y install netcat
    RUN apt install -y net-tools
    RUN apt install -y wget
    RUN apt-get install -y procps
    
    #these installs ask a y/n question; without -y the prompt is never answered and the build fails
    RUN apt install yum -y
    RUN apt install iputils-ping -y
    RUN apt-get install vim -y
    
    ENTRYPOINT ["/zookeeper-entrypoint.sh"]
    

    For commands that ask for y/n confirmation, omitting -y leaves the prompt unanswered, the package cannot be installed, and the build fails (screenshot of the failure omitted).
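
    Before pushing the image anywhere, a quick way to confirm the tools actually made it in is to run them from a throwaway container. Only a sketch; my-registry/zookeeper:1.0.0 is a placeholder for whatever tag you passed to docker build -t:

    #!/bin/bash
    # Override the entrypoint so we get a shell instead of starting ZooKeeper,
    # then check that the installed tools are on the PATH.
    docker run --rm --entrypoint bash my-registry/zookeeper:1.0.0 \
      -c 'for t in nc netstat wget ps ping vim; do command -v "$t" || echo "missing: $t"; done'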

    2. Create the zookeeper-entrypoint.sh file

    ⚠️ zookeeper-entrypoint.sh and the Dockerfile must be in the same directory.

    #!/bin/bash
    INIT_FILE="/external/bin/init.sh"
    
    if [[ -f "$INIT_FILE" ]]; then
        sh ${INIT_FILE}
    fi
    
    /docker-entrypoint.sh "zkServer.sh" "start-foreground"
    

    3. Build the image

    Run this in a terminal:

    docker build -t <registry-address>/-zookeeper:1.0.0 .
    

    Output like the (omitted) screenshot means the build succeeded.
    List the existing images:

    docker images
    

    If the build above had no problems, the newly created image will show up here.


    Uploading the image

    Enable the Container Registry service: https://cr.console.aliyun.com/cn-zhangjiakou/instances/repositories
    On first activation you will see a setup page (screenshot omitted).
    Set the Registry login password for the Container Registry service: ******
    Choose the region North China 3 (Zhangjiakou).
    Create a namespace, create an image repository, then click Manage to the right of the new repository.

    Log in to the Alibaba Cloud Docker Registry

    Use the exact username shown in the repository's operation guide:

    docker login --username=<username from the operation guide> registry.cn-zhangjiakou.aliyuncs.com
    

    The password is the Registry login password set above: ******

    On first use there is nothing to pull; just push.

    Look up the [ImageId] and the [image tag]:

    Whether sudo is needed depends on your setup.

    sudo docker images
    

    Push the image to the Registry:

    $ sudo docker tag [ImageId] <repository-address>/zk:[image tag]
    $ sudo docker push <repository-address>/zk:[image tag]
    

    Output like the (omitted) screenshot means the image was pushed successfully.
    The uploaded tag can also be seen on the console page under the repository's image versions.


    Creating the ZooKeeper cluster

    Create the cluster (console link 🔗)

    ECS node login password: ***

    When adding the ECS nodes, choose the default system image and use the data disk (double-check whether a data disk is really needed based on the amount of data to store; alternatively buy a larger system disk and skip the data disk).


    Next comes deploying the ZooKeeper application; nothing special to explain there.

    Then mount a persistent NAS storage volume for the application.

    But...








    That's it for now; it's midnight, time to call it a day!
    Next part: mounting a persistent NAS volume for the cloud application, and using NAS to search logs concurrently across a batch of machines.

  • springboot + dubbo + ZooKeeper cluster setup


    The ZooKeeper cluster setup was covered in the previous post; refer to it if needed.

    Now let's build a demo of springboot + dubbo with a ZooKeeper registry.

    The provider project layout is shown in the (omitted) figure.

    1. Create the dubbo-provider parent project

    Parent pom.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <packaging>pom</packaging>
        <modules>
            <module>provider</module>
            <module>api</module>
        </modules>
        <parent>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-parent</artifactId>
            <version>2.1.4.RELEASE</version>
            <relativePath/> <!-- lookup parent from repository -->
        </parent>
        <groupId>springboot</groupId>
        <artifactId>dubbo-provider</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <name>dubbo-provider</name>
        <description>Demo project for Spring Boot</description>
        <properties>
            <java.version>1.8</java.version>
        </properties>
        <dependencies>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-test</artifactId>
                <scope>test</scope>
            </dependency>
        </dependencies>
    

    </project>

     

    2. Create the provider module

    pom.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <parent>
            <artifactId>dubbo-provider</artifactId>
            <groupId>springboot</groupId>
            <version>0.0.1-SNAPSHOT</version>
        </parent>
        <modelVersion>4.0.0</modelVersion>
        <packaging>jar</packaging>
        <artifactId>provider</artifactId>
        <dependencies>
            <!-- https://mvnrepository.com/artifact/com.alibaba.boot/dubbo-spring-boot-autoconfigure -->
            <dependency>
                <groupId>com.alibaba.boot</groupId>
                <artifactId>dubbo-spring-boot-starter</artifactId>
                <version>0.2.1.RELEASE</version>
            </dependency>
    
            <dependency>
                <groupId>com.alibaba</groupId>
                <artifactId>dubbo</artifactId>
                <version>2.6.5</version>
            </dependency>
            <dependency>
                <groupId>org.apache.curator</groupId>
                <artifactId>curator-framework</artifactId>
                <version>2.12.0</version>
            </dependency>
        <!-- exclude the log4j dependencies: they conflict -->
            <dependency>
                <artifactId>zookeeper</artifactId>
                <groupId>org.apache.zookeeper</groupId>
                <version>3.4.14</version>
                <exclusions>
                    <exclusion>
                        <groupId>org.slf4j</groupId>
                        <artifactId>slf4j-log4j12</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>log4j</groupId>
                        <artifactId>log4j</artifactId>
                    </exclusion>
                    <exclusion>
                        <groupId>io.netty</groupId>
                        <artifactId>netty</artifactId>
                    </exclusion>
                </exclusions>
            </dependency>
            <dependency>
                <groupId>springboot</groupId>
                <artifactId>api</artifactId>
                <version>0.0.1-SNAPSHOT</version>
            </dependency>
        </dependencies>
    

    </project>

     

    application.yml configuration:

    #dubbo's application name defaults to ${spring.application.name}, so dubbo.application.name can be omitted
    spring:
      application:
        name: dubbo-provider
    #when using annotations, configure the package(s) dubbo should scan
    dubbo:
      scan:
        base-packages: com.facade
    #the port dubbo listens on
      protocol:
        port: 20880
      registry:
        protocol: zookeeper
        address: 192.168.79.135:2181,192.168.79.136:2181,192.168.79.137:2181
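
    With the provider running, the registration can be confirmed directly in ZooKeeper even before any consumer code exists. A sketch, assuming zkCli.sh is on the PATH of one of the ZooKeeper hosts listed in the registry address, and using the interface name that appears later in this post:

    #!/bin/bash
    # The provider should have created /dubbo/<interface>/providers with one ephemeral child per running instance.
    ZK=192.168.79.135:2181
    zkCli.sh -server "$ZK" ls /dubbo
    zkCli.sh -server "$ZK" ls /dubbo/com.api.HelloFacade/providers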

     

     

    Define a service interface.

    Implement the service interface.

    Define the dubbo implementation that exposes the service interface (in a real project a facade layer wraps the service, rather than exposing the service itself as the dubbo interface). The code screenshots are omitted.

    3. Create the provider's api module

    The api module is what consumer projects depend on in order to obtain the API interfaces they call.

    Its pom.xml needs no dependencies at all.

    The API interfaces exposed to other services.

    Finally, the Spring Boot application class that starts the service.

    The provider project is set up successfully.

     

    The consumer project layout is shown in the (omitted) figure.

    1. Create the dubbo-consumer parent project

    pom.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
       <modelVersion>4.0.0</modelVersion>
        <packaging>pom</packaging>
        <modules>
            <module>consumer</module>
        </modules>
        <parent>
          <groupId>org.springframework.boot</groupId>
          <artifactId>spring-boot-starter-parent</artifactId>
          <version>2.1.4.RELEASE</version>
          <relativePath/> <!-- lookup parent from repository -->
       </parent>
       <groupId>springboot</groupId>
       <artifactId>dubbo-consumer</artifactId>
       <version>0.0.1-SNAPSHOT</version>
       <name>dubbo-consumer</name>
       <description>Demo project for Spring Boot</description>
       <properties>
          <java.version>1.8</java.version>
       </properties>
       <dependencies>
          <dependency>
             <groupId>org.springframework.boot</groupId>
             <artifactId>spring-boot-starter</artifactId>
          </dependency>
          <dependency>
             <groupId>org.springframework.boot</groupId>
             <artifactId>spring-boot-starter-test</artifactId>
             <scope>test</scope>
          </dependency>
       </dependencies>
    

    </project>

     

    application.yml:

    #dubbo's application name defaults to ${spring.application.name}, so dubbo.application.name can be omitted
    spring:
      application:
        name: dubbo-consumer
    #the port dubbo listens on
    dubbo:
      protocol:
        port: 20881
      registry:
        protocol: zookeeper
        address: 192.168.79.135:2181,192.168.79.136:2181,192.168.79.137:2181

    Here a Spring Boot test is used to start the consumer.

    The consumer project is set up successfully.

     

    Now start the provider service (just run the Spring Boot main class).

    The provider starts successfully.

    Start the consumer (run the ConsumerTest test).

    The call succeeds, which means the springboot + dubbo + ZooKeeper setup works.

     

    Inspect the changes to the ZooKeeper nodes to see what the exposed interfaces actually store in the registry.

     

     

    A dubbo node has appeared, and under it there is one child node per service (API interface) registered in ZooKeeper; the more services, the more children under /dubbo.

    /dubbo/com.api.HelloFacade

    Under each interface node there are further nodes such as providers and configurators.

     

    Under the providers node you find the string below (this is the address the provider registered in ZooKeeper):

    [dubbo%3A%2F%2F172.16.19.186%3A20880%2Fcom.api.HelloFacade%3Fanyhost%3Dtrue%26application%3Ddubboprovider%26bean.name%3DServiceBean%3Acom.api.HelloFacade%3A1.0.0%26dubbo%3D2.0.2%26generic%3Dfalse%26interface%3Dcom.api.HelloFacade%26methods%3DsayH%26pid%3D11836%26revision%3D1.0.0%26side%3Dprovider%26timestamp%3D1557481294861%26version%3D1.0.0]

    URL-decode it:

    [dubbo://172.16.19.186:20880/com.api.HelloFacade?anyhost=true&application=dubboprovider&bean.name=ServiceBean:com.api.HelloFacade:1.0.0&dubbo=2.0.2&generic=false&interface=com.api.HelloFacade&methods=sayH&pid=11836&revision=1.0.0&side=provider&timestamp=1557481294861&version=1.0.0]
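
    The decoding does not need an online tool; a one-liner on any machine with Python 3 does the job (just a convenience sketch, shown here on a shortened copy of the string above):

    #!/bin/bash
    # URL-decode the provider string copied from the providers znode.
    python3 -c 'import sys, urllib.parse; print(urllib.parse.unquote(sys.argv[1]))' \
      'dubbo%3A%2F%2F172.16.19.186%3A20880%2Fcom.api.HelloFacade%3Fanyhost%3Dtrue'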

    dubbo uses ZooKeeper as its registry. When the service starts, a /dubbo node is created in ZooKeeper, and under it a node named after the service (API interface) registered by the provider, e.g. /dubbo/com.api.HelloFacade. That node in turn holds provider and consumer nodes: /dubbo/com.api.HelloFacade/providers is a persistent node that stores the providers' address information (protocol://ip:port/interface-name...), while the address entries under it are ephemeral nodes, so when a provider shuts down its address node is removed.

     

    This is clearly a dubbo-protocol address; since our configuration never specified a protocol, dubbo uses the dubbo protocol by default.

    When the consumer service starts, it creates a proxy object for HelloFacade.

    When the consumer calls a method of the dubbo interface, it looks up the provider address of that interface in ZooKeeper (172.16.19.186:20880). Consumer and provider communicate over Netty; the provider uses dynamic proxies and reflection to execute the method and sends the return value back to the consumer over Netty.

     

    Take a look at the diagram on the official site:

    0. The service container starts, loads, and runs the service provider.

    1. On startup, the provider registers the services it offers with the registry.

    2. On startup, the consumer subscribes to the services it needs from the registry.

    3. The registry returns the list of provider addresses to the consumer; when the list changes, the registry pushes the update to the consumer over a long-lived connection.

    4. The consumer picks one provider from the address list using soft load balancing and calls it; if the call fails, it picks another.

    5. Consumers and providers accumulate call counts and call times in memory and send the statistics to the monitoring center once a minute.

      --------------------------------------------

      Step 4's invoke happens over Netty. The consumer generates a dynamic proxy class; calling a service method sends a request to the provider, which invokes the method and passes the return value back to the consumer (roughly: when the consumer calls Proxy.newProxyInstance(...), the proxy object triggers the InvocationHandler, whose implementation handles the message exchange with the provider, and the provider returns the method result to the consumer).

    The next post will test and explain some provider/consumer parameter settings and service-governance parameters of dubbo.

    Follow now, and smooth sailing to you.

     

     
