Setting up a Vert.x 3.0 cluster
Overview
The main purpose of clustering in Vert.x 3.0 is to make distributed processing easy to implement through the event bus and distributed data structures (maps, counters, ...).
The event bus and these data structures are shared between servers.
Running it is simple: configure cluster.xml for your environment and pass the -cluster argument when launching:
java -jar your.jar -cluster
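The -cluster flag can also be expressed programmatically. Below is a minimal sketch (the class name ClusterStarter and the map name "demo.map" are illustrative, and vertx-core plus vertx-hazelcast must be on the classpath; with the Hazelcast cluster manager, cluster.xml is picked up from the classpath) that starts a clustered Vert.x instance and touches a cluster-wide map:

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class ClusterStarter {
    public static void main(String[] args) {
        // Equivalent of launching with -cluster: start a clustered Vert.x instance.
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            if (res.succeeded()) {
                Vertx vertx = res.result();
                // The event bus and shared data are now cluster-wide.
                // "demo.map" is an illustrative map name.
                vertx.sharedData().<String, String>getClusterWideMap("demo.map", mapRes -> {
                    if (mapRes.succeeded()) {
                        // Visible to every member of the cluster.
                        mapRes.result().put("key", "value", putRes -> {});
                    }
                });
            } else {
                res.cause().printStackTrace();
            }
        });
    }
}
```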
Configuration
Suppose we are setting up three member servers and have configured cluster.xml accordingly.
The parts you will usually modify are the <group> and <member-list> tags.
There are three member servers, but the member list in cluster.xml only includes two of them.
You could list all members, but if members are added and removed dynamically, it seems better to register only the fixed ones.
cluster.xml, with the full list of advanced properties:
<?xml version="1.0" encoding="UTF-8"?>
<!--
  ~ Copyright (c) 2008-2015, Hazelcast, Inc. All Rights Reserved.
  ~
  ~ Licensed under the Apache License, Version 2.0 (the "License");
  ~ you may not use this file except in compliance with the License.
  ~ You may obtain a copy of the License at
  ~
  ~ http://www.apache.org/licenses/LICENSE-2.0
  ~
  ~ Unless required by applicable law or agreed to in writing, software
  ~ distributed under the License is distributed on an "AS IS" BASIS,
  ~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  ~ See the License for the specific language governing permissions and
  ~ limitations under the License.
-->
<!--
  The default Hazelcast configuration. This is used when:
  - no hazelcast.xml is present
-->
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.5.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <properties>
    <property name="hazelcast.wait.seconds.before.join">0</property>
    <property name="hazelcast.heartbeat.interval.seconds">1</property>
    <property name="hazelcast.initial.min.cluster.size">0</property>
  </properties>
  <group>
    <name>your-group-name</name>
    <password>your-pass</password>
  </group>
  <management-center enabled="false">http://192.168.189.1:8181/mancenter</management-center>
  <network>
    <port auto-increment="false" port-count="100">5701</port>
    <outbound-ports>
      <!--
        Allowed port range when connecting to other nodes.
        0 or * means use system provided port.
      -->
      <ports>0</ports>
    </outbound-ports>
    <join>
      <multicast enabled="false">
        <multicast-group>224.2.2.3</multicast-group>
        <multicast-port>54327</multicast-port>
      </multicast>
      <tcp-ip enabled="true">
        <!-- <required-member>192.168.199.128</required-member> -->
        <member-list>
          <member>192.168.199.128</member>
          <member>192.168.199.129</member>
        </member-list>
      </tcp-ip>
      <aws enabled="false">
        <access-key>my-access-key</access-key>
        <secret-key>my-secret-key</secret-key>
        <!-- optional, default is us-east-1 -->
        <region>us-west-1</region>
        <!-- optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
        <host-header>ec2.amazonaws.com</host-header>
        <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
        <security-group-name>hazelcast-sg</security-group-name>
        <tag-key>type</tag-key>
        <tag-value>hz-nodes</tag-value>
      </aws>
    </join>
    <interfaces enabled="false">
      <interface>192.168.199.1</interface>
    </interfaces>
    <ssl enabled="false"/>
    <socket-interceptor enabled="false"/>
    <symmetric-encryption enabled="false">
      <!--
        encryption algorithm such as
        DES/ECB/PKCS5Padding,
        PBEWithMD5AndDES,
        AES/CBC/PKCS5Padding,
        Blowfish,
        DESede
      -->
      <algorithm>PBEWithMD5AndDES</algorithm>
      <!-- salt value to use when generating the secret key -->
      <salt>thesalt</salt>
      <!-- pass phrase to use when generating the secret key -->
      <password>thepass</password>
      <!-- iteration count to use when generating the secret key -->
      <iteration-count>19</iteration-count>
    </symmetric-encryption>
  </network>
  <partition-group enabled="false"/>
  <executor-service name="default">
    <pool-size>16</pool-size>
    <!-- Queue capacity. 0 means Integer.MAX_VALUE. -->
    <queue-capacity>0</queue-capacity>
  </executor-service>
  <queue name="default">
    <!--
      Maximum size of the queue. When a JVM's local queue size reaches the maximum,
      all put/offer operations will get blocked until the queue size
      of the JVM goes down below the maximum.
      Any integer between 0 and Integer.MAX_VALUE. 0 means
      Integer.MAX_VALUE. Default is 0.
    -->
    <max-size>0</max-size>
    <!--
      Number of backups. If 1 is set as the backup-count for example,
      then all entries of the map will be copied to another JVM for
      fail-safety. 0 means no backup.
    -->
    <backup-count>0</backup-count>
    <!-- Number of async backups. 0 means no backup. -->
    <async-backup-count>0</async-backup-count>
    <empty-queue-ttl>-1</empty-queue-ttl>
  </queue>
  <map name="default">
    <!--
      Data type that will be used for storing recordMap.
      Possible values:
      BINARY (default): keys and values will be stored as binary data
      OBJECT : values will be stored in their object forms
      NATIVE : values will be stored in non-heap region of JVM
    -->
    <in-memory-format>BINARY</in-memory-format>
    <!--
      Number of backups. If 1 is set as the backup-count for example,
      then all entries of the map will be copied to another JVM for
      fail-safety. 0 means no backup.
    -->
    <backup-count>0</backup-count>
    <!-- Number of async backups. 0 means no backup. -->
    <async-backup-count>0</async-backup-count>
    <!--
      Maximum number of seconds for each entry to stay in the map. Entries that are
      older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
      will get automatically evicted from the map.
      Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
    -->
    <time-to-live-seconds>0</time-to-live-seconds>
    <!--
      Maximum number of seconds for each entry to stay idle in the map. Entries that are
      idle (not touched) for more than <max-idle-seconds> will get
      automatically evicted from the map. Entry is touched if get, put or containsKey is called.
      Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
    -->
    <max-idle-seconds>0</max-idle-seconds>
    <!--
      Valid values are:
      NONE (no eviction),
      LRU (Least Recently Used),
      LFU (Least Frequently Used).
      NONE is the default.
    -->
    <eviction-policy>NONE</eviction-policy>
    <!--
      Maximum size of the map. When max size is reached,
      map is evicted based on the policy defined.
      Any integer between 0 and Integer.MAX_VALUE. 0 means
      Integer.MAX_VALUE. Default is 0.
    -->
    <max-size policy="PER_NODE">0</max-size>
    <!--
      When max. size is reached, specified percentage of
      the map will be evicted. Any integer between 0 and 100.
      If 25 is set for example, 25% of the entries will
      get evicted.
    -->
    <eviction-percentage>25</eviction-percentage>
    <!--
      Minimum time in milliseconds which should pass before checking
      if a partition of this map is evictable or not.
      Default value is 100 millis.
    -->
    <min-eviction-check-millis>100</min-eviction-check-millis>
    <!--
      While recovering from split-brain (network partitioning),
      map entries in the small cluster will merge into the bigger cluster
      based on the policy set here. When an entry merges into the
      cluster, there might be an existing entry with the same key already.
      Values of these entries might be different for that same key.
      Which value should be set for the key? The conflict is resolved by
      the policy set here. Default policy is PutIfAbsentMapMergePolicy.
      There are built-in merge policies such as
      com.hazelcast.map.merge.PassThroughMergePolicy ; entry will be overwritten if a merging entry exists for the key.
      com.hazelcast.map.merge.PutIfAbsentMapMergePolicy ; entry will be added if the merging entry doesn't exist in the cluster.
      com.hazelcast.map.merge.HigherHitsMapMergePolicy ; entry with the higher hits wins.
      com.hazelcast.map.merge.LatestUpdateMapMergePolicy ; entry with the latest update wins.
    -->
    <merge-policy>com.hazelcast.map.merge.PutIfAbsentMapMergePolicy</merge-policy>
  </map>
  <multimap name="default">
    <backup-count>0</backup-count>
    <value-collection-type>SET</value-collection-type>
  </multimap>
  <list name="default">
    <backup-count>0</backup-count>
  </list>
  <set name="default">
    <backup-count>0</backup-count>
  </set>
  <jobtracker name="default">
    <max-thread-size>0</max-thread-size>
    <!-- Queue size 0 means number of partitions * 2 -->
    <queue-size>0</queue-size>
    <retry-count>0</retry-count>
    <chunk-size>1000</chunk-size>
    <communicate-stats>true</communicate-stats>
    <topology-changed-strategy>CANCEL_RUNNING_OPERATION</topology-changed-strategy>
  </jobtracker>
  <semaphore name="default">
    <initial-permits>1</initial-permits>
    <backup-count>0</backup-count>
    <async-backup-count>0</async-backup-count>
  </semaphore>
  <reliable-topic name="default">
    <read-batch-size>10</read-batch-size>
    <topic-overload-policy>BLOCK</topic-overload-policy>
    <statistics-enabled>true</statistics-enabled>
  </reliable-topic>
  <ringbuffer name="default">
    <capacity>10000</capacity>
    <backup-count>0</backup-count>
    <async-backup-count>0</async-backup-count>
    <time-to-live-seconds>30</time-to-live-seconds>
    <in-memory-format>BINARY</in-memory-format>
  </ringbuffer>
  <serialization>
    <portable-version>0</portable-version>
  </serialization>
  <services enable-defaults="true"/>
  <!--
  <listeners>
    <listener>com.sample.hazlecast.MyMembershipListener</listener>
  </listeners>
  -->
</hazelcast>
Experiments
Who is the master?
When running in cluster mode, the member list is printed once members connect;
the first member in the list is the master.
Example:
Members [2] {
Member [192.168.199.129]:5701
Member [192.168.199.128]:5701 this
}
When servers start while none of the member-list servers are up
Each node tries to connect to the member-list servers; on failure it blacklists that address and will not reconnect even if a member-list server comes back up later. For the cluster to form normally, at least one member-list server must be alive.
If you register <required-member>192.168.199.128</required-member>, the node keeps retrying until it connects to that server.
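In cluster.xml this corresponds to uncommenting the required-member line inside the tcp-ip join block (the IP addresses are the sample ones used throughout this article):

```xml
<tcp-ip enabled="true">
  <!-- keep retrying until this member is reachable -->
  <required-member>192.168.199.128</required-member>
  <member-list>
    <member>192.168.199.128</member>
    <member>192.168.199.129</member>
  </member-list>
</tcp-ip>
```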
When a member-list server restarts after a non-member-list server has connected to it
The non-member-list server attempts to reconnect to the member-list server.
When only one member-list server is up
All servers connect normally, and any server started later is added to every already-connected server.
When the master server goes down
One of the remaining connected members becomes the new master.
A member that is not registered in the cluster.xml member list can also become master.
This can cause a problem.
Call it group 1 when all member-list servers are down and a non-member-list server has become master.
Call it group 2 when a member-list server comes back up and other non-member-list servers (not the group 1 servers) connect to it.
Group 1 and group 2 then operate separately without being connected to each other; that is, the shared data is split (a split-brain).
When group 1 and group 2 are later reconnected, the shared data is overwritten with group 2's data.
Setting up a situation where some servers have registered a handler on an event-bus address and others have not, then publishing to that address,
the message is delivered only to the servers with a registered handler, so network traffic is generated only toward those servers.
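This experiment can be sketched as follows (the address "news.feed" and the class name PublishDemo are made up for illustration; vertx-core and vertx-hazelcast must be on the classpath). Run it on two nodes: only nodes that have registered a consumer on the address receive the published message, and only they see the traffic.

```java
import io.vertx.core.Vertx;
import io.vertx.core.VertxOptions;

public class PublishDemo {
    public static void main(String[] args) {
        Vertx.clusteredVertx(new VertxOptions(), res -> {
            if (res.failed()) {
                res.cause().printStackTrace();
                return;
            }
            Vertx vertx = res.result();
            // Registering a consumer is what makes this node a delivery
            // target for the address; nodes without a consumer get nothing.
            vertx.eventBus().consumer("news.feed", message ->
                System.out.println("got: " + message.body()));
            // publish() fans the message out to every node in the cluster
            // that has a consumer registered on "news.feed".
            vertx.eventBus().publish("news.feed", "hello cluster");
        });
    }
}
```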