
This document is outdated; please read README.md or USAGE.md instead.

Starting a node

First of all, start Flare as a single node. We need to decide on a data directory in advance; the following examples assume that the data directory is /home/flare.

1. Creating configuration file (index server)

Flare runs with one index server (flarei) and one or more node servers (flared). First, create a configuration file for the index server.

If you installed Flare via the Debian package, you will find flarei.conf in /etc; if not, you should create flarei.conf manually (the source code package contains sample files under etc/).

The following lines are an example for this how-to:

# data directory
data-dir = /home/flare
# syslog facility
log-facility = local0
# max connections to accept
max-connection = 512
# node down threshold
monitor-threshold = 3
# node monitoring interval (index server sends "ping" w/ this interval)
monitor-interval = 1
# server name of myself
server-name = flare.example.com
# server port of myself
server-port = 12120
# max thread pool size
thread-pool-size = 8

You can find details on the reference page.

2. Creating configuration file (node server)

Next, we need to create a configuration file for the node server. As with flarei.conf, it should already exist in /etc if you installed Flare via the Debian package; if not, you need to create it manually (you can also find a sample file in etc/ of the source code package).

The following lines are an example for this how-to:

# data directory
data-dir = /home/flare
# name of index server (sync this to "server-name" of flarei.conf)
index-server-name = flare.example.com
# port number of index server (sync this to "server-port" of flarei.conf)
index-server-port = 12120
# syslog facility
log-facility = local0
# max-connections to accept
max-connection = 1024
# number of lock slots to access records
mutex-slot = 64
# number of proxy request concurrency
proxy-concurrency = 2
# server name of myself
server-name = flare.example.com
# server port of myself
server-port = 12121
# stack size of each thread (kb)
stack-size = 128
# storage options
storage-ap = 4
storage-bucket-size = 16777216
# storage-compress =
# storage-large = true
# storage type (currently only "tch" is available)
storage-type = tch
# max number of thread pool
thread-pool-size = 16

3. Editing /etc/default/flare (Debian package only)

If you installed Flare via the Debian package, editing /etc/default/flare is also required.

RUN_INDEX="yes"
RUN_NODE="yes"
CONF_INDEX="/etc/flarei.conf"
CONF_NODE="/etc/flared.conf"
DATA_INDEX="/home/flare"
DATA_NODE="/home/flare"

In this case, DATA_INDEX and DATA_NODE are set to match the data-dir settings in flarei.conf and flared.conf (if you created flare*.conf in some other directory, you also need to modify CONF_INDEX and CONF_NODE).

4. Configuring syslog (optional)

If you want to check log messages, please configure syslog.conf (normally /etc/syslog.conf, but it could be rsyslog.conf or syslog-ng.conf if you use another syslog daemon). The following lines are a typical case (for syslogd):

# Log for flare
local0.*                /var/log/flare.log
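
After editing the file, restart the syslog daemon so the new rule takes effect. The exact command depends on your setup; on a Debian system running rsyslog, for example, it would look something like this:

$ sudo /etc/init.d/rsyslog restart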

5. Run

Now that configuration is done, you can run Flare like this:

$ sudo /usr/local/flare/bin/flarei -f /home/flarei.conf --daemonize
$ sudo /usr/local/flare/bin/flared -f /home/flared.conf --daemonize

We need to run flarei first because flared tries to register itself with the index server during startup (if registration fails, flared exits).

If you are using the Debian package, the following command is also available:

$ sudo /etc/init.d/flare start

Log messages will look like this (if you configured syslog):

flarei[8603]: [3069409824][NTC][flarei.cc:105-startup] flarei version 1.0.0 - system logger started
flarei[8603]: [3069409824][NTC][flarei.cc:107-startup] application startup in progress...
flarei[8603]: [3069409824][NTC][flarei.cc:108-startup]   config_path:       /etc/flarei.conf
flarei[8603]: [3069409824][NTC][flarei.cc:109-startup]   daemonize:         true
flarei[8603]: [3069409824][NTC][flarei.cc:110-startup]   data_dir:          /home/flare
flarei[8603]: [3069409824][NTC][flarei.cc:111-startup]   max_connection:    512
flarei[8603]: [3069409824][NTC][flarei.cc:112-startup]   monitor_threshold: 3
flarei[8603]: [3069409824][NTC][flarei.cc:113-startup]   monitor_interval:  1
flarei[8603]: [3069409824][NTC][flarei.cc:114-startup]   server_name:       flare.example.com
flarei[8603]: [3069409824][NTC][flarei.cc:115-startup]   server_port:       12120
flarei[8603]: [3069409824][NTC][flarei.cc:116-startup]   stack_size:        128
flarei[8603]: [3069409824][NTC][flarei.cc:117-startup]   thread_pool_size:  8
flarei[8604]: [3069409824][NTC][app.cc:90-_daemonize] daemon process created -> now i have new pid [8604]
flarei[8604]: [3069409824][NTC][cluster.cc:1210-_reconstruct_node_partition] reconstructing node partition map... (from 0 entries in node map)
flarei[8604]: [3069409824][NTC][cluster.cc:1309-_reconstruct_node_partition] node partition map:
flarei[8604]: [3069409824][NTC][cluster.cc:1328-_reconstruct_node_partition] node partition map (prepare):
flarei[8604]: [3069409824][NTC][flarei.cc:164-run] entering running loop
...
flared[8607]: [3069975072][NTC][flared.cc:109-startup] flared version 1.0.0 - system logger started
flared[8607]: [3069975072][NTC][flared.cc:111-startup] application startup in progress...
flared[8607]: [3069975072][NTC][flared.cc:112-startup]   config_path:         /etc/flared.conf
flared[8607]: [3069975072][NTC][flared.cc:113-startup]   daemonize:           true
flared[8607]: [3069975072][NTC][flared.cc:114-startup]   data_dir:            /home/flare
flared[8607]: [3069975072][NTC][flared.cc:115-startup]   index_server_name:   flare.example.com
flared[8607]: [3069975072][NTC][flared.cc:116-startup]   index_server_port:   12120
flared[8607]: [3069975072][NTC][flared.cc:117-startup]   max_connection:      1024
flared[8607]: [3069975072][NTC][flared.cc:118-startup]   mutex_slot:          64
flared[8607]: [3069975072][NTC][flared.cc:119-startup]   proxy_concurrency:   2
flared[8607]: [3069975072][NTC][flared.cc:120-startup]   server_name:         flare.example.com
flared[8607]: [3069975072][NTC][flared.cc:121-startup]   server_port:         12121
flared[8607]: [3069975072][NTC][flared.cc:122-startup]   stack_size:          128
flared[8607]: [3069975072][NTC][flared.cc:123-startup]   storage_ap:          4
flared[8607]: [3069975072][NTC][flared.cc:124-startup]   storage_bucket_size: 16777216
flared[8607]: [3069975072][NTC][flared.cc:125-startup]   storage_compress:    
flared[8607]: [3069975072][NTC][flared.cc:126-startup]   storage_large:       false
flared[8607]: [3069975072][NTC][flared.cc:127-startup]   storage_type:        tch
flared[8607]: [3069975072][NTC][flared.cc:128-startup]   thread_pool_size:    16
flared[8608]: [3069975072][NTC][app.cc:90-_daemonize] daemon process created -> now i have new pid [8608]

6. Configuring node settings

Finally, our node server is running! But we still need to configure the node settings. Please check the node state like this:

$ telnet flare.example.com 12120
...
Escape character is '^]'.
stats nodes

Then, we can get node statistics:

STAT flare.example.com:12121:role proxy
STAT flare.example.com:12121:state active
STAT flare.example.com:12121:partition 0
STAT flare.example.com:12121:balance -1
STAT flare.example.com:12121:thread_type 16
END

As these lines show, the role of our node "flare.example.com:12121" is "proxy" and no "master" is present, so we need to set this node as "master":

$ telnet flare.example.com 12120
...
Escape character is '^]'.
node role flare.example.com 12121 master 1 0

The response will be:

OK

This is "node role" command for an index server:

[node server name] [node server port] [role=(master|slave|proxy)] [balance] ([partition])

After this, the response to

stats nodes

will look like this:

STAT flare.example.com:12121:role master
STAT flare.example.com:12121:state active
STAT flare.example.com:12121:partition 0
STAT flare.example.com:12121:balance 1
STAT flare.example.com:12121:thread_type 16
END

This is similar to having one persistent memcached server. To see whether the node server is running correctly, try the "set" and "get" commands:

$ telnet flare.example.com 12121
...
set key1 0 0 4
test
STORED
get key1
VALUE key1 0 4
test
END

Please note that we can also access Flare via memcached client libraries.
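
For example, here is a minimal sketch using the python-memcached library (the library choice is an assumption; any memcached-compatible client pointed at the node server should behave the same way):

import memcache

# talk to the node server (12121), not the index server (12120)
mc = memcache.Client(["flare.example.com:12121"])
mc.set("key1", "test")
print(mc.get("key1"))  # -> "test"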

Adding slave nodes

If the load is too high with only one server, or if you want to add redundancy, adding slave nodes will solve these issues.

1. Preparing new server(s)

Please set up server(s) for the slave nodes and install Flare.

2. Creating configuration file

Create a configuration file (flared.conf) like this:

# data directory
data-dir = /home/flare
# name of index server (sync this to "server-name" of flarei.conf)
index-server-name = flare.example.com
# port number of index server (sync this to "server-port" of flarei.conf)
index-server-port = 12120
# syslog facility
log-facility = local0
# max-connections to accept
max-connection = 1024
# number of lock slots to access records
mutex-slot = 64
# number of proxy request concurrency
proxy-concurrency = 2
# server name of myself
server-name = node1.example.com
# server port of myself
server-port = 12121
# stack size of each thread (kb)
stack-size = 128
# storage options
storage-ap = 4
storage-bucket-size = 16777216
# storage-compress =
# storage-large = true
# storage type (currently only "tch" is available)
storage-type = tch
# max number of thread pool
thread-pool-size = 16

If you just want to examine the behavior of a slave node, you can run another Flare instance on a different port (12122, for instance), because Flare identifies each node by "server name:port number".
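
In such a test setup, only a few lines of the second flared.conf need to differ from the first node's; for example (the separate data directory is an assumption, to keep the two instances from sharing files):

# lines that differ from the first node's flared.conf
server-port = 12122
data-dir = /home/flare2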

If you use the Debian package, modifying /etc/default/flare is also required:

RUN_INDEX="no"
RUN_NODE="yes"
CONF_INDEX="/etc/flarei.conf"
CONF_NODE="/etc/flared.conf"
DATA_INDEX="/home/flare"
DATA_NODE="/home/flare"

Please note that RUN_INDEX is set to "no".

3. Checking node state

After the new node has successfully started, check the node states with "stats nodes":

$ telnet flare.example.com 12120
...
Escape character is '^]'.
STAT flare.example.com:12121:role master
STAT flare.example.com:12121:state active
STAT flare.example.com:12121:partition 0
STAT flare.example.com:12121:balance 1
STAT flare.example.com:12121:thread_type 16
STAT node1.example.com:12121:role proxy
STAT node1.example.com:12121:state active
STAT node1.example.com:12121:partition -1
STAT node1.example.com:12121:balance 0
STAT node1.example.com:12121:thread_type 17
END

We can see that the role of the new node is set to proxy. In this state, all requests to node1.example.com:12121 are proxied to flare.example.com:12121. To check this behavior, just connect to node1.example.com:12121 and send a get command:

get key1

You should get the same result as in the examples above:

VALUE key1 0 4
test
END

This is of course a very important feature, but the node is still far from being a slave, so we need to set the role of this node to slave.

4. Configuring node settings

To set the role of a node, use the "node role" command:

$ telnet flare.example.com 12120
Escape character is '^]'.
node role node1.example.com 12121 slave 1 0
OK

This means that we set the role of "node1.example.com:12121" to slave of partition 0, with an access balance of 1.

After this command is accepted, Flare behaves like this:

  • the index server sends a notification to each node (in this case, the state of node1.example.com is set to "prepare" and the balance of this node is forcibly set to 0)
  • after flare.example.com:12121 receives the notification, it starts replicating subsequent requests to node1.example.com:12121
  • after node1.example.com:12121 receives the notification, it starts dumping data from a master node (flare.example.com:12121, in this case)
  • after the dump is completed, node1.example.com:12121 sends a request to shift its state to "active"
  • the index server sends a notification to each node

So please wait a couple of minutes after issuing a "node role" command. "stats nodes" tells us the current situation, and you will get the following result once the procedure above has completed successfully:

STAT flare.example.com:12121:role master
STAT flare.example.com:12121:state active
STAT flare.example.com:12121:partition 0
STAT flare.example.com:12121:balance 1
STAT flare.example.com:12121:thread_type 16
STAT node1.example.com:12121:role slave
STAT node1.example.com:12121:state active
STAT node1.example.com:12121:partition 0
STAT node1.example.com:12121:balance 0
STAT node1.example.com:12121:thread_type 17
END

(the role of node1.example.com:12121 has been updated)

5. Setting access balance

List of "stats nodes" shows that role of node1.example.com:12121 is shifted to master, but also shows that balance is set to 0. So all the requests through other proxy nodes are not balanced to this node. So we need to send another "node role" command like this:

node role node1.example.com 12121 slave 2

In this case, get requests are balanced to node1.example.com:12121 twice as often as to flare.example.com:12121 (with balances of 2 and 1, node1.example.com:12121 serves about 2/3 of the reads).

Adding a master node

Adding slave nodes solves some kinds of problems, but if the number of records grows huge, we need to partition the master servers. In other words, adding master servers and slave servers will solve almost every kind of scalability problem.

1. Preparing new server(s)

Please set up server(s) for the new master node(s), install Flare, and create a configuration file just as you did for the slave servers.

2. Configuring node settings

Adding a master is also done with the "node role" command:

node role node2.example.com 12121 master 1 1

Note that the last argument (= partition) is set to 1 (if you already have 2 master servers, you should set this to 2).

After this, "stats nodes" shows this:

...
STAT node2.example.com:12121:role master
STAT node2.example.com:12121:state prepare
STAT node2.example.com:12121:partition 1
STAT node2.example.com:12121:balance 1
STAT node2.example.com:12121:thread_type 18

In this state, Flare behaves as follows:

  • active master(s) replicate subsequent data to node2.example.com:12121 (not all data, but only the data suitable for the new partition)
  • the preparing master dumps the suitable data from all active server(s)

3. Adding slave nodes (optional)

You can also add slave node(s) to a preparing master. If you want to add a slave node (node3.example.com:12121) to node2.example.com:12121, send a "node role" command like this:

node role node3.example.com 12121 slave 1 1

4. Setting state

When adding a new master node, the state of the node is not shifted automatically (unlike adding a new slave node), because this could cause unintentional load on the new master node. So we should shift the state of the node manually, after the dump has completed, with the "node state" command:

node state node2.example.com 12121 active

To see whether the dump is complete, send "stats threads" and check the "op" items for "dump". If no "op:dump" entry is found, the dump should be complete (see also "curr_items" of the "stats" command).
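
For example (checking the preparing master here is an assumption; you may need to look at the other nodes as well):

$ telnet node2.example.com 12121
...
stats threads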

Adding proxy node(s)

A proxy node just proxies all requests to the suitable server(s) according to the current node states. This may raise the question of what such nodes are for, but they are important in some cases.

A proxy node is assumed to run as a local process on each client machine, meaning that every client sends its requests to its local proxy node. This reduces bogus proxy requests between nodes of different partitions, such as the following:

+-----+ get key1 +-------+ (proxy) get key1 +-------+
| ws1 |    ->    | node2 |         ->       | node1 |
+-----+          +-------+                  +-------+

A local proxy node always selects the suitable node to access, which also reduces TCP connections:

+------------------------+
|     get key1 +--------+| (proxy) get key1 +-------+
| ws1    ->    | flared |->        ->       | node1 |
|              +--------+|                  +-------+
+------------------------+

Deleting a node

If you want to remove a node, the following procedure is recommended (in this case, we remove node1.example.com:12121).

First, send "node state" command to index server:

node state node1.example.com 12121 down

Then shut down the flared process with the kill command (kill -KILL may cause unintended request failures, so please send SIGTERM instead).
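
For example (assuming a single flared process is running on the host):

$ sudo kill -TERM $(pidof flared)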

And then, send "node remove" command:

node remove node1.example.com 12121