Aerospike
Aerospike is a key-value store with a schema-less data model. Data is organized into policy containers called "namespaces", semantically similar to "databases" in an RDBMS. Within a namespace, data is subdivided into "sets" (similar to tables) and "records" (similar to rows). Each record has an indexed "key" that is unique within the set, and one or more named "bins" (similar to columns) that hold the values associated with the record.
Aerospike Concepts in MySQL Terms
| Aerospike | MySQL |
|---|---|
| namespace | db |
| set | table |
| bin | column |
| key | primary key |
| record | row |
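With this mapping, basic operations look like the following aql session sketch (the namespace `test`, set `people`, and bin names are example values, and a running server is assumed):

```
aql> insert into test.people (PK, firstname, height) values (1, 'John', 182)
aql> select firstname, height from test.people where PK = 1
```

`PK` addresses the record's primary key, while the remaining identifiers name bins.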
Download, install and start Aerospike
http://www.aerospike.com/download/
Configure Aerospike
http://www.aerospike.com/docs/operations/configure/
service {
user root
group root
paxos-single-replica-limit 1 # Number of nodes where the replica count is automatically reduced to 1.
pidfile /var/run/aerospike/asd.pid
service-threads 4
transaction-queues 4
transaction-threads-per-queue 4
proto-fd-max 15000
}
logging {
# Log file must be an absolute path.
file /var/log/aerospike/aerospike.log {
context any info
}
}
network {
service {
address any
port 3000
}
heartbeat {
mode multicast # Either multicast or mesh. With multicast, all cluster nodes must be in the same subnet; with mesh, you manually configure which nodes belong together.
# address 239.1.99.222 # only used for multicast
# port 9918 # only used for multicast
# To use unicast-mesh heartbeats, remove the 3 lines above, and see
# aerospike_mesh.conf for alternative.
# mesh-seed-address-port
mode mesh
mesh-address 192.0.2.10 # IP address of another node in the cluster (example address)
# mesh-port 3002
interval 150 # Interval in milliseconds in which heartbeats are sent.
timeout 40 # Number of missed heartbeats after which the remote node is declared dead (150 ms x 40 = 6 seconds).
}
fabric {
port 3001
}
info {
port 3003
}
}
namespace thorsten {
replication-factor 2 # Number of copies of a record (including the master copy) maintained in the entire cluster.
memory-size 6G
default-ttl 30d # 30 days, use 0 to never expire/evict.
enable-xdr false # Enables XDR shipping for the namespace (sync via slow links to other data centers).
set-disable-eviction false # When false, the server may evict older entries if memory or disk is full. FIXME configure this
high-water-memory-pct 60
high-water-disk-pct 50
stop-writes-pct 90
storage-engine memory
# To use file storage backing instead, use the following lines:
# storage-engine device {
# file /opt/aerospike/data/bar.dat
# filesize 16G
# data-in-memory true # Store data in memory in addition to file.
# }
}
namespace test {
replication-factor 2
memory-size 4G
default-ttl 30d # 30 days, use 0 to never expire/evict.
storage-engine memory
load-at-startup true # on startup, load data from storage instead of starting with an empty db
data-in-memory true # keep data in memory, otherwise only the index is in memory. With non-SSD drives it is very slow to set this to false.
}
namespace bar {
replication-factor 2
memory-size 4G
default-ttl 30d # 30 days, use 0 to never expire/evict.
storage-engine memory
# To use file storage backing, comment out the line above and use the
# following lines instead.
# storage-engine device {
# file /opt/aerospike/data/bar.dat
# filesize 16G
# data-in-memory true # Store data in memory in addition to file.
# }
}
Record size
The default maximum record size is 128 KB.
It can be increased like this:
storage-engine device {
write-block-size 1M # Max size in bytes for each record (LDT entries might be larger)
}
The maximum limit is 1 MB.
AQL Client
aql is an SQL-like client for Aerospike.
Databases
aql> show namespaces
Tables
aql> show sets
You can also run aql non-interactively from the command line and process its output.
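A sketch of such an invocation (`-c` executes a single command; a running server on the default port is assumed, and the query is an example):

```shell
# Execute one aql command non-interactively; the output is plain text
# and can be piped into standard tools such as grep.
aql -c "select firstname, height, id from test.people" | grep John
```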
Columns
aql> show bins
+-------+-------------+-------+-----------+
| quota | bin | count | namespace |
+-------+-------------+-------+-----------+
| 32768 | "firstname" | 3 | "test" |
| 32768 | "height" | 3 | "test" |
| 32768 | "id" | 3 | "test" |
+-------+-------------+-------+-----------+
aql> select firstname, height, id from test.people
Filtering needs an index
0 rows in set (0.001 secs)
Error: (201) AEROSPIKE_ERR_INDEX_NOT_FOUND
aql> CREATE INDEX people_height_idx ON test.people (height) NUMERIC
aql> select firstname, height, id from test.people where height=187
aql> select firstname, height, id from test.people where height between 187 and 190
Get entry via primary key:
aql> select * from test.people where PK = 1
http://www.aerospike.com/docs/guide/query.html
http://www.aerospike.com/docs/guide/aggregation.html
Delete all the data
To delete the whole set bar in the namespace foo, issue this on all nodes (via asinfo; set-delete as in older 3.x releases):
asinfo -v "set-config:context=namespace;id=foo;set=bar;set-delete=true;"
And turn it back to false again on all nodes after some time:
asinfo -v "set-config:context=namespace;id=foo;set=bar;set-delete=false;"
Monitor running cluster
http://www.aerospike.com/docs/tools/asadm
Admin> info
Deprecated: http://www.aerospike.com/docs/tools/asmonitor
Monitor> info
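asadm can also be run non-interactively instead of using the Admin> prompt; a sketch (assumes a reachable cluster node):

```shell
# Run a single asadm command and exit
asadm -e "info"
```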
Use Java as a Client
http://www.aerospike.com/docs/client/java/install/
import com.aerospike.client.AerospikeClient;
import com.aerospike.client.Bin;
import com.aerospike.client.Host;
import com.aerospike.client.Key;
import com.aerospike.client.Record;
import com.aerospike.client.policy.ClientPolicy;
...
Host[] hosts = new Host[] {
new Host("127.0.0.1", 3000),
};
try(AerospikeClient client = new AerospikeClient(new ClientPolicy(), hosts)) {
// this namespace needs to exist in the aerospike configuration
String namespaceName="test";
String setName="people";
String bin1Name="firstname";
String bin2Name="height";
String bin3Name="id";
Integer id=1;
Key key = new Key(namespaceName, setName, id);
Bin bin1 = new Bin(bin1Name, "John");
Bin bin2 = new Bin(bin2Name, 182);
Bin bin3 = new Bin(bin3Name, id);
// Write a record
client.put(null, key, bin1, bin2, bin3);
// Read a record
Record record = client.get(null, key);
System.err.println(record); // (gen:4),(exp:182531601),(bins:(firstname:John),(id:1),(height:182))
String fstName=record.getString(bin1Name);
System.err.println(fstName); // John
}
Putting lists or maps into a bin and doing reads and writes on them via Aerospike
To use them, large data types (LDT) must be enabled in the namespace configuration:
namespace test {
ldt-enabled true # large data types, required to put lists and maps as values into bins
}
Read from it like this. lmap is a com.aerospike.client.large.LargeMap handle for a bin, obtained via the (later deprecated) LDT API, e.g. client.getLargeMap(null, key, "mybin", null), where the bin name "mybin" is an example:
final Map<?, ?> filteredValues = lmap.get(Value.get(fieldKey));
final Object result = filteredValues.get(fieldKey);
And write like this:
lmap.put(Value.get(fieldKey), Value.get(yourValue));
Delete
lmap.remove(Value.get(field));