ClusterControl Version 1.6.0
Controller SDK
The RPC (v1) services

The RPC Reply

Here is an example that shows an RPC reply that notifies the caller about a failure:

{
"cc_timestamp": 1461225902,
"errorString": "Cluster 323 is not running.",
"requestStatus": "ClusterNotFound"
}

The "requestStatus" field describes the nature of the failure and can currently hold the following values:

  • ok
    Request is successfully processed.
  • InvalidRequest
    Invalid value or missing field in the request (e.g. the request contains a job ID whose value is -1).
  • ObjectNotFound
    Object not found (e.g. the request contains a job ID and the job with that ID was not found).
  • TryAgain
    Currently not available, try again later. This happens if the request arrives before the controller is fully started.
  • ClusterNotFound
    The cluster was not found or is not running. This only happens when the cluster ID is not 0 and no cluster object exists with that ID.
  • AccessDenied
    Insufficient rights.
  • UnknownError
    We are not proud of this one; it should not be used at all.
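
A minimal sketch of how a caller might check the reply status from the shell (the token, cluster ID and the use of jq here are illustrative assumptions, not part of the API; the listbackups operation belongs to the Backup API described later in this document):

1 # sketch only: extract the requestStatus field of any RPC reply with jq
2 curl -s -XPOST -d '{"operation": "listbackups", "token": "XXXXXXXXXXXXXXXX"}' http://localhost:9500/1/backup | jq -r '.requestStatus'
3 # prints "ok" on success, or one of the error values listed above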

Here is another example, showing a request that was executed successfully:

{
"cc_timestamp": 1461225902,
"data": [
{
"contents": "# this is some comment\n# and an another one\n\n# here is a value without section\nnosection=true\n\n[section1]\nkey1\t=\t\"value1\"\n\n# and here is another section:\n[section2]\ninteger=20140719\n\n",
"crc": "00000000",
"filename": "myconfig1.cfg",
"hasChange": false,
"hostId": 6,
"hostname": "myhost1",
"path": "/my/path/myconfig1.cfg",
"size": 183,
"timestamp": 1461225902
} ],
"requestStatus": "ok",
"total": 1
}

The host discovery API

/0/discovery

Description: when creating a cluster or adding a node, the frontend may want to fetch some host-related details; this API provides some basic information.

checkHosts

Description: checks whether a host can be reached over SSH and whether the controller can gain superuser privileges; if both checks pass, the method also returns some network/memory/cpu/storage/os information about the host.

Arguments:

  • ssh_user: the username used for SSH login
  • ssh_keyfile: the absolute path of the SSH private keyfile on controller
  • ssh_password: if no keyfile is given, password-based authentication may be used
  • ssh_port: SSH port (defaults to 22 if not specified or <= 0)
  • sudo_password: a sudo password may be used
  • hosts: comma separated list of hostname:port combinations (:port is optional)

Important fields in the returned data:

  • status/available: whether the host is available for addition (i.e. hostname:port is not already part of an existing cluster)
  • status/reachable: whether the host is accessible through SSH and cmon can gain superuser privileges (via sudo for a non-root user)
  • status/message: human readable message about the host status
  • status/message_advice: in case of failure, some advice about how to fix the issue
  • status/message_technical: the technical details about the failure
1 $ curl -XPOST -d '{"operation":"checkhosts","token": "5be993bd3317aba6a24cc52d2a39e7636d35d55d", "hosts": "192.168.0.100:3306", "ssh_user":"kedz","ssh_keyfile": "/home/kedz/.ssh/id_rsa"}' 'http://localhost:9500/0/discovery'
{
"cc_timestamp": 1486382761,
"data": [
{
"cpu_info":
{
"cores": 8,
"mhz": 4396.73
},
"hostname": "192.168.0.100",
"memory":
{
"free_mb": 15047,
"total_mb": 24030
},
"network_interfaces":
{
"eth0": "192.168.0.100",
"vboxnet0": "192.168.33.1",
"vboxnet0:1": "192.168.44.1",
"virbr0": "192.168.122.1"
},
"os_version":
{
"distribution/codename": "yakkety",
"distribution/name": "ubuntu",
"distribution/release": "16.10",
"distribution/type": "debian"
},
"port": 3306,
"status":
{
"available": true,
"message": "Host is reachable.",
"message_advice": "",
"message_technical": "Host is reachable on SSH and can gain superuser privileges.",
"reachable": true
},
"storage_info": [
{
"filesystem": "ext4",
"free_mb": 93229,
"mountpoint": "/",
"partition": "/dev/sda2",
"total_mb": 231833
},
{
"filesystem": "ext4",
"free_mb": 47046,
"mountpoint": "/mnt/ssd128g",
"partition": "/dev/sdd2",
"total_mb": 111536
} ]
} ],
"requestStatus": "ok",
"total": 1
}

And a failure example, with advice / technical info:

1 $ curl -XPOST -d '{"operation":"checkhosts","token": "5be993bd3317aba6a24cc52d2a39e7636d35d55d", "hosts": "5.5.5.5", "ssh_user":"kedz","ssh_keyfile": "/home/kedz/.ssh/id_rsa"}' 'http://localhost:9500/0/discovery'
{
"cc_timestamp": 1486382809,
"data": [
{
"hostname": "5.5.5.5",
"port": -1,
"status":
{
"available": false,
"message": "SSH connection failed.",
"message_advice": "Check hostname and verify network/firewall settings.",
"message_technical": "libssh connect error: Timeout connecting to 5.5.5.5",
"reachable": false
}
} ],
"requestStatus": "ok",
"total": 1
}

The Backup API

/$CLUSTERID/backup

listbackups

Description: lists the created backups for the current cluster.

There are two supported formats. By default, format version 1 (the original one) is returned, which contains backup metadata only.

When the value of backup_record_version is 2, the old content is placed inside a property named metadata, and new properties are added that hold information about the locations of the backup copies. A backup can be available on certain hosts (the controller and others) and also in the cloud, e.g. AWS S3 buckets.

Parameters:

  • backup_record_version: (default 1) the backup format version (1, 2)
  • limit: the max. number of backup records to be returned
  • offset: return backup records after this many records (paging)
  • ascending: whether to return backups in reversed/ascending order
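
A request sketch that combines the paging parameters (a sketch only; the token and cluster ID are placeholders):

1 # sketch only: placeholder token and cluster ID
2 curl -XPOST -d '{"operation": "listbackups", "token": "XXXXXXXXXXXXXXXX", "backup_record_version": "2", "limit": 10, "offset": 20, "ascending": true}' http://localhost:9500/101/backup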

Example of backup format version 1 (the default):

1 $ curl -XPOST -d '{"operation": "listbackups", "token": "RB81tydD0exsWsaM"}' http://localhost:9500/101/backup
{
"cc_timestamp": 1477063671,
"data": [
{
"backup": [
{
"db": "mysql",
"files": [
{
"class_name": "CmonBackupFile",
"created": "2016-10-21T15:26:40.000Z",
"hash": "md5:c7f4b2b80ea439ae5aaa28a0f3c213cb",
"path": "mysqldump_2016-10-21_172640_mysqldb.sql.gz",
"size": 161305,
"type": "data,schema"
} ],
"start_time": "2016-10-21T15:26:41.000Z"
} ],
"backup_host": "192.168.33.125",
"cid": 101,
"class_name": "CmonBackupRecord",
"config":
{
"backupDir": "/tmp",
"backupHost": "192.168.33.125",
"backupMethod": "mysqldump",
"backupToIndividualFiles": false,
"backup_failover": false,
"backup_failover_host": "",
"ccStorage": false,
"checkHost": false,
"compression": true,
"includeDatabases": "",
"netcat_port": 9999,
"origBackupDir": "/tmp",
"port": 3306,
"set_gtid_purged_off": true,
"throttle_rate_iops": 0,
"throttle_rate_netbw": 0,
"usePigz": false,
"wsrep_desync": false,
"xtrabackupParallellism": 1,
"xtrabackup_locks": false
},
"created": "2016-10-21T15:26:40.000Z",
"created_by": "",
"description": "",
"finished": "2016-10-21T15:26:41.000Z",
"id": 5,
"job_id": 2952,
"log_file": "",
"lsn": 140128879096992,
"method": "mysqldump",
"parent_id": 0,
"root_dir": "/tmp/BACKUP-5",
"status": "Completed",
"storage_host": "192.168.33.125"
},
{
"backup": [
{
"db": "",
"files": [
{
"class_name": "CmonBackupFile",
"created": "2016-10-21T15:21:50.000Z",
"hash": "md5:538196a9d645c34b63cec51d3e18cb47",
"path": "backup-full-2016-10-21_172148.xbstream.gz",
"size": 296000,
"type": "full"
} ],
"start_time": "2016-10-21T15:21:50.000Z"
} ],
"backup_host": "192.168.33.125",
"cid": 101,
"class_name": "CmonBackupRecord",
"config":
{
"backupDir": "/tmp",
"backupHost": "192.168.33.125",
"backupMethod": "xtrabackupfull",
"backupToIndividualFiles": false,
"backup_failover": false,
"backup_failover_host": "",
"ccStorage": false,
"checkHost": false,
"compression": true,
"includeDatabases": "",
"netcat_port": 9999,
"origBackupDir": "/tmp",
"port": 3306,
"set_gtid_purged_off": true,
"throttle_rate_iops": 0,
"throttle_rate_netbw": 0,
"usePigz": false,
"wsrep_desync": false,
"xtrabackupParallellism": 1,
"xtrabackup_locks": true
},
"created": "2016-10-21T15:21:47.000Z",
"created_by": "",
"description": "",
"finished": "2016-10-21T15:21:50.000Z",
"id": 4,
"job_id": 2951,
"log_file": "",
"lsn": 1627039,
"method": "xtrabackupfull",
"parent_id": 0,
"root_dir": "/tmp/BACKUP-4",
"status": "Completed",
"storage_host": "192.168.33.125"
} ],
"requestStatus": "ok",
"total": 2
}

Example of backup format version 2 (with information about backup locations):

1 echo '{"token": "K8VdzG2vG81ik0zo", "operation": "listbackups", "backup_record_version": "2"}' | curl -sX POST -H"Content-Type: application/json" -d @- http://192.168.30.4:9500/47/backup
[
{
"metadata": {
"root_dir": "/mongo-backups/BACKUP-106",
"class_name": "CmonBackupRecord",
"schedule_id": 0,
"id": 106,
"verified": {
"status": "Unverified",
"message": "",
"verified_time": "1969-12-31T23:59:59.000Z"
},
"job_id": 1683,
"use_for_pitr": true,
"created_by": "",
"chain_up": 0,
"parent_id": 0,
"config": {},
"method": "mongodump",
"status": "Completed",
"backup_host": "",
"description": "",
"lsn": 0,
"finished": "2017-09-14T20:23:37.934Z",
"compressed": true,
"cid": 47,
"backup": [
{
"files": [
{
"hash": "md5:2889b115cc388f6d2535dcca24e78378",
"created": "2017-09-14T20:23:36.000Z",
"class_name": "CmonBackupFile",
"path": "replica_set_0.gz",
"type": "mongodump-gz",
"size": 1022
}
],
"start_time": "2017-09-14T20:23:37.000Z",
"db": "$replica_set_0"
}
],
"created": "2017-09-14T20:21:02.000Z",
"storage_host": "192.168.30.70",
"log_file": "",
"total_datadir_size": 24566
},
"cloud_locations": [
{
"provider" : "aws",
"bucket_and_path": "s9s-acceptance-test-bucket",
"cloud_location_uuid": "23dae2cb-09c7-4d8c-913b-4617608d3da0",
"retention": 400,
"credentials_id": 0,
"created_time": "2017-09-15T13:38:47.000Z",
"finished_time": "2017-09-15T13:38:52.000Z"
}
],
"version": 2,
"host_locations": [
{
"storage_host": "127.0.0.1",
"root_dir": "/home/backups",
"created_time": "2017-09-15T13:38:40.000Z",
"finished_time": "2017-09-15T13:38:42.000Z",
"host_location_uuid": "32f4f814-777a-468c-8b5b-37ca0605946c"
}
]
}
]

deletebackup

Description: Deletes a backup record and all associated backup files. (DEPRECATED)

Please use the Delete backup job instead from now on.

1 $ curl -XPOST -d '{"operation": "deletebackup", "id": 3, "token": "RB81tydD0exsWsaM"}' http://localhost:9500/101/backup
{
"cc_timestamp": 1477062125,
"errorString": "Backup 3 not exists.",
"requestStatus": "UnknownError"
}
1 $ curl -XPOST -d '{"operation": "deletebackup", "id": 2, "token": "RB81tydD0exsWsaM"}' http://localhost:9500/101/backup
{
"cc_timestamp": 1477062130,
"requestStatus": "ok"
}

listschedules

Description: lists the created backup schedules for the current cluster.

1 $ curl -XPOST -d '{"operation": "listschedules", "token": "5xCzdArurlwtnEG2"}' http://localhost:9500/78/backup
{
"cc_timestamp": 1473433701,
"data": [
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/mnt/dbbackup01",
"cc_storage": "0",
"hostname": "192.168.134.7",
"port": "3306",
"wsrep_desync": false
}
},
"enabled": true,
"id": 24,
"lastExec": "2016-09-07T23:00:06.000Z",
"schedule": "0 1 * * 5"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/mnt/dbbackup01",
"cc_storage": "0",
"hostname": "192.168.134.7",
"port": "3306",
"wsrep_desync": false
}
},
"enabled": true,
"id": 25,
"lastExec": "2016-02-28T04:13:30.000Z",
"schedule": "0 1 * * 1"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/mnt/dbbackup01/ROMOTO",
"cc_storage": 0,
"hostname": "192.168.134.7:3306"
}
},
"enabled": true,
"id": 26,
"lastExec": "2016-02-28T23:02:39.000Z",
"schedule": "30 0 * * 2"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/mnt/dbbackup01/ROMOTO",
"cc_storage": 0,
"hostname": "192.168.134.7:3306"
}
},
"enabled": true,
"id": 27,
"lastExec": "2016-02-29T23:30:27.000Z",
"schedule": "30 0 * * 3"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/mnt/dbbackup01/ROMOTO",
"cc_storage": 0,
"hostname": "192.168.134.7:3306"
}
},
"enabled": true,
"id": 28,
"lastExec": "2016-03-01T23:30:26.000Z",
"schedule": "30 0 * * 4"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/mnt/dbbackup01/ROMOTO",
"cc_storage": 0,
"hostname": "192.168.134.7:3306"
}
},
"enabled": true,
"id": 29,
"lastExec": "2016-09-07T22:30:10.000Z",
"schedule": "30 0 * * 5"
},
// ... removed some entries from doc ...
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/tmp/okokok",
"cc_storage": 1,
"hostname": "192.168.33.121",
"netcat_port": "9999",
"port": "3306",
"wsrep_desync": false
}
},
"enabled": true,
"id": 37,
"lastExec": "2016-09-08T22:00:04.000Z",
"schedule": "0 0 * * 6"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/tmp/okokok",
"cc_storage": 1,
"hostname": "192.168.33.121",
"netcat_port": "9999",
"port": "3306",
"wsrep_desync": false
}
},
"enabled": true,
"id": 38,
"lastExec": "1970-01-01T00:00:00.000Z",
"schedule": "0 0 * * 7"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/tmp/okokok",
"cc_storage": 1,
"hostname": "192.168.33.121",
"netcat_port": "9999",
"port": "3306",
"wsrep_desync": false
}
},
"enabled": true,
"id": 39,
"lastExec": "1970-01-01T00:00:00.000Z",
"schedule": "0 0 * * 1"
} ],
"requestStatus": "ok",
"total": 16
}

schedule

Description: creates a backup schedule. Arguments:

  • schedule: cron like schedule string (m h dom mon dow)
  • job: the backup JSon job string (or object)

For job see Creating a Backup for backup job reference.

Deprecated, but still supported if no cron-like schedule is specified:

  • weekDay: a number from 1-7, defining a day of the week
  • execTime: the execution time on that day, in "HH:MM:SS" format
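
A request sketch using the deprecated weekDay/execTime form (a sketch only; the token and job payload are reused from the cron-based example below):

1 # sketch of the deprecated scheduling form; prefer the cron-like "schedule" argument
2 curl -XPOST -d '{"operation": "schedule", "token": "5xCzdArurlwtnEG2", "weekDay": 3, "execTime": "15:00:00", "job": {"command": "backup", "job_data": {"backup_method": "mysqldump", "backupdir": "/dbbackup01", "cc_storage": "0", "hostname": "192.168.33.121"}}}' http://localhost:9500/78/backup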

An example request & reply:

1 curl -XPOST -d '{"operation": "schedule", "token":"5xCzdArurlwtnEG2", "schedule" : "0 15 * * 3","job":{"command": "backup", "job_data": {"backup_method": "mysqldump", "backupdir": "/dbbackup01", "cc_storage": "0", "hostname": "192.168.33.121"}} }' http://localhost:9500/78/backup
{
"cc_timestamp": 1473773254,
"requestStatus": "ok",
"schedule":
{
"enabled": true,
"id": 44,
"job":
{
"command": "backup",
"job_data":
{
"backup_method": "mysqldump",
"backupdir": "/dbbackup01",
"cc_storage": "0",
"hostname": "192.168.33.121"
}
},
"lastExec": "1969-12-31T23:59:59.000Z",
"schedule": "0 15 * * 3"
}
}

deleteschedule

Description: removes a backup schedule. Arguments:

  • id: the schedule id.

An example request & reply:

1 curl -XPOST -d '{"operation": "deleteschedule", "token": "5xCzdArurlwtnEG2", "id": 55}' http://localhost:9500/78/backup
{
"cc_timestamp": 1473773753,
"requestStatus": "ok"
}

updateschedule

Description: Updates an existing backup schedule. Arguments:

  • id: the schedule id
  • schedule: the new cron line
  • enabled: whether the schedule is enabled/disabled
  • job: (optional) the new backup JSon job string (or object)

An example request & reply:

1 $ curl -XPOST -d '{"operation": "updateschedule", "token": "nV5SEdkZLheZxyrh", "id": 7, "enabled": false }' http://localhost:9500/120/backup
{
"cc_timestamp": 1478873968,
"requestStatus": "ok",
"schedule":
{
"class_name": "CmonBackupSchedule",
"enabled": false,
"id": 7,
"job":
{
"command": "backup",
"job_data":
{
"backup_method": "mysqldump",
"backupdir": "/tmp",
"cc_storage": "0",
"hostname": "192.168.33.121",
"port": 3306,
"wsrep_desync": false
},
"scheduleId": 7
},
"lastExec": "1970-01-01T00:00:00.000Z",
"schedule": "20 15 * * *"
}
}

The Certificate Authority API

/0/ca

create

Description: creates a certificate request, signs it by a CA (or self-signs it), and stores the generated key + certificate data. Arguments:

  • type: "ca", "server" or "client"
  • name: the desired name with full path (without extensions) (eg.: group1/servers/host55)
  • validity: the certificate validity in days (defaults to 365 if not specified)
  • issuerId: the CA certificate to be used for signing; if not specified, the certificate will be self-signed
  • user_id: the requester user ID (for accounting)
  • data: the certificate parameters

NOTE: the specified name (which can contain directory components like some/dir/mycert) is relative to cmon's CA directory, which is /var/lib/cmon/ca by default.

Certificate parameters: the following keys are supported in 'data':

  • keybits: the key size for the RSA key generation (1024, 2048, 4096)
  • CN: the certificate common-name
  • subjectAltName: a JSon list of possible domain names, IP addresses
  • C: the country name ISO code
  • L: locality name
  • ST: state or province name
  • O: organization name
  • OU: organization unit name
  • title: title
  • GN: given name
  • SN: surname
  • description: description field of the certificate
  • emailAddress: e-mail address field

Example request for CA certificate and key generation:

1 curl -X POST -d '{"operation": "create", "user_id": 100, "type": "ca", "name": "CoolCA", "validity": 3650, "data": { "keybits": 2048, "description": "My cool CA certificate.", "emailAddress": "kedazo@severalnines.com" } }' http://localhost:9500/0/ca
{
"data":
{
"certfile": "CoolCA.crt",
"id": 1,
"isCA": true,
"isClient": false,
"isServer": false,
"issuerId": 0,
"keybits": 2048,
"keyfile": "CoolCA.key",
"requesterUserId": 100,
"serialNumber": 1,
"status": "Issued",
"subjectName":
{
"basicConstraints": "ca",
"description": "My cool CA certificate.",
"emailAddress": "kedazo@severalnines.com",
"extendedKeyUsage": [ "OCSPSigning" ],
"keyUsage": [ "DigitalSignature", "KeyCertificateSign", "CRLSign" ],
"serialNumber": "1"
},
"validFrom": 1449579031,
"validUntil": 1765025431
},
"total": 1
}

An example request using the previous CA key to sign a server certificate:

1 curl -X POST -d '{"operation": "create", "user_id": 100, "type": "server", "name": "coolca_servers/server01", "issuerId": 1, "validity": 1000, "data": { "keybits": 2048, "CN": "coolca.server.tld", "subjectAltName": [ "192.168.33.1", "fancy.domain.name" ] } }' http://localhost:9500/0/ca
{
"data":
{
"certfile": "coolca_servers/server01.crt",
"id": 2,
"isCA": false,
"isClient": true,
"isServer": true,
"issuerId": 1,
"keybits": 2048,
"keyfile": "coolca_servers/server01.key",
"requesterUserId": 100,
"serialNumber": 2,
"status": "Issued",
"subjectName":
{
"CN": "coolca.server.tld",
"extendedKeyUsage": [ "ClientAuth", "ServerAuth" ],
"keyUsage": [ "DigitalSignature", "NonRepudiation", "KeyEncipherment", "KeyAgreement" ],
"serialNumber": "2",
"subjectAltName": [ "IP Address:192.168.33.1", "DNS:fancy.domain.name" ]
},
"validFrom": 1449579372,
"validUntil": 1536069372
},
"total": 1
}

listcerts

Description: lists the available certificates on the system

1 curl -X POST -d '{"operation": "listcerts"}' http://localhost:9500/0/ca
2 # NOTE: the certificates and keys are actually stored in the filesystem:
3 sudo find /var/lib/cmon/ca
4 /var/lib/cmon/ca
5 /var/lib/cmon/ca/CoolCA.crt
6 /var/lib/cmon/ca/CoolCA.key
7 /var/lib/cmon/ca/coolca_servers
8 /var/lib/cmon/ca/coolca_servers/server01.crt
9 /var/lib/cmon/ca/coolca_servers/server01.key
{
"data": [
{
"certfile": "CoolCA.crt",
"id": 1,
"isCA": true,
"isClient": false,
"isServer": false,
"issuerId": 0,
"keybits": 2048,
"keyfile": "CoolCA.key",
"requesterUserId": 100,
"serialNumber": 1,
"status": "Issued",
"subjectName":
{
"basicConstraints": "ca",
"description": "My cool CA certificate.",
"emailAddress": "kedazo@severalnines.com",
"extendedKeyUsage": [ "OCSPSigning" ],
"keyUsage": [ "DigitalSignature", "KeyCertificateSign", "CRLSign" ],
"serialNumber": "1"
},
"validFrom": 1449579031,
"validUntil": 1765025431
},
{
"certfile": "galera/cluster_11/galera_rep.crt",
"id": 7,
"inUseBy":
{
"clusters": [
{
"id": 16,
"name": "cluster_16"
} ],
"hosts": [ "192.168.33.123:3306", "192.168.33.122:3306" ]
},
"isCA": true,
"isClient": true,
"isServer": true,
"issued": 1457969123,
"issuerId": 0,
"keybits": 2048,
"keyfile": "galera/cluster_11/galera_rep.key",
"serialNumber": 7,
"status": "Issued",
"subjectName":
{
"C": "SE",
"CN": "Galera_Replication_Link_Auto_Generated_Certificate",
"L": "Stockholm",
"O": "Severalnines AB",
"ST": "ST",
"basicConstraints": "ca",
"extendedKeyUsage": [ "ClientAuth", "ServerAuth", "OCSPSigning" ],
"keyUsage": [ "DigitalSignature", "NonRepudiation", "KeyEncipherment", "KeyAgreement", "KeyCertificateSign", "CRLSign" ],
"serialNumber": "7"
},
"validFrom": 1457879123,
"validUntil": 1773325523
} ],
"total": 2
}

certinfo

Description: gets the certificate information by certificate id. Arguments:

  • id : the certificate ID.

NOTE: it is also possible to get the certificate info with a GET request; the path must then contain the certificate id, like in the following example:

1 curl http://localhost:9500/0/ca/1
{
"data":
{
"certfile": "CoolCA.crt",
"id": 1,
"isCA": true,
"isClient": false,
"isServer": false,
"issuerId": 0,
"keybits": 2048,
"keyfile": "CoolCA.key",
"requesterUserId": 100,
"serialNumber": 1,
"status": "Issued",
"subjectName":
{
"basicConstraints": "ca",
"description": "My cool CA certificate.",
"emailAddress": "kedazo@severalnines.com",
"extendedKeyUsage": [ "OCSPSigning" ],
"keyUsage": [ "DigitalSignature", "KeyCertificateSign", "CRLSign" ],
"serialNumber": "1"
},
"validFrom": 1449579031,
"validUntil": 1765025431
},
"total": 1
}

importlocal

Description: imports a local certificate + key pair, together with its CA. NOTE: the CA certificate file path and name are not required for self-signed certificates.

If a certificate (or CA) is already imported, a duplicate will not be created in our CA database.

Arguments:

  • cert_file: the certificate file path on the controller
  • key_file: the private key file path on the controller
  • ca_file: the CA certificate path on the controller
  • name: the used path (including name) for the certificate in our CA storage
  • name_ca: the CA path/name in our CA storage; if not specified, 'name' will be used with a '_ca' suffix

The return value contains the certificate details; additionally, 'wasCaKnown' will be set to 'true' when the CA certificate was already known (existing) in the storage, so no CA certificate import happened.

1 curl -X POST -d '{"operation": "importlocal", "user_id": 100, "cert_file": "/var/lib/mycerts/001/certificate.crt", "key_file": "/var/lib/mycerts/001/private.key", "name": "imported_keys/mycertificate001" }' http://localhost:9500/0/ca
{
"cc_timestamp": 1463572534,
"data":
{
"certfile": "imported_keys/mycertificate001.crt",
"id": 7,
"inUseBy":
{
"clusters": [ ],
"hosts": [ ]
},
"isCA": true,
"isClient": true,
"isServer": true,
"issued": 1463572534,
"issuerId": 0,
"keybits": 2048,
"keyfile": "imported_keys/mycertificate001.key",
"serialNumber": 17912915768810270261,
"status": "Issued",
"subjectName":
{
"C": "US",
"CN": "*.test0123.tld",
"L": "New Haven",
"ST": "Connecticut",
"basicConstraints": "ca",
"serialNumber": "17912915768810270261"
},
"validFrom": 1463572199,
"validUntil": 1495108199
},
"requestStatus": "ok",
"total": 1
}

revoke

Description: sets the certificate status to 'Revoked'. Arguments:

  • id: the internal identifier of the certificate
  • user_id: the requester user ID (for accounting)

An example request:

1 curl -X POST -d '{"operation": "revoke", "user_id": 205, "id": 4}' http://localhost:9500/0/ca
{
"data":
{
"certfile": "coolca_servers/server02.crt",
"id": 4,
"isCA": false,
"isClient": true,
"isServer": true,
"issued": 1450195314,
"issuerId": 1,
"keybits": 2048,
"keyfile": "coolca_servers/server02.key",
"requesterUserId": 100,
"revoked": 1450195957,
"revokerUserId": 205,
"serialNumber": 4,
"status": "Revoked",
"subjectName":
{
"CN": "coolca.server.tld",
"extendedKeyUsage": [ "ClientAuth", "ServerAuth" ],
"keyUsage": [ "DigitalSignature", "NonRepudiation", "KeyEncipherment", "KeyAgreement" ],
"serialNumber": "4",
"subjectAltName": [ "IP Address:192.168.33.1", "DNS:2001:470:1f1a:2c2::43" ]
},
"validFrom": 1450105313,
"validUntil": 1536595313
},
"total": 1
}

move

Description: moves/renames a certificate (+ private key). Arguments:

  • id: the id of the certificate
  • name: the new path+name

An example request/reply:

1 curl -X POST -d '{"operation": "certinfo", "id": 4}' http://localhost:9500/0/ca | egrep '(keyfile|certfile)'
2  "certfile": "postgresql_single/cluster_3/server.crt",
3  "keyfile": "postgresql_single/cluster_3/server.key",
4 # and now rename
5 curl -X POST -d '{"operation": "move", "id": 4, "name": "new_path/pgsqlserver"}' http://localhost:9500/0/ca
{
"cc_timestamp": 1457442688,
"data":
{
"certfile": "new_path/pgsqlserver.crt",
"id": 4,
"isCA": false,
"isClient": false,
"isServer": true,
"issued": 1455111265,
"issuerId": 3,
"keybits": 2048,
"keyfile": "new_path/pgsqlserver.key",
"serialNumber": 4,
"status": "Issued",
"subjectName":
{
"CN": "PostgreSQL_Server_Cmon_Auto_Generated_Server_Certificate",
"description": "Generated by ClusterControl",
"extendedKeyUsage": [ "ServerAuth" ],
"keyUsage": [ "DigitalSignature", "KeyEncipherment", "KeyAgreement" ],
"serialNumber": "4"
},
"validFrom": 1455021265,
"validUntil": 1800707665
},
"requestStatus": "ok",
"total": 1
}

crl

Description: writes out a CRL (Certificate Revocation List) next to the CA certificate. Arguments: 'id': the CA certificate id (which signed the certificate and shall sign the CRL)

Example request/reply

1 your@shell$ curl -X POST -d '{"operation": "crl", "id": 1}' http://localhost:9500/0/ca
2 {
3  "data":
4  {
5  "certfile": "CoolCA.crt",
6  "crlfile": "CoolCA.crl"
7  },
8  "total": 1
9 }
10 your@shell$ sudo openssl crl -in /var/lib/cmon/ca/CoolCA.crl -text
11 Certificate Revocation List (CRL):
12  Version 2 (0x1)
13  Signature Algorithm: sha256WithRSAEncryption
14  Issuer: /description=My cool CA certificate./emailAddress=kedazo@severalnines.com
15  Last Update: Dec 15 16:18:46 2015 GMT
16  Next Update: Jan 14 16:18:46 2016 GMT
17 Revoked Certificates:
18  Serial Number: 04
19  Revocation Date: Dec 15 16:12:37 2015 GMT
20  CRL entry extensions:
21  X509v3 CRL Reason Code:
22  Unspecified
23  Signature Algorithm: sha256WithRSAEncryption
24  3c:1e:cf:3a:83:5c:29:a7:02:29:c8:fe:98:89:91:d2:95:68:
25  2c:5c:12:f8:83:b0:b6:87:17:cc:a0:9d:27:46:e7:07:a2:ac:
26  fe:66:cd:d0:ce:c0:fc:8c:db:f2:c9:3e:52:05:68:4a:09:26:
27  02:4e:73:dd:ff:2d:c8:d6:de:64:dc:f3:3c:de:cc:3a:1f:7a:
28  db:3d:ac:18:b6:d7:c1:92:f8:10:0a:9c:db:85:a7:9c:5d:07:
29  c7:8e:ff:bf:ff:77:cb:5d:4c:20:e8:2d:9b:37:3b:3f:e1:66:
30  13:13:15:8d:c9:84:82:9d:aa:fc:b4:44:05:bc:50:94:49:39:
31  e9:8a:e7:62:19:b9:da:6e:5f:4f:7a:38:76:68:5d:01:3e:7d:
32  da:b7:bc:d3:20:d4:b1:69:41:c5:d1:f3:4b:63:f3:e8:18:89:
33  d3:70:9f:79:33:84:76:6b:33:bb:67:79:a8:fa:98:c8:2f:ec:
34  b2:bb:18:a2:c6:31:5d:e1:5c:d9:02:c3:d8:da:79:5c:27:30:
35  1c:5c:11:71:83:54:09:c3:75:60:24:a1:b9:69:57:71:e6:d6:
36  ef:dc:12:ae:d2:5b:14:57:68:34:35:aa:ec:fa:fe:3a:09:1c:
37  53:fa:29:ac:21:82:3f:bc:67:d1:44:da:72:97:ed:c0:14:2a:
38  a9:33:62:82
39 -----BEGIN X509 CRL-----
40 MIIBtzCBoAIBATANBgkqhkiG9w0BAQsFADBKMSAwHgYDVQQNDBdNeSBjb29sIENB
41 IGNlcnRpZmljYXRlLjEmMCQGCSqGSIb3DQEJARYXa2VkYXpvQHNldmVyYWxuaW5l
42 cy5jb20XDTE1MTIxNTE2MTg0NloXDTE2MDExNDE2MTg0NlowIjAgAgEEFw0xNTEy
43 MTUxNjEyMzdaMAwwCgYDVR0VBAMKAQAwDQYJKoZIhvcNAQELBQADggEBADwezzqD
44 XCmnAinI/piJkdKVaCxcEviDsLaHF8ygnSdG5weirP5mzdDOwPyM2/LJPlIFaEoJ
45 JgJOc93/LcjW3mTc8zzezDofets9rBi218GS+BAKnNuFp5xdB8eO/7//d8tdTCDo
46 LZs3Oz/hZhMTFY3JhIKdqvy0RAW8UJRJOemK52IZudpuX096OHZoXQE+fdq3vNMg
47 1LFpQcXR80tj8+gYidNwn3kzhHZrM7tneaj6mMgv7LK7GKLGMV3hXNkCw9jaeVwn
48 MBxcEXGDVAnDdWAkoblpV3Hm1u/cEq7SWxRXaDQ1quz6/joJHFP6Kawhgj+8Z9FE
49 2nKX7cAUKqkzYoI=
50 -----END X509 CRL-----

The operational reports API

/${CLUSTERID}/reports

listreports

Description: lists the available reports for the cluster

1 curl -X POST -d '{"operation": "listreports" }' http://localhost:9500/1/reports
{
"cc_timestamp": 1448030788,
"data": [
{
"cid": 1,
"generatedby": "<unknown-rpc-user>",
"id": 1,
"name": "default_2015-11-20_154052.html",
"path": "/home/kedz/s9s_tmp/1/galera/cmon-reports/default_2015-11-20_154052.html",
"recipients": "",
"timestamp": 1448030453,
"type": "default"
},
{
"cid": 1,
"generatedby": "kedazo@s9s.io",
"id": 2,
"name": "default_2015-11-20_154405.html",
"path": "/home/kedz/s9s_tmp/1/galera/cmon-reports/default_2015-11-20_154405.html",
"recipients": "",
"timestamp": 1448030646,
"type": "default"
} ],
"requestStatus": "ok",
"total": 2
}

GET request to fetch the report file

Description: the backend provides a way to get the report files using the following HTTP GET request: http://localhost:9500/1/reports/default_2015-11-20_154405.html

In case of protected RPC, you may need to specify the authentication token in the GET request like: http://localhost:9500/16/reports/default_2016-03-22_115301.html?token=NrA2AoIrk6iq9ChD

Use the "name" field (of the report meta-data) to fetch a specific report file.
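
For example, a sketch of downloading a report with curl (the token and report name are taken from the URLs above):

1 # sketch only: save the report to a local file
2 curl -o default_2016-03-22_115301.html 'http://localhost:9500/16/reports/default_2016-03-22_115301.html?token=NrA2AoIrk6iq9ChD'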

generatereport

Generates a report.

Possible report types/names:

  • 'default': a generic report about the cluster state
  • 'availability': availability report
  • 'backup': backup report

Arguments:

  • name: the report name/type
  • username: the RPC username
  • clusterIds: some reports can contain info about multiple clusters; here the frontend can list the cluster IDs the user is interested in (or has access to), as a comma separated list, e.g.: "clusterIds": "1, 4, 5"
  • recipients: comma separated list of e-mail addresses to send out the report

Supported types:

  • type : default (per cluster, sysadmin report for one cluster)
  • type : availability (global, summary of all clusters)
  • type : backup (global, summary of all clusters)
1 curl -X POST -d '{"operation": "generatereport", "name": "default", "username": "kedazo@s9s.io" }' http://localhost:9500/1/reports
{
"cc_timestamp": 1448030646,
"data":
{
"cid": 1,
"generatedby": "kedazo@s9s.io",
"id": 2,
"name": "default_2015-11-20_154405.html",
"path": "/home/kedz/s9s_tmp/1/galera/cmon-reports/default_2015-11-20_154405.html",
"recipients": "",
"timestamp": 1448030646,
"type": "default"
},
"requestStatus": "ok"
}

deletereport

Deletes a report (both the report file and meta-data from cmon database)

1 curl -X POST -d '{"operation": "deletereport", "id": 1 }' http://localhost:9500/1/reports
{
"cc_timestamp": 1448031045,
"requestStatus": "ok"
}

addschedule

Creates a schedule for a report

Arguments:

  • name: the report name/type
  • schedule: the cron-like schedule line
  • username: the RPC username
  • clusterIds: some reports can contain info about multiple clusters; here the frontend can list the cluster IDs the user is interested in (or has access to), as a comma separated list, e.g.: "clusterIds": "1, 4, 5"
  • recipients: e-mail recipients to send the report (comma separated list)

Supported types:

  • type : default (per cluster, sysadmin report for one cluster)
  • type : availability (global, summary of all clusters)
  • type : backup (global, summary of all clusters)
1 curl -X POST -d '{"operation": "addschedule", "name": "default", "username": "kedazo@severalnines.com", "schedule": "*/5 * * * *", "recipients": "kedazo@severalnines.com"}' http://localhost:9500/1/reports
{
"cc_timestamp": 1448361188,
"id": 1,
"requestStatus": "ok"
}

schedules

Lists the reports currently scheduled for the given cluster.

1 curl -X POST -d '{"operation": "schedules"}' http://localhost:9500/1/reports
{
"cc_timestamp": 1448366203,
"data": [
{
"arguments": "",
"command": "default",
"id": 1,
"recipients": "kedazo@severalnines.com",
"schedule": "*/5 * * * *"
} ],
"requestStatus": "ok",
"total": 1
}

removeschedule

Removes an operational report schedule

Arguments:

  • id: the schedule id (from schedules RPC retval)
1 curl -X POST -d '{"operation": "removeschedule", "id": 1}' http://localhost:9500/1/reports
{
"cc_timestamp": 1448366306,
"requestStatus": "ok"
}

listErrorReports

Lists the available/created error reports for a specific cluster.

1 $ curl -XPOST -d '{ "token":"td0vd3usRMuNXSC3","operation": "listerrorreports"}' http://localhost:9500/156/reports
{
"cc_timestamp": 1496053698,
"data": [
{
"created": "2017-05-29T09:24:19.000Z",
"id": 1,
"path": "/var/www/clustercontrol/app/tmp/logs/error_report_20170529-112416.tar.gz",
"size": "220.80 KiB",
"www": true
},
{
"created": "2017-05-29T09:24:29.000Z",
"id": 2,
"path": "/home/kedz/s9s_tmp/error_report_20170529-112426.tar.gz",
"size": "221.72 KiB",
"www": false
},
{
"created": "2017-05-29T09:56:05.000Z",
"id": 3,
"path": "/var/www/html/clustercontrol/app/tmp/logs/error-report-cluster156-2017-05-29_115605.tar.gz",
"size": "238.48 KiB",
"www": true
},
{
"created": "2017-05-29T09:59:05.000Z",
"id": 4,
"path": "/var/www/html/clustercontrol/app/tmp/logs/error-report-cluster156-2017-05-29_115905.tar.gz",
"size": "243.97 KiB",
"www": true
} ],
"requestStatus": "ok",
"total": 4
}

downloadErrorReport

Arguments:

  • id: the error report id

The HTTP reply will be either an error message JSon, or the tar-gz stream directly.

An example GET request to download a report by ID

1 wget 'http://localhost:9500/156/reports?token=td0vd3usRMuNXSC3&operation=downloadErrorReport&id=4'

removeErrorReport

Arguments:

  • id: the error report id

A few example requests and replies:

1 $ curl -XPOST -d '{ "token":"td0vd3usRMuNXSC3","operation": "removeerrorreport", "id": 2}' http://localhost:9500/156/reports
{
"cc_timestamp": 1496053850,
"requestStatus": "ok"
}
1 $ curl -XPOST -d '{ "token":"td0vd3usRMuNXSC3","operation": "removeerrorreport", "id": 2}' http://localhost:9500/156/reports
{
"cc_timestamp": 1496053852,
"errorString": "Error-report not found.",
"requestStatus": "UnknownError"
}

The local repositories API

/${CLUSTERID}/repos

Description: with these API methods you can list and manage the local mirrored APT/YUM repositories created by cmon.

NOTE: these APIs are global (independent of clusters), as the repositories are shared across clusters.

Local repository jobs

See create_local_repository and update_local_repository jobs for the local APT/YUM repository mirroring.

listrepos

Lists available local repositories.

Arguments (for filtering):

  • cluster-type: filter by cluster-type (galera, mongodb, postgresql, ...)
  • vendor: filter by vendor (percona, mariadb, ...)
  • db-version: filter by db (major.minor) version (5.6, 10.1, ...)

Example request:

1 curl -X POST -d '{"operation": "listrepos" }' http://localhost:9500/1/repos
{
"cc_timestamp": 1447418582,
"data": [
{
"cluster-type": "galera",
"db-version": "5.6",
"local-path": "/var/www/html/cmon-repos/percona-5.6-yum-el7",
"name": "percona-5.6-yum-el7",
"os":
{
"release": "7",
"type": "redhat"
},
"timestamp": 1454203988,
"used-by-cluster": "1",
"vendor": "percona"
},
{
"cluster-type": "galera",
"db-version": "5.5",
"local-path": "/var/www/html/cmon-repos/percona-5.5-yum-el7",
"name": "percona-5.5-yum-el7",
"os":
{
"release": "7",
"type": "redhat"
},
"timestamp": 1454403988,
"used-by-cluster": "",
"vendor": "percona"
} ],
"requestStatus": "ok",
"total": 2
}

reposetup

Prints out the (semi-)generated repository filename and contents so you can use the repository manually. You have to substitute the CMON-HOSTNAME string with the controller's IP address before you use it.

Arguments:

  • name: the repository name

Example request:

1 curl -X POST -d '{"operation": "reposetup", "name": "percona-5.6-yum-el7" }' http://localhost:9500/1/repos
{
"cc_timestamp": 1447418574,
"content": "[percona-5.6-yum-el7]\nname = percona-5.6-yum-el7\nbaseurl = http://CMON-HOSTNAME/cmon-repos/percona-5.6-yum-el7\nenabled = 1\ngpgkey = http://CMON-HOSTNAME/cmon-repos/percona-5.6-yum-el7/localrepo-gpg-pubkey.asc\n",
"filename": "/etc/yum.repos.d/percona-5.6-yum-el7.repo",
"requestStatus": "ok"
}
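
A sketch of preparing the repository file on a client host (assumptions: 192.168.0.10 stands in for the controller's address, and the jq utility is used to extract the content field):

1 # sketch only: substitute CMON-HOSTNAME with the controller's address before use
2 curl -s -X POST -d '{"operation": "reposetup", "name": "percona-5.6-yum-el7" }' http://localhost:9500/1/repos | jq -r '.content' | sed 's/CMON-HOSTNAME/192.168.0.10/g' | sudo tee /etc/yum.repos.d/percona-5.6-yum-el7.repo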

removerepo

Removes a repository from the controller (it deletes the repository directory too).

Arguments:

  • name: the repository name

Example request:

1 curl -X POST -d '{"operation": "removerepo", "name": "percona-5.5-yum-el7" }' http://localhost:9500/1/repos
{
"cc_timestamp": 1447420775,
"requestStatus": "ok"
}

combinations

Gives the frontend/UI a list of the supported clusterType/vendor/dbVersion/osRelease combinations for the create_local_repository job.

The request returns lists containing the following (4) fields: clusterType, vendor, dbVersion and osRelease

Example request:

1 curl -X POST -d '{"operation": "combinations" }' http://localhost:9500/1/repos | json
{
"cc_timestamp": 1454427430,
"data": [
[
"mongodb",
"10gen",
"3.2",
"5"
],
[
"mongodb",
"10gen",
"3.2",
"6"
],
..
],
"requestStatus": "ok",
"total": 108
}

The settings API

/${CLUSTERID}/settings

Description: with this API you can get/modify the cmon configuration values.

set

Sets/updates a setting value in cmon.

NOTE: at the moment some settings might not be modifiable through cmon, and some of them might be overwritten (as cmon.cnf has priority for some keys).

Arguments:

  • key: the setting key
  • value: the new value of the property.

Example request:

1 curl -XPOST -d '{"operation":"set","key":"CLUSTER_NAME","value":"MyGreatCluster5"}' 'http://localhost:9500/5/settings'
{
"cc_timestamp": 1444995954,
"requestStatus": "ok"
}

And let's verify the change:

1 curl -XPOST -d'{"operation":"list","keys":"CLUSTER_NAME"}' 'http://localhost:9500/5/settings'
{
"cc_timestamp": 1444996169,
"data":
{
"CLUSTER_NAME": "MyGreatCluster5"
},
"requestStatus": "ok",
"total": 1
}

setvalues

The "setvalues" call obsoletes the previous "set" call because it can handle multiple settings and change them all in one RPC call.

"setvalues" will set all the values or none of them: if a "setvalues" request contains an invalid key, the whole request is rejected and none of the values are actually set.

Example:

{
"operation": "setvalues",
"configuration_values":
{
"CPU_CRITICAL": 95,
"CPU_WARNING": 91
}
}
{
"cc_timestamp": 1484036217,
"configuration_values":
{
"CPU_CRITICAL": 95,
"CPU_WARNING": 91
},
"requestStatus": "ok"
}

As the example shows, the backend reads back the configuration values after applying the changes and puts them into the reply. Since these are the actual configuration values, it is possible to double-check whether the configuration subsystem accepted all the new values.
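
The same request sent with curl might look like this (a sketch; the token and cluster ID are placeholders, following the conventions of the other examples):

1 # sketch only: placeholder token and cluster ID
2 curl -XPOST -d '{"operation": "setvalues", "configuration_values": {"CPU_CRITICAL": 95, "CPU_WARNING": 91}}' 'http://localhost:9500/5/settings?token=XXXXXXXXXXXXXXXX'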

list

Lists the current settings/values.

Arguments:

  • keys: an optional comma separated list to filter the results
1 curl -XPOST -d'{"operation":"list"}' 'http://localhost:9500/7/settings'
{
"data":
{
"BINDIR": "/usr/bin/",
"CLUSTER_NAME": "cluster_7",
"CMON_CONFIG_PATH": "/etc/cmon.d/cmon_7.cnf",
"CMON_DB": "cmon",
"CMON_HOSTNAME": "192.168.33.1",
"CMON_HOSTNAME1": "192.168.33.1",
"CMON_USER": "cmon",
"CONFIGDIR": "/etc/",
"DB_HOURLY_STATS_COLLECTION_INTERVAL": 5,
"DB_SCHEMA_STATS_COLLECTION_INTERVAL": 10800,
"DB_STATS_COLLECTION_INTERVAL": 30,
"ENABLE_CLUSTER_AUTORECOVERY": true,
"ENABLE_MYSQL_TIMEMACHINE": false,
"ENABLE_NODE_AUTORECOVERY": true,
"HOST_STATS_COLLECTION_INTERVAL": 60,
"LOG_COLLECTION_INTERVAL": 600,
"MONITORED_MYSQL_PORT": 3306,
"MYSQL_BASEDIR": "/usr",
"MYSQL_PORT": 3306,
"NDB_CONNECTSTRING": "127.0.0.1:1186",
"OS": "redhat",
"OS_USER": "kedz",
"OS_USER_HOME": "/home/kedz",
"PURGE": 7,
"SSH_KEYPATH": "/home/kedz/.ssh/id_rsa",
"SSH_PORT": 22,
"STAGING_DIR": "/home/kedz/s9s_tmp",
"SUDO": "sudo -n 2>/dev/null",
"USE_INTERNAL_REPOS": false,
"VENDOR": "percona",
"WWWROOT": "/var/www/html/"
},
"total": 31
}

setLicense

Sets and verifies a new license on the backend.

NOTE: The method may temporarily refuse to set a license if too many attempts with wrong keys were made in a short time.

1 curl -XPOST -d '{"operation":"setLicense","email":"demo@severalnines.com","company":"Severalnines AB","exp_date":"31/12/2014","lickey":"deadbeef0123"}' 'http://localhost:9500/0/settings'
{
"cc_timestamp": 1456921835,
"data":
{
"hasLicense": false,
"licenseExpires": -1,
"licenseStatus": "Invalid license data found."
},
"requestStatus": "ok"
}

getLicense

Returns the current license data (the key is masked, only the last 4 characters are visible) and the validity information.

1 $ curl -XPOST -d '{"operation":"getlicense"}' 'http://localhost:9500/0/settings?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d'
{
"cc_timestamp": 1523010743,
"data":
{
"hasLicense": true,
"license":
{
"company": "Severalnines AB",
"email": "test@severalnines.com",
"exp_date": "31/12/2018",
"lickey": "XXXXXXXXXXXXXXXX3242"
},
"licenseExpires": 268,
"licenseStatus": "License found."
},
"requestStatus": "ok"
}

verifyLicense

A method that only verifies (but does not set or change anything) whether the supplied license key looks valid. (NOTE: this method does not check the expiration date, just the key.)

1 curl -XPOST -d '{"operation":"verifyLicense","email":"my@email.tld","company":"Severalnines AB","exp_date":"31/12/2014","lickey":"huhuhuh122134241"}' 'http://localhost:9500/0/settings'
{
"cc_timestamp": 1456921835,
"data":
{
"hasLicense": false,
"licenseExpires": -1,
"licenseStatus": "The license key is valid (expiration date is not checked)."
},
"requestStatus": "ok"
}

generateToken

Generates and sets (in cmon_X.cnf) an RPC authentication token for the corresponding cluster. If this method succeeds, further RPC calls will only work when the generated token is specified.

Example usage of 'generateToken' RPC method:

1 $ curl -XPOST -d'{"operation":"generateToken"}' 'http://localhost:9500/4/settings'
{
"data":
{
"token": "ry3AabVrZS7XSzVV"
},
"requestStatus": "ok",
"total": 1
}
1 $ curl -XPOST -d'{"operation":"list","keys":"CLUSTER_NAME"}' 'http://localhost:9500/4/settings'
{
"cc_timestamp": 1455802341,
"errorString": "Access denied (invalid authentication token)",
"requestStatus": "error"
}
1 $ curl -XPOST -d'{"token":"ry3AabVrZS7XSzVV","operation":"list","keys":"CLUSTER_NAME"}' 'http://localhost:9500/4/settings'
{
"cc_timestamp": 1455802356,
"data":
{
"CLUSTER_NAME": "cluster_4"
},
"requestStatus": "ok",
"total": 1
}
1 $ sudo -n grep ^rpc_key /etc/cmon.d/cmon_4.cnf
2 rpc_key=ry3AabVrZS7XSzVV
3 $

getMailserver

Description: this method can be used to obtain the (global, not cluster-specific) SMTP mail server settings.

1 curl 'http://localhost:9500/0/settings?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d&operation=get_mailserver'
{
"cc_timestamp": 1514992255,
"requestStatus": "ok",
"smtp_server":
{
"hostname": "my.smtp.server",
"password": "password",
"port": 587,
"sender": "hullala@smtp.server.tld",
"use_tls": false,
"username": "hollala@smtp.server.tld"
}
}

setMailserver

Description: a method to set/update the global SMTP server settings.

Arguments in 'smtp_server':

  • hostname: the SMTP server hostname
  • port: the SMTP server port
  • use_tls: whether TLS should be used (if false but the server supports STARTTLS, cmon will automatically upgrade the connection to an encrypted one)
  • username: the SMTP authentication username (plain)
  • password: the SMTP authentication password (plain)
  • sender: a valid e-mail address accepted by the SMTP server, to be used in the From: field
1 curl -XPOST -d'{"operation":"set_mailserver","smtp_server": { "hostname": "my.smtp.server", "password": "password","port": 587, "sender": "hullala@smtp.server.tld", "use_tls": false, "username": "hullala@smtp.server.tld" }}' 'http://localhost:9500/0/settings?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d'
{
"cc_timestamp": 1514992933,
"requestStatus": "ok"
}

unregisterHost

Description: this method can be used to remove a service from cmon without stopping/uninstalling it; it just gets unregistered from the controller.

Arguments:

  • host: a Cmon*Host instance (hostname and port fields are mandatory)

An example request & reply:

1 $ curl -XPOST -d '{"token":"6T17RE8PBaSzOKbZ","operation": "unregisterHost", "host":{"hostname":"10.0.3.29","port":9600}}' http://127.0.0.1:9500/180/settings
{
"cc_timestamp": 1504868192,
"errorString": "",
"requestStatus": "ok"
}

The Processes API

/${CLUSTERID}/proc

Description: at this RPC path you can request cmon to return the collected running processes on the nodes/controller.

top

Returns all the running processes (like the 'top' utility) and their properties (cpu/mem usage) from the nodes.

Possible arguments:

  • "hostname" : to filter the results (by hostname)

Example:

{
"operation": "top",
"including_hosts": "127.0.0.1",
"limit": 3
}
{
"cc_timestamp": 1484036247,
"data": [
{
"hostname": "127.0.0.1",
"processes": [
{
"class_name": "CmonProcStats",
"cpu_time": 3.22118e+06,
"cpu_usage": 23.3078,
"executable": "VBoxHeadless",
"mem_usage": 0.404718,
"nice": 0,
"pid": 15020,
"priority": 20,
"res_mem": 101978112,
"shr_mem": 42090496,
"state": "S",
"user": "kedz",
"virt_mem": 2684211200
},
{
"class_name": "CmonProcStats",
"cpu_time": 96383,
"cpu_usage": 13.9251,
"executable": "firefox",
"mem_usage": 5.66649,
"nice": 0,
"pid": 20764,
"priority": 20,
"res_mem": 1427804160,
"shr_mem": 130043904,
"state": "S",
"user": "kedz",
"virt_mem": 2861780992
},
{
"class_name": "CmonProcStats",
"cpu_time": 2.55142e+06,
"cpu_usage": 10.1273,
"executable": "VBoxHeadless",
"mem_usage": 0.437571,
"nice": 0,
"pid": 17618,
"priority": 20,
"res_mem": 110256128,
"shr_mem": 42549248,
"state": "S",
"user": "kedz",
"virt_mem": 2764001280
} ],
"status":
{
"class_name": "CmonCollectorReport",
"last_sample": "2017-01-10T08:17:23.000Z",
"last_sample_age_secs": 4,
"message": "Sample 3 created",
"sample_counter": 3,
"success": true
}
} ],
"requestStatus": "ok",
"total": 1
}

The GetRunningProcesses Call

The "GetRunningProcesses" obsoletes the previous "top" RPC call. It has a number of new features including new parameters and a different return value.

The call supports the following arguments (more arguments will be added soon):

  • including_hosts
    A list of host names that will be returned if they are found in the cluster. If any of these hosts are not in the cluster they will simply be ignored.
  • limit
    Limits the number of processes returned for every host.
  • with_process_properties
    The list of process properties that will be returned.

Example:

{
"operation": "getRunningProcesses",
"including_hosts": "127.0.0.1",
"limit": 3
}
{
"cc_timestamp": 1484036247,
"data": [
{
"hostname": "127.0.0.1",
"processes": [
{
"class_name": "CmonProcStats",
"cpu_time": 3.22118e+06,
"cpu_usage": 23.3078,
"executable": "VBoxHeadless",
"mem_usage": 0.404718,
"nice": 0,
"pid": 15020,
"priority": 20,
"res_mem": 101978112,
"shr_mem": 42090496,
"state": "S",
"user": "kedz",
"virt_mem": 2684211200
},
{
"class_name": "CmonProcStats",
"cpu_time": 96383,
"cpu_usage": 13.9251,
"executable": "firefox",
"mem_usage": 5.66649,
"nice": 0,
"pid": 20764,
"priority": 20,
"res_mem": 1427804160,
"shr_mem": 130043904,
"state": "S",
"user": "kedz",
"virt_mem": 2861780992
},
{
"class_name": "CmonProcStats",
"cpu_time": 2.55142e+06,
"cpu_usage": 10.1273,
"executable": "VBoxHeadless",
"mem_usage": 0.437571,
"nice": 0,
"pid": 17618,
"priority": 20,
"res_mem": 110256128,
"shr_mem": 42549248,
"state": "S",
"user": "kedz",
"virt_mem": 2764001280
} ],
"sample_report":
{
"class_name": "CmonCollectorReport",
"last_sample": "2017-01-10T08:17:23.000Z",
"last_sample_age_secs": 4,
"message": "Sample 3 created",
"sample_counter": 3,
"success": true
}
} ],
"requestStatus": "ok",
"total": 1
}
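
A request sketch that also passes with_process_properties (a sketch only; the property names used here are taken from the reply fields above, and the token/cluster ID are placeholders):

1 # sketch only: restrict the reply to a few process properties
2 curl -XPOST -d '{"operation": "getRunningProcesses", "including_hosts": "127.0.0.1", "limit": 3, "with_process_properties": ["pid", "executable", "cpu_usage", "mem_usage"]}' 'http://localhost:9500/1/proc'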

managedProcesses

This API returns the list (with current status) of the monitored managed processes.

(This information was previously available from the old 'processes' and 'ext_proc' SQL tables.)

Possible arguments:

  • "hostname" : to filter the results (by hostname)
1 curl -XPOST -d '{"operation": "managedprocesses"}' 'http://localhost:9500/5/proc'
{
"cc_timestamp": 1435658110,
"data": [
{
"category": "ClusterProvider",
"command": "nohup service postgresql start",
"executable": "postgres",
"getpidcommand": "pgrep -f ^postgres",
"hostname": "192.168.33.121",
"managed": true,
"pid": 891
},
{
"category": "ClusterProvider",
"command": "nohup service postgresql start",
"executable": "postgres",
"getpidcommand": "pgrep -f ^postgres",
"hostname": "192.168.33.122",
"managed": true,
"pid": 848
} ],
"requestStatus": "ok",
"total": 2
}

toggleManaged

With this flag you can temporarily deactivate/activate the process recovery.

(When managed is set to false, cmon will not try to restart the failing process.)

Possible arguments:

  • "hostname": to specify on which host we toggle the managed flag
  • "executable": the process executable name to enable/disable
  • "managed": bool (true/false) value of the new setting

An example request:

1 $ curl -XPOST -d '{"operation": "toggleManaged", "hostname": "192.168.33.122", "executable": "postgres", "managed": false}' 'http://localhost:9500/5/proc'
{
"cc_timestamp": 1437478194,
"requestStatus": "ok"
}

Virtual Files API

/${CLUSTERID}/files

Description: here you can get files from CMON using GET requests; for example, if you have an imperative script which produces a 'graph', it will be stored here for some time (~1 hour).

Please consider the following script

var graph = new CmonGraph;
for (row = 0; row < 100; ++row) {
graph.setData(0, row, row);
graph.setData(1, row, sin(row / 10.0));
graph.setData(2, row, cos(row / 10.0));
}
exit(graph);

When you execute it, you get an RPC reply like this:

{
"cc_timestamp": 1427809088,
"requestStatus": "ok",
"results": {
"exitStatus": {
"class": "CmonGraph",
"fileName": "51ff4aec-29cd-baab-f2fb-e3467cc254f8.png",
"height": 600,
"mimeType": "image/png",
"width": 800
},
"fileName": "/dkedves_test/graph01.js",
"status": "Ended"
},
"success": true
}

And then you can get the generated (in-memory) image from the following URL:

1 curl http://localhost:9500/1/files/51ff4aec-29cd-baab-f2fb-e3467cc254f8.png

The imperative scripting API

/${CLUSTERID}/imp

With these API methods, you can execute scripts written in the imperative (JavaScript-like) language.

NOTE: These APIs are only available from cmon >= 1.2.10.

saveScript

Description: saves (creates/updates) a script file. Arguments:

  • filename: the full path of the script (/path/to/script.js)
  • user: the author username (for internal logging purposes)
  • content: the script contents
  • tags: (optional) the script tags (can be a ; separated list in a string or a JSon list)

For tags, see setTags for an example of how they can be passed.

1 $ curl -XPOST -d'{"operation":"saveScript","content":"var a = 1;\nprint(a);","user":"superAdmin007","filename":"/test/test1.js"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425390000,
"requestStatus": "ok"
}

loadScript

Description: loads a script file (or only its meta-data). Arguments:

  • filename: the full path of the script (/path/to/script.js)
  • onlymetadata: (defaults to false) if set, the content will not be included in the reply (see the variant sketched after the example below)
1 $ curl -XPOST -d'{"operation":"loadScript","filename":"/test/test1.js"}' 'http://localhost:9500/14/imp';
{
"cc_timestamp": 1425390125,
"data": {
"content": "var a = 7;\nprint(a);",
"filename": "test1.js",
"lasteditor": "superAdmin007",
"path": "/test/",
"timestamp": 1425390123,
"version": 2
},
"requestStatus": "ok"
}
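
A variant that fetches only the meta-data (a sketch; the same file as in the example above):

1 # sketch only: request the meta-data without the script content
2 curl -XPOST -d'{"operation":"loadScript","filename":"/test/test1.js","onlymetadata":true}' 'http://localhost:9500/14/imp'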

compileScript

Description: compiles (parses) the script file. Arguments:

  • filename: the full path of the script (/path/to/script.js)

NOTE: every compiled script is kept in memory until a new modification arrives or until it becomes invalidated (currently set to 120 secs). You can still execute a non-compiled or invalidated script; it will then be compiled first and executed.

1 curl -XPOST -d'{"operation":"compileScript","filename":"/test/test1.js"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425392797,
"requestStatus": "ok",
"results": { }
}

On error, you may get a syntax error back in the results, like here:

{
"cc_timestamp": 1425393004,
"requestStatus": "ok",
"results":
{
"exitStatus": null,
"messages": [
{
"lineNumber": 1,
"message": ":1: syntax error.",
"severity": "error"
} ],
"status": "Parsed"
}
}

executeScript

Description: executes (and before that re-compiles if needed) a script

Arguments:

  • filename: the full path of the script (/path/to/script.js)
  • arguments: the arguments string
  • user: the username (for internal logging purposes)

Here is an example of a successful run:

1 $ curl -XPOST -d'{"operation":"executeScript","filename":"/test/test1.js"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425393260,
"requestStatus": "ok",
"results":
{
"exitStatus": null,
"messages": [
{
"message": "8"
} ],
"status": "Ended"
}
}

And another example, when an error occurs:

1 curl -XPOST -d'{"operation":"executeScript","filename":"/test/test2.js"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425393300,
"requestStatus": "ok",
"results":
{
"exitStatus": null,
"messages": [
{
"lineNumber": 1,
"message": ":1: syntax error.",
"severity": "error"
} ],
"status": "Parsed"
}
}

schedule

Description: schedules a script to be periodically executed

Arguments:

  • filename: the full path of the script (/path/to/script.js)
  • schedule: the cron-like schedule string (or an empty string -> disables the schedule)
  • arguments: the arguments string
  • user: the username (for internal logging purposes)

Schedule string: it consists of 5 parts (separated by space or tab): m h dom mon dow (minute, hour, day of month, month, day of week).

For example, we schedule the following script to run every 5 minutes:

1 curl -XPOST -d'{"operation":"schedule","filename":"/test/test1.js","schedule":"*/5 * * * *","arguments": "50 120 true"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425643537,
"requestStatus": "ok"
}

changeschedule

Description: changes the settings of a schedule

Arguments:

  • schedule_id: the Id of the schedule. MANDATORY.
  • schedule: the cron-like schedule string (or an empty string -> disables the schedule) (OPTIONAL)
  • arguments: the arguments string (OPTIONAL)
  • enabled: true/false (OPTIONAL)

Examples: To disable a schedule:

1 $ curl -XPOST -d '{"operation": "changeSchedule", "schedule_id": 1314,"enabled": false, "schedule": "*/5 * * * *", "token": "VxNXyL0TFl6CARkO"}' http://127.0.01:9500/102/imp
{
"cc_timestamp": 1477661410,
"requestStatus": "ok"
}
1 $ curl -XPOST -d '{"operation": "dirTree", "path":"/s9s/host/", "token": "VxNXyL0TFl6CARkO"}' http://127.0.0.1:9500/102/imp
{
"cc_timestamp": 1477661415,
"data":
{
"contents": [
{
"filename": "cpu_usage.js",
"name": "cpu_usage.js",
"path": "/s9s/host/",
"schedule": "*/5 * * * *",
"schedule_args": "",
"schedule_enabled": false,
"schedule_id": 1314,
"settings":
{
"project":
{
"tags": "s9s;host"
}
},
"timestamp": 1466500249,
"type": "file",
"version": 1
} ],
"name": "host",
"path": "/s9s/",
"type": "directory"
},
"requestStatus": "ok"
}

To enable a schedule:

1 curl -XPOST -d '{"operation": "changeSchedule", "schedule_id": 1314,"enabled": true, "schedule": "*/10 * * * *", "token": "VxNXyL0TFl6CARkO"}' http://127.0.0.1:9500/102/imp
{
"cc_timestamp": 1425643537,
"requestStatus": "ok"
}

scheduleResults

Description: fetches the last result of one or more scheduled scripts

Arguments:

  • filename|filenames: a single or multiple script paths (/path/to/script.js)

An example query where we fetch multiple script results:

1 curl -XPOST -d'{"operation":"scheduleResults","filenames":["/test/test1.js","/test/test2.js"]}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425652908,
"data": [
{
"exitStatus": "null",
"filename": "/test/test1.js",
"messages": [
{
"message": "13"
} ],
"status": "Ended",
"timestamp": 1425652500
},
{
"exitStatus": "null",
"filename": "/test/test2.js",
"messages": [
{
"message": "55"
} ],
"status": "Ended",
"timestamp": 1425652500
} ],
"requestStatus": "ok",
"total": 2
}

dirTree

Description: returns the file directory tree. Arguments:

  • path: (defaults to "/") specify a directory (/subdir1/) if you want to get only a sub-tree
  • showFiles: (defaults to true) whether to also show the files in the tree
1 curl -XPOST -d '{"operation": "dirTree", "path":"/s9s/host/", "token": "VxNXyL0TFl6CARkO"}' http://127.0.0.1:9500/102/imp
{
"cc_timestamp": 1477661429,
"data":
{
"contents": [
{
"filename": "cpu_usage.js",
"name": "cpu_usage.js",
"path": "/s9s/host/",
"schedule": "*/10 * * * *",
"schedule_args": "",
"schedule_enabled": true,
"schedule_id": 1314,
"settings":
{
"project":
{
"tags": "s9s;host"
}
},
"timestamp": 1466500249,
"type": "file",
"version": 1
} ],
"name": "host",
"path": "/s9s/",
"type": "directory"
},
"requestStatus": "ok"
}

setTags

Description: sets tags for the specified script-file. Arguments:

  • filename: the full path of the script (/path/to/script.js)
  • user: the username (for internal logging purposes)
  • tags: the script tags (can be a ; separated list in a string or a JSon list)
1 # JSon list:
2 curl -XPOST -d'{"operation":"setTags","user":"superadmin01","tags":["mytag","helllooo"],"filename":"/test/tags/script1.js"}' 'http://localhost:9500/1/imp'
3 
4 # ; separated string:
5 curl -XPOST -d'{"operation":"setTags","tags":"tag1;tag2;tag3","filename":"/test/tags/script2.js"}' 'http://localhost:9500/1/imp'

removeScript

Description: removes a script from the system

Arguments:

  • filename: the full path of the script (/path/to/script.js)
1 $ curl -XPOST -d'{"operation":"removeScript","filename":"/test2/subdir/filename.js"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425394458,
"requestStatus": "ok"
}

moveScript

Description: renames or moves a script to a new place

Arguments:

  • filename: the full path of the script (/path/to/script.js)
  • newname: the new full path of the script (/other/path/newname.js)
  • user: the username (for internal logging purposes)
1 $ curl -XPOST -d'{"operation":"moveScript","filename":"/test/test2.js","newname":"/newpath/newname.js"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425394458,
"requestStatus": "ok"
}

importTarGz

Description: imports a set of scripts from a .tar.gz archive. The archive should contain a directory with the same name as the .tar.gz file (test.tar.gz should contain a 'test' directory).

Arguments:

  • localpath: the local (full) file path to the .tar.gz file on the controller
  • overwrite: (defaults to true) if this is set to false, the call will fail if any existing script files would be replaced by the .tar.gz contents (see the sketch after the example below)

NOTE: the imported scripts will appear in a subdirectory (using the .tar.gz name)

NOTE: the filename 'root.tar.gz' is handled specially, will be imported to '/'

1 curl -XPOST -d'{"operation":"importTarGz","localpath":"/home/kedz/mytargz.tar.gz"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425483265,
"requestStatus": "ok"
}
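
If existing scripts must not be replaced, the overwrite flag can be set to false explicitly. A minimal sketch, reusing the archive path from the example above:

1 curl -XPOST -d'{"operation":"importTarGz","localpath":"/home/kedz/mytargz.tar.gz","overwrite":false}' 'http://localhost:9500/14/imp'

With overwrite disabled, the call fails instead of replacing any file already present in the script tree.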

exportTarGz

Description: exports a sub-tree of the scripts to a .tar.gz file (note that the .tar.gz file will be overwritten if it already exists).

Arguments:

  • outdir: the local directory path of the output .tar.gz file on the controller
  • path: the virtual directory to be exported (must start with '/')

NOTE: the filename will be constructed from the path name and the ".tar.gz" suffix.

NOTE2: if you export '/' then the output filename will be root.tar.gz

1 curl -XPOST -d'{"operation":"exportTarGz","outdir":"/home/kedz/","path":"/test"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425555463,
"requestStatus": "ok"
}
1 sudo tar -tzf test.tar.gz
2 test/
3 test/test1.js
4 test/test2.js

The log API

/${CLUSTERID}/log

NOTE: This API is only available from cmon >= 1.2.11.

These RPC methods are provided for the UI to obtain the collected log file entries from the nodes.

list

List the (collected) log file names of the cluster.

1 curl -XPOST -d'{"operation":"list"}' 'http://localhost:9500/6/log'
{
"cc_timestamp": 1436352125,
"data": [
{
"filename": "/var/log/mysql/error.log",
"hostname": "192.168.33.1"
},
{
"filename": "/var/log/cmon_6.log",
"hostname": "192.168.33.1"
},
{
"filename": "/var/log/mariadb/mariadb.log",
"hostname": "192.168.33.123"
} ],
"requestStatus": "ok",
"total": 3
}

contents

Gets the processed contents of a log file. The results are sorted in descending order of the log message creation time.

Arguments:

  • hostname: the host name
  • filename: the log filename
  • limit: optional limit on the number of returned log entries
1 $ curl -XPOST -d'{"operation":"contents", "hostname": "192.168.33.1", "filename": "/var/log/mysql/error.log", "limit": 5}' 'http://localhost:9500/6/log'
{
"cc_timestamp": 1436353601,
"data": [
{
"component": "Unknown",
"created": 1436344053,
"ident": "",
"message": "IP address '192.168.33.1' could not be resolved: Name or service not known",
"severity": "LOG_WARNING"
},
{
"component": "Unknown",
"created": 1436344053,
"ident": "/usr/sbin/mysqld",
"message": "ready for connections. Version: '5.6.24-0ubuntu2.1' socket: '/var/run/mysqld/mysqld.sock' port: 3306 (Ubuntu)",
"severity": "LOG_INFO"
},
{
"component": "Unknown",
"created": 1436344053,
"ident": "",
"message": "Event Scheduler: Loaded 2 events",
"severity": "LOG_INFO"
},
{
"component": "Unknown",
"created": 1436344053,
"ident": "",
"message": "Server socket created on IP: '0.0.0.0'.",
"severity": "LOG_INFO"
},
{
"component": "Unknown",
"created": 1436344053,
"ident": "",
"message": "- '0.0.0.0' resolves to '0.0.0.0';",
"severity": "LOG_INFO"
} ],
"requestStatus": "ok",
"total": 5
}

The Logger API

/${CLUSTERID}/logger

The logger API is the new RPC API for the new logger subsystem. This is under construction.

The getLogEntries RPC call

The log is returned in cmonlogentry objects.

1 curl -XPOST -d '{"operation": "getLogEntries", "token": "QfVlqKGRajRrsaaF"}' http://localhost:9500/78/logger
{
"operation" : "getLogEntries",
"created_after" : "2015-05-08T10:10:45.+0200Z",
"created_before" : "2019-05-08T10:10:45.+0200Z",
"limit" : 2,
"offset" : 0,
"cluster_id" : 200
}
{
"cc_timestamp": 1506582060,
"log_entries": [
{
"class_name": "CmonLogMessage",
"component": "Mail",
"created": "2017-09-28T07:00:50.144Z",
"log_class": "LogMessage",
"log_id": 83,
"log_origins":
{
"sender_binary": "cmon",
"sender_file": "cmonmailqueue.cpp",
"sender_line": 295,
"sender_pid": 5948,
"tv_nsec": 144442441,
"tv_sec": 1506582050
},
"log_specifics":
{
"cluster_id": 200,
"message_text": "Refusing to send e-mail with no recipients (subject: default_repl_200 alarm (CRITICAL): Galera node recovery failed)"
},
"severity": "LOG_WARNING"
},
{
"class_name": "CmonLogMessage",
"created": "2017-09-28T07:00:49.715Z",
"log_class": "AlarmRaised",
"log_id": 82,
"log_origins":
{
"sender_binary": "cmon",
"sender_file": "cmonalarmdb.cpp",
"sender_line": 1383,
"sender_pid": 5948,
"tv_nsec": 715240582,
"tv_sec": 1506582049
},
"log_specifics":
{
"alarm":
{
"alarm_id": 0,
"class_name": "CmonAlarm",
"component": 5,
"component_name": "ClusterRecovery",
"counter": 0,
"created": "2017-09-28T07:00:49.000Z",
"ignored": 0,
"measured": 0,
"message": "Galera node recovery failed. Permanent error.",
"recommendation": "Check mysql error.log file.",
"reported": "2017-09-28T07:00:49.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Galera node recovery failed",
"type": 3000,
"type_name": "GaleraNodeRecoveryFail"
},
"cluster_id": 200,
"message_text": "Alarm raised: Galera node recovery failed."
},
"severity": "LOG_CRIT"
} ],
"log_entry_counts":
{
"component":
{
"Mail": 1,
"Unknown": 1
},
"hostname":
{
"": 2
},
"sender_binary":
{
"cmon": 2
},
"severity":
{
"LOG_CRIT": 1,
"LOG_WARNING": 1
}
},
"requestStatus": "ok",
"total": 36
}

The getLogStatistics RPC call

1 curl -XPOST -d '{"operation": "getLogStatistics", "token": "QfVlqKGRajRrsaaF"}' http://localhost:9500/0/logger
{
"operation" : "getLogStatistics"
}
{
"cc_timestamp": 1506582060,
"log_statistics":
{
"cluster_log_statistics": [
{
"cluster_id": 200,
"disabled": false,
"entries_received": 80,
"format_string": "%C : (%S) %M",
"last_error_message": "Success.",
"lines_written": 80,
"log_file_name": "./cmon-ut-communication.log",
"max_log_file_size": 5242880,
"messages_per_sec": 1.33333,
"syslog_enabled": false,
"write_cycle_counter": 3
},
{
"cluster_id": 400,
"disabled": false,
"entries_received": 3,
"format_string": "%C : (%S) %M",
"last_error_message": "",
"lines_written": 0,
"log_file_name": "",
"max_log_file_size": 5242880,
"messages_per_sec": 0.05,
"syslog_enabled": false,
"write_cycle_counter": 1
} ],
"current_time": "2017-09-28T07:01:00.181Z",
"entries_statistics":
{
"entries_received": 83,
"entries_written_to_cmondb": 83
},
"has_cmondb": true,
"last_error_message": "Success.",
"last_flush_time": "2017-09-28T07:00:52.732Z",
"log_debug_enabled": false,
"writer_thread_running": true,
"writer_thread_started": "2017-09-28T07:00:22.643Z"
},
"requestStatus": "ok"
}

The config API

/${CLUSTERID}/config

NOTE: This API is only available from cmon >= 1.2.10.

These RPC methods are provided for the UI for viewing and manipulating (editing) the nodes' config files.

list

List the (collected) configuration files of the cluster.

1 curl -XPOST -d'{"operation":"list"}' 'http://localhost:9500/14/config'
{
"cc_timestamp": 1435239570,
"data": [
{
"crc": -1563075013,
"filename": "postgresql.conf",
"hasChange": false,
"hostId": 15,
"hostname": "192.168.33.121",
"path": "/var/lib/pgsql/data/postgresql.conf",
"size": 22048,
"timestamp": 1435239001
},
{
"crc": -988904738,
"filename": "postgresql.conf",
"hasChange": false,
"hostId": 18,
"hostname": "192.168.33.122",
"path": "/var/lib/pgsql/data/postgresql.conf",
"size": 21130,
"timestamp": 1435239001
},
{
"crc": 352626019,
"filename": "recovery.conf",
"hasChange": false,
"hostId": 18,
"hostname": "192.168.33.122",
"path": "/var/lib/pgsql/data/recovery.conf",
"size": 155,
"timestamp": 1435239001
} ],
"requestStatus": "ok",
"total": 3
}

variables

Returns the currently existing/set sections and variables and their values in a config file.

Arguments:

  • hostId, hostname: you can specify one of these if you are looking for a specific host's configuration
  • showAllHosts: (defaults to false) if it is set, all hosts will be returned even if no config files are available for them (because they have not been fetched, or for any other reason); see the sketch after the example below.
1 $ curl -XPOST -d'{"operation":"variables","hostname":"192.168.33.123"}' 'http://localhost:9500/6/config'
{
"cc_timestamp": 1436936500,
"data": [
{
"hostId": 127,
"hostname": "192.168.33.123",
"variables": [
{
"filepath": "/etc/my.cnf",
"linenumber": 2,
"section": "mysqld",
"value": "/var/lib/mysql",
"variablename": "datadir"
},
{
"filepath": "/etc/my.cnf",
"linenumber": 3,
"section": "mysqld",
"value": "/var/lib/mysql/mysql.sock",
"variablename": "socket"
},
{
"filepath": "/etc/my.cnf",
"linenumber": 5,
"section": "mysqld",
"value": "0",
"variablename": "symbolic-links"
},
{
"filepath": "/etc/my.cnf",
"linenumber": 13,
"section": "mysqld_safe",
"value": "/var/log/mariadb/mariadb.log",
"variablename": "log-error"
},
{
"filepath": "/etc/my.cnf",
"linenumber": 14,
"section": "mysqld_safe",
"value": "/var/run/mariadb/mariadb.pid",
"variablename": "pid-file"
},
{
"filepath": "/etc/my.cnf.d/server.cnf",
"linenumber": 13,
"section": "mysqld",
"value": "0.0.0.0",
"variablename": "bind-address"
} ]
} ],
"requestStatus": "ok",
"total": 1
}
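
To also list hosts for which no configuration files have been collected, the showAllHosts flag can be set. A minimal sketch based on the request above:

1 $ curl -XPOST -d'{"operation":"variables","showAllHosts":true}' 'http://localhost:9500/6/config'

Hosts without collected config files are then expected to appear in the reply as well (presumably with an empty variables list).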

contents

Get the contents of one or more configuration file(s).

The 'filename' argument must be specified; optionally the caller may also specify the 'hostname' or the 'hostId'.

1 curl -XPOST -d'{"operation":"contents","filename":"postgresql.conf","hostId":5}' 'http://localhost:9500/14/config'

And an example reply (note that the full contents are not copied here, as they were too long):

{
"cc_timestamp": 1423215173,
"data": [
{
"contents": "#restart_after_crash = on\r\nlisten_address = '127.0.0.1'\r\n",
"crc": 1910057875,
"filename": "postgresql.conf",
"hasChange": true,
"hostId": 5,
"hostname": "192.168.33.99",
"size": 21274,
"timestamp": 1425289335
} ],
"requestStatus": "ok",
"total": 1
}

edit

Perform an edit action on a config file (or on multiple config files). The same filtering is available here as in the 'contents' operation, so 'filename' must be specified and the other arguments are optional.

add

With this action you can add new configuration entries to the config file(s).

Arguments:

  • section (optional): the config file section to be edited (think of [section]); see the second example below
  • key: the configuration name key (for example: ssl_cert_file)
  • value: and the value to be used for this new configuration entry
1 curl -XPOST -d'{"operation":"edit","filename":"postgresql.conf","hostId":5,"action":"add","key":"ssl_cert_file","value":"/etc/certfile.crt"}' 'http://localhost:9500/14/config'
{
"cc_timestamp": 1423215692,
"requestStatus": "ok"
}

In this example a new line will be added to the config file with the following content:

ssl_cert_file=/etc/certfile.crt\n\n
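
When the target file uses INI-style sections (for example a MySQL my.cnf), the optional section argument can be included as well. A minimal sketch with hypothetical key and value:

1 curl -XPOST -d'{"operation":"edit","filename":"my.cnf","hostId":5,"action":"add","section":"mysqld","key":"max_connections","value":"500"}' 'http://localhost:9500/14/config'

The new entry would be added under the [mysqld] section of the file.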

change

With this action you can change the value(s) of the configuration file(s) entry(ies).

Arguments:

  • section (optional): the config file section to be edited (think of [section])
  • key: the configuration name key (for example: ssl_cert_file)
  • value: the new value to be set for the key
1 curl -XPOST -d'{"operation":"edit","filename":"postgresql.conf","hostId":5,"action":"change","key":"ssl_cert_file","value":"/etc/mycert.crt"}' 'http://localhost:9500/14/config'
{
"cc_timestamp": 1423215692,
"requestStatus": "ok"
}

In this example the 'ssl_cert_file' configuration key value will be changed to the new value:

ssl_cert_file=/etc/mycert.crt\n\n

disable

This action disables, i.e. comments out, an unused/unneeded configuration key.

Arguments:

  • section (optional): the config file section to be edited (think of [section])
  • key: the configuration name key (for example: ssl_cert_file)
1 curl -XPOST -d'{"operation":"edit","filename":"postgresql.conf","hostId":5,"action":"disable","key":"ssl_cert_file"}' 'http://localhost:9500/14/config'
{
"cc_timestamp": 1423215692,
"requestStatus": "ok"
}

In this example the 'ssl_cert_file' configuration key value will be commented out:

# ssl_cert_file=/etc/mycert.crt\n\n

setContent

This operation allows the web-ui to replace the whole contents of a configuration file.

Arguments:

  • hostname/hostId : one of these must be specified
  • filename: the filename to be changed
  • content: the contents of the file
  • export_config: (bool) when this is enabled, the config will also be saved directly on the node (see the sketch at the end of this section)

An example:

1 curl -XPOST -d'{"operation":"setcontent","hostId":3,"filename":"postgresql.conf","content":"hello=world"}' 'http://localhost:9500/14/config';

{
"cc_timestamp": 1425982204,
"requestStatus": "ok"
}
1 curl -XPOST -d'{"operation":"contents","filename":"postgresql.conf","hostId":3}' 'http://localhost:9500/14/config'
{
"cc_timestamp": 1425982219,
"data": [
{
"contents": "hello=world",
"crc": -1308491584,
"filename": "postgresql.conf",
"hasChange": true,
"hostId": 3,
"hostname": "10.10.10.13",
"size": 11,
"timestamp": 1425982204
} ],
"requestStatus": "ok",
"total": 1
}
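
To also write the new contents directly to the configuration file on the node, the export_config flag can be enabled. A minimal sketch based on the example above:

1 curl -XPOST -d'{"operation":"setcontent","hostId":3,"filename":"postgresql.conf","content":"hello=world","export_config":true}' 'http://localhost:9500/14/config'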

The Spreadsheet API

/${CLUSTERID}/sheet

The JSon syntax of this interface is described in the Spreadsheets documentation.

The Jobs API

/${CLUSTERID}/job

createJob

The createJob RPC call is deprecated, please use createJobInstance instead.

To push a new job (to a specific cluster, or a generic one [use clusterId = 0 in the path]), post a JSon document in the following format, where the "job" value is a properly formatted JSon jobspec (see the details at 'jobs_json_format.text'):

{ "job": {...} }

For auditing the UI (or other clients) should specify the following fields:

{
"operation": "createJob",
"ip": "${THE IP OF THE CLIENT/BROWSER}",
"username": "${THE USERNAME}",
"userid": "${THE USER ID, dcps}",
"job": { }
}

Look at this example of how a new MongoDB server can be installed by submitting a create_cluster job:

1 $ curl -X POST -H"Content-Type: application/json" -d '{
2  "operation": "createJob",
3  "ip": "192.168.55.32",
4  "username": "kedazo",
5  "userid": 6532,
6  "job": { "command": "create_cluster",
7  "job_data": { "type": "mongodb", "mongodb_hostname": "192.168.33.10",
8  "mongodb_user": "root", "mongodb_password": "password",
9  "mongodb_rs_name": "test_replica_set", "enable_mongodb_uninstall": 1,
10  "ssh_port": 22, "ssh_user": "kedz",
11  "ssh_keyfile": "/home/kedz/.ssh/id_rsa",
12  "api_id": 1 }}}' \
13  http://localhost:9500/0/job

The interface reports back the 'jobId' in the following format:

{
"requestStatus": "ok",
"jobId": 24,
"status": "DEFINED"
}

createJobInstance

This call obsoletes the "createJob" RPC call. It provides a standard way to pass job information in a CmonJobInstance Class object.

Example:

{
"operation" : "createJobInstance",
"job" :
{
"class_name": "CmonJobInstance",
"job_spec": "{\n}",
"status_text": "Waiting",
"title": "The title of the job.",
"user_name": "pipas",
"user_id": 42
}
}
{
"cc_timestamp": 1506582050,
"job":
{
"can_be_aborted": false,
"can_be_deleted": true,
"class_name": "CmonJobInstance",
"cluster_id": 200,
"created": "2017-09-28T07:00:50.000Z",
"exit_code": 0,
"group_id": 0,
"group_name": "",
"job_id": 3,
"job_spec": "{\n}",
"status": "DEFINED",
"status_text": "Pending",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
},
"requestStatus": "ok"
}

It is also possible to create scheduled and recurring jobs using this RPC call by adding the following properties to the job instance object:

  • scheduled: This field has to be a string representation of a date&time value. The job will be scheduled and executed only after the internal clock reaches the schedule date&time.
  • recurrence: This can be used to create a recurring job, a job that is executed over and over again. The value of this property has to be a five field long crontab style recurrence definition (e.g. "*/30 * * * *"); see the example below.

Please note that a job can not be scheduled and recurring at the same time (this is not implemented), and the scheduling field has precedence over the recurrence.
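
For example, a recurring job could be created like this (a sketch; the job_spec is left empty as in the examples above):

{
"operation" : "createJobInstance",
"job" :
{
"class_name": "CmonJobInstance",
"job_spec": "{\n}",
"title": "A recurring job.",
"user_name": "pipas",
"user_id": 42,
"recurrence": "*/30 * * * *"
}
}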

scheduleJobInstance

A call that can be used to create job instances that will be executed in the future. The passed job instance should have a "scheduled" field that holds the planned execution time of the job instance, which should be in the future.

Example:

{
"operation" : "scheduleJobInstance",
"job" :
{
"class_name": "CmonJobInstance",
"job_spec": "{\n}",
"status_text": "Waiting",
"title": "The title of the job.",
"user_name": "pipas",
"user_id": 42,
"scheduled": "2017-09-28T07:01:50.177Z"
}
}
{
"cc_timestamp": 1506582050,
"job":
{
"can_be_aborted": false,
"can_be_deleted": true,
"class_name": "CmonJobInstance",
"cluster_id": 200,
"created": "2017-09-28T07:00:50.000Z",
"exit_code": 0,
"group_id": 0,
"group_name": "",
"job_id": 2,
"job_spec": "{\n}",
"scheduled": "2017-09-28T07:01:50.177Z",
"status": "SCHEDULED",
"status_text": "Scheduled",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
},
"requestStatus": "ok"
}

Note: I am not sure if we need this, I think it was a mistake (my mistake) to add this. The jobs can be scheduled using the normal "createJobInstance" call, please use that one.

getStatus

To fetch a job's current status, send the following request:

1 curl -X POST -H"Content-Type: application/json" -d '{
2  "operation": "getStatus", "jobId": 14 }' http://localhost:9500/0/job

You will get a JSon reply like this:

{
"exitCode": 0,
"jobId": 14,
"requestStatus": "ok",
"status": "FINISHED",
"statusText": "Job finished."
}

or an error message otherwise:

1 curl -X POST -H"Content-Type: application/json" -d '{"operation": "getStatus", "jobId": 99 }' http://localhost:9500/0/job
{
"errorString": "No such job.",
"requestStatus": "error"
}
1 curl -X POST -H"Content-Type: application/json" -d '{"operation": "getStatus", "jobId": 13 }' http://localhost:9500/55/job
{
"errorString": "Cluster 55 is not running.",
"requestStatus": "error"
}

getJobInstance

The "getJobInstance" RPC call obsoletes the previous "getStatus" call because it is able to return all the properties of a job and not just the status. In the future when we add new properties to the jobs (e.g. a progress percent to be shown as a progress bar) this RPC call will be handle the new properties.

This RPC call is also available under the deprecated "getJob" name.

Example:

{
"operation": "getJob",
"job_id": 3
}
{
"cc_timestamp": 1506582050,
"job":
{
"can_be_aborted": false,
"can_be_deleted": true,
"class_name": "CmonJobInstance",
"cluster_id": 200,
"created": "2017-09-28T07:00:50.000Z",
"exit_code": 0,
"group_id": 0,
"group_name": "",
"job_id": 3,
"job_spec": "{\n}",
"status": "DEFINED",
"status_text": "Pending",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
},
"requestStatus": "ok"
}

getJobInstances

The getJobInstances call supports the "limit" and "offset" arguments the same way the SQL syntax uses the LIMIT and OFFSET keywords. The default limit is 100. A paginated request sketch follows the example below.

Example:

{
"operation": "getJobInstances"
}
{
"cc_timestamp": 1506582050,
"jobs": [
{
"can_be_aborted": false,
"can_be_deleted": true,
"class_name": "CmonJobInstance",
"cluster_id": 200,
"created": "2017-09-28T07:00:50.000Z",
"exit_code": 0,
"group_id": 0,
"group_name": "",
"job_id": 4,
"job_spec": "{\n}",
"status": "DEFINED",
"status_text": "Pending",
"title": "The second job.",
"user_id": 42,
"user_name": "pipas"
},
{
"can_be_aborted": false,
"can_be_deleted": true,
"class_name": "CmonJobInstance",
"cluster_id": 200,
"created": "2017-09-28T07:00:50.000Z",
"exit_code": 0,
"group_id": 0,
"group_name": "",
"job_id": 3,
"job_spec": "{\n}",
"status": "DEFINED",
"status_text": "Pending",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
},
{
"can_be_aborted": false,
"can_be_deleted": true,
"class_name": "CmonJobInstance",
"cluster_id": 200,
"created": "2017-09-28T07:00:50.000Z",
"exit_code": 0,
"group_id": 0,
"group_name": "",
"job_id": 2,
"job_spec": "{\n}",
"scheduled": "2017-09-28T07:01:50.177Z",
"status": "SCHEDULED",
"status_text": "Scheduled",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
},
{
"can_be_aborted": false,
"can_be_deleted": true,
"class_name": "CmonJobInstance",
"cluster_id": 200,
"created": "2017-09-28T07:00:50.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 1,
"job_spec": "{\n}",
"status": "DEFINED",
"status_text": "Pending",
"title": "Unknown Command",
"user_id": 0,
"user_name": "system"
} ],
"requestStatus": "ok",
"total": 4
}
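
A paginated request might look like this (a minimal sketch; it would return at most two job instances, skipping the first one):

{
"operation": "getJobInstances",
"limit": 2,
"offset": 1
}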

deleteJobInstance

The "deleteJobInstance" can be used to delete job instances that are not currently executed.

Example:

{
"operation": "deleteJobInstance",
"job_id": 5
}
{
"cc_timestamp": 1506582060,
"requestStatus": "ok"
}

getActiveJobInstances

The getActiveJobInstances RPC call is similar to the getJobInstances call, but it only returns job instances that are not finished or aborted. The default limit is 100.

This call also supports the "limit" and "offset" arguments the same way the SQL syntax uses the LIMIT and OFFSET keywords.

Example:

{
"operation": "getActiveJobInstances"
}
{
"cc_timestamp": 1506582050,
"jobs": [
{
"can_be_aborted": false,
"can_be_deleted": true,
"class_name": "CmonJobInstance",
"cluster_id": 200,
"created": "2017-09-28T07:00:50.000Z",
"exit_code": 0,
"group_id": 0,
"group_name": "",
"job_id": 4,
"job_spec": "{\n}",
"status": "DEFINED",
"status_text": "Pending",
"title": "The second job.",
"user_id": 42,
"user_name": "pipas"
},
{
"can_be_aborted": false,
"can_be_deleted": true,
"class_name": "CmonJobInstance",
"cluster_id": 200,
"created": "2017-09-28T07:00:50.000Z",
"exit_code": 0,
"group_id": 0,
"group_name": "",
"job_id": 3,
"job_spec": "{\n}",
"status": "DEFINED",
"status_text": "Pending",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
},
{
"can_be_aborted": false,
"can_be_deleted": true,
"class_name": "CmonJobInstance",
"cluster_id": 200,
"created": "2017-09-28T07:00:50.000Z",
"exit_code": 0,
"group_id": 0,
"group_name": "",
"job_id": 2,
"job_spec": "{\n}",
"scheduled": "2017-09-28T07:01:50.177Z",
"status": "SCHEDULED",
"status_text": "Scheduled",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
},
{
"can_be_aborted": false,
"can_be_deleted": true,
"class_name": "CmonJobInstance",
"cluster_id": 200,
"created": "2017-09-28T07:00:50.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 1,
"job_spec": "{\n}",
"status": "DEFINED",
"status_text": "Pending",
"title": "Unknown Command",
"user_id": 0,
"user_name": "system"
} ],
"requestStatus": "ok",
"total": 4
}

getJobMessages

This operation returns all the job messages related to the 'jobId'. For the exact request & response format please see the following example.

Arguments:

  • jobId: the jobId
1 curl -X POST -H"Content-Type: application/json" -d '{"operation": "getJobMessages", "jobId": 24 }' http://localhost:9500/0/job
{
"jobId": 24,
"messages": [
{
"exitCode": 0,
"id": 732,
"message": "Checking job parameters.",
"time": "2014-08-29 16:34:10"
},
{
"exitCode": 0,
"id": 733,
"message": "Check if host is already exist in other cluster.",
"time": "2014-08-29 16:34:10"
},
{
"exitCode": 1,
"id": 734,
"message": "Host (192.168.33.10) is already in an other cluster.",
"time": "2014-08-29 16:34:10"
} ],
"requestStatus": "ok"
}

getJobLog

The "getJobLog" RPC call can be used to simultaneously access the properties of the specified job and its messages. This call is an enhanced version of the "gotjobmessages" call.

curl -X POST -H"Content-Type: application/json" -d'{ "operation": "getjoblog", "token": "rBf51gA3NZgJgys6", "jobId":"39287" }' http://localhost:9500/59/job

Example:

{
"job_id": "25",
"limit": 2,
"offset": 5,
"operation": "getjoblog"
}
{
"cc_timestamp": 1484036170,
"job":
{
"can_be_aborted": false,
"can_be_deleted": true,
"class_name": "CmonJobInstance",
"cluster_id": 0,
"created": "2017-01-10T08:16:10.000Z",
"exit_code": 0,
"job_id": 25,
"job_spec": "{\n}",
"status": "DEFINED",
"status_text": "Waiting",
"title": "Unknown Command",
"user_id": 0,
"user_name": "system"
},
"messages": [
{
"class_name": "CmonJobMessage",
"created": "2017-01-10T08:16:10.000Z",
"job_id": 25,
"message_id": 253,
"message_status": "JOB_SUCCESS",
"message_text": "Test message 04."
},
{
"class_name": "CmonJobMessage",
"created": "2017-01-10T08:16:10.000Z",
"job_id": 25,
"message_id": 252,
"message_status": "JOB_SUCCESS",
"message_text": "Test message 03."
} ],
"requestStatus": "ok"
}

The properties of both the jobs and the job messages might change in the future; we might add new properties to implement new features, but hopefully there will be no need to change the general structure of the reply message.

getJobStatistics

The getJobStatistics call is designed to return the number of jobs in every state, so the caller can find out how many jobs remain to be done.

Example:

{
"operation": "getJobStatistics"
}
{
"cc_timestamp": 1506582050,
"cluster_id": 200,
"requestStatus": "ok",
"statistics":
{
"by_state":
{
"ABORTED": 0,
"DEFINED": 3,
"DEQUEUED": 0,
"FAILED": 0,
"FINISHED": 0,
"RUNNING": 0,
"SCHEDULED": 1
},
"class_name": "CmonJobStatistics"
}
}

getJobs

Get the job list of a specific cluster. NOTE: the jobs will be returned sorted by jobId in descending order.

Arguments:

  • limit: (optional) if you want to get only a specified number of the latest jobs
  • returnfrom: (optional) if this unix-timestamp is specified, cmon will return only the jobs newer than this time (see the sketch at the end of this section)

Please note that the following example returns only the two latest jobs because of the "limit" argument:

1 curl -X POST -H"Content-Type: application/json" -d'{ "operation": "getJobs", "limit": 2 }' http://localhost:9500/4/job
{
"cc_timestamp": 1432721311,
"jobs": [
{
"exitCode": 0,
"ip": "127.0.0.1",
"jobId": 1467,
"jobStr": "{\"command\":\"restore_backup\",\"job_data\":{\"backupid\":\"20\",\"stop_cluster\":false}}",
"status": "FINISHED",
"time": 1432721296,
"userid": 1,
"username": "Admin"
},
{
"exitCode": 1,
"ip": "127.0.0.1",
"jobId": 1466,
"jobStr": "{\"command\":\"backup\",\"job_data\":{\"hostname\":\"192.168.33.122\",\"backupdir\":\"/tmp/backups\",\"cc_storage\":\"0\",\"compression\":\"1\",\"port\":\"5432\"}}",
"status": "FAILED",
"time": 1432721284,
"userid": 1,
"username": "Admin"
} ],
"requestStatus": "ok"
}

The returned "jobs" item is JSon list of "job" objects, a job "object" with the following syntax:

{
"exitCode": 1,
"jobId": 23,
"jobStr": "{\"command\": \"create_cluster\",
\"job_data\": {
\"api_id\": 1,
\"enable_mongodb_uninstall\": 1,
\"mongodb_hostname\": \"192.168.33.10\",
\"mongodb_password\": \"password\",
\"mongodb_rs_name\": \"test_replica_set\",
\"mongodb_user\": \"root\",
\"ssh_keyfile\": \"/home/kedz/.ssh/id_rsa\",
\"ssh_port\": 22,
\"ssh_user\": \"kedz\",
\"type\": \"mongodb\"
}}",
"status": "FAILED",
"time": "2014-08-29 16:33:25"
}

NOTE that this contains the job specification in string format instead of real JSon, as the cmon jobs table may contain syntactically incorrect jobspecs or jobspecs in the old format.
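
To fetch only jobs newer than a given point in time, the returnfrom argument can be combined with limit. A minimal sketch with a hypothetical unix timestamp:

1 curl -X POST -H"Content-Type: application/json" -d'{ "operation": "getJobs", "limit": 2, "returnfrom": 1432700000 }' http://localhost:9500/4/job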

The Alarm API

/${CLUSTERID}/alarm

The "getStatistics" Request

The "getStatistics" call can be used to find how many active alarms are there. The alarms that has set to be "ignored" are not counted.

The request can also contain two dates to filter the alarms by creation date and/or report date. If these dates are provided in the request, alarms that were created/reported before the given dates are not counted.

Example:

{
"operation" : "getStatistics",
"created_after" : "2016-05-08T10:10:45.+0200Z",
"reported_after": "2016-06-07T10:10:45.+0200Z",
"cluster_id" : 200
}
{
"alarm_statistics": [
{
"class_name": "CmonAlarmStatistics",
"cluster_id": 200,
"critical": 5,
"warning": 5
} ],
"cc_timestamp": 1506582060,
"requestStatus": "ok"
}

This call also works with multiple cluster IDs in the request.

{
"operation" : "getStatistics",
"created_after" : "2016-05-08T10:10:45.+0200Z",
"reported_after": "2016-06-07T10:10:45.+0200Z",
"cluster_ids" : [ 200, 201, 202 ]
}
{
"alarm_statistics": [
{
"class_name": "CmonAlarmStatistics",
"cluster_id": 200,
"critical": 5,
"warning": 5
},
{
"class_name": "CmonAlarmStatistics",
"cluster_id": 201,
"critical": 0,
"warning": 0
},
{
"class_name": "CmonAlarmStatistics",
"cluster_id": 202,
"critical": 0,
"warning": 0
} ],
"cc_timestamp": 1506582060,
"requestStatus": "ok"
}

The "getAlarms" Request

The "getAlarms" call can be used to retrieve the active alarms from the backend. This call works with one cluster ID (aka "cluster_id") and also with multiple cluster IDs (aka "cluster_ids").

Example (one cluster):

{
"operation" : "getAlarms",
"created_after" : "2016-05-08T10:10:45.+0200Z",
"reported_after": "2016-06-07T10:10:45.+0200Z",
"cluster_id" : 200
}
{
"alarms": [
{
"alarm_id": 1,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 0,
"component_name": "Network",
"counter": 1,
"created": "2017-09-28T07:00:31.000Z",
"host_id": 1,
"hostname": "127.0.0.1",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: Host 127.0.0.1 is not responding to ping after 3 cycles, the host is most likely unreachable.",
"recommendation": "Restart failed host, check firewall.",
"reported": "2017-09-28T07:00:31.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Host is not responding",
"type": 10006,
"type_name": "HostUnreachable"
},
{
"alarm_id": 2,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 0,
"component_name": "Network",
"counter": 1,
"created": "2017-09-28T07:00:31.000Z",
"host_id": 2,
"hostname": "127.0.0.2",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.2 reports: Host 127.0.0.2 is not responding to ping after 3 cycles, the host is most likely unreachable.",
"recommendation": "Restart failed host, check firewall.",
"reported": "2017-09-28T07:00:31.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Host is not responding",
"type": 10006,
"type_name": "HostUnreachable"
},
{
"alarm_id": 3,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 7,
"component_name": "Host",
"counter": 1,
"created": "2017-09-28T07:00:31.000Z",
"host_id": 1,
"hostname": "127.0.0.1",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: 8.53 percent swap space is used. Swapping is decremental for database performance.\ntop - 09:00:30 up 7 days, 1:51, 1 user, load average: 1.82, 1.19, 1.04\nTasks: 507 total, 2 running, 501 sleeping, 0 stopped, 4 zombie\n%Cpu(s): 10.2 us, 3.6 sy, 0.0 ni, 84.8 id, 1.1 wa, 0.0 hi, 0.4 si, 0.0 st\nKiB Mem : 24605308 total, 540308 free, 6815976 used, 17249024 buff/cache\nKiB Swap: 8338428 total, 7626808 free, 711620 used. 16033516 avail Mem \n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 2042 mysql 20 0 5732288 1.446g 9888 S 100.0 6.2 33:12.80 mysqld\n17363 root 20 0 47172 7372 2648 R 93.8 0.0 12:08.85 systemd\n 6101 root 20 0 2470512 66128 17252 S 18.8 0.3 2:53.05 cmon\n 6509 kedz 20 0 14372 3392 2952 S 18.8 0.0 0:00.10 bash\n 2050 root 20 0 414736 94284 73144 S 6.2 0.4 128:48.67 Xorg\n 3990 kedz 20 0 1241996 163096 54408 S 6.2 0.7 104:06.60 budgie-wm\n 4879 kedz 20 0 435160 6316 4824 S 6.2 0.0 4:44.69 gvfsd-trash\n 7979 kedz 20 0 37780 3776 2936 R 6.2 0.0 0:00.01 top\n23911 kedz 20 0 2163064 165120 61804 S 6.2 0.7 21:27.24 vlc\n 1 root 20 0 206244 7800 4904 S 0.0 0.0 0:55.73 systemd\n 2 root 20 0 0 0 0 S 0.0 0.0 0:00.34 kthreadd\n 4 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:+\n 6 root 20 0 0 0 0 S 0.0 0.0 1:21.52 ksoftirqd/0\n 7 root 20 0 0 0 0 S 0.0 0.0 19:05.92 rcu_sched\n 8 root 20 0 0 0 0 S 0.0 0.0 0:00.08 rcu_bh\n 9 root rt 0 0 0 0 S 0.0 0.0 0:30.34 migration/0\n 10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 lru-add-dr+\n 11 root rt 0 0 0 0 S 0.0 0.0 0:01.73 watchdog/0\n 12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/0\n 13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/1\n 14 root rt 0 0 0 0 S 0.0 0.0 0:01.99 watchdog/1\n 15 root rt 0 0 0 0 S 0.0 0.0 0:29.35 migration/1\n 16 root 20 0 0 0 0 S 0.0 0.0 1:19.61 ksoftirqd/1\n 18 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/1:+\n 19 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/2\n 20 root rt 0 0 0 0 S 0.0 0.0 0:01.72 watchdog/2\n 21 root rt 0 0 0 0 S 0.0 0.0 0:27.23 migration/2\n 22 root 20 0 0 0 0 S 0.0 0.0 1:04.63 ksoftirqd/2\n 24 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/2:+\n 25 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/3\n 26 root rt 0 0 0 0 S 0.0 0.0 0:01.60 watchdog/3\n 27 root rt 0 0 0 0 S 0.0 0.0 0:22.55 migration/3\n 28 root 20 0 0 0 0 S 0.0 0.0 0:55.98 ksoftirqd/3\n 30 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/3:+\n 31 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/4\n 32 root rt 0 0 0 0 S 0.0 0.0 0:01.72 watchdog/4\n 33 root rt 0 0 0 0 S 0.0 0.0 0:29.07 migration/4\n 34 root 20 0 0 0 0 S 0.0 0.0 1:26.63 ksoftirqd/4\n 36 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/4:+\n 37 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/5\n 38 root rt 0 0 0 0 S 0.0 0.0 0:01.60 watchdog/5\n 39 root rt 0 0 0 0 S 0.0 0.0 0:27.87 migration/5\n 40 root 20 0 0 0 0 S 0.0 0.0 1:00.00 ksoftirqd/5\n 42 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/5:+\n 43 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/6\n 44 root rt 0 0",
"recommendation": "Increase RAM, tune swappiness, reduce memory footprint of mysqld, mongodb or other processes running on the node. You can also reboot the server as a temporary countermeasure.",
"reported": "2017-09-28T07:00:31.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "Host is swapping",
"type": 10000,
"type_name": "HostSwapping"
},
{
"alarm_id": 4,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 7,
"component_name": "Host",
"counter": 1,
"created": "2017-09-28T07:00:31.000Z",
"host_id": 1,
"hostname": "127.0.0.1",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: \n82.47 percent disk space has been used on sda2.\n\nFilesystem Size Used Avail Use% Mounted on\n/dev/sda1 1022M 1.4M 1021M 0% /boot/efi\n/dev/sda2 226G 187G 40G 82% /\n/dev/sdb2 929G 686G 243G 74% /mnt/data2\n/dev/sdc1 391G 384G 7.7G 98% /var/lib/schroot/mount/trusty-ec07a19b-46ad-4839-bbb7-94931fefed61\n/dev/sdd2 109G 30G 79G 27% /mnt/ssd128g\n/dev/sde1 3.6T 493G 3.1T 13% /mnt/4tb\n",
"recommendation": "The server is running low on free disk space. Remove old logfiles, backups etc, or increase the size of the disk volume. Disk full may cause the database to fail and recovery very difficult.",
"reported": "2017-09-28T07:00:31.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "Storage space alarm",
"type": 10002,
"type_name": "HostDiskUsage"
},
{
"alarm_id": 5,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 7,
"component_name": "Host",
"counter": 1,
"created": "2017-09-28T07:00:31.000Z",
"host_id": 2,
"hostname": "127.0.0.2",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.2 reports: 8.53 percent swap space is used. Swapping is decremental for database performance.\ntop - 09:00:31 up 7 days, 1:51, 1 user, load average: 1.82, 1.19, 1.04\nTasks: 507 total, 2 running, 501 sleeping, 0 stopped, 4 zombie\n%Cpu(s): 10.2 us, 3.6 sy, 0.0 ni, 84.8 id, 1.1 wa, 0.0 hi, 0.4 si, 0.0 st\nKiB Mem : 24605308 total, 538932 free, 6817212 used, 17249164 buff/cache\nKiB Swap: 8338428 total, 7626808 free, 711620 used. 16032432 avail Mem \n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 2042 mysql 20 0 5732288 1.446g 9888 S 100.0 6.2 33:13.08 mysqld\n17363 root 20 0 47172 7372 2648 R 93.8 0.0 12:09.14 systemd\n 6509 kedz 20 0 14372 3392 2952 S 18.8 0.0 0:00.15 bash\n 6101 root 20 0 2470512 66128 17252 S 12.5 0.3 2:53.09 cmon\n 2228 root -51 0 0 0 0 S 6.2 0.0 129:24.43 irq/30-nvi+\n 3990 kedz 20 0 1241996 163096 54408 S 6.2 0.7 104:06.61 budgie-wm\n 5948 kedz 20 0 1027104 28124 21636 S 6.2 0.1 0:00.24 ut_s9sclus+\n 1 root 20 0 206244 7800 4904 S 0.0 0.0 0:55.73 systemd\n 2 root 20 0 0 0 0 S 0.0 0.0 0:00.34 kthreadd\n 4 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:+\n 6 root 20 0 0 0 0 S 0.0 0.0 1:21.52 ksoftirqd/0\n 7 root 20 0 0 0 0 S 0.0 0.0 19:05.92 rcu_sched\n 8 root 20 0 0 0 0 S 0.0 0.0 0:00.08 rcu_bh\n 9 root rt 0 0 0 0 S 0.0 0.0 0:30.34 migration/0\n 10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 lru-add-dr+\n 11 root rt 0 0 0 0 S 0.0 0.0 0:01.73 watchdog/0\n 12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/0\n 13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/1\n 14 root rt 0 0 0 0 S 0.0 0.0 0:01.99 watchdog/1\n 15 root rt 0 0 0 0 S 0.0 0.0 0:29.35 migration/1\n 16 root 20 0 0 0 0 S 0.0 0.0 1:19.61 ksoftirqd/1\n 18 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/1:+\n 19 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/2\n 20 root rt 0 0 0 0 S 0.0 0.0 0:01.72 watchdog/2\n 21 root rt 0 0 0 0 S 0.0 0.0 0:27.23 migration/2\n 22 root 20 0 0 0 0 S 0.0 0.0 1:04.63 ksoftirqd/2\n 24 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/2:+\n 25 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/3\n 26 root rt 0 0 0 0 S 0.0 0.0 0:01.60 watchdog/3\n 27 root rt 0 0 0 0 S 0.0 0.0 0:22.55 migration/3\n 28 root 20 0 0 0 0 S 0.0 0.0 0:55.98 ksoftirqd/3\n 30 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/3:+\n 31 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/4\n 32 root rt 0 0 0 0 S 0.0 0.0 0:01.72 watchdog/4\n 33 root rt 0 0 0 0 S 0.0 0.0 0:29.07 migration/4\n 34 root 20 0 0 0 0 S 0.0 0.0 1:26.63 ksoftirqd/4\n 36 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/4:+\n 37 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/5\n 38 root rt 0 0 0 0 S 0.0 0.0 0:01.60 watchdog/5\n 39 root rt 0 0 0 0 S 0.0 0.0 0:27.87 migration/5\n 40 root 20 0 0 0 0 S 0.0 0.0 1:00.00 ksoftirqd/5\n 42 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/5:+\n 43 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/6\n 44 root rt 0 0 0 0 S 0.0 0.0 0:01.55 watchdog/6\n 45 root rt 0 0 0 0 S 0.0 0.0 0:25.96 migration/6\n 46 ro",
"recommendation": "Increase RAM, tune swappiness, reduce memory footprint of mysqld, mongodb or other processes running on the node. You can also reboot the server as a temporary countermeasure.",
"reported": "2017-09-28T07:00:31.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "Host is swapping",
"type": 10000,
"type_name": "HostSwapping"
},
{
"alarm_id": 6,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 7,
"component_name": "Host",
"counter": 1,
"created": "2017-09-28T07:00:32.000Z",
"host_id": 2,
"hostname": "127.0.0.2",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.2 reports: \n82.47 percent disk space has been used on sda2.\n\nFilesystem Size Used Avail Use% Mounted on\n/dev/sda1 1022M 1.4M 1021M 0% /boot/efi\n/dev/sda2 226G 187G 40G 82% /\n/dev/sdb2 929G 686G 243G 74% /mnt/data2\n/dev/sdc1 391G 384G 7.7G 98% /var/lib/schroot/mount/trusty-ec07a19b-46ad-4839-bbb7-94931fefed61\n/dev/sdd2 109G 30G 79G 27% /mnt/ssd128g\n/dev/sde1 3.6T 493G 3.1T 13% /mnt/4tb\n",
"recommendation": "The server is running low on free disk space. Remove old logfiles, backups etc, or increase the size of the disk volume. Disk full may cause the database to fail and recovery very difficult.",
"reported": "2017-09-28T07:00:32.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "Storage space alarm",
"type": 10002,
"type_name": "HostDiskUsage"
},
{
"alarm_id": 7,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 8,
"component_name": "DbHealth",
"counter": 1,
"created": "2017-09-28T07:00:32.000Z",
"ignored": 0,
"measured": 0,
"message": "72 redundant indexes have been detected. We recommend that you drop the redundant indexes during a maintenance window. To find out which indexes please go to Performance -> Schema Analyzer -> Show Redundant Indexes in the UI.\nRead here how to perform schema changes in safe ways: \nhttp://www.severalnines.com/blog/online-schema-upgrade-mysql-galera-cluster-using-toi-method",
"recommendation": "Go to Performance -> Table Analyzer -> Show Redundant Indexes to find out which indexes.",
"reported": "2017-09-28T07:00:32.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "Redundant indexes detected",
"type": 4003,
"type_name": "MySqlIndexAnalyzer"
},
{
"alarm_id": 8,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 6,
"component_name": "Node",
"counter": 1,
"created": "2017-09-28T07:00:32.000Z",
"host_id": 1,
"hostname": "127.0.0.1",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: MySQL Server is not connected to data nodes.",
"recommendation": "Check firewall/security rules and ndb-connectstring.",
"reported": "2017-09-28T07:00:32.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "MySQL server is not connected to NDB",
"type": 5010,
"type_name": "MySqlClusterNotConnected"
},
{
"alarm_id": 9,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 6,
"component_name": "Node",
"counter": 2,
"created": "2017-09-28T07:00:41.000Z",
"ignored": 0,
"measured": 0,
"message": "The cmon lost contact to the management server(s).",
"recommendation": "Check the connection and/or star the management servers.",
"reported": "2017-09-28T07:00:41.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "The cmon lost contact to the management server(s)",
"type": 5005,
"type_name": "NdbMgmdFailure"
},
{
"alarm_id": 11,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 5,
"component_name": "ClusterRecovery",
"counter": 1,
"created": "2017-09-28T07:00:50.000Z",
"ignored": 0,
"measured": 0,
"message": "Galera node recovery failed. Permanent error.",
"recommendation": "Check mysql error.log file.",
"reported": "2017-09-28T07:00:50.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Galera node recovery failed",
"type": 3000,
"type_name": "GaleraNodeRecoveryFail"
} ],
"cc_timestamp": 1506582060,
"cluster_id": 200,
"requestStatus": "ok"
}

Example (multiple clusters):

{
"operation" : "getAlarms",
"cluster_ids" : [ 200, 201 ]
}
{
"alarms": [
{
"alarm_id": 1,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 0,
"component_name": "Network",
"counter": 1,
"created": "2017-09-28T07:00:31.000Z",
"host_id": 1,
"hostname": "127.0.0.1",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: Host 127.0.0.1 is not responding to ping after 3 cycles, the host is most likely unreachable.",
"recommendation": "Restart failed host, check firewall.",
"reported": "2017-09-28T07:00:31.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Host is not responding",
"type": 10006,
"type_name": "HostUnreachable"
},
{
"alarm_id": 2,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 0,
"component_name": "Network",
"counter": 1,
"created": "2017-09-28T07:00:31.000Z",
"host_id": 2,
"hostname": "127.0.0.2",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.2 reports: Host 127.0.0.2 is not responding to ping after 3 cycles, the host is most likely unreachable.",
"recommendation": "Restart failed host, check firewall.",
"reported": "2017-09-28T07:00:31.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Host is not responding",
"type": 10006,
"type_name": "HostUnreachable"
},
{
"alarm_id": 3,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 7,
"component_name": "Host",
"counter": 1,
"created": "2017-09-28T07:00:31.000Z",
"host_id": 1,
"hostname": "127.0.0.1",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: 8.53 percent swap space is used. Swapping is decremental for database performance.\ntop - 09:00:30 up 7 days, 1:51, 1 user, load average: 1.82, 1.19, 1.04\nTasks: 507 total, 2 running, 501 sleeping, 0 stopped, 4 zombie\n%Cpu(s): 10.2 us, 3.6 sy, 0.0 ni, 84.8 id, 1.1 wa, 0.0 hi, 0.4 si, 0.0 st\nKiB Mem : 24605308 total, 540308 free, 6815976 used, 17249024 buff/cache\nKiB Swap: 8338428 total, 7626808 free, 711620 used. 16033516 avail Mem \n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 2042 mysql 20 0 5732288 1.446g 9888 S 100.0 6.2 33:12.80 mysqld\n17363 root 20 0 47172 7372 2648 R 93.8 0.0 12:08.85 systemd\n 6101 root 20 0 2470512 66128 17252 S 18.8 0.3 2:53.05 cmon\n 6509 kedz 20 0 14372 3392 2952 S 18.8 0.0 0:00.10 bash\n 2050 root 20 0 414736 94284 73144 S 6.2 0.4 128:48.67 Xorg\n 3990 kedz 20 0 1241996 163096 54408 S 6.2 0.7 104:06.60 budgie-wm\n 4879 kedz 20 0 435160 6316 4824 S 6.2 0.0 4:44.69 gvfsd-trash\n 7979 kedz 20 0 37780 3776 2936 R 6.2 0.0 0:00.01 top\n23911 kedz 20 0 2163064 165120 61804 S 6.2 0.7 21:27.24 vlc\n 1 root 20 0 206244 7800 4904 S 0.0 0.0 0:55.73 systemd\n 2 root 20 0 0 0 0 S 0.0 0.0 0:00.34 kthreadd\n 4 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:+\n 6 root 20 0 0 0 0 S 0.0 0.0 1:21.52 ksoftirqd/0\n 7 root 20 0 0 0 0 S 0.0 0.0 19:05.92 rcu_sched\n 8 root 20 0 0 0 0 S 0.0 0.0 0:00.08 rcu_bh\n 9 root rt 0 0 0 0 S 0.0 0.0 0:30.34 migration/0\n 10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 lru-add-dr+\n 11 root rt 0 0 0 0 S 0.0 0.0 0:01.73 watchdog/0\n 12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/0\n 13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/1\n 14 root rt 0 0 0 0 S 0.0 0.0 0:01.99 watchdog/1\n 15 root rt 0 0 0 0 S 0.0 0.0 0:29.35 migration/1\n 16 root 20 0 0 0 0 S 0.0 0.0 1:19.61 ksoftirqd/1\n 18 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/1:+\n 19 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/2\n 20 root rt 0 0 0 0 S 0.0 0.0 0:01.72 watchdog/2\n 21 root rt 0 0 0 0 S 0.0 0.0 0:27.23 migration/2\n 22 root 20 0 0 0 0 S 0.0 0.0 1:04.63 ksoftirqd/2\n 24 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/2:+\n 25 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/3\n 26 root rt 0 0 0 0 S 0.0 0.0 0:01.60 watchdog/3\n 27 root rt 0 0 0 0 S 0.0 0.0 0:22.55 migration/3\n 28 root 20 0 0 0 0 S 0.0 0.0 0:55.98 ksoftirqd/3\n 30 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/3:+\n 31 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/4\n 32 root rt 0 0 0 0 S 0.0 0.0 0:01.72 watchdog/4\n 33 root rt 0 0 0 0 S 0.0 0.0 0:29.07 migration/4\n 34 root 20 0 0 0 0 S 0.0 0.0 1:26.63 ksoftirqd/4\n 36 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/4:+\n 37 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/5\n 38 root rt 0 0 0 0 S 0.0 0.0 0:01.60 watchdog/5\n 39 root rt 0 0 0 0 S 0.0 0.0 0:27.87 migration/5\n 40 root 20 0 0 0 0 S 0.0 0.0 1:00.00 ksoftirqd/5\n 42 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/5:+\n 43 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/6\n 44 root rt 0 0",
"recommendation": "Increase RAM, tune swappiness, reduce memory footprint of mysqld, mongodb or other processes running on the node. You can also reboot the server as a temporary countermeasure.",
"reported": "2017-09-28T07:00:31.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "Host is swapping",
"type": 10000,
"type_name": "HostSwapping"
},
{
"alarm_id": 4,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 7,
"component_name": "Host",
"counter": 1,
"created": "2017-09-28T07:00:31.000Z",
"host_id": 1,
"hostname": "127.0.0.1",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: \n82.47 percent disk space has been used on sda2.\n\nFilesystem Size Used Avail Use% Mounted on\n/dev/sda1 1022M 1.4M 1021M 0% /boot/efi\n/dev/sda2 226G 187G 40G 82% /\n/dev/sdb2 929G 686G 243G 74% /mnt/data2\n/dev/sdc1 391G 384G 7.7G 98% /var/lib/schroot/mount/trusty-ec07a19b-46ad-4839-bbb7-94931fefed61\n/dev/sdd2 109G 30G 79G 27% /mnt/ssd128g\n/dev/sde1 3.6T 493G 3.1T 13% /mnt/4tb\n",
"recommendation": "The server is running low on free disk space. Remove old logfiles, backups etc, or increase the size of the disk volume. Disk full may cause the database to fail and recovery very difficult.",
"reported": "2017-09-28T07:00:31.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "Storage space alarm",
"type": 10002,
"type_name": "HostDiskUsage"
},
{
"alarm_id": 5,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 7,
"component_name": "Host",
"counter": 1,
"created": "2017-09-28T07:00:31.000Z",
"host_id": 2,
"hostname": "127.0.0.2",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.2 reports: 8.53 percent swap space is used. Swapping is decremental for database performance.\ntop - 09:00:31 up 7 days, 1:51, 1 user, load average: 1.82, 1.19, 1.04\nTasks: 507 total, 2 running, 501 sleeping, 0 stopped, 4 zombie\n%Cpu(s): 10.2 us, 3.6 sy, 0.0 ni, 84.8 id, 1.1 wa, 0.0 hi, 0.4 si, 0.0 st\nKiB Mem : 24605308 total, 538932 free, 6817212 used, 17249164 buff/cache\nKiB Swap: 8338428 total, 7626808 free, 711620 used. 16032432 avail Mem \n\n PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND\n 2042 mysql 20 0 5732288 1.446g 9888 S 100.0 6.2 33:13.08 mysqld\n17363 root 20 0 47172 7372 2648 R 93.8 0.0 12:09.14 systemd\n 6509 kedz 20 0 14372 3392 2952 S 18.8 0.0 0:00.15 bash\n 6101 root 20 0 2470512 66128 17252 S 12.5 0.3 2:53.09 cmon\n 2228 root -51 0 0 0 0 S 6.2 0.0 129:24.43 irq/30-nvi+\n 3990 kedz 20 0 1241996 163096 54408 S 6.2 0.7 104:06.61 budgie-wm\n 5948 kedz 20 0 1027104 28124 21636 S 6.2 0.1 0:00.24 ut_s9sclus+\n 1 root 20 0 206244 7800 4904 S 0.0 0.0 0:55.73 systemd\n 2 root 20 0 0 0 0 S 0.0 0.0 0:00.34 kthreadd\n 4 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:+\n 6 root 20 0 0 0 0 S 0.0 0.0 1:21.52 ksoftirqd/0\n 7 root 20 0 0 0 0 S 0.0 0.0 19:05.92 rcu_sched\n 8 root 20 0 0 0 0 S 0.0 0.0 0:00.08 rcu_bh\n 9 root rt 0 0 0 0 S 0.0 0.0 0:30.34 migration/0\n 10 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 lru-add-dr+\n 11 root rt 0 0 0 0 S 0.0 0.0 0:01.73 watchdog/0\n 12 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/0\n 13 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/1\n 14 root rt 0 0 0 0 S 0.0 0.0 0:01.99 watchdog/1\n 15 root rt 0 0 0 0 S 0.0 0.0 0:29.35 migration/1\n 16 root 20 0 0 0 0 S 0.0 0.0 1:19.61 ksoftirqd/1\n 18 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/1:+\n 19 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/2\n 20 root rt 0 0 0 0 S 0.0 0.0 0:01.72 watchdog/2\n 21 root rt 0 0 0 0 S 0.0 0.0 0:27.23 migration/2\n 22 root 20 0 0 0 0 S 0.0 0.0 1:04.63 ksoftirqd/2\n 24 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/2:+\n 25 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/3\n 26 root rt 0 0 0 0 S 0.0 0.0 0:01.60 watchdog/3\n 27 root rt 0 0 0 0 S 0.0 0.0 0:22.55 migration/3\n 28 root 20 0 0 0 0 S 0.0 0.0 0:55.98 ksoftirqd/3\n 30 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/3:+\n 31 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/4\n 32 root rt 0 0 0 0 S 0.0 0.0 0:01.72 watchdog/4\n 33 root rt 0 0 0 0 S 0.0 0.0 0:29.07 migration/4\n 34 root 20 0 0 0 0 S 0.0 0.0 1:26.63 ksoftirqd/4\n 36 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/4:+\n 37 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/5\n 38 root rt 0 0 0 0 S 0.0 0.0 0:01.60 watchdog/5\n 39 root rt 0 0 0 0 S 0.0 0.0 0:27.87 migration/5\n 40 root 20 0 0 0 0 S 0.0 0.0 1:00.00 ksoftirqd/5\n 42 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/5:+\n 43 root 20 0 0 0 0 S 0.0 0.0 0:00.00 cpuhp/6\n 44 root rt 0 0 0 0 S 0.0 0.0 0:01.55 watchdog/6\n 45 root rt 0 0 0 0 S 0.0 0.0 0:25.96 migration/6\n 46 ro",
"recommendation": "Increase RAM, tune swappiness, reduce memory footprint of mysqld, mongodb or other processes running on the node. You can also reboot the server as a temporary countermeasure.",
"reported": "2017-09-28T07:00:31.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "Host is swapping",
"type": 10000,
"type_name": "HostSwapping"
},
{
"alarm_id": 6,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 7,
"component_name": "Host",
"counter": 1,
"created": "2017-09-28T07:00:32.000Z",
"host_id": 2,
"hostname": "127.0.0.2",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.2 reports: \n82.47 percent disk space has been used on sda2.\n\nFilesystem Size Used Avail Use% Mounted on\n/dev/sda1 1022M 1.4M 1021M 0% /boot/efi\n/dev/sda2 226G 187G 40G 82% /\n/dev/sdb2 929G 686G 243G 74% /mnt/data2\n/dev/sdc1 391G 384G 7.7G 98% /var/lib/schroot/mount/trusty-ec07a19b-46ad-4839-bbb7-94931fefed61\n/dev/sdd2 109G 30G 79G 27% /mnt/ssd128g\n/dev/sde1 3.6T 493G 3.1T 13% /mnt/4tb\n",
"recommendation": "The server is running low on free disk space. Remove old logfiles, backups etc, or increase the size of the disk volume. Disk full may cause the database to fail and recovery very difficult.",
"reported": "2017-09-28T07:00:32.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "Storage space alarm",
"type": 10002,
"type_name": "HostDiskUsage"
},
{
"alarm_id": 7,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 8,
"component_name": "DbHealth",
"counter": 1,
"created": "2017-09-28T07:00:32.000Z",
"ignored": 0,
"measured": 0,
"message": "72 redundant indexes have been detected. We recommend that you drop the redundant indexes during a maintenance window. To find out which indexes please go to Performance -> Schema Analyzer -> Show Redundant Indexes in the UI.\nRead here how to perform schema changes in safe ways: \nhttp://www.severalnines.com/blog/online-schema-upgrade-mysql-galera-cluster-using-toi-method",
"recommendation": "Go to Performance -> Table Analyzer -> Show Redundant Indexes to find out which indexes.",
"reported": "2017-09-28T07:00:32.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "Redundant indexes detected",
"type": 4003,
"type_name": "MySqlIndexAnalyzer"
},
{
"alarm_id": 8,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 6,
"component_name": "Node",
"counter": 1,
"created": "2017-09-28T07:00:32.000Z",
"host_id": 1,
"hostname": "127.0.0.1",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: MySQL Server is not connected to data nodes.",
"recommendation": "Check firewall/security rules and ndb-connectstring.",
"reported": "2017-09-28T07:00:32.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "MySQL server is not connected to NDB",
"type": 5010,
"type_name": "MySqlClusterNotConnected"
},
{
"alarm_id": 9,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 6,
"component_name": "Node",
"counter": 2,
"created": "2017-09-28T07:00:41.000Z",
"ignored": 0,
"measured": 0,
"message": "The cmon lost contact to the management server(s).",
"recommendation": "Check the connection and/or star the management servers.",
"reported": "2017-09-28T07:00:41.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "The cmon lost contact to the management server(s)",
"type": 5005,
"type_name": "NdbMgmdFailure"
},
{
"alarm_id": 11,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 5,
"component_name": "ClusterRecovery",
"counter": 1,
"created": "2017-09-28T07:00:50.000Z",
"ignored": 0,
"measured": 0,
"message": "Galera node recovery failed. Permanent error.",
"recommendation": "Check mysql error.log file.",
"reported": "2017-09-28T07:00:50.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Galera node recovery failed",
"type": 3000,
"type_name": "GaleraNodeRecoveryFail"
} ],
"cc_timestamp": 1506582060,
"cluster_ids": [ 200, 201 ],
"requestStatus": "ok"
}

The documentation of the CmonAlarm Class contains the list of properties that are returned by the API.

The "getAlarm" Request

The "getAlarm" returns information about one alarm identified by the alarm ID.

Example:

{
"operation" : "getAlarm",
"alarm_id" : 1
}
{
"alarm":
{
"alarm_id": 1,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 0,
"component_name": "Network",
"counter": 1,
"created": "2017-09-28T07:00:31.000Z",
"host_id": 1,
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: Host 127.0.0.1 is not responding to ping after 3 cycles, the host is most likely unreachable.",
"recommendation": "Restart failed host, check firewall.",
"reported": "2017-09-28T07:00:31.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Host is not responding",
"type": 10006,
"type_name": "HostUnreachable"
},
"cc_timestamp": 1506582060,
"requestStatus": "ok"
}

The "ignoreAlarm" Request

The "ignoreAlarm" RPC call will set the alarm to be ignored. The alarm is identifyed by the alarm ID.

Example:

{
"operation" : "ignoreAlarm",
"alarm_id" : 1,
"ignore" : true
}
{
"alarm":
{
"alarm_id": 1,
"class_name": "CmonAlarm",
"cluster_id": 200,
"component": 0,
"component_name": "Network",
"counter": 1,
"created": "2017-09-28T07:01:00.000Z",
"host_id": 1,
"ignored": 1,
"measured": 0,
"message": "Server 127.0.0.1 reports: Host 127.0.0.1 is not responding to ping after 3 cycles, the host is most likely unreachable.",
"recommendation": "Restart failed host, check firewall.",
"reported": "2017-09-28T07:01:00.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Host is not responding",
"type": 10006,
"type_name": "HostUnreachable"
},
"cc_timestamp": 1506582060,
"requestStatus": "ok"
}

The Stats API

/${CLUSTERID}/stat

The "setHost" Request

The "setHost" call is a newer and better version of the "setHostAlias" call and so it renders that deprecated. The "setHost" can be used to set multiple properties of a host at once.

The CmonHost class and the classes inherited from it have a number of properties, a few of which are writable from outside (e.g. by the UI). So not all of the properties can be changed using this call, only those that are marked as publicly writable in the CmonHost class documentation. If a "setHost" request references a property that is not writable from outside, the whole request will be rejected.

Here is an example for the "setHost" request:

{
"operation": "setHost",
"hostname": "127.0.0.1",
"port": 3306,
"properties":
{
"alias": "TestAlias001",
"description": "This is a test host..."
}
}

This will change two properties of the host. The new property values are stored in the Cmon Database and a host change event is emitted about the changes.
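
As a quick illustration, the request above can be posted like any other request in this chapter. The following minimal Python sketch is only an example: it assumes a controller listening on localhost:9500 and a hypothetical cluster ID 5, and a "token" field should be added to the payload if your cmon.cnf defines an rpc_key.

import json
import urllib.request

STAT_URL = "http://localhost:9500/5/stat"   # hypothetical controller address and cluster ID

def rpc_call(payload):
    # POST the JSON request body and decode the JSON reply.
    req = urllib.request.Request(
        STAT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

reply = rpc_call({
    "operation": "setHost",
    "hostname": "127.0.0.1",
    "port": 3306,
    "properties": {
        "alias": "TestAlias001",
        "description": "This is a test host...",
    },
})

# "ok" means the writable properties were accepted; any reference to a
# non-writable property causes the whole request to be rejected.
print(reply["requestStatus"])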

The "addFileMonitoring" request

File monitoring is a feature the UI can use to continuously monitor the changes of assorted files on a specific host. There are files that are monitored because the controller thinks they are important and the UI itself can request the monitoring of files if the user thinks they are important.

File monitoring utilizes the RPC and the event system. The monitoring can be requested through the RPC and the actual file changes are reported through the event system: the controller simply sends an event when something happens to the monitored files.

It is possible to monitor files that do not exist. In this case the controller will simply report a non-existing file status, and when the file is created the report will reflect that.

One important aspect is that there are two distinct types of file monitoring. Metadata monitoring reports when the file metadata changes: when the owner, the file size or the modification date changes, the event will report that.

The other monitoring option is content monitoring. Content monitoring reports the metadata changes, but it also monitors the file content. This is what we like to call the "tail -f" feature because it does exactly what the user would get by issuing the "tail -f" command on the host.

Content monitoring is costly: the controller has to fetch the file content every time the file changes and, in addition, it has to monitor the file more frequently so that the user gets a "real-time" view of the file. This is why content monitoring is only available on request and it also has an expiration time. When the UI requests content monitoring on a specific file it gets a UUID and that UUID has an expiration date. The request can be renewed using the UUID, but if the UI misses this opportunity the content monitoring will stop.

Multiple UI instances can have multiple UUIDs to monitor the same file. The content monitoring will expire when the last UUID expires.

Example:

{
"operation": "addFileMonitoring",
"hostname": "127.0.0.1",
"port": 3306,
"content_monitoring": true,
"full_path": "/var/log/mysql/mysqld.log"
}
{
"cc_timestamp": 1506582050,
"content_monitoring": true,
"full_path": "/var/log/mysql/mysqld.log",
"requestStatus": "ok",
"uuid": "012f88d8-7053-46c7-a3c2-e6f8d89ebbd7"
}

This is a typical request to start content monitoring. It may have the following fields:

  • operation
    For registering a file monitoring this field must be "addFileMonitoring".
  • hostname
    The name of the host on which the file can be found.
  • port
    Identifies the host on which the file can be found.
  • content_monitoring
    Currently only the content monitoring can be registered using this call, so this field must be true.
  • full_path
    The file monitoring can only be registered using the full path to identify the file, so this field must be a valid path, but the file does not need to exist at the time of the call.
  • uuid
    When the caller wants to renew an existing content monitoring it should send the UUID it received for the initial request. If there is no previously received UUID this field should not be sent, because UUIDs that do not exist will trigger a failure.

The reply should contain the following fields:

  • content_monitoring
    This field is true if the content monitoring was requested and the operation was successful. This field is true even if the file itself does not exist. Non-existing files can be monitored, events will be sent showing that the file does not exist and the content will be available once the file is created.
  • full_path
    The full path of the monitored file.
  • uuid
    The UUID that identifies the request itself. The same UUID is sent in the events, so the caller can see when the monitoring request is about to expire and can renew it in time (see the sketch after this list).
  • requestStatus
    The usual.
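
A minimal renewal loop could look like the sketch below. It is only an illustration: the controller address, cluster ID and the renewal interval are assumptions (the actual expiration time is reported in the "ends" field of the events), and a "token" field may be needed depending on the configuration.

import json
import time
import urllib.request

STAT_URL = "http://localhost:9500/5/stat"   # hypothetical controller address and cluster ID
RENEW_EVERY_SEC = 300                       # assumption: renew well before the UUID expires

def rpc_call(payload):
    req = urllib.request.Request(STAT_URL, data=json.dumps(payload).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

request = {
    "operation": "addFileMonitoring",
    "hostname": "127.0.0.1",
    "port": 3306,
    "content_monitoring": True,
    "full_path": "/var/log/mysql/mysqld.log",
}

# Initial registration: no "uuid" field, the controller assigns one.
reply = rpc_call(request)
uuid = reply["uuid"]

while True:
    time.sleep(RENEW_EVERY_SEC)
    # Renewal: resend the same request together with the previously received UUID.
    reply = rpc_call(dict(request, uuid=uuid))
    if reply["requestStatus"] != "ok":
        break   # the UUID expired or is unknown; start over with a fresh request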

Should an error occur, the reply will look like this:

{
"operation": "addFileMonitoring",
"hostname": "127.0.0.1",
"port": 3306,
"content_monitoring": true,
"full_path": "/var/log/mysql/mysqld.log",
"uuid": "57c66973-51ff-4aec-29cd-baabf2fbe346"
}
{
"cc_timestamp": 1506582050,
"errorString": "UUID '57c66973-51ff-4aec-29cd-baabf2fbe346' not found.",
"requestStatus": "UnknownError"
}

And finally here is an example showing a fragment of an event that actually delivers the result of the content monitoring. Please note that this is only a fragment of a more complex event showing only information about one monitored file:

...
{
"access": 432,
"changed": "2016-03-08T09:59:57.+0100Z",
"class_name": "CmonFile",
"content":
{
"as_string": "lugin 'INNODB_BUFFER_PAGE_LRU'\r\n...",
"end_index": 50878,
"start_index": 40517
},
"content_monitoring": [
{
"active": true,
"ends": "2016-03-08T10:05:47.+0100Z",
"uuid": "c4cb0c6b-12d1-3f2a-ffac-fab5cbd60322"
} ],
"content_monitoring_active": true,
"exists": true,
"full_path": "/var/log/mysql/mysqld.log",
"group": "mysql",
"hard_links": 1,
"host_name": "10.10.2.3",
"modified": "2016-03-08T09:59:57.+0100Z",
"size": 50878,
"used": "2016-03-08T09:59:07.+0100Z",
"user": "mysql"
}
...

Here is the list of fields related to content monitoring:

  • content/start_index
    This event holds a fragment of a file, the same way the "tail -f" would print parts of the file as lines are added to the end. The "start_index" shows where this fragment starts in the file.
  • content/end_index
    Shows where the content fragment ends.
  • content/as_string
    The actual file content fragment as a string. In our example the string is truncated, the actual event held a longer string here.
  • content_monitoring
    A list that shows all the requests the controller has for monitoring the content of this file.
  • content_monitoring/uuid
    The content monitoring can be renewed using this UUID. It is of course the same UUID the caller received when initiating the monitoring.
  • content_monitoring/active
    Shows that the content monitoring request is active.
  • content_monitoring/ends
    Shows the expiration date and time for the content monitoring.
  • content_monitoring_active
    This is a convenience value; it shows whether at least one of the content monitoring requests is active.
  • size
    The size of the file.

The "getFileMonitoring" request

Example:

{
"operation": "getFileMonitoring",
"hostname": "127.0.0.1",
"port": 3306
}
{
"cc_timestamp": 1506582050,
"monitored_files": [
{
"access": 511,
"changed": "2016-04-22T06:09:08.000Z",
"class_name": "CmonFile",
"exists": true,
"full_path": "/etc/mysql/my.cnf",
"group": "root",
"hard_links": 1,
"host_name": "127.0.0.1",
"modified": "2016-04-22T06:09:08.000Z",
"size": 24,
"used": "2016-04-22T06:09:08.000Z",
"user": "root"
},
{
"class_name": "CmonFile",
"exists": false,
"full_path": "/var/lib/mysql/stderr",
"host_name": "127.0.0.1"
},
{
"class_name": "CmonFile",
"content_monitoring": [
{
"active": true,
"ends": "2017-09-28T07:10:50.000Z",
"uuid": "012f88d8-7053-46c7-a3c2-e6f8d89ebbd7"
} ],
"content_monitoring_active": true,
"exists": false,
"full_path": "/var/log/mysql/mysqld.log",
"host_name": "127.0.0.1"
} ],
"requestStatus": "ok",
"total": 3
}

The "getMetaTypeInfo" request

So you received some reply, event or result from the backend; it contains some structures that have the well-known "class_name" set to some string. The structure has a number of properties and you want to know more about those properties.

Some properties are obvious and some are well known, for example the hosts have host names and that is of course obvious, but what about the more complicated, less frequently used properties? Should your code have hard-wired interpreters that understand how to visualize properties nobody knows? Of course not; this is what the "getMetaTypeInfo" request is for (a client-side sketch follows the field list below).

An example request that fetches all the CmonDiskInfo properties:

1 curl -XPOST -d '{"operation": "getMetaTypeInfo", "type-name": "CmonDiskInfo"}' 'http://localhost:9500/10/stat'

Example:

{
"operation": "getMetaTypeInfo",
"type-name": "CmonDiskInfo",
"property-name": "capacity, temperature-celsius"
}
{
"cc_timestamp": 1506582050,
"metatype_info": [
{
"class_name": "CmonParamSpec",
"default_value": 0,
"description": "Disk size reported by the disk in bytes.",
"is_counter": false,
"is_public": true,
"is_writable": false,
"owner_type_name": "CmonDiskInfo",
"property_name": "capacity",
"short_ui_string": "Capacity",
"type_name": "Ulonglong",
"unit": "byte"
},
{
"class_name": "CmonParamSpec",
"default_value": 0,
"description": "The internal temperature of the disk.",
"is_counter": false,
"is_public": true,
"is_writable": false,
"owner_type_name": "CmonDiskInfo",
"property_name": "temperature-celsius",
"short_ui_string": "Temperature",
"type_name": "Int",
"unit": "℃ "
} ],
"requestStatus": "ok"
}

Here is a comprehensive list of the fields of the reply message. It is important to note that not all the information about all the properties and metatypes is registered, so some fields might be missing or hold the wrong value right now.

  • class_name
    Well, this is the class name of the structure that holds the metadata about a property. One CmonParamSpec object holds information about one property of one class, and of course one class can have many properties.
  • owner_type_name
    The type name of the owner of the property. Properties are inherited from parent classes, so the owner here might be different from the type name that was sent in the getMetaTypeInfo request; this just means the property is inherited.
  • property_name
    The name of the property. In the request this can be a single string to get information about one property, a string containing one or more property names separated by semicolons (e.g. "capacity; temperature-celsius") to get information about more than one property, or it can be omitted completely to get information about all the properties the given class has.
  • type_name
    The type name of the property.
  • default_value
    The default value of the property. If a property has its default value the backend might choose not to send it in the JSon messages; the events for example already have this filtering implemented.
  • is_public
    If this is true the property is visible to external processes like the UI itself. Non-public properties are kept inside the controller because nobody is interested in seeing them.
  • is_writable
    If this is true the property can be changed from outside the controller, e.g. from the UI. The hosts for example have an "alias name" the user can change, so it is writable by the UI through RPC calls.
  • is_counter
    Oh, yes, counters. We call a property a counter if the actual information is held by the difference of two values. Counters start from 0 when the computer boots up and they can only be incremented, so their value is always the same or bigger than before, and of course nobody is interested in their actual value. The interesting thing is how much they grew since the last time we checked them.
  • description
    A human readable string describing the property in one or a few sentences.
  • short_ui_string
    A human readable string describing the property in one or a few words. So the UI code doesn't need to know what the property is; it just knows that if this string is shown, the user will understand what it means.
  • unit
    This is obvious, numerical values can have units.
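
To illustrate how a client might consume this metadata, here is a small Python sketch that fetches every CmonDiskInfo property spec and builds a lookup table for display purposes. The endpoint and cluster ID are placeholders, and the reply fields used are the ones shown in the example above.

import json
import urllib.request

STAT_URL = "http://localhost:9500/10/stat"   # hypothetical controller address and cluster ID

# Omitting "property-name" returns every property of the class.
req = urllib.request.Request(
    STAT_URL,
    data=json.dumps({"operation": "getMetaTypeInfo", "type-name": "CmonDiskInfo"}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read().decode("utf-8"))

# Build a property_name -> (label, unit, is_counter) map the display code can use
# without knowing anything about the individual properties.
display = {
    spec["property_name"]: (spec.get("short_ui_string", spec["property_name"]),
                            spec.get("unit", ""),
                            spec.get("is_counter", False))
    for spec in reply.get("metatype_info", [])
}

label, unit, is_counter = display["capacity"]
print("%s [%s]%s" % (label, unit, " (counter)" if is_counter else ""))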

setHostAlias

Sets a user-defined custom alias name for a host instance.

Well this is not really a statistics request, but maybe it is better to have this request close to the getHosts RPC call.

Arguments:

  • hostname: the hostname field to identify the host
  • port: a portnumber to identify the host
  • alias: the new 'alias' for the hostname

Example request and response:

1 curl -X POST -d '{"operation": "setHostAlias", "hostname": "192.168.33.116", "port": 3306, "alias": "My favorite SQL server 1." }' http://localhost:9500/5/stat
{
"cc_timestamp": 1444919572,
"requestStatus": "ok"
}

And verify the result:

1 curl -X POST -d '{"operation": "getHosts", "fields": "hostname,port,alias" }' http://localhost:9500/5/stat
{
"cc_timestamp": 1444919650,
"data": [
{
"hostname": "192.168.33.115",
"port": 3306
},
{
"hostname": "192.168.33.1",
"port": -1
},
{
"alias": "My favorite SQL server 1.",
"hostname": "192.168.33.116",
"port": 3306
} ],
"requestStatus": "ok",
"total": 3
}

getHosts

Returns the host details in the cluster. See the Hosts page for a more detailed description of the returned fields.

1 $ curl -X POST -d '{"operation": "getHosts" }' http://localhost:9500/14/stat
{
"data": [
{
"clusterid": 14,
"configfile": "/etc/postgresql/9.4/main/postgresql.conf",
"connected": true,
"distributioncodename": "utopic",
"distributionname": "Ubuntu",
"distributionrelease": "14.10",
"hostId": 1,
"hostname": "192.168.33.1",
"ip": "192.168.33.1",
"pingstatus": -1,
"port": 5432,
"role": "postgres",
"version": "9.4beta3"
},
{
"clusterid": 14,
"configfile": "/etc/cmon.d/cmon_14.cnf",
"connected": true,
"distributioncodename": "utopic",
"distributionname": "Ubuntu",
"distributionrelease": "14.10",
"hostId": 2,
"hostname": "10.10.10.13",
"ip": "10.10.10.13",
"pingstatus": -1,
"port": -1,
"role": "controller",
"version": "1.2.9"
} ],
"requestStatus": "ok",
"total": 2
}

It is also possible to request only the needed/used fields by specifying a filter; please note that the field names are case sensitive.

1 curl 'http://localhost:9500/14/stat?operation=getHosts&fields=hostId,hostname,role'
2 # or by posting the params in JSon format
3 curl -X POST -d '{"operation": "getHosts", "fields": "hostId,hostname,role" }' http://localhost:9500/14/stat
{
"data": [
{
"hostId": 1,
"hostname": "192.168.33.1",
"role": "postgres"
},
{
"hostId": 2,
"hostname": "10.10.10.13",
"role": "controller"
} ],
"requestStatus": "ok",
"total": 2
}

cpuInfo

Returns the CPU information per host; it is possible to filter the results by specifying the hostId parameter.

The "cpuInfo" call is deprecated, please consider using the "getcpuphysicalinfo" call instead.

1 $ curl 'http://localhost:9500/12/stat?operation=cpuinfo&hostId=1'
2 # or
3 $ curl -X POST -d '{"operation": "cpuinfo", "hostId": 1}' 'http://localhost:9500/12/stat'
{
"data": [
{
"cpucores": 4,
"cpumaxmhz": 3400,
"cpumhz": 2200,
"cpumodel": "Intel(R) Core(TM) i3-4130 CPU @ 3.40GHz",
"cputemp": 63,
"hostid": 1
} ],
"requestStatus": "ok",
"total": 1
}

The "getcpuphysicalinfo" request

The "getcpuphysicalinfo" obsoletes the old "getcpuinfo" request. The most inportant difference is that the new "getcpuphysicalinfo" request is able to handle more than one physical CPU in one host, so the caller has to be ready to process such replies.

The "getcpuphysicalinfo" returns information about the physical CPU devices found in the hosts. If the request is processed before the CPU information becomes available (before the CPU stat collector has a chance to get the data from the remote host) the reply will indicate an error ("requestStatus" will be "TryAgain"). If the data already available the reply will enumerate all the physical CPU devices on all the requested.

Example:

{
"operation": "getcpuphysicalinfo",
"hostId": 1
}
{
"cc_timestamp": 1484036217,
"data": [
{
"class_name": "CmonCpuInfo",
"cpucores": 4,
"cpumaxmhz": 4.4e+06,
"cpumhz": 4233.15,
"cpumodel": "Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz",
"cputemp": 68,
"hostid": 1,
"physical_cpu_id": 0,
"siblings": 8,
"vendor": "GenuineIntel"
} ],
"requestStatus": "ok",
"total": 1
}

The reply provides the following information about the CPU devices:

  • class_name
    This is always CmonCpuInfo.
  • physical_cpu_id
    The unique ID of this physical CPU.
  • cpucores
    Shows how many cpu cores the CPU has.
  • siblings
    Indicates how many virtual CPUs are provided by this physical CPU. If for example the CPU is an "Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz" it has 6 cores and 2x hyperthreading, so the number of siblings is 12. If a host has two of these CPUs we should show 24 CPUs in the UI (see the sketch after this list).
  • cpumaxmhz
    The maximum clock frequency measured in MegaHertz.
  • cpumhz
    The current CPU frequency measured in MegaHertz.
  • cpumodel
    The name of the model.
  • vendor
    The name of the vendor.
  • cputemp
    The temperature of the CPU measured in Celsius.
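
Because a single host can report several physical CPU packages, a client typically sums the siblings values to get the number of virtual CPUs to display. A minimal sketch, assuming the controller address and a hypothetical cluster ID:

import json
import urllib.request

STAT_URL = "http://localhost:9500/12/stat"   # hypothetical controller address and cluster ID

req = urllib.request.Request(
    STAT_URL,
    data=json.dumps({"operation": "getcpuphysicalinfo", "hostId": 1}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.loads(resp.read().decode("utf-8"))

if reply["requestStatus"] == "TryAgain":
    print("CPU information is not collected yet, retry later.")
else:
    # One entry per physical CPU package; siblings is the number of virtual
    # CPUs each package provides, so their sum is what the UI should show.
    print("virtual CPUs:", sum(cpu["siblings"] for cpu in reply["data"]))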

getStatInfo

A method to fetch the possible keys of each statistics object (and whether each key is a speed/gauge counter or an absolute value).

1 curl 'http://localhost:9500/17/stat?token=6cBC1Us5v4NsmilJ&operation=statinfo&class_name=CmonSqStats'
{
"cc_timestamp": 1511334486,
"data":
{
"ABORTED_CLIENTS": "GaugeCounter",
"ABORTED_CONNECTS": "GaugeCounter",
"ARCHIVED_COUNT": "GaugeCounter",
"BUFFERS_ALLOC": "GaugeCounter",
"BUFFERS_BACKEND": "GaugeCounter",
"BUFFERS_BACKEND_FSYNC": "GaugeCounter",
"BUFFERS_CHECKPOINT": "GaugeCounter",
"BUFFERS_CLEAN": "GaugeCounter",
"BYTES_RECEIVED": "GaugeCounter",
"BYTES_SENT": "GaugeCounter",
"CHECKPOINTS_REQ": "GaugeCounter",
"CHECKPOINTS_TIMED": "GaugeCounter",
/* ... minified ... */
"rows-fetched": "GaugeCounter",
"rows-inserted": "GaugeCounter",
"rows-updated": "GaugeCounter"
},
"requestStatus": "ok"
}

getStatByName

Fetches all the stats by name; it is also possible to filter the results by hostId.

NEW arguments in 1.4.2:

  • calculate_per_sec: (boolean) re-calculates the speed counters to a /sec value (default: false). This does the value=(value*1000.0/interval) calculation on the backend side (see the sketch after this list for the equivalent client-side calculation).
  • stat_info: (boolean) includes info about all possible stat properties, whether each is a speed (GaugeCounter) or absolute (Counter) value (default: false)
  • compact_format: (boolean) an experimental new feature to save bandwidth and CPU on both the backend and UI side, see an example a bit later; in 'data' a map will be returned where the key is the created timestamp * 1000, and the value is the array of values (see the 'keys' field for the order and column names)
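
If calculate_per_sec is not requested, the same conversion the backend performs can be applied on the client side. The sketch below only illustrates the documented value*1000.0/interval formula; the sample values are made up.

def per_sec(sample, counter_keys):
    # The same conversion the backend does for calculate_per_sec:
    # value * 1000.0 / interval, where "interval" is the sample length in ms.
    result = dict(sample)
    interval_ms = sample["interval"]
    for key in counter_keys:
        if key in result and interval_ms:
            result[key] = result[key] * 1000.0 / interval_ms
    return result

# Made-up sqlstat fragment: COM_SELECT is a GaugeCounter and gets converted,
# THREADS_CONNECTED is an absolute value and is left as it is.
sample = {"interval": 30000, "COM_SELECT": 45, "THREADS_CONNECTED": 3}
print(per_sec(sample, ["COM_SELECT"]))   # COM_SELECT becomes 1.5 per second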

Possible stat name keys are the ones used in the examples below, e.g. netstat, memorystat, diskstat, cpustat, sqlstat, dbstat, tcpStat, ndbstat and haproxystat.

Supported 'row' filters:

  • by hostid by specifying 'hostid' or 'hostId': it can list multiple hosts (comma separated)
  • by samplekey (useful for CmonHaProxyStats filtration)
  • by specifying an interval (defaults to from one-day-ago to now), using 'startdate' and/or 'enddate' fields (UNIX timestamps).
  • use 'returnfrom' if you want only the last few records with the same scale of ('startdate'-'enddate' interval).
  • the network stats could be filtered by setting 'interface' name in the request.
  • the disk stats could be filtered by setting 'device' name in the request.
  • the cpu stats can be filtered by sending 'cpuid' or 'coreid'
  • you can specify a count 'limit' if you need only the last few results.

And supported 'column' filters:

  • 'fields': it is possible to request only the needed/used fields by specifying a record field filter: 'fields'; please note that the field names are case sensitive and should be separated by commas.

It is possible to aggregate the host values in the results (in this case no hostid should be specified, as all host values will be aggregated) by specifying the 'aggrhosts' field. 'aggrhosts' only supports the following aggregation: 'sum'. The backend groups the data together by hostIds; for example if you have 3 hosts: {(host1, host2, host3), (host1, host2, host3), ...} then the aggregation (sum) is done on the groups.

1 # an example
2 curl -X POST -H"Content-Type: application/json" -d '{"operation": "getStatByName",
3  "name": "memorystat", "hostId": 2, "startdate": 149642776, "limit": 1}' \
4  http://localhost:9500/12/stat
5 
6 # it is also possible to use GET request
7 curl 'http://localhost:9500/12/stat?operation=getStatByName&name=memorystat&hostId=2&startdate=149642776&limit=1'
8 
9 # an example with aggregation:
10 curl 'http://localhost:9500/2/stat?operation=getStatByName&name=sqlstat&fields=hostid,COM_SELECT,THREADS_CONNECTED&aggrhosts=sum'
11 
12 # and example with field filtering:
13 $ curl 'http://localhost:9500/14/stat?operation=getStatByName&name=sqlstat&fields=hostid,iceated,interval,sampleends,commits&startdate=1421326457'
14 {
15  "data": [
16  {
17  "commits": 11,
18  "hostid": 1,
19  "interval": 14984,
20  "sampleends": 1421326482
21  } ],
22  "requestStatus": "ok",
23  "total": 1
24 }

An example with 'compact_format' enabled; note that the key here is the timestamp*1000 as requested by the UI.

1 $ curl -X POST -d '{"token":"W4rVBtj8uMRO7sYe", "operation": "getStatByName", "name": "sqlstat", "calculate_per_sec": true, "compact_format":true, "fields":"sampleends,THREADS_CONNECTED,COM_SELECT,COM_INSERT,COM_UPDATE,COM_DELETE,QUERIES,interval,created" }' http://localhost:9500/152/stat
{
"cc_timestamp": 1494338638,
"data":
{
"1494332892000": [ 1494332953, 3, 1.4242, 0, 0, 0, 26.8009 ],
"1494332954000": [ 1494333015, 3, 1.4032, 0, 0, 0, 26.6609 ],
"1494333016000": [ 1494333077, 3, 1.38721, 0, 0, 0, 25.9376 ],
/* ..... */
"1494338128000": [ 1494338189, 3, 1.48389, 0, 0, 0, 28.0488 ],
"1494338190000": [ 1494338251, 3, 1.38698, 0, 0, 0, 26.2076 ],
"1494338252000": [ 1494338313, 3, 1.38703, 0, 0, 0, 26.1439 ],
"1494338499000": [ 1494338560, 3, 1.42184, 0, 0, 0, 26.967 ],
"1494338561000": [ 1494338622, 3, 1.40314, 0, 0, 0, 26.0951 ],
"1494338623000": [ 1494338636, 3, 1.35714, 0, 0, 0, 27.1429 ]
},
"keys": [ "sampleends", "THREADS_CONNECTED", "COM_SELECT", "COM_INSERT", "COM_UPDATE", "COM_DELETE", "QUERIES" ],
"requestStatus": "ok",
"total": 86
}
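
A client that prefers working with ordinary records can expand the compact map again using the 'keys' field. A minimal sketch, assuming the map keys are the created timestamps multiplied by 1000, as noted above:

def expand_compact(reply):
    # The compact format returns "data" as { str(created * 1000): [ values... ] }
    # plus a "keys" list giving the column order; rebuild ordinary row dicts.
    keys = reply["keys"]
    rows = []
    for ts_ms, values in sorted(reply["data"].items(), key=lambda kv: int(kv[0])):
        row = dict(zip(keys, values))
        row["created"] = int(ts_ms) // 1000
        rows.append(row)
    return rows

# With the example reply above, the first row becomes:
# {'sampleends': 1494332953, 'THREADS_CONNECTED': 3, 'COM_SELECT': 1.4242,
#  ..., 'QUERIES': 26.8009, 'created': 1494332892}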

A few example requests and replies for the statByName operation (the returned stat lists are truncated to keep this text as short as possible, so only one sample stat is shown in the outputs).

1 $ curl -X POST -d '{"operation": "statByName", "name": "invalidstat"}' 'http://localhost:9500/12/stat'
{
"errorString": "Invalid name field, possible values are: netstat,memorystat,diskstat,cpustat,sqlstat","dbstat",
"requestStatus": "error"
}
1 $ curl -X POST -d '{"operation": "statByName", "name": "netstat"}' 'http://localhost:9500/12/stat'
{
"requestStatus": "ok",
"data": [
{
"created": 1410429060,
"hostid": 2,
"interface": "eth1",
"rxBytes": 227928539,
"rxPackets": 2535849,
"rxSpeed": 1488.6,
"sampleends": 1410429673,
"txBytes": 1111575613,
"txPackets": 4331919,
"txSpeed": 11740.4
} ]
}

Memory statistics

1 $ curl -X POST -d '{"operation": "statByName", "name": "memorystat", "hostId": 2}' 'http://localhost:9500/12/stat'
{
"requestStatus": "ok",
"data": [
{
"created": 1410429673,
"hostid": 2,
"memoryutilization": 0.073535,
"rambuffers": 83820544,
"ramcached": 374575104,
"ramfree": 752859136,
"ramfreemin": 752779264,
"ramtotal": 1307394048,
"sampleends": 1410429734,
"swapfree": 0,
"swaptotal": 0,
"swaputilization": 0
} ]
}

Disk statistics

1 $ curl "http://localhost:9500/180/stat?token=6T17RE8PBaSzOKbZ&operation=getStatByName&name=diskstat&hostid=122&returnfrom=$(date +%s --date='3 minutes ago')"
{
"cc_timestamp": 1504771248,
"data": [
{
"blocksize": 4096,
"created": 1504771018,
"device": "/dev/sda2",
"free": 43078811648,
"hostid": 122,
"interval": 118915,
"mountpoint": "/",
"reads": 16,
"readspersec": 0,
"sampleends": 1504771107,
"samplekey": "CmonDiskStats-122-/dev/sda2",
"sectorsread": 1472,
"sectorswritten": 96384,
"total": 243095224320,
"utilization": 0.0243065,
"writes": 2568,
"writespersec": 27
},
{
"blocksize": 4096,
"created": 1504771138,
"device": "/dev/sda2",
"free": 43078881280,
"hostid": 122,
"interval": 120914,
"mountpoint": "/",
"reads": 16,
"readspersec": 0,
"sampleends": 1504771228,
"samplekey": "CmonDiskStats-122-/dev/sda2",
"sectorsread": 240,
"sectorswritten": 74264,
"total": 243095224320,
"utilization": 0.0203032,
"writes": 2302,
"writespersec": 24
} ],
"requestStatus": "ok",
"total": 2
}

CPU statistics

1 $ curl -X POST -d '{"operation": "statByName", "name": "cpustat", "hostId": 2}' 'http://localhost:9500/12/stat'
{
"requestStatus": "ok",
"data": [
{
"busy": 0.0252534,
"cpumhz": 3383.43,
"cputemp": 0,
"created": 1410429734,
"hostid": 2,
"idle": 0.974747,
"iowait": 0,
"loadavg1": 0.05,
"loadavg15": 0.05,
"loadavg5": 0.08,
"sampleends": 1410429734,
"steal": 0,
"sys": 0.0167802,
"uptime": 191351,
"user": 0.00847317
} ]
}

SQL Server statistics

For properties see CmonSqlStats class documentation.

1 $ curl -X POST -d '{"operation": "getStatByName", "name": "sqlstat", "startdate":1417431406 }' http://localhost:9500/99/stat
2 # or an example using GET request:
3 $ curl 'http://localhost:9500/14/stat?operation=getStatByName&name=sqlstat&startdate=1417431406'
{
"data": [
{
"blocks-hit": 25152,
"blocks-read": 0,
"commits": 47,
"connections": 2,
"created": 1417431381,
"hostid": 1,
"interval": 30000,
"rollbacks": 0,
"rows-deleted": 0,
"rows-fetched": 496,
"rows-inserted": 0,
"rows-updated": 0,
"sampleends": 1417431406,
"samplekey": "SqlStats-1"
},
{
"blocks-hit": 12549,
"blocks-read": 0,
"commits": 21,
"connections": 2,
"created": 1417431411,
"hostid": 1,
"interval": 15000,
"rollbacks": 0,
"rows-deleted": 0,
"rows-fetched": 247,
"rows-inserted": 0,
"rows-updated": 0,
"sampleends": 1417431421,
"samplekey": "SqlStats-1"
} ],
"requestStatus": "ok",
"total": 2
}

Database statistics

This kind of statistics is currently only implemented for PostgreSQL, as this server provides these more detailed statistics on a per-database basis, therefore it is collected separately.

NOTE: cmon currently gets/stores this statistics data only about the current (postgres) database...

For properties see CmonDatabaseStats class documentation.

1 curl 'http://localhost:9500/14/stat?operation=getStatByName&name=dbstat&startdate=1417429497'
{
"data": [
{
"blocks-hit": 5097,
"blocks-read": 0,
"created": 1417429497,
"hostid": 1,
"idx-hit": 3,
"idx-read": 0,
"interval": 30000,
"sampleends": 1417429522,
"samplekey": "PgDbStats-1",
"tidx-hit": 0,
"tidx-read": 0,
"toast-hit": 0,
"toast-read": 0
},
{
"blocks-hit": 18689,
"blocks-read": 0,
"created": 1417429527,
"hostid": 1,
"idx-hit": 11,
"idx-read": 0,
"interval": 30000,
"sampleends": 1417429552,
"samplekey": "PgDbStats-1",
"tidx-hit": 0,
"tidx-read": 0,
"toast-hit": 0,
"toast-read": 0
} ],
"requestStatus": "ok",
"total": 2
}

TCP Network statistics

Here we collect various network TCP statistics, an example request and reply:

1 curl -X POST -d '{"operation": "statByName", "name": "tcpStat"}' 'http://localhost:9500/10/stat
{
"cc_timestamp": 1457447276,
"data": [
{
"created": 1457360880,
"hostid": 70,
"interval": 81113,
"received_bad_segments": 0,
"received_segments": 20519,
"retransmitted_segments": 0,
"sampleends": 1457360951,
"samplekey": "CmonTcpStats-70",
"sent_segments": 49884
},
{
"created": 1457360880,
"hostid": 71,
"interval": 81114,
"received_bad_segments": 0,
"received_segments": 150187,
"retransmitted_segments": 27,
"sampleends": 1457360951,
"samplekey": "CmonTcpStats-71",
"sent_segments": 67772
},
/* ... truncated ... */
{
"created": 1457447163,
"hostid": 72,
"interval": 79247,
"received_bad_segments": 0,
"received_segments": 19471,
"retransmitted_segments": 0,
"sampleends": 1457447231,
"samplekey": "CmonTcpStats-72",
"sent_segments": 46250
},
{
"created": 1457447183,
"hostid": 73,
"interval": 81061,
"received_bad_segments": 0,
"received_segments": 19611,
"retransmitted_segments": 0,
"sampleends": 1457447252,
"samplekey": "CmonTcpStats-73",
"sent_segments": 46249
} ],
"requestStatus": "ok",
"total": 4436
}

NDB node statistics

For properties see CmonNdbStats properties class documentation.

1 # curl -X POST -d '{"token":"yjQnm2fPlTjCJ9bn","operation": "statByName", "name": "ndbstat"}' 'http://localhost:9500/65/stat'
{
"cc_timestamp": 1469794066,
"data": [
{
"created": 1469791127,
"dm_total_bytes": 134217728,
"dm_used_bytes": 819200,
"hostid": 70,
"im_total_bytes": 22282240,
"im_used_bytes": 172032,
"interval": 0,
"sampleends": 1469791217,
"samplekey": "CmonNdbStats-70"
},
{
"created": 1469791127,
"dm_total_bytes": 134217728,
"dm_used_bytes": 819200,
"hostid": 71,
"im_total_bytes": 22282240,
"im_used_bytes": 172032,
"interval": 0,
"sampleends": 1469791217,
"samplekey": "CmonNdbStats-71"
},
// ..
{
"created": 1469791247,
"dm_total_bytes": 134217728,
"dm_used_bytes": 819200,
"hostid": 70,
"im_total_bytes": 22282240,
"im_used_bytes": 172032,
"interval": 0,
"sampleends": 1469791337,
"samplekey": "CmonNdbStats-70"
} ],
"requestStatus": "ok",
"total": 4436
}

HAProxy load-balancers statistics

For properties see CmonHaProxyStats properties class documentation.

An example request and reply

1  $ curl 'http://localhost:9500/162/stat?token=6ZMBqXvsLYXBAqGy&operation=getStatByName&name=haproxystat'
2 {
3  "cc_timestamp": 1502110271,
4  "data": [
5  {
6  "bin": 2716,
7  "bout": 29770,
8  "created": 1502109914,
9  "dreq": 0,
10  "dresp": 0,
11  "ereq": 0,
12  "hostid": 27,
13  "hrsp_1xx": 0,
14  "hrsp_2xx": 28,
15  "hrsp_3xx": 0,
16  "hrsp_4xx": 0,
17  "hrsp_5xx": 0,
18  "hrsp_other": 0,
19  "iid": 1,
20  "interval": 90499,
21  "pid": 1,
22  "pxname": "admin_page",
23  "rate": 0,
24  "rate_lim": 0,
25  "rate_max": 4,
26  "req_rate": 0,
27  "req_rate_max": 4,
28  "req_tot": 28,
29  "sampleends": 1502109990,
30  "samplekey": "CmonHaProxyStats-27-admin_page-FRONTEND",
31  "scur": 0,
32  "sid": 0,
33  "slim": 8192,
34  "smax": 1,
35  "status": "OPEN",
36  "stot": 28,
37  "svname": "FRONTEND",
38  "type": 0
39  },
40  {
41  "act": 0,
42  "bck": 0,
43  "bin": 2716,
44  "bout": 29770,
45  "chkdown": 0,
46  "cli_abrt": 0,
47  "created": 1502109914,
48  "downtime": 0,
49  "dreq": 0,
50  "dresp": 0,
51  "econ": 0,
52  "eresp": 0,
53  "hostid": 27,
54  "hrsp_1xx": 0,
55  "hrsp_2xx": 0,
56  "hrsp_3xx": 0,
57  "hrsp_4xx": 0,
58  "hrsp_5xx": 0,
59  "hrsp_other": 0,
60  "iid": 1,
61  "interval": 90498,
62  "lastchg": 271348,
63  "lbtot": 0,
64  "pid": 1,
65  "pxname": "admin_page",
66  "qcur": 0,
67  "qmax": 0,
68  "rate": 0,
69  "rate_max": 0,
70  "sampleends": 1502109990,
71  "samplekey": "CmonHaProxyStats-27-admin_page-BACKEND",
72  "scur": 0,
73  "sid": 0,
74  "slim": 8192,
75  "smax": 0,
76  "srv_abrt": 0,
77  "status": "UP",
78  "stot": 0,
79  "svname": "BACKEND",
80  "type": 1,
81  "weight": 0,
82  "wredis": 0,
83  "wretr": 0
84  },
85  {
86  "bin": 0,
87  "bout": 0,
88  "created": 1502109914,
89  "dreq": 0,
90  "dresp": 0,
91  "ereq": 0,
92  "hostid": 27,
93  "iid": 2,
94  "interval": 90499,
95  "pid": 1,
96  "pxname": "haproxy_10.0.3.31_3307",
97  "rate": 0,
98  "rate_lim": 0,
99  "rate_max": 0,
100  "req_rate": 0,
101  "req_rate_max": 0,
102  "req_tot": 0,
103  "sampleends": 1502109990,
104  "samplekey": "CmonHaProxyStats-27-haproxy_10.0.3.31_3307-FRONTEND",
105  "scur": 0,
106  "sid": 0,
107  "slim": 8192,
108  "smax": 0,
109  "status": "OPEN",
110  "stot": 0,
111  "svname": "FRONTEND",
112  "type": 0
113  }
114  /* some items are cut... */
115  ],
116  "requestStatus": "ok",
117  "total": 45
118  }

Last HAProxy sample

Description: with these calls it is possible to obtain the last RAW sample from cmon for each HAProxy instance; the hostId is required in the request (it is part of the name, see the syntax below).

Name syntax: haproxystat.HOSTID.lastsample

1 $ curl "http://localhost:9500/162/stat?token=6ZMBqXvLYXBAqGy&operation=getinfo&name=haproxystat.27.lastsample"
{
"cc_timestamp": 1502178222,
"data":
{
"contents": "# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,\r\nadmin_page,FRONTEND,,,0,2,8192,714,134575,2494175,0,0,0,,,,,OPEN,,,,,,,,,1,1,0,,,,0,0,0,13,,,,0,713,0,1,0,0,,0,13,714,,,\r\nadmin_page,BACKEND,0,0,0,0,8192,0,134575,2494175,0,0,,0,0,0,0,UP,0,0,0,,0,339868,0,,1,1,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,\r\nhaproxy_10.0.3.31_3307,FRONTEND,,,0,1,8192,122921,119107836,47413912,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,0,0,2,,,,,,,,,,,0,0,0,,,\r\nhaproxy_10.0.3.31_3307,10.0.3.31,0,0,0,1,64,122921,119107836,47413912,,0,,0,0,0,0,UP,100,1,0,0,0,339868,0,128,1,2,1,,122921,,2,0,,2,L7OK,200,12,,,,,,,0,,,,0,0,\r\nhaproxy_10.0.3.31_3307,BACKEND,0,0,0,1,8192,122921,119107836,47413912,0,0,,0,0,0,0,UP,100,1,0,,0,339868,0,,1,2,0,,122921,,1,0,,2,,,,,,,,,,,,,,0,0,\r\n\r\n",
"timestamp": "2017-08-08T07:48:45.457Z"
},
"requestStatus": "ok"
}
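
The "contents" field is the raw HAProxy CSV status page (the first line is the header and starts with "# "). A small parsing sketch, assuming the standard HAProxy CSV layout shown above:

import csv

def parse_haproxy_csv(contents):
    # "contents" is the raw HAProxy CSV status output; the first line is the
    # header and starts with "# ", the remaining lines are the proxy/server rows.
    lines = contents.lstrip("# ").splitlines()
    rows = csv.DictReader(lines)
    # Key each row by (pxname, svname), e.g. ("admin_page", "FRONTEND").
    return {(r["pxname"], r["svname"]): r for r in rows if r.get("pxname")}

# Usage with the reply shown above:
#   stats = parse_haproxy_csv(reply["data"]["contents"])
#   print(stats[("admin_page", "FRONTEND")]["stot"])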

The Prometheus RPC API entry point

Description: using GET requests to /$CLUSTERID/stat/prometheus/... cmon will forward the requests to an active Prometheus instance (or to the one optionally specified by the caller). This works when a cluster has agentless monitoring enabled.

Clustercontrol expects the Prometheus URL path without the /api/v1/ part.

Arguments:

  • monitor
    Optionally the caller can explicitly specify a Prometheus instance to be queried (hostname or hostname:port syntax works here).
  • Prometheus arguments:
    This method will forward the following arguments (in the GET request) to the Prometheus instance: query, time, timeout, start, end, step, match[]

NOTE: for Prometheus query language and for its functions please look at its official documentation: https://prometheus.io/docs/prometheus/latest/querying/basics/

Few example requests and replies:

Query up the available metrics:

1 curl 'http://127.0.0.1:9500/61/stat/prometheus/label/__name__/values?token=dx6hyGk3wOhK0dIG'
{
"cc_timestamp": 1517576509,
"data": [ "go_gc_duration_seconds", "go_gc_duration_seconds_count",
"go_gc_duration_seconds_sum", "go_goroutines", "go_info",
"go_memstats_alloc_bytes", "go_memstats_alloc_bytes_total",
"go_memstats_buck_hash_sys_bytes", "go_memstats_frees_total",
"go_memstats_gc_cpu_fraction", "go_memstats_gc_sys_bytes",
"go_memstats_heap_alloc_bytes", "go_memstats_heap_idle_bytes",
"go_memstats_heap_inuse_bytes", "go_memstats_heap_objects",
"go_memstats_heap_released_bytes", /* ... and lots of others .. */
"up" ],
"requestStatus": "ok",
"status": "success"
}

Get the exporters status

1 curl 'http://127.0.0.1:9500/61/stat/prometheus/query?query=\{__name__=%22up%22\}&token=dx6hyGk3wOhK0dIG'
{
"cc_timestamp": 1517576333,
"data":
{
"result": [
{
"metric":
{
"__name__": "up",
"clustercontrol": "1.5.2",
"instance": "10.0.3.119:9100",
"job": "node"
},
"value": [ 1.51758e+09, "0" ]
},
/* ... others ... */
],
"resultType": "vector"
},
"requestStatus": "ok",
"status": "success"
}

Query some (filesystem) stats using PromQL

1 curl 'http://127.0.0.1:9500/61/stat/prometheus/query_range?start=1517576531&end=1517577531&step=60&query=\{__name__=~%22(node_filesystem.*)%22,instance=%22127.0.0.1:9100%22,mountpoint=%22/%22\}&token=dx6hyGk3wOhK0dIG'
{
"cc_timestamp": 1517577592,
"data":
{
"result": [
{
"metric":
{
"__name__": "node_filesystem_avail",
"clustercontrol": "1.5.2",
"device": "/dev/sda2",
"fstype": "ext4",
"instance": "127.0.0.1:9100",
"job": "node",
"mountpoint": "/"
},
"values": [ [ 1517576531, "40473014272" ], [ 1517576591, "40472379392" ], [ 1517576651, "40471703552" ], [ 1517576711, "40471044096" ], [ 1517576771, "40470200320" ], [ 1517576831, "40469516288" ], [ 1517576891, "40468815872" ], [ 1517576951, "40468205568" ], [ 1517577011, "40467578880" ], [ 1517577071, "40466911232" ], [ 1517577131, "40466255872" ], [ 1517577191, "40473985024" ], [ 1517577251, "40473432064" ], [ 1517577311, "40472805376" ], [ 1517577371, "40472117248" ], [ 1517577431, "40471400448" ], [ 1517577491, "40470937600" ] ]
},
/* other metrics... */
],
"resultType": "matrix"
},
"requestStatus": "ok",
"status": "success"
}
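
For clients that build such proxied queries programmatically, URL-encoding the PromQL selector is usually the only tricky part. A minimal Python sketch, reusing the cluster ID and token from the examples above (both are placeholders for your own values):

import json
import urllib.parse
import urllib.request

BASE = "http://127.0.0.1:9500/61/stat/prometheus"   # hypothetical cluster ID
TOKEN = "dx6hyGk3wOhK0dIG"                          # your own rpc token, if configured

def prom_query(promql, **extra):
    # cmon forwards the query/time/timeout/start/end/step/match[] arguments
    # of the GET request to the active Prometheus instance.
    params = dict({"query": promql, "token": TOKEN}, **extra)
    url = BASE + "/query?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Which exporters are up right now?
reply = prom_query('{__name__="up"}')
for item in reply["data"]["result"]:
    print(item["metric"]["instance"], item["value"][1])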

Processes (database clients)

With this RPC API you can get the list of the currently running query processes on the database nodes. Possible parameter: hostId (to get only a specific host's queries).

NOTE: this API is implemented for PostgreSQL, MySQL (Galera, single, etc.) and MongoDB (so effectively for all cluster types).

1 curl http://localhost:9500/14/stat?operation=processes

For PostgreSQL, the reply looks like this:

{
"data": [
{
"appName": "",
"backendPid": 4615,
"backendStart": "2014-11-28 15:08:58.379023+01",
"client": "192.168.33.1:46734",
"databaseName": "postgres",
"hostId": 1,
"query": "SELECT datname, pid, usename, application_name, COALESCE(client_hostname, host(client_addr), 'localhost'), client_port, backend_start, xact_start, query_start, waiting, query, state FROM pg_stat_activity",
"queryStart": "2014-11-28 15:11:18.413027+01",
"state": "active",
"userName": "root",
"waiting": false,
"xactStart": "2014-11-28 15:11:18.413027+01"
} ],
"requestStatus": "ok",
"total": 1
}

An example for MySQL Replication:

1 curl 'http://localhost:9500/54/stat?operation=processes&token=0uJiPUDtRIrmvcVq'
{
"cc_timestamp": 1499685166,
"data": [
{
"client": "10.0.3.105:42814",
"databaseName": "",
"hostId": 127,
"info": "",
"pid": 31,
"query": "Binlog Dump GTID",
"queryStart": 188,
"reportTs": 1499685166,
"state": "Master has sent all binlog to slave; waiting for more updates",
"userName": "rpl_user"
},
{
"client": "",
"databaseName": "",
"hostId": 128,
"info": "",
"pid": 30,
"query": "Connect",
"queryStart": 188,
"reportTs": 1499685166,
"state": "Waiting for master to send event",
"userName": "system user"
},
{
"client": "",
"databaseName": "",
"hostId": 128,
"info": "",
"pid": 31,
"query": "Connect",
"queryStart": 180,
"reportTs": 1499685166,
"state": "Slave has read all relay log; waiting for more updates",
"userName": "system user"
} ],
"requestStatus": "ok",
"total": 3
}

And finally a MongoDB example:

1 curl 'http://localhost:9500/155/stat?operation=processes&token=aQo4BWVdlW7NqwWZ'
{
"cc_timestamp": 1499694695,
"data": [
{
"active": true,
"desc": "ReplBatcher",
"hostId": 130,
"hostname": "10.0.3.105",
"ns": "local.oplog.rs",
"op": "none",
"opid": 62,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.105:27017",
"secs_running": 2845,
"threadId": "139644933940992",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "WT RecordStoreThread: local.oplog.rs",
"hostId": 130,
"hostname": "10.0.3.105",
"ns": "local.oplog.rs",
"op": "none",
"opid": 58,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.105:27017",
"secs_running": 2846,
"threadId": "139645261256448",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 70795,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"client": "10.0.3.74:60288",
"connectionId": 23,
"desc": "conn23",
"hostId": 130,
"hostname": "10.0.3.105",
"ns": "local.oplog.rs",
"op": "getmore",
"opid": 17192,
"query":
{
"collection": "oplog.rs",
"getMore":
{
"$numberLong": "19653741566"
},
"lastKnownCommittedOpTime":
{
"t":
{
"$numberLong": "1"
},
"ts":
{
"$timestamp":
{
"i": 5,
"t": 1499691860
}
}
},
"maxTimeMS":
{
"$numberLong": "5000"
},
"term":
{
"$numberLong": "1"
}
},
"reportTs": 1499694693,
"reported_by": "10.0.3.105:27017",
"secs_running": 2,
"threadId": "139644916102912",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"client": "10.0.3.1:53612",
"connectionId": 41,
"desc": "conn41",
"hostId": 130,
"hostname": "10.0.3.105",
"ns": "admin.$cmd",
"op": "command",
"opid": 17217,
"query":
{
"currentOp": 1
},
"reportTs": 1499694693,
"reported_by": "10.0.3.105:27017",
"secs_running": 0,
"threadId": "139645509216000",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "rsSync",
"hostId": 130,
"hostname": "10.0.3.105",
"ns": "local.oplog.rs",
"op": "none",
"opid": 61,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.105:27017",
"secs_running": 2845,
"threadId": "139645244471040",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "SyncSourceFeedback",
"hostId": 130,
"hostname": "10.0.3.105",
"ns": "",
"op": "none",
"opid": 282,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.105:27017",
"threadId": "139645227685632",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "rsSync",
"hostId": 131,
"hostname": "10.0.3.41",
"ns": "local.replset.minvalid",
"op": "none",
"opid": 95,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.41:27017",
"secs_running": 2842,
"threadId": "140496365635328",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "rsBackgroundSync",
"hostId": 131,
"hostname": "10.0.3.41",
"ns": "local.replset.minvalid",
"op": "none",
"opid": 313,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.41:27017",
"secs_running": 2831,
"threadId": "140496357242624",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"client": "10.0.3.1:52144",
"connectionId": 33,
"desc": "conn33",
"hostId": 131,
"hostname": "10.0.3.41",
"ns": "admin.$cmd",
"op": "command",
"opid": 16252,
"query":
{
"currentOp": 1
},
"reportTs": 1499694693,
"reported_by": "10.0.3.41:27017",
"secs_running": 0,
"threadId": "140496612542208",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "SyncSourceFeedback",
"hostId": 131,
"hostname": "10.0.3.41",
"ns": "",
"op": "none",
"opid": 16247,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.41:27017",
"threadId": "140496348849920",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "WT RecordStoreThread: local.oplog.rs",
"hostId": 131,
"hostname": "10.0.3.41",
"ns": "local.oplog.rs",
"op": "none",
"opid": 87,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.41:27017",
"secs_running": 2842,
"threadId": "140496322619136",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 74693,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "ReplBatcher",
"hostId": 131,
"hostname": "10.0.3.41",
"ns": "local.oplog.rs",
"op": "none",
"opid": 96,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.41:27017",
"secs_running": 2842,
"threadId": "140496037267200",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"client": "10.0.3.1:34164",
"connectionId": 35,
"desc": "conn35",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "admin.$cmd",
"op": "command",
"opid": 17645,
"query":
{
"currentOp": 1
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"secs_running": 0,
"threadId": "140626511873792",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"client": "10.0.3.41:59786",
"connectionId": 18,
"desc": "conn18",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "local.oplog.rs",
"op": "getmore",
"opid": 17635,
"query":
{
"collection": "oplog.rs",
"getMore":
{
"$numberLong": "25473614514"
},
"lastKnownCommittedOpTime":
{
"t":
{
"$numberLong": "1"
},
"ts":
{
"$timestamp":
{
"i": 5,
"t": 1499691860
}
}
},
"maxTimeMS":
{
"$numberLong": "5000"
},
"term":
{
"$numberLong": "1"
}
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"secs_running": 0,
"threadId": "140625903023872",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "rsSync",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "local.replset.minvalid",
"op": "none",
"opid": 64,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"secs_running": 2843,
"threadId": "140626264962816",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "ReplBatcher",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "local.oplog.rs",
"op": "none",
"opid": 65,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"secs_running": 2843,
"threadId": "140625937647360",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "rsBackgroundSync",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "local.replset.minvalid",
"op": "none",
"opid": 262,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"secs_running": 2833,
"threadId": "140626256570112",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "SyncSourceFeedback",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "",
"op": "none",
"opid": 17638,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"threadId": "140626248177408",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "WT RecordStoreThread: local.oplog.rs",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "local.oplog.rs",
"op": "none",
"opid": 57,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"secs_running": 2843,
"threadId": "140626222999296",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 74864,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
} ],
"requestStatus": "ok",
"total": 19
}

Database server variables

With this RPC API you will get the server variables. Implemented for MySQL and its variants and for PostgreSQL.

Possible parameters:

  • hostId: to get the variables only for a specific host
  • variables: a comma separated list of the wanted variables (to save some bandwidth)

Here you can see few example requests and responses:

PostgreSQL:

1 curl 'http://localhost:9500/14/stat?operation=variables&variables=TimeZone,log_filename,wal_segment_size'
{
"cc_timestamp": 1442560211,
"data": [
{
"hostId": 2,
"hostname": "192.168.33.121",
"variables":
{
"TimeZone": "US/Eastern",
"log_filename": "postgresql-%a.log",
"wal_segment_size": "16MB"
}
},
{
"hostId": 3,
"hostname": "192.168.33.122",
"variables":
{
"TimeZone": "US/Eastern",
"log_filename": "postgresql-%a.log",
"wal_segment_size": "16MB"
}
} ],
"requestStatus": "ok",
"total": 2
}

MySQL/Galera:

1 curl 'http://localhost:9500/6/stat?operation=variables&variables=version,performance_schema,port,thread_pool_idle_timeout,thread_pool_max_threads,thread_pool_oversubscribe,thread_pool_size'
{
"cc_timestamp": 1442560350,
"data": [
{
"hostId": 1,
"hostname": "192.168.33.123",
"variables":
{
"performance_schema": "OFF",
"port": "3306",
"thread_pool_idle_timeout": "60",
"thread_pool_max_threads": "500",
"thread_pool_oversubscribe": "3",
"thread_pool_size": "1",
"version": "5.5.41-MariaDB"
}
} ],
"requestStatus": "ok",
"total": 1
}

Database server statuses

With this RPC API you will get the server statuses. This basically returns the latest mysql/mongo server statistics "snapshot".

NOTE: This method works for MySQL/PostgreSQL and MongoDB clusters.

Possible parameters:

  • hostId: to get the statuses only for a specific host
  • keys: a comma separated list of the wanted status keys (to save some bandwidth)

Here you can see few example requests and responses:

1 curl 'http://localhost:9500/6/stat?operation=getdbstatus&keys=COM_SELECT,COM_COMMIT,COM_DELETE,COM_FLUSH'
{
"cc_timestamp": 1442562250,
"data": [
{
"hostId": 1,
"hostname": "192.168.33.123",
"statuses":
{
"COM_COMMIT": "0",
"COM_DELETE": "0",
"COM_FLUSH": "0",
"COM_SELECT": "1997221"
}
} ],
"requestStatus": "ok",
"total": 1
}

Database Deadlock Log

With this RPC API you will get the deadlocked transactions. Implemented for MySQL and its variants and for PostgreSQL.

Possible parameters:

  • startdate
  • enddate
  • limit: can be included in any of the examples below. If startdate + enddate is omitted, limit defaults to 25 and up to 25 of the latest records will be sent back.

Here you can see few example requests and responses:

1 curl -XPOST -d '{"operation": "txdeadlock", "startdate":"1408044387", "enddate":"1458044387"}' 'http://localhos:9500/11/stat

or

1 curl -XPOST -d '{"operation": "txdeadlock", "startdate":"1408044387", "limit": "2"}' 'http://localhos:9500/11/stat

or

1 curl -XPOST -d '{"operation": "txdeadlock"}' 'http://localhos:9500/11/stat

Example output

{
"data": [ {
"blocked_by_trx_id": "none",
"db": "sbtest",
"duration": "55",
"host": "localhost",
"hostId": 1439,
"info": "SELECT /*!40001 SQL_NO_CACHE */ * FROM `sbtest1`",
"innodb_status": "a lot of data",
"innodb_trx_id": "7371225",
"instance": "10.10.10.10:3306",
"internal_trx_id": "7371225",
"lastseen": "2016-03-09 13:57:36",
"message": "NULL",
"mysql_trx_id": "261",
"sql": "SELECT /*!40001 SQL_NO_CACHE */ * FROM `sbtest1`",
"state": "RUNNING",
"user": "backupuser"
}, ... ],
"requestStatus": "ok",
"total": 25
}

Get database storage (growth) info over time

With this RPC API you will get the database meta info (size). Implemented for MySQL and its variants and for PostgreSQL.

Possible parameters:

  • startdate
  • enddate
  • limit: can be included in any of the examples below. If startdate + enddate is omitted, limit defaults to 31 and up to 31 of the latest records will be sent back.

Here you can see few example requests and responses:

1 curl -X POST -d '{"token":"igIAHI3cALXIuOvM", "operation": "getdbgrowth"}' http://localhost:9500/2/stat

or

1 curl -X POST -d '{"token":"igIAHI3cALXIuOvM", "operation": "getdbgrowth", "startdate":"1408044387", "enddate":"1518044387"}' http://localhost:9500/2/stat

Old PHP CMONAPI (used by web UI) WEB_UI compatible APIs

The request is a simple GET request, in the following syntax:

1 http://${IP_OF_CMON_HOST}:9500/${CLUSTERID}/stat/ram_history.json?hostId=2&startdate=1410441439

The request should also contain 'token=RPC_TOKEN' if the cmon.cnf contains an 'rpc_key'.

The following paths are implemented in the web-ui compatible way:

  • /clusterid/stat/ram_history.json
  • /clusterid/stat/network_history.json
  • /clusterid/stat/cpu_history.json
  • /clusterid/stat/disk_history.json

Currently only the following filtering options are available for the results:

  • startdate (unix timestamp), defaults to now() - one day
  • enddate
  • hostId (please note that all parameters are case sensitive)
  • interface (applies for network_history.json)

Some example requests & replies (... means some items were removed from the results for better readability):

1 $ curl 'http://localhost:9500/12/stat/ram_history.json?hostId=2&startdate=1410443539'
{
"data": [
{
"hostid": 2,
"ram_free": 755666944,
"ram_total": 1307394048,
"ram_used": 551727104,
"report_ts": 1410443507,
"swap_free": 0,
"swap_total": 0,
"swap_used": 0
},
/* .. truncated .. */
{
"hostid": 2,
"ram_free": 749642752,
"ram_total": 1307394048,
"ram_used": 557751296,
"report_ts": 1410443539,
"swap_free": 0,
"swap_total": 0,
"swap_used": 0
} ],
"requestStatus": "ok",
"total": 6
}
1 $ curl 'http://localhost:9500/12/stat/disk_history.json?hostId=1'
{
"data": [
{
"_reads": 0,
"_writes": 0,
"disk_name": "sda2",
"free_bytes": 18801885184,
"hostid": 1,
"report_ts": 1410440765,
"total_bytes": 116954603520
},
/* .. truncated .. */
{
"_reads": 0,
"_writes": 0,
"disk_name": "sda2",
"free_bytes": 18825199616,
"hostid": 1,
"report_ts": 1410442113,
"total_bytes": 116954603520
} ],
"requestStatus": "ok",
"total": 3
}
1 $ curl 'http://localhost:9500/12/stat/network_history.json?hostId=1&startdate=1410444489'
{
"data": [
{
"hostid": 1,
"interface": "eth1",
"report_ts": 1410441439,
"rx_bytes_sec": 6178.55,
"tx_bytes_sec": 3490.93
},
{
"hostid": 1,
"interface": "eth1",
"report_ts": 1410442113,
"rx_bytes_sec": 181965,
"tx_bytes_sec": 16914.5
} ],
"requestStatus": "ok",
"total": 2
}
1 $ curl 'http://localhost:9500/12/stat/cpu_history.json?hostId=1&startdate=1410444489'
{
"data": [
{
"coreid": 65535,
"hostid": 1,
"idle": 0.350358,
"iowait": 0.0601709,
"report_ts": 1410441438,
"steal": 0,
"sys": 0.0559242,
"usr": 0.533547,
"util": 0.116095
},
{
"coreid": 65535,
"hostid": 1,
"idle": 0.504347,
"iowait": 0.00402398,
"report_ts": 1410442112,
"steal": 0,
"sys": 0.0424423,
"usr": 0.449187,
"util": 0.0464663
} ],
"requestStatus": "ok",
"total": 2
}

hostsStats

This API is especially made for the web-ui, to show some generic "current" statuses/stats of the nodes on the node overview page.

The '_interval' field belongs to the disk stats fields (_reads, _writes, _sectors_read, _sectors_written) and can be used to calculate the actual rates (see the sketch after the example reply below).

1 curl 'http://localhost:9500/14/stat?operation=hostsStats'

or

{
"operation": "hostsStats"
}

and the reply is:

{
"cc_timestamp": 1506582050,
"data": [
{
"_interval": 9861,
"_reads": 4,
"_sectors_read": 288,
"_sectors_written": 15280,
"_writes": 364,
"cmon_status": 1506582050,
"host_is_up": true,
"hostname": "127.0.0.1",
"hoststatus": "CmonHostFailed",
"id": 1,
"idle": 0.538447,
"iowait": 0.00598443,
"loadavg1": 1.95,
"loadavg15": 1.06,
"loadavg5": 1.25,
"maintenance_mode_active": false,
"ping_status": 0,
"ping_time": 0,
"report_ts": 1506582040,
"rx_bytes_sec": 75741,
"sshfailcount": 0,
"status": 18,
"steal": 0,
"sys": 0.308037,
"tx_bytes_sec": 592318,
"uptime": 611475,
"usr": 0.147531
},
{
"_interval": 9848,
"_reads": 4,
"_sectors_read": 288,
"_sectors_written": 12928,
"_writes": 353,
"cmon_status": 1506582045,
"host_is_up": true,
"hostname": "127.0.0.2",
"hoststatus": "CmonHostOnline",
"id": 2,
"idle": 0.538614,
"iowait": 0.00598757,
"loadavg1": 1.95,
"loadavg15": 1.06,
"loadavg5": 1.25,
"maintenance_mode_active": false,
"ping_status": 0,
"ping_time": 0,
"report_ts": 1506582040,
"rx_bytes_sec": 75733,
"sshfailcount": 0,
"status": 10,
"steal": 0,
"sys": 0.308029,
"tx_bytes_sec": 592258,
"uptime": 611475,
"usr": 0.147369
} ],
"requestStatus": "ok",
"total": 2
}
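
As an illustration of how rates can be derived from the disk counters (assuming '_interval' is expressed in milliseconds, which matches the roughly ten-second sampling seen above): for the first host, _writes = 364 over _interval = 9861 ms gives approximately 364 / 9.861 ≈ 37 writes per second; the same calculation applies to _reads, _sectors_read and _sectors_written.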

getInfo

With this operation it is possible to get cluster information from the internal info collector.

A few info names (this list is not complete and is subject to change at the moment):

  • "conf.configfile"
  • "conf.clustername"
  • "conf.os"
  • "conf.clusterid"
  • "conf.hostname"
  • "conf.clustertype"
  • "conf.clustertypestr"
  • "cmon.hostname"
  • "cmon.domainname"
  • "cmon.uptime"
  • "cmon.starttime"

An example calling, and reply format:

1 $ curl -X POST -d '{"operation": "getInfo", "name":"conf.clustertypestr" }' http://localhost:9500/14/stat
{
"data": "postgresql_single",
"requestStatus": "ok"
}

It is also possible (here too) to use GET request:

1 curl 'http://localhost:9500/12/stat?operation=getInfo&name=cmon.hostname'

getMongoShardingStatus

Queries the sharding status of a MongoDB cluster. The query is performed on the first mongos node found. All the fields (except total_chunks) and their names come from the result of the 'sh.status()' mongo command. For more information please take a look at https://docs.mongodb.com/manual/reference/method/sh.status/index.html

The status.shards.replica_set_0.total_chunks field and its counterparts for the other replica sets (shards) are not in the original sh.status() report. The cmon backend calculates their value by summing the number of chunks across all the databases and their collections.

The value of status.balancer.Migration_Results_for_the_last_24_hours.Message is optional. If it is non-empty, then Success and Failure are meaningless because there were no migrations to succeed or fail.

An example calling, and reply format:

1 echo '{"token": "KpFrUV2sdEn9uMSr", "operation": "getmongoshardingstatus"}' | curl -sX POST -H"Content-Type: application/json" -d @- http://192.168.30.4:9500/66/stat
{
"status": {
"shards": {
"replica_set_1": {
"total_chunks": 6
},
"replica_set_0": {
"total_chunks": 6
}
},
"balancer": {
"Last_reported_error": "could not get updated shard list from config server due to Operation timed out",
"Failed_balancer_rounds_in_last_5_attempts": 2,
"Migration_Results_for_the_last_24_hours": {
"Failure": 0,
"Message": "No recent migrations",
"FailureMessage": "",
"Success": 0
},
"Currently_enabled": true,
"Currently_running": false,
"Time_of_Reported_error": "Wed Oct 25 2017 02:28:41 GMT-0400 (EDT)"
},
"autosplit": {
"Currently_enabled": false
},
"databases": {
"test1": {
"collections": {
"testCollection1": {
"chunks": {
"replica_set_1": 2,
"replica_set_0": 2
},
"balancing": true
},
"testCollection2": {
"chunks": {
"replica_set_1": 2,
"replica_set_0": 2
},
"balancing": true
}
},
"primary": "replica_set_0",
"partitioned": true
},
"test2": {
"collections": {
"testCollection": {
"chunks": {
"replica_set_1": 2,
"replica_set_0": 2
},
"balancing": true
}
},
"primary": "replica_set_0",
"partitioned": true
}
}
},
"requestStatus": "ok",
"cc_timestamp": 1509095519
}

The RPC for Maintenance

Maintenance periods currently can be registered for hosts and for entire clusters, so when a new maintenance period is registered it can be done by either providing a "hostname" or a "cluster_id" to identify the target of the maintenance.

Every maintenance period has an owner identified by the username and user ID. No maintenance period can be registered without these properties. The RPC can't be used to register maintenance periods under the username "system" or the user ID 0; these are for internal use only.

Maintenance periods can overlap. It is possible to create a window from e.g. XX:00 to YY:00 and then create another maintenance period that starts within the first window and stretches beyond YY:00 to ZZ:00. In this case the effective maintenance period covers XX:00 to ZZ:00.

A number of jobs implicitly create a maintenance period. At the moment it is not possible to set the maintenance period for a job inside the UI. CMON supports setting a time for most jobs.

The following jobs put the node/nodes in maintenance mode:

  • Restore Backup: 60 minutes or until the restore is finished.
  • Remove Node: 10 minutes or until the job is finished.
  • Start Cluster: 60 minutes or until the job is finished.
  • Stop Cluster: 60 minutes or until the job is finished.
  • Stop Node: 30 minutes, no less and no more. After 30 minutes the node will exit maintenance mode. Since the node is then CmonHostShutdown, no alarms should be sent, and it is a bug if any are.
  • Add Node: 10 minutes or until the job is finished.
  • Upgrade Cluster: 20 minutes or until the job is finished.
  • Rolling Restart: 20 minutes or until the job is finished.
  • Replication Failover: 5 minutes or until the job is finished.
  • Replication Switchover: 5 minutes or until the job is finished.

These settings should be made configurable from the UI.

The addMaintenance RPC Call

The "addMaintenance" is a simple request that adds a new maintenance period for one host or for an entire cluster. Both the host and the cluster might have multiple even overlapping maintenance periods, so the RPC reply holds an "UUID" field that identifies the newly added maintenance period.

The call should either contain a field called "hostname" or "cluster_id" to create host or cluster maintenance periods.

{
"operation": "addMaintenance",
"hostname": "10.10.10.1",
"initiate": "2026-08-19T05:54:01.043Z",
"deadline": "2026-08-19T06:54:01.043Z",
"user": "joe",
"user_id": 42,
"reason": "Some reason."
}
{
"UUID": "577a0a99-b034-4be3-8578-516c88a31012",
"cc_timestamp": 1506582050,
"requestStatus": "ok"
}
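
For a cluster-wide maintenance period the request looks the same, except that a "cluster_id" field is sent instead of "hostname" (the cluster ID below is only an example):

{
"operation": "addMaintenance",
"cluster_id": 42,
"initiate": "2026-08-19T05:54:01.043Z",
"deadline": "2026-08-19T06:54:01.043Z",
"user": "joe",
"user_id": 42,
"reason": "Some reason."
}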

The getMaintenance RPC Call

The getMaintenance call can be used to return all the maintenance periods for a cluster and the hosts in a cluster.

{
"operation": "getMaintenance"
}
{
"cc_timestamp": 1506582050,
"maintenance_records": [
{
"class_name": "CmonMaintenanceInfo",
"hostname": "10.10.10.1",
"is_active": false,
"maintenance_periods": [
{
"UUID": "577a0a99-b034-4be3-8578-516c88a31012",
"deadline": "2026-08-19T06:54:01.043Z",
"groupid": 0,
"groupname": "",
"initiate": "2026-08-19T05:54:01.043Z",
"is_active": false,
"reason": "Some reason.",
"userid": 42,
"username": "joe"
} ]
} ],
"requestStatus": "ok"
}

The removeMaintenance RPC Call

The removeMaintenance RPC call can be used to remove maintenance periods before they expire.

Similarly to the addMaintenance call, the removeMaintenance call should also contain a "hostname" or "cluster_id" field to handle host and cluster maintenance periods.

The call should also have a "UUID" field to identify which maintenance period should be removed.

{
"operation": "removeMaintenance",
"hostname": "10.10.10.1",
"user": "joe",
"user_id": 42,
"UUID": "577a0a99-b034-4be3-8578-516c88a31012"
}
{
"cc_timestamp": 1506582050,
"requestStatus": "ok"
}

The RPC for the Query Monitor

Currently only MySQL-based clusters support this.

{
"operation" : "qm_topqueries|qm_queryoutliers",
"order_by_col" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}

The list of top queries API

List the top queries.

\code{.js}
{
"operation" : "qm_topqueries",
"order_by_col" : STRING (default 'total_time')
"order_by_col" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
\endcode
\code{.js}
{
"cc_timestamp": 1523433168,
"data": [
{
"avg_query_time": 90,
"canonical": "SELECT ?",
"command": "",
"count": 153840,
"db": "proxydemo",
"host": "10.10.10.19",
"hostid": 48,
"info": "SELECT 1",
"last_seen": 1523427205,
"lock_time": 0,
"max_query_time": 14105,
"min_query_time": 15,
"query_id": 9014708004629822606,
"query_time": 14105,
"rows_examined": 0,
"rows_sent": 0,
"state": "",
"stddev": 19,
"sum_created_tmp_disk_tables": 0,
"sum_created_tmp_tables": 0,
"sum_lock_time": 0,
"sum_no_good_index_used": 0,
"sum_no_index_used": 0,
"sum_query_time": 13927861,
"sum_rows_examined": 0,
"sum_rows_sent": 0,
"user": "proxydemo",
"variance": 398
},
..
]
"requestStatus": "ok",
"total": N
}
\endcode
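
For illustration, a concrete request ordered by the average query time might look like this (the column name and paging values are only examples):

\code{.js}
{
"operation" : "qm_topqueries",
"order_by_col" : "avg_query_time",
"limit" : 10,
"offset" : 0
}
\endcode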

The list of query outliers API

List the outliers.

\code{.js}
{
"operation" : "qm_queryoutliers",
"order_by_col" : STRING (default 'last_seen')
"limit" : NUMBER,
"offset" : NUMBER,
"startTime": EPOCH,
"stopTime": EPOCH
}
\endcode
\code{.js}
{
"cc_timestamp": 1523433168,
"data": [
{
"avg_query_time": 90,
"canonical": "SELECT ?",
"count": 1,
"hostid": 48,
"info": "SELECT 1",
"last_seen": 1523427205,
"lock_time": 0,
"max_query_time": 14105,
"min_query_time": 15,
"query_id": 9014708004629822606,
"query_time": 14105,
"stddev": 19
},
..
]
"requestStatus": "ok",
"total": N
}
\endcode

The delete API

Purges the query monitor data.

The Clusters API

This is the documentation for the RPC API found on /0/clusters and on /${CLUSTERID}/clusters.

The GetClusterInfo RPC call

The "getclusterinfo" RPC call is designed to provide the basic information about one specific cluster. The CmonClusterInfo Class holds the properties of the clusters in the reply messages.

Example:

{
"operation": "getclusterinfo",
"with_hosts": true,
"with_host_properties": "class_name, hostname, port, ip",
"cluster_id": 200
}
{
"cc_timestamp": 1484036247,
"cluster":
{
"alarm_statistics":
{
"class_name": "CmonAlarmStatistics",
"cluster_id": 200,
"critical": 5,
"warning": 2
},
"class_name": "CmonClusterInfo",
"cluster_auto_recovery": true,
"cluster_id": 200,
"cluster_name": "default_repl_200",
"cluster_type": "MYSQLCLUSTER",
"configuration_file": "configs/UtCmonRpcService_01.conf",
"group_owner":
{
"class_name": "CmonGroup",
"group_id": 0,
"group_name": ""
},
"hosts": [
{
"class_name": "CmonMySqlHost",
"hostname": "127.0.0.2",
"ip": "127.0.0.2",
"port": 3306
},
{
"class_name": "CmonHost",
"hostname": "127.0.0.1",
"ip": "127.0.0.1",
"port": 9555
} ],
"job_statistics":
{
"by_state":
{
"ABORTED": 0,
"DEFINED": 0,
"DEQUEUED": 0,
"FAILED": 0,
"FINISHED": 0,
"RUNNING": 0
},
"class_name": "CmonJobStatistics",
"cluster_id": 200
},
"log_file": "./cmon-ut-cmonrpcservice.log",
"maintenance_mode_active": false,
"managed": true,
"node_auto_recovery": true,
"owner":
{
"class_name": "CmonUser",
"email_address": "",
"groups": [ ],
"user_id": 0,
"user_name": ""
},
"state": "MGMD_NO_CONTACT",
"status_text": "No contact to the management node.",
"vendor": "oracle",
"version": "5.7"
},
"requestStatus": "ok"
}

The GetAllClusterInfo RPC call

The "GetAllClusterInfo" RPC call can be used to get the cluster info for all clusters that are known to the Cmon controller. The CmonClusterInfo Class holds the properties of the clusters in the reply messages.

The returned data is much the same as in the "GetClusterInfo" call, but instead of returning information about one cluster it returns a list that holds information about multiple clusters.

The RPC calls that get clusters can also be used to get the host lists of the clusters. Use the optional "with_hosts" and "with_host_properties" fields to request the host list and filter the host properties.

Example:

1 $ curl -XPOST -d '{"operation":"getallclusterinfo", "with_hosts":true, "token":
2 "d62d8adf4f32f5f4a388888eb4def7c86860257c"}' http://127.0.0.1:9500/0/clusters
{
"operation": "getallclusterinfo",
"with_hosts": true,
"with_host_properties": "class_name, hostname, port, ip",
"with_license_check": true,
"cluster_ids": [ 200 ]
}
{
"cc_timestamp": 1484036247,
"clusters": [
{
"alarm_statistics":
{
"class_name": "CmonAlarmStatistics",
"cluster_id": 200,
"critical": 5,
"warning": 2
},
"class_name": "CmonClusterInfo",
"cluster_auto_recovery": true,
"cluster_id": 200,
"cluster_name": "default_repl_200",
"cluster_type": "MYSQLCLUSTER",
"configuration_file": "configs/UtCmonRpcService_01.conf",
"group_owner":
{
"class_name": "CmonGroup",
"group_id": 0,
"group_name": ""
},
"hosts": [
{
"class_name": "CmonMySqlHost",
"hostname": "127.0.0.2",
"ip": "127.0.0.2",
"port": 3306
},
{
"class_name": "CmonHost",
"hostname": "127.0.0.1",
"ip": "127.0.0.1",
"port": 9555
} ],
"job_statistics":
{
"by_state":
{
"ABORTED": 0,
"DEFINED": 0,
"DEQUEUED": 0,
"FAILED": 0,
"FINISHED": 0,
"RUNNING": 0
},
"class_name": "CmonJobStatistics",
"cluster_id": 200
},
"log_file": "./cmon-ut-cmonrpcservice.log",
"maintenance_mode_active": false,
"managed": true,
"node_auto_recovery": true,
"owner":
{
"class_name": "CmonUser",
"email_address": "",
"groups": [ ],
"user_id": 0,
"user_name": ""
},
"state": "MGMD_NO_CONTACT",
"status_text": "No contact to the management node.",
"vendor": "oracle",
"version": "5.7"
} ],
"license_check":
{
"class_name": "CmonLicenseCheck",
"has_license": false,
"status_text": "No license found."
},
"requestStatus": "ok",
"total": 2
}

The "listAccounts" Request

Performs an SQL query to list the database accounts (users) on the requested hosts (by default the master).

Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "listAccounts",
"hosts" : STRING,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
  • hosts
    Selects hosts to query database users on. When not specified, by default the master or primary cluster node will be used. The value should be a ';' separated list of 'hostname' or 'hostname:port' values.
  • orderByColumn
    May contain a column name to order (descending) the results. By default this is empty, so no ordering is made.
  • limit
    A limit on the number of returned records (table lines). By default, when this is 0, some low number of records will be returned. The maximum number of results returned is 1000.
  • offset
    An offset of the first line to return of the result set (table lines). By default this is 0.

Here is a minimal example request:

{
"operation": "listAccounts"
}

Here is a minimal example request with limit:

{
"operation": "listAccounts",
"limit" : 1
}

Here is an example result:

{
"cc_timestamp": 1500543343,
"queryResultCount": 1,
"queryResultTotalCount": 8,
"queryResults": [
{
"hostname": "192.168.30.74",
"port": 3306,
"accounts": [
{
"class_name": "CmonAccount",
"grants": "REPLICATION CLIENT",
"host_allow": "192.168.30.74",
"own_database": "",
"password": "*0976C3DEA4D15D73572FEBD24E8BB7B1375EB818",
"user_name": "proxysql-monitor"
}
]
}
],
"requestStatus": "ok"
}
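
The query can also be restricted to specific hosts and ordered, for example (the hostname and column name are illustrative):

{
"operation": "listAccounts",
"hosts": "192.168.30.74:3306",
"orderByColumn": "user_name",
"limit": 10
}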

The Original Clusters API

This is the original and default interpretation of the /0/clusters and /${CLUSTERID}/clusters RPC calls. To use it, leave the "operation" field empty in the RPC request. (It is also OK to send any unrecognized string in the "operation" field, but it is of course easier to simply omit it.)

/0/clusters or /${CLUSTERID}/clusters (clusterId will be ignored anyway)

On this REST path you get back the list of the clusters managed by the ClusterControl instance. (You can get the data of one specific cluster by specifying an 'id' field.)

A few example calls/replies:

1 $ curl 'http://localhost:9500/0/clusters'
{
"cc_timestamp": 1444310882,
"clusters": [
{
"clusterAutorecovery": true,
"configFile": "/etc/cmon.d/cmon_5.cnf",
"id": 5,
"logFile": "/var/log/cmon_5.log",
"name": "cluster_5",
"nodeAutorecovery": true,
"running": true,
"status": 0,
"statusText": "",
"type": "postgresql_single"
},
{
"clusterAutorecovery": true,
"configFile": "/etc/cmon.d/cmon_6.cnf",
"id": 6,
"logFile": "/var/log/cmon_6.log",
"name": "cluster_6",
"nodeAutorecovery": true,
"running": true,
"status": 0,
"statusText": "",
"type": "mysql_single"
} ],
"info":
{
"hasLicense": true,
"licenseExpires": 83,
"licenseStatus": "License found.",
"version": "1.2.12"
},
"requestStatus": "ok"
}
1 $ curl 'http://10.0.0.6:9500/0/clusters'
{
"cc_timestamp": 1432889560,
"data": [
{
"clusterAutorecovery": true,
"configFile": "/etc/cmon.d/cmon_4.cnf",
"id": 4,
"logFile": "/var/log/cmon_4.log",
"name": "cluster_4",
"nodeAutorecovery": true,
"running": true,
"status": 2,
"statusText": "Cluster started.",
"type": "postgresql_single"
} ],
"requestStatus": "ok",
"total": 1
}

The ProxySql API

/${CLUSTERID}/proxysql

The "topQueries" Request

Performs an SQL query for statistics about the top queries sent to ProxySQL. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "topQueries",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • orderByColumn
    May contain a column name to order (descending) the results. By default this is empty, so no ordering is made.
  • limit
    A limit on the number of returned records (table lines). By default, when this is 0, some low number of records will be returned. The maximum number of results returned is 1000.
  • offset
    An offset of the first line to return of the result set (table lines). By default this is 0.

Here is an example result:

{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name" : "CmonProxySqlTopQuery",
"count_star": "2",
"digest": "0x99531AEFF718C501",
"digest_text": "show tables",
"first_seen": "1483775487",
"hostgroup": "10",
"last_seen": "1483775674",
"max_time": "578",
"min_time": "217",
"schemaname": "proxydemo",
"sum_time": "795",
"username": "proxydemo"
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 13,
"requestStatus": "ok"
}
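
For illustration, a concrete request that orders by the total execution time might look like this (the hostname, column name and paging values are only examples):

{
"operation" : "topQueries",
"hostName" : "192.168.30.71",
"orderByColumn" : "sum_time",
"limit" : 20,
"offset" : 0
}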

The "queryRules" Request

Performs an SQL query for the query rules configured on the ProxySQL host. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "queryRules",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • orderByColumn
    May contain a column name to order (descending) the results. By default this is empty, so no ordering is made.
  • limit
    A limit on the number of returned records (table lines). By default, when this is 0, some low number of records will be returned. The maximum number of results returned is 1000.
  • offset
    An offset of the first line to return of the result set (table lines). By default this is 0.

Here is an example result:

{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name" : "CmonProxySqlQueryRule",
"active": "1",
"apply": "1",
"cache_ttl": "NULL",
"client_addr": "NULL",
"comment": "NULL",
"delay": "NULL",
"destination_hostgroup": "10",
"digest": "NULL",
"error_msg": "NULL",
"flagIN": "0",
"flagOUT": "NULL",
"log": "NULL",
"match_digest": "NULL",
"match_pattern": "^SELECT . .. .cache FOR UPDATE",
"mirror_flagOUT": "NULL",
"mirror_hostgroup": "NULL",
"negate_match_pattern": "0",
"proxy_addr": "NULL",
"proxy_port": "NULL",
"reconnect": "NULL",
"replace_pattern": "NULL",
"retries": "NULL",
"rule_id": "100",
"schemaname": "NULL",
"timeout": "NULL",
"username": "NULL"
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 3,
"requestStatus": "ok"
}

The "insertQueryRule" Request

Inserts the queryRule on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "insertQueryRule",
"hostName" : STRING,
"port" : NUMBER,
"queryRule" : {
"class_name" : "CmonProxySqlQueryRule",
"rule_id" : NUMBER
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • queryRule
    Properties should be according to CmonProxySqlQueryRule class. Has no default value, so this is a mandatory field.
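
For illustration, a request that inserts a simple routing rule might look like this (the rule values are only examples; the available properties can be seen in the queryRules result above):

{
"operation" : "insertQueryRule",
"hostName" : "192.168.30.71",
"queryRule" : {
"class_name" : "CmonProxySqlQueryRule",
"rule_id" : 200,
"active" : "1",
"match_digest" : "^SELECT .* FOR UPDATE$",
"destination_hostgroup" : "10",
"apply" : "1"
}
}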

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "deleteQueryRule" Request

Deletes a queryRule on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "deleteQueryRule",
"hostName" : STRING,
"port" : NUMBER,
"queryRule" : {
"class_name" : "CmonProxySqlQueryRule",
"rule_id" : NUMBER
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • queryRule
    Properties should be according to CmonProxySqlQueryRule class, but only rule_id is used for constructing DELETE sql command. Has no default value, so this is a mandatory field.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "updateQueryRule" Request

Updates a queryRule on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "updateQueryRule",
"hostName" : STRING,
"port" : NUMBER,
"queryRule" : {
"class_name" : "CmonProxySqlQueryRule",
"rule_id" : NUMBER
"new_rule_id" : NUMBER
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • queryRule
    Properties should be according to CmonProxySqlQueryRule class, except that when the rule_id (the primary key) is to be changed, the new value should be defined as "new_rule_id" property. Has no default value, so this is a mandatory field.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "queryHostgroups" Request

Performs an SQL query for the MySQL server hostgroups configured on the ProxySQL host. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "queryHostgroups",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • orderByColumn
    May contain a column name to order (descending) the results. By default this is empty, so no ordering is made.
  • limit
    A limit on the number of returned records (table lines). By default, when this is 0, some low number of records will be returned. The maximum number of results returned is 1000.
  • offset
    An offset of the first line to return of the result set (table lines). By default this is 0.

Here is an example result:

{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name": "CmonProxySqlHostgroup",
"writer_hostgroup": 10,
"reader_hostgroup": 20,
"comment" : "host groups"
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 3,
"requestStatus": "ok"
}

The "queryRules" Request

Does a sql query for mysql servers registered for proxysql. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "queryServers",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • orderByColumn
    May contain a column name to order (descending) the results. By default this is empty, so no ordering is made.
  • limit
    A limit on the number of returned records (table lines). By default, when this is 0, some low number of records will be returned. The maximum number of results returned is 1000.
  • offset
    An offset of the first line to return of the result set (table lines). By default this is 0.

Here is an example result:

{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name" : "CmonProxySqlServer",
"Bytes_data_recv": "0",
"Bytes_data_sent": "0",
"ConnERR": "0",
"ConnFree": "0",
"ConnUsed": "0",
"Latency_ms": "285",
"Queries": "0",
"status": "ONLINE",
"comment": "read server",
"compression": "0",
"hostgroup_id": "20",
"hostname": "192.168.30.11",
"max_connections": "100",
"max_latency_ms": "0",
"max_replication_lag": "10",
"port": "3306",
"use_ssl": "0",
"weight": "1"
}],
"hostgroupIdList": [20],
"queryResultCount" : 1,
"queryResultTotalCount" : 3,
"requestStatus": "ok"
}

The "insertMysqlServer" Request

Inserts a MySQL server into the specified ProxySQL host configuration. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "insertMysqlServer",
"hostName" : STRING,
"port" : NUMBER,
"mysqlServer" : {
"class_name" : "CmonProxySqlServer",
"hostgroup_id" : NUMBER,
"hostname" : STRING
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • mysqlServer
    Properties should be according to CmonProxySqlServer class. Has no default value, so this is a mandatory field.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "deleteMysqlServer" Request

Deletes a MySQL server from the ProxySQL host configuration. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "deleteQueryRule",
"hostName" : STRING,
"port" : NUMBER,
"mysqlServer" : {
"class_name" : "CmonProxySqlServer",
"hostgroup_id" : NUMBER,
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • mysqlServer
Properties should be according to the CmonProxySqlServer class, but only hostgroup_id, hostname and port are used for constructing the DELETE SQL command. Has no default value, so this is a mandatory field.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "updateMysqlServer" Request

Updates a mysql server config on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "updateMysqlServer",
"hostName" : STRING,
"port" : NUMBER,
"mysqlServer" : {
"class_name" : "CmonProxySqlServer",
"hostgroup_id" : NUMBER,
"hostname" : STRING
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • mysqlServer
    Properties should be according to CmonProxySqlServer class. Has no default value, so this is a mandatory field.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "queryUsers" Request

Performs an SQL query for the MySQL users configured on the ProxySQL host. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "queryUsers",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • orderByColumn
    May contain a column name to order (descending) the results. By default this is empty, so no ordering is made.
  • limit
    A limit on the number of returned records (table lines). By default, when this is 0, some low number of records will be returned. The maximum number of results returned is 1000.
  • offset
    An offset of the first line to return of the result set (table lines). By default this is 0.

Here is an example result:

{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name": "CmonProxySqlUser",
"active": "1",
"backend": "1",
"default_hostgroup": "10",
"default_schema": "proxydemo",
"fast_forward": "0",
"frontend": "1",
"max_connections": "10000",
"password": "proxydemo",
"schema_locked": "0",
"transaction_persistent": "0",
"use_ssl": "0",
"username": "proxydemo"
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 3,
"requestStatus": "ok"
}

The "insertMysqlUser" Request

Inserts the mysql user on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "insertQueryRule",
"hostName" : STRING,
"port" : NUMBER,
"mysqlUser" : {
"class_name" : "CmonProxySqlUser",
"username" : STRING,
"password" : STRING,
"db_database" : STRING,
"db_privs" : STRING,
"frontend" : NUMBER,
"backend" : NUMBER
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • mysqlUser
    Properties should be according to CmonProxySqlUser class. Has no default value, so this is a mandatory field.
  • db_database
    Extra property for CmonProxySqlUser, usable only for insertion (for now). This should have the form "databasename.*" or "*.*"; by default it will be set to "*.*".
  • db_privs
    Extra property for CmonProxySqlUser, usable only for insertion (for now). This should be a comma-separated list of MySQL user privileges. It has no default value, and at least one privilege must be defined or the user creation will fail.
    List of supported privilege values:
    CREATE, DROP, GRANT OPTION, LOCK TABLES, REFERENCES, EVENT, ALTER, DELETE, INDEX, INSERT, SELECT, UPDATE, CREATE TEMPORARY TABLES, TRIGGER, CREATE VIEW, SHOW VIEW, ALTER ROUTINE, CREATE ROUTINE, EXECUTE, FILE, CREATE TABLESPACE, CREATE USER, PROCESS, PROXY, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, SHOW DATABASES, SHUTDOWN, SUPER, ALL [PRIVILEGES]
  • frontend
    Not mandatory, by default set to 1.
  • backend
    Not mandatory, by default set to 1.
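
For illustration, a concrete request creating an application user might look like this (all values are hypothetical):

{
"operation" : "insertMysqlUser",
"hostName" : "192.168.30.71",
"mysqlUser" : {
"class_name" : "CmonProxySqlUser",
"username" : "appuser",
"password" : "secret",
"db_database" : "appdb.*",
"db_privs" : "SELECT, INSERT, UPDATE, DELETE",
"frontend" : 1,
"backend" : 1
}
}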

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "deleteMysqlUser" Request

Deletes a mysql user on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "deleteMysqlUser",
"hostName" : STRING,
"port" : NUMBER,
"mysqlUser" : {
"class_name" : "CmonProxySqlUser",
"username" : STRING
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • mysqlUser
    Properties should be according to CmonProxySqlUser class. Has no default value, so this is a mandatory field.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "updateMysqlUser" Request

Updates a mysql user on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "updateMysqlUser",
"hostName" : STRING,
"port" : NUMBER,
"mysqlUser" : {
"class_name" : "CmonProxySqlUser",
"username" : STRING,
"password" : STRING,
"frontend" : NUMBER,
"backend" : NUMBER
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • mysqlUser
    Properties should be according to CmonProxySqlUser class. Has no default value, so this is a mandatory field.
  • frontend
    Not mandatory, by default set to 1.
  • backend
    Not mandatory, by default set to 1.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "importMysqlUsers" Request

Receives lists of CmonProxySqlUser objects, one list per MySQL node, to import the specified users from those MySQL nodes into ProxySQL.

Each user's global privileges, accessible databases, privileges for those databases, and password will be read from the source MySQL host.

Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "importMysqlUsers",
"hostName" : STRING,
"port" : NUMBER,
"userList" : [
{
"sourceHostName" : STRING,
"sourcePort" : NUMBER,
"proxySqlUsers" : [
{
"class_name" : "CmonProxySqlUser",
"username" : STRING,
"host_allow" : STRING,
"default_hostgroup": "10"
}
]
}
]
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • userList
    A list of lists of CmonProxySqlUser objects (representing the users to import), one list for each MySQL host to import the users from.
  • sourceHostName
    The MySQL host to import users from.
  • sourcePort
    The port of the MySQL node to import users from.
  • proxySqlUsers
    A list of CmonProxySqlUser objects to import. Properties like default_hostgroup will be used as in the create-new-user call, except for password, db_database and db_privs, which will be read from the source MySQL node.
  • host_allow
    Not a real CmonProxySqlUser property, just an enhancement for this import only. It specifies exactly which MySQL user account's password and privileges to use for the new MySQL user that ProxySQL will use. Creating this new MySQL user is a necessary side effect of the import, so that logins from the defined ProxySQL host are possible.

Here is an example result:

The returned JSON structure may contain a userList array structured the same way as the input parameter. This array will contain all the users whose import was not finished because some error happened during the import procedure.

In practice, after every successful import the just-imported CmonProxySqlUser object is removed from the list, and in case of an error the remaining list is returned in the answer.

{
"cc_timestamp": 1484041378,
"requestStatus": "ok",
"userList" : []
}

The "queryVariables" Request

Performs an SQL query for the global variables configured on the ProxySQL host. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "queryVariables",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • orderByColumn
    May contain a column name to order (descending) the results. By default this is empty, so no ordering is made.
  • limit
    A limit on the number of returned records (table lines). By default, when this is 0, some low number of records will be returned. The maximum number of results returned is 1000.
  • offset
    An offset of the first line to return of the result set (table lines). By default this is 0.

Here is an example result:

{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name": "CmonProxySqlUser",
"variable_name": "mysql-shun_on_failures",
"variable_value": "5"
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 75,
"requestStatus": "ok"
}

The "updateVariable" Request

Updates a variable on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "updateVariable",
"hostName" : STRING,
"port" : NUMBER,
"variable" : {
"class_name" : "CmonProxySqlVariable",
"variable_name" : STRING,
"variable_value" : STRING
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • variable
    Properties should be according to CmonProxySqlVariable class. Has no default value, so this is a mandatory field.
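
For illustration, a request changing the variable shown in the queryVariables example above might look like this (the value is arbitrary):

{
"operation" : "updateVariable",
"hostName" : "192.168.30.71",
"variable" : {
"class_name" : "CmonProxySqlVariable",
"variable_name" : "mysql-shun_on_failures",
"variable_value" : "10"
}
}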

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "querySchedules" Request

Performs an SQL query for the scheduler records configured on the ProxySQL host. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "querySchedules",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • orderByColumn
    May contain a column name to order (descending) the results. By default this is empty, so no ordering is made.
  • limit
    A limit on the number of returned records (table lines). By default, when this is 0, some low number of records will be returned. The maximum number of results returned is 1000.
  • offset
    An offset of the first line to return of the result set (table lines). By default this is 0.

Here is an example result:

{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name": "CmonProxySqlSchedule",
"id": "1",
"active": "1",
"interval_ms" : "100",
"filename" : "/path/to/scriptefile",
"arg1": "first script argument",
"arg2": "second script argument",
"arg3": "third script argument",
"arg4": "forth script argument",
"arg5": "fifth script argument",
"comment" : "comment"
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 1,
"requestStatus": "ok"
}

The "insertSchedule" Request

Inserts the schedule record on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is a minimal example request:

{
"operation" : "insertSchedule",
"hostName" : STRING,
"port" : NUMBER,
"schedule" : {
"class_name" : "CmonProxySqlSchedule",
"filename" : "/path/to/scriptefile"
}
}

Here is a complete example request:

{
"operation" : "insertSchedule",
"hostName" : STRING,
"port" : NUMBER,
"schedule" : {
"class_name" : "CmonProxySqlSchedule",
"active": "1", // default is 1
"interval_ms" : "100", // default and minimum value is 100 = 0.1 sec
"filename" : "/path/to/scriptefile",
"arg1": "first script argument", // default is empty for all args
"arg2": "second script argument",
"arg3": "third script argument",
"arg4": "forth script argument",
"arg5": "fifth script argument",
"comment" : "comment" // default is empty
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • schedule
    Properties should be according to CmonProxySqlScheduler class. Has no default value, so this is a mandatory field.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "deleteSchedule" Request

Deletes a schedule record from the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "deleteSchedule",
"hostName" : STRING,
"port" : NUMBER,
"schedule" : {
"class_name" : "CmonProxySqlSchedule",
"username" : STRING
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • schedule
    Properties should be according to CmonProxySqlSchedule class. Has no default value, so this is a mandatory field.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "updateSchedule" Request

Updates a schedule record on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "updateSchedule",
"hostName" : STRING,
"port" : NUMBER,
"schedule" : {
"class_name" : "CmonProxySqlSchedule",
"id": "1",
"active": "1",
"interval_ms" : "100",
"filename" : "/path/to/scriptefile",
"arg1": "first script argument",
"arg2": "second script argument",
"arg3": "third script argument",
"arg4": "forth script argument",
"arg5": "fifth script argument",
"comment" : "comment"
}
}
  • hostName
    Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
  • port
    The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0 and is not used for host selection.
  • schedule
    Properties should be according to CmonProxySqlSchedule class. Has no default value, so this is a mandatory field.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "updateAdminCredentialsInCmon" Request

Updates the ProxySQL admin credentials in cmon only. This is useful when the ProxySQL admin name and/or password was changed outside of cmon, so cmon has the wrong credentials.

Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation": "updateAdminCredentialsInCmon",
"hostName": "192.168.30.71",
"adminName": "adminname",
"adminPassword": "adminpwd"
}
  • hostName
    Selects host for which to update values in cmon. By default, when this is empty, the first found proxySQL node will be used.
  • adminName
    Admin user's name defined in proxysql global_variables table.
  • adminPassword
    Admin user's password defined in proxysql global_variables table.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "updateMonitorCredentialsInCmon" Request

Updates the ProxySQL monitor user credentials in cmon only. This is useful when the ProxySQL monitor user name and/or password was changed outside of cmon, so cmon has the wrong values.

Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation": "updateMonitorCredentialsInCmon",
"hostName": "192.168.30.71",
"monitorUserName": "monitorName",
"monitorUserPassword": "monitorPassword"
}
  • hostName
    Selects host for which to update values in cmon. By default, when this is empty, the first found proxySQL node will be used.
  • monitorUserName
    Monitor user's name defined in proxysql global_variables table.
  • monitorUserPassword
    Monitor user's password defined in proxysql global_variables table.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The Cloud Services API

/0/cloud

The "proxy" Request to clustercontrol-cloud service

Forwards the request as a REST HTTP request to the clustercontrol-cloud service and returns its response in a JSON object named "response". Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

If the clustercontrol-cloud answer holds JSON, it is parsed and included in the response.json object. If parsing fails, the answer provided by the service will be found as a raw string in response.body. Non-JSON response bodies are likewise returned in response.body.

Here is an example request with JSON returned:

{
"operation" : "proxy",
"method" : "GET",
"uri" : "/aws/vm",
"body" : ""
}

Here is the result:

{
"cc_timestamp": 1484041378,
"response":
{
"headers": {
"date": "Thu, 25 Jan 2018 11:15:18 GMT",
"content-length": "1553",
"content-type": "application/json; charset=UTF-8"
},
"json": [
{
"network": {
"public_ip": [
"52.58.107.236",
"ec2-52-58-107-236.eu-central-1.compute.amazonaws.com"
],
"private_ip": [
"172.31.2.217",
"ip-172-31-2-217.eu-central-1.compute.internal"
]
},
"subnet_id": "subnet-6a1d1c12",
"image": "ami-653bd20a",
"vpc_id": "vpc-8238dfeb",
"region": "eu-central-1",
"firewalls": [
"sg-d2bc5abb"
],
"size": "c3.xlarge",
"id": "i-068e665a16334283a",
"cloud": "aws",
"name": "V1-DO_NOT_REMOVE_S9S-JENKINS-BUGZILLA-SERVER.jenkins.severalnines.com"
}
],
"status_code": 200,
"reason_phrase": "OK"
},
"requestStatus": "ok"
}

Here is an example request with plain text returned:

{
"operation" : "proxy",
"method" : "GET",
"uri" : "/aws/storage/list-buckets",
"body" : ""
}

Here is the result:

{
"cc_timestamp": 1484041378,
"response":
{
"body": "Not Found",
"headers": {
"date": "Thu, 25 Jan 2018 11:15:23 GMT",
"content-length": "9",
"content-type": "text/plain; charset=utf-8"
},
"status_code": 404,
"reason_phrase": "Not Found"
},
"requestStatus": "ok"
}

The "list_credentials" Request

Returns a json structure containing all registered cloud credentials. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "list_credentials",
}

Here is an example result:

{
"cc_timestamp": 1484041378,
"result":
{
"aws" : [
{
"id" : : 0
"name" : "My aws backup target",
"comment" : "For hat purpose we are using this cloud service.",
"credentials" :
{
"access_key_id" : "AKFBXXXXXXXXXKO4ZY2A",
"access_key_secret" : "CzrDyNiZgRcS0Wt2jnXXXXXXXXXXOpZJHX3I5QT/",
"access_key_region" : "eu-central-1"
}
},
...
],
"gce" : [
{
"id" : 1
"name" : "My gce backup target",
"comment" : "For hat purpose we are using this cloud service.",
"credentials" :
{
"type" : "service_account",
"project_id" : "Project ID",
"private_key_id" : "Private key Id",
"private_key" : "Private key contents",
"client_email" : "Client Email",
"client_id" : "Client ID",
"auth_uri" : "https://accounts.google.com/o/oauth2/auth",
"token_uri" : "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url" : "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url" : "Client x509 certificate url"
}
},
...
]
},
"requestStatus": "ok"
}

Get a specific credential

Description: With this RPC API it is possible to obtain the credentials by ID and provider string.

An example request & reply:

1 $ curl -XPOST -d '{"token:"ng59qVifPA7PS881","operation": "get_credentials", "id": 0, "provider": "aws"}' http://localhost:9500/185/cloud
{
"cc_timestamp": 1506417375,
"data":
{
"access_key_id": "XKIAIXUKPGHXIGTO6RVQ",
"access_key_region": "eu-west-2",
"access_key_secret": "XIsU6BcDiac5UWiRewVXCmT6Fv6X5YDkYgUXVPrZ"
},
"requestStatus": "ok"
}

The "add_credentials" Request

Adds a cloud service credentials json structure to the backend's collection. Saves it in /var/lib/cmon/cloud_credentials.json file. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "add_credentials",
"provider" : STRING,
"name" : STRING,
"comment" : STRING,
"credentials" :
{
// specific depending on the value of "provider"
}
}
  • provider
    Can have these values:
    • "aws" for amazon web service credentials
    • "gce" for google cloud engine credentials
      Has no default value, and thus must be defined.
  • name
    A user-chosen name/string for identifying a credential profile in a human-readable way. By default the value is empty, which is valid. Note that the same name may not be used twice for the same cloud provider.
  • comment
    Any remark of the user related to the saved credentials; it does not have any technical meaning. By default this is empty.
  • credentials
    The possible structure depends on the value of provider. Please check the example result of the list_credentials operation for the possible fields of the credentials structures. Also please note that this structure is defined by our tool named clud, which can upload and download files to/from the cloud providers' file storage services. More information can be found at https://github.com/severalnines/cloudlink-go/tree/develop/clud

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok",
"added_id" : NUMBER
}
  • added_id
    Every credentials profile saved on the backend has a unique id assigned. That id can be used for the update and remove operations, and it is returned in this field after adding a new credentials profile.
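
For illustration, a request adding an AWS credential profile might look like this (the key values are placeholders):

{
"operation" : "add_credentials",
"provider" : "aws",
"name" : "My aws backup target",
"comment" : "Credentials used for backups.",
"credentials" :
{
"access_key_id" : "AKFBXXXXXXXXXKO4ZY2A",
"access_key_secret" : "CzrDyNiZgRcS0Wt2jnXXXXXXXXXXOpZJHX3I5QT/",
"access_key_region" : "eu-central-1"
}
}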

The "update_credentials" Request

Updates a cloud service credentials profile / json structure in the backend's collection. Saves it in /var/lib/cmon/cloud_credentials.json file. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "update_credentials",
"provider" : STRING,
"id" : NUMBER,
"name" : STRING,
"comment" : STRING,
"credentials" :
{
// specific depending on the value of "provider"
}
}
  • provider
    Can have these values:
    • "aws" for amazon web service credentials
    • "gce" for google cloud engine credentials
      Has no default value, and thus must be defined.
  • id
    The unique identifier number of the credentials profile to update. The id can be found for each profile in the result of the list_credentials RPC operation, or it is returned in the "added_id" field when the add_credentials operation is used.
  • name
    A user-chosen name/string for identifying a credential profile in a human-readable way. By default the value is empty, which is valid. Note that the same name may not be used twice for the same cloud provider.
  • comment
    Any remark of the user related to the saved credentials; it does not have any technical meaning. By default this is empty.
  • credentials
    The possible structure depends on the value of provider. Please check the example result of the list_credentials operation for the possible fields of the credentials structures. Also please note that this structure is defined by our tool named clud, which can upload and download files to/from the cloud providers' file storage services. More information can be found at https://github.com/severalnines/cloudlink-go/tree/develop/clud

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The "remove_credentials" Request

Removes a cloud service credentials profile / json structure from the backend's collection. Also saves the remaining credentials in /var/lib/cmon/cloud_credentials.json file. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.

Here is an example request:

{
"operation" : "remove_credentials",
"provider" : STRING,
"id" : NUMBER,
}
  • provider
    Can have these values:
    • "aws" for amazon web service credentials
    • "gce" for google clound engine credentials
      Has no default value, and thus must be defined.
  • id
    The unique identifier number of the credentials profile to remove. The id can be found for each profile in the result of list_credentials rpc operation or returned in the field "added_id" when add_credentials operation is used.

Here is an example result:

{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}

The password reset API

Description: These APIs are made for the UI to be able to maintain password reset tokens and to send emails out to the users.

forgot_password

Description: This operation can be used to trigger a password reset procedure for a UI user: a token will be generated (with an expiration time) and an email will be sent to the user containing the specified URL (with the token appended).

Arguments:

  • email_address: the user's email address
  • base_url: the URL from the browser plus the password-reset specific path (must be URL-encoded)

An example request and reply:

1 curl 'http://localhost:9500/0/passwordreset?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d&operation=forgot_password&email_address=not-exists@severalnines.com&base_url=https%3A%2F%2Ftest.severalnines.com%2Fclustercontrol%2Fpasswordreset%26token%3D'
2 {
3  "cc_timestamp": 1514908388,
4  "requestStatus": "ok"
5 }

reset_password

Description: Once the user has received the e-mail and clicked on the link, the frontend must take the token from the URL and pass it to this RPC call to set the new password for the user.

NOTE: as an experiment this call also updates the dcps.users table password+hash fields so the user can log in with the new password

Arguments:

  • password_token: the password reset token from the URL
  • password_new: the new password to be set

An example request and reply:

1 curl 'http://localhost:9500/0/passwordreset?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d&operation=reset_password&password_token=1ced41461efe4688bd215dcd0f1bbef6&password_new=admin'
2 {
3  "cc_timestamp": 1514908494,
4  "requestStatus": "ok"
5 }

Setup, configuration

1 $ cmon --version
2 cmon, version 1.2.9
3 
4 Enabled features: mongodb, rpc, libssh, mysql
5 Build git-hash: 54309b368f29f266a115cfbf99ba4aa27421168c

The RPC can be configured using the following command line arguments:

1 -p, --rpc-port=<int> Listen on RPC port (default: 9500)
2 -b, --bind-addr='ip1,ip2..' Bind RPC to IP addresses (default: 127.0.0.1)

For cmon (SysV) service, you can create one of the following configuration files:

  • /etc/default/cmon (on debian like systems)
  • /etc/sysconfig/cmon (on redhat like systems)

with the following content:

1 # custom port (NOTE: RPCv2 will listen on this port + 1):
2 #RPC_PORT=9500
3 # custom bind addresses (default 127.0.0.1):
4 #RPC_BIND_ADDRESSES="192.168.0.100,192.168.33.1"
5 #RPC_BIND_ADDRESSES="0.0.0.0"
6 
7 # this might be already here for clustercontrol-notifications service:
8 
9 # New events client http callback as of v1.4.2!
10 EVENTS_CLIENT="http://127.0.0.1:9510"

(Don’t forget to restart the cmon service.)

After cmon is started up, you can verify it by posting a JSon request (replace the url):

1 $ curl -XPOST -d '{"operation":"getCellFunctions","spreadsheetUser":"admin"}' \
2  http://ec2-54-220-127-157.eu-west-1.compute.amazonaws.com:9500/1/sheet

You should get some JSon reply back…

Authentication key configuration

The cmon.cnf (or the corresponding configuration file for a specific cluster) can specify an authentication token/key to restrict access to the RPC interfaces.

NOTE: cmon now provides an RPC call to actually enforce the security on a cluster, see generateToken.

1 rpc_key=AABBCCDDEEFF

When this is set, the RPC requests must contain the authentication key ("token"), otherwise the server replies with 'access denied' because of the missing token.

Please note that it is also possible to specify the authentication token in an HTTP header, using the "X-Token" header name.

X-Token: AABBCCDDEEFF
{ "token": "AABBCCDDEEFF", ... }

Some example JSon replies with authentication turned on:

1 $sudo cat /etc/cmon.d/cmon_12.cnf | grep rpc_key
2 rpc_key=EBBEABBA
3 
4 $ curl -X POST -H"Content-Type: application/json" -d '{"operation": "getJobs"}' http://localhost:9500/12/job
{
"errorString": "Access denied (invalid authentication token)",
"requestStatus": "error"
}
1 $ curl -X POST -H"Content-Type: application/json" -d '{"operation": "getJobs", "token": "invalid"}' http://localhost:9500/12/job
{
"errorString": "Access denied (invalid authentication token)",
"requestStatus": "error"
}

And finally a good one ;-)

1 $ curl -X POST -H"Content-Type: application/json" -d '{"operation": "getJobs" , "token": "EBBEABBA"}' http://localhost:9500/12/job
{
"jobs": [ ],
"requestStatus": "ok"
}

TLS support

RPCv2 services (which require a different way of authentication than tokens) will listen on an additional port (the RPC port number + 1, so 9501 by default) using TLS.

The daemon will try to use the following SSL/TLS certificate and private key; if these do not exist, it will auto-generate a self-signed key (with one year validity):

1 /var/lib/cmon/ca/cmon/rpc_tls.crt
2 /var/lib/cmon/ca/cmon/rpc_tls.key
3 
4 $ sudo openssl x509 -noout -text -in /var/lib/cmon/ca/cmon/rpc_tls.crt
5 Certificate:
6  Data:
7  Version: 3 (0x2)
8  Serial Number: 1453908206 (0x56a8e0ee)
9  Signature Algorithm: sha256WithRSAEncryption
10  Issuer: CN=127.0.0.1/description=clustercontrol RPC TLS key
11  Validity
12  Not Before: Jan 26 15:23:26 2016 GMT
13  Not After : Jan 26 15:23:26 2018 GMT
14  Subject: CN=127.0.0.1/description=clustercontrol RPC TLS key
15  Subject Public Key Info:
16  Public Key Algorithm: rsaEncryption
17  Public-Key: (2048 bit)
18  Modulus:
19  ...
20  Exponent: 65537 (0x10001)
21  X509v3 extensions:
22  X509v3 Subject Key Identifier:
23  26:E7:BB:86:24:82:69:76:8E:96:66:15:B8:D5:B2:FD:B9:B8:2F:28
24  X509v3 Basic Constraints: critical
25  CA:FALSE, pathlen:1
26  X509v3 Key Usage: critical
27  Digital Signature, Key Encipherment, Key Agreement
28  X509v3 Extended Key Usage:
29  TLS Web Server Authentication
30  X509v3 Subject Alternative Name:
31  IP Address:127.0.0.1, DNS:localhost
32  Signature Algorithm: sha256WithRSAEncryption
33  ...
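
To check that the TLS listener is actually up, one can (as a quick sketch, assuming the default port 9501) connect to it with the openssl client and inspect the presented certificate:

$ openssl s_client -connect 127.0.0.1:9501 -servername localhost < /dev/null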

Hosts and Containers

The "/0/host" path is for managing servers, hosts and virtual machines or containers.

startServers

This call is for starting up (or booting) servers.

{
"operation": "startServers",
"request_created": "2017-10-16T07:48:21.779Z",
"request_id": 3,
"servers": [
{
"class_name": "CmonContainerServer",
"hostname": "host04"
} ]
}
{
"messages": [ "Started server 'host04'." ],
"request_created": "2017-10-16T07:48:21.779Z",
"request_id": 3,
"request_processed": "2017-10-16T07:48:21.943Z",
"request_status": "Ok",
"request_user_id": 3
}
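
As with the other RPC v1 calls, such a request is simply POST-ed as a JSON document to the "/0/host" path; a sketch (assuming an rpc_key of AABBCCDDEEFF is configured, as in the authentication examples):

$ curl -X POST -H "Content-Type: application/json" \
  -d '{"operation": "startServers", "token": "AABBCCDDEEFF", "servers": [ { "class_name": "CmonContainerServer", "hostname": "host04" } ]}' \
  http://localhost:9500/0/host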

shutDownServers

This call is for shutting down (powering off) servers.
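
Presumably the request mirrors the startServers request above; here is an unverified sketch with illustrative values:

{
"operation": "shutDownServers",
"servers": [
{
"class_name": "CmonContainerServer",
"hostname": "host04"
} ]
}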

registerServers

This call can be used to register a container server in the Cmon Controller. This is the fast way, but there is a job called "create_server" that does a very similar registration. The job will actually install software, while this call just registers an existing server.

Here is how a request will be sent:

{
"operation": "registerServers",
"request_created": "2017-08-31T07:53:41.158Z",
"request_id": 3,
"servers": [
{
"class_name": "CmonContainerServer",
"cdt_path": "myservers",
"hostname": "storage01"
} ]
}

Here is a reply that shows the server was registered, but the server is actually turned off:

{
"reply_received": "2017-10-16T11:33:24.501Z",
"request_created": "2017-10-16T11:33:24.496Z",
"request_id": 3,
"request_processed": "2017-10-16T11:33:27.606Z",
"request_status": "Ok",
"request_user_id": 3,
"servers": [
{
"cdt_path": "/",
"class_name": "CmonContainerServer",
"clusterid": 0,
"connected": false,
"hostId": 6,
"hostname": "storage01",
"hoststatus": "CmonHostOffLine",
"ip": "192.168.1.17",
"maintenance_mode_active": false,
"message": "SSH connection failed.",
"owner_group_id": 2,
"owner_group_name": "users",
"owner_user_id": 3,
"owner_user_name": "pipas",
"protocol": "lxc",
"timestamp": 1508153607,
"unique_id": 6
} ]
}

unregisterHost

This call will drop the given host from our database entirely. The computer the host represents will not be touched in any way, software will not be uninstalled, services will not be stopped, nada!

{
"cluster_id": 1,
"dry_run": true,
"host":
{
"class_name": "CmonHost",
"hostname": "192.168.1.127",
"port": 9555
},
"operation": "unregisterHost",
"request_created": "2017-09-08T09:42:49.137Z",
"request_id": 3
}
  • cluster_id: The numerical ID of the cluster that will execute the job. The cluster can also be referenced using the cluster name. For RPC v1 identifying the cluster is not mandatory, but it can be useful if the same host is part of multiple clusters.
  • cluster_name: The name of the cluster that will execute the job. The cluster can also be referenced using the cluster ID. For RPC v1 identifying the cluster is not mandatory, but it can be useful if the same host is part of multiple clusters.
  • dry_run: If this field is provided (with any value at all) the host will not actually be unregistered. All the checks will be done and success will be reported back; only the error message will show that this was just a drill.
  • host: The host that shall be unregistered.
  • class_name: The class name of the host will not be strictly checked (because the client might not know it); just send "CmonHost" and that will do.
  • hostname: This is mandatory to identify the host.
  • port: The port of the host. RPC v1 allows the usage of hosts without a port, but for most cases the port will be required.

So there are multiple fields to identify the host: the hostname, the port, the cluster ID, maybe the cluster name. The backend will use whatever is provided to find the host (see the sketch after the dry-run reply below). If multiple hosts are found with the given data the request will be rejected.

Here is what we get for a dry run:

{
"error_string": "Dry run was requested, not unregistering host.",
"request_created": "2017-09-08T09:42:49.137Z",
"request_id": 3,
"request_processed": "2017-09-08T09:42:49.183Z",
"request_status": "Ok",
"request_user_id": 3
}
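
As a sketch of the identification flexibility described above, the same host could also be referenced by cluster name and hostname only (the cluster name "mycluster" is a placeholder; the port is omitted, which RPC v1 allows):

{
"cluster_name": "mycluster",
"dry_run": true,
"host":
{
"class_name": "CmonHost",
"hostname": "192.168.1.127"
},
"operation": "unregisterHost"
}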

unregisterServers

This call is for dropping a container server from the scope of the controller. The server will not cease to exist, but the controller will forget everything about it.

{
"operation": "unregisterServers",
"request_created": "2017-09-01T08:42:22.756Z",
"request_id": 3,
"servers": [
{
"class_name": "CmonContainerServer",
"hostname": "core1"
} ]
}
  • servers: A list of CmonContainerServer class objects to unregister. In the objects only the class name and the host name have to be provided.
{
"messages": [ "Unregistered server 'core1'." ],
"request_created": "2017-09-01T08:42:22.756Z",
"request_id": 3,
"request_processed": "2017-09-01T08:42:22.810Z",
"request_status": "Ok",
"request_user_id": 3
}

createContainer

There is a job for this, and that one should be used instead. Maybe we should remove this RPC call, but it works perfectly...

{
"container":
{
"alias": "test_container1",
"class_name": "CmonContainer"
},
"operation": "createContainer",
"request_created": "2017-09-29T07:38:02.752Z",
"request_id": 3
}
{
"container":
{
"acl": "user::rwx,group::rw-,other::---",
"alias": "test_container1",
"cdt_path": "/servers/group1/core1/containers",
"class_name": "CmonContainer",
"container_id": 11,
"hostname": "192.168.1.212",
"ip": "192.168.1.212",
"ipv4_addresses": [ "192.168.1.212" ],
"owner_group_id": 4,
"owner_user_id": 3,
"parent_server": "core1",
"status": "RUNNING",
"template": "ubuntu",
"type": "lxc",
"version": 2
},
"messages": [ "Chose server 'core1' to hold the container.", "Created account 'pipas' on the container.", "The container gained '192.168.1.212' IPv4 address." ],
"request_created": "2017-09-29T07:38:02.752Z",
"request_id": 3,
"request_processed": "2017-09-29T07:38:10.633Z",
"request_status": "Ok",
"request_user_id": 3
}

getContainers

Returns the list of containers known by the controller.

{
"operation": "getContainers",
"request_created": "2017-09-07T05:09:16.895Z",
"request_id": 3
}
{
"containers": [
{
"alias": "debian",
"class_name": "CmonContainer",
"hostname": "debian",
"owner_group_id": 4,
"owner_group_name": "testgroup",
"owner_user_id": 3,
"owner_user_name": "pipas",
"parent_server": "storage01",
"status": "STOPPED",
"type": "lxc"
},
. . .
{
"alias": "vnc_server",
"class_name": "CmonContainer",
"hostname": "192.168.1.210",
"ip": "192.168.1.210",
"ipv4_addresses": [ "192.168.1.210" ],
"owner_group_id": 4,
"owner_group_name": "testgroup",
"owner_user_id": 3,
"owner_user_name": "pipas",
"parent_server": "storage01",
"status": "RUNNING",
"template": "ubuntu",
"type": "lxc"
} ],
"request_created": "2017-09-07T05:09:16.895Z",
"request_id": 3,
"request_processed": "2017-09-07T05:09:16.947Z",
"request_status": "Ok",
"request_user_id": 3,
"total": 5
}
  • containers: A list of CmonContainer objects. The following command can be used to print the properties of this class:
    s9s metatype --list-properties --type=CmonContainer --long
  • total: The total number of containers known by the controller.

getServers

Returns the container servers and their properties. Here is a request:

{
"operation": "getServers",
"request_created": "2017-10-03T08:19:58.769Z",
"request_id": 3
}

And here is the reply. It is rather complex, but we have a lot of data.

{
"request_created": "2017-10-03T08:19:58.769Z",
"request_id": 3,
"request_processed": "2017-10-03T08:19:58.824Z",
"request_status": "Ok",
"request_user_id": 3,
"servers": [
{
"cdt_path": "/",
"class_name": "CmonContainerServer",
"clusterid": 0,
"connected": false,
"containers": [
{
"acl": "user::rwx,user:nobody:r--,group::rw-,other::---",
"alias": "bestw_controller",
"cdt_path": "/core1/containers",
"class_name": "CmonContainer",
"container_id": 1,
"hostname": "192.168.1.182",
"ip": "192.168.1.182",
"ipv4_addresses": [ "192.168.1.182" ],
"owner_group_id": 2,
"owner_group_name": "users",
"owner_user_id": 3,
"owner_user_name": "pipas",
"parent_server": "core1",
"status": "RUNNING",
"type": "lxc",
"version": 25
},
. . .
{
"acl": "user::rwx,group::rw-,other::---",
"alias": "ubuntu",
"cdt_path": "/core1/containers",
"class_name": "CmonContainer",
"container_id": 5,
"hostname": "ubuntu",
"owner_group_id": 2,
"owner_group_name": "users",
"owner_user_id": 3,
"owner_user_name": "pipas",
"parent_server": "core1",
"status": "STOPPED",
"type": "lxc",
"version": 25
} ],
"disk_devices": [
{
"class_name": "CmonDiskDevice",
"device": "/dev/mapper/core1--vg-root",
"filesystem": "ext4",
"free_mb": 141617,
"mountpoint": "/",
"total_mb": 166180
},
. . .
{
"device": "/dev/sdc",
"is_hardware_storage": false,
"model": "LSILOGIC Logical Volume",
"total_mb": 170230,
"volumes": [
{
"description": "Linux filesystem partition",
"device": "/dev/sdc1",
"filesystem": "ext2",
"free_mb": 295,
"mount_point": "/boot",
"total_mb": 487
},
{
"description": "Extended partition",
"device": "/dev/sdc2",
"filesystem": "",
"mount_point": "",
"total_mb": 169741,
"volumes": [
{
"description": "Linux LVM Physical Volume partition",
"device": "/dev/sdc5",
"filesystem": "",
"mount_point": "",
"total_mb": 169741
} ]
} ]
},
. . .
{
"device": "/dev/sda",
"is_hardware_storage": false,
"model": "SCSI Disk",
"total_mb": 15272.1,
"volumes": [
{
"description": "EXT4 volume",
"device": "/dev/sda1",
"filesystem": "ext4",
"mount_point": "",
"total_mb": 15271.1
} ]
} ],
"distribution":
{
"codename": "xenial",
"name": "ubuntu",
"release": "16.04",
"type": "debian"
},
"hostId": 1,
"hostname": "core1",
"hoststatus": "CmonHostOffLine",
"ip": "192.168.1.4",
"last_container_collect": 1506791691,
"last_hw_collect": 1507017024,
"lastseen": 1506791691,
"maintenance_mode_active": false,
"memory":
{
"banks": [
{
"bank": "0",
"name": "DIMM 800 MHz (1.2 ns)",
"size": 4294967296
},
. . .
{
"bank": "7",
"name": "DIMM 800 MHz (1.2 ns)",
"size": 4294967296
} ],
"class_name": "CmonMemoryInfo",
"memory_available_mb": 54091,
"memory_free_mb": 41919,
"memory_total_mb": 64421,
"swap_free_mb": 0,
"swap_total_mb": 0
},
"message": "SSH connection failed.",
"model": "SUN FIRE X4170 SERVER (4583256-1)",
"network_interfaces": [
{
"gbits": 1,
"link": true,
"mac": "00:21:28:76:06:2a",
"model": "82575EB Gigabit Network Connection",
"name": "enp1s0f0"
},
. . .
{
"gbits": 0,
"ip": "192.168.122.1",
"mac": "",
"model": "",
"name": "virbr0"
} ],
"owner_group_id": 2,
"owner_group_name": "users",
"owner_user_id": 3,
"owner_user_name": "pipas",
"processors": [
{
"cores": 4,
"cpu_max_ghz": 1.6,
"id": 5,
"model": "Intel(R) Xeon(R) CPU L5520 @ 2.27GHz",
"siblings": 8,
"vendor": "Intel Corp."
},
{
"cores": 4,
"cpu_max_ghz": 1.6,
"id": 9,
"model": "Intel(R) Xeon(R) CPU L5520 @ 2.27GHz",
"siblings": 8,
"vendor": "Intel Corp."
} ],
"timestamp": 1507017024,
"unique_id": 1,
"version": "2.17"
},
. . .
} ],
"total": 8
}