The RPC Reply
Here is an example that shows an RPC reply that notifies the caller about a failure:
{
"cc_timestamp": 1461225902,
"errorString": "Cluster 323 is not running.",
"requestStatus": "ClusterNotFound"
}
The "requestStatus" field describes the nature of the failure and can currently hold the following values:
ok
The request was processed successfully.
InvalidRequest
Invalid value or missing field in the request (e.g. the request contains a job ID whose value is -1).
ObjectNotFound
Object not found (e.g. the request contains a job ID and the job with that ID was not found).
TryAgain
Currently not available, try again later. This happens for example when the request arrives before the controller is fully started.
ClusterNotFound
The cluster was not found or is not running. This only happens if the cluster ID is not 0 and the cluster object is not found.
AccessDenied
Insufficient rights to perform the requested operation.
Redirect
Redirect in the Cmon HA subsystem. This happens when the request is sent to a controller that is currently only a follower and so cannot perform the requested operation.
When this error is returned the "controllers" field will contain the list of the controllers where the client can retry the request. Please check the documentation of RPC v2 for an example.
UnknownError
A generic error code; we are not proud of this and it should not be used at all.
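A client typically branches on these status values; here is a minimal Python sketch (the function name, the retryable set, and the dispatch policy are illustrative, not part of the API):

```python
import json

# Statuses worth retrying rather than failing hard (illustrative choice).
RETRYABLE = {"TryAgain", "Redirect"}

def classify_reply(reply_text):
    """Return ("ok", reply), ("retry", controllers) or ("error", message)."""
    reply = json.loads(reply_text)
    status = reply.get("requestStatus")
    if status == "ok":
        return ("ok", reply)
    if status in RETRYABLE:
        # On "Redirect" the "controllers" field lists where to retry.
        return ("retry", reply.get("controllers", []))
    return ("error", reply.get("errorString", "unknown failure"))

# The failure reply from the example above:
kind, detail = classify_reply(
    '{"cc_timestamp": 1461225902,'
    ' "errorString": "Cluster 323 is not running.",'
    ' "requestStatus": "ClusterNotFound"}')
print(kind, detail)  # error Cluster 323 is not running.
```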
Here is another example that shows a request that was executed successfully:
{
"cc_timestamp": 1461225902,
"data": [
{
"contents": "# this is some comment\n# and an another one\n\n# here is a value without section\nnosection=true\n\n[section1]\nkey1\t=\t\"value1\"\n\n# and here is another section:\n[section2]\ninteger=20140719\n\n",
"crc": "00000000",
"filename": "myconfig1.cfg",
"hasChange": false,
"hostId": 6,
"hostname": "myhost1",
"path": "/my/path/myconfig1.cfg",
"size": 183,
"timestamp": 1461225902
} ],
"requestStatus": "ok",
"total": 1
}
The host discovery API
/0/discovery
Description: When creating a cluster or adding a node, the frontend may want to fetch some host-related details; this API provides basic host information.
checkHosts
Description: checks whether a host can be reached over SSH and whether the controller can gain superuser privileges. If both checks pass, the method also returns network, memory, CPU, storage, and OS information about the host.
Arguments:
- ssh_user: the username used for SSH login
- ssh_keyfile: the absolute path of the SSH private keyfile on controller
- ssh_password: the SSH password; may be used instead of a keyfile
- ssh_port: SSH port (defaults to 22 if not specified or <= 0)
- sudo_password: the sudo password, if one is required
- hosts: comma separated list of hostname:port combinations (:port is optional)
Note: If you send this request to an existing cluster (/$CLUSTERID/discovery), you do not need to pass the credentials (ssh* and sudo* options); the call will take them from the existing cluster's configuration.
Important fields in the returned data:
- status/available: whether the host is available for addition (i.e. hostname:port is not already part of an existing cluster)
- status/reachable: whether the host is accessible through SSH and cmon can gain superuser privileges (using sudo for non-root users)
- status/message: human readable message about host status
- status/message_advice: in case of failure some advice about how to fix the issue
- status/message_technical: the technical details about the failure
$ curl -XPOST -d '{"operation":"checkhosts","token": "5be993bd3317aba6a24cc52d2a39e7636d35d55d", "hosts": "192.168.0.100:3306", "ssh_user":"kedz","ssh_keyfile": "/home/kedz/.ssh/id_rsa"}' 'http://localhost:9500/0/discovery'
{
"cc_timestamp": 1486382761,
"data": [
{
"cpu_info":
{
"cores": 8,
"mhz": 4396.73
},
"hostname": "192.168.0.100",
"memory":
{
"free_mb": 15047,
"total_mb": 24030
},
"network_interfaces":
{
"eth0": "192.168.0.100",
"vboxnet0": "192.168.33.1",
"vboxnet0:1": "192.168.44.1",
"virbr0": "192.168.122.1"
},
"os_version":
{
"distribution/codename": "yakkety",
"distribution/name": "ubuntu",
"distribution/release": "16.10",
"distribution/type": "debian"
},
"port": 3306,
"status":
{
"available": true,
"message": "Host is reachable.",
"message_advice": "",
"message_technical": "Host is reachable on SSH and can gain superuser privileges.",
"reachable": true
},
"storage_info": [
{
"filesystem": "ext4",
"free_mb": 93229,
"mountpoint": "/",
"partition": "/dev/sda2",
"total_mb": 231833
},
{
"filesystem": "ext4",
"free_mb": 47046,
"mountpoint": "/mnt/ssd128g",
"partition": "/dev/sdd2",
"total_mb": 111536
} ]
} ],
"requestStatus": "ok",
"total": 1
}
And a failure example, with advice / technical info:
$ curl -XPOST -d '{"operation":"checkhosts","token": "5be993bd3317aba6a24cc52d2a39e7636d35d55d", "hosts": "5.5.5.5", "ssh_user":"kedz","ssh_keyfile": "/home/kedz/.ssh/id_rsa"}' 'http://localhost:9500/0/discovery'
{
"cc_timestamp": 1486382809,
"data": [
{
"hostname": "5.5.5.5",
"port": -1,
"status":
{
"available": false,
"message": "SSH connection failed.",
"message_advice": "Check hostname and verify network/firewall settings.",
"message_technical": "libssh connect error: Timeout connecting to 5.5.5.5",
"reachable": false
}
} ],
"requestStatus": "ok",
"total": 1
}
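The status fields of a checkhosts reply can be summarized in client code; a minimal Python sketch using only the standard library (the function name and the trimmed sample reply are illustrative):

```python
import json

def summarize_discovery(reply_text):
    """Collect the per-host availability flags from a checkhosts reply."""
    reply = json.loads(reply_text)
    summary = {}
    for host in reply.get("data", []):
        status = host.get("status", {})
        summary[host["hostname"]] = {
            "available": status.get("available", False),
            "reachable": status.get("reachable", False),
            "message": status.get("message", ""),
        }
    return summary

# A trimmed version of the failure reply shown above:
sample = json.dumps({
    "requestStatus": "ok",
    "data": [{
        "hostname": "5.5.5.5",
        "port": -1,
        "status": {
            "available": False,
            "reachable": False,
            "message": "SSH connection failed.",
            "message_advice": "Check hostname and verify network/firewall settings.",
        },
    }],
    "total": 1,
})
print(summarize_discovery(sample))
```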
The Backup API
/$CLUSTERID/backup
listbackups
Description: lists the created backups for the current cluster.
Two formats are supported. By default format version 1, the original one, is returned; it contains the backup metadata only.
When the value of backup_record_version is 2, the old content is placed inside a property named metadata, and new properties are added holding the locations of the backup copies. A backup can be available on certain hosts (the controller and others) and also in the cloud, e.g. in AWS S3 buckets.
Parameters:
- backup_record_version: (default 1) the backup format version (1, 2)
- limit: the max. number of backup records to be returned
- offset: skip this many backup records (for paging)
- ascending: whether to return backups in ascending order (default: false)
- parent_id: -1: list all backups, 0: only root backups, >0 incrementals of a specific backup (default -1, list all)
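The limit/offset parameters support straightforward paging; a short Python sketch that only builds the request payloads (the helper name and page sizes are illustrative, the token is the placeholder from the examples):

```python
def listbackups_pages(token, page_size, total_wanted):
    """Yield listbackups payloads that walk the records page by page."""
    offset = 0
    while offset < total_wanted:
        yield {
            "operation": "listbackups",
            "token": token,
            "limit": page_size,   # max records per reply
            "offset": offset,     # skip the records already fetched
            "parent_id": -1,      # -1: list all backups
        }
        offset += page_size

pages = list(listbackups_pages("RB81tydD0exsWsaM", 10, 25))
print([p["offset"] for p in pages])  # [0, 10, 20]
```

Each payload would be POSTed to /$CLUSTERID/backup, stopping early once a reply returns fewer than limit records.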
Example of backup format version 1 (the default):
$ curl -XPOST -d '{"operation": "listbackups", "token": "RB81tydD0exsWsaM"}' http://localhost:9500/101/backup
{
"cc_timestamp": 1477063671,
"data": [
{
"backup": [
{
"db": "mysql",
"files": [
{
"class_name": "CmonBackupFile",
"created": "2016-10-21T15:26:40.000Z",
"hash": "md5:c7f4b2b80ea439ae5aaa28a0f3c213cb",
"path": "mysqldump_2016-10-21_172640_mysqldb.sql.gz",
"size": 161305,
"type": "data,schema"
} ],
"start_time": "2016-10-21T15:26:41.000Z"
} ],
"backup_host": "192.168.33.125",
"cid": 101,
"class_name": "CmonBackupRecord",
"config":
{
"backupDir": "/tmp",
"backupHost": "192.168.33.125",
"backupMethod": "mysqldump",
"backupToIndividualFiles": false,
"backup_failover": false,
"backup_failover_host": "",
"ccStorage": false,
"checkHost": false,
"compression": true,
"includeDatabases": "",
"netcat_port": 9999,
"origBackupDir": "/tmp",
"port": 3306,
"set_gtid_purged_off": true,
"throttle_rate_iops": 0,
"throttle_rate_netbw": 0,
"usePigz": false,
"wsrep_desync": false,
"xtrabackupParallellism": 1,
"xtrabackup_locks": false
},
"created": "2016-10-21T15:26:40.000Z",
"created_by": "",
"description": "",
"finished": "2016-10-21T15:26:41.000Z",
"id": 5,
"job_id": 2952,
"log_file": "",
"lsn": 140128879096992,
"method": "mysqldump",
"parent_id": 0,
"root_dir": "/tmp/BACKUP-5",
"status": "Completed",
"storage_host": "192.168.33.125"
},
{
"backup": [
{
"db": "",
"files": [
{
"class_name": "CmonBackupFile",
"created": "2016-10-21T15:21:50.000Z",
"hash": "md5:538196a9d645c34b63cec51d3e18cb47",
"path": "backup-full-2016-10-21_172148.xbstream.gz",
"size": 296000,
"type": "full"
} ],
"start_time": "2016-10-21T15:21:50.000Z"
} ],
"backup_host": "192.168.33.125",
"cid": 101,
"class_name": "CmonBackupRecord",
"config":
{
"backupDir": "/tmp",
"backupHost": "192.168.33.125",
"backupMethod": "xtrabackupfull",
"backupToIndividualFiles": false,
"backup_failover": false,
"backup_failover_host": "",
"ccStorage": false,
"checkHost": false,
"compression": true,
"includeDatabases": "",
"netcat_port": 9999,
"origBackupDir": "/tmp",
"port": 3306,
"set_gtid_purged_off": true,
"throttle_rate_iops": 0,
"throttle_rate_netbw": 0,
"usePigz": false,
"wsrep_desync": false,
"xtrabackupParallellism": 1,
"xtrabackup_locks": true
},
"created": "2016-10-21T15:21:47.000Z",
"created_by": "",
"description": "",
"finished": "2016-10-21T15:21:50.000Z",
"id": 4,
"job_id": 2951,
"log_file": "",
"lsn": 1627039,
"method": "xtrabackupfull",
"parent_id": 0,
"root_dir": "/tmp/BACKUP-4",
"status": "Completed",
"storage_host": "192.168.33.125"
} ],
"requestStatus": "ok",
"total": 2
}
Example of backup format version 2 (with information about backup locations):
echo '{"token": "K8VdzG2vG81ik0zo", "operation": "listbackups", "backup_record_version": "2"}' | curl -sX POST -H"Content-Type: application/json" -d @- http://192.168.30.4:9500/47/backup
[
{
"metadata": {
"root_dir": "/mongo-backups/BACKUP-106",
"class_name": "CmonBackupRecord",
"schedule_id": 0,
"id": 106,
"verified": {
"status": "Unverified",
"message": "",
"verified_time": "1969-12-31T23:59:59.000Z"
},
"job_id": 1683,
"use_for_pitr": true,
"created_by": "",
"chain_up": 0,
"parent_id": 0,
"config": {},
"method": "mongodump",
"status": "Completed",
"backup_host": "",
"description": "",
"lsn": 0,
"finished": "2017-09-14T20:23:37.934Z",
"compressed": true,
"cid": 47,
"backup": [
{
"files": [
{
"hash": "md5:2889b115cc388f6d2535dcca24e78378",
"created": "2017-09-14T20:23:36.000Z",
"class_name": "CmonBackupFile",
"path": "replica_set_0.gz",
"type": "mongodump-gz",
"size": 1022
}
],
"start_time": "2017-09-14T20:23:37.000Z",
"db": "$replica_set_0"
}
],
"created": "2017-09-14T20:21:02.000Z",
"storage_host": "192.168.30.70",
"log_file": "",
"total_datadir_size": 24566
},
"cloud_locations": [
{
"provider": "aws",
"bucket_and_path": "s9s-acceptance-test-bucket",
"cloud_location_uuid": "23dae2cb-09c7-4d8c-913b-4617608d3da0",
"retention": 400,
"credentials_id": 0,
"created_time": "2017-09-15T13:38:47.000Z",
"finished_time": "2017-09-15T13:38:52.000Z"
}
],
"version": 2,
"host_locations": [
{
"storage_host": "127.0.0.1",
"root_dir": "/home/backups",
"created_time": "2017-09-15T13:38:40.000Z",
"finished_time": "2017-09-15T13:38:42.000Z",
"host_location_uuid": "32f4f814-777a-468c-8b5b-37ca0605946c"
}
]
}
]
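A version-2 record can thus be checked for every place a backup copy lives; a minimal Python sketch over a trimmed record (the function name is illustrative, not part of the API):

```python
def backup_locations(record):
    """List (kind, where, detail) tuples for a backup_record_version 2 record."""
    locations = []
    for host in record.get("host_locations", []):
        locations.append(("host", host["storage_host"], host["root_dir"]))
    for cloud in record.get("cloud_locations", []):
        locations.append((cloud["provider"], cloud["bucket_and_path"],
                          cloud["cloud_location_uuid"]))
    return locations

# Trimmed from the example reply above:
record = {
    "version": 2,
    "metadata": {"id": 106, "method": "mongodump"},
    "host_locations": [{"storage_host": "127.0.0.1",
                        "root_dir": "/home/backups"}],
    "cloud_locations": [{"provider": "aws",
                         "bucket_and_path": "s9s-acceptance-test-bucket",
                         "cloud_location_uuid": "23dae2cb-09c7-4d8c-913b-4617608d3da0"}],
}
print(backup_locations(record))
```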
deletebackup
Description: Deletes a backup record and all associated backup files. (DEPRECATED)
Please use the delete backup job instead.
$ curl -XPOST -d '{"operation": "deletebackup", "id": 3, "token": "RB81tydD0exsWsaM"}' http://localhost:9500/101/backup
{
"cc_timestamp": 1477062125,
"errorString": "Backup 3 not exists.",
"requestStatus": "UnknownError"
}
$ curl -XPOST -d '{"operation": "deletebackup", "id": 2, "token": "RB81tydD0exsWsaM"}' http://localhost:9500/101/backup
{
"cc_timestamp": 1477062130,
"requestStatus": "ok"
}
listschedules
Description: lists the created backup schedules for the current cluster.
$ curl -XPOST -d '{"operation": "listschedules", "token": "5xCzdArurlwtnEG2"}' http://localhost:9500/78/backup
{
"cc_timestamp": 1473433701,
"data": [
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/mnt/dbbackup01",
"cc_storage": "0",
"hostname": "192.168.134.7",
"port": "3306",
"wsrep_desync": false
}
},
"enabled": true,
"id": 24,
"lastExec": "2016-09-07T23:00:06.000Z",
"schedule": "0 1 * * 5"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/mnt/dbbackup01",
"cc_storage": "0",
"hostname": "192.168.134.7",
"port": "3306",
"wsrep_desync": false
}
},
"enabled": true,
"id": 25,
"lastExec": "2016-02-28T04:13:30.000Z",
"schedule": "0 1 * * 1"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/mnt/dbbackup01/ROMOTO",
"cc_storage": 0,
"hostname": "192.168.134.7:3306"
}
},
"enabled": true,
"id": 26,
"lastExec": "2016-02-28T23:02:39.000Z",
"schedule": "30 0 * * 2"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/mnt/dbbackup01/ROMOTO",
"cc_storage": 0,
"hostname": "192.168.134.7:3306"
}
},
"enabled": true,
"id": 27,
"lastExec": "2016-02-29T23:30:27.000Z",
"schedule": "30 0 * * 3"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/mnt/dbbackup01/ROMOTO",
"cc_storage": 0,
"hostname": "192.168.134.7:3306"
}
},
"enabled": true,
"id": 28,
"lastExec": "2016-03-01T23:30:26.000Z",
"schedule": "30 0 * * 4"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/mnt/dbbackup01/ROMOTO",
"cc_storage": 0,
"hostname": "192.168.134.7:3306"
}
},
"enabled": true,
"id": 29,
"lastExec": "2016-09-07T22:30:10.000Z",
"schedule": "30 0 * * 5"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/tmp/okokok",
"cc_storage": 1,
"hostname": "192.168.33.121",
"netcat_port": "9999",
"port": "3306",
"wsrep_desync": false
}
},
"enabled": true,
"id": 37,
"lastExec": "2016-09-08T22:00:04.000Z",
"schedule": "0 0 * * 6"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/tmp/okokok",
"cc_storage": 1,
"hostname": "192.168.33.121",
"netcat_port": "9999",
"port": "3306",
"wsrep_desync": false
}
},
"enabled": true,
"id": 38,
"lastExec": "1970-01-01T00:00:00.000Z",
"schedule": "0 0 * * 7"
},
{
"command":
{
"command": "backup",
"job_data":
{
"backup_failover": "no",
"backup_method": "mysqldump",
"backupdir": "/tmp/okokok",
"cc_storage": 1,
"hostname": "192.168.33.121",
"netcat_port": "9999",
"port": "3306",
"wsrep_desync": false
}
},
"enabled": true,
"id": 39,
"lastExec": "1970-01-01T00:00:00.000Z",
"schedule": "0 0 * * 1"
} ],
"requestStatus": "ok",
"total": 16
}
schedule
Deprecated. Internally it is converted to a scheduled job instance; please see scheduleJobInstance. Description: creates a backup schedule. Arguments:
- schedule: cron-like schedule string (m h dom mon dow); it may contain a TZ prefix (either a short name like CET/CEST, or an hour/minute offset), e.g.: TZ='-5:30' 18 21 * * *, TZ='CET' 1 0 1 1 *
- job: the backup JSon job string (or object)
For job see Creating a Backup for backup job reference.
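The schedule string with its optional TZ prefix can be split in client code before display or validation; a small Python sketch (the regex-based helper is illustrative, not part of the API):

```python
import re

def parse_schedule(schedule):
    """Split an optional TZ='...' prefix from the five cron fields."""
    match = re.match(r"^(?:TZ='([^']*)'\s+)?(\S+\s+\S+\s+\S+\s+\S+\s+\S+)$",
                     schedule.strip())
    if not match:
        raise ValueError("not a valid schedule string: %r" % schedule)
    tz, cron = match.groups()
    minute, hour, dom, mon, dow = cron.split()
    return {"tz": tz, "minute": minute, "hour": hour,
            "day_of_month": dom, "month": mon, "day_of_week": dow}

print(parse_schedule("TZ='-5:30' 18 21 * * *"))
print(parse_schedule("0 15 * * 3"))  # tz is None when no prefix is given
```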
An example request & reply:
curl -XPOST -d '{"operation": "schedule", "token":"5xCzdArurlwtnEG2", "schedule" : "0 15 * * 3","job":{"command": "backup", "job_data": {"backup_method": "mysqldump", "backupdir": "/dbbackup01", "cc_storage": "0", "hostname": "192.168.33.121"}} }' http://localhost:9500/78/backup
{
"cc_timestamp": 1473773254,
"requestStatus": "ok",
"schedule":
{
"enabled": true,
"id": 44,
"job":
{
"command": "backup",
"job_data":
{
"backup_method": "mysqldump",
"backupdir": "/dbbackup01",
"cc_storage": "0",
"hostname": "192.168.33.121"
}
},
"lastExec": "1969-12-31T23:59:59.000Z",
"schedule": "0 15 * * 3"
}
}
deleteschedule
Description: removes a backup schedule. Arguments:
- id: the schedule id
An example request & reply:
curl -XPOST -d '{"operation": "deleteschedule", "token": "5xCzdArurlwtnEG2", "id": 55}' http://localhost:9500/78/backup
{
"cc_timestamp": 1473773753,
"requestStatus": "ok"
}
updateschedule
Description: Updates an existing backup schedule. Arguments:
- id: the schedule id
- schedule: the new cron line
- enabled: whether the schedule is enabled/disabled
- job: (optional) the new backup JSon job string (or object)
An example request & reply:
$ curl -XPOST -d '{"operation": "updateschedule", "token": "nV5SEdkZLheZxyrh", "id": 7, "enabled": false }' http://localhost:9500/120/backup
{
"cc_timestamp": 1478873968,
"requestStatus": "ok",
"schedule":
{
"class_name": "CmonBackupSchedule",
"enabled": false,
"id": 7,
"job":
{
"command": "backup",
"job_data":
{
"backup_method": "mysqldump",
"backupdir": "/tmp",
"cc_storage": "0",
"hostname": "192.168.33.121",
"port": 3306,
"wsrep_desync": false
},
"scheduleId": 7
},
"lastExec": "1970-01-01T00:00:00.000Z",
"schedule": "20 15 * * *"
}
}
The Certificate Authority API
/0/ca
create
Description: creates a certificate request, signs it with a CA (or self-signs it), and stores the generated key + certificate data. Arguments:
- type: "ca", "server" or "client"
- name: the desired name with full path (without extensions) (e.g. group1/servers/host55)
- validity: the certificate validity in days (defaults to 365 if not specified)
- issuerId: the CA certificate to be used for signing; if not specified, the certificate will be self-signed
- user_id: the requester user ID (for accounting)
- data: the certificate parameters
NOTE: the specified name (which can contain directory paths like some/dir/mycert) is a relative path to the cmon's CA directory which is /var/lib/cmon/ca by default.
Certificate parameters ("data"); the following keys are supported:
- keybits: the key size for the RSA key generation (1024, 2048, 4096)
- CN: the certificate common-name
- subjectAltName: a JSon list of possible domain names, IP addresses
- C: the country name ISO code
- L: locality name
- ST: state or province name
- O: organization name
- OU: organization unit name
- title: title
- GN: given name
- SN: surname
- description: description field of the certificate
- emailAddress: e-mail address field
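These parameters end up in the "data" object of the create request; a small Python sketch that assembles such a payload (the helper name and the keybits default are illustrative; token/authentication fields are omitted):

```python
import json

def ca_create_payload(name, validity_days=365, user_id=None, **subject):
    """Assemble a "create" request body for a self-signed CA certificate."""
    payload = {
        "operation": "create",
        "type": "ca",
        "name": name,                 # relative to cmon's CA directory
        "validity": validity_days,
        "data": dict({"keybits": 2048}, **subject),
    }
    if user_id is not None:
        payload["user_id"] = user_id  # requester, for accounting
    return payload

payload = ca_create_payload("CoolCA", 3650, user_id=100,
                            description="My cool CA certificate.")
body = json.dumps(payload)  # what would be POSTed to /0/ca
print(body)
```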
Example request for CA certificate and key generation:
curl -X POST -d '{"operation": "create", "user_id": 100, "type": "ca", "name": "CoolCA", "validity": 3650, "data": { "keybits": 2048, "description": "My cool CA certificate.", "emailAddress": "[email protected]" } }' http://localhost:9500/0/ca
{
"data":
{
"certfile": "CoolCA.crt",
"id": 1,
"isCA": true,
"isClient": false,
"isServer": false,
"issuerId": 0,
"keybits": 2048,
"keyfile": "CoolCA.key",
"requesterUserId": 100,
"serialNumber": 1,
"status": "Issued",
"subjectName":
{
"basicConstraints": "ca",
"description": "My cool CA certificate.",
"extendedKeyUsage": [ "OCSPSigning" ],
"keyUsage": [ "DigitalSignature", "KeyCertificateSign", "CRLSign" ],
"serialNumber": "1"
},
"validFrom": 1449579031,
"validUntil": 1765025431
},
"total": 1
}
An example request using the previous CA key to sign a server certificate:
curl -X POST -d '{"operation": "create", "user_id": 100, "type": "server", "name": "coolca_servers/server01", "issuerId": 1, "validity": 1000, "data": { "keybits": 2048, "CN": "coolca.server.tld", "subjectAltName": [ "192.168.33.1", "fancy.domain.name" ] } }' http:
An excerpt of the reply:
"certfile": "coolca_servers/server01.crt",
"keyfile": "coolca_servers/server01.key",
"requesterUserId": 100,
"CN": "coolca.server.tld",
"extendedKeyUsage": [ "ClientAuth", "ServerAuth" ],
"keyUsage": [ "DigitalSignature", "NonRepudiation", "KeyEncipherment", "KeyAgreement" ],
"subjectAltName": [ "IP Address:192.168.33.1", "DNS:fancy.domain.name" ]
"validFrom": 1449579372,
"validUntil": 1536069372
listcerts
Description: lists the available certificates on the system
curl -X POST -d '{"operation": "listcerts"}' http://localhost:9500/0/ca
# NOTE: the certificates and keys are actually stored in the filesystem:
sudo find /var/lib/cmon/ca
/var/lib/cmon/ca/CoolCA.crt
/var/lib/cmon/ca/CoolCA.key
/var/lib/cmon/ca/coolca_servers
/var/lib/cmon/ca/coolca_servers/server01.crt
/var/lib/cmon/ca/coolca_servers/server01.key
{
"data": [
{
"certfile": "CoolCA.crt",
"id": 1,
"isCA": true,
"isClient": false,
"isServer": false,
"issuerId": 0,
"keybits": 2048,
"keyfile": "CoolCA.key",
"requesterUserId": 100,
"serialNumber": 1,
"status": "Issued",
"subjectName":
{
"basicConstraints": "ca",
"description": "My cool CA certificate.",
"extendedKeyUsage": [ "OCSPSigning" ],
"keyUsage": [ "DigitalSignature", "KeyCertificateSign", "CRLSign" ],
"serialNumber": "1"
},
"validFrom": 1449579031,
"validUntil": 1765025431
},
{
"certfile": "galera/cluster_11/galera_rep.crt",
"id": 7,
"inUseBy":
{
"clusters": [
{
"id": 16,
"name": "cluster_16"
} ],
"hosts": [ "192.168.33.123:3306", "192.168.33.122:3306" ]
},
"isCA": true,
"isClient": true,
"isServer": true,
"issued": 1457969123,
"issuerId": 0,
"keybits": 2048,
"keyfile": "galera/cluster_11/galera_rep.key",
"serialNumber": 7,
"status": "Issued",
"subjectName":
{
"C": "SE",
"CN": "Galera_Replication_Link_Auto_Generated_Certificate",
"L": "Stockholm",
"O": "Severalnines AB",
"ST": "ST",
"basicConstraints": "ca",
"extendedKeyUsage": [ "ClientAuth", "ServerAuth", "OCSPSigning" ],
"keyUsage": [ "DigitalSignature", "NonRepudiation", "KeyEncipherment", "KeyAgreement", "KeyCertificateSign", "CRLSign" ],
"serialNumber": "7"
},
"validFrom": 1457879123,
"validUntil": 1773325523
} ],
"total": 2
}
certinfo
Description: gets the certificate information by certificate id. Arguments:
- id: the certificate id
NOTE: It is also possible to get the certificate info with a GET request; the path must then contain the certificate id, as in the following example:
curl http://localhost:9500/0/ca/1
{
"data":
{
"certfile": "CoolCA.crt",
"id": 1,
"isCA": true,
"isClient": false,
"isServer": false,
"issuerId": 0,
"keybits": 2048,
"keyfile": "CoolCA.key",
"requesterUserId": 100,
"serialNumber": 1,
"status": "Issued",
"subjectName":
{
"basicConstraints": "ca",
"description": "My cool CA certificate.",
"extendedKeyUsage": [ "OCSPSigning" ],
"keyUsage": [ "DigitalSignature", "KeyCertificateSign", "CRLSign" ],
"serialNumber": "1"
},
"validFrom": 1449579031,
"validUntil": 1765025431
},
"total": 1
}
importlocal
Description: imports a local certificate + key pair, together with its CA. NOTE: the CA cert file path and name are not required for self-signed certificates.
If a certificate (or CA) is already imported, a duplicate will not be created in our CA database.
Arguments:
- cert_file: the certificate file path on the controller
- key_file: the private key file path on the controller
- ca_file: the CA certificate path on the controller
- name: the used path (including name) for the certificate in our CA storage
- name_ca: the CA path/name in our CA storage, if not specified the 'name' will be used with '_ca' suffix
The return value will contain the certificate details; additionally 'wasCaKnown' will be set to 'true' when the CA certificate was already present in the storage, so no CA certificate import happened.
curl -X POST -d '{"operation": "importlocal", "user_id": 100, "cert_file": "/var/lib/mycerts/001/certificate.crt", "key_file": "/var/lib/mycerts/001/private.key", "name": "imported_keys/mycertificate001" }' http://localhost:9500/0/ca
{
"cc_timestamp": 1463572534,
"data":
{
"certfile": "imported_keys/mycertificate001.crt",
"id": 7,
"inUseBy":
{
"clusters": [ ],
"hosts": [ ]
},
"isCA": true,
"isClient": true,
"isServer": true,
"issued": 1463572534,
"issuerId": 0,
"keybits": 2048,
"keyfile": "imported_keys/mycertificate001.key",
"serialNumber": 17912915768810270261,
"status": "Issued",
"subjectName":
{
"C": "US",
"CN": "*.test0123.tld",
"L": "New Haven",
"ST": "Connecticut",
"basicConstraints": "ca",
"serialNumber": "17912915768810270261"
},
"validFrom": 1463572199,
"validUntil": 1495108199
},
"requestStatus": "ok",
"total": 1
}
revoke
Description: sets the certificate status to 'Revoked'. Arguments:
- id: the internal identifier of the certificate
- user_id: the requester user ID (for accounting)
An example request:
curl -X POST -d '{"operation": "revoke", "user_id": 205, "id": 4}' http://localhost:9500/0/ca
{
"data":
{
"certfile": "coolca_servers/server02.crt",
"id": 4,
"isCA": false,
"isClient": true,
"isServer": true,
"issued": 1450195314,
"issuerId": 1,
"keybits": 2048,
"keyfile": "coolca_servers/server02.key",
"requesterUserId": 100,
"revoked": 1450195957,
"revokerUserId": 205,
"serialNumber": 4,
"status": "Revoked",
"subjectName":
{
"CN": "coolca.server.tld",
"extendedKeyUsage": [ "ClientAuth", "ServerAuth" ],
"keyUsage": [ "DigitalSignature", "NonRepudiation", "KeyEncipherment", "KeyAgreement" ],
"serialNumber": "4",
"subjectAltName": [ "IP Address:192.168.33.1", "DNS:2001:470:1f1a:2c2::43" ]
},
"validFrom": 1450105313,
"validUntil": 1536595313
},
"total": 1
}
delete
Description: Deletes a certificate (please note this is an irreversible operation: it removes the certificate file and private key). Arguments:
- id: the internal identifier of the certificate
- user_id: the requester user ID (for accounting)
An example request:
curl -X POST -d '{"operation": "delete", "user_id": 205, "id": 4}' http://localhost:9500/0/ca
{
"data": {},
"total": 1
}
move
Description: moves/renames a certificate (+ private key). Arguments:
- id: the id of the certificate
- name: the new path+name
An example request/reply:
curl -X POST -d '{"operation": "certinfo", "id": 4}' http://localhost:9500/0/ca | egrep '(keyfile|certfile)'
"certfile": "postgresql_single/cluster_3/server.crt",
"keyfile": "postgresql_single/cluster_3/server.key",
curl -X POST -d '{"operation": "move", "id": 4, "name": "new_path/pgsqlserver"}' http://localhost:9500/0/ca
{
"cc_timestamp": 1457442688,
"data":
{
"certfile": "new_path/pgsqlserver.crt",
"id": 4,
"isCA": false,
"isClient": false,
"isServer": true,
"issued": 1455111265,
"issuerId": 3,
"keybits": 2048,
"keyfile": "new_path/pgsqlserver.key",
"serialNumber": 4,
"status": "Issued",
"subjectName":
{
"CN": "PostgreSQL_Server_Cmon_Auto_Generated_Server_Certificate",
"description": "Generated by ClusterControl",
"extendedKeyUsage": [ "ServerAuth" ],
"keyUsage": [ "DigitalSignature", "KeyEncipherment", "KeyAgreement" ],
"serialNumber": "4"
},
"validFrom": 1455021265,
"validUntil": 1800707665
},
"requestStatus": "ok",
"total": 1
}
crl
Description: writes out a CRL (Certificate Revocation List) next to the CA certificate. Arguments:
- id: the CA certificate id (which signed the certificates and shall sign the CRL)
Example request/reply
$ curl -X POST -d '{"operation": "crl", "id": 1}' http://localhost:9500/0/ca
An excerpt of the reply:
"certfile": "CoolCA.crt",
"crlfile": "CoolCA.crl"
The generated CRL file:
Certificate Revocation List (CRL):
Signature Algorithm: sha256WithRSAEncryption
Last Update: Dec 15 16:18:46 2015 GMT
Next Update: Jan 14 16:18:46 2016 GMT
Revocation Date: Dec 15 16:12:37 2015 GMT
X509v3 CRL Reason Code:
Signature Algorithm: sha256WithRSAEncryption
3c:1e:cf:3a:83:5c:29:a7:02:29:c8:fe:98:89:91:d2:95:68:
2c:5c:12:f8:83:b0:b6:87:17:cc:a0:9d:27:46:e7:07:a2:ac:
fe:66:cd:d0:ce:c0:fc:8c:db:f2:c9:3e:52:05:68:4a:09:26:
02:4e:73:dd:ff:2d:c8:d6:de:64:dc:f3:3c:de:cc:3a:1f:7a:
db:3d:ac:18:b6:d7:c1:92:f8:10:0a:9c:db:85:a7:9c:5d:07:
c7:8e:ff:bf:ff:77:cb:5d:4c:20:e8:2d:9b:37:3b:3f:e1:66:
13:13:15:8d:c9:84:82:9d:aa:fc:b4:44:05:bc:50:94:49:39:
e9:8a:e7:62:19:b9:da:6e:5f:4f:7a:38:76:68:5d:01:3e:7d:
da:b7:bc:d3:20:d4:b1:69:41:c5:d1:f3:4b:63:f3:e8:18:89:
d3:70:9f:79:33:84:76:6b:33:bb:67:79:a8:fa:98:c8:2f:ec:
b2:bb:18:a2:c6:31:5d:e1:5c:d9:02:c3:d8:da:79:5c:27:30:
1c:5c:11:71:83:54:09:c3:75:60:24:a1:b9:69:57:71:e6:d6:
ef:dc:12:ae:d2:5b:14:57:68:34:35:aa:ec:fa:fe:3a:09:1c:
53:fa:29:ac:21:82:3f:bc:67:d1:44:da:72:97:ed:c0:14:2a:
-----BEGIN X509 CRL-----
MIIBtzCBoAIBATANBgkqhkiG9w0BAQsFADBKMSAwHgYDVQQNDBdNeSBjb29sIENB
IGNlcnRpZmljYXRlLjEmMCQGCSqGSIb3DQEJARYXa2VkYXpvQHNldmVyYWxuaW5l
cy5jb20XDTE1MTIxNTE2MTg0NloXDTE2MDExNDE2MTg0NlowIjAgAgEEFw0xNTEy
MTUxNjEyMzdaMAwwCgYDVR0VBAMKAQAwDQYJKoZIhvcNAQELBQADggEBADwezzqD
XCmnAinI/piJkdKVaCxcEviDsLaHF8ygnSdG5weirP5mzdDOwPyM2/LJPlIFaEoJ
JgJOc93/LcjW3mTc8zzezDofets9rBi218GS+BAKnNuFp5xdB8eO/7//d8tdTCDo
LZs3Oz/hZhMTFY3JhIKdqvy0RAW8UJRJOemK52IZudpuX096OHZoXQE+fdq3vNMg
1LFpQcXR80tj8+gYidNwn3kzhHZrM7tneaj6mMgv7LK7GKLGMV3hXNkCw9jaeVwn
MBxcEXGDVAnDdWAkoblpV3Hm1u/cEq7SWxRXaDQ1quz6/joJHFP6Kawhgj+8Z9FE
-----END X509 CRL-----
The operational reports API
/${CLUSTERID}/reports
listreports
Description: lists the available reports for the cluster
Arguments:
- limit: limit the number of returned items
- offset: for pagination
The 'total' field will have the total available reports in the DB.
curl -X POST -d '{"operation": "listreports" }' http://localhost:9500/1/reports
{
"cc_timestamp": 1448030788,
"data": [
{
"cid": 1,
"created": "2015-11-20T14:40:53+00:00",
"days": 7,
"generatedby": "<unknown-rpc-user>",
"id": 1,
"name": "default_2015-11-20_154052.html",
"path": "/home/kedz/s9s_tmp/1/galera/cmon-reports/default_2015-11-20_154052.html",
"recipients": "",
"timestamp": 1448030453,
"type": "default"
},
{
"cid": 1,
"created": "2015-11-20T14:44:06+00:00",
"days": 7,
"id": 2,
"name": "default_2015-11-20_154405.html",
"path": "/home/kedz/s9s_tmp/1/galera/cmon-reports/default_2015-11-20_154405.html",
"recipients": "",
"timestamp": 1448030646,
"type": "default"
} ],
"requestStatus": "ok",
"total": 2
}
GET request to fetch the report file
Description: the backend provides a way to get the report files using the following HTTP GET request: http://localhost:9500/1/reports/default_2015-11-20_154405.html
In case of protected RPC, you may need to specify the authentication token in the GET request like: http://localhost:9500/16/reports/default_2016-03-22_115301.html?token=NrA2AoIrk6iq9ChD
Use the "name" field (of the report meta-data) to fetch a specific report file.
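Building the GET URL from the report meta-data can be done like this; a minimal Python sketch using only the standard library (the helper name is illustrative):

```python
from urllib.parse import quote, urlencode

def report_url(base, cluster_id, name, token=None):
    """Build the GET URL for fetching a report file by its "name" field."""
    url = "%s/%d/reports/%s" % (base.rstrip("/"), cluster_id, quote(name))
    if token:
        url += "?" + urlencode({"token": token})  # for protected RPC
    return url

print(report_url("http://localhost:9500", 16,
                 "default_2016-03-22_115301.html", token="NrA2AoIrk6iq9ChD"))
# http://localhost:9500/16/reports/default_2016-03-22_115301.html?token=NrA2AoIrk6iq9ChD
```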
generatereport
Generates a report.
Possible report types/names:
- 'default': a generic report about the cluster state
- 'availability': availability report
- 'backup': backup report
Arguments:
- name: the report name/type
- username: the RPC username
- days: how many days back we need data (default = 7)
- clusterIds: some reports can contain info about multiple clusters; here the frontend can list the cluster IDs the user is interested in (or has access to) as a comma separated list, e.g.: "clusterIds": "1, 4, 5"
- recipients: comma separated list of e-mail addresses to send out the report
Supported types:
- type : default (per cluster, sysadmin report for one cluster)
- type : availability (global, summary of all clusters)
- type : backup (global, summary of all clusters)
curl -X POST -d '{"operation": "generatereport", "name": "default", "username": "[email protected]" }' http://localhost:9500/1/reports
{
"cc_timestamp": 1448030646,
"data":
{
"cid": 1,
"created": "2015-11-20T14:44:06+00:00",
"days": 7,
"id": 2,
"name": "default_2015-11-20_154405.html",
"path": "/home/kedz/s9s_tmp/1/galera/cmon-reports/default_2015-11-20_154405.html",
"recipients": "",
"timestamp": 1448030646,
"type": "default"
},
"requestStatus": "ok"
}
deletereport
Deletes a report (both the report file and meta-data from cmon database)
curl -X POST -d '{"operation": "deletereport", "id": 1 }' http://localhost:9500/1/reports
{
"cc_timestamp": 1448031045,
"requestStatus": "ok"
}
addschedule
Creates a schedule for a report
Arguments:
- name: the report name/type
- schedule: the cron-like schedule line
- username: the RPC username
- days: how many days back we need data (default = 7)
- clusterIds: some reports can contain info about multiple clusters; here the frontend can list the cluster IDs the user is interested in (or has access to) as a comma separated list, e.g.: "clusterIds": "1, 4, 5"
- recipients: e-mail recipients to send the report (comma separated list)
Supported types:
- type : default (per cluster, sysadmin report for one cluster)
- type : availability (global, summary of all clusters)
- type : backup (global, summary of all clusters)
An excerpt of the reply:
"cc_timestamp": 1448361188,
schedules
Lists the currently scheduled reports for the actual cluster.
curl -X POST -d '{"operation": "schedules"}' http:
An excerpt of the reply:
"cc_timestamp": 1448366203,
"schedule": "*/5 * * * *"
"requestStatus": "ok",
removeschedule
Removes an operational report schedule
Arguments:
- id: the schedule id (from schedules RPC retval)
curl -X POST -d '{"operation": "removeschedule", "id": 1}' http://localhost:9500/1/reports
{
"cc_timestamp": 1448366306,
"requestStatus": "ok"
}
listErrorReports
Lists the available/created error reports for a specific cluster.
$ curl -XPOST -d '{ "token":"td0vd3usRMuNXSC3","operation": "listerrorreports"}' http://localhost:9500/156/reports
{
"cc_timestamp": 1496053698,
"data": [
{
"created": "2017-05-29T09:24:19.000Z",
"id": 1,
"path": "/var/www/clustercontrol/app/tmp/logs/error_report_20170529-112416.tar.gz",
"size": "220.80 KiB",
"www": true
},
{
"created": "2017-05-29T09:24:29.000Z",
"id": 2,
"path": "/home/kedz/s9s_tmp/error_report_20170529-112426.tar.gz",
"size": "221.72 KiB",
"www": false
},
{
"created": "2017-05-29T09:56:05.000Z",
"id": 3,
"path": "/var/www/html/clustercontrol/app/tmp/logs/error-report-cluster156-2017-05-29_115605.tar.gz",
"size": "238.48 KiB",
"www": true
},
{
"created": "2017-05-29T09:59:05.000Z",
"id": 4,
"path": "/var/www/html/clustercontrol/app/tmp/logs/error-report-cluster156-2017-05-29_115905.tar.gz",
"size": "243.97 KiB",
"www": true
} ],
"requestStatus": "ok",
"total": 4
}
downloadErrorReport
Arguments:
- id: the error report id (as returned by listErrorReports)
The HTTP reply will be either a JSON error message or the tar-gz stream directly.
An example GET request to download a report by ID:
wget 'http://localhost:9500/156/reports?token=td0vd3usRMuNXSC3&operation=downloadErrorReport&id=4'
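Because the reply is either a JSON error document or a raw tar-gz stream, a client has to sniff the payload before treating it as an archive. A minimal sketch (gzip streams start with the magic bytes 0x1f 0x8b; the function name is illustrative):

```python
import json

GZIP_MAGIC = b"\x1f\x8b"

def classify_report_reply(body: bytes):
    """Return ('archive', body) for a tar-gz stream, or
    ('error', parsed_json) for a JSON error reply."""
    if body[:2] == GZIP_MAGIC:
        return "archive", body
    return "error", json.loads(body.decode("utf-8"))

kind, info = classify_report_reply(
    b'{"requestStatus": "UnknownError", "errorString": "not found"}')
```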
removeErrorReport
Arguments:
- id: the error report id (as returned by listErrorReports)
A few example requests and replies:
$ curl -XPOST -d '{ "token":"td0vd3usRMuNXSC3","operation": "removeerrorreport", "id": 2}' http://localhost:9500/156/reports
{
"cc_timestamp": 1496053850,
"requestStatus": "ok"
}
$ curl -XPOST -d '{ "token":"td0vd3usRMuNXSC3","operation": "removeerrorreport", "id": 2}' http://localhost:9500/156/reports
{
"cc_timestamp": 1496053852,
"errorString": "Error-report not found.",
"requestStatus": "UnknownError"
}
The local repositories API
/${CLUSTERID}/repos
Description: with these API methods you can list and manage the local mirrored APT/YUM repositories created by cmon.
NOTE: these APIs are controller-global (so independent of clusters), as the repositories are shared across the clusters.
Local repository jobs
See create_local_repository and update_local_repository jobs for the local APT/YUM repository mirroring.
listrepos
Lists available local repositories.
Arguments (for filtration):
- cluster-type: filter by cluster-type (galera, mongodb, postgresql, ...)
- vendor: filter by vendor (percona, mariadb, ...)
- db-version: filter by db (major.minor) version (5.6, 10.1, ...)
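The filters above are all optional, so a client can include only the ones it actually sets. A minimal sketch (the helper name is illustrative):

```python
import json

def build_listrepos(cluster_type=None, vendor=None, db_version=None):
    """Build a 'listrepos' request body; only the filters that are
    actually set are included in the JSON."""
    body = {"operation": "listrepos"}
    if cluster_type:
        body["cluster-type"] = cluster_type
    if vendor:
        body["vendor"] = vendor
    if db_version:
        body["db-version"] = db_version
    return json.dumps(body)

payload = build_listrepos(cluster_type="galera", vendor="percona",
                          db_version="5.6")
```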
Example request:
curl -X POST -d '{"operation": "listrepos" }' http://localhost:9500/1/repos
{
"cc_timestamp": 1447418582,
"data": [
{
"cluster-type": "galera",
"db-version": "5.6",
"local-path": "/var/www/html/cmon-repos/percona-5.6-yum-el7",
"name": "percona-5.6-yum-el7",
"os":
{
"release": "7",
"type": "redhat"
},
"timestamp": 1454203988,
"used-by-cluster": "1",
"vendor": "percona"
},
{
"cluster-type": "galera",
"db-version": "5.5",
"local-path": "/var/www/html/cmon-repos/percona-5.5-yum-el7",
"name": "percona-5.5-yum-el7",
"os":
{
"release": "7",
"type": "redhat"
},
"timestamp": 1454403988,
"used-by-cluster": "",
"vendor": "percona"
} ],
"requestStatus": "ok",
"total": 2
}
reposetup
Prints out the (semi-)generated repository filename and contents so that the repository can be used manually. You have to substitute the CMON-HOSTNAME string with the controller's IP address before you use it.
Arguments:
- name: the repository name
Example request:
curl -X POST -d '{"operation": "reposetup", "name": "percona-5.6-yum-el7" }' http://localhost:9500/1/repos
{
"cc_timestamp": 1447418574,
"content": "[percona-5.6-yum-el7]\nname = percona-5.6-yum-el7\nbaseurl = http://CMON-HOSTNAME/cmon-repos/percona-5.6-yum-el7\nenabled = 1\ngpgkey = http://CMON-HOSTNAME/cmon-repos/percona-5.6-yum-el7/localrepo-gpg-pubkey.asc\n",
"filename": "/etc/yum.repos.d/percona-5.6-yum-el7.repo",
"requestStatus": "ok"
}
removerepo
Removes a repository from the controller (it deletes the repository directory too).
Arguments:
- name: the repository name
Example request:
curl -X POST -d '{"operation": "removerepo", "name": "percona-5.5-yum-el7" }' http://localhost:9500/1/repos
{
"cc_timestamp": 1447420775,
"requestStatus": "ok"
}
combinations
Gives the Frontend/UI a list of the supported clusterType/vendor/dbVersion/osRelease combinations for the create_local_repository job.
The request returns lists containing the following (4) fields: clusterType, vendor, dbVersion and osRelease.
Example request:
curl -X POST -d '{"operation": "combinations" }' http://localhost:9500/1/repos | json
{
"cc_timestamp": 1454427430,
"data": [
[
"mongodb",
"10gen",
"3.2",
"5"
],
[
"mongodb",
"10gen",
"3.2",
"6"
],
..
],
"requestStatus": "ok",
"total": 108
}
The settings API
/${CLUSTERID}/settings
Description: with this API you can get/modify the cmon configuration values.
set
Sets/updates a setting value in cmon.
NOTE: at the moment some settings might not be modifiable through cmon, and some of them might be overwritten (as cmon.cnf has priority for some keys).
Arguments:
- key: the setting key
- value: the new value of the property.
Example request:
curl -XPOST -d '{"operation":"set","key":"CLUSTER_NAME","value":"MyGreatCluster5"}' 'http://localhost:9500/5/settings'
{
"cc_timestamp": 1444995954,
"requestStatus": "ok"
}
And let's verify the change:
curl -XPOST -d'{"operation":"list","keys":"CLUSTER_NAME"}' 'http://localhost:9500/5/settings'
{
"cc_timestamp": 1444996169,
"data":
{
"CLUSTER_NAME": "MyGreatCluster5"
},
"requestStatus": "ok",
"total": 1
}
setvalues
The "setvalues" call obsoletes the previous "set" call: it can handle multiple settings and change them all in one RPC call.
The "setvalues" call will set all the values or none of them; if the request contains an invalid key, the whole request is rejected and none of the values are set.
Example:
{
"operation": "setvalues",
"configuration_values":
{
"CPU_CRITICAL": 95,
"CPU_WARNING": 91
}
}
{
"cc_timestamp": 1617970039,
"requestStatus": "ok",
"configuration_values":
{
"CPU_CRITICAL": 95,
"CPU_WARNING": 91
}
}
As the example shows, the backend reads the configuration values back after applying all the changes and puts them into the reply. Since these are the actual configuration values, this makes it possible to double-check whether the configuration subsystem accepted all the new values.
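That client-side double check can be done by comparing the requested values with the echoed "configuration_values" map. A minimal sketch (the helper name is illustrative):

```python
def verify_setvalues(requested: dict, reply: dict) -> list:
    """Compare the values we asked for with the values the backend
    reports back; return the list of keys that did not stick."""
    applied = reply.get("configuration_values", {})
    return [key for key, value in requested.items()
            if applied.get(key) != value]

requested = {"CPU_CRITICAL": 95, "CPU_WARNING": 91}
reply = {
    "requestStatus": "ok",
    "configuration_values": {"CPU_CRITICAL": 95, "CPU_WARNING": 91},
}
mismatched = verify_setvalues(requested, reply)
```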
list
Lists the current settings/values.
Arguments:
- keys: an optional comma separated list to filter the results
curl -XPOST -d'{"operation":"list"}' 'http://localhost:9500/7/settings'
{
"data":
{
"BINDIR": "/usr/bin/",
"CLUSTER_NAME": "cluster_7",
"CMON_CONFIG_PATH": "/etc/cmon.d/cmon_7.cnf",
"CMON_DB": "cmon",
"CMON_HOSTNAME": "192.168.33.1",
"CMON_HOSTNAME1": "192.168.33.1",
"CMON_USER": "cmon",
"CONFIGDIR": "/etc/",
"DB_HOURLY_STATS_COLLECTION_INTERVAL": 5,
"DB_SCHEMA_STATS_COLLECTION_INTERVAL": 10800,
"DB_STATS_COLLECTION_INTERVAL": 30,
"ENABLE_CLUSTER_AUTORECOVERY": true,
"ENABLE_MYSQL_TIMEMACHINE": false,
"ENABLE_NODE_AUTORECOVERY": true,
"HOST_STATS_COLLECTION_INTERVAL": 60,
"LOG_COLLECTION_INTERVAL": 600,
"MONITORED_MYSQL_PORT": 3306,
"MYSQL_BASEDIR": "/usr",
"MYSQL_PORT": 3306,
"NDB_CONNECTSTRING": "127.0.0.1:1186",
"OS": "redhat",
"OS_USER": "kedz",
"OS_USER_HOME": "/home/kedz",
"PURGE": 7,
"SSH_KEYPATH": "/home/kedz/.ssh/id_rsa",
"SSH_PORT": 22,
"STAGING_DIR": "/home/kedz/s9s_tmp",
"SUDO": "sudo -n 2>/dev/null",
"USE_INTERNAL_REPOS": false,
"VENDOR": "percona",
"WWWROOT": "/var/www/html/"
},
"total": 31
}
list-extended
An extended list of the current settings/values.
Arguments:
- keys: an optional comma separated list to filter the results
curl -XPOST -d'{"operation":"list-extended"}' 'http://localhost:9500/7/settings'
{
"cc_timestamp": 1531326294,
"data":
{
"extended_info": [
{
"backup": [
{
"category": "backup",
"description": "This setting controls if emails are sent or not if a backup finished or failed.",
"name": "DISABLE_BACKUP_EMAIL",
"value": "",
"value_type": "Bool"
},
{
"category": "backup",
"description": "The database username for backups.",
"name": "BACKUP_USER",
"value": "",
"value_type": "String"
},
...
]
}]
}
setLicense
Sets and verifies a new license on the backend.
NOTE: The method may refuse to set the license for a while if you have tried too many times with wrong keys in a short time.
curl -XPOST -d '{"operation":"setLicense","email":"[email protected]","company":"Severalnines AB","exp_date":"31/12/2014","lickey":"deadbeef0123"}' 'http://localhost:9500/0/settings'
{
"cc_timestamp": 1456921835,
"data":
{
"hasLicense": false,
"licenseExpires": -1,
"licenseStatus": "Invalid license data found."
},
"requestStatus": "ok"
}
The new license format is different:
{
"operation": "setlicense",
...
"ewogICAgImNvbXBhbnkiOiAiU2V2ZXJhbG5pbmVzIEFCIiwKICAgICJlbWFpbF9hZGRyZXNzIjog\nImtlZGF6b0BzZXZlcmFsbmluZXMuY29tIiwKICAgICJleHBpcmF0aW9uX2RhdGUiOiAiMjAxOC0x\nMS0xOVQwMDowMDowMC4wMDFaIiwKICAgICJsaWNlbnNlZF9ub2RlcyI6IDUsCiAgICAidHlwZSI6\nICJFbnRlcnByaXNlIgp9CntzOXMtc2lnbmF0dXJlLXNlcGFyYXRvcn0KDbsVZwdgqiYCTQmBhd/M\nhEcI7xOoIsllrob4/VtKDQRgQMP3xKQXPe9SjeKvYs6SzOrjsPzoVk/s9enM2BR1q+I6S8PgLDJf\nyKhsRYRFhS3dHSDeH+2EaYWsm0gfHqcLndT9oC6wnHPGKdrz/V8Yywu2rHsacFETHOINRboIsL7w\n9OB/KTajeNU/g6BDE5Pd33C9Zh1FoJglOhnwqEjtdiRdY7ishIR/0ztCGVcwEZbALbsQMBrc8iir\ndeITmb/j03IM7XBGCMpSsV0fNqgLEhHqgNmTK7of7+67ZCobG56d8Ty2REGVZyiIL518kpnpsFQj\nTNZ7oCMue8LrdMPwN0TaqH/c4hOGdEqirA4Ouce4sgWGDhYPbZOJE44bbzasBND29+5R3752kwCN\n4jYYz5kdCPBk7cez3AC+bdDtVTm0G8N4kpRtln2D33zJVqEg1wVushfqg6Ww1y6FB4ZVoH7yY16t\nvw3yczuXVRG1TR5LB3nUsE3Kvpd1QujLk2/UwlmOvMfn/4A8CbhsUI59x977Q3XpiPKYWoRdCHrd\ntUa53AHr2Ju4/U456pWyHUYQVInzD2EZ/LbVGosIEDrTQzuGn3l7JbJ6oUbEBjqolGzibVqCIpeK\nN3iYU+HH2giprtWJ/LWBYXWzDmGW3MVpaQWEaxSqRTQoE/MS88GfFI8=\n"
}
{
"cc_timestamp": 1540390293,
"data":
{
"hasLicense": true,
"license":
{
"class_name": "CmonLicense",
"company": "Severalnines AB",
"days_left": 432,
"expiration_date": "2019-12-31T00:00:00.001Z",
"licensed_nodes": 10,
"lickey": "*** hidden ***",
"type": "Enterprise",
"used_nodes": 5,
"valid_date": true
},
"licenseExpires": 432,
"licenseStatus": "License found."
},
"requestStatus": "ok"
}
getLicense
Returns the current license data (the key is masked, only 4 characters are available) and the validity information.
$ curl -XPOST -d '{"operation":"getlicense"}' 'http://localhost:9500/0/settings?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d'
{
"cc_timestamp": 1540390293,
"data":
{
"hasLicense": true,
"license":
{
"class_name": "CmonLicense",
"company": "Severalnines AB",
"days_left": 432,
"expiration_date": "2019-12-31T00:00:00.001Z",
"licensed_nodes": 10,
"lickey": "*** hidden ***",
"type": "Enterprise",
"used_nodes": 5,
"valid_date": true
},
"licenseExpires": 432,
"licenseStatus": "License found."
},
"requestStatus": "ok"
}
verifyLicense
A method to verify (without setting or changing anything) whether the supplied license key looks valid. (NOTE: this method won't check the expiration date, just the key.)
An example with the new license format:
$ curl -XPOST -d '{"operation":"verifyLicense", "licensedata": "ewogICAgImNvbXBhbnkiOiAiU2V2ZXJhbG5pbmVzIEFCIiwKICAgICJlbWFpbF9hZGRyZXNzIjog
ImtlZGF6b0BzZXZlcmFsbmluZXMuY29tIiwKICAgICJleHBpcmF0aW9uX2RhdGUiOiAiMjAxOC0x
MS0xOVQwMDowMDowMC4wMDFaIiwKICAgICJsaWNlbnNlZF9ub2RlcyI6IDUsCiAgICAidHlwZSI6
ICJFbnRlcnByaXNlIgp9CntzOXMtc2lnbmF0dXJlLXNlcGFyYXRvcn0KDbsVZwdgqiYCTQmBhd/M
hEcI7xOoIsllrob4/VtKDQRgQMP3xKQXPe9SjeKvYs6SzOrjsPzoVk/s9enM2BR1q+I6S8PgLDJf
yKhsRYRFhS3dHSDeH+2EaYWsm0gfHqcLndT9oC6wnHPGKdrz/V8Yywu2rHsacFETHOINRboIsL7w
9OB/KTajeNU/g6BDE5Pd33C9Zh1FoJglOhnwqEjtdiRdY7ishIR/0ztCGVcwEZbALbsQMBrc8iir
deITmb/j03IM7XBGCMpSsV0fNqgLEhHqgNmTK7of7+67ZCobG56d8Ty2REGVZyiIL518kpnpsFQj
TNZ7oCMue8LrdMPwN0TaqH/c4hOGdEqirA4Ouce4sgWGDhYPbZOJE44bbzasBND29+5R3752kwCN
4jYYz5kdCPBk7cez3AC+bdDtVTm0G8N4kpRtln2D33zJVqEg1wVushfqg6Ww1y6FB4ZVoH7yY16t
vw3yczuXVRG1TR5LB3nUsE3Kvpd1QujLk2/UwlmOvMfn/4A8CbhsUI59x977Q3XpiPKYWoRdCHrd
tUa53AHr2Ju4/U456pWyHUYQVInzD2EZ/LbVGosIEDrTQzuGn3l7JbJ6oUbEBjqolGzibVqCIpeK
N3iYU+HH2giprtWJ/LWBYXWzDmGW3MVpaQWEaxSqRTQoE/MS88GfFI8="}' 'http://127.0.0.1:9500/0/settings?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d'
{
"cc_timestamp": 1542634254,
"data":
{
"hasLicense": false,
"licenseExpires": 0,
"licenseStatus": "The license has expired."
},
"license":
{
"class_name": "CmonLicense",
"company": "Severalnines AB",
"days_left": 0,
"expiration_date": "2018-11-19T00:00:00.001Z",
"licensed_nodes": 5,
"type": "Enterprise",
"used_nodes": 2,
"valid_date": false
},
"requestStatus": "ok"
}
An example with an old license format:
curl -XPOST -d '{"operation":"verifyLicense","email":"[email protected]","company":"Severalnines AB","exp_date":"31/12/2014","lickey":"huhuhuh122134241"}' 'http://localhost:9500/0/settings'
{
"cc_timestamp": 1456921835,
"data":
{
"hasLicense": false,
"licenseExpires": -1,
"licenseStatus": "The license key is valid (expiration date is not checked)."
},
"requestStatus": "ok"
}
generateToken
Generates and sets (in cmon_X.cnf) an RPC authentication token for the corresponding cluster. If this method succeeds, further RPC calls will only work when the generated token is specified.
Example usage of 'generateToken' RPC method:
$ curl -XPOST -d'{"operation":"generateToken"}' 'http://localhost:9500/4/settings'
{
"data":
{
"token": "ry3AabVrZS7XSzVV"
},
"requestStatus": "ok",
"total": 1
}
$ curl -XPOST -d'{"operation":"list","keys":"CLUSTER_NAME"}' 'http://localhost:9500/4/settings'
{
"cc_timestamp": 1455802341,
"errorString": "Access denied (invalid authentication token)",
"requestStatus": "error"
}
$ curl -XPOST -d'{"token":"ry3AabVrZS7XSzVV","operation":"list","keys":"CLUSTER_NAME"}' 'http://localhost:9500/4/settings'
{
"cc_timestamp": 1455802356,
"data":
{
"CLUSTER_NAME": "cluster_4"
},
"requestStatus": "ok",
"total": 1
}
$ sudo -n grep ^rpc_key /etc/cmon.d/cmon_4.cnf
rpc_key=ry3AabVrZS7XSzVV
getMailserver
Description: this method can be used to obtain the (global, not cluster specific) SMTP mail server settings.
curl 'http://localhost:9500/0/settings?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d&operation=get_mailserver'
{
"cc_timestamp": 1514992255,
"requestStatus": "ok",
"smtp_server":
{
"hostname": "my.smtp.server",
"password": "password",
"port": 587,
"use_tls": false
}
}
setMailserver
Description: a method to set/update the global SMTP server settings.
Arguments in 'smtp_server':
- hostname: the SMTP server hostname
- port: the SMTP server port
- use_tls: whether we should use TLS (if false, but the server supports STARTTLS, then cmon automatically upgrades the connection to encrypted)
- username: the SMTP authentication username (plain)
- password: the SMTP authentication password (plain)
- sender: a valid e-mail address accepted by the SMTP server, to be used in the From: field
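The smtp_server argument map above can be assembled like this (a minimal sketch; the helper name and values are illustrative, and the only validation shown is a basic port check done client-side before the RPC call):

```python
import json

def build_set_mailserver(hostname, port, username, password,
                         sender, use_tls=False):
    """Build a 'set_mailserver' request body. Raises on an obviously
    invalid port so bad input fails before the RPC call is made."""
    if not (0 < port < 65536):
        raise ValueError("invalid SMTP port: %r" % port)
    return json.dumps({
        "operation": "set_mailserver",
        "smtp_server": {
            "hostname": hostname,
            "port": port,
            "username": username,
            "password": password,
            "sender": sender,
            "use_tls": use_tls,
        },
    })

payload = build_set_mailserver("my.smtp.server", 587, "smtpuser",
                               "password", "[email protected]")
```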
curl -XPOST -d'{"operation":"set_mailserver","smtp_server": { "hostname": "my.smtp.server", "password": "password","port": 587, "sender": "[email protected]", "use_tls": false, "username": "[email protected]" }}' 'http://localhost:9500/0/settings?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d'
{
"cc_timestamp": 1514992933,
"requestStatus": "ok"
}
unregisterHost
Description: this method can be used to remove a service from cmon without stopping/uninstalling it; the service just gets unregistered from the controller.
Arguments:
- host: a Cmon*Host instance (hostname and port fields are mandatory)
An example request & reply:
$ curl -XPOST -d '{"token":"6T17RE8PBaSzOKbZ","operation": "unregisterHost", "host":{"hostname":"10.0.3.29","port":9600}}' http://127.0.0.1:9500/180/settings
{
"cc_timestamp": 1504868192,
"errorString": "",
"requestStatus": "ok"
}
Get software packages
Description: An API to get the currently selected software package.
$ curl 'http://localhost:9500/23/settings?token=jgqeVoNPO3x90sDJ&operation=getSelectedPackage'
{
"cc_timestamp": 1551439141,
"data":
{
"package_files": [ "/tmp/software_packages/mysql-cluster_7.6.8-1ubuntu18.10_amd64.deb-bundle.tar" ],
"package_id": 100,
"package_name": "My NDB packages"
},
"requestStatus": "ok",
"total": 1
}
$ curl 'http://localhost:9500/24/settings?token=E9vJ2SoDg0CaUfqf&operation=getSelectedPackage'
{
"cc_timestamp": 1551439144,
"errorString": "There is no selected software package.",
"requestStatus": "UnknownError"
}
Get e-mail component description strings
$ curl 'http://localhost:9500/24/settings?token=E9vJ2SoDg0CaUfqf&operation=getComponentMeta'
{
"cc_timestamp": 1551446013,
"data": [
{
"alarm_name": "Network alarms",
"component": "Network",
"log_name": "Network related logs",
"message_name": "Network related messages, e.g host unreachable, SSH issues."
},
{
"alarm_name": "Cmon Internal alarms",
"component": "CmonDatabase",
"log_name": "Cmon database logs",
"message_name": "Internal Cmon database related messages."
},
{
"alarm_name": "Mail related alarms",
"component": "Mail",
"log_name": "Mail logs",
"message_name": "Mail system related messages"
},
{
"alarm_name": "Cluster alarms",
"component": "Cluster",
"log_name": "Cluster logs",
"message_name": "Cluster related messages, e.g Cluster Failed."
},
{
"alarm_name": "Cluster configuration alarms",
"component": "ClusterConfiguration",
"log_name": "Cluster configuration logs",
"message_name": "Cluster configuration messages, e.g sw configuration messages."
},
{
"alarm_name": "Recovery alarms",
"component": "ClusterRecovery",
"log_name": "Recovery logs",
"message_name": "Recovery messages like Cluster or Node Recovery failures."
},
{
"alarm_name": "Node alarms",
"component": "Node",
"log_name": "Node related logs",
"message_name": "Messages related to nodes, e.g Node Disconnected, missing GRANT, failed to start HAProxy, failed to start NDB Cluster nodes."
},
{
"alarm_name": "Host related alarms (RAM, CPU, DISK, NETWORK)",
"component": "Host",
"log_name": "Host logs",
"message_name": "Host related messages, e.g CPU/DISK/RAM/SWAP alarms."
},
{
"alarm_name": "Database health related alarms (e.g advisors)",
"component": "DbHealth",
"log_name": "Health logs",
"message_name": "Database health related messages, e.g memory usage of mysql servers, connections."
},
{
"alarm_name": "Database performance alarms",
"component": "DbPerformance",
"log_name": "Database performance",
"message_name": "Alarms for long running transactions and deadlocks."
},
{
"alarm_name": "Software installation alarms",
"component": "SoftwareInstallation",
"log_name": "Installation logs",
"message_name": "Software installation related messages."
},
{
"alarm_name": "Alarms about backup",
"component": "Backup",
"log_name": "Backup logs",
"message_name": "Messages about backups."
},
{
"alarm_name": "Other alarms",
"component": "Unknown",
"log_name": "Other logs",
"message_name": "Other uncategorized messages."
} ],
"requestStatus": "ok",
"total": 13
}
Get E-mail notification recipients
$ curl 'http://localhost:9500/24/settings?token=E9vJ2SoDg0CaUfqf&operation=getRecipients'
{
"cc_timestamp": 1551709890,
"requestStatus": "ok",
"total": 1
}
Get E-mail notification settings
Arguments:
- email: optional, to show only one e-mail recipient's settings
$ curl 'http://localhost:9500/24/settings?token=E9vJ2SoDg0CaUfqf&operation=getNotificationSettings'
{
"cc_timestamp": 1551707236,
"data": [
{
"daily_limit": 1,
"digest_hour": 7,
"settings":
{
"Backup":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"Cluster":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"ClusterConfiguration":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"ClusterRecovery":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"CmonDatabase":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"DbHealth":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"DbPerformance":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"Host":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"Mail":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"Network":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"Node":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"SoftwareInstallation":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"Unknown":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
}
},
"time_zone": 0
} ],
"requestStatus": "ok",
"total": 1
}
Set E-mail notification settings
Arguments:
- email: the recipients email address
- owner: optional; if not specified, it defaults to the 'email' value
- daily_limit: the daily limit on the number of non-digest e-mails to send (default 100)
- digest_hour: when to send the daily digest e-mails
- time_zone: the time-zone (hour value, to be added to UTC+0) for digest sending
- use_defaults: (default false) use the component defaults for the settings not specified
- settings: a map of components; each component must list an action (Deliver|Digest|Ignore) for each severity (CRITICAL|WARNING|INFO), also in a map, see the examples above
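Since the settings map needs one entry per component and one action per severity, a helper can generate it so no component/severity pair is missed. A minimal sketch (the component list follows getComponentMeta above; the helper name is illustrative):

```python
# components as returned by the getComponentMeta call
COMPONENTS = [
    "Network", "CmonDatabase", "Mail", "Cluster", "ClusterConfiguration",
    "ClusterRecovery", "Node", "Host", "DbHealth", "DbPerformance",
    "SoftwareInstallation", "Backup", "Unknown",
]
ACTIONS = ("Deliver", "Digest", "Ignore")

def build_settings(critical="Deliver", warning="Ignore", info="Ignore"):
    """Build the per-component settings map expected by
    setNotificationSettings, using the same actions for every component."""
    actions = {"CRITICAL": critical, "WARNING": warning, "INFO": info}
    for action in actions.values():
        if action not in ACTIONS:
            raise ValueError("unknown action: %s" % action)
    return {component: dict(actions) for component in COMPONENTS}

settings = build_settings()
```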
In case of invalid / missing data an error will be thrown:
curl -XPOST -d '...' 'http://localhost:9500/38/settings?token=tcfCp5p7PvmZPQLT'
{
"cc_timestamp": 1554469785,
"errorString": "Invalid data for component 'Network' in settings (critical: '', warning: '', info: '')",
"requestStatus": "InvalidRequest"
}
Creating a new recipient just using defaults:
curl -XPOST -d '{"operation":"setNotificationSettings","email":"[email protected]", "use_defaults": true}' 'http://localhost:9500/38/settings?token=tcfCp5p7PvmZPQLT'
{
"cc_timestamp": 1554469881,
"data": [
{
"daily_limit": 100,
"digest_hour": 7,
"settings":
{
"Backup":
{
"CRITICAL": "Digest",
"INFO": "Digest",
"WARNING": "Digest"
},
...
"Unknown":
{
"CRITICAL": "Digest",
"INFO": "Digest",
"WARNING": "Digest"
}
},
"time_zone": 0
} ],
"requestStatus": "ok",
"total": 1
}
Create or update a new recipient (everything specified).
curl -XPOST [email protected] 'http://localhost:9500/38/settings?token=tcfCp5p7PvmZPQLT' <<REQEND
{
"operation":"setNotificationSettings",
...
"ClusterConfiguration":
{
"CRITICAL": "Deliver",
...
},
...
"SoftwareInstallation":
{
"CRITICAL": "Deliver",
...
}
}
REQEND
{
"cc_timestamp": 1554470082,
"data": [
{
"daily_limit": 3,
"digest_hour": 12,
"settings":
{
"Backup":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"Cluster":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"ClusterConfiguration":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"ClusterRecovery":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"CmonDatabase":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"DbHealth":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"DbPerformance":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"Host":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"Mail":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"Network":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"Node":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"SoftwareInstallation":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
},
"Unknown":
{
"CRITICAL": "Deliver",
"INFO": "Ignore",
"WARNING": "Ignore"
}
},
"time_zone": 4
} ],
"requestStatus": "ok",
"total": 1
}
Remove e-mail recipient
Arguments:
- email: an e-mail address to be removed from recipients lists
curl 'http://localhost:9500/24/settings?token=E9vJ2SoDg0CaUfqf&operation=removeRecipient&email=kedazo%[email protected]'
{
"cc_timestamp": 1551711721,
"requestStatus": "ok"
}
$ curl 'http://localhost:9500/24/settings?token=E9vJ2SoDg0CaUfqf&operation=getNotificationSettings&email=kedazo%[email protected]'
{
"cc_timestamp": 1551711724,
"data": [ ],
"requestStatus": "ok",
"total": 0
}
The Processes API
/${CLUSTERID}/proc
Description: at this RPC path you can request cmon to return the collected running processes on the nodes/controller.
top
Returns all the running processes (like the 'top' utility) and their properties (cpu/mem usage) from the nodes.
Possible arguments:
- "hostname" : to filter the results (by hostname)
Example:
{
"operation": "top",
"including_hosts": "127.0.0.1",
"limit": 3
}
{
"cc_timestamp": 1617970194,
"requestStatus": "ok",
"total": 1,
"data":
[
{
"hostname": "127.0.0.1",
"processes":
[
{
"class_name": "CmonProcStats",
"cpu_time": 1.17399e+08,
"cpu_usage": 545.423,
"executable": "geth_bsc",
"mem_usage": 12.1326,
"nice": 0,
"pid": 419369,
"priority": 20,
"res_mem": 24600190976,
"shr_mem": 14884864,
"state": "S",
"user": "kedz",
"virt_mem": 52866027520
},
{
"class_name": "CmonProcStats",
"cpu_time": 4.77212e+07,
"cpu_usage": 154.058,
"executable": "geth",
"mem_usage": 14.1925,
"nice": 0,
"pid": 445467,
"priority": 20,
"res_mem": 28776759296,
"shr_mem": 84066304,
"state": "S",
"user": "kedz",
"virt_mem": 56679010304
},
{
"class_name": "CmonProcStats",
"cpu_time": 161693,
"cpu_usage": 124.896,
"executable": "cmon",
"mem_usage": 0.663894,
"nice": 0,
"pid": 185249,
"priority": 20,
"res_mem": 1346117632,
"shr_mem": 26906624,
"state": "S",
"user": "root",
"virt_mem": 20719349760
}
],
"status":
{
"class_name": "CmonCollectorReport",
"last_sample": "2021-04-09T12:09:51.000Z",
"last_sample_age_secs": 3,
"message": "Sample 2 created",
"sample_counter": 2,
"success": true
}
}
]
}
The GetRunningProcesses Call
The "GetRunningProcesses" call obsoletes the previous "top" RPC call. It has a number of new features, including new parameters and a different return value.
The call supports the following arguments (more arguments will be added soon):
including_hosts
A list of host names that will be returned if they are found in the cluster. If any of these hosts are not in the cluster they will simply be ignored.
limit
Limits the number of processes to return for every host.
with_process_properties
The list of process properties that will be returned.
Example:
{
"operation": "getRunningProcesses",
"including_hosts": "127.0.0.1",
"limit": 3
}
{
"cc_timestamp": 1617970195,
"requestStatus": "ok",
"total": 1,
"data":
[
{
"hostname": "127.0.0.1",
"processes":
[
{
"class_name": "CmonProcStats",
"cpu_time": 1.17399e+08,
"cpu_usage": 545.423,
"executable": "geth_bsc",
"mem_usage": 12.1326,
"nice": 0,
"pid": 419369,
"priority": 20,
"res_mem": 24600190976,
"shr_mem": 14884864,
"state": "S",
"user": "kedz",
"virt_mem": 52866027520
},
{
"class_name": "CmonProcStats",
"cpu_time": 4.77212e+07,
"cpu_usage": 154.058,
"executable": "geth",
"mem_usage": 14.1925,
"nice": 0,
"pid": 445467,
"priority": 20,
"res_mem": 28776759296,
"shr_mem": 84066304,
"state": "S",
"user": "kedz",
"virt_mem": 56679010304
},
{
"class_name": "CmonProcStats",
"cpu_time": 161693,
"cpu_usage": 124.896,
"executable": "cmon",
"mem_usage": 0.663894,
"nice": 0,
"pid": 185249,
"priority": 20,
"res_mem": 1346117632,
"shr_mem": 26906624,
"state": "S",
"user": "root",
"virt_mem": 20719349760
}
],
"sample_report":
{
"class_name": "CmonCollectorReport",
"last_sample": "2021-04-09T12:09:51.000Z",
"last_sample_age_secs": 4,
"message": "Sample 2 created",
"sample_counter": 2,
"success": true
}
}
]
}
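Clients typically re-sort or trim the returned per-host process lists themselves, for example to find the busiest processes across the whole reply. A minimal sketch (the helper name and sample data are illustrative):

```python
def busiest_processes(reply, count=2):
    """Flatten the per-host process lists in a getRunningProcesses
    reply and return the top entries by cpu_usage."""
    procs = []
    for host in reply.get("data", []):
        for proc in host.get("processes", []):
            procs.append((host["hostname"], proc))
    # highest CPU usage first
    procs.sort(key=lambda item: item[1]["cpu_usage"], reverse=True)
    return procs[:count]

sample = {
    "data": [
        {"hostname": "127.0.0.1", "processes": [
            {"executable": "geth_bsc", "cpu_usage": 545.4},
            {"executable": "cmon", "cpu_usage": 124.9},
            {"executable": "geth", "cpu_usage": 154.1},
        ]}
    ]
}
top = busiest_processes(sample)
```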
managedProcesses
This API returns the list (with current status) of the monitored managed processes
(this information was available from the old 'processes' and 'ext_proc' SQL tables).
Possible arguments:
- "hostname" : to filter the results (by hostname)
curl -XPOST -d '{"operation": "managedprocesses"}' 'http://localhost:9500/5/proc'
{
"cc_timestamp": 1435658110,
"data": [
{
"category": "ClusterProvider",
"command": "nohup service postgresql start",
"executable": "postgres",
"getpidcommand": "pgrep -f ^postgres",
"hostname": "192.168.33.121",
"managed": true,
"pid": 891
},
{
"category": "ClusterProvider",
"command": "nohup service postgresql start",
"executable": "postgres",
"getpidcommand": "pgrep -f ^postgres",
"hostname": "192.168.33.122",
"managed": true,
"pid": 848
} ],
"requestStatus": "ok",
"total": 2
}
toggleManaged
With this flag you can temporarily deactivate/activate the process recovery.
(When managed is set to false, cmon will not try to restart the failing process.)
Possible arguments:
- "hostname": to specify on which host we toggle the managed flag
- "executable": the process executable name to enable/disable
- "managed": bool (true/false) value of the new setting
An example request:
$ curl -XPOST -d '{"operation": "toggleManaged", "hostname": "192.168.33.122", "executable": "postgres", "managed": false}' 'http://localhost:9500/5/proc'
{
"cc_timestamp": 1437478194,
"requestStatus": "ok"
}
Virtual Files API
/${CLUSTERID}/files
Description: here you can get files from CMON using GET requests. For example, if you have an imperative script which produces a 'graph', it will be stored here for some time (~1 hour).
Please consider the following script:
var graph = new CmonGraph;
for (row = 0; row < 100; ++row) {
graph.setData(0, row, row);
graph.setData(1, row, sin(row / 10.0));
graph.setData(2, row, cos(row / 10.0));
}
exit(graph);
When you execute it, you get an RPC reply like this:
{
"cc_timestamp": 1427809088,
"requestStatus": "ok",
"results": {
"exitStatus": {
"class": "CmonGraph",
"fileName": "51ff4aec-29cd-baab-f2fb-e3467cc254f8.png",
"height": 600,
"mimeType": "image/png",
"width": 800
},
"fileName": "/dkedves_test/graph01.js",
"status": "Ended"
},
"success": true
}
And then you can get the generated (in-memory) image from the following URL:
curl http://localhost:9500/1/files/51ff4aec-29cd-baab-f2fb-e3467cc254f8.png
The imperative scripting API
/${CLUSTERID}/imp
With these API methods you can execute scripts written in the imperative (JavaScript-like) language.
NOTE: These APIs are only available from cmon >= 1.2.10.
saveScript
Description: saves (creates/updates) a script file
Arguments:
- filename: the full path of the script (/path/to/script.js)
- user: the author username (for internal logging purposes)
- content: the script contents
- tags: (optional) the script tags (can be a ; separated list in a string or a JSon list)
For tags, see setTags for example how it can be passed.
$ curl -XPOST -d'{"operation":"saveScript","content":"var a = 1;\nprint(a);","user":"superAdmin007","filename":"/test/test1.js"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425390000,
"requestStatus": "ok"
}
loadScript
Description: loads a script file (or only its meta-data)
Arguments:
- filename: the full path of the script (/path/to/script.js)
- onlymetadata: (defaults to false) if set, the content will not be included in the reply
$ curl -XPOST -d'{"operation":"loadScript","filename":"/test/test1.js"}' 'http://localhost:9500/14/imp';
{
"cc_timestamp": 1425390125,
"data": {
"content": "var a = 7;\nprint(a);",
"filename": "test1.js",
"lasteditor": "superAdmin007",
"path": "/test/",
"timestamp": 1425390123,
"version": 2
},
"requestStatus": "ok"
}
compileScript
Description: compiles (parses) the script file
Arguments:
- filename: the full path of the script (/path/to/script.js)
NOTE: every compiled script is kept in memory until a new modification arrives or until it becomes invalidated (currently set to 120 secs). You can still execute a non-compiled or invalidated script; it will then be compiled first and executed.
curl -XPOST -d'{"operation":"compileScript","filename":"/test/test1.js"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425392797,
"requestStatus": "ok",
"results": { }
}
On error, you may get a syntax error back in the results, like here:
{
"cc_timestamp": 1425393004,
"requestStatus": "ok",
"results":
{
"exitStatus": null,
"messages": [
{
"lineNumber": 1,
"message": ":1: syntax error.",
"severity": "error"
} ],
"status": "Parsed"
}
}
executeScript
Description: executes (and before that re-compiles if needed) a script
Arguments:
- filename: the full path of the script (/path/to/script.js)
- arguments: the arguments string
- user: the username (for internal logging purposes)
Here is an example of a successful run:
$ curl -XPOST -d'{"operation":"executeScript","filename":"/test/test1.js"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425393260,
"requestStatus": "ok",
"results":
{
"exitStatus": null,
"messages": [
{
"message": "8"
} ],
"status": "Ended"
}
}
And another example where an error occurs:
curl -XPOST -d'{"operation":"executeScript","filename":"/test/test2.js"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425393300,
"requestStatus": "ok",
"results":
{
"exitStatus": null,
"messages": [
{
"lineNumber": 1,
"message": ":1: syntax error.",
"severity": "error"
} ],
"status": "Parsed"
}
}
schedule
Description: schedules a script to be periodically executed
Arguments:
- filename: the full path of the script (/path/to/script.js)
- schedule: the cron-like schedule string (or an empty string -> disables the schedule)
- arguments: the arguments string
- user: the username (for internal logging purposes)
Schedule string: it consists of 5 parts (separated by space or tab): m h dom mon dow (minute, hour, day of month, month, day of week).
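A client can sanity-check such a schedule string before submitting it. A minimal sketch (the accepted field ranges follow standard cron conventions; the controller's own parser is authoritative and may accept more forms, such as lists and ranges):

```python
# Split and sanity-check a cron-style schedule string (m h dom mon dow).
# Field ranges follow standard cron conventions; this only accepts "*",
# "*/step" and plain integers, while real cron also allows lists/ranges.

FIELD_RANGES = [(0, 59), (0, 23), (1, 31), (1, 12), (0, 7)]  # m h dom mon dow

def is_valid_schedule(schedule):
    fields = schedule.split()
    if len(fields) != len(FIELD_RANGES):
        return False
    for field, (lo, hi) in zip(fields, FIELD_RANGES):
        if field == "*":
            continue
        if field.startswith("*/") and field[2:].isdigit():
            continue
        if field.isdigit() and lo <= int(field) <= hi:
            continue
        return False
    return True

print(is_valid_schedule("*/5 2 3 * *"))  # → True
print(is_valid_schedule("* * * *"))      # → False (too few fields)
```

This is only a client-side convenience; the controller still validates the string when the request arrives.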
For example, we schedule the following script (the string */5 2 3 * * runs it every 5 minutes during hour 2 on the 3rd day of each month):
curl -XPOST -d'{"operation":"schedule","filename":"/test/test1.js","schedule":"*/5 2 3 * *","arguments": "50 120 true"}' http://127.0.0.1:9500/118/imp?token=Hlrz67jgPSmAYxJz
{
"cc_timestamp": 1527156972,
"data":
{
"filename": "test1.js",
"name": "test1.js",
"path": "/test/",
"schedule": "*/5 2 3 * *",
"schedule_args": "50 120 true",
"schedule_enabled": true,
"schedule_id": 4537,
"settings":
{
"project":
{
"tags": "test"
}
},
"timestamp": 1527156931,
"type": "file",
"version": 1
},
"requestStatus": "ok"
}
changeschedule
Description: changes the settings of a schedule
Arguments:
- schedule_id|filename: the Id of the schedule or the full path to the script. MANDATORY.
- schedule: the cron-like schedule string (or an empty string -> disables the schedule) (OPTIONAL)
- arguments: the arguments string (OPTIONAL)
- enabled: true/false (defaults to true)
Examples: To change a schedule:
curl -XPOST -d '{"operation": "changeSchedule", "enabled": true, "schedule": "*/5 10 * * *", "filename":"/s9s/host/cpu_usage.js"}' http://127.0.0.1:9500/118/imp?token=Hlrz67jgPSmAYxJz
{
"cc_timestamp": 1527156804,
...
"filename": "cpu_usage.js",
"name": "cpu_usage.js",
...
"schedule": "*/5 10 * * *",
...
"schedule_enabled": true,
...
"timestamp": 1520924338,
...
}
To enable a schedule:
curl -XPOST -d '{"operation": "changeSchedule", "schedule_id": 1314, "enabled": true, "schedule": "*/10 * * * *", "token": "VxNXyL0TFl6CARkO"}' http://127.0.0.1:9500/102/imp
{
"cc_timestamp": 1425643537,
"requestStatus": "ok"
}
scheduleResults
Description: fetches the last result of one or more scheduled scripts
Arguments:
- filename|filenames: a single or multiple script paths (/path/to/script.js)
An example query where we fetch multiple script results:
curl -XPOST -d'{"operation":"scheduleResults","filenames":["/test/test1.js","/test/test2.js"]}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425652908,
"data": [
{
"exitStatus": "null",
"filename": "/test/test1.js",
"messages": [
{
"message": "13"
} ],
"status": "Ended",
"timestamp": 1425652500
},
{
"exitStatus": "null",
"filename": "/test/test2.js",
"messages": [
{
"message": "55"
} ],
"status": "Ended",
"timestamp": 1425652500
} ],
"requestStatus": "ok",
"total": 2
}
dirTree
Description: returns the file directory tree
Arguments:
- path: (defaults to "/") specify a directory (/subdir1/) if you want to get only a sub-tree
- showFiles: (defaults to true) whether to also include the file(s) in the tree
curl -XPOST -d '{"operation": "dirTree", "path":"/s9s/host/", "token": "VxNXyL0TFl6CARkO"}' http://127.0.0.1:9500/102/imp
{
"cc_timestamp": 1477661429,
"data":
{
"contents": [
{
"filename": "cpu_usage.js",
"name": "cpu_usage.js",
"path": "/s9s/host/",
"schedule": "*/10 * * * *",
"schedule_args": "",
"schedule_enabled": true,
"schedule_id": 1314,
"settings":
{
"project":
{
"tags": "s9s;host"
}
},
"timestamp": 1466500249,
"type": "file",
"version": 1
} ],
"name": "host",
"path": "/s9s/",
"type": "directory"
},
"requestStatus": "ok"
}
setTags
Description: sets tags for the specified script file
Arguments:
- filename: the full path of the script (/path/to/script.js)
- user: the username (for internal logging purposes)
- tags: the script tags (either a ;-separated list in a string or a JSON list)
curl -XPOST -d'{"operation":"setTags","user":"superadmin01","tags":["mytag","helllooo"],"filename":"/test/tags/script1.js"}' 'http://localhost:9500/1/imp'
curl -XPOST -d'{"operation":"setTags","tags":"tag1;tag2;tag3","filename":"/test/tags/script2.js"}' 'http://localhost:9500/1/imp'
removeScript
Description: removes a script from the system
Arguments:
- filename: the full path of the script (/path/to/script.js)
$ curl -XPOST -d'{"operation":"removeScript","filename":"/test2/subdir/filename.js"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425394458,
"requestStatus": "ok"
}
moveScript
Description: renames or moves a script to a new place
Arguments:
- filename: the full path of the script (/path/to/script.js)
- newname: the new full path of the script (/other/path/newname.js)
- user: the username (for internal logging purposes)
$ curl -XPOST -d'{"operation":"moveScript","filename":"/test/test2.js","newname":"/newpath/newname.js"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425394458,
"requestStatus": "ok"
}
importTarGz
Description: imports a set of scripts from a .tar.gz archive. The archive must contain a directory with the same name as the archive (test.tar.gz should contain a 'test' directory).
Arguments:
- localpath: the local (full) file path to the .tar.gz file on the controller
- overwrite: (defaults to true) if set to false, the call will fail if any existing files would be replaced by the archive's contents
NOTE: the imported scripts will appear in a subdirectory (using the .tar.gz name)
NOTE: the filename 'root.tar.gz' is handled specially; it will be imported to '/'
curl -XPOST -d'{"operation":"importTarGz","localpath":"/home/kedz/mytargz.tar.gz"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425483265,
"requestStatus": "ok"
}
exportTarGz
Description: exports a sub-tree of the scripts to a .tar.gz file (note: the file will be overwritten if it already exists).
Arguments:
- outdir: the local directory path of the output .tar.gz file on the controller
- path: the virtual directory to be exported (must start with '/')
NOTE: the filename will be constructed from the path name and the ".tar.gz" suffix.
NOTE2: if you export '/' then the output filename will be root.tar.gz
curl -XPOST -d'{"operation":"exportTarGz","outdir":"/home/kedz/","path":"/test"}' 'http://localhost:9500/14/imp'
{
"cc_timestamp": 1425555463,
"requestStatus": "ok"
}
sudo tar -tzf test.tar.gz
The Logger API
/${CLUSTERID}/logger
The logger API is the new RPC API for the new logger subsystem. This is under construction.
The getLogEntries RPC call
The log entries are returned as cmonlogentry objects.
Possible arguments:
- ascending: sorting order
- created_after: (ISO TZ format datetime string)
- created_before: (ISO TZ format datetime string)
- component: filter by component (Network, CmonDatabase, Mail, Cluster, DbHealth, DbPerformance, Host, ClusterConfiguration, ClusterRecovery, Node, SoftwareInstallation, Backup, Unknown)
- severity: LOG_EMERG, LOG_ALERT, LOG_CRIT, LOG_ERR, LOG_WARNING, LOG_NOTICE, LOG_INFO, LOG_DEBUG
- limit: limit for pagination
- offset: offset for pagination
curl -XPOST -d '{"operation": "getLogEntries", "token": "QfVlqKGRajRrsaaF"}' http://localhost:9500/78/logger
{
"operation" : "getLogEntries",
"created_after" : "2015-05-08T10:10:45.+0200Z",
"created_before" : "2019-05-08T10:10:45.+0200Z",
"limit" : 2,
"offset" : 0,
"cluster_id" : 200
}
{
"cc_timestamp": 1617969742,
"requestStatus": "ok",
"total": 0,
"log_entries": [],
"log_entry_counts": { }
}
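The filter and pagination arguments can be combined in a single request. A minimal sketch of composing such a request body client-side (the token, filter values, and endpoint path are illustrative placeholders taken from the example above):

```python
import json

# Compose a filtered, paginated getLogEntries request body. The token
# and filter values here are illustrative; the resulting string would
# be POSTed to http://localhost:9500/${CLUSTERID}/logger.
request = {
    "operation": "getLogEntries",
    "token": "QfVlqKGRajRrsaaF",  # placeholder token from the example above
    "severity": "LOG_ERR",
    "component": "Backup",
    "created_after": "2021-01-01T00:00:00.000Z",
    "limit": 50,
    "offset": 0,
    "ascending": False,
}

body = json.dumps(request)
print(body)
```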
The getLogStatistics RPC call
curl -XPOST -d '{"operation": "getLogStatistics", "token": "QfVlqKGRajRrsaaF"}' http://localhost:9500/0/logger
{
"operation" : "getLogStatistics"
}
{
"cc_timestamp": 1617969742,
"requestStatus": "ok",
"log_statistics":
{
"current_time": "2021-04-09T12:02:22.314Z",
"last_error_message": "Success.",
"last_flush_time": "2021-04-09T12:02:17.008Z",
"log_debug_enabled": false,
"writer_thread_running": true,
"writer_thread_started": "2021-04-09T12:01:36.923Z",
"cluster_log_statistics":
[
{
"cluster_id": 0,
"disabled": false,
"entries_received": 13,
"format_string": "%Z : (%S) %M\n",
"last_error_message": "",
"lines_written": 0,
"log_file_name": "",
"max_log_file_size": 5242880,
"messages_per_sec": 0.216667,
"syslog_enabled": false,
"write_cycle_counter": 1
},
{
"cluster_id": 200,
"disabled": false,
"entries_received": 73,
"format_string": "%Z : (%S) %M\n",
"last_error_message": "Success.",
"lines_written": 73,
"log_file_name": "./cmon-ut-communication.log",
"max_log_file_size": 5242880,
"messages_per_sec": 1.21667,
"syslog_enabled": false,
"write_cycle_counter": 4
},
{
"cluster_id": 400,
"disabled": false,
"entries_received": 4,
"format_string": "%Z : (%S) %M\n",
"last_error_message": "Success.",
"lines_written": 4,
"log_file_name": "./cmon-ut-communication.log",
"max_log_file_size": 5242880,
"messages_per_sec": 0.0666667,
"syslog_enabled": false,
"write_cycle_counter": 1
}
],
"entries_statistics":
{
"entries_received": 90,
"entries_written_to_cmondb": 90
}
}
}
List collected log files
To refresh the collected log files you can use the collect_logs job.
$ curl -XPOST -d '{"operation":"listcollectedlogs"}' 'http://localhost:9500/47/logger?token=lPojckD3AIKCCzks'
{
"cc_timestamp": 1543415991,
"files": [
{
"created": "2018-11-28T14:32:03.000Z",
"filename": "/var/lib/proxysql/proxysql.log",
"hostname": "10.35.112.97",
"length": 279801
},
{
"created": "2018-11-28T14:32:03.000Z",
"filename": "/var/log/mysql/mysqld.log",
"hostname": "10.35.112.97",
"length": 18952
} ],
"requestStatus": "ok",
"total": 2
}
Get collected log files
$ curl -XPOST -d '{"operation":"getcollectedlog", "hostname": "10.35.112.97", "filename": "/var/log/mysql/mysqld.log"}' 'http://localhost:9500/47/logger?token=lPojckD3AIKCCzks'
{
"cc_timestamp": 1543416094,
"data":
{
"content": "2018-11-28T13:04:40.441185Z 0 [Warning] The syntax '--log_warnings/-W' is deprecated and will be removed in a future release. Please use '--log_error_verbosity' instead.\r\n2018-11-28T13:04:40.441277Z 0 [Warning] options --log-slow-admin-statements, --log-queries-not-using-indexes and --log-slow-slave-statements have no effect if --slow-query-log is not set\r\n2018-11-28T13:04:40.441280Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).\r\n2018-11-28T13:04:40.441300Z 0 [Note] Ignoring --secure-file-priv value as server is running with --initialize(-insecure) or --bootstrap.\r\n2018-11-28T13:04:40.441312Z 0 [Note] /usr/sbin/mysqld (mysqld 5.7.24-log) starting as process 2222 ...\r\n2018-11-28T13:04:40.442437Z 0 [Warning] Duplicate ignore-db-dir directory name 'lost+found' found in the config file(s). Ignoring the duplicate.\r\n2018-11-28T13:04:40.442470Z 0 [Note] --initialize specifed on an existing data directory.\r\n....",
"created": "2018-11-28T14:32:03.000Z",
"filename": "/var/log/mysql/mysqld.log",
"hostname": "10.35.112.97",
"length": 18952
},
"requestStatus": "ok",
"total": 1
}
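The two calls above can be chained: list the collected files first, then fetch each one. A minimal sketch that builds the getcollectedlog request bodies from a listcollectedlogs reply (the sample entries are abbreviated from the reply above; a real client would POST each body to the same logger endpoint):

```python
import json

# Build one getcollectedlog request body per file returned by
# listcollectedlogs. The sample "files" entries mirror the reply
# shown above.
files = [
    {"hostname": "10.35.112.97", "filename": "/var/lib/proxysql/proxysql.log"},
    {"hostname": "10.35.112.97", "filename": "/var/log/mysql/mysqld.log"},
]

bodies = [
    json.dumps({
        "operation": "getcollectedlog",
        "hostname": entry["hostname"],
        "filename": entry["filename"],
    })
    for entry in files
]

for body in bodies:
    print(body)
```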
The NDB Cluster Log API
Description: Returns the collected NDB cluster logs (from NDB Management instances) in reverse chronological order (from new to old).
Operation: getClusterLog
Arguments:
- limit: the number of returned items (for pagination)
- offset: the offset of the returned items (for pagination)
- nodeid: if specified, filter by nodeid
- severity: filter by severity
- event: filter by event
$ curl -XPOST -d '{"operation":"getClusterLog","limit":4}' 'http://localhost:9500/29/logger?token=kJyeypsjxeFHDkhM'
{
"cc_timestamp": 1552399549,
"data": [
{
"created": "2019-03-12T12:41:53.000Z",
"event": "StartCompleted",
"id": 70,
"loglevel": 1,
"message": "Node 4: Start Completed version 7.5.12",
"nodeid": 0,
"severity": "INFO",
"source_nodeid": 4
},
{
"created": "2019-03-12T12:41:53.000Z",
"event": "StartPhaseCompleted",
"id": 69,
"loglevel": 4,
"message": "Node 4: StartPhase completed phase = 101 type = 2",
"nodeid": 0,
"severity": "INFO",
"source_nodeid": 4
},
{
"created": "2019-03-12T12:41:51.000Z",
"event": "ConnectedApiVersion",
"id": 68,
"loglevel": 8,
"message": "Node 4: ConnectedApiVersion connected node 11 , version 7.5.12",
"nodeid": 0,
"severity": "INFO",
"source_nodeid": 4
},
{
"created": "2019-03-12T12:41:50.000Z",
"event": "ConnectedApiVersion",
"id": 67,
"loglevel": 8,
"message": "Node 4: ConnectedApiVersion connected node 12 , version 7.5.12",
"nodeid": 0,
"severity": "INFO",
"source_nodeid": 4
} ],
"requestStatus": "ok",
"total": 70
}
The config API
/${CLUSTERID}/config
NOTE: This API is only available from cmon >= 1.2.10.
These RPC methods are provided for the UI, for viewing and editing node configuration files.
list
List the (collected) configuration files of the cluster.
curl -XPOST -d'{"operation":"list"}' 'http://localhost:9500/14/config'
{
"cc_timestamp": 1435239570,
"data": [
{
"crc": -1563075013,
"filename": "postgresql.conf",
"hasChange": false,
"hostId": 15,
"hostname": "192.168.33.121",
"path": "/var/lib/pgsql/data/postgresql.conf",
"size": 22048,
"timestamp": 1435239001
},
{
"crc": -988904738,
"filename": "postgresql.conf",
"hasChange": false,
"hostId": 18,
"hostname": "192.168.33.122",
"path": "/var/lib/pgsql/data/postgresql.conf",
"size": 21130,
"timestamp": 1435239001
},
{
"crc": 352626019,
"filename": "recovery.conf",
"hasChange": false,
"hostId": 18,
"hostname": "192.168.33.122",
"path": "/var/lib/pgsql/data/recovery.conf",
"size": 155,
"timestamp": 1435239001
} ],
"requestStatus": "ok",
"total": 3
}
variables
Returns the currently set sections and variables, and their values, in a config file.
Arguments:
- hostId, hostname: specify one of these if you are looking for a specific host's configuration
- showAllHosts: (defaults to false) if set, all hosts will be returned even if no config files are available for them (because they were not fetched, or for any other reason).
$ curl -XPOST -d'{"operation":"variables","hostname":"192.168.33.123"}' 'http://localhost:9500/6/config'
{
"cc_timestamp": 1436936500,
"data": [
{
"hostId": 127,
"hostname": "192.168.33.123",
"variables": [
{
"filepath": "/etc/my.cnf",
"linenumber": 2,
"section": "mysqld",
"value": "/var/lib/mysql",
"variablename": "datadir"
},
{
"filepath": "/etc/my.cnf",
"linenumber": 3,
"section": "mysqld",
"value": "/var/lib/mysql/mysql.sock",
"variablename": "socket"
},
{
"filepath": "/etc/my.cnf",
"linenumber": 5,
"section": "mysqld",
"value": "0",
"variablename": "symbolic-links"
},
{
"filepath": "/etc/my.cnf",
"linenumber": 13,
"section": "mysqld_safe",
"value": "/var/log/mariadb/mariadb.log",
"variablename": "log-error"
},
{
"filepath": "/etc/my.cnf",
"linenumber": 14,
"section": "mysqld_safe",
"value": "/var/run/mariadb/mariadb.pid",
"variablename": "pid-file"
},
{
"filepath": "/etc/my.cnf.d/server.cnf",
"linenumber": 13,
"section": "mysqld",
"value": "0.0.0.0",
"variablename": "bind-address"
} ]
} ],
"requestStatus": "ok",
"total": 1
}
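A client consuming this reply often needs the variables keyed by file and variable name rather than as a flat list. A minimal sketch of re-keying the reply's "variables" array (the sample entries are abbreviated from the reply above):

```python
from collections import defaultdict

# Re-key the flat "variables" list from the reply above into
# {filepath: {variablename: value}} for easier lookup.
variables = [
    {"filepath": "/etc/my.cnf", "section": "mysqld",
     "variablename": "datadir", "value": "/var/lib/mysql"},
    {"filepath": "/etc/my.cnf.d/server.cnf", "section": "mysqld",
     "variablename": "bind-address", "value": "0.0.0.0"},
]

by_file = defaultdict(dict)
for var in variables:
    by_file[var["filepath"]][var["variablename"]] = var["value"]

print(by_file["/etc/my.cnf"]["datadir"])  # → /var/lib/mysql
```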
contents
Get the contents of one or more configuration file(s).
The 'filename' argument must be specified; optionally, the caller can also specify 'hostname' or 'hostId'.
curl -XPOST -d'{"operation":"contents","filename":"postgresql.conf","hostId":5}' 'http://localhost:9500/14/config'
And an example reply (the contents are truncated here for brevity):
{
"cc_timestamp": 1423215173,
"data": [
{
"contents": "#restart_after_crash = on\r\nlisten_address = '127.0.0.1'\r\n",
"crc": 1910057875,
"filename": "postgresql.conf",
"hasChange": true,
"hostId": 5,
"hostname": "192.168.33.99",
"size": 21274,
"timestamp": 1425289335
} ],
"requestStatus": "ok",
"total": 1
}
edit
Perform an edit action on one or more config files. The same filtering is available as for the 'contents' operation: 'filename' must be specified and the other arguments are optional.
add
With this action you can add new configuration entries to the config file(s).
Arguments:
- section (optional): the config file section to be edited (think of the [section] header)
- key: the configuration name key (for example: ssl_cert_file)
- value: and the value to be used for this new configuration entry
curl -XPOST -d'{"operation":"edit","filename":"postgresql.conf","hostId":5,"action":"add","key":"ssl_cert_file","value":"/etc/certfile.crt"}' 'http://localhost:9500/14/config'
{
"cc_timestamp": 1423215692,
"requestStatus": "ok"
}
In this example a new line will be added to the config file with the following content:
ssl_cert_file=/etc/certfile.crt\n\n
change
With this action you can change the value(s) of the configuration file(s) entry(ies).
Arguments:
- section (optional): the config file section to be edited (think of the [section] header)
- key: the configuration name key (for example: ssl_cert_file)
- value: the new value to be set for the key
curl -XPOST -d'{"operation":"edit","filename":"postgresql.conf","hostId":5,"action":"change","key":"ssl_cert_file","value":"/etc/mycert.crt"}' 'http://localhost:9500/14/config'
{
"cc_timestamp": 1423215692,
"requestStatus": "ok"
}
In this example the 'ssl_cert_file' configuration key value will be changed to the new value:
ssl_cert_file=/etc/mycert.crt\n\n
disable
This action disables, i.e. comments out, an unused/unneeded configuration key.
Arguments:
- section (optional): the config file section to be edited (think of the [section] header)
- key: the configuration name key (for example: ssl_cert_file)
curl -XPOST -d'{"operation":"edit","filename":"postgresql.conf","hostId":5,"action":"disable","key":"ssl_cert_file"}' 'http://localhost:9500/14/config'
{
"cc_timestamp": 1423215692,
"requestStatus": "ok"
}
In this example the 'ssl_cert_file' configuration key value will be commented out:
# ssl_cert_file=/etc/mycert.crt\n\n
setContent
This operation allows the web UI to replace the whole contents of a configuration file.
Arguments:
- hostname/hostId : one of these must be specified
- filename: the filename to be changed
- content: the contents of the file
- export_config: (bool) when enabled, the config will also be saved directly on the node
An example:
curl -XPOST -d'{"operation":"setcontent","hostId":3,"filename":"postgresql.conf","content":"hello=world"}' 'http://localhost:9500/14/config'
{
"cc_timestamp": 1425982204,
"requestStatus": "ok"
}
curl -XPOST -d'{"operation":"contents","filename":"postgresql.conf","hostId":3}' 'http://localhost:9500/14/config'
{
"cc_timestamp": 1425982219,
"data": [
{
"contents": "hello=world",
"crc": -1308491584,
"filename": "postgresql.conf",
"hasChange": true,
"hostId": 3,
"hostname": "10.10.10.13",
"size": 11,
"timestamp": 1425982204
} ],
"requestStatus": "ok",
"total": 1
}
list_templates
Get the configuration template used by a particular vendor/version/cluster type.
Arguments:
- vendor (a supported vendor, may be omitted if the cluster exists) [STRING]
- version (a supported version, may be omitted if the cluster exists) [STRING]
- cluster_type (a supported cluster_type, may be omitted if the cluster exists) [STRING]
- db_role (in case of mongodb, pass shardsvr,configsvr,arbiter or mongos.)
If the cluster does not exist, then one must set the arguments for vendor, version, and cluster_type:
curl -XPOST -d'{"operation":"list_templates", "version":"10.2", "vendor": "mariadb", "cluster_type":"replication"}' 'http://localhost:9500/0/config'
{
"cc_timestamp": 1527011596,
"data":
{
"cluster_type": "replication",
"config_templates": [ "my.cnf.gtid_replication" ],
"vendor": "percona",
"version": "5.6"
},
"requestStatus": "ok",
"total": 1
}
Arguments (except "operation":"list_templates") may be omitted if the cluster already exists; the missing vendor, version, and cluster_type arguments will then be derived.
curl -XPOST -d'{"operation":"list_templates"}' 'http://localhost:9500/14/config'
Important: for mongo jobs, you may need to specify different kinds of templates:
- mongodb_conf_template: template for database nodes (role: shardsvr, configsvr, arbiter)
- mongos_conf_template: template for Mongos nodes (role: mongos)
To get the list of available templates for each DB role, pass the 'db_role' argument.
To list MongoDB database template(s) (for mongodb_conf_template job argument):
$ curl -XPOST -d'{"operation":"list_templates", "version":"3.2", "vendor": "percona", "cluster_type":"mongodb","db_role":"shardsvr"}' 'http://localhost:9500/0/config?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d'
{
"cc_timestamp": 1551952404,
"data":
{
"cluster_type": "mongodb",
"config_templates": [ "mongodb.conf.percona" ],
"sw_package": [ ],
"vendor": "percona",
"version": "3.2"
},
"requestStatus": "ok",
"total": 1
}
And to list MongoDB Mongos template(s) (for mongos_conf_template job argument):
$ curl -XPOST -d'{"operation":"list_templates", "version":"3.2", "vendor": "percona", "cluster_type":"mongodb","db_role":"mongos"}' 'http://localhost:9500/0/config?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d'
{
"cc_timestamp": 1551952474,
"data":
{
"cluster_type": "mongodb",
"config_templates": [ "mongos.conf.org" ],
"sw_package": [ ],
"vendor": "percona",
"version": "3.2"
},
"requestStatus": "ok",
"total": 1
}
The Spreadsheet API
/${CLUSTERID}/sheet
The JSON syntax of this interface is described in the Spreadsheets documentation.
The Jobs API
/${CLUSTERID}/job
createJob
The createJob RPC call is deprecated, please use createJobInstance instead.
To push a new job (to a specific cluster, or a generic one [use clusterId = 0 in the path]), send a JSON request in the following format, where the "job" value is a properly formatted JSON job specification (see the details in 'jobs_json_format.text'):
For auditing the UI (or other clients) should specify the following fields:
{
"operation": "createJob",
"ip": "${THE IP OF THE CLIENT/BROWSER}",
"username": "${THE USERNAME}",
"userid": "${THE USER ID, dcps}",
"job": { }
}
Look at this example of how a new MongoDB server can be installed by invoking a create_cluster job:
$ curl -X POST -H"Content-Type: application/json" -d '{
  "operation": "createJob",
  ...
  "job": { "command": "create_cluster",
    "job_data": { "type": "mongodb", "mongodb_hostname": "192.168.33.10",
      "mongodb_user": "root", "mongodb_password": "password",
      "mongodb_rs_name": "test_replica_set", "enable_mongodb_uninstall": 1,
      "ssh_port": 22, "ssh_user": "kedz",
      "ssh_keyfile": "/home/kedz/.ssh/id_rsa",
  ...
  http://localhost:9500/0/job
The interface reports back the 'jobId' in the following format:
{
"requestStatus": "ok",
"jobId": 24,
"status": "DEFINED"
}
createJobInstance
This call obsoletes the "createJob" RPC call. It provides a standard way to pass job information in a CmonJobInstance class object.
Example:
{
"operation" : "createJobInstance",
"job" :
{
"class_name": "CmonJobInstance",
"job_spec": "{\n}",
"status_text": "Waiting",
"title": "The title of the job.",
"user_name": "pipas",
"user_id": 42
}
}
{
"cc_timestamp": 1617969732,
"requestStatus": "ok",
"job":
{
"class_name": "CmonJobInstance",
"can_be_aborted": false,
"can_be_deleted": true,
"cluster_id": 200,
"created": "2021-04-09T12:02:12.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 3,
"job_spec": "{\n}",
"rpc_version": "1.0",
"status": "DEFINED",
"status_text": "Pending",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
}
}
It is also possible to create scheduled and recurring jobs using this RPC call by adding the following properties to the job instance object:
- scheduled: This field has to be a string representation of a date&time value. The job will be scheduled and executed only after the internal clock reaches the schedule date&time.
- recurrence: This can be used to create a recurring job, a job that is executed over and over again. The value of this property has to be a five-field crontab-style recurrence definition (e.g. "*/30 * * * *").
Please note that a job can not be scheduled and recurring at the same time (this is not implemented); the scheduling field has precedence over the recurrence.
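As a sketch, a recurring job request might be composed like this (the title and recurrence value are illustrative; the guard reflects the rule that a job can not be both scheduled and recurring):

```python
import json

# Compose a createJobInstance request for a recurring job. A job sets
# either "scheduled" or "recurrence", never both; the title and
# recurrence value here are illustrative.
job = {
    "class_name": "CmonJobInstance",
    "job_spec": "{\n}",
    "title": "A recurring job.",
    "recurrence": "*/30 * * * *",  # run every 30 minutes
}
assert "scheduled" not in job or "recurrence" not in job

request = {"operation": "createJobInstance", "job": job}
print(json.dumps(request))
```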
scheduleJobInstance
A call that can be used to create job instances that will be executed in the future. The passed job instance should have a "scheduled" field that holds the planned execution time, which should be in the future.
Arguments:
- recurrence: cron-like schedule string (m h dom mon dow); it may contain a TZ string (either the short name like CET/CEST, or an hour/min offset). Examples: TZ='-5:30' 18 21 * * *, TZ='CET' 1 0 1 1 *
Example:
{
"operation" : "scheduleJobInstance",
"job" :
{
"class_name": "CmonJobInstance",
"job_spec": "{\n}",
"status_text": "Waiting",
"title": "The title of the job.",
"user_name": "pipas",
"user_id": 42,
"scheduled": "2021-04-09T12:03:12.277Z"
}
}
{
"cc_timestamp": 1617969732,
"requestStatus": "ok",
"job":
{
"class_name": "CmonJobInstance",
"can_be_aborted": false,
"can_be_deleted": true,
"cluster_id": 200,
"created": "2021-04-09T12:02:12.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 2,
"job_spec": "{\n}",
"rpc_version": "1.0",
"scheduled": "2021-04-09T12:03:12.277Z",
"status": "SCHEDULED",
"status_text": "Scheduled",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
}
}
Note: this call is likely redundant; jobs can be scheduled using the normal "createJobInstance" call, please use that one instead.
getStatus
To fetch the current status of a job, push the following request:
curl -X POST -H"Content-Type: application/json" -d '{
  "operation": "getStatus", "jobId": 14 }' http://localhost:9500/0/job
You will get a JSon reply like this:
{
"exitCode": 0,
"jobId": 14,
"requestStatus": "ok",
"status": "FINISHED",
"statusText": "Job finished."
}
or an error message otherwise:
curl -X POST -H"Content-Type: application/json" -d '{"operation": "getStatus", "jobId": 99 }' http://localhost:9500/0/job
{
"errorString": "No such job.",
"requestStatus": "error"
}
curl -X POST -H"Content-Type: application/json" -d '{"operation": "getStatus", "jobId": 13 }' http://localhost:9500/55/job
{
"errorString": "Cluster 55 is not running.",
"requestStatus": "error"
}
getJobInstance
The "getJobInstance" RPC call obsoletes the previous "getStatus" call because it returns all the properties of a job, not just the status. In the future, when we add new properties to the jobs (e.g. a progress percent to be shown as a progress bar), this RPC call will handle them.
This RPC call is also available under the deprecated "getJob" name.
Example:
{
"operation": "getJob",
"job_id": 3
}
{
"cc_timestamp": 1617969732,
"requestStatus": "ok",
"job":
{
"class_name": "CmonJobInstance",
"can_be_aborted": false,
"can_be_deleted": true,
"cluster_id": 200,
"created": "2021-04-09T12:02:12.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 3,
"job_spec": "{\n}",
"rpc_version": "1.0",
"status": "DEFINED",
"status_text": "Pending",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
}
}
getJobInstances
The getJobInstances call supports the "limit" and "offset" arguments the same way the SQL syntax uses the LIMIT and OFFSET keywords. The default limit is 100.
Example:
{
"operation": "getJobInstances"
}
{
"cc_timestamp": 1617969732,
"requestStatus": "ok",
"total": 4,
"jobs":
[
{
"class_name": "CmonJobInstance",
"can_be_aborted": false,
"can_be_deleted": true,
"cluster_id": 200,
"created": "2021-04-09T12:02:12.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 4,
"job_spec": "{\n}",
"rpc_version": "1.0",
"status": "DEFINED",
"status_text": "Pending",
"title": "The second job.",
"user_id": 42,
"user_name": "pipas"
},
{
"class_name": "CmonJobInstance",
"can_be_aborted": false,
"can_be_deleted": true,
"cluster_id": 200,
"created": "2021-04-09T12:02:12.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 3,
"job_spec": "{\n}",
"rpc_version": "1.0",
"status": "DEFINED",
"status_text": "Pending",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
},
{
"class_name": "CmonJobInstance",
"can_be_aborted": false,
"can_be_deleted": true,
"cluster_id": 200,
"created": "2021-04-09T12:02:12.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 2,
"job_spec": "{\n}",
"rpc_version": "1.0",
"scheduled": "2021-04-09T12:03:12.277Z",
"status": "SCHEDULED",
"status_text": "Scheduled",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
},
{
"class_name": "CmonJobInstance",
"can_be_aborted": false,
"can_be_deleted": true,
"cluster_id": 200,
"created": "2021-04-09T12:02:12.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 1,
"job_spec": "{\n \"group_id\": 1,\n \"group_name\": \"admins\",\n \"user_id\": 0,\n \"user_name\": \"\"\n}",
"rpc_version": "1.0",
"status": "DEFINED",
"status_text": "Pending",
"title": "Unknown Command",
"user_id": 1,
"user_name": "system"
}
]
}
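Since "limit" and "offset" behave like SQL LIMIT/OFFSET, a client can page through all job instances. A minimal sketch of computing the request body for each page (the total and page size are illustrative; a real client would take "total" from the first reply):

```python
# Compute the request bodies needed to page through getJobInstances,
# using "limit"/"offset" like SQL LIMIT/OFFSET. "total" would normally
# come from the first reply; here it is illustrative.
def page_requests(total, limit):
    """Yield one getJobInstances request body per page."""
    for offset in range(0, total, limit):
        yield {"operation": "getJobInstances",
               "limit": limit, "offset": offset}

pages = list(page_requests(total=4, limit=2))
print(len(pages))          # → 2
print(pages[1]["offset"])  # → 2
```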
deleteJobInstance
The "deleteJobInstance" call can be used to delete job instances that are not currently executing.
Example:
{
"operation": "deleteJobInstance",
"job_id": 5
}
{
"cc_timestamp": 1617969742,
"requestStatus": "ok"
}
getActiveJobInstances
The getActiveJobInstances RPC call is similar to getJobInstances, but only returns job instances that are not yet finished or aborted. It supports the "limit" and "offset" arguments the same way the SQL syntax uses the LIMIT and OFFSET keywords; the default limit is 100.
Example:
{
"operation": "getActiveJobInstances"
}
{
"cc_timestamp": 1617969732,
"requestStatus": "ok",
"total": 4,
"jobs":
[
{
"class_name": "CmonJobInstance",
"can_be_aborted": false,
"can_be_deleted": true,
"cluster_id": 200,
"created": "2021-04-09T12:02:12.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 4,
"job_spec": "{\n}",
"rpc_version": "1.0",
"status": "DEFINED",
"status_text": "Pending",
"title": "The second job.",
"user_id": 42,
"user_name": "pipas"
},
{
"class_name": "CmonJobInstance",
"can_be_aborted": false,
"can_be_deleted": true,
"cluster_id": 200,
"created": "2021-04-09T12:02:12.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 3,
"job_spec": "{\n}",
"rpc_version": "1.0",
"status": "DEFINED",
"status_text": "Pending",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
},
{
"class_name": "CmonJobInstance",
"can_be_aborted": false,
"can_be_deleted": true,
"cluster_id": 200,
"created": "2021-04-09T12:02:12.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 2,
"job_spec": "{\n}",
"rpc_version": "1.0",
"scheduled": "2021-04-09T12:03:12.277Z",
"status": "SCHEDULED",
"status_text": "Scheduled",
"title": "The title of the job.",
"user_id": 42,
"user_name": "pipas"
},
{
"class_name": "CmonJobInstance",
"can_be_aborted": false,
"can_be_deleted": true,
"cluster_id": 200,
"created": "2021-04-09T12:02:12.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 1,
"job_spec": "{\n \"group_id\": 1,\n \"group_name\": \"admins\",\n \"user_id\": 0,\n \"user_name\": \"\"\n}",
"rpc_version": "1.0",
"status": "DEFINED",
"status_text": "Pending",
"title": "Unknown Command",
"user_id": 1,
"user_name": "system"
}
]
}
getJobMessages
This operation returns all the job messages related to the given 'jobId'; for the exact request and response format, see the following example:
1 curl -X POST -H"Content-Type: application/json" -d '{"operation": "getJobMessages", "jobId": 24 }' http://localhost:9500/0/job
{
"jobId": 24,
"messages": [
{
"exitCode": 0,
"id": 732,
"message": "Checking job parameters.",
"time": "2014-08-29 16:34:10"
},
{
"exitCode": 0,
"id": 733,
"message": "Check if host is already exist in other cluster.",
"time": "2014-08-29 16:34:10"
},
{
"exitCode": 1,
"id": 734,
"message": "Host (192.168.33.10) is already in an other cluster.",
"time": "2014-08-29 16:34:10"
} ],
"requestStatus": "ok"
}
getJobLog
The "getJobLog" RPC call can be used to simultaneously access the properties of the specified job and its messages. This call is an enhanced version of the "gotjobmessages" call.
curl -X POST -H"Content-Type: application/json" -d'{ "operation": "getjoblog", "token": "rBf51gA3NZgJgys6", "jobId":"39287" }' http:
Example:
{
"job_id": "19",
"limit": 2,
"offset": 5,
"operation": "getjoblog"
}
{
"cc_timestamp": 1617970000,
"requestStatus": "ok",
"job":
{
"class_name": "CmonJobInstance",
"can_be_aborted": false,
"can_be_deleted": true,
"cluster_id": 0,
"created": "2021-04-09T12:06:40.000Z",
"exit_code": 0,
"group_id": 1,
"group_name": "admins",
"job_id": 19,
"job_spec": "{ }",
"rpc_version": "1.0",
"status": "DEFINED",
"status_text": "Pending",
"title": "Unknown Command",
"user_id": 1,
"user_name": "system"
},
"messages":
[
{
"class_name": "CmonJobMessage",
"created": "2021-04-09T12:06:40.000Z",
"file_name": "ut_cmoncommandhandler.cpp",
"job_id": 19,
"line_number": 1882,
"message_id": 172,
"message_status": "JOB_SUCCESS",
"message_text": "Test message 04."
},
{
"class_name": "CmonJobMessage",
"created": "2021-04-09T12:06:40.000Z",
"file_name": "ut_cmoncommandhandler.cpp",
"job_id": 19,
"line_number": 1882,
"message_id": 171,
"message_status": "JOB_SUCCESS",
"message_text": "Test message 03."
}
]
}
The properties of both the jobs and the job messages may change in the future; we may add new properties to implement new features, but the general structure of the reply message should not need to change.
getJobStatistics
The getJobStatistics call returns the number of jobs in every state, so the caller can determine how many jobs remain to be processed.
Example:
{
"operation": "getJobStatistics"
}
{
"cc_timestamp": 1617969732,
"cluster_id": 200,
"requestStatus": "ok",
"statistics":
{
"class_name": "CmonJobStatistics",
"by_state":
{
"ABORTED": 0,
"DEFINED": 3,
"DEQUEUED": 0,
"FAILED": 0,
"FINISHED": 0,
"RUNNING": 0,
"SCHEDULED": 1
}
}
}
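A caller can derive the number of outstanding jobs from the "by_state" map in the reply. The following is a minimal sketch; the choice to treat DEFINED, SCHEDULED, DEQUEUED and RUNNING as not-yet-finished is the author's assumption, not something the API mandates:

```python
def outstanding_jobs(statistics):
    """Count jobs that are not yet done from a getJobStatistics reply's
    "statistics" object (assumed pending states listed below)."""
    pending_states = ("DEFINED", "SCHEDULED", "DEQUEUED", "RUNNING")
    by_state = statistics["by_state"]
    return sum(by_state.get(state, 0) for state in pending_states)

# The "statistics" object from the example reply above:
reply_statistics = {
    "class_name": "CmonJobStatistics",
    "by_state": {
        "ABORTED": 0, "DEFINED": 3, "DEQUEUED": 0,
        "FAILED": 0, "FINISHED": 0, "RUNNING": 0, "SCHEDULED": 1,
    },
}
print(outstanding_jobs(reply_statistics))  # → 4
```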
getJobs
Gets the job list of a specific cluster. NOTE: the jobs are returned sorted by jobId in descending order.
Arguments:
- limit: (optional) return only the specified number of the latest jobs
- returnfrom: (optional) if this UNIX timestamp is specified, cmon returns only the jobs newer than this time.
Please note that the following example doesn't have any jobs to return:
curl -X POST -H"Content-Type: application/json" -d'{ "operation": "getJobs", "limit": 2 }' http://localhost:9500/4/job
{
"cc_timestamp": 1432721311,
"jobs": [
{
"exitCode": 0,
"ip": "127.0.0.1",
"jobId": 1467,
"jobStr": "{\"command\":\"restore_backup\",\"job_data\":{\"backupid\":\"20\",\"stop_cluster\":false}}",
"status": "FINISHED",
"time": 1432721296,
"userid": 1,
"username": "Admin"
},
{
"exitCode": 1,
"ip": "127.0.0.1",
"jobId": 1466,
"jobStr": "{\"command\":\"backup\",\"job_data\":{\"hostname\":\"192.168.33.122\",\"backupdir\":\"/tmp/backups\",\"cc_storage\":\"0\",\"compression\":\"1\",\"port\":\"5432\"}}",
"status": "FAILED",
"time": 1432721284,
"userid": 1,
"username": "Admin"
} ],
"requestStatus": "ok"
}
The returned "jobs" item is JSon list of "job" objects, a job "object" with the following syntax:
{
"exitCode": 1,
"jobId": 23,
"jobStr": "{\"command\": \"create_cluster\",
\"job_data\": {
\"api_id\": 1,
\"enable_mongodb_uninstall\": 1,
\"mongodb_hostname\": \"192.168.33.10\",
\"mongodb_password\": \"password\",
\"mongodb_rs_name\": \"test_replica_set\",
\"mongodb_user\": \"root\",
\"ssh_keyfile\": \"/home/kedz/.ssh/id_rsa\",
\"ssh_port\": 22,
\"ssh_user\": \"kedz\",
\"type\": \"mongodb\"
}}",
"status": "FAILED",
"time": "2014-08-29 16:33:25"
}
NOTE that this contains the job specification in string format instead of real JSON, because the cmon jobs table may contain syntactically incorrect job specifications or job specifications in the old format.
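Because "jobStr" may hold a syntactically broken or legacy-format job specification, clients should decode it defensively. A minimal sketch:

```python
import json

def parse_job_spec(job_str):
    """Try to decode a jobStr field from a getJobs reply; return None when
    it is not valid JSON (e.g. a legacy-format or broken jobspec)."""
    try:
        return json.loads(job_str)
    except (json.JSONDecodeError, TypeError):
        return None

spec = parse_job_spec('{"command": "restore_backup", "job_data": {"backupid": "20"}}')
print(spec["command"] if spec else "unparsable jobspec")  # → restore_backup
```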
The Alarm API
/${CLUSTERID}/alarm
The "getStatistics" Request
The "getStatistics" call can be used to find how many active alarms are there. The alarms that has set to be "ignored" are not counted.
The request can also contain two dates to filter the alarms by creation date and/or report dates. If the dates are provided in the request alarms that are created/reported before the set dates are not counted.
Example:
{
"operation" : "getStatistics",
"created_after" : "2016-05-08T10:10:45.+0200Z",
"reported_after": "2016-06-07T10:10:45.+0200Z",
"cluster_id" : 200
}
{
"cc_timestamp": 1617969742,
"requestStatus": "ok",
"alarm_statistics":
[
{
"class_name": "CmonAlarmStatistics",
"cluster_id": 200,
"created_after": "2016-05-08T10:10:45.000Z",
"critical": 4,
"reported_after": "2016-05-08T10:10:45.000Z",
"warning": 1
}
]
}
This call also works with multiple cluster IDs in the request.
{
"operation" : "getStatistics",
"created_after" : "2016-05-08T10:10:45.+0200Z",
"reported_after": "2016-06-07T10:10:45.+0200Z",
"cluster_ids" : [ 200, 201, 202 ]
}
{
"cc_timestamp": 1617969742,
"requestStatus": "ok",
"alarm_statistics":
[
{
"class_name": "CmonAlarmStatistics",
"cluster_id": 200,
"created_after": "2016-05-08T10:10:45.000Z",
"critical": 4,
"reported_after": "2016-05-08T10:10:45.000Z",
"warning": 1
},
{
"class_name": "CmonAlarmStatistics",
"cluster_id": 201,
"created_after": "2016-05-08T10:10:45.000Z",
"critical": 0,
"reported_after": "2016-05-08T10:10:45.000Z",
"warning": 0
},
{
"class_name": "CmonAlarmStatistics",
"cluster_id": 202,
"created_after": "2016-05-08T10:10:45.000Z",
"critical": 0,
"reported_after": "2016-05-08T10:10:45.000Z",
"warning": 0
}
]
}
The "getAlarms" Request
The "getAlarms" call can be used to retrieve the active alarms from the backend. This call works with one cluster ID (aka "cluster_id") and also with multiple cluster IDs (aka "cluster_ids").
Example (one cluster):
{
"operation" : "getAlarms",
"created_after" : "2016-05-08T10:10:45.+0200Z",
"reported_after": "2016-06-07T10:10:45.+0200Z",
"cluster_id" : 200
}
{
"cc_timestamp": 1617969742,
"cluster_id": 200,
"requestStatus": "ok",
"alarms":
[
{
"class_name": "CmonAlarm",
"alarm_id": 5,
"cluster_id": 200,
"component": 5,
"component_name": "ClusterRecovery",
"counter": 1,
"created": "2021-04-09T12:02:09.000Z",
"ignored": 0,
"measured": 0,
"message": "Galera node recovery failed. Permanent error.",
"recommendation": "Check mysql error.log file.",
"reported": "2021-04-09T12:02:09.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Galera node recovery failed",
"type": 3000,
"type_name": "GaleraNodeRecoveryFail"
},
{
"class_name": "CmonAlarm",
"alarm_id": 2,
"cluster_id": 200,
"component": 6,
"component_name": "Node",
"counter": 1,
"created": "2021-04-09T12:01:54.000Z",
"ignored": 0,
"measured": 0,
"message": "The cmon lost contact to the management server(s).",
"recommendation": "Check the connection and/or star the management servers.",
"reported": "2021-04-09T12:01:54.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "The cmon lost contact to the management server(s)",
"type": 5005,
"type_name": "NdbMgmdFailure"
},
{
"class_name": "CmonAlarm",
"alarm_id": 3,
"cluster_id": 200,
"component": 7,
"component_name": "Host",
"counter": 1,
"created": "2021-04-09T12:01:58.000Z",
"host_id": 1,
"hostname": "127.0.0.1",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: Server 127.0.0.1 reports: 99.05 percent swap space is used. Swapping is decremental for database performance.<br/>\n<pre>+---------+---------------------+------------+------------+------------+---------+---------+-----------------+\n| PID | USER | VIRT | RES | SHR | %CPU | %MEM | COMMAND |\n| 166192 | blockbook-bitcoin | 31.13 GiB | 28.16 GiB | 10.44 MiB | 0.00% | 14.91% | blockbook |\n| 445467 | kedz | 52.79 GiB | 26.79 GiB | 79.96 MiB | 0.00% | 14.19% | geth |\n| 419369 | kedz | 49.24 GiB | 22.90 GiB | 14.14 MiB | 0.00% | 12.13% | geth_bsc |\n| 6382 | mysql | 27.04 GiB | 14.28 GiB | 15.01 MiB | 0.00% | 7.56% | mysqld |\n| 1547179 | kedz | 19.10 GiB | 9.65 GiB | 10.59 MiB | 0.00% | 5.11% | openethereum |\n| 2062412 | blockbook-ethereum | 8.08 GiB | 4.66 GiB | 13.10 MiB | 0.00% | 2.47% | blockbook |\n| 376432 | kedz | 7.89 GiB | 2.14 GiB | 209.55 MiB | 0.00% | 1.13% | bitcoind |\n| 1953468 | kedz | 20.27 GiB | 1.59 GiB | 3.41 MiB | 0.00% | 0.84% | btcd |\n| 185249 | root | 19.31 GiB | 1.24 GiB | 25.66 MiB | 0.00% | 0.66% | cmon |\n| 633594 | prometheus | 33.00 GiB | 1.16 GiB | 238.75 MiB | 0.00% | 0.61% | prometheus |\n| 1316693 | root | 928.82 MiB | 659.18 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 1346934 | root | 864.82 MiB | 659.05 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 1317336 | root | 864.82 MiB | 659.03 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 1315753 | root | 864.82 MiB | 658.96 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 47262 | usbmux | 1.94 GiB | 578.92 MiB | 6.05 MiB | 0.00% | 0.30% | mysqld |\n| 6113 | debian-transmission | 739.82 MiB | 279.00 MiB | 3.15 MiB | 0.00% | 0.14% | transmission-da |\n| 14364 | root | 4.21 GiB | 218.11 MiB | 7.94 MiB | 0.00% | 0.11% | lxd |\n| 80404 | kedz | 195.86 MiB | 195.52 MiB | 89.27 MiB | 0.00% | 0.10% | vault |\n| 46998 | root | 1.23 GiB | 175.07 MiB | 6.89 MiB | 0.00% | 0.09% | cmon |\n| 379464 | prometheus | 4.82 GiB | 148.64 MiB | 8.75 MiB | 0.00% | 0.08% | 
process_exporte |\n+---------+---------------------+------------+------------+------------+---------+---------+-----------------+\n\n</pre>",
"recommendation": "Increase RAM, tune swappiness, reduce memory footprint of processes running on the node. You can also reboot the server as a temporary countermeasure.",
"reported": "2021-04-09T12:01:58.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Host is swapping",
"type": 10000,
"type_name": "HostSwapping"
},
{
"class_name": "CmonAlarm",
"alarm_id": 4,
"cluster_id": 200,
"component": 7,
"component_name": "Host",
"counter": 1,
"created": "2021-04-09T12:01:59.000Z",
"host_id": 2,
"hostname": "127.0.0.2",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.2 reports: Server 127.0.0.2 reports: 99.05 percent swap space is used. Swapping is decremental for database performance.<br/>\n<pre>+---------+---------------------+------------+------------+------------+---------+---------+-----------------+\n| PID | USER | VIRT | RES | SHR | %CPU | %MEM | COMMAND |\n| 166192 | blockbook-bitcoin | 31.13 GiB | 28.16 GiB | 10.44 MiB | 0.00% | 14.91% | blockbook |\n| 445467 | kedz | 52.79 GiB | 26.79 GiB | 79.96 MiB | 0.00% | 14.19% | geth |\n| 419369 | kedz | 49.24 GiB | 22.90 GiB | 14.14 MiB | 0.00% | 12.13% | geth_bsc |\n| 6382 | mysql | 27.04 GiB | 14.28 GiB | 15.01 MiB | 0.00% | 7.56% | mysqld |\n| 1547179 | kedz | 19.10 GiB | 9.65 GiB | 10.60 MiB | 0.00% | 5.11% | openethereum |\n| 2062412 | blockbook-ethereum | 8.08 GiB | 4.66 GiB | 13.10 MiB | 0.00% | 2.47% | blockbook |\n| 376432 | kedz | 7.89 GiB | 2.14 GiB | 209.61 MiB | 0.00% | 1.13% | bitcoind |\n| 1953468 | kedz | 20.27 GiB | 1.59 GiB | 3.41 MiB | 0.00% | 0.84% | btcd |\n| 185249 | root | 19.31 GiB | 1.24 GiB | 25.66 MiB | 0.00% | 0.66% | cmon |\n| 633594 | prometheus | 33.00 GiB | 1.16 GiB | 238.75 MiB | 0.00% | 0.61% | prometheus |\n| 1316693 | root | 928.82 MiB | 659.18 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 1346934 | root | 864.82 MiB | 659.05 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 1317336 | root | 864.82 MiB | 659.03 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 1315753 | root | 864.82 MiB | 658.96 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 47262 | usbmux | 1.94 GiB | 578.92 MiB | 6.05 MiB | 0.00% | 0.30% | mysqld |\n| 6113 | debian-transmission | 739.82 MiB | 279.00 MiB | 3.15 MiB | 0.00% | 0.14% | transmission-da |\n| 14364 | root | 4.21 GiB | 218.11 MiB | 7.94 MiB | 0.00% | 0.11% | lxd |\n| 80404 | kedz | 195.86 MiB | 195.52 MiB | 89.27 MiB | 0.00% | 0.10% | vault |\n| 46998 | root | 1.23 GiB | 175.07 MiB | 6.89 MiB | 0.00% | 0.09% | cmon |\n| 379464 | prometheus | 4.82 GiB | 148.64 MiB | 8.75 MiB | 0.00% | 0.08% | 
process_exporte |\n+---------+---------------------+------------+------------+------------+---------+---------+-----------------+\n\n</pre>",
"recommendation": "Increase RAM, tune swappiness, reduce memory footprint of processes running on the node. You can also reboot the server as a temporary countermeasure.",
"reported": "2021-04-09T12:01:59.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Host is swapping",
"type": 10000,
"type_name": "HostSwapping"
},
{
"class_name": "CmonAlarm",
"alarm_id": 1,
"cluster_id": 200,
"component": 6,
"component_name": "Node",
"counter": 1,
"created": "2021-04-09T12:01:48.000Z",
"host_id": 1,
"hostname": "127.0.0.1",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: MySQL Server is not connected to data nodes.",
"recommendation": "Check firewall/security rules and ndb-connectstring.",
"reported": "2021-04-09T12:01:48.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "MySQL server is not connected to NDB",
"type": 5010,
"type_name": "MySqlClusterNotConnected"
}
]
}
Example (multiple clusters)
{
"operation" : "getAlarms",
"cluster_ids" : [ 200, 201 ]
}
{
"cc_timestamp": 1617969742,
"cluster_id": 200,
"requestStatus": "ok",
"alarms":
[
{
"class_name": "CmonAlarm",
"alarm_id": 5,
"cluster_id": 200,
"component": 5,
"component_name": "ClusterRecovery",
"counter": 1,
"created": "2021-04-09T12:02:09.000Z",
"ignored": 0,
"measured": 0,
"message": "Galera node recovery failed. Permanent error.",
"recommendation": "Check mysql error.log file.",
"reported": "2021-04-09T12:02:09.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Galera node recovery failed",
"type": 3000,
"type_name": "GaleraNodeRecoveryFail"
},
{
"class_name": "CmonAlarm",
"alarm_id": 2,
"cluster_id": 200,
"component": 6,
"component_name": "Node",
"counter": 1,
"created": "2021-04-09T12:01:54.000Z",
"ignored": 0,
"measured": 0,
"message": "The cmon lost contact to the management server(s).",
"recommendation": "Check the connection and/or star the management servers.",
"reported": "2021-04-09T12:01:54.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "The cmon lost contact to the management server(s)",
"type": 5005,
"type_name": "NdbMgmdFailure"
},
{
"class_name": "CmonAlarm",
"alarm_id": 3,
"cluster_id": 200,
"component": 7,
"component_name": "Host",
"counter": 1,
"created": "2021-04-09T12:01:58.000Z",
"host_id": 1,
"hostname": "127.0.0.1",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: Server 127.0.0.1 reports: 99.05 percent swap space is used. Swapping is decremental for database performance.<br/>\n<pre>+---------+---------------------+------------+------------+------------+---------+---------+-----------------+\n| PID | USER | VIRT | RES | SHR | %CPU | %MEM | COMMAND |\n| 166192 | blockbook-bitcoin | 31.13 GiB | 28.16 GiB | 10.44 MiB | 0.00% | 14.91% | blockbook |\n| 445467 | kedz | 52.79 GiB | 26.79 GiB | 79.96 MiB | 0.00% | 14.19% | geth |\n| 419369 | kedz | 49.24 GiB | 22.90 GiB | 14.14 MiB | 0.00% | 12.13% | geth_bsc |\n| 6382 | mysql | 27.04 GiB | 14.28 GiB | 15.01 MiB | 0.00% | 7.56% | mysqld |\n| 1547179 | kedz | 19.10 GiB | 9.65 GiB | 10.59 MiB | 0.00% | 5.11% | openethereum |\n| 2062412 | blockbook-ethereum | 8.08 GiB | 4.66 GiB | 13.10 MiB | 0.00% | 2.47% | blockbook |\n| 376432 | kedz | 7.89 GiB | 2.14 GiB | 209.55 MiB | 0.00% | 1.13% | bitcoind |\n| 1953468 | kedz | 20.27 GiB | 1.59 GiB | 3.41 MiB | 0.00% | 0.84% | btcd |\n| 185249 | root | 19.31 GiB | 1.24 GiB | 25.66 MiB | 0.00% | 0.66% | cmon |\n| 633594 | prometheus | 33.00 GiB | 1.16 GiB | 238.75 MiB | 0.00% | 0.61% | prometheus |\n| 1316693 | root | 928.82 MiB | 659.18 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 1346934 | root | 864.82 MiB | 659.05 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 1317336 | root | 864.82 MiB | 659.03 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 1315753 | root | 864.82 MiB | 658.96 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 47262 | usbmux | 1.94 GiB | 578.92 MiB | 6.05 MiB | 0.00% | 0.30% | mysqld |\n| 6113 | debian-transmission | 739.82 MiB | 279.00 MiB | 3.15 MiB | 0.00% | 0.14% | transmission-da |\n| 14364 | root | 4.21 GiB | 218.11 MiB | 7.94 MiB | 0.00% | 0.11% | lxd |\n| 80404 | kedz | 195.86 MiB | 195.52 MiB | 89.27 MiB | 0.00% | 0.10% | vault |\n| 46998 | root | 1.23 GiB | 175.07 MiB | 6.89 MiB | 0.00% | 0.09% | cmon |\n| 379464 | prometheus | 4.82 GiB | 148.64 MiB | 8.75 MiB | 0.00% | 0.08% | 
process_exporte |\n+---------+---------------------+------------+------------+------------+---------+---------+-----------------+\n\n</pre>",
"recommendation": "Increase RAM, tune swappiness, reduce memory footprint of processes running on the node. You can also reboot the server as a temporary countermeasure.",
"reported": "2021-04-09T12:01:58.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Host is swapping",
"type": 10000,
"type_name": "HostSwapping"
},
{
"class_name": "CmonAlarm",
"alarm_id": 4,
"cluster_id": 200,
"component": 7,
"component_name": "Host",
"counter": 1,
"created": "2021-04-09T12:01:59.000Z",
"host_id": 2,
"hostname": "127.0.0.2",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.2 reports: Server 127.0.0.2 reports: 99.05 percent swap space is used. Swapping is decremental for database performance.<br/>\n<pre>+---------+---------------------+------------+------------+------------+---------+---------+-----------------+\n| PID | USER | VIRT | RES | SHR | %CPU | %MEM | COMMAND |\n| 166192 | blockbook-bitcoin | 31.13 GiB | 28.16 GiB | 10.44 MiB | 0.00% | 14.91% | blockbook |\n| 445467 | kedz | 52.79 GiB | 26.79 GiB | 79.96 MiB | 0.00% | 14.19% | geth |\n| 419369 | kedz | 49.24 GiB | 22.90 GiB | 14.14 MiB | 0.00% | 12.13% | geth_bsc |\n| 6382 | mysql | 27.04 GiB | 14.28 GiB | 15.01 MiB | 0.00% | 7.56% | mysqld |\n| 1547179 | kedz | 19.10 GiB | 9.65 GiB | 10.60 MiB | 0.00% | 5.11% | openethereum |\n| 2062412 | blockbook-ethereum | 8.08 GiB | 4.66 GiB | 13.10 MiB | 0.00% | 2.47% | blockbook |\n| 376432 | kedz | 7.89 GiB | 2.14 GiB | 209.61 MiB | 0.00% | 1.13% | bitcoind |\n| 1953468 | kedz | 20.27 GiB | 1.59 GiB | 3.41 MiB | 0.00% | 0.84% | btcd |\n| 185249 | root | 19.31 GiB | 1.24 GiB | 25.66 MiB | 0.00% | 0.66% | cmon |\n| 633594 | prometheus | 33.00 GiB | 1.16 GiB | 238.75 MiB | 0.00% | 0.61% | prometheus |\n| 1316693 | root | 928.82 MiB | 659.18 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 1346934 | root | 864.82 MiB | 659.05 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 1317336 | root | 864.82 MiB | 659.03 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 1315753 | root | 864.82 MiB | 658.96 MiB | 14.19 MiB | 0.00% | 0.34% | ndbd |\n| 47262 | usbmux | 1.94 GiB | 578.92 MiB | 6.05 MiB | 0.00% | 0.30% | mysqld |\n| 6113 | debian-transmission | 739.82 MiB | 279.00 MiB | 3.15 MiB | 0.00% | 0.14% | transmission-da |\n| 14364 | root | 4.21 GiB | 218.11 MiB | 7.94 MiB | 0.00% | 0.11% | lxd |\n| 80404 | kedz | 195.86 MiB | 195.52 MiB | 89.27 MiB | 0.00% | 0.10% | vault |\n| 46998 | root | 1.23 GiB | 175.07 MiB | 6.89 MiB | 0.00% | 0.09% | cmon |\n| 379464 | prometheus | 4.82 GiB | 148.64 MiB | 8.75 MiB | 0.00% | 0.08% | 
process_exporte |\n+---------+---------------------+------------+------------+------------+---------+---------+-----------------+\n\n</pre>",
"recommendation": "Increase RAM, tune swappiness, reduce memory footprint of processes running on the node. You can also reboot the server as a temporary countermeasure.",
"reported": "2021-04-09T12:01:59.000Z",
"severity": 2,
"severity_name": "ALARM_CRITICAL",
"title": "Host is swapping",
"type": 10000,
"type_name": "HostSwapping"
},
{
"class_name": "CmonAlarm",
"alarm_id": 1,
"cluster_id": 200,
"component": 6,
"component_name": "Node",
"counter": 1,
"created": "2021-04-09T12:01:48.000Z",
"host_id": 1,
"hostname": "127.0.0.1",
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: MySQL Server is not connected to data nodes.",
"recommendation": "Check firewall/security rules and ndb-connectstring.",
"reported": "2021-04-09T12:01:48.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "MySQL server is not connected to NDB",
"type": 5010,
"type_name": "MySqlClusterNotConnected"
}
]
}
The documentation of the CmonAlarm class contains the list of properties that are returned by this API.
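A client will typically want to split the returned alarms by severity before displaying them. The following is a minimal sketch over a getAlarms reply; the sample data below is abbreviated from the example above:

```python
from collections import defaultdict

def alarms_by_severity(reply):
    """Group the alarm titles of a getAlarms reply by severity_name."""
    groups = defaultdict(list)
    for alarm in reply.get("alarms", []):
        groups[alarm["severity_name"]].append(alarm["title"])
    return dict(groups)

reply = {
    "requestStatus": "ok",
    "alarms": [
        {"severity_name": "ALARM_CRITICAL", "title": "Host is swapping"},
        {"severity_name": "ALARM_WARNING",
         "title": "MySQL server is not connected to NDB"},
    ],
}
print(alarms_by_severity(reply))
```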
The "getAlarm" Request
The "getAlarm" returns information about one alarm identified by the alarm ID.
Example:
{
"operation" : "getAlarm",
"alarm_id" : 1
}
{
"cc_timestamp": 1617969742,
"requestStatus": "ok",
"alarm":
{
"class_name": "CmonAlarm",
"alarm_id": 1,
"cluster_id": 200,
"component": 6,
"component_name": "Node",
"counter": 1,
"created": "2021-04-09T12:01:48.000Z",
"host_id": 1,
"ignored": 0,
"measured": 0,
"message": "Server 127.0.0.1 reports: MySQL Server is not connected to data nodes.",
"recommendation": "Check firewall/security rules and ndb-connectstring.",
"reported": "2021-04-09T12:01:48.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "MySQL server is not connected to NDB",
"type": 5010,
"type_name": "MySqlClusterNotConnected"
}
}
The "ignoreAlarm" Request
The "ignoreAlarm" RPC call will set the alarm to be ignored. The alarm is identifyed by the alarm ID.
Example:
{
"operation" : "ignoreAlarm",
"alarm_id" : 1,
"ignore" : true
}
{
"cc_timestamp": 1617969742,
"requestStatus": "ok",
"alarm":
{
"class_name": "CmonAlarm",
"alarm_id": 1,
"cluster_id": 200,
"component": 6,
"component_name": "Node",
"counter": 1,
"created": "2021-04-09T12:02:22.000Z",
"host_id": 1,
"ignored": 1,
"measured": 0,
"message": "Server 127.0.0.1 reports: MySQL Server is not connected to data nodes.",
"recommendation": "Check firewall/security rules and ndb-connectstring.",
"reported": "2021-04-09T12:02:22.000Z",
"severity": 1,
"severity_name": "ALARM_WARNING",
"title": "MySQL server is not connected to NDB",
"type": 5010,
"type_name": "MySqlClusterNotConnected"
}
}
The Stats API
/${CLUSTERID}/stat
The "setHost" Request
The "setHost" call is a newer and better version of the "setHostAlias" call and so it renders that deprecated. The "setHost" can be used to set multiple properties of a host at once.
The CmonHost class and the inherited classes have a number of properties a few of them are writable from outside (e.g. by the UI). So not all of the properties can be changed using this call, only those that marked as publicly writable in the CmonHost class documentation. If a "setHost" request contains a reference to a property that is not writable from outside the whole request will be rejected.
Here is an example for the "setHost" request:
{
"operation": "setHost",
"hostname": "127.0.0.1",
"port": 3306,
"properties":
{
"alias": "TestAlias001",
"description": "This is a test host..."
}
}
This will change two properties of the host. The new property values are stored in the Cmon Database and a host change event is emitted about the changes.
The "addFileMonitoring" request
File monitoring is a feature the UI can use to continuously monitor the changes of assorted files on a specific host. Some files are monitored because the controller considers them important, and the UI itself can request the monitoring of files the user considers important.
File monitoring utilizes the RPC and the event system. The monitoring can be requested through the RPC, and the actual file changes are reported through the event system: the controller simply sends an event when something happens to the monitored files.
It is possible to monitor files that do not exist. In this case the controller will simply report a non-existing file status, and when the file is created the report will reflect that.
One important aspect is that there are two distinct types of file monitoring. File metadata monitoring reports when the file metadata changes: when the owner, the file size, or the modification date changes, an event will report that.
The other monitoring option is content monitoring. Content monitoring reports the metadata changes, but it also monitors the file content. This is what we like to call the "tail -f" feature, because it does exactly what the user would want when issuing the "tail -f" command on the host.
Content monitoring is costly: the controller has to fetch the file content every time the file changes, and in addition it has to check the file more frequently so that the user gets a "real-time" view of it. This is why content monitoring is only available on request and also has an expiration time. When the UI requests content monitoring of a specific file it gets a UUID, and that UUID has an expiration date. The request can be renewed using the UUID, but if the UI misses this opportunity the content monitoring will stop.
Multiple UI instances can have multiple UUIDs to monitor the same file. The content monitoring will expire when the last UUID expires.
Example:
{
"operation": "addFileMonitoring",
"hostname": "127.0.0.1",
"port": 3306,
"content_monitoring": true,
"full_path": "/var/log/mysql/mysqld.log"
}
{
"cc_timestamp": 1617969732,
"content_monitoring": true,
"full_path": "/var/log/mysql/mysqld.log",
"requestStatus": "ok",
"uuid": "52ef72aa-87ee-4dee-86ec-f5acf7f9f3c2"
}
This is a typical request to start content monitoring. It may have the following fields:
operation
For registering a file monitoring this field must be "addFileMonitoring".
hostname
The name of the host on which the file can be found.
port
Identifies the host on which the file can be found.
content_monitoring
Currently only the content monitoring can be registered using this call, so this field must be true.
full_path
The file monitoring can only be registered using the full path to identify the file, so this field must be a valid path, but the file does not need to exist at the time of the call.
uuid
When the caller wants to renew an existing content monitoring it should send the UUID it received for the initial request. If there is no previously received UUID, this field should not be sent, because a UUID that does not exist will trigger a failure.
The reply should contain the following fields:
content_monitoring
This field is true if the content monitoring was requested and the operation was successful. It is true even if the file itself does not exist: non-existing files can be monitored, events will be sent showing that the file does not exist, and the content will become available once the file is created.
full_path
The full path of the monitored file.
uuid
The UUID that identifies the request itself. The same UUID will be sent in the events, so the caller can detect when the monitoring request is about to expire and renew it in time.
requestStatus
The usual.
Should an error occur, the reply will look like this:
{
"operation": "addFileMonitoring",
"hostname": "127.0.0.1",
"port": 3306,
"content_monitoring": true,
"full_path": "/var/log/mysql/mysqld.log",
"uuid": "57c66973-51ff-4aec-29cd-baabf2fbe346"
}
{
"cc_timestamp": 1617969732,
"errorString": "UUID '57c66973-51ff-4aec-29cd-baabf2fbe346' not found.",
"requestStatus": "UnknownError"
}
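The registration/renewal flow implied above can be sketched as client-side logic. This is a hypothetical sketch that only builds the request payloads; the uuid value is taken from the earlier example reply:

```python
def build_monitoring_request(full_path, hostname, port, uuid=None):
    """Build an addFileMonitoring request; include the uuid field only
    when renewing an existing content monitoring registration, since an
    unknown UUID makes the controller reject the request."""
    request = {
        "operation": "addFileMonitoring",
        "hostname": hostname,
        "port": port,
        "content_monitoring": True,
        "full_path": full_path,
    }
    if uuid is not None:
        request["uuid"] = uuid
    return request

# Initial registration: no uuid field at all.
first = build_monitoring_request("/var/log/mysql/mysqld.log", "127.0.0.1", 3306)
# Renewal: reuse the uuid received in the reply to the first request.
renewal = build_monitoring_request("/var/log/mysql/mysqld.log", "127.0.0.1",
                                   3306, uuid="52ef72aa-87ee-4dee-86ec-f5acf7f9f3c2")
```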
And finally, here is a fragment of an event that actually delivers the result of the content monitoring. Please note that this is only part of a more complex event, showing information about a single monitored file:
...
{
"access": 432,
"changed": "2016-03-08T09:59:57.+0100Z",
"class_name": "CmonFile",
"content":
{
"as_string": "lugin 'INNODB_BUFFER_PAGE_LRU'\r\n...",
"end_index": 50878,
"start_index": 40517
},
"content_monitoring": [
{
"active": true,
"ends": "2016-03-08T10:05:47.+0100Z",
"uuid": "c4cb0c6b-12d1-3f2a-ffac-fab5cbd60322"
} ],
"content_monitoring_active": true,
"exists": true,
"full_path": "/var/log/mysql/mysqld.log",
"group": "mysql",
"hard_links": 1,
"host_name": "10.10.2.3",
"modified": "2016-03-08T09:59:57.+0100Z",
"size": 50878,
"used": "2016-03-08T09:59:07.+0100Z",
"user": "mysql"
}
...
Here is the list of the fields related to the content monitoring:
The "getFileMonitoring" request
Example:
{
"operation": "getFileMonitoring",
"hostname": "127.0.0.1",
"port": 3306
}
{
"cc_timestamp": 1617969732,
"requestStatus": "ok",
"total": 3,
"monitored_files":
[
{
"class_name": "CmonFile",
"access": 33188,
"changed": "2021-01-12T14:43:19.000Z",
"exists": true,
"full_path": "/etc/mysql/my.cnf",
"group": "root",
"hard_links": 0,
"host_name": "127.0.0.1",
"modified": "2021-01-12T14:43:19.000Z",
"size": 1131,
"used": "2021-01-12T14:43:19.000Z",
"user": "mysql"
},
{
"class_name": "CmonFile",
"content": "",
"exists": false,
"full_path": "/var/lib/mysql/stderr",
"host_name": "127.0.0.1"
},
{
"class_name": "CmonFile",
"content": "",
"content_monitoring_active": true,
"exists": false,
"full_path": "/var/log/mysql/mysqld.log",
"host_name": "127.0.0.1",
"content_monitoring":
[
{
"active": true,
"ends": "2021-04-09T12:12:12.000Z",
"uuid": "52ef72aa-87ee-4dee-86ec-f5acf7f9f3c2"
}
]
}
]
}
The "getMetaTypeInfo" request
So you received some reply, event, or result from the backend; it contains some structures that have the well-known "class_name" set to some string. The structure has a number of properties, and you want to know about those properties.
Some properties are obvious and some are well known; for example, hosts have host names, and that should of course be obvious. But what about the more complicated, less frequently used properties? Should your code contain hard-wired interpreters that understand how to visualize properties nobody knows? Of course not: this is what the "getMetaTypeInfo" request is for.
An example request to fetch all the CmonDiskInfo properties:
curl -XPOST -d '{"operation": "getMetaTypeInfo", "type-name": "CmonDiskInfo"}' 'http://localhost:9500/10/stat'
Example:
{
"operation": "getMetaTypeInfo",
"type-name": "CmonDiskInfo",
"property-name": "capacity, temperature-celsius"
}
{
"cc_timestamp": 1617969732,
"requestStatus": "ok",
"metatype_info":
[
{
"class_name": "CmonParamSpec",
"default_value": 0,
"description": "Disk size reported by the disk in bytes.",
"is_counter": false,
"is_public": true,
"is_writable": false,
"owner_type_name": "CmonDiskInfo",
"property_name": "capacity",
"short_ui_string": "Capacity",
"type_name": "Ulonglong",
"unit": "byte"
},
{
"class_name": "CmonParamSpec",
"default_value": 0,
"description": "The internal temperature of the disk.",
"is_counter": false,
"is_public": true,
"is_writable": false,
"owner_type_name": "CmonDiskInfo",
"property_name": "temperature-celsius",
"short_ui_string": "Temperature",
"type_name": "Int",
"unit": "℃ "
}
]
}
Here is a comprehensive list of the fields of the reply message. It is important to note that not all information about all properties and metatypes is registered, so some fields might be missing or hold the wrong value right now.
class_name
Well, this is the class name of the structure that holds information about a property. One CmonParamSpec object holds information about one property of one class, and of course one class can have many properties.
owner_type_name
The type name of the owner of the property. Properties are inherited from parent classes, so the owner here might be different from the type name that was sent in the getMetaTypeInfo request. This simply means the property is inherited.
property_name
The name of the property. In the request this can be a single string to get information about one property, a string containing one or more property names separated by semicolons (e.g. "capacity; temperature-celsius") to get information about several properties, or it can be omitted completely to get information about all the properties the given class has.
type_name
The type name of the property.
default_value
The default value of the property. If the property has the default value the backend might choose not to send the property in the JSon messages; the events, for example, already implement this filtering.
is_public
If this is true the property is visible to external processes like the UI itself. Non-public properties are kept inside the controller because nobody is interested in seeing them.
is_writable
If this is true the property can be changed from outside the controller e.g. from the UI. The hosts for example have an "alias name" the user can change and so is writable by the UI through RPC calls.
is_counter
We call a property a counter if the actual information is held by the difference of two values. Counters start from 0 when the computer boots up and can only be incremented, so their value is always the same as or bigger than before, and nobody is interested in their actual value. The interesting thing is how much they grew since the last time we checked them.
description
A human readable string describing the property in one or a few sentences.
short_ui_string
A human readable string describing the property in one or a few words. The UI code doesn't need to know what the property is; it just shows this string to the user, who will understand what it means.
unit
Numerical values can have units.
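The counter semantics described above (only the growth between two samples is meaningful, and the counter restarts from 0 on reboot) can be sketched like this; the function name is mine:

```python
def counter_delta(previous, current):
    """Growth of a counter between two samples.

    Counters only ever grow; a current value smaller than the previous
    one means the machine rebooted and the counter restarted from 0, in
    which case the current value itself is the best available estimate
    of the growth.
    """
    if current < previous:
        return current  # counter was reset (e.g. reboot)
    return current - previous

# a byte counter sampled twice:
assert counter_delta(227928539, 227938539) == 10000
# after a reboot the counter is small again:
assert counter_delta(227938539, 1500) == 1500
```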
setHostAlias
Sets a user-defined custom alias name for a host instance.
This is not really a statistics request, but it seems better to keep it close to the getHosts RPC call.
Arguments:
- hostname: the hostname field to identify the host
- port: a portnumber to identify the host
- alias: the new 'alias' for the hostname
Example request and response:
curl -X POST -d '{"operation": "setHostAlias", "hostname": "192.168.33.116", "port": 3306, "alias": "My favorite SQL server 1." }' http://localhost:9500/5/stat
{
"cc_timestamp": 1444919572,
"requestStatus": "ok"
}
And verify the result:
curl -X POST -d '{"operation": "getHosts", "fields": "hostname,port,alias" }' http://localhost:9500/5/stat
{
"cc_timestamp": 1444919650,
"data": [
{
"hostname": "192.168.33.115",
"port": 3306
},
{
"hostname": "192.168.33.1",
"port": -1
},
{
"alias": "My favorite SQL server 1.",
"hostname": "192.168.33.116",
"port": 3306
} ],
"requestStatus": "ok",
"total": 3
}
getHosts
Returns the host details in the cluster. See the Hosts page for a more detailed description of the returned fields.
$ curl -X POST -d '{"operation": "getHosts" }' http://localhost:9500/14/stat
{
"data": [
{
"clusterid": 14,
"configfile": "/etc/postgresql/9.4/main/postgresql.conf",
"connected": true,
"distributioncodename": "utopic",
"distributionname": "Ubuntu",
"distributionrelease": "14.10",
"hostId": 1,
"hostname": "192.168.33.1",
"ip": "192.168.33.1",
"pingstatus": -1,
"port": 5432,
"role": "postgres",
"version": "9.4beta3"
},
{
"clusterid": 14,
"configfile": "/etc/cmon.d/cmon_14.cnf",
"connected": true,
"distributioncodename": "utopic",
"distributionname": "Ubuntu",
"distributionrelease": "14.10",
"hostId": 2,
"hostname": "10.10.10.13",
"ip": "10.10.10.13",
"pingstatus": -1,
"port": -1,
"role": "controller",
"version": "1.2.9"
} ],
"requestStatus": "ok",
"total": 2
}
It is also possible to request only the needed/used fields by specifying a filter; please note that the field names are case-sensitive.
curl 'http://localhost:9500/14/stat?operation=getHosts&fields=hostId,hostname,role'
# or by posting the params in JSon format
curl -X POST -d '{"operation": "getHosts", "fields": "hostId,hostname,role" }' http://localhost:9500/14/stat
{
"data": [
{
"hostId": 1,
"hostname": "192.168.33.1",
"role": "postgres"
},
{
"hostId": 2,
"hostname": "10.10.10.13",
"role": "controller"
} ],
"requestStatus": "ok",
"total": 2
}
cpuInfo
Returns the CPU information per host; it is possible to filter the results by specifying the hostId parameter.
The "cpuInfo" call is deprecated, please consider using the "getcpuphysicalinfo" call instead.
$ curl 'http://localhost:9500/12/stat?operation=cpuinfo&hostId=1'
$ curl -X POST -d '{"operation": "cpuinfo", "hostId": 1}' 'http://localhost:9500/12/stat'
{
"data": [
{
"cpucores": 4,
"cpumaxmhz": 3400,
"cpumhz": 2200,
"cpumodel": "Intel(R) Core(TM) i3-4130 CPU @ 3.40GHz",
"cputemp": 63,
"hostid": 1
} ],
"requestStatus": "ok",
"total": 1
}
The getCpuPhysicalInfo request
The "getcpuphysicalinfo" request obsoletes the old "getcpuinfo" request. The most important difference is that the new "getcpuphysicalinfo" request is able to handle more than one physical CPU in one host, so the caller has to be ready to process such replies.
The "getcpuphysicalinfo" request returns information about the physical CPU devices found in the hosts. If the request is processed before the CPU information becomes available (before the CPU stat collector has a chance to get the data from the remote host) the reply will indicate an error ("requestStatus" will be "TryAgain"). If the data is already available the reply will enumerate all the physical CPU devices on all the requested hosts.
Example:
{
"operation": "getcpuphysicalinfo",
"hostId": 1
}
{
"cc_timestamp": 1617970039,
"requestStatus": "ok",
"total": 2,
"data":
[
{
"class_name": "CmonCpuInfo",
"cpucores": 8,
"cpumaxmhz": 3.8e+06,
"cpumhz": 2893.06,
"cpumodel": "Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz",
"cputemp": 0,
"hostid": 1,
"physical_cpu_id": 0,
"siblings": 16,
"vendor": "GenuineIntel"
},
{
"class_name": "CmonCpuInfo",
"cpucores": 8,
"cpumaxmhz": 3.8e+06,
"cpumhz": 2893.06,
"cpumodel": "Intel(R) Xeon(R) CPU E5-2690 0 @ 2.90GHz",
"cputemp": 0,
"hostid": 1,
"physical_cpu_id": 1,
"siblings": 16,
"vendor": "GenuineIntel"
}
]
}
The reply provides the following information about the CPU devices:
class_name
This is always CmonCpuInfo.
physical_cpu_id
The unique ID of this physical CPU.
cpucores
Shows how many CPU cores the CPU has.
siblings
Indicates how many virtual CPUs are provided by this physical CPU. If for example the CPU is an "Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz", it has 6 cores and 2x hyperthreading, so the number of siblings is 12. If a host has two of these CPUs we should show 24 CPUs in the UI.
cpumaxmhz
The maximum clock frequency measured in MHz.
cpumhz
The current CPU frequency measured in MHz.
cpumodel
The name of the model.
vendor
The name of the vendor.
cputemp
The temperature of the CPU measured in Celsius.
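The siblings arithmetic above can be checked against the example reply: each of the two physical CPUs reports 16 siblings, so the host exposes 32 logical CPUs. A small sketch (the data is copied from the reply above):

```python
# two physical CPUs from the example getcpuphysicalinfo reply
cpus = [
    {"physical_cpu_id": 0, "cpucores": 8, "siblings": 16},
    {"physical_cpu_id": 1, "cpucores": 8, "siblings": 16},
]

# total logical CPUs the UI should show for this host
logical_cpus = sum(cpu["siblings"] for cpu in cpus)
assert logical_cpus == 32

# the E5-2620 v3 case from the text: 6 cores with 2x hyperthreading
# gives 12 siblings per socket, so 24 logical CPUs with two sockets
assert 2 * (6 * 2) == 24
```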
getStatInfo
A method to fetch the possible keys of each statistics object (and whether each key is a speed/gauge counter or an absolute value).
curl 'http://localhost:9500/17/stat?token=6cBC1Us5v4NsmilJ&operation=statinfo&class_name=CmonSqStats'
{
"cc_timestamp": 1511334486,
"data":
{
"ABORTED_CLIENTS": "GaugeCounter",
"ABORTED_CONNECTS": "GaugeCounter",
"ARCHIVED_COUNT": "GaugeCounter",
"BUFFERS_ALLOC": "GaugeCounter",
"BUFFERS_BACKEND": "GaugeCounter",
"BUFFERS_BACKEND_FSYNC": "GaugeCounter",
"BUFFERS_CHECKPOINT": "GaugeCounter",
"BUFFERS_CLEAN": "GaugeCounter",
"BYTES_RECEIVED": "GaugeCounter",
"BYTES_SENT": "GaugeCounter",
"CHECKPOINTS_REQ": "GaugeCounter",
"CHECKPOINTS_TIMED": "GaugeCounter",
"rows-fetched": "GaugeCounter",
"rows-inserted": "GaugeCounter",
"rows-updated": "GaugeCounter"
},
"requestStatus": "ok"
}
getStatByName
Fetches all the stats by name; it is also possible to filter the results by hostId.
NEW arguments in 1.7.3:
- 'host_port': you can filter the stats to a specific Cmon*Host instance by sending its hostname:port pairs (comma separated list). This was implemented to support co-located services; if this is defined you may omit the 'hostid' argument.
NEW arguments in 1.4.2:
- calculate_per_sec: (boolean) re-calculates the speed counters to a /sec value (default: false). This does the value=(value*1000.0/interval) calculation on the backend side.
- stat_info: (boolean) includes info about all possible stat properties, whether each is a speed (GaugeCounter) or absolute (Counter) value (default: false)
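When calculate_per_sec is left at its default, the same value*1000.0/interval recalculation can be done on the client side; a minimal sketch, assuming the interval field is in milliseconds as in the samples below:

```python
def per_sec(value, interval_ms):
    """Re-calculate a speed counter to a per-second value, mirroring
    the value * 1000.0 / interval formula the backend applies when
    calculate_per_sec is true (interval is in milliseconds)."""
    return value * 1000.0 / interval_ms

# e.g. 96384 sectors written over a 118915 ms sample interval
rate = per_sec(96384, 118915)
assert abs(rate - 810.53) < 0.01
```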
Possible stat name keys are: netstat, memorystat, diskstat, cpustat, sqlstat, dbstat, tcpStat, ndbstat, haproxystat (see the examples below).
Supported 'row' filters:
- by hostid by specifying 'hostid' or 'hostId': it can list multiple hosts (comma separated)
- by samplekey (useful for CmonHaProxyStats filtration)
- by specifying an interval (defaults to from one-day-ago to now), using 'startdate' and/or 'enddate' fields (UNIX timestamps).
- use 'returnfrom' if you want only the last few records with the same scale of ('startdate'-'enddate' interval).
- the network stats could be filtered by setting 'interface' name in the request.
- the disk stats could be filtered by setting 'device' name in the request.
- the cpustats can be filtered by sending 'cpuid' or 'coreid'
- you can specify a count 'limit' if you need only the last few results.
And supported 'column' filters:
- 'fields': it is possible to request only the needed/used fields by specifying a record field filter; please note that the field names are case-sensitive and should be separated by commas.
There is a possibility to aggregate the host values in the results by specifying the 'aggrhosts' field (in this case no hostid must be specified, as all host values will be aggregated). 'aggrhosts' only supports the 'sum' aggregation. The backend groups the data together by hostIds: for example if you have 3 hosts, {(host1, host2, host3), (host1, host2, host3), ...}, then the aggregation (sum) is done on the groups.
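The aggrhosts=sum grouping described above can be mimicked client-side; a sketch under the assumption that the samples arrive as complete rounds over the same set of hosts (the helper name is mine):

```python
def aggrhosts_sum(samples, field, n_hosts):
    """Sum `field` over each consecutive group of n_hosts samples,
    mirroring the {(host1,host2,host3),(host1,host2,host3),...}
    grouping the backend applies for aggrhosts=sum."""
    return [sum(s[field] for s in samples[i:i + n_hosts])
            for i in range(0, len(samples), n_hosts)]

samples = [
    {"hostid": 1, "COM_SELECT": 10}, {"hostid": 2, "COM_SELECT": 20},
    {"hostid": 3, "COM_SELECT": 30},
    {"hostid": 1, "COM_SELECT": 11}, {"hostid": 2, "COM_SELECT": 22},
    {"hostid": 3, "COM_SELECT": 33},
]
assert aggrhosts_sum(samples, "COM_SELECT", 3) == [60, 66]
```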
curl -X POST -H"Content-Type: application/json" -d '{"operation": "getStatByName",
"name": "memorystat", "hostId": 2, "startdate": 149642776, "limit": 1}' \
http://localhost:9500/12/stat

# it is also possible to use a GET request
curl 'http://localhost:9500/12/stat?operation=getStatByName&name=memorystat&hostId=2&startdate=149642776&limit=1'

# an example with aggregation:
curl 'http://localhost:9500/2/stat?operation=getStatByName&name=sqlstat&fields=hostid,COM_SELECT,THREADS_CONNECTED&aggrhosts=sum'

# and an example with field filtering:
curl 'http://localhost:9500/14/stat?operation=getStatByName&name=sqlstat&fields=hostid,created,interval,sampleends,commits&startdate=1421326457'
A few example requests and replies for the statByName operation (the returned stat lists are truncated to keep this text as short as possible, so only one sample stat is shown in each output):
$ curl -X POST -d '{"operation": "statByName", "name": "invalidstat"}' 'http://localhost:9500/12/stat'
{
"errorString": "Invalid name field, possible values are: netstat,memorystat,diskstat,cpustat,sqlstat,dbstat",
"requestStatus": "error"
}
$ curl -X POST -d '{"operation": "statByName", "name": "netstat"}' 'http://localhost:9500/12/stat'
{
"requestStatus": "ok",
"data": [
{
"created": 1410429060,
"hostid": 2,
"interface": "eth1",
"rxBytes": 227928539,
"rxPackets": 2535849,
"rxSpeed": 1488.6,
"sampleends": 1410429673,
"txBytes": 1111575613,
"txPackets": 4331919,
"txSpeed": 11740.4
} ]
}
Memory statistics
$ curl -X POST -d '{"operation": "statByName", "name": "memorystat", "hostId": 2}' 'http://localhost:9500/12/stat'
{
"requestStatus": "ok",
"data": [
{
"created": 1410429673,
"hostid": 2,
"memoryutilization": 0.073535,
"rambuffers": 83820544,
"ramcached": 374575104,
"ramfree": 752859136,
"ramfreemin": 752779264,
"ramtotal": 1307394048,
"sampleends": 1410429734,
"swapfree": 0,
"swaptotal": 0,
"swaputilization": 0
} ]
}
Disk statistics
$ curl "http://localhost:9500/180/stat?token=6T17RE8PBaSzOKbZ&operation=getStatByName&name=diskstat&hostid=122&returnfrom=$(date +%s --date='3 minutes ago')"
{
"cc_timestamp": 1504771248,
"data": [
{
"blocksize": 4096,
"created": 1504771018,
"device": "/dev/sda2",
"free": 43078811648,
"hostid": 122,
"interval": 118915,
"mountpoint": "/",
"reads": 16,
"readspersec": 0,
"sampleends": 1504771107,
"samplekey": "CmonDiskStats-122-/dev/sda2",
"sectorsread": 1472,
"sectorswritten": 96384,
"total": 243095224320,
"utilization": 0.0243065,
"writes": 2568,
"writespersec": 27
},
{
"blocksize": 4096,
"created": 1504771138,
"device": "/dev/sda2",
"free": 43078881280,
"hostid": 122,
"interval": 120914,
"mountpoint": "/",
"reads": 16,
"readspersec": 0,
"sampleends": 1504771228,
"samplekey": "CmonDiskStats-122-/dev/sda2",
"sectorsread": 240,
"sectorswritten": 74264,
"total": 243095224320,
"utilization": 0.0203032,
"writes": 2302,
"writespersec": 24
} ],
"requestStatus": "ok",
"total": 2
}
CPU statistics
$ curl -X POST -d '{"operation": "statByName", "name": "cpustat", "hostId": 2}' 'http://localhost:9500/12/stat'
{
"requestStatus": "ok",
"data": [
{
"busy": 0.0252534,
"cpumhz": 3383.43,
"cputemp": 0,
"created": 1410429734,
"hostid": 2,
"idle": 0.974747,
"iowait": 0,
"loadavg1": 0.05,
"loadavg15": 0.05,
"loadavg5": 0.08,
"sampleends": 1410429734,
"steal": 0,
"sys": 0.0167802,
"uptime": 191351,
"user": 0.00847317
} ]
}
SQL Server statistics
For properties see CmonSqlStats class documentation.
$ curl -X POST -d '{"operation": "getStatByName", "name": "sqlstat", "startdate":1417431406 }' http://localhost:9500/99/stat
# or an example using a GET request:
$ curl 'http://localhost:9500/14/stat?operation=getStatByName&name=sqlstat&startdate=1417431406'
{
"data": [
{
"blocks-hit": 25152,
"blocks-read": 0,
"commits": 47,
"connections": 2,
"created": 1417431381,
"hostid": 1,
"interval": 30000,
"rollbacks": 0,
"rows-deleted": 0,
"rows-fetched": 496,
"rows-inserted": 0,
"rows-updated": 0,
"sampleends": 1417431406,
"samplekey": "SqlStats-1"
},
{
"blocks-hit": 12549,
"blocks-read": 0,
"commits": 21,
"connections": 2,
"created": 1417431411,
"hostid": 1,
"interval": 15000,
"rollbacks": 0,
"rows-deleted": 0,
"rows-fetched": 247,
"rows-inserted": 0,
"rows-updated": 0,
"sampleends": 1417431421,
"samplekey": "SqlStats-1"
} ],
"requestStatus": "ok",
"total": 2
}
Database statistics
This kind of statistics is currently only implemented for PostgreSQL, as this server provides these more detailed statistics on a per-database basis, so they have to be collected separately.
NOTE: cmon currently only gets/stores this statistics data about the current (postgres) database...
For properties see CmonDatabaseStats class documentation.
curl 'http://localhost:9500/14/stat?operation=getStatByName&name=dbstat&startdate=1417429497'
{
"data": [
{
"blocks-hit": 5097,
"blocks-read": 0,
"created": 1417429497,
"hostid": 1,
"idx-hit": 3,
"idx-read": 0,
"interval": 30000,
"sampleends": 1417429522,
"samplekey": "PgDbStats-1",
"tidx-hit": 0,
"tidx-read": 0,
"toast-hit": 0,
"toast-read": 0
},
{
"blocks-hit": 18689,
"blocks-read": 0,
"created": 1417429527,
"hostid": 1,
"idx-hit": 11,
"idx-read": 0,
"interval": 30000,
"sampleends": 1417429552,
"samplekey": "PgDbStats-1",
"tidx-hit": 0,
"tidx-read": 0,
"toast-hit": 0,
"toast-read": 0
} ],
"requestStatus": "ok",
"total": 2
}
TCP Network statistics
Here we collect various network TCP statistics; an example request and reply:
curl -X POST -d '{"operation": "statByName", "name": "tcpStat"}' 'http://localhost:9500/10/stat'
{
"cc_timestamp": 1457447276,
"data": [
{
"created": 1457360880,
"hostid": 70,
"interval": 81113,
"received_bad_segments": 0,
"received_segments": 20519,
"retransmitted_segments": 0,
"sampleends": 1457360951,
"samplekey": "CmonTcpStats-70",
"sent_segments": 49884
},
{
"created": 1457360880,
"hostid": 71,
"interval": 81114,
"received_bad_segments": 0,
"received_segments": 150187,
"retransmitted_segments": 27,
"sampleends": 1457360951,
"samplekey": "CmonTcpStats-71",
"sent_segments": 67772
},
{
"created": 1457447163,
"hostid": 72,
"interval": 79247,
"received_bad_segments": 0,
"received_segments": 19471,
"retransmitted_segments": 0,
"sampleends": 1457447231,
"samplekey": "CmonTcpStats-72",
"sent_segments": 46250
},
{
"created": 1457447183,
"hostid": 73,
"interval": 81061,
"received_bad_segments": 0,
"received_segments": 19611,
"retransmitted_segments": 0,
"sampleends": 1457447252,
"samplekey": "CmonTcpStats-73",
"sent_segments": 46249
} ],
"requestStatus": "ok",
"total": 4436
}
NDB node statistics
For properties see CmonNdbStats properties class documentation.
curl -X POST -d '{"token":"yjQnm2fPlTjCJ9bn","operation": "statByName", "name": "ndbstat"}' 'http://localhost:9500/65/stat'
{
"cc_timestamp": 1469794066,
"data": [
{
"created": 1469791127,
"dm_total_bytes": 134217728,
"dm_used_bytes": 819200,
"hostid": 70,
"im_total_bytes": 22282240,
"im_used_bytes": 172032,
"interval": 0,
"sampleends": 1469791217,
"samplekey": "CmonNdbStats-70"
},
{
"created": 1469791127,
"dm_total_bytes": 134217728,
"dm_used_bytes": 819200,
"hostid": 71,
"im_total_bytes": 22282240,
"im_used_bytes": 172032,
"interval": 0,
"sampleends": 1469791217,
"samplekey": "CmonNdbStats-71"
},
{
"created": 1469791247,
"dm_total_bytes": 134217728,
"dm_used_bytes": 819200,
"hostid": 70,
"im_total_bytes": 22282240,
"im_used_bytes": 172032,
"interval": 0,
"sampleends": 1469791337,
"samplekey": "CmonNdbStats-70"
} ],
"requestStatus": "ok",
"total": 4436
}
HAProxy load-balancers statistics
For properties see CmonHaProxyStats properties class documentation.
An example request and reply
$ curl 'http://localhost:9500/162/stat?token=6ZMBqXvsLYXBAqGy&operation=getStatByName&name=haproxystat'
"cc_timestamp": 1502110271,
/* ... */
"pxname": "admin_page",
/* ... */
"sampleends": 1502109990,
"samplekey": "CmonHaProxyStats-27-admin_page-FRONTEND",
/* ... */
"created": 1502109914,
/* ... */
"pxname": "admin_page",
"sampleends": 1502109990,
"samplekey": "CmonHaProxyStats-27-admin_page-BACKEND",
/* ... */
"created": 1502109914,
/* ... */
"pxname": "haproxy_10.0.3.31_3307",
"sampleends": 1502109990,
"samplekey": "CmonHaProxyStats-27-haproxy_10.0.3.31_3307-FRONTEND",
/* ... */
"svname": "FRONTEND",
/* some items are cut... */
"requestStatus": "ok",
Last HAProxy sample
Description: with these calls it is possible to obtain the last RAW sample from cmon for each HAProxy; the hostId is required in the request.
Name syntax: haproxystat.HOSTID.lastsample
$ curl "http://localhost:9500/162/stat?token=6ZMBqXvLYXBAqGy&operation=getinfo&name=haproxystat.27.lastsample"
{
"cc_timestamp": 1502178222,
"data":
{
"contents": "# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,\r\nadmin_page,FRONTEND,,,0,2,8192,714,134575,2494175,0,0,0,,,,,OPEN,,,,,,,,,1,1,0,,,,0,0,0,13,,,,0,713,0,1,0,0,,0,13,714,,,\r\nadmin_page,BACKEND,0,0,0,0,8192,0,134575,2494175,0,0,,0,0,0,0,UP,0,0,0,,0,339868,0,,1,1,0,,0,,1,0,,0,,,,0,0,0,0,0,0,,,,,0,0,\r\nhaproxy_10.0.3.31_3307,FRONTEND,,,0,1,8192,122921,119107836,47413912,0,0,0,,,,,OPEN,,,,,,,,,1,2,0,,,,0,0,0,2,,,,,,,,,,,0,0,0,,,\r\nhaproxy_10.0.3.31_3307,10.0.3.31,0,0,0,1,64,122921,119107836,47413912,,0,,0,0,0,0,UP,100,1,0,0,0,339868,0,128,1,2,1,,122921,,2,0,,2,L7OK,200,12,,,,,,,0,,,,0,0,\r\nhaproxy_10.0.3.31_3307,BACKEND,0,0,0,1,8192,122921,119107836,47413912,0,0,,0,0,0,0,UP,100,1,0,,0,339868,0,,1,2,0,,122921,,1,0,,2,,,,,,,,,,,,,,0,0,\r\n\r\n",
"timestamp": "2017-08-08T07:48:45.457Z"
},
"requestStatus": "ok"
}
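The contents field above is the raw HAProxy CSV stats page; it can be parsed with the standard csv module. A sketch over a shortened version of the sample (the parser name is mine; the header line starts with "# "):

```python
import csv
import io

def parse_haproxy_csv(contents):
    """Parse the raw HAProxy stats CSV into a list of dicts.
    The first line is the column header and starts with '# '."""
    lines = [line for line in contents.splitlines() if line.strip()]
    header = lines[0].lstrip("# ").split(",")
    reader = csv.reader(io.StringIO("\n".join(lines[1:])))
    return [dict(zip(header, row)) for row in reader]

raw = ("# pxname,svname,scur,status,\r\n"
       "admin_page,FRONTEND,0,OPEN,\r\n"
       "admin_page,BACKEND,0,UP,\r\n")
rows = parse_haproxy_csv(raw)
assert rows[0]["pxname"] == "admin_page"
assert rows[1]["svname"] == "BACKEND"
assert rows[0]["status"] == "OPEN"
```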
The Prometheus RPC API entry point
Description: using GET requests to /$CLUSTERID/stat/prometheus/... cmon will forward the requests to an active Prometheus instance (or the one that was optionally specified by the caller). This works when a cluster has agentless monitoring enabled.
ClusterControl expects the Prometheus URL path without the /api/v1/ part.
Arguments:
monitor
Optionally caller can specify explicitly a Prometheus instance to be queried (hostname or hostname:port syntax works here).
Prometheus arguments:
This method will forward the following arguments (in the GET request) to the Prometheus instance: query, time, timeout, start, end, step, match[]
NOTE: for Prometheus query language and for its functions please look at its official documentation: https://prometheus.io/docs/prometheus/latest/querying/basics/
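Assembling such a proxied URL can be sketched as below (the helper name is mine; the forwarded parameters are the ones listed above, and urlencode handles the PromQL braces and quotes so no manual escaping is needed):

```python
from urllib.parse import urlencode

def prometheus_proxy_url(base, cluster_id, endpoint, **params):
    """Build a /<clusterid>/stat/prometheus/... URL; cmon forwards
    the query/time/timeout/start/end/step/match[] arguments to the
    active Prometheus instance."""
    return "%s/%d/stat/prometheus/%s?%s" % (
        base, cluster_id, endpoint, urlencode(params))

url = prometheus_proxy_url("http://127.0.0.1:9500", 61, "query_range",
                           query='{__name__="up"}',
                           start=1517576531, end=1517577531, step=60)
assert "stat/prometheus/query_range" in url
assert "query=%7B__name__%3D%22up%22%7D" in url
```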
A few example requests and replies:
Query the available metrics:
curl 'http://127.0.0.1:9500/61/stat/prometheus/label/__name__/values?token=dx6hyGk3wOhK0dIG'
{
"cc_timestamp": 1517576509,
"data": [ "go_gc_duration_seconds", "go_gc_duration_seconds_count",
"go_gc_duration_seconds_sum", "go_goroutines", "go_info",
"go_memstats_alloc_bytes", "go_memstats_alloc_bytes_total",
"go_memstats_buck_hash_sys_bytes", "go_memstats_frees_total",
"go_memstats_gc_cpu_fraction", "go_memstats_gc_sys_bytes",
"go_memstats_heap_alloc_bytes", "go_memstats_heap_idle_bytes",
"go_memstats_heap_inuse_bytes", "go_memstats_heap_objects",
"go_memstats_heap_released_bytes",
"up" ],
"requestStatus": "ok",
"status": "success"
}
Get the exporters' status:
curl 'http://127.0.0.1:9500/61/stat/prometheus/query?query=\{__name__=%22up%22\}&token=dx6hyGk3wOhK0dIG'
{
"cc_timestamp": 1517576333,
"data":
{
"result": [
{
"metric":
{
"__name__": "up",
"clustercontrol": "1.5.2",
"instance": "10.0.3.119:9100",
"job": "node"
},
"value": [ 1.51758e+09, "0" ]
}
],
"resultType": "vector"
},
"requestStatus": "ok",
"status": "success"
}
Query some (filesystem) stats using PromQL
curl 'http://127.0.0.1:9500/61/stat/prometheus/query_range?start=1517576531&end=1517577531&step=60&query=\{__name__=~%22(node_filesystem.*)%22,instance=%22127.0.0.1:9100%22,mountpoint=%22/%22\}&token=dx6hyGk3wOhK0dIG'
{
"cc_timestamp": 1517577592,
"data":
{
"result": [
{
"metric":
{
"__name__": "node_filesystem_avail",
"clustercontrol": "1.5.2",
"device": "/dev/sda2",
"fstype": "ext4",
"instance": "127.0.0.1:9100",
"job": "node",
"mountpoint": "/"
},
"values": [ [ 1517576531, "40473014272" ], [ 1517576591, "40472379392" ], [ 1517576651, "40471703552" ], [ 1517576711, "40471044096" ], [ 1517576771, "40470200320" ], [ 1517576831, "40469516288" ], [ 1517576891, "40468815872" ], [ 1517576951, "40468205568" ], [ 1517577011, "40467578880" ], [ 1517577071, "40466911232" ], [ 1517577131, "40466255872" ], [ 1517577191, "40473985024" ], [ 1517577251, "40473432064" ], [ 1517577311, "40472805376" ], [ 1517577371, "40472117248" ], [ 1517577431, "40471400448" ], [ 1517577491, "40470937600" ] ]
}
],
"resultType": "matrix"
},
"requestStatus": "ok",
"status": "success"
}
Multiple queries (in one request) to Prometheus
curl -XPOST -d'{"queries": [{"query":"{__name__=\"up\"}"},{"query":"{__name__=\"up\"}"}]}' 'http://127.0.0.1:9500/119/stat/prometheus/query?token=jzsyhiMDUFMctzoz'
{
"cc_timestamp": 1528207634,
"data": [
{
"data":
{
"result": [
{
"metric":
{
"__name__": "up",
"clustercontrol": "1.5.2",
"instance": "10.35.112.126:9011",
"job": "process"
},
"value": [ 1.52821e+09, "1" ]
},
.....
},
{
"data":
{
"result": [
{
...
} ],
"requestStatus": "ok",
"total": 2
}
Multiple queries to Prometheus (the args from the root level, like start/end/step, are applied to every query):
$ curl -XPOST -d'{"step":600, "start":1528207531, "end": 1528208031,"queries": [{"query":"{__name__=\"node_filesystem_avail\"}"},{"query":"{__name__=\"node_memory_MemFree\"}"}]}' 'http://127.0.0.1:9500/119/stat/prometheus/query_range?token=jzsyhiMDUFMctzoz'
Kill a database process
With this RPC API you can kill a running database query (or process), using the 'pid' (MySQL and friends) or 'backendPid' (PostgreSQL) value.
Arguments:
- hostId: the host ID of the process to be killed
- pid: the process ID to be killed
curl 'http://localhost:9500/23/stat?operation=killprocess&hostId=123&pid=2343&token=jgqeVoNPO3x90sDJ'
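Since MySQL-family entries in the processes reply carry 'pid' while PostgreSQL entries carry 'backendPid', a caller picking a process from that reply has to normalise the ID before building the killprocess request; a minimal sketch (the helper name is mine):

```python
def kill_params(process):
    """Build killprocess request arguments from a processes reply
    entry. MySQL-style entries have 'pid', PostgreSQL entries have
    'backendPid'; the request itself always takes 'pid'."""
    pid = process.get("pid", process.get("backendPid"))
    return {"operation": "killprocess",
            "hostId": process["hostId"],
            "pid": pid}

# a PostgreSQL entry from the processes reply below
pg_entry = {"hostId": 1, "backendPid": 4615, "state": "active"}
assert kill_params(pg_entry) == {
    "operation": "killprocess", "hostId": 1, "pid": 4615}
```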
Processes (database clients)
With this RPC API you can get the list of the currently running query processes on the database nodes. Arguments:
- (optional) hostId: to get only a specific host's queries
- limit: limit the count of the returned items
- offset: offset for pagination (to be used together with limit)
NOTE: this API is implemented for PostgreSQL, MySQL (Galera, single, etc.) and MongoDB, so for all cluster types.
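The limit/offset pair supports straightforward pagination; a sketch that yields the request bodies page by page (the helper name and page size are my choice, and `total` would come from a previous reply):

```python
def paged_requests(total, page_size=100, host_id=None):
    """Yield 'processes' request bodies covering `total` rows
    page by page using the limit/offset arguments."""
    for offset in range(0, total, page_size):
        body = {"operation": "processes",
                "limit": page_size, "offset": offset}
        if host_id is not None:
            body["hostId"] = host_id
        yield body

pages = list(paged_requests(5, page_size=2))
assert len(pages) == 3
assert pages[2]["offset"] == 4
```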
curl http://localhost:9500/14/stat?operation=processes
For PostgreSQL, the reply looks like this:
{
"data": [
{
"appName": "",
"backendPid": 4615,
"backendStart": "2014-11-28 15:08:58.379023+01",
"client": "192.168.33.1:46734",
"databaseName": "postgres",
"hostId": 1,
"query": "SELECT datname, pid, usename, application_name, COALESCE(client_hostname, host(client_addr), 'localhost'), client_port, backend_start, xact_start, query_start, waiting, query, state FROM pg_stat_activity",
"queryStart": "2014-11-28 15:11:18.413027+01",
"state": "active",
"userName": "root",
"waiting": false,
"xactStart": "2014-11-28 15:11:18.413027+01"
} ],
"requestStatus": "ok",
"total": 1
}
An example for MySQL Replication:
curl 'http://localhost:9500/23/stat?operation=processes&token=jgqeVoNPO3x90sDJ&limit=2&offset=3'
{
"cc_timestamp": 1551365408,
"data": [
{
"client": "",
"currentTime": "2019-02-28_145008",
"databaseName": "",
"hostId": 296,
"hostname": "10.35.112.242",
"info": "",
"pid": 3,
"query": "Daemon",
"queryStart": 0,
"reportTs": 1551365408,
"state": "InnoDB purge worker",
"userName": "system user"
},
{
"client": "",
"currentTime": "2019-02-28_145008",
"databaseName": "",
"hostId": 296,
"hostname": "10.35.112.242",
"info": "",
"pid": 5,
"query": "Daemon",
"queryStart": 0,
"reportTs": 1551365408,
"state": "InnoDB shutdown handler",
"userName": "system user"
} ],
"requestStatus": "ok",
"total": 5
}
And finally a MongoDB example:
curl 'http://localhost:9500/155/stat?operation=processes&token=aQo4BWVdlW7NqwWZ'
{
"cc_timestamp": 1499694695,
"data": [
{
"active": true,
"desc": "ReplBatcher",
"hostId": 130,
"hostname": "10.0.3.105",
"ns": "local.oplog.rs",
"op": "none",
"opid": 62,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.105:27017",
"secs_running": 2845,
"threadId": "139644933940992",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "WT RecordStoreThread: local.oplog.rs",
"hostId": 130,
"hostname": "10.0.3.105",
"ns": "local.oplog.rs",
"op": "none",
"opid": 58,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.105:27017",
"secs_running": 2846,
"threadId": "139645261256448",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 70795,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"client": "10.0.3.74:60288",
"connectionId": 23,
"desc": "conn23",
"hostId": 130,
"hostname": "10.0.3.105",
"ns": "local.oplog.rs",
"op": "getmore",
"opid": 17192,
"query":
{
"collection": "oplog.rs",
"getMore":
{
"$numberLong": "19653741566"
},
"lastKnownCommittedOpTime":
{
"t":
{
"$numberLong": "1"
},
"ts":
{
"$timestamp":
{
"i": 5,
"t": 1499691860
}
}
},
"maxTimeMS":
{
"$numberLong": "5000"
},
"term":
{
"$numberLong": "1"
}
},
"reportTs": 1499694693,
"reported_by": "10.0.3.105:27017",
"secs_running": 2,
"threadId": "139644916102912",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"client": "10.0.3.1:53612",
"connectionId": 41,
"desc": "conn41",
"hostId": 130,
"hostname": "10.0.3.105",
"ns": "admin.$cmd",
"op": "command",
"opid": 17217,
"query":
{
"currentOp": 1
},
"reportTs": 1499694693,
"reported_by": "10.0.3.105:27017",
"secs_running": 0,
"threadId": "139645509216000",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "rsSync",
"hostId": 130,
"hostname": "10.0.3.105",
"ns": "local.oplog.rs",
"op": "none",
"opid": 61,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.105:27017",
"secs_running": 2845,
"threadId": "139645244471040",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "SyncSourceFeedback",
"hostId": 130,
"hostname": "10.0.3.105",
"ns": "",
"op": "none",
"opid": 282,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.105:27017",
"threadId": "139645227685632",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "rsSync",
"hostId": 131,
"hostname": "10.0.3.41",
"ns": "local.replset.minvalid",
"op": "none",
"opid": 95,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.41:27017",
"secs_running": 2842,
"threadId": "140496365635328",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "rsBackgroundSync",
"hostId": 131,
"hostname": "10.0.3.41",
"ns": "local.replset.minvalid",
"op": "none",
"opid": 313,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.41:27017",
"secs_running": 2831,
"threadId": "140496357242624",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"client": "10.0.3.1:52144",
"connectionId": 33,
"desc": "conn33",
"hostId": 131,
"hostname": "10.0.3.41",
"ns": "admin.$cmd",
"op": "command",
"opid": 16252,
"query":
{
"currentOp": 1
},
"reportTs": 1499694693,
"reported_by": "10.0.3.41:27017",
"secs_running": 0,
"threadId": "140496612542208",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "SyncSourceFeedback",
"hostId": 131,
"hostname": "10.0.3.41",
"ns": "",
"op": "none",
"opid": 16247,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.41:27017",
"threadId": "140496348849920",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "WT RecordStoreThread: local.oplog.rs",
"hostId": 131,
"hostname": "10.0.3.41",
"ns": "local.oplog.rs",
"op": "none",
"opid": 87,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.41:27017",
"secs_running": 2842,
"threadId": "140496322619136",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 74693,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "ReplBatcher",
"hostId": 131,
"hostname": "10.0.3.41",
"ns": "local.oplog.rs",
"op": "none",
"opid": 96,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.41:27017",
"secs_running": 2842,
"threadId": "140496037267200",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"client": "10.0.3.1:34164",
"connectionId": 35,
"desc": "conn35",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "admin.$cmd",
"op": "command",
"opid": 17645,
"query":
{
"currentOp": 1
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"secs_running": 0,
"threadId": "140626511873792",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"client": "10.0.3.41:59786",
"connectionId": 18,
"desc": "conn18",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "local.oplog.rs",
"op": "getmore",
"opid": 17635,
"query":
{
"collection": "oplog.rs",
"getMore":
{
"$numberLong": "25473614514"
},
"lastKnownCommittedOpTime":
{
"t":
{
"$numberLong": "1"
},
"ts":
{
"$timestamp":
{
"i": 5,
"t": 1499691860
}
}
},
"maxTimeMS":
{
"$numberLong": "5000"
},
"term":
{
"$numberLong": "1"
}
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"secs_running": 0,
"threadId": "140625903023872",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "rsSync",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "local.replset.minvalid",
"op": "none",
"opid": 64,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"secs_running": 2843,
"threadId": "140626264962816",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "ReplBatcher",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "local.oplog.rs",
"op": "none",
"opid": 65,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"secs_running": 2843,
"threadId": "140625937647360",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "rsBackgroundSync",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "local.replset.minvalid",
"op": "none",
"opid": 262,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"secs_running": 2833,
"threadId": "140626256570112",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "SyncSourceFeedback",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "",
"op": "none",
"opid": 17638,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"threadId": "140626248177408",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 0,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
},
{
"active": true,
"desc": "WT RecordStoreThread: local.oplog.rs",
"hostId": 132,
"hostname": "10.0.3.74",
"ns": "local.oplog.rs",
"op": "none",
"opid": 57,
"query":
{
},
"reportTs": 1499694693,
"reported_by": "10.0.3.74:27017",
"secs_running": 2843,
"threadId": "140626222999296",
"timeAcquiringMicrosReadIS": 0,
"timeAcquiringMicrosReadIX": 0,
"timeAcquiringMicrosWriteIS": 74864,
"timeAcquiringMicrosWriteIX": 0,
"waitingForLock": false
} ],
"requestStatus": "ok",
"total": 19
}
Database server variables
This RPC API returns the database server variables. Implemented for MySQL and its variants and for PostgreSQL.
Possible parameters:
- hostId: restrict the result to a specific host (this may also be a comma-separated list of host IDs)
- variables: a comma-separated list of the wanted variables (to save some bandwidth)
- refresh: a boolean (defaults to false); when true, the variables are refreshed first
Here are a few example requests and responses:
PostgreSQL:
curl 'http://localhost:9500/14/stat?operation=variables&variables=TimeZone,log_filename,wal_segment_size'
{
"cc_timestamp": 1442560211,
"data": [
{
"hostId": 2,
"hostname": "192.168.33.121",
"variables":
{
"TimeZone": "US/Eastern",
"log_filename": "postgresql-%a.log",
"wal_segment_size": "16MB"
}
},
{
"hostId": 3,
"hostname": "192.168.33.122",
"variables":
{
"TimeZone": "US/Eastern",
"log_filename": "postgresql-%a.log",
"wal_segment_size": "16MB"
}
} ],
"requestStatus": "ok",
"total": 2
}
MySQL/Galera:
curl 'http://localhost:9500/6/stat?operation=variables&variables=version,performance_schema,port,thread_pool_idle_timeout,thread_pool_max_threads,thread_pool_oversubscribe,thread_pool_size'
{
"cc_timestamp": 1442560350,
"data": [
{
"hostId": 1,
"hostname": "192.168.33.123",
"variables":
{
"performance_schema": "OFF",
"port": "3306",
"thread_pool_idle_timeout": "60",
"thread_pool_max_threads": "500",
"thread_pool_oversubscribe": "3",
"thread_pool_size": "1",
"version": "5.5.41-MariaDB"
}
} ],
"requestStatus": "ok",
"total": 1
}
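The GET form of these requests is easy to assemble programmatically. Here is a minimal sketch in Python; the `variables_url` helper and its argument names are illustrative, not part of the API:

```python
from urllib.parse import urlencode

def variables_url(base, cluster_id, variables, host_id=None, refresh=False):
    """Build the stat?operation=variables GET URL described above."""
    params = {"operation": "variables", "variables": ",".join(variables)}
    if host_id is not None:
        # hostId may also be a comma-separated list of host IDs
        ids = host_id if isinstance(host_id, (list, tuple)) else [host_id]
        params["hostId"] = ",".join(str(h) for h in ids)
    if refresh:
        params["refresh"] = "true"
    return "%s/%d/stat?%s" % (base, cluster_id, urlencode(params))

url = variables_url("http://localhost:9500", 6, ["version", "port"], host_id=[1, 2])
```

Note that urlencode percent-encodes the commas inside the lists, which the server accepts just like the literal commas in the curl examples above.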
Database server variables (multinode)
This RPC API returns the database server variables on a per-node basis. Implemented for PostgreSQL.
Possible parameters:
- host_port: restrict the result to a specific node on a specific host (this may also be a comma-separated list)
- variables: a comma-separated list of the wanted variables (to save some bandwidth)
- refresh: a boolean (defaults to false); when true, the variables are refreshed first
Here is an example request and response:
PostgreSQL:
curl 'http://localhost:9500/14/stat?operation=nodevariables&variables=TimeZone,log_filename,wal_segment_size'
{
"cc_timestamp": 1442560211,
"data": [
{
"hostname": "192.168.33.121",
"port": 5432,
"variables":
{
"TimeZone": "US/Eastern",
"log_filename": "postgresql-%a.log",
"wal_segment_size": "16MB"
}
},
{
"hostname": "192.168.33.121",
"port": 5433,
"variables":
{
"TimeZone": "US/Eastern",
"log_filename": "postgresql-%a.log",
"wal_segment_size": "16MB"
}
} ],
"requestStatus": "ok",
"total": 2
}
Database server statuses
This RPC API returns the server statuses. It basically returns the latest MySQL/MongoDB server statistics "snapshot".
NOTE: This method works for MySQL/PostgreSQL and MongoDB clusters.
Possible parameters:
- hostId: restrict the result to a specific host
- keys: a comma-separated list of the wanted status keys (to save some bandwidth)
Here is an example request and response:
curl 'http://localhost:9500/6/stat?operation=getdbstatus&keys=COM_SELECT,COM_COMMIT,COM_DELETE,COM_FLUSH'
{
"cc_timestamp": 1442562250,
"data": [
{
"hostId": 1,
"hostname": "192.168.33.123",
"statuses":
{
"COM_COMMIT": "0",
"COM_DELETE": "0",
"COM_FLUSH": "0",
"COM_SELECT": "1997221"
}
} ],
"requestStatus": "ok",
"total": 1
}
Database Deadlock Log
This RPC API returns the deadlocked transactions. Implemented for MySQL and its variants and for PostgreSQL.
Possible parameters:
- startdate
- enddate
- limit: can be included in any of the examples below. If startdate and enddate are omitted, limit defaults to 25 and up to 25 of the latest records are sent back. For pagination.
- offset: can be included if limit is set and startdate and enddate are omitted. For pagination.
Here are a few example requests:
curl -XPOST -d '{"operation": "txdeadlock", "startdate":"1408044387", "enddate":"1458044387"}' 'http://localhost:9500/11/stat'
or
curl -XPOST -d '{"operation": "txdeadlock", "startdate":"1408044387", "limit": "2"}' 'http://localhost:9500/11/stat'
or
curl -XPOST -d '{"operation": "txdeadlock", "limit": 25, "offset": 25}' 'http://localhost:9500/11/stat'
or
curl -XPOST -d '{"operation": "txdeadlock"}' 'http://localhost:9500/11/stat'
Example output
{
"data": [ {
"blocked_by_trx_id": "none",
"db": "sbtest",
"duration": "55",
"host": "localhost",
"hostId": 1439,
"info": "SELECT /*!40001 SQL_NO_CACHE */ * FROM `sbtest1`",
"innodb_status": "a lot of data",
"innodb_trx_id": "7371225",
"instance": "10.10.10.10:3306",
"internal_trx_id": "7371225",
"lastseen": "2016-03-09 13:57:36",
"message": "NULL",
"mysql_trx_id": "261",
"sql": "SELECT /*!40001 SQL_NO_CACHE */ * FROM `sbtest1`",
"state": "RUNNING",
"user": "backupuser"
}, ... ],
"requestStatus": "ok",
"total": 25
}
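The limit/offset pair described above is a conventional pagination scheme, so successive pages can be requested by stepping the offset. A small sketch (the `txdeadlock_pages` helper is illustrative, not part of the API):

```python
import json

def txdeadlock_pages(page_size=25, pages=2):
    """Yield txdeadlock request bodies for successive pages,
    using the limit/offset pagination described above."""
    for page in range(pages):
        yield json.dumps({"operation": "txdeadlock",
                          "limit": page_size,
                          "offset": page * page_size})

bodies = list(txdeadlock_pages(25, 2))
```

Each yielded body can be POSTed with curl or any HTTP client exactly as in the examples above.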
Get database storage (growth) info over time
This RPC API returns the database meta info (size) over time. Implemented for MySQL and its variants and for PostgreSQL.
Possible parameters:
- startdate: unix timestamp. Default is now() minus 31 days.
- enddate: unix timestamp. Default is now().
- dayofyear: INT. Filter on 'day of year', default 0 (no filtering). 0-366 are valid values.
- year: INT. Filter on year (use with 'dayofyear'), default 0 (no filtering).
- db: STRING. Filter on a particular database name, default empty string (no filtering).
- include_tables: BOOLEAN. Whether tables should be included in the reply, default false.
- limit: can be combined with startdate/enddate. If startdate and enddate are omitted, limit defaults to 31 and up to 31 of the latest records are sent back.
Here are a few example requests and responses:
curl -X POST -d '{"token":"igIAHI3cALXIuOvM", "operation": "getdbgrowth"}' http://localhost:9500/2/stat
or
curl -X POST -d '{"token":"igIAHI3cALXIuOvM", "operation": "getdbgrowth", "startdate":"1408044387", "enddate":"1518044387"}' http://localhost:9500/2/stat
or
curl -X POST -d '{"token":"xxx", "operation": "getdbgrowth", "dayofyear":134, "db":"sbtest", "include_tables":true}' http://localhost:9500/2/stat
Example output with "dayofyear":134, "db":"sbtest", "include_tables":true:
{
"cc_timestamp": 1526288962,
"data": [
{
"class_name": "CmonDbStats",
"created": "May 14 10:12:36",
"data_size": 4046848,
"database_count": 1,
"datadir": "/var/lib/mysql/",
"dbs": [
{
"data_size": 3686400,
"db_name": "sbtest",
"index_size": 278528,
"row_count": 12000,
"table_count": 1,
"tables": [
{
"count": 12000,
"data_size": 3686400,
"db_name": "sbtest",
"engine": "InnoDB",
"index_size": 278528,
"row_count": 12000,
"table_name": "sbtest1"
} ]
} ],
"free_datadir_size": 31091101696,
"hostname": "10.10.10.16",
"index_size": 278528,
"port": 3306,
"total_datadir_size": 41072066560,
"year": 2018,
"yearday": 134
} ],
"requestStatus": "ok",
"total": 1
}
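The dayofyear/year pair in the example above can be derived from a calendar date. A sketch (the `getdbgrowth_body` helper is illustrative, not part of the API):

```python
import json
from datetime import date

def getdbgrowth_body(db="", include_tables=False, day=None):
    """Build a getdbgrowth request body; `day` is a datetime.date used to
    fill the dayofyear/year filters (the token field is omitted here)."""
    body = {"operation": "getdbgrowth", "db": db, "include_tables": include_tables}
    if day is not None:
        body["dayofyear"] = day.timetuple().tm_yday
        body["year"] = day.year
    return json.dumps(body)

# May 14, 2018 is day 134 of the year, matching the reply above.
body = getdbgrowth_body(db="sbtest", include_tables=True, day=date(2018, 5, 14))
```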
Old PHP CMONAPI (web UI) compatible APIs
The request is a simple GET request, in the following syntax:
http://${IP_OF_CMON_HOST}:9500/${CLUSTERID}/stat/ram_history.json?hostId=2&startdate=1410441439
The request should also contain 'token=RPC_TOKEN' if cmon.cnf specifies an 'rpc_key'.
The following paths are implemented in the web-ui compatible way:
- /clusterid/stat/ram_history.json
- /clusterid/stat/network_history.json
- /clusterid/stat/cpu_history.json
- /clusterid/stat/disk_history.json
Currently only the following filtering options are available for the results:
- startdate (unix timestamp), defaults to now() - one day
- enddate
- hostId (please note that all parameters are case sensitive)
- interface (applies for network_history.json)
Some example requests & replies (... means some items were removed from the results for better readability):
$ curl 'http://localhost:9500/12/stat/ram_history.json?hostId=2&startdate=1410443539'
{
"data": [
{
"hostid": 2,
"ram_free": 755666944,
"ram_total": 1307394048,
"ram_used": 551727104,
"report_ts": 1410443507,
"swap_free": 0,
"swap_total": 0,
"swap_used": 0
},
{
"hostid": 2,
"ram_free": 749642752,
"ram_total": 1307394048,
"ram_used": 557751296,
"report_ts": 1410443539,
"swap_free": 0,
"swap_total": 0,
"swap_used": 0
} ],
"requestStatus": "ok",
"total": 6
}
$ curl 'http://localhost:9500/12/stat/disk_history.json?hostId=1'
{
"data": [
{
"_reads": 0,
"_writes": 0,
"disk_name": "sda2",
"free_bytes": 18801885184,
"hostid": 1,
"report_ts": 1410440765,
"total_bytes": 116954603520
},
{
"_reads": 0,
"_writes": 0,
"disk_name": "sda2",
"free_bytes": 18825199616,
"hostid": 1,
"report_ts": 1410442113,
"total_bytes": 116954603520
} ],
"requestStatus": "ok",
"total": 3
}
$ curl 'http://localhost:9500/12/stat/network_history.json?hostId=1&startdate=1410444489'
{
"data": [
{
"hostid": 1,
"interface": "eth1",
"report_ts": 1410441439,
"rx_bytes_sec": 6178.55,
"tx_bytes_sec": 3490.93
},
{
"hostid": 1,
"interface": "eth1",
"report_ts": 1410442113,
"rx_bytes_sec": 181965,
"tx_bytes_sec": 16914.5
} ],
"requestStatus": "ok",
"total": 2
}
$ curl 'http://localhost:9500/12/stat/cpu_history.json?hostId=1&startdate=1410444489'
{
"data": [
{
"coreid": 65535,
"hostid": 1,
"idle": 0.350358,
"iowait": 0.0601709,
"report_ts": 1410441438,
"steal": 0,
"sys": 0.0559242,
"usr": 0.533547,
"util": 0.116095
},
{
"coreid": 65535,
"hostid": 1,
"idle": 0.504347,
"iowait": 0.00402398,
"report_ts": 1410442112,
"steal": 0,
"sys": 0.0424423,
"usr": 0.449187,
"util": 0.0464663
} ],
"requestStatus": "ok",
"total": 2
}
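The four history endpoints share the same URL shape, so a single helper covers them all. A sketch (the `history_url` helper is illustrative, not part of the API; the startdate default mirrors the server-side one):

```python
import time
from urllib.parse import urlencode

def history_url(base, cluster_id, metric, host_id, startdate=None):
    """Build one of the web-ui compatible history URLs
    (metric is one of: ram, network, cpu, disk)."""
    if startdate is None:
        # matches the documented server default of now() - one day
        startdate = int(time.time()) - 86400
    query = urlencode({"hostId": host_id, "startdate": startdate})
    return "%s/%d/stat/%s_history.json?%s" % (base, cluster_id, metric, query)

url = history_url("http://localhost:9500", 12, "ram", 2, startdate=1410443539)
```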
hostsStats
This API was made especially for the web UI, to show some generic "current" statuses/stats of the nodes on the node overview page.
The '_interval' field applies to the disk stats fields (_reads, _writes, _sectors_read, _sectors_written) and is used to calculate the actual rates.
curl 'http://localhost:9500/14/stat?operation=hostsStats'
or
{
"operation": "hostsStats"
}
and the reply is:
{
"cc_timestamp": 1617969732,
"requestStatus": "ok",
"total": 2,
"data":
[
{
"_interval": 4584,
"_reads": 18,
"_sectors_read": 144,
"_sectors_written": 32040,
"_writes": 1644,
"cmon_status": 1617969731,
"host_is_up": true,
"hostname": "127.0.0.1",
"hoststatus": "CmonHostFailed",
"id": 1,
"idle": 0.392248,
"iowait": 0.00555624,
"loadavg1": 42.22,
"loadavg15": 43.82,
"loadavg5": 48.16,
"maintenance_mode_active": false,
"ping_status": 1,
"ping_time": 1,
"report_ts": 1617969727,
"rx_bytes_sec": 9355287,
"sshfailcount": 0,
"status": 18,
"steal": 0,
"sys": 0.198644,
"tx_bytes_sec": 11237699,
"uptime": 2.15277e+06,
"usr": 0.40034
},
{
"_interval": 4767,
"_reads": 31,
"_sectors_read": 248,
"_sectors_written": 35312,
"_writes": 1845,
"cmon_status": 1617969727,
"host_is_up": true,
"hostname": "127.0.0.2",
"hoststatus": "CmonHostOnline",
"id": 2,
"idle": 0.390797,
"iowait": 0.00565976,
"loadavg1": 42.22,
"loadavg15": 43.82,
"loadavg5": 48.16,
"maintenance_mode_active": false,
"ping_status": -1,
"ping_time": -1,
"report_ts": 1617969727,
"rx_bytes_sec": 9352601,
"sshfailcount": 0,
"status": 10,
"steal": 0,
"sys": 0.200034,
"tx_bytes_sec": 11268840,
"uptime": 2.15277e+06,
"usr": 0.400308
}
]
}
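Turning the raw disk counters into rates means dividing by the reported interval. A sketch, assuming `_interval` is in milliseconds and sectors are 512 bytes (both are assumptions, not stated by the API):

```python
def disk_rates(entry, sector_size=512):
    """Convert the raw hostsStats disk counters into per-second rates.

    Assumes `_interval` is in milliseconds and a 512-byte sector size;
    both are assumptions, not stated by the API."""
    secs = entry["_interval"] / 1000.0
    return {
        "reads_per_sec": entry["_reads"] / secs,
        "writes_per_sec": entry["_writes"] / secs,
        "bytes_written_per_sec": entry["_sectors_written"] * sector_size / secs,
    }

# counters taken from the first host in the reply above
sample = {"_interval": 4584, "_reads": 18, "_writes": 1644, "_sectors_written": 32040}
rates = disk_rates(sample)
```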
getInfo
With this operation it is possible to get cluster info from the internal info collector.
A few info names (this list is not complete and is subject to change):
- "conf.configfile"
- "conf.clustername"
- "conf.os"
- "conf.clusterid"
- "conf.hostname"
- "conf.clustertype"
- "conf.clustertypestr"
- "cmon.hostname"
- "cmon.domainname"
- "cmon.uptime"
- "cmon.starttime"
An example call and reply format:
$ curl -X POST -d '{"operation": "getInfo", "name":"conf.clustertypestr" }' http://localhost:9500/14/stat
{
"data": "postgresql_single",
"requestStatus": "ok"
}
It is also possible (here too) to use a GET request:
curl 'http://localhost:9500/12/stat?operation=getInfo&name=cmon.hostname'
getMongoShardingStatus
Queries the sharding status of a mongo cluster. The query is done on the first mongos node found. All the fields (except total_chunks) and their names come from the result of the 'sh.status()' mongo command. For more information please take a look at https://docs.mongodb.com/manual/reference/method/sh.status/index.html
The status.shards.replica_set_0.total_chunks field and its counterparts for the other replica sets (shards) are not in the original sh.status() report. The cmon backend calculates this value by summing the chunk counts of all the databases and their collections.
The value of status.balancer.Migration_Results_for_the_last_24_hours.Message is optional. If it is non-empty, then Success and Failure are meaningless because there were no migrations to succeed or fail.
An example call and reply format:
echo '{"token": "KpFrUV2sdEn9uMSr", "operation": "getmongoshardingstatus"}' | curl -sX POST -H"Content-Type: application/json" -d @- http://192.168.30.4:9500/66/stat
{
"status": {
"shards": {
"replica_set_1": {
"total_chunks": 6
},
"replica_set_0": {
"total_chunks": 6
}
},
"balancer": {
"Last_reported_error": "could not get updated shard list from config server due to Operation timed out",
"Failed_balancer_rounds_in_last_5_attempts": 2,
"Migration_Results_for_the_last_24_hours": {
"Failure": 0,
"Message": "No recent migrations",
"FailureMessage": "",
"Success": 0
},
"Currently_enabled": true,
"Currently_running": false,
"Time_of_Reported_error": "Wed Oct 25 2017 02:28:41 GMT-0400 (EDT)"
},
"autosplit": {
"Currently_enabled": false
},
"databases": {
"test1": {
"collections": {
"testCollection1": {
"chunks": {
"replica_set_1": 2,
"replica_set_0": 2
},
"balancing": true
},
"testCollection2": {
"chunks": {
"replica_set_1": 2,
"replica_set_0": 2
},
"balancing": true
}
},
"primary": "replica_set_0",
"partitioned": true
},
"test2": {
"collections": {
"testCollection": {
"chunks": {
"replica_set_1": 2,
"replica_set_0": 2
},
"balancing": true
}
},
"primary": "replica_set_0",
"partitioned": true
}
}
},
"requestStatus": "ok",
"cc_timestamp": 1509095519
}
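The per-shard total_chunks derivation described above (summing chunk counts across all databases and collections) can be sketched like this; the `total_chunks` helper and the shortened shard names are illustrative, not part of the API:

```python
def total_chunks(status):
    """Re-derive per-shard total_chunks by summing the chunk counts of
    every collection in every database, as the cmon backend does."""
    totals = {}
    for db in status["databases"].values():
        for coll in db["collections"].values():
            for shard, n in coll["chunks"].items():
                totals[shard] = totals.get(shard, 0) + n
    return totals

# the chunk distribution from the example reply above, abbreviated
status = {
    "databases": {
        "test1": {"collections": {
            "testCollection1": {"chunks": {"rs0": 2, "rs1": 2}},
            "testCollection2": {"chunks": {"rs0": 2, "rs1": 2}}}},
        "test2": {"collections": {
            "testCollection": {"chunks": {"rs0": 2, "rs1": 2}}}},
    }
}
totals = total_chunks(status)
```

Summing the three collections gives 6 chunks per shard, matching the total_chunks values in the reply above.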
The RPC for Maintenance
Maintenance periods can currently be registered for hosts and for entire clusters, so a new maintenance period is registered by providing either a "hostname" or a "cluster_id" to identify the target of the maintenance.
Every maintenance period has an owner identified by the username and user ID. No maintenance period can be registered without these properties. The RPC can't be used to register maintenance periods under the username "system" or the user ID 0; these are for internal use only.
Maintenance periods can overlap. It is possible to create a window from e.g. XX:00 to YY:00 and then create another maintenance period that starts within the first window and stretches beyond YY:00 to ZZ:00. In this case the maintenance period is extended to cover XX:00 to ZZ:00.
A number of jobs implicitly create a maintenance period. It is currently not possible to set the maintenance period for a job inside the UI, although CMON supports setting a time for most jobs.
The following jobs put the node/nodes in maintenance mode:
- Restore Backup: 60 minutes or until the restore is finished.
- Remove Node: 10 minutes or until the job is finished.
- Start Cluster: 60 minutes or until the job is finished.
- Stop Cluster: 60 minutes or until the job is finished.
- Stop Node: 30 minutes, no less and no more. After 30 minutes the node exits maintenance mode. Since the node is then CmonHostShutdown, no alarms should be sent, and it is a bug if they are.
- Add Node: 10 minutes or until the job is finished.
- Upgrade Cluster: 20 minutes or until the job is finished.
- Rolling Restart: 20 minutes or until the job is finished.
- Replication Failover: 5 minutes or until the job is finished.
- Replication Switchover: 5 minutes or until the job is finished.
These settings should be made configurable from the UI.
The addMaintenance RPC Call
"addMaintenance" is a simple request that adds a new maintenance period for one host or for an entire cluster. Both a host and a cluster may have multiple, even overlapping, maintenance periods, so the RPC reply holds a "UUID" field that identifies the newly added maintenance period.
The call should contain either a "hostname" field or a "cluster_id" field to create a host or a cluster maintenance period.
{
"operation": "addMaintenance",
"hostname": "10.10.10.1",
"initiate": "2026-08-19T05:54:01.043Z",
"deadline": "2026-08-19T06:54:01.043Z",
"user": "joe",
"user_id": 42,
"reason": "Some reason."
}
{
"UUID": "97fa5a14-b085-4279-9a53-68a0483cd7a0",
"cc_timestamp": 1617969732,
"requestStatus": "ok"
}
The getMaintenance RPC Call
The getMaintenance call can be used to return all the maintenance periods for a cluster or for a host. The RPC v2 has the same call, please check out the documentation there.
{
"operation": "getMaintenance"
}
{
"cc_timestamp": 1617969732,
"requestStatus": "ok",
"maintenance_records":
[
{
"class_name": "CmonMaintenanceInfo",
"hostname": "10.10.10.1",
"is_active": false,
"maintenance_periods":
[
{
"UUID": "97fa5a14-b085-4279-9a53-68a0483cd7a0",
"deadline": "2026-08-19T06:54:01.043Z",
"groupid": 0,
"groupname": "",
"initiate": "2026-08-19T05:54:01.043Z",
"is_active": false,
"reason": "Some reason.",
"userid": 42,
"username": "joe"
}
]
}
]
}
The removeMaintenance RPC Call
The removeMaintenance RPC call can be used to remove maintenance periods before their scheduled end.
Similarly to the addMaintenance call, the removeMaintenance call should also contain a "hostname" or "cluster_id" to address host or cluster maintenance periods.
The call should also have a UUID field to identify which maintenance period should be removed.
{
"operation": "removeMaintenance",
"hostname": "10.10.10.1",
"user": "joe",
"user_id": 42,
"UUID": "97fa5a14-b085-4279-9a53-68a0483cd7a0"
}
{
"cc_timestamp": 1617969732,
"requestStatus": "ok"
}
The RPC for the Query Monitor
Both MySQL and MongoDB clusters support this.
{
"operation" : "qm_topqueries|qm_queryoutliers",
"order_by_col" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
The list of top queries API
List the top queries.
\code{.js}
{
"operation" : "qm_topqueries",
"order_by_col" : STRING (default 'total_time'),
"limit" : NUMBER,
"offset" : NUMBER
}
\endcode
\code{.js}
{
"cc_timestamp": 1523433168,
"data": [
{
"avg_query_time": 90,
"canonical": "SELECT ?",
"command": "",
"count": 153840,
"db": "proxydemo",
"host": "10.10.10.19",
"hostid": 48,
"info": "SELECT 1",
"last_seen": 1523427205,
"lock_time": 0,
"max_query_time": 14105,
"min_query_time": 15,
"query_id": 9014708004629822606,
"query_time": 14105,
"rows_examined": 0,
"rows_sent": 0,
"state": "",
"stddev": 19,
"sum_created_tmp_disk_tables": 0,
"sum_created_tmp_tables": 0,
"sum_lock_time": 0,
"sum_no_good_index_used": 0,
"sum_no_index_used": 0,
"sum_query_time": 13927861,
"sum_rows_examined": 0,
"sum_rows_sent": 0,
"user": "proxydemo",
"variance": 398
},
... ],
"requestStatus": "ok",
"total": N
}
\endcode
The list of query outliers API
List the outliers.
\code{.js}
{
"operation" : "qm_queryoutliers",
"order_by_col" : STRING (default 'last_seen'),
"limit" : NUMBER,
"offset" : NUMBER,
"startTime": EPOCH,
"stopTime": EPOCH
}
\endcode
\code{.js}
{
"cc_timestamp": 1523433168,
"data": [
{
"avg_query_time": 90,
"canonical": "SELECT ?",
"count": 1,
"hostid": 48,
"info": "SELECT 1",
"last_seen": 1523427205,
"lock_time": 0,
"max_query_time": 14105,
"min_query_time": 15,
"query_id": 9014708004629822606,
"query_time": 14105,
"stddev": 19
},
... ],
"requestStatus": "ok",
"total": N
}
\endcode
Delete query API
Purge the querymonitor.
\code{.js}
{
"operation" : "qm_purge"
}
\endcode
\code{.js}
{
"cc_timestamp": 1523434192,
"data": [],
"requestStatus": "ok",
"total": 1
}
\endcode
The Performance API
/${CLUSTERID}/performance
Get tables without primary key
Description: API to query (currently MySQL only) tables without primary keys.
This RPC returns the database name, table name, and the used database engine for each table without a primary key.
curl -X POST -d '{"operation": "no_primary_keys"}' http://localhost:9500/23/performance?token=jgqeVoNPO3x90sDJ
{
"cc_timestamp": 1551101659,
"data": [
{
"database": "cmon",
"engine": "InnoDB",
"table": "cmon_job_tags"
},
{
"database": "xyz",
"engine": "MyISAM",
"table": "myisam1"
},
{
"database": "xyz",
"engine": "MyISAM",
"table": "myisam2"
},
{
"database": "xyz",
"engine": "InnoDB",
"table": "t5"
},
{
"database": "xyz",
"engine": "InnoDB",
"table": "t6"
} ],
"requestStatus": "ok",
"total": 5
}
Get tables with MyISAM engine
Description: API to query (currently MySQL only) tables with the legacy MyISAM engine.
This RPC returns the database name, table name, and the used database engine for each MyISAM table.
curl -X POST -d '{"operation": "myisam_tables"}' http://localhost:9500/23/performance?token=jgqeVoNPO3x90sDJ
{
"cc_timestamp": 1551101908,
"data": [
{
"database": "xyz",
"engine": "MyISAM",
"table": "myisam1"
},
{
"database": "xyz",
"engine": "MyISAM",
"table": "myisam2"
} ],
"requestStatus": "ok",
"total": 2
}
List the redundant indexes
Description: API to query (currently MySQL only) the redundant indexes.
This RPC returns the database name, table name, the main index, and the redundant index details (names and the columns in the indexes).
curl -X POST -d '{"operation": "redundant_indexes"}' http://localhost:9500/23/performance?token=jgqeVoNPO3x90sDJ
{
"cc_timestamp": 1551102096,
"data": [
{
"advise": "Duplicate indexes use more spaces and may slow down writes. Plan carefully if you want to remove duplicate indexes.",
"database": "cmon",
"index": "index2",
"index_columns": "recipient,created",
"redidx_columns": "recipient",
"redundant_index": "index4",
"table": "outgoing_digest_messages"
},
{
"advise": "Duplicate indexes use more spaces and may slow down writes. Plan carefully if you want to remove duplicate indexes.",
"database": "xyz",
"index": "two",
"index_columns": "intcol1,charcol1",
"redidx_columns": "intcol1",
"redundant_index": "one",
"table": "t6"
} ],
"requestStatus": "ok",
"total": 2
}
The Clusters API
This is the documentation for the RPC API found on /0/clusters and on /${CLUSTERID}/clusters.
Setting db node config file manually
Manually set the database node's config file. Might be very useful in rare stuck situations when the config file name cannot be figured out automatically.
\code{.js}
{
"operation" : "setdbnodeconfigfile",
"hostname" : STRING, // host to do operation on
"port" : NUMBER, // identify service on which to do operation
"configfilepath" : STRING // path of config file for the database node
}
\endcode
\code{.js}
{
"requestStatus": "ok",
"cc_timestamp": 1533304104
}
\endcode
Setting db server certificate id manually
Manually set the database node's server certificate id value. Might be very useful in rare stuck situations when this certificate id is missing or has a wrong value.
{
"operation" : "setdbservercertificateid",
"hostname" : STRING,
"port" : NUMBER,
"certificateid" : STRING
}
{
"requestStatus": "ok",
"cc_timestamp": 1533304104
}
Enabling profiler for mongo
Enable mongo profiler for a specific node/db.
\code{.js}
{
"operation" : "enabledbprofiler",
"hostname" : STRING, // host to do operation on
"port" : NUMBER, // identify service on which to do operation
"dbname" : STRING, // database name to collect profiler data about
"slowms" : NUMBER // collect db queries taking more time than the specified milliseconds
}
\endcode
\code{.js}
{
"requestStatus": "ok",
"cc_timestamp": 1533304104
}
\endcode
Enabling profiler for all mongo nodes
Enable mongo profiler for all nodes and databases in a specific cluster by using settings API.
echo '{"token": "eFgG0MPh7x04Z5Hz",
"configuration_values": {"QUERY_SAMPLE_INTERVAL": "1"},
"operation": "setvalues"}' |
curl -sX POST -H"Content-Type: application/json" -d @-
http:
\code{.js}
{
"operation" : "setvalues",
"configuration_values" : {
"QUERY_SAMPLE_INTERVAL" : 1 // enable collecting slow db queries on all nodes and databases
}
}
\endcode
\code{.js}
{
"configuration_values" : {
"QUERY_SAMPLE_INTERVAL": "1" // the new value that was just set
},
"requestStatus": "ok",
"cc_timestamp": 1533304104
}
\endcode
Set the slow query time limit by using the settings API. Queries taking longer will be collected.
echo '{"token": "eFgG0MPh7x04Z5Hz",
"configuration_values": {"LONG_QUERY_TIME": "100"},
"operation": "setvalues"}' |
curl -sX POST -H"Content-Type: application/json" -d @-
http:
\code{.js}
{
"operation" : "setvalues",
"configuration_values" : {
"LONG_QUERY_TIME" : NUMBER // collect db queries taking more time than the specified milliseconds on all nodes and databases
}
}
\endcode
\code{.js}
{
"configuration_values" : {
"LONG_QUERY_TIME": STRING // the new value that was just set
},
"requestStatus": "ok",
"cc_timestamp": 1533304104
}
\endcode
Disabling profiler for mongo
Disable mongo profiler for a specific node/db.
\code{.js}
{
"operation" : "disabledbprofiler",
"hostname" : STRING, // host to do operation on
"port" : NUMBER, // identify service on which to do operation
"dbname" : STRING // database name not to collect profiler data about
}
\endcode
\code{.js}
{
"requestStatus": "ok",
"cc_timestamp": 1533304104
}
\endcode
Disabling profiler on all mongo nodes
Disable mongo profiler on all nodes and databases by using settings API.
echo '{"token": "eFgG0MPh7x04Z5Hz",
"configuration_values": {"QUERY_SAMPLE_INTERVAL": "0"},
"operation": "setvalues"}' |
curl -sX POST -H"Content-Type: application/json" -d @-
http:
\code{.js}
{
"operation" : "setvalues",
"configuration_values" : {
"QUERY_SAMPLE_INTERVAL" : 0 // disable collecting slow db queries on all nodes and databases
}
}
\endcode
\code{.js}
{
"configuration_values" : {
"QUERY_SAMPLE_INTERVAL": "0" // the new value that was just set
},
"requestStatus": "ok",
"cc_timestamp": 1533304104
}
\endcode
Clear profiler data collected by mongo
Clear profiler data collected by mongo for a specific node/db.
\code{.js}
{
"operation" : "cleardbprofiler",
"hostname" : STRING, // host to do operation on
"port" : NUMBER, // identify service on which to do operation
"dbname" : STRING // database name to clear profiler data about
}
\endcode
\code{.js}
{
"requestStatus": "ok",
"cc_timestamp": 1533304104
}
\endcode
Query profiler stats collected by mongo
Query profiler stats collected by mongo for a specific node/db.
\code{.js}
{
"operation" : "profilerstat",
"hostname" : STRING, // host to do operation on
"port" : NUMBER, // identify service on which to do operation
"dbname" : STRING, // database name to query profiler data about
"slowms" : NUMBER // return queries taking more time than the specified milliseconds
}
\endcode
\code{.js}
{
"requestStatus": "ok",
"cc_timestamp": 1533304104
}
\endcode
Append tag to cluster
Appends a tag to a cluster
$ curl -XPOST -d'{"operation":"appendtag","tag":"newtag2"}' http:
[
"newtag",
"newtag2"
]
Remove tag from cluster
Removes a tag from a cluster
$ curl -XPOST -d'{"operation":"removetag","tag":"newtag2"}' http:
[
"newtag"
]
setTags
Set the tags of a cluster; please note that this method overwrites all existing tags.
$ curl -XPOST -d'{"operation":"settags","tags":"a;b"}' http:
[
"a",
"b"
]
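As the example shows, settags takes the tags as a single semicolon-separated string. A small sketch of building such a request (the `settags_body` helper is illustrative, not part of the API):

```python
import json

def settags_body(tags):
    """Build a settags request; the tags travel as one semicolon-separated
    string and replace all existing tags on the cluster."""
    return json.dumps({"operation": "settags", "tags": ";".join(tags)})

body = settags_body(["prod", "galera"])
```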
The GetClusterInfo RPC call
The "getclusterinfo" RPC call is designed to provide the basic information about one specific cluster. The CmonClusterInfo Class holds the properties of the clusters in the reply messages.
Example:
{
"operation": "getclusterinfo",
"with_hosts": true,
"with_host_properties": "class_name, hostname, port, ip",
"cluster_id": 200
}
{
"cc_timestamp": 1617970195,
"requestStatus": "ok",
"cluster":
{
"class_name": "CmonClusterInfo",
"cdt_path": "/",
"acl": "user::rwx,group::rwx,other::---",
"tags": [],
"cluster_auto_recovery": true,
"cluster_id": 200,
"cluster_name": "default_repl_200",
"cluster_type": "MYSQLCLUSTER",
"configuration_file": "configs/UtCmonRpcService_01.conf",
"effective_privileges": "",
"log_file": "./cmon-ut-cmonrpcservice.log",
"maintenance_mode_active": false,
"managed": true,
"node_auto_recovery": true,
"state": "MGMD_NO_CONTACT",
"status_text": "No contact to the management node.",
"vendor": "oracle",
"version": "8.0",
"alarm_statistics":
{
"class_name": "CmonAlarmStatistics",
"cluster_id": 200,
"created_after": "1970-01-01T00:00:00.000Z",
"critical": 3,
"reported_after": "1970-01-01T00:00:00.000Z",
"warning": 1
},
"group_owner":
{
"class_name": "CmonGroup",
"cdt_path": "/groups",
"owner_user_id": 1,
"owner_user_name": "system",
"owner_group_id": 1,
"owner_group_name": "admins",
"acl": "user::rwx,group::rwx,other::---",
"created": "2021-04-09T12:07:12.234Z",
"group_id": 1,
"group_name": "admins"
},
"hosts":
[
{
"class_name": "CmonMySqlHost",
"hostname": "127.0.0.2",
"ip": "127.0.0.2",
"port": 3306
},
{
"class_name": "CmonHost",
"hostname": "127.0.0.1",
"ip": "127.0.0.1",
"port": 9555
}
],
"job_statistics":
{
"class_name": "CmonJobStatistics",
"cluster_id": 200,
"by_state":
{
"ABORTED": 0,
"DEFINED": 0,
"DEQUEUED": 0,
"FAILED": 0,
"FINISHED": 0,
"RUNNING": 0
}
},
"owner":
{
"class_name": "CmonUser",
"cdt_path": "/",
"owner_user_id": 1,
"owner_user_name": "system",
"owner_group_id": 1,
"owner_group_name": "admins",
"acl": "user::rwx,group::r--,other::r--",
"created": "2021-04-09T12:07:12.240Z",
"disabled": false,
"first_name": "System",
"last_failed_login": "",
"last_login": "2021-04-09T12:07:19.868Z",
"last_name": "User",
"n_failed_logins": 0,
"origin": "CmonDb",
"password_encrypted": "c9ae848282b1b047af611aacd4e128b4c30a53910aa7a5e2ad0462e5a90db8a5",
"password_format": "sha256",
"password_salt": "8891e717-6966-47b0-8f7e-2d907d4be2f8",
"suspended": false,
"user_id": 1,
"user_name": "system",
"groups":
[
{
"class_name": "CmonGroup",
"cdt_path": "/groups",
"owner_user_id": 1,
"owner_user_name": "system",
"owner_group_id": 1,
"owner_group_name": "admins",
"acl": "user::rwx,group::rwx,other::---",
"created": "2021-04-09T12:07:12.234Z",
"group_id": 1,
"group_name": "admins"
}
],
"timezone":
{
"class_name": "CmonTimeZone",
"name": "Central European Time",
"abbreviation": "CET",
"offset": -3600,
"use_dst": false
}
}
}
}
The GetAllClusterInfo RPC call
The "GetAllClusterInfo" RPC call can be used to get the cluster info for all clusters that are known to the Cmon controller. The CmonClusterInfo class holds the properties of the clusters in the reply messages.
The returned data is much the same as in the "GetClusterInfo" call, but instead of returning information about one cluster it returns a list that holds information about multiple clusters.
The RPC calls that get clusters can also be used to get the host list of the clusters. Use the "with_hosts" and "with_host_properties" optional fields to request the host list and filter the host properties.
Example:
$ curl -XPOST -d '{"operation":"getallclusterinfo", "with_hosts":true, "token": "d62d8adf4f32f5f4a388888eb4def7c86860257c"}' http://127.0.0.1:9500/0/clusters
{
"operation": "getallclusterinfo",
"with_hosts": true,
"with_host_properties": "class_name, hostname, port, ip",
"with_license_check": true,
"cluster_ids": [ 200 ]
}
{
"cc_timestamp": 1617970195,
"requestStatus": "ok",
"total": 2,
"clusters":
[
{
"class_name": "CmonClusterInfo",
"cdt_path": "/",
"acl": "user::rwx,group::rwx,other::---",
"tags": [],
"cluster_auto_recovery": true,
"cluster_id": 200,
"cluster_name": "default_repl_200",
"cluster_type": "MYSQLCLUSTER",
"configuration_file": "configs/UtCmonRpcService_01.conf",
"effective_privileges": "",
"log_file": "./cmon-ut-cmonrpcservice.log",
"maintenance_mode_active": false,
"managed": true,
"node_auto_recovery": true,
"state": "MGMD_NO_CONTACT",
"status_text": "No contact to the management node.",
"vendor": "oracle",
"version": "8.0",
"alarm_statistics":
{
"class_name": "CmonAlarmStatistics",
"cluster_id": 200,
"created_after": "1970-01-01T00:00:00.000Z",
"critical": 3,
"reported_after": "1970-01-01T00:00:00.000Z",
"warning": 1
},
"group_owner":
{
"class_name": "CmonGroup",
"cdt_path": "/groups",
"owner_user_id": 1,
"owner_user_name": "system",
"owner_group_id": 1,
"owner_group_name": "admins",
"acl": "user::rwx,group::rwx,other::---",
"created": "2021-04-09T12:07:12.234Z",
"group_id": 1,
"group_name": "admins"
},
"hosts":
[
{
"class_name": "CmonMySqlHost",
"hostname": "127.0.0.2",
"ip": "127.0.0.2",
"port": 3306
},
{
"class_name": "CmonHost",
"hostname": "127.0.0.1",
"ip": "127.0.0.1",
"port": 9555
}
],
"job_statistics":
{
"class_name": "CmonJobStatistics",
"cluster_id": 200,
"by_state":
{
"ABORTED": 0,
"DEFINED": 0,
"DEQUEUED": 0,
"FAILED": 0,
"FINISHED": 0,
"RUNNING": 0
}
},
"owner":
{
"class_name": "CmonUser",
"cdt_path": "/",
"owner_user_id": 1,
"owner_user_name": "system",
"owner_group_id": 1,
"owner_group_name": "admins",
"acl": "user::rwx,group::r--,other::r--",
"created": "2021-04-09T12:07:12.240Z",
"disabled": false,
"first_name": "System",
"last_failed_login": "",
"last_login": "2021-04-09T12:07:19.868Z",
"last_name": "User",
"n_failed_logins": 0,
"origin": "CmonDb",
"password_encrypted": "c9ae848282b1b047af611aacd4e128b4c30a53910aa7a5e2ad0462e5a90db8a5",
"password_format": "sha256",
"password_salt": "8891e717-6966-47b0-8f7e-2d907d4be2f8",
"suspended": false,
"user_id": 1,
"user_name": "system",
"groups":
[
{
"class_name": "CmonGroup",
"cdt_path": "/groups",
"owner_user_id": 1,
"owner_user_name": "system",
"owner_group_id": 1,
"owner_group_name": "admins",
"acl": "user::rwx,group::rwx,other::---",
"created": "2021-04-09T12:07:12.234Z",
"group_id": 1,
"group_name": "admins"
}
],
"timezone":
{
"class_name": "CmonTimeZone",
"name": "Central European Time",
"abbreviation": "CET",
"offset": -3600,
"use_dst": false
}
}
}
],
"license":
{
"class_name": "CmonLicense",
"used_nodes": 3
},
"license_check":
{
"class_name": "CmonLicenseCheck",
"has_license": false,
"status_text": "No license found."
}
}
Some of the fields are as follows:
with_hosts
If this optional argument is provided with a true value, the host list (the nodes) will also be returned in the reply. The "hosts" property of the reply will hold a list of host objects.
with_host_properties
This property can be passed with a list of property names to filter the properties of the returned hosts. If this argument is omitted and the "with_hosts" property is true, all the available properties of the hosts will be returned.
with_containers
If this argument is provided with a true value, the list of containers related to the cluster(s) will be returned. The "containers" property of the reply will hold a list of container objects. The connection between the hosts and the containers can be made using the "container_id" property of the hosts.
Please note that the controller will return information about the containers that are hosted by any of the registered container servers. If no container server is registered, the controller will not be able to return any container info.
with_container_properties
This optional argument can hold a list of strings defining which properties of the containers should be returned. If this argument is omitted and the "with_containers" parameter is true, all the available properties of the containers will be returned.
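These optional arguments can be assembled into a request payload before it is POSTed to the /0/clusters path. A minimal sketch, following the field names from the examples above; the helper name itself is illustrative, not part of the API:

```python
import json

def build_cluster_info_request(token, with_hosts=False, host_properties=None,
                               cluster_ids=None):
    """Assemble a getallclusterinfo payload (illustrative helper)."""
    request = {"operation": "getallclusterinfo", "token": token}
    if with_hosts:
        request["with_hosts"] = True
        if host_properties:
            # comma-separated list filtering the returned host properties
            request["with_host_properties"] = ", ".join(host_properties)
    if cluster_ids:
        request["cluster_ids"] = list(cluster_ids)
    return request

payload = build_cluster_info_request(
    "d62d8adf4f32f5f4a388888eb4def7c86860257c",
    with_hosts=True,
    host_properties=["class_name", "hostname", "port", "ip"],
    cluster_ids=[200])
body = json.dumps(payload)  # body to POST to http://127.0.0.1:9500/0/clusters
```

The resulting JSON matches the shape of the request example shown above.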
The "listAccounts" Request
Performs an SQL query for a list of database accounts (users) on the requested hosts (by default the master).
Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "listAccounts",
"hosts" : STRING,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
hosts
Selects the hosts to query database users on. When not specified, the master or primary cluster node will be used by default. The value should be a ';'-separated list of 'hostname' or 'hostname:port' values.
orderByColumn
May contain a column name to order the results by (descending). By default this is empty, so no ordering is applied.
limit
A limit on the number of returned records (table rows). By default, when this is 0, a small default number of records will be returned. The maximum number of results returned is 1000.
offset
An offset of the first row to return from the result set. By default this is 0.
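Since the server caps results at 1000 rows, larger account lists have to be paged with "limit" and "offset". A sketch of such a loop, assuming a caller-supplied fetch callable that handles the transport and returns the decoded reply dict (the reply fields follow the listAccounts example below):

```python
def iter_accounts(fetch, page_size=100):
    """Yield account rows page by page using limit/offset.

    `fetch` is any callable that sends a listAccounts request and
    returns the decoded reply dict (transport is out of scope here).
    """
    offset = 0
    while True:
        reply = fetch({"operation": "listAccounts",
                       "limit": page_size, "offset": offset})
        if reply.get("requestStatus") != "ok":
            raise RuntimeError(reply.get("errorMsg", "listAccounts failed"))
        for row in reply.get("queryResults", []):
            yield row
        count = reply.get("queryResultCount", 0)
        offset += count
        if count == 0 or offset >= reply.get("queryResultTotalCount", 0):
            break

# Fake transport with 3 rows to demonstrate the paging loop:
rows = [{"hostname": h} for h in ("a", "b", "c")]
def fake_fetch(req):
    page = rows[req["offset"]:req["offset"] + req["limit"]]
    return {"requestStatus": "ok", "queryResults": page,
            "queryResultCount": len(page), "queryResultTotalCount": len(rows)}

collected = list(iter_accounts(fake_fetch, page_size=2))
```

The loop stops as soon as the cumulative offset reaches queryResultTotalCount, so it issues the minimum number of requests.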
Here is a minimal example request:
{
"operation": "listAccounts"
}
Here is a minimal example request with limit:
{
"operation": "listAccounts",
"limit" : 1
}
Here is an example result:
{
"cc_timestamp": 1500543343,
"queryResultCount": 1,
"queryResultTotalCount": 8,
"queryResults": [
{
"hostname": "192.168.30.74",
"port": 3306,
"accounts": [
{
"class_name": "CmonAccount",
"grants": "REPLICATION CLIENT",
"host_allow": "192.168.30.74",
"own_database": "",
"password": "*0976C3DEA4D15D73572FEBD24E8BB7B1375EB818",
"user_name": "proxysql-monitor"
}
]
}
],
"requestStatus": "ok"
}
The Original Clusters API
This is the original and default interpretation of the /0/clusters and /${CLUSTERID}/clusters RPC calls. To use this, leave the "operation" field empty in the RPC request. (It is also ok to send any non-recognized string in the "operation" field, but it is of course easier to simply omit it.)
/0/clusters or /${CLUSTERID}/clusters (the clusterId will be ignored anyway)
On this REST path you will get back the list of clusters managed by the cluster controller instance. (You can get the data of one specific cluster by specifying an 'id' field.)
A few example calls/replies:
$ curl 'http://localhost:9500/0/clusters'
{
"cc_timestamp": 1444310882,
"clusters": [
{
"clusterAutorecovery": true,
"configFile": "/etc/cmon.d/cmon_5.cnf",
"id": 5,
"logFile": "/var/log/cmon_5.log",
"name": "cluster_5",
"nodeAutorecovery": true,
"running": true,
"status": 0,
"statusText": "",
"type": "postgresql_single"
},
{
"clusterAutorecovery": true,
"configFile": "/etc/cmon.d/cmon_6.cnf",
"id": 6,
"logFile": "/var/log/cmon_6.log",
"name": "cluster_6",
"nodeAutorecovery": true,
"running": true,
"status": 0,
"statusText": "",
"type": "mysql_single"
} ],
"info":
{
"hasLicense": true,
"licenseExpires": 83,
"licenseStatus": "License found.",
"version": "1.2.12"
},
"requestStatus": "ok"
}
$ curl 'http://10.0.0.6:9500/0/clusters'
{
"cc_timestamp": 1432889560,
"data": [
{
"clusterAutorecovery": true,
"configFile": "/etc/cmon.d/cmon_4.cnf",
"id": 4,
"logFile": "/var/log/cmon_4.log",
"name": "cluster_4",
"nodeAutorecovery": true,
"running": true,
"status": 2,
"statusText": "Cluster started.",
"type": "postgresql_single"
} ],
"requestStatus": "ok",
"total": 1
}
The ProxySql API
/${CLUSTERID}/proxysql
The "topQueries" Request
Performs an SQL query for statistics about the top queries sent to ProxySQL. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "topQueries",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
orderByColumn
May contain a column name to order the results by (descending). By default this is empty, so no ordering is applied.
limit
A limit on the number of returned records (table rows). By default, when this is 0, a small default number of records will be returned. The maximum number of results returned is 1000.
offset
An offset of the first row to return from the result set. By default this is 0.
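All of the query-style ProxySQL requests in this chapter (topQueries, processlist, queryRules, queryServers, queryUsers, queryVariables, and so on) share these same five optional fields, so a single helper can assemble any of them. An illustrative sketch, not part of the API:

```python
def proxysql_query(operation, host_name="", port=0,
                   order_by_column="", limit=0, offset=0):
    """Build a ProxySQL query request for /${CLUSTERID}/proxysql.

    Defaults mirror the documented behaviour: an empty hostName picks
    the first ProxySQL node found, port 0 is not used for host
    selection, and limit 0 returns a small default number of rows
    (capped at 1000).
    """
    return {
        "operation": operation,
        "hostName": host_name,
        "port": port,
        "orderByColumn": order_by_column,
        "limit": limit,
        "offset": offset,
    }

# e.g. the ten slowest-by-total-time digests:
req = proxysql_query("topQueries", order_by_column="sum_time", limit=10)
```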
Here is an example result:
{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name" : "CmonProxySqlTopQuery",
"count_star": "2",
"digest": "0x99531AEFF718C501",
"digest_text": "show tables",
"first_seen": "1483775487",
"hostgroup": "10",
"last_seen": "1483775674",
"max_time": "578",
"min_time": "217",
"schemaname": "proxydemo",
"sum_time": "795",
"username": "proxydemo"
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 13,
"requestStatus": "ok"
}
The "processlist" Request
Performs an SQL query for statistics about the running processes in ProxySQL. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "processlist",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
orderByColumn
May contain a column name to order the results by (descending). By default this is empty, so no ordering is applied.
limit
A limit on the number of returned records (table rows). By default, when this is 0, a small default number of records will be returned. The maximum number of results returned is 1000.
offset
An offset of the first row to return from the result set. By default this is 0.
Here is an example result:
{
"cc_timestamp": 1572588476,
"queryResultCount": 1,
"queryResultTotalCount": 1,
"queryResults": [
{
"SessionID": "832688",
"ThreadID": "0",
"class_name": "CmonProxySqlProcessList",
"cli_host": "10.10.10.1",
"cli_port": "35482",
"command": "Query",
"db": "proxydemo",
"extended_info": null,
"hostgroup": "20",
"info": "select sleep(30)",
"l_srv_host": "10.10.10.12",
"l_srv_port": "33656",
"srv_host": "10.10.10.12",
"srv_port": "3306",
"status_flags": "0",
"time_ms": "12016",
"user": "proxydemo"
} ],
"requestStatus": "ok"
}
The "resetQueryRuleStats" Request
Resets the statistics of the query rules in ProxySQL. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "resetQueryRuleStats",
"hostName" : STRING,
"port" : NUMBER
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "queryRules" Request
Performs an SQL query for the query rules configured in ProxySQL. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "queryRules",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
orderByColumn
May contain a column name to order the results by (descending). By default this is empty, so no ordering is applied.
limit
A limit on the number of returned records (table rows). By default, when this is 0, a small default number of records will be returned. The maximum number of results returned is 1000.
offset
An offset of the first row to return from the result set. By default this is 0.
Here is an example result:
{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name" : "CmonProxySqlQueryRule",
"active": "1",
"apply": "1",
"cache_ttl": "NULL",
"client_addr": "NULL",
"comment": "NULL",
"delay": "NULL",
"destination_hostgroup": "10",
"digest": "NULL",
"error_msg": "NULL",
"flagIN": "0",
"flagOUT": "NULL",
"log": "NULL",
"match_digest": "NULL",
"match_pattern": "^SELECT . .. .cache FOR UPDATE",
"mirror_flagOUT": "NULL",
"mirror_hostgroup": "NULL",
"negate_match_pattern": "0",
"proxy_addr": "NULL",
"proxy_port": "NULL",
"reconnect": "NULL",
"replace_pattern": "NULL",
"retries": "NULL",
"rule_id": "100",
"schemaname": "NULL",
"timeout": "NULL",
"username": "NULL"
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 3,
"requestStatus": "ok"
}
The "insertQueryRule" Request
Inserts a query rule on the specified ProxySQL host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "insertQueryRule",
"hostName" : STRING,
"port" : NUMBER,
"queryRule" : {
"class_name" : "CmonProxySqlQueryRule",
"rule_id" : NUMBER
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
queryRule
The properties should follow the CmonProxySqlQueryRule class. It has no default value, so this is a mandatory field.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "deleteQueryRule" Request
Deletes a query rule on the specified ProxySQL host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "deleteQueryRule",
"hostName" : STRING,
"port" : NUMBER,
"queryRule" : {
"class_name" : "CmonProxySqlQueryRule",
"rule_id" : NUMBER
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
queryRule
The properties should follow the CmonProxySqlQueryRule class, but only rule_id is used for constructing the DELETE SQL command. It has no default value, so this is a mandatory field.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "updateQueryRule" Request
Updates a query rule on the specified ProxySQL host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "updateQueryRule",
"hostName" : STRING,
"port" : NUMBER,
"queryRule" : {
"class_name" : "CmonProxySqlQueryRule",
"rule_id" : NUMBER,
"new_rule_id" : NUMBER
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
queryRule
The properties should follow the CmonProxySqlQueryRule class, except that when the rule_id (the primary key) is to be changed, the new value should be supplied in the "new_rule_id" property. It has no default value, so this is a mandatory field.
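The new_rule_id convention can be captured in a small payload builder: the current rule_id selects the rule, and new_rule_id is set only when the primary key itself changes. A hypothetical helper (the function name is made up; field names follow the request above):

```python
def build_update_query_rule(rule_id, changes, new_rule_id=None):
    """Assemble an updateQueryRule request (illustrative helper).

    `changes` holds the CmonProxySqlQueryRule properties to update;
    `new_rule_id` is only included when the primary key itself changes.
    """
    rule = {"class_name": "CmonProxySqlQueryRule", "rule_id": rule_id}
    rule.update(changes)
    if new_rule_id is not None:
        rule["new_rule_id"] = new_rule_id
    return {"operation": "updateQueryRule", "queryRule": rule}

# Deactivate rule 100 and renumber it to 101 in one call:
req = build_update_query_rule(100, {"active": "0"}, new_rule_id=101)
```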
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "queryHostgroups" Request
Performs an SQL query for the MySQL server hostgroups set for the ProxySQL host. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "queryHostgroups",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
orderByColumn
May contain a column name to order the results by (descending). By default this is empty, so no ordering is applied.
limit
A limit on the number of returned records (table rows). By default, when this is 0, a small default number of records will be returned. The maximum number of results returned is 1000.
offset
An offset of the first row to return from the result set. By default this is 0.
Here is an example result:
{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name": "CmonProxySqlHostgroup",
"writer_hostgroup": 10,
"reader_hostgroup": 20,
"comment" : "host groups"
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 3,
"requestStatus": "ok"
}
The "queryServers" Request
Performs an SQL query for the MySQL servers registered in ProxySQL. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "queryServers",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
orderByColumn
May contain a column name to order the results by (descending). By default this is empty, so no ordering is applied.
limit
A limit on the number of returned records (table rows). By default, when this is 0, a small default number of records will be returned. The maximum number of results returned is 1000.
offset
An offset of the first row to return from the result set. By default this is 0.
Here is an example result:
{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name" : "CmonProxySqlServer",
"Bytes_data_recv": "0",
"Bytes_data_sent": "0",
"ConnERR": "0",
"ConnFree": "0",
"ConnUsed": "0",
"Latency_ms": "285",
"Queries": "0",
"status": "ONLINE",
"comment": "read server",
"compression": "0",
"hostgroup_id": "20",
"hostname": "192.168.30.11",
"max_connections": "100",
"max_latency_ms": "0",
"max_replication_lag": "10",
"port": "3306",
"use_ssl": "0",
"weight": "1"
}],
"hostgroupIdList": [20],
"queryResultCount" : 1,
"queryResultTotalCount" : 3,
"requestStatus": "ok"
}
The "insertMysqlServer" Request
Inserts a MySQL server into the specified ProxySQL host configuration. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "insertMysqlServer",
"hostName" : STRING,
"port" : NUMBER,
"mysqlServer" : {
"class_name" : "CmonProxySqlServer",
"hostgroup_id" : NUMBER,
"hostname" : STRING
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
mysqlServer
The properties should follow the CmonProxySqlServer class. It has no default value, so this is a mandatory field.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "deleteMysqlServer" Request
Deletes a MySQL server from the ProxySQL host configuration. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "deleteMysqlServer",
"hostName" : STRING,
"port" : NUMBER,
"mysqlServer" : {
"class_name" : "CmonProxySqlServer",
"hostgroup_id" : NUMBER,
"hostname" : STRING,
"port" : NUMBER
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
mysqlServer
The properties should follow the CmonProxySqlServer class, but only hostgroup_id, hostname, and port are used for constructing the DELETE SQL command. It has no default value, so this is a mandatory field.
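Since only the composite key (hostgroup_id, hostname, port) identifies the row to delete, a minimal payload needs just those three properties plus the class name. An illustrative sketch (the helper name and the default port are assumptions):

```python
def build_delete_mysql_server(hostgroup_id, hostname, port=3306):
    """Assemble a deleteMysqlServer request (illustrative helper).

    Only the composite key (hostgroup_id, hostname, port) is used by
    the controller to construct the DELETE statement.
    """
    return {
        "operation": "deleteMysqlServer",
        "mysqlServer": {
            "class_name": "CmonProxySqlServer",
            "hostgroup_id": hostgroup_id,
            "hostname": hostname,
            "port": port,
        },
    }

# Remove the reader from hostgroup 20 shown in the queryServers example:
req = build_delete_mysql_server(20, "192.168.30.11")
```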
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "updateMysqlServer" Request
Updates a MySQL server configuration on the specified ProxySQL host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "updateMysqlServer",
"hostName" : STRING,
"port" : NUMBER,
"mysqlServer" : {
"class_name" : "CmonProxySqlServer",
"hostgroup_id" : NUMBER,
"hostname" : STRING,
"port" : NUMBER,
"new_hostgroup_id" : NUMBER,
"new_hostname" : STRING,
"new_port" : NUMBER
}
}
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "queryProxysqlServers" Request
Performs an SQL query for the ProxySQL servers registered in ProxySQL. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "queryProxysqlServers",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
orderByColumn
May contain a column name to order the results by (descending). By default this is empty, so no ordering is applied.
limit
A limit on the number of returned records (table rows). By default, when this is 0, a small default number of records will be returned. The maximum number of results returned is 1000.
offset
An offset of the first row to return from the result set. By default this is 0.
Here is an example result:
{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name" : "CmonProxySqlServers",
"hostname": "localhost",
"port": "6032",
"weight": "0",
"comment": ""
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 3,
"requestStatus": "ok"
}
The "insertProxysqlServer" Request
Inserts a ProxySQL server into the specified ProxySQL host configuration. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "insertProxysqlServer",
"hostName" : STRING,
"port" : NUMBER,
"proxysqlServer" : {
"class_name" : "CmonProxySqlServers",
"hostname" : STRING,
"port" : NUMBER
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "deleteProxysqlServer" Request
Deletes a ProxySQL server from the ProxySQL host configuration. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "deleteProxysqlServer",
"hostName" : STRING,
"port" : NUMBER,
"proxysqlServer" : {
"class_name" : "CmonProxySqlServers",
"hostname" : STRING
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "updateProxysqlServer" Request
Updates a ProxySQL server configuration on the specified ProxySQL host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "updateProxysqlServer",
"hostName" : STRING,
"port" : NUMBER,
"proxysqlServer" : {
"class_name" : "CmonProxySqlServers",
"hostname" : STRING
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "queryUsers" Request
Performs an SQL query for the MySQL users set for the ProxySQL host. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "queryUsers",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
orderByColumn
May contain a column name to order the results by (descending). By default this is empty, so no ordering is applied.
limit
A limit on the number of returned records (table rows). By default, when this is 0, a small default number of records will be returned. The maximum number of results returned is 1000.
offset
An offset of the first row to return from the result set. By default this is 0.
Here is an example result:
{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name": "CmonProxySqlUser",
"active": "1",
"backend": "1",
"default_hostgroup": "10",
"default_schema": "proxydemo",
"fast_forward": "0",
"frontend": "1",
"max_connections": "10000",
"password": "proxydemo",
"schema_locked": "0",
"transaction_persistent": "0",
"use_ssl": "0",
"username": "proxydemo"
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 3,
"requestStatus": "ok"
}
The "insertMysqlUser" Request
Inserts a MySQL user on the specified ProxySQL host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "insertMysqlUser",
"hostName" : STRING,
"port" : NUMBER,
"mysqlUser" : {
"class_name" : "CmonProxySqlUser",
"username" : STRING,
"password" : STRING,
"db_database" : STRING,
"db_privs" : STRING,
"frontend" : NUMBER,
"backend" : NUMBER
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
mysqlUser
The properties should follow the CmonProxySqlUser class. It has no default value, so this is a mandatory field.
db_database
An extra property for CmonProxySqlUser, usable only for insertion (for now). This should have the form "databasename.*" or "*.*"; by default it will be set to "*.*".
db_privs
An extra property for CmonProxySqlUser, usable only for insertion (for now). This should be a comma-separated list of MySQL user privileges. It has no default value, and at least one privilege must be defined, or the user creation will fail.
List of supported privilege values:
CREATE, DROP, GRANT OPTION, LOCK TABLES, REFERENCES, EVENT, ALTER, DELETE, INDEX, INSERT, SELECT, UPDATE, CREATE TEMPORARY TABLES, TRIGGER, CREATE VIEW, SHOW VIEW, ALTER ROUTINE, CREATE ROUTINE, EXECUTE, FILE, CREATE TABLESPACE, CREATE USER, PROCESS, PROXY, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, SHOW DATABASES, SHUTDOWN, SUPER, ALL [PRIVILEGES]
frontend
Not mandatory; set to 1 by default.
backend
Not mandatory; set to 1 by default.
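Because user creation fails unless db_privs names at least one supported privilege, it can pay to validate the list client-side before sending. A sketch under those assumptions (the helper name is made up; the privilege set is copied from the list above):

```python
SUPPORTED_PRIVS = {
    "CREATE", "DROP", "GRANT OPTION", "LOCK TABLES", "REFERENCES", "EVENT",
    "ALTER", "DELETE", "INDEX", "INSERT", "SELECT", "UPDATE",
    "CREATE TEMPORARY TABLES", "TRIGGER", "CREATE VIEW", "SHOW VIEW",
    "ALTER ROUTINE", "CREATE ROUTINE", "EXECUTE", "FILE",
    "CREATE TABLESPACE", "CREATE USER", "PROCESS", "PROXY", "RELOAD",
    "REPLICATION CLIENT", "REPLICATION SLAVE", "SHOW DATABASES",
    "SHUTDOWN", "SUPER", "ALL", "ALL PRIVILEGES",
}

def build_insert_mysql_user(username, password, db_database="*.*",
                            db_privs=("SELECT",)):
    """Assemble an insertMysqlUser request (illustrative helper)."""
    privs = [p.strip().upper() for p in db_privs]
    unknown = [p for p in privs if p not in SUPPORTED_PRIVS]
    if not privs or unknown:
        raise ValueError("unsupported or empty db_privs: %s" % unknown)
    return {
        "operation": "insertMysqlUser",
        "mysqlUser": {
            "class_name": "CmonProxySqlUser",
            "username": username,
            "password": password,
            "db_database": db_database,
            "db_privs": ",".join(privs),  # comma-separated list
        },
    }

req = build_insert_mysql_user("proxydemo", "secret",
                              db_database="proxydemo.*",
                              db_privs=["SELECT", "INSERT"])
```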
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "deleteMysqlUser" Request
Deletes a MySQL user on the specified ProxySQL host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "deleteMysqlUser",
"hostName" : STRING,
"port" : NUMBER,
"mysqlUser" : {
"class_name" : "CmonProxySqlUser",
"username" : STRING
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
mysqlUser
The properties should follow the CmonProxySqlUser class. It has no default value, so this is a mandatory field.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "updateMysqlUser" Request
Updates a MySQL user on the specified ProxySQL host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "updateMysqlUser",
"hostName" : STRING,
"port" : NUMBER,
"mysqlUser" : {
"class_name" : "CmonProxySqlUser",
"username" : STRING,
"password" : STRING,
"frontend" : NUMBER,
"backend" : NUMBER
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
The port of the ProxySQL instance, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
mysqlUser
The properties should follow the CmonProxySqlUser class. It has no default value, so this is a mandatory field.
frontend
Not mandatory; set to 1 by default.
backend
Not mandatory; set to 1 by default.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "importMysqlUsers" Request
Receives lists of CmonProxySqlUser objects for a list of MySQL nodes, in order to import the specified users from those MySQL nodes into ProxySQL.
The users' global privileges, accessible databases, privileges on those databases, and passwords will be read from the MySQL source host.
Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "importMysqlUsers",
"hostName" : STRING,
"port" : NUMBER,
"userList" : [
{
"sourceHostName" : STRING,
"sourcePort" : NUMBER,
"proxySqlUsers" : [
{
"class_name" : "CmonProxySqlUser",
"username" : STRING,
"host_allow" : STRING,
"default_hostgroup": "10"
}
]
}
]
}
hostName
Selects host where proxySQL (to query on) is running. By default, when this is empty, the first found proxySQL node will be used.
port
Port of proxySQL in case the host have multiple proxySQLs running. By default this is 0, and not used for host selection.
userList
A list containing, for each MySQL host to import from, a list of CmonProxySqlUser objects representing the users to import.
sourceHostName
Mysql host to import users from.
sourcePort
Port for the mysql node to import users from.
proxySqlUsers
A list of CmonProxySqlUser objects to import. Properties such as default_hostgroup are used as in the create-new-user call, except for password, db_database and db_privs, which are read from the source MySQL node.
host_allow
Not a real CmonProxySqlUser property, only an extension used by this import. It specifies exactly which MySQL user account's password and privileges to use for the new MySQL user that ProxySQL will use. Creating this new MySQL user is a necessary side effect of the import, so that logins from the defined ProxySQL host are possible.
Here is an example result:
The returned JSON structure may contain a userList array structured the same way as the input parameter. This array contains all the users whose import was not finished because an error happened during the import procedure.
In practice, after every successful import the just-imported CmonProxySqlUser object is removed from the list, and in case of an error the remaining list is returned in the answer.
{
"cc_timestamp": 1484041378,
"requestStatus": "ok",
"userList" : []
}
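Because the reply's userList shrinks as users are imported, a client can retry only the leftovers. A small Python sketch of that bookkeeping (the helper names are ours, not the API's):

```python
def build_import_request(host_name, port, user_list):
    """Assemble an "importMysqlUsers" request body; user_list follows
    the structure documented above (sourceHostName/sourcePort/proxySqlUsers)."""
    return {
        "operation": "importMysqlUsers",
        "hostName": host_name,
        "port": port,
        "userList": user_list,
    }

def remaining_imports(reply):
    """On error the reply's "userList" holds the users whose import did
    not finish; an empty list means every user was imported."""
    return reply.get("userList", [])

reply = {"requestStatus": "ok", "userList": []}
leftover = remaining_imports(reply)   # empty list: nothing left to retry
```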
The "queryVariables" Request
Performs an SQL query for the ProxySQL global variables set on the ProxySQL host. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "queryVariables",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
Port of ProxySQL, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
orderByColumn
May contain a column name by which to order the results (descending). By default this is empty, so no ordering is applied.
limit
A limit on the number of returned records (table lines). By default, when this is 0, some low number of records will be returned. The maximum number of results returned is 1000.
offset
An offset of the first line to return of the result set (table lines). By default this is 0.
Here is an example result:
{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name": "CmonProxySqlUser",
"variable_name": "mysql-shun_on_failures",
"variable_value": "5"
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 75,
"requestStatus": "ok"
}
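Since a single reply is capped at 1000 rows while queryResultTotalCount reports the full row count, a client pages through larger sets by adjusting offset. A Python sketch (helper names are illustrative):

```python
def query_variables_page(host_name="", port=0, order_by="", limit=100, offset=0):
    """Assemble one page of a "queryVariables" request. The server caps
    a single reply at 1000 rows, so larger sets must be paged via offset."""
    return {
        "operation": "queryVariables",
        "hostName": host_name,
        "port": port,
        "orderByColumn": order_by,
        "limit": limit,
        "offset": offset,
    }

def page_offsets(total_count, limit):
    """Offsets needed to fetch total_count rows, limit rows at a time,
    e.g. for the queryResultTotalCount of 75 above."""
    return list(range(0, total_count, limit))

requests = [query_variables_page(limit=30, offset=o) for o in page_offsets(75, 30)]
```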
The "updateVariable" Request
Updates a variable on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "updateVariable",
"hostName" : STRING,
"port" : NUMBER,
"variable" : {
"class_name" : "CmonProxySqlVariable",
"variable_name" : STRING,
"variable_value" : STRING
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
Port of ProxySQL, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
variable
Properties should be according to CmonProxySqlVariable class. Has no default value, so this is a mandatory field.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "querySchedules" Request
Performs an SQL query for the scheduler records set on the ProxySQL host. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "querySchedules",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
Port of ProxySQL, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
orderByColumn
May contain a column name by which to order the results (descending). By default this is empty, so no ordering is applied.
limit
A limit on the number of returned records (table lines). By default, when this is 0, some low number of records will be returned. The maximum number of results returned is 1000.
offset
An offset of the first line to return of the result set (table lines). By default this is 0.
Here is an example result:
{
"cc_timestamp": 1484041378,
"queryResults":
[ {
"class_name": "CmonProxySqlSchedule",
"id": "1",
"active": "1",
"interval_ms" : "100",
"filename" : "/path/to/scriptefile",
"arg1": "first script argument",
"arg2": "second script argument",
"arg3": "third script argument",
"arg4": "forth script argument",
"arg5": "fifth script argument",
"comment" : "comment"
}],
"queryResultCount" : 1,
"queryResultTotalCount" : 1,
"requestStatus": "ok"
}
The "insertSchedule" Request
Inserts the schedule record on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is a minimal example request:
{
"operation" : "insertSchedule",
"hostName" : STRING,
"port" : NUMBER,
"schedule" : {
"class_name" : "CmonProxySqlSchedule",
"filename" : "/path/to/scriptefile"
}
}
Here is a complete example request:
{
"operation" : "insertSchedule",
"hostName" : STRING,
"port" : NUMBER,
"schedule" : {
"class_name" : "CmonProxySqlSchedule",
"active": "1",
"interval_ms" : "100",
"filename" : "/path/to/scriptefile",
"arg1": "first script argument",
"arg2": "second script argument",
"arg3": "third script argument",
"arg4": "forth script argument",
"arg5": "fifth script argument",
"comment" : "comment"
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
Port of ProxySQL, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
schedule
Properties should be according to the CmonProxySqlSchedule class. Has no default value, so this is a mandatory field.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "deleteSchedule" Request
Deletes a schedule record from the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "deleteSchedule",
"hostName" : STRING,
"port" : NUMBER,
"schedule" : {
"class_name" : "CmonProxySqlSchedule",
"username" : STRING
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
Port of ProxySQL, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
schedule
Properties should be according to CmonProxySqlSchedule class. Has no default value, so this is a mandatory field.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "updateSchedule" Request
Updates a schedule record on the specified proxysql host. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "updateSchedule",
"hostName" : STRING,
"port" : NUMBER,
"schedule" : {
"class_name" : "CmonProxySqlSchedule",
"id": "1",
"active": "1",
"interval_ms" : "100",
"filename" : "/path/to/scriptefile",
"arg1": "first script argument",
"arg2": "second script argument",
"arg3": "third script argument",
"arg4": "forth script argument",
"arg5": "fifth script argument",
"comment" : "comment"
}
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
Port of ProxySQL, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
schedule
Properties should be according to CmonProxySqlSchedule class. Has no default value, so this is a mandatory field.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "updateAdminCredentialsInCmon" Request
Updates the ProxySQL admin credentials in cmon only. Useful when the ProxySQL admin name and/or password was changed outside of cmon, leaving cmon with wrong credentials.
Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation": "updateAdminCredentialsInCmon",
"hostName": "192.168.30.71",
"adminName": "adminname",
"adminPassword": "adminpwd"
}
hostName
Selects host for which to update values in cmon. By default, when this is empty, the first found proxySQL node will be used.
adminName
Admin user's name defined in proxysql global_variables table.
adminPassword
Admin user's password defined in proxysql global_variables table.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "updateMonitorCredentialsInCmon" Request
Updates the ProxySQL monitor user credentials in cmon only. Useful when the ProxySQL monitor user name and/or password was changed outside of cmon, leaving cmon with wrong values.
Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation": "updateMonitorCredentialsInCmon",
"hostName": "192.168.30.71",
"monitorUserName": "monitorName",
"monitorUserPassword": "monitorPassword"
}
hostName
Selects host for which to update values in cmon. By default, when this is empty, the first found proxySQL node will be used.
monitorUserName
Monitor user's name defined in proxysql global_variables table.
monitorUserPassword
Monitor user's password defined in proxysql global_variables table.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "queryproxysqlcluster" Request
Performs an SQL query for the ProxySQL cluster node records of the specified ProxySQL host. Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "queryproxysqlcluster",
"hostName" : STRING,
"port" : NUMBER,
"orderByColumn" : STRING,
"limit" : NUMBER,
"offset" : NUMBER
}
hostName
Selects the host where the ProxySQL instance to query is running. By default, when this is empty, the first ProxySQL node found will be used.
port
Port of ProxySQL, in case the host has multiple ProxySQL instances running. By default this is 0, and it is not used for host selection.
orderByColumn
May contain a column name by which to order the results (descending). By default this is empty, so no ordering is applied.
limit
A limit on the number of returned records (table lines). By default, when this is 0, some low number of records will be returned. The maximum number of results returned is 1000.
offset
An offset of the first line to return of the result set (table lines). By default this is 0.
Here is an example result:
{
"cc_timestamp": 1484041378,
"queryResults":
[{
"class_name": "CmonProxySqlCluster",
"comment": "proxysql_group",
"hostname": "10.10.10.10",
"port": "6032",
"weight": "0"
},
..
{ ... }],
"queryResultCount" : 3,
"queryResultTotalCount" : 3,
"requestStatus": "ok"
}
The Cloud Services API
/0/cloud
The "proxy" Request to clustercontrol-cloud service
Forwards the request as a REST HTTP request to the clustercontrol-cloud service and returns its response in a JSON object named "response". Returns "requestStatus" = "ok" and sets queryResults on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
If the clustercontrol-cloud answer holds JSON, it is parsed and included as the response.json object. If the parse fails, the answer provided by the service can be found as a raw string in response.body. A non-JSON response body is likewise returned in response.body.
Here is an example request with JSON returned:
{
"operation" : "proxy",
"method" : "GET",
"uri" : "/aws/vm",
"body" : ""
}
Here is the result:
{
"cc_timestamp": 1484041378,
"response":
{
"headers": {
"date": "Thu, 25 Jan 2018 11:15:18 GMT",
"content-length": "1553",
"content-type": "application/json; charset=UTF-8"
},
"json": [
{
"network": {
"public_ip": [
"52.58.107.236",
"ec2-52-58-107-236.eu-central-1.compute.amazonaws.com"
],
"private_ip": [
"172.31.2.217",
"ip-172-31-2-217.eu-central-1.compute.internal"
]
},
"subnet_id": "subnet-6a1d1c12",
"image": "ami-653bd20a",
"vpc_id": "vpc-8238dfeb",
"region": "eu-central-1",
"firewalls": [
"sg-d2bc5abb"
],
"size": "c3.xlarge",
"id": "i-068e665a16334283a",
"cloud": "aws",
"name": "V1-DO_NOT_REMOVE_S9S-JENKINS-BUGZILLA-SERVER.jenkins.severalnines.com"
}
],
"status_code": 200,
"reason_phrase": "OK"
},
"requestStatus": "ok"
}
Here is an example request with plain text returned:
{
"operation" : "proxy",
"method" : "GET",
"uri" : "/aws/storage/list-buckets",
"body" : ""
}
Here is the result:
{
"cc_timestamp": 1484041378,
"response":
{
"body": "Not Found",
"headers": {
"date": "Thu, 25 Jan 2018 11:15:23 GMT",
"content-length": "9",
"content-type": "text/plain; charset=utf-8"
},
"status_code": 404,
"reason_phrase": "Not Found"
},
"requestStatus": "ok"
}
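A client consuming the "proxy" reply therefore has to handle both shapes of the "response" object: parsed JSON in response.json, or a raw string in response.body. A Python sketch of that branching (the function name is ours):

```python
def cloud_reply_payload(response):
    """Extract the payload of a "proxy" reply's "response" object:
    the parsed JSON when the service returned JSON, otherwise the
    raw body string, per the two example replies above."""
    if "json" in response:
        return response["json"]
    return response.get("body")

ok = {"status_code": 200, "json": [{"cloud": "aws"}]}
missing = {"status_code": 404, "body": "Not Found"}
```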
The "list_credentials" Request
Returns a JSON structure containing all registered cloud credentials in the "result" field. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "list_credentials",
}
Here is an example result:
{
"cc_timestamp": 1484041378,
"result":
{
"aws" : [
{
"id" : : 0
"name" : "My aws backup target",
"comment" : "For hat purpose we are using this cloud service.",
"credentials" :
{
"access_key_id" : "AKFBXXXXXXXXXKO4ZY2A",
"access_key_secret" : "CzrDyNiZgRcS0Wt2jnXXXXXXXXXXOpZJHX3I5QT/",
"access_key_region" : "eu-central-1"
}
},
...
],
"gce" : [
{
"id" : 1
"name" : "My gce backup target",
"comment" : "For hat purpose we are using this cloud service.",
"credentials" :
{
"type" : "service_account",
"project_id" : "Project ID",
"private_key_id" : "Private key Id",
"private_key" : "Private key contents",
"client_email" : "Client Email",
"client_id" : "Client ID",
"auth_uri" : "https://accounts.google.com/o/oauth2/auth",
"token_uri" : "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url" : "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url" : "Client x509 certificate url"
}
},
...
]
},
"requestStatus": "ok"
}
Get a specific credential
Description: With this RPC API it is possible to obtain the credentials by ID and provider string.
An example request & reply:
$ curl -XPOST -d '{"token": "ng59qVifPA7PS881", "operation": "get_credentials", "id": 0, "provider": "aws"}' http://localhost:9500/185/cloud
{
"cc_timestamp": 1506417375,
"data":
{
"access_key_id": "XKIAIXUKPGHXIGTO6RVQ",
"access_key_region": "eu-west-2",
"access_key_secret": "XIsU6BcDiac5UWiRewVXCmT6Fv6X5YDkYgUXVPrZ"
},
"requestStatus": "ok"
}
The "add_credentials" Request
Adds a cloud service credentials JSON structure to the backend's collection and saves it in the /var/lib/cmon/cloud_credentials.json file. Returns "requestStatus" = "ok" and sets "added_id" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "add_credentials",
"provider" : STRING,
"name" : STRING,
"comment" : STRING,
"credentials" :
{
}
}
provider
Can have these values:
- "aws" for amazon web service credentials
- "gce" for google clound engine credentials
Has no default value, and thus must be defined.
name
A user-chosen name for identifying a credentials profile in a human-friendly way. By default the value is empty, which is valid. Note that the same name may not be used twice for the same cloud provider.
comment
Any remark by the user related to the saved credentials; it does not have any technical meaning. By default this is empty.
credentials
The possible structure depends on the value of provider. Please check the example result of the list_credentials operation for the possible fields of the credentials structures. Also please note that this structure is defined by our tool named clud, which can upload and download files to/from cloud providers' file storage services. More information can be found at https://github.com/severalnines/cloudlink-go/tree/develop/clud
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok",
"added_id" : NUMBER
}
added_id
Every credentials saved on the backend will have a unique id assigned. That id can be used for update and remove operations and is returned in this field after adding a new credentials profile.
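A typical client stores the returned added_id so the profile can later be updated or removed. A Python sketch of that pattern (the helper names are ours):

```python
def build_add_credentials(provider, name, credentials, comment=""):
    """Assemble an "add_credentials" request body; provider is
    "aws" or "gce", per the field documentation above."""
    return {
        "operation": "add_credentials",
        "provider": provider,
        "name": name,
        "comment": comment,
        "credentials": credentials,
    }

def added_id(reply):
    """Return the id the backend assigned to the new profile; it is
    needed later for update_credentials / remove_credentials."""
    if reply.get("requestStatus") != "ok":
        raise RuntimeError(reply.get("errorMsg", "add_credentials failed"))
    return reply["added_id"]

req = build_add_credentials("aws", "My aws backup target",
                            {"access_key_region": "eu-central-1"})
```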
The "update_credentials" Request
Updates a cloud service credentials profile (JSON structure) in the backend's collection and saves it in the /var/lib/cmon/cloud_credentials.json file. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "update_credentials",
"provider" : STRING,
"id" : NUMBER,
"name" : STRING,
"comment" : STRING,
"credentials" :
{
}
}
provider
Can have these values:
- "aws" for amazon web service credentials
- "gce" for google clound engine credentials
Has no default value, and thus must be defined.
id
The unique identifier number of the credentials profile to update. The id can be found for each profile in the result of list_credentials rpc operation or returned in the field "added_id" when add_credentials operation is used.
name
A user-chosen name for identifying a credentials profile in a human-friendly way. By default the value is empty, which is valid. Note that the same name may not be used twice for the same cloud provider.
comment
Any remark by the user related to the saved credentials; it does not have any technical meaning. By default this is empty.
credentials
The possible structure depends on the value of provider. Please check the example result of the list_credentials operation for the possible fields of the credentials structures. Also please note that this structure is defined by our tool named clud, which can upload and download files to/from cloud providers' file storage services. More information can be found at https://github.com/severalnines/cloudlink-go/tree/develop/clud
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The "remove_credentials" Request
Removes a cloud service credentials profile (JSON structure) from the backend's collection and saves the remaining credentials in the /var/lib/cmon/cloud_credentials.json file. Returns "requestStatus" = "ok" on success. Returns "requestStatus" != "ok" and sets errorMsg on failure.
Here is an example request:
{
"operation" : "remove_credentials",
"provider" : STRING,
"id" : NUMBER,
}
provider
Can have these values:
- "aws" for amazon web service credentials
- "gce" for google clound engine credentials
Has no default value, and thus must be defined.
id
The unique identifier number of the credentials profile to remove. The id can be found for each profile in the result of list_credentials rpc operation or returned in the field "added_id" when add_credentials operation is used.
Here is an example result:
{
"cc_timestamp": 1484041378,
"requestStatus": "ok"
}
The password reset API
Description: These APIs are made for the UI to be able to maintain password reset tokens and send emails out to the users.
forgot_password
Description: This operation can be used to trigger a password reset procedure for a UI user: a token will be generated (with an expiration time) and an email will be sent to the user containing the specified URL (with the token appended).
Arguments:
- email_address: the user's email address
- base_url: the URL from browser + the password specific paths (must be urlencoded)
An example request and reply:
$ curl 'http://localhost:9500/0/passwordreset?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d&operation=forgot_password&[email protected]&base_url=https%3A%2F%2Ftest.severalnines.com%2Fclustercontrol%2Fpasswordreset%26token%3D'
{
"cc_timestamp": 1514908388,
...
}
reset_password
Description: Once the user received the e-mail and clicked on the link, the frontend must get the token from the URL and pass it to this RPC to set the new password for the user.
NOTE: as an experiment this call also updates the dcps.users table password+hash fields so the user can log in with the new password
Arguments:
- password_token: the password reset token from the URL
- password_new: the new password to be set
An example request and reply:
$ curl 'http://localhost:9500/0/passwordreset?token=5be993bd3317aba6a24cc52d2a39e7636d35d55d&operation=reset_password&password_token=1ced41461efe4688bd215dcd0f1bbef6&password_new=admin'
{
"cc_timestamp": 1514908494,
...
}
Setup, configuration
Enabled features: mongodb, rpc, libssh, mysql
Build git-hash: 54309b368f29f266a115cfbf99ba4aa27421168c
The RPC can be configured using the following command line arguments:
-p, --rpc-port=<int>         Listen on RPC port (default: 9500)
-b, --bind-addr='ip1,ip2..'  Bind RPC to IP addresses (default: 127.0.0.1)
For cmon (SysV) service, you can create one of the following configuration files:
- /etc/default/cmon (on debian like systems)
- /etc/sysconfig/cmon (on redhat like systems)
with the following content:
# custom port (NOTE: RPCv2 will listen on this port + 1):
# custom bind addresses (default 127.0.0.1):
#RPC_BIND_ADDRESSES="192.168.0.100,192.168.33.1"
#RPC_BIND_ADDRESSES="0.0.0.0"
# this might be already here for clustercontrol-notifications service:
# New events client http callback as of v1.4.2!
EVENTS_CLIENT="http://127.0.0.1:9510"
(Don’t forget to restart the cmon service.)
After cmon has started up, you can verify it by posting a JSON request (replace the URL):
$ curl -XPOST -d '{"operation":"getCellFunctions","spreadsheetUser":"admin"}' \
http://ec2-54-220-127-157.eu-west-1.compute.amazonaws.com:9500/1/sheet
You should get a JSON reply back.
Authentication key configuration
The cmon.cnf (or the corresponding file for a specific cluster) can specify an authentication token/key to restrict access to the RPC interfaces.
NOTE: cmon now provides an RPC to actually enforce the security on a cluster, see generateToken.
The RPC requests must then contain the authentication key ("token"), otherwise the server replies with 'access denied' because of the missing token.
Please NOTE that it is also possible to specify the authentication token in an HTTP header, using the "X-Token" header name.
{ "token": "AABBCCDDEEFF", ... }
Some example JSon replies with authentication turned on:
$ sudo cat /etc/cmon.d/cmon_12.cnf | grep rpc_key
$ curl -X POST -H"Content-Type: application/json" -d '{"operation": "getJobs"}' http://localhost:9500/12/job
{
"errorString": "Access denied (invalid authentication token)",
"requestStatus": "error"
}
$ curl -X POST -H"Content-Type: application/json" -d '{"operation": "getJobs", "token": "invalid"}' http://localhost:9500/12/job
{
"errorString": "Access denied (invalid authentication token)",
"requestStatus": "error"
}
And finally a good one ;-)
$ curl -X POST -H"Content-Type: application/json" -d '{"operation": "getJobs" , "token": "EBBEABBA"}' http://localhost:9500/12/job
{
"jobs": [ ],
"requestStatus": "ok"
}
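The token can travel either in the JSON body, as in the curl calls above, or in the X-Token HTTP header. A Python sketch assembling such a request (the helper is ours; the /12/job path follows the examples above):

```python
import json

def rpc_call_parts(path, payload, token):
    """Return the (url, headers, body) triple of an authenticated RPC
    call, carrying the token in the "X-Token" header instead of the body."""
    return (
        "http://localhost:9500" + path,
        {"Content-Type": "application/json", "X-Token": token},
        json.dumps(payload),
    )

url, headers, body = rpc_call_parts("/12/job", {"operation": "getJobs"}, "EBBEABBA")
```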
TLS support
RPCv2 services (which require a different way of authentication than tokens) will listen on an additional port (RPC port + 1, so 9501 by default) using TLS.
The daemon will try to use the following SSL/TLS certificate and private key; if these do not exist there, it will auto-generate a self-signed key (with one year of validity):
/var/lib/cmon/ca/cmon/rpc_tls.crt
/var/lib/cmon/ca/cmon/rpc_tls.key

$ sudo openssl x509 -noout -text -in /var/lib/cmon/ca/cmon/rpc_tls.crt
Serial Number: 1453908206 (0x56a8e0ee)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=127.0.0.1/description=clustercontrol RPC TLS key
Not Before: Jan 26 15:23:26 2016 GMT
Not After : Jan 26 15:23:26 2018 GMT
Subject: CN=127.0.0.1/description=clustercontrol RPC TLS key
Subject Public Key Info:
Public Key Algorithm: rsaEncryption
Public-Key: (2048 bit)
Exponent: 65537 (0x10001)
X509v3 Subject Key Identifier:
26:E7:BB:86:24:82:69:76:8E:96:66:15:B8:D5:B2:FD:B9:B8:2F:28
X509v3 Basic Constraints: critical
X509v3 Key Usage: critical
Digital Signature, Key Encipherment, Key Agreement
X509v3 Extended Key Usage:
TLS Web Server Authentication
X509v3 Subject Alternative Name:
IP Address:127.0.0.1, DNS:localhost
Signature Algorithm: sha256WithRSAEncryption
Hosts and Containers
The "/0/host" path is for managing servers, hosts and virtual machines or containers.
startServers
This call is for starting up (or booting) servers.
{
"operation": "startServers",
"request_created": "2017-10-16T07:48:21.779Z",
"request_id": 3,
"servers": [
{
"class_name": "CmonContainerServer",
"hostname": "host04"
} ]
}
{
"messages": [ "Started server 'host04'." ],
"request_created": "2017-10-16T07:48:21.779Z",
"request_id": 3,
"request_processed": "2017-10-16T07:48:21.943Z",
"request_status": "Ok",
"request_user_id": 3
}
shutDownServers
This call is for shutting down (powering off) servers.
registerServers
This call can be used to register a container server in the Cmon Controller. This is the fast way, but there is a job called "create_server" that does a very similar registration. The job will actually install software; this call just registers an existing server.
Here is how a request will be sent:
{
"operation": "registerServers",
"request_created": "2017-08-31T07:53:41.158Z",
"request_id": 3,
"servers": [
{
"class_name": "CmonContainerServer",
"cdt_path": "myservers",
"hostname": "storage01"
} ]
}
Here is a reply that shows the server was registered, but the server is actually turned off:
{
"reply_received": "2017-10-16T11:33:24.501Z",
"request_created": "2017-10-16T11:33:24.496Z",
"request_id": 3,
"request_processed": "2017-10-16T11:33:27.606Z",
"request_status": "Ok",
"request_user_id": 3,
"servers": [
{
"cdt_path": "/",
"class_name": "CmonContainerServer",
"clusterid": 0,
"connected": false,
"hostId": 6,
"hostname": "storage01",
"hoststatus": "CmonHostOffLine",
"ip": "192.168.1.17",
"maintenance_mode_active": false,
"message": "SSH connection failed.",
"owner_group_id": 2,
"owner_group_name": "users",
"owner_user_id": 3,
"owner_user_name": "pipas",
"protocol": "lxc",
"timestamp": 1508153607,
"unique_id": 6
} ]
}
unregisterHost
This call will drop the given host from our database entirely. The computer the host represents will not be touched in any way, software will not be uninstalled, service will not be stopped, nada!
{
"cluster_id": 1,
"dry_run": true,
"host":
{
"class_name": "CmonHost",
"hostname": "192.168.1.127",
"port": 9555
},
"operation": "unregisterHost",
"request_created": "2017-09-08T09:42:49.137Z",
"request_id": 3
}
- cluster_id: The numerical ID of the cluster that will execute the job. The cluster can also be referenced using the cluster name. For RPC v1 identifying the cluster is not mandatory, but it can be useful if the same host is part of multiple clusters.
- cluster_name: The name of the cluster that will execute the job. The cluster can also be referenced using the cluster ID. For RPC v1 identifying the cluster is not mandatory, but it can be useful if the same host is part of multiple clusters.
- dry_run: If this field is provided (with any value at all) the host will not be really unregistered. All the checks will be done, a success will be reported back, only the error message will show that this was just a drill.
- host: The host that shall be unregistered.
- class_name: The class name of the host will not be strictly checked (because the client might not know this), just send "CmonHost" that'll do.
- hostname: This is mandatory to identify the host.
- port: The port of the host. RPC v1 allows the usage of hosts without a port, but for most cases the port will be required.
So there are multiple fields to identify the host: the hostname, the port, the cluster ID, maybe the cluster name. The backend will use whatever is provided to find the host. If multiple hosts are found with the given data, the request will be rejected.
Here is what we get for a dry run:
{
"error_string": "Dry run was requested, not unregistering host.",
"request_created": "2017-09-08T09:42:49.137Z",
"request_id": 3,
"request_processed": "2017-09-08T09:42:49.183Z",
"request_status": "Ok",
"request_user_id": 3
}
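A cautious client can issue the same request twice: first with dry_run so the backend performs all the checks, then for real. A Python sketch of building both variants (the helper name is ours):

```python
def build_unregister_host(cluster_id, hostname, port, dry_run=False):
    """Assemble an "unregisterHost" request; with dry_run set the
    backend only performs the checks and reports that it was a drill."""
    body = {
        "operation": "unregisterHost",
        "cluster_id": cluster_id,
        "host": {
            "class_name": "CmonHost",  # not strictly checked, see above
            "hostname": hostname,
            "port": port,
        },
    }
    if dry_run:
        # any value at all marks the request as a drill
        body["dry_run"] = True
    return body

drill = build_unregister_host(1, "192.168.1.127", 9555, dry_run=True)
real = build_unregister_host(1, "192.168.1.127", 9555)
```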
unregisterServers
This call is for dropping a container server from the scope of the controller. The server will not cease to exist, but the controller will forget everything about it.
{
"operation": "unregisterServers",
"request_created": "2017-09-01T08:42:22.756Z",
"request_id": 3,
"servers": [
{
"class_name": "CmonContainerServer",
"hostname": "core1"
} ]
}
- servers: A list of CmonContainerServer class objects to unregister. In the objects only the class name and the host name have to be provided.
{
"messages": [ "Unregistered server 'core1'." ],
"request_created": "2017-09-01T08:42:22.756Z",
"request_id": 3,
"request_processed": "2017-09-01T08:42:22.810Z",
"request_status": "Ok",
"request_user_id": 3
}
getContainers
Returns the list of containers known by the controller.
{
"operation": "getContainers",
"request_created": "2017-09-07T05:09:16.895Z",
"request_id": 3
}
{
"containers": [
{
"alias": "debian",
"class_name": "CmonContainer",
"hostname": "debian",
"owner_group_id": 4,
"owner_group_name": "testgroup",
"owner_user_id": 3,
"owner_user_name": "pipas",
"parent_server": "storage01",
"status": "STOPPED",
"type": "lxc"
},
. . .
{
"alias": "vnc_server",
"class_name": "CmonContainer",
"hostname": "192.168.1.210",
"ip": "192.168.1.210",
"ipv4_addresses": [ "192.168.1.210" ],
"owner_group_id": 4,
"owner_group_name": "testgroup",
"owner_user_id": 3,
"owner_user_name": "pipas",
"parent_server": "storage01",
"status": "RUNNING",
"template": "ubuntu",
"type": "lxc"
} ],
"request_created": "2017-09-07T05:09:16.895Z",
"request_id": 3,
"request_processed": "2017-09-07T05:09:16.947Z",
"request_status": "Ok",
"request_user_id": 3,
"total": 5
}
- containers: A list of CmonContainer objects. The following command can be used to print the properties of this class:
s9s metatype --list-properties --type=CmonContainer --long
- total: The total number of containers known by the controller.
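The flat containers list is easy to post-process client-side, for instance to pick out the running containers. A Python sketch (the helper name is ours):

```python
def running_containers(reply):
    """Return the aliases of containers in a "getContainers" reply
    whose status is RUNNING."""
    return [c["alias"] for c in reply.get("containers", [])
            if c.get("status") == "RUNNING"]

reply = {
    "containers": [
        {"alias": "debian", "status": "STOPPED"},
        {"alias": "vnc_server", "status": "RUNNING"},
    ],
    "total": 2,
}
```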
getServers
Returns the container servers and their properties. Here is a request:
{
"operation": "getServers",
"request_created": "2017-10-03T08:19:58.769Z",
"request_id": 3
}
And here is the reply. It is rather complex, but we have a lot of data.
{
"request_created": "2017-10-03T08:19:58.769Z",
"request_id": 3,
"request_processed": "2017-10-03T08:19:58.824Z",
"request_status": "Ok",
"request_user_id": 3,
"servers": [
{
"cdt_path": "/",
"class_name": "CmonContainerServer",
"clusterid": 0,
"connected": false,
"containers": [
{
"acl": "user::rwx,user:nobody:r--,group::rw-,other::---",
"alias": "bestw_controller",
"cdt_path": "/core1/containers",
"class_name": "CmonContainer",
"container_id": 1,
"hostname": "192.168.1.182",
"ip": "192.168.1.182",
"ipv4_addresses": [ "192.168.1.182" ],
"owner_group_id": 2,
"owner_group_name": "users",
"owner_user_id": 3,
"owner_user_name": "pipas",
"parent_server": "core1",
"status": "RUNNING",
"type": "lxc",
"version": 25
},
. . .
{
"acl": "user::rwx,group::rw-,other::---",
"alias": "ubuntu",
"cdt_path": "/core1/containers",
"class_name": "CmonContainer",
"container_id": 5,
"hostname": "ubuntu",
"owner_group_id": 2,
"owner_group_name": "users",
"owner_user_id": 3,
"owner_user_name": "pipas",
"parent_server": "core1",
"status": "STOPPED",
"type": "lxc",
"version": 25
} ],
"disk_devices": [
{
"class_name": "CmonDiskDevice",
"device": "/dev/mapper/core1--vg-root",
"filesystem": "ext4",
"free_mb": 141617,
"mountpoint": "/",
"total_mb": 166180
},
. . .
{
"device": "/dev/sdc",
"is_hardware_storage": false,
"model": "LSILOGIC Logical Volume",
"total_mb": 170230,
"volumes": [
{
"description": "Linux filesystem partition",
"device": "/dev/sdc1",
"filesystem": "ext2",
"free_mb": 295,
"mount_point": "/boot",
"total_mb": 487
},
{
"description": "Extended partition",
"device": "/dev/sdc2",
"filesystem": "",
"mount_point": "",
"total_mb": 169741,
"volumes": [
{
"description": "Linux LVM Physical Volume partition",
"device": "/dev/sdc5",
"filesystem": "",
"mount_point": "",
"total_mb": 169741
} ]
} ]
},
. . .
{
"device": "/dev/sda",
"is_hardware_storage": false,
"model": "SCSI Disk",
"total_mb": 15272.1,
"volumes": [
{
"description": "EXT4 volume",
"device": "/dev/sda1",
"filesystem": "ext4",
"mount_point": "",
"total_mb": 15271.1
} ]
} ],
"distribution":
{
"codename": "xenial",
"name": "ubuntu",
"release": "16.04",
"type": "debian"
},
"hostId": 1,
"hostname": "core1",
"hoststatus": "CmonHostOffLine",
"ip": "192.168.1.4",
"last_container_collect": 1506791691,
"last_hw_collect": 1507017024,
"lastseen": 1506791691,
"maintenance_mode_active": false,
"memory":
{
"banks": [
{
"bank": "0",
"name": "DIMM 800 MHz (1.2 ns)",
"size": 4294967296
},
. . .
{
"bank": "7",
"name": "DIMM 800 MHz (1.2 ns)",
"size": 4294967296
} ],
"class_name": "CmonMemoryInfo",
"memory_available_mb": 54091,
"memory_free_mb": 41919,
"memory_total_mb": 64421,
"swap_free_mb": 0,
"swap_total_mb": 0
},
"message": "SSH connection failed.",
"model": "SUN FIRE X4170 SERVER (4583256-1)",
"network_interfaces": [
{
"gbits": 1,
"link": true,
"mac": "00:21:28:76:06:2a",
"model": "82575EB Gigabit Network Connection",
"name": "enp1s0f0"
},
. . .
{
"gbits": 0,
"ip": "192.168.122.1",
"mac": "",
"model": "",
"name": "virbr0"
} ],
"owner_group_id": 2,
"owner_group_name": "users",
"owner_user_id": 3,
"owner_user_name": "pipas",
"processors": [
{
"cores": 4,
"cpu_max_ghz": 1.6,
"id": 5,
"model": "Intel(R) Xeon(R) CPU L5520 @ 2.27GHz",
"siblings": 8,
"vendor": "Intel Corp."
},
{
"cores": 4,
"cpu_max_ghz": 1.6,
"id": 9,
"model": "Intel(R) Xeon(R) CPU L5520 @ 2.27GHz",
"siblings": 8,
"vendor": "Intel Corp."
} ],
"timestamp": 1507017024,
"unique_id": 1,
"version": "2.17"
},
. . .
} ],
"total": 8
}
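As the /dev/sdc entry above shows, the "volumes" field of a disk device can nest: an extended partition carries its own "volumes" list. A client that wants to see every volume therefore needs a recursive walk. The following sketch uses an abbreviated copy of the /dev/sdc device from the reply above:

```python
import json

# Abbreviated disk device entry from the getServers reply above;
# /dev/sdc2 is an extended partition holding a nested volume.
device = json.loads("""
{
  "device": "/dev/sdc",
  "total_mb": 170230,
  "volumes": [
    { "device": "/dev/sdc1", "filesystem": "ext2", "total_mb": 487 },
    { "device": "/dev/sdc2", "filesystem": "", "total_mb": 169741,
      "volumes": [
        { "device": "/dev/sdc5", "filesystem": "", "total_mb": 169741 }
      ]
    }
  ]
}
""")

def walk_volumes(node):
    """Yield every volume entry under a disk device, depth first."""
    for volume in node.get("volumes", []):
        yield volume
        # Extended partitions can carry a nested "volumes" list.
        yield from walk_volumes(volume)

for vol in walk_volumes(device):
    print(vol["device"], vol["total_mb"])
```

The same pattern applies to the "disk_devices" list of each server: iterate the devices, then recurse into their volumes.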