Using the Percona Audit Log Plugin for Database Security
Why Do You Need To Use an Audit Plugin For Your Database?
Auditing in a database carries the same meaning it does anywhere else: to inspect, examine, and evaluate the events and transactions being performed and logged within your database. It adds real value to a database, especially as a security feature, as it pushes the administrative side to be diligent about managing and processing data. It embraces responsibility and accountability for data management.
A database audit requires that every transaction (i.e. DDLs and DMLs) be logged, so as to record traces and get a full overview of what is happening during database operations. Auditing typically covers the following considerations:
- Provides the capability to monitor and debug, so as to increase performance on the application side
- Security and data privacy compliance such as PCI DSS, HIPAA, GDPR, etc.
- Provides the capability to support data autonomy, specifically in multi-tenant environments. It allows tenants to perform data analysis so as to differentiate and filter transactions based on sensitivity and privacy, for security and performance considerations.
- Drives administrative actions to prevent database users from inappropriate actions, whether based on investigating suspicious activity or on the limits of their role. For example, read-only users shall only be allowed to pull data, and only from the specific databases they are responsible for, with scope limited in accordance with their job role.
What is the Percona Audit Log Plugin?
Previous approaches to auditing the transactions or events running in your database could be a hefty undertaking: enabling the general log file, or alternatively using the slow query log. Neither is a perfect approach, so the audit log plugin adds flexibility and customizable parameters to fill the gap. Percona claims their Audit Log Plugin is an alternative to MySQL Enterprise Audit. Although that is true, there's a caveat: Percona's Audit Log Plugin is not available for installation on Oracle's MySQL. There's no downloadable tarball for this binary, but it is easy to install by just copying an existing audit_log.so file from an existing Percona Server or Percona XtraDB Cluster installation. It's best to use or copy an audit_log.so from a Percona Server whose version matches your MySQL Community version. So if your target MySQL Community version is 8.x, then use the audit_log.so from a Percona Server 8.x version as well. We will show you how to do this on a MySQL Community version later in this blog.
The Percona Audit Log Plugin is, of course, open source and free to use. So if your enterprise application uses a backend database such as Percona Server or vanilla MySQL, you can use this plugin, whereas MySQL Enterprise Audit is only available for MySQL Enterprise Server and comes with a price. Additionally, Percona constantly updates and maintains this software, which is a major advantage: whenever a major release from MySQL upstream becomes available, Percona also releases against that major version, and its audit log plugin is updated and tested accordingly. So any incompatibility with previous versions shall be resolved as well, keeping it working with the most recent and secure version of MySQL.
The Percona Audit Log Plugin is tagged as a security tool, but let us clarify this again. This tool is used for auditing logs; its sole purpose is to log traces of transactions from your database. It does not do firewalling, nor does it apply preventive measures to block specific users. This tool is mainly for auditing logs and for use in database transaction analysis.
Using The Percona Audit Log Plugin
In this section, we'll go over how to install and use the plugin, and how beneficial it can be, especially in real-world situations.
Installing The Plugin
Percona provides various sources for their database binaries. Once you install the database server properly, the standard installation will place the audit log plugin shared object in /usr/lib64/mysql/plugin/audit_log.so. Installing the plugin, which enables it within the Percona/MySQL server, can be done with the actions below. These steps are done using Percona Server 8.0:
mysql> select @@version_comment, @@version\G
*************************** 1. row ***************************
@@version_comment: Percona Server (GPL), Release 12, Revision 7ddfdfe
@@version: 8.0.21-12
1 row in set (0.00 sec)
Then the steps are as follows:
- Verify first if the plugin exists or not:
## Check if the plugin is enabled or installed
mysql> select * from information_schema.PLUGINS where PLUGIN_NAME like '%audit%';
Empty set (0.00 sec)
mysql> show variables like 'audit%';
Empty set (0.00 sec)
- Install the plugin:
## Check where are the plugins located
mysql> show variables like 'plugin%';
+---------------+--------------------------+
| Variable_name | Value |
+---------------+--------------------------+
| plugin_dir | /usr/lib64/mysql/plugin/ |
+---------------+--------------------------+
1 row in set (0.00 sec)
mysql> \! ls -a /usr/lib64/mysql/plugin/audit_log.so
/usr/lib64/mysql/plugin/audit_log.so
## Ready and then install
mysql> INSTALL PLUGIN audit_log SONAME 'audit_log.so';
Query OK, 0 rows affected (0.01 sec)
- Verify it back once again:
mysql> select * from information_schema.PLUGINS where PLUGIN_NAME like '%audit%'\G
*************************** 1. row ***************************
PLUGIN_NAME: audit_log
PLUGIN_VERSION: 0.2
PLUGIN_STATUS: ACTIVE
PLUGIN_TYPE: AUDIT
PLUGIN_TYPE_VERSION: 4.1
PLUGIN_LIBRARY: audit_log.so
PLUGIN_LIBRARY_VERSION: 1.10
PLUGIN_AUTHOR: Percona LLC and/or its affiliates.
PLUGIN_DESCRIPTION: Audit log
PLUGIN_LICENSE: GPL
LOAD_OPTION: ON
1 row in set (0.00 sec)
mysql> show variables like 'audit%';
+-----------------------------+---------------+
| Variable_name | Value |
+-----------------------------+---------------+
| audit_log_buffer_size | 1048576 |
| audit_log_exclude_accounts | |
| audit_log_exclude_commands | |
| audit_log_exclude_databases | |
| audit_log_file | audit.log |
| audit_log_flush | OFF |
| audit_log_format | OLD |
| audit_log_handler | FILE |
| audit_log_include_accounts | |
| audit_log_include_commands | |
| audit_log_include_databases | |
| audit_log_policy | ALL |
| audit_log_rotate_on_size | 0 |
| audit_log_rotations | 0 |
| audit_log_strategy | ASYNCHRONOUS |
| audit_log_syslog_facility | LOG_USER |
| audit_log_syslog_ident | percona-audit |
| audit_log_syslog_priority | LOG_INFO |
+-----------------------------+---------------+
18 rows in set (0.00 sec)
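INSTALL PLUGIN registers the plugin in the mysql.plugin table, so it stays loaded across restarts. If you prefer to load it from configuration instead, for example to keep your provisioning declarative, a minimal my.cnf sketch would look like the following. plugin-load-add is a standard mysqld option; the variable values below are just illustrative:
[mysqld]
## Load the audit log plugin at server startup instead of via INSTALL PLUGIN
plugin-load-add=audit_log.so
## Non-dynamic audit variables must be set at startup anyway
audit_log_format=JSON
audit_log_file=audit.log
Keep in mind that if the plugin is not loaded at startup, any audit_log_* variables in my.cnf will make the server fail to start with an unknown-variable error.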
Installing the Percona Audit Plugin Over the MySQL Community Version
When installing on Oracle MySQL versions, as mentioned above, always match the version of the Percona Server the audit_log.so file came from. So, for example, I have the following version of MySQL below,
nodeB $ mysqld --version
/usr/sbin/mysqld Ver 8.0.22 for Linux on x86_64 (MySQL Community Server - GPL)
Whereas, my Percona Server is,
nodeA $ mysqld --version
/usr/sbin/mysqld Ver 8.0.21-12 for Linux on x86_64 (Percona Server (GPL), Release 12, Revision 7ddfdfe)
All you need to do is copy the plugin from the Percona Server host to the server where MySQL Community Server is installed.
nodeA $ scp /usr/lib64/mysql/plugin/audit_log.so nodeB:/tmp/
Then move it to /usr/lib64/mysql/plugin, which is where plugins are located.
root@nodeB > show global variables like 'plugin%';
+---------------+--------------------------+
| Variable_name | Value |
+---------------+--------------------------+
| plugin_dir | /usr/lib64/mysql/plugin/ |
+---------------+--------------------------+
1 row in set (0.00 sec)
nodeB $ mv /tmp/audit_log.so /usr/lib64/mysql/plugin
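Depending on how the file was copied, you may also need to fix its ownership and permissions so the mysqld process can read the shared object. For example (assuming mysqld runs as the mysql user, which is typical but worth verifying on your system):
nodeB $ chown mysql:mysql /usr/lib64/mysql/plugin/audit_log.so
nodeB $ chmod 755 /usr/lib64/mysql/plugin/audit_log.so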
For all the rest, you can follow the steps stated above to continue installing and enabling the Percona Audit Log Plugin for MySQL Community Server.
Configuring and Managing the Percona Audit Log Plugin
The Percona Audit Log Plugin is a very flexible tool that can be configured or customized to cater to your requirements as you log your database connections or transactions. Its configuration is applied in a linear fashion: however flexibly you customize its parameters, only the given values shall be logged and audited for the whole time your database runs, and logging is done asynchronously by default. Every parameter variable in this plugin is important, but below are the most important parameters you can use to configure the plugin:
- audit_log_strategy – Specifies the audit log strategy; it applies when audit_log_handler is set to FILE. The following values are possible:
- ASYNCHRONOUS – (default) log using memory buffer, do not drop messages if buffer is full
- PERFORMANCE – log using memory buffer, drop messages if buffer is full
- SEMISYNCHRONOUS – log directly to file, do not flush and sync every event
- SYNCHRONOUS – log directly to file, flush and sync every event
- audit_log_file – Filename used to store audit logs, which defaults to ${datadir}/audit.log. You can use a file path relative to the datadir of your database, or an absolute file path.
- audit_log_flush – Useful when you need to flush the log, such as in coordination with logrotate (see the logrotate sketch after this list)
- audit_log_buffer_size – By default, the Percona Audit Log records traces to the default log file. This variable applies when audit_log_handler = FILE and audit_log_strategy = ASYNCHRONOUS or PERFORMANCE, and specifies the size of the memory buffer used for logging. This allows you to avoid a performance degradation penalty when audit logging is enabled.
- audit_log_format – Format used when recording or saving information to your audit log file. Accepted formats are OLD/NEW (based on XML), JSON, and CSV. This is very useful especially when you later incorporate other external tools to pull your audit logs, as those tools often support specific formats.
- audit_log_exclude_accounts/audit_log_include_accounts – Specifies the list of users to include or exclude, respective to the parameter name. Accepts NULL, otherwise a comma-separated list in the format user@host or 'user'@'host'. These variables are mutually exclusive, so one or the other has to be unset (i.e. its value is NULL).
- audit_log_include_commands/audit_log_exclude_commands – Specifies the list of commands (either NULL or a comma-separated list) for which filtering by SQL command type is applied. These variables are mutually exclusive, so one or the other has to be unset (i.e. its value is NULL). To get the list of SQL command types in MySQL or Percona, do the following:
- set performance_schema=ON in your my.cnf (requires a database server restart)
- Run the following query: SELECT GROUP_CONCAT(SUBSTRING_INDEX(name, '/', -1) ORDER BY name) sql_statement FROM performance_schema.setup_instruments WHERE name LIKE "statement/sql/%"\G
- audit_log_include_databases/audit_log_exclude_databases – Specifies filtering by database name, and works in conjunction with audit_log_{include,exclude}_commands to filter the list of commands, so logging can be more granular during audits. These variables are mutually exclusive, so one or the other has to be unset (i.e. its value is NULL).
- audit_log_policy – Specifies which events should be logged. Technically, you can set this variable dynamically to enable or disable (set the value to NONE) your audit logging. Possible values are:
- ALL – all events will be logged
- LOGINS – only logins will be logged
- QUERIES – only queries will be logged
- NONE – no events will be logged
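As a concrete example of audit_log_flush working in coordination with logrotate, here is a minimal sketch of an /etc/logrotate.d/mysql-audit file. The paths and the credentials mechanism are assumptions to adapt to your setup, and audit_log_rotate_on_size should stay at 0 so the plugin does not rotate the file on its own:
/var/lib/mysql/audit.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    nocreate
    sharedscripts
    postrotate
        ## Tell the plugin to close and reopen its log file;
        ## assumes credentials are available, e.g. in root's ~/.my.cnf
        /usr/bin/mysql -e "SET GLOBAL audit_log_flush = ON;"
    endscript
}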
Managing the Audit Log Plugin
As mentioned, the default log file goes to ${datadir}/audit.log and, by default, uses the XML-based OLD format. You can peek at it like this:
[root@testnode20 ~]# ls /var/lib/mysql/audit.log | xargs tail -28
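To give you an idea of what those records look like, an OLD-format entry is a single <AUDIT_RECORD .../> element per event. The following is an illustrative sketch only (field values borrowed from the JSON samples further below, not verbatim output):
<AUDIT_RECORD
  NAME="Query"
  RECORD="326263176_2020-10-29T10:35:07"
  TIMESTAMP="2020-10-29T11:11:56Z"
  COMMAND_CLASS="select"
  CONNECTION_ID="25143"
  STATUS="0"
  SQLTEXT="SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN 5001 AND 5100 ORDER BY c"
  USER="cmon[cmon] @ [192.168.10.200]"
  HOST=""
  OS_USER=""
  IP="192.168.10.200"
/>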
Now, let's manage the Percona Audit Log Plugin in a real-case scenario. Inspired by Dani's blog post at Percona, let's consider changing the following variables in my.cnf,
[root@testnode20 ~]# grep -i 'audit' /etc/my.cnf
## Audit Log
audit_log_format=JSON
audit_log_strategy=PERFORMANCE
audit_log_policy=QUERIES
audit_log_exclude_databases=s9s
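Note that some of these variables are read-only and only take effect after a server restart (audit_log_format and audit_log_strategy, in particular, as far as I know), while others such as audit_log_policy and the include/exclude filters are dynamic and can be changed at runtime, for example:
mysql> SET GLOBAL audit_log_policy = 'QUERIES';
mysql> SET GLOBAL audit_log_exclude_databases = 's9s';
Either way, keep my.cnf updated so the settings persist across restarts.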
Then let’s create the following database and tables,
CREATE DATABASE s9s;
CREATE TABLE `audit_records` ( `id` int unsigned NOT NULL AUTO_INCREMENT, `audit_record` json, PRIMARY KEY (`id`) ) ENGINE=InnoDB;
Then let's use a named pipe, or FIFO, in Linux to collect the logs so they're ready for auditing and can easily be consumed later.
$ mkfifo /tmp/s9s_fifo
$ exec 1<>/tmp/s9s_fifo
$ tail -f /var/lib/mysql/audit.log 1>/tmp/s9s_fifo 2>&1
Then, let's insert the logs into our table `s9s`.`audit_records` using the following script,
#!/bin/bash
pipe=/tmp/s9s_fifo
while true; do
    if read -r line <$pipe; then
        if [[ "$line" == 'quit' ]]; then
            break
        fi
        ## Escape the double quotes inside the JSON record before embedding it in the INSERT statement
        mysql --show-warnings -vvv -e "INSERT INTO s9s.audit_records (audit_record) VALUES(\"${line//\"/\\\"}\")"
    fi
done
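Assuming you save the script as audit_pipe.sh (a name chosen here just for illustration), run it in the background on the same host as the tail above:
$ nohup bash audit_pipe.sh &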
Then I tried running a benchmark using sysbench. Now I have the following entries,
mysql> select count(1) from audit_records\G
*************************** 1. row ***************************
count(1): 37856
1 row in set (0.11 sec)
Having the records as JSON makes it feasible for me to do auditing and investigation, or even performance analysis, of my database. For example,
mysql> SELECT top10_select_insert from ((select audit_record->"$.audit_record" as top10_select_insert from audit_records where audit_record->"$.audit_record.command_class" in ('select') order by audit_records.id desc limit 10) union all (select audit_record->"$.audit_record" as top10_select_insert from audit_records where audit_record->"$.audit_record.command_class" in ('insert') order by audit_records.id desc limit 10)) AS b\G
*************************** 1. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263176_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "SELECT DISTINCT c FROM sbtest1 WHERE id BETWEEN 5001 AND 5100 ORDER BY c", "timestamp": "2020-10-29T11:11:56Z", "command_class": "select", "connection_id": "25143"}
*************************** 2. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263175_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "SELECT c FROM sbtest4 WHERE id BETWEEN 4875 AND 4974 ORDER BY c", "timestamp": "2020-10-29T11:11:56Z", "command_class": "select", "connection_id": "25143"}
*************************** 3. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263174_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "SELECT SUM(k) FROM sbtest1 WHERE id BETWEEN 5017 AND 5116", "timestamp": "2020-10-29T11:11:56Z", "command_class": "select", "connection_id": "25143"}
*************************** 4. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263173_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "SELECT c FROM sbtest8 WHERE id BETWEEN 4994 AND 5093", "timestamp": "2020-10-29T11:11:56Z", "command_class": "select", "connection_id": "25153"}
*************************** 5. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263172_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "SELECT c FROM sbtest3 WHERE id=4976", "timestamp": "2020-10-29T11:11:56Z", "command_class": "select", "connection_id": "25153"}
*************************** 6. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263171_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "SELECT c FROM sbtest3 WHERE id=5018", "timestamp": "2020-10-29T11:11:56Z", "command_class": "select", "connection_id": "25153"}
*************************** 7. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263170_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "SELECT c FROM sbtest3 WHERE id=5026", "timestamp": "2020-10-29T11:11:56Z", "command_class": "select", "connection_id": "25153"}
*************************** 8. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263169_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "SELECT c FROM sbtest3 WHERE id=5711", "timestamp": "2020-10-29T11:11:56Z", "command_class": "select", "connection_id": "25153"}
*************************** 9. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263168_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "SELECT c FROM sbtest3 WHERE id=5044", "timestamp": "2020-10-29T11:11:56Z", "command_class": "select", "connection_id": "25153"}
*************************** 10. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263167_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "SELECT c FROM sbtest3 WHERE id=5637", "timestamp": "2020-10-29T11:11:56Z", "command_class": "select", "connection_id": "25153"}
*************************** 11. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263151_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "INSERT INTO sbtest9 (id, k, c, pad) VALUES (4998, 4986, '02171032529-62046503057-07366460505-11685363597-46873502976-33077071866-44215205484-05994642442-06380315383-02875729800', '19260637605-33008876390-94789070914-09039113107-89863581488')", "timestamp": "2020-10-29T11:11:56Z", "command_class": "insert", "connection_id": "25124"}
*************************** 12. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263133_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "INSERT INTO sbtest8 (id, k, c, pad) VALUES (6081, 4150, '18974493622-09995560953-16579360264-35381241173-70425414992-87533708595-45025145447-98882906947-17081170077-49181742629', '20737943314-90440646708-38143024644-95915967543-47972430163')", "timestamp": "2020-10-29T11:11:56Z", "command_class": "insert", "connection_id": "25133"}
*************************** 13. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263126_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "INSERT INTO sbtest2 (id, k, c, pad) VALUES (5014, 5049, '82143477938-07198858971-84944276583-28705099377-04269543238-74209284999-24766869883-70274359968-19384709611-56871076616', '89380034594-52170436945-89656244047-48644464580-26885108397')", "timestamp": "2020-10-29T11:11:56Z", "command_class": "insert", "connection_id": "25135"}
*************************** 14. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263119_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "INSERT INTO sbtest5 (id, k, c, pad) VALUES (4995, 3860, '07500343929-19373180618-48491497019-86674883771-87861925606-04683804124-03278606074-05397614513-84175620410-77007118978', '19374966620-11798221232-19991603086-34443959669-69834306417')", "timestamp": "2020-10-29T11:11:56Z", "command_class": "insert", "connection_id": "25142"}
*************************** 15. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263112_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "INSERT INTO sbtest10 (id, k, c, pad) VALUES (5766, 5007, '46189905191-42872108894-20541866044-43286474408-49735155060-20388245380-67571749662-72179825415-56363344183-47524887111', '24559469844-22477386116-04417716308-05721823869-32876821172')", "timestamp": "2020-10-29T11:11:56Z", "command_class": "insert", "connection_id": "25137"}
*************************** 16. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263083_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "INSERT INTO sbtest7 (id, k, c, pad) VALUES (5033, 4986, '20695843208-59656863439-60406010814-11793724813-45659184103-02803540858-01466094684-30557262345-15801610791-28290093674', '14178983572-33857930891-42382490524-21373835727-23623125230')", "timestamp": "2020-10-29T11:11:56Z", "command_class": "insert", "connection_id": "25118"}
*************************** 17. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263076_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "INSERT INTO sbtest1 (id, k, c, pad) VALUES (5029, 5016, '72342762580-04669595160-76797241844-46205057564-77659988460-00393018079-89701448932-22439638942-02011990830-97695117676', '13179789120-16401633552-44237908265-34585805608-99910166472')", "timestamp": "2020-10-29T11:11:56Z", "command_class": "insert", "connection_id": "25121"}
*************************** 18. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263036_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "INSERT INTO sbtest1 (id, k, c, pad) VALUES (5038, 5146, '62239893938-24763792785-75786071570-64441378769-99060498468-07437802489-36899434285-44705822299-70849806976-77287283409', '03220277005-21146501539-10986216439-83162542410-04253248063')", "timestamp": "2020-10-29T11:11:55Z", "command_class": "insert", "connection_id": "25127"}
*************************** 19. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326263018_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "INSERT INTO sbtest4 (id, k, c, pad) VALUES (5004, 5028, '15487433957-59189974170-83116468418-96078631606-58760747556-09307871236-40520753062-17596570189-73692856496-38267942694', '98937710805-24695902707-05013528796-18454393948-39118534483')", "timestamp": "2020-10-29T11:11:55Z", "command_class": "insert", "connection_id": "25129"}
*************************** 20. row ***************************
top10_select_insert: {"db": "sbtest", "ip": "192.168.10.200", "host": "", "name": "Query", "user": "cmon[cmon] @ [192.168.10.200]", "record": "326262989_2020-10-29T10:35:07", "status": 0, "os_user": "", "sqltext": "INSERT INTO sbtest3 (id, k, c, pad) VALUES (5015, 5030, '30613877119-41343977889-67711116708-96041306890-46480766663-68231747217-07404586739-83073703805-75534384550-12407169697', '65220283880-37505643788-94809192635-84679347406-74995175373')", "timestamp": "2020-10-29T11:11:55Z", "command_class": "insert", "connection_id": "25139"}
20 rows in set (0.00 sec)
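You can take this further with ordinary MySQL JSON functions. For example, a quick breakdown of activity per command class (a query of my own making, not something from the plugin itself):
mysql> SELECT audit_record->>"$.audit_record.command_class" AS command_class, COUNT(*) AS total FROM s9s.audit_records GROUP BY command_class ORDER BY total DESC;
The ->> operator unquotes the JSON value, so the grouping happens on the plain command-class string.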
Aggregate Your Audit Logs With Other Tools
Now that you are able to parse the output of your audit logs, you can start incorporating them into external tools and aggregating them with your current environment or technology stack, as long as it reads or supports JSON. For example, you could use ELK (Elasticsearch, Logstash, Kibana) to parse and centralize your logs. You might also try to incorporate Graylog or Fluentd. On the other hand, you might create your own viewer and integrate it with your current software setup. Using the Percona Audit Log makes these things feasible, letting you do more analysis with high productivity, and it is of course extensible as well.
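To sketch what the ELK side could look like, here is a minimal Logstash pipeline, assuming audit_log_format=JSON, the default log location, and a local Elasticsearch instance (all of which are assumptions to adapt to your environment):
input {
  file {
    path => "/var/lib/mysql/audit.log"
    codec => "json"
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "percona-audit-%{+YYYY.MM.dd}"
  }
}
From there, Kibana can slice the records by command_class, user, or timestamp, much like the SQL examples above.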