When using the SVM application with an on-premises server infrastructure, you may occasionally experience the application locking up. A likely root cause is a problem with the vuln_track database, since much of the application's functionality depends on the health of this database.

Checking the sync log (/usr/local/Secunia/csi/log/sync.log), you may see something like this:

[2019-10-31 15:40:13] Setting next update to BINLOG
[2019-10-31 15:40:13] Setting next update to DUMP
[2019-10-31 15:40:13] Trying to import binlog...
[2019-10-31 15:40:13] Free space: 44042 Mb
[2019-10-31 15:40:13] Making file request
[2019-10-31 15:40:14] Received file is ready for import
[2019-10-31 15:40:14] Executing binlog import
The synchronisation process appears to be locked in an inconsistent state. Unlocking in dump branch.
Synchronisation cron started
Synchronisation cron started
Synchronisation cron started
Synchronisation cron started
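
To watch these messages appear while the sync cronjob runs, tailing the log is enough (a minimal example, assuming the default install path above):

tail -f /usr/local/Secunia/csi/log/sync.log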

The likely cause is that the sync dump was interrupted. You can verify this by checking the update_status table in the replication_metadata database in MySQL/MariaDB:

MariaDB [(none)]> select * from replication_metadata.update_status;
+----+------------------+---------------------+
| id | name             | value               |
+----+------------------+---------------------+
|  1 | RUNNING_PROCESS  | 2019-10-31 10:40:02 |
|  2 | NEXT_UPDATE      | DUMP                |
|  3 | LAST_UPDATE_ID   | 139838              |
|  4 | LAST_UPDATE_TYPE | BINLOG              |
|  5 | ERROR_FLAGS      | 0                   |
+----+------------------+---------------------+
5 rows in set (0.00 sec)

Above we can see that there is currently a running process (note the timestamp in the RUNNING_PROCESS row). Since we saw the inconsistent-state message in the log, we can infer that the process may be stuck or timing out. If that message is absent, there is most likely no problem and the application is simply still dumping the vuln_track data into the database.
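
If you want a rough idea of how long that process has been flagged as running, you can compare the stored timestamp against the current time (a quick sketch, assuming the value column holds the start time as in the output above):

MariaDB [(none)]> select value, timediff(now(), value) as running_for from replication_metadata.update_status where name = 'RUNNING_PROCESS';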

There are two methods for handling this scenario:

Method #1:

Simply wait for the application to fix the issue on its own. After a few cycles of the sync cronjob it will start to work again. In a test scenario, I observed the application sort itself out after seven cycles of the cronjob, or roughly 35 minutes.

So, if it is an option to hold off for an hour or so, it is likely that the issue will eventually resolve itself.
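
Rather than waiting blind, you can check whether a later cycle completed by grepping the sync log for the completion message shown further down in this article (again assuming the default log path):

grep "Import finished" /usr/local/Secunia/csi/log/sync.log | tail -n 1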

Method #2:

The second method is a manual approach for when it is absolutely necessary to resolve the issue and return the application to a functioning state as soon as possible. The process to remediate the issue manually is as follows:

Step 1: Drop the databases replication_metadata and vuln_track:

MariaDB [(none)]> drop database replication_metadata;

MariaDB [(none)]> drop database vuln_track;

Doing so will remove the two databases from MySQL/MariaDB.
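
Before moving on, you can confirm that both databases are gone (an optional sanity check); neither replication_metadata nor vuln_track should appear in the output:

MariaDB [(none)]> show databases;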

Step 2: Run the crontab_importer.php file in /usr/local/Secunia/csi/cronjobs/

php /usr/local/Secunia/csi/cronjobs/crontab_importer.php

Running this command manually triggers the sync process and recreates the two databases that were dropped in the previous step. Alternatively, you can simply wait for the cronjob to run again on its own; it runs every 5 minutes.
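
If you choose to wait for the scheduler, you can confirm how the importer is scheduled; a minimal check, assuming the job is registered in a crontab (the exact location can vary per installation):

crontab -l | grep crontab_importer.php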

When the issue is resolved, the RUNNING_PROCESS value in the database will be cleared once sync.log reports that the import is finished:

[2019-10-31 16:25:23] Import finished

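Re-running the same update_status query from earlier should now show RUNNING_PROCESS reset to 0 and the next update back to BINLOG:

MariaDB [(none)]> select * from replication_metadata.update_status;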
+----+------------------+--------+
| id | name             | value  |
+----+------------------+--------+
|  1 | RUNNING_PROCESS  | 0      |
|  2 | NEXT_UPDATE      | BINLOG |
|  3 | LAST_UPDATE_ID   | 178    |
|  4 | LAST_UPDATE_TYPE | DUMP   |
|  5 | ERROR_FLAGS      | 0      |
+----+------------------+--------+
5 rows in set (0.00 sec)
