Frequently Asked Questions
Get Answers to the Most Frequently Asked Questions
Does DataSunrise have virtual patching capabilities for the apps it protects?
No, DS does not have such capabilities. Note that DS is a Database Security solution, not a WAF (Web Application Firewall): DataSunrise protects databases, not the applications running on top of them.
The closest thing to virtual patching is the SQL Injection Prevention Security Rule: SQL injection attacks are usually performed at the application tier by exploiting poor application design that lets an attacker send malicious SQL commands through unparameterized or unescaped request parameters.
Does DataSunrise’s VA (Vulnerability Assessment) Scanner perform actual or virtual patching of vulnerabilities it found?
No, DataSunrise’s VA Scanner is used for reporting purposes only and does not perform any patching as part of its business logic.
However, for a selected range of DBMS engines and versions, DataSunrise can provide a list of action items for hardening the DBMS from the standpoint of the CIS and STIG Security Benchmarks.
DataSunrise can only find and suggest fixes for vulnerabilities and misconfigurations that are diagnosable and fixable through SQL interface commands.
Does DataSunrise have a native or built-in failover or Load Balancer for High Availability mode? What does DS offer as HA out-of-the-box?
DataSunrise no longer ships any components of its own for HA (active/passive or active/active) operation.
Load balancing or failover should be configured using third-party solutions (commercial or open-source ones):
Keepalived (Active/Passive HA failover) on Linux
HAProxy (Active/Active L4/L7 Load Balancer)
The following solutions are commercial; we cannot endorse any particular one, but they are worth mentioning:
Citrix
Kemp
F5
Any other hardware load balancer (e.g., Cisco)
Configuring DS in a fully HA mode is out of scope for the DS support team and should be done by clients themselves based on their own decisions.
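For illustration only (remember that HA configuration is out of scope for DS support), a minimal HAProxy Active/Active TCP sketch for two DS nodes might look like the following; the addresses, port, and backend names are assumptions to adapt to your environment:
# write a minimal TCP load-balancing config for two DS proxy nodes (example addresses)
sudo tee /etc/haproxy/haproxy.cfg > /dev/null <<'EOF'
defaults
    mode tcp
    timeout connect 5s
    timeout client  1h
    timeout server  1h
frontend ds_proxy_in
    bind *:5432
    default_backend ds_nodes
backend ds_nodes
    balance roundrobin
    server ds1 10.0.0.11:5432 check
    server ds2 10.0.0.12:5432 check
EOF
sudo systemctl restart haproxy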
By default, DataSunrise offers the following HA techniques:
Multiple DS nodes can share a single configuration database, eliminating the need to configure each node separately (the settings are taken from a shared Dictionary)
If the main Dictionary connection is lost, a System Dictionary Backup is used as a reserve configuration database. DS periodically checks whether the main Dictionary connection has been restored and switches back once it is available.
If the Audit Storage database becomes unavailable, DataSunrise can write audited data into locally stored flat files and flush them to the Audit Storage DB as soon as it discovers that connectivity is restored.
Are there any best practices for DataSunrise application hardening? Management Console or Proxies?
No, we do not have any recommendations on extra application hardening (Proxy Endpoints/Management console).
Usually, these depend on the requirements the particular environment/client has for the applications deployed at its site.
DataSunrise offers a rich variety of Additional Parameters for hardening and performance optimization. These are entirely optional and should be configured individually for each case.
Nevertheless, the default values of most parameters throughout DataSunrise are chosen to be optimal for the majority of cases.
What makes up Core Memory consumption?
In terms of process architecture, a DataSunrise instance consists of a single AppBackendService (or Backend, BE for short) and from zero to many AppFirewallCore (or Core for short) processes.
The Backend manages the DS configuration and runs various utility tasks (generating reports, updating metadata, sending SMTP alerts, running Data Discovery, etc.)
A Core processes the traffic received from a Proxy/Sniffer/Trailing/Agent (TBA) and performs Auditing/Masking/Blocking.
The RAM consumed by a Core is determined by the volume of metadata (tables and their column details, views, packages (Oracle), procedures, functions, synonyms) of the Target Database this Proxy/Sniffer/Trailing/Agent is configured for: the metadata is loaded into a cache.
Apart from that, a Core also caches SQL queries recognized in the passing traffic to boost processing speed.
Core memory consumption also rises if the Audit DB server cannot handle events in time: DataSunrise sends processed events to the Audit Storage via an internal queue. Processing happens almost immediately; however, if the Audit DB cannot keep up, events accumulate on the DS server side and increase RAM consumption. This memory is released as the queued events are delivered to the Audit Storage.
Finally, some memory is reserved for protocol parser buffers and for each proxy connection.
Note that DataSunrise reports Virtual Memory consumption.
Virtual Memory does not map 1:1 (or at any other fixed ratio) to Physical Memory (RAM).
This means that high Virtual Memory consumption by itself cannot crash the DS host due to low memory.
DataSunrise monitors Virtual Memory allocation for the runtime objects it spawns during operation. We use this metric to monitor the DataSunrise service on a host, and if memory consumption exceeds an internal threshold, the processes can be restarted automatically.
Refer to the MaxCoreMemoryForTerminate and MaxBackendMemoryForTerminate Additional Parameters for the ultimate threshold (in Virtual Memory units) above which a DS Core or Backend process is automatically terminated.
Therefore, occasional memory spikes are normal, especially on average hardware specs.
Most importantly, if memory consumption subsides over time, there is nothing to worry about.
However, if Virtual Memory consumption stays high after an intensive activity period, there may be a memory leak or another aspect worth exploring.
In this case, it is important to share the log files from the problem environment using the Download All button.
You can do this in System Settings -> Logging & Logs -> Logs tab: for each server available from the Servers drop-down list, click "Download All".
NB: this downloads a ZIP archive of log files to your workstation, so you may need to allow pop-ups for this operation.
If possible, share steps to reproduce the persistent memory consumption. If there are Rules configured, provide screenshots of your Rules’ settings. The best option is to use the Dictionary backup option in System Settings -> General. Note that a backup file may be large.
What are the system requirements for DataSunrise Database Security Suite?
DataSunrise can run on any commodity hardware; there are no special hardware requirements. If DataSunrise is to be used in production, we suggest specs like the following:
CPU: 8 cores
RAM: 8-16 GB is sufficient
No special storage requirements
Available disk space: 100 GB for audit data
Operating system: 64-bit Linux (Red Hat Enterprise Linux 7+, Debian 10+, Ubuntu 18.04 LTS+, Amazon Linux 2) or 64-bit Windows Server 2019+
Can DataSunrise be paired with load balancers?
Yes. DataSunrise supports various types of load balancers. For example, an AWS-based deployment can be fully integrated with the AWS Classic Load Balancer, and an on-premises HA deployment can be configured to use a load balancer such as HAProxy. Note: DataSunrise supports load balancers only when operating in HA mode.
I’m trying to run DataSunrise on Linux but getting an error message: “Data source name not found and no default driver specified”
Check ODBC driver availability. Execute:
odbcinst -j
to determine the location of the ODBC files and ensure that they have not been removed or modified.
Basically, the data source you are attempting to connect to does not exist on your machine. On Linux and UNIX, SYSTEM data sources are typically defined in /etc/odbc.ini
USER data sources are defined in ~/.odbc.ini
You should grant read access to the .ini file that contains the data source. You may need to set the ODBCSYSINI, ODBCINSTINI, or ODBCINI environment variables to point to the files' location if this hasn't been done before.
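For example, a minimal sketch of pointing unixODBC at non-default locations (the paths are illustrative):
export ODBCSYSINI=/etc                # directory that contains odbcinst.ini
export ODBCINI=/etc/odbc.ini          # SYSTEM data sources file
export ODBCINSTINI=odbcinst.ini       # driver definitions file, resolved inside ODBCSYSINI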
I am getting “Could not find libodbc.so.2 (unixODBC is required)” error while trying to install DataSunrise on Ubuntu 14.04. UnixODBC is installed.
Run the following commands:
cd /usr/lib/x86_64-linux-gnu/
sudo ln -s libodbc.so.1.0.0 libodbc.so.2
I am getting a “Could not find “setcap”” error while trying to install DataSunrise on OpenSUSE 42.1.
Install libcap-progs. To do this, run the following command:
sudo zypper install libcap-progs
I can't update my DataSunrise. I run a newer version of the DataSunrise installer, but the installation wizard is not able to locate the old DataSunrise installation folder.
Run the DataSunrise installer in Repair mode: it removes the previous installation and updates your DataSunrise to the newer version.
I'm trying to enter the Web Console after DataSunrise has been updated, but it displays "Internal System Error" message.
Most likely, you kept the Web Console tab open in your browser while updating the firewall. Log out of the Web Console if necessary and press Ctrl + F5 to refresh the page.
I am not able to create a new Oracle instance on Ubuntu.
Most likely, Oracle can’t find the libaio.so.1 library. Run the following command to install it on Ubuntu:
sudo apt-get install libaio1
I'm trying to add a new Oracle database via Configuration menu, but the connection is failing because of a "Couldn't load oci.dll" error.
You probably installed the 32-bit version of Oracle Database Instant Client or did not set the system variables correctly. Install the 64-bit version of Oracle Database Instant Client and add its home directory path to the %ORACLE_HOME% system variable, then add the same directory path to the %PATH% system variable.
I am trying to run PostgreSQL on a Linux machine, but the database connection fails: Missing server name, port, or database name in call to CC_connect. (error code 201)
Check ODBC driver availability by executing the following command:
odbcinst -q -d
Locate the odbc.ini file and configure it in the following way:
[postgres_i]
Description = Postgres Database
Driver = PostgreSQL
Database = postgres
Servername = 127.0.0.1
Port = 5432
Check PostgreSQL connection by executing the following command:
isql postgres_i username password
I'm working on Linux and trying to establish connection between DataSunrise and MySQL database, but it fails because of missing ODBC MySQL driver.
Certain Linux-type operating systems don’t add MySQL driver parameters to the odbcinst.ini file, so you should do it manually. If necessary, install the MySQL ODBC driver by running the following commands:
For Debian and Ubuntu:
sudo apt-get install libmyodbc libodbc1
For CentOS, Red Hat and Fedora:
sudo yum install mysql-connector-odbc
Then edit odbcinst.ini file. Run the following command:
sudo nano /etc/odbcinst.ini
Paste the following code into odbcinst.ini and save the file:
[MySQL]
Description = ODBC for MySQL
Driver = /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libodbcmyS.so
FileUsage = 1
Update the configuration files that control ODBC access to database servers by running the following command:
sudo odbcinst -i -d -f /etc/odbcinst.ini
I can't create an SAP Hana Database instance in DataSunrise because of the following error:
ERROR: invalid byte sequence for encoding "UTF8": 0xdc
Use a custom connection string with the CHAR_AS_UTF8=true parameter. For example:
DRIVER=HDBODBC;SERVERNODE=192.168.1.244:39017;UID=SYSTEM;PWD=mawoi3Nu;DATABASENAME=SYSTEMDB;CHAR_AS_UTF8=true;
I'm deploying DataSunrise on Windows OS: I installed the database server, client database and the firewall on one host. I’m trying to run DataSunrise in Sniffer mode, but it is not listening for the traffic.
In this case, DataSunrise can’t capture traffic sent from a host machine to that same machine. Use DataSunrise in Proxy mode only, or install the database server and database client on separate hosts.
When I try to run DataSunrise in sniffer mode, it displays the message: "Can not to parsing SSL connection in sniffer mode".
In order to run the firewall in sniffer mode, disable SSL support in your client application settings (SSL Mode -> Disable). You can also switch the application’s SSL Mode to “Allow” or “Prefer”, but disable SSL support in the database server settings first.
When running DataSunrise in the sniffer mode, I get an error: 'DS_31037E: Crypto {ha-05:1433}'
In Sniffer mode, DataSunrise cannot determine the username with Kerberos or local NTLM authentication. Until the crypto provider parameters are properly configured, the login/user cannot be identified, and the UNKNOWN LOGIN account will be used as the current user. Rule checks may not work correctly until this error is resolved; refer to subs. 4.7.2 of the Administration Guide for details. Also note that the NTLM user name cannot be determined if the client and the server are installed on the same host.
I'm using DataSunrise in Sniffer mode and get the following messages in the Event Monitor:
"Crypto [<Network interface>]: <Error text> "
"Until the parameters of the crypto provider are properly configured, we can not identify the login/user. "
"The GUEST account will be used as the current user. "
"Rules checks may not work correctly until this error is resolved. "
"Refer to '4.7.2 Configuring SSL for Microsoft SQL Server' section of the Administration Guide for details.",
The current version of the DataSunrise sniffer supports TLS v1.0 only, so you need to downgrade the TLS version on the server side. Create two keys in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.1\Server
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server
and add two DWORD-type parameters to each:
DisabledByDefault=1
Enabled=0
Restart the server;
If DataSunrise has intercepted an SSL session with improper crypto provider settings, change your crypto provider settings and reset the current SSL session. To reset a session, restart your SSMS (if you're using a third-party app contacting the sniffed server, restart it as well);
You can also bypass resumed sessions by disabling SSL session caching on the client side. To do this, on the SSMS host, open the following registry key:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL
and add a ClientCacheTime parameter set to 0. Then restart the server.
When connecting to Aurora DB, the MySQL ODBC driver stops responding.
Most probably, you're using ODBC driver version 5.3.6, which is known to cause occasional freezes. Install MySQL ODBC driver version 5.3.4 instead.
I need to use an SSL certificate for database connection. What are my options?
- Turn off certificate validation for the connection in your client application (Sisense); for example, check Trust Server Certificate in your client software.
- Use a certificate for DataSunrise generated by your corporate CA from the root certificate.
- Generate a self-signed certificate and copy it to your client system.
I'm trying to establish a connection to a DataSunrise proxy created for an Amazon Redshift database, but receive the following error: "[HY000][600000] [Amazon](600000) Error setting/closing connection: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"
This issue is caused by DataSunrise's self-signed certificate which is used by default to handle encrypted connections. The problem is that some client applications perform strict certificate check and don't accept self-signed certificates.
You can solve this issue with the following methods:
- Allow usage of self-signed certificates in your client application
- Issue a certificate using your corporate Certification Authority and paste the certificate into the proxy.pem file
- Generate a self-signed certificate and allow usage of root certificates in your database connection string (e.g., sslrootcert=/path/to/certificate.pem).
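If you go the self-signed route, a minimal OpenSSL sketch could look like this (file names and the CN are placeholders; whether proxy.pem should contain both the key and the certificate is an assumption to verify for your version):
# generate a private key and a self-signed certificate valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout proxy.key -out proxy.crt -subj "/CN=datasunrise-proxy.example.com"
# combine key and certificate into a single PEM file (assumption: proxy.pem holds both)
cat proxy.key proxy.crt > proxy.pem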
I've configured Google-based two-factor authentication, but I can't authenticate in the target database.
Your smartphone and database server are probably working in different time zones. They must work in the same time zone, so synchronize the time zone and time settings.
I'm trying to establish a connection between DataSunrise and an Oracle database but get the following error:
Warning| Could not connect to the database. Error: Couldn't load libclntsh.so.
Ensure that you have Oracle Instant Client installed (see the corresponding Admin Guide) and create a corresponding .conf file:
sudo bash -c "echo /opt/oracle/instantclient_12_1 > /etc/ld.so.conf.d/oracle-instantclient.conf"
sudo ldconfig
DataSunrise fails to connect to the database.
1. Check the state of proxies using DataSunrise GUI.
- Open DataSunrise web UI and go to Configurations > Databases subsection.
- Click Edit on the database instance you want to check.
- Click the Test Connection button.
- Enter the password and click the Test All button. If the status of all ports is OK, go to the next step of this guide.
2. Test the connection with Nmap (Linux) or Telnet Client (Windows).
Windows:
- To enable the Telnet client, run the command prompt with administrative privileges and execute: dism /online /Enable-Feature /FeatureName:TelnetClient
- Wait until the operation finishes; you will have to restart your computer to apply the system changes.
- Launch the Telnet client and use the o command with the required hostname and port: o 192.168.1.100 3306
Linux:
- If you don’t have Nmap installed on your machine, open the command line and run: sudo apt-get install nmap
- After the installation, run the nmap command with the required hostname: nmap 192.168.1.100
I'm creating a Dynamic Data Masking rule and I want to mask different columns (for example: name, email address) of a table, but I can select only one masking method for all the columns. Is it possible to use different masking methods for different columns of the same table?
Available masking methods depend on the column’s data type. You can assign only one masking method to a column, so you might need to create multiple rules to mask multiple columns containing various data types. You can assign the same rule to columns of the same data type, or use a custom function for multiple columns with various data types, but only if the custom function logic can handle multiple data types.
If Local Syslog is enabled, where does log data get written to?
By default, AWS EC2 instances are configured to write to /var/log/messages. You have to enable the syslog service on your system if it is not enabled yet. For Local Syslog messages, you can select the default "Syslog Configuration" in the Rules.
How can I audit DQL, DML, DDL and TCL queries?
In the DataSunrise's Web Console, navigate to Audit → Rules. Then create a new rule and in the Filter Statements subsection, change filter type to Admin Queries. Click Add Admin Query and select queries to add to the filter.
My query doesn't trigger the Rule I set up. What's wrong?
Before reaching our Support Team, please check the following:
DataSunrise deployment scheme: Proxy, Trailing, or Sniffer. Note that the Sniffer doesn't work with SSL/TLS-encrypted connections, except for MS SQL Server
Basic checks:
A valid license should be installed. DataSunrise with an expired license doesn't block/audit/mask queries but just passes traffic through without any processing
Check your problematic Rule:
Filter Sessions: if not empty, ensure the session filters (user, host, application) match the session you are testing with
Filter Statements: if not empty, ensure your actions/user/application match the selected SQL query types/CRUD operations and/or Objects (or Object Groups)
You can try debugging: enable Log Event in Storage in your Rule's settings (if disabled) to see whether a new entry is generated in the corresponding Events list. You can also enable Rules Trace and check how your query is processed
DataSunrise specific:
Proxy: ensure that your user is connecting through your DataSunrise Proxy
Sniffer: check whether SSL/TLS or any database-specific transport encryption (for example, Oracle Native Encryption) is used. Note that MS SQL Server is the only database for which the Sniffer supports encrypted traffic processing
Trailing: check if Native Audit is configured to capture expected actions
Advanced checks
Check that there are no PARSER ASSERT messages in the Core log files of the problematic worker.
If none of the above helps, contact our Support Team.
I'm using Static Masking on an Oracle database and get the following error:
Error: ORA-01950: no privileges on tablespace 'USERS' / 0 processed rows.
Execute the following query (substitute your user for C##ELL):
ALTER USER C##ELL quota unlimited on users;
I'm hosting DataSunrise on Windows. I try to configure dynamic masking for Unstructured files but get the following error:
Code: 10 The JVM was not initialized: Please check the documentation for setting up the JVM
If you're experiencing problems with the JVM on Windows, add the path to your JVM folder to the PATH environment variable. For example:
C:\Program Files\Java\jre1.8.0_301\bin\server
I'm getting the following notification: "Reached the limit on delayed packets".
This notification is displayed when the sniffer captures a large amount of traffic on SSL sessions started before the DataSunrise service was started. By default, the volume of captured traffic should not exceed 10 MB (the pnMsSqlDelayedPacketsLimit parameter).
Sometimes this notification can also appear under a heavy load on the pcap driver, when the sniffer captures too much delayed traffic. In this case, increase the pnMsSqlDelayedPacketsLimit parameter's value.
How to deal with putq and DS_32016W?
General Audit Queue In Thread #x' is filled for more than XX%. The current level is XX%
This message indicates that your Audit Storage database can't process events in a timely manner (the audit queue is filling toward the AuditHighWaterMark). To get rid of these errors, you can do the following:
Increase the Audit Storage database performance: add CPU and RAM, switch from HDD to SSD
Decrease the amount of data to audit:
Audit only activity on business-logic objects (where PII data is stored)
Audit only those queries you really need to monitor
Use Filter Sessions to specify conditions for logging events (skip ETL/OLTP/service application activity, for example)
Adjust your Audit Storage database parameters for better performance. Note that DataSunrise doesn't provide any guidelines on how to do that.
DataSunrise Suite running on a host cannot capture data packets between database client running on the same host and database server running on an Oracle VirtualBox virtual machine.
Please, check your setup:
Host:
Windows 8.1 (64-bit)
DataSunrise Database Security Suite
WinPcap 4.1.3
Database client: EMS SQL Manager for DB2
VirtualBox 5.0.X virtual machine (running on the host):
Guest OS: Windows 7 Professional (64-bit)
Database Server: DB2
If you’re using VirtualBox 5.0.2, for instance, DataSunrise will likely fail to capture data packets between a database client running on the host and a database server running on the guest OS. This problem can occur under various network connection settings such as NAT, bridged, and host-only. However, if you run the DB client on the guest OS and the DB server on the host, DataSunrise will be able to capture network packets. The issue is caused by the VirtualBox 5.0.X virtual network adapter (VirtualBox NDIS Bridged Network Driver). Try installing an older version of VirtualBox and check whether DataSunrise captures data packets between the host and guest OS.
I’m trying to send emails to subscribers and get the following error: “Could not send email to [email protected] Error: Operation timed out after 10000 milliseconds with 0 out of 0 bytes receive”
This error can be caused by an unavailable SMTP server. Refer to the User Guide, subs. 5.8.1, for the SMTP server configuration description.
I'm planning to integrate DataSunrise with an LDAP server and I want to use the LDAP port 636 with SSL. How can I do that?
DataSunrise supports both SSL and non-SSL authentication for LDAP. To run DataSunrise with SSL, navigate to System Settings → LDAP servers and check the “SSL” checkbox in the server’s settings.
I've generated a new self-signed certificate and updated the appfirewall.pem, but the client browsers still deem it as an untrusted certificate.
A self-signed certificate must be added as a trusted-certificate exception in each client machine’s browser. If the certificate gets updated, you will need to add the exception again on each client machine. If your client machines are administered under a Domain Controller, you have the option to install the certificate on the client machines via the domain controller. Refer to this link for detailed instructions:
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732625(v=ws.11)
I'm running DataSunrise on-premises and I want to move to an HA configuration. How can I do it?
DataSunrise does not support converting an existing installation to an HA configuration unless it was initially installed in HA mode. If you want to use your non-HA installation in HA mode, you can create a dictionary backup, remove DataSunrise, install it in HA mode, and then import the dictionary backup into the new installation. Here’s how you can do this:
Create a dictionary backup (navigate to “System Settings” → “General”, click “Backup +”, select all checkboxes in the popup window).
Save the “backup” folder from the DataSunrise installation directory somewhere.
Uninstall DataSunrise.
Install DataSunrise in the HA mode.
Copy your “backup” folder to the DataSunrise installation directory.
Restore the dictionary from the backup (“System Settings” -> “General”, “Restore”).
Can I configure another external application except Slack to receive DataSunrise messages?
You can use any instant messenger for which a command-line client exists, but DataSunrise doesn’t maintain these external applications. You can see how to configure DataSunrise to be used with Slack here: https://www.datasunrise.com/professional-info/sending-slack-notifications/
You can configure any other external application in the same manner. For example, you can use this client for WhatsApp: https://github.com/tgalal/yowsup/wiki/Command-line-client
I'm getting the following warning: 'The free disk space limit for audit is reached. The current disc space amount is XXX MB. The disk space limit is 10240 MB'
If you want to decrease the disk space threshold for this warning, navigate to System Settings → Additional and change the “LogsDiscFreeSpaceLimit” parameter’s value, for example from 10240 to 1024 MB.
On Ubuntu, when creating a Server for Subscribers, if I select "Signed" certificate type, I get an error:
error setting certificate verify locations: CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none
The problem is that the root certificate is located elsewhere. Add the following line to the /etc/datasunrise.conf file:
CURL_CA_BUNDLE=<location of the file that contains the root certificate>
For example, on Ubuntu the root certificate file is located at /etc/ssl/certs/ca-certificates.crt
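For instance, you could append the setting in one line (Ubuntu path shown above; adjust for your distribution):
echo 'CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt' | sudo tee -a /etc/datasunrise.conf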
My Dictionary and/or Audit Storage are located in the integrated SQLite database and I get the following message:
SQLITE_BUSY
It's not an error! SQLite supports only one writer (Backend/Core thread) at a time, so while one process accesses the DB file for a write operation, the others have to wait and receive the SQLITE_BUSY message.
Let's take a look at two scenarios:
Audit Storage: more than one proxy with Audit/Learning Rules and/or Security/Masking Rules with the Log event in Storage option enabled. In this case, you can check the Core log files for the SQLITE_BUSY message. Another option is to check Monitoring → Queues → Audit queue length: you have a problem if the graph constantly rises toward the Watermark.
To solve this issue, disable the Log event in Storage option in your Security/Masking Rules and disable your Audit/Learning Rules.
Dictionary: an Update Metadata task or a Table Relations task (of any type) is running.
To solve this issue, wait for the task to be completed.
Another solution is to transfer your Dictionary and/or Audit Storage to another database type supported by DataSunrise.
I'm trying to decrypt a PostgreSQL table I encrypted before but getting the following error:
SQL Error [39000]: ERROR: decrypt error: Data not a multiple of block size Where: PL/pgSQL function ds_decrypt(bytea,text) line 6 at RETURN
This means that somebody edited your encrypted table's contents directly, bypassing your DataSunrise's proxy. This process is irreversible and your encrypted table can't be decrypted.
I'm trying to export a big number of resources to a Resource group with Resource Manager but get the following error:
Input otl_long_string is too large to fit into the buffer Variable...
Navigate to System Settings → Additional Parameters. Locate the DictionaryAuditOtlLongSize parameter and set its value to 8192.
I'm trying to audit Oracle queries but get the following error:
can not get CCSID from oracle charsetId, charsetId: 0
This problem occurs on DataSunrise 6.3.1 when it has been updated from version 5.7 or lower. Update your database's metadata to get rid of the problem.
I configured a MySQL database to be used as the Dictionary and Audit Storage. I get the following error:
The total number of locks exceeds the lock table size
In InnoDB, row-level locks are implemented through a special lock table located in the buffer pool: a small record is allocated for each page hash, and a bit can be set for each row locked on that page. If this lock table overflows the buffer pool, the aforementioned error is thrown. The recommended value of the MySQL innodb_buffer_pool_size parameter is 3/4 of your RAM size. To get rid of the error, execute the following command:
SET GLOBAL innodb_buffer_pool_size=402653184;
or edit the mysqld section of the my.cnf (Linux) or my.ini (Windows) file in the following way:
[mysqld]
innodb_buffer_pool_size = 2147483648
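To verify the value the running server actually uses, a quick check from the shell might look like this (assumes the mysql client is installed and you have valid credentials):
mysql -u root -p -e "SELECT @@innodb_buffer_pool_size;"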
I want to delete audit data manually from my Audit Storage database. Can I do it?
Yes, you can, except for SQLite. For other databases, to delete audit data manually you need to derive the SESSION_ID from the date before which you want to remove all events. Use the following Python script to get the SESSION_ID:
from datetime import datetime

# 2016-01-01 00:00:00 UTC in milliseconds, the base time this calculation subtracts
BASE_TIME = 1451606400000

# events before this date will be removed (interpreted in the local time zone)
remove_before_date = "2022-10-19 10:15:20"

dt_obj = datetime.strptime(remove_before_date, '%Y-%m-%d %H:%M:%S')
timestamp = dt_obj.timestamp() * 1000      # the date as Unix time in milliseconds
timestampWithDiff = timestamp - BASE_TIME  # milliseconds elapsed since BASE_TIME
result = (timestampWithDiff / 10) * 10000  # the SESSION_ID boundary value
print(result)
Once you have your SESSION_ID value and have confirmed the REMOVE_BEFORE_DATE value, execute the following queries in your Audit Storage:
DELETE FROM sessions WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM operation_exec WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM transactions WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM operations WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM connections WHERE end_time IS NOT NULL AND end_time < '<remove_before_date>';
DELETE FROM app_sessions WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM long_data WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM session_rules WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_sub_query WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_rules WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_meta WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_dataset WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM operation_data WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM lob_operation WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM col_objects WHERE session_id < <derived_session_id_as_a_number>;
DELETE FROM tbl_objects WHERE session_id < <derived_session_id_as_a_number>;
Note: deleting data like this generates bloat. Consider running VACUUM FULL ANALYZE or configuring autovacuum to run periodically to catch up with the changes the DELETEs make to the storage.
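For example, if your Audit Storage is PostgreSQL, a cleanup pass could look like this sketch (the database name audit_storage is hypothetical):
psql -d audit_storage -c "VACUUM FULL ANALYZE;"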
How can I enable SSL/TLS for my Dictionary/Audit Storage connection?
By default, the DataSunrise Audit Storage and Dictionary connector logic uses the preferred SSL mode, which means that if the Audit/Dictionary database server supports SSL, DataSunrise nodes establish an SSL/TLS-encrypted connection with it. Otherwise, they fall back to an unencrypted connection.
If you're using a DBaaS like Cloud SQL, AWS RDS, or the MS Azure counterpart of this service, CSPs (Cloud Service Providers) enable SSL/TLS encryption out-of-the-box (unless you deliberately disable it, which is not recommended), so the service connections to the DataSunrise-used database are always encrypted.
I configured DataSunrise in the Trail DB Audit Logs mode, but there are no events in Transactional Trails
First, do the following:
Check DB User used for your Instance or your IAM user permissions (AWS RDS)
Check native audit settings of the target database (it should be enabled and required permissions should be issued)
Check native audit logging policies (AUDIT statements in Oracle, Server/Database Audit Specification in MS SQL Server)
Check native Audit Storage location to ensure it logs SQL statements
Check things at DataSunrise end:
Ensure Audit Rules are configured to capture queries directed to the required objects and of the required query types. You can use a Rule with an empty Query Types filter to capture ANY queries (note the prompt in the Web Console)
Ensure you've configured Native Audit properly:
Configuration details are exclusive for each DB type and platform (AWS RDS for example)
Please review the chapters on native audit configuration for your platform and configure it properly if necessary
If you're operating MS SQL Server or Oracle, ensure that you don't test native audit from the same session in which you configured it
Check the repository of a native audit log on the target database side:
Example 1: check the sys.aud$ table or the DBA_AUDIT_TRAIL view on Oracle with the audit_trail=db,extended Standard Auditing mechanism to ensure audited statements are logged there (see the query sketch at the end of this answer)
Example 2: in case of MySQL/PostgreSQL RDS, ensure you can see audited data in the audit log files of your RDS
Example 3: for MS SQL Server, check if you can see the data by the means offered by this DBMS (using a special function or by using SSMS Audit logs viewer ability (may not work well on AWS RDS))
If you see audited data in the target DBMS audit log storage location, enable TrailAuditTrace and check the corresponding Trails worker log files. Ensure the event timestamps are recent (to confirm Trailing is not simply too busy to catch up with the flow of events)
If your DataSunrise Audit Storage is the integrated SQLite and there are no entries in Transactional Trails, do the following:
Refresh the Audit Transactional Trails page of the Web Console
Re-login into the Web Console and check the Transactional Trails again.
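As for Example 1 above, a quick Oracle-side sanity check could look like this minimal sketch (the connect string and credentials are placeholders; assumes audit_trail=db,extended and sqlplus access):
sqlplus -s system/password@//dbhost:1521/ORCL <<'EOF'
SELECT username, action_name, timestamp FROM dba_audit_trail
WHERE timestamp > SYSDATE - 1 ORDER BY timestamp DESC;
EOF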
What is the difference between DataSunrise Backend/Core Logs and Audit Trails?
Backend/Core logs are DataSunrise's own diagnostic logs. Their settings include the period to store logs, a total size limit for logs, and the amount of logs stored on the computer.
Audit logs are the logs of Oracle, Snowflake, Neo4J, PostgreSQL-like, AWS S3, MS SQL Server, GCloud BigQuery, MongoDB, and MySQL-like databases collected using native auditing tools.