Azure Sentinel SQL Solution Query Deep-Dive

by  & Andrey Karpovsky
Thanks to: Tamer Salman, Moshe Israel, Hani Neuvirth-Telem & Yoav Daniely


 


In May 2021 Azure Sentinel saw the launch of Azure Sentinel Solutions into public preview. This launch provided Azure Sentinel preview customers with access to over 32 solutions spanning Microsoft and other vendor data sources.


 


As part of this release Azure Defender and Microsoft Threat Intelligence Center (MSTIC) collaborated to contribute Detections and Hunting queries to the Azure Sentinel for Azure SQL solution. These detection and hunting queries are based on real world attack scenarios and provide a basis for detecting and investigating potential SQL exploitation attacks.


 


In this tech community post we will cover each of the Detection and Hunting queries included in the Azure Sentinel SQL solution. This post will cover what malicious activity these queries are designed to uncover, how to tailor them for your environment using their configurable parameters and provide some insight into how the query works.


 


Whilst the detection and hunting queries discussed in this Tech Community post will focus on Azure SQL, many of the techniques can be adapted to work with any relational database management system (RDBMS).


 


More information on where to find Solutions within Azure Sentinel can be found here. Within the solutions tab search for “Azure Sentinel for Azure SQL” to install the SQL solution.


 


Common SQL Anomalies Indicating Compromise


For those unfamiliar with potentially malicious SQL activity, this section will briefly cover the different types of unusual activity that the detection and hunting queries are designed to detect. If you are already familiar with malicious SQL activity you may wish to skip straight to the queries.


 


Unsecured Databases


Databases that are improperly secured and exposed to the open internet will almost certainly see exploitation attempts. The MSTIC sensor network is a collection of deception services that collect data on active internet exploitation. In 2020 the MSTIC sensor network observed over 300,000 SQL code execution attempts targeted at our honeypot network. These were attempts where an SQL query was sent to the server with the intent of executing an unauthorised command.


 


Unpatched vulnerable server software, weak passwords, and exposed administrative interfaces are all common vectors for attackers conducting mass scanning and exploitation operations.


 


Azure hosted SQL servers benefit from some default protections. New SQL servers will be firewalled from the open internet and can only communicate with other Azure services. Firewall settings in Azure allow you to configure your server to only allow connections from specific IP addresses or ranges.


 


SQL Injection


SQLi is a code injection technique in which an attacker sends crafted query statements to an SQL server. These statements are designed to exploit poorly implemented program logic. If successful, the commands sent by the attacker may allow data to be extracted from the database, existing records to be modified, or new records to be created.


 


SQL injection vulnerabilities are introduced when user-controllable input is improperly sanitised, or input sanitisation is circumvented by the attacker. SQL injection vulnerabilities may be found in any application that interacts with an SQL server, most commonly these are web applications.


 


As the vulnerable application processes and executes the command on behalf of the attacker, any commands that are injected successfully into a web application are executed from the IP address where the vulnerable application is hosted. Successful SQLi can be used to bypass IP restrictions enforced using Firewalls and make attacks harder to investigate.


 


 Threat actor injects SQL code into a vulnerable page, providing access to the underlying SQL database.


 


SQLi can be performed with or without knowledge of the underlying database structure. When queries are sent to the server without knowledge of the underlying database structure, the attack is commonly referred to as “blind”. Blind SQLi is generally used to determine whether the application is vulnerable; the two most common techniques are Boolean-based and time-based blind SQL injection attacks.


 


Boolean based Blind SQLi


Boolean based blind SQLi is a reconnaissance technique which uses Boolean statements, returning a True or False result, to determine if the code is being executed by the SQL server after injection into a vulnerable parameter. The attacker will view the page using standard parameters and then inject code to force a true or false result. They will then look for differences in the returned result to determine if their injection was successful.


 


In the below example, the attacker may inject some SQL into the “id” parameter of the members page, in the first instance providing a statement that will return false:


 


http://contoso.com/member.php?id=1 and 1=2

 


This will cause the page to believe that the member ID requested doesn’t exist. While the member with an ID of 1 exists, the second part of the statement will return false. The attacker will then inject SQL that will return true:


 


http://contoso.com/member.php?id=1 and 1=1

 


By looking for differences in the response the attacker can determine the page is vulnerable. In the example above, when the first request is sent the page returns an error stating the member was not found. In the second request the attacker forces a true result, and therefore the page returns a successful lookup. If the attacker’s injection were not working, or the injected SQL was being filtered out, the page would behave normally, returning member information for member ID 1 in both instances.
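The comparison an attacker (or an automated tool) applies can be sketched in Python. The page contents and function name below are invented purely for illustration; real tools compare full HTTP responses.

```python
# Hypothetical sketch of the Boolean-blind decision logic: if forcing a true
# condition reproduces the baseline page while forcing a false condition
# changes it, the injected condition is being evaluated by the database.

def looks_injectable(baseline_response: str,
                     true_response: str,
                     false_response: str) -> bool:
    """baseline: id=1, true: id=1 and 1=1, false: id=1 and 1=2"""
    return true_response == baseline_response and false_response != baseline_response

baseline = "Member: Alice"                 # member.php?id=1
forced_true = "Member: Alice"              # member.php?id=1 and 1=1
forced_false = "Error: member not found"   # member.php?id=1 and 1=2

print(looks_injectable(baseline, forced_true, forced_false))  # True
```

Defenders can apply the same reasoning in reverse: paired requests that differ only by a trailing `and 1=1` / `and 1=2` clause are a strong hunting signal.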


 


Time-based blind SQLi


Time-based blind SQL injection uses SQL sleep functions to force the query to wait for a pre-determined amount of time. Like Boolean-based blind SQL injection, it relies on the attacker first baselining the vulnerable page’s behaviour and then injecting crafted requests to force unusual execution times.


 


An attacker may visit the page with normal parameters and note that the page executes in 88 milliseconds.


 


http://contoso.com/member.php?id=1

 


The attacker may then append the SQL Server WAITFOR DELAY command, forcing the query to wait for 5 seconds if executed.


 


http://contoso.com/member.php?id=1; WAITFOR DELAY '00:00:05'

 


The attacker will know the injection is successful if the page which previously took 88 milliseconds to execute now takes over 5 seconds to execute.


 


This technique can be detected by analysing logs or web requests for the presence of WAITFOR commands; however, there are many techniques to obfuscate the injected command, allowing the attacker to bypass static keyword detection.
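As a minimal illustration of that static approach and its weakness, the following Python sketch flags statements containing a WAITFOR DELAY clause. The regular expression here is an assumption for the example, not the detection used in the solution:

```python
import re

# Naive static detection for time-based SQLi: flag statements containing a
# WAITFOR DELAY clause. Obfuscation (comments, encoding) evades simple
# patterns like this, which is why anomaly-based detection is preferable.
WAITFOR_RE = re.compile(r"waitfor\s+delay", re.IGNORECASE)

def contains_waitfor(statement: str) -> bool:
    return bool(WAITFOR_RE.search(statement))

print(contains_waitfor("SELECT * FROM members WHERE id=1; WAITFOR DELAY '00:00:05'"))  # True
# An inline comment between the keywords defeats the naive pattern:
print(contains_waitfor("SELECT 1; WAITFOR/**/DELAY '00:00:05'"))  # False
```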


 


Data Extraction & Manipulation


The end goal for most attackers targeting SQL servers is to extract data or to delete or damage that data. SQL databases often contain sensitive information that can be attractive to both criminal and nation-state actors. Extraction of sensitive user information may provide a nation-state threat group with information to further exploit the network or may be of intelligence value alone. Organised crime groups may seek to sell the data onwards or to encrypt/destroy the data as part of a ransomware campaign.


 


Detection Queries


This section will cover detection queries in detail. These are queries that are designed to be used as alerts on potentially malicious attempts to access and exploit the resources. Detection queries expose several parameters that can be easily tuned to control the accuracy and quantity of detections. Detections look for suspicious events inside a recent time window that are anomalous based on behaviour learned in the preceding training window. In addition, these queries can be used as templates for other security scenarios.


 


Each detection query is built on top of a standard normalisation, using the following Kusto query to process SQL log data. For brevity, each individual query will not include this normalisation step.  Complete queries can be found within the Azure Sentinel SQL solution.


 


let timeSliceSize = 1h;
let timeRange = 14d;
let detectionWindow = 1h;                   // default value; described in the parameter lists below
let trainingWindow = detectionWindow + 14d; // default value; described in the parameter lists below
let processedData = materialize (
AzureDiagnostics
| where TimeGenerated > ago(timeRange)
| where Category == 'SQLSecurityAuditEvents' and action_id_s has_any ("RCM", "BCM") // Keep only SQL affected rows
| project TimeGenerated, PrincipalName = server_principal_name_s
, ClientIp = client_ip_s, HostName = host_name_s, ResourceId
, ApplicationName = application_name_s, ActionName = action_name_s
, Database = strcat(LogicalServerName_s, '/', database_name_s)
, IsSuccess = succeeded_s, DurationMs = duration_milliseconds_d
, AffectedRows = affected_rows_d, ResponseRows = response_rows_d
, Statement = statement_s
, Error = case( additional_information_s has 'error_code', toint(extract("<error_code>([0-9.]+)", 1, additional_information_s))
, additional_information_s has 'failure_reason', toint(extract("<failure_reason>Err ([0-9.]+)", 1, additional_information_s))
, 0)
, State = case( additional_information_s has 'error_state', toint(extract("<error_state>([0-9.]+)", 1, additional_information_s))
, additional_information_s has 'failure_reason', toint(extract("<failure_reason>Err ([0-9.]+), Level ([0-9.]+)", 2, additional_information_s))
, 0),
AdditionalInfo = additional_information_s, timeSlice = floor(TimeGenerated, timeSliceSize))
| extend RowNumber = row_number(), WindowType = case( TimeGenerated >= ago(detectionWindow), 'detection', (ago(trainingWindow) <= TimeGenerated and TimeGenerated < ago(detectionWindow)), 'training', 'other')
;

 


Detection queries can be divided into 3 broad categories by similar logic and content.


 


Detections for volumetric spikes


These detections monitor a single numeric variable and look for significant upward spikes that are anomalous when compared to the normal behaviour of the same variable on that resource during the training period. For example, if a malicious actor attempts to extract all the records in a database, this will be detected.


 


Currently, 2 detections of this type are included, looking at different numeric columns (and security scenarios):



  1. VolumeAffectedRowsStatefulAnomalyOnDatabase: Monitors the 'AffectedRows' metric, which reflects changes made in the database. Upward spikes indicate that too many data rows were changed, which might indicate potential attempts to destroy data or make it temporarily unusable.

  2. VolumeResponseRowsStatefulAnomalyOnDatabase: Monitors the 'ResponseRows' metric, which reflects data returned from the database. Spikes indicate that too many data rows were extracted, which might indicate potential data theft.


 


The queries share several parameters that control the query logic and are exposed for easy control.



  • monitoredColumn (values: ‘ResponseRows’, ‘AffectedRows’): Name of the volumetric variable column that is analyzed in the query.

  • volumeThresholdZ (default value: 3.0): Minimal threshold for relative anomaly score (zScore), which represents the number of standard deviations that the value is above the average. For higher values, the alerts will be triggered only by very significant anomalies. For example, values of 1500 and 900 will both trigger alerts in case of volumeThresholdZ = 3.0, but only 1500 will trigger alert for volumeThresholdZ = 5.0.

  • volumeThresholdQ (default value: volumeThresholdZ): Minimal threshold for absolute anomaly score (qScore), which represents the number of median ranges that the value is above the 99th percentile. For higher values, the alerts will be triggered only by very significant anomalies.

  • volumeThresholdHardcoded (default value: 500): Minimal threshold for actual value of the monitored column that will trigger the alert. For example, in case of default value, only values above it will trigger alerts (in case the other thresholds are passed as well).

  • detectionWindow (default value: 1h): Size of the recent time window (back from current time). Events in this window will be analyzed for anomalies, and anomalies that are significant (based on thresholds) will appear in the output.

  • trainingWindow: (default value: detectionWindow + 14d): Window size before the detection window that is used to calculate the metrics for normal behaviour. Events in detection window will be compared with these metrics.


The standardized data is split into two datasets. The training window is a relatively long period of time (14 to 30 days is recommended) which precedes the detection window and is used to learn the normal and expected behaviour of the resource. Basic measures of central tendency (average and median) and dispersion (standard deviation and inter-percentile range) are calculated to model this behaviour.


 


let trainingSet = processedData
| where WindowType == 'training'
| summarize AvgVal = round(avg(QuantityColumn), 2)
, StdVal = round(stdev(QuantityColumn), 2), N = max(RowNumber)
, P99Val = round(percentile(QuantityColumn, 99), 2)
, P50Val = round(percentile(QuantityColumn, 50), 2)
by Database;

 


The detection window is a relatively short period (1 to 24 hours is advised) and is used to monitor recent values of the metric. If a recent value is significantly higher than the chosen thresholds, an alert is triggered. By lowering the thresholds, the user can allow more (less significant) alerts to be raised.


 


processedData
| where WindowType == 'detection'
| join kind = inner (trainingSet) on Database
| extend ZScoreVal = iff(N >= 20, round(todouble(QuantityColumn - AvgVal) / todouble(StdVal + 1), 2), 0.00)
, QScoreVal = iff(N >= 20, round(todouble(QuantityColumn - P99Val) / todouble(P99Val - P50Val + 1), 2), 0.00)
| extend IsVolumeAnomalyOnVal = iff((ZScoreVal > volumeThresholdZ and QScoreVal > volumeThresholdQ and QuantityColumn > volumeThresholdHardcoded), true, false), AnomalyScore = round((ZScoreVal + QScoreVal) / 2, 0)
| project TimeGenerated, Database, PrincipalName, ClientIp, HostName, ApplicationName, ActionName, Statement, IsSuccess, ResponseRows, AffectedRows, IsVolumeAnomalyOnVal, AnomalyScore
| where IsVolumeAnomalyOnVal == true
| sort by AnomalyScore desc, TimeGenerated desc

 


The output of the query contains the events in the detection window that were flagged as anomalous (IsVolumeAnomalyOnVal == true) under the defined parameters, together with an AnomalyScore field. Higher AnomalyScore values represent more significant anomalies.
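To make the two scores concrete, the scoring logic can be sketched in Python. The function names are invented for the example; the formulas and default thresholds mirror those documented above:

```python
# Sketch of the volumetric anomaly scoring: zScore is deviation from the mean
# in standard deviations, qScore is distance above the 99th percentile in
# median ranges, matching the ZScoreVal/QScoreVal expressions in the query.

def anomaly_scores(value, avg, std, p99, p50, n, min_events=20):
    if n < min_events:  # too little training data to score reliably
        return 0.0, 0.0
    z = round((value - avg) / (std + 1), 2)
    q = round((value - p99) / (p99 - p50 + 1), 2)
    return z, q

def is_volume_anomaly(value, z, q, z_thr=3.0, q_thr=3.0, hard_thr=500):
    # All three thresholds must be exceeded for the event to be flagged
    return z > z_thr and q > q_thr and value > hard_thr

# 1500 rows returned against a baseline of ~100 rows per statement
z, q = anomaly_scores(value=1500, avg=100, std=50, p99=300, p50=90, n=200)
print(z, q, is_volume_anomaly(1500, z, q))  # 27.45 5.69 True
```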


 


Detections for Monitored Errors


These detections look for statements or login attempts that fail with specific errors, which might indicate malicious attempts to access or exploit the database. For example, login failures on firewall blocks or credential errors might indicate a malicious attempt to gain access to the database. In some environments, specific errors are expected. To prevent noise in these cases, minimal thresholds for the events that trigger the alerts are exposed and can be adapted to raise an alert on any relevant event, or only when a specific count is reached.


Currently, 3 detections of this type are included, looking for different errors (and security scenarios):


 



  1. ErrorsFirewallStatefulAnomalyOnDatabase: Looks for login attempts that are blocked by existing firewall rules (error 40615). Such events might indicate that an unauthorized actor unsuccessfully tried to gain access to the database. This alert is useful because it can help prevent further attacks on the same target (e.g. coming from a different source).

  2. ErrorsCredentialStatefulAnomalyOnDatabase: Looks for login attempts that failed due to incorrect credentials (error 18456). Such events might indicate that an unauthorized actor tried to gain access to the database but didn’t have the correct credential combination. The attempts might continue with different credential sets (brute force).

  3. ErrorsSyntaxStatefulAnomalyOnDatabase: Looks for attempts to execute queries that failed with syntax errors (errors 102 and 105). An attacker may be unfamiliar with the structure and content of the resource; in such cases, the malicious queries may be malformed (e.g. in the case of SQLi) or contain unsupported scripting attempts, and will fail with syntax errors.


 


The queries share the following parameters:



  • detectionWindow (default value: 1h): size of the recent time window (as described above)

  • trainingWindow: (default value: detectionWindow + 14d): training window size before the detection window (as described above).

  • timeSliceSize (default value: 1h): size of slices into which the data is divided. The number of suspicious failed attempts is calculated per each slice.

  • monitoredStatementsThreshold (default value: 1): the minimal threshold for monitored failed attempts per slice to be flagged as suspicious. For example, if some amount of credential failures is expected due to human error, this number can be set to 10. In this case, only if 10 or more failures are seen in the slice, alert can be triggered.

  • trainingSlicesThreshold (default value: 5): the maximal threshold for the number of flagged slices (based on the previous parameters) in the training window. If the number of such slices is above the threshold, the behaviour is considered normal, and no alert will be triggered for the current slice on similar event. If the number is set high (e.g. 1000), all slices with monitored attempts will trigger alerts. If the number is set to 0, alert will be triggered only on first event of the kind.

  • monitoredErrors (values: (40615) Firewall block failures, (18456) Credential failure errors, (102, 105) Syntax errors): SQL error codes that are monitored for the scenario.


 


This detection looks for logins or queries that fail with monitored error codes. Within the training window, each time slice (e.g. an hour, as defined by the respective parameter) that has more than a certain number of errors (the monitoredStatementsThreshold parameter) is flagged. For example, we can flag each slice that has more than 5 errors.


 


// countStatementsWithError (the number of statements failing with monitored
// error codes, per time slice) is computed in an earlier step of the full query
let trainingSet = processedData
| where WindowType == 'training'
| summarize countSlicesWithErrors = dcountif(timeSlice, countStatementsWithError >= monitoredStatementsThreshold) by Database;
processedData
| where WindowType == 'detection'
| join kind = inner (trainingSet) on Database
| extend IsErrorAnomalyOnStatement = iff(((countStatementsWithError >= monitoredStatementsThreshold) and (countSlicesWithErrors <= trainingSlicesThreshold)), true, false), anomalyScore = round(countStatementsWithError / monitoredStatementsThreshold, 0)
| where IsErrorAnomalyOnStatement == true
| sort by anomalyScore desc, timeSlice desc

 


The alert is triggered if the detection window contains events with monitored error code, whilst the number of flagged slices within training window is below threshold (trainingSlicesThreshold parameter). The latter logic helps control the number of alerts, especially in environments where such behavior is expected and common.
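The slice-counting logic can be sketched in Python; the function and parameter names below are illustrative only:

```python
# Sketch of the monitored-errors logic: flag time slices whose monitored error
# count reaches monitoredStatementsThreshold, then alert only when such slices
# were rare (<= trainingSlicesThreshold) during the training window.

def should_alert(training_slice_errors, detection_slice_errors,
                 statements_threshold=1, slices_threshold=5):
    """training_slice_errors: monitored error count per training time slice."""
    flagged_training = sum(1 for c in training_slice_errors
                           if c >= statements_threshold)
    detection_flagged = detection_slice_errors >= statements_threshold
    return detection_flagged and flagged_training <= slices_threshold

# One credential failure in the detection slice; rarely seen in training -> alert
print(should_alert([0, 0, 2, 0, 1], 1))       # True
# The same failure against a training window where it is routine -> no alert
print(should_alert([3] * 100, 1))             # False
```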


The output of the query contains a list of events in detection window that were flagged as anomalous and the AnomalyScore, as described above.


 




 


Detections for Monitored “Hot Words”


These detections look for statements that include hot words that might indicate attempts to execute malicious code. For example, shell commands (e.g. using the ‘xp_cmdshell’ procedure) can be used to run malicious scripts. These attempts might be successful or blocked, depending on SQL version and settings.


 


Currently, 5 detections of this type are included, covering different scenarios:


 



  1. HotwordsDropStatefulAnomalyOnDatabase: Looks for statements that drop resources (tables or databases). This might indicate that a malicious actor is attempting to delete valuable data and disrupt business activity.

  2. HotwordsExecutionStatefulAnomalyOnDatabase: Looks for attempts to run shell commands or scripts via SQL. This could indicate an attempt to download malware or run malicious processes (e.g. mining). Even if shell execution is disabled, this might indicate that a malicious actor has gained access to the SQL server and is making blind attempts.

  3. HotwordsFirewallRuleStatefulAnomalyOnDatabase: Looks for attempts to change or delete firewall rules. Since tight firewall rules hinder potential attacks, a malicious user might change them to pave the way for persistent attacks in the future.

  4. HotwordsOLEObjectStatefulAnomalyOnDatabase: Looks for attempts to create or use OLE objects, which can be used as part of a persistent attack scenario.

  5. HotwordsOutgoingStatefulAnomalyOnDatabase: Looks for attempts to access external websites or resources. These might be used in attack scenarios to download malware or to contact command and control servers during the attack.


The queries share the following parameters:


 



  • detectionWindow (default value: 1h): size of the recent time window (as described above)

  • trainingWindow: (default value: detectionWindow + 14d): training window size before the detection window (as described above).

  • timeSliceSize (default value: 1h): size of slices into which the data is divided (as described above).

  • MonitoredStatementsThreshold (default value: 1): the minimal threshold for statements with monitored hot words per slice to be flagged as suspicious (as described above).

  • trainingSlicesThreshold (default value: 5): the maximal threshold for the number of flagged slices  in the training window (as described above).

  • Hot words:

    • ('drop table', 'drop database'): Dropping tables or databases

    • ('xp_cmdshell', 'ps.exe', 'powershell', 'cmd.exe', 'msiexec', '<script>'): Shell command executions

    • ('http:', 'https:', 'ftp:', 'onion.pet'): Outgoing connections to external websites

    • ('sp_set_firewall_rule', 'sp_set_database_firewall_rule', 'sp_delete_database_firewall_rule', 'sp_delete_firewall_rule', 'sys.firewall_rules', 'sys.database_firewall_rules'): Firewall rule tampering

    • ('sp_oamethod', 'sp_oacreate', 'sp_oasetproperty'): OLE object manipulation




 


The detection logic is similar to the monitored errors detection: time slices containing monitored hot words in the training window are flagged. When the detection window contains such words and the number of flagged slices in the training period is below the threshold, an alert is triggered.
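A simplified Python sketch of the hot-word matching follows; the word lists are a subset of those above, and the function name is invented:

```python
# Sketch of hot-word matching: case-insensitive substring search over the SQL
# statement, grouped by detection scenario.

HOT_WORDS = {
    "drop": ("drop table", "drop database"),
    "execution": ("xp_cmdshell", "ps.exe", "powershell", "cmd.exe", "msiexec", "<script>"),
    "outgoing": ("http:", "https:", "ftp:", "onion.pet"),
}

def matched_scenarios(statement: str):
    """Return the scenario names whose hot words appear in the statement."""
    s = statement.lower()
    return [name for name, words in HOT_WORDS.items()
            if any(w in s for w in words)]

print(matched_scenarios("EXEC xp_cmdshell 'powershell -File run.ps1'"))  # ['execution']
print(matched_scenarios("SELECT name FROM sys.tables"))                  # []
```

In the solution itself this matching only feeds the stateful slice logic described above; a hot word alone does not trigger an alert if it is common in the training window.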


 


Hunting Queries


Hunting queries are designed for use in active threat hunts. Most threat hunting queries will require some amount of manual tuning, but this allows threat hunters to detect activity that may fall below detection baselines. 


 


These hunting queries will focus on exploring the abuse of applications that communicate directly with SQL, such as web applications, management software and accounting software. Focussing on application activity allows us to establish predictable patterns of interaction with the database. These predictable behaviours will provide loose baselines for our hunting queries.


 


We can roughly identify SQL applications by grouping using the ApplicationName and PrincipalName columns. This will provide combinations of applications with associated accounts, roughly mapping to each application in use.


 


Each hunting query is built on top of a standard normalisation described above. For brevity each individual query will not include this normalisation step, however complete queries can be found within the Azure Sentinel SQL solution.


 


Affected Row Anomaly


Azure SQL diagnostic logging provides us with the column affected_rows_d. This column contains a count of the rows that were impacted by an SQL query.


 


In a real-world scenario, the attacker may attempt to dump the database schema, drop a table, or overwrite data in the database. Each of these commands would result in the exploited query returning an unusually high number of affected rows when compared to normal database operation.


 


Using our processed data, we can create a simple query which calculates the total number of operations for a given SQL application and then calculates the prevalence of each query executed. We can then tune the query to return only queries that represent the top 1% of affected row counts.


 


let threshold = 1;
let totals = processedData
| summarize count() by PrincipalName, ApplicationName
| extend joinKey = strcat(PrincipalName, ApplicationName)
| project joinKey, count_;
processedData
//Count the total number of times each number of rows was accessed
| summarize count() by AffectedRows, PrincipalName, ApplicationName
| extend joinKey = strcat(PrincipalName, ApplicationName)
| join kind=leftouter (
totals
) on joinKey
| extend prevalence = round(toreal(count_) / toreal(count_1) * 100, 2)
| where prevalence <= threshold

 


By using affected row prevalence, the query can detect not only outliers from normal operation with a high number of affected rows, but also outliers with low affected row counts. An abnormally low number of affected rows may be indicative of manual SQL injection or SQL reconnaissance activity.
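The prevalence calculation can be sketched in Python; the principal, application, and row-count values below are fabricated for the example:

```python
from collections import Counter

# Sketch of the prevalence logic: for each (principal, application) pair,
# how often does a given AffectedRows value occur? Values whose prevalence
# is at or below the threshold (1%, matching the query) are outliers.

def rare_row_counts(events, threshold_pct=1.0):
    """events: list of (principal, application, affected_rows) tuples."""
    totals = Counter((p, a) for p, a, _ in events)          # queries per app
    per_value = Counter(events)                             # queries per row count
    return sorted({key for key, c in per_value.items()
                   if c / totals[(key[0], key[1])] * 100 <= threshold_pct})

# 990 routine single-row updates and 10 bulk reads of 50,000 rows
events = [("app_user", "crm_app", 1)] * 990 + [("app_user", "crm_app", 50000)] * 10
print(rare_row_counts(events))  # [('app_user', 'crm_app', 50000)]
```

Because the test is on prevalence rather than magnitude, an abnormally small affected-row count would be surfaced just as readily as a large one.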


 


Execution Time Anomaly


As explored earlier, time-based SQLi is a reconnaissance technique that uses SQL commands to purposely delay execution. It is also common for SQLi attacks to result in longer execution times than normal, because the attack may introduce additional processing.


 


This hunting query uses the Kusto series_decompose_anomalies function to detect queries that are executed in an anomalous amount of time. The following Kusto is used to prepare the processed data so that series_decompose_anomalies can be executed across it.


 


First, the query will summarise all SQL query operations by each user into 1-hour bins. The query will then summarise the average duration of the query grouping on user and application to provide a list of query durations.


 


processedData
//Bin the data into 1h windows, taking the average of execution time
| summarize avg_DurationMs = round(avg(DurationMs), 2), min(TimeGenerated), max(TimeGenerated) by PrincipalName, ApplicationName, bin(TimeGenerated, timeSliceSize)
//Summarise by user and application and create lists ready for anomaly detection
| summarize make_list(avg_DurationMs), make_list(min_TimeGenerated), make_list(max_TimeGenerated) by PrincipalName, ApplicationName

 


With the data prepared, series_decompose_anomalies can be applied to the list of durations to detect outliers. This will identify hours where the average query duration increases, providing us with periods of time where possible time based SQLi could have occurred.


 


| extend series_decompose_anomalies(list_avg_DurationMs, scoreThreshold, -1, 'linefit')

 


Finally, the query will expand the results of series_decompose_anomalies and re-join them with the original query.


 


| mv-expand TimeAnomaly=series_decompose_anomalies_list_avg_DurationMs_ad_flag, WindowStart=list_min_TimeGenerated, WindowEnd=list_max_TimeGenerated
| project WindowStart, WindowEnd, PrincipalName, ApplicationName, TimeAnomaly
| where TimeAnomaly == 1
//Split the query here to see raw anomaly results
//The next section will re-join back to the SQL diagnostics data to
//display the queries executed within the anomalous windows identified
| extend joinKey = strcat(PrincipalName, ApplicationName)
| join kind=leftouter (
processedData
| project ApplicationName, PrincipalName, Statement, TimeGenerated, DurationMs, ResourceId, ClientIp, HostName
| extend joinKey = strcat(PrincipalName, ApplicationName)
) on joinKey
| where TimeGenerated between (todatetime(WindowStart) .. todatetime(WindowEnd))
| project TimeGenerated, TimeAnomaly, WindowStart, WindowEnd, PrincipalName, ApplicationName, Statement, DurationMs, ResourceId, ClientIp, HostName
| order by DurationMs desc

 


The resolution and the anomaly decompose threshold can both be configured. The anomaly threshold allows you to control how strong the anomaly must be before a result is returned, increasing this value will effectively make the query less sensitive to minor fluctuations.
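series_decompose_anomalies performs a full seasonal decomposition of the series, which is not reproduced here. As a rough, much simplified stand-in, this Python sketch flags hourly average durations that sit well above the series median; the threshold plays a role loosely analogous to the score threshold discussed above:

```python
import statistics

# Simplified duration-anomaly sketch: flag hours whose average duration is
# more than score_threshold population standard deviations above the median.
# This ignores trend and seasonality, which the Kusto function handles.

def flag_duration_anomalies(hourly_avgs, score_threshold=1.5):
    med = statistics.median(hourly_avgs)
    std = statistics.pstdev(hourly_avgs) or 1.0  # guard against zero variance
    return [i for i, v in enumerate(hourly_avgs)
            if (v - med) / std > score_threshold]

# Average query duration (ms) per hour; hour 4 shows a WAITFOR-style delay
durations = [88, 90, 85, 92, 5100, 87, 91]
print(flag_duration_anomalies(durations))  # [4]
```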


 


Boolean Blind SQLi


As covered earlier in the post, Boolean blind SQLi is one of the most common SQL injection reconnaissance techniques and is often used by automated database takeover tools.


 


This hunting query will process the SQL statement to extract both sides of a Boolean expression. The query will first identify SQL statements with an “=” as part of their structure. A regular expression is then used to extract either side of the “=”, and the result is stored as the left and right side of the expression.


 


let queryData = processedData
| where Statement contains "="
| extend extract_equals = extract_all(@"([a-zA-Z0-9\-\']+\s?=\s?[a-zA-Z0-9\-\']+)", Statement)
| where isnotempty(extract_equals)
| mv-expand extract_equals
| extend left = tostring(split(extract_equals, "=", 0)[0])
| extend right = tostring(split(extract_equals, "=", 1)[0]);

 


We then need to separate instances where the left and right sides of the statement are wrapped in quotation marks, using a regular expression to parse the correct portion of the statement.


 


let cleanData = queryData
| where left !has "'" and right !has "'";
//Data has a quote in both sides, we need to parse this properly
//We only care when the query is balanced e.g. '1'='1', so both sides will have a quote
//This allows us to drop some results early
let quoteData = queryData
| where left has "'" and right has "'"
| extend extract_equals = extract_all(@"(\'.+\'\s?=\s?\'.+\')", Statement)
| extend left = tostring(split(extract_equals, "=", 0)[0])
| extend right = tostring(split(extract_equals, "=", 1)[0]);

 


Finally, after processing, we can check to see if the SQL statement contained a balanced parameter by comparing the right and left side of the “=”.


 


cleanData
| union quoteData
| where left == right
| extend alertText = strcat(left, "=", right)
| summarize AlertText=make_list(alertText) by TimeGenerated, Database, ClientIp, PrincipalName, Statement, ApplicationName, ResourceId
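The balanced-expression check translates naturally to Python. This sketch uses a regular expression equivalent to the first (unquoted) extraction above, and the function name is invented for the example:

```python
import re

# Extract `x=y` pairs from a statement and keep only the "balanced" ones
# (both sides identical, e.g. 1=1 or '1'='1'), which are the classic
# always-true conditions used in Boolean SQL injection.

PAIR_RE = re.compile(r"([a-zA-Z0-9\-']+)\s?=\s?([a-zA-Z0-9\-']+)")

def balanced_pairs(statement: str):
    return [f"{l}={r}" for l, r in PAIR_RE.findall(statement) if l == r]

print(balanced_pairs("SELECT * FROM members WHERE id=1 and 1=1"))      # ['1=1']
print(balanced_pairs("SELECT * FROM members WHERE id=1 and '1'='2'"))  # []
```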

 


 


Prevalence Based Query Size Anomaly


SQL injection will usually result in more parameters being added into the query. It is normally not possible for the attacker to remove existing parameters used by the legitimate page being exploited. As parameters are appended to an existing query, it is possible to baseline and then detect unusual numbers of parameters for a given user and application combination.


 


This query uses a prevalence calculation to identify queries that have an unusual number of parameters for a given application and user. First, the number of queries for each user and application group is calculated; this is stored as the total count so that the overall prevalence of each query size can be calculated later.


 


let total = processedData
| summarize count() by PrincipalName, ApplicationName
| extend key = strcat(PrincipalName, ApplicationName);

 


Next, the query splits each statement up using “=” and a space as delimiters. Roughly, each “=” indicates an SQL parameter whilst each space represents a new token, or query component. The query then summarises to count the number of queries based on the username, application, and number of tokens. This provides us with counts for each unique number of tokens for each username and application.


 


let anomalyData = processedData
//Counting "=" provides a good estimate of the number of parameters
| extend parameters = countof(Statement, "=")
//Splitting on space provides a good estimate of the tokens in the query
| extend tokens = array_length(split(Statement, " "))
| summarize count(), make_set(parameters), make_list(TimeGenerated), make_list(ClientIp), dcount(ClientIp), min(TimeGenerated), max(TimeGenerated), make_list(Statement) by tokens, PrincipalName, ApplicationName

 


Now that we have the total number of queries executed by each username and application, we can calculate the prevalence for each number of tokens observed. If that number of tokens represents less than 10% of the total queries executed, a row will be returned.


 


| extend key = strcat(PrincipalName, ApplicationName)
| join kind=leftouter (total) on key
| extend prevalence = toreal(count_) / toreal(count_1) * 100
| where prevalence <= prevalenceThreshold
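The full prevalence pipeline can be mirrored in plain Python. This sketch is an illustration with invented sample data (not the solution's code, and `prevalence_threshold` is a hypothetical parameter mirroring the query's `prevalenceThreshold`): it counts token sizes per user/application pair and flags the sizes that fall below the threshold.

```python
from collections import Counter

def rare_token_counts(statements, prevalence_threshold=10.0):
    """statements: (user, app, sql) triples -> rare (user, app, tokens, prevalence%) rows."""
    counts = Counter()
    for user, app, stmt in statements:
        tokens = len(stmt.split(" "))  # rough token estimate, as in the KQL
        counts[(user, app, tokens)] += 1
    # Total queries per user/application pair, mirroring the "total" subquery
    totals = Counter()
    for (user, app, tokens), n in counts.items():
        totals[(user, app)] += n
    anomalies = []
    for (user, app, tokens), n in counts.items():
        prevalence = n / totals[(user, app)] * 100
        if prevalence <= prevalence_threshold:
            anomalies.append((user, app, tokens, round(prevalence, 1)))
    return anomalies

# 19 routine queries plus one longer, injected-looking statement
data = [("appUser", "webApp", "SELECT id FROM users x")] * 19
data += [("appUser", "webApp", "SELECT id FROM users WHERE a = b x")]
print(rare_token_counts(data))
```

With this sample data the 9-token statement represents 5% of the pair's traffic and is flagged, while the 5-token baseline (95%) is not.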

 


SQL applications, especially those with pre-defined database interactions, generally produce consistent queries, so this hunting query can help flag anomalies where an unusual number of tokens is observed. There are limitations: if the attacker uses the same number of tokens as a legitimate function, no result will be returned. The query also needs application behaviour to be predictable, and is most effective with applications such as content management systems and other web applications.


 


The query that is bundled as part of the SQL solution also allows additional filtering; this includes the ability to detect anomalies that only occurred on a single day.


 


Time Based Query Size Anomaly


This hunting query uses a similar mechanism to the Prevalence Based Query Size Anomaly query; however, instead of using prevalence to detect outliers, this query uses series_decompose_anomalies to detect spikes in the number of parameters or tokens for a given day.


First, the query prepares data for the series_decompose_anomalies function. Data is summarized into 1-day bins, calculating the average number of tokens and parameters for each window. The data is then summarized by username and application name to create lists of token and parameter averages.


 


processedData
//Counting "=" provides a good estimate of the number of parameters in the query
| extend parameters = countof(Statement, "=")
//Splitting on space provides a good estimate of the tokens in the query
| extend tokens = array_length(split(Statement, " "))
//Bin the data into 1-day windows, taking the average of tokens and parameters for that user and application during the period
| summarize round(avg(tokens), 2), round(avg(parameters), 2), min(TimeGenerated), max(TimeGenerated) by PrincipalName, ApplicationName, bin(TimeGenerated, resolution)
//Summarise by user and application and create lists ready for anomaly detection
| summarize make_list(avg_tokens), make_list(avg_parameters), make_list(min_TimeGenerated), make_list(max_TimeGenerated) by PrincipalName, ApplicationName
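The daily binning step can be illustrated in Python. This sketch uses invented sample data (it is not the solution's code): it groups timestamped statements into day buckets and averages the rough token count per day, producing the kind of series that is later fed to the anomaly function.

```python
from collections import defaultdict
from datetime import datetime

def daily_avg_tokens(events):
    """events: (iso_timestamp, statement) pairs -> {date: average token count}."""
    buckets = defaultdict(list)
    for ts, stmt in events:
        day = datetime.fromisoformat(ts).date()  # 1-day bin
        buckets[day].append(len(stmt.split(" ")))  # rough token estimate
    return {day: round(sum(v) / len(v), 2) for day, v in buckets.items()}

series = daily_avg_tokens([
    ("2021-06-01T10:00:00", "SELECT a FROM t"),
    ("2021-06-01T11:00:00", "SELECT a FROM t WHERE x = 1"),
])
print(series)
```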

 


With the prepared data we can call series_decompose_anomalies to identify anomalies in the number of tokens and parameters for a given day. We then keep results where a positive or negative anomaly is detected (too many or too few tokens for that day when compared to the baseline). It is also important to record the start and end time of this activity so that we know when the anomaly occurred.


 


| extend series_decompose_anomalies(list_avg_tokens, scoreThreshold, -1, 'linefit'), series_decompose_anomalies(list_avg_parameters, scoreThreshold, -1, 'linefit')
| mv-expand TokenAnomaly=series_decompose_anomalies_list_avg_tokens_ad_flag, ParameterAnomaly=series_decompose_anomalies_list_avg_parameters_ad_flag, WindowStart=list_min_TimeGenerated, WindowEnd=list_max_TimeGenerated
| project WindowStart, WindowEnd, PrincipalName, ApplicationName, TokenAnomaly, ParameterAnomaly
//Enable to detect SQL statement token anomalies
| where TokenAnomaly == 1 or TokenAnomaly == -1

 


Finally, the query will create a join key using the username and application name to re-join back to the original dataset. The start and end times recorded earlier can now be used to surface results within the time window of the anomaly.


 


| extend joinKey = strcat(PrincipalName, ApplicationName)
| join kind=leftouter (
processedData
| project ApplicationName, PrincipalName, Statement, TimeGenerated
| extend joinKey = strcat(PrincipalName, ApplicationName)
) on joinKey
| where TimeGenerated between (todatetime(WindowStart) .. todatetime(WindowEnd))
| extend Parameters = countof(Statement, "=")
| extend Tokens = array_length(split(Statement, " "))
| project TimeGenerated, ParameterAnomaly, Parameters, TokenAnomaly, Tokens, WindowStart, WindowEnd, PrincipalName, ApplicationName, Statement
| order by Tokens desc, Parameters desc
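series_decompose_anomalies is specific to KQL, but the spike/dip flagging it performs can be approximated with a simple z-score. The Python sketch below is a rough stand-in for illustration only (a plain deviation-from-mean test, not the seasonal decomposition the solution actually uses): it flags daily average token counts that deviate strongly from the series mean, returning +1/-1/0 flags like the KQL function's `ad_flag` output.

```python
from statistics import mean, pstdev

def flag_daily_anomalies(daily_avg_tokens, score_threshold=1.5):
    """Return +1 (spike), -1 (dip), or 0 (normal) per day, relative to the series."""
    mu = mean(daily_avg_tokens)
    sigma = pstdev(daily_avg_tokens) or 1.0  # avoid division by zero on a flat series
    flags = []
    for value in daily_avg_tokens:
        score = (value - mu) / sigma
        if score > score_threshold:
            flags.append(1)    # unusually many tokens that day
        elif score < -score_threshold:
            flags.append(-1)   # unusually few tokens that day
        else:
            flags.append(0)
    return flags

print(flag_daily_anomalies([5, 5, 5, 5, 20]))
```

The decomposition approach in the real query is more robust because it accounts for trend and seasonality rather than treating every day as interchangeable.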

 


The results returned represent activity with an unusual number of tokens and parameters for that given day.


 


Suspicious Stored Procedures


Using the MSTIC sensor network, MSTIC can observe the commands most used by attackers once they gain access to an SQL database. Using this information, this hunting query surfaces SQL queries that contain the most abused SQL stored procedures.


 


let abusedProcedures = dynamic(["xp_cmdshell", "xp_regwrite", "xp_regdeletekey", "xp_regdeletevalue", "xp_dirtree", "xp_fileexist", "xp_msver", "xp_makecab", "xp_sqlshell", "xp_fixeddrivesd", "xp_regread", "sp_configure", "sp_oacreate", "sp_password", "sp_OACreate", "sp_addextendedproc", "sp_dropextendedproc", "sp_makewebtask", "sp_delete", "SP_OAcreate", "sp_OADestroy"]);
AzureDiagnostics
| where Category =~ "SQLSecurityAuditEvents"
| where statement_s has_any (abusedProcedures)
| project TimeGenerated, SubscriptionId, ResourceId, ClientIp=client_ip_s, PrincipalName=session_server_principal_name_s, statement_s, action_id_s, HostName=host_name_s, ApplicationName=application_name_s
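Outside of KQL, the has_any check amounts to a case-insensitive substring scan. This Python sketch is illustrative only, using a shortened version of the procedure list above (not the solution's code):

```python
# Shortened, illustrative subset of the abused-procedure list from the query above
ABUSED_PROCEDURES = ["xp_cmdshell", "xp_regwrite", "xp_dirtree", "sp_configure", "sp_oacreate"]

def flag_abused_procedures(statement: str):
    """Return the abused procedure names referenced by a SQL statement (case-insensitive)."""
    lowered = statement.lower()
    return [proc for proc in ABUSED_PROCEDURES if proc in lowered]

print(flag_abused_procedures("EXEC xp_cmdshell 'whoami'"))
```

Lower-casing both sides also collapses the mixed-case variants (sp_OACreate, SP_OAcreate) that the KQL list enumerates explicitly.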

 


The above commands are often abused because they allow attackers to progress from access to the SQL database alone to access to the underlying server. Executing command-line operations, manipulating registry keys, and performing directory traversal are all key components of successful onward exploitation.


 


Commands such as "xp_dirtree" can be exploited to allow attackers to capture MSSQL credentials. This is especially useful if the attacker only has access via SQL injection through a web application, as it grants them direct database access. Credentials may also be reused to access other systems in the network.


 


None of the above procedures is supported on Azure SQL Database, because Azure SQL Databases run in a shared environment. The presence of these commands in Azure SQL Database logging likely indicates opportunistic exploitation attempts if your database is exposed to the open internet.


 


If you are running your own SQL server, it is recommended that you disable any of these commands that are not in use. Many of them are disabled by default.


 


First you must enable advanced options:


 


EXEC sp_configure 'show advanced options', 1
RECONFIGURE

 


 


Then you can disable the chosen procedure, for example:


 


EXEC sp_configure 'xp_dirtree', 0
RECONFIGURE

 


 


If you leave any of these procedures enabled, an alert should be configured in your SIEM for when they are executed.


 


Conclusion


The Azure Sentinel SQL solution allows security analysts and administrators to rapidly deploy a range of detection and hunting queries to their Azure Sentinel environment. As you have seen, there are numerous ways that SQL attacks can be detected. The ever-changing security landscape means it’s not always possible to detect attacks with a single query. It’s important to develop diverse detection and hunting queries using numerous novel approaches to provide the best possible visibility of SQL attacks.


 


We hope this blog post has provided insight into how our detection and hunting queries work, and we are excited to see new Hunting and Detection queries built by the community.


 


All the queries in this post can be found in-full within the Azure Sentinel SQL solution. Additional hunting and detection queries written for Azure Sentinel can be found on the Azure Sentinel GitHub.
