Azure Monitor: Expanding the Out-of-the-Box observability for your IT infrastructure


Hello ladies and gentlemen,

How many times have you found yourself needing to monitor a custom KPI? How many times did you need to react to a situation where a given number of records in a database indicated an issue? Or where a file sitting in a storage account for more than 2 hours was the symptom of a broken process?

If you have run into any of the above, please keep reading. If not, keep reading anyway so you know how to handle these situations should they ever come up.

 

Unfortunately, Azure Monitor does not have any out-of-the-box (aka OOTB) data source to capture such data, but it can interact with other services, products, and features to get you to the final goal of retrieving the necessary data. Thinking of Azure Monitor as a member of a bigger family, we will ask for the help of other family members such as Azure Automation and the HTTP Data Collector API.

Let us explore two scenarios I encountered while working with my customers and see how Azure Monitor can help.

 

Scenario 1

Dealing with a shipment service, one of my customers wanted to measure, monitor, and react to shipments that had remained unactioned for the last 2 hours. The necessary data had to come from a SQL Server database and, as anticipated, Azure Monitor cannot collect data from inside a database, regardless of whether it is a PaaS, IaaS, or non-Azure workload. So how can we get this data into a Log Analytics workspace for further analysis and alerting? We need to create our own data source.

 

The cooperation within this big family gives us the advantage of creating a custom script-based data source, with all the flexibility and parsing logic we need, that sends data through the robust ingestion pipeline provided by the Azure platform.

 

To fulfil this scenario and create the script-based custom data source, follow these high-level steps (a minimal script sketch is shown after the list):

  1. Make sure you have a user account with the necessary permissions to access the data contained in the SQL database and to run the required SQL query.
  2. Run the SQL query against the database, leveraging the SqlServer PowerShell module and the appropriate authentication, to retrieve the records whose last update happened more than 2 hours ago.
  3. In the SQL query, which can be passed as a parameter to the runbook, take care of the filtering and the presentation layer. It is strongly recommended to filter and format the results in the query itself, so the information to upload later is easier to manage.
  4. Assemble the records into a JSON object. (For better script performance, use an ArrayList. See https://docs.microsoft.com/en-us/powershell/scripting/dev-cross-plat/performance/script-authoring-considerations?view=powershell-5.1#array-addition )
  5. Upload the JSON object to Log Analytics using the HTTP Data Collector API.
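
To make the steps more concrete, here is a minimal sketch of what steps 2 to 5 could look like inside the runbook. The server, database, query, column names (perfCounterName, perfCounterValue), asset names, and the UnactionedShipments log type are illustrative assumptions only; Post-LogAnalyticsData is one of the two standard functions shown later in "The common approach".

```powershell
# Minimal sketch only: server, database, query, column and asset names are illustrative assumptions.
param (
    [Parameter(Mandatory = $true)] [string] $SqlServerInstance,  # e.g. the SQL Server host name
    [Parameter(Mandatory = $true)] [string] $DatabaseName,
    [Parameter(Mandatory = $true)] [string] $SqlQuery             # the query already filters rows older than 2 hours
)

# Workspace details and SQL credentials kept as Automation assets, never hard-coded
$workspaceId  = Get-AutomationVariable -Name 'LogAnalyticsWorkspaceId'
$workspaceKey = Get-AutomationVariable -Name 'LogAnalyticsWorkspaceKey'   # encrypted variable
$sqlCred      = Get-AutomationPSCredential -Name 'SqlReaderCredential'

# Step 2: run the query with the SqlServer PowerShell module
Import-Module SqlServer
$rows = Invoke-Sqlcmd -ServerInstance $SqlServerInstance -Database $DatabaseName `
                      -Credential $sqlCred -Query $SqlQuery

# Step 4: assemble the records into a JSON payload (ArrayList for performance)
$records = [System.Collections.ArrayList]::new()
foreach ($row in $rows) {
    [void]$records.Add([PSCustomObject]@{
        perfCounterName  = $row.perfCounterName    # generic column names (see 'Lesson learned')
        perfCounterValue = $row.perfCounterValue
    })
}
$jsonBody = $records | ConvertTo-Json -Depth 3

# Step 5: upload to Log Analytics through the HTTP Data Collector API
# (Post-LogAnalyticsData is one of the two standard functions shown in 'The common approach')
Post-LogAnalyticsData -customerId $workspaceId -sharedKey $workspaceKey `
                      -body ([Text.Encoding]::UTF8.GetBytes($jsonBody)) -logType 'UnactionedShipments'
```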

 

Scenario 2

A payment service is normally offered by one vendor but operated by a different one. In this scenario, another customer wanted to make sure that SLAs were respected and that all the required payments were processed.

To achieve this goal the customer needed visibility into the number and age of files stored in a storage account. If files were still present in the storage account after 1 hour, an action was required to notify the customer (and subsequently the vendor) of this clear symptom that the payment pipeline had stopped somewhere. Again, there is no native mechanism to collect this specific information.

 

In this case, the high-level steps I followed and can recommend for creating the custom script-based data source are the following (a minimal script sketch is shown after the list):

  1. Find or create a user account with the necessary access permissions to the storage account.
  2. Retrieve the file information, including the creation date, and parse it accordingly using the relevant Az.Storage PowerShell module and commands.
  3. Format and assemble the records into a JSON object. (For better script performance, use an ArrayList. See https://docs.microsoft.com/en-us/powershell/scripting/dev-cross-plat/performance/script-authoring-considerations?view=powershell-5.1#array-addition )
  4. Upload the JSON object to Log Analytics using the HTTP Data Collector API.
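
Again, a minimal sketch of steps 2 to 4, assuming the files live as blobs in a container and the Automation account has a managed identity with read access to it. The storage account, container, asset and log type names are illustrative assumptions; the upload relies on the same standard functions described in "The common approach".

```powershell
# Minimal sketch only: storage account, container, asset and log type names are illustrative assumptions.
Import-Module Az.Accounts, Az.Storage

# Authenticate with the Automation account's managed identity and read workspace details from assets
Connect-AzAccount -Identity | Out-Null
$workspaceId  = Get-AutomationVariable -Name 'LogAnalyticsWorkspaceId'
$workspaceKey = Get-AutomationVariable -Name 'LogAnalyticsWorkspaceKey'   # encrypted variable

# Step 2: list the blobs and keep the ones older than 1 hour
# (LastModified is used here as a proxy for the creation time; adjust to your needs)
$context    = New-AzStorageContext -StorageAccountName 'paymentstorage01' -UseConnectedAccount
$threshold  = (Get-Date).ToUniversalTime().AddHours(-1)
$staleBlobs = Get-AzStorageBlob -Container 'incoming-payments' -Context $context |
              Where-Object { $_.LastModified.UtcDateTime -lt $threshold }

# Step 3: format and assemble the records into a JSON payload (ArrayList for performance)
$records = [System.Collections.ArrayList]::new()
foreach ($blob in $staleBlobs) {
    [void]$records.Add([PSCustomObject]@{
        fileName     = $blob.Name
        lastModified = $blob.LastModified.UtcDateTime.ToString('o')
        ageMinutes   = [int]((Get-Date).ToUniversalTime() - $blob.LastModified.UtcDateTime).TotalMinutes
    })
}
$jsonBody = $records | ConvertTo-Json -Depth 3

# Step 4: upload to Log Analytics through the HTTP Data Collector API
Post-LogAnalyticsData -customerId $workspaceId -sharedKey $workspaceKey `
                      -body ([Text.Encoding]::UTF8.GetBytes($jsonBody)) -logType 'StalePaymentFiles'
```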

 

The common approach

As far as the custom script-based data source is concerned, it is just a PowerShell-type Azure Automation runbook. The PowerShell script you need to create should:

  1. Take care of retrieving and parsing data
  2. Include the two standard functions (Build-Signature and Post-LogAnalyticsData)
  3. Call the Post-LogAnalyticsData function after the parsing logic to upload the data

The standard functions are included in the Sample requests script.
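
For reference, the two standard functions look essentially like the PowerShell sample published with the HTTP Data Collector API documentation:

```powershell
# Build the HMAC-SHA256 Authorization header required by the Data Collector API
Function Build-Signature ($customerId, $sharedKey, $date, $contentLength, $method, $contentType, $resource)
{
    $xHeaders       = "x-ms-date:" + $date
    $stringToHash   = $method + "`n" + $contentLength + "`n" + $contentType + "`n" + $xHeaders + "`n" + $resource
    $bytesToHash    = [Text.Encoding]::UTF8.GetBytes($stringToHash)
    $keyBytes       = [Convert]::FromBase64String($sharedKey)
    $sha256         = New-Object System.Security.Cryptography.HMACSHA256
    $sha256.Key     = $keyBytes
    $calculatedHash = $sha256.ComputeHash($bytesToHash)
    $encodedHash    = [Convert]::ToBase64String($calculatedHash)
    return ('SharedKey {0}:{1}' -f $customerId, $encodedHash)
}

# Post the JSON body to the workspace; $logType becomes the custom log table name (with the _CL suffix)
Function Post-LogAnalyticsData ($customerId, $sharedKey, $body, $logType)
{
    $method        = "POST"
    $contentType   = "application/json"
    $resource      = "/api/logs"
    $rfc1123date   = [DateTime]::UtcNow.ToString("r")
    $contentLength = $body.Length
    $signature = Build-Signature -customerId $customerId -sharedKey $sharedKey -date $rfc1123date `
                                 -contentLength $contentLength -method $method -contentType $contentType -resource $resource
    $uri = "https://" + $customerId + ".ods.opinsights.azure.com" + $resource + "?api-version=2016-04-01"

    $headers = @{
        "Authorization" = $signature
        "Log-Type"      = $logType
        "x-ms-date"     = $rfc1123date
    }

    $response = Invoke-WebRequest -Uri $uri -Method $method -ContentType $contentType `
                                  -Headers $headers -Body $body -UseBasicParsing
    return $response.StatusCode
}
```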

Of course, depending on where the data resides and how it can be accessed, you will need to consider the use of Hybrid Runbook Workers, as well as different types of authentication, for those scenarios where the data is only accessible from within a virtual machine.

With that said, you should have a clear picture of the possible solution. So, instead of discussing the PowerShell runbook in depth, which would not be that helpful since every customer might have unique needs, I will show the PowerShell runbook creation approach in the flowchart below, assuming Azure Automation is used:

 

[Flowchart: the PowerShell runbook creation approach using Azure Automation]

 

 

Lesson learned

Let me also share a few things I learned during these engagements:

  1. Pass as much information as you can to the script using parameters. Avoid hard-coded inputs such as the server name, database name, or credentials.
  2. Regarding Scenario 1, should you want to reuse the same script for a comparable situation, try to make the SQL query result format as agnostic as you can. In other words, do not use any specific field name as a column name. Instead, give the columns meaningful but generic names. As an example, imagine that the number of records equates to a performance counter value and the shipment type equates to the performance counter name. Why not call the columns perfCounterName and perfCounterValue? Do you also need an additional field for the type of object being shipped? No problem, simply name it object.
  3. As far as credentials go, you can leverage encrypted Automation variables or any other type of identity. Never store the username and password in the script since, as you can imagine, they would be stored in clear text.
  4. Using the HTTP Data Collector API implies respecting certain limits. For instance, a single POST request cannot exceed 30 MB. If the amount of data you need to upload exceeds this limit, you need to split it into smaller chunks (a batching sketch follows this list). One good way to check whether the data is over the limit is to save the JSON object to a file and check the file size. The necessary info about limits can be found at Data limits.
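
To illustrate point 4, one simple way to stay under the post-size limit is to upload the records in fixed-size batches and check the byte size of each payload. The batch size below is an arbitrary assumption you would tune to your record size, and $records, $workspaceId and $workspaceKey are the illustrative variables from the earlier sketches.

```powershell
# Hypothetical batching sketch: $records, $workspaceId and $workspaceKey come from the earlier sketches
$batchSize = 5000   # arbitrary assumption; tune it so every payload stays well below the API size limit
for ($i = 0; $i -lt $records.Count; $i += $batchSize) {
    $count = [Math]::Min($batchSize, $records.Count - $i)
    $chunk = $records.GetRange($i, $count)
    $body  = [Text.Encoding]::UTF8.GetBytes((ConvertTo-Json -InputObject $chunk -Depth 3))

    Write-Output ("Posting records {0}-{1} ({2:N0} bytes)" -f $i, ($i + $count - 1), $body.Length)
    Post-LogAnalyticsData -customerId $workspaceId -sharedKey $workspaceKey `
                          -body $body -logType 'UnactionedShipments'
}
```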

 

Moral of the story: once you have the data, you can do anything with it. But what if there is no data source to collect the data you need? Just invent your own :lol:

By now you have enough food for thought, and if you are as hungry as I was, don't wait any longer: start creating your custom data source solution to extend the OOTB observability :smile:
