Wednesday, 13 December 2017

Update - Log Files Sanitizer v2

    Following up on the recent post about removing confidential data from log files,
which can be found over here:

    I decided to post a new version of the script with several improvements compared with the first version. The initial release of the script was able to remove confidential data from all the files in a single folder. Recently I was put up against a bigger challenge though - I had to collect and sanitize logs contained in multiple sub-folders of a complex folder structure. Copying the sanitizer script to every single location and running it there would be a daunting task, therefore I created a second version of the script with the following improvements:

1. The script now digs recursively through all sub-folders of a folder structure and removes confidential information from every single file
2. The previous version created a copy of each file in which confidential information was found, with a ".parsed" suffix, next to the original file. In a multi-folder tree, fishing for those parsed files and manually removing the non-parsed files would be another time-consuming task which I wanted to avoid. The new version of the script creates a "_Sensitive" folder at the top of the tree and moves all the files containing confidential information into it. The sanitized versions replace the original files in their respective locations
3. Minor bug fix - tests proved that the previous version did not deal well with files containing dots in the name other than the one separating the file name from the extension. In an extreme situation, when files had similar names, this could lead to overwriting the ".parsed" versions of the files. The current version renames files in a different way and this problem is resolved
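The renaming fix described in point 3 boils down to inserting the suffix before the last dot only. A minimal Python sketch of that logic (illustrative only - the actual script is written in PowerShell):

```python
def parsed_name(filename, suffix="_parsed"):
    """Insert a suffix before the last dot, so extra dots in the
    name (e.g. server.2017.log) no longer cause name collisions."""
    stem, dot, ext = filename.rpartition(".")
    if not dot:                       # no extension at all
        return filename + suffix
    return stem + suffix + "." + ext

# Files differing only before the final dot now map to distinct names
print(parsed_name("server.2017.log"))   # server.2017_parsed.log
print(parsed_name("server.2018.log"))   # server.2018_parsed.log
```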

    Save the code of the log parser as a .ps1 file in the root of the directory containing all the files that you want to sanitize. Run the log parser from an Administrator PowerShell session and it will create new files for any log files in which IP addresses were detected

 Log Sanitizer output

Additional Notes:
    Be careful with *.evtx files, as they store IP addresses with spaces between each character (i.e. 1 0 . 1 . 1 2 2 . 1 3). These would not be detected by the log parser, so if you are exporting Windows Event Viewer logs for parsing, ensure they are exported in the .csv format
# Create the folder that will collect the original (sensitive) files
New-Item .\_Sensitive -Type Directory

# Every file in the tree, sub-folders included; directories are filtered out
$logfiles = Get-ChildItem .\. -Recurse | ?{!$_.PSIsContainer}

forEach ($log in $logfiles){
    Write-Host -f green "Parsing $log"
    $IPsMatched = 0
    Get-Content $log.FullName | ?{$_ -match '(?<IP>(10|172|255|192|45|48|49)\.\d{1,3}\.\d{1,3}\.\d{1,3})'} | ForEach-Object {$IPsMatched++}
    if ($IPsMatched -gt 0){
        Write-Host "Found $IPsMatched IP addresses"
        # Insert "_parsed" before the last dot only, so additional dots
        # in the file name no longer cause name collisions
        $parsedLogName = $log.Name.SubString(0, $log.Name.LastIndexOf('.')) + "_parsed." + $log.Name.Split('.')[-1]
        $parsedLogFullName = Join-Path ($log.FullName | Split-Path) $parsedLogName
        (Get-Content $log.FullName) -replace '(10|172|255|192|45|48|49)\.\d{1,3}\.\d{1,3}\.\d{1,3}','X.X.X.X' | Set-Content $parsedLogFullName
        Write-Host "Sanitized log file written into " -NoNewline; Write-Host -f yellow $parsedLogFullName
        Write-Host $log.FullName -NoNewline; Write-Host -f yellow " moved to .\_Sensitive folder"
        Move-Item $log.FullName .\_Sensitive
    }
    else {
        Write-Host "No IP addresses found"
    }
}

Friday, 17 November 2017

Checking the Full Folder Path of Driver Objects in SCCM

    Following up on the recent life hacks series post about finding certain drivers in SCCM, which can be found over here:

    I decided to post about a follow-up question that I received afterwards. As we had already identified that certain drivers exist in the infrastructure, the next question was - where are they? The answer is not obvious, as finding the full folder path in SCCM requires looping through the database with a SQL query that iterates towards the parent folder until one exists. With the query below you can identify the path rather than checking folder by folder in the SCCM Console. Below you can find exemplary output of the query:
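The idea behind the query - repeatedly following ParentContainerNodeID until the root is reached - can be sketched in a few lines of Python (the folder data below is hypothetical, for illustration only):

```python
# Hypothetical folder table: ContainerNodeID -> (Name, ParentContainerNodeID)
# A ParentContainerNodeID of 0 marks a top-level folder, as in vSMS_Folders
folders = {
    1: ("Drivers", 0),
    2: ("Network", 1),
    3: ("Allied Telesis", 2),
}

def full_path(container_id):
    """Walk towards the root, collecting folder names, then join them."""
    parts = []
    while container_id != 0:
        name, parent = folders[container_id]
        parts.append(name)
        container_id = parent
    return "\\".join(reversed(parts))

print(full_path(3))   # Drivers\Network\Allied Telesis
```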

 Exemplary Output of the Path of Driver Objects SQL query

    Replace the _DriverModelRegexp_ string with a SQL LIKE pattern matching your needs, depending on the model of the drivers you are looking for.
    Obviously the query should be run via SQL Management Studio or any other similar tool letting you execute T-SQL queries.

Additional Notes:
    Interesting fact about this query - it finds the drivers not only when they are present as an object with the exact same name as the queried one. It also finds drivers that are compatible with the queried ones, even under a different name. Apparently this is the way the information is stored in the SCCM database. You can see the example below: in the query above I was looking for Allied Telesis AT-2911xx Gigabit Fiber Ethernet. The results of the query pointed me to the following folder with the following 4 objects inside:

Returned Folder Does Not Contain Allied Telesis AT-2911 Drivers

    However, when you look closer and check the Applicability tab in the driver's properties, you see that they are actually compatible with AT-2911xx Gigabit Fiber Ethernet. Very convenient!

 Returned Drivers are compatible with Allied Telesis AT-2911 Ones

SELECT ROW_NUMBER() OVER (ORDER BY ModelName,vSMS_Folders.ContainerNodeID) AS Row,ModelName,vSMS_Folders.ContainerNodeID,ModelName AS Folder INTO #Temp
FROM v_CI_DriverModels
JOIN vFolderMembers ON v_CI_DriverModels.CI_UniqueID = vFolderMembers.InstanceKey
JOIN vSMS_Folders ON vFolderMembers.ContainerNodeID = vSMS_Folders.ContainerNodeID
WHERE ModelName LIKE '_DriverModelRegexp_'
ORDER BY ModelName
DECLARE @It INT = 1
DECLARE @ContainerID INT
DECLARE @ContainerIDBeg INT
DECLARE @ContainerFullName VARCHAR(MAX)
DECLARE @ContainerName VARCHAR(MAX)

-- For every row of #Temp walk up the folder tree and build the full path
WHILE @It <= (SELECT MAX(Row) FROM #Temp)
BEGIN
      SET @ContainerID = (SELECT ContainerNodeID FROM #Temp WHERE Row=@It)
      SET @ContainerIDBeg = @ContainerID
      SET @ContainerFullName = ''
      -- Iterate towards the root folder (ParentContainerNodeID = 0)
      WHILE (SELECT ParentContainerNodeID FROM vSMS_Folders WHERE ContainerNodeID=@ContainerID) != 0
      BEGIN
            SET @ContainerName = (SELECT Name FROM vSMS_Folders WHERE ContainerNodeID=@ContainerID)
            IF (@ContainerID = @ContainerIDBeg)
                  SET @ContainerFullName = @ContainerName
            ELSE
                  SET @ContainerFullName = @ContainerName + '\' + @ContainerFullName
            SET @ContainerID = (SELECT ParentContainerNodeID FROM vSMS_Folders WHERE ContainerNodeID=@ContainerID)
      END
      -- Prepend the name of the top-level folder
      SET @ContainerFullName = (SELECT Name FROM vSMS_Folders WHERE ContainerNodeID=@ContainerID) + '\' + @ContainerFullName
      UPDATE #Temp SET Folder = @ContainerFullName WHERE Row = @It
      SET @It += 1
END

SELECT ModelName,Folder FROM #Temp

Friday, 3 November 2017

Life Hacks - Checking the Presence of Certain Drivers in SCCM

    What will you say if someone asks you - do we already have this particular driver imported anywhere in SCCM? Here is the answer - a short but useful T-SQL query which will help you find out

    Replace the _DriverModelRegexp_ string with a SQL LIKE pattern matching your needs, depending on the model of the drivers you are looking for.
    Obviously the query should be run via SQL Management Studio or any other similar tool letting you execute T-SQL queries.

SELECT ModelName
FROM v_CI_DriverModels
WHERE ModelName LIKE '_DriverModelRegexp_'
GROUP BY ModelName
ORDER BY ModelName

Monday, 2 October 2017

System Uptime Report SQL Query

    Some time ago I was asked by a customer how to differentiate actual server downtime from a network connectivity failure using SCOM. From the perspective of SCOM's Availability Reports, which were the main tool used by the customer to assess the state of their assets, there is no differentiation at all, so we had to come up with a backup solution providing this information. Unfortunately the built-in reporting mechanism is not very convenient when it comes to this particular counter, because when you try to run the report for a group of multiple Health Service objects, it will aggregate all of them and try to calculate a mean value for every sample, which makes absolutely no sense in this case and produces a saw-shaped diagram like the one you can observe below.

System UpTime report for a group of Health Service objects

    Creating one report subscription per server would be a daunting task for a few hundred objects, therefore we took the approach of taking the data directly from the database.

    The SQL query presented below will provide the samples of the System UpTime performance rule for a particular object from the database. It has to be run against the SCOM DataWarehouse database. Replace the XXXX value below with a SQL LIKE pattern that suits your needs. An example of the output produced by the query is shown below.
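Since the System Up Time counter simply grows with every sample until the machine restarts, a drop in the sampled value marks a reboot. Here is a short Python sketch of how the query output could be post-processed to find actual downtime events (hypothetical sample data; this assumes the sampled value is seconds since boot):

```python
def detect_reboots(samples):
    """samples: list of (timestamp, uptime_seconds) in chronological order.
    A sample lower than its predecessor means the server restarted."""
    reboots = []
    for (t_prev, up_prev), (t_cur, up_cur) in zip(samples, samples[1:]):
        if up_cur < up_prev:
            reboots.append(t_cur)
    return reboots

# Hypothetical hourly samples; the counter resets after a restart
samples = [
    ("08:00", 100000),
    ("09:00", 103600),
    ("10:00", 1800),     # dropped -> the server rebooted before 10:00
    ("11:00", 5400),
]
print(detect_reboots(samples))   # ['10:00']
```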

Exemplary output of System Uptime SQL Query

SELECT DisplayName, [DateTime], SampleValue
FROM Perf.vPerfRaw
JOIN vManagedEntity ON vPerfRaw.ManagedEntityRowId = vManagedEntity.ManagedEntityRowId
WHERE PerformanceRuleInstanceRowId IN
(SELECT PerformanceRuleInstanceRowId FROM vPerformanceRuleInstance
WHERE RuleRowId IN (SELECT RuleRowId FROM vPerformanceRule WHERE CounterName LIKE 'System Up Time'))
AND DisplayName LIKE '%XXXX%'
AND FullName LIKE '%HealthService%'
ORDER BY DisplayName,[DateTime] DESC

Tuesday, 26 September 2017

Troubleshooting - Unusual Error during Linux Agent Deployment


   During the installation of the SCOM agent on RHEL servers I encountered the following error message in the process of signing the agent's certificate:

Exception message: Unable to create certificate context
; {ASN1 bad tag value met.

    The message is quite unusual - apart from one previous case I could not find any other reference to this problem associated in any way with SCOM.


    The only suggested solution - a firewall problem - was ruled out in the first place. After trying several approaches it turned out that during the certificate signing process the SCOM agent was trying to use older versions of two particular libraries than the ones present on the system, and failed because of this. The workaround was to create symbolic links, named after the old library files and pointing to the new files, with the following commands, and then to manually re-initiate the certificate signing process:

cd /usr/lib
sudo ln -s libcrypto.so.1.0.1e libcrypto.so.1.0.0
sudo ln -s libssl.so.1.0.1e  libssl.so.1.0.0
sudo /opt/microsoft/scx/bin/tools/scxsslconfig -f -v

    Following up on the threads suggesting this approach (even though for a different problem), I figured out that the problems reported as fixed with that script had been mitigated by the release of the next Cumulative Update for the Management Pack for UNIX and Linux Operating Systems. After verification it turned out that the agent binaries had been taken from the SCOM 2012 R2 SP1 ISO and did not contain the latest fixes applied to the Management Pack. After downloading the latest version of the binaries, the "ASN1 bad tag value" problem disappeared for all the Linux servers

Sunday, 27 August 2017

Troubleshooting - Disappearing Run As Profiles Configuration Settings


   Sometimes you have a general feeling that there is something wrong with the infrastructure, and by looking around you catch the symptoms one after another until you are able to compose an overall image of the problem. This is what happened in a case I had with one of my customers recently, which was resolved together with Microsoft Premier Support. It seems very interesting, and that is why I decided to share it with you. Here are all the symptoms observed before pinning the problem down, in more or less chronological order:

1. The groups created in SCOM were not available for selection in the reports. They appeared in the console, but not in the Reporting part of SCOM (which suggests problems with processing data from the Ops DB to the DataWarehouse DB)
2. A big amount of data stored in the Staging area of the DataWarehouse DB. Running the following T-SQL queries revealed hundreds of thousands of rows in the Alert and State parts of the Staging area

SELECT count(*) from Alert.AlertStage
SELECT count(*) from Event.EventStage
SELECT count(*) from Perf.PerformanceStage
SELECT count(*) from State.StateStage

3. Data Warehouse Data Collection State errors showing up in the Health Explorer of the Management Servers themselves in SCOM
4. A large number of 31551 events in the Operations Manager event log, informing about failures while storing data in the Data Warehouse. They look similar to the following event:
Log Name:      Operations Manager
Source:        Health Service Modules
Date:          27/01/2013 22:00:15
Event ID:      31551
Task Category: Data Warehouse
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      XXX

Failed to store data in the Data Warehouse. The operation will be retried.
Exception 'SqlException': Management Group with id 'VVVVVVVV-VVVV-VVVV-VVVV-VVVVVVVVVVVV' is not allowed to access Data Warehouse under login 'YYY\WRITER'

One or more workflows were affected by this.
Workflow name: Microsoft.SystemCenter.DataWarehouse.CollectPerformanceData
Instance name: XXX
Management group: ZZZ


    It turns out that we suffered from an issue which Microsoft admitted to be kind of a bug, one that seems to occur randomly in different environments. On rare occasions the default configuration of the SCOM Run As accounts for the Data Warehouse, created during the installation of the SCOM servers, might disappear from the Run As profiles configuration. The root cause of this behavior unfortunately has not yet been identified by Microsoft.


    In order to resolve the problem you have to re-introduce the settings. Below you can find screenshots of the properly configured Data Warehouse Account and Data Warehouse Report Deployment Account Run As profiles

Data Warehouse Run As Profiles default configuration

     After re-introducing the configuration everything should get back to normal.