Kevin Holman's System Center Blog

How SQL database free space monitoring works in the SQL management pack


 

image

 

This is based on version 6.6.4.0 of the SQL MP.

 

First – understand the SQL MP discovers the following items:

  • SQL Database
  • SQL DB File Group
  • SQL DB File
  • SQL DB Log File

The Database hosts the DB File Group, which in turn hosts the DB File.

The Database also hosts the DB Log File directly.

 

Let’s start with free space monitoring of the DB file – this is the lowest level of monitoring.

There are unit monitors that directly target the “SQL Server 2012 DB File” class.

The monitor for space is called: “DB File Space”   (Microsoft.SQLServer.2012.Monitoring.DBFileSpaceMonitor)

 

clip_image001

 

This runs every 15 minutes, with default thresholds of 10% (critical) and 20% (warning). This monitor does not generate alerts – it simply rolls up state. The reason is that you can have multiple files in a file group for a DB, and a single full file is not an issue by itself.

 

Microsoft.SQLServer.2012.Monitoring.DBFileSpaceMonitor uses the Microsoft.SQLServer.2012.DBFileSizeMonitorType

Microsoft.SQLServer.2012.DBFileSizeMonitorType  uses the Microsoft.SQLServer.2012.DBFileSizeRawPerfProvider datasource.

Microsoft.SQLServer.2012.DBFileSizeRawPerfProvider datasource runs GetSQL2012DBFilesFreeSpace.vbs with the following parameters from the Monitor configuration:

"$Config/ConnectionString$" "$Config/ServerName$" "$Config/SqlInstanceName$" "$Target/Host/Host/Host/Property[Type="SQL!Microsoft.SQLServer.DBEngine"]/TcpPort$"

 

This script checks many configuration settings for the individual DB file – then calculates a health state once complete.

 

Scenario: Autogrow is enabled

  • If autogrow is enabled for the DB file, the script checks the DB setting for FileMaxSize to be set.
  • If FileMaxSize is set – this is considered the upper limit to threshold against. (unless logical disk size is smaller than FileMaxSize)
  • If FileMaxSize is NOT set (Unlimited) then the logical disk size is considered the upper limit.

Scenario: Autogrow is NOT enabled:

  • If autogrow is not enabled, then the file size is considered the max file size and this value is used for threshold comparison.

 

The DB files will be healthy or unhealthy based on this calculation. Again – no alerts yet.
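To make the threshold math concrete, here is a minimal PowerShell sketch of the calculation described above. This is my own illustration – the function and parameter names are hypothetical, and the real logic lives in GetSQL2012DBFilesFreeSpace.vbs:

# Hypothetical sketch of the effective-max-size logic applied per DB file.
function Get-EffectiveMaxSizeMB {
    param($FileSizeMB, $AutogrowEnabled, $FileMaxSizeMB, $LogicalDiskSizeMB)
    if (-not $AutogrowEnabled) { return $FileSizeMB }            # no autogrow: current file size is the cap
    if ($FileMaxSizeMB -gt 0) {
        # FileMaxSize is set, but the logical disk may be smaller than FileMaxSize
        return [Math]::Min($FileMaxSizeMB, $LogicalDiskSizeMB)
    }
    return $LogicalDiskSizeMB                                    # unlimited growth: the disk is the cap
}

# Example: 10 GB file, autogrow on, no FileMaxSize, on a 100 GB volume
$maxMB = Get-EffectiveMaxSizeMB -FileSizeMB 10240 -AutogrowEnabled $true -FileMaxSizeMB 0 -LogicalDiskSizeMB 102400
$freePct = (($maxMB - 10240) / $maxMB) * 100    # 90% free – compared against the 20%/10% thresholds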

Next – all the discovered DB file monitors roll their health state up one level to the monitor “DB File Space (rollup)”

clip_image002

 

This is a rollup dependency monitor targeting the filegroup object, and it has a “best state” rollup policy. This means that if ANY child DB file has free space, the rollup is healthy. That makes sense.

clip_image003

 

This monitor DOES generate alerts named “File Group is Running out of space”

clip_image004

 

This monitor rolls up health to “DB File Group Space” monitor.

clip_image005

 

This is an Aggregate monitor with a “Worst state of any member” policy, and it is used for rollup only.

clip_image006

 

This monitor rolls up health to the “DB File Group Space (rollup)” monitor

clip_image007

 

This is a rollup dependency monitor targeting the database object, and it has a “worst state” rollup policy. This means that if ANY file group is unhealthy, we consider the DB unhealthy.

 

This rolls up to the “DB Space” monitor, which is an Aggregate rollup monitor to roll health to the DB object.

image

 

 

SUMMARY of DB file monitoring:

  • The ACTUAL space monitoring in the SQL MP is done at the individual DB file level.
  • Alerting is done at the DB File GROUP level based on a “best of” rollup.
  • Everything else is designed to roll the health up correctly from DB file to File Group, and from File Group to Database object.

 

Log file free space monitoring:

This works EXACTLY like DB file space monitoring, except it is less complicated: there is no concept of a “filegroup,” so the log file monitor rolls up to the DB object with a single dependency (rollup) monitor, which is also where the alerts are generated.

image

 

 

Now, if you DO use autogrow, and you place multiple DB files or log files on the SAME logical disk – the management pack does NOT take that into account. Your individual DB and log file monitors might not trigger because each file is individually below its threshold, yet cumulatively the files could fill the disk. This is why Base OS disk free space monitoring is still critical for SQL volumes. This is documented in the MP guide.

 

 

Alternatives:

IF – for some reason – a customer did not want to discover DB files and file groups, and ONLY wanted the total database space calculated, there are disabled monitors targeting the DB object – one for the database and one for the log file. You could optionally disable the discovery of DB files and filegroups and have a MUCH simpler design (although potentially not as actionable).

clip_image008

A customer might take this approach if they have a VERY large SQL environment and want to reduce scale impact by not discovering DB file groups and DB files. Additionally, this reduces the performance collection impact of collecting data for all those individual objects.

Another reason to take this approach is if you have a HUGE SQL server with a LOT of databases and DB files. The number of scripts running on that server could be VERY large and very impactful to the server. You could selectively disable the discoveries for that server, run Remove-SCOMDisabledClassInstance to clean the instances out of SCOM (as sketched below), and then enable just the simpler database-level monitors.
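A minimal sketch of that cleanup step, run from the Operations Manager Shell after your discovery overrides are in place:

# Purge instances of classes whose discoveries have been disabled via override.
Import-Module OperationsManager
Remove-SCOMDisabledClassInstance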

If you don’t NEED monitoring of individual files and file groups, this approach makes some sense.


MP Update: SCCM 2012 MP version 5.00.8239.1008


 

The System Center Config Mgr 2012 MP has been updated.

Unfortunately – what changed from the previous release is undocumented.

 

We can make some guesses, however, from the supported configurations page in the guide:

  • System Center 2012 Configuration Manager Service Pack (SP2) CU3 or later version – Yes
  • System Center 2012 R2 Configuration Manager CU3 or later version – Yes
  • System Center Configuration Manager 1602 or later – Yes
  • Configuration Manager 2007 – Not supported

 

The previous ConfigMgr 2012 MP supported SCCM 2012 and SCCM 2012 SP1. It was never updated for SCCM 2012 SP2 or SCCM 2012 R2, but it worked for those versions.

However – this new MP now explicitly states support for:

  • SCCM 2012 SP2 CU3+
  • SCCM 2012 R2 CU3+
  • SCCM build 1602+

Writing a custom class for your network devices


 

image

 

While there is built-in monitoring for network devices in SCOM – there are scenarios where we want to create custom classes for specific network device types. Perhaps you want to create your own SNMP-based polling monitors and run them against specific device types, such as a particular firewall brand or router.

Creating your custom class is quite simple – based on a common System OID that the devices will share.

This concept was documented by Daniele Grandini (https://nocentdocent.wordpress.com/2013/05/21/discovery-identifying-the-device-snmp-mp-chap-2-sysctr-scom/). I am simply taking it a step further by publishing a full MP example for you to work from.

 

In the first step – we need to define the MP manifest, and add a reference to the System.NetworkManagement.Library since we will be targeting the “Node” class from that MP:

 

<Manifest>
  <Identity>
    <ID>Example.Network</ID>
    <Version>1.0.0.2</Version>
  </Identity>
  <Name>Example Network</Name>
  <References>
    <Reference Alias="Network">
      <ID>System.NetworkManagement.Library</ID>
      <Version>7.1.10226.0</Version>
      <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
    </Reference>
    <Reference Alias="Windows">
      <ID>Microsoft.Windows.Library</ID>
      <Version>6.0.4837.0</Version>
      <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
    </Reference>
    <Reference Alias="System">
      <ID>System.Library</ID>
      <Version>6.0.4837.0</Version>
      <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
    </Reference>
    <Reference Alias="SC">
      <ID>Microsoft.SystemCenter.Library</ID>
      <Version>6.0.4837.0</Version>
      <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
    </Reference>
    <Reference Alias="Health">
      <ID>System.Health.Library</ID>
      <Version>6.0.4837.0</Version>
      <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
    </Reference>
  </References>
</Manifest>

 

Next, we will define our class.  We will use Node as the Base Class.

 

<TypeDefinitions>
  <EntityTypes>
    <ClassTypes>
      <ClassType ID="Example.Network.Device" Accessibility="Public" Abstract="false" Base="Network!System.NetworkManagement.Node" Hosted="false" Singleton="false" Extension="false" />
    </ClassTypes>
  </EntityTypes>

 

Then – a datasource module that will be used for each discovery for a unique device type.  We try to make datasource modules reusable – and have each workflow simply pass the necessary items instead of hard coding them:

 

<ModuleTypes>
  <DataSourceModuleType ID="Example.Network.Device.Discovery.DS" Accessibility="Internal" Batching="false">
    <Configuration>
      <xsd:element minOccurs="1" name="IntervalSeconds" type="xsd:integer" />
      <xsd:element minOccurs="0" name="SyncTime" type="xsd:string" />
      <xsd:element name="OID" type="xsd:string" />
      <xsd:element name="DisplayName" type="xsd:string" />
      <xsd:element name="Model" type="xsd:string" />
      <xsd:element name="Vendor" type="xsd:string" />
    </Configuration>
    <OverrideableParameters>
      <OverrideableParameter ID="IntervalSeconds" ParameterType="int" Selector="$Config/IntervalSeconds$"/>
      <OverrideableParameter ID="SyncTime" ParameterType="string" Selector="$Config/SyncTime$"/>
    </OverrideableParameters>
    <ModuleImplementation Isolation="Any">
      <Composite>
        <MemberModules>
          <DataSource ID="Scheduler" TypeID="System!System.Discovery.Scheduler">
            <Scheduler>
              <SimpleReccuringSchedule>
                <Interval>$Config/IntervalSeconds$</Interval>
                <SyncTime>$Config/SyncTime$</SyncTime>
              </SimpleReccuringSchedule>
              <ExcludeDates />
            </Scheduler>
          </DataSource>
          <ConditionDetection ID="MapToDiscovery" TypeID="System!System.Discovery.FilteredClassSnapshotDataMapper">
            <Expression>
              <SimpleExpression>
                <ValueExpression>
                  <Value>$Target/Property[Type="Network!System.NetworkManagement.Node"]/SystemObjectID$</Value>
                </ValueExpression>
                <Operator>Equal</Operator>
                <ValueExpression>
                  <Value Type="String">$Config/OID$</Value>
                </ValueExpression>
              </SimpleExpression>
            </Expression>
            <ClassId>$MPElement[Name='Example.Network.Device']$</ClassId>
            <InstanceSettings>
              <Settings>
                <Setting>
                  <Name>$MPElement[Name='System!System.Entity']/DisplayName$</Name>
                  <Value>$Config/DisplayName$</Value>
                </Setting>
                <Setting>
                  <Name>$MPElement[Name='Network!System.NetworkManagement.Node']/DeviceKey$</Name>
                  <Value>$Target/Property[Type="Network!System.NetworkManagement.Node"]/DeviceKey$</Value>
                </Setting>
                <Setting>
                  <Name>$MPElement[Name='Network!System.NetworkManagement.Node']/Model$</Name>
                  <Value>$Config/Model$</Value>
                </Setting>
                <Setting>
                  <Name>$MPElement[Name='Network!System.NetworkManagement.Node']/Vendor$</Name>
                  <Value>$Config/Vendor$</Value>
                </Setting>
              </Settings>
            </InstanceSettings>
          </ConditionDetection>
        </MemberModules>
        <Composition>
          <Node ID="MapToDiscovery">
            <Node ID="Scheduler" />
          </Node>
        </Composition>
      </Composite>
    </ModuleImplementation>
    <OutputType>System!System.Discovery.Data</OutputType>
  </DataSourceModuleType>
</ModuleTypes>

 

The above datasource is probably the most complicated part of this.  We are creating a composite DS, combining the System.Discovery.Scheduler module, with the System.Discovery.FilteredClassSnapshotDataMapper module.

The scheduler is simple – we pass in the interval.

The System.Discovery.FilteredClassSnapshotDataMapper is more complicated – it basically allows you to create a filtered discovery of existing objects, based on an expression matching on a class property. In this case, if the System OID equals the specific OID we pass in from the discovery, it is a match and we create an instance of the class. Since all of your desired network devices will share a common System OID, this is the perfect property to match on.

In this DS, I also included the ability to pass the Model and Vendor – you can inherit whatever is present from the Node property if the discovered network device is CERTIFIED, or provide your own custom ones in the discovery, if GENERIC.

 

Last – we define our discovery.

 

<Discoveries>
  <Discovery ID="Example.Network.Device.Discovery" Enabled="true" ConfirmDelivery="false" Remotable="true" Priority="Normal" Target="Network!System.NetworkManagement.Node">
    <Category>Discovery</Category>
    <DiscoveryTypes>
      <DiscoveryClass TypeID="Example.Network.Device" />
    </DiscoveryTypes>
    <DataSource ID="DS" TypeID="Example.Network.Device.Discovery.DS">
      <IntervalSeconds>14400</IntervalSeconds>
      <SyncTime />
      <OID>.1.3.6.1.4.1.8072.3.2.10</OID>
      <DisplayName>$Target/Property[Type="Network!System.NetworkManagement.Node"]/sysName$</DisplayName>
      <Model>$Target/Property[Type="Network!System.NetworkManagement.Node"]/Model$</Model>
      <Vendor>$Target/Property[Type="Network!System.NetworkManagement.Node"]/Vendor$</Vendor>
    </DataSource>
  </Discovery>
</Discoveries>

 

The discovery itself is simple – we call the datasource module and pass the necessary parameters. Each discovery includes the Class Type we are trying to discover, the System OID for that device type, a mapping of the existing display name, and the model and vendor. You can hard-code the model and vendor as text in each discovery if desired. The OID in my example is for a Linux system – you will need to change this for your devices.
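If you are not sure which System OID your devices report, here is a hedged PowerShell sketch that lists the SystemObjectID of every node SCOM has already discovered (it assumes the OperationsManager module is available, and uses the standard indexed property-access syntax for SCOM class instances):

# List each discovered network device and its System OID.
Import-Module OperationsManager
$nodeClass = Get-SCOMClass -Name 'System.NetworkManagement.Node'
Get-SCOMClassInstance -Class $nodeClass |
    Select-Object DisplayName,
        @{Name='SystemObjectID'; Expression={ $_.'[System.NetworkManagement.Node].SystemObjectID'.Value }}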

You should add a discovery for each different class type you want to create. These can be placed in unique MP’s for each network device type, or combined into one MP – up to you.

 

You can download a copy of the entire example mp here:

https://gallery.technet.microsoft.com/SCOM-Custom-Network-device-b2b16959

SQL MP Run As Accounts – NO LONGER REQUIRED


 

image             image

 

Over the years I have written many articles dealing with RunAs accounts.  Specifically, the most common need is for monitoring with the SQL MP.  I have explained the issues and configurations in detail here:  Configuring Run As Accounts and Profiles in OpsMgr – A SQL Management Pack Example

 

Later, I wrote an automation solution to script the biggest pain point of RunAs accounts:  distributing them, here:  Automating Run As Account Distribution – Finally!  Then – took it a step further, and built this automation into a management pack here:  Update-  Automating Run As Account distribution dynamically

 

Now – I want to show a different approach to configuring monitoring for the SQL MP, which might make life a lot simpler for SCOM admins, and SQL teams.

 

What if I told you there was a way to avoid messing with RunAs accounts in the SQL MP entirely? No creating the accounts, no distributing them, no associating them with the profiles – none of that? Interested? Then read on.

 

The big challenge in SQL monitoring is that the SCOM agent runs as LocalSystem for the default agent action account. However, LocalSystem does not have full rights to SQL Server, and should never be granted the SysAdmin role in SQL. This is because the LocalSystem account is quite easy to impersonate for anyone who already has admin rights to the OS.

We can solve this challenge by introducing Service SID’s. SQL already uses Service Security Identifiers (SID’s) to grant the service running SQL Server access to the SQL instance. You can read more about that here: https://support.microsoft.com/en-us/kb/2620201

 

We can do the same thing for the SCOM Healthservice. This idea was brought to me by a fellow MS consultant – Ralph Kyttle. He pointed out that this is exactly how OMS works to gather data about SQL Server. We have an article describing this recommended configuration here: https://support.microsoft.com/en-us/kb/2667175

 

Essentially – this can be accomplished in two steps:

  1. Enable the HealthService to be able to use a service SID.
  2. Create a login for the HealthService SID to be able to access SQL server.

 

That’s it!
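For reference, here is a minimal sketch of those two steps done by hand – assuming the default HealthService service name and a default SQL instance; run step 1 on the monitored SQL server, and step 2 against the SQL instance:

# Step 1: enable the service SID for the SCOM agent Health Service, then restart it.
sc.exe sidtype HealthService unrestricted
Restart-Service HealthService

# Step 2: create a login for the service SID and grant the role the SQL MP needs
# (T-SQL passed through sqlcmd; '.' means the local default instance).
sqlcmd -S . -Q "CREATE LOGIN [NT SERVICE\HealthService] FROM WINDOWS; ALTER SERVER ROLE [sysadmin] ADD MEMBER [NT SERVICE\HealthService];"

The monitor, recovery, and tasks described below simply automate these same steps.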

This creates a login in SQL and allows the SCOM agent to monitor SQL Server without maintaining another credential or dealing with password changes, and it removes the security concern of a compromised RunAs account being able to access every SQL server in the company! No more configuration, no more credential distribution.

 

I even wrote a Management Pack to make setting this initial configuration up much simpler.  Let me demonstrate:

 

First, we need to ensure that all SCOM agents where SQL is discovered have the service SID enabled. I wrote a monitor to detect when this is not configured, and targeted the SQL SEED classes:

image

 

This monitor will show a warning state when the Service SID is not configured, and will generate a warning alert:

 

image

 

The monitor has a script recovery action, which is disabled by default. You can enable it, and it will automatically configure the service SID as soon as SQL is detected, then restart the agent.

 

image

 

Alternatively – I wrote two tasks you can run – the second one configures the service SID, but will wait for the next reboot (or service restart) before this actually becomes active.  The first task configures the service AND then restarts the agent Healthservice:

 

image

 

Here is what it looks like in action:

 

image

 

So – once that is complete – we can create the login for SQL.

If you switch to the SQL instances view, or a Database Engine view – you will see a new task show up which will create a SQL login for the HealthService.

 

image

 

If you run this task, and don’t have rights to the SQL server – you will get this:

 

image

 

Have your SQL team run the task, providing a credential that is able to create a login and assign the necessary SysAdmin role to the service:

 

image

 

Voila!

 

image

 

What this actually does is create this login on the SQL server and assign it the SysAdmin role:

 

image

 

All of these activities are logged for audit in the Task Status view:

 

image

 

Now – as new SQL servers are added over time – the Service SID can automatically be configured using the recovery, and the SQL team will just need to add the HealthService login as part of their build configuration, or run this task one time for each new SQL server to enable it for monitoring.

 

I find this to be much simpler than dealing with RunAs accounts, and it appears to be a more secure solution as well.  I welcome any feedback on this approach, or for my Management Pack Addendum.

 

My SQL RunAs Addendum MP’s are available below:

 

https://gallery.technet.microsoft.com/SQL-Server-RunAs-Addendum-0c183c32

UR9 for SCOM 2012 R2 – Step by Step


image48

 

This is an updated article replacing the original – to include the deployment of the Linux MP’s which shipped later.  Since Microsoft changed blog platforms over to WordPress – it will not allow me to update the previous one.

 

NOTE:  I get this question every time we release an update rollup:  ALL SCOM Update Rollups are CUMULATIVE. This means you do not need to apply them in order – you can always just apply the latest update. If you have deployed SCOM 2012 R2 and never applied an update rollup, you can go straight to the latest one available. If you applied an older one (such as UR3), you can always go straight to the latest one!

 

 

KB Article for OpsMgr:  https://support.microsoft.com/en-us/kb/3129774

KB article for Linux updates:  https://support.microsoft.com/en-us/kb/3141435

Download catalog site:  http://catalog.update.microsoft.com/v7/site/Search.aspx?q=3129774

 

Key fixes:

  • SharePoint workflows fail with an access violation under APM
    A certain sequence of events may trigger an access violation in APM code when it tries to read data from the cache during the Application Domain unload. This fix resolves this behavior.
  • Application Pool worker process crashes under APM with heap corruption
    During the Application Domain unload two threads might try to dispose of the same memory block leading to DOUBLE FREE heap corruption. This fix makes sure that memory is disposed of only one time.
  • Some Application Pool worker processes become unresponsive if many applications are started under APM at the same time
    Microsoft Monitoring Agent APM service has a critical section around WMI queries it performs. If a WMI query takes a long time to complete, many worker processes are waiting for the active one to complete the call. Those application pools may become unresponsive, depending on the wait duration. This fix eliminates the need for the WMI query and significantly improves the performance of this code path.
  • MOMAgent cannot validate RunAs Account if only RODC is available
    If there’s a read-only domain controller (RODC), the MOMAgent cannot validate the RunAs account. This fix resolves this issue.
  • Missing event monitor does not warn within the specified time range in SCOM 2012 R2 the first time after restart
    When you create a monitor for a missed event, the first alert takes twice the amount of time specified in the monitor. This fix resolves the issue, and the alert is generated in the time specified.
  • SCOM cannot verify the User Account / Password expiration date if it is set by using Password Setting object
    Fine grained password policies are stored in a different container from the user object container in Active Directory. This fix resolves the problems in computing resultant set of policy (RSOP) from these containers for a user object.
  • SLO Detail report displays histogram incorrectly
    In some specific scenarios, the representation of the downtime graph is not displayed correctly. This fix resolves this kind of behavior.
  • APM support for IIS 10 and Windows Server 2016
    Support of IIS 10 on Windows Server 2016 is added for the APM feature in System Center 2012 R2 Operations Manager. An additional management pack Microsoft.SystemCenter.Apm.Web.IIS10.mp is required to enable this functionality. This management pack is located in %SystemDrive%\Program Files\System Center 2012 R2\Operations Manager\Server\Management Packs for Update Rollups alongside its dependencies after the installation of Update Rollup 9.
    Important Note One dependency is not included in Update Rollup 9 and should be downloaded separately:

    Microsoft.Windows.InternetInformationServices.2016.mp

  • APM Agent Modules workflow fail during workflow shutdown with Null Reference Exception
    The Dispose() method of Retry Manager of APM connection workflow is executed two times during the module shutdown. The second try to execute this Dispose() method may cause a Null Reference Exception. This fix makes sure that the Dispose() method can be safely executed one or more times.
  • AEM Data fills up SCOM Operational database and is never groomed out
    If you use SCOM’s Agentless Exception Monitoring to examine application crash data and report on it, the data never grooms out of the SCOM Operational database. The problem with this is that soon the SCOM environment will be overloaded with all the instances and relationships of the applications, error groups, and Windows-based computers, all of which are hosted by the management servers. This fix resolves this issue. Additionally, the following management packs must be imported in the following order:
    • Microsoft.SystemCenter.ClientMonitoring.Library.mp
    • Microsoft.SystemCenter.DataWarehouse.Report.Library.mp
    • Microsoft.SystemCenter.ClientMonitoring.Views.Internal.mp
    • Microsoft.SystemCenter.ClientMonitoring.Internal.mp
  • The DownTime report from the Availability report does not handle the Business Hours settings
    In the downtime report, the downtime table was not considering the business hours. This fix resolves this issue and business hours will be shown based on the specified business hour values.
    The updated RDL files are located in the following location:

    %SystemDrive%\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\Reporting

    To update the RDL file, follow these steps:

    1. Go to http://MachineName/Reports_INSTANCE1/Pages/Folder.aspx (where MachineName is your Reporting Server).
    2. On this page, go to the folder to which you want to add the RDL file. In this case, click Microsoft.SystemCenter.DataWarehouse.Report.Library.
    3. Upload the new RDL files by clicking the upload button at the top. For more information, see https://msdn.microsoft.com/en-us/library/ms157332.aspx.
  • Adding a decimal sign in an SLT Collection Rule SLO in the ENU Console on a non-ENU OS does not work
    You run the System Center 2012 R2 Operations Manager Console in English on a computer that has the language settings configured to use a non-English (United States) language that uses a comma (,) as the decimal sign instead of a period (.). When you try to create Service Level Tracking, and you want to add a Collection Rule SLO, the value you enter as the threshold cannot be configured by using a decimal sign. This fix resolves the issue.
  • SCOM Agent issue while logging Operations Management Suite (OMS) communication failure
    An issue occurs when OMS communication failures are logged. This fix resolves this issue.

 

Issues that are fixed in the UNIX and Linux management packs

 

  • Discovery of Linux computers may fail for some system locales
    Using the Discovery Wizard or Windows PowerShell cmdlets to discover Linux computers may fail during the final Agent Verification step for computers that have some system locales, such as zh_TW.UTF-8. The scxadmin command that is used to restart the agent during the discovery process did not correctly handle Unicode text in the standard output of the service command.
  • The UNIX/Linux Agent intermittently closes connections during TLS handshaking
    Symptoms include the following:
    • Failed heartbeats for UNIX or Linux computers, especially when the SSLv3 protocol is disabled on the Management Servers.
    • Schannel errors in the System log that contain text that resembles the following:

      A fatal error occurred while creating an SSL client credentials. The internal error state is 10013.

    • WS-Management errors in the event log that contain text that resembles the following:

      WSManFault
      Message = The server certificate on the destination computer (<UNIX/LINUX-COMPUTER-NAME>) has the following errors:
      Encountered an internal error in the SSL library.
      Error number: -2147012721 0x80072F8F
      A security error occurred

 

 

Let’s get started.

From reading the KB article – the order of operations is:

  1. Install the update rollup package on the following server infrastructure:
    • Management servers
    • Gateway servers
    • Web console server role computers
    • Operations console role computers
  2. Apply SQL scripts.
  3. Manually import the management packs.
  4. Update Agents

Now, NORMALLY we would need to add another step – if we are using Xplat monitoring, we need to update the Linux/Unix MP’s and agents. However, UR8 and UR9 for SCOM 2012 R2 included no Linux updates; the updated Linux MP’s shipped separately and are covered in step 5 below.

 

 

 

1.  Management Servers

image

Since there is no RMS anymore, it doesn’t matter which management server I start with. There is no need to begin with whichever server holds the RMS Emulator (RMSe) role. I simply make sure I only patch one management server at a time to allow for agent failover without overloading any single management server.

I can apply this update manually via the MSP files, or I can use Windows Update.  I have 3 management servers, so I will demonstrate both.  I will do the first management server manually.  This management server holds 3 roles, and each must be patched:  Management Server, Web Console, and Console.

The first thing I do when I download the updates from the catalog is copy the cab files for my language to a single location:

Then extract the contents:

image

Once I have the MSP files, I am ready to start applying the update to each server by role.

***Note:  You MUST log on to each server role as a Local Administrator, SCOM Admin, AND your account must also have System Administrator (SA) role to the database instances that host your OpsMgr databases.

My first server is a management server, and the web console, and has the OpsMgr console installed, so I copy those update files locally, and execute them per the KB, from an elevated command prompt:

image

This launches a quick UI which applies the update. It will bounce the SCOM services as well. The update usually does not provide any feedback about success or failure.

I got a prompt to restart:

image

I choose yes and allow the server to restart to complete the update.

 

You can check the application log for the MsiInstaller events to show completion:

Log Name:      Application
Source:        MsiInstaller
Date:          1/27/2016 9:37:28 AM
Event ID:      1036
Description:
Windows Installer installed an update. Product Name: System Center Operations Manager 2012 Server. Product Version: 7.1.10226.0. Product Language: 1033. Manufacturer: Microsoft Corporation. Update Name: System Center 2012 R2 Operations Manager UR9 Update Patch. Installation success or error status: 0.

You can also spot check a couple DLL files for the file version attribute. 
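A quick hedged PowerShell equivalent of that spot check (the path assumes a default SCOM 2012 R2 install location – adjust for your environment):

# Spot-check the file versions of the patched server binaries.
Get-ChildItem 'C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\*.dll' |
    Select-Object Name, @{Name='FileVersion'; Expression={ $_.VersionInfo.FileVersion }}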

image

Next up – run the Web Console update:

image

This runs much faster.   A quick file spot check:

image

Lastly – install the console update (make sure your console is closed):

image

A quick file spot check:

image

 

 

Additional Management Servers:

image

I now move on to my additional management servers, applying the server update, then the console update and web console update where applicable.

On this next management server, I will use the example of Windows Update as opposed to manually installing the MSP files.  I check online, and make sure that I have configured Windows Update to give me updates for additional products: 

image

The applicable updates show up under optional – so I tick the boxes and apply these updates.

After a reboot – go back and verify the update was a success by spot checking some file versions like we did above.

 

 

Updating Gateways:

image

I can use Windows Update or manual installation.

image

The update launches a UI and quickly finishes.

Then I will spot check the DLL’s:

image

I can also spot-check the \AgentManagement folder, and make sure my agent update files are dropped here correctly:

image

 

***NOTE:  You can delete any older UR update files from the \AgentManagement directories. The UR’s do not clean these up, and the older files serve no purpose once the newer ones are in place.

 

 

 

2. Apply the SQL Scripts

In the path on your management servers, where you installed/extracted the update, there are two SQL script files: 

%SystemDrive%\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\SQL Script for Update Rollups

(note – your path may vary slightly depending on whether you have an upgraded environment or a clean install)

image

First – let’s run the script to update the OperationsManager database.  Open a SQL management studio query window, connect it to your Operations Manager database, and then open the script file.  Make sure it is pointing to your OperationsManager database, then execute the script.

You should run this script with each UR, even if you ran this on a previous UR.  The script body can change so as a best practice always re-run this.

image

Click the “Execute” button in SQL mgmt. studio.  The execution could take a considerable amount of time and you might see a spike in processor utilization on your SQL database server during this operation.  I have had customers state this takes from a few minutes to as long as an hour. In MOST cases – you will need to shut down the SDK, Config, and Monitoring Agent (healthservice) on ALL your management servers in order for this to be able to run with success.

You will see the following (or similar) output:

image47

or

image

IF YOU GET AN ERROR – STOP!  Do not continue.  Try re-running the script several times until it completes without errors.  In a production environment, you almost certainly have to shut down the services (sdk, config, and healthservice) on your management servers, to break their connection to the databases, to get a successful run.
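A minimal sketch of bouncing those services from PowerShell – the server names are placeholders for your own management servers:

# Stop the Data Access (OMSDK), Config (cshost), and Monitoring Agent (HealthService)
# services on every management server before re-running the UR SQL script.
$managementServers = 'MS01','MS02','MS03'
foreach ($ms in $managementServers) {
    Get-Service -ComputerName $ms -Name OMSDK, cshost, HealthService | Stop-Service -Force
}
# Start them again once the script completes successfully.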

Technical tidbit:   Even if you previously ran this script in UR1, UR2, UR3, UR4, UR5, UR6, UR7, or UR8, you should run this again for UR9, as the script body can change with updated UR’s.

image

Next, we have a script to run against the warehouse DB.  Do not skip this step under any circumstances.    From:

%SystemDrive%\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\SQL Script for Update Rollups

(note – your path may vary slightly depending on whether you have an upgraded environment or a clean install)

Open a SQL management studio query window, connect it to your OperationsManagerDW database, and then open the script file UR_Datawarehouse.sql.  Make sure it is pointing to your OperationsManagerDW database, then execute the script.

If you see a warning about line endings, choose Yes to continue.

image

Click the “Execute” button in SQL mgmt. studio.  The execution could take a considerable amount of time and you might see a spike in processor utilization on your SQL database server during this operation.

You will see the following (or similar) output:

image

 

 

 

3. Manually import the management packs

image

There are 55 management packs in this update!   Most of these we don’t need – so read carefully.

The path for these is on your management server, after you have installed the “Server” update:

\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\Management Packs for Update Rollups

However, the majority of them are Advisor/OMS, and language specific.  Only import the ones you need, and that are correct for your language.  I will remove all the MP’s for other languages (keeping only ENU), and I am left with the following:

image

 

What NOT to import:

The Advisor MP’s are only needed if you are using Microsoft Operations Management Suite cloud service, (Previously known as Advisor, and Operation Insights).

The APM MP’s are only needed if you are using the APM feature in SCOM.

Note the APM MP with a red X.  This MP requires the IIS MP’s for Windows Server 2016 which are in Technical Preview at the time of this writing.  Only import this if you are using APM *and* you need to monitor Windows Server 2016.  If so, you will need to download and install the technical preview editions of that MP from https://www.microsoft.com/en-us/download/details.aspx?id=48256

The TFS MP bundle is only used for specific scenarios, such as DevOps scenarios where you have integrated APM with TFS, etc.  If you are not currently using these MP’s, there is no need to import or update them.  I’d skip this MP import unless you already have these MP’s present in your environment.

However, the Image and Visualization libraries deal with Dashboard updates, and these always need to be updated.

I import all of these shown without issue.

 

 

4.  Update Agents

image43_thumb

Agents should be placed into pending actions by this update for any agent that was not manually installed (remotely manageable = yes):  

 

On the management servers that I patched using Windows Update, their agents did not show up in this list. Only agents reporting to a management server I patched manually showed up. FYI – the experience is NOT the same when using Windows Update vs. manual installation. If yours don’t show up, you can try re-running the update for that management server manually.

image

 

If your agents are not placed into pending management – this is generally caused by not running the update from an elevated command prompt, or having manually installed agents which will not be placed into pending.

In this case – my agents that were reporting to a management server that was updated using Windows Update – did NOT place agents into pending.  Only the agents reporting to the management server for which I manually executed the patch worked.

I re-ran the server MSP file manually on these management servers, from an elevated command prompt, and they all showed up:

 

 image

 

You can approve these – which will result in a success message once complete:

 

 image
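If you have a lot of agents, you can also approve them in bulk from the Operations Manager Shell – a hedged sketch; filter the pending list first if you have other pending actions (such as manual install approvals) you do not want to touch:

# Approve everything currently sitting in Pending Management.
Import-Module OperationsManager
Get-SCOMPendingManagement | Approve-SCOMPendingManagement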

 

Soon you should start to see PatchList getting filled in from the Agents By Version view under Operations Manager monitoring folder in the console:

 

image

 

 

  5.  Update Unix/Linux MPs and Agents

image

The current Linux MP’s can be downloaded from:

https://www.microsoft.com/en-us/download/details.aspx?id=29696

 

7.5.1050.0 is current at this time for SCOM 2012 R2 and these shipped shortly after UR9. 

****Note – take GREAT care when downloading – that you select the correct download for SCOM 2012 R2.  You must scroll down in the list and select the MSI for 2012 R2:

 

 

image

 

Download the MSI and run it.  It will extract the MP’s to C:\Program Files (x86)\System Center Management Packs\System Center 2012 R2 Management Packs for Unix and Linux\

Update any MP’s you are already using.   These are mine for RHEL, SUSE, and the Universal Linux libraries. 

 

image

 

You will likely observe VERY high CPU utilization of your management servers and database server during and immediately following these MP imports.  Give it plenty of time to complete the process of the import and MPB deployments.

 

Next – you need to restart the “Microsoft Monitoring Agent” service on any management servers which manage Linux systems. I don’t know why – but my MP’s never drop/update in the \Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\AgentManagement\UnixAgents\DownloadedKits folder until this service is restarted.

 

Next up – upgrade the agents on your monitored Unix/Linux systems. You can now do this straight from the console:

image 

image

 

You can input credentials or use existing RunAs accounts if those have enough rights to perform this action.
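If you prefer PowerShell for this step, here is a hedged sketch using the cross-platform cmdlets (the exact parameter set can vary by UR level, so verify with Get-Help Update-SCXAgent first):

# Upgrade every managed UNIX/Linux agent, prompting for a privileged credential.
$cred = Get-Credential
Get-SCXAgent | Update-SCXAgent -WsManCredential $cred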

Finally:

 

image

 

 

6.  Update the remaining deployed consoles

image

This is an important step.  I have consoles deployed around my infrastructure – on my Orchestrator server, SCVMM server, on my personal workstation, on all the other SCOM admins on my team, on a Terminal Server we use as a tools machine, etc.  These should all get the matching update version.

 

 

 

Review:

Now at this point, we would check the OpsMgr event logs on our management servers, check for any new or strange alerts coming in, and ensure that there are no issues after the update.

image

Known issues:

See the existing list of known issues documented in the KB article.

1.  Many people are reporting that the SQL script is failing to complete when executed.  You should attempt to run this multiple times until it completes without error.  You might need to stop the Exchange correlation engine, stop all the SCOM services on the management servers, and/or bounce the SQL server services in order to get a successful completion in a busy management group.  The errors reported appear as below:

——————————————————
(1 row(s) affected)
(1 row(s) affected)
Msg 1205, Level 13, State 56, Line 1
Transaction (Process ID 152) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
Msg 3727, Level 16, State 0, Line 1
Could not drop constraint. See previous errors.
——————————————————–

You probably have a ton of old event data in your Data Warehouse


 

image

 

Prior to SCOM 2012 R2 UR7, we had an issue where we did not groom out old data from the Event Parameter and Event Rule tables in the DW.  This will show up as these tables growing quite large, especially the event parameter tables.  They will never groom out the old, orphaned data.

It isn’t a big deal, but if you’d like to free up some space in your Data Warehouse database – read on.

 

I’ll just come out and say it: ANYONE who ever ran a SCOM management group prior to SCOM 2012 R2 UR7 is affected. How much just depends on how many events you were collecting and shoving into your DW.

Once you apply UR7 or later, this issue stops, and the normal grooming will groom out the data as events get groomed.  HOWEVER – we will never go back and clean out the old, already orphaned event parameters and event rules.

 

Nicole was the first person I saw write about this issue:

https://blogs.msdn.microsoft.com/nicole_welch/2016/01/07/scom-2012-large-event-parameter-tables/

 

Essentially – to know if you are affected, there are some SQL statements you can run, but I wrote my own. These take a long time to run, but they give you an idea of how many events are in scope to be groomed.

 

SELECT count(*) FROM event.vEventParameter ep
WHERE ep.EventOriginId NOT IN (SELECT DISTINCT EventOriginId FROM event.vEvent)

SELECT count(*) FROM event.vEventRule er
WHERE er.EventOriginId NOT IN (SELECT DISTINCT EventOriginId FROM event.vEvent)

 

 

Nicole has a stored procedure listed on her site – you can run that to create the stored proc, then call it with a “max rows to groom” parameter. It works well and I recommend it.

 

Alternatively – you can just run this as a straight SQL query.  I will post that below:

I hard-coded MaxRowsToGroom to 1,000,000 rows. I found this runs pretty quickly and doesn’t use a lot of transaction log space. You can adjust this depending on how much cleanup you need to do if you prefer the query approach, or just use the stored proc and the loop command in the blog post linked above.

 

DECLARE @MaxRowsToGroom int
       ,@RowsDeleted int
SET NOCOUNT ON;
SET @MaxRowsToGroom = 1000000
DECLARE @RuleTableName sysname
       ,@DetailTableName sysname
       ,@ParamTableName sysname
       ,@DatasetId uniqueidentifier = (SELECT DatasetId FROM StandardDataset WHERE SchemaName = 'Event')
       ,@TableGuid uniqueidentifier
       ,@Statement nvarchar(max)
       ,@SchemaName sysname = 'Event'
SET @TableGuid = (SELECT TableGuid FROM StandardDatasetTableMap WHERE DatasetId = @DatasetId)
--BEGIN TRY
BEGIN TRAN

SELECT TOP 1 @RuleTableName = BaseTableName + '_' + REPLACE(CAST(@TableGuid AS varchar(50)), '-', '')
FROM StandardDatasetAggregationStorage
WHERE (DatasetId = @DatasetId)
  AND (AggregationTypeId = 0)
  AND (DependentTableInd = 1)
  AND (TableTag = 'Rule')

SET @Statement = 'DELETE TOP (' + CAST(@MaxRowsToGroom AS varchar(15)) + ')'
    + ' FROM ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@RuleTableName)
    + ' WHERE (EventOriginId NOT IN (SELECT EventOriginId FROM Event.vEvent)) '
EXECUTE (@Statement)

SELECT TOP 1 @ParamTableName = BaseTableName + '_' + REPLACE(CAST(@TableGuid AS varchar(50)), '-', '')
FROM StandardDatasetAggregationStorage
WHERE (DatasetId = @DatasetId)
  AND (AggregationTypeId = 0)
  AND (DependentTableInd = 1)
  AND (TableTag = 'Parameter')

SET @Statement = 'DELETE TOP (' + CAST(@MaxRowsToGroom AS varchar(15)) + ')'
    + ' FROM ' + QUOTENAME(@SchemaName) + '.' + QUOTENAME(@ParamTableName)
    + ' WHERE (EventOriginId NOT IN (SELECT EventOriginId FROM Event.vEvent)) '
EXECUTE (@Statement)

SET @RowsDeleted = @@ROWCOUNT
COMMIT

 

 

I do recommend you clean this up. It doesn’t hurt anything sitting there, other than potentially making event-based reports run slower, but the big impact to me is the cost of owning such a large DW – backups, restores, and general overhead – for little reason.

Make sure you update statistics when you are done – if not also a full DBReindex. To update statistics, run:  exec sp_updatestats

 

 

Here is an example of my before and after:

Before:

image

 

After:

image

 

Trimmed from 3.3 GB to 117 MB!!!!!   If this were a large production environment, this could be a substantial amount of data.

 

 

And remember – most collected events are worthless to begin with.  As a tuning exercise – I recommend disabling MOST of the out of the box event collections, and also reduce your event retention in the DW:

https://blogs.technet.microsoft.com/kevinholman/2009/11/25/tuning-tip-turning-off-some-over-collection-of-events/

https://blogs.technet.microsoft.com/kevinholman/2010/01/05/understanding-and-modifying-data-warehouse-retention-and-grooming/

Monitoring a file hash using SCOM


 

I had an interesting customer request recently – to monitor for a specific system file, and make SURE it is not a modified/threat file.

 

You can use this as a simple example of a two-state timed script monitor (using vbscript) which demonstrates script arguments, logging, alerting, propertybag outputs, etc.

 

In this case – there is a file located at %windir%\system32\sethc.exe

This is the “Sticky Keys” UI that pops up when you press the Shift key 5 times. There are several articles out there on how to create a “back door” by swapping this file with cmd.exe, opening a command prompt without logging on to the system if you have access to the console.

In this case – the customer wanted to monitor for any changes to this file. 

I started by writing the script in VBScript, so it will work on Server 2003, 2008, 2008 R2, 2012, and 2012 R2. The script calls CertUtil.exe, which will generate the hash for any file. Then the script compares this file hash to a list of “known good” hashes.

The script accepts two arguments: the file path, and a comma-separated list of known good hashes.

 

'
' File Hash monitoring script
' Kevin Holman
' 5/2016
'
Option Explicit
dim oArgs, filepath, paramHashes, oAPI, oBag, strCommand, oShell
dim strHashCmd, strHashLine, strHashOut, strHash, HashesArray, Hash, strMatch

'Accept arguments for the file path, and known good hashes in comma-delimited format
Set oArgs = wscript.arguments
filepath = oArgs(0)
paramHashes = oArgs(1)

'Load MOMScript API and PropertyBag function
Set oAPI = CreateObject("MOM.ScriptAPI")
Set oBag = oAPI.CreatePropertyBag()

'Log script event that we are starting task
Call oAPI.LogScriptEvent("filehashcheck.vbs", 3322, 0, "Starting hashfile script with filepath: " & filepath & " with known good hashes: " & paramHashes)

'Build the command to run for CertUtil
strCommand = "%windir%\system32\certutil.exe -hashfile " & filepath

'Create the WScript Shell object and execute the command
Set oShell = WScript.CreateObject("WScript.Shell")
Set strHashCmd = oShell.Exec(strCommand)

'Parse the output of CertUtil and keep only the line with the hash
Do While Not strHashCmd.StdOut.AtEndOfStream
  strHashLine = strHashCmd.StdOut.ReadLine()
  If Instr(strHashLine, "SHA") Then
    'skip
  ElseIf Instr(strHashLine, "CertUtil") Then
    'skip
  Else
    strHashOut = strHashLine
  End If
Loop

'Remove spaces from the hash
strHash = Replace(strHashOut, " ", "")

'Split the comma-separated hashlist parameter into an array
HashesArray = split(paramHashes, ",")

'Loop through the array and see if our file hash matches any known good hash
For Each Hash in HashesArray
  'wscript.echo Hash
  If strHash = Hash Then
    'wscript.echo "Match found"
    Call oAPI.LogScriptEvent("filehashcheck.vbs", 3323, 0, "Good match found. The file " & filepath & " was found to have hash " & strHash & " which was found in the supplied known good hashes: " & paramHashes)
    Call oBag.AddValue("Match", "GoodHashFound")
    Call oBag.AddValue("CurrentFileHash", strHash)
    Call oBag.AddValue("FilePath", filepath)
    Call oBag.AddValue("GoodHashList", paramHashes)
    oAPI.Return(oBag)
    wscript.quit
  Else
    'wscript.echo "Match not found"
    strMatch = "missing"
  End If
Next

'If we get to this part of the script a hash was not found. Output a bad propertybag
If strMatch = "missing" Then
  Call oAPI.LogScriptEvent("filehashcheck.vbs", 3324, 2, "This file " & filepath & " does not match any known good hashes. It was found to have hash " & strHash & " which was NOT found in the supplied known good hashes: " & paramHashes)
  Call oBag.AddValue("Match", "HashNotFound")
  Call oBag.AddValue("CurrentFileHash", strHash)
  Call oBag.AddValue("FilePath", filepath)
  Call oBag.AddValue("GoodHashList", paramHashes)
  oAPI.Return(oBag)
End If
wscript.quit

 

I then put this script into a two-state monitor targeting Windows Server OperatingSystem, so every monitored server will run it once a day, and check to see if the supplied file is correct, or if a vulnerability might exist.

 

Here is the Monitor example:

 

<UnitMonitor ID="Custom.HashFile.CompareHash.Monitor" Accessibility="Public" Enabled="true" Target="Windows!Microsoft.Windows.Server.OperatingSystem" ParentMonitorID="Health!System.Health.SecurityState" Remotable="true" Priority="Normal" TypeID="Windows!Microsoft.Windows.TimedScript.TwoStateMonitorType" ConfirmDelivery="false"> <Category>SecurityHealth</Category> <AlertSettings AlertMessage="Custom.HashFile.CompareHash.Monitor.AlertMessage"> <AlertOnState>Warning</AlertOnState> <AutoResolve>true</AutoResolve> <AlertPriority>Normal</AlertPriority> <AlertSeverity>Warning</AlertSeverity> <AlertParameters> <AlertParameter1>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$</AlertParameter1> <AlertParameter2>$Data/Context/Property[@Name='FilePath']$</AlertParameter2> <AlertParameter3>$Data/Context/Property[@Name='CurrentFileHash']$</AlertParameter3> <AlertParameter4>$Data/Context/Property[@Name='GoodHashList']$</AlertParameter4> </AlertParameters> </AlertSettings> <OperationalStates> <OperationalState ID="GoodHashFound" MonitorTypeStateID="Success" HealthState="Success" /> <OperationalState ID="HashNotFound" MonitorTypeStateID="Error" HealthState="Warning" /> </OperationalStates> <Configuration> <IntervalSeconds>86321</IntervalSeconds> <SyncTime /> <ScriptName>FileHashCheck.vbs</ScriptName> <Arguments>filepath hashlist</Arguments> <ScriptBody><![CDATA[' ' File Hash monitoring script ' Kevin Holman ' 5/2016 ' Option Explicit dim oArgs, filepath, paramHashes, oAPI, oBag, strCommand, oShell dim strHashCmd, strHashLine, strHashOut, strHash, HashesArray, Hash, strMatch 'Accept arguments for the file path, and known good hashes in comma delimited format Set oArgs=wscript.arguments filepath = oArgs(0) paramHashes = oArgs(1) 'Load MOMScript API and PropertyBag function Set oAPI = CreateObject("MOM.ScriptAPI") Set oBag = oAPI.CreatePropertyBag() 'Log script event that we are starting task Call oAPI.LogScriptEvent("filehashcheck.vbs", 3322, 0, "Starting hashfile script with filepath: " & filepath & " with known good hashes: " & paramHashes) 'build the command to run for CertUtil strCommand = "%windir%\system32\certutil.exe -hashfile " & filepath 'Create the Wscript Shell object and execute the command Set oShell = WScript.CreateObject("WScript.Shell") Set strHashCmd = oShell.Exec(strCommand) 'Parse the output of CertUtil and output only on the line with the hash Do While Not strHashCmd.StdOut.AtEndOfStream strHashLine = strHashCmd.StdOut.ReadLine() If Instr(strHashLine, "SHA") Then 'skip ElseIf Instr(strHashLine, "CertUtil") Then 'skip Else strHashOut = strHashLine End If Loop 'Remove spaces from the hash strHash = Replace(strHashOut, " ", "") 'Split the comma seperated hashlist parameter into an array HashesArray = split(paramHashes,",") 'Loop through the array and see if our file hash matches any known good hash For Each Hash in HashesArray 'wscript.echo Hash If strHash = Hash Then 'wscript.echo "Match found" Call oAPI.LogScriptEvent("filehashcheck.vbs", 3323, 0, "Good match found. The file " & filepath & " was found to have hash " & strHash & " which was found in the supplied known good hashes: " & paramHashes) Call oBag.AddValue("Match","GoodHashFound") Call oBag.AddValue("CurrentFileHash",strHash) Call oBag.AddValue("FilePath",filepath) Call oBag.AddValue("GoodHashList",paramHashes) oAPI.Return(oBag) wscript.quit Else 'wscript.echo "Match not found" strMatch = "missing" End If Next 'If we get to this part of the script a hash was not found. 
Output a bad propertybag If strMatch = "missing" Then Call oAPI.LogScriptEvent("filehashcheck.vbs", 3324, 2, "This file " & filepath & " does not match any known good hashes. It was found to have hash " & strHash & " which was NOT found in the supplied known good hashes: " & paramHashes) Call oBag.AddValue("Match","HashNotFound") Call oBag.AddValue("CurrentFileHash",strHash) Call oBag.AddValue("FilePath",filepath) Call oBag.AddValue("GoodHashList",paramHashes) oAPI.Return(oBag) End If wscript.quit]]></ScriptBody> <TimeoutSeconds>60</TimeoutSeconds> <ErrorExpression> <SimpleExpression> <ValueExpression> <XPathQuery Type="String">Property[@Name='Match']</XPathQuery> </ValueExpression> <Operator>Equal</Operator> <ValueExpression> <Value Type="String">HashNotFound</Value> </ValueExpression> </SimpleExpression> </ErrorExpression> <SuccessExpression> <SimpleExpression> <ValueExpression> <XPathQuery Type="String">Property[@Name='Match']</XPathQuery> </ValueExpression> <Operator>Equal</Operator> <ValueExpression> <Value Type="String">GoodHashFound</Value> </ValueExpression> </SimpleExpression> </SuccessExpression> </Configuration> </UnitMonitor>

 

Lastly – I create an override for the monitor – which allows you to specify the file, and the known good hash list, which appears like this:

 

image
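To build that “known good” list in the first place, you can run the same CertUtil command the script wraps (on these OS versions, CertUtil defaults to SHA1 when no algorithm is specified):

# Generate the hash for the file you want to whitelist; remove the spaces and
# paste it into the override as part of the comma-separated hash list.
certutil.exe -hashfile C:\Windows\System32\sethc.exe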

 

When a bad hash is detected – we generate an alert:

 

image

 

And Health Explorer provides good context:

 

image

 

We also do logging for the script when it starts, and the output:

 

Log Name:      Operations Manager
Source:        Health Service Script
Date:          5/26/2016 11:45:55 AM
Event ID:      3324
Task Category: None
Level:         Warning
Keywords:      Classic
User:          N/A
Computer:      WINS2008R2.opsmgr.net
Description:
filehashcheck.vbs : This file C:\Windows\system32\sethc.exe does not match any known good hashes.  It was found to have hash 0f3c4ff28f354aede202d54e9d1c5529a3bf87d8 which was NOT found in the supplied known good hashes: 167891d5ef9a442cce490e7e317bfd24a623ee12,81de6ab557b31b8c34800c3a4150be6740ef445a

 

 

The download of the complete management pack is available at:

https://gallery.technet.microsoft.com/Management-Pack-to-Monitor-153d8cfa

MP Updated: Windows Operating Systems Management Pack updated to 6.0.7310.0


 

 

The Base OS MP’s have been updated to version 6.0.7310.0.

Get them here:  https://www.microsoft.com/en-us/download/details.aspx?id=9296

 

Many will remember the last Base OS MP version 7303: Base OS MP’s have been updated – version 6.0.7303.0

If you are running 7303 version I recommend you update to this version as soon as your normal testing procedure and change control allows you to.

 

If you are running version 6.0.7297.0 or older, then I recommend you test, evaluate, and update according to your normal MP update cycle.

 

Changes in version 6.0.7310.0

  • Several bugs in the Clustered Shared Volumes MP were fixed (see below), and error handling was migrated to the common recommended pattern. Quorum monitoring was enabled by changing the monitoring logic, which is now split between Nano Server (using PowerShell) and all other operating systems.
    • Fixed bug: disk free space monitoring does not work on Quorum disks in failover clusters.
    • Fixed bug: logical disk discovery did not discover logical disk on non-clustered server with Failover Cluster Feature enabled.
    • Fixed bug: clustered shared volumes were being discovered twice – as a clustered shared volume and as a logical disk.
    • Fixed bug: mount points were being discovered twice for clustered disks mounted to a folder – as a clustered Disk and as a logical disk.
    • Fixed bug: Clustered Shared Volume was being discovered incorrectly when it had more than one partition (applied to discovery and monitoring).
  • Added support for Nano Server failover cluster disk monitoring: the monitoring logic was fixed based on improved cluster discovery using the registry and WMI, and error handling was corrected. Logical disks are now discovered correctly on non-clustered servers that have the Failover Clustering feature installed.
  • Created new overrides for the Clustered Shared Volume MP, since the old ones did not work.
  • Some cosmetic changes were made to the cluster disk monitor alert messages.

 

Changes in version 6.0.7303.0

  • The MP used to discover physical CPUs, whose performance monitor instance name property did not correlate with the Windows PerfMon object (which expects instance names in (socket, core) format). This affected related rules and monitors. With this release, the MP discovers logical processors rather than physical ones, and populates the performance monitor instance name in the proper format.
  • Scripts in Microsoft.Windows.Server.ClusterSharedVolumeMonitoring.mp and Microsoft.Windows.Server.Library.mp were migrated to PowerShell as part of Windows Server 2016 Nano support (introduced in Windows Server 2016 MP version 10.0.1.0).
  • Updated the Microsoft.Windows.Server.ClusterSharedVolumeMonitoring.ClusterSharedVolume.Monitoring.State monitor alert properties and description. The fix resolves a property replacement failure warning that was generated when the monitor alert fired.

I’ll be testing these in detail and sharing feedback here as well.


How to change the SCOM agent heartbeat interval in PowerShell


 

Perhaps you have a special group of servers that are on poorly connected network segments, but most of your servers are in datacenters.  You may want to set the default heartbeat interval higher for these specific agents, so they are less likely to create heartbeat failures.  You can do this easily in the UI, but there isn’t a simple cmdlet to do this for a group of agents.

Here is a method you can use:

$agent = get-scomagent | where {$_.DisplayName -eq 'yourspecialsnowflake.domain.com'}
$agent.HeartbeatInterval = 360
$agent.ApplyChanges()

 

In this example – you might set this differently for all the agents in your DMZ domain:

$agents = get-scomagent | where {$_.domain -eq 'DMZ'}
foreach ($agent in $agents)
{
    $agent.HeartbeatInterval = 360
    $agent.ApplyChanges()
}
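You can verify the change took effect with a quick check against the same filter:

get-scomagent | where {$_.domain -eq 'DMZ'} | select DisplayName,HeartbeatInterval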


Authoring Management Packs – the fast and easy way, using Visual Studio???


 

image

 

I know what you are thinking.  “Wait, what did he just say?”

 

image

 

Well, hear me out.

Over the years, there have been many tools that we have used to write MP’s.  The SCOM UI Authoring tab, Notepad, XML Notepad, Notepad++, SCOM 2007R2 Authoring Console, Silect MP Author, and Visual Studio.

 

They all have tradeoffs.  I’d argue the most powerful tool over the years that was “somewhat” user friendly was the SCOM 2007R2 authoring console.  Once you learned the quirks, it was pretty good when you needed to make complex MP’s.  But even it was far from perfect.  And moving into SCOM 2012, now that the schema has changed, it becomes VERY challenging to use it to update your existing MP’s.  Silect’s MPAuthor (http://www.silect.com/mp-author) has stepped up in a big way to fill some of these gaps, and they have done a fantastic job of creating wizards that spit out MP’s that are useful, and relatively easy to author.  But they don’t have a wizard for all scenarios, and they don’t offer a way to update the workflows using the same UI that created them, so you are back to XML at that point.

 

Visual Studio has a powerful plugin called VSAE (Visual Studio Authoring Extensions) https://www.microsoft.com/en-us/download/details.aspx?id=30169

The challenge with VSAE is that unless you come from a developer background using Visual Studio, or you write management packs all day for a living, you will likely find it VERY daunting to use.  Even with experience, I find adding certain module types VERY cumbersome.  I find customers rarely use Visual Studio for authoring because of this.

 

I think that can change.  I have been presenting a different method on using VSAE with my customers and it has resonated very well.  The most powerful part of VSAE to me, is the ability to use Management Pack fragments.

A Management Pack fragment is simply a bit of XML that contains all the “working parts” for a specific workflow.   Several authors have written about the power of fragments since VSAE launched, but the biggest gap I saw can be broken up into two major issues:

  • Nobody provided a good “library” of workable MP fragments
  • Nobody came up with a VERY simple method to reuse fragments quickly and easily

 

I hope to change that.  I am presenting an MP fragment library for you to download, and a simple methodology to add fragments to a MP you wish to create.  If you can do a FIND and REPLACE in notepad, you can use this.

 

What if I told you, in a few *minutes*, you could write a full Management Pack that discovered an application dynamically, created views for the app, monitored for events and performance, monitored key services, and could even run your custom scripts against the app using VBScript or PowerShell?

 

I will be starting a blog series, which will include step by step examples of these fragments, and how to use them.

 

You can download the fragment library here on TechNet gallery:  https://gallery.technet.microsoft.com/SCOM-Management-Pack-VSAE-2c506737

 

This will be a step by step series.  You really should go in order if you are just getting started, but it isn’t really required except for the first two parts.  You can complete ALL of these in less than an hour, even for a first timer.

 

Part 1: Use VSAE to create a new Management Pack Project

Part 2: Use VSAE fragments to dynamically discover an application based on the existence of a registry key or value

Part 3: Use VSAE fragments to monitor a service

Part 4: Use VSAE fragments to create an alert generating event log rule

Part 5: Use VSAE fragments to create a performance collection rule

Part 6: Use VSAE fragments to add Alert, State, and Performance views to your MP

Part 7: Use VSAE fragments to add custom Groups to your MP

Part 8: Use VSAE fragments to create a Windows Performance Monitor with Consecutive Samples

 

There are MANY more fragments available in my download than just the ones I am documenting here.  These are just basic walkthroughs of VERY common workflows, to show how easy it can be to use VSAE and create great management packs, quickly.  I will add additional examples as time goes on.  I welcome any feedback or MP fragment requests to be added to the library.

 

One of the important things to remember – is you aren’t limited by the fragments I provide.  You can (and should) make your own fragments, specializing them to your company.  For instance, one of the FIRST things you should do, is download all the fragments, and replace ##CompanyID## with your actual company ID.  This way – that is one step already eliminated in your find/replace steps.  If you ALWAYS do specific things with your classes (such as you always create classes AND groups) then you should combine these fragments into one.  It will just make it that much faster.
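If you want to pre-stamp your company ID into every fragment in one pass, a simple PowerShell loop over the extracted fragment files works – a minimal sketch, where the folder path and “Fab” are examples you would swap for your own:

#One-time pass: stamp your company ID into every extracted fragment file
$fragmentPath = 'C:\MPFragments'    #example path where you extracted the fragments
$companyID = 'Fab'                  #example company abbreviation
Get-ChildItem -Path $fragmentPath -Filter *.mpx -Recurse | ForEach-Object {
    (Get-Content $_.FullName) -replace '##CompanyID##',$companyID | Set-Content $_.FullName
}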

 

 

Let the “easy series” begin.  Smile

 

image

Part 1: Use VSAE to create a new Management Pack Project


 

This is Part 1 in a multi-part series explained here:  https://blogs.technet.microsoft.com/kevinholman/2016/06/04/authoring-management-packs-the-fast-and-easy-way-using-visual-studio/

 

 

Step 1:  Install a supported version of Visual Studio.

 

Step 2:  Install the VSAE components from https://www.microsoft.com/en-us/download/details.aspx?id=30169

 

Step 3:  Open Visual Studio.  File > New > Project

 

image

 

Step 4:  Pick the version you want to write for, and give your project a name based on a naming standard.  My naming standard will be “CompanyID.AppName” so if my company abbreviation is “Fab” (for Fabrikam) and I am writing this to monitor a custom application (I will make up a fake “DemoApp”)  I will call mine “Fab.DemoApp”

 

image

 

Step 5: 

Right click your “Fab.DemoApp” in solution explorer, and choose properties:

 

image

 

Here you can make changes to the core properties of your MP:

 

image

 

You can configure these to automatically deploy the MP as you build it – to a development management group if you want – and even seal the MP as you build.

You are almost done!  We have the basic framework of a MP now, which is mostly empty except for some default references.  Let’s build this one just for fun.

 

 

 

Step 6: Build > Build Solution

 

image

 

In your output – you will see if it built successfully or if you had a problem that needs to be fixed:

 

image

 

In your bin\Debug path, which is where the management pack is written – open the XML using XML Notepad just to view it:

 

<?xml version="1.0" encoding="utf-8"?>
<ManagementPack SchemaVersion="2.0" ContentReadable="true" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <Manifest>
    <Identity>
      <ID>Fab.DemoApp</ID>
      <Version>1.0.0.0</Version>
    </Identity>
    <Name>Fab.DemoApp</Name>
    <References>
      <Reference Alias="System">
        <ID>System.Library</ID>
        <Version>7.5.8501.0</Version>
        <PublicKeyToken>31bf3856ad364e35</PublicKeyToken>
      </Reference>
    </References>
  </Manifest>
</ManagementPack>

 

As you can see – it is pretty much empty, having only a Manifest section.  This will grow as we start to use fragments to add monitoring.

 

Step 7:  Save your MP

Use the “Save All” buttons at the top to make sure you save your changes.

image

 

This will be the foundation MP for all the Parts moving forward.

Part 2: Use VSAE fragments to dynamically discover an application based on the existence of a registry key or value


 

This is Part 2 in a series of posts described here:  https://blogs.technet.microsoft.com/kevinholman/2016/06/04/authoring-management-packs-the-fast-and-easy-way-using-visual-studio/

 

Now we will start with our first example fragment – discovering an app and creating a class for it.

 

Step 1:  Download and extract the sample MP fragments.  These are available here:  https://gallery.technet.microsoft.com/SCOM-Management-Pack-VSAE-2c506737

I will update these often as I enhance and add new ones, so check back often for new versions.

 

Step 2:  Open your newly created MP solution, and open Solution Explorer.

We want to keep things organized – so we will create folders to organize our solution as we go.  This won’t affect anything in the MP XML; it just keeps the solution organized.

Right click “Fab.DemoApp” and choose Add > New Folder

 

image

 

Name the folder “Classes”

image

 

Step 3:  Add the class fragment:  Right click “Classes” and choose Add > Existing Item.

 

image

 

Browse to where you extracted my sample fragments, and choose the Generic.Class.And.Discovery.Registry.KeyExists.Fragment.mpx.

Select this fragment which now shows up under classes in solution explorer, and you should see the XML pop up in Visual Studio.

 

Step 4:  Find and Replace!

This is the area where I tried to make using Visual Studio and VSAE MUCH easier.  I came up with a standard list of items that you will commonly need to replace in your XML, and enclosed each item with “##” to make them easy to find.  I also included notes at the top of each fragment, explaining what the fragment does and what you need to replace.

This allows you to create LOTS of monitoring in SECONDS, simply using Find and Replace.

For this example, we need to replace ##CompanyID##, ##AppName##, and ##RegistryKey##.

CompanyID is easy – for my demo’s that’s my company abbreviation, or “Fab”.

AppName in this case, is a fake application I called “DemoApp”

RegistryKey is simply going to be the path in the registry which designates that “DemoApp” is installed.

 

I start with replacing ##CompanyID## with “Fab”

Edit > Find and Replace > Quick Replace

image

 

image

 

There is a “Replace All” button at the red arrow above.

 

image

 

Now I repeat this for ##AppName##

image

 

And lastly – the ##RegistryKey##.

 

My Registry Key for this app is HKEY_LOCAL_MACHINE\SOFTWARE\DemoApp

image

However, in SCOM “HKEY_LOCAL_MACHINE” is already hard coded in the code, and if you look at the sample XML – SOFTWARE is already present:

 

              <Path>SOFTWARE\##RegistryKey##</Path>

 

So all I need to do is replace ##RegistryKey## with “DemoApp”

 

image

 

Done!

Three quick find/replace actions, and we have a working class definition with a registry discovery.  Look through the XML to familiarize yourself with all that you just created.  There is a TypeDefinitions section with your Class definition, along with a Discovery to discover all machines with the registry key.

 

 

Step 5:  Build the MP.  Then import it as a test.

Open Discovered Inventory in the SCOM console – Change Target type – and find the class you just created

image

 

After a few minutes, the agents should download this MP, run the discovery, and any agents with that registry key will show up as an instance of our new class:

 

image

 

 

 

Congrats!  You have dynamically discovered all computers with the “DemoApp” application in your company.  Start to finish, about 1 minute.  5 minutes tops if you are learning VSAE for the first time.
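If you need a test machine to show up, you can stage the registry key on an agent and then, once the discovery has run, check the results from the Operations Manager Shell – a sketch, using the class ID we just created:

#On a test agent: create the registry key the discovery looks for
New-Item -Path 'HKLM:\SOFTWARE\DemoApp' -Force | Out-Null

#On a management server: list the discovered instances of our new class
Get-SCOMClass -Name 'Fab.DemoApp.Class' | Get-SCOMClassInstance | select DisplayName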

Part 3: Use VSAE fragments to monitor a service


 

This is Part 3 in a series of posts described here:  https://blogs.technet.microsoft.com/kevinholman/2016/06/04/authoring-management-packs-the-fast-and-easy-way-using-visual-studio/

 

In our next example fragment – we will monitor a service by creating a monitor that targets our custom class.

 

Step 1:  Download and extract the sample MP fragments.  These are available here:  https://gallery.technet.microsoft.com/SCOM-Management-Pack-VSAE-2c506737

I will update these often as I enhance and add new ones, so check back often for new versions.

 

Step 2:  Open your newly created MP solution, and open Solution Explorer.  This solution was created in Part 1, and the class was created in Part 2.

 

Step 3:  Create a folder and add the fragment to it. 

Create a folder called “Monitors” in your MP:

 

image

 

Right click Monitors, and Add > Existing item.

Find the fragment named “Generic.Monitor.Service.WithAlert.Fragment.mpx” and add it.

Select Generic.Monitor.Service.WithAlert.Fragment.mpx in solution explorer to display the XML.

 

Step 4:  Find and Replace

Replace ##CompanyID## with our company ID which is “Fab

Replace ##AppName## with our App ID, which is “DemoApp

Replace ##ClassID## with the custom class we created in Part 2.  This was “Fab.DemoApp.Class” from our previous class fragment.

Replace ##ServiceName## with the short name of any service.  For this Demo, since “DemoApp” is a made up example, we will just use the spooler service.  So replace with “spooler

 

That took all of 2 minutes.  Take another few minutes to review the XML we have in this fragment.  It is a simple monitor definition that will generate an alert and change state when the spooler service isn’t running.  There are also display strings, which can be modified for the monitor display name, alert name, and alert description.

 

 

Step 5:  Build the MP.   BUILD > Build Solution.

image

 

 

Step 6:  Import or Deploy the management pack.

image

 

When enough time passes, the agent will get the new MP, and will load the new monitor.  In our discovered inventory view – we should be able to see the state change from “Unmonitored” to “Healthy” because our custom class now gets health rollup from the monitor we just created.

 

image

 

Step 7:  Test the MP.
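You can stop (and later restart) the service from an elevated PowerShell prompt:

Stop-Service -Name Spooler     #should turn the monitor unhealthy and fire the alert
Start-Service -Name Spooler    #should return the monitor to healthy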

Stop the Print Spooler service.  Verify we see a state change and the alert we expect:

image

 

image

 

 

Nice!   And easy.

image

Part 4: Use VSAE fragments to create an alert generating event log rule


 

This is Part 4 in a series of posts described here:  https://blogs.technet.microsoft.com/kevinholman/2016/06/04/authoring-management-packs-the-fast-and-easy-way-using-visual-studio/

 

In our next example fragment – we will monitor the event log for a specific event, and generate an alert if it occurs.

 

Step 1:  Download and extract the sample MP fragments.  These are available here:  https://gallery.technet.microsoft.com/SCOM-Management-Pack-VSAE-2c506737

I will update these often as I enhance and add new ones, so check back often for new versions.

 

Step 2:  Open your newly created MP solution, and open Solution Explorer.  This solution was created in Part 1, and the class was created in Part 2.

 

Step 3:  Create a folder and add the fragment to it. 

Create a folder called “Rules” in your MP:

image

 

Right click Rules, and Add > Existing item.

Find the fragment named “Generic.Rule.AlertGenerating.EventLog.Fragment.mpx” and add it.

Select Generic.Rule.AlertGenerating.EventLog.Fragment.mpx in solution explorer to display the XML.

 

Step 4:  Find and Replace

Replace ##CompanyID## with our company ID which is “Fab

Replace ##AppName## with our App ID, which is “DemoApp

Replace ##EventID## with an event.  I will use “100

Replace ##EventSource## with a valid Event Source for our event, I will use “TEST

Replace ##ClassID## with the custom class we created in Part 2.  This was “Fab.DemoApp.Class” from our previous class fragment.

Replace ##LogName## with the event log you want to monitor.  I will use “Application

 

That took all of 2 minutes.  Take another few minutes to review the XML we have in this fragment.  It is a simple rule definition that will generate an alert when the event is seen in the log.  There are also display strings, which can be modified for the rule display name, alert name, and alert description.

 

Step 5:  Build the MP.   BUILD > Build Solution.

image

 

 

Step 6:  Import or Deploy the management pack.

image

 

 

Step 7:  Test the MP.

We need to wait for the agent to get the new MP version.  You can watch for this in the agent’s OperationsManager event log.

We will see a 1200, 1201, then 1210 event sequence:

image

 

Once you get the 1210 – you can test the MP.

I will use EVENTCREATE to test this rule.  At an elevated command prompt, run:

eventcreate /T ERROR /ID 100 /L APPLICATION /SO TEST /D "This is a Test event 100"
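If you prefer PowerShell, a roughly equivalent test looks like this (New-EventLog registers the source; the -ErrorAction suppresses the error if “TEST” is already registered):

New-EventLog -LogName Application -Source TEST -ErrorAction SilentlyContinue
Write-EventLog -LogName Application -Source TEST -EventId 100 -EntryType Error -Message "This is a Test event 100"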

Verify you get the event:

image

 

Verify you got the alert:

 

image

 

All done!  Time to hit the easy button.

 

image


Part 5: Use VSAE fragments to create a performance collection rule


 

This is Part 5 in a series of posts described here:  https://blogs.technet.microsoft.com/kevinholman/2016/06/04/authoring-management-packs-the-fast-and-easy-way-using-visual-studio/

 

In our next example fragment – we will create a rule to collect Windows Performance data.

 

Step 1:  Download and extract the sample MP fragments.  These are available here:  https://gallery.technet.microsoft.com/SCOM-Management-Pack-VSAE-2c506737

I will update these often as I enhance and add new ones, so check back often for new versions.

 

Step 2:  Open your newly created MP solution, and open Solution Explorer.  This solution was created in Part 1, and the class was created in Part 2.

 

Step 3:  Create a folder and add the fragment to it.

Create a folder called “Rules” in your MP, if you don’t already have this folder.

image

 

Right click Rules, and Add > Existing item.

Find the fragment named “Generic.Rule.Performance.Collection.Perfmon.Fragment.mpx” and add it.

Select Generic.Rule.Performance.Collection.Perfmon.Fragment.mpx in solution explorer to display the XML.

 

Step 4:  Find and Replace

Replace ##CompanyID## with our company ID which is “Fab

Replace ##AppName## with our App ID, which is “DemoApp

Replace ##ClassID## with the custom class we created in Step 2.  This was “Fab.DemoApp.Class” from our previous class fragment.

Replace ##ObjectName## with a valid perfmon object.  I will use “Print Queue

Replace ##CounterName## with a valid perfmon counter.  I will use “Total Jobs Printed

Replace ##CounterNameWithoutSpaces## with the same as above, but remove any spaces.  I will use “TotalJobsPrinted

Replace ##InstanceName## with a valid perfmon instance.  I will use “_Total

 

(Note:  If your counter doesn’t have instances – you can just remove this in the XML so it looks like <InstanceName></InstanceName> )
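Before you build, it can be worth confirming the counter path is valid on a target agent – if Get-Counter can read it, the collection rule should be able to as well:

Get-Counter -Counter '\Print Queue(_Total)\Total Jobs Printed'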

 

That took all of 2 minutes.  Take another few minutes to review the XML we have in this fragment.  It is a simple rule definition that collects Windows performance counters.

 

 

 

Step 5:  Build the MP.   BUILD > Build Solution.

image

   

Uh Oh!

Error    80    Cannot resolve identifier Perf!System.Performance.OptimizedDataProvider in the context of management pack Fab.DemoApp. Unknown alias: Perf    C:\Program Files (x86)\MSBuild\Microsoft\VSAC\Microsoft.SystemCenter.OperationsManager.targets    255    6    Fab.DemoApp

 

This is simple.  We are calling on a module in our XML – but we haven’t added the MP that contains this module to our references.  Let’s do that now.

In Solution Explorer – add a reference by right clicking “References” and choose “Add Reference”

image

 

VSAE came with a bunch of common reference files – so browse to the C:\Users\<username>\Documents\Visual Studio 2013\References\ folder.  Pick the version of SCOM you want to be able to import this into, and select “System.Performance.Library.mp”.

Highlight this MP in Solution Explorer under References, and in the properties window you will see the default Alias used, which you can change if necessary.  I used the default VSAE reference aliases in all my fragments.

 

image

 

Ok, let’s try again.  Save All, then Build.

Grrrrrrrrr.

Error    80    Cannot resolve identifier MSDL!Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData in the context of management pack Fab.DemoApp. Unknown alias: MSDL    C:\Program Files (x86)\MSBuild\Microsoft\VSAC\Microsoft.SystemCenter.OperationsManager.targets    255    6    Fab.DemoApp

 

Same issue – we need a reference to the data warehouse library in order to save performance data to the warehouse.  So let’s add another reference, for Microsoft.SystemCenter.DataWarehouse.Library.mp

image

image

 

Save All, then Build.

SUCCESS!!!!!!

image

 

This was a little painful given how easy MP fragments are, because we got some errors.  However, it is a good exercise in understanding how Visual Studio tells us what is wrong, and some simple ways to go and fix it.  References only need to be added to our project once, so we won’t have to go through this again each time we add a performance rule.  The hard part is over.

Take a break and get a coffee if you need to, but I recommend importing the MP first in the next step – it takes a while after import before we can test our rule.  Smile

 

 

 

Step 6:  Import or Deploy the management pack.

image

 

 

Step 7:  Test the MP.

We need to wait for the agent to get the new MP version.  You can watch for this in the agent’s OperationsManager event log.

We will see a 1200, 1201, then 1210 event sequence:

image

 

Once you get the 1210 – you can normally test the MP.

However, for performance collection, we need to wait about 10 minutes after the agent gets a copy of the MP and makes it active, because performance data takes a little longer to show up in the console.

After 10 minutes or so – find the instance of your class in Discovered Inventory, right click it, and choose Open > Performance View

 

image

 

This will show all performance data associated with our custom class.  Our default interval was every 5 minutes, so you need to wait a considerable time before you will see lines in the chart.  To test this one, you can print some Notepad jobs to the “Microsoft XPS Document Writer” to change the counter values seen in perfmon.
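You can also queue test jobs from PowerShell (assuming the XPS Document Writer is installed):

"test print job" | Out-Printer -Name "Microsoft XPS Document Writer"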

 

image

 

 

 

image

Part 6: Use VSAE fragments to add Alert, State, and Performance views to your MP


 

This is Part 6 in a series of posts described here:  https://blogs.technet.microsoft.com/kevinholman/2016/06/04/authoring-management-packs-the-fast-and-easy-way-using-visual-studio/

 

In our next example fragment – we will create a folder and views to see our monitoring data we have generated thus far.

 

Step 1:  Download and extract the sample MP fragments.  These are available here:  https://gallery.technet.microsoft.com/SCOM-Management-Pack-VSAE-2c506737

I will update these often as I enhance and add new ones, so check back often for new versions.

 

Step 2:  Open your newly created MP solution, and open Solution Explorer.  This solution was created in Part 1, and the class was created in Part 2.

 

Step 3:  Create a folder and add the fragment to it.

Create a folder called “Views” in your MP, if you don’t already have this folder.

image

 

Right click Views, and Add > Existing item.

Find the fragment named “Generic.Folder.State.Alert.Perf.Views.Fragment.mpx” and add it.

Select Generic.Folder.State.Alert.Perf.Views.Fragment.mpx in solution explorer to display the XML.

 

Step 4:  Find and Replace

Replace ##CompanyID## with our company ID which is “Fab

Replace ##AppName## with our App ID, which is “DemoApp

Replace ##ClassID## with the custom class we created in Part 2 of the series.  This was “Fab.DemoApp.Class” from our previous class fragment.

 

That took all of 2 minutes.  Take another few minutes to review the XML we have in this fragment.  It is a simple set of view definitions for Alerts, Performance, and State, along with DisplayStrings providing the display names for each.

 

 

 

Step 5:  Build the MP.   BUILD > Build Solution.

image

 

 

 

Step 6:  Import or Deploy the management pack.

image

 

 

Step 7:  Test the MP.

Open the Monitoring pane of the console – you will have a new folder and views:

 

image

 

From here you will see the alerts scoped to our custom app class, along with any performance and health state data for all discovered instances.

 

 

 

 

image

Part 7: Use VSAE fragments to add custom Groups to your MP


 

This is Part 7 in a series of posts described here:   https://blogs.technet.microsoft.com/kevinholman/2016/06/04/authoring-management-packs-the-fast-and-easy-way-using-visual-studio/

 

In our next example fragment – we will create custom groups and add them to our MP.

Groups are a critical part of any management pack.  We will use them for overrides, to scope monitoring views, and to scope subscriptions.

 

I like to consider adding three different groups to most of my custom application MP’s, depending on how you use SCOM.

 

First – a group of all instances of my custom class.

I will use this for overrides, and subscriptions, where needed.

 

Second – a group of all Windows Computer objects that contain an instance of my custom class. 

I will use this to scope console views so I can expose more monitoring data about the computers running my app – to app owners.  It can also be used for subscriptions and overrides, since these computers host (and therefore contain) my class.

 

Third – a group of all Windows Computer objects AND their corresponding Health Service Watcher objects, that contain an instance of my custom class.

I will use this group when I need to let app owners know when their computers are down – so they can see “heartbeat” and “computer down” alerts.

I have created three fragments which add these groups independently, so you can pick and choose.  Don’t just add them all for every MP you make, because if you are a large enterprise, you might end up with too many groups (I’m talking >1000 here), which can cause SCOM to get overloaded.

 

 

Step 1:  Download and extract the sample MP fragments.  These are available here:  https://gallery.technet.microsoft.com/SCOM-Management-Pack-VSAE-2c506737

I will update these often as I enhance and add new ones, so check back often for new versions.

 

Step 2:  Open your newly created MP solution, and open Solution Explorer.  This solution was created in Part 1, and the class was created in Part 2.

 

Step 3:  Create a folder and add the fragment to it.

Create a folder called “Groups” in your MP, if you don’t already have this folder.

image

 

Right click Groups, and Add > Existing item.

Find the fragment named “Generic.Class.Group.ClassInstances.Fragment.mpx” and add it.

Select Generic.Class.Group.ClassInstances.Fragment.mpx in solution explorer to display the XML.

 

Step 4:  Find and Replace

Replace ##CompanyID## with our company ID which is “Fab

Replace ##AppName## with our App ID, which is “DemoApp

Replace ##ClassID## with the custom class we created in Part 2 of the series.  This was “Fab.DemoApp.Class” from our previous class fragment.

 

That took all of 2 minutes.  Take another few minutes to review the XML we have in this fragment.  It is a simple class definition for our group and a discovery to populate the group, along with DisplayStrings providing the display names for each.

 

 

 

Step 5:  Build the MP.   BUILD > Build Solution.

 

DOH!

Error    92    Cannot resolve identifier MSIL!Microsoft.SystemCenter.InstanceGroup in the context of management pack Fab.DemoApp. Unknown alias: MSIL    C:\Program Files (x86)\MSBuild\Microsoft\VSAC\Microsoft.SystemCenter.OperationsManager.targets    255    6    Fab.DemoApp

 

This is because the group fragment needs a reference to the Instance Group Library.

In Solution Explorer – add a reference by right clicking “References” and choose “Add Reference”

 

image

 

VSAE came with a bunch of common reference files – so browse to the C:\Users\<username>\Documents\Visual Studio 2013\References\ folder.  Pick the version of SCOM you want to be able to import this into, and select “Microsoft.SystemCenter.InstanceGroup.Library.mp”.

Highlight this MP in Solution Explorer under References, and in the properties window you will see the default Alias used, which you can change if necessary.  I used the default VSAE reference aliases in all my fragments.

 

image

 

Now Save All, then BUILD again.

Boom!

 

image

 

 

Step 6:  Import or Deploy the management pack.

image

 

 

Step 7:  Test the MP.

Open the Authoring pane of the console – and select “Groups” 

Find your new DemoApp Instance group:

image

 

Right click and View Group Members:

(Note:  This may take a few minutes in your environment for Group Population to run, and generate new config)
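You can also check membership from the Operations Manager Shell – the display name filter below is an example; use whatever name the console shows for your group:

Get-SCOMGroup -DisplayName "*DemoApp*" | Get-SCOMClassInstance | select DisplayName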

 

image

 

At this point – you can repeat these same steps for the other two group fragments:

Generic.Class.Group.WindowsComputers.Fragment.mpx

Generic.Class.Group.WindowsComputersAndHealthServiceWatchers.Fragment.mpx

 

image

image

 

 

 

 

image

Part 8: Use VSAE fragments to create a Windows Performance Monitor with Consecutive Samples


 

This is Part 8 in a series of posts described here:   https://blogs.technet.microsoft.com/kevinholman/2016/06/04/authoring-management-packs-the-fast-and-easy-way-using-visual-studio/

In our next example fragment – we will create Monitor for Windows Performance for our MP.

 

 

Step 1:  Download and extract the sample MP fragments.  These are available here:  https://gallery.technet.microsoft.com/SCOM-Management-Pack-VSAE-2c506737

I will update these often as I enhance and add new ones, so check back often for new versions.

 

Step 2:  Open your newly created MP solution, and open Solution Explorer.  This solution was created in Part 1, and the class was created in Part 2.

 

Step 3:  Create a folder and add the fragment to it.

Create a folder called “Monitors” in your MP, if you don’t already have this folder.

image

 

Right click Monitors, and Add > Existing item.

Find the fragment named “Generic.Monitor.Performance.ConsecSamples.TwoState.Fragment.mpx” and add it.

Select Generic.Monitor.Performance.ConsecSamples.TwoState.Fragment.mpx in solution explorer to display the XML.

 

Step 4:  Find and Replace

Replace ##CompanyID## with our company ID which is “Fab

Replace ##AppName## with our App ID, which is “DemoApp

Replace ##ClassID## with the custom class we created in Part 2 of the series.  This was “Fab.DemoApp.Class” from our previous class fragment.

Replace ##ObjectName## with a valid perfmon object.  I will use “Print Queue

Replace ##CounterName## with a valid perfmon counter.  I will use “Total Jobs Printed

Replace ##CounterNameWithoutSpaces## with your counter, but remove any spaces.  I will use “TotalJobsPrinted

Replace ##InstanceName## with a valid perfmon instance.  I will use “_Total

Replace ##Threshold## with a valid threshold for the monitor.  I will use “5

 

That took all of 2 minutes.  Take another few minutes to review the XML we have in this fragment.  It is a simple monitor definition: it checks every minute, and when 5 consecutive samples are over the threshold value of “5”, it will change state and generate an alert.

 

 

Step 5:  Build the MP.   BUILD > Build Solution.

image

 

 

 

Step 6:  Import or Deploy the management pack.

image

 

 

Step 7:  Test the MP.

Open the Monitoring pane of the console – and find your folder you created in Part 6.

Open the state view.

Open Health Explorer for an instance of your class.

image

 

To test this perf counter, you can print dummy jobs from Notepad to the Microsoft XPS document writer, to get the value over the threshold.

After 5 consecutive samples, based on our interval (every 60 seconds), I should see a state change after about 5 minutes.
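While you wait, you can watch the same counter the monitor is sampling – Get-Counter can poll on an interval:

#Sample the counter once a minute, six times – mirroring the monitor's consecutive samples
Get-Counter -Counter '\Print Queue(_Total)\Total Jobs Printed' -SampleInterval 60 -MaxSamples 6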

 

Boom:

image

 

image

 

 

 

 

 

image

Writing events with parameters using PowerShell


 

When we write scripts for SCOM workflows, we often log events as the output – for general logging, for debugging, or as output events that trigger other rules for alerting.  One of the common things I need when logging is the ability to write parameters to the event.  This helps in making VERY granular criteria for SCOM alert rules to match on.

 

One of the things I HATE about the MOM Script API LogScriptEvent method is that it places all the text into a single blob in the event description, all of it being Parameter 1.

Luckily – there is a fairly simple method to create parameterized events as output from your own PowerShell scripts.  I got this from Mark Manty, a fellow PFE.

 

Here is a basic script that demonstrates the capability:

 

#Script to create events with parameters

#Define the event log and your custom event source
$evtlog = "Application"
$source = "MyEventSource"

#These are just examples to pass as parameters to the event
$hostname = "computername.domain.net"
$timestamp = (get-date)

#Load the event source to the log if not already loaded.  This will fail if the event source is already assigned to a different log.
if ([System.Diagnostics.EventLog]::SourceExists($source) -eq $false) {
    [System.Diagnostics.EventLog]::CreateEventSource($source, $evtlog)
}

#Function to create the events with parameters
function CreateParamEvent ($evtID, $param1, $param2, $param3)
{
    $id = New-Object System.Diagnostics.EventInstance($evtID,1)      #INFORMATION EVENT
    #$id = New-Object System.Diagnostics.EventInstance($evtID,1,2)   #WARNING EVENT
    #$id = New-Object System.Diagnostics.EventInstance($evtID,1,1)   #ERROR EVENT
    $evtObject = New-Object System.Diagnostics.EventLog
    $evtObject.Log = $evtlog
    $evtObject.Source = $source
    $evtObject.WriteEvent($id, @($param1,$param2,$param3))
}

#Command line to call the function and pass whatever you like
CreateParamEvent 1234 "The server $hostname was logged at $timestamp" $hostname $timestamp

 

The script uses some variables to set which log you want to write to, and what your custom source is.

The rest is pretty self explanatory from the comments.

You can add additional params if needed to the function and the command line calling the function.
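To confirm the parameters landed where you expect, you can read the event back and inspect its Properties collection – each entry is one parameter:

(Get-WinEvent -FilterHashtable @{LogName='Application'; Id=1234} -MaxEvents 1).Properties | select Value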

 

Here is an event example:

 

image

 

 

But the neat stuff shows up in the XML view where you can see the parameters:

 

image
