SCUtils APC PDU monitoring

I already wrote a blog post about the free SCUtils APC UPS monitoring pack a while ago. Now SCUtils has delivered the promised APC PDU monitoring pack for System Center Operations Manager 2012 SP1/R2. The only disadvantage is that this one is not free: a license costs €716.14 excluding VAT (€852.21 incl. VAT), covers monitoring for 10 devices, and includes one year of support.

The management pack monitors two types of PDUs: PDU and PDU2, which are two different generations of APC PDUs. PDU objects are first-generation devices of the AP7000 series. PDU2 objects are second-generation AP8000 series devices; they include sensors, which are monitored as well.

Here are some more details about the monitoring pack.

Included MPs:
SCUtils.APC.PDU.FirstDiscovery.Overrides.xml (only required to speed up the discoveries => changes the frequency to 700 sec. You can remove it after all PDUs are discovered.)

The regular discoveries run every 12 hours, which is fine. Most of the monitors run every 5 min, the rules every 10 min. All are enabled by default.



The following folder/view structure gets created:

Diagram View: (PDU generation 1 => a PDU2 generation-2 device would have additional objects)

You need to install the management pack on the management server and activate the license through a task. To do that go to the administration pane in the SCOM console and find the SCUtils Settings.

Select SCUtils Products Activation, then the Activation view is shown.

On the right side you have three tasks:
Activate
Check a license
Get unique ID for offline activation

If your console machine has internet access, you can run the Activate task. Otherwise use the Get unique ID for offline activation task to request the activation through email.

In the Activate task you override the LicenseKey and the CompanyName fields:

The management pack will work after activation.

I only tested PDU devices, not PDU2.

This management pack closes the APC monitoring gap, so with both offered management packs (UPS and PDU) you can monitor your APC environment.

Attention! This MP only works if you monitor the devices through a management server, not through a gateway server!

SCOM 2012: Get Pool Member monitoring details

I recently had a problem where a custom rule was not running correctly, so I wanted to find out which of my SCOM 2012 management servers was running the All Management Servers Resource Pool instance that the rule was targeted at.

I could not find anything matching on the web, so I contacted some of my great SCOM colleagues and got feedback from Kevin Holman with the correct solution. Thanks, Kevin!

There are two built-in tasks in SCOM which can give you more details about which management server takes care of which instance (class).


I will show now where you find them and what you need to enter.

Both tasks require the ID of the resource pool which handles the instances. In my case it is the 'All Management Servers Resource Pool'.

To find that run the Operations Manager Shell and enter the command:

Get-SCOMResourcePool | FT DisplayName, Id

The output shows the resource pool names and the IDs. So copy the ID of the pool you need.

Then go to your Operations Manager Console.

Open the Management Servers State Dashboard view:

Select one of the Management Servers in the Management Server State section.


Then run the task ‘Get the Pool Member Monitoring a Top Level Instance’.


Here you need to override the PoolId and the ManagedEntityId. In my case both IDs are the same, because I want to know which pool member of the All Management Servers Resource Pool manages the pool itself. You may want to know that for another class; you can find the ManagedEntityId of a class through the PowerShell command: (Get-SCOMClass -DisplayName 'xxx').Id.
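The two override values can be collected in one Operations Manager Shell session. A minimal sketch (the display name is the one from my case; substitute the class you are interested in):

```powershell
# Sketch: gather the two IDs needed for the task overrides.
# Requires the Operations Manager Shell / OperationsManager module.
Import-Module OperationsManager

# PoolId of the resource pool handling the instances
(Get-SCOMResourcePool -DisplayName 'All Management Servers Resource Pool').Id

# ManagedEntityId of the class you want to investigate
(Get-SCOMClass -DisplayName 'All Management Servers Resource Pool').Id
```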

The output of the Task gives you the Management Server Name:


The second task takes the opposite approach: it gives you all top level instances which a pool member monitors.

So run the ‘Get Top Level Instances Monitored by a Pool Member’ task.


Here you only need to override the PoolId.


The output lists all classes monitored by the pool member you selected in the Management Server State view.

With that information you can now go on and troubleshoot the logs why things are not working correctly on that Management Server.


SCOM 2012: OperationsManager module not found after WMF update

I recently had the situation in my System Center Operations Manager 2012 SP1 environment that Windows Management Framework was updated on all of the management servers. At first sight everything looked good. At second sight I recognized that one management server had a problem. It was running a PowerShell script to set custom properties on alerts and this script did not find any alerts anymore.

During investigation I found this error:

import-module : The specified module ‘OperationsManager’ was not loaded because no valid module file was found in any module directory.


I checked the PowerShell module directory, the module was there, but I couldn’t call it.
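To narrow that down, a quick sanity check is whether PowerShell can see the module at all and whether its folder is still part of the module search path:

```powershell
# Sketch: is the module discoverable?
Get-Module -ListAvailable OperationsManager

# Is the folder containing it still listed in PSModulePath?
$env:PSModulePath -split ';'
```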

I fixed it by running a repair of the SCOM console installation.

  • Go to Control Panel
  • Select Programs: Programs and Features
  • Select System Center 2012 – Operations Manager and click Uninstall/Change
  • Select Repair the Operations Manager Installation
  • Select Operations console
  • Click Repair

Additionally, I recommend reinstalling the latest Update Rollup for the console afterwards.



SCOM 2012: SCUtils APC Monitoring

It is already a while ago when I found out that there is a free management pack from SCUtils which monitors APC UPS devices, the SCUtils APC Monitoring Management Pack.
When I wanted to test it, I realized that it was only available for SCOM 2012 R2. So I contacted the support team and asked if they could also provide an SP1 version for me.
And they really did it and were very responsive – a big plus!

So I was able to implement it in my test environment and checked it out.
Here are my findings.

The management pack is well designed. The bundle consists of two MPs:


It monitors APC UPS devices and APC EMUs (environmental monitoring units). APC PDUs are not covered yet, but support promised that this will be added in the near future.
All discoveries run on a 4-hour schedule, the rules every 5 min and the monitors between 5 and 15 min. That is fine.

It creates all necessary views, including a Diagram View:

APC Folder

UPS Diagram View

With the UPS Dashboard you get a good overview of your APC environment.

UPS Dashboard


APC Monitors

All monitors are enabled by default, but there are also overrides, which disable some EMU monitors:

APC Overrides


APC Rules

Only one rule is disabled by default.

The MP successfully detected the low battery runtime (8 min), and you can see that the Description, Path and Source are always very descriptive.

They also added some nice reports:

APC Reports

So from what I see, it has everything you need to monitor APC UPS devices. SCUtils promised to create documentation for the MP bundle soon, but there is not really a lot you need to do to implement it. The only thing is that you add the APC devices to your environment through Network Monitoring and import the MPs. That's it.
Very easy. And it is free at the moment.
I will only wait for the PDU monitoring to be added, then it will have all I want.
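If you prefer the shell over the console for the import step, a sketch (the path is a placeholder for wherever you saved the downloaded MP files):

```powershell
# Sketch: import the downloaded MP files from the Operations Manager Shell.
Get-ChildItem 'C:\MPs\SCUtils.APC.*' |
    ForEach-Object { Import-SCOMManagementPack -FullName $_.FullName }
```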

Information: I created the monitor, rule and report overviews with MP Studio.

Update: The APC PDU monitoring pack has been released. Here is my review.




PowerShell: Temperature monitoring

If you want to monitor the temperature of your server rooms, you have a lot of options. One is a temperature module which is connected directly to your network and exposes the temperature value through an XML file like: http://moduleIP/state.xml.


We have used a solution from ControlByWeb, a PoE module with one sensor.

The idea is to have a System Center Orchestrator runbook, which checks the temperature of all sensors and creates a SCOM alert when the temperature is higher than the threshold of 30°C.


Then we also wanted to have a view directly in SCOM with the current values for all sensors. I used the PowerShell Web Widget for this.


The main part for all of this is a PowerShell script.

You can even use parts of the script to collect the data in SCOM.

For that, however, you will need one rule per sensor.

Functionality description:

The script reads a text file from a share with all IP addresses and names of the temperature modules.
Example:, Frankfurt, Paris

Then it connects to each module, loads the state.xml and reads the value of the first sensor.
With that data it creates an HTML table and writes that to a HTML file in a share on a web server.
The last step is that it can load the web page in the PowerShell Web Widget.
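The core of the steps above can be sketched as follows. Note this is a minimal sketch, not the downloadable script: the share paths are placeholders, and the XML element names (`datavalues`, `sensor1`) are assumptions – check the actual state.xml of your module model.

```powershell
# Sketch: read "IP, Name" lines, query each module's state.xml,
# build an HTML table and write it to a web server share.
$threshold = 30
$rows = foreach ($line in Get-Content '\\server\share\sensors.txt') {
    $ip, $name = $line.Split(',').Trim()
    try {
        [xml]$state = (Invoke-WebRequest -Uri "http://$ip/state.xml" -UseBasicParsing).Content
        $temp  = [double]$state.datavalues.sensor1   # element names are assumptions
        $color = if ($temp -gt $threshold) { 'red' } else { 'green' }
        "<tr><td>$name</td><td style='color:$color'>$temp &deg;C</td></tr>"
    } catch {
        "<tr><td>$name</td><td>unreachable</td></tr>"
    }
}
"<html><body><table>$($rows -join '')</table></body></html>" |
    Set-Content '\\webserver\share\temperature.html'
```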

You can download the script on TechNet Gallery.




Microsoft Azure: The German Cloud

Last Friday I attended the Microsoft Azure Tour event in Frankfurt and would like to share some information about the German Cloud. You can also read this article in English.

During the event Microsoft gave an insight into the current state and the future plans of the German Cloud and provided some background information. The datacenters already deliver some functionality, but that changes from day to day. Microsoft will publish a list of all available services as soon as the German Cloud becomes generally available – Q2/2016.

Because of special German requirements, the German Cloud is designed differently from the other datacenters of the Microsoft Azure Cloud.

Here are some details:

  • Two datacenters:
    • Germany Central – Frankfurt
    • Germany Northeast – Magdeburg
  • Separated Azure Active Directory:
    • Only a minimum of (non-personal) information is shared
      • to avoid duplicate customer accounts (tenants)
      • to find customer accounts and their regions
    • Replicated only within Germany

Why is that?

German companies need to be sure that their data stays in the country to avoid problems with the Patriot Act. That is the main reason for the separation. You will therefore get a separate customer account (tenant) if you want to use the German Cloud and its resources. There will be no direct connection between the German Cloud and the public Azure Cloud. The two datacenters replicate over lines on German soil operated by a German company.


For data privacy reasons the two datacenters are operated not by Microsoft but by T-Systems, a Telekom subsidiary. T-Systems takes the role of a data trustee which supervises all Microsoft activities that could affect customer data. Microsoft employees can only enter the datacenter and carry out work there when accompanied by T-Systems staff. T-Systems makes sure that German law is followed. Customers of the German Cloud will receive an appendix to their Microsoft contract which establishes a direct contractual relationship with T-Systems – not as a subcontractor. Support for the German Cloud will be provided directly from Germany. All companies with an EFTA billing address can use the German Cloud.

The following picture shows how access is controlled.


More details:

  • Only the new Azure Portal is available; the old Azure Portal is not deployed
  • No fallback to the old portal
  • Missing services have to be compensated for in other ways
    • e.g. AAD users via PowerShell or Azure AD Connect
  • The datacenters are not part of the Microsoft backbone
    • Slower storage transfer between AzureCloud and AzureGermany
  • New service endpoints

What does that mean?

  • All services that currently redirect to the old Azure Portal will not be available.
  • To ensure a successful adoption of the German Cloud it makes sense to contact Microsoft before you book German Cloud services. That way you can make sure that the services you need are actually available; it may turn out that a later point in time, when the required services can be provided, is more suitable. Finding the right timing is important, and Microsoft helps with that.
  • You can design your environment in the public Azure Cloud and later copy all VMs to the German Cloud. Even though this is possible, note the following:
    • Copying to the German Cloud is slower than copying within the public cloud.
    • PowerShell scripts can be used (see the blog below)
    • The configuration (networks, etc.) has to be created manually; Microsoft does not yet offer a simple point-and-click solution.
  • If you want to access the service endpoints, you have to use the new ones with the .de ending.

If you want to participate in the preview of the German Cloud, you can contact Microsoft through

Here are some more tips about the German Cloud:



Microsoft Azure: The German Cloud

I attended the Microsoft Azure Tour in Frankfurt on Friday and want to share some information about the German Cloud. You can also read this article in German.

During this event Microsoft gave a preview of their German Cloud and shared some background information. The datacenters already provide some functionality, but that changes from day to day; Microsoft will publish a list of all available services at GA – Q2/2016.

The German Cloud is planned to be different from the general public Azure Cloud because of the special regulatory requirements in Germany.

Here are some details:

  • Two datacenters:
    • Germany Central – Frankfurt
    • Germany NorthEast – Magdeburg
  • Separated Azure Active Directory:
    • Only a (non-personal) minimum of information is shared
      • to avoid duplicate tenants
      • to find tenants and their regions
    • Only replicated inside Germany

Why is that?

German companies need to be sure that their data stays in the country – to avoid Patriot Act problems. That is the reason for this separation. So if you want to use the German Cloud, you will get a separate tenant to access its resources. There is no direct access to or from the public Azure Cloud. The two datacenters replicate over landlines which are also operated by a German company. The following pictures are only in German, sorry.


For data privacy reasons the datacenters are also not operated by Microsoft; T-Systems takes that part. This is a German company – part of Telekom – and it takes the role of a data trustee who monitors all activities of Microsoft employees related to customer data in the German Cloud. Microsoft employees will not be able to work in the datacenter in person without T-Systems supervision. T-Systems will make sure that German law is followed. Customers of the German Cloud will get an appendix to their Microsoft contract which establishes a direct contract with T-Systems. Support for the German Cloud will also come directly out of Germany. All companies with an EU/EFTA billing address can access the German Cloud.

The following picture shows how this data trustee control will be handled.


More Details:

  • Only the new Azure Portal is available, the old Azure Portal is not deployed
  • No fallback to old portal
  • Missing services have to be compensated for in other ways
    • e.g. AAD users via PowerShell or Azure AD Connect
  • DataCenters are not part of the MS Backbone
    • Slower storage transport between AzureCloud and AzureGermany
  • New Service endpoints

What does that mean?

  • All services which are currently redirecting to the old Azure Portal will not be available.
  • Contact Microsoft before you commit to the German Cloud and check which services you need and when they will be available; this improves your onboarding experience. The timing is crucial here.
  • You can design your environment in the Public Cloud now and copy the VMs over to the German Cloud later. This is possible, but:
    • Transport to German Cloud is slower than in the Public Cloud.
    • PowerShell scripts can be used (see blog below)
    • Configuration (networks, etc.) needs to be done manually; Microsoft does not provide a point-and-click solution.
  • If you want to access the service endpoints of the German Cloud then you need to use the new ones with the .de ending.
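Because the German Cloud is a separate environment with its own endpoints, you also sign in against it explicitly. A sketch, assuming an AzureRM PowerShell module version that already ships the AzureGermanCloud environment definition:

```powershell
# Sketch: list the known environments, then sign in against the German cloud.
Get-AzureRmEnvironment | Select-Object Name

Login-AzureRmAccount -EnvironmentName AzureGermanCloud
```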

If you want to participate in the preview then contact Microsoft through

You can also follow this blog about tips for the German Cloud:  (only in German!)



SCOM 2012: Remove-SCOMManagementPack

Happy New Year!

I recently had a problem in my SCOM test environment with a management pack which should already have been removed but really wasn't.

The situation was this:
I tested the Solarwinds Orion Management Pack, which also required a connector to be installed on the management server – which makes my skin crawl. After testing this management pack I decided to remove it again. I didn't like that I had to manually add new devices in the connector wizard, and it also created alerts all with the same name, so I could only see the real problem in the description – more shivers down my back. I uninstalled the connector through Add/Remove Programs and deleted the management pack in the console. The problem was that the Solarwinds.Orion.SCOM.Library still stayed in the database. It was not deleted! Strange – I had not seen this before.
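One check that can save you in this situation is listing which other management packs still reference the one you want to remove. A sketch from the Operations Manager Shell (the MP name is the one from my case; verify the property names against your SDK version):

```powershell
# Sketch: list MPs that still reference the pack to be deleted.
Get-SCOMManagementPack |
    Where-Object { $_.References.Values.Name -contains 'Solarwinds.Orion.SCOM.Library' } |
    Select-Object Name, Version
```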

So I tried PowerShell. I opened the Operations Manager Shell and ran the Remove-SCOMManagementPack cmdlet as shown in this picture:


But it timed out after 30 minutes.


In the OpsManager event log on the management server I had these events (4508):


This showed me that the MP really was still alive, but the assemblies of the connector were missing. Sure – because I had uninstalled it.

I tried to remove the MP from the console again. It ran and ran and ran, and meanwhile I saw my management servers slowly greying out.

I checked the event logs and found this event (20034):


My console was frozen and I could only kill it.

I contacted my SQL admin colleague and she found blocking sessions on the database:


After an hour or so, I asked her to kill these tasks. I had no clue how to get this crazy MP out of my environment. I was glad that it was only my test environment – one more reason why I highly recommend having one!

Then I remembered that I met Vlad last time at MMS. So I contacted him and he was so kind to forward my request to his colleagues. They helped me out. Here is what I had to do:

I opened up SQL Server Management Studio on my SCOM DB Server

  1. I performed a backup of my OperationsManager database
  2. Then I ran the following query to get the ManagementPackId:
    SELECT ManagementPackId, MPName
    FROM [OperationsManager].[dbo].[ManagementPack]
    WHERE MPName LIKE 'Solarwinds%'
  3. Then I ran this query, with the ManagementPackId from step 2:
    EXEC [dbo].[p_ManagementPackRemove] '<ManagementPackId>'

This still took 45 min to finish but it worked!

Be careful with this solution: it is not officially supported by Microsoft, and don't forget to perform the backup before running the queries. I would also recommend doing this outside of normal business hours if you need to run it in a production environment, because it influences the management server performance.





Orchestrator 2012: Start server patching from Service Manager

In my MMS 2015 session “Real world Automation with Service Manager and Azure Automation” with Steve Buchanan I showed how you can patch Servers initialized from a Service Manager Change Request.

The idea behind that is that there are systems which cannot be patched (and rebooted) during normal patch windows because the application owners need to control the outage times themselves. Only they know when production can handle a server outage. With Service Manager they can follow ITIL standards and create a Change Request, select an SCCM collection with its servers and the software updates to be applied. The Change Request then calls an Orchestrator runbook which installs the patches on all servers in the given collection.


  • The software updates need to be pre-deployed to all affected servers through SCCM (deployment type: Available).
  • System Center Orchestrator 2012 R2, System Center Service Manager 2012 R2, System Center Configuration Manager 2012 R2
  • Log Database on SQL to store process Information
  • Sync SCCM Collections with SCSM

Temp DB Setup:






Service Manager:

Select Template: (Patch Server)

Enter Title:

Select Config Items to Change – SCCM Collection (Collection Info):

Select Related Items – Configuration Items: Computers, Services and People (Software Update):

Runbook Automation Activity:


The following screenshots show the runbooks which are used for this solution.

The main runbook:

Install Software Updates (called from SCSM)

Sub runbooks:

Get CR Details (writes all necessary CR information to the DB)

MMS - Get CR Details

Get Software Updates (writes software update information to the DB)

Get Collection IDs (writes SCCM collection information to the DB)

Split Patching by Server (gets all Servers within the Collection)
MMS - Split By Server

Split by Patch (reads all updates from the DB)

SCCM - Split By Patch

Check Updates (checks if the Patch is available on the machine)
MMS - Check Updates

Install Update (installs the update on the machine)
SCCM - Install Updates

Update CR (updates the Change Request)
MMS - Update CR
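The "Check Updates" / "Install Update" steps can be sketched as what they would run on a target server: find pre-deployed ("Available") updates that are not yet installed and trigger the install through the ConfigMgr client SDK. This is a sketch of the technique, not the runbook's literal activities:

```powershell
# Sketch: query the ConfigMgr client SDK for updates that are deployed
# but not yet compliant (ComplianceState = 0) and install them.
$missing = Get-WmiObject -Namespace 'root\ccm\ClientSDK' `
    -Class CCM_SoftwareUpdate -Filter 'ComplianceState = 0'

if ($missing) {
    # InstallUpdates expects an array of CCM_SoftwareUpdate instances
    $updates = @($missing | ForEach-Object { [WMI]$_.__PATH })
    Invoke-WmiMethod -Namespace 'root\ccm\ClientSDK' `
        -Class CCM_SoftwareUpdatesManager -Name InstallUpdates `
        -ArgumentList (,$updates)
}
```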

Improvement ideas:

  • Use Service Request instead of CR
  • Import SCCM Software Update Groups into SCSM and select them

This YouTube video shows you the process in action.

The complete solution can be downloaded here.

SCOM 2012: Detect Event Storm

System Center Operations Manager collects a lot of events, but one system with a flapping service can cause SCOM to be flooded with events – an event storm. Operations Manager does not recognize this until the database is too full, which causes performance issues or even greyed-out management servers because they cannot process the data anymore.

It is important to avoid that situation. There is one easy solution: a monitor based on a PowerShell script which checks the number of events written to the database on a predefined schedule. If the number of events is higher than a given threshold, an alert is created which shows the top 5 machines creating events. This makes it easy to find the cause of the problem.
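The check the script performs can be sketched as a query against the OperationsManager operational database. The server instance is a placeholder, and the view/column names should be verified against your SCOM version:

```powershell
# Sketch: count events added in the last 10 minutes and return the
# top 5 logging computers.
$query = @"
SELECT TOP 5 LoggingComputer, COUNT(*) AS TotalEvents
FROM EventAllView WITH (NOLOCK)
WHERE TimeAdded > DATEADD(MINUTE, -10, GETUTCDATE())
GROUP BY LoggingComputer
ORDER BY TotalEvents DESC
"@
Invoke-Sqlcmd -ServerInstance 'SQLSERVER\INSTANCE' `
    -Database 'OperationsManager' -Query $query
```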

I have mentioned this situation in my presentation “Getting The Most From Operation Manager” at MMS 2015.

You can download the solution here. It also includes the rule to check greyed out agents.

A big thank you to Thomas Peter from Vaserv EU, who helped with this solution.

