OMS Management Packs

If you ever have to author a SCOM management pack that references Microsoft OMS resources, it helps to have the original management packs at hand.

I was recently searching for Microsoft.IntelligencePacks.Performance.mp and Microsoft.IntelligencePacks.Types.mp, but could only find the XML here. Sealing this management pack myself did not help, as I needed the original MP (with Microsoft's signature) for my management pack.
But SystemCenterCore.com provided the information that the MP is part of Microsoft.IntelligencePack.Core.mpb, and the last hint came from an MVP colleague (thanks, Stefan!).

You can find all OMS (formerly Advisor) management pack bundles on a SCOM Management Server that is already connected to OMS. They all get downloaded to C:\Windows\Temp.


Don’t get confused by the names; you can use them as they are. For my reference, I opened Microsoft.IntelligencePack.Core_635779185101086268.mpb in Visual Studio and all included management packs appeared, including Microsoft.IntelligencePacks.Types.mp 🙂.
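If you just need a quick inventory of the downloaded bundles, here is a small PowerShell sketch (the path is the one mentioned above; the assumption that all bundles use the .mpb extension is mine):

```powershell
# List the OMS management pack bundles downloaded to the management server
Get-ChildItem -Path 'C:\Windows\Temp' -Filter '*.mpb' |
    Sort-Object LastWriteTime -Descending |
    Select-Object Name, LastWriteTime
```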


SCSM 2012: Asset Management Part 2 – Classes and Relationships

This is the second part of the blog series about my Asset Management solution for SCSM 2012 R2.

Part 1: General overview
Part 3: Authoring – Folders and Views
Part 4: Authoring – Forms
Part 5: Reports
Part 6: Runbook/Automation details

This part of the series covers the classes and relationships created with Visual Studio 2015 Community Edition (incl. VSAE).

Classes
The solution has two base classes:

  • AssetManagementBaseClass (DisplayName: Asset)
    The AssetID is filled automatically and auto-increments.
    Some fields reference lists (EnumTypes), so the user can select entries from predefined values.
  • OrderBaseClass (DisplayName: Order)

Four sub-classes (based on AssetManagementBaseClass):

  • ComputerAsset (DisplayName: Computer Asset)
  • Peripheral (DisplayName: Peripheral)
  • ServerInfrastructureAsset (DisplayName: Server Infrastructure Asset)
  • NetworkInfrastructureAsset (DisplayName: Network Infrastructure Asset)

ServerInfrastructureAsset and NetworkInfrastructureAsset are similar to Peripheral, which is why I did not add a picture. Because of the inheritance, the sub-classes get all properties from AssetManagementBaseClass.
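For illustration, a base class like this can be sketched in the VSAE management pack XML roughly as follows (the IDs, the System alias, the base class reference and the property list are assumptions for the sketch, not the exact definitions from the solution):

```xml
<ClassType ID="AssetManagementBaseClass" Accessibility="Public" Abstract="false"
           Base="System!System.ConfigItem" Hosted="false" Singleton="false">
  <!-- AssetID is a key property that is filled automatically -->
  <Property ID="AssetID" Type="int" Key="true" AutoIncrement="true" />
  <!-- EnumType-backed property so users pick from a predefined list -->
  <Property ID="AssetStatus" Type="enum" EnumType="AssetStatusEnum" />
</ClassType>
```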

There are also eight groups so that permissions can be granted on these objects. They all look like this:
There are separate groups for Assets, Computer Assets, Peripherals, NetworkInfrastructureAssets, ServerInfrastructureAssets, Orders, Windows Computers and AD Users.

Each group needs to be discovered, so the solution also contains eight discoveries like this one:

EnumTypes
Each property that should use a list needs an EnumType defined. I also created three additional entries for the AssetStatus list.
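Such an EnumType with its list entries can be sketched like this in the management pack XML (the names and ordinals are illustrative assumptions):

```xml
<EnumerationTypes>
  <!-- the list itself -->
  <EnumerationValue ID="AssetStatusEnum" Accessibility="Public" />
  <!-- predefined entries the user can select -->
  <EnumerationValue ID="AssetStatusEnum.InStock" Parent="AssetStatusEnum"
                    Accessibility="Public" Ordinal="1" />
  <EnumerationValue ID="AssetStatusEnum.Deployed" Parent="AssetStatusEnum"
                    Accessibility="Public" Ordinal="2" />
</EnumerationTypes>
```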

Categories
Categories need to be defined so that the lists can be seen and edited in the console.

The category Category.ManagementPackName is required to define that this is a SCSM management pack.

Here is an example of how the other categories look:
The categories with Value="System!VisibleToUser" enable the user to view the list in the list view.
The categories with Value="Authoring!Microsoft.EnterpriseManagement.ServiceManager.UI.Authoring.EnumerationViewTasks" let the edit task appear in the list view.
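Put together, the two category kinds described above can be sketched like this (the IDs, the Target value and the System/Authoring alias names are assumptions for the sketch):

```xml
<Categories>
  <!-- lets the user see the list in the list view -->
  <Category ID="AssetStatusEnum.Visible" Target="AssetStatusEnum"
            Value="System!VisibleToUser" />
  <!-- lets the edit task appear in the list view -->
  <Category ID="AssetStatusEnum.EditTasks" Target="AssetStatusEnum"
            Value="Authoring!Microsoft.EnterpriseManagement.ServiceManager.UI.Authoring.EnumerationViewTasks" />
</Categories>
```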

Relationships
Relationships are required to be able to reference the different class objects with each other. All relationships are of the type System.Reference.

  • Asset To Order relationship
  • Asset To Custodian relationship
    With that, an AD user can be linked to the asset as custodian.
  • Asset To Computer Asset relationship
    The same type of relationship is also created for Asset To Peripheral, Asset To NetworkInfrastructureAsset and Asset To ServerInfrastructureAsset.
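A System.Reference-based relationship like the Asset To Computer Asset one can be sketched like this (the IDs and the System alias are illustrative assumptions, not the exact definitions from the solution):

```xml
<RelationshipType ID="AssetToComputerAssetRelationship" Accessibility="Public"
                  Abstract="false" Base="System!System.Reference">
  <Source ID="AssetToComputerAssetSource" Type="AssetManagementBaseClass" />
  <Target ID="AssetToComputerAssetTarget" Type="ComputerAsset" />
</RelationshipType>
```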

Remember these relationships, as you need to create them when you want to view fields from different related classes in the same view/form. This means that when a ComputerAsset object gets created, you also need to create a relationship object for the AssetToComputerAsset relationship. Otherwise you cannot show field values from both classes in the same view/form.

You can also create new classes, e.g. for mobile devices or software assets. Use the classes here as an example and create all required objects (EnumTypes, Groups, Discoveries, Relationships, Categories).

Here are some additional links:
https://technet.microsoft.com/en-us/library/hh519583(v=sc.12).aspx


AzureRM: Move Virtual Machines to a new subscription

I have a subscription which ends soon, which forces me to migrate my resources to a new subscription. I used the DevTestLab service to create my System Center test environment, which is a nice feature but also has some limitations, as some resources are locked. I tried to migrate the resources with the Azure PowerShell commands and directly in the portal, but there were always parts which could not be migrated, so the validation failed. The other option I found is to copy the blob storage directly. Here are two blog posts about it:
http://gauravmantri.com/2012/07/04/how-to-move-windows-azure-virtual-machines-from-one-subscription-to-another/
https://blogs.msdn.microsoft.com/laurelle/2015/10/01/how-to-move-azure-vm-between-subscriptions/

Both reference the old portal, which cannot be used for virtual machines that are part of a DevTestLab, as they are AzureRM resources and can only be managed in the new Azure portal.

Both solutions need a tool to browse and copy the blob storage. I used the free CloudBerry Explorer for Azure Blob Storage; it reminded me a bit of the old File Commander 😉.

So to move my virtual machines, I had to find out which storage account was used by my old DevTestLab. Open your DevTestLab and click on the resource group name in the overview.


You will see all resources which belong to this resource group:


I have two storage accounts; by selecting them one by one, I found out which one holds the virtual machines.


Click on Blobs and then on Vhds.

The entries with the Page blob type are the VHDs; the entries with the Block blob type are the VM status files.

Now I know the storage account. Then I created a new DevTestLab in my new subscription and checked which resources are there. There were two storage accounts, both with blob storage assigned but without entries. I do not have details on which one you should select; I simply selected the first one in the list. Now I have the names of both storage accounts.

To reference these accounts for the copy job, I need one more thing: the access key. To find it, open the Storage accounts service in your Azure portal.
Select your storage account, click on Access keys and copy the first key.

You will need to do that for both storage accounts when you create your connection in CloudBerry Explorer.

So open up CloudBerry Explorer and create the connection to both of your storage accounts (old subscription, new subscription).

Enter a display name, the name of the storage account and the shared key (which you copied before).


When you have entered both storage accounts, you can browse them. On the old storage account, browse to Root\Vhds; on the new one, to Root\Uploads.

Then select the VHD files one by one and click Copy. A queued copy job will start.

The job will close as soon as it is finished.
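If you prefer scripting over CloudBerry, the same copy can be sketched with the classic Azure Storage PowerShell cmdlets (the account names and keys are placeholders for your environment; paste the access keys you copied from the portal):

```powershell
# Contexts for the old and the new storage account
$srcCtx = New-AzureStorageContext -StorageAccountName 'oldlabstorage' -StorageAccountKey '<old-access-key>'
$dstCtx = New-AzureStorageContext -StorageAccountName 'newlabstorage' -StorageAccountKey '<new-access-key>'

# Copy every page blob (the VHDs) from the old 'vhds' container to the new 'uploads' container
Get-AzureStorageBlob -Container 'vhds' -Context $srcCtx |
    Where-Object { $_.BlobType -eq 'PageBlob' } |
    ForEach-Object {
        Start-AzureStorageBlobCopy -SrcContainer 'vhds' -SrcBlob $_.Name -Context $srcCtx `
            -DestContainer 'uploads' -DestBlob $_.Name -DestContext $dstCtx
    }
```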

Now you can go back to the Azure Portal and to your new DevTestLab. The next step is to create Custom Images and the last step is to create the VMs with it.

  1. Create Custom Images
    In your DevTestLab service, select your lab, then select Configuration, Custom Images.
    Click Add.
    Enter a name for the image, select the correct VHD and select the OS configuration.
    Click OK and the custom image gets created. Wait until this has finished successfully.
  2. Create new VMs
    Now go back to My Virtual Machines in your new DevTestLab.
    Click Add, and you will see your custom images at the top of the base selection.

You can go on with your VM creation as you are used to from your first DevTestLab. If you create a domain controller, as I still have one (yes, I should move to Azure AD; I will do that later ;-)), then remember to give it a static IP and enter that IP in the DNS server configuration of your Azure network, not in the VM!

When you have verified that the VMs are running in your new DevTestLab, then you can delete your old one and you are done.

Of course, you can also use this process to move any other VM that is not part of a DevTestLab.

That is it! Have fun with it🙂.

Azure: Move OMS Resource Group to new Subscription

Sometimes you have to move Azure resources to a new subscription. One reason could be that your current subscription will end and you have a new one, which you want to use. Another reason could be that you need to migrate it to a subscription of someone else (demerger, etc.).

Anyhow, if you do not want to lose the connection to your SCOM environment or your directly connected agents, how can you get this done? One way is always PowerShell, but I want to show you here how you can do it within the Azure portal.

So open up http://portal.azure.com and log in.

Then go to Log Analytics and check which resource group you need to move.


Now go to Resource Groups and select your resource group (here OMS).

In the details view of the resource group, you will find the Move button at the top.


You can now select all resources which belong to this resource group.


Select the new subscription and create a new resource group or select an existing one in the new subscription.

The problem we get now is that the validation fails. Why? When you have solutions enabled in your OMS environment, which you normally have, these cannot be moved. You need to remove them first, and then the move works. The error gives the details:

Error: Resource move is not supported for resources that have plan. Resources are ‘Microsoft.OperationsManagement/solutions/ADAssessment(Lab),Microsoft.OperationsManagement/solutions/ADReplication(Lab),Microsoft.OperationsManagement/solutions/AgentHealthAssessment and tracking id is ‘xxx’. (Code: ResourceMoveFailed)

So open up OMS (http://oms.microsoft.com/oms) and remove your Solutions.


Click Remove.

When you check in the Azure portal, you will see that the resource is deleted after the solution is removed. Do that for all solutions in your OMS resource.

At the end it should look like this, when you click Move again:


Do not forget to select the check box at the bottom.

Then click OK.

The old resource group will be empty when the move task is finished. So you can delete it afterwards.

You can then enable the OMS solutions again, now that all your OMS resources are in the new resource group.
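For reference, the same move can be sketched in AzureRM PowerShell once the solutions are removed (the subscription ID and group names are placeholders; cmdlet parameters vary slightly between AzureRM module versions):

```powershell
# Collect everything in the old resource group and move it to the new subscription
$resources = Get-AzureRmResource -ResourceGroupName 'OMS'
Move-AzureRmResource -ResourceId $resources.ResourceId `
    -DestinationSubscriptionId '<new-subscription-id>' `
    -DestinationResourceGroupName 'OMS'
```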

The need to remove the solutions from OMS can be a problem in production environments, though not in test environments, so be cautious there.

Microsoft MVP

I am pleased to announce that I received the Microsoft MVP 2016 award for Cloud and Datacenter Management. Now I am part of the MVP family ☺️. The welcome kit arrived today. If you want to know more about the award program, you can follow the MVPAward blog. There you can find more details about the award categories, and you can also search for MVPs.

If you want to become an MVP, you need to be active in the community (tweet, write a blog, speak at community meetings/conferences, share code, etc.). When you have already done that for a while, contact an MVP from your area and ask him/her if he/she would nominate you. You could even ask for the MVP lead contact; they can also help you on your journey to becoming an MVP.

You will receive an email from Microsoft when you get nominated. Then you need to submit all your contributions to the community from the past year. And then you can only wait. There are special dates, one each quarter (e.g. October 1st), when the award notifications are sent out. This is the time when all MVPs who want to be renewed wait nervously in front of their email programs 😉.

Thanks to everyone, who helped me on my journey and greetings to my fellow MVPs. 

SCSM 2012: Asset Management

This blog will be the first part of a series about my Asset Management solution for SCSM 2012 R2. It will give a general overview about the solution. The following parts will go into the details.

Part 2: Authoring – Classes and Relationships
Part 3: Authoring – Folders and Views
Part 4: Authoring – Forms
Part 5: Reports
Part 6: Runbook/Automation details

At the end of last year, a colleague asked me if we can manage hardware assets with System Center Service Manager 2012 R2. Besides the commercial solutions from, for example, Cireson or Provance, I found a solution from Steve Beaumont for SCSM 2010. I contacted Steve, and after discussing the options I started to create a new solution for SCSM 2012 R2 based on Steve's template. The new solution is designed to meet the requirements of the requesting company, but can also be adapted to other companies' needs.

The solution covers the following requirements:

  • Sync with Active Directory (User), SCCM (PCs)
  • Assets should stay in the DB
  • Custodian should be linked to Asset
  • Read PC warranty information from Dell
  • Fill location information automatically
  • Additional categories for Computer Assets: Kiosk, Lab, Test
  • Create Assets automatically based on existing Windows Computer objects in SCSM (sync with SCCM)
  • Email notifications for new/deleted/updated objects

Prerequisites:

  • The Windows Computer names need to follow this structure: <3-letter site code><serial number>.
  • Enable the AD and SCCM connectors in SCSM.
  • Install the Service Manager console and SMLets on the Orchestrator runbook server.

How is this implemented?

To make the asset stay in the DB, the solution creates an Asset class based on the ComputerHardware class. When a new asset gets created, it needs to be assigned to the Windows Computer object; the Orchestrator automation runbooks handle that. So it can happen that multiple Windows Computer objects created in SCSM get linked to one Asset object. With that, you can see the history of the asset deployment.

The custodian also needs to be linked to the asset, because the asset is the hardware that should have a defined owner, which does not change when the system gets reimaged. The automation takes the primary user of the deployed Windows Computer as the first custodian, but this needs to be reviewed manually, because this might not be the real owner of the hardware (shared machines, etc.).

Classes and Relationships:


An Asset Management workspace is created, with views for the new classes and some additional useful views.

This form shows which information can be entered for the ComputerAsset. The other asset forms look nearly the same (only the Category field is missing).


The custodian can be selected in the AssetCustodian field. All other drop-down fields reference predefined lists. The Windows Computer can be assigned in the Related Items tab; the order can also be referenced there.

The order form is very short and simple.

Some runbooks are created in Orchestrator to automate the creation of the ComputerAsset objects in SCSM and keep them updated. Here is a short overview:

All three main runbooks (Create Asset, Update Asset, Check deleted Windows Computer) check the Windows Computer objects first, then the Computer (Deployed) objects (synced from SCCM), and finally check whether an Asset object exists.

In addition, sub-workflows are called to check the location, get the warranty, etc.

General considerations

  • Not everything can be automated, so every created/updated/deleted asset needs to be reviewed. Emails are sent out to inform the responsible teams.
  • The solution is an example of how hardware asset management can be done with SCSM. It can be extended with additional class types, other properties, different automation, etc.

The solution can be downloaded from GitHub, and you can watch a video on YouTube which shows the solution in the SCSM console and the Orchestrator runbooks. I already presented this solution at a German conference, so if you prefer a German video, you can find it here.

In the other parts of my series I will go into more details how the solution is designed. So stay tuned😉.


SCOM 2012: Monitor MSAs with the HP Storageworks MP 4.2.1

If you have ever implemented the HP Storageworks Management Pack for System Center Operations Manager 2012, you will find that it needs some improvements. Besides the missing monitor alerts and the view properties, there is one other topic: it would require better documentation of the prerequisites on the monitored systems.

I recently implemented the HP Storageworks MSA Management Pack, which is one part of this Storageworks solution, and followed Chiyo’s blog for the initial configuration. But I had problems connecting some MSAs to the management server (error: “Unable to connect to the remote system” in the HP Storage Management Pack User Configuration Tool), and there was nothing in the management pack documentation about how to fix it.

Here is what I found was missing:

  • The firmware had to be updated (firmware TS240* or later is required for SMI-S).
  • Unencrypted SMI-S had to be enabled (a new setting in the newer firmware).
  • The monitoring user (which you enter in the HP Storage Management Pack User Configuration Tool) needs SMI-S access permissions.

With that the error went away and SCOM started to monitor.

I hope this helps you too.

 

SCOM 2012: Updated SCUtils APC PDU MP V1.1

I already wrote about the SCUtils APC PDU Management Pack for SCOM 2012 in this post. That was about the version 1.0 which had problems with monitoring the PDUs from a Gateway Server.

SCUtils has now released the new Version 1.1, which enables the monitoring from Gateway Servers. I have tested it and it works. So download the latest version and implement it.

Here is what you need to do to get this working (directly taken out of the SCUtils APC PDU MP documentation):

1. Install the Operations Manager console on each gateway server that is a member of a resource pool for monitoring APC PDUs.

2. If the gateway server and management server are separated by a firewall, you have to adjust the firewall to open the following ports in both directions between the gateway server and its management server:

  • TCP 5723
  • TCP 5724

3. Prepare an account that is a member of Operations Manager Read-Only Operators group.

4. Copy SetAccountToRegistry.exe from the installation folder (the default path is ‘C:\Program Files\SCUtils\ SCUtils Management Pack for APC PDU’) to each gateway server that is a member of a resource pool for monitoring APC PDUs.

5. Run SetAccountToRegistry.exe using a local administrator account. Fill in all the required fields and click the Test button.


6. If the connectivity test succeeded, click the Save button (the password will be encrypted before saving). Otherwise, re-enter the information and test again.

7. Repeat the procedure on each gateway server that is a member of a resource pool for monitoring APC PDUs.
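For step 2, opening the two ports can be sketched with the built-in Windows firewall cmdlets (the rule names are arbitrary; create matching rules on both sides, in both directions, as the documentation requires):

```powershell
# Allow the Operations Manager communication ports (TCP 5723/5724) inbound
New-NetFirewallRule -DisplayName 'SCOM Agent (TCP 5723)' -Direction Inbound `
    -Protocol TCP -LocalPort 5723 -Action Allow
New-NetFirewallRule -DisplayName 'SCOM SDK (TCP 5724)' -Direction Inbound `
    -Protocol TCP -LocalPort 5724 -Action Allow
```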


Getting Dell Warranty information

If your company has Dell computers, it could be necessary to get the warranty information into your asset management system, for example System Center Service Manager or even System Center Configuration Manager.

There was a solution in the past to get this information directly through a REST API from a Dell webpage, but that service was retired in May 2016. Perhaps you already used one of these solutions:

Dell now offers a new way to get this information through the Dell Warranty Status API.
OK, that provides more security, but…

The problem with this is that every company needs to request access and go through an approval process:

  • Request access and provide information about your environment and the tool which should grab the data, including throughput estimates.
  • Then you get a key and access to the sandbox system, where you test the connection. The key expires after 90 days.
  • After giving feedback and some more reviews, you will get approval to use the production endpoint with the same key. Until then, you are only allowed to use the sandbox system.


More details can be found here.

You can request the access here.

I have created a PowerShell script which gets the ServiceLevelDescription and the EndDate for one Dell Computer. You can download the script here.

#===========================================================================================
# AUTHOR:         Natascia Heil
# Script Name:    GetDellWarrantyInfo.ps1
# DATE:           09/09/2016
# Version:        1.0
# COMMENT:        Script to check warranty information for a computer from the
#                 Dell Warranty Status API
# Example:        .\GetDellWarrantyInfo.ps1 -ServiceTag '1a2b3c' -ApiKey 'sdfj7122394057sdfiouwer' -Dev $true
#===========================================================================================
Param(
    [Parameter(Mandatory=$true)]
    [String]$ServiceTag,
    [Parameter(Mandatory=$true)]
    [String]$ApiKey,
    [Parameter(Mandatory=$true)]
    [Bool]$Dev
)
# Build the request URL (sandbox or production)
If ($Dev)
{
    $URL1 = "https://sandbox.api.dell.com/support/assetinfo/v4/getassetwarranty/$ServiceTag"
}
else
{
    $URL1 = "https://api.dell.com/support/assetinfo/v4/getassetwarranty/$ServiceTag"
}
$URL2 = "?apikey=$ApiKey"
$URL = $URL1 + $URL2
# Get the data
$Request = Invoke-RestMethod -URI $URL -Method GET -ContentType 'Application/xml'
# Extract the warranty entitlements (skip the 'Dell Digitial Delivery' entries as returned by the API)
$Warranty = $Request.AssetWarrantyDTO.AssetWarrantyResponse.AssetWarrantyResponse.AssetEntitlementData.AssetEntitlement |
    Where-Object ServiceLevelDescription -NE 'Dell Digitial Delivery'
# Read the first entry if available
If ($Warranty -is [Object])
{
    $SLA = $Warranty[0].ServiceLevelDescription
    $EndDate = $Warranty[0].EndDate
}
else
{
    $SLA = 'Expired'
}

Here are two examples of the information you can get through the API:

You can use this script in an Orchestrator runbook or Azure Automation to feed data into, for example, SCSM.

Orchestrator Runbook example:


In the Initialize Data activity, create a parameter for the ServiceTag (the serial number of the machine).

In the Run .Net Script activity, paste the PowerShell script; you then do not need the Param block, as you can directly set the variables ($ServiceTag, $ApiKey, $Dev) to the corresponding values. Enter $SLA and $EndDate as Published Data and define them as returned data of the runbook. With that, you can call this runbook from all other main runbooks and get the warranty data for a computer.

 

CIM Lingen 2016

Am 27. August hatte ich die Möglichkeit das erste Mal die Community Konferenz Cim in Lingen (www.cim-lingen.de) zu besuchen und auch gleich dort zu sprechen. Die Konferenz wird von der IT Emsland organisiert.

20160827_061802027_iOS

Meine Erfahrung war sehr positiv. Das Gebäude, in dem die Konferenz gehalten wurde, gehört zu einem alten Industriekomplex, der seinen eigenen Charme versprüht.
20160828_071148779_iOS
Der ganze Komplex ist sehr schön renoviert worden und bietet neben den Vortragsräumen der IT Emsland auch einen sehr ansprechenden Aufenthaltsbereich im Freien. Da das Wetter super war, wurde das auch gerne genutzt. Des weiteren wurde auch der überdachte Bereich der Hochschule Osnabrück mit genutzt, da das auch zu dem Gebäudekomplex gehört.

Was mich überrascht hat, war die Sprecherbetreuung. Gleich als ich ankam wurde ich begrüßt und in den Sprecherraum geführt. Grundsätzlich war alles sehr gut organisiert – danke an das Orga Team – im Sprecherraum gab es Getränke und Obst, genügend Stromanschlüsse, Internetanschluss, Wifi. Vor jedem Vortrag wurde der Sprecher angekündigt, die Agenda an die Leinwand projiziert und alle Knöpfe am Rednerpult waren gut beschriftet. Auch wenn das für mich das erste Mal war, bei dem mein Vortrag aufgezeichnet wurde, hat mich das ganze organisierte Umfeld sehr beruhigt.

Die CIM ist aus meiner Sicht eine besondere Konferenz, da sie

  1. kostenlos ist und 4 parallele Tracks anbietet
  2. an einem Samstag stattfindet
  3. die Vorträge hauptsächlich in Deutsch gehalten werden
  4. die Teilnehmerzahl auf 300 begrenzt ist, wodurch die Tickets ganz schnell weg sind
  5. offen ist für Community Beiträge (nicht nur von MVPs oder Herstellern)
  6. keinen Themenschwerpunkt hat, d.h. es können alle IT bezogenen Vorträge eingereicht werden, dadurch sind die Vortragsthemen sehr abwechslungsreich
  7. von tollen Leuten besucht/getragen wird

Alle Vorträge wurden live gestreamt, so dass auch alle, die nicht persönlich teilnehmen konnten, den Vorträgen folgen konnten.

Sobald die Aufzeichnung meines Vortrags “Asset Management mit Service Manager – eine kostengünstige Lösung” verfügbar ist, werde ich den Link hier aktualisieren.

20160827_094610000_iOS

Als Abschluss der Veranstaltung gab es auch noch die Möglichkeit, sich bei ein paar Currywürsten und Getränken zusammen zu setzen und gemeinsam den Tag Revue passieren zu lassen, was auch noch von einigen genutzt wurde und wobei sich interessante Gespräche entwickelt haben😉.

Die nächste CIM Lingen wird am 9.9.2017 am gleich Ort stattfinden. Ich kann nur empfehlen, sich den Tag zu reservieren und zu versuchen, einen Platz zu bekommen.

Vielleicht sieht man sich im nächsten Jahr wieder!

Update: Die Aufzeichnung ist jetzt verfügbar!