The Manageability Guys

Migrating from WSUS to Configuration Manager


Hi Everyone,

This is our first post in a long while; we've had our heads down ramping up on System Center 2012 and helping the first wave of early adopters. In the course of that work, we found one common scenario where customers are looking to migrate their server patching from WSUS to System Center 2012 Configuration Manager.  For desktop migrations, customers are usually happy to take all updates (or the majority of updates) and just start fresh.  For servers, they want to be sure that the only thing they pull across is whatever was approved by change and release management. 

To this end, I've written a couple of sample scripts that help with this migration.  The first script dumps a list of all approvals for all WSUS computer groups.  The second script takes this list and creates Software Update Groups (or Update Lists if you're using 2007) for each computer group, with an update per approval.  The reason these steps were split was to allow for manual review of the exported list.  We found certain updates need cleaning up prior to importing into Configuration Manager, and some simple Excel clean-up does the trick (look for blank fields - these are usually software update titles that have wrapped).

The first script outputs the list to the console, so you'll need to pipe the script output into a file (powershell.exe script.ps1 > output.csv).  The second script shows a progress bar as it imports, and uses a combination of T-SQL to get CI_IDs from the site database and WMI via the SMS Provider to create each software update group and add the updates to it.
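To give a feel for what the import side does under the hood, here's a minimal sketch of that pattern. The server names, site code, database name and update title below are placeholder assumptions, not values from the actual script:

# Look up the CI_ID for an approved update by title (T-SQL against the site database)
$conn = New-Object System.Data.SqlClient.SqlConnection("Server=SITESQL;Database=CM_ABC;Integrated Security=True")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText = "SELECT CI_ID FROM v_UpdateInfo WHERE Title = 'Sample Update Title'"
$reader = $cmd.ExecuteReader()
$ciIds = @()
while ($reader.Read()) { $ciIds += [int]$reader["CI_ID"] }
$conn.Close()

# Create a Software Update Group (SMS_AuthorizationList) via the SMS Provider and add the updates
$list = ([wmiclass]"\\SITESERVER\root\sms\site_ABC:SMS_AuthorizationList").CreateInstance()
$list.Updates = $ciIds
$info = ([wmiclass]"\\SITESERVER\root\sms\site_ABC:SMS_CI_LocalizedProperties").CreateInstance()
$info.DisplayName = "Migrated - ComputerGroupName"   # placeholder group name
$info.LocaleID = 1033
$list.LocalizedInformation = @($info)
$list.Put()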

NOTE: you may end up with more than 1,000 updates in a single update group, something we don't recommend for Configuration Manager, so consider either splitting those groups into separate groups or doing some cleanup/fixup in Excel to split the authorizations being imported.

Both scripts can be found in the TechNet Script Center Repository:

Hopefully these scripts will make your migration to Configuration Manager a little bit easier.

Keep in mind that these are sample scripts only and should be tested thoroughly prior to use in any production environment.

Regards,

Saud


Install Software step: Multiple Software sample script


Hi All,

One more quick one for you.  This script shows how to use a CSV file as a data source to map and assign software to a given user role or department using an Install Software step in ConfigMgr.  The step allows you to simplify and condense your task sequences.  The description on the TechNet Script Center has more details on how to use the script and customize it. I basically wrote it because we often talk to customers about how to do this, but it is hard to explain, much less demo.  Hopefully the script achieves that goal.

Ultimately, the script is just a way for you to see what is possible and how it can be used.  You could feed the data from the MDT database, a custom database, task sequence variables or even a web service if you wanted (plus, it is a sample script, so no production use without testing, etc.). A minimal sketch of the core idea is below.
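In this sketch, the CSV columns, the OSDDepartment variable and the PACKAGES base variable name are assumptions for illustration, not the script's actual names. The pattern sets the numbered task sequence variables that an Install Software step (configured to install packages according to a dynamic variable list) consumes:

# Hypothetical CSV format: Department,PackageID,ProgramName
$rows = Import-Csv -Path ".\SoftwareMap.csv"

# Read this machine's department from a task sequence variable
# (Microsoft.SMS.TSEnvironment only exists inside a running task sequence)
$tsenv = New-Object -ComObject Microsoft.SMS.TSEnvironment
$dept = $tsenv.Value("OSDDepartment")

# Build the numbered variable list: PACKAGES001, PACKAGES002, ... with "PackageID:ProgramName" values
$i = 1
foreach ($row in $rows | Where-Object { $_.Department -eq $dept }) {
    $name = "PACKAGES{0:D3}" -f $i
    $tsenv.Value($name) = "$($row.PackageID):$($row.ProgramName)"
    $i++
}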

Here is the script: http://gallery.technet.microsoft.com/scriptcenter/Install-Multiple-Software-e05d2f39

Enjoy!

Regards,

Saud

 

How to search System Center 2012 documentation on TechNet using Bing (no…not that way)


A not so techie post today. :)

The ConfigMgr UX team (the docs team) has a blog post here that explains how to use a feature of Bing to search their documentation (the MSDN/TechNet team's SDK blog post here covers the feature). This feature lets you search just the documentation for System Center 2012 Configuration Manager rather than all of TechNet, which is very useful when you're looking for a common term, but only in the context of ConfigMgr. The feature is also described in the ConfigMgr documentation here.

Here are some helpful product filters mapping System Center 2012 components to their nodes in TechNet:

         App Controller
site:technet.microsoft.com meta:search.MSCategory(hh552973)
(link to Bing results)

         Configuration Manager
site:technet.microsoft.com meta:search.MSCategory(gg682056)
(link to Bing results)

         Data Protection Manager
site:technet.microsoft.com meta:search.MSCategory(hh758347)
(link to Bing results)

         Endpoint Protection
site:technet.microsoft.com meta:search.MSCategory(hh479670)
(link to Bing results)

         Operations Manager
site:technet.microsoft.com meta:search.MSCategory(hh546788)
(link to Bing results)

         Orchestrator
site:technet.microsoft.com meta:search.MSCategory(hh237244)
(link to Bing results)

         Service Manager
site:technet.microsoft.com meta:search.MSCategory(hh546791)
(link to Bing results)

         Virtual Machine Manager
site:technet.microsoft.com meta:search.MSCategory(gg610702)
(link to Bing results)

<geeky_explanation>

This feature lets you use Bing to search directly against the meta HTML tags that sites use to tag their pages.  Meta tags are sort of like headers, but in the HTML.  The search.MSCategory tag is a special tag for search engines. To get it for a page, we need to use the special robot version of the page; for example, the VMM documentation here can be accessed using the robot page here: http://technet.microsoft.com/library/gg610610(robot).aspx Once you have that, hit View Source in IE.  You'll see a bunch more meta HTML tags versus the normal page, one of which will be the search.MSCategory we are looking for.  You'll see a few of them, as follows:

         bb126093 = TechNet Library

         cc138020 = System Center

         hh546793 = System Center 2012

         gg610702 = Virtual Machine Manager

How do I know the friendly name?  This maps to the nodes in the site as follows:

 

With that, we can now plug the following string into Bing: site:technet.microsoft.com meta:search.MSCategory(gg610702) to search for information about VMM. For example, to get information about WSUS relating to VMM 2012: http://www.bing.com/search?q=WSUS+site%3Atechnet.microsoft.com+meta%3Asearch.MSCategory%28gg610702%29

Now…the fun bit is that this works for other meta HTML tags in the robot view. Some interesting ones:

         Search.MSHAttr.DCS.appliesToProduct (for example, SCServiceManager or SCConfigurationManager)

         Search.MSHAttr.DCS.appliesToVersion (for example, 2012 or 2007 )

         Search.MSHAttr.appliesToProduct (for example, System Center 2012 Configuration Manager)

</geeky_explanation>

Happy Binging! :)

Saud

How to install software updates using the client-side SDK


One more script from me today. In System Center 2012 Configuration Manager we have deprecated our old COM interface from 2007 (this one here). It has been replaced, along with a few other legacy COM interfaces, by a new set of WMI classes in the appropriately named root\CCM\ClientSDK namespace. This namespace exists on each Configuration Manager client and can be accessed using standard WMI interfaces (PowerShell, WMIC, WbemTest, VBScript, WinRM, .NET or old-school C++). The classes we are interested in are the following:

The method we need to call is under CCM_SoftwareUpdatesManager: the also aptly named InstallUpdates :) (documentation). The process is relatively straightforward from a logical WMI point of view:

  1. Get all instances of CCM_SoftwareUpdate where ComplianceState is 0 (Missing/ciNotPresent)
  2. Stick the instances in an array (if they aren't already)
  3. Call InstallUpdates passing the array of missing software updates

That's it! Job done; the client will do the rest of the work. You can check the CCM_SoftwareUpdate instances to validate installation progress (or potentially use WMI events, though I haven't played around with this). I've taken the liberty of creating a PowerShell sample script and placing it on the TechNet Script Center (note: step 2 above takes a bit more work to unwrap some of the PowerShell variable niceties to keep InstallUpdates and Invoke-WmiMethod happy): http://gallery.technet.microsoft.com/scriptcenter/Install-All-Missing-8ffbd525
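As a minimal sketch of those three steps (run locally on a client, elevated):

$ns = "root\CCM\ClientSDK"

# 1. Get all missing updates (ComplianceState 0 = ciNotPresent)
$missing = Get-WmiObject -Namespace $ns -Class CCM_SoftwareUpdate -Filter "ComplianceState = 0"

# 2. Make sure we hand over a plain array, even when only one update is missing
$updates = @($missing)

# 3. Call InstallUpdates; the leading comma stops Invoke-WmiMethod unrolling the array
if ($updates.Count -gt 0) {
    Invoke-WmiMethod -Namespace $ns -Class CCM_SoftwareUpdatesManager -Name InstallUpdates -ArgumentList (,$updates)
}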

Happy security update deployment! ;)

Saud.

System Center 2012 Configuration Manager and Untrusted Forests


We had an interesting thread internally on Untrusted Forests and hierarchies in System Center 2012 Configuration Manager. As part of that thread we discovered that Neil Peterson has a series of posts covering the various options. These are definitely worth reviewing if you're in the situation where you have to support Untrusted Forests:

  • Blog 1 - Simple Management of a few cross forest clients (Lookup MP / SLP type functionality) -
  • Blog 2 – More complex management of a larger number of cross forest clients (introduce forest discovery, cross forest system discovery, and cross forest client push installation).
  • Blog 3 - Introducing the placement of Configuration Manager infrastructure (MP, DP) in the non-trusted forest environment.
  • Blog 4 – Child site placement (Child Primary or Secondary) in the cross forest environment.

<soap_box>

Keep in mind that if you set up untrusted forests to achieve security segregation (remember that the forest is the security boundary in AD), you may be breaching that segregation by managing everything with Configuration Manager. This doesn't automatically mean a single hierarchy is the right choice for your environment; you should be clear on your requirements, including business, IT operations and security.

</soap_box>

Saud

How to Publish a Windows Azure Application with System Center 2012 Orchestrator: Easy as 1, 2, 3!


The Orchestrator team recently released the Beta Windows Azure Integration Pack for Orchestrator in System Center 2012 SP1.

This Integration Pack is full of activities to throw at Windows Azure and manage the following configuration items of your Windows Azure subscription:

  • Certificates
  • Deployments
  • Cloud Services (formerly known as Hosted Services)
  • Storage
  • Azure Virtual Machine Disks
  • Azure Virtual Machine Images
  • Azure Virtual Machines

 Azure IP Activities

The Orchestrator team did something really cool for this Integration Pack: for each of the configuration items listed above, you have sub-activities to do things like create, delete, update and list.  This makes it very easy to navigate around the Runbook Designer UI and put your hand to tasks a lot quicker! I hope we see more and more of this practice.

The purpose of this post is to walk through deployment of a new application using Orchestrator.  This comes in useful if your change window is out of hours but you'd rather be at home watching Dallas than sitting at your computer waiting for the change window to deploy your new application to Azure.

The high level steps are:

Prerequisites: Prepare the environment

Prerequisites: Get .cspkg and .cscfg files

  1. Create a new Cloud Service
  2. Upload Package to Blob Storage
  3. Deploy the Application from the Blob

 

 

Prerequisites: Prepare the Environment

 Download and deploy the Integration Pack

Read the documentation: http://technet.microsoft.com/en-us/library/jj721956.aspx

 Then create a connection to your Windows Azure subscription.  This is done the same way as we create connections to many other target technologies like OpsMgr, ConfigMgr, VMM and so on.

 

  1. In the Runbook Designer, click Options, and then click Windows Azure. The Windows Azure dialog box appears.
  2. On the Configurations tab, click Add to begin the connection setup. The Add Configuration dialog box appears.
  3. In the Name box, enter a name for the connection. This could be the name of the Windows Azure subscription, or a descriptive name to differentiate the type of connection.
  4. In the Type box, click the button and select a connection type.
  5. In the Subscription ID box, enter the subscription ID of the Windows Azure subscription to connect to.
  6. In the PFX File Path box, click the button and select the management certificate file associated with this Windows Azure subscription. Note: Your certificate file enables authentication of requests to your Windows Azure subscription, and so should be stored in a non-public folder to prevent unauthorized access.
  7. In the PFX File Password box, enter the password of the management certificate file associated with this Windows Azure subscription.
  8. Click OK to close the configuration dialog box, and then click Finish.

You also need to collect some pieces of information:

  • Cloud Service Name is the child domain name to register the application against, e.g. MyAppName. The full FQDN will be MyAppName.cloudapp.net
  • Storage Account Name is the storage account the application will be uploaded to and deployed from.
    • Note: This is agnostic of any Storage Account Name (Storage Service) that the application itself is written to use.
  • Location of Package is the file location of the MyAppName.cspkg and ServiceConfiguration.cscfg files that Visual Studio creates. 

 

  • GuestBook.cspkg is my packaged application, generated by Visual Studio
  • ServiceConfiguration.cscfg file sets values for the configuration settings defined in the service definition file and specifies the number of instances to run for each role.

Note: Applications themselves can reference storage accounts to store information that needs to be persisted (e.g. diagnostics) or use Queues bound to storage accounts.  When the application is packaged this information must be known, as there's a key associated with a storage account name.  That storage account must be created and its associated key retrieved prior to packaging.  Below is an example of my ServiceConfiguration.cscfg file.  It's possible to manipulate those values, but we'll leave that for another post.
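A representative sketch of such a file, with placeholder role and account names and a truncated key:

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="GuestBook" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="GuestBook_WebRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <!-- Placeholder storage account name and truncated key -->
      <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
               value="DefaultEndpointsProtocol=https;AccountName=mystorageacct;AccountKey=..." />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>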

 

1.     Create a new Cloud Service

 

  • Initialize Data Activity | Initialize Data
    • Input parameters:
      • ServiceName
      • StorageServiceName
  • Azure Cloud Services Activity | Create Cloud Service
    • Choose an Activity: Create Cloud Service
    • Service DNS Prefix: {ServiceName from “Initialize Data”}
    • Label: {ServiceName from “Initialize Data”} V2
    • Description: Created by Orchestrator
    • Location/Affinity Group: Location
    • Location/Affinity Group Value: North Europe

Note: Location/Affinity Group allows you to choose between the two.  It depends on the complexity of your application as to which one you choose.  For a simple application, choosing a Location is fine. If you have multiple components to your application, e.g. a Windows Azure Application, SQL Azure Database, Storage Service and possibly some VMs, Affinity Groups are the most efficient way to ensure that all those components are located in the same datacenter.  This increases performance by making sure your services are running close to your users, appeases legal requirements to store your data in a certain country, and gives maximum business continuity to services that might be affected by severe network outages or natural disasters that could occur between datacenters.

2.     Upload Package to Blob Storage

We get storage account properties because we want to create a blob in storage to place our .cspkg of our application.

 

  • Azure Storage Activity | Get Storage Account Keys
    • Choose an Activity: Get Storage Account Keys
    • Storage Account Name: {StorageAccountName from “Initialize Data”}
  • Azure Storage Activity | CreateBlobContainer
    • Choose an Activity: Create Container
    • Storage Account Name: {Storage Account Name from “Get Storage Account Keys”}
    • Container Name: scoblob{Activity end time (minutes) from “Get Storage Account Keys”}
    • Primary Key: {Primary Key from “Get Storage Account Keys”}
  • Azure Storage Activity | Put “Service Package” Blob
    • Choose an Activity: Put Blob
    • File to Upload (File Path): {AzureDeploymentPath}\GuestBookPackage\GuestBook.cspkg
    • Storage Account Name: {Storage Account Name from “Get Storage Account Keys”}
    • Container Name: {Container Name from “CreateBlobContainer”}
    • Blob Name: GuestBookPackage
    • Primary Key: {Primary Key from “Get Storage Account Keys”}

Note: the Container Name must be all lower case; I've just appended the minute from the previous activity's end time for some uniqueness, and to help with potential troubleshooting.  The container will not be needed after deployment.
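For comparison, the same container-and-upload step can be done directly with the Windows Azure PowerShell storage cmdlets, assuming a version of the module that includes them; the account name, key and paths below are placeholders:

# Build a storage context from the account name and primary key
$ctx = New-AzureStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<primary key>"

# Create the (lowercase) container and upload the package as a blob
New-AzureStorageContainer -Name "scoblob45" -Context $ctx
Set-AzureStorageBlobContent -File "C:\AzureDeploy\GuestBookPackage\GuestBook.cspkg" -Container "scoblob45" -Blob "GuestBookPackage" -Context $ctx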

3.     Deploy the Application from the Blob

 

  • Azure Deployments Activity | Azure Deployment
    • Choose an Activity: Create Deployment
    • Service DNS Prefix: {Service DNS Prefix from “Create Cloud Service”}
    • Deployment Slot: Production
    • Deployment Name: SCOGuestBook
    • Label: SCO
    • Service Configuration File Path: {AzureDeploymentPath}\GuestBookPackage\ServiceConfiguration.cscfg
    • Service Package URL: {Blob URL from “Put “Service Package” Blob”}
    • Start Deployment Immediately: True
    • Treat Warnings as Errors: True
    • Wait for Completion: True

Testing the Runbook

 

Now that our Runbook has been put together, it's time to put our application live.

Make sure the application package is in the correct directory for Orchestrator; in my example that's my {AzureDeploymentPath} variable.  Start the Runbook and supply ServiceName and StorageAccountName.

Check the Create Cloud Service activity has completed successfully.

 

Check the Service has been created in the Azure Management Portal.

 

Check the Put “Service Package” Blob activity has worked; from below it looks like we wrote it to the scoblob45 container.

 

Checking the scoblob45 container, we can see the GuestBookPackage has been uploaded.

 

Check the deployment activity completes successfully.

 

Looks good in the portal…

 

The site looks good too!

 

How to Publish a Complex Windows Azure Application with System Center 2012 Orchestrator


Following on from my previous post about how easy it was to publish an application to Windows Azure with Orchestrator, I thought it useful to complicate things and include a few scenarios we see customers using with their Windows Azure Applications.

The cases:

  • Publish the Windows Azure Application with RDP enabled.
  • Create a SQL Azure Server and SQL Azure Database.
  • Create the Storage Service at deployment.

And to make it more interesting:

  • Geo-locate components together
  • Do the housekeeping afterwards

Credit for this post goes to my good friend and very talented colleague Fabrice Aubert, whom I worked with to create a Microsoft Premier workshop titled: Monitoring Windows Azure Applications with System Center 2012 Operations Manager.

Prerequisites

A note on documentation

I try to use a few conventions when developing Runbooks and writing documentation. It's quite OCD, but it has served me well:

  • Colour success links in green
  • Colour unencrypted variables in {purple}
  • Colour encrypted variables in {gold}
  • Colour published data in {maroon}

A note on scripting

When writing PowerShell for the Run .NET Script activity, always try to put all variables that will use SCO Variables and Published Data at the top of the script; this allows for easy maintenance in the future. Also use try/catch, as Orchestrator can have a habit of completing an activity successfully, only for you to find that the script actually didn't work. I broke one rule I live by in this Runbook, and that is to keep PowerShell scripts simple: don't try to throw the entire Runbook into one Run .NET Script activity. It can really affect your ability to troubleshoot a failed activity. Very good PowerShell authors find this REALLY hard to do; sadly, I don't!
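As a concrete skeleton of those two habits (the published-data placeholder here is illustrative only):

# Subscribed variables and published data collected at the top (placeholder names)
$serviceName = "{ServiceName from 'Initialize Data'}"

try
{
    # ...the activity's actual work goes here...
}
catch
{
    # Rethrow so Orchestrator marks the activity as failed rather than silently succeeding
    Throw $_.Exception
}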

The Runbook

  • Initialize Data Activity | Initialize Data
    • Input parameters:
      • ServiceName
      • StorageServiceName
      • location
    • Notes:
      • StorageServiceName must be unique, all lowercase and no special characters.
      • Location must be a valid Windows Azure data center location:
        • West US
        • East US
        • East Asia
        • Southeast Asia
        • North Europe
        • West Europe
  • Azure Cloud Services Activity| Create Affinity Group
    • Choose an Activity: Create Affinity Group
    • Affinity Group Name: afgrp{StorageServiceName from "Initialize Data"}
    • Label: AffGrp-{ServiceName from "Initialize Data"}-{location from "Initialize Data"}
    • Description: Affinity Group for {ServiceName from "Initialize Data"}in {location from "Initialize Data"} -created by Orchestrator
    • Location: {location from "Initialize Data"}
    • Notes:
      • Affinity Group Name must be all lowercase and no special characters
  • Azure Cloud Services Activity| Create Cloud Service
    • Choose an Activity: Create Cloud Service
    • Service DNS Prefix: {ServiceName from "Initialize Data"}
    • Label: {ServiceName from "Initialize Data"} V2
    • Description: Created by Orchestrator
    • Location/Affinity Group: AffinityGroup
    • Location/Affinity Group Value: {Affinity Group Name from "Create Affinity Group"}
    • Notes:
      • Service DNS Prefix must be unique
  • Run .NET Script | Upload RDP Certificate
    • Language: PowerShell
    • Script:

try{

#Grab Certificate

$pathToRDPCertificate = "{AzureDeploymentPath}\GuestBookPackage\GuestBook-RD.pfx"

$RDPCertificatePassword = "{CertificatePassword}"

 

#Get Service

$service = "{Service DNS Prefix from "Create Cloud Service"}"

 

#Import Windows Azure PowerShell Module

Import-Module "{AzurePowershellModulePath}"

 

#Get PublishSettingsFile

Import-AzurePublishSettingsFile -PublishSettingsFile "{AzurePublishSettingsFile}"

$sub = Select-AzureSubscription -SubscriptionName "{AzureSubscriptionName}"

 

#Add Service Certificate

Add-AzureCertificate -servicename $service -CertToDeploy $pathToRDPCertificate -Password $RDPCertificatePassword

}

catch

{

Throw $_.Exception

}

  • Published Data: None
  • Best Practice: Create variables for published data and SCO variables at the top of PowerShell scripts; it makes for easy maintenance.
  • Azure Storage Activity | Create Azure Storage Account
    • Choose an Activity: Create Storage Account
    • Storage Account Name: {StorageAccountName from "Initialize Data"}
    • Label: Created By Orchestrator
    • Description: Created By Orchestrator for {Service DNS Prefix from "Create Cloud Service"} in {Affinity Group Name from "Create Affinity Group"} Affinity Group
    • Location/Affinity Group: AffinityGroup
    • Location/Affinity Group Value: {Location/Affinity Group Value from "Create Cloud Service"}
    • Wait for Completion: True
  • Azure Storage Activity | Get Storage Account Keys
    • Choose an Activity: Get Storage Account Keys
    • Storage Account Name: {StorageAccountName from "Create Azure Storage Account"}
  • Run .NET Script | Create SQL Azure Server and Database Properties
    • Language: PowerShell
    • Script:

try{

#Import Windows Azure Powershell Module

Import-Module "{AzurePowerShellModulePath}"

 

#Storage information

$storageAccount = "{StorageAccountName from "Get Storage Account Keys"}/"

$storageLocation = "{Location from "Initialize Data}"

 

#SQL Azure Server information

$adminLogin = "{SQLLogin}"

$adminPassword = "{SQLPassword}"

 

Import-AzurePublishSettingsFile -PublishSettingsFile "{AzurePublishSettingsFile}"

$sub = Get-AzureSubscription -SubscriptionName "{AzureSubscriptionName}"

 

#Create a new SQL Azure Server

$newServer = New-AzureSqlDatabaseServer -AdministratorLogin $adminLogin -AdministratorLoginPassword $adminPassword -Location $storageLocation

$newServer | New-AzureSqlDatabaseServerFirewallRule -RuleName "EveryBody" -StartIpAddress "0.0.0.0" -EndIpAddress "255.255.255.255"

 

Start-Sleep -s 30

 

#Create ADO.Net Object

$cn = New-Object System.Data.SqlClient.SqlConnection

$cm = New-Object System.Data.SqlClient.SqlCommand

 

#Create GuestBookDB database

$cn.ConnectionString = "Server=tcp:" + $newServer.ServerName + ".database.windows.net,1433;Database=master;User ID=" + $adminLogin + "@" + $newServer.ServerName + ";Password=" + $adminPassword + ";Trusted_Connection=False;Encrypt=True;"

$outconn = $cn.connectionstring

$sql = "CREATE DATABASE GuestBookDB (EDITION='WEB', MAXSIZE=5GB)"

$cm.Connection = $cn

$cm.CommandText = $sql

$cn.Open()

$cm.ExecuteNonQuery()

$cn.Close()

 

Start-Sleep -s 30

 

#Create tables and constraints in GuestBookDB

$cn.ConnectionString = "Server=tcp:" + $newServer.ServerName + ".database.windows.net,1433;Database=GuestBookDB;User ID=" + $adminLogin + "@" + $newServer.ServerName + ";Password=" + $adminPassword + ";Trusted_Connection=False;Encrypt=True;"

 

#Create table Users

$sql = "CREATE TABLE [dbo].[Users]("

$sql = $sql + "[Alias] [nvarchar](50) NOT NULL,"

$sql = $sql + "[FirstName] [nvarchar](100) NOT NULL,"

$sql = $sql + "[LastName] [nvarchar](100) NOT NULL,"

$sql = $sql + "CONSTRAINT [PrimaryKey_9f4b532a-c0c4-47ba-9382-1cd19b4cf96f] PRIMARY KEY CLUSTERED"

$sql = $sql + "("

$sql = $sql + "[Alias] ASC"

$sql = $sql + ")WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF)"

$sql = $sql + ")"

 

$cm.CommandText = $sql

$cn.Open()

$cm.ExecuteNonQuery()

$cn.Close()

 

#Create table Comments

$sql = "CREATE TABLE [dbo].[Comments]("

$sql = $sql + "[ID] [int] IDENTITY(1,1) NOT NULL,"

$sql = $sql + "[AliasKey] [nvarchar](50) NOT NULL,"

$sql = $sql + "[Comment] [nvarchar](200) NOT NULL,"

$sql = $sql + "CONSTRAINT [PrimaryKey_4d45f55b-36c9-48d5-b89c-3d9f7149d4a9] PRIMARY KEY CLUSTERED "

$sql = $sql + "("

$sql = $sql + "[ID] ASC"

$sql = $sql + ")WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF)"

$sql = $sql + ")"

 

$cm.CommandText = $sql

$cn.Open()

$cm.ExecuteNonQuery()

$cn.Close()

 

#Add a foreign key constraint

$sql = "ALTER TABLE [dbo].[Comments] WITH CHECK ADD CONSTRAINT [FK_Comments_0] FOREIGN KEY([AliasKey])"

$sql = $sql + "REFERENCES [dbo].[Users] ([Alias])"

 

$cm.CommandText = $sql

$cn.Open()

$cm.ExecuteNonQuery()

$cn.Close()

 

$sql = "ALTER TABLE [dbo].[Comments] CHECK CONSTRAINT [FK_Comments_0]"

 

$cm.CommandText = $sql

$cn.Open()

$cm.ExecuteNonQuery()

$cn.Close()

 

#Return DB Server Name

$SQLserver = $newServer.ServerName

 

}

catch

{

Throw $_.Exception

}

  • Published Data: SCOSQLServerName: $SQLServer
  • Run .NET Script | Edit ServiceConfig File
    • Language: PowerShell
    • Script:

$storageAccount = "{StorageAccountName from "Get Storage Account Keys"}/"

$storageAccountKey = "{PrimaryKey from "Get Storage Account Keys"}"

$adminLogin = "{SQLLogin}"

$adminPassword = "{SQLPassword}"

$SQLServer = "{SCOSQLServerName from "Create SQL Azure Server and Database"}"

$svccfgfile = "{AzureDeploymentPath}\GuestBookPackage\ServiceConfiguration.cscfg"

 

try

{

 

#Create Connection String

$connectionString = "DefaultEndpointsProtocol=https;AccountName=" + $storageAccount + ";AccountKey=" + $storageAccountKey

 

#Update the ServiceConfiguration.cscfg with the information about the storage account and the SQL Server database created

$xml = New-Object XML

$xml.Load($svccfgfile)

 

$count = $xml.ServiceConfiguration.Role[0].ConfigurationSettings.Setting.count

For($i=0; $i -lt $count; $i++) {

if ($xml.ServiceConfiguration.Role[0].ConfigurationSettings.Setting[$i].Name -eq "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString")

    {

    $xml.ServiceConfiguration.Role[0].ConfigurationSettings.Setting[$i].Value = $connectionString

    }

      

if ($xml.ServiceConfiguration.Role[0].ConfigurationSettings.Setting[$i].Name -eq "ServerName")

    {

    $xml.ServiceConfiguration.Role[0].ConfigurationSettings.Setting[$i].Value = $SQLServer

    }

      

if ($xml.ServiceConfiguration.Role[0].ConfigurationSettings.Setting[$i].Name -eq "UserName")

    {

    $xml.ServiceConfiguration.Role[0].ConfigurationSettings.Setting[$i].Value = $adminLogin

    }    

    

    if ($xml.ServiceConfiguration.Role[0].ConfigurationSettings.Setting[$i].Name -eq "Password")

    {

    $xml.ServiceConfiguration.Role[0].ConfigurationSettings.Setting[$i].Value = $adminPassword

    }

}

 

$count = $xml.ServiceConfiguration.Role[1].ConfigurationSettings.Setting.count

 

For($i=0; $i -lt $count; $i++) {

if ($xml.ServiceConfiguration.Role[1].ConfigurationSettings.Setting[$i].Name -eq "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString")

    {

    $xml.ServiceConfiguration.Role[1].ConfigurationSettings.Setting[$i].Value = $connectionString

    }

}

 

$xml.Save($svccfgfile)

 

}

catch

{

Throw $_.Exception

}

  • Azure Storage Activity | Create Blob Container
    • Choose an Activity: Create Container
    • Storage Account Name: {Storage Account Name from "Get Storage Account Keys"}
    • Container Name: scoblob{Activity end time (minutes) from "Get Storage Account Keys"}
    • Primary Key: {Primary Key from "Get Storage Account Keys"}
  • Azure Storage Activity | Put "Service Package" Blob
    • Choose an Activity: Put Blob
    • File to Upload (File Path): {AzureDeploymentPath}\GuestBookPackage\GuestBook.cspkg
    • Storage Account Name: {Storage Account Name from "Get Storage Account Keys"}
    • Container Name: {Container Name from "Create Blob Container"}
    • Blob Name: GuestBookPackage
    • Primary Key: {Primary Key from "Get Storage Account Keys"}
  • Azure Deployments Activity | Azure Deployment
    • Choose an Activity: Create Deployment
    • Service DNS Prefix: {Service DNS Prefix from "Create Cloud Service"}
    • Deployment Slot: Production
    • Deployment Name: SCOGuestBook
    • Label: SCO
    • Service Configuration File Path: {AzureDeploymentPath}\GuestBookPackage\ServiceConfiguration.cscfg
    • Service Package URL: {Blob URL from "Put "Service Package" Blob"}
    • Start Deployment Immediately: True
    • Treat Warnings as Errors: True
    • Wait for Completion: True
  • Azure Storage Activity | Remove Blob
    • Choose an Activity: Delete Container
    • Storage Account Name: {Storage Account Name from "Put "Service Package" Blob"}
    • Container Name: {Container Name from "Create Blob Container"}
    • Primary Key: {Primary Key from "Get Storage Account Keys"}

 

Running the Runbook

Open the Runbook Tester and fire this up. Input the Initialize Data values, making sure to keep the storage name lowercase and use a valid Azure data center location name.

On each step, check that the activities are doing their thing against Azure.

 

  1. Initialize Data

  2. Create Affinity Group

  3. Create Cloud Service

  4. Upload RDP Certificate

  5. Create Azure Storage Account

  6. Get Storage Account Keys

Check :)

  7. Create SQL Server and Database

  8. Edit Service Configuration File

  9. Create Blob Container

Check!

  10. Put "Service Package" Blob

Check!

  11. Azure Deployment

  12. Remove Blob

 

No SCOBlobxx container, just containers for the GuestBook pictures and the diagnostic monitor data.

Check the site works!

 

 

 

 

Exporting Log History Data from Orchestrator


Orchestrator keeps a log of activities for previous Runbook executions.

The log history view in the Runbook Designer is one of the most heavily used functions in my Orchestrator environment. It's very useful in determining what is happening when an activity fails. I have been looking for a quick and easy way to export this data when troubleshooting remote environments that I don't have access to.

It is possible to get this deep information from Orchestrator via COM as the Product Group explains here.

However, I opted to take a SQL approach and here's the script:

SELECT POL.Name AS PolicyName, OBJ.Name AS ActivityName, OI.ObjectStatus,
       OID.[Key] AS PublishedData, OID.Value AS PublishDataValue,
       OI.StartTime, OI.EndTime
FROM OBJECTINSTANCEDATA OID, dbo.OBJECTINSTANCES OI
INNER JOIN OBJECTS OBJ ON OI.ObjectID = OBJ.UniqueID
INNER JOIN POLICIES POL ON OBJ.ParentID = POL.UniqueID
WHERE (OI.UniqueID = OID.ObjectInstanceID)
--AND POL.Name = 'RunbookName'
--AND OID.[Key] = 'ErrorSummary.Text'
ORDER BY EndTime DESC

 

The script will return Log History including Policy Name, Activity Name, Activity Status, PublishedData (which is the type of data returned), PublishDataValue (e.g. the error, or the PID) and a start and end time.

From the output I can see my activity to CreateLogView failed with a message that the object already exists. I can also see the PID number which can be useful in other scenarios.

The output is consistent with the log history view in the designer:

The ErrorSummary.Text value from Published Data is usually the best indication of where a problem exists.

If I want to see only the ErrorSummary.Text for a given Runbook, I can uncomment the two filter lines in the script above to scope the query for me.

There we have it: an easily exportable log history of a given Runbook, scoped to error text only! Once we have this, we could wrap it into an error-handling Runbook to export the history after a failed execution, or use the data to compare start and end times to look for performance improvements to our activities and Runbooks.
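As a rough sketch of that export step, assuming the SQL Server PowerShell module (for Invoke-Sqlcmd) is available, the query above is saved to LogHistory.sql, and using placeholder instance names (the default Orchestrator database name is Orchestrator):

# Run the log history query and export it to CSV for sharing
Invoke-Sqlcmd -ServerInstance "SCOSQL\INSTANCE" -Database "Orchestrator" -InputFile ".\LogHistory.sql" |
    Export-Csv -Path ".\LogHistory.csv" -NoTypeInformation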

Thanks to my colleague Anders Bengtsson for sharing his knowledge of the database. Anders has also blogged about how log history is cleaned up; you can find it here.


Possible Way Of: Dealing with multiple instances returned by the Get User activity and incrementally creating user accounts with a System Center Orchestrator 2012 Runbook


 Let's say you have the following requirement:

  • Create new user accounts in Active Directory
  • In case the user already exists, you need to create a new, distinct SAM account name
  • The new SAM account name should be incrementally numbered in case another one exists
  • You used a Get User activity and multiple instances were returned, which causes an error in your Runbook

   

In this example, the Get User activity will check for the existence of accounts containing the specific username we are about to create (in this case SVC-OPSMGR-DB), which could return multiple instances:

   

   

   

In this scenario, as there were two instances, the subsequent actions were triggered twice. This is by design, so naturally the first action created the requested user, but the second one failed:

   

   

Checking in AD, the user that was requested got created and was incrementally named (SVC-OPSMGR-DB2):

   

   

The user name was processed this way:

   

   

Basically: in case we find a user with the same name, the next account will be appended with 1; if we find two, it will be appended with 2; and so on…

   

So one Possible Way Of dealing with the multiple instances that may come from the Get User activity could be flattening the data that gets published from it:

   

   

After that, it's just a matter of comparing the total number of accounts returned:

   

And defining the SAM Account Name in the Create User (Alternative) activity, appending the count:

   

   

Then the new user SVC-OPSMGR-DB3 got created without errors in the runbook:

   

   

   

The Runbook for this example looked like this:

   

   

This was one Possible Way Of doing it and HTH somehow!

Possible Way Of: Helping you to get the System Center 2012 Configuration Manager (SCCM2012) Management Pack 5.0.7705.0 to work


If you are using the Management Pack in question, you may be experiencing issues with the discoveries related to the Central Administration Site (CAS) and Primary Sites.

   

You may think that those Discoveries don't work?

That's not in fact true…

   

Whilst the "typo" on the Management Pack Guide and other resources doesn't get sorted a Possible Way Of helping you to get it to work is telling you:

   

Where to enable Agent Proxy?

  • Central Administration Site (CAS)
  • Primary Sites

   

If you don't enable Agent Proxy for the Health Service on the machines holding those Site Roles, you will also see 33333 events on the related Management Server.
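If you prefer PowerShell to the console, the OperationsManager module can flip the setting; the host names below are placeholders:

Import-Module OperationsManager

# Enable Agent Proxy on the CAS and primary site servers (placeholder names)
"CAS01.contoso.com", "PRI01.contoso.com" | ForEach-Object {
    Get-SCOMAgent -DNSHostName $_ | Enable-SCOMAgentProxy
}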

   

Also:

   

You might need to override Microsoft.SystemCenter2012.ConfigurationManager.SiteSystemRolesDiscovery (which targets the Site System Information class hosted by the CAS) to avoid it timing out.

The default setting is 300 seconds. Carefully override it in multiples of 30 seconds until it works.

   

   

To summarize: in the current MP Guide, the highlighted text below should not be followed.

Agent Proxy

For the Configuration Manager monitoring pack to discover objects, you must turn on Agent Proxy on every site server except for the primary site and the central administration site.

   

   

This was one Possible Way Of getting it to work and Hope This Helps somehow!

How to Build a Consecutive Event Monitor for a Windows Azure Application in Operations Manager


Building a consecutive event monitor for a Windows Azure Application can be very useful. Azure applications should throw exceptions when something goes wrong; sometimes they are minor issues, other times they are major. Building monitoring for this can be difficult.

Should we mark an application as critical if an exception is thrown? The answer is "not always." What if the application is throwing exceptions continually? The answer is "always!" We need to measure the number of exceptions thrown per timeframe and change the state of the application accordingly.

Firstly, if you are not familiar with monitoring Windows Azure Applications with Operations Manager, some links are included at the bottom to help:

Custom Monitoring Scenario: Consecutive Event Monitoring

Requirement

Operations Manager must raise an Alert if more than 5 events are received within a certain time period.

Solution

Create a monitor that will dictate the health of a Windows Azure Role Instance based on the occurrence above. An alert will be generated when the monitor enters a critical state.

A rule could satisfy the criteria set out above; however, by building a monitor we can learn more concepts, and we can drive the state of the application.

Procedure

  1. Create a new Management Pack
  2. Create a composite module
  3. Create a monitor using the composite module
  4. Export the Management Pack

Note:

We should be using the Visual Studio Authoring Extensions, but this was written a while ago.

 

Create a new Management Pack

  1. Open the System Center Operations Manager 2007 R2 Authoring Resource Kit
  2. Click File>New…
  3. In the New Management Pack dialog box on the Management Pack Template screen under Select a Management Pack Template select Empty Management Pack
  4. Enter a Management Pack Identity. CompanyName.AppName.AppVersion.ManagementPackFunction

    e.g. TestCorp.GuestBook.2012.Monitoring

     

  5. Click Next
  6. On the Name and Description screen, fill in the Display Name. This usually mirrors the ID from the previous screen, with spaces.
  7. Click Create.

 

Create a composite module

Composite modules are composed of one or more other modules. Composite modules are included in library management packs, and custom composite modules may be defined by management pack authors for performing custom logic. Composite modules require no installation on the agent computer and are completely defined in a particular management pack. They may include native code modules, managed code modules, other composite modules, or any combination of such modules.

The Azure MP provides 3 data sources we can leverage:

  • Windows Azure Role Instance Event Log Collection Simple Data Source
  • Windows Azure Role Instance .NET Trace Collection Simple Data Source
  • Windows Azure Role Instance Performance Counter Collection Simple Data Source

We must create a composite module containing:

  • Windows Azure Role Instance Event Log Collection Simple Data Source
  • A Consolidator module, which consolidates multiple incoming data items based on a specific schedule or a time interval.
  • An Expression Filter module, which allows the data item through or stops it based on the expression specified as configuration.

 

  • Navigate to Type Library > Monitor Types.
  • Right click the white space in the Monitor Types pane and click New > Composite Monitor Type…

  • Give a unique identifier

  • On the General tab fill in Name and Description fields appropriately.

Note:

This consolidated module should be named agnostic of application or any variables such as EventIDs. Composite Modules can and should be reused where possible.

 

  • On the States tab, ensure 2 State Monitor Type is selected and give an ID to State 1 and State 2
    • ID of State 1: Healthy
    • ID of State 2: Critical
  • Under No Detection, check Use no detection for this monitor state and select Healthy from the drop-down. This ensures that the monitor will start in a healthy condition and will trigger an unhealthy (Critical) condition if the criteria of the Module Type are matched.

  • On the Member Modules tab click Add…
  • In the Choose Module Type dialog box type "Azure" without quotes in the Look For: field. Scroll down and select the Windows Azure Role Instance Event Log Collection Simple Data Source. In the Module ID: field type a meaningful ID e.g. AzureEventDS and click OK.

  • A Configuration dialog box will pop up to configure this module with some mandatory fields; the elements need to be promoted so we make them accessible at the Monitor level. Click the fly-out button for each field and select Promote…

Both values should read:

  • IntervalSeconds – $Config/IntervalSeconds$
  • TimeoutSeconds – $Config/TimeoutSeconds$

Click OK when finished.

  • On the Member Modules tab click Add…
  • In the Choose Module Type dialog box type "consolidator" without quotes in the Look For: field. Scroll down and select System.ConsolidatorCondition. In the Module ID: field type a meaningful ID e.g. Consolidator and click OK.
  • On the Configuration dialog that pops up, note that the default counting method of the consolidator type is not suitable for our requirements. Therefore we must edit the XML to use a different counting method.

    Click Edit… - if this is the first time you have entered the XML editing mode, you will need to specify a text editor.

  • In the Text Editor paste the following XML between the <Consolidator></Consolidator> tags

<ConsolidationProperties></ConsolidationProperties>

<TimeControl>

<Latency>$Config/Latency$</Latency>

<WithinTimeSchedule>

<Interval>$Config/TimeWindowSeconds$</Interval>

</WithinTimeSchedule>

</TimeControl>

<CountingCondition>

<Count>$Config/Count$</Count>

<CountMode>OnNewItemTestOutputRestart_OnTimerSlideByOne</CountMode>

</CountingCondition>

In the example XML, we are promoting the Latency, Interval and Count elements to be configurable outside the Composite Module e.g. in the Monitor.

We are also using OnNewItemTestOutputRestart_OnTimerSlideByOne; this counting method uses a sliding window rather than the default fixed window.

  • Close the text editor and save the contents
  • Click OK and ignore the warning about the Latency element.
  • On the Member Modules tab click Add…
  • In the Choose Module Type dialog box type "expression" without quotes in the Look For: field. Scroll down and select System.ExpressionFilter. In the Module ID: field type a meaningful ID e.g. BadExpression and click OK.
  • On the Configuration dialog that pops up, click Edit…
  • In the Text Editor paste the following XML between the <Expression></Expression> tags

<SimpleExpression>

<ValueExpression>

<XPathQuery>EventDisplayNumber</XPathQuery>

</ValueExpression>

<Operator>Equal</Operator>

<ValueExpression>

<Value>$Config/EventID$</Value>

</ValueExpression>

</SimpleExpression>

  • Close the Text Editor and save the file.
  • Review the configuration

 

  • Click the Regular tab and, under State, select Critical; include each member module, then use the drop-downs to specify the order of the member modules.

  • Leave the On Demand tab as it is.
  • On the Configuration Schema tab, Add… the following elements to the Simple Configuration Schema:

Name                Type      Required

IntervalSeconds     Integer   Yes
TimeoutSeconds      Integer   Yes
Latency             Integer   Yes
Count               Integer   Yes
TimeWindowSeconds   Integer   Yes
EventID             Integer   Yes

 

  • On the Overridable Parameters tab, click Add…, select TimeWindowSeconds, and provide a unique ID, e.g. TimeWindow. Then select Integer from the drop-down for the configuration element.
  • Repeat for Count, EventID or any parameter you want to be able to override.

  • Click Apply and OK to finish the Composite Module.
  • Save the Management Pack at this point.

Create a Monitor using the composite module

  • Select Health Model, then Monitors.
  • In the Monitors pane, right-click and select New > Custom Unit Monitor.
  • In the Choose a unique identifier box, type an ID e.g. TestCorp.GuestBook.2012.Monitoring.AzureRoleInstance.EventsperTimeInterval. Click OK.
  • On the General tab, in the Name box, type a name e.g. TestCorp GuestBook 2012 Azure Role Instance Event Per Time Interval.
  • On the General tab, in the Target field click the ellipsis and select Windows Azure Role Instance from the Management Pack Class Chooser dialog. Click OK.

     

Note:

In step 5, we have just targeted EVERY role instance in our Azure application. This monitor would apply to web roles and worker roles for ALL discovered Azure Applications.

If we want to be more granular about which applications or roles we target, we must add our application-specific Management Pack to this MP's references. Then the Management Pack Class Chooser will be populated with the classes defined in that Management Pack. Remember though, sealed vs. unsealed becomes an issue here.

 

  • On the General tab, in the Parent Monitor field we select the most applicable parent monitor based on the nature of the Event ID in question. E.g. EventID: "666" EventDescription: "Application Down" would be an "Availability" event.
  • On the Configuration tab, do the following:
  1. Click Browse for a type...
  2. In the Choose unit monitor type box, select the name of the composite module you just created e.g. Consolidated Azure Event Module Type. Click OK.
  3. On the Configuration tab, specify the following values:

Name                Value

IntervalSeconds     300
TimeoutSeconds      60
Latency             10
Count               5
TimeWindowSeconds   300
EventID             666

 

  • On the Health tab, do the following:
  1. In the Health State box for Healthy select Healthy.
  2. In the Health State box for Critical select Critical.

  • On the Alerting tab, do the following:
  1. Check the box next to Generate alerts for this monitor.
  2. In the Generate an alert when, select The monitor is in a critical health state.
  3. In the Alert name: box, type Consolidated Monitor Alert.
  4. In the Alert Description: box, you can write an Alert description using properties of the alert by using the parameters exposed by the ellipsis button. E.g.

An Event with an EventID of $Data/Context/Context/DataItem/EventNumber$ has been raised on

$Target/Property[Type="MicrosoftSystemCenterAzure!Microsoft.SystemCenter.Azure.RoleInstance"]/RoleInstanceName$ $Data/Context/Count$ times with the description $Data/Context/Context/DataItem/EventDescription$

  • Click OK to save the monitor.
  • Select File, then click Save.

Exporting the MP

 

Click Tools and Export MP to Management Group… and select your TEST Management Group

You will get a 1201 event on the target RMS (first) and proxy agent (after) to indicate a new Management Pack has been received.

Then open the Health Explorer for a role instance and check your new monitor has been set up.

And when the application goes bad, this will be reflected all the way up from the role instance to the Azure application.

References

 

The steps required to discover a Windows Azure Application are documented in the Monitoring Pack guide for the System Center Monitoring Pack for Windows Azure Applications (Azure MP): http://www.microsoft.com/en-us/download/details.aspx?id=11324

Management Pack Technical Writer Brian Wren: http://blogs.technet.com/b/mpauthor/archive/2011/06/20/custom-monitoring-for-windows-azure-management-pack.aspx

Attached to Brian's post is another sample MP which deals with nearly every permutation of Azure monitoring known at the moment: performance, events, .NET trace, all that good stuff.

For more event log examples, the following blog has some good posts on this: http://blogs.msdn.com/b/walterm/archive/2011/08/19/scom-2007-r2-event-log-alerting-and-monitoring-for-azure-applications.aspx

How to create daily reports with System Center Operations Manager 2012 and PowerShell - Part 1


This series of blogs aims to show administrators how to create daily reports summarising the health of Operations Manager using PowerShell. This should reduce the overheads required for daily health reviews of their system environments.

This first entry will serve as an introduction, explaining how PowerShell can be used to extract information from Operations Manager to show the health state of management servers and then output this onto an HTML page. Management servers have been used as the first example because their optimal health is critical to Operations Manager being able to provide connections for SDK clients, configuration generation/distribution and workload distribution through resource pools. Subsequent posts will build on this principle to allow more sophisticated system health reporting.

 

Step 1  

The first four lines in the script Daily-report.ps1 attached below allow data to be passed in from the command line that is used to start the script, and write the values to variables for later use: 

#ParamString1 - MSServerName = The FQDN of the Management Server 

#ParamString2 - FileLocation = The Folder you want the report to be saved to

#ParamString3 - ReportName = What you want the report to be called 

param([string]$MSServerName,[string]$Filelocation,[string]$ReportName) 

 

The above line of code would take the following command when run and map it to the following parameters:

PowerShell.exe Daily-Report.ps1 MSServerName c:\scripts Daily-Report.htm

 

                               ParamString1                              ParamString2                  ParamString3

param([string]$MSServerName,[string]$Filelocation,[string]$ReportName) 

 

#Checks if Report of same name already exists and deletes if found 

$FullPath = "$Filelocation\$ReportName"

if ((test-Path -path $FullPath) -ne $False)

{remove-item -Path $FullPath}

 

Step 2 

Now to connect to System Center Operations Manager 2012 using the management server name from parameter $MSServerName written from Paramstring1: 

# Imports OpsMgr 2012 PowerShell Module and  connects to the Management server specified in ParamString1

Import-Module OperationsManager 

New-SCOMManagementGroupConnection -ComputerName:$MSServerName 

 

Step 3

This section of the script allows you to modify the appearance of the HTML report: 

#Opens $ReportOutput for input; the <style> tag is then used to define style information for the HTML document 

$ReportOutput= "<style>

 

#The <body> element contains all the contents of an HTML document

body {font-family: Verdana, Geneva, Arial, Helvetica, sans-serif;}

 

# The table tag below defines the HTML table

table {border-collapse: collapse; border: none; font: 10pt Verdana, Geneva, Arial, Helvetica, sans-serif; color: black; margin-bottom: 10px;}

 

# The <th> tag defines the table header

th {font-size: 12px; font-weight: bold; padding-left: 0px; padding-right: 20px; text-align: left;}

 

#The <td> tag defines the table cell

td{font-size: 12px; padding-right: 20px; text-align: left;}

 

#The <h1> to <h3> tags are used to define HTML headings for text

h1{ clear: both; font-size: 24pt;  margin-left: 20px;  margin-top: 30px;}

h2{ clear: both;font-size: 115%; margin-left: 20px; margin-top: 30px;}

h3{clear: both; color:red; font-size: 10pt;margin-left: 20px; margin-top: 30px;}

h4{clear: both; color:green; font-size: 10pt;margin-left: 20px; margin-top: 30px;}

h5{ clear: both; font-size: 16pt;  margin-left: 20px;  margin-top: 30px;}

 

#This defines the even rows of an HTML table 

table tr:nth-child(even) { background: #DBE5F1; } 

 

#This defines the odd rows of an HTML table

table tr:nth-child(odd)  { background: #EAF1DD; }

 

#</style> tag is used to close the defined style information for the HTML document

</style>"

 

# outputs HTML for Title of document on report

$ReportOutput +="<h1><center>System Center 2012 - Operations Manager Report</center></h1>"

 

# Gets current date and time and displays on output report 

$targetdate = (get-date) 

$ReportOutput += "<h2>Report run on: $targetdate</h2>" 

 

Step 4 

Next, the queries to run against Operations Manager are entered: in this case focusing primarily on the health of the management server as previously mentioned.

#Creates a line break in the HTML.  

$ReportOutput += "<br>"

 

#Title for the query using the header specified in the Style section

$ReportOutput += "<h5>Management Server State</h5>"

 

#Creates a separating line for formatting purposes in the HTML 

$ReportOutput += "<hr size=4 width=50% align=left>"

 

#Checks if Management Servers are in a Healthy State

$ReportOutput += "<h5>Management Servers not in a Healthy State</h5>"

$Count = Get-SCOMManagementServer | where {$_.HealthState -ne "Success"} | Measure-Object

 if($Count.Count -gt 0) {

       $ReportOutput += Get-SCOMManagementServer | where {$_.HealthState -ne "Success"} | select DisplayName,HealthState,IsGateway | ConvertTo-HTML

 } else {

       $ReportOutput += "<h4>All management servers are in healthy state.</h4>"      

 }

 

#Checks if Management Servers are in Maintenance Mode

$ReportOutput += "<br>"

 $ReportOutput += "<h5>Management Servers in Maintenance Mode</h5>"

 $MSs = get-scomgroup -displayname "Operations Manager Management Servers" | get-scomclassinstance

 foreach ($MS in $MSs)

 {

 if($MS.InMaintenanceMode)

 {     $ReportOutput += "<h3>$($MS.DisplayName) is in Maintenance Mode</h3>"

 } else  {

       $ReportOutput += "<h4>$($MS.DisplayName) is not in Maintenance Mode</h4>"

 }

 }

  
#Checks for Management Server Open Alerts

$ReportOutput += "<br>"

$ReportOutput += "<h5>Management Server Open Alerts</h5>"

$ManagementServers = Get-SCOMManagementServer

foreach ($ManagementServer in $ManagementServers){

     $ReportOutput += "<p><h5>Alerts on " + $ManagementServer.ComputerName + "</h5></p>"

     $ReportOutput += get-SCOMalert -Criteria ("NetbiosComputerName = '" + $ManagementServer.ComputerName + "'") | where {$_.ResolutionState -ne '255' -and $_.MonitoringObjectFullName -Match 'Microsoft.SystemCenter'} | select TimeRaised,Name,Description,Severity | ConvertTo-HTML

}

 

#Creates a separating line for formatting purposes in the HTML

$ReportOutput += "<hr size=4 width=50% align=left>"

 

Step 5 

Now the final step is to display the report. This example will just output to screen but it can be saved to SharePoint and displayed through the Operations Manager console. Another popular option is to script an automatic email of the generated report.

#The </body> element closes the contents of the HTML

$ReportOutput += "</body>"

 

#Converts all output from $ReportOutput to HTML and writes it to $FullPath

ConvertTo-HTML -head $Head -body "$ReportOutput" | Out-File $FullPath

 

#Invokes HTML output file and opens
invoke-item $FullPath

 

#Clears $ReportOutput variable for future use
Clear-Variable ReportOutput
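 

For the automatic email option mentioned above, a one-liner along these lines would do; the SMTP server and addresses are placeholders:

#Emails the generated report as an attachment (assumed SMTP host and mailboxes)
Send-MailMessage -SmtpServer "smtp.contoso.com" -From "scom-reports@contoso.com" -To "ops-team@contoso.com" -Subject "Daily Operations Manager Report" -Body "Daily report attached." -Attachments $FullPath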

 

Now save the file and run using the command from step 1 to start the script, replacing parameters as required, as illustrated in this example:

PowerShell.exe Daily-Report.ps1 MSServerName c:\scripts Daily-Report.htm

 

This will produce a report similar to the below: 

 

 

How to create daily reports with System Center Operations Manager 2012 and PowerShell - Part 2


This second entry in this series of blogs delves into reporting on database health. It aims to explain how PowerShell can be used to extract information from Operations Manager to show the health state of the operational and data warehouse databases and then output this onto an HTML page.

The focus on the Operations Manager databases reflects their key importance in the running of System Center Operations Manager 2012. The operational database contains all configuration data for the management group, and it also stores all monitoring data that is collected and processed for the management group. The data warehouse database stores monitoring and alerting data for historical purposes. Information that is written to the Operations Manager database is also written to the data warehouse for long-term storage. For more information on the Operations Manager infrastructure, please visit Operations Manager Key Concepts at the link provided below:

http://technet.microsoft.com/library/hh230741.aspx 

Subsequent posts will continue to build on this report to provide key reporting of system health.

 

 

 #Retrieves the operational DB name from the connected management server and writes value to $OpsDatabaseName                                                                                                                                                                                                          

 $OpsDatabaseName = get-itemproperty -path "hklm:\software\Microsoft\Microsoft Operations Manager\3.0\Setup" |% {$_.DatabaseName}

 

 

#Retrieves the operational DB Server name, instance name and writes value to $OpsDatabaseSQLServer                                                                                                                                        

$OpsDatabaseSQLServer = get-itemproperty -path "hklm:\software\Microsoft\Microsoft Operations Manager\3.0\Setup" |% {$_.DatabaseServerName}

 

 

#Connects to the operations DB using value from $OpsDatabaseSQLServer and $OpsDatabaseName, gets the data warehouse DB Server name , instance name and writes value to $DWDBServerNameVar

$SqlConnection = New-Object System.Data.SqlClient.SqlConnection

$SqlConnection.ConnectionString = "Server=$OpsDatabaseSQLServer;Database=$OpsDatabaseName;Integrated Security=True"

$SqlCmd = New-Object System.Data.SqlClient.SqlCommand

 $SqlCmd.CommandText = 'SELECT MainDatabaseServerName_2C77AA48_DB0A_5D69_F8FF_20E48F3AED0F

 FROM MT_Microsoft$SystemCenter$DataWarehouse WITH (NOLOCK)

 '

 $SqlCmd.Connection = $SqlConnection

 $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter

 $SqlAdapter.SelectCommand = $SqlCmd

 $DataSet = New-Object System.Data.DataSet

 $SqlAdapter.Fill($DataSet)

 $SqlConnection.Close()

 $DWDBServerNamevar = $DataSet.Tables[0].Rows[0][0]

 

 

#Queries the operations DB again to get the data warehouse database name and writes the value to $DWDBNamevar (the data adapter re-opens the closed connection automatically)

 $SqlCmd = New-Object System.Data.SqlClient.SqlCommand

 $SqlCmd.CommandText = 'SELECT MainDatabaseName_F46548FC_0DFA_877A_A52C_BB8731EBD70D

 FROM MT_Microsoft$SystemCenter$DataWarehouse WITH (NOLOCK)

 '

 $SqlCmd.Connection = $SqlConnection

 $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter

 $SqlAdapter.SelectCommand = $SqlCmd

 $DataSet = New-Object System.Data.DataSet

 $SqlAdapter.Fill($DataSet)

 $SqlConnection.Close()

 $DWDBNamevar = $DataSet.Tables[0].Rows[0][0]

 

 

#Outputs header and formatting for Operations Manager Database Health State

$ReportOutput += "<br>"

$ReportOutput += "<h6>Operations Manager 2012 Database State</h6>"

$ReportOutput += "<hr size=4 width=50% align=left>"

$ReportOutput += "<h5>Operations Manager Database Health State</h5>"

 

#Checks the operational DB health state through the connection to the management server 

$SQLMonitoringClass = Get-SCOMClass -name "Microsoft.SQLServer.Database"

$ReportOutput += Get-SCOMClassInstance -class:$SQLMonitoringClass | where {$_.DisplayName -eq $OpsDatabaseName} | Select Path, DisplayName,HealthState | ConvertTo-HTML

$ReportOutput += "<br>"

$ReportOutput += "<h5>Operations Manager Data Warehouse Health State</h5>"

$ReportOutput += Get-SCOMClassInstance -class:$SQLMonitoringClass | where {$_.DisplayName -eq $DWDBNamevar} | Select Path, DisplayName,HealthState | ConvertTo-HTML

 

#Outputs header and formatting for maintenance tasks over the last day

$ReportOutput += "<br>"

$ReportOutput += "<hr size=4 width=50% align=left>"

$ReportOutput += "<h5>Maintenance tasks in the last day</h5>"

 

#Connects to operational DB by using values from $OpsDatabaseSQLServer and $OpsDatabaseName and checks maintenance tasks that have occurred in the last day 

$SqlConnection = New-Object System.Data.SqlClient.SqlConnection

$SqlConnection.ConnectionString = "Server=$OpsDatabaseSQLServer;Database=$OpsDatabaseName;Integrated Security=True"

$SqlCmd = New-Object System.Data.SqlClient.SqlCommand

$SqlCmd.CommandText = "SELECT * FROM InternalJobHistory where TimeStarted >= DATEADD(day, -1, GETDATE())

"

$SqlCmd.Connection = $SqlConnection

 $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter

 $SqlAdapter.SelectCommand = $SqlCmd

 $DataSet = New-Object System.Data.DataSet

 $SqlAdapter.Fill($DataSet)

 $SqlConnection.Close()

 $OpsIntJobsQuery = $dataSet.Tables[0].rows | Select-Object InternalJobHistoryId, TimeStarted, TimeFinished, StatusCode, Command

if($dataSet.Tables[0].rows.count -gt 0){

    $ReportOutput += $OpsIntJobsQuery | ConvertTo-HTML

     }

else{

     $ReportOutput += "<h3>No maintenance tasks have run in the last day</h3>"

     }

 

The default settings for these jobs can be found below:

-          Discovery Data Grooming: a rule that deletes aged discovery data from the Operations Manager database. Runs every day at 2 AM.

-          Partition and Grooming: a rule that runs workflows to partition and delete aged data from the Operations Manager database. Runs every day at 12 AM.

-          Detect and Fix Object Space Inconsistencies: a rule that repairs data block corruption in database schema objects. Runs every 30 minutes.

-          Alert Auto Resolve Execute All: a rule that automatically resolves active alerts after a period of time. Runs every day at 4 AM.

 

Taken from http://technet.microsoft.com/en-us/library/hh212782.aspx

 

 

#Outputs header and formatting for Operations Manager database

$ReportOutput += "<br>"

$ReportOutput += "<hr size=4 width=50% align=left>"

$ReportOutput += "<h5>Operations Manager Database </h5>"

 

#Connects to the operations DB using the values from $opsDatabaseSQLServer and $OpsDatabaseName then checks current space Usage 

$SqlConnection = New-Object System.Data.SqlClient.SqlConnection

$SqlConnection.ConnectionString = "Server=$OpsDatabaseSQLServer;Database=$OpsDatabaseName;Integrated Security=True"

$SqlCmd = New-Object System.Data.SqlClient.SqlCommand

$SqlCmd.CommandText = "select

[FILE_SIZE_MB]=convert(decimal(12,2),round(a.size/128.000,2)),

[SPACE_USED_MB]=convert(decimal(12,2),round(fileproperty(a.name,'SpaceUsed')/128.000,2)),

[FREE_SPACE_MB]=convert(decimal(12,2),round((a.size-fileproperty(a.name,'SpaceUsed'))/128.000,2)) , 

NAME=left(a.NAME,15),

FILENAME=left(a.FILENAME,60)

from dbo.sysfiles a

"

$SqlCmd.Connection = $SqlConnection

 $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter

 $SqlAdapter.SelectCommand = $SqlCmd

 $DataSet = New-Object System.Data.DataSet

 $SqlAdapter.Fill($DataSet)

 $SqlConnection.Close()

 $OpsSQLQuery = $dataSet.Tables[0].rows | Select-Object FILE_SIZE_MB, SPACE_USED_MB, FREE_SPACE_MB, NAME, FILENAME 

 $ReportOutput += $OpsSQLQuery | ConvertTo-HTML

 

 

#Outputs header and formatting for Operations Manager data warehouse database

$ReportOutput += "<br>"

$ReportOutput += "<hr size=4 width=50% align=left>"

$ReportOutput += "<h5>Operations Manager DW Database </h5>"

 

#Connects to the data warehouse DB using the values from $DWDBServerNamevar and $DWDBNamevar then checks current space usage

$SqlConnection = New-Object System.Data.SqlClient.SqlConnection

$SqlConnection.ConnectionString = "Server=$DWDBServerNamevar;Database=$DWDBNamevar;Integrated Security=True"

$SqlCmd = New-Object System.Data.SqlClient.SqlCommand

$SqlCmd.CommandText = "select

[FILE_SIZE_MB]=convert(decimal(12,2),round(a.size/128.000,2)),

[SPACE_USED_MB]=convert(decimal(12,2),round(fileproperty(a.name,'SpaceUsed')/128.000,2)),

[FREE_SPACE_MB]=convert(decimal(12,2),round((a.size-fileproperty(a.name,'SpaceUsed'))/128.000,2)) , 

NAME=left(a.NAME,15),

FILENAME=left(a.FILENAME,60)

from dbo.sysfiles a

"

$SqlCmd.Connection = $SqlConnection

 $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter

 $SqlAdapter.SelectCommand = $SqlCmd

 $DataSet = New-Object System.Data.DataSet

 $SqlAdapter.Fill($DataSet)

 $SqlConnection.Close()

 $OpsDWSQLQuery = $dataSet.Tables[0].rows | Select-Object FILE_SIZE_MB, SPACE_USED_MB, FREE_SPACE_MB, NAME, FILENAME 

 $ReportOutput += $OpsDWSQLQuery | ConvertTo-HTML

 

 

#Gets the server names of the SQL servers hosting the operational DB and data warehouse DB without instance names

$SQLclassOpsPrincipal = Get-SCOMClassInstance -class:$SQLMonitoringClass | where {$_.DisplayName -eq $OpsDatabaseName} | Select -ExpandProperty *.PrincipalName

$SQLclassOpsPrincipalValue = $SQLclassOpsPrincipal.value

$SQLclassOpsDWPrincipal = Get-SCOMClassInstance -class:$SQLMonitoringClass | where {$_.DisplayName -eq $DWDBNamevar} | Select -ExpandProperty *.PrincipalName

$SQLclassOpsDWPrincipalValue = $SQLclassOpsDWPrincipal.value

  

#Checks alerts for SQL servers from above statement. Note: This will also ensure the process is not repeated if the servers have the same name 

$ReportOutput += "<br>"

$ReportOutput += "<h5>Operational Database Server Open Alerts</h5>"

$ReportOutput += get-SCOMAlert | where {$_.MonitoringObjectPath -eq $SQLclassOpsPrincipalValue -and $_.ResolutionState -ne '255'} | select TimeRaised,Name,Description,Severity | ConvertTo-HTML 

$ReportOutput += "<br>"

$ReportOutput += "<h5>Data Warehouse Database Server Open Alerts</h5>" 

 

 if($SQLclassOpsPrincipalValue -eq $SQLclassOpsDWPrincipalValue)

 {      $ReportOutput += "<h4>The Data warehouse database is installed on the same server as the Operational database</h4>"

 } else  {

       $ReportOutput += get-SCOMAlert | where {$_.MonitoringObjectPath -eq $SQLclassOpsDWPrincipalValue -and $_.ResolutionState -ne '255'} | select TimeRaised,Name,Description,Severity | ConvertTo-HTML

 

 }

 

 

#A CSS border has also now been added to the h6 headings (add this to the $Head style block from the first post) to signify the start of new sections in the HTML output

h6{BORDER-BOTTOM: #b1babf 1px solid; POSITION: relative; BORDER-LEFT: #b1babf 1px solid; BACKGROUND-COLOR: #0061bd; PADDING-LEFT: 5px; DISPLAY: block; FONT-FAMILY: Tahoma; HEIGHT: 2em; COLOR: #ffffff; MARGIN-LEFT: 0px; FONT-SIZE: 12pt; BORDER-TOP: #b1babf 1px solid; FONT-WEIGHT: bold; MARGIN-RIGHT: 0px; BORDER-RIGHT: #b1babf 1px solid; PADDING-TOP: 8px}

 

 

Now run using the command from the previous blog to start the script, replacing parameters as required, as illustrated in this example:

PowerShell.exe Daily-Report.ps1 MSServerName c:\scripts Daily-Report.htm

 

This will produce a report similar to the below:

Hotfix 2801987 is out for 0x800b0101, but the cert expires in March


Hi All,

As hopefully most of you are aware, we released KB article 2801987 for System Center 2012 Configuration Manager SP1. This update provides a new version of MicrosoftPolicyPlatformSetup.msi (a prerequisite for CI-related activities such as DCM, AppMgmt and so on, in case you were wondering). If you install 2801987 you won't need to install the Windows update provided in Security Advisory 2749655 for Configuration Manager. I highlight this because the Windows update addresses this generically for other products/updates rather than just Configuration Manager (or specifically MicrosoftPolicyPlatformSetup.msi).

That said, one thing that crops up is that the cert used to sign MicrosoftPolicyPlatformSetup.msi expires in March, 2013. Does this mean that you'll need a new hotfix in March this year to install the client because the cert expires then?

The answer is no, you won't have to install new hotfix in March, 2013 because of the cert expiring then.

Why? The reason for this is that the issue described in Security Advisory 2749655 and the hotfix 2801987 has to do not with the signing certificates themselves expiring but with a missing timestamp. From 2749655:

Microsoft is aware of an issue involving specific digital certificates that were generated by Microsoft without proper timestamp attributes. These digital certificates were later used to sign some Microsoft core components and software binaries. This could cause compatibility issues between affected binaries and Microsoft Windows...

The timestamping extension to digital signatures allows a signature (and cert) to be marked as valid at the time of signing. Normally, a certificate is valid until it is revoked by the Certificate Authority (CA) or marked as untrusted. Timestamping allows those signatures and the binaries they sign to have an indefinite lifecycle, rather than an arbitrary limit. From Security Advisory 2749655:

How are timestamp Enhanced Key Usage (EKU) extensions used? 
Per RFC3280, timestamp Enhanced Key Usage (EKU) extensions are used to bind the hash of an object to a time. These signed statements show that a signature existed at a particular point in time. They are used in code integrity situations when the code signing certificate has expired, to verify that the signature was made before the certificate expired. For more information about certificate timestamps, see How Certificates Work and Windows Authenticode Portable Executable Signature Format.
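If you're curious, you can inspect a file's signature and timestamp yourself with PowerShell's Get-AuthenticodeSignature (a signed file with no timestamp shows an empty TimeStamperCertificate). The path below is just an example, so point it at wherever the MSI lives in your client source files:

#Example only: shows the signing and timestamping certificates on the MSI
$sig = Get-AuthenticodeSignature -FilePath "C:\Program Files\Microsoft Configuration Manager\Client\x64\MicrosoftPolicyPlatformSetup.msi"
$sig.SignerCertificate | Format-List Subject, NotBefore, NotAfter
$sig.TimeStamperCertificate | Format-List Subject, NotBefore, NotAfter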

Hopefully this helps to clarify things for this update!

Saud.

Listing Review Activities which need your vote in Service Manager


In Service Manager, out of the box, we have a view of Review Activities which you need to vote on.  However, if the RA has multiple reviewers, even if you've already voted, it'll stay in your list until the other folks have voted & the activity is completed.  This can make your view a little messy, and difficult to find what you have and have not already voted on. 

The steps below show you how to make a view which will only show Review Activities on which you need to vote.  As soon as you have voted, you'll no longer have the RA in the view. 

In the Service Manager console, go to Work Items > Activity Management > Right Click > Create Folder:

 

 

 

 Give the folder a name, for example “Custom Activity Views”.

Store this in a separate MP for views.  We don’t want this to be stored in another MP with lots of other stuff.

I clicked the ‘New’ button to create an MP Called “Contoso Custom Views”.  

When you’ve created your folder > Right Click > Create View.

 

 In your new view, give it a Name.

 

 

 

 Under Criteria, click the ‘Browse’ button.

 

 

 

In the drop-down list in the top-right, select Combination classes.

Search for “Review Activity (Advanced)” and select it.

 

 

 

In the criteria section:

-          On the left, select “Review Activity”. Under Available properties, select “Status” & click Add.

-          Set Status to ‘In Progress’

-          Expand “Review Activity” and select “Reviewers”.

-          Tick Decision & click Add.

-          Set Decision to “Not Yet Voted”.

The Criteria should now look like this screenshot:

 

 

 Finally, under the ‘Display’ section, simply select which columns you want to display and click OK.

Next, we want to export the Management Pack.  In the SCSM console, go to Administration > Management Packs.

Find the Management Pack your view is stored in & Export it. 

Next, open up your Management Pack in Notepad, or an XML editor.

-          Search for your view name, so in my example, I’ll search for “My reviews to vote”

-          We want to find the DisplayString section with this text:
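For illustration only (this is the standard management pack DisplayString format, using the view ID and name from my example; yours will differ), it will look something like this:

<DisplayString ElementID="View.02e062005ddf4f148754dead958ff93a">
  <Name>My reviews to vote</Name>
</DisplayString>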

 

   

-          Make a note of the ID, so in this example View.02e062005ddf4f148754dead958ff93a

 

Now, search for your View ID in the MP XML.  We want to find the section which looks like <View ID="View.02e062005ddf4f148754dead958ff93a">

 

 

 

 Now, find the <Expression> section within this view.

Within your <Expression> section, copy & paste in the XML below.

 

<Expression>
  <And>
    <Expression>
      <SimpleExpression>
        <ValueExpressionLeft>
          <Property>$Context/Property[Type='CustomSystem_WorkItem_Activity_Library!System.WorkItem.Activity']/Status$</Property>
        </ValueExpressionLeft>
        <Operator>Equal</Operator>
        <ValueExpressionRight>
          <Value>{11fc3cef-15e5-bca4-dee0-9c1155ec8d83}</Value>
        </ValueExpressionRight>
      </SimpleExpression>
    </Expression>
    <Expression>
      <In>
        <GenericProperty Path="$Context/Path[Relationship='CustomSystem_WorkItem_Activity_Library!System.ReviewActivityHasReviewer']/Path[Relationship='CustomSystem_WorkItem_Activity_Library!System.ReviewerIsUser']$">Id</GenericProperty>
        <Values>
          <Token>[me]</Token>
          <Token>[mygroups]</Token>
        </Values>
      </In>
    </Expression>
    <Expression>
      <SimpleExpression>
        <ValueExpressionLeft>
          <Property>$Context/Path[Relationship='CustomSystem_WorkItem_Activity_Library!System.ReviewActivityHasReviewer' TypeConstraint='CustomSystem_WorkItem_Activity_Library!System.Reviewer']/Property[Type='CustomSystem_WorkItem_Activity_Library!System.Reviewer']/Decision$</Property>
        </ValueExpressionLeft>
        <Operator>Equal</Operator>
        <ValueExpressionRight>
          <Value>{dae75d12-89ac-a8d8-4fe3-516c2a6a26f7}</Value>
        </ValueExpressionRight>
      </SimpleExpression>
    </Expression>
  </And>
</Expression>

Next, save your MP XML file.  Import your saved Management Pack into Service Manager.

After importing, close the SCSM console & re-open.

Your view should now only show Review Activities assigned to you, on which you’ve not yet voted.

 

Please note: If you edit this view from the console in the future, you will lose the manually entered XML that displays only the RAs assigned to [me]. 

If you make changes to the view in the UI, you’ll have to enter in the XML above again.

Enjoy!

 


How to create daily reports with System Center Operations Manager 2012 and PowerShell - Part 3


This series of blogs aims to show administrators how to create daily reports summarising the health of Operations Manager using PowerShell. This should reduce the overheads required for daily health reviews of their system environments.

The third entry in the blog series explains how PowerShell can be used to extract information from Operations Manager to show the health state of agents and then output this onto an HTML page. Agent health is key to ensuring that data is being received by Operations Manager, which underpins all Operations Manager options and functionality. Subsequent posts will build on this principle to allow more sophisticated system health reporting.

 

#Outputs header and formatting for agent health

$ReportOutput += "<br>"

$ReportOutput += "<hr size=4 width=50% align=left>"

$ReportOutput += "<h6>Agent Health</h6>"

 

 

#Gets all agents that have an agent health state that is not green

$ReportOutput += "<h5>Agent Managed Health States which are not Green</h5>"

$Count = Get-SCOMAgent | where {$_.HealthState -ne "Success"} | Measure-Object

if($Count.Count -gt 0) {

                        $ReportOutput += Get-SCOMAgent | where {$_.HealthState -ne "Success"} | select Name,HealthState | ConvertTo-HTML

} else {

$ReportOutput += "<p><h4>Agent Managed Health State is Green.</h4></p>"

}

 

 

#Outputs header and formatting for unresponsive grey agents

$ReportOutput += "</br>"

$ReportOutput += "<hr size=4 width=50% align=left>"

$ReportOutput += "<h5>Unresponsive Grey Agents</h5>"

 

 

#Gets all agents that are not available and in an unresponsive grey state

$AgentMonitoringClass = Get-SCOMClass -name "Microsoft.SystemCenter.Agent"

$Count = Get-SCOMClassInstance -class:$AgentMonitoringClass | where {$_.IsAvailable -eq $false} | Measure-Object

if($Count.Count -gt 0) {

                        $ReportOutput += Get-SCOMClassInstance -class:$AgentMonitoringClass | where {$_.IsAvailable -eq $false} | select DisplayName | ConvertTo-HTML

} else {

$ReportOutput += "<p><h4>All Agents are Responsive.</h4></p>"

}

 

 

#Outputs header and formatting for Last 24 hours health watcher alerts

$ReportOutput += "</br>"

$ReportOutput += "<hr size=4 width=50% align=left>"

$ReportOutput += "<h5>Last 24 Hours Health Watcher Alerts</h5>"

 

 

#Counts agent heartbeat failure within the last 24 hours

$targetdate = (get-date).AddDays(-1)

$Count = get-SCOMalert | where-object {($_.TimeRaised -gt $targetdate) -and ($_.Name -eq "Health Service Heartbeat Failure")} | group-object Name | Measure-Object

if($Count.Count -gt 0) {

                        $ReportOutput += get-SCOMalert | where-object {($_.TimeRaised -gt $targetdate) -and ($_.Name -eq "Health Service Heartbeat Failure")} | group-object Name | Select count,name | sort-object count -descending | ConvertTo-HTML

} else {

$ReportOutput += "<p><h4>No Health Service Heartbeat Failures Recieved.</h4></p>"  

}

 

 

#Outputs header and formatting for agents in pending state

$ReportOutput += "</br>"

$ReportOutput += "<hr size=4 width=50% align=left>"

$ReportOutput += "<h5>Agents in Pending State</h5>"

 

 

#Checks if any agents are currently pending approval

$Count = Get-SCOMPendingManagement | sort AgentPendingActionType  | Measure-Object

if($Count.Count -gt 0) {

                        $ReportOutput += Get-SCOMPendingManagement | sort AgentPendingActionType | select AgentName,ManagementServerName,AgentPendingActionType | ConvertTo-HTML

} else {

$ReportOutput += "<p><h4>No Agents are Pending Approval.</h4></p>" 

}

 

 

#Outputs header and formatting for Agent Load Balancing

$ReportOutput += "</br>"

$ReportOutput += "<hr size=4 width=50% align=left>"

$ReportOutput += "<h5>Agent Load Balancing</h5>"

 

 

#Checks agent load balancing over System Center – Operations Manager 2012

$ReportOutput += get-SCOMagent | sort Name | Group PrimaryManagementServerName -Noelement | sort Name | Select Count,Name | ConvertTo-HTML

 

 

#Outputs header and formatting for agent patch list

 

$ReportOutput += "</br>"

$ReportOutput += "<hr size=4 width=50% align=left>"

$ReportOutput += "<h5>Agent Patch List</h5>"

 

 

#Queries SQL operations database and counts current agent patch levels 

$SqlConnection = New-Object System.Data.SqlClient.SqlConnection

$SqlConnection.ConnectionString = "Server=$OpsDatabaseSQLServer;Database=$OpsDatabaseName;Integrated Security=True"

$SqlCmd = New-Object System.Data.SqlClient.SqlCommand

$SqlCmd.CommandText = "select COUNT (hs.patchlist) AS 'Count' , hs.patchlist AS 'Patch_Level'  from MT_HealthService hs

GROUP BY Patchlist

 

"

 

$SqlCmd.Connection = $SqlConnection

 $SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter

 $SqlAdapter.SelectCommand = $SqlCmd

 $DataSet = New-Object System.Data.DataSet

 $SqlAdapter.Fill($DataSet)

 $SqlConnection.Close()

 $OpsAgentPatchSQLQuery = $dataSet.Tables[0].rows | Select-Object Count , Patch_Level

 $ReportOutput += $OpsAgentPatchSQLQuery | ConvertTo-HTML

 

 

#Outputs line for end of report format purposes

$ReportOutput += "</br>"

$ReportOutput += "<hr size=4 width=50% align=left>"

 

 

Now run using the command from the previous blog to start the script, replacing parameters as required, as illustrated in this example:

 

PowerShell.exe Daily-Report.ps1 MSServerName c:\scripts Daily-Report.htm

 

This will produce a report similar to the below:

 

 

 

 

OpsMgr 2012 Event IDs Spreadsheet


This is a really quick post to get a useful resource out into the Opsmanosphere.

Back in the OpsMgr 2007 days, one of the assets I carried with me everywhere was Daniele's OpsMgrEventIDs spreadsheet. Daniele does a good job of explaining the usage of a spreadsheet like this. I would use it day in, day out to decipher OpsMgr environments, or as a lookup for questions that state Event IDs but no other information (e.g. source, description). It's also very useful when breaking down the top events collected by OpsMgr.

The tool used to extract this event log information is Event Log Explorer.

How to create daily reports with System Center Operations Manager 2012 and PowerShell - Part 4


This series of blogs aims to show administrators how to create daily reports summarising the health of Operations Manager using PowerShell. This should reduce the overheads required for daily health reviews of their system environments.

This fourth and penultimate entry in the blog series explains how PowerShell can be used to extract alert information from Operations Manager for analysis and then output it onto an HTML page. Reviewing alerts and resolving them, either through tuning or root cause analysis, is key to ensuring the overall health of your infrastructure; this can obviously be difficult depending on the number of alerts received. The following script helps digest this data to focus on key problem areas so that your resources can be assigned appropriately. Subsequent posts will build on this principle to allow more sophisticated system health reporting.

 

#Outputs header and formatting for Last 24 Hours Alert Count by Severity

$ReportOutput+="<h5>Last 24 Hours Alert Count by Severity</h5>"

 

#Breaks down alerts by Severity in the last 24 hours and counts

$targetdate= (get-date).AddDays(-1)

$ReportOutput+= get-SCOMalert | where-object {($_.ResolutionState -ne 255) -and ($_.TimeRaised -gt$targetdate)} | group-objectseverity | sort-objectcount-descending | select Count, Name | ConvertTo-HTML

 

#Outputs header and formatting for Alerts broken down by hour logged

$ReportOutput+="</br>"

$ReportOutput+="<hr size=4 width=50% align=left>"

$ReportOutput+="<h5>Alerts broken down by hour logged</h5>"

 

#SQL Query breaks down the last 24 hours of alerts and shows the by the hour logged for trend analysis

$SqlConnection = New-Object System.Data.SqlClient.SqlConnection

$SqlConnection.ConnectionString = "Server=$DWDBServerNamevar;Database=$DWDBNamevar;Integrated Security=True"

$SqlCmd = New-Object System.Data.SqlClient.SqlCommand

$SqlCmd.CommandText = "

CREATE TABLE #HourReport (

[Hour Alert Added] varchar (5) NOT NULL,

[Number of Alerts Per Hour] numeric (18) NULL

)on [PRIMARY]

 

INSERT INTO #HourReport

values ('00:00','0'),

('01:00','0'),

('02:00','0'),

('03:00','0'),

('04:00','0'),

('05:00','0'),

('06:00','0'),

('07:00','0'),

('08:00','0'),

('09:00','0'),

('10:00','0'),

('11:00','0'),

('12:00','0'),

('13:00','0'),

('14:00','0'),

('15:00','0'),

('16:00','0'),

('17:00','0'),

('18:00','0'),

('19:00','0'),

('20:00','0'),

('21:00','0'),

('22:00','0'),

('23:00','0')

  

SELECT CASE

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '00' THEN '00:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '01' THEN '01:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '02' THEN '02:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '03' THEN '03:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '04' THEN '04:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '05' THEN '05:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '06' THEN '06:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '07' THEN '07:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '08' THEN '08:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '09' THEN '09:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '10' THEN '10:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '11' THEN '11:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '12' THEN '12:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '13' THEN '13:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '14' THEN '14:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '15' THEN '15:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '16' THEN '16:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '17' THEN '17:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '18' THEN '18:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '19' THEN '19:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '20' THEN '20:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '21' THEN '21:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '22' THEN '22:00'

WHEN CONVERT(VARCHAR(2), RaisedDateTime, 114) = '23' THEN '23:00'

END AS [Hour Alert Added], COUNT(*) AS [Number Of Alerts Per Hour]

INTO #HourResults

FROM [Alert].[vAlert]

WHERE RaisedDateTime > dateadd (hh,-23,getutcdate())

GROUP BY CONVERT(VARCHAR(2), RaisedDateTime, 114) 

 

SELECT hr.[Hour Alert Added],CASE

WHEN hp.[Number Of Alerts Per Hour] IS NULL THEN '0'

ELSE hp.[Number Of Alerts Per Hour]

END [Number Of Alerts Per Hour]

FROM #HourReport hr

left OUTER JOIN #HourResults hp

ON hr.[Hour Alert Added] = hp.[Hour Alert Added]

 

drop table #HourReport

drop table #HourResults

"

$SqlCmd.Connection = $SqlConnection

$SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter

$SqlAdapter.SelectCommand = $SqlCmd

$DataSet = New-Object System.Data.DataSet

$SqlAdapter.Fill($DataSet)

$SqlConnection.Close()

$OpsAlertsbyHourSQLQuery = $dataSet.Tables[0].rows | Select-Object 'Hour Alert Added', 'Number of Alerts Per Hour'

$ReportOutput += $OpsAlertsbyHourSQLQuery | ConvertTo-HTML

 

#Outputs header and formatting for Top 5 Open Alerts

$ReportOutput+="</br>"

$ReportOutput+="<hr size=4 width=50% align=left>"

$ReportOutput+="<h5>Top 5 Open Alerts</h5>"

 

#Looks for Top 5 Open Alerts by count , very useful for focusing efforts

$ReportOutput+= get-SCOMalert -Criteria 'ResolutionState < "255"' | Group-ObjectName | Sort-objectCount-desc | select-Object-first 5 Count, Name | ConvertTo-HTML

 

#Outputs header and formatting for Top 5 Open Repeating Alerts

$ReportOutput += "</br>"

$ReportOutput += "<hr size=4 width=50% align=left>"

$ReportOutput += "<h5>Top 5 Open Repeating Alerts</h5>"

 

#Looks for the Top 5 open alerts with the highest repeat counts

$ReportOutput += get-SCOMalert -Criteria 'ResolutionState < "255"' | Sort-Object RepeatCount -Descending | Select-Object -First 5 Name, RepeatCount, MonitoringObjectPath, Description | ConvertTo-HTML

 

#Outputs header and formatting for Alerts by Management Pack

$ReportOutput+="</br>"

$ReportOutput+="<hr size=4 width=50% align=left>"

$ReportOutput+="<h5>Alerts by Management Pack</h5>"

 

#Looks for open Alerts and traces back to originating MP, this can be used for finding areas that need tuning

$SqlConnection=New-ObjectSystem.Data.SqlClient.SqlConnection

$SqlConnection.ConnectionString ="Server=$OpsDatabaseSQLServer;Database= $OpsDatabaseName;Integrated Security=True"

$SqlCmd=New-ObjectSystem.Data.SqlClient.SqlCommand

$SqlCmd.CommandText ="

use OperationsManager

Select mp.Name, av.MonitoringRuleId, mp.Id INTO #AlerttoMPCount

FROM dbo.AlertView av

JOIN dbo.RuleView rv

ON av.MonitoringRuleId = rv.id

JOIN dbo.ManagementPackView mp

ON rv.ManagementPackId = mp.Id

where av.ResolutionState <> '255'

UNION ALL

Select mp.Name, av.MonitoringRuleId, mp.Id

FROM dbo.AlertView av

JOIN dbo.MonitorView mv

ON av.MonitoringRuleId = mv.id

JOIN dbo.ManagementPackView mp

ON mv.ManagementPackId = mp.Id

where av.ResolutionState <> '255'

 

Select count (Name) as [Alerts Received],Name as [Management Pack Name]

FROM #AlerttoMPCount

GROUP BY Name

IF OBJECT_ID(N'tempdb..#AlerttoMPCount', N'U') IS NOT NULL

DROP TABLE #AlerttoMPCount;

"

$SqlCmd.Connection = $SqlConnection

$SqlAdapter = New-Object System.Data.SqlClient.SqlDataAdapter

$SqlAdapter.SelectCommand = $SqlCmd

$DataSet = New-Object System.Data.DataSet

$SqlAdapter.Fill($DataSet)

$SqlConnection.Close()

$OpsAlertsbyMP = $dataSet.Tables[0].rows | Select-Object 'Alerts Received', 'Management Pack Name'

$ReportOutput += $OpsAlertsbyMP | ConvertTo-HTML

 

$ReportOutput+="<br></br>"

$ReportOutput+="<hr size=4 width=50% align=left>"

 

#Outputs line for end of report format purposes

$ReportOutput += "</br>"

$ReportOutput += "<hr size=4 width=50% align=left>"

  

Now run using the command from the previous blog to start the script, replacing parameters as required, as illustrated in this example:

 

PowerShell.exe Daily-Report.ps1 MSServerName c:\scripts Daily-Report.htm

 

This will produce a report similar to the below:

 

 

 

A subnet by any other name...


There has been a lot of discussion recently around boundaries in Configuration Manager...do you use ranges or subnets? All ranges or all subnets?  What about AD sites?  The ConfigMgr product team has put out a blog on this here: http://blogs.technet.com/b/configmgrteam/archive/2013/03/01/when-not-to-use-ip-address-ranges-as-boundaries-in-configuration-manager.aspx. Rod Trent has raised a few questions and highlighted community feedback over on myITForum: http://myitforum.com/myitforumwp/2013/03/02/official-microsoft-blog-on-ip-address-ranges-as-configmgr-boundaries-met-with-instant-rebuttal/.

Now - I'm not going to set foot in either camp (or even acknowledge any camps).  Rather...I want to discuss how subnets and ranges are used internally by ConfigMgr.

Subnets

Subnets in ConfigMgr are a client-side (or more accurately, a network host) view of networking.  We'll come back to that in a second; however, that is one of the major reasons supernets don't exist in ConfigMgr (let alone being supported).  Supernets are a network construct for grouping like subnets to make their management and routing simpler. A supernet is like saying all of Japan is supernet A, composed of Tokyo office subnets 1, 2 & 3.  We might want to serve that location through a single ConfigMgr server or site. 

That said, let's go back to the client side.  A client only knows about its IP address and subnet mask.  It doesn't know anything else.  It uses those to determine if an IP address is local or remote (something that needs to be routed by a gateway/router). It is the ConfigMgr client on that network host (Windows device) that determines its IP subnet by applying its subnet mask against its IP address, as the sketch below shows. 
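Here is a minimal PowerShell sketch of that determination, the bitwise AND of address and mask that yields the network ID (the IP and mask are made-up example values):

$ip = [System.Net.IPAddress]::Parse("192.168.10.37")
$mask = [System.Net.IPAddress]::Parse("255.255.255.0")
$ipBytes = $ip.GetAddressBytes()
$maskBytes = $mask.GetAddressBytes()
$netBytes = New-Object byte[] 4
for ($i = 0; $i -lt 4; $i++) { $netBytes[$i] = $ipBytes[$i] -band $maskBytes[$i] }
#The resulting network ID, 192.168.10.0, is the subnet the client reports
(New-Object System.Net.IPAddress -ArgumentList (,$netBytes)).IPAddressToString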

Now - you might think that's simple and too basic; but it is the ConfigMgr client that drives this entire subnet process. It's the client that sends the content location request, along with the SUP and MP list requests, in 2012. When a client does a location request to the MP for content (e.g. packages) it does so by supplying its subnet to the MP.  You can see this by turning on trace logging and looking at the Location Services log files or the MP log files (if you have a few clients) - or even firing up NetMon and doing a network trace.  The MP passes the subnet by calling a SQL query with that information to determine if the content/site system is available on that subnet or a remote one (Jason discusses this more in his blog, http://blogs.technet.com/b/configmgrteam/archive/2013/03/01/when-not-to-use-ip-address-ranges-as-boundaries-in-configuration-manager.aspx).  This is a fairly straightforward comparison of the supplied subnet against a list of subnets.  Obviously...as you get more and more subnets this gets more computationally expensive - but it is still a string lookup against a list of strings, not the end of the world in SQL complexity.

IP Ranges

IP ranges are conceptually simple.  They aren't even really supernets; they're just a range of addresses.  The determination of whether a client falls into a range, unlike a subnet, is done by the ConfigMgr server infrastructure.  Specifically, it is done by SQL Server.  Again, in the location request you'll see an IP address sent up by the client.  The MP passes that to the SQL Server, and the SQL Server determines which range that IP address falls into. Again, Jason discusses this further and explains why this is a more complex operation than the simple subnet lookup we talked about above.  Since the individual query is expensive, adding lots of them burns up more SQL Server resources. 
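To illustrate what a range check involves (a hypothetical sketch, not what SQL Server literally runs): convert the dotted address to a 32-bit number and test it against the range's start and end. This between-style comparison, rather than a simple equality lookup, is what makes ranges the more expensive option. All the addresses below are made up:

function ConvertTo-IPv4Number ($Address) {
    #Reverse the network-order bytes so BitConverter reads them as one number
    $bytes = ([System.Net.IPAddress]::Parse($Address)).GetAddressBytes()
    [Array]::Reverse($bytes)
    [BitConverter]::ToUInt32($bytes, 0)
}
$client = ConvertTo-IPv4Number "192.168.10.37"
$start = ConvertTo-IPv4Number "192.168.10.1"
$end = ConvertTo-IPv4Number "192.168.11.254"
($client -ge $start) -and ($client -le $end)   #True if the client falls inside the boundary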

Real world

Some customers I have worked with have subdivided a subnet into two different IP ranges to support different sites serving different departments or client types.  For example, PCs can only talk to the Global End User team's server (not my North America team's server), or Retail Banking cannot talk to Treasury servers (if they contained the same packages).  That's an example of something bad, and something I'd strongly advise my customers against when working with them. Valid ways of using IP ranges might include (I say might include because the below is not exhaustive):

  • You might use IP ranges when you have been provided with supernets by your networking team (in my Tokyo example, I might group those three subnets into a single IP range).
  • You might use them when your AD sites and DCs have been consolidated heavily.  This helps you maintain the granularity needed for ConfigMgr.
  • You might use them when you have a tremendous number of subnets in your environment (your network team is using subnet masks like 255.255.255.128 or .192 to create very small ranges).
  • You might also need them when dealing with the VPN client scenario (naturally :-)

The key is - don't just start throwing in IP ranges because they work.  Have a think about it.  Know the trade-off: using something more complex internally within the product costs you more database performance resources but is easier to administer.  It's a trade-off...

 

You could probably create millions of IP ranges, if you get your boss to sign off on 512GB of RAM for SQL Server and an SSD SAN...kidding... ;)

Happy ConfigMgr'ing - Saud

P.S. If you're going to MMS - catch my session on reporting in ConfigMgr 2012 SP1 here: http://www.2013mms.com/topic/details/UD-B338

 

MMS 2013 UDB-338 Additional Content
