The Manageability Guys

Software Deployment to Barely Connected Users


In a world where high-speed Internet connections are ubiquitous, even laptop computers can be expected to have some form of regular connection to the Internet or corporate network. However, in the past few months the same scenario has cropped up in conversations with customers: how to include computers that barely connect to the corporate network in mass software deployments such as Office 2010. And to further complicate matters, when they do connect it's remotely, over the Internet, and with extremely slow connections.

 If the devices had access to a relatively fast Internet connection we could simply make a distribution point available to them, either by using Native Mode in ConfigMgr or by allowing clients access across the corporate VPN.

 So the question is how to deploy to these machines. As an admin you are left with one of 4 options:

  1. Recall the laptop and install manually
  2. Utilise mobile engineers to install the software
  3. Remotely talk the user through installing
  4. Manually deliver the content on a disk, but install using ConfigMgr

With a bit of work we can implement option 4 and hopefully reduce the ongoing administrative effort for specific deployments.

 My solution moves the responsibility for delivery outside of ConfigMgr, whilst allowing the installation to occur automatically and still within the Local System context to overcome any permissions issues.

 The first step is to create a package without any source files. Simple enough!

 

 

 

The second step is to decide how to ship the content to the user. Again, simple enough: you send the user a DVD or a USB flash drive with the content on it.

Now is where the complications start:

  • How can you guarantee the location of the content once the user has plugged in the drive or inserted the disk you sent them?
  • Different computers may have different numbers of hard disks, some with CD Drives, others without.
  • The user may have also created their own mapped drives.

In a nutshell, not all computers will have the same drive letter available. This means you can't create a program that runs the content from the D drive, or the E drive, or the F drive, or the... (well, you get the idea)

 

To get around this, we will use a script to ensure that the same package, program and advert can be used on every machine we deploy to.

 

For efficiency's sake it would be great not to have to copy the content, and on Windows 7 and Windows Vista we could make use of MKLINK to save copying it. However, given Windows XP's ongoing popularity I'd be remiss not to consider deployment to such devices.

 

Unfortunately, I can't think of a platform agnostic method other than copying the content to the local disk.

 

Next is to script this copy. My script of choice is the good old Batch File. All that is needed for this is one line and the XCOPY command:

 

XCOPY <source> <destination> /e /c /y /i /q

 

For my testing I am deploying Office 2010 Professional Plus to Windows XP SP3 clients. And my distribution medium is a DVD. I create a batch file 'InstallOffice.bat' in the root of my source disk with the following line:

 

XCOPY Office2010 "C:\temp\Office2010\" /e /c /y /i /q

 

By using a relative path for the source, the folder will be copied whenever the script is initiated from the root of the content drive, thus sidestepping the inconsistent drive letter issue I discussed earlier.

 

Step 3 is to create the program, as you normally would; in my case I created the program with the command:

 

"C:\temp\Office2020\setup.exe" /adminfile "C:\temp\Office2020\fullunattendedsetup.msp"

Step 4 is to advertise the program. It's best practice to limit the collection to those clients you wish to target, both to reduce unnecessary policy evaluation and to reduce the risk of users running an advertisement that will fail because no content is available. Not to mention it will skew your reporting if you target your entire estate, 99% of which will be able to download content and therefore have Office deployed in a more traditional manner!

 

Create a non-mandatory advert and target the program at your client collection!

This is enough to deploy the software: the user can run the bat file to copy the content and then launch the advertisement from the 'Run Advertised Programs' applet in Control Panel.

 

NB Make sure that the targeted machines have received the policy before you ship the content disk to the user. The pre-defined report "Status of a Specific Advertisement" will show you this: clients showing "No Status" have not received the policy, and those showing "Accepted" are good to go!

 

 

However, a few more steps will ease the user experience and reduce the input and effort required on the user's part.

The Configuration Manager 2007 SDK has an example script showing how to initiate an advertisement. We can make use of this by including the script on the content disk we send to the user: http://msdn.microsoft.com/en-us/library/cc143667.aspx

 

The only amendments I make are to declare the variables 'programID' and 'packageID' and set them according to the values of the package and program I am deploying. I save this as InstallOffice.vbs to the same directory as the content and the batch file. A minimal sketch of the amended script is below; the ExecuteProgram call comes from the SDK sample, and the package ID and program name shown are placeholders for your own values:
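' InstallOffice.vbs: asks the local ConfigMgr client to run the
' advertised program. Based on the SDK sample linked above; the
' package ID and program name are placeholders for your own values.
Dim packageID, programID, uiResourceMgr

packageID = "CEN00001"
programID = "Install Office 2010 Unattended"

Set uiResourceMgr = CreateObject("UIResource.UIResourceMgr")

' Arguments as per the SDK sample: program, package, and a Boolean flag.
uiResourceMgr.ExecuteProgram programID, packageID, True

To automate this I add the following line to the batch file: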

 

wscript.exe "InstallOffice.vbs"

 

Now all the user has to do is insert the disk and run the batch file; as long as the policy has been downloaded, the install will proceed.

 

There is one final step we can take to automate the whole end-to-end process, and that is to distribute the content on a CD or a DVD and use an autorun.inf file to automatically begin the copy and install when the disk is inserted.

 

My 'autorun.inf' looks like this:

 

[autorun]

ShellExecute=InstallOffice.bat

icon=Office.ico

label="Office 2010 Distribution"

The root of my DVD now contains the following files:

 

Filename           Purpose
InstallOffice.bat  Copies the files and controls the end-to-end process
InstallOffice.vbs  Initiates the advertisement
Office.ico         Provides an icon for the DVD drive in Explorer
Office2010         Contains the source binaries
Autorun.inf        Contains parameters for the autorun

 

 

Now when the user receives the disk all they have to do is insert it into their machine. The install will either begin automatically or the user will receive an autorun prompt with the option to run the installation.

 

Deployment success can be seen in any of the usual ways:

Immediately from the 'Run Advertised Programs Applet':

 

And also from the Site once the client has been able to send status messages:

  

And from the reports node:

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Rob York, a Premier Field Engineer with Microsoft Premier Field Engineering, UK.


Preventing Operating System Deployment On Servers And Other Critical Client Systems


An uncharacteristically short one from me today; although normality will soon be resumed!

There isn't usually much scope for catastrophe in the daily workings of ConfigMgr, and very few companies I know of would consider it a business-critical application. It is, however, often used to manage servers and even desktops that are running business-critical applications. And while the scope for things to go horrendously wrong is generally small, any collection containing all systems provides exactly that scope: a mis-advertised task sequence or a poorly written collection query can suddenly target your entire environment with an operating system deployment.

 The use of mandatory advertisements can exacerbate the situation. Best practice would be not to use them at all for deploying operating systems, but that's not realistic in real-life scenarios, and when you require zero-touch installation there's no way round it.

 The upshot of this is that you, as a ConfigMgr administrator, need to be very careful when creating the collections, task sequences and advertisements for deploying operating systems in your environment.

 Maintenance windows offer a certain amount of protection, but they are not infallible. User-initiated advertisements from the Run Advertised Programs applet do not honour maintenance windows, as it is assumed that user interaction implies awareness and forethought of the actions being carried out.

 

 

 

 Creating a maintenance window for a collection 

Secondly, it's quite feasible for an advertisement to be set up to ignore maintenance windows, in which case the task sequence will carry on its merry way.

 Ignoring maintenance windows during deployment

 

Third, and less likely, is the possibility of a maintenance window coinciding with the erroneous deployment and allowing the task sequence to continue. This possibility can be addressed by having a second maintenance window that disallows operating system deployments at all times.

 

 

 Limiting a Maintenance Window to OSD Task Sequences Only

 

But again we loop back to the first situation where user interaction will allow the task sequence to continue regardless!

So what can we do? Good processes are the best defence. Some companies have a tiered system where those who create programs, collections and task sequences do not have the requisite permissions in the console to create advertisements. Instead they submit a request and a second administrator is able to review the collection and task sequence before creating the advertisement.

 There is still always more we can do. A belt and braces approach if you will.

By using Task Sequence Variables and logic we can protect critical clients such as servers from the day to day desktop deployments that might be undertaken in the same environment.

 The first step is to create a collection variable against the collection of resources we wish to protect.

 

 

Creating a Collection Variable

 

Setting a Collection Variable
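If you prefer to script this step rather than click through the console, a rough sketch using the SMS Provider WMI classes is below. The site server name, site code and collection ID are placeholders, and this simplified version assumes the collection has no existing settings object (if it does, retrieve and modify that instead):

' Set OSDAllowed=FALSE on a collection via the SMS Provider.
' SITESERVER, ABC and ABC00010 are placeholders for your own site
' server, site code and collection ID.
Dim loc, svc, settings, osdVar

Set loc = CreateObject("WbemScripting.SWbemLocator")
Set svc = loc.ConnectServer("SITESERVER", "root\sms\site_ABC")

Set settings = svc.Get("SMS_CollectionSettings").SpawnInstance_()
settings.CollectionID = "ABC00010"

Set osdVar = svc.Get("SMS_CollectionVariable").SpawnInstance_()
osdVar.Name = "OSDAllowed"
osdVar.Value = "FALSE"
osdVar.IsMasked = False

settings.CollectionVariables = Array(osdVar)
settings.Put_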

Now update your existing task sequences to check for the presence of the variable. Do this by creating a new group with the rest of the task sequence nested within it, and set the logic to only allow the group to run if OSDAllowed is not equal to FALSE.

 

 

 

Adding task sequence logic to prevent unexpected rebuilds.

Now whenever a client within the specified collection tries to run the task sequence it will gracefully close out without damaging the current install. This will be the case regardless of maintenance windows, advertisement type and even for user initiated execution. As long as the variable is set at the collection and the logic is present the client will be protected.

"What if I want to re-image a machine?" I hear you ask. You could create a collection of the target machines you wish to re-image, although this opens you up to all the issues I discussed above, thus bringing the value of this process into serious question.

Personally, I would opt for individually allowing clients to run task sequences. To do this, set a variable on the client object in the console, which will override the one set at the collection level.

 

Creating a variable on the client
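The override can also be scripted, along the same lines as the collection example earlier. The class and property names below are as I recall them from the SDK, so verify against your own environment; the resource ID, site code and server name are placeholders:

' Set OSDAllowed=TRUE on a single client, overriding the collection value.
Dim loc, svc, settings, osdVar

Set loc = CreateObject("WbemScripting.SWbemLocator")
Set svc = loc.ConnectServer("SITESERVER", "root\sms\site_ABC")

Set settings = svc.Get("SMS_MachineSettings").SpawnInstance_()
settings.ResourceID = 12345        ' the client's resource ID (placeholder)
settings.SourceSite = "ABC"
settings.LocaleID = 1033

Set osdVar = svc.Get("SMS_MachineVariable").SpawnInstance_()
osdVar.Name = "OSDAllowed"
osdVar.Value = "TRUE"
osdVar.IsMasked = False

settings.MachineVariables = Array(osdVar)
settings.Put_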

Don't forget to remove the client variable once the required rebuild has taken place.

I'm not offering this as a golden scenario that prevents undesired rebuilds outright; it's by no means infallible. At the end of the day, the onus is always on the ConfigMgr administrator to implement processes to prevent this from happening. That doesn't mean, however, that there aren't tools you can use to reduce the risk, and this one should hopefully reduce the potential for catastrophe!

 

If you do implement this, my last word of advice is to make sure any new task sequences you create include the necessary logic to protect your valuable machines!

 

OK, not quite as short as I promised, but that was before I got trigger-happy with the snipping tool.

 

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Rob York, a Premier Field Engineer with Microsoft Premier Field Engineering, UK. 

Which Device Drivers Should I Import Into My Boot Image?


Today’s post comes from a colleague here in the UK, Jason Wallace. Jason is a fellow Configuration Manager Premier Field Engineer. Here he shares his knowledge and experience on device drivers in boot images. Enough of me, over to Jason!

 

 

Something we sometimes see is questions regarding device drivers and operating system deployment in System Center Configuration Manager, in particular which device driver should go where. In this blog post I’ll try to address these questions. Here goes.

 

Depending upon which version of System Center Configuration Manager you are using, your boot image will be either WinPE 2.0 or WinPE 3.0. These are based upon Windows Vista and Windows 7 respectively, so regardless of which operating system you plan to deploy, you’ll need to have either Vista or Windows 7 device drivers in use. Firstly, the hope is that you won’t need to import any device drivers into your boot image, as all of the drivers you need will be there anyway. The only drivers you need to be worrying about here are network interface card drivers, mass storage drivers and chipset drivers. Importing webcam drivers and so on is a bad move at this point.

 

So, how do I know whether I need to import drivers for a system? First things first: in the System Center Configuration Manager console, right-click on your boot image and, under its properties, enable command prompt support ([F8]). If you don’t do this then you’ll not be able to do any troubleshooting while in WinPE. Don’t forget to update your distribution points. Now, try booting the system into WinPE and opening up a command prompt by hitting [F8]. At the command prompt, type IPCONFIG. If this comes up with a very short message saying “Media Disconnected” then quite probably you’ll need to import a network driver for this computer. To check that you have access to the disk drive(s) on the system, type diskpart and then, at the DISKPART> prompt, list disk. If this shows the disks then you’re good to go on the disk front.

 

Importing a driver should be fairly simple – once you’ve identified the correct driver from the vendor! All you should need to do is go through the driver import routine in the System Center Configuration Manager console and choose to import the driver into your boot image. In order for this to work, the driver needs to have a TXTSETUP.OEM file associated with it. If it doesn’t, you have two options: go back to the vendor and ask for one, or mount the boot image under IMAGEX and import the driver manually. Going to the vendor is usually the easier option. Once you have imported the drivers into your boot image, don’t forget to update your distribution points.

 

Depending upon who your hardware vendor is they may be able to supply the drivers that you need in a bundle.  For example, Dell have such an offering at http://www.delltechcenter.com/page/Dell+Business+Client+Operating+System+Deployment+-+The+.CAB+Files

 

Lastly, just a general note on device drivers. I have found that after updating a device driver package and updating the distribution points, sometimes the old driver package is read incorrectly and problems persist. Updating the driver package a second time seems to resolve this issue.

 

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Jason Wallace, a Premier Field Engineer with Microsoft Premier Field Engineering, UK

Preventing PXE Boot on Servers and Other Critical Client Systems Using MACIgnoreListFile


Following on from my last post here’s another method you can implement to protect specific clients in your environment.

 

There is a documented but seemingly little-known setting in the registry of the PXE Service Point role in Configuration Manager. MACIgnoreListFile allows you to specify a list of MAC addresses which will be explicitly rejected if they try to boot via PXE.

The setting is documented here: http://technet.microsoft.com/en-us/library/cc431378.aspx, but I thought I would share this simple trick with you to further protect vital computers from accidental rebuilds.

For 32-bit servers create a string value called MACIgnoreListFile at

HKLM\Software\Microsoft\SMS\PXE

For 64-bit servers, the value needs to be created under the WOW6432Node at

HKLM\Software\Wow6432Node\Microsoft\SMS\PXE

A small difference but a crucial one if you want the setting to take effect.
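If you'd rather script the registry change than edit it by hand, here's a minimal sketch using WScript.Shell; the 32-bit path is shown and the file location is a placeholder:

' Create the MACIgnoreListFile value on the PXE Service Point.
' Use the Wow6432Node path on 64-bit servers, and point the value
' at your own list file; the path below is a placeholder.
Dim shell
Set shell = CreateObject("WScript.Shell")
shell.RegWrite "HKLM\Software\Microsoft\SMS\PXE\MACIgnoreListFile", _
               "D:\PXE\MACIgnoreList.txt", "REG_SZ"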

 

 

Create the value pointing to a text file that lists all the MACs you wish to protect, looking something like this:
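The file is simply a plain-text list of MAC addresses, something like the following (the addresses are made up, and the one-address-per-line layout is my assumption; check the TechNet article above for the exact format):

00:0C:29:12:34:56
00:0C:29:AB:CD:EF
00:15:5D:00:11:22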

 

 

Now, restart the WDS service so that the MAC file is read in correctly. You will see this in the SMSPXE.log on the PXE Service Point.

 

 

It seems that you will need to restart the WDS service every time you make a change to the exclusion list.

You will be able to see in the SMSPXE.log any attempts from these excluded PCs at PXE booting.

 

 

The client itself will continue to retry, hence multiple entries in the log file above, before timing out and booting to the next available device.

 

 

 

 

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Rob York, a Premier Field Engineer with Microsoft Premier Field Engineering, UK. 

PKI - Let's Keep it Secret


 Over here in Thames Valley Park, a few colleagues and I attended a brilliant session on Intel vPro, in particular playing about with AMT (Active Management Technology), which integrates with Configuration Manager under the Out of Band Management banner.

 

One of the upshots of this session was my colleague, Jason, putting together some details on Public Key Infrastructure. Over the next week or so Jason will be providing a series of articles on PKI which will eventually lead into blogs covering Native Mode and Out of Band Management. These blogs won’t specifically be ConfigMgr, or even strictly speaking manageability, related. But stick with it; Jason is laying the groundwork for some future blogs.

 

Over to Jason, who is a colleague of mine in the Configuration Manager team but who also dabbles in the security and PKI space.

 

 

Let’s imagine that you want to send some information to a friend. What are the things that you might be interested in within this communication? This section is going off on a rather long tangent, but please bear with me. If you know all of this then please skip to the next blog post.

• Confidentiality

You might want to be certain that only you and your correspondent can see the message contents. If there is someone looking to learn your innermost secrets then you’d want a degree of certainty that the information is secure.

• Integrity

 

You might want to be able to guarantee that the information which was received was verifiably the same information that was sent.  Imagine how bad it could be if someone received some information which was even a single character different.  Imagine what impact to your business there could be if the quotation you sent to a potential client charging $10,000 is actually received as £10,000.  Ok, so this might not be a bad thing for you, unless someone else has quoted $12,000!

 

• Authenticity

 

When I receive some information I want to have some assurance that the person who the message claims to be from actually IS the sender of the information.  If I receive a message from Barack Obama then I’d like to have certainty that it was The President who sent me the message. So, you need to prove your identity.

Right, now that we have the introduction let’s spend just a few moments considering  some of the ideas involved in this whole cryptography world.

Confidentiality

If you have a message to send to someone and you want to keep it secret then you are going to need to do something with that message. Essentially there are three things that you can do with the data at this stage:

• Obfuscate the message

OK, this is a geek’s way of saying that you want to make something difficult to read.  Programmers will obfuscate code, we will obfuscate passwords – if you look into your ConfigMgr client’s computer policy then you will see that the Network Access Account username is human-readable.   The password isn’t.

• Hide the message

 

You might want to place the message in some form of container which carries the message, while making it look innocuous.  Imagine, for example, that I decided to take up photography as a hobby and uploaded several hundred photos to my Facebook profile.  Now, imagine if for each of the pixels I changed the colour definition by just one bit. Will you notice? Probably not! If I now use that one bit as a carrier for each bit of my message then even in a simple RGB colour scheme I have 3 bits to play with for each pixel in my image.  Suddenly I have a lot of information embedded within the source image.  This process is called Steganography.

 

These two are somewhat outside the scope of where we are going, so if anything we’re on a tangent within a tangent. They do show, however, that we can do things to data which are not . . .

• Encryption

 

So now we are looking not at hiding our message but at changing it in such a way that it becomes unreadable to a casual observer but can easily be read by the legitimate receiver of the message. A lot has been said about the merits or otherwise of the different methods of encryption, specifically which is the BEST, but I want to introduce the opposite idea – is it GOOD ENOUGH? If I want to send you a message that says “Let’s meet at 8” and it takes an attacker 13 hours to decrypt the message, then it’s safe to say that the encryption was GOOD ENOUGH.

 

Now we have a new element in our communication – the possible ATTACKER.  What do we know about the attacker? Well, the answer is, not a whole lot!  We don’t even know whether they exist, so we make some assumptions about them:

 

• They exist, but we don’t know how many of them there are. We don’t know whether they are co-ordinated or not.

• They have lots of time, but we don’t know how much.

• They have lots of computer time, but again we don’t know how much.

• They are clever. In fact they may even be cleverer than us.

 

Given this ethereal attacker, perhaps we should err on the side of caution. The 13 hours to decrypt our “Let’s meet at 8” message was a pure guess, so let’s be cautious and choose some form of encryption that will protect for, say, 13 years instead. That should do it.

 

To encrypt something, therefore, we need to have three things:

 

1. An encryption protocol

 

We need some method of making our message secure.  That’s going to be our encryption protocol.  We could for example agree on a protocol:

“We will transpose all characters by x”

 

That was easy.  We also need to give this protocol a name – usually named after the inventor of the protocol; this one is called the Caesar Cypher.  This, by the way, is what is known as a symmetric encryption algorithm – the same key is used to decrypt as was used to encrypt the message.

 

2. A key

 

You and I need to agree upon a key – let’s say x=5 and then come up with our transposition.  So:

 

ABCDEFGHIJKLMNOPQRSTUVWXYZ

VWXYZABCDEFGHIJKLMNOPQRSTU

 

3. A message

 

I’ll leave you to work that out for yourself; you know both the protocol and the key:

 

PKBMVYZ OJ XJIADBHBM ORZIOT ORZGQZ

 

OK, I know this IS supposed to be a System Center related blog, so that’s my excuse. It does, however, demonstrate a few things. As an attacker, just looking at the string, there are some patterns: you’re assuming that the language is English, and therefore there are shapes in there which you can start to make out. Here starts your new career as a cryptographer.
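For the curious, the cipher is trivial to implement. Here is a minimal VBScript sketch of the transposition; pass a negative shift to decrypt (I won’t spoil the message above):

' Transpose each letter by 'shift' places; with the post's key of x=5,
' A becomes V. Non-letters pass through. Assumes -26 <= shift <= 26.
Function Caesar(text, shift)
    Dim i, c, result
    result = ""
    For i = 1 To Len(text)
        c = Asc(Mid(UCase(text), i, 1))
        If c >= 65 And c <= 90 Then
            result = result & Chr(65 + ((c - 65 - shift + 26) Mod 26))
        Else
            result = result & Mid(text, i, 1)
        End If
    Next
    Caesar = result
End Function

WScript.Echo Caesar("LETS MEET AT EIGHT", 5)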

 

So, how long is it going to be before the attacker (who does not know our key) will crack the code?  Not long I guess.  So, what can you and I do about that?

 

• We could have a more complicated protocol or key

 

That’s a great idea.  A more complicated protocol however will add to the length of time that it takes to encrypt or decrypt the message.  In computing terms that means CPU cycles.

 

• We could re-key more often

 

Another great idea. As a general principle, a simple encryption algorithm can remain secure as long as the key changes often, while a more complicated algorithm can get away with re-keying less frequently.

If we are going to re-key often, how are we going to do this?  In the case of our simple example, is there a point in re-keying by encrypting the new key with the old key (in band)?  Probably not!      

If in-band re-keying is not a good plan then how about out of band?  Well, this is either going to involve you and I meeting up on such a regular basis that we might as well just exchange our messages at that time or you and I having some form of book of codes stretching into the future (a code book) where we can just cycle through the codes.  Both of these present some severe logistical problems.

With that bombshell, we’ll close off this blog post, but the story continues in PKI – Let’s make it public

 

 

 

 

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Jason Wallace, a Premier Field Engineer with Microsoft Premier Field Engineering, UK

 

 

PKI – Let’s make it Public


Welcome to part two in the series from Jason Wallace on Public Key Infrastructure. If you didn't read Jason's first article explaining PKI I suggest you nip back down the post list of The Manageability Guys to get up to speed. Over to you again Jason...

 

This is the second in a series of blog posts aimed at introducing PKI so that we can hopefully gain an understanding of the core concepts involved in a PKI.

We left off having considered that information can be encrypted using a protocol and a key, but also considering the implications of needing to exchange keys. This is a problem that perplexed many people for many years - how do I ensure that a short-term key can be securely exchanged?

For many years, people would be issued with some form of code book.  People would have the code of the day and use that code to exchange messages.  Later, they realised that if all messages were exchanged using that key and the key for the day was compromised then all communications for that day would also be compromised.  This is when procedures changed and people would first of all invent their own, new key and send that key encrypted by the day key to the recipient.  Now, we had the concept of the session key.  We also had the idea of in-band key exchange.

All of this, however still relied on us having some form of code book.  That code book would have to have enough pages to cover for long enough to span a replacement of a code book.  If, for example we were supplying code books to infantrymen then a short code book would suffice as it would be possible to replace, say monthly.  But how about a submarine on patrol for many months at a time, for example?

So enters Public Key Cryptography, otherwise known as asymmetric cryptography.

All history is stories, and there are many stories surrounding the inception of Public Key cryptography - whether it was some clever people in Langley, USA or some clever people in Cheltenham, UK, or some clever university professors.  Sitting here in my study in Cheltenham I can tell you that we know the real story!

The idea of public key cryptography is really quite simple - the idea is that you and I agree a protocol and then within that protocol I construct two keys: the private key, which I will always protect and NEVER allow anybody to access, and a public key which I'll allow anybody to have access to.  These two keys will be mathematically related somehow but the public key will not give any clue as to the content of the private key.

Easy?  Kind of

So, let's look at our protocol.  Let's say the following:

  • A private key will be the sum of two randomly selected prime numbers
  • A public key will be the multiplication of these two randomly selected prime numbers

So, if I were to choose, say 3359 and 1249 as my primes, my private key would be 4608 and the public key 4195391.

What's so cool about that? Well, pretty much the fact that in order to determine which primes went to make up 4195391, the attacker would need to try prime numbers in sequence to see whether they divide it exactly - something called factoring.
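To see what the attacker is up against, at least at toy scale, here's a minimal sketch of naive trial division; fine for our small example, hopeless at real key lengths:

' Recover the two primes behind our toy public key by trying
' candidate factors in sequence.
Dim n, f
n = 4195391
f = 2
Do While n Mod f <> 0
    f = f + 1
Loop
WScript.Echo "Factors: " & f & " x " & (n \ f)   ' prints 1249 x 3359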

It's all way more complicated than that, and the numbers which we are using are way way larger than two numbers selected from Wikipedia's table of the first 500 primes (remember that we don't know how much time or CPU time the attacker has), but the key point (sorry for the pun) [RY: No he isn't] is that the actual PRIVATE key NEVER goes across the wire in band.  In fact, pretty much anything that involves the private key going anywhere is a concern!

Given this, if you know that my PUBLIC key is 4195391 then you can encrypt some data with this key and I can use my PRIVATE key to decrypt the data.  Hey presto!  I cannot send you anything back though - for that, I'd need your PUBLIC key!

Equally, if I were to encrypt a piece of data with my PRIVATE KEY, then assuming that my private key is indeed private, the only person who could have encrypted that piece of data was me! That's a big assumption, though - the assumption that private keys are indeed always kept private.

So, we have started to achieve two of our goals - those of CONFIDENTIALITY and AUTHENTICITY.  More on those later.

There are a couple of major issues which we encounter with Public / Private Key cryptography:

  • We know that there is a distinct possibility that the private key could be mathematically calculated by an attacker from the public key - it's just going to take an attacker with a huge amount of time and/or processing power. Or an attacker who is much more clever than the person who invented the encryption protocol and has spotted a weakness. To mitigate this, we tend to revert to very long key lengths and to complex mathematics, which can make these kinds of protocols slow to use
  • The more data we send across the wire encrypted with any given key, the bigger the pool of data the attacker potentially has with which to attack our key.

So, what we will tend to do is to use a public key protocol to encrypt a session key for communications and exchange this key with the recipient and then revert to a much simpler, but faster symmetric key for the actual data encryption.  That's why, for example you will see DH/3DES combinations in, say your IPSEC rules, where Diffie Hellman is the public key protocol and 3DES is the symmetric one.

As a side note, all of our assumptions about the mathematics involved are being challenged by techniques such as Elliptic Curve Diffie-Hellman, which allow for very fast, very efficient public key cryptography. Oh, and ALL of our assumptions about cryptography in general are being challenged by Quantum Computing, but we'll leave that for people much cleverer than I.

 

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Jason Wallace, a Premier Field Engineer with Microsoft Premier Field Engineering, UK

Deploying Office Environment Assessment Tool (OEAT) with ConfigMgr


I was approached by a colleague who specialises in Office about this particular tool and thought I would post my findings from playing about with it in my test environment. It will probably be fairly obvious from reading this post that I know very little about Office; in fact I'm a pretty basic user. My Dad would put me to shame in a contest of utilising some of the more adventurous parts of Excel, and he's about as technical as a wood burning stove! So I shall focus purely on the deployment side of things!

 

The documentation for the OEAT (http://technet.microsoft.com/en-us/library/ee683865.aspx#section1) skims quickly over the different techniques for running the tool. It only takes enough time to list SMS and ConfigMgr as possible methods for centralised deployment.

 

The tool itself is a fairly simple-to-use executable. It can be run either from the command line or through a GUI. For the purposes of deploying via ConfigMgr we first need to use the GUI.

 

Before you can run the tool to scan your client you need to create the settings.xml file the command line will expect later on.

Click Next and Next again to get to this stage.

Here you can specify whether to include the passive scan of the client, and how long to run for. If you choose not to run the passive scan you must select the silent option as we will be deploying via ConfigMgr. If you do opt in for the passive scan this will be selected for you.

Whichever scan type you opt for, select Next.

The final screen allows us to choose the output location. The OEAT documentation talks about using ConfigMgr to collect the output from the local machine. To me this means software inventory and file collection, which in my opinion is additional work. The wizard here allows us to specify a central network location to automatically upload the output to, and that is the approach I am taking. I shall write another post soon to go through the few extra steps needed to use file collection.

Note that if you take the central repository route, as I do, the client must have sufficient rights to write to that network location.

Specify the location, local or otherwise. Click Next and then Finish. This will output settings.xml to the same location as OEAT.exe.

 

Now we can begin packaging. And just like any other software distribution package, now would be a great time to test. Open a command prompt on the client, navigate to the directory containing OEAT.exe and settings.xml, and type:

                OEAT.exe -scan

You can monitor the scan's progress by opening Task Manager and waiting for OEAT.exe to disappear from the Processes tab. If you configured a passive scan, it will run for the time you specified; if you didn't, it should close out much sooner. In my testing it took no more than about 30 seconds. However, I did only install Office on my client this morning and it's a vanilla install with no additional plug-ins :)
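If you'd rather not sit watching Task Manager, here's a rough VBScript sketch that polls WMI until the OEAT.exe process disappears:

' Poll the local machine every five seconds until OEAT.exe has exited.
Dim wmi, procs
Set wmi = GetObject("winmgmts:\\.\root\cimv2")
Do
    WScript.Sleep 5000
    Set procs = wmi.ExecQuery("SELECT * FROM Win32_Process WHERE Name = 'OEAT.exe'")
Loop While procs.Count > 0
WScript.Echo "Scan complete."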

Now we can package up the tool. Copy the EXE and XML files to your ConfigMgr source location and create a package.

 

Next create a program, using the command line we used to test our tool earlier.

 

If you configured the tool for passive scanning, you will need to set the maximum allowed run time accordingly. Be aware that using this method will prevent other ConfigMgr deployments from running until the tool has finished.

Next, decide what context to run the tool under. If you are conducting a passive scan, administrative rights are required so you will need to select "Run with administrative rights"

 

Now create the advertisement to target your client systems.

 

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Rob York, a Premier Field Engineer with Microsoft Premier Field Engineering, UK.

PKI – Let’s make it Secure


Post three of what currently stands at a 4 part introduction to Public Key Infrastructure from my colleague Jason Wallace. Jason is laying the foundations to cover some ConfigMgr related certificatary (I know that's not really a word, but it's the end of a very long hot day and I need to amuse myself somehow)  

Over to you Jason.

Thanks for bearing with me on this.

So, to recap: we know that there are two major forms of encryption - symmetric and asymmetric - and we said that there are three goals of using encryption:

  • Confidentiality
  • Integrity
  • Authenticity

We have not really touched Integrity, so, before we look at how this all stitches together let's briefly look at hashing.

Hashing

Hashing is unlike encryption in that it does not alter the source data at all. What it does is generate something brand new which can sit alongside the data being hashed. The hash is generated by a hashing algorithm which always returns a result of a known length, but which returns a completely different hash if even a single bit of the data being hashed is different. For example, if I were to hash a single-byte file with MD5 then I would see a 128-bit result. If I were to hash 1 terabyte of data I would see a different 128-bit result. If either of the files were to change by the slightest bit then we'd see a totally different hash. With hashing, we can guarantee that what we have received is what was sent, thereby giving us integrity.

For example, if we hash the alphabet using the MD5 algorithm the output is:

8eb7ab130d2ebf1e0bff12606ccaabd3

If we hash the same string but capitalised, this time the hash string is totally different

9c4511bd25cb573b3994ea4f80f5652a

Here we can see that even if the data is fundamentally the same but has changed in the most minute of ways, we still see a different hash!
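If you want to experiment with this yourself, here's a rough VBScript sketch that computes an MD5 hash by way of the COM-visible .NET crypto classes; it assumes the .NET Framework is installed, and note that the exact bytes matter (hashing a file includes its line endings, so results can differ from hashing the bare string):

' Compute an MD5 hash of a string using the .NET classes exposed to COM.
Function MD5Hex(text)
    Dim enc, md5, bytes, i, out
    Set enc = CreateObject("System.Text.UTF8Encoding")
    Set md5 = CreateObject("System.Security.Cryptography.MD5CryptoServiceProvider")
    bytes = md5.ComputeHash_2(enc.GetBytes_4(text))
    out = ""
    ' The hash comes back as a binary string; format each byte as hex.
    For i = 1 To LenB(bytes)
        out = out & LCase(Right("0" & Hex(AscB(MidB(bytes, i, 1))), 2))
    Next
    MD5Hex = out
End Function

WScript.Echo MD5Hex("Let's meet at 8")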

Stitching it all together

  • Confidentiality

This one should be easy by now. There are a number of options:

  • I could create an encryption key and deliver it to you out of band. We could use that key to symmetrically encrypt the data both ways.
  • Slightly more secure, I could create a key which I give to you, and you create one which you give to me. We then use one of those keys for each direction of communications.
  • You could create a public/private key pair and give me your public key. I could then encrypt the data using your public key.
  • I could create a session key and then encrypt the session key with your public key. You could then decrypt the session key and use it to encrypt the communications. When it comes to re-keying, we revert to the public keys again.

  • Integrity

This is done using hashes.

  • Authenticity

If you encrypt something using your private key then this proves that you encrypted it.

Pretty much everything we do with PKI is done using a combination of these technologies. It can become difficult to track whose keys are whose when we are, for example, embedding a hash which was encrypted with your private key and then re-encrypted using my public key, but it does pan out in the end. Remember that pretty much any solution that would involve a private key crossing the network is likely to be the wrong answer!

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Jason Wallace, a Premier Field Engineer with Microsoft Premier Field Engineering, UK


How to get a report with Friendly scan errors


Hi Everyone,

I haven’t posted anything in ages so I thought I’d post something simple but hopefully useful.  We publish a list of all custom Configuration Manager 2007 errors in TechNet (it’s up here in case you haven’t seen it http://technet.microsoft.com/en-us/library/bb632794.aspx).  Now, while this list is really good when you’re trawling through Trace32 looking at logs, it’s not so useful when you’re trying to figure out why clients are failing.  We have a built-in report in Configuration Manager that provides this information, but again you get hex error codes (which are better than 32-bit decimal integers, but only a little).  So what I did was take the report and add a massive case statement with all the custom scan error messages. 

The new query for the scan errors can be found below.  You can clone the existing report, and put the below query in the new report.  Note that there may be some clipping on the blog page, but the underlying code is still there so when you highlight the text you’ll get the full line.

select us.UpdateSourceName as UpdateSource, 
    us.UpdateSourceDescription as Description,
    us.UpdateSourceVersion as Version, 
    us.SourceSite as SourceSite
from v_SoftwareUpdateSource us with (NOLOCK) 
    where us.UpdateSource_UniqueID = @UpdateSourceID 

select 
    uss.LastStatusMessageID&0x0000FFFF as ErrorStatusID,
    asi.MessageName as Status,
    isnull(uss.LastErrorCode,0) as ErrorCode,
    dbo.fnConvertBinaryToHexString(convert(VARBINARY(8), isnull(uss.LastErrorCode,0))) as HexErrorCode,
    'Error Text' =
        CASE dbo.fnConvertBinaryToHexString(convert(VARBINARY(8), isnull(uss.LastErrorCode,0)))
            WHEN '8024402C' THEN 'WU_E_PT_WINHTTP_NAME_NOT_RESOLVED: Same as ERROR_WINHTTP_NAME_NOT_RESOLVED - The proxy server or target server name cannot be resolved.'
            WHEN '80244016' THEN 'WU_E_PT_HTTP_STATUS_BAD_REQUEST: Same as HTTP status 400 – The server could not process the request due to invalid syntax.'
            WHEN '80244017' THEN 'WU_E_PT_HTTP_STATUS_DENIED: Same as HTTP status 401 – The requested resource requires user authentication.'
            WHEN '80244018' THEN 'WU_E_PT_HTTP_STATUS_FORBIDDEN: Same as HTTP status 403 – Server understood the request, but declines to fulfill it.'
            WHEN '80244019' THEN 'WU_E_PT_HTTP_STATUS_NOT_FOUND: Same as HTTP status 404 – The server cannot find the requested URI (Uniform Resource Identifier).'
            WHEN '8024401A' THEN 'WU_E_PT_HTTP_STATUS_BAD_METHOD: Same as HTTP status 405 – The HTTP method is not allowed.'
            WHEN '8024401B' THEN 'WU_E_PT_HTTP_STATUS_PROXY_AUTH_REQ: Same as HTTP status 407 – Proxy authentication is required.'
            WHEN '8024401C' THEN 'WU_E_PT_HTTP_STATUS_REQUEST_TIMEOUT: Same as HTTP status 408 – The server timed out waiting for the request.'
            WHEN '8024401D' THEN 'WU_E_PT_HTTP_STATUS_CONFLICT: Same as HTTP status 409 – The request was not completed due to a conflict with the current state of the resource.'
            WHEN '8024401E' THEN 'WU_E_PT_HTTP_STATUS_GONE: Same as HTTP status 410 – Requested resource is no longer available at the server.'
            WHEN '8024401F' THEN 'WU_E_PT_HTTP_STATUS_SERVER_ERROR: Same as HTTP status 500 – An error internal to the server prevented fulfilling the request.'
            WHEN '80244020' THEN 'WU_E_PT_HTTP_STATUS_NOT_SUPPORTED: Same as HTTP status 501 – Server does not support the functionality required to fulfill the request.'
            WHEN '80244021' THEN 'WU_E_PT_HTTP_STATUS_BAD_GATEWAY: Same as HTTP status 502 – The server, while acting as a gateway or proxy, received an invalid response from the upstream server it accessed in attempting to fulfill the request.'
            WHEN '80244022' THEN 'WU_E_PT_HTTP_STATUS_SERVICE_UNAVAIL: Same as HTTP status 503 – The service is temporarily overloaded.'
            WHEN '80244023' THEN 'WU_E_PT_HTTP_STATUS_GATEWAY_TIMEOUT: Same as HTTP status 504 – The request was timed out waiting for a gateway.'
            WHEN '80244024' THEN 'WU_E_PT_HTTP_STATUS_VERSION_NOT_SUP: Same as HTTP status 505 – The server does not support the HTTP protocol version used for the request.'
            WHEN '8024400A' THEN 'WU_E_PT_SOAPCLIENT_PARSE: WUA client needs to be updated, message from server cannot be parsed.'
            WHEN '8024001E' THEN 'WU_E_SERVICE_STOP: Operation did not complete because the service or system was being shut down.'
            WHEN '8024400D' THEN 'WU_E_PT_SOAP_CLIENT: SOAP client found the message was malformed.'
            WHEN '80240032' THEN 'WU_E_INVALID_CRITERIA: The search criteria string sent to WUA from ConfigMgr was marked as invalid by WUA.'
            WHEN '80240012' THEN 'WU_E_DUPLICATE_ITEM: Failed to add file to the FileLocationList.'
            WHEN '80240032' THEN 'WUA Error: Failed to end search job. WUA failed searching for update with error.'
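            -- NB: '80240032' appears twice in this CASE; only the first WHEN above can ever match.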
            WHEN '8024001D' THEN 'WUA Error: An update contains invalid metadata.'
            WHEN 'C80003F3' THEN 'hrOutOfMemory: The computer is out of memory. Generally reported when WSUS try to initialize its datastore.'
            WHEN 'C800042D' THEN 'hrVersionStoreOutOfMemory: Generally reported when the WUA is unable to update %WINDIR%\SoftwareDistribution folder.'
            WHEN '80040692' THEN 'ConfigMgr Custom Error: Group Policy conflict. Check domain GPOs applying to this machine.'
            WHEN '80040693' THEN 'ConfigMgr Custom Error: WUA version is lower than expected. Upgrade WUA.'
            WHEN '80040708' THEN 'ConfigMgr Custom Error: Software Updates Install not required.'
            WHEN '80040709' THEN 'ConfigMgr Custom Error: Failed to resume the monitoring of the process.'
            WHEN '8004070A' THEN 'ConfigMgr Custom Error: Invalid command line.'
            WHEN '8004070B' THEN 'ConfigMgr Custom Error: Failed to create process.'
            WHEN '8004070C' THEN 'ConfigMgr Custom Error: Software update execution timeout.'
            WHEN '8004070D' THEN 'ConfigMgr Custom Error: Software update failed when attempted.'
            WHEN '8004070E' THEN 'ConfigMgr Custom Error: Empty command line specified.'
            WHEN '8004070F' THEN 'ConfigMgr Custom Error: Invalid updates installer path.'
            WHEN '80040710' THEN 'ConfigMgr Custom Error: Failed to compare process creation time.'
            WHEN '80040711' THEN 'ConfigMgr Custom Error: Software updates deployment not active yet; for example, start time is in the future.'
            WHEN '80040712' THEN 'ConfigMgr Custom Error: A system restart is required to complete the installation.'
            WHEN '80040713' THEN 'ConfigMgr Custom Error: Software updates detection results not received yet.'
            WHEN '80040714' THEN 'ConfigMgr Custom Error: User based install not allowed as system restart is pending.'
            WHEN '80040715' THEN 'ConfigMgr Custom Error: No applicable updates specified in user install request.'
            WHEN '80040154' THEN 'ConfigMgr Custom Error: Class not registered. Try repairing the ConfigMgr client.'
            WHEN '80040668' THEN 'ConfigMgr Custom Error: Software update still detected as actionable after apply.'
            WHEN '80040600' THEN 'ConfigMgr Custom Error: Scan Tool Policy not found.'
            WHEN '80040602' THEN 'ConfigMgr Custom Error: Out of cache space.'
            WHEN '80040603' THEN 'ConfigMgr Custom Error: The ScanTool Policy has been removed, this prevents completion of Scan Operations. (E_SCANTOOL_NOTFOUND_INJOBQUEUE)'
            WHEN '80040604' THEN 'ConfigMgr Custom Error: Scan Tool has been Removed. (E_FAIL_SCAN_TOOL_REMOVED)'
            WHEN '80040605' THEN 'ConfigMgr Custom Error: Scan Tool Policy not found. (E_FAIL_OFFLINE_SCAN_HISTORY_NOT_FOUND)'
            WHEN '80040608' THEN 'ConfigMgr Custom Error: Out of cache space.'
            WHEN '80008201' THEN 'ConfigMgr Custom Error: Out of cache space.'
            WHEN '80008202' THEN 'ConfigMgr Custom Error: Cache size is smaller than requested content''s size.'
            WHEN '8007000E' THEN 'Win32 Error: Not enough storage is available to complete this operation.'
            WHEN '800705B4' THEN 'Win32 Error: The operation returned because the timeout period expired.'
            WHEN '80070050' THEN 'Win32 Error: The file already exists.'
            WHEN '80070005' THEN 'Win32 Error: Access Denied.'
            WHEN '8007041D' THEN 'Win32 Error: The service did not respond to the start or control request in a timely fashion.'
            WHEN '80004002' THEN 'Win32 Error: No such interface supported.'
            WHEN '80072EE2' THEN 'ERROR_INTERNET_TIMEOUT: The request has timed out.'
            WHEN '80072EEC' THEN 'ERROR_INTERNET_SHUTDOWN: WinINet support is being shut down or unloaded.'
            WHEN '80072F84' THEN 'ERROR_INTERNET_SERVER_UNREACHABLE: The Web site or server indicated is unreachable.'
            WHEN '80072F7D' THEN 'ERROR_INTERNET_SECURITY_CHANNEL_ERROR: The application experienced an internal error loading the SSL libraries.'
            WHEN '80072F89' THEN 'ERROR_INTERNET_SEC_INVALID_CERT: SSL certificate is invalid.'
            WHEN '80072F8A' THEN 'ERROR_INTERNET_SEC_CERT_REVOKED: SSL certificate was revoked.'
            WHEN '80072F19' THEN 'ERROR_INTERNET_SEC_CERT_REV_FAILED: Certificate revocation check failed.'
            WHEN '80072F17' THEN 'ERROR_INTERNET_SEC_CERT_ERRORS: The SSL certificate contains errors.'
            WHEN '80072F05' THEN 'ERROR_INTERNET_SEC_CERT_DATE_INVALID: SSL certificate date that was received from the server is bad. The certificate is expired.'
            WHEN '80072F06' THEN 'ERROR_INTERNET_SEC_CERT_CN_INVALID: SSL certificate common name (host name field) is incorrect—for example, if you entered www.server.com and the common name on the certificate says www.different.com.'
            ELSE 'Unknown Error'
        END,
    count (*) as NumberOfComputers,
    @UpdateSourceID as UpdateSourceID,
    @CollID as CollectionID
from v_UpdateScanStatus uss with (NOLOCK) 
join v_ClientCollectionMembers ccm with (NOLOCK) on ccm.ResourceID=uss.ResourceID and ccm.CollectionID=@CollID
join v_SoftwareUpdateSource sus with (NOLOCK) on sus.UpdateSource_ID=uss.UpdateSource_ID
left join v_AdvertisementStatusInformation asi with (NOLOCK) on uss.LastStatusMessageID&0x0000FFFF=asi.MessageID
where sus.UpdateSource_UniqueID=@UpdateSourceID and uss.LastStatusMessageID <> 0
group by uss.LastStatusMessageID, isnull(uss.LastErrorCode,0), asi.MessageName
order by count(*) desc

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Saud Al-Mishari, a Premier Field Engineer with Microsoft Premier Field Engineering, UK.

PKI – It’s a trust Thing!


Part 4 of what currently still stands at a four part series. But I have high hopes for further posts. I was going to start off the intro to this blog by congratulating Jason on avoiding falling into the cliché trap of using Bob & Alice and all their cryptographic friends but you will quickly see, as I did, that this was a rather premature notion on my part. 

As a side note, my Cryptography Lecturer at University was called Chuck. I’m quite surprised to see that “Chuck” is traditionally used as the bad guy that intercepts messages. I’m fairly sure he neglected to use his own name in the many slides in which he referred to this slightly weird bunch of fictional characters! Enough of my ramblings; Jason has a lot to say in today’s session:

 

If you have been reading the previous blog posts then you’ll know that Public Key Cryptography involves a Public Key which can be passed to whoever you want to give it to and a Private Key which you would never dream of passing to someone. If you haven’t already read the 3 posts preceding today’s, I suggest you have a look at these first as they build up and follow on.

You’ll also know that Public Key Cryptography can be used to offer:

  • ·         Confidentiality through encryption
  • ·         Integrity through hashing
  • ·         Authenticity through encryption and hashing

So, how does all this work?  Well imagine two people meeting up, let’s call them Alice and Bob.  All cryptographers know Alice and Bob very well indeed.  Alice and Bob meet up and want to exchange some encrypted data with each other.  So, the conversation goes rather like this:

 

Alice      :               “Hello, my name’s Alice.  Nice to meet you”

Bob        :               “Hello Alice, nice to meet you.  I’m Bob”

Some pleasantries (geeks call this handshaking)

Alice      :               “Bob, wouldn’t it be good to be able to communicate securely?”

Bob        :               “Yes Alice.  Let’s create some keys.  Oh here we are.  Have my PUBLIC Key called PuB”

Alice      :               “Thanks Bob.  I’ll store PuB in my address book.  Here’s my PUBLIC key PuA”

Bob        :               “Alice, that’s great – PuA’s going in my address book and I’ll be in touch.  Bye”

Some more pleasantries (teardown)

After some time, Bob decides to get back in touch with Alice, so he generates a Symmetric Session Key (SKB), cracks open his address book, pulls out PuA and encrypts SKB with PuA.  He then fires this over the network.  Alice receives the package, retrieves her PRIVATE KEY (PrA) and uses it to decrypt SKB/PuA.  So, Alice can see the B to A session key.  She could reverse this process and then we’d have two security associations and they could exchange data two ways.

OK, two problems here (apart from a very wooden script):

  • We have a real scalability issue here. In this scenario, if Alice wants to exchange data securely not only with Bob but also with Chuck and Dave, then she’s going to have to have a very similar conversation with these other guys too. That’s going to lead to a real problem with key management: if everyone needs to communicate securely then we’re going to have n x (n – 1) keys in circulation.
  • There’s a more prosaic problem. How did Bob know that Alice was in fact Alice, and how did Alice know that Bob was in fact Bob? They don’t – they are just taking each other on trust. What would have happened if Eve the evil hacker was in fact (insert your own maniacal sound effects here) playing the part of Bob and the part of Alice in between the two of them? If Alice receives a public key from Eve that purports to be from Bob (PuEB) and Bob receives a public key from Eve that purports to be from Alice (PuEA), then Bob thinks he’s speaking to Alice and Alice thinks she’s speaking to Bob, when in fact they are both experiencing a “Man in the Middle” attack.

Let’s address the first of those problems first, because that’s a little simpler. Alice and Bob decided to store each other’s public keys in their address books. That’s a great idea, but how about if they both had a shared address book which they could refer to? Outlook calls that address book the Global Address List. Now, all anyone needs to do when they want to exchange emails securely is publish their public key to the GAL and, when they want to send a message, issue a simple query to the GAL for the recipient’s public key.

We’ve been talking about emails here, but the exact same principle applies to any other kind of public key.  Some will be published to Active Directory and some will not.

The trust thing is an issue however, so let’s start looking at that.  Typically if you ask me for my public key then what I’ll do is encapsulate it in a certificate.  A certificate acts as a carrier for my Public Key.  It’s a feature of a certificate that it will have:

  • A unique identifier for this certificate
  • Some subject information – the subject is the entity to which the certificate was issued
  • A start date
  • An expiry date
  • Issuer information
  • It will be signed

These features allow us to put information into the certificate to better identify who I am and how long I should be trusted for.

SMS has always used certificates to identify the client but these are self-signed certificates.  In essence the client says to the Management Point “Please trust me because I say that I’m trustworthy”.  You can see this if you go into your certificates MMC and have a look in the SMS/Certificates section 

You’ll see that the expiration date is, well, a little longer than the expected lifetime of the laptop I am using, and if I open up one of these certificates, we'll see . . .

. . . The tell-tale signs of a self-signed certificate!  Even though it says the certificate is not trusted and the issuer is the same as the subject, is this certificate good enough?  Well, the answer is probably yes.  This certificate is being used to digitally sign communications from the client to the Management Point in a mixed mode SCCM environment, so it’s likely that other forms of authentication will also come into play.  This changes in a Native Mode SCCM environment, as you likely won’t have, for example, a Domain Controller to authenticate your workstation.

So, what is Alice doing when she embeds her Public Key within a certificate?  Well, in essence Alice is passing over the proof of her identity to a third party.  What is Bob doing when he looks at Alice’s certificate? Well, he’s deciding that Alice is Alice not by trusting in Alice but by trusting the issuer.

It’s very much like a passport.  When Angela arrives at passport control and presents her passport, the border agent’s decision to trust Angela rests not only on Angela looking like her photo (let’s call that her public key) but also on the passport being within its validity period and, crucially, on the passport having been issued by the United Kingdom Passports Agency.  A decision has been taken by the country which Angela is trying to enter that British passports can be trusted.  That’s not always the case – sometimes we need a visa to provide additional evidence of who we are (or an excuse for a country to fleece visitors) and sometimes certain passports are de facto untrusted!

 

What’s all this known as in Microsoft Windows?  If you’re still in your certificates MMC you’ll see a list of Trusted Root Certification Authorities.  So, a Certificate Authority is some entity that you go to in order to help prove your identity.

 

When you install a copy of Microsoft Windows you will already have a list of TRCAs on your system, quite simply because someone, somewhere in Redmond decided that the list should contain certificates from GeoTrust, VeriSign, Go Daddy et al, and not from Mikes-dirt-cheap-certs-4-U.COM.  When your domain admin gets his or her hands on your system through Group Policies they can add internal CAs to the list and could, if they really wanted, add Mikes-dirt-cheap-certs-4-U.COM too.  So, ultimately, who do you trust?  The CA or the SA?

So, let’s say I wanted to start selling something online.  Would you trust me with your credit card details?  You wouldn’t? So, I am going to need to do something to prove to you that I am who I say I am.  I’m going to go out and get a certificate from a Certificate Authority.  There is a whole bunch of CAs out there, which one am I going to choose?

  • The most important thing for me is that the certificate is trusted by you, so I need to second-guess what’s going to be in your TRCA list.  I know that VeriSign, GeoTrust and so on are likely to be in your list and Mikes-dirt-cheap-certs-4-U.COM is unlikely to be.
  • I’m likely to look for someone who is cost effective.

Of course, if all we are doing is something purely internal then we probably don’t need to go out and buy a certificate from someone else as we’ll be able to control the TRCA list internally and can add our own CA in via GPOs.

Let’s look at this with a well-known email provider.  I went to log in to my emails and I got this:

 

Somewhere, some checks had been performed on the identity of Microsoft to say that they are who they say they are – Internet Explorer also kindly showed them as green, so I am even more confident.  What happened here?  Opening up the certificate I see some interesting things.

 

First, I see that the Subject is the same as the website.  That’s a good start.  Then I see that the validity dates are good.  Even better.  I also see the Issuer Information: VeriSign Class 3 Extended Validation.   Let’s look at that a little more.  In the Certification Path we can see that the certificate which Microsoft supplied us in fact also contains the certificate for the issuing CA AND the certificate for the CAs above that:

 

So, what we see here is that at VeriSign they have a ROOT CA and a subordinate ISSUING CA.  The certificate of the issuing CA is issued and digitally signed by the ROOT CA.  What about the ROOT CA’s certificate?  Well, that’s self-signed – the ROOT CA is saying “you have to trust me because I’m the Root CA”

“Why should I trust the CA, in this case VeriSign?” Because they are open as to the steps they take to verify identity in The Issuer Statement – take a look for yourself.

The next question then, I guess, is “Why should I trust the ROOT CA?”  There we’ll leave the discussion for now.

 

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Jason Wallace, a Premier Field Engineer with Microsoft Premier Field Engineering, UK

Fully Automating P2V Migration for Software Assurance in ConfigMgr


I’ve had this topic on my to-do list for a few months now. I presented on P2V Migration at a customer day we hosted at Thames Valley Park in February. My hesitation came from not really knowing how much to cover in terms of the blog. I’m going to try and keep it short and sweet and focus on the particulars that pertain to ConfigMgr, as there are numerous posts already available giving an overview and introduction to P2V. If you’re not familiar with P2V for Software Assurance you can read up on it here http://technet.microsoft.com/en-us/library/gg180733.aspx

Whilst I was preparing for my presentation I had one major blocker in the demo I was setting up. Once the Windows 7 deployment had occurred, the old applications were failing to port from Windows XP to Windows 7. Manual intervention was necessary to log on to the Windows XP VM in order to allow the scripts to run.

The reason for this was that in order for a fully automated migration to occur the task sequence needed to configure auto logon on the pre-existing Windows XP box.

The way to achieve this automation, I discovered, was to set a Task Sequence variable prior to the P2V capture running. Create the task sequence variable with the name “AdminPassword” and, during the Capture VHD step, the local Administrator password on the source box will be set to whatever value you specify.
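As a side note, if you’d rather set the variable from a step inside the task sequence than define it in the console, here is a minimal sketch using the task sequence environment COM object. The password shown is just a placeholder, and the step would need to run before the Capture VHD step:

Dim tsEnv
' This COM object is only available while a task sequence is running
Set tsEnv = CreateObject("Microsoft.SMS.TSEnvironment")

' Placeholder value - the local Administrator password to set on the source box
tsEnv("AdminPassword") = "P@ssw0rd!"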

 

The same value will then be written into the registry to allow the automatic logon necessary to avoid user interaction when the task sequence attempts to pull the Windows XP applications across.
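For reference, this is the standard Windows autologon mechanism. A minimal sketch of the equivalent registry writes is below; the account name and password are placeholders, and be aware that DefaultPassword is stored in clear text:

Dim shell
Set shell = CreateObject("WScript.Shell")

Const WINLOGON = "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\"
shell.RegWrite WINLOGON & "AutoAdminLogon", "1", "REG_SZ"              ' turn autologon on
shell.RegWrite WINLOGON & "DefaultUserName", "Administrator", "REG_SZ" ' placeholder account
shell.RegWrite WINLOGON & "DefaultPassword", "P@ssw0rd!", "REG_SZ"     ' placeholder password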

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Rob York, a Premier Field Engineer with Microsoft Premier Field Engineering, UK. 

Improving ConfigMgr Console Performance - Disable TechCenter Homepage


Does the Configuration Manager 2007 console take a while to start?  If so, then this little tip will help.  The console home page attempts to connect to Microsoft whenever you open up the console. 

http://technet.microsoft.com/en-us/library/cc431367.aspx

To prevent the System Center Configuration Manager console from downloading the TechCenter home page

1. In a registry editor, locate the key HKLM\Software\Microsoft\ConfigMgr\AdminUI.
2. Create a new DWORD value DisableHomePage.

You do not need to set any data for the DisableHomePage value.
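If you'd rather script the change than edit the registry by hand, here is a minimal sketch. Note I'm assuming a 32-bit console host here; on a 64-bit OS the key may live under Wow6432Node, so check on your own systems:

Dim shell
Set shell = CreateObject("WScript.Shell")

' The data is irrelevant - creating the value is what disables the home page
shell.RegWrite "HKLM\Software\Microsoft\ConfigMgr\AdminUI\DisableHomePage", 0, "REG_DWORD"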

 

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Jason Wallace, a Premier Field Engineer with Microsoft Premier Field Engineering, UK

Are My Backups Really Taking Place?


Something which I have seen as a recurring theme with customers is backups failing without anyone realising.  While of course we would always recommend that you have a test environment with some Hyper-V images where you can test restores, for some customers this is not possible.  In these cases the customers were looking at Component Status and seeing something like this:

This means that the backups are taking place, right?  Wrong!   What flips this OK value to something other than OK is a simple counter which you can control under Status Summary, just like this – the default, as you can see, is 5 to send you into Critical.  For these customers the value of 5 was not being hit.  Of course this is just an example, and you should look at the threshold values for ALL of your components.

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Jason Wallace, a Premier Field Engineer with Microsoft Premier Field Engineering, UK

Improving ConfigMgr Console Performance - Removing the Action Pane


Sometimes I find that the console in ConfigMgr does not perform as quickly as I would like.  My last blog post on this topic showed how we can speed up the start time of the SCCM console by turning off a web check, but what about when I am in the console itself?

Well, the MMC 3 console is doing quite a lot of work to help me out as it is running.  In particular it’s keeping that Action Pane up to date with a whole bunch of context-sensitive menus.  After a while I realised that all this Action Pane was doing for me was taking up screen real estate, and on one occasion on a test system (yes, it really WAS a test environment) when it offered me a delete option I ended up accidentally deleting a system instead of a collection!

So I looked at what would happen when I turned it off.  Hey presto, the console was a whole lot faster.  Here’s a screenshot of how I now set my console.

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Jason Wallace, a Premier Field Engineer with Microsoft Premier Field Engineering, UK

Useful ConfigMgr Resources (Updated)


A while back (2 years to be precise) Saud posted a list of resources to do with ConfigMgr. Below is a well overdue update to that list:

ConfigMgr Resources/Information:

ConfigMgr Design Resources

OS Deployment

Out of Band Management

 

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Rob York, a Premier Field Engineer with Microsoft Premier Field Engineering, UK. 


What Accounts Should I Be Using?


Something which I have seen recently on a few occasions is customers using the same credentials for the ConfigMgr Client Push account and the Network Access Account.  Sometimes, for simplicity’s sake, the account is even a member of Domain Admins.  It’s actually really important that these two needs are not covered by the same user account – and even more so that neither is a Domain Admin!

 The ConfigMgr Client Push Account (http://technet.microsoft.com/en-us/library/bb632779.aspx)

For ConfigMgr client push to work, the ConfigMgr Site Server needs to connect to the ADMIN$ share of the prospective client computer.  Once it has done this, the ConfigMgr server will copy down CCMSETUP.EXE and set it running as a service.  CCMSETUP then starts and manages the rest of the installation.  In order to do this the ConfigMgr Client Push Account needs local admin permissions on the prospective client computer.

 The Network Access Account (http://technet.microsoft.com/en-us/library/bb680398.aspx)

Now that we have the client software installed, the client will download its policy and store it in WMI.  Something that is very likely to happen is that we will want to distribute some software to the client computer.  When a client is a member of an Active Directory Domain Services domain it will authenticate to a Distribution Point through its computer account and access the content.  What happens, however, if the client is a member of a non-trusted domain, or is a workgroup member, or is in a Windows PE boot image because we are deploying an operating system using System Center Configuration Manager at the time?  Well, that’s what the Network Access Account is there for.  We download those credentials as policy and store them in encrypted form as part of the client’s policy.

Really all we need to do is create an account within Active Directory Domain Services and grant no additional access permissions to it.  The account should not be able to log on interactively and it should not be able to add computers to the domain.  It certainly should be in no groups other than Domain Users.

 So, what’s the problem?

If someone is able to connect to WMI and read these credentials then under normal circumstances all they will have learned is the name and password of an account with minimal permissions.  If you have elevated the Network Access Account then the attacker could use it to try and access far more useful data.

 This post was contributed by Jason Wallace, a Premier Field Engineer with Microsoft Premier Field Engineering, UK

HELP! Where did the disk space on my Management Point go?


Recently I was working with a customer and we found that their ConfigMgr Management Point had an unexpectedly small amount of disk space left.  When we looked at the machine we found lots of IIS logs in the %SystemDrive%\inetpub\logs\LogFiles folder – eagle-eyed administrators will spot that this location is a change since IIS 6.  In the case of this customer, 26GB worth.

 

The Management Point is a web server, and IIS 7 is set by default to log all requests into log files.  While you can turn this feature off so that you are not logging transactions against the MP, this is not the default and may not be desirable.  Unfortunately IIS 7 does not offer the ability to automatically remove old log files from this location, so there is a tendency for them to build up.  It would be a good plan to periodically delete these files – a .VBS script (eg. http://msmvps.com/blogs/kwsupport/archive/2007/12/29/cleanup-old-log-files-revisited.aspx) run through a scheduled task, or through System Center Operations Manager, would be a good way to do this.  You could even use System Center Orchestrator - http://www.microsoft.com/systemcenter/en/us/opalis.aspx
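As a rough sketch of the scheduled-task approach: the path below is the IIS 7 default log location and the 30-day retention is an assumption – adjust both to taste:

Option Explicit

Const LOG_ROOT = "C:\inetpub\logs\LogFiles" ' default IIS 7 log location
Const MAX_AGE_DAYS = 30                     ' assumed retention period

Dim fso, siteFolder, file
Set fso = CreateObject("Scripting.FileSystemObject")

' One subfolder per web site, e.g. W3SVC1
For Each siteFolder In fso.GetFolder(LOG_ROOT).SubFolders
    For Each file In siteFolder.Files
        If DateDiff("d", file.DateLastModified, Now) > MAX_AGE_DAYS Then
            file.Delete
        End If
    Next
Next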

 

For interest, a screenshot of the IIS7 logging settings

 

As a side-note, while IIS logging has limited benefit in a ConfigMgr environment, periodically you may wish to look into the log files to see whether you are throwing many 503 errors – this could indicate an MP which is overloaded.

 

Also note that this applies to any role that uses IIS, so Distribution Points, Fallback Status Points and Server Locator Points are all affected too.

 

This post was contributed by Jason Wallace, a Premier Field Engineer with Microsoft Premier Field Engineering, UK

 

Testing SLP Availability


Hi all,

A quick one from me. Ever had site discovery issues on your client machine?

 

 You can stick one of the following into your web browser to confirm that the Server Locator Point is up and running:

 

 If you use AD Sites for boundaries

 

 http://<slp>/sms_slp/slp.dll?site&ad=<AD Site Name>

 

 If you use IP Subnets for boundaries

 

http://<slp>/sms_slp/slp.dll?site&ip=<Client Subnet ID>

 

 If you use IP ranges

 

http://<slp>/sms_slp/slp.dll?site&ir=<Client IP Address>

 

 Or if you use a combination you can concatenate the parameters

 

http://<slp>/sms_slp/slp.dll?site&ip=<Client Subnet ID>&ad=<AD Site Name>&ir=<Client IP Address>

 

So in my test environment if I run the following:

 

http://svr-cen/sms_slp/slp.dll?site&ip=192.168.10.0&ad=Default-First-Site-Name&ir=192.168.10.10

 

I get this response:

 

If I make my SLP unavailable, in this case by disabling authentication: 

 

 Entering an invalid / undefined boundary
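If you’d rather script these checks than paste URLs into a browser, here is a minimal sketch using MSXML2.XMLHTTP. The server name and boundary value are the ones from my test environment above – substitute your own:

Dim http
Set http = CreateObject("MSXML2.XMLHTTP")

' Same query as above - swap in your own SLP name and boundary values
http.Open "GET", "http://svr-cen/sms_slp/slp.dll?site&ad=Default-First-Site-Name", False
http.Send

WScript.Echo "HTTP status: " & http.Status
WScript.Echo http.ResponseText   ' the site assignment response, if the SLP is healthy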

 

 

This post was contributed by Rob York, a Premier Field Engineer with Microsoft Premier Field Engineering, UK.

Modelling Conceptual Entities in a SCOM 2007 Management Pack


Drum roll please… Below is the first Operations Manager blog to be submitted to the Manageability Guys! Meaning we can now live up to our name, as it isn’t just Configuration Manager articles.

My OpsMgr knowledge is limited at best but I do know how to be persistent; it’s taken a year of nagging and for the engineer in question to leave PFE for MCS but here it is!

 

On numerous occasions when putting together a service model for an OpsMgr management pack I have found that it would be worthwhile to model some application component, but that component doesn’t necessarily lend itself to being discovered in a traditional sense.

Example

About a year ago I was involved in a management pack authoring project where there was a need to model functional stages of a business process. These processing stages were carried out by a group of dedicated servers, and any server could run any process at any point in time.

A pair of performance counters existed for each stage, across all servers; together they revealed how well a business stage was performing.

 

 

The circles in the above diagram represent the performance counter pair for each business stage. Please note that for three business stages there would be three unique performance counter pairs, actually found on each processing server.

 

 

In this case it is clear that every business stage exists on each of the processing servers. This means that once we have carried out an initial discovery to work out whether a given windows server computer is a “processing server” we don’t need to do any further deduction. There is no need to carry out any subsequent discovery workflow on the agent because we know that processing servers always host the same set of stages.

 

“Why not use a singleton class for each business stage?” I hear you ask. A singleton class is, by definition, a class which doesn’t need to be instantiated via any discovery mechanism – sounds perfect!

Unfortunately singleton classes cannot be hosted, since there is no discovery process, and no reference to any key properties required by a hosting relationship. This means that we cannot associate the singleton class “business unit 1” with a particular processing server. In turn this means that we cannot get at the performance counters on “processing server 1” by targeting “business unit 1” – there is no “business unit 1” instance associated with the windows computer identified as “processing server 1”.

In the screenshot above I attempted to mark a hosted class as singleton.

 

So what we want is to create hosted instances of a class, without carrying out a redundant workflow on an agent. It turns out the answer is very simple; make direct use of the ClassSnapShotDataMapper module which is used by many OpsMgr discovery types…

The System.Discovery.ClassSnapShotDataMapper module

When we carry out discovery of class x we’re essentially looping through each instance of the target class, checking whether certain criteria are met, and mapping the target instance to a new instance of class x.

The latter part of this process is often carried out by the System.Discovery.ClassSnapShotDataMapper module as part of a composite workflow, for example;

 

 

FilteredRegistryDiscoveryProvider - http://msdn.microsoft.com/en-us/library/ff400178.aspx
WMISinglePropertyProvider2 -
http://msdn.microsoft.com/en-us/library/ee692988.aspx

 

 

If we don’t need any further information to ascertain whether an instance of class x exists, we can make use of the ClassSnapShotDataMapper directly, thereby avoiding the need for any agent workflow (awaiting confirmation that this workflow wouldn’t need to be executed on the agent).

 

 

 

This works in the same way as the other two, but comprises only a System.Discovery.Scheduler and a System.Discovery.ClassSnapShotDataMapper.  N.B. Though the System.Discovery.Scheduler is not shown in the other two diagrams, it is used within the Microsoft.Windows.Discovery.RegistryProvider and the Microsoft.Windows.WmiProvider composites.

 

ClassSnapShotDataMapper - http://msdn.microsoft.com/en-us/library/ee692953.aspx

Creating the ClassMapper Data Source

1)      Create a new empty management pack in The Management Pack Authoring Console.

2)      In “Type Library” > “Data sources” right click and choose “New” > “Composite Data Source”.

3)      Give it an Id and Name.

4)      In “Member Modules” add the “System.Discovery.Scheduler”.

5)      In the module configuration promote “Interval” and “SyncTime” (do this by clicking the arrow that appears in the right hand corner of that parameters value cell).

6)      Click “Apply”.

7)      In “Member Modules” add the “System.Discovery.ClassSnapShotDataMapper”.

8)      In the module configuration promote “ClassId”.

9)      Click “Apply”.

10)   In “Member Modules” set the NextModule for the scheduler to the mapper, and the mapper to module output.

11)   In “Configuration Schema” > “Schema References” add the “System.Discovery.MapperSchema”.

12)   In “Configuration Schema” > “Simple Configuration Schema”

  a.       Set “Interval” to type “integer”.
  b.      Set “SyncTime” to be non-required.
  c.       Add a new element with the name “InstanceSettings”.

13)   Click Apply.

14)   Save the management pack.

15)   Open the management pack in a text editor of your choice and change the following lines:

<xsd:element minOccurs="1" name="InstanceSettings" type="xsd:string" />

 

To

 

<xsd:element minOccurs="1" name="InstanceSettings" type="SettingsType" />

 

<ClassId>$Config/ClassId$</ClassId>

 

To

 

<ClassId>$Config/ClassId$</ClassId>
<InstanceSettings>$Config/InstanceSettings$</InstanceSettings>

 

 

N.B. If you attempt to declare InstanceSettings as a SettingsType in The Management Pack Authoring Console it will produce an error.

Click here to view an example

Using the ClassMapper in a Discovery

Now that we have a data source that will allow us to simply map instances of one class to another, we can use it in the following manner.

1)      Back in The Management Pack Authoring Console go to “Health Model” > “Discoveries”.

2)      Right click > “New” > “Custom Discovery”.

3)      Give it an Id and Name.

4)      Choose a target – an instance of our new class will be created for each instance of the target class.

5)      In “Discovery Classes” add our new class.

6)       In “Configuration” browse for the ClassMapper module and give it a friendly name.

7)      In the module configuration provide “Interval” and “ClassId” parameter values. ClassId should take the form “$MPElement[Name=”YOUR CLASS ID HERE”]$”.

8)      Next we need to provide instance settings. To do this click “Edit” and view the XML in a text editor of your choice.  Instance settings take the following form:

<InstanceSettings>
  <Settings>
    <Setting>
      <Name></Name>
      <Value></Value>
    </Setting>
  </Settings>
</InstanceSettings>

We need to provide a name value pair for each key property of the class we wish to discover. Please note this includes all key properties from any inherited relationships (e.g. Microsoft.Windows.LocalApplication inherits the hosting relationship Microsoft.Windows.ComputerHostsLocalApplication and consequently the key property Microsoft.Windows.Computer/PrincipalName).

Taking Microsoft.Windows.LocalApplication as an example, the instance settings would look as follows:

<InstanceSettings>
  <Settings>
    <Setting>
      <Name>$MPElement[Name="Windows!Microsoft.Windows.Computer"]/PrincipalName$</Name>
      <Value></Value>
    </Setting>
  </Settings>
</InstanceSettings>

9)      Save and close the editor.

10)   The module configuration will appear updated to include your instance settings.

11)   In the right hand corner of the value field for each instance setting an arrow will appear (on clicking into the field). This can be used to browse for any required properties of the target class, e.g. $Target/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$

12)   Click “Apply”.

Click here to view an example.

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Gavin Ackroyd, a Developer Consultant with Microsoft Consulting Services, UK.

Disabling User Program Notification for Virtual Applications


I’ve been sitting on this for a little while, partly due to getting confirmation I wasn’t doing anything unsupported, partly because I haven’t had time and partly a couple of other reasons I won’t go in to!

 

The good news is I have finally found the time to write it up, and I really hope that you find it of use. I hope it may even entice some of you Full Infrastructure types over to the world of ConfigMgr for delivering your virtual applications, and ease the experience for those that have already embraced the ConfigMgr way of life :)

 

For those that do use ConfigMgr, you will probably be familiar with the problem; you may have even already used the SDK to achieve exactly what I am about to outline. But first of all a summary of the problem:

 

The problem is that a virtual application package has no concept of a program, at least not as far as the console goes. This is because all virtual application packages in ConfigMgr effectively have the same program. Using the GUID of the virtual application package, ConfigMgr publishes the file type associations and shortcuts to the client and then adds the package to the App-V cache using SFTMIME. This behaviour is the same for every virtual application package, whether it’s a whole suite made up of multiple applications, like Office, or a simple little utility with one application in the package.

 

Now, in the general day-to-day running and using of virtual apps this makes very little odds, either to the ConfigMgr admin who is importing and publishing the virtual application or to the user, sat at their desk, consuming the application. But if we switch focus slightly to standard software distribution, where we DO have the concept of packages and programs, there is a tick box that can be very useful in terms of user experience, and that is to suppress program notifications. What this means in simple terms is that on a program-by-program basis you can prevent the user from knowing anything about the advertisement that is about to run.

One potentially appealing side effect of this removal of user notification is that the ConfigMgr client does not go through the user notification countdown (by default five minutes); instead the program will execute as soon as the policy dictates.

 

Unfortunately, because we are unable to access the programs of a Virtual Application Package through the console, we are unable to disable this notification on an individual basis. The only option the console affords is to use the site wide setting on the “Advertised Programs Client Agent” and disable program notification for all advertisements, both standard and virtual.

Now this, whilst being a valid option in theory, is not something most customers want to implement, because in an environment where standard software distribution, patching and/or Operating System Deployment is being used alongside virtual application delivery, this may cause more pain than good if users aren’t notified about upcoming installs and, even worse, reboots!

 

Now, preventing reboots is one of the many reasons I’ve seen customers move to virtual applications, because it means that a new application can be deployed to a user without inconveniencing them in the middle of the working day. With traditional software distribution this becomes a trade-off between the user getting the new application in order to work and not interrupting the working day.

 

But with virtual applications, if we don’t ever need to reboot and we want to avoid bothering the user with notifications; wouldn’t it be great if we could disable user notification, but only for virtual apps?!

 

Another manifestation of not being able to alter notification behaviour on virtual applications is the deployment time, especially in a hot-desking environment. By default the countdown is 5 minutes; you can change this on the advertisement, but the lowest you can set it to is 1 minute. So if a user logs on to a new machine, they will either be prompted for each new application or have to wait for each countdown to reach zero and for the advertisement to fire. Either way, waiting a minute for each app or clicking through notifications won’t make for the happiest of users!

 

So to the solution then!

 

As I mentioned, there is no concept of a program in a virtual application package as far as the console is concerned. If we dig a bit deeper, into the WMI of the server hosting my SMS Provider, we can see that the console is shielding the admin from the reality.

 

 

As you can see, at the bottom of the list of instances there are 2 programs, each with the ProgramName of [Virtual application] – this is the default, hidden program within a virtual application package I mentioned earlier. You will also see that each [Virtual application] belongs to a different package (in this case Visio Viewer and the ConfigMgr SDK, from which I am working), and if we have a look at the data within those instances, we will see that they are pretty much identical, aside from the GUID and package ID!

 

There is nothing that stands out as the toggle switch for enabling and disabling notification, but according to the SDK documentation on TechNet, ProgramFlags is the property of the SMS_Program class that we need to interest ourselves in. Annoyingly, this a) doesn’t make much sense when looking at the raw value and b) is responsible for a whole lot more than the notification settings. However, helpfully the property is well documented here: http://msdn.microsoft.com/en-us/library/cc144361.aspx and if we flip the 0x00000400 bit then we change the notification behaviour.

As you can see from the screenshot above, my virtual applications all get created with the ProgramFlags attribute set to 135307273, and in that value the 0x400 bit – the one that suppresses the countdown – is clear, so notification is on. So if we add the value of 1024, which is 400 in hex, we set that bit and turn off the notification. And to do so, we need to bring together a few more components from the SDK!
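To make the arithmetic concrete, a quick sketch; using Or to set the bit is a little safer than straight addition, since Or is harmless if the bit is already set:

Dim oldFlags, newFlags
oldFlags = 135307273          ' the default ProgramFlags value on my virtual application programs
newFlags = oldFlags Or &H400  ' set the 0x400 "suppress countdown" bit
WScript.Echo newFlags         ' 135308297 - notification off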

 

Using http://msdn.microsoft.com/en-us/library/cc145284.aspx connect to the site provider

Using http://msdn.microsoft.com/en-us/library/cc145701.aspx get the instance of the package and program from the SMS_Program Class

As per http://msdn.microsoft.com/en-us/library/cc144361.aspx, adding 1024 (hex 400) to the program flag’s current value sets the flag to not display a countdown.

Use http://msdn.microsoft.com/en-us/library/cc145701.aspx to save the value in the Server WMI.

And below is my rough and ready example that simply sets the value to be 1024 more than the default value. This script is purely an example and is intended as a starting point for a much more polished one. If for no other reason, it would be rather time-consuming to run the script for each Virtual Application Package in your environment, updating the static values as you go.

 

Using this code I was able to reduce the deployment time of three Virtual Applications from over 3 minutes to just 9 seconds simply by removing the user prompting!

 

<codesnippet>

Dim connection
Dim prog
Dim computer
Dim userName
Dim userPassword

computer = "svr-sccm"
userName = ""
userPassword = ""

Set connection = Connect(computer, userName, userPassword)

If Err.Number <> 0 Then
    WScript.Echo "Call to connect failed"
End If

Call DisableProgramNotification(connection, prog)

' Fetch the hidden [Virtual application] program and set its ProgramFlags
' to the "no countdown" value. The PackageID is hard-coded for this example.
Sub DisableProgramNotification(connection, prog)

    ' Get the specific program instance to modify.
    Set prog = connection.Get("SMS_Program.PackageID='CEN00065'," & _
        "ProgramName='[Virtual application]'")

    ' Set the new property value.
    prog.ProgramFlags = 135308297 ' 135307273 = notification on, 135308297 = notification off

    ' Save the program.
    prog.Put_

End Sub

Function Connect(server, userName, userPassword)

    On Error Resume Next

    Dim net
    Dim swbemLocator
    Dim swbemServices
    Dim providerLoc
    Dim location

    Set swbemLocator = CreateObject("WbemScripting.SWbemLocator")
    swbemLocator.Security_.AuthenticationLevel = 6 ' Packet privacy

    ' If the server is local, don't supply credentials.
    Set net = CreateObject("WScript.Network")
    If UCase(net.ComputerName) = UCase(server) Then
        userName = ""
        userPassword = ""
        server = "."
    End If

    ' Connect to the server.
    Set swbemServices = swbemLocator.ConnectServer(server, "root\sms", userName, userPassword)
    If Err.Number <> 0 Then
        WScript.Echo "Couldn't connect: " & Err.Description
        Connect = Null
        Exit Function
    End If

    ' Determine where the provider is and connect.
    Set providerLoc = swbemServices.InstancesOf("SMS_ProviderLocation")

    For Each location In providerLoc
        If location.ProviderForLocalSite = True Then
            Set swbemServices = swbemLocator.ConnectServer( _
                location.Machine, "root\sms\site_" & location.SiteCode, _
                userName, userPassword)
            If Err.Number <> 0 Then
                WScript.Echo "Couldn't connect: " & Err.Description
                Connect = Null
                Exit Function
            End If
            Set Connect = swbemServices
            Exit Function
        End If
    Next

    Set Connect = Null ' Failed to connect.

End Function

</codesnippet>

 

Disclaimer: The information on this site is provided "AS IS" with no warranties, confers no rights, and is not supported by the authors or Microsoft Corporation. Use of included script samples are subject to the terms specified in the Terms of Use.

This post was contributed by Rob York, a Premier Field Engineer with Microsoft Premier Field Engineering, UK.
