
Why Is That Elephant In the Room?...View From The Top

This is the latest in the View From the Top series of blog posts written by the management of the Microsoft.com Operations and Portal Team. This contribution is from Todd Weeks, Sr. Director.

Have you ever had a manager tell you “Work Smarter”, and then been really frustrated or even offended because it implied that you weren’t? Well, as a manager, I have had the overwhelming desire to actually use this phrase, but instead of just saying it and frustrating or offending, I’ve decided to add a little context to it. As I have boiled it down, there are a couple of fairly universal things that almost always impact our ability to “Work Smarter”.

 

The first thing has to do with the title of this post. For many reasons, in almost all projects, teams let some of the hard questions and concerns go unattended. But as you have probably all noticed, the longer you let a lagging issue stay a lagging issue, the more disruptive it becomes to a project. What is tough about finally addressing the “Elephant in the Room” (the issue nobody seems to want to talk about but everyone knows is there) is that it is most likely going to cause conflict, and people usually want to get their jobs done without conflict. This ties into not “Working Smarter” because if you don’t address the issue, everyone will not be on the same page and heading in the exact same direction. When there is a lack of agreement or understanding, people still do work, code is still written, milestones are still checked off; but will that work all need to be re-done to get us back on track when we finally do decide to address the issue? Usually the longer your teams avoid addressing large issues, the more re-work and additional work is required to come together. We all have full workloads, but by knowingly avoiding issues everyone sees are there, we are knowingly adding work to our plates for that project which has absolutely no value. It actually has negative value, because you will eventually need to do more work to reach the same goal.

 

So, how do we bridge this social gap and begin to inspire people to address conflict more easily? There are many tools out there today; the one we are trying throughout the team is called a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. While many project teams use SWOTs to look at their projects in the initial phase, we are going to use them as the monthly or milestone report structure for all our projects. What we are looking to do is have the “issues” addressed sooner by doing SWOTs more frequently than normal.

 

What is great about this process is that the issue can come up in many forms in the SWOT. Perhaps fixing the issue is an “Opportunity”; now it can be broached in a more positive light, possibly avoiding conflict. But an issue may also come up as a “Threat” or “Weakness”, and bringing threats, weaknesses and opportunities up as part of the normal process helps to break down some of the social barriers that might stop people from raising the issues.

 

The final piece that can’t be forgotten when using a new process or tool like SWOT is to reward the behavior of finding or bringing up these issues in a way that resolves them, and more importantly resolves them without tension. If people on a team are seeing potential problems and getting them addressed and solved before they cause more work, they have just helped everyone on that project “Work Smarter”. And as you reward people and look to highlight the greater impact they may have on a team, the SWOT is a great vehicle for tracking where the ideas came from and their impacts. Now people are not just getting their jobs done and feeling inspired to approach hard issues; they are looking out for and helping others avoid unnecessary work so they too can get their jobs done. Make that behavior core to your reward systems and you will see the culture of your team change, and people will “Work Smarter”.

 

The second thing I quickly wanted to touch on that drives me to want to say “Work Smarter” is hearing people say “that’s not my problem”, or pushing back on something someone is asking of them. Now, you just can’t take on everything, but the way you address someone asking you about work that isn’t your deliverable can make all the difference. The small amount of time it takes to pay attention and get that person directed to the right person may be a huge time savings. Taking just a few moments to ask yourself, even though this isn’t your deliverable, “Can I help?”, might save hours. Who hasn’t seen those frustrating mail strings where people debate who should do something? And when you look at the amount of time put into that mail string, it was often more time than it would have taken to just do the work.

 

The goal of the entire team, working as a unit, should be to get work on and off its plate efficiently as a group. We don’t want a culture of randomization where we are just looking for ways to solve quick small problems that aren’t ours, but on a case-by-case basis, see if a bit of your time might actually go a long way toward saving not only your own time in the future, but others’ as well. And as a bonus: getting something done means it isn’t out there hanging on the group’s “to-do” list. When it comes down to it, taking the time to just listen usually only takes a minute or two, and more importantly it reinforces the behavior a team should aspire to, where people are willing to ask questions and be open with one another.

 

As a manager, you should be looking to say to your team, “Work Smarter”. For me, I want to be prescriptive when I say it so it can have the most impact and achieve the desired result. And I know that if I am going to ask, having tools in mind like a SWOT analysis, and then reinforcing the behavior with our reward systems, goes a long way toward our group “Working Smarter”, and it will help me as a manager not just randomize my team by letting 200 people each try to figure out “the goal.”

Posted by MSCOM | 1 Comments

Add a little Development, mix in some Security, a dash of Program Management, apply a liberal amount of IIS and SQL…Part of the Recipe for MSCOM Operations’ Blog

 

Readers of this blog might have noticed (or been puzzled by) the variety of subject matter that we present. We have had blog posts ranging from “Why Dogfood?” to “Scaling Your Windows…and other TCP/IP Enhancements” to “Where Oh Where Did All of the Microsoft.com SQL Clusters Go?” So what drives the topics for this blog going forward?

 

Very recently we had a re-organization that added to our traditional Operations Engineering charter. We are now the Microsoft.com Operations and Portal Team. This realignment coupled the traditional role of Microsoft.com Operations with a Business Infrastructure team, which includes the Lab Hosting Team, Business Management, Services Management and Release Management; and the Portal Development Team, which includes Program Management, Development and Test.

 

MSCOM Operations (the System Engineers that provide enterprise engineering) have worked with these teams for a long time. These are the functional teams that it takes to develop software, get it out into production and run it effectively and efficiently. This new tightly aligned structure allows us to take this powerful engine to a new level of performance and maturity.

 

The goal of this blog is still to provide IT Professionals with information that they can use. Each of our sub-teams has specific roles and responsibilities in this enterprise, and each is responsible for producing a blog post that correlates to its specific area on a rotating basis. Simple math here: there are twelve of these sub-teams, which equates to one blog post every twelve weeks from each team.

 

Does that mean that if you read a great IIS blog post from us you will have to wait 12 weeks before that topic comes up again? Not necessarily; we encourage all of our folks to submit posts anytime they want. Do they all deliver on time? I wish I could say that they did, but those teams do still have their “day jobs” as the first priority. Despite having to actually work (imagine that), these folks regularly provide timely information to the IT Pro community.

 

Here are the functional teams that will provide content for this blog site:

 

Evangelism – where we get to coordinate this blog and other customer-facing interactions: web casts, articles, white papers and customer engagements.

 

MSCOM Ops Management – what we call “View From the Top”, that gives MSCOM management a blog forum to address various management related subjects.

 

Web Engineering – written by the web engineers that actually run the MSCOM IIS servers and our internet facing web environment.

 

SQL Engineering ‑ written by the SQL engineers that actually run the MSCOM SQL servers and our backend environment.

 

Debug – a team of senior engineers that specialize in advanced troubleshooting techniques and a deep knowledge of the MSCOM Platform from the Windows Server OS level through the application level.

 

Program Management – an essential team that plays an integral role in our development of the next iteration of the MSCOM portal. These folks are tasked with a variety of deliverables including (but not limited to) writing specifications, keeping the projects on track and ensuring that project status is properly communicated.

 

Development – these folks are writing the code that will constitute the aforementioned next iteration of the MSCOM portal, among other things.

 

Test – the Developer’s best friend; folks that work to ensure the new code is bug-free when it hits the web.

 

Release Management – this team has worked closely with MSCOM Ops for years, as an extension of the product development team efforts. They have recently been re-organized into the central services organization that oversees policy, process and business management for both MSCOM Ops and the MSCOM Portal product development team. This team is responsible for owning the Release Criteria and ensuring product releases deploy smoothly with minimal customer impact into the various Ops managed environments.

 

Service Management ‑ responsible for on-boarding new customers to MSCOM, working with existing customers to provide guidance and smooth the way for releases.

 

Security/Architecture ‑ the folks that are keeping the hackers at bay, hardening our infrastructure and providing architectural guidance.

 

Tools – we are fortunate to have a dedicated Tools team, a talented group of developers that provides us with custom applications that help us monitor, report on and manage all of our environments.

 

If you have a topic you are interested in, send us an email at mscomblg@microsoft.com.

Posted by MSCOM | 1 Comments

MSCOM Operations Presents At DRJ Conference

Recently Sunjeev Pandey and Paul Wright presented Microsoft.com Operations’ approach to resilience, availability, and DR at the Disaster Recovery Journal’s DRJ Conference in San Diego. They had to make some last-minute changes to the presentation and promised to post the latest deck on our blog. So, without further ado, here is the link to download the presentation.

 

Please feel free to post any questions here and we’ll answer them as best we can.

 

Posted by MSCOM | 0 Comments

STUFF YOU CAN USE!! Finger Saving Good – No Touch Administration

Remote Desktop has changed the way we interact with our servers.  However, if you have a farm of 50+ servers…heck, even 10 servers, having to Remote into each one can not only be time consuming but can cause your fingerprints to wear off.  Utilizing tools such as WMI (Windows Management Instrumentation), For loops and PSEXEC, you can administer a large number of servers remotely with minimal joint pain.  The examples below progress from beginning WMI usage to more advanced techniques.  These scripts are meant to be a base to start from and to help you build a powerful toolset.

 

Index of scripts below:

Example 1: A basic WMI Script to warm up with.

Example 2: Collect Connection and Processor Information from 2 separate server lists and email the results.

Example 3: Need an inventory from a large number of servers, with the results written to a file? A personal favorite!

Example 4: Have an unnecessary service which you need to stop and disable across your environment?

Example 5: Ever wanted to send an email without configuring Outlook Express to test an SMTP server?

Example 6: Need to know who is a member of a group across your environment?

Example 7: Use a For Loop in the command line to execute a command across multiple servers.

Example 8: PSEXEC – Every Engineer’s necessity.

 

To get a full description and the many functions available for WMI, check out http://windowssdk.msdn.microsoft.com/en-us/library/ms758280.aspx.

 

Example 1: Understanding WMI Basics.  Save file as WMIBasics.vbs.

 

' ------------------------------------------------------------------------------

' August 11th, 2006

' A basic WMI Script to capture performance information

' Output to Screen

'

' Created by Brian Carney

'   bcarney@microsoft.com

'   Systems Engineer

'   Microsoft.Com Operations Team

'

' Copy Contents and paste into a file called WMIBasics.vbs

' Run from a Command Prompt to see output and error information.

' Text encapsulated with <> indicates information required from you.

' ------------------------------------------------------------------------------

 

' Set Variable to call Server List

Dim oFSO

Set oFSO = CreateObject("Scripting.FileSystemObject")

Dim oFile

Set oFile = oFSO.OpenTextFile("<Your Server List.txt>")

 

'Set Variable to pass to Function GetEnvA_Info.

Dim sServer

 

'Loop through server list until each server has been processed.

Do while oFile.AtEndOfStream =false

            sServer = oFile.ReadLine

            'Call Function GetEnvA and pass server name.

            GetEnvA_Info sServer

Loop

 

'Close file when completed.

oFile.Close

 

Function GetEnvA_Info(strServer)

           

            'If the script encounters an error, continue.  Remark out to see error information.

            On Error Resume Next

                     

            'These next two lines are the power of WMI.  You can replace these with a seemingly endless number of WMI calls.
            'To see all WMI calls, go to http://windowssdk.msdn.microsoft.com/en-us/library/ms758280.aspx

 

            'Call WMI Object Win32_PerfFormattedData_Tcpip_TCPV4 to determine the current connection count.

            Set objWMIService = GetObject("winmgmts:\\" & strServer & "\root\cimv2")

            Set colItems = objWMIService.ExecQuery("Select * from Win32_PerfFormattedData_Tcpip_TCPV4",,48)

 

                        'Loop through each WMI Object

                        For Each objItem in colItems

                                    'Set WMI Variable to Current Connections

                                    CountConnections = objItem.ConnectionsEstablished

                        Next

                                    'Display Information to Computer Screen

                                    wscript.echo strServer & ": " & CountConnections

End Function

 

Example 2: Collecting Server Information and email the results.  Save to file: EnvironmentStatsEmail.vbs.

 

' ------------------------------------------------------------------------------

' August 10th, 2006

' Script will Collect Information from 2 Environments: EnvironmentA, EnvironmentB.

' It will collect this information and email the results to a specified email recipient.

'

' Created by Brian Carney

'   bcarney@microsoft.com

'   Systems Engineer

'   Microsoft.Com Operations Team

'

' Copy Contents and paste into a file called EnvironmentStatsEmail.vbs

' Run from a Command Prompt to see output and error information.

' Text encapsulated with <> indicates information required from you.

' ------------------------------------------------------------------------------

 

' Set Variables to 0 to ensure absolute zero starting point.

 

' EnvironmentA Variables

EnvA_ServerCount = 0

EnvA_Proc = 0

EnvA_CountConnections = 0

EnvA_CountTotal = 0

EnvA_ProcTotal = 0

EnvA_ProcAverage = 0

 

' EnvironmentB Variables

EnvB_ServerCount = 0

EnvB_Proc = 0

EnvB_CountConnections = 0

EnvB_CountTotal = 0

EnvB_ProcTotal = 0

EnvB_ProcAverage = 0

 

' Begin Process EnvironmentA Server List

' Set Variable to call Server List

Dim oFile

Set oFSO = CreateObject("Scripting.FileSystemObject")

Set oFile = oFSO.OpenTextFile("<YourServerList-EnvA.txt>")

 

'Set Variable to pass to Function GetEnvA_Info.

Dim sServer

 

'Loop through server list until each server has been processed.

Do while oFile.AtEndOfStream = false

       sServer = oFile.ReadLine

       'Call Function GetEnvA_Info and pass the server name.

       GetEnvA_Info sServer

Loop

 

'Close file when completed.

oFile.Close

 

Function GetEnvA_Info(strServerName)

 

    'If the script encounters an error, continue.  Remark out to see error information.

       On Error Resume Next

      

       'Call WMI Object Win32_PerfFormattedData_Tcpip_TCPV4 to determine the current connection count.

       Set objWMIService = GetObject("winmgmts:\\" & strServerName & "\root\cimv2")

       Set colItems = objWMIService.ExecQuery("Select * from Win32_PerfFormattedData_Tcpip_TCPV4",,48)

 

        'Loop through each WMI Object

              For Each objItem in colItems

                     EnvA_CountConnections = objItem.ConnectionsEstablished

              Next

                  'Sum up Connections Total and keep a running tally

                     EnvA_CountTotal = EnvA_CountTotal + EnvA_CountConnections

                    

                     'Increment each pass of the function to determine how many servers are analyzed.

                     EnvA_ServerCount = EnvA_ServerCount + 1

                    

       'Call WMI Function Win32_Processor to determine processor utilization.

       Set objWMIService = GetObject("winmgmts:\\" & strServerName & "\root\cimv2")

       Set colItems = objWMIService.ExecQuery("Select * from Win32_Processor",,48)

 

        'Loop through each WMI Object

              For Each objItem in colItems

                  'Set Variable to Current processor utilization

                     EnvA_Proc = objItem.LoadPercentage

              Next

                  'Add processor utilization to running tally

                     EnvA_ProcTotal = EnvA_ProcTotal + EnvA_Proc

 

End Function

 

'Determine Average Processor statistic

EnvA_ProcAverage = Round(EnvA_ProcTotal / EnvA_ServerCount ,2)

 

' Begin Process EnvironmentB Server List

' Set Variable to call Server List

Dim oFSO

Set oFSO = CreateObject("Scripting.FileSystemObject")

Set oFile = oFSO.OpenTextFile("<YourServerList-EnvB.txt>")

 

'Loop through server list until each server has been processed.

Do while oFile.AtEndOfStream =false

       sServer = oFile.ReadLine

       'Call Function GetEnvB_Info and pass the server name.

       GetEnvB_Info sServer

Loop

 

'Close file when completed.

oFile.Close

 

Function GetEnvB_Info(strServerName)

 

    'If the script encounters an error, continue.  Remark out to see error information.

       On Error Resume Next

      

       'Call WMI Object Win32_PerfFormattedData_Tcpip_TCPV4 to determine the current connection count.

       Set objWMIService = GetObject("winmgmts:\\" & strServerName & "\root\cimv2")

       Set colItems = objWMIService.ExecQuery("Select * from Win32_PerfFormattedData_Tcpip_TCPV4",,48)

 

        'Loop through each WMI Object

              For Each objItem in colItems

                     EnvB_CountConnections = objItem.ConnectionsEstablished

              Next

                  'Sum up Connections Total and keep a running tally

                     EnvB_CountTotal = EnvB_CountTotal + EnvB_CountConnections

                    

                     'Increment each pass of the function to determine how many servers are analyzed.

                     EnvB_ServerCount = EnvB_ServerCount + 1

                    

       Set objWMIService = GetObject("winmgmts:\\" & strServerName & "\root\cimv2")

       Set colItems = objWMIService.ExecQuery("Select * from Win32_Processor",,48)

 

        'Loop through each WMI Object

              For Each objItem in colItems

                  'Set Variable to Current processor utilization

                     EnvB_Proc = objItem.LoadPercentage

              Next

                  'Add processor utilization to running tally

                     EnvB_ProcTotal = EnvB_ProcTotal + EnvB_Proc

                    

End Function

 

'Determine Average Processor statistic

EnvB_ProcAverage = Round(EnvB_ProcTotal / EnvB_ServerCount,2)

 

'Send Email with All Statistics gathered above

       set msg = WScript.CreateObject("CDO.Message")

              msg.From = "<FromEmailAddress>"

              msg.To = "<ToEmailAddress>"

              msg.Subject = "Performance Counters from Environment A and B."

              msg.TextBody = "Script Completion Time: " & Date() & " at " & Time() _

                  & vbCrLf & vbCrLf & "Number of EnvA_ Servers Analyzed: " & EnvA_ServerCount _

                  & vbCrLf & "Total EnvA_ Connections: " & EnvA_CountTotal _

                  & vbCrLf & "Average EnvA_ Proc: " & EnvA_ProcAverage _

                  & vbCrLf & vbCrLf & "Number of EnvB_ Servers Analyzed: " & EnvB_ServerCount _

                  & vbCrLf & "Total EnvB_ Connections: " & EnvB_CountTotal _

                  & vbCrLf & "Average EnvB_ Proc: " & EnvB_ProcAverage

              msg.Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "<InsertYourSMTPAddress>"

msg.Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2

              msg.Configuration.Fields.Update

       msg.Send

 

Example 3: Inventory: Having an accurate Inventory Script can be a time saver.  Save to file: Inventory.vbs

 

' ------------------------------------------------------------------------------

' August 11th, 2006

' A Script to collect Inventory Information and write it to a file

'

' Created by Brian Carney

'   bcarney@microsoft.com

'   Systems Engineer

'   Microsoft.Com Operations Team

'

' Copy Contents and paste into a file called Inventory.vbs

' Run from a Command Prompt to see output and error information.

' Text encapsulated with <> indicates information required from you.

' ------------------------------------------------------------------------------

 

LogFile = "<OutputFile.txt>" 'You may need to create this file in advance.

 

Const ForWriting = 2

Const HARD_DISK = 3

 

'Set Variable to write inventory data to a file.

Set objFSO = CreateObject("Scripting.FileSystemObject")

Set objFile = objFSO.OpenTextFile(LogFile, ForWriting)

 

' Set Variable to call Server List

Dim oFSO

Set oFSO = CreateObject("Scripting.FileSystemObject")

Dim oFile

' Replace <YourServerList.txt> with your list of servers.

Set oFile = oFSO.OpenTextFile("<YourServerList.txt>")

 

Dim sServer

 

'Loop through server list until each server has been processed.

Do while oFile.AtEndOfStream = false

              'Call Function GetInfo and pass server name.

            sServer = oFile.ReadLine

            GetInfo sServer

Loop

 

'Close file when completed.

oFile.Close

 

Function GetInfo(strComputer)

    'If the script encounters an error, continue.  Remark out to see error information.

    On Error Resume Next

 

    'Call WMI Object Win32_ComputerSystem to determine server information.

    Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\CIMV2")

    Set colItems = objWMIService.ExecQuery("SELECT * FROM Win32_ComputerSystem")

 

    'Loop through each WMI Object and collect specific information.

    For Each objItem In colItems

        WScript.Echo "Server Name: " & objItem.Name

        objFile.WriteLine "Server Name: " & objItem.Name         

        WScript.Echo "Manufacturer: " & objItem.Manufacturer

        objFile.WriteLine "Manufacturer: " & objItem.Manufacturer

        WScript.Echo "Model: " & objItem.Model

        objFile.WriteLine "Model: " & objItem.Model

        WScript.Echo "Number Of Processors (Includes MultiThread): " & objItem.NumberOfProcessors

        objFile.WriteLine "Number Of Processors (Includes MultiThread): " & objItem.NumberOfProcessors

           

    'Call WMI Object Win32_Processor to determine processor information.

    Set colProcessors = objWMIService.ExecQuery("Select MaxClockSpeed from Win32_Processor")

                ClockSpeed = 0

                TotalClockSpeed = 0

                ProcCount = 0

                    'Begin gathering information and storing in a variable.  Used at end of script to sum up processor information.

                    For Each objProcessor in colProcessors

                        ClockSpeed = objProcessor.MaxClockSpeed

                        TotalClockSpeed = ClockSpeed + TotalClockSpeed

                        ProcCount = ProcCount + 1

                    Next

                REM Wscript.echo "Proc Count: " & ProcCount

                WScript.Echo "Maximum Clock Speed: " & ((TotalClockSpeed / ProcCount)/1000) & " - GHZ"

                objFile.WriteLine "Maximum Clock Speed: " & ((TotalClockSpeed / ProcCount)/1000) & " - GHZ"

 

    'Call WMI Object Win32_SystemEnclosure to determine Serial Number.

    Set colSMBIOS = objWMIService.ExecQuery("Select * from Win32_SystemEnclosure")

 

        'Loop Through each item in object.

        For Each objSMBIOS in colSMBIOS

            Wscript.Echo "Serial Number: " & objSMBIOS.SerialNumber

            objFile.WriteLine "Serial Number: " & objSMBIOS.SerialNumber

        Next

       

    'Call WMI Object Win32_Processor to determine Processor load.

    Set ProcItems = objWMIService.ExecQuery("Select LoadPercentage from Win32_Processor",,48)

   

        'Loop Through each item in object.

        For Each ProcItem in ProcItems

            Proc = ProcItem.LoadPercentage

        Next

            'Write out to screen processor information as the script runs.

            Wscript.echo "Proc Usage: (1 Proc Sample Only) " & Proc & "%"

            objFile.WriteLine "Proc Usage: (1 Proc Sample Only) " & Proc & "%"

           

    'Call WMI Object Win32_PhysicalMemory to determine memory utilization.          

    Set MemItems = objWMIService.ExecQuery("Select Capacity from Win32_PhysicalMemory")

        Mem = 0

        TotalMem = 0

       

        'Loop Through each item in object.

        For Each MemItem in MemItems

            Mem = MemItem.Capacity

            TotalMem = Mem + TotalMem

        Next    

            Wscript.Echo "Mem Total: " & (TotalMem / 1000000000) & " - GB"

            objFile.WriteLine "Mem Total: " & (TotalMem / 1000000000) & " - GB"

           

    'Call WMI Object Win32_LogicalDisk to determine hard disk utilization.          

    Set colDisks = objWMIService.ExecQuery("Select * from Win32_LogicalDisk Where DriveType = " & HARD_DISK & "")

 

        'Loop Through each item in object.

        For Each objDisk in colDisks

            Wscript.Echo "DeviceID: "& vbTab & objDisk.DeviceID & "Free Space: " & (objDisk.FreeSpace / 1000000000) & " - GB"

            objFile.WriteLine "DeviceID: "& vbTab & objDisk.DeviceID & "Free Space: " & (objDisk.FreeSpace / 1000000000) & " - GB"

        Next

 

    'Call WMI Object IIsWebServerSetting to determine IIS Information.          

    Set IISWMIService = GetObject ("winmgmts:{authenticationLevel=pktPrivacy}\\" & strComputer & "\root\microsoftiisv2")

    Set IISLogItems = IISWMIService.ExecQuery("Select * from IIsWebServerSetting")

        'Loop Through each item in object.

        For Each IISLogItem in IISLogItems

            Wscript.Echo "Log File Directory: " & IISLogItem.LogFileDirectory

            objFile.WriteLine  "Log File Directory: " & IISLogItem.LogFileDirectory

        Next

 

    'Call WMI Object Win32_NetworkAdapterConfiguration to determine IP Information.          

    Set IPConfigSet = objWMIService.ExecQuery("Select * from Win32_NetworkAdapterConfiguration Where IPEnabled=TRUE")

       

        'Loop Through each item in object.

        For Each IPConfig in IPConfigSet

            If Not IsNull(IPConfig.IPAddress) Then

                For i=LBound(IPConfig.IPAddress) to UBound(IPConfig.IPAddress)

                    WScript.Echo "IP Address: " & IPConfig.IPAddress(i)

                    objFile.WriteLine "IP Address: " & IPConfig.IPAddress(i)

                Next

            End If

        Next                

 

    'Call WMI Object Win32_OperatingSystem to determine Service Pack Information.          

    Set colOperatingSystems = objWMIService.ExecQuery("Select * from Win32_OperatingSystem")

       

        'Loop Through each item in object.

        For Each objOperatingSystem in colOperatingSystems

            Wscript.Echo "Service Pack: " & objOperatingSystem.ServicePackMajorVersion & "." & objOperatingSystem.ServicePackMinorVersion

            objFile.WriteLine "Service Pack: " & objOperatingSystem.ServicePackMajorVersion & "." & objOperatingSystem.ServicePackMinorVersion

        Next

 

    'Call WMI Object Win32_PerfFormattedData_Tcpip_TCPV4 to determine Current Connection Information.          

    Set ConnItems = objWMIService.ExecQuery("Select * from Win32_PerfFormattedData_Tcpip_TCPV4",,48)

       

        'Loop Through each item in object.

        For Each ConnItem in ConnItems

            wscript.echo "Connections: " & ConnItem.ConnectionsEstablished

                objFile.WriteLine "Connections: " & ConnItem.ConnectionsEstablished

        Next

 

    'Call Registry Value to determine HTTPERR Folder Location.          

    Set oReg=GetObject("winmgmts:{impersonationLevel=impersonate}!\\" & strComputer & "\root\default:StdRegProv")

        Const HKEY_LOCAL_MACHINE = &H80000002

            strKeyPath = "SYSTEM\CurrentControlSet\Services\HTTP\Parameters\"

            strValueName = "ErrorLoggingDir"

            oReg.GetStringValue HKEY_LOCAL_MACHINE,strKeyPath,strValueName,strValue

            Wscript.Echo "HTTPERR Directory: " & strValue

            objFile.WriteLine "HTTPERR Folder: " & strValue

   

    'Call WMI Object Win32_OperatingSystem to determine Current Connection Information.          

    Set colOSItems = objWMIService.ExecQuery("SELECT * FROM Win32_OperatingSystem")

   

        'Loop Through each item in object.

        For Each objOSItem In colOSItems

            WScript.Echo "Windows Directory: " & objOSItem.WindowsDirectory

            objFile.WriteLine "OS Directory: " & objOSItem.WindowsDirectory

            WScript.Echo "System Directory: " & objOSItem.SystemDirectory

            objFile.WriteLine "System Directory: " & objOSItem.SystemDirectory

        Next

   

   

            WScript.Echo "----------------------------"

            objFile.WriteLine "----------------------------"

 

    Next

   

'End of the GetInfo Function

End Function

 

Example 4: Stop/Suspend Unnecessary Service.  Save to file: StopSuspendService.vbs

 

' ------------------------------------------------------------------------------

' August 11th, 2006

' A Script to Stop and Suspend an Unneeded Service

'

' Created by Brian Carney

'   bcarney@microsoft.com

'   Systems Engineer

'   Microsoft.Com Operations Team

'

' Copy Contents and paste into a file called StopSuspendService.vbs

' Run from a Command Prompt to see output and error information.

' Text encapsulated with <> indicates information required from you.

' ------------------------------------------------------------------------------

 

' Set Variable to call Server List

Dim oFSO

Set oFSO = CreateObject("Scripting.FileSystemObject")

Dim oFile

' Replace <YourServerList.txt> with your list of servers.

Set oFile = oFSO.OpenTextFile("<YourServerList.txt>")

Dim sServer

 

'Loop through server list until each server has been processed.

Do while oFile.AtEndOfStream =false

            'Call Function GetInfo and pass server name.

            sServer = oFile.ReadLine

            GetInfo sServer

Loop

 

'Close file when completed.

oFile.Close

 

Function GetInfo(Computer)

 

'If the script encounters an error, continue.  Remark out to see error information.

On Error Resume Next

 

    'First, stop any services that depend on the target service (hard-coded here as NetDDE; change to match your service).

    strComputer = Computer

    Set objWMIService = GetObject("winmgmts:" _

        & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")

    Set colServiceList = objWMIService.ExecQuery("Associators of " _

        & "{Win32_Service.Name='NetDDE'} Where " _

        & "AssocClass=Win32_DependentService " & "Role=Antecedent" )

   

        'Loop through each WMI Object.

        For Each objService in colServiceList

            objService.StopService()

        Next

            'Sleep to allow for service to stop.

            Wscript.Sleep 10000

 

    'Call WMI Object Win32_Service.  Replace <ServiceName> with service you would like to stop/disable.

    Set colServiceList = objWMIService.ExecQuery _

        ("Select * from Win32_Service where Name='<ServiceName>'")

        For Each objService in colServiceList

            errReturn = objService.StopService()

        Next

   

        'Disable Service.

        For Each objService in colServiceList

            errReturnCode = objService.Change( , , , , "Disabled")

        Next

   

Wscript.echo Computer & " - Complete"

 

'End of the GetInfo Function

End Function

 

Example 5: Send Email without an Email Client. (Very useful to call at the end of a script so you know when the script is done.)  Save to file: SendEmail.vbs

 

' ------------------------------------------------------------------------------

' August 11th, 2006

' Send an email without an email client.

'

' Created by Brian Carney

'   bcarney@microsoft.com

'   Systems Engineer

'   Microsoft.Com Operations Team

'

' Copy Contents and paste into a file called SendEmail.vbs

' Run from a Command Prompt to see output and error information.

' Text encapsulated with <> indicates information required from you.

' ------------------------------------------------------------------------------

 

        set msg = WScript.CreateObject("CDO.Message")

        msg.From = "<FromEmailAddress>"

        msg.To = "<ToEmailAddress>"

        msg.Subject = "TestEmail "

        msg.TextBody = "Hi, This is a Test Email"

        msg.Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "<IP Address of SMTP Server>"

        msg.Configuration.Fields("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2

        msg.Configuration.Fields.Update

        msg.Send

 

Example 6: Display Group Membership.  Save to file: DisplayGroupMembership.vbs

 

' Begin Script

'------------------------------------------------------------------------------

' August 11th, 2006

' Display group membership across a list of servers.

'

' Created by Brian Carney

'   bcarney@microsoft.com

'   Systems Engineer

'   Microsoft.Com Operations Team

'

' Copy Contents and paste into a file called DisplayGroupMembership.vbs

' Run from a Command Prompt to see output and error information.

' Text encapsulated with <> indicates information required from you.

' ------------------------------------------------------------------------------

 

' Set Variable to call Server List

Set objNetwork = CreateObject("WScript.Network")

Dim oFSO

Set oFSO = CreateObject("Scripting.FileSystemObject")

Dim oFile

' Replace <YourServerList.txt> with your list of servers.

Set oFile = oFSO.OpenTextFile("<YourServerList>")

Dim sServer

'Replace Administrators with any group.

strGroup = "Administrators" ' Or Any other Group

 

'Loop through server list until each server has been processed.

Do while oFile.AtEndOfStream =false

            'Call Function GetInfo and pass server name.

            sServer = oFile.ReadLine

            GetInfo sServer

Loop

 

'Close file when completed.

oFile.Close

 

Function GetInfo(Computer)

    'If the script encounters an error, continue.  Remark out to see error information.

    On Error Resume Next

    strComputer = Computer

       

        'Call to gather group members.

        Set objGroup = GetObject("WinNT://" & strComputer & "/" & strGroup & ",group")

            WScript.Echo "  Group members:"

            WScript.Echo "Computer: " & strComputer

               

                'Loop through each Object.

                For Each objMember In objGroup.Members

                    WScript.Echo "    " & objMember.Name

                Next

 

'End of the GetInfo Function

End Function

 

Example 7: Using a For Loop:

 

For /F %i in (<YourServerList.txt>) Do xcopy d:\test.txt \\%i\d$\test.txt

You can replace xcopy with any command you normally would use on a single server.

 

Log Parser:

For /F %i in (<YourServerList.txt>) Do logparser -q:on "Select Count(*) from \\%i\e$\wwwlog\W3SVC1\ex*.log where sc-status = '404'"

 

Find Current Connections:

For /F %i in (<YourServerList.txt>) do psexec \\%i netstat -an | find "ESTABLISHED" | FIND /c ":80"

 

Example 8: PSEXEC:

 

PSEXEC (http://www.sysinternals.com/) provides a powerful tool to remotely run applications.

 

Run a Command Prompt remotely:

psexec -u <Domain\User> -p <password> \\<ServerName> cmd

When you are done, type Exit.
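
PSEXEC can also fan a command out to every server in a list with its @file syntax (one server name per line; options may vary by PsExec version, so check psexec /? on yours):

psexec @<YourServerList.txt> ipconfig /flushdns

Any command you would normally run on a single server works the same way here.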

 

Additional Scripting Resources:

 

Microsoft Scripting Center:  http://www.microsoft.com/technet/scriptcenter/scripts/default.mspx?mfr=true

WMI Resource: http://windowssdk.msdn.microsoft.com/en-us/library/ms758323.aspx

Sysinternals: http://www.sysinternals.com/

 

Posted by MSCOM | 2 Comments

Keeping The Connections Open... HTTP Keep-Alives

HTTP Keep-Alives continue to be one of the most misunderstood settings in IIS.  This particular setting can have huge performance implications for your web site.  Take, for example, a typical browser that requests content from a web page. Without HTTP Keep-Alives enabled on the web server, each request for an element on that page, such as an image, will require a separate connection from the client.  The server in turn must use its resources to process each of these additional connections, which means the server must go through the whole TCP handshake process for each connection. For those of you that don’t entirely remember what that means, here is a brief explanation:

 

The client machine sends a TCP SYN (Synchronize)

The server receives the SYN and sends a SYN of its own, followed by an ACK (Acknowledgement).  This happens in the same TCP packet and is often referred to as SYN-ACK.  The acknowledgement informs the client that the server has received its data and is expecting the next segment of data bytes to be sent.

The client machine receives the SYN-ACK and sends an ACK.

 

Only after this handshake has taken place can data be transferred.  Both the client machine and server must maintain the port numbers and the sequence numbers used for each connection.
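
(If you want to watch these connections pile up on one of your servers, a quick netstat -an | find "ESTABLISHED" from a command prompt will show every established TCP connection.)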

 

In IIS 6.0, HTTP Keep-Alives are enabled by default; IIS will hold an inactive connection open for as long as the ConnectionTimeout value, which by default is 120 seconds.  Not having to handle the extra connections required to download multiple elements from a web page increases the server’s overall efficiency.
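
If you want to verify these settings on a remote server without opening IIS Manager, the same WMI techniques from our scripting posts apply. A quick sketch, assuming the IIS 6.0 WMI provider is installed and <ServerName> is replaced with your server (AllowKeepAlive and ConnectionTimeout are the relevant metabase properties on IIsWebServiceSetting):

' Check Keep-Alive settings on a remote IIS 6.0 server via the IIS WMI provider.
strComputer = "<ServerName>"
Set objWMIService = GetObject("winmgmts:{authenticationLevel=pktPrivacy}\\" & strComputer & "\root\MicrosoftIISv2")
Set colSettings = objWMIService.ExecQuery("Select * from IIsWebServiceSetting")
For Each objSetting in colSettings
    WScript.Echo "Keep-Alives Enabled: " & objSetting.AllowKeepAlive
    WScript.Echo "Connection Timeout (seconds): " & objSetting.ConnectionTimeout
Next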

 

Below is an example of a request to http://www.microsoft.com/windows/default.mspx

 

REQUEST: **************\nGET /windows/default.mspx HTTP/1.1\r\n

Host: www.microsoft.com\r\n

Accept: */*\r\n

\r\n

 

RESPONSE: **************\nHTTP/1.1 200 OK\r\n

Connection: Keep-Alive\r\n

Content-Length: 30008\r\n

Date: Wed, 02 Aug 2006 22:49:09 GMT\r\n

Content-Type: text/html; charset=iso-8859-1\r\n

Server: Microsoft-IIS/6.0\r\n

X-Powered-By: ASP.NET\r\n

X-AspNet-Version: 2.0.50727\r\n

Last-Modified: Wed, 02 Aug 2006 20:58:17 GMT\r\n

 

ITEMS *******************

HTTP Status        Host                                      URI

200          www.microsoft.com               /windows/default.mspx                        

                200          www.microsoft.com               /windows/default.mspx                        

                200          www.microsoft.com                /Windows/mnp_utility.mspx/templatecss?template=%2fbusiness%2fomniture%2ftemplates%2fMNP2.GenericHome&shell=%2fwindows%2fConfiguration.xml&locale=en-us      

                200          www.microsoft.com               /Windows/mnp_utility.mspx/menujs?mnpshell=%2fwindows%2fConfiguration.xml&clicktrax=True               

                200          www.microsoft.com               /library/mnp/2/aspx/css.aspx?locale=en-us&name=Menu&static=Page          1,685      

                200          www.microsoft.com               /library/mnp/2/aspx/js.aspx?&path=/library/include/ctredir.js&name=Menu                   

                200          www.microsoft.com               /library/toolbar/3.0/css.aspx?c=/windows/Configuration.xml

                200          www.microsoft.com               /library/toolbar/3.0/quicklinks/ql.js                     

                200          www.microsoft.com               /library/mnp/2/gif/ql.gif                         

                200          www.microsoft.com               /library/toolbar/3.0/images/banners/windows_masthead_ltr.gif      

                200          www.microsoft.com               /library/mnp/2/gif/arrowLTR.gif           

                200          www.microsoft.com               /windows/shared/js/s_code.js            

                200          www.microsoft.com               /library/media/1033/windows/images/homepage/IE7_Small_ad_e.jpg

                200          www.microsoft.com               /library/media/1033/windows/images/homepage/65349_hero_393x220_Vista.jpg                        

                200          www.microsoft.com               /library/media/1033/windows/images/homepage/green_arrow.jpg                 

                200          www.microsoft.com               /library/media/1033/windows/images/homepage/player_55x55.gif                 

                200          www.microsoft.com               /library/media/1033/windows/images/homepage/62277_55x55_spyware_F.jpg                            

                200          www.microsoft.com               /windows/images/homepage/products/winFamLogo_XP.gif                          

                200          www.microsoft.com               /Windows/images/homepage/products/component_divider.gif     

                200          www.microsoft.com               /windows/images/homepage/products/winFamLogo_prod_hdr.gif                               

                200          www.microsoft.com               /windows/images/homepage/products/55506_134x14_WFnav_wss.gif                        

                200          www.microsoft.com               /windows/images/homepage/products/55506_119x14_WFnav_wemb.gif                     

                200          www.microsoft.com               /windows/images/homepage/products/55506_101x14_WFnav_wm.gif                         

                200          www.microsoft.com               /windows/images/homepage/products/55506_116x14_WFnav_vpc.gif                         

                200          www.microsoft.com               /windows/images/homepage/products/winFamLogo_tech_hdr.gif                

                200          www.microsoft.com               /windows/images/homepage/products/55506_112x14_WFnav_mdx.gif                        

                200          www.microsoft.com               /windows/images/homepage/products/55506_150x14_WFnav_ie.gif                            

                200          www.microsoft.com               /windows/images/homepage/products/55506_131x14_WFnav_wmp.gif                       

                200          www.microsoft.com               /windows/images/homepage/products/55506_150x14_WFnav_wds.gif                        

                200          www.microsoft.com               /windows/images/homepage/products/winFamLogo_relsites_hdr.gif                           

                200          www.microsoft.com                /library/toolbar/3.0/text.aspx?t=TQ%3d%3d&f=FFFFFF&b=6487DB&font=Microsoft+Logo+95%2c+13pt&w=105&h=29&a=0&l=0&v=0&c=DeMqBqiN3ORi7XLcgI%2fvxqO1OI4%3d

                200          www.microsoft.com               /favicon.ico           

 

As you can see in the response above, the server responds back with Connection: Keep-Alive\r\n, which means that Keep-Alive is enabled on the servers.

 

There were a total of 32 different GET requests for items on www.microsoft.com, and each of these requests used the same TCP connection.  One caveat: if a request calls a different hostname, that request will happen over a different TCP connection.

 

What’s the performance with or without Keep-Alives enabled?

 

Using Keynote Systems (www.keynote.com) to measure the load time of the web page from an agent located in Hong Kong, I get the following data:

 

With Keep-Alives:

Total page load time: 5.34 seconds

 

Without Keep-Alives:

Total page load time: 8.89 seconds

 

The increased latency is due primarily to the setup of the initial connection.

 

Why would you disable Keep-Alives?

 

Disabling Keep-Alives will cause the server to ignore a client request to keep the connection open.  Some sites may serve just a single URI, for example http://../foo.gif; in this case there is no need to keep the connection open, because only this particular resource is being called, and you may get better performance because the connection is not left open for x amount of time.  The general rule is: disable HTTP Keep-Alives only after you have a clear understanding of how it affects the performance of your site.
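
If you do decide to turn Keep-Alives off (or back on), one way on IIS 6.0 is the adsutil.vbs script that ships in the AdminScripts folder; a sketch, to be tested before rolling across a farm:

cscript adsutil.vbs SET w3svc/AllowKeepAlive FALSE

Setting the value back to TRUE re-enables them.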

Posted by MSCOM | 1 Comments

View From The Top…”Running the Business”

(This is an ongoing series of blog posts from our Directors of Operations.  The latest is written up by Casey Jacobs, Director of Engineering for the Microsoft.com Operations Team.)

 

Mid-summer marks the kickoff of a new fiscal year for Microsoft, and with it comes the opportunity to reflect on the past year’s wins and challenges, as well as to realign the commitments and accountabilities that our organization will focus its efforts on in the upcoming 12 months.  That said, taking the “View from the Top” translates to analyzing our business and ensuring we’re prepared as best as possible to succeed.  So the natural next question to ask is: what exactly does success look like to an Internet Operations shop like ours?  For us it starts with a Vision, followed by a framework of Commitments.  To shed some light on what that really equates to, here, at a high level, is what we have developed for our organization for Fiscal Year 2007:

 

Vision

Showcase world class operational and development solutions which facilitate customer/partner success; enabling their businesses to achieve strategic initiatives & execute against customer commitments.

 

Commitments

Company: Product & Solution Adoption; early adoption and showcasing of Microsoft technologies while executing with operational excellence & fiscal maturity

-          Highly Available & Resilient Solutions

-          Technology Analysis & Adoption

-          Business Efficacy: Reduced costs while improving the customer experience

 

Customer: Supporting the businesses of Developer/ITPro, Sales & Marketing, Customer Support, and Download Distribution Services

-          Customer Satisfaction: VSAT / DSAT benchmarking and Quarterly Business Reviews

-          Operational Rhythm of Business: Consistency in execution of Operating Protocols, Business & Technology Standards, and Policy Development

-          Support Services: 24x7 Production Support operations specializing in infrastructure monitoring & management; and domain knowledge of the Microsoft.com networked solutions.

 

Connection: Commitment towards improving ITPro & Developer knowledge by showcasing experiences learned running the Microsoft.com network of solutions

-          Broad Reach Execution Plan: Develop learning experiences as productized material able to be shipped through channels such as TechNet, MSDN, Conferences, Communities and Marketing.

-          Critical Customer Engagements:  Alignment with Microsoft field representatives & product management, enabling our operational experiences to be shared in 1-on-1 customized customer engagement formats.

 

People Development: continued focus on development of world class people & organizational agility; emphasizing individual growth & training opportunities while aligning with organization health & maturity.

-          Employee Development:  Training & Career discussions with personalized plan established

-          Organizational Agility: acknowledging change opportunities as a lever to realign people and team functions/accountabilities to effectively meet the demands of business while simultaneously developing new opportunities for continued employee development.

-          Workgroup Health Index: Poll the organization and openly review Highlights, as well as identify a Plan of Action toward Areas of Improvement

-          Transparent Strategy Communication:  Quarterly review of Organizational, Division & Corporate Initiatives, including an alignment mapping of how an individual’s or organization’s performance impacts those Initiatives defined at a division and corporate level

 

The heart and soul of accomplishing the Vision & Commitments is our people development, cross-group collaboration & partnership, and organizational agility.  We continually stress the importance of transparency: from the top down, providing leadership and communicating strategic initiatives; and from the bottom up, ensuring everyone is empowered with a voice that influences our direction and the means to directly leverage their abilities and efforts to drive impactful results.  Thanks to the team that we have here in Microsoft.com…now it’s time to buckle up and get after it full throttle for another great 12-month journey!!

Posted by MSCOM | 0 Comments

New Twists on the Ancient Art of Persisting Application Data

(Note: If you have been keeping up with this blog, it is probably apparent that the subject matter here ranges widely. We have posts on topics as diverse as debugging .NET applications, web farm administration, Log Parser tips and tricks, upper management messages and, today, some coding tips. There is a method in our apparent madness: all of the posts on this blog have a common thread…they all come from sub-teams inside Microsoft.com Operations. If you read this blog regularly you will begin to get a picture of how diverse this team is in terms of skill sets, and also how we utilize those skill sets to run Microsoft.com. Today's post comes from our Tools Team, the internal group of developers that creates some of the tools that help us do our jobs more effectively.)

 

 

Programs that Remember

 

Let’s talk about some .NET C# programming. Probably since the dawn of computer programming, developers have been inventing ways to store bits of data to make their programs “remember” or persist information between executions. Giving a program some capability to recall things makes it intelligent and convenient to use.

 

The bad news is that persisting application data has not always been simple to accomplish. The good news is that it has become easier as programming languages and platforms have evolved. The purpose of this article is to discuss how the .NET Framework has made this common task much easier to get done.

 

 

Persisting Application Data

 

One useful example of persisted data would simply be the date and time for the last time the program ran. Each time the program runs it could know how long it has been since the last time, which may drive some business rules. A more involved case would be a user interface that conveniently recalls previously-entered values, saving typing on each use.

 

To persist data between executions and system restarts, a program needs to store it on disk or some other drive. The data could take any form, from plain text to database records. The complexity of the data format and location will determine how quickly and easily the data can be stored and retrieved by the program.

 

With a plain text file, the data can take any form the developer chooses. If only a single piece of data needs to be stored, the program could write the data value into a text file dedicated to holding this one value. This makes it fairly simple for the program to store and retrieve the data.
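
In .NET terms, that single-value case really is just a couple of calls. A sketch, with an illustrative file path, persisting the last-run timestamp mentioned earlier:

using System;
using System.IO;

class LastRunExample
{
    static void Main()
    {
        string path = @"C:\MyApp\LastRun.txt"; // illustrative location
        // Read back the previous run time, if any.
        if (File.Exists(path))
            Console.WriteLine("Last ran: " + DateTime.Parse(File.ReadAllText(path)));
        // Record this run for next time.
        File.WriteAllText(path, DateTime.Now.ToString());
    }
}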

 

But if multiple values need to be persisted, the task gets trickier. How will each value be separated or delimited? Commas? Tabs? Line feeds? How will the delimited values be told apart? By position in the list? By some kind of tag or name placed next to each value? The code that makes sense of all of this can get complex.

 

 

Wait, How Hard Could It Really Be?

 

Ok, maybe I am making this all seem more complicated than it needs to be. For decades, the Windows operating system has provided handy support for saving this kind of data in two useful places: the Windows registry, and INI settings files. The good old Win32 API, still accessible to .NET developers today, provides many functions designed for reading and writing values to both the registry and INI files. Visual Basic eventually added simple wrapper functions for reading and writing registry values, for example GetSetting and SaveSetting. This was a big advancement.

 

The Win32 API and VB functions helped a lot. I felt fortunate every time I coded a Win32 API function call. I was grateful to the Windows designers for providing relatively easy access to this large collection of hardcore system functions, making my job easier. But coding Win32 API calls is pretty “messy” by today’s standards. They are inconsistently named and used -- one big flat list of functions lacking the symmetry and organization that makes modern API’s, such as the .NET Framework, easier to use. The VB features are nice, but VB is not the best option for every application, and it provides no help for working with INI files.

 

INI files are nice because they can reside in any chosen folder and are easy to read with Notepad. One can easily add and remove INI data using Notepad, within these little portable structured data files.

 

The Windows registry is nice because it is a central repository for zillions of application data values, always found in the same place. However, your few application values can get a little lost in its vast expanses, and you still want to fire up the Registry Editor to work with it interactively – yes, you still run “regedt32”, in case you did not know.

 

 

.NET to the Rescue!

 

The .NET Framework now makes persisting application data a snap. First, we have two handy classes for using the Windows registry: Microsoft.Win32.Registry and Application.UserAppDataRegistry. Here is how easy UserAppDataRegistry is to use:

 

 

Application.UserAppDataRegistry.SetValue(ValueName, ValueData);

Application.UserAppDataRegistry.GetValue(ValueName);
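
When you want control over the exact key path, the static Microsoft.Win32.Registry class mentioned above is nearly as terse. A sketch, with a made-up key path:

using Microsoft.Win32;

// Write and then read a value under an explicit key (the key path is illustrative).
Registry.SetValue(@"HKEY_CURRENT_USER\Software\MyCompany\MyApp", "LastRun", DateTime.Now.ToString());
object lastRun = Registry.GetValue(@"HKEY_CURRENT_USER\Software\MyCompany\MyApp", "LastRun", string.Empty);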

 

 

The INI story has not improved much, though. Win32 API calls are still required, which are pretty clunky to call from within managed C# code.
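
For the curious, here is roughly what that clunkiness looks like. A sketch wrapping the two classic profile functions from kernel32 (the IniFile class and its Read/Write helpers are just illustration):

using System.Runtime.InteropServices;
using System.Text;

static class IniFile
{
    [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
    static extern uint GetPrivateProfileString(string section, string key,
        string defaultValue, StringBuilder returnedString, uint size, string filePath);

    [DllImport("kernel32.dll", CharSet = CharSet.Auto)]
    static extern bool WritePrivateProfileString(string section, string key,
        string value, string filePath);

    // Read one value from an INI file, returning an empty string when the key is absent.
    public static string Read(string section, string key, string filePath)
    {
        StringBuilder buffer = new StringBuilder(255);
        GetPrivateProfileString(section, key, string.Empty, buffer, (uint)buffer.Capacity, filePath);
        return buffer.ToString();
    }

    // Write (or overwrite) one value in an INI file.
    public static void Write(string section, string key, string value, string filePath)
    {
        WritePrivateProfileString(section, key, value, filePath);
    }
}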

 

But now we have better options: the glorious XML file, and a special purpose XML file called a .NET Managed Resources File or RESX file.

 

Regular XML files are fantastic. Application data files are just one of the countless uses for the XML file format. This self-describing, highly-structured markup language has everything that a data file needs: element names, hierarchical categorization, order, and more, all in a portable text file that Notepad can work with. Better yet, the .NET Framework provides extensive support for working with XML. This support helps make the code required to handle XML quite straightforward. For example, persisting a single data value with XML:

 

 

// requires: using System.IO; using System.Xml;

public string GetData_Xml(string ValueName)
{
    string ReturnValue = string.Empty;
    if (File.Exists(this.XmlFilePath))
    {
        XmlDocument XmlDoc = new XmlDocument();
        XmlDoc.Load(this.XmlFilePath);
        // Find the element named for this value.
        XmlNode Node = XmlDoc.SelectSingleNode("/" + ValueName);
        if (Node != null) ReturnValue = Node.InnerText;
    }
    return ReturnValue;
}

public void SaveData_Xml(string ValueName, string ValueData)
{
    XmlDocument XmlDoc = new XmlDocument();
    // The value name becomes the root element name, so it must be a valid XML name;
    // note that ValueData is not escaped here, so markup characters in the data would break this.
    XmlDoc.LoadXml("<" + ValueName + ">" + ValueData + "</" + ValueName + ">");
    XmlDoc.Save(this.XmlFilePath);
}
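
 

Assuming this.XmlFilePath points at a writable location, a quick usage sketch (the value name is just an example):

 

SaveData_Xml("LastRunTime", DateTime.Now.ToString());

string LastRun = GetData_Xml("LastRunTime");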

 

 

Have You Discovered RESX Files Yet?

 

However, RESX files are now my favorite choice for persisting application data, and the .NET Framework support for them is good. You can edit them with Notepad, but that is not wise if you have Visual Studio handy: the XML behind RESX files is a little involved to read in a plain text editor. They are much better read and edited in Visual Studio, which presents a RESX file in a very intuitive grid editor, similar to what you might use to work with a SQL Server table in Visual Studio or SQL Server Management Studio. It is very easy to add, remove, reorder, and edit data rows.

 

I get the feeling that not many other application developers are using RESX files for any purpose. Before I got to know them well, about three years ago, I confused them with C resource files and imagined they would be complicated to work with. Not true! Like standard XML files, RESX files have many uses. And they sure make great vehicles for storing application data.

 

Following is a code snippet showing how to write and read a single value in a RESX file:

 

 

// requires: using System.Collections; using System.IO; using System.Resources;
// (ResXResourceReader and ResXResourceWriter live in System.Windows.Forms.dll)

public void SaveData_Resx(string ValueName, string ValueData)
{
    // The writer creates a fresh file, so this persists just the one value.
    ResXResourceWriter rw = new ResXResourceWriter(this.ResxFilePath);
    rw.AddResource(ValueName, ValueData);
    rw.Close();
}

public string GetData_Resx(string ValueName)
{
    if (File.Exists(this.ResxFilePath))
    {
        // The using block guarantees the reader is closed even on the early return.
        using (ResXResourceReader rdr = new ResXResourceReader(this.ResxFilePath))
        {
            foreach (DictionaryEntry resxItem in rdr)
                if (resxItem.Key.ToString() == ValueName)
                    return resxItem.Value.ToString();
        }
    }
    return string.Empty;
}

 

 

Looking at the code above, you may notice that the code to save data is pretty clean, but the code to get data is a little messy. This is because you have to iterate through the entire collection until you come across the chosen entry. It would be much cleaner if any element could be accessed directly by its key.

 

 

Mashup a StringDictionary and a RESX File

 

In practice, it is likely that you will want to store multiple data elements. In that case you might load all of the data from the RESX file into some intermediate object, let the program modify the data there, and then save it back out to the file on disk before the program terminates. Before you decide this intermediate object needs to be some fancy custom thing, I will suggest that the ideal class for this purpose is the built-in System.Collections.Specialized.StringDictionary, or a System.Collections.Generic.Dictionary typed for storing strings.

 

Now we are getting somewhere! We can subclass the generic Dictionary into a new custom class I’ll call “DataDictionary”. In the DataDictionary class, the constructor will load it up with data from a RESX file, and a Save method will write the data back out to disk when we are done updating it. Compiled into a DLL, this new class can be used from any other program by adding a project reference to it. Our DataDictionary will work just like a string Dictionary, but with the loading and saving to disk added. This will be slick, since the Dictionary is so simple to work with.

 

The entire DataDictionary class could be written as simply as:

 

 

using System;
using System.Collections;
using System.Collections.Generic;
using System.IO;
using System.Resources;
using System.Windows.Forms;

namespace DataSaver
{
    public class DataDictionary : Dictionary<string, string>
    {
        internal string CallingProgramName;
        internal string DataFilePath;

        public DataDictionary() : base()
        {
            CallingProgramName = Path.GetFileNameWithoutExtension(Application.ExecutablePath);
            DataFilePath = Path.Combine(
                Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments),
                CallingProgramName + ".resx");

            // Load from disk into the Dictionary.
            this.Clear();
            if (File.Exists(this.DataFilePath))
            {
                using (ResXResourceReader rdr = new ResXResourceReader(this.DataFilePath))
                {
                    foreach (DictionaryEntry resxItem in rdr)
                        this.Add(resxItem.Key.ToString(), resxItem.Value.ToString());
                }
            }
        }

        public void Save()
        {
            // Save to disk from the Dictionary.
            ResXResourceWriter rw = new ResXResourceWriter(this.DataFilePath);
            foreach (KeyValuePair<string, string> DataItem in this)
                rw.AddResource(DataItem.Key, DataItem.Value);
            rw.Close();
        }
    }
}

 

 

And using this class could be very clean:

 

 

DataSaver.DataDictionary dd = new DataSaver.DataDictionary();
DateTime CurrentTime = DateTime.Now;
DateTime PreviousTime = DateTime.MinValue;
if (dd.ContainsKey("CurrentTime")) PreviousTime = DateTime.Parse(dd["CurrentTime"]);
// TotalSeconds gives the whole span; Seconds would give only the 0-59 component.
string ElapsedTimeMessage = String.Format(
    "The time elapsed since the last run was {0:F0} seconds.",
    CurrentTime.Subtract(PreviousTime).TotalSeconds);
Console.WriteLine(ElapsedTimeMessage);
dd["CurrentTime"] = CurrentTime.ToString();
dd["PreviousTime"] = PreviousTime.ToString();
dd["ElapsedTimeMessage"] = ElapsedTimeMessage;
dd.Save();

 

 

The most powerful part of that code is the way each data value is very directly accessed, such as:

 

 

dd["CurrentTime"]

 

 

No methods for getting or setting -- just simple indexer access. This shows the power of leveraging the .NET generic Dictionary class and making a slight variant of it for our purposes.

 

There you have it -- new age application data persistence. Your applications can now readily be given total recall abilities, making them super intelligent, and making you seem almost as intelligent.

 

 

Posted by MSCOM | 1 Comments

Why Dogfood?

What is Dogfooding?

Dogfooding is a Microsoft term for adopting a Microsoft product (or, in a generic sense, any new technology), typically while the product is still in development. Microsoft product teams provide release builds for consumption prior to the final release to manufacturing (RTM), and typically call these pre-RTM builds betas or release candidates (RCs). Internal Microsoft teams or Microsoft customers who adopt these beta or RC products are doing real dogfooding.

 

The balancing game that plays out during the dogfooding process is among the value gained from adopting the new product, the superior feedback given to the product team during adoption, and the increase in trouble or pain experienced by the adopting team because the product is not yet of production quality. Typically the pain decreases as the quality increases, as the product moves from betas to release candidates and culminates in the RTM release.

So what’s the big deal?

At Microsoft.com, Engineering runs systems under heavy load due to the large demand from the Microsoft customer base. The infrastructure that runs these systems is ripe for real-world testing of the product technologies it requires. Product teams whose technologies align with the MSCom infrastructure salivate when they watch MSCom Engineering dogfood their new product under these conditions, because they know they can never create such conditions in an isolated lab environment.

 

As an internal adopter of new Microsoft product technology, Microsoft.com plays a very important role in the successful release of the product by providing critical feedback to the product team on the quality and usability of the product in real-world scenarios. When we understand our role properly, we build direct relationships with the product teams to provide this feedback while the dogfooding is in progress.

 

The end result is that Microsoft.com is finding bugs before you do. The reward we get is the ability to showcase new technology to the Microsoft customer community in a real world scenario.

So do you just throw the product out there to see what happens?

Ultimately, that is precisely what we do. However, MSCom Engineering performs a large body of preparation work before we get to the actual first deployment to production. Engineering organizes this preparation work by phases and conducts these phases in chronological order. These are the adoption phases, starting with as much lead time as possible:

Technical Planning

During this initial phase, Engineering identifies gaps in existing technology that reduce performance, hinder productivity, or otherwise prevent operations from performing work. Then Engineering researches the feature set of the new product and determines whether any new features close those gaps. Features that will benefit operations become candidates for shared adoption goals. An example of a shared goal is SQL Server Mirroring, which officially shipped with SQL Server 2005 SP1; Microsoft.com realized that Mirroring filled a huge gap in our availability strategy for failures on SQL hardware.

Incubation

During the Incubation phase, Engineering exploits the new features identified during Technical Planning that have become shared goals. This exploitation typically occurs in a controlled lab environment, where engineers determine how each feature actually works and assess its potential for production use. Using the Mirroring shared goal as an example, Microsoft.com engineers validated the feature and its configuration in a lab environment during this phase and determined which production scenarios could best exploit it. This work coincided with the pre-beta release of SQL Server 2005 SP1.

Evaluation

During the Evaluation phase, Engineering deploys the product on production infrastructure and determines whether the shared goals can be realized. Engineering works with the product team and other internal teams to remove blocking issues (bugs or other obstacles) that impede progress. Preparation for full-scale deployment begins during this phase, and the realized goals become the focal point for understanding how the full-scale production deployment will proceed. Using the Mirroring shared goal as an example, Microsoft.com engineers deployed SQL Server 2005 SP1 to four different production situations and validated the feature, providing critical feedback to the SQL Server team. This work coincided with the beta and RC releases of SQL Server 2005 SP1, so the feedback was timely.

Deployment

During the Deployment phase, Microsoft.com engineers deploy the new product technology to all applicable production scenarios, working with internal application development teams to ensure that the deployment does not interfere with the availability and reliability of the applications depending on it.

You want some?

Early adoption of new product technology is not a scary world if you approach it with a level head and a plan. The rewards outweigh the risk and pain if you can figure out how to exploit the technology for the benefit of your business; otherwise it will feel like you are stranded up the creek without a paddle.

Posted by MSCOM | 3 Comments

Light Weight Case Studies…Up-close and Personal With Microsoft.com Operations

A couple of months ago some folks from the Architecture Evangelism group at Microsoft approached us with an interesting idea: they wanted to produce an informal video series focused on MSCOM Operations. They bill these as “Light Weight Case Studies”, and they have now been completed and are being formally released on the Skyscrapr site as part of the ARCast series of shows. The first of the series, entitled Architecting Microsoft.com - Introduction, has been released on their site; the remaining three will be released in the next few weeks. If you are interested in Solution, Infrastructure, Strategic or Industry Architecture, you should check out their site at http://www.skyscrapr.net.

 

The Skyscrapr site is a great Architecture resource; I encourage you to visit it often. As they say, “Skyscrapr is your window on the architectural perspective. Discover the different disciplines of system architecture, as well as perspectives on building successful systems. Check out our architects' blogs, learn about industry trends, download webcasts, watch videos, find training, and more.”

 

For all of you PodCast fans, they have also extracted the audio, which is available to download.

 

And as a special favor to the folks that read this blog, here are the links to all of the MSCOM Light Weight Case Study videos:

ARCast-Architecting Microsoft dot com - Introduction

ARCast-Architecting Microsoft dot com - High Availability

ARCast-Architecting Microsoft dot com - Web Hosting

ARCast-Architecting Microsoft dot com - SQLServer

Posted by MSCOM | 1 Comments

Scaling Your Windows...and other TCP/IP Enhancements in Windows Vista/Longhorn

Our Ops team has been testing and sampling the goods that are Longhorn Server for a while now and one of the areas we're very interested in is networking.  Specifically, we're jazzed about the changes happening in the TCP/IP stack for both Vista and Longhorn.  We know the impact will be huge for backend operations such as moving data between data centers, but we also think there will be significant improvements on the front-end including downloads with Vista clients.  That means snappier downloads for you at home and at work...at least where your network has the bandwidth to allow you to take advantage of this.   

 

Our first taste of the new stack came when the Windows Networking team asked us to help them test it in the data center to get some real-world data. We set up one server in Bothell, WA and one in Santa Clara, CA (~22ms round-trip latency) and let the devs have at it with TTCP. The results were stellar: >890 Mbps throughput.

 

Now, TTCP pushes the limits of the stack, CPU, bus, network, etc, but that doesn't reflect the normal file transfers that happen as part of doing real work.  Since those file transfers create some of the more challenging scenarios for us, we put two new servers in WA and two in CA, all with GigE NICs.  Each data center has one W2K3 server and one Longhorn server.

 

From there we set up two Robocopy jobs to pull 20 1GB files from the servers in CA and drop them onto the servers in WA.  One job was run with W2K3 at each end and another was run with Longhorn.  All servers are the same HP DL385 Dual Core machines with 16GB RAM and GigE network uplinks.  Results:

 

Pull with W2K3 at both ends (CA and WA): ~12Mb/s (includes SMB and TCP/IP tweaks)

Pull with Longhorn at both ends: >400Mb/s (default config...no tweaks)

Pull of same 1GB files between two Longhorn boxes on same VLAN: 502Mb/s
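
 

Those numbers are consistent with simple window math: a single TCP connection can keep at most one receive window in flight per round trip, so throughput tops out near window/RTT. Assuming a roughly 64 KB effective receive window on the W2K3 boxes (our assumption, not a figure from the tests), a quick sanity check:

 

// Rough single-connection ceiling: receive window / round-trip time.
const double WindowBits = 64 * 1024 * 8;   // assumed ~64 KB receive window on W2K3
const double RttSeconds = 0.022;           // ~22 ms between the CA and WA data centers
// Prints ~24 Mbps -- the same neighborhood as the ~12 Mb/s W2K3 result above.
System.Console.WriteLine("{0:F0} Mbps", WindowBits / RttSeconds / 1e6);

 

Longhorn's receive window autotuning grows the window toward the link's bandwidth-delay product, which is why the same copy jumps past 400 Mb/s with no tweaks.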

 

So, I know, you're thinking: “But I don't move a bunch of 1GB test files back and forth all day; I pull web logs from remote servers back to a central location for processing, and that takes a significant amount of time.” We thought the same thing, so for a real-world sample of something we do regularly, we pulled a single hourly web log file (199 MB) from a www.microsoft.com server in CA back to a couple of servers in the WA data center. The WWW server in CA is a W2K3 box with GigE, and we pulled the file across the wire to a W2K3 server and a Longhorn server in WA. For a good view into the future, we also put the file on a Longhorn server in CA and pulled from the same Longhorn server in WA. Results (represented in terms of time, because when you get up to make a sandwich between file copies, this is how long you have):

 

Pull from W2K3 in CA to W2K3 in WA:  ~2:12

Pull from W2K3 in CA to Longhorn in WA:  ~0:12

Pull from Longhorn in CA to Longhorn in WA:  ~0:04 (not much sandwich time)

 

Currently, 40 of the boxes that serve www.microsoft.com are in the CA data center, which translates into half of our ~250 GB of log files per day being 20+ms away. Today, moving that 125+ GB can take 83,333 seconds, which is close to a day. This means we must be creative and make multiple pulls at the same time to move the data more quickly...or get really full eating a lot of sandwiches. As we move to pulling this data with Longhorn, we can reduce that time to ~45 minutes without being creative at all.
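
 

The back-of-the-envelope math, using the throughputs measured above (a sketch; all figures are approximate):

 

double bits = 125e9 * 8;                      // 125+ GB of daily remote logs, in bits
System.Console.WriteLine(bits / 12e6);        // at ~12 Mb/s: ~83,333 seconds, close to a day
System.Console.WriteLine(bits / 400e6 / 60);  // at ~400 Mb/s: ~42 minutes, i.e. the ~45 min estimate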

 

If you have a copy or two of Vista Beta 2 you can test out these changes with a medium to large file download from a server that is over 10ms away in terms of latency.  You should see a nice improvement.

 

What's next for our team: with these gains in network utilization, there is a paradigm shift in what level of utilization amounts to network congestion. Previously, with each client/server connection taking a relatively small portion of the available bandwidth over high-latency links, it was much easier to determine when link utilization was becoming an issue. Now two servers can fill a 1 Gig WAN link all by themselves without either of them experiencing congestion of any concern; that is not so easy to determine when looking at link utilization from the network side of things. This means we need to partner closely with the Networking folks on how we measure and communicate congestion issues in the future.

 

For further information on the TCP/IP changes in Vista and Longhorn:  http://www.microsoft.com/technet/itsolutions/network/evaluate/new_network.mspx

 

Posted by MSCOM | 3 Comments

Do you manage the forest or do the trees manage you?

If anyone is familiar with satisfying “needs of the moment,” it's your local Operations team…application errors, network failures, DoS attacks, deploying new applications, etc.  A lot of the time it's simply hard to see the forest for the trees, so to speak.  As our Senior Director, Todd Weeks, stated in his last post, you need to understand the system if you're going to effectively manage, influence, and improve it.  So what is the system I'm referring to?  It's your organization's Services.  If you don't know what your organization's Services are, then chances are you're handling the daily events but the organization just doesn't seem to run as effectively as you think it can.  You probably also find it difficult to quantify your organization's work and its value to the larger business need, and to prioritize internal improvement projects.  Defining and managing your organization in terms of its Services creates transparency not only for your customers but also for everyone in your organization.  For the rest of this post, I'll provide a brief description of how Services can be modeled and leave discussion of Service Catalogs, implementation obstacles, Service Level Agreements (SLAs), costing models, etc., for future posts.

 

So what is a Service?  Your organization is providing Services whether or not they're well defined; otherwise your organization wouldn't exist.  Organizations typically want to define their Services from the inside out, because the daily tasks and functions are what individuals are most familiar with.  But because Services are intended to create value for your customers, not for your own organization, you need to define Services from your customer's perspective.  Doing so, you'll find yourself bundling the tasks and functions into Services that are meaningful to your customers.  To give you a formal definition, Services are the technical or professional capabilities which enable one or more of your customers' major business processes or needs.  Most Operations teams find defining their Technical Services (ex. – email, phone, network, etc.) easier than defining their Professional Services (Incident Management, Change Management, Financial Management, etc.), because Technical Services quickly translate into servers and applications, while Professional Services are usually more process oriented.  The basic structure for modeling an individual Technical Service is Service\System\Subsystem\Component, while a Professional Service follows the structure Service\SubService\Capability or Activity\System\Component.  The following link provides a visual for modeling the two types of Services.  The Professional Service example is a partial model of the customer application onboarding process described in this previous post.
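
 

To make the two structures concrete, here is a small hypothetical decomposition (the names are invented for illustration):

 

Technical Service: Email \ Exchange Messaging System \ Mailbox Store Subsystem \ Mailbox Server Component

Professional Service: Incident Management \ Incident Resolution SubService \ Ticket Triage Activity \ Ticketing System \ Ticket Queue Component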

 

In addition to dependencies between parts (logical and/or physical) of a given Service, as you model more of your organization's Services you will undoubtedly uncover interdependencies between Services.  All of your organization's Services together define your organization's “system.”  Knowing what your system is, you now have improved transparency into the organization's goals, a consistent method for measuring the organization's performance, and the ability to consistently quantify the benefits of proposed changes/projects.  All of a sudden it's not so hard to see BOTH the forest and the trees, which will allow you to more effectively manage, influence, and improve the system you live in.

 

MSCom Operations has started down the road of Service Management but is far from finished.  An observation that excites me: as we define our Services, the focus of individuals is shifting from a “functional team responsibilities” view to a more holistic “Operations Service Provider” view, which is where we want and need everybody.

 

If you’d like to read more about IT Service Management sooner rather than later, the IT Infrastructure Library (ITIL) and Microsoft Operations Framework (MOF) are both great references.  As you define your Services MOF’s Continuous Improvement Roadmap will help you set the course for becoming even more responsive to your customers’ needs.

 

Posted by MSCOM | 0 Comments

MSCOM WebCast Morphed Into PodCast…A Great Idea From A TechEd2006 Customer

While the Microsoft.com Operations team was in Boston two weeks ago presenting at TechEd2006, a customer came up to chat with us. He had a great idea. He said, “I really liked the webcasts that MSCOM has done. I only wish that I could get the audio from those webcasts in a format that I could listen to while I am stuck in traffic going back and forth to work.”

 

Wow. We thought that was a dang fine idea ourselves. So we have extracted the unedited audio from those published webcasts and now... (drum roll please)... here they are for your downloading and listening pleasure! 

 

Here is the link to download the MSCOM Ops PodCasts. If you are using Windows Media Player as your default player for MP3 files, clicking on the Download link will launch the specific audio that you want. You can then save the file if you wish.

 

Also stay tuned to this blog as we should also have these available as streaming media in Windows Media Audio format (.wma) very soon (hopefully by the end of the week).

 

On the menu to download are the following:

 

The Microsoft.com Operations Series

 

MSCOM_OPS_PodCast - High Availability Architecture with MS.COM Operations

MSCOM_OPS_PodCast - Configuration Management of Web Farms with MS.com Operations

MSCOM_OPS_PodCast - Change and Release Management Strategies with MS.com Operations

MSCOM_OPS_PodCast - Monitor and Manage an Enterprise Platform with MS.com Operations

MSCOM_OPS_PodCast - Troubleshooting and Debugging Web Hosting Environments

Microsoft.com Operations Introduces Real-World Debugging (March Debug Madness)

MSCOM_OPS_PodCast - MS.COM Debugging Determining When You Have A Problem and Beginning the Initial Debugging

MSCOM_OPS_PodCast - MS.COM Debugging CLR Internals

MSCOM_OPS_PodCast - MS.COM Debugging Memory Leaks In ASP.NET Applications

MSCOM_OPS_PodCast - MS.COM How To Tackle Problems In Dynamically Generated Assemblies (ASP.NET)

MSCOM_OPS_PodCast - MS.COM Debugging Without a Debugger In IIS and ASP.NET

Of course, if you want to watch the webcasts that these audio tracks are pulled from, the IIS team has posted all of the Microsoft.com Operations webcasts on their site at http://www.iis.net. Get these and other great webcasts that the IIS team has produced. There is a wealth of information on this site, with webcasts on topics like Security; Performance, Reliability and Scalability; Management; Diagnostics; Deployment; and of course the Microsoft.com Ops real-world series.

Posted by MSCOM | 1 Comments

Microsoft.com Operations at TechEd2006…Welcome To Boston...POWER TO THE PROS!!

We all are very excited to be a part of this extraordinary event. The scope and depth of technical information at this event is outstanding!!

 

IF YOU HAVE ATTENDED ONE OF OUR SESSIONS PLEASE USE THIS POST TO SUBMIT QUESTIONS, COMMENTS OR FEEDBACK.


How do you do that? Either add a comment to this post or email us directly at mscomblg@microsoft.com and we will get back to you ASAP.

 

Here are the sessions that the Microsoft.com Operations team is presenting:

 

Monday 6/12/06 10:45 am to Noon - WEB202: IIS and Microsoft.com Operations: Leveraging Microsoft Solutions for High Availability Web Platforms

 

Tuesday 6/13/06 1:00pm-2:15pm Chalk Talk: Microsoft.com Operations & IIS: Migrating IIS 6.0 to 64-bit

 

Tuesday 6/13/06 4:00pm-5:15pm Chalk Talk: Microsoft.com Operations & IIS: Configuration of IIS 6.0

 

Wednesday 6/14/06 10:15-11:30 ARC312: Real World Design for Resilience: The Infrastructure Architecture of Microsoft.com.

 

Wednesday 6/14/06 10:15-11:30 Chalk Talk: IIS and Microsoft.com Operations: Troubleshooting and Debugging ASP.NET Applications

 

Posted by MSCOM | 1 Comments

View From The Top… Stay In Touch!!!

This is the third installment in an ongoing series of blog posts from Todd Weeks, Sr. Director of the Microsoft.com Portal and Operations Team

 

As I have been working with a few people over the past months, a couple of themes keep popping up with respect to management style. The first is how to approach a problem to have the most success at resolving it; the second is, as a manager of managers, what is the best way to stay in front of the business and make the right decisions.

 

The two themes are very similar, and here is how I have found myself answering those questions. First:

 

Approach a problem systemically. Systemic thinking means looking at the entire environment and making sure that when you are trying to fix something, you are actually applying the proper wrench to the right bolt. If you are talking to the wrong person, or the wrong level of person, things won't happen to the degree you want. Look at the entire system, the people, and the integration points, and devise your plan of attack to have the best chance of solving your problem. As much as someone may want to help you, if they are not the right lever in the system, most likely things won't happen and you'll feel tension, not success. Or they will help you, and it will cause other problems in the system.

 

This brings me to my second theme, which not surprisingly ties right into systemic thinking. You need to understand the system and your environment if you are going to know how to most effectively influence and help it. I have found it extremely helpful to meet with nearly every person on my team on a monthly if not quarterly basis. As my team has grown significantly this has had to change, but the one thing that has remained is meeting with people from every level of the org, in every role, at least every month, just to stay in touch. My role as a manager is to help them, reduce roadblocks, and know the business and its daily decisions so I can provide guidance or air cover. Growth and change are things that really intimidate people. By staying in touch with every level of the team and knowing the pain points and the help needed, I can make much better decisions as a manager and drive changes that will make my team happier and give them a healthier work life.

 

So Stay In Touch!! As you have more levels of managers or people under you, meet at every level; don't just stop at one or two levels down. I have the ability to affect my system greatly, and it is my responsibility more than anyone else's in it to understand it.
Posted by MSCOM | 1 Comments

The Platform Update Dilemma…Eat the Dogfood and Maintain Available

While the MSCom Operations team manages 1800+ production servers hosting over 120 different web properties for internal Microsoft groups (similar to most Ops teams), we also play an important role in helping ship Microsoft system software.  That is, we deploy several Microsoft system software products (OS, IIS, SQL, etc.) before they ship to the marketplace.  In fact, we are often running all of www.microsoft.com on the next version of an OS or service pack a year before it releases.

 

The MSCom Operations team is challenged to maintain a known and consistent software platform across our managed enterprise.  Our internal customers need to know what platform (technology versions) they should be developing and testing against, and to understand when and how those technology versions will be deployed into production environments.  The challenge is balancing two sometimes conflicting missions:

1) provide a known and reliable hosting platform with a predictable change cycle, and

2) run new software before it ships in order to prove its features and capabilities, and to provide feedback to product teams.

 

To address this challenge, we partnered with our internal customer groups to revitalize an old practice of running a regular platform update.  Several years ago we had such a process, but over the course of time we slowly drifted away from it.  The divergence was partially caused by the rapid growth in the number of servers and sites supported, and by the belief that tight control of the platform constrained what our hosted customers needed (or wanted) to do.  Not that we totally abandoned the process, but we became less formal about it; over time the standard platform became less standard and the number of software version combinations increased significantly.  That variability created problems for our customers (how do I match my dev/test platform to the target production platform?) and increased the cost to Operations, both in time to resolve issues and in maintaining knowledge of what was fixed in which version.

 

The first step to solving this problem was agreeing there was a problem. The next step was gaining commitment from our customers and our own team that this was a shared problem and that both groups needed to work together on a common solution that allowed each to meet its individual goals.

 

In finding a solution we’ve had to consider and address the following aspects:

 

Platform: The first step was to agree on what we meant by platform – what software was to be included in this definition, and to what level of detail we were trying to define and control it.  We decided to start slowly and agreed to define and track the versions of the key pieces of the platform – the OS, IIS, SQL Server, the .NET Framework, and a few tools/services required to manage and monitor the environment.  Over time, we will extend this definition to include additional application components as well as new tools we may put in place.

 

Environments: We have six defined server environments: test, performance, pre-production, beta, staging, and production.  As an application moves through the SDLC, the code moves from one environment to the next, and each environment includes a higher level of control and manageability.  The defined platform should apply to all environments.  For now, Operations deploys and audits the platform only in the environments it is directly responsible for – pre-production, beta, staging, and production – while lab managers manage platform deployments and auditing for the test and performance environments.

 

Frequency:  Initially, Operations and its customer groups decided on a quarterly platform update.  That frequency seemed to strike the right balance and led to the process name ‘Quarterly Platform Update’.

 

Requesting and Approving Changes to the platform:  What is added to the platform, and who approves the changes?  Good question – anyone can propose a change to the platform.  All proposed changes go through a two-phase review process.  First, an internal change review board, consisting only of Operations personnel, performs an initial review and sanity check.  Then, 5-10 days later, the list of proposed changes is reviewed by the external change review board, composed of Operations representatives and 1-2 representatives from each customer group we host applications for.  The external change review board makes the final decision on what will or will not be included in each quarterly platform update.

 

Communication and Scheduling:  Working with our customer groups, we have defined and now maintain a high-level quarterly schedule (process milestones) for the next 5 quarters, along with a rolling detailed schedule for the next 2 quarters.  At any point, we have a detailed view of the specific changes scheduled for deployment, generally 2 quarters in advance.  As part of the process we maintain a regular communication schedule: details of approved changes are sent out in global communications (to all members of Operations and its customer groups) 6 weeks prior to the beginning of the quarter.  Once deployment starts, weekly status reports help Operations maintain focus by communicating progress and sharing information on any issues that arise.  Individual web and database engineers work with the specific customer groups they support to create detailed release plans for each system/application, synched with the quarterly platform update's master schedule.  Clear and consistent communication throughout the entire process is a must, not a nice-to-have.

 

Deployment and Auditing of the platform:  This topic requires more than 1-2 paragraphs and hence will be a future blog posting.  For a brief idea of how we deploy changes to a large environment, see the recent post Scripting Patch Management of Enterprise Web Clusters on Microsoft.com.  Needless to say, deployment and auditing are not a trivial effort.

 

Future Roadmap / Change Schedule:  We've defined a forward-looking change schedule for major portions of the platform.  This benefits hosted customers: they have a clearer understanding of what the platform will be several quarters into the future and can more easily incorporate platform changes into their application roadmaps.  You may be wondering how MSCom handles the adoption of new technologies.  To manage that, we have a separate program for the adoption of new technologies which ties directly into the quarterly platform update program.  This program, internally named MOTAP (Microsoft Operations Technology Adoption Program), defines who on the Operations team works with product groups and customers on adopting new technologies, and how.  Further description of the MOTAP program will be saved for a future blog.

 

Lessons Learned:  The quarterly update process is helping us rejuvenate shared goals across all customer groups.  We have defined a standard software platform, a schedule and process for regularly updating our managed environments to meet that standard and a process for advancing/changing the platform definition.  The platform defines the ‘minimum bar’ for software versions, but we still support deploying more current versions if the customer and Operations are in agreement.  The minimum bar keeps the platform current and reduces the number of system software combinations to support.   Updating the platform across 1800+ servers each quarter is a lot of work, but it’s the right thing to do. 

 

Posted by MSCOM | 0 Comments