As my family keeps reminding me, I'm not much of a people person. It could just be that I am projecting myself onto others, but I am pretty sure that much of the IT industry is like me, which raises a number of serious security problems. If you are interested in reading about them, I have an article on the subject in the July issue of TechNet Magazine. If you just want to argue about it and/or discuss it, we can do that here.

Last week I visited a customer and was greeted by two people who introduced themselves, respectively, as the "Chief Information Security Officer" and the "Chief IT Security Officer." Yes, they had two separate functions for this, one to secure information, and one to secure IT. This immediately struck me as a logical arrangement for many organizations. The threats to the infrastructure are obviously different from the threats to the information itself, especially when your business is based on providing one or both to customers.

This stirred me to think more about something I have been mentioning in many of my recent presentations: where does Infosec (covering both IT and information security, for ease of discussion) belong organizationally? When I spoke to a bank recently I asked where Risk Management sits in their organization, and they mentioned there was a VP in charge of that, but that Infosec sat in the IT department.

I am not sure I have the right answer about how to structure organizations, but I am pretty certain that putting Infosec under IT causes certain problems. As far back as the Fundamental Tradeoffs article, and even further, I have made the argument that security and IT management are fundamentally two different disciplines. IT management has as a primary objective to make the technology work, to be transparent to people, to ensure the information is simply there when users want to use it. All of those can be summed up in the phrase "to stop the phone from ringing." Any IT manager knows that the phone usually does not ring when things are working; it only rings when something is broken.

Security is, obviously, about restricting access to things. As far as the business is concerned, Infosec provides no intrinsic value. Spending money on Infosec is done to ensure that nothing happens. Success is measured by the absence of events, which could of course be because the efforts of the Infosec folks were successful, or simply because there were no events at all; or possibly because we failed to notice. IT at least provides a valuable business function that is tangible in its absence. Any benefit from the spend on Infosec may just as easily be attributed to something extrinsic to the Infosec group, and the benefits are extremely difficult to quantify a priori.

This inherent conflict of objectives has been acknowledged before, most famously in the "Confidentiality/Availability/Integrity" triad. However, I would argue that the "Availability" dimension was added by IT management, not by the security folks, because it runs entirely counter to the other two dimensions. Those other two dimensions are best achieved by reducing availability, first to illegitimate users, and then to legitimate users to avoid a spill-over of information to the illegitimate ones. In fact, the CIA triad reflects the historical perception that Infosec somehow belongs in IT. I don't think that is correct, at least not any longer, but maybe it never was.

So why am I writing this? I am writing it because I would like to stir some debate about where Infosec belongs. I think it should sit wherever Risk Management sits (which of course means the organization needs a Risk Management group to start with). The whole purpose of Infosec is risk management. The group that has risk management as its responsibility has oversight ability, it has the skills to assess and quantify risk, and it has a mandate to influence other groups. That group also does not have to be restricted by pecuniary and functional concerns that restrict the service delivery organizations. Infosec obviously needs a deep relationship with IT, but also with other groups. I have seen far too many organizations that have caved to the pressures of service delivery or inaccurate vendor claims and implemented extremely bad security architectures. Invariably, the Infosec folks at the table were too few, too restricted by the organizational structure, too confined in a career dependent on service delivery and not security, and too worried about rocking the boat politically to put a stop to the problems.

Freeing Infosec from IT would allow IT to focus on delivering IT services, and it would allow Infosec to make itself heard and not be voted down by a much larger service delivery constituency in the mainstream IT group.

At an event in Germany today the issue came up of how to access the free security support in your region. For a couple of years now Microsoft has offered no-charge support for security issues. However, the phone number is different in different regions. To find the number for your region, go to: http://support.microsoft.com/common/international.aspx.

Once again, it seems misguided reporters have appropriated a technical term and are misusing it in ways that confuse the field. "Hacker" was not the first term they ruined, but it is still the one that irks me the most. The primary definition of "Hacker" is, of course, "a person who creates and modifies computer software and computer hardware, including computer programming, administration, and security-related items," according to Wikipedia.

Now it appears that reporters unwilling to actually understand the terminology they use are in the process of destroying the term "zero-day." We have been reading over the past few days about a "zero-day" vulnerability in Symantec Anti-virus, which Marc Maiffret, probably to protect the world in his own trademark way, made public. Unfortunately (or maybe fortunately), this is not a zero-day, unless zero-day has somehow been redefined to mean "new."

Zero-day, as it pertains to vulnerabilities, means a vulnerability that was exploited before anyone, other than the criminal using it, knew about it. This definition is perfectly in line with the definition of zero-day as something for which information is not publicly available. By definition, the fact that Marc was nice enough to alert the world to Symantec's flaw means that it is not a zero-day, unless Marc went and exploited it before he advised the world of the flaw, and we have no indication that he did that.

It may sound like a rant, and of course it is, but it is really important that we keep these terms straight. A zero-day vulnerability is a security professional's worst nightmare. By diluting the term to refer to any vulnerability for which a patch is not available we dilute the language of our field, and lose a very important definition that we need to be able to discuss without ambiguity. It is unfortunate that some reporters write about something without bothering to understand the terms of the field they report on. They give a bad name to the dedicated reporters who take care, and work hard to do a public service in understanding and documenting a field that is important to illuminate. Inaccurate use of important terminology muddies the waters for those of us who are charged with actually taking the field forward.

We do need a term for a vulnerability, like the current Symantec one, which has been publicly announced but for which a patch is not yet available. I have in the past used "0.5-day" to describe such an issue, but that term does not seem to have stuck.

Unfortunately, it seems that people are getting the impression that I hate hardening guides. A few people told me that after I delivered the "Security Myths" presentation at Microsoft's Federal Security Summit West last week. That is really not the case.

I do not hate hardening (or security) guides. In fact, I really like them - properly used. As it turns out, I have had a hand in writing, architecting, testing, or at the very least commenting on, just about every major security guide for the Windows family of operating systems over the past 10 years or so, with only one major exception. Properly used, a good security guide can be a powerful tool in the Infosec manager's or risk management consultant's arsenal. There are several very good security guides for Windows out there, such as the Windows 2000 Security Hardening Guide (which I wrote), the Windows Server 2003 Security Guide (which I designed much of the architecture for and helped develop), and the Windows XP Security Guide (which was based on the Windows Server 2003 Security Guide and used the same basic architecture). I also worked on getting the National Security Agency (NSA) endorsement for the Windows Server 2003 Guide.

That all being said, the Security Myths presentation may seem highly critical of security guides, and to some extent it is intended to be. You see, the problem, as I see it, with security guides is that far too many people put blind faith in them and use them as a substitute for proper risk management. The Security Myths presentation is designed specifically to make people question the common wisdom about security guides, so that they can put the guides to better use. There is a perception that if we simply apply a security guide we are done with security, or at the very least, we have done our "due diligence." I do not believe either is true. If the organization has analyzed the risks it is facing, has determined its threat profile, has worked on developing a set of countermeasures to those risks, and it turns out that its risk profile can be mitigated at least to some extent with a security guide, then that is a proper use of the guide. However, what far too many people do is start out by saying "Right, we need a security guide for these systems before we can deploy them, because otherwise they are insecure." That statement carries no understanding of the risks the systems are facing, no concept of what defense in depth means to that organization, and no real analysis of whether the guide really mitigates any of the threats they find interesting. In many cases it leads to staying with a much less secure system instead of moving to a much better one, because there is a security guide for the older, insecure system and not yet a guide for the new one - the one that was developed against a modern threat model. That type of decision is likely to decrease security, not increase it.

This can be taken to extremes. One of the topics covered in the Security Myths presentation is non-existent settings. Believe it or not, we find them on a regular basis in various security guides. My personal record is a guide from a defense agency in Europe that had six required settings that do not exist in the platform the guide applied to (four of those did not exist in any platform I know about). This may sound like something you can just shake off; however, the problem runs deeper than that. After one of the major security guides required two non-existent settings about four years ago, we started seeing many security auditors require those settings on all systems, claiming that otherwise those systems cannot possibly be compliant with HIPAA, SOX, GLB, or whatever other buzzword the audit is supposed to prove compliance with. Don't get me wrong, doing what is required to be compliant with all those regulations is very important - but applying non-existent security settings to your system will do nothing to get you there.

Unfortunately, applying existing, but improper, security settings will not get you there either. Many guides require tweaks that simply are not supported by Microsoft, such as modifying Access Control Lists on system files. Other settings are supported, but not widely tested, or simply inappropriate for many systems. For example, turning on SMB Message Signing, prior to Windows XP Service Pack 2 and Server 2003 Service Pack 1, mitigates a very important attack, but it may also incur an overhead of up to 40% on file transfers. In some environments those settings may still prove valuable and proper, but that is completely dependent on the environment. What some of the people using these guides fail to do is analyze whether the systems they are analyzing really need those settings, and whether the business needs of those systems permit their use. Instead, some auditor comes in, runs an automated tool against whichever security guide they have chosen to use, and then points out all the discrepancies. A good auditor would never just hand over a printout of the findings from a vulnerability assessment tool. A good auditor would work with the organization to analyze those findings in relation to the business needs and the risk management strategy, and help the organization determine what exposure it actually finds unacceptable.

Finally, security guides, as I mentioned earlier, are not a substitute for all the other parts of risk management. There is a common misconception that a security guide, by itself, makes your system secure. That is unfortunately not true. Together with other methods, such as server and domain isolation, and proper threat modeling and dependency mitigation, a guide is a highly valuable tool, but if the security guide is all you use, your system is probably only marginally more secure than it was to start with. In fact, in the most recent version of the "How To Get Your Network Hacked in 10 Easy Steps" presentation, the one developed in August of 2005, I actually perform the attack on systems hardened essentially to the military guidelines. There is a taped version of the presentation in the Listening Room at the Protect Your Windows Network site, in case you were not at TechEd Australia, or one of the other events last fall (spring down under) where I delivered it.

Today I got a question that reminded me that I have not written a whole lot about how to manage the accounts used by system administrators. The question was whether I could think of any reasons why you would share an administrative account between several people, other than for the sheer convenience of it.

My answer is that I cannot think of such a reason. There is one edge case, used in ultra-sensitive environments, where you share an account between multiple people, but each of them knows only a portion of the password. This is not done for convenience, though; it is done so that no single one of them can make changes without the others knowing. The system could only be compromised through collusion between them.

Other than that, I cannot think of a single good reason for sharing administrative accounts. In general, there are two extremes when it comes to this. At one extreme is a single account, used by everyone, for every purpose, on every system. At the other extreme is an account per purpose, per person. Somewhere in between is the happy medium.

We have tried getting closer to the multi-account extreme here at Microsoft, but it is causing some pain for administrators. We also use Smart Cards for high-level administrative accounts. I have heard of people who have as many as 28 of these Smart Cards to keep track of. This obviously increases security on one hand, but at the same time, you have to imagine that the administrators will eventually start looking for ways to circumvent the policy for their own convenience, thereby decreasing security. One might argue that they should be able to deal with this level of complexity and that this is why we pay them so much, but as most people do not think they are paid enough, they will try to make life easier on themselves.

Where exactly the happy medium lies probably differs by environment, by risk management philosophy, and by the quality of the administrators. However, unless you start with some form of analysis of the security requirements of the systems, and classify the systems into different categories of requirements, there is very little chance of getting a reasonable division of the accounts. Again, risk management and thinking about the security requirements underlie all the other things we do.

Yesterday I had a fascinating meeting where we discussed a number of theoretical concepts, including how we think about risk. Risk, of course, should be the driver in everything we do in information security, and risk management should be the discipline that guides us.

The problem with risk is that it is a very nebulous concept. Humans, and those of the management persuasion in particular, need more detail to make decisions. Consequently, we have methods for quantifying risk, such as the annualized loss expectancy (ALE) formula:

ALE = SLE*ARO

where SLE is the Single Loss Expectancy, or the cost of a single loss event, and ARO is the annualized rate of occurrence, or the probability of a loss event in a year. Together, these give us the ALE: a dollar cost per year of some risk.

The problem with thinking about risk solely in terms of the ALE is that it is far too simplistic. In another article I am working on, I develop the concept that if we implement any type of mitigation, it will modify the item we are securing. In other words, we also need to consider the impact of the mitigation of a risk item.

There are two ways mitigation measures impact us. The first is the cost to implement the mitigation itself. Ideally that cost should be certain, so let us call the cost Cm.

The second way the mitigation impacts us is in its side-effects. For example, if you require anti-virus software on all computers those computers may slow down, impacting productivity. There is only a chance that this will happen, so we also need a probability factor involved. Let us call the side-effect Sm and the probability Ps.

Putting all that together, we get a risk equation that looks like this:

Risk = SLE*ARO - [Cm + Sm*Ps]

This takes into account the cost of actually doing something about the risk. It says nothing, of course, about how we develop the measurements, nor about what is acceptable and what is not. Those items, as they say, are topics for further research.
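
To make the arithmetic concrete, here is a small sketch of the two formulas in Python. Every number in it is invented purely for illustration, not a measurement of anything:

# A minimal sketch of the ALE and adjusted risk formulas above.
# All figures below are made up for illustration only.

def ale(sle, aro):
    # Annualized Loss Expectancy: cost of one loss event times expected events per year.
    return sle * aro

def adjusted_risk(sle, aro, cm, sm, ps):
    # The extended formula: ALE minus the cost of the mitigation itself (Cm)
    # and the expected cost of its side-effects (Sm weighted by probability Ps).
    return ale(sle, aro) - (cm + sm * ps)

# Example: a $50,000 loss expected once every two years, mitigated by a
# $10,000 control whose side-effects would cost $5,000 with a 40% chance.
print(ale(50000, 0.5))                              # 25000.0
print(adjusted_risk(50000, 0.5, 10000, 5000, 0.4))  # 13000.0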

Just a quick note to let you know why your comments to my blog no longer show up automatically. It turns out that someone decided my blog was a good place to post ads for online pharmacies, gambling, and all that other stuff that we apparently do not get enough of in e-mail. The other day I deleted about 40 of those comments. Since there is apparently no automated spam filter on the blog system (there is a business opportunity for you), I set it up to require manual approval of all comments. As before, I have no problem with contrary opinions and will approve those like all others. However, I can't really approve comments more than a couple of times a day, and some days not at all, depending on my travel schedule, so your comments may be delayed. I still want them, so please keep posting them. Just be aware it will take a while to get them posted.

Oh, and if the guy posting the spam happens to read this, I have some choice words for you, but they are not suitable for print (even electronic).

About a year ago Steve Riley and I built a presentation based on a set of security myths we put into the book. It was one of the most popular presentations we have ever made, and we kept coming up with more myths every time we delivered it, or talked to people, or sat long enough on an airplane to think a bit. In this month's issue of TechNet Magazine we wrote up another batch of myths. There is currently no presentation scheduled on these (rejected at TechEd US) but if you really think we should build one, let us know.

The schedule for Spring 2006 is in full swing. Just in case anyone is interested in meeting up with me somewhere in the world (or has some new gig they think I should go to) I thought it makes sense to post my schedule here.

  • February 6 and 7 - Albuquerque, NM for a training course. Yes, I need training too
  • February 13 and 14 - Delivering a tech talk at UT Austin
  • February 20 - Chicago for a customer meeting
  • February 27 - March 3 - London (and environs) for the CISO summit and a set of customer meetings
  • March 2 - IT Pro Conference in Paris
  • March 6 - 15 - Australian Security Summits - Yes, it is true! My favorite Microsoft Subsidiary is organizing a new set of security summits. Here is the detailed schedule
    • March 7 - Sydney community event at the Microsoft Office, 1 Epping Road, North Ryde
    • March 8 - Sydney summit at Sofitel Wentworth.
    • March 9 - Community event in Canberra at the Rydges Eagle Hawk Resort. What's the best thing about Canberra? The road to Sydney! Closely matched by the fact that the Aussies buried their legislators alive
    • March 10 - Summit in Canberra at the National Museum of Australia
    • March 11 and 12 - scuba diving in Perth, or somewhere around there, maybe interspersed with a visit to the Little Creatures Brewery in Fremantle
    • March 13 - Community event in Perth at the Perth Town Hall
    • March 14 - Security Summit in Perth at the Hyatt Regency
  • March 16 - 18 - Not to be outdone by their neighbors on the West Island, the Kiwis are also doing a security summit, in Auckland, on the 17th.
  • March 27 -30 - Reykjavik, Iceland for Rocked 2006
  • April 10-14 - Vacation
  • April 25 - Security Summit in Dallas at the Convention Center
  • May 9 - 10 - Federal Security Summit West in Redmond, Washington.
  • May 17 - 25 - AusCERT 2006 on the Gold Coast. I will be presenting on app safety.
  • May 29 - Security Summit in Berlin, Germany
  • May 30 - TechNet event in Hamburg, Germany
  • May 31 - June 1 - Security summits in Sweden
  • June 6-7 - Conference in Oslo, Norway
  • June 8 - Meetings in Stuttgart, Germany
  • June 14 - 15 - TechEd North America in Boston, MA (yes, I know it is just called "TechEd" but don't you agree that's a bit conceited since there are 21 other events across the world?). This is the biggest event of the year for us, and it will probably sell out in April this year. Make sure you register early!
  • June 26 - Security Summit in Detroit at the Rock Financial Showplace, the Diamond center

It is interesting how some of the best security features in Windows receive either no attention or get criticized for the strangest reasons. Case in point: Windows Firewall is one of the best firewalls out there, and yet much of the talk about it consists of complaints that outbound filtering is disabled by default. I believe there are a lot of incorrect assumptions and outright myths about outbound filtering, but more about those further down. Let's look at the positive side first.

I really like Windows Firewall in Windows XP Service Pack 2 (SP2). It is lightweight, centrally manageable, does the job well, is unintrusive, and does something very critical: it protects the system at boot. That last one is crucial; we have seen many systems in the past get infected during boot even with a firewall turned on.

In Windows Vista, the firewall is getting even better. There are several new features, the most obvious being that finally the firewall is combined with IPsec. This makes a lot of sense. IPsec and the firewall fundamentally do closely related things. By combining them enterprises can administer the two using the same group policy interface and design policies that use the two in conjunction. In other words, enterprises that are implementing Server and Domain Isolation or Network Access Protection (NAP) will have more flexibility and a better interface for configuring it. Here is what the interface looks like in the recent builds:

The interface is specifically designed to make configuring Server and Domain Isolation and NAP easier. As I have said before, Server and Domain Isolation today, and NAP in the future, are two of the most promising security technologies we have. Integrating them into the firewall in this way is going to be tremendously powerful.

Another really great feature in the new firewall is that it can set rules based on three different types of networks. In Windows XP SP2 the concepts of a domain profile and a standard profile were introduced. When a domain controller was reachable the system used the domain profile, and when the domain controllers were not reachable the system used the standard profile. However, the administrator really had no ability to configure which of these was used on a particular network - all that could be configured was the ports and applications that were allowed in each. With Windows Vista there are three profiles: domain, private, and public. The domain profile works the same as it did in Windows XP, except that the detection logic has been much improved, resulting in a more reliable transition and fewer systems that think they should be using the standard profile when they are actually on the domain. The private profile is essentially new, and solves an important problem. Many of us have home networks, and we may want to be able to connect to a computer over particular protocols, such as SMB (Windows file sharing), on such networks, while blocking those protocols on public networks. However, there is no domain controller on those networks, so the domain profile cannot be used. In Windows XP our only option was to open those ports in the standard profile. In Windows Vista we will be able to open them in the private profile, which does not expose them when we are at Starbucks or the airport, because those networks would be public. When you connect the system to a new network it asks you whether that network is public or private, configures the system appropriately, and remembers the answer each time you connect to that network. You can also configure domain isolation rules based on the network type, as shown in this screenshot:

Building a firewall rule is also much simpler in Windows Vista. The new rules wizard, shown below, allows you to define all the usual types of rules, and also contains pre-defined rules for particular services.

There is also a "custom" rule (obscured by the dropdown above) which gives you all the flexibility you can expect from a firewall. Of course, you can very easily configure exactly how the rule behaves. For instance, if you want a rule that only allows IPsec-encrypted traffic - something you could do in Windows XP, but only through several steps - you simply select the right radio button on the appropriate wizard page:

Here you can configure that only authenticated connections can use this port or program. It really can't get much easier than that to configure Server and Domain Isolation.

There is much, much more in the firewall and in a simple blog post I just cannot describe it all. One very nifty feature is the ability to export and import rules. For example, consultants can build standard rule sets to provide particular types of functionality and then simply deploy those at multiple customer sites. I can see an entire consulting practice and partner ecosystem growing up around firewall rules.
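
To give a flavor of what this looks like from a script rather than the UI, here is a rough sketch that drives the netsh advfirewall commands in Windows Vista from Python. The rule name, port, and export path are placeholders I picked for illustration, not recommendations:

# Rough sketch, run from an elevated prompt on Windows Vista or later:
# add an inbound rule that is only active on private networks, then export
# the whole policy so it could be imported elsewhere with "netsh advfirewall import".
import subprocess

def run(args):
    # Echo the command and stop on the first failure.
    print("> " + " ".join(args))
    subprocess.run(args, check=True)

# Allow Windows file sharing (TCP 445) inbound, but only when the active
# profile is "private" (e.g., a home network), never on public networks.
run(["netsh", "advfirewall", "firewall", "add", "rule",
     "name=Example - SMB on home network", "dir=in", "action=allow",
     "protocol=TCP", "localport=445", "profile=private"])

# Export the current policy to a file a consultant could carry to other sites.
run(["netsh", "advfirewall", "export", r"C:\Temp\standard-policy.wfw"])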

Given all this, it is really unfortunate that all some people seem to be able to say is that, while the Windows Vista firewall "finally" provides outbound filtering, it is disabled by default (which is actually incorrect, see below for more details). This is then usually coupled with denigrating statements about how the Windows XP firewall does not provide outbound filtering and how this means nobody should use it.

Not only is the outbound filtering scenario that provides significant security value actually turned on by default in Windows Vista, but these claims also completely fail to account for a very simple engineering issue: any outbound host-based firewall filtering in Windows XP is really just meaningless as a security feature in my opinion. True, it stops some malware, today, but only because current malware has not been written to circumvent it. There simply are not enough environments that implement outbound rules for the mass market malware authors to need to worry about it. In an interactive attack the attacker can circumvent outbound filters at will. To see how, consider this.

Circumventing outbound host-based firewall filters can be accomplished in several ways, depending on the scenario of the actual attack. First, the vast majority of Windows XP users run as administrators, and any malware running as an administrator can disable the firewall entirely. Of course, even if the outbound filter requires interaction from the user to open a port, the malware can cause the user to be presented with a sufficiently enticing and comprehensible dialog, like this one, that explains that without clicking "Yes" they will not ever get to see the dancing pigs:

See, the problem is that when the user is running as an administrator, or the evil code runs as an administrator, there is a very good chance that either the user or the code will simply disable the protection. Of course, the user does not really see that dialog, because it is utterly meaningless to users. What the user actually processes is a dialog that looks more like this:

That is problem number one with outbound filtering. Given the choice between security and sufficiently enticing rewards, like dancing pigs, the dancing pigs will win every time. If the malware can either directly or indirectly turn off the protection, it will do so.

The second problem is that even if the user, for some inexplicable reason, clicked "No. Bug me again," or if the evil code is running under a low-privileged account, such as NetworkService, the malware can easily step right around the firewall in other ways. As long as the account the code is running as can open outbound connections on any port, the evil code can simply use that port. Aah, but outbound firewalls can limit outbound traffic on a particular port to a specific process. Not a problem; we just piggyback on an existing process that is allowed. Only if the recipient of the traffic filters based on both source and destination port - and extremely few services do that - is this technique for bypassing the firewall meaningful.

The key problem is that most people think outbound host-based firewall filtering will keep a compromised asset from attacking other assets. This is impossible. Putting protective measures on a compromised asset and asking it not to compromise any other assets simply does not work. Protection belongs on the asset you are trying to protect, not the one you are trying to protect against! Asking the bad guys not to steal stuff after they have already broken into your house is unlikely to be nearly as effective as keeping them from breaking into the house in the first place.

In addition, as the dialogs above suggest, the vast majority of users are unable to make intelligent security decisions based on the information presented. Presenting information that does allow them to make intelligent decisions is much harder than it sounds, because it would require the firewall to not just understand ports, protocols, and the application making the request, but also to understand what the request really is trying to do and what that means to the user. This information is very difficult to obtain programmatically. For instance, the fact that Microsoft Word is attempting to make an outbound connection is not nearly as interesting as what exactly Word is trying to do with that connection. A plethora of dialogs, particularly ones devoid of any information that helps an ordinary mortal make a security decision, are simply another fast-clicking exercise. We need to reduce the number of meaningless dialogs, not increase them, and outbound filtering firewalls do not particularly help there. While writing this article I went and looked at the sales documentation for a major host-based firewall vendor. They tout their firewall's outbound filtering and advice capabilities with a screen shot that says "Advice is not yet available for this program. Choose below or click More Info for assistance." Below that are two buttons with the texts "Allow" and "Deny." Well, that clarifies things tremendously! My mom will surely understand what that means: "Unless you click 'Allow' below you won't get to see the naked dancing pigs that you just spent 8 minutes downloading." I rest my case.

Fundamentally, it is incumbent on the administrator to configure all outbound filtering because the end user will not be able to, and once the administrator does that, if there are enough systems using the same protection mechanism, automated malware will just adapt and exploit the weaknesses mentioned above.

Now, given what I just said about outbound filtering, why is it even included in Windows Vista? Here is why: there is one particular area where outbound host-based firewall filtering provides real security value, but only in Windows Vista. In that operating system, services can run with a highly restricted token. In essence, each service has its own security identifier (SID), which is unique to that service and different even from the SIDs of other services running in the same account. This Service SID can be used to restrict access to resources, such as network ports. What that means is that even though two services run as NetworkService, they cannot manage each other's processes, and the firewall can be configured to allow only one of them to communicate out. If the other one, the blocked one, is compromised, it cannot hijack the allowed service and use its allowed port to communicate out. This functionality is another one of the very cool security features added to Windows Vista, and the new firewall uses it to provide real security value through outbound filtering. In fact, firewall filtering on service SIDs is enabled by default in Windows Vista. The rules are predefined in the HKLM\System\CurrentControlSet\services\sharedaccess\parameters\firewallpolicy\RestrictedServices registry key. Below you see a screen shot of that key:

Without the ability to keep a compromised process from hijacking another process, outbound host-based firewall filtering provides no protection from a compromised host. Because Service SIDs were added in Windows Vista, the firewall there can actually provide meaningful protection through outbound filtering; because Windows XP inherently lacks this ability, outbound filtering on Windows XP is meaningless from a security perspective.

That is, of course, unless the objective is simply policy enforcement - in other words, attempting to stop non-malicious processes from accidentally communicating out. Some of that you can do with IPsec today, with no additional functionality needed on Windows XP. The new firewall in Windows Vista will give network administrators more complete desktop policy enforcement power. It will allow them to write whatever filters they need to enforce their organizational policies and, unlike in many Windows XP deployments, have better confidence that users will have a much harder time overriding them, since far fewer users need to run as administrators.
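
For the curious, here is a minimal, read-only sketch that prints the predefined service restriction rules from the RestrictedServices key mentioned above, using Python's standard winreg module. The exact layout of subkeys under that key may vary between builds, so take it as a sketch, not gospel:

# Minimal, read-only sketch: print the predefined per-service firewall
# restriction rules. Assumes Windows Vista or later.
import winreg

KEY = (r"System\CurrentControlSet\Services\SharedAccess"
       r"\Parameters\FirewallPolicy\RestrictedServices")

def dump(key, path=""):
    # Print every value (name and rule string) under this key, then recurse.
    i = 0
    while True:
        try:
            name, value, _ = winreg.EnumValue(key, i)
        except OSError:      # no more values
            break
        print(path + name + ": " + str(value))
        i += 1
    j = 0
    while True:
        try:
            sub = winreg.EnumKey(key, j)
        except OSError:      # no more subkeys
            break
        with winreg.OpenKey(key, sub) as subkey:
            dump(subkey, path + sub + "\\")
        j += 1

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as root:
    dump(root)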

Today I received a message that purports to be from Discover regarding a 5% cashback program on gas purchases on that card. (For the non-American readers, Discover is a credit card widely used in the U.S.) The e-mail had a couple of links to click, both of which were disabled by Outlook since the e-mail was classified as junk mail.

The e-mail contains no information to verify that it is indeed from Discover. The links are disabled for security reasons by Internet Explorer and Microsoft Outlook. In fact, there is not even a plain-text link in the e-mail that you can copy and paste. You would have to know to view the source code for the e-mail to see the URL. If you go to the site that is linked to in the e-mail you find that it does not use HTTPS, but plain HTTP. That site eventually forwards you to the "Account Center," which presents a logon page that is plain HTTP, although the form gets submitted to an HTTPS site. In other words, you cannot verify the identity of the site you are submitting your logon password to, even though the password will actually go encrypted across the wire. Once you log on, assuming you trust the site enough to do that, there is no mention of this offer. In short, there is no way I could find to verify the authenticity of this e-mail.

In this day and age of credit card spoofing, how is a customer supposed to verify that the mail received is actually from Discover when there is no information on how to do so, and the security verifiers are hidden? This is sad, given how many fake messages of this nature most of us get every day. One would hope that credit card companies would start making it easier and more obvious to verify that what they are sending is indeed legitimate.

This is a post I was asked to do a while ago and have been procrastinating on. I apologize for that. For various reasons, every so often, certain FAQ items come up again. One of them is whether certain password policies are enforced when a system is not on the domain. It makes sense to once and for all declare how it really works.

When a domain-joined system is not connected to the domain you can still log on to it with cached credentials. I have addressed how those work both in the book and in a separate article, so I will not go into that here. However, it is important to understand that when you log on with cached credentials certain password policies cannot be enforced. The policies that do not get enforced are the account lockout and password expiration settings.

The reason these are not enforced has to do with how they are enforced on domain-joined systems. Let's look at each in turn. Password expiration is easiest, so we will do that first.

When a credential is cached locally, it is cached by doing another cipher operation on it and then storing it in the security key in the registry. Only the credential is cached, however. In the case of a password logon that means the password hash. In the case of a smart card logon both the public key and the password hash are cached. No other information about the account is cached. That means that when you get logged on with cached credentials the system has no knowledge of the expiration date of the password, and therefore cannot enforce it. Of course, we could cache the expiration date, but two problems would occur if we did. The first, and most obvious, is that it could prevent you from logging on at all. If you have to log on with cached credentials you cannot reach the DC. If you cannot reach the DC you cannot change the password. If you cannot change the password you cannot extend the expiration, so the only option the system has is to block the logon. You can imagine the havoc this would cause. You are on an airplane. You have to log on to do some work. You can't, because your password has expired. Now your options are to either use the horrendously expensive AirPhone, or the slightly less horrendously expensive in-air WLAN, if either is available, to VPN in so you can change your password. Alternatively, you turn off your system, have a glass of The Glenlivet, and watch the inflight movie instead. (Now that I think about it, that's not so much of a problem really, is it?) The second problem you would run into is that the administrator may well have changed the expiration date on the account on the domain while you were gone. However, the roaming system does not know that and cannot enforce it. Obviously that is much less of an issue, but it could still cause problems.

The second setting that is not enforced when disconnected is the Account Lockout setting. If you have read my writings in the past you know that my opinion is that you should never use account lockout anyway, but it is also not enforced. The reason has to do with how it is enforced when you are on the domain. When a user logs on to a system the request is handled by a domain controller (DC). If the authentication fails on that DC the request is forwarded to the Primary DC (PDC, or in the case of Windows 2000 and higher, the PDC Emulator). Only if it fails there is the account lockout threshold counter incremented. The lockout counter is actually maintained on the PDC. Each DC has a local account lockout counter, but only if the PDC is not available are those used. If it were maintained and incremented on each DC then it would have to be synchronized with all the DCs, otherwise the user would get n tries per DC. By forwarding to the PDC we avoid that issue, and the associated performance hit with it. However, since account lockout is only tracked on the PDC, by definition, it is not enforced when the system cannot reach the PDC. If it were, we would run into the same synchronization problems we would have on the domain. A user could try n passwords on the client and then another n on the DC. Further, what happens on the client if the account is locked out? How does it get unlocked? Only domain admins can do that. Here again is the "airplane problem," except that in this case, just VPNing in would not resolve the problem. Requiring users to take their machine back into the network just to get their accounts unlocked is untenable. Therefore we do not enforce account lockout at all when the machine is disconnected.

There are a couple of related KB articles on this, such as 297157 and 906305. However, as far as I know there is nothing stating explicitly what I say above.
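
One related knob, while we are on the subject: how many logons the machine caches for offline use in the first place. That is controlled by the CachedLogonsCount value under the Winlogon key in the registry; the default is 10, and setting it to 0 disables cached logons entirely. Here is a minimal, read-only sketch in Python to check the current setting:

# Read-only sketch: print how many logons this machine will cache for
# offline use. CachedLogonsCount is stored as a string; if it is absent,
# the effective default is 10.
import winreg

KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    try:
        count, _ = winreg.QueryValueEx(key, "CachedLogonsCount")
    except OSError:
        count = "10 (default, not set explicitly)"
    print("Cached logons kept for offline use: " + str(count))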

Several times in the past year someone has brought up an issue where they needed to "temporarily" grant someone administrative privilege to a system or a domain. Each time my answer has been the same: "why not just put them in the Administrators group then and leave them there?" The response to this is invariably that they do not trust the people to be administrators.

The crux of the issue is that there is no such thing as a "temporary administrator." A malicious user who is an administrator long enough to execute a couple of lines of code will have those privileges until the system, and all the systems that have two-way dependencies with it, are rebuilt. A couple of lines of code, maybe just one if you are good, is all it takes to permanently remain an administrator. Hence my question: if you do not trust someone enough to make them a bona fide permanent administrator, then you do not trust them enough to make them a "temporary administrator." In that case, you need to find another way to do what it is they need to do.
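
Just to make the point concrete, here is what "a couple of lines" can literally look like. This is an illustration of why the trust question matters, not a recipe, and the account name and password are made up:

# Illustration only: two commands are enough for a "temporary" administrator
# to leave behind a permanent one. The account name and password are made up.
import subprocess

subprocess.run(["net", "user", "helpdesk2", "Pa55word!", "/add"], check=True)
subprocess.run(["net", "localgroup", "Administrators", "helpdesk2", "/add"], check=True)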

Keep in mind, too, that we are not just talking about malicious administrators here. If you are an administrator, and make a mistake while you are an administrator, that mistake may remain even after you remove yourself from the Administrators group. Maybe that requires an example: let's say you are building a new system. To install everything, you add yourself to the administrators group. While you are installing all the patches you need, you decide to surf the web to check out some site, but accidentally fat-finger your favorite web site and end up somewhere you did not intend to. In a worst-case scenario that site takes advantage of one of the vulnerabilities you have not patched yet and installs a rootkit on your system. You quickly hit ALT+F4 to close the site, finish installing the patches, and take yourself out of the Administrators group. Is the rootkit now gone? Noohooo. It is still there, and will remain there until you use the rootkit removal tool: format c:\ (from neutral read-only media).

Every parent knows that the main reason you have kids is for the comic relief they provide. However, watching them grow up is also fascinating.

Yesterday my oldest son, who is now seven and a half, and I were sitting in front of the TV when he asked what I was doing. It so happened that I was setting up the AutoAdminLogon feature on the Media Center PC connected to the TV so that it logs on to the right account automatically at boot. I explained this to him, and his first reaction was "but doesn't that mean burglars and thieves can get to our movies and pictures?"

This just floored me. I can't get my wife to think about security (not when there are knitting pigs involved), and here is my son with his first actual security question. We ended up having a long chat about this. First I asked if he knew what I was doing for a living, and of course, he did not. He just knew I travelled a lot. So I explained it. Then I asked him why he figured this was a risk, and he explained that without a password people can get to your movies. So we discussed why this might be a problem in some cases, and not in others. When I asked him whether he thought the risk was high that bad guys would break into our house and steal our movies, he said that was probably not why they would come here. In the end, we decided that if bad guys are in our house we have bigger problems than them sitting down and watching TV on the Media Center. In fact, if they did, it would make it easier to catch them.

This was a great little risk management discussion, and it proves one thing: even children can assess and make decisions about risk. We are not at all incapable of doing so; we are just not very good at perceiving the right risks. We do have the capacity to do it, if we put our minds to it, open our minds to the whole picture, get rid of preconceived notions, and consider all the factors. However, we all have preconceived notions, and they get in the way of risk management; they are one of the things that make us bad at perceiving risk. Risk management, in the end, is not that hard. It is getting rid of our preconceptions and actually perceiving the right risks that is hard for people.
