
Bill Gates & Steve Ballmer Email to Employees - Retirement Plans

I'm sitting here having security discussions with some of our top customers and I just received the emails from Steve and Bill announcing Bill's plan to depart in two years.  Bill's success story has been historic in scope and this change is no doubt a historic event for Microsoft as well.

Here is a transcript of Steve and Bill's remarks: Written Transcript, Steve Ballmer and Bill Gates Remarks

This link (PressPass) has the press release and a bunch of other related links as well.

Best regards ~ Jeff

posted by jrjones | 0 Comments

Windows Vista: Threat-driven Design combined with Security Quality Process

What is the difference between foundational security and security features?

Name 3 security companies.  Who did you name?  Symantec?  Checkpoint?  RSA?  ISS? These companies all offer products that provide security features or capabilities. 

What if Microsoft had no firewall?  What if we had no PKI and certificate services?  What if we had no plans for Forefront Security products?  Would those of us in the Security Technology Unit (STU) be out of work?

No.  Many of us are not focused on products and features; we're focused on security that is more foundational and inherent to all software, not just security features or capabilities.  We want to (a) reduce security flaws in software, (b) reduce the exploitability of flaws that aren't found before ship, and (c) make the flaws that do remain easier to mitigate.

So, when I think of security in Windows Vista, I think about design changes driven by threat modeling, such as ASLR, /GS, the NX flag, attack surface reduction, /SafeSEH and service hardening.
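To make one of those concrete, here is a toy sketch of my own in Python of the idea behind a /GS-style stack cookie (only an analogy -- the real mechanism is compiler-generated prologue/epilogue code guarding the native stack): a random value sits between a buffer and the data it protects, and the function refuses to return normally if that value has been disturbed.

import os

CANARY = os.urandom(4)  # per-run random cookie, loosely analogous to a /GS cookie

def vulnerable_copy(frame: bytearray, data: bytes) -> None:
    """Copy with no bounds check -- the kind of bug the cookie helps contain."""
    frame[:len(data)] = data

def call_with(data: bytes) -> None:
    # Toy "stack frame": an 8-byte local buffer followed by the 4-byte cookie.
    frame = bytearray(8) + bytearray(CANARY)
    vulnerable_copy(frame, data)
    if bytes(frame[8:12]) != CANARY:  # cookie check before "returning"
        raise RuntimeError("cookie corrupted -- abort rather than trust the frame")
    print("returned normally")

call_with(b"ok")          # fits in the buffer, cookie intact
try:
    call_with(b"A" * 12)  # overflow reaches the cookie and is detected
except RuntimeError as err:
    print(err)

The other items in that list work along the same damage-containment lines: NX/DEP refuses to execute data pages, ASLR makes addresses unpredictable, and /SafeSEH restricts which exception handlers may run.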

For an excellent description of how this applies to Windows Vista, read Mike Howard's latest blog post that describes the bigger picture of Windows Vista security.

posted by jrjones | 0 Comments

Trend Micro CTO hints that Trend will Open Source Code

In a stunning revelation in Trend Micro: Open source is more secure, Trend CTO Raimund Genes hints that Trend may release their code as an open source project!

Though Genes stopped short of actually saying that Trend would be releasing their code and joining the Free Software movement, there are only two possible obvious conclusions from his statements made to CNET:

"Open source is more secure. Period," Raimund Genes, chief technical officer for anti-malware at Trend, said. "More people control the code base; they can react immediately to vulnerabilities; and open source doesn't have so much of a problem with legacy code because of the number of distributions."

With the Trend CTO holding forth such strong positions on Open Source, these statements could be a foreshadowing that Trend, currently offering only Closed Source software products themselves, will soon be opening their own source to the world.  One exciting aspect of this possibility is that it would enable others to build upon the Trend history in the antivirus space and offer competitive products based upon Trend's own source code!

Genes did not offer any explanations as to why Trend Micro, as a security company, had been offering "less secure", closed source products to their customers up until now, nor did he offer any comments as to how long they would continue to pursue their "less secure" model of closed source software.

[NOTE:  For those who did not detect the inherent sarcasm in the text above, this note is here to reveal said sarcasm.  Personally, I believe security of software depends on many factors beyond the Open/Closed source model.  See Workload Vulnerability Index for one example where Open Source results in many more vulnerabilities.]

Linus’s Law aka "Many Eyes Make All Bugs Shallow"

How many of you have heard “many eyes make all bugs shallow”?  My guess is that many of you have, and that it may have been in conjunction with an argument supporting why Linux and Open Source products have better security.  For example, Red Hat publishes a document at www.redhat.com/whitepapers/services/Open_Source_Security5.pdf, commissioned from TruSecure (www.trusecure.com), which has a whole section called “Strength in Numbers: The Security of ‘Many Eyeballs’” and says:

The security benefits of open source software stem directly from its openness. Known as the “many eyeballs” theory, it explains what we instinctively know to be true – that an operating system or application will be more secure when you can inspect the code, share it with experts and other members of your user community, identify potential problems and create fixes quickly.

 

It reads pretty well, but there are a few small problems.  For one, nothing really ties the second sentence (the key one) back to the first one.  Secondly, the ability to inspect code (“can”) does not confirm that it actually gets inspected.  Let me emphasize by applying similar marketing speak to a similar claim for closed source:

 

The security benefits of closed source software stem directly from its quality processes. Known as quality assurance, it explains what we instinctively know to be true – that an operating system or application will be more secure when qualified persons do inspect the code, [deleted unnecessary] identify potential problems and create fixes quickly.

 

I would argue that both statements are equally true or false, depending on the reality behind the implied assumptions.  For example, if qualified people are inspecting all parts of the open source with the intent of finding and fixing security issues, it is probably true.  For the latter, if a closed source org does have a good quality process, they are likely finding and fixing more security issues than if they did not have that process.

 

Going Back to the Source:  The Cathedral and the Bazaar

 

Now I’ll ask a different question – how many of you have actually read The Cathedral and the Bazaar (CATB) by Eric S. Raymond (henceforth referred to as ESR)?  Shame on you if you have not.  It is really interesting, and to me, it asks more interesting questions than it answers … though I’ll try not to digress too much or too far.  Keeping to the core idea I want to discuss, let’s look at lesson #8 in CATB, as quoted:

Linus was directly aiming to maximize the number of person-hours thrown at debugging and development, even at the possible cost of instability in the code and user-base burnout if any serious bug proved intractable. Linus was behaving as though he believed something like this:

 

8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

 

Or, less formally, ``Given enough eyeballs, all bugs are shallow.'' I dub this: ``Linus's Law''.

 

Even these statements have some implicit assumptions (i.e., that the code churn doesn’t cause new problems quicker than the old ones are solved), but as I read through the lead-in context and rule #8, I can’t find anything to disagree with.  What I will note is that nothing in this limits his observation to Open Source.  Since many later references use the less formal “given enough eyeballs” paraphrase, it does mentally prompt one to think about visual inspection; however, the original lesson doesn’t refer to visual inspection at all!

 

Though ESR was making observations and drawing lessons from Linus’ Linux experience and his own fetchmail experience, I assert that his lessons can be applied more broadly to any software.  Going a bit further in the text, we find another important part of the discussion:

My original formulation was that every problem ``will be transparent to somebody''. Linus demurred that the person who understands and fixes the problem is not necessarily or even usually the person who first characterizes it. ``Somebody finds the problem,'' he says, ``and somebody else understands it. And I'll go on record as saying that finding it is the bigger challenge.''

 

So, in finding and fixing issues, you need:

  • “Many eyes” identifying the issues, or a large enough beta-tester base (to take from lesson #8) so that almost every problem will be characterized, and
  • Enough developers working on fixing issues so that a fix can be developed and deployed

 

ESR chronicles a lot of interesting stuff in CATB and enumerates it as lessons, but one key enabler he does not elaborate upon is the ability to communicate cheaply and quickly with his users/co-developers.  At the time, he used a mailing list.  UUCP news and public file servers were also available for communication and for sharing code and files.  What did this allow?  It allowed him to pretty easily find and connect with the roughly 300 people in the Western developed nations who shared his interest in an improved POP client / fetchmail.  Even 10 years prior, this would have been much more difficult.  But I digress too much … suffice it to say that cheap and easy communication and sharing made a distributed, volunteer, virtual team possible.

 

Applying the “Many Eyes” Lessons To Commercial Software

 

8. Given a large enough beta-tester and co-developer base, almost every problem will be characterized quickly and the fix obvious to someone.

 

ESR contrasted two testing models.  Rather than paraphrase, it seems simplest to quote what he says next in CATB:

In Linus's Law, I think, lies the core difference underlying the cathedral-builder and bazaar styles. In the cathedral-builder view of programming, bugs and development problems are tricky, insidious, deep phenomena. It takes months of scrutiny by a dedicated few to develop confidence that you've winkled them all out. Thus the long release intervals, and the inevitable disappointment when long-awaited releases are not perfect.

 

In the bazaar view, on the other hand, you assume that bugs are generally shallow phenomena—or, at least, that they turn shallow pretty quickly when exposed to a thousand eager co-developers pounding on every single new release. Accordingly you release often in order to get more corrections, and as a beneficial side effect you have less to lose if an occasional botch gets out the door.

 

Now, in my experience with commercial products, I can honestly say I never thought of development problems as either deep or shallow.  I thought of flaws as being across a spectrum, where some were simpler and easier to find and others might be deeper and have more challenging pre-conditions to replicate (e.g. timing, state).  I think that would apply to open or closed source.

 

So, ultimately, my analysis of what ESR describes is different in that I see the key difference as time and resources.  The Bazaar model (as he described) created a situation where more resources for both finding and fixing bugs were applied in parallel.  The Cathedral model (as he described) had (by implication) fewer resources that (therefore) needed to work over a longer period of time to achieve a similar level of quality.  This resource analysis makes sense to me, especially if you leave the models out of the equation for a moment.

 

Let’s step back.  What if you had an Open Source project working on a product where there were 5 core developers and about 20 co-developing users?  What if you had a comparable Closed Source project with 50 developers and 50 testers?  Assume both products have 500 active users over a one-year period reporting problems and requesting enhancements.  Does it seem likely that the Open Source project will find and fix more bugs simply because it is Open Source?  No.  The number of “eyes” matters, but so does the number of actively contributing developers.  This is consistent with what ESR says (…a large enough beta-tester and co-developer base…), but is not consistent with the common usage of the “many eyes” theory as quoted frequently in the press.

 

How can commercial companies apply this?  First, set up a process that facilitates reasonably frequent releases to large numbers of active users that will find and report problems.  Next, ensure that you have enough developers to fix the reported issues that meet your quality bar.  The CATB also identifies a need for problem reports to have an efficiency of communication that makes the problems easy to replicate and enables the developers to quickly solve the problem.  Finally, there are several more rules which are about being customer-focused, which any product manager would endorse:

7. Release early. Release often. And listen to your customers.

10. If you treat your beta-testers as if they're your most valuable resource, they will respond by becoming your most valuable resource.

11. The next best thing to having good ideas is recognizing good ideas from your users. Sometimes the latter is better.

 

The “Many Eyes” of Microsoft

 

Finally, I would like to think about these issues in the context of how Microsoft currently releases products.

 

First, let’s take the core “many eyes” and consider “Given a large enough beta-tester and co-developer base…”  In CATB, Eric mentions that at its high point, he had over 300 people on his active mailing list contributing to the feedback process.  There are multiple levels at which the Microsoft product lifecycle seems to work towards achieving many eyes.

 

Furthest from the development process are the millions and millions of users.  Take a product like Windows Server 2003, the next generation of Windows 2000 Server, and you find it has benefited from every bug report from every user in terms of (what Linus described as the harder problem) making bugs shallow.  In the more recent product generations, the communication process has been advanced in a technical way by Windows Error Reporting (WER, aka Watson) and Online Crash Analysis (OCA).  Vince Orgovan and Will Dykstra gave a good presentation on the benefits of WER/OCA at WinHEC 2004 (which you can read here).  OCA also addresses another problem raised by CATB, that of communicating sufficient detail so a developer can properly diagnose a problem.  One might argue that a large percentage of users do not choose to send crash details back to Microsoft for analysis, and that brings us to the next item – Betas.

 

Microsoft releases Beta versions of products that see very high levels of day-to-day usage before final release.  When Windows XP SP2 was developed and released on a shortened one-year schedule, it benefited from over 1 million Beta users during the process – each one installing and using their own combination of shareware, local utilities, custom-developed applications and legacy applications on thousands of combinations of hardware.  I’ve been running Windows Defender (aka Antispyware) along with many other users for about 1.5 years now, through several releases.

 

Even before the external Beta stage, Microsoft employees are helping the product teams “release early and often” by dogfooding products internally.  Incidentally, I am writing this entry using Office 12 Beta running on Windows Vista Beta2.  Dogfood testers may not seem like a lot until you consider that there are 55,000 employees and well over half of them will probably dogfood the major products.  Those high numbers of dogfood testers will certainly exercise OCA and will also run the internally deployed stress tools to help shake bugs out of the products.

 

There are other mechanisms I won’t go into in detail like customer councils, focus groups, customer feedback via support channels, feature requests, not to mention the Product Development and Quality Assurance teams themselves utilizing a variety of traditional and modern tools to find and fix issues.  The core process has even been augmented as described in The Trustworthy Computing Security Development Lifecycle to include threat modeling and source code annotation.

 

I could go on, but I think you get the picture.  Hopefully, this will stir some folks to think beyond the superficial meaning of “many eyes make all bugs shallow” the next time someone throws it out as a blind attempt to assert the superior security of Open Source.

 

Jeff

Artima: Microsoft Under Attack

A new article called Microsoft Under Attack summarizes itself by saying:

Not by angry customers suing for damages after security breaches, or by governments breaking up monopolies, but by open source developers and security professionals accusing them of being obsessed by security.

The article goes on to chronicle a panel discussion, "Should companies be emulating Microsoft’s Security Development Lifecycle?", moderated by the author at the OWASP Europe conference in Leuven.

Reading through the comments, one reader asks "Can you give an example of where a MS product has superceded a comparable open-source project in terms of security?"

I suppose that depends on your definition of security, but I took it to mean "software having less serious vulnerabilities for hackers to potentially exploit" and posted my own reply.  The short answer is that there are more and more examples the longer that Microsoft applies SDL and other security programs while comparable open source projects claim that they don't need to pursue similar security goals (due to "many eyes" or whatever reasons).

I think my previous posting on the Red Hat Workload Vulnerability Index is one good example of  a (not-defined-by-Microsoft) metric comparing the results of differing development processes.

Web-based Security Deja-Vu: Microsoft OneCare Live, Symantec Genesis and McAfee Falcon

Windows Live OneCare has made its debut, amid various comments about this being a new category of security product – and apparently it is a hot new category, to judge from the established antivirus vendors and the press activity.  Symantec announced in February that it will have a competitive product, code-named Genesis, and McAfee announced this past week its own product in the same space, code-named Falcon.

As always, exciting stuff going on in the land of security.  New product categories are almost as good as acquisitions and much better than simple product releases.

Well, from my perspective this is Deja Vu All Over Again.  Read why I think so in full detail in what I call A (Not Always Funny) History and Analysis of Web-Based Antivirus and Security Products.  [Update: Originally an article, I changed this to be a blog entry just below due to the results of my empirical study on how many people do not click through to read articles.]

In summary, the first web-based security (antivirus) product was available 10 years ago, but there hasn't been much traction.  Thinking about it, I can see some of the reasons why and, optimistically, why these new entries could mean positive changes for home users.

Let me know what you think.  ~Jeff

posted by jrjones | 0 Comments

A (Not Always Funny) History and Analysis of Web-Based Antivirus and Security Products

When I first read (in 2006) about the “new category for security products” represented by Microsoft OneCare Live, Symantec Genesis and McAfee Falcon, I must admit to a small chuckle.  In my AV days, I saw a few of these web security products launched, each of which did a big belly flop.  Maybe it will be different this time, we’ll have to wait and see.

DISCLOSURE:  Before we go further, I should confess that I ran product management for McAfee corporate antivirus products from 1998 to 2001.  [I never came near the consumer products, I swear!]  This either makes my opinions more informed or simply biased – you pick.

For fun, I went back and researched a little history on web-based security products, which I’ll share below along with my own very personal opinions about them, their success, etc.  I did not work on any of these products, though some did use the same AV engine as our corporate products.

Web-Based Antivirus Timeline – 1996 to 2002

1996 – Dr. Web.   I’ll be honest and say I had never heard of these guys before today.  That may or may not say something about their industry impact.  However, they claim to have been providing a free web-based virus scanning service since 1996 and I see no reason to doubt them.  I’m not sure, but I think they allowed you to scan one file.  I’m sure I’ll be corrected if wrong.

1997 (May) – Trend Micro Housecall.  A full on-demand virus scanner, and it was also free.  This was a great move for Trend.  If you recall the AV market at the time, Symantec ruled for moving retail boxes, McAfee was the leader in AV for business, and Trend had elbowed its way in by focusing on being the Internet gateway AV scanner of choice.  Smart move to offer free desktop scans, though it did lack a real-time, on-access ability.

1999 (December) – McAfee.COM VirusScan Online.  In a branding move that confused customers for quite a while (IMO), Network Associates (NAI, the merger of Network General and McAfee Associates) spins out a dotCOM subsidiary, www.mcafee.com, to focus on retail customers with a web-based antivirus product, VirusScan Online.  The stock IPO was very successful, skyrocketing from $12 to $54 in under 2 months, though it is less clear if the business strategy was successful.  NAI retained the corporate antivirus product business and the retail box product business.  The virus offering did include a real-time, on-access scanner that was installed as part of the subscription, which was a first.  In addition to antivirus, mcafee.com offered:

  • Oil Change Online – a service for finding patches and keeping all of your components, drivers, etc., up to date.
  • Uninstaller Online – self-explanatory.
  • First Aid Online – think Norton Utilities as a web service, at least in concept.

You could also subscribe to all of these for a reduced bundle price – NAI was big on bundled subscription pricing.  Very holistic offering, I’m sure you will agree. 

The business did okay, I think largely because they got to leverage the McAfee AV brand, so a lot of retail and small-business customers, confused by the two companies, purchased from the dotCOM.  Incidentally, they very quickly discovered they couldn’t grow revenue as fast as they wanted by focusing on home users in a largely dial-up world and shifted their strategy to also focus on small business, thus competing directly with NAI corporate, diverting rather than growing revenue.  On the other hand, the (more likely, in my opinion) primary goal of getting some favored Company Officers quickly rich on dotCOM IPO stock options seemed a huge success.

2000 (January) – myCIO.com VirusScanASaP.  Not having enough self-competition and brand confusion (or possibly to pacify some executive who didn’t get enough mcafee.com stock – who can really know about these things?), NAI set up myCIO.com, to be run by Zach Nelson (now CEO of NetSuite).  For the trivia-minded, Mr. Nelson had been the NAI marketing executive who did the sponsorship deal for the Oakland Coliseum (wikipedia info).  This second dotCOM subsidiary was very different from mcafee.com, in that its strategy was to offer web-based security products with names like VirusScanASaP (ASP … get it?) to … small and medium businesses.  myCIO.com also delivered PC Firewall ASaP, which is not really a web service, but a web-installed product with frequent connections back to a web server for updating and enablement enforcement.

NOTE:  To clarify my previous disclosure, I want to emphasize that I had nothing to do with these crazy multiple self-competitive product strategies.  I can say it wasn’t a ton of fun in the land of Corporate AV with NAI setting up competitors against us in growth markets, but there you are.

2001 (April) – McAfee ASaP.  McAfee ASaP comes into being when Network Associates “spins back in” myCIO.com.  This leaves only McAfee.com as a separate company offering Virusscan Online.

2001 (December) – BitDefender Online Scan.  I don’t remember these guys at that time, but they announced their success 6 months later, so hey, I’ll give them credit.

2002 (March) – NAI to Buy Back Complete Control of McAfee.com.   Bringing back the web-based security products to what was later to be rebranded back to McAfee.

An Informal, Off-the-cuff Analysis of the Web-based AV and Security Success to Date

That brings us up to four years ago.  Trend Micro, as first mover 10 years ago, has not converted any of their core product business over to web services and still uses Housecall as primarily a free tool for marketing benefit.  Symantec has not (wisely, it seems to me) bothered to jump in at all in the past 10 years until Microsoft announced OneCare, which makes one wonder – why now?  Is it only the need to show competition or has something changed about what customers want?

The AV Industry Dirty Little Secret

Antivirus companies make most of their money from selling to businesses in the corporate AV market.  One might compose a compelling argument that McAfee treated their retail product more for its marketing value than for net revenue.  You don’t have to look hard to see the signs: who hasn’t seen the offerings from Symantec or McAfee that give you a $30 rebate for a $30 product at Fry’s or Best Buy?  And frankly, there are plenty of free offerings for personal use of antivirus products like AVG and AntiVir, even if you ignore products like Trend Housecall or the BitDefender free online scan.  Symantec, under John Thompson, has moved from being the only AV vendor having some measure of success in the retail business (due to good channel practices and OEM deals) to being a big player in corporate antivirus as well.

So here is the not-so-secret: the antivirus industry is focused on businesses and not home users.  Home users are only willing to pay so much to keep their PCs healthy and, more to the point, they don’t want to be bothered.  I’m a security guy, and when the Symantec nagware kicked in after 6 months for the AV portion of the “integrated security product”, I uninstalled not just the AV, but the personal firewall and other stuff as well – it just annoyed me too much.  Then I went and downloaded a good free product that gets VB100 scores, and now I’m happy.  I won’t even talk about conflicts or how the retail products don’t measure up in quality compared to their corporate counterparts.  But why would this change?  Symantec and McAfee are publicly traded companies – if the home user market is limited in terms of revenue, then it doesn’t make sense to change the strategy much.

Microsoft as Home User Change Agent

NOTE:  I don’t have anything to do with antivirus products at Microsoft, as I focus more on improving core security quality. I don’t even know the OneCare team, so don’t read my opinions as some insider secret – they are NOT!

So, if there isn’t a lot of profit justification to deliver a home user-focused security product, web-based or not, then what is the motivation for these new web-based products like OneCare Live, Genesis and Falcon?

Well, for Microsoft, if it isn’t the money, what is it?  Viruses and worms reflect badly on the platforms that they target.  In order to improve the general health of the security ecosystem, there needs to be a great, integrated, easy-to-install, easy-to-use host security product so that home users will utilize it and keep it up to date.  Improving the security and limiting negative user experience on Windows, Office, Exchange, etc., can only help Microsoft – so that is motivation enough to invest in home user security products even if traditional vendors consider it a second priority.  Web-based just makes sense as a quick and easy delivery and update mechanism, given how pervasive broadband is today.

But what would it mean for traditional antivirus vendors if everybody started using a non-McAfee, non-Symantec antivirus product on their home machines and it just quietly updated and worked?  Might those people go to work and, when it came time for subscription renewal, think, “hey, why not use the same stuff here at work?”  That could be a problem for them, so Microsoft in effect becomes their motivation to invest in creating an easier-to-deploy, easier-to-update, easier-to-use, value-added security offering for home users.

Isn’t competition great?

posted by jrjones | 1 Comments

New Enterprise Linux - Ubuntu

For business use, the largest driver of Linux adoption has been the Enterprise Linux releases.  Product names aside, I am referring to those Linux-based distributions that offer longer, multi-year support commitments for a version of the product.  To date, the primary examples of this (and not coincidentally market leaders) have been Red Hat Enterprise Linux, Novell SuSE Linux Enterprise Server and Mandriva Linux.

Matt Zimmerman of the Ubuntu team has just announced that:

Ubuntu, Kubuntu and Edubuntu 6.06 LTS will be the first Ubuntu releases with long-term support: three years on the desktop, and five years on the server.

I haven't installed it to check it out yet, but I probably will once it is released.

Ubuntu, like the other Enterprise Linux vendors, has its share of security vulnerabilities, which can be viewed at http://www.ubuntu.com/usn.  I've not analyzed these in the past given the short lifecycle, but now that they offer a more enterprise-length lifecycle, I may start.  It'll be interesting to compare their first year with Windows Server 2003 and with the first year of Red Hat Enterprise Linux 4, as reported in Red Hat RHEL4 Risk Report.

posted by jrjones | 0 Comments

Address Space Layout Randomization (ASLR) in Windows Vista Beta2?

UPDATE:  Mike Howard has posted to his blog, confirming David and providing details on the Vista ASLR features.

 

So, a couple of weeks ago, Jesper Johansson wrote about how the Windows Firewall was one of his favorite security features in Windows Vista.  My favorite security enhancements tend to be architectural security improvements.  I recall Data Execution Prevention and NX bit support as two good previous examples of this.

 

I've just noticed a full-disclosure post from David Litchfield of NGS Software, asserting that he has confirmed ASLR functionality as part of Vista Beta2.  I note that this security enhancement was not discussed as part of the Vista Security paper that was posted yesterday, but if it did make it into Beta2, it is a great enhancement for Vista.  In David's words:

Address Space Layout Randomization is now part of Vista as of beta 2 [1]. I wrote about ASLR on the Windows platform back in September last year [2] and noted that unless you rebase the image exe then little (not none!) is added. ASLR in Vista solves this so remote exploitation of overflows has just got a lot harder. I've not done a thorough analysis yet but, all going well, this is a fantastic way for Microsoft to go and builds on the work done with NX/DEP and stack cookies/canaries.

 

Cheers,

David Litchfield

With only a slight amount of searching I did find Buffer Underruns, DEP, ASLR and improving the Exploitation Prevention Mechanisms (XPMs) on the Windows platform, a paper published by David last September.
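
For intuition on why this makes remote exploitation of overflows harder, here is a toy simulation of my own (the entropy values are illustrative only, not Vista's actual numbers): a single-shot exploit that has to guess a randomized base address succeeds at a rate that falls off exponentially with the bits of randomization.

import random

def one_attempt(entropy_bits: int) -> bool:
    """One exploit attempt: a blind guess of the base-address slot out of 2**bits."""
    slots = 2 ** entropy_bits
    return random.randrange(slots) == random.randrange(slots)

def success_rate(entropy_bits: int, attempts: int = 200_000) -> float:
    """Fraction of single-shot attempts that hit the randomized address."""
    return sum(one_attempt(entropy_bits) for _ in range(attempts)) / attempts

for bits in (0, 4, 8, 12):  # 0 bits = no randomization at all
    print(f"{bits:2d} bits -> ~{success_rate(bits):.4%} success per attempt")

Since a failed guess typically just crashes the target process, each additional bit of randomization is another factor of two in the attacker's way, which fits David's framing of ASLR as building on NX/DEP and stack cookies rather than replacing them.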

 

I'm looking forward to further confirmation from David and/or other researchers and the results of the "thorough analysis" that David implies that he is working on.

 

Think Security ~Jeff

posted by jrjones | 3 Comments

Windows Vista Beta2 Security Paper

Was reading Dana Epp's blog and found reference to a new Microsoft paper called  Microsoft® Windows Vista™ Security Advancements.  Good overview of most security enhancements in Beta2.

The funny part of this story is that Dana noticed the paper while reading Mike's blog, which I hadn't read yet today.

I hadn't read this paper yet, so thanks to Dana and Michael.  The paper itself is here.

posted by jrjones | 1 Comments

Novell Removes /truth and Security from Linux Site

Provocative, but technically true.  You may or may not recall that Novell published www.novell.com/linux/truth in response to Microsoft's www.microsoft.com/getthefacts site.  I browsed out there yesterday to see the current truth for myself and was redirected to http://www.novell.com/whynovell/.  You can still look at the google cache of the /truth site by using the search terms "site:novell.com inurl:truth" and selecting one of the cache links.

Bye-bye Security

Novell /truth discussed seven reasons "Why Linux is a better choice than Windows", with security being one of the key topics; it then attributed certain "claims" to Microsoft and proceeded to offer alternative data.  I won't even address the fact that several of the attributed claims were not anything Microsoft ever said or would say...

Anyway, it's all gone now.  If you look at /whynovell, Security is not mentioned on the page.  Clicking the major links, I couldn't find Security mentioned there either.  I even downloaded the "Ten Reasons to Choose Novell Linux Over Windows" paper, and it is very interesting in that it doesn't mention Windows anywhere in the document!  It should more properly be titled "Ten Reasons to Choose Novell Linux Over Other Linux Distributions", in my opinion.  Its one security claim (dubious at best) is that "Novell Linux is the most secure Linux."

Reading Too Much Into It

In reality, noticing this as I have is probably already taking it too far.  Their removal of "better security" as a competitive claim likely has nothing to do with the overwhelming numbers of SUSE security patches for SLES9 and their other enterprise Linux products.  I am sure it has nothing to do with the various sponsored reports highlighting the trend towards more and more vulnerabilities in the enterprise Linux products, while Windows vulns in new versions seem to be shrinking.

The more likely interpretation is that they've just gone to a leaner, cleaner marketing message for SLES.  Most of us recall the adage that you can get customers to remember 3 things (but not 7), so they've simply decided security wasn't one of their top 3 differentiators, which now seem to be: reduce costs, improve performance and increase flexibility.  Most of the actual positioning statements are against other Linux distros.

Still, I can't help but wonder...

posted by jrjones | 2 Comments

JeffOS EAL4+ Secure System

(read my background article first)

JeffOS gets EAL4+ certification... not really.  Primarily because I haven't created JeffOS.  But hey, I'm thinking about it, so stay with me while I think about what configuration of JeffOS I should submit for evaluation.  What?  Does the evaluated configuration make a difference?  IF JeffOS is evaluated EAL4+, doesn't that mean all of JeffOS is certified?  I'm afraid not, security super friends.  Take a look at this chart from Windows® and SuSE Linux EAL4+ Workload Comparison:

The above table is extracted from a new Microsoft-sponsored study posted at www.microsoft.com/getthefacts.  The question behind the study was:  "If the assurance level and protection profiles are the same, then is there a practical difference?"  As shown in this chart, there is a vast difference depending on the software included or excluded from  the evaluated configuration.

My original post on how this difference can occur got really long, so I created a separate article to explain The Importance of the “Evaluated Configuration” in Common Criteria Evaluations, allowing me to shorten this entry to just key points.  However, it's important stuff and a good read, so you should go read the whole thing as intro and then come back here.

In my opinion, there is a big difference in the amount of work that it takes a customer to get from the starting point of these two EAL4+ evaluated systems to full Certification and Accreditation and this is no accident.  The much more useful and practical evaluated configuration in the Windows client/server evaluation (compared with Linux and compared with the previous Windows 2000 evaluation) is a reflection of Microsoft investment, not just in security improvement processes, but in people with security expertise that are helping drive more thoughtful security investments like this one.

So, what should I do?  Should I pay the extra cost to include DHCP and Apache in my evaluation of JeffOS?  Wait, maybe instead, I should strip even more usefulness out of the system and go for EAL7!!!  Then, I could claim JeffOS has an EAL7 certification and leave the responsibility with customers to make it useful by adding on unevaluated components.  Well, maybe not...

Think Security ~ Jeff

Coverity Confused Claims Cause Consternation and Confusion

Okay, maybe it only causes me consternation, but this is exactly the sort of thing that raises my temperature.  With the academic background of Coverity founders, one should expect a certain amount of rigor and care when it comes to analysis and conclusions, but I find myself disappointed.

Jeff, you say, what are you talking about!?!?

It’s been a while now, but you may recall a headline similar to this one, Security research suggests Linux has fewer flaws, or this one, Study: MySQL Hard on Defects.  I read these headlines and think - this is exciting, this is good stuff!  Let’s dig into these, sit at the feet of Coverity as security Padawan and learn what we can learn.

Security Research Suggests Linux has Fewer Flaws

Let’s look at the first article.  It refers to a report published by Coverity called “Analysis of the Linux Kernel” that documents the results of running the Coverity source code analysis tool on various versions of the Linux kernel over four years.  (You can download this report yourself at http://linuxbugs.coverity.com, if you are willing to register with them.)  Back to the article; here is a quotation from the opening paragraph:

The project found 985 bugs in the 5.7 million lines of code that make up the latest version of the Linux core operating system, or kernel. A typical commercial program of similar size usually has more than 5,000 flaws or defects, according to data from Carnegie Mellon University.

So here’s the (so-called) logic:

1. If a normal commercial program of similar size has 5,000 flaws,
2. but the Linux kernel has only 985 flaws, then
3. (obviously) the Linux kernel has fewer flaws (985 < 5000 = TRUE).

So, Jeff, what’s your problem then?!?!  My problem is the second statement, which may or may not be true, but which should more properly be stated here as “Coverity only found 985 flaws.”  For the Linux kernel to have fewer flaws than standard commercial software, we need this to be true: Coverity_flaws + Other_flaws < 5000.  Essentially, the article assumes that there are no flaws in the Linux kernel other than those found by Coverity (i.e., Other_flaws = 0).  Call me skeptical at this point, but I think there are some checks we can make to find out if Other_flaws is truly zero.

You might be thinking, well Jeff, it isn’t Coverity’s fault if the reporter is making this assumption.  They probably have a nicely scientific and accurate report that the writer “spun” a bit for the story.  Hmmm, could be, but let’s examine the quotations by Coverity CEO Seth Hallem:

"Linux is a very good system in terms of bug density," said Seth Hallem, CEO of Coverity.

And

Hallem stressed that the research on Linux--specifically, version 2.6 of the kernel--indicated that the open-source development process produced a secure operating system.

"There are other public reports that describe the bug density of Windows, and I would say that Linux is comparable or better than Windows," he said.

Ouch, this just hurts me.  Here is a test.  Let’s say I run the Coverity tool on some piece of code with 1 million lines of code and it finds 100 flaws.  What conclusions can you draw concerning the bug density of the code?  Me … I can draw … no conclusions.  Well, none without making some huge assumptions.  If I assume the Coverity tool finds all existing flaws, then yes, I could talk to you about bug density.
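
To put rough numbers on that, here is a back-of-the-envelope sketch of my own (the detection rates are entirely hypothetical -- nobody has published one): the estimate of total flaws, and therefore the bug density, scales with the inverse of whatever fraction of real flaws the tool actually catches.

def estimated_total_flaws(flaws_found: int, detection_rate: float) -> float:
    """If the tool catches only `detection_rate` of the real flaws, scale up the count."""
    return flaws_found / detection_rate

found = 985   # flaws Coverity reported for the kernel scan
kloc = 5_700  # roughly 5.7 million lines of code

for rate in (1.0, 0.5, 0.2, 0.1):  # hypothetical detection rates
    total = estimated_total_flaws(found, rate)
    print(f"detection rate {rate:4.0%}: ~{total:,.0f} flaws (~{total / kloc:.2f} per KLOC)")

Only the 100% row lets you compare 985 directly against the 5,000-flaw commercial baseline; every other row represents an assumption the article never states.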

It gets better.  Based upon the Coverity research (i.e., running Coverity tools on different Linux kernels over time), Hallem is able to say definitively that the 2.6 kernel is a secure operating system.  This is great for all of us and takes such a burden off my mind.  I think this means that if we apply the Coverity learnings, we can probably all produce secure products.

Finally, in a firm scientific comparison with some “other public reports” (unreferenced), Hallem can conclude that Linux is comparable to or better than Windows.

So, it seems to me that Coverity may not be entirely blameless for the leaps of imagination taken with the report.

Study: MySQL Hard on Defects

Is the MySQL study any different in terms of assumptions and analysis?  Not in my opinion.  I downloaded the study to read and found this paragraph in the executive summary:

An analysis of the source code for the MySQL database has revealed that the code is very good quality.  The results show that the number of defects detected by the Coverity analysis system is low.  In fact, the analysis found results that are at least four time better than is typical with commercial software, even before MySQL fixed the defects that Coverity found.

Same flawed reasoning as before.  The first statement is not supportable from the analysis performed alone.  The second statement is accurate, but is not actually a proof point for statement 1.  Statement 3 may be technically accurate in that the number of Coverity-found flaws is one-quarter of that typical for commercial software, but it is misleading unless Coverity found all flaws.

Incidentally, I did a very rough analysis of the MySQL study itself and found the following breakdown:

  • 50% of the content was related to the MySQL analysis
  • 50% of the content was marketing and value/benefit material concerning Coverity tools

I will leave any conclusions to the reader.

Does Coverity Find all Flaws?

So, I come to the final stretch of my Ironic Diatribe on Coverity Analysis, with the final question that answers all other questions: do Coverity tools find all flaws?  If they do, then the comparisons with “typical commercial software of similar size” are valid and I’ll bow my head and salute them in respect.  If not … then I’ll keep a keen eye out for future studies and try to get the reporter to quote my thoughts.

First, let’s see what Coverity says about their tools’ ability to find all flaws.  In the MySQL study, they say this:

Although the Coverity Inspected™ mark means that software quality is high, it does not mean that the software will be defect-free.  Notwithstanding the foregoing statement, the Coverity Inspected logo does not imply any warranty or guarantee as to the performance of the software in the deployment environment.  Many defect types fall outside the scope of Coverity analysis.

Or, in other words, Other_flaws != Zero.  There you have it, Coverity says they don’t find all flaws and that MANY types of defects fall out of scope.

Let’s do another check, just to be sure.  Coverity analyzed MySQL 4.1.8.  Maybe Coverity does find all security flaws, if not all flaws?  We pop over to nvd.nist.gov, search for MySQL, scan down …. Hmmm.  There is CVE-2006-1517, which affects 4.1.8.  So does CVE-2006-1516.  CVE-2006-0903, maybe CVE-2006-0692, CVE-2005-2573, CVE-2005-1636, … and so on.  So check #2 also finds that there are security flaws that don’t get found by Coverity, so their numbers should not be compared to “all typical flaws in similar size commercial code.”

My Final Word on Coverity and Source Code Scanning

I like source code scanning technology.  These tools are really good for automated searches for certain classes of coding problems that can lead to security vulnerabilities.  At Microsoft, under the Security Development Lifecycle, we require the use of source code scanners on all code to be checked into the tree, and that does, no doubt, help to find lots of common coding flaws that would otherwise make it into products.  I quote Jon Pincus on three questions related to source code scanners:

1. Do these tools find important defects?  Yes.
2. Is every warning emitted by the tool useful?  No.
3. Do these tools find all defects?  NO, No, no.

For this reason, Microsoft uses source code scanning in conjunction with AppVerifier, the Source Code Annotation Language, code reviews, threat modeling, QA testing, FxCop and much, much more, all in addition to an extensive training program based upon “Writing Secure Code”.
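
To show the flavor of what “automated searches for certain classes of coding problems” means at its very simplest, here is a toy, grep-style scanner of my own invention -- nothing like what Coverity or the Microsoft tools above actually do, since real analyzers track data flow, paths and annotations rather than just matching function names:

import re
import sys

# A few classically risky C runtime calls that a naive scanner might flag.
RISKY_CALLS = {
    "strcpy":  "unbounded copy",
    "strcat":  "unbounded concatenation",
    "sprintf": "unbounded formatted write",
    "gets":    "never safe",
}
PATTERN = re.compile(r"\b(" + "|".join(RISKY_CALLS) + r")\s*\(")

def scan(path: str) -> None:
    """Print a warning for every line that calls one of the risky functions."""
    with open(path, encoding="utf-8", errors="replace") as src:
        for lineno, line in enumerate(src, start=1):
            for match in PATTERN.finditer(line):
                name = match.group(1)
                print(f"{path}:{lineno}: warning: {name}() -- {RISKY_CALLS[name]}")

if __name__ == "__main__":
    for file_name in sys.argv[1:]:
        scan(file_name)

Even this toy illustrates Jon's second and third answers: it will flag perfectly safe uses it cannot reason about, and it will miss every defect that doesn't happen to match a pattern.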

I think Coverity probably produces a fine tool and it would benefit many software vendors if they made it, or a similar product, a part of their development lifecycle.  However, it is what it is, and I am skeptical of reports that assume one can compare Coverity flaws found with … well, anything.  Especially, if the comparison is intended to show something is more secure.

Regards ~ Jeff

posted by jrjones | 0 Comments

Workload Vulnerability Index

In the recent Risk Report: A Year of Red Hat Enterprise Linux 4 in Red Hat Magazine, Mark Cox defined an interesting new security metric, the Workload Vulnerability Index, which provides a weighted measure of the impact that ongoing security vulnerabilities have on those doing patching.  Here is how the report defines it:

This vulnerability workload index gives a measure of the number of important vulnerabilities that security operations staff would be required to address each day. The higher the number, the greater the workload and the greater the general risk represented by the vulnerabilities. The workload index is calculated in a similar way to the workload index from NIST [3].

For a given month, Vulnerability workload = ((number of critical and important severity vulnerabilities published within the last month) + (number of moderate severity vulnerabilities published within the last month/5) + (number of low severity vulnerabilities published within the last month/20)) / (days in the month)

Note that the weighted value is divided by the number of days in the month, so that the equivalent of 60 Critical and Important vulnerabilities over a 30-day period would come out to a WVI value of 2.0, as happened to Red Hat Enterprise Linux 4 Advanced Server (RHEL4AS) in its first month of availability.
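
Here is the calculation written out as a small sketch (my own transcription of the quoted formula; the severity buckets and divisors come straight from it):

def vulnerability_workload_index(critical_and_important: int,
                                 moderate: int,
                                 low: int,
                                 days_in_month: int) -> float:
    """Weighted count of new vulnerabilities per day for one month."""
    weighted = critical_and_important + moderate / 5 + low / 20
    return weighted / days_in_month

# The example from the text: 60 Critical/Important issues in a 30-day month.
print(vulnerability_workload_index(60, 0, 0, 30))  # 2.0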

In the chart above, I have applied Mark's methodology and formula to Windows Server 2003 (WS2003) during its first year of availability and then charted the RHEL4AS and WS2003 side-by-side over their first year.  Interesting, no?

  • WS2003 has 4 months with a WVI of zero
  • WS2003 has 10 months with a WVI of 0.1 or less
  • The worst month for WS2003 is still better than 9 of the RHEL4AS months

Other observations are left up to the reader...

Jeff

 

Washington Post - A Time to Patch III: Apple

You've probably already read Brian Krebs' article A Time to Patch III: Apple, but if you haven't, I encourage you to read it and read the various responses he received - the responses run the gamut of

  • Linux advocates ("You do understand that Mac OS X is not a version of Linux, and is not an open source OS in the usual sense of the word?"),
  • conspiracy theorists ("...This sounds much more like Microsoft propaganda..."),
  • open source advocates ("... finally pointing out that Apple is a company that's even more protective of its intellecual property than Microsoft ...")
  • existentialists ("... In fact, I have been using Macintoshes heavily since 1984 and I've never had a single security problem.")
  • allegoricists ("...Potentially, an envelope I lick to seal could have LSD on it.")
  • poor analogies ("...Over the years in a far away country, fires have increasingly ravaged ...")
  • better analogies ("...Imagine someone traveling to a small town and learning ...")

and many, many more.  Good reading and entertaining at the same time.  Brian even provides spreadsheets with his data and links to sources.

When I read this, I thought to myself "What if this article was about Microsoft?" - would the responses have been different?  "What if the article was about Linux?"  Sun?  Oracle?  I think it is clear from the emotional responses that the data matters less to some people than their belief system - and that's not good for security!

Here's the question I ask myself.  If I had one system that housed my critical business information (say customer credit cards) and I believed there were attackers who might target me to get that information, then wouldn't I want to know how many vulnerabilities there are and how long a vendor might leave them unpatched?  I would.  If I was basing a 5-10 year business decision in part on security criteria, I certainly would (among many other things...). 

Of course, I would also consider the threat of a virus and the threat of a targeted attack as two discrete risk issues and not muddle them together... but that's for another day.

posted by jrjones | 0 Comments