Category Archives: Security

Viewing events for Windows 10 Controlled Folder Access

I wrote about Controlled Folder Access not long ago. Since then, I’ve seen it throw a few dialogs telling me that a particular application was blocked from doing something, but I generally didn’t pay much attention unless I found something that didn’t work. The desktop notification doesn’t show the full path of the blocked executable if it’s anywhere in \program files or \users\appdata. There just isn’t enough room.

Today I saw a message pop up that had some Chinese characters in it– you’d better believe that got my attention. I wanted to see what CFA had blocked. A little digging around led me to an article that explains how to easily create a custom view that shows CFA events. Sure enough, here’s what it showed:

Someone’s up to no good

Since I don’t use Internet Explorer, it’s pretty clear that something is on my machine that shouldn’t be, but, at least for now, CFA has prevented it from doing anything too nefarious. Off to the malware scanner I go!
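If you'd rather pull these events programmatically than click through Event Viewer, the block event lands in the Microsoft-Windows-Windows Defender/Operational log as event ID 1123. Here's a minimal sketch that digs the blocked process and the protected folder out of an event exported as XML; I haven't exhaustively verified the EventData field names, so treat "Process Name" and "Path" as assumptions and check them against your own events.

```python
# Minimal sketch: extract the interesting fields from a Controlled Folder
# Access block event (ID 1123) exported from Event Viewer as XML. The
# EventData field names are assumptions -- verify against your own events.
import xml.etree.ElementTree as ET

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def cfa_block_details(event_xml: str) -> dict:
    """Return the blocked process and protected path from a CFA event."""
    root = ET.fromstring(event_xml)
    data = {
        d.get("Name"): (d.text or "")
        for d in root.findall(".//e:EventData/e:Data", NS)
    }
    return {
        "process": data.get("Process Name", "?"),
        "protected_path": data.get("Path", "?"),
    }

sample = """<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System><EventID>1123</EventID></System>
  <EventData>
    <Data Name="Process Name">C:\\Users\\paul\\AppData\\Local\\evil.exe</Data>
    <Data Name="Path">C:\\Users\\paul\\Documents</Data>
  </EventData>
</Event>"""

print(cfa_block_details(sample))
```

Unlike the desktop notification, the event record keeps the full path, which is exactly what you want when the notification truncates it.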

Leave a comment

Filed under General Tech Stuff, Security

Training Tuesday: IoT insecurity, fitness division

There’s lots of hype about how the Internet of Things (IoT) will make our lives better, and much of it is true. For example, my house has two Internet-connected thermostats that I can use to see and change temperature settings— that way I can keep the house uncomfortably cool or warm when I’m not there and adjust the temperature remotely so it’s comfy when I get there. Fitness devices are definitely a well-established part of the IoT; companies such as BodyMedia and Garmin have been making devices that can connect, either on their own or through a PC or smartphone, to Internet services for a while. That market has been growing very rapidly over the last few years (some estimates put it at $3 billion in 2015), so some bright folks at Open Effect (funded in part by the Canadian government) decided to take a look at the security of IoT-connected fitness devices.

The results (full report here) are pretty horrifying:

  • Many devices transmit their Bluetooth MAC IDs at all times when the device isn’t paired, and those IDs never change, so it’s easy to track someone through rudimentary Bluetooth beacon monitoring.
  • The Jawbone and Withings fitness services don’t do a very good job of data validation; the researchers mention telling the Jawbone service that their test user walked 10,000,000,000 steps in one day, and the service happily accepted that. Worse still, they were able to inject fake data, generating records of “a person taking steps at a specific time when no such steps occurred.” Given that this data has been used in both criminal and civil trials in the US and Canada (see the extensive footnotes in section 1.4 of the report), this is pretty awful.
  • Garmin and Withings don’t use HTTPS to protect data in transit. Given that I wear a Garmin watch and use a Withings scale daily, I have a problem with this. The researchers only studied the Garmin Connect app on iOS and Android, but if I had to bet, I’d guess that my Garmin watch (which has Wi-Fi) isn’t using HTTPS either.
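The 10-billion-step result is a plain failure of input validation, and the fix isn't exotic. Here's a hypothetical sketch of the kind of server-side sanity check a fitness service could run before accepting an uploaded record; the threshold and the function names are my own inventions, not anything from the report.

```python
# Hypothetical server-side sanity checks of the kind the researchers found
# missing: bound a day's step count by what a human can physically do, and
# reject records timestamped in the future. Thresholds are my own guesses.
from datetime import datetime, timezone

MAX_DAILY_STEPS = 200_000  # a generous ceiling, even for an ultramarathoner

def validate_step_record(steps: int, recorded_at: datetime) -> bool:
    if steps < 0 or steps > MAX_DAILY_STEPS:
        return False  # physically implausible count
    if recorded_at > datetime.now(timezone.utc):
        return False  # data "recorded" in the future is fabricated
    return True

# The record the researchers injected would fail the range check:
print(validate_step_record(10_000_000_000,
                           datetime(2015, 6, 1, tzinfo=timezone.utc)))  # False
```

None of this stops a determined forger, but it would have caught the trivial injections the researchers pulled off.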

Apart from calling Garmin to yell at them, I’m posting this mostly to point out yet another case where the rush to get things on the Internet may have unintended consequences. While my individual fitness data is not necessarily something I mind being visible, I don’t like that these manufacturers have been so sloppy. I can understand not wanting to implement HTTPS on a very low-power device but there’s no excuse not to implement it in a mobile app, for crying out loud.

Meanwhile, if I ever need to, now I know how to challenge any fitness-related data that may be introduced in court.

Leave a comment

Filed under Fitness, General Tech Stuff, Security

Windows Hello and Microsoft Passport intro

I’ve been working on a white paper explaining how Windows Hello and Microsoft Passport work together in Windows 10– it’s a really neat combination. Over at my work blog, I have a short article outlining what Hello and Passport are and a little about how they work (plus a bonus demo video). If you’re curious, head over and check it out.

Leave a comment

Filed under General Tech Stuff, Security

Setting the record straight on Microsoft and subpoenas

This week I had the opportunity to present a session called “Cloud Best Practices” at the Alabama Digital Government Summit. I had a great time— it was fascinating to see how many different agencies in our state are putting advanced IT to work to save money and get more done for the taxpayer. However, there was one blemish on the experience that I wanted to polish away, so to speak.

Part of my talk concerned the fact that no matter where you live, your local government has lawful means to get your data: they can subpoena you, or your cloud provider, to get it. There’s nothing that you can do about it. It’s a feature, not a bug, of modern legal systems. I often talk about this in the context of people’s fears that the NSA, GCHQ, or whoever will snag their data, by lawful or unlawful means. Here’s the slide I put up:


I don’t think these are controversial assertions. However, at this point in my talk, Stuart McKee (chief technical officer for state and local government at Microsoft) flatly asserted that Microsoft does not comply with government subpoenas for customer data; I believe he used the word “never”. He went on to say that Microsoft has a pattern of resisting subpoena requests and that this “has gotten [them] into some trouble.” He concluded by saying that Microsoft’s standard action is to tell governments that they must subpoena the data owner, not the service provider.

I believe these assertions to be largely untrue, and certainly misleading. (I’ll leave aside the insulting manner in which Stuart asserted that I was wrong— after all, I am certainly wrong sometimes and generally appreciate when people point it out.) I want to set the record straight to the extent that I can.

First, Microsoft absolutely does comply with lawful subpoenas for customer data. This page at Microsoft’s web site summarizes their responses to lawful legal demands for customer information (both information about customers and information belonging to customers) across a broad variety of jurisdictions, from Argentina to Venezuela. To assert otherwise is ludicrous.

Second, Microsoft has a pattern of complying with these lawful subpoenas, not refusing them. When Stuart said that Microsoft is “in trouble” for refusing a subpoena, I suspect that he’s referring to Microsoft v. United States, where the issue at hand is that Microsoft was served a search warrant for data stored in a Microsoft data center in Ireland. The data are stored there because the customer is located outside the US. Microsoft moved to have the warrant vacated; when that failed, it asked the cognizant district court to quash it. The district court upheld the original warrant, Microsoft refused to comply and was held in contempt, and the case is now working its way through the US federal court system.

Let me be clear: I applaud Microsoft for standing up and resisting the overreach in the original warrant— there doesn’t seem to be (at least not to my layman’s understanding) a right of the US government, at any level, to subpoena data belonging to a non-US person or organization if it’s stored outside the US, even if it’s held in a cloud service operated by a US person or organization. The brief Microsoft filed likens this to a German court ordering seizure of letters stored in a safe deposit box in a US branch of a German bank. Having said all that, claiming that this kind of resistance is routine is overblown. It isn’t. If Microsoft were refusing subpoenas left and right, the numbers I mentioned above would look very much different.

Third, Microsoft’s policy is indeed to try to redirect access requests whenever possible. The Office 365 privacy page has this to say:

We will not disclose Customer Data to a third party (including law enforcement, other government entity, or civil litigant; excluding our subcontractors) except as you direct or unless required by law. Should a third party contact Microsoft with a request for Customer Data, we will attempt to redirect the third party to request the data directly from you. As part of that process, we may provide your contact information to the third party. If compelled to disclose Customer Data to a third party, we will use commercially reasonable efforts to notify you in advance of a disclosure unless legally prohibited.

In other words, Microsoft will try to redirect subpoenas from themselves to the data owner, where they are allowed by law to do so, and if they can’t, they will notify you, if allowed by law to do so. This is the only one of Stuart’s claims that I think is inarguable.

Finally, Microsoft proactively cooperates with law enforcement. The Microsoft Digital Crimes Unit newsroom contains press releases touting Microsoft’s cooperation with law enforcement agencies around the world (here’s just one example). This cooperation and disclosure extends to Microsoft proactively notifying law enforcement agencies when their PhotoDNA service identifies child porn images in customers’ private OneDrive data. I support their right to do this (it’s covered very clearly in the terms of service for Microsoft cloud services), and I believe it’s the right thing to do— but to claim that Microsoft never discloses customer data to law enforcement agencies while they are voluntarily doing so is both untrue and misleading.

Everyone’s interests are best served when everyone understands the specifics of the legal interaction between local and national governments and cloud service providers in various jurisdictions. This is a really new area of law in many respects, so it’s understandable that some things may not be clear, or even defined yet, but I wanted to correct what I view as dangerously misleading misinformation in this specific instance.

The bottom line: no matter what cloud service you choose, be sure you understand the policies that your cloud provider uses to determine the conditions under which they’ll cough up your data.


Filed under Office 365, Security, UC&C

An offer for Tim Cook

[note to readers: I encourage you to repost, retweet, and otherwise spread this offer. It’s legit; I am happy to help Apple in any way that I can. Since I don’t have any Apple execs on speed dial, perhaps social media will get this to the right folks. ]

Dear Mr. Cook:

We’ve never met. You’ve almost certainly never heard of me. But I’m going to make you an offer that I hope you’ll accept: I want to help you quit making such a mess of the world’s Exchange servers. More to the point, I want to help the iOS Exchange ActiveSync team clean up their act so we don’t have any more serious EAS bugs in iOS. The meeting hijacking bug was bad enough, but the latest bug? The one that results in Exchange servers running out of transaction log space? That’s bad for everyone. It makes your engineers look sloppy. It makes Exchange administrators into the bad guys because they have to block their users’ iOS devices.

These bugs make everyone lose: you, Microsoft, and your mutual users. They’re bad for business. Let’s fix them.

You might wonder why some dude you’ve never heard of is making you this offer. It’s because I’m a long-time Apple customer (got my first Mac in 1984 and first iPhone on launch day) and I’ve been working with Exchange for more than 15 years. As a stockholder, and fan, of both companies, I want to see you both succeed. Before there was any official announcement about the iOS SDK, I was bugging John Geleynse to let 3Sharp, my former company, help implement Exchange ActiveSync on the phone. He was a sly devil and wouldn’t even confirm that there would be an EAS client for the phone, but the writing was on the wall– the market power of Exchange Server, and the overwhelming prevalence of EAS, made that a foregone conclusion.

I’m an experienced developer and a ten-time Microsoft Most Valuable Professional for Exchange Server. I have experience training developers in Exchange Web Services, and I know EAS well; in fact, I was an expert source of evidence in the recent Google/Motorola vs Microsoft case in the UK. As a long-time member of the Exchange community, I can help your developers get in touch with experts in every aspect of Exchange they might want to know about, too.

It’s pretty clear that your EAS client team doesn’t know how Exchange client throttling works, how to retry EAS errors gently, or all the intricacies of recurring meeting management (and how the server’s business logic works). If they did, the client wouldn’t behave the way it has. They could learn it by trial and error… but look where that’s gotten us.

I’m in Mountain View, right up the road. Seriously. Have your people call my people.

Peace and Exchange 4eva,


Filed under Security, UC&C

Skype automatic updates: in-app vs Microsoft Update

One of the problems I most often run into when working with Windows machines is the way updates work. Microsoft has made great strides in improving the update experience for Windows and for Microsoft applications; compared to the steaming pile of filth that is Adobe’s updater, for example, or the mishmash of every-app-its-own-update-client behavior common on the Mac, Microsoft Update is pretty smooth.

But what about Skype? It’s now a Microsoft application, so you’d expect it to receive updates through Microsoft Update… and it does. However, it also has its own update mechanism. What gives?

Here’s a solid explanation, which Doug Neal of the Microsoft Update team was kind enough to let me republish. (My comments are italicized.)

While we’re still working through the best way to complement the updating system available via Skype, here are some insights that may explain the differences:

Skype 5.8 [released nearly a year ago] introduced a Skype-based auto-updating feature unrelated to any Microsoft technology (and before knowledge of the merger). This updating service will remain for the foreseeable future – and is Skype’s method of offering updates on a more frequent basis than Microsoft Update. These settings and consent to update via Skype’s updating service can be controlled via the Skype | Options | Automatic Updates setting – which also provides a link to more information on Skype’s updating approach via their updater. These updates via the Skype updater can include major and incremental updates. [in other words, the Skype app can pull both minor updates and entire new versions through its built-in update mechanism, as do many other third-party apps on Windows and OS X. ]

As a new addition to the products supported via Microsoft Update, only major versions of Skype are made available via MU. Consent to automatically update via Microsoft Update is granted via Microsoft Update opt-in – the same opt-in experience available via Windows Control Panel | Windows Update | Change Settings. [So MU may offer you major versions, which is useful if you don’t know about Skype’s built-in updater.]

So, updating Skype via Skype’s updating service is controlled from within the Skype application. This updating experience may include various Skype-specific reminders and prompts that a newer update is available. Turning off updates here will reduce the number of incremental updates your Skype client will receive, assuming Microsoft Update is still enabled to provide more major, less-frequent updates to Skype.

Updating via Microsoft Update will only occur for major versions and is controlled within the Windows Update control panel – the same place for all Microsoft product updates. Turning off Microsoft Updates is not recommended – and will result in preventing any updates from Microsoft for all 60+ products supported by Microsoft, including security updates. The updating experience for Skype will be the same as you expect for all other Microsoft Updates, namely that unmanaged consumer PCs will see these as Important updates with no UI (applied automatically), or to managed PCs via WSUS/SCCM admin approval.

Having a single update mechanism, à la the iTunes App Store and the Windows Phone Marketplace, certainly seems to be the best model for end users: all app updates are packaged and available on demand in a single location. On the other hand, putting the responsibility for applying security-critical updates in the hands of end users, instead of in a centralized patch management system driven by WSUS or equivalent, is a terrible idea for the enterprise. Having a hybrid approach like this is a compromise, albeit an unintentional one, that may deliver the better aspects of each approach. Long-term I’d like to see the major OS vendors offer a flexible method of combining both vendor-specific OS/app updates with opt-in updates provided by third parties– something like the existing Marketplace combined with the controls and reporting in WSUS would be ideal. Here’s hoping…

Leave a comment

Filed under General Tech Stuff, Security

Man-in-the-middle attacks against Exchange ActiveSync

I love the BlackHat security conference, although it’s been a long-distance relationship, as I’ve never been. The constant flow of innovative attacks (and defenses!) is fascinating, but relatively few of the attacks focus on things that I know enough about to have a really informed opinion. At this year’s BlackHat, though, security researcher Peter Hannay presented a paper on a potential vulnerability in Exchange ActiveSync that can result in malicious remote wipe operations. (Hannay’s paper is here, and the accompanying presentation is here.)

In a nutshell, Hannay’s attack depends on the ability of an attacker to impersonate a legitimate Exchange server, then send the device a remote wipe command, which the device will then obey. The attack depends on the behavior of the EAS protocol provisioning mechanism, as described in MS-ASPROV.

Before discussing this in more detail, it’s important to point out three things. First, this attack doesn’t provide a way to retrieve or modify data on the device (apart from erasing it, which of course counts as “modifying” it in the strictest sense.) Second, the attack depends on use of a self-signed certificate. Self-signed certificates are installed and used by Exchange 2007, 2010, and 2013 by default, but Microsoft doesn’t recommend their use for mobile device sync (see the 2nd paragraph here); contrary to Hannay’s claim in the paper, my experience has been that relatively few Exchange sites depend on self-signed certs.

The third thing I want to highlight: this is an interesting result and I’m sure that the EAS team is studying it closely to ensure that the future attacks Hannay contemplates, like stealing data off the device, are rendered impossible. There’s no current cause for worry.

The basis of this attack is that EAS provides a policy update mechanism that allows the server to push an updated security policy to the device when the policy changes. There are three cases in which the EAS Provision command comes into play:

  • when the client contacts the server for the first time. In this case, the client should pull the policy and apply it. (I vaguely remember that iOS devices prompt the user to accept the policy, but Windows Phone devices don’t.)
  • when the policy changes on the server, in which case the server returns a response indicating that the client needs to issue another Provision command to get the update.
  • when the server tells the device to perform a remote wipe.

The client sends a policy key with each command it sends to the server, so the server always knows what version of the policy the device has; that’s how it knows when to send back the response indicating that the device should reprovision.

If the client doesn’t have a policy, or if the policy has changed on the server, the client policy key won’t match the current server policy key, so the server sends back a response indicating that the client must reprovision before the server will talk to it.
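The policy-key handshake I just described can be sketched in a few lines. This is my own illustrative model, not production code; the status values follow my recollection of MS-ASCMD's global statuses (142 for a device that has never provisioned, 144 for a stale policy key), so check the spec rather than trusting my memory.

```python
# Illustrative server-side policy key check: every EAS command carries the
# client's policy key, and a mismatch forces the client back through the
# Provision command. Status codes 142/144 are cited from memory from
# MS-ASCMD's global statuses -- verify against the spec.
from typing import Optional

CURRENT_POLICY_KEY = "3141592653"  # rolled whenever the admin changes policy

def check_policy_key(client_key: Optional[str]) -> str:
    if not client_key or client_key == "0":
        return "142"  # never provisioned: issue Provision before anything else
    if client_key != CURRENT_POLICY_KEY:
        return "144"  # stale key: the policy changed, so reprovision
    return "OK"       # keys match; process the command normally

print(check_policy_key(None), check_policy_key("42"),
      check_policy_key(CURRENT_POLICY_KEY))  # 142 144 OK
```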

There seems to be a flaw in Hannay’s paper, though.

The mechanism he describes in the paper is that used by EAS 12.0 and 12.1, as shipped in Exchange 2007. In that version of EAS, the server returns a custom HTTP error, 449, to tell the device to get a new policy. A man-in-the-middle attack in this configuration is simple: set up a rogue server that pretends to be the victim’s Exchange server, using a self-signed certificate, then when any EAS device attempts to connect, send back HTTP 449. The client will then request reprovisioning, at which time the MITM device sends back a remote wipe command.
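HTTP 449 is easy to fake precisely because it's out-of-band: the rogue server needs no knowledge of any prior EAS conversation to send it. A minimal demonstration, using a local HTTP server as a stand-in for the rogue endpoint (no actual MITM plumbing or certificate spoofing here):

```python
# Demonstration of how little a rogue server needs to know: answer every
# request with HTTP 449 "Retry With", which a pre-EAS-14 client interprets
# as "go reprovision." A local server stands in for the MITM endpoint.
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class RogueHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        self.send_response(449, "Retry With")  # unconditional: no EAS state needed
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):  # silence request logging for the demo
        pass

server = HTTPServer(("127.0.0.1", 0), RogueHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("POST", "/Microsoft-Server-ActiveSync")
resp = conn.getresponse()
print(resp.status)  # 449
server.shutdown()
```

A real device, having swallowed the self-signed certificate, would follow that 449 with a Provision request, and the attacker's next response would be the wipe command.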

Newer versions of Exchange return an error code in the EAS message itself; the device, upon seeing this code, will attempt to reprovision. (The list of possible error codes is in the section “When should a client provision?” in this excellent MSDN article). I think this behavior would be harder to spoof, since the error code is returned as part of an existing EAS conversation.

In addition, there’s the whole question of version negotiation. I haven’t tested it, but I assume that most EAS devices are happy to use EAS 12.1. I don’t know of any clients that allow you to specify that you only want to use a particular version of EAS. It’s also not clear to me what would happen if you send a device using EAS 14.x (and thus expecting to see the policy status element) the HTTP 449 error.

Having said all that, this is still a pretty interesting result. It points to the need for better certificate-management behavior on the devices, since Hannay points out that Android and iOS devices behaved poorly in his tests. Windows Phone seems to do a better job of handling unexpected certificate changes, although it’s also the hardest of the 3 platforms to deal with from a perspective of installing and managing legitimate certificates.

More broadly, Hannay’s result points out a fundamental flaw in the way all of these devices interact with EAS, one that I’ve mentioned before: the granularity of data storage on these devices is poor. A remote-wipe request from a single Exchange account on the device arguably shouldn’t wipe out data that didn’t come from that server. The current state of client implementations is that they erase the entire device– apps, data, and all– upon receiving a remote wipe command. This is probably what you want if your device is lost or stolen (i.e. you don’t want the thief to be able to access your personal or company data), but when you leave a company you probably don’t want them wiping your entire device. This is an area where I hope for, and expect, improvement on the part of EAS client implementers.

1 Comment

Filed under Security, UC&C

Pistol-packing Paul: in which I get my Florida concealed-weapon permit

As some of my readers may know, California is nominally where I live; however, I’ve been in Pensacola since October. California, of course, has the distinction of having extremely restrictive gun laws. Needless to say, these laws have done little or nothing to reduce gun-related crime. They do, however, make it difficult or impossible for law-abiding citizens to exercise the same rights and freedoms that citizens of other states take for granted. (But at least it’s not as bad in California as it is in DC; check out Emily Miller’s Washington Times series on DC gun ownership to see what I mean.)

(nb. This would be a good time to mention that I’m not interested in debating any aspect of firearms law. I believe that as a law-abiding citizen I have the constitutionally-protected right to keep and bear arms, and that that right properly includes the ability to carry a weapon on my person for self-defense, whether or not I face an imminent threat like a crazed ex-spouse. I don’t think that criminals or the mentally ill should have guns, but criminals get them anyway, even in places like California and DC. Feel free to disagree with me, but do it someplace else.)

Anyway, one side effect of California’s laws is that it is difficult, or impossible, to get a permit to legally carry a concealed weapon in California. Each individual county makes its own rules, and larger counties, like Santa Clara County, just flat-out won’t issue permits. (Unless you donate thousands of dollars to the sheriff’s re-election campaign. But I digress.)

However, Florida and Utah offer permits to non-residents. If you meet the legal requirements to obtain a Florida or Utah permit, you can then use that permit to legally carry a concealed weapon in the 38 or so states that have reciprocity agreements with Florida and/or Utah. That means that a Florida non-resident permit will allow the holder to legally carry in Alabama, Louisiana, Mississippi, Tennessee, Texas, and Washington– all places I travel. Of course, in each state the permit holder still has to obey the laws of that state, which vary from place to place.

Florida and Utah both require a class that covers the legal and safety aspects of concealed carry. The interesting thing is that one can become certified as an instructor qualified to teach this class, then offer it out of state. I’d been trying (though not very hard) to find a convenient class in the Bay Area, but hadn’t managed to do so before I came out to Pensacola. After Christmas, I decided to resume my search and called around to a couple of local gun shops. I quickly got the word that I needed to talk to “Captain Ron.”

“Captain Ron” is actually Ron Beermünder, who runs the Blackwater River Tactical Range. His web site contains a wealth of information on Florida’s CCW law, as well as information about the classes he teaches. I opted for the 4-hour course; for $180, you get the legal instruction that Florida requires plus the chance to shoot 300 rounds of various-caliber pistol ammunition while being coached by an expert instructor. What’s not to like? I signed up, and this past week drove out to Ron’s range to take the class.

The class itself was excellent. Ron is an engaging and funny man, with a sharp sense of humor and a large chest of war stories. We spent about 90 minutes on the legal overview; simply put, in Florida the law is that a CCW permit holder is essentially held to the same standard as a police officer when it comes to use of force. If a police officer would be justified in using deadly force to prevent or stop a crime, so too would a CCW holder, but neither a citizen nor a cop is allowed to use unreasonable or excessive force. That strikes me as a reasonable standard, and it’s easy to keep in mind. Other details we covered include what Florida law says about where you may and may not carry, under what conditions you may use deadly force, and the fact that just because the law says you can stand your ground in the face of a threat doesn’t mean you should.

The range portion was equally good. Ron had a wide variety of pistols; I shot Smith and Wesson revolvers in .22 and .22 Magnum and Glock pistols in 9mm (including the Glock 26, which is what I’d normally be carrying.) We did timed-fire drills, and I learned a great deal about trigger manipulation and indexing. My accuracy and speed both improved quite a bit during our time on the range, and I’m looking forward to getting some more practice when my schedule allows. If nothing else, I learned that the Glock has a reset trigger and how to properly use it; that tip alone made a huge difference in my second-shot accuracy.

The actual mechanics of getting the permit are straightforward if you qualify: once you’ve completed the class, you need to provide the state proof that you completed it, a registration fee, and fingerprints. You can do this via mail, but it takes up to 3 months to get your permit back. Ron suggested driving to the nearest regional office of the Department of Agriculture and Consumer Services and applying in person. (Yes, I did say “Agriculture.”) Thus I found myself driving to Fort Walton Beach in search of the nearest office; there are only 8 throughout the entire state. I had previously made an appointment, and when the appointment time arrived I filled out an on-screen form, gave the clerk a copy of my certificate from Ron’s school, had my fingerprints scanned, wrote a check for $117, and had my application notarized. 20 minutes later, I was done; now all I have to do is wait for my permit to arrive in the mail! (I should note that I have never dealt with state government employees as pleasant, efficient, or helpful as the folks at the FWB licensing office. I wish they could export their attitude to the California DMV!) Once my permit arrives, it will be valid for seven years from the date of issuance.

This is all of course rendered moot by the fact that a) I work on a military base where no one is allowed to have personal weapons and b) all my pistols are in California, not to mention that c) I can’t legally carry in California anyway. If nothing else, I’m glad to have contributed to the numbers of law-abiding CCW permit holders. There are more of us out there than you think.

1 Comment

Filed under General Stuff, Security

Don’t use Symantec security software

You may know that Symantec recently admitted that its network was compromised and that the attackers got the source code to pcAnywhere, Norton Internet Security, and a few other products. Buried in their acknowledgement, however, was the fact that the source code leaked in 2006 and has thus been floating around in the community for quite a while.

Jonathan Shapiro’s response on the IP list seemed to hit the right note for me:

The pcAnywhere source code leaked in 2006, and in all that time nobody thought to do a serious security review to assess the customer exposure that this created? And now after five years in which a responsible software process would have addressed these issues as a matter of routine, they are having people turn the product off?

This is the company that ships the anti-virus and firewall software that you are probably relying on right now. A version of which, by the way, has also leaked. Do you want to be running security software – or indeed any software – from a company that fails to promptly report critical vulnerabilities when they occur and then ignores them for five years?

You can argue about whether Microsoft’s disclosure policy is perfect or not. I cannot, however, imagine a circumstance in which Microsoft became aware of a potential vulnerability and then didn’t fix it for five years.

So: if you’re running Symantec security software on your personal machine, your company’s workstations, or your servers… time to get rid of it and replace it with software from a more responsible (and, one hopes, more security-conscious) vendor.


1 Comment

Filed under FAIL, Security, Smackdown!, UC&C

1394, DMA, and BitLocker

The IEEE 1394 spec (also called FireWire by Apple and, briefly, i.Link by our friends at Sony) specifies a high-speed interface for connecting peripherals. One of the reasons 1394 offers high speeds is that it supports the use of direct memory access, or DMA. Normally, when a peripheral device is performing I/O operations, the system CPU has to be involved. For example, to read a block of data from a disk drive, the CPU sends commands to the disk controller, then stores the resulting data into a block of system memory. (This is a somewhat simplistic description, I know, but it’s good enough for now.) That means that I/O operations could end up being CPU-bound, or they could negatively affect CPU performance.

To fix this, some bright stars came up with the idea of DMA, which allows the peripheral controller to read from and write to system memory without the CPU’s involvement (and, often, without its knowledge or supervision.) Sounds neat, right? It is, but it also introduces a security threat: a malicious device can read valuable data out of memory… like, say, an encryption key.

The basic attack is simple: the attacker walks up to a BitLocker-protected computer, plugs in a custom 1394 device, and steals the key. (The details of how the attacker finds the key are interesting, but unimportant here.) Key in hand, the attacker can then decrypt the protected volume.

Not all BitLocker-protected machines are vulnerable to this particular attack. If you have a TPM, but are not using an additional authentication factor like a PIN or a USB token, this attack may succeed. However, even if you do use an extra authentication factor, if you leave your machine powered up or on standby, an attacker who gets physical access may be able to steal your BitLocker key.

This isn’t a huge threat for systems that are kept in physically secure locations, but it is worrisome for mobile users. That’s why the Data Encryption Toolkit that I helped write counsels you to be very careful about leaving portable computers powered on and unattended, and it spends some time going over the different security issues with standby, sleep, and hibernate modes. You should read it. Trust me, I’ve been to the doctor 🙂

This is all a somewhat long-winded way of explaining that Microsoft has released a KB article describing how to turn off DMA for 1394 ports to reduce the threat of a DMA attack against BitLocker on TPM-only machines. The article, 2516445, describes how you can turn off the driver that provides DMA for 1394 devices. Given that very, very few Windows machines are ever connected to 1394 devices, this is probably something that you should implement if you have sensitive data on your BitLocker-protected machines.
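The general shape of the mitigation is simple: set the relevant driver’s Start value to 4 ("disabled") so Windows never loads it. As a sketch – and only a sketch; the driver name below is the SBP-2 storage driver commonly associated with 1394 DMA attacks, so verify the exact key and value against the KB article before deploying anything – a .reg file for it might look like this:

```
Windows Registry Editor Version 5.00

; Hedged example: disable the SBP-2 driver that provides 1394 storage
; DMA. Start=4 is the standard "disabled" service start type. Confirm
; the driver name and value in KB 2516445 before rolling this out.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\sbp2port]
"Start"=dword:00000004
```

In a domain, you’d push the equivalent setting out with Group Policy preferences rather than hand-editing registries, but the effect is the same: no driver, no DMA path.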

If you’re not running BitLocker, well, why not?

1 Comment

Filed under Security

SecureDoc full-volume encryption for Mac OS X

Windows users have more security options, and that’s just the way it is. Or is it?

Let’s start with the obvious: I love BitLocker and I cannot lie. Despite its faults, it remains a great example of a real-world security feature that delivers immediate value. It’s fully supported by the OS manufacturer, meets government security standards, and doesn’t have to rely on skanky hacks to work its magic.

Windows laptop users can also take advantage of Seagate’s Momentus FDE line of disk drives. These disks, sometimes called self-encrypting disks or just SEDs, perform hardware encryption, and they are qualified by the US National Security Agency as meeting NSTISSP #11. Unfortunately, these drives require support in the BIOS. Since Apple’s laptops all use EFI instead of the standard x86/x64 BIOS, you can’t just plop a Momentus FDE into your Mac and expect it to work.

The only solution I’ve found to get an SED to work in a modern Mac laptop is from WinMagic. Their SecureDoc product is essentially a full-volume encryption tool that competes directly with BitLocker, as well as with other FVE products from PGP, PointSec, and so on. The big difference: the Mac version of SecureDoc supports Momentus FDE disks. Naturally I had to try it.

Installation is simple: you run an installer, which adds a couple of kernel drivers and modifies the boot loader. If (and only if) it detects an unlocked Momentus FDE as the boot volume, it will ask whether you want to use hardware or software encryption. (The installer also tells you that it will change the system’s hibernation mode, but let’s not get ahead of ourselves yet…)

When you’re done, you must reboot, at which point you see the new (and quite ugly) SecureDoc login screen. When you log in here, the SecureDoc bootloader unlocks the FDE disk and the normal Mac OS X boot cycle proceeds.

The docs ask that you turn off pagefile encryption by unchecking the "Use secure virtual memory" option in the General pane of the Security preferences tool. This makes sense: there’s no reason to ask the OS to encrypt the page file if the disk on which it lives is already encrypted. You must also turn off the "Put hard drive to sleep whenever possible" checkbox, as the OS doesn’t deal well with having the disk go to sleep (and thus get locked) while you’re using it.

In my test install, I ran into an odd problem: the machine would freeze when waking from sleep. The cursor and keyboard would work normally, but I’d get the spinning rainbow pizza of death. After doing some digging, and with the help of WinMagic’s tech support folks, I determined that the system’s hibernation mode wasn’t properly set by the installer. (Page 4 of this document is the only place I’ve found the different hibernation mode codes explained.) Uninstalling the SecureDoc software, manually setting the hibernation mode with the pmset tool, and then reinstalling SecureDoc fixed the problem, and it has worked flawlessly since.
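For the curious, here’s roughly what that manual fix looks like. Treat the mode value as illustrative – the right value depends on your hardware, and the SecureDoc docs (not me) are the authority on what their installer expects:

```shell
# Show the current power settings, including hibernatemode
pmset -g | grep hibernatemode

# Set "safe sleep" mode 3 (RAM stays powered and a hibernation image is
# also written to disk) -- the default on most Apple portables. Other
# common values are 0 (sleep to RAM only) and 25 (hibernate to disk only).
sudo pmset -a hibernatemode 3
```

After setting the mode, I reinstalled SecureDoc so its installer could pick up the corrected value.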

The standalone version of SecureDoc doesn’t have the same set of management or control features that BitLocker does. Of course, that’s because WinMagic wants you to buy their server-based toolset, which uses a group policy-like mechanism to enforce whatever encryption policies you choose. Without having tested either the server tool or the Windows version, I’m not ready to pick a winner between BitLocker and SecureDoc, but for the Mac it’s a low-impact solution that does what it says, and I’m happy with it so far.

Comments Off on SecureDoc full-volume encryption for Mac OS X

Filed under General Tech Stuff, Security

IEEE Spectrum Risks blog

If you use a computer– at work, at home, at school– you should be reading The Risk Factor, a blog on computer-related risks operated by the fine folks who bring us the IEEE Spectrum. There’s a ton of fascinating stuff there, like this and this. The Risk Factor is like a gateway drug, though. After reading it for a while, you’ll be ready for the hard stuff.

Comments Off on IEEE Spectrum Risks blog

Filed under General Tech Stuff, Security

Oracle failed to produce CEO’s e-mail

Cue the tiny violins: a federal judge ruled that Oracle “destroyed or failed to preserve Chief Executive Larry Ellison’s e-mail files sought as evidence in a class-action lawsuit filed in 2001 against the software maker.” The alleged destruction (or failure, depending on how you look at it) happened in 2006– well after Oracle touted archiving features in Oracle Collaboration Suite. Ooops.

Comments Off on Oracle failed to produce CEO’s e-mail

Filed under General Tech Stuff, Oops!, Security

ISA and TMG announce virtualization plans

A few weeks ago, I wrote a column highlighting Microsoft’s announcement of their Exchange 2007 virtualization strategy. I just found out that the team that owns the Internet Security and Acceleration (ISA) Server and Forefront Threat Management Gateway (TMG) has announced their virtualization policy… and it’s a good one! Basically, they’ll support ISA and TMG on virtualization solutions that are part of the Server Virtualization Validation Program (SVVP)– including Hyper-V.

The full document is here. Here’s the money graf:

… if a hardware virtualization platform is listed as “validated” with the SVVP (not “under evaluation”), Microsoft ISA Server and Forefront TMG will be supported for production use on that platform within the limits prescribed in the Microsoft Product Support Lifecycle, Non-Microsoft hardware virtualization policies and the system requirements for that product version and edition.

This will make both ISA and TMG much more palatable to a wide variety of customers, particularly in the SMB space. I’m looking forward to redeploying ISA (which I haven’t been using for a few years) now that it won’t cost me a server’s worth of electricity to use.

Update: this VMware press release says that VMware ESX has passed the SVVP. This is huge news given that it essentially means Microsoft is now supporting Exchange, ISA, and TMG on the most widely deployed virtualization platforms– welcome air cover for all the folks who have been doing it for a while now 🙂

Comments Off on ISA and TMG announce virtualization plans

Filed under General Tech Stuff, Security

Oracle gets hammered on security

It’s like a joke that never gets old. I’ve written about Oracle’s terrible approach to product security before (here, here, here, and here are a few examples… bonus: this). Now security legend Jericho has written this outstanding timeline of exactly what Oracle has failed to do in the security arena. He should have subtitled it “Bring Me the Head of Mary Ann Davidson”. Well worth a read.

Comments Off on Oracle gets hammered on security

Filed under Security, Smackdown!