Tag Archives: Exchange

Microsoft, encryption, and Office 365

So the gloves are starting to come off: Microsoft general counsel Brad Smith wrote a long blog post this morning discussing how Microsoft plans to protect its customers’ data from unlawful interception by “unauthorized government access”. He never specifically mentions NSA, GCHQ, et al, but clearly the Five Eyes partners are who he’s talking about. Many other news outlets have dissected Smith’s post in detail, so I wanted to focus on a couple of lesser-known aspects.

First is that Microsoft is promising to use perfect forward secrecy (PFS) when it encrypts communications links. Most link-encryption protocols, including IPsec and SSL, use a key exchange algorithm such as Diffie-Hellman that lets the two endpoints agree on a temporary session key using their longer-term private/public key pairs. The session key is usually renegotiated for each conversation. If Eve the eavesdropper or Mallet the man-in-the-middle intercepts the communications, they may be able to decrypt them if they can guess or obtain the session key. Without PFS, an attacker who can intercept and record a communication stream now, and who can later guess or obtain the private key of either endpoint, can decrypt the stream. Think of this like finding a message in a bottle written in an unknown language, then next year seeing Rosetta Stone begin to offer a course in the language. PFS protects an encrypted communication stream now from future attack by changing the way the session keys are generated and shared: the keys are ephemeral, so compromising a long-term private key later doesn’t expose past conversations. Twitter, Google, and a number of other cloud companies have already deployed PFS (Google, in fact, started in 2011), so it is great to see Microsoft joining this trend. (A topic for another day: under what conditions can on-premises Exchange and Lync use PFS? Paging Mark Smith…)
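To make the key-exchange idea concrete, here’s a toy Diffie-Hellman exchange in PowerShell. The numbers are deliberately tiny (real implementations use 2048-bit-plus groups or elliptic curves), and all the variable names are mine, not anything from a real protocol stack:

```powershell
# Toy Diffie-Hellman: both sides derive the same session key without
# ever sending it over the wire. Illustration only -- numbers this
# small would be trivially breakable in practice.
$p = 23; $g = 5                       # public prime and generator
$a = 6                                # Alice's secret exponent
$b = 15                               # Bob's secret exponent
$A = [bigint]::ModPow($g, $a, $p)     # Alice transmits g^a mod p
$B = [bigint]::ModPow($g, $b, $p)     # Bob transmits g^b mod p
$aliceKey = [bigint]::ModPow($B, $a, $p)
$bobKey   = [bigint]::ModPow($A, $b, $p)
"Alice: $aliceKey  Bob: $bobKey"      # both compute the same value
```

With PFS, the endpoints throw those secret exponents away after each session; without it, the same long-term key material protects every conversation, which is exactly what makes recorded traffic decryptable later.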

Second is that Microsoft is acknowledging that they use data-at-rest encryption, and will be using it more often. Probably more than any other vendor, Microsoft is responsible for democratizing disk encryption by including BitLocker in Windows Vista and its successors, then steadily improving it. (Yes, I know that TrueCrypt and PGP predated BitLocker, but their installed bases are tiny by comparison.) Back in 2011 I wrote about some of the tradeoffs in using BitLocker with Exchange, and I suspected that Microsoft was using BitLocker in their Office 365 data centers, a suspicion that was confirmed recently during a presentation by some of the Office 365 engineering team and, now, by Smith’s post. Having said that, data-at-rest encryption isn’t that wonderful in the context of Office 365 because the risk of an attacker (or even an insider) stealing data by stealing/copying physical disks from an Office 365 data center is already low. There are many layers of physical and procedural security that help keep this risk low, so encrypting the stored data on disk is of relatively low value compared to encrypting the links over which that data travels.

The third aspect is actually something that’s missing from Smith’s post, expressed as one word: Skype. Outlook.com, Office 365, SkyDrive, and Azure are all mentioned specifically as targets for improved encryption, but nothing about Skype? That seems like a telling omission, especially given Microsoft’s lack of prior transparency about interception of Skype communications. Given the PR benefits that the company undoubtedly expects from announcing how they’re going to strengthen security, the fact that Smith was silent on Skype indicates, at least to suspicious folks like me, that for now they aren’t making any changes. Perhaps the newly-announced transparency centers will provide neutral third parties an opportunity to inspect the Skype source code to verify its integrity.

Finally, keep in mind that nothing discussed in Smith’s post addresses targeted operations where the attacker (or government agency, take your pick) mounts man-in-the-middle attacks (QUANTUM/FOXACID) or infiltrates malware onto a specific target’s computer. That’s not necessarily a problem that Microsoft can solve on its own.

Leave a comment

Filed under Office 365, UC&C

2-factor Lync authentication and missing Exchange features

Two-factor authentication (or just 2FA) is increasingly important as a means of controlling access to a variety of systems. I’m delighted that SMS-based authentication (which I wrote about in 2008) has become a de facto standard for many banks and online services. Microsoft bought PhoneFactor and offers its SMS-based system as part of multi-factor authentication for Azure, which makes it even easier to deploy 2FA in your own applications.

Customers have been demanding 2FA for Lync, Exchange, and other on-premises applications for a while now. Exchange supports the use of smart cards for authentication with Outlook Anywhere and OWA, and various third parties such as RSA have shipped authentication solutions that support other authentication factors, such as one-time codes or tokens. Lync, however, has been a little later to the party. With the July 2013 release of Lync Server 2013 CU2, Lync supports the use of smart cards (whether physical or virtual) as an authentication mechanism. Recently I became aware that some Lync features aren’t available when the client authenticates with a smart card– that’s because the client talks to two different endpoints. It authenticates to Lync using two-factor authentication, but the Lync client can’t currently authenticate to Exchange using the same smart card, so services based on access through Exchange Web Services (EWS) won’t work. The docs say that this is “by design,” which I hope means “we didn’t have time to get to it yet.”

This limitation means that Lync 2013 clients using 2FA cannot use several features, including:

  • the Unified Contact Store. You’ll need to use Invoke-CsUcsRollback to disable Lync 2FA users’ UCS access if you’ve enabled it.
  • the ability to automatically set presence based on the user’s calendar state, i.e. the Lync client will no longer set your presence to “out of office”, “in a meeting,” etc. based on what’s on your calendar. Presence that indicates call states such as “in a conference call” still works.
  • integration with the Exchange-based Conversation History folder. If you’ve configured the use of Exchange 2013 as an archive for Lync on the server side, that still works.
  • access to high-definition user photos.
  • the ability to see and access Exchange UM voicemail messages from the Lync client.

These limitations weren’t fixed in CU3, but I am hopeful that a not-too-distant future version of the client will enable full 2FA use. In the meantime, if you’re planning on using 2FA, keep these limitations in mind.
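If you’ve already moved your pilot users’ contacts into UCS, the rollback looks something like the sketch below. The SIP address and policy name are made-up examples; scope them to your own 2FA user population:

```powershell
# Roll one 2FA pilot user's contact list back from the Unified Contact
# Store (in Exchange) to Lync server storage, so a smart-card-
# authenticated client can still see it. Identity is hypothetical.
Invoke-CsUcsRollback -Identity "sip:alex@contoso.com"

# Then keep the user out of UCS until 2FA support catches up, e.g. with
# a user services policy that disables UCS (policy name is hypothetical):
New-CsUserServicesPolicy -Identity "NoUCS" -UcsAllowed $false
Grant-CsUserServicesPolicy -Identity "sip:alex@contoso.com" -PolicyName "NoUCS"
```

Without a policy like this, the client will try to migrate the contact list back into Exchange the next time it signs in with ordinary credentials.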

1 Comment

Filed under General Tech Stuff, UC&C

Off to Exchange Connections 2013!

Off to Las Vegas I go! I am en route to Exchange Connections 2013, where I’ll be presenting 3 sessions: one on Exchange ActiveSync with the folks from BoxTone, one on Exchange 2013 and Lync 2013 integration, and one on Exchange 2013 unified messaging. I also plan to have breakfast, lunch, dinner, coffee, beer, snacks, or cuddles (well, OK, probably not cuddles) with as many members of the Exchange product group, MVP community, and world at large as possible. If you’re there, by all means please come by and say hello! (and if you want to go lift weights together, even better!)

Sadly, my book won’t be on sale there because it is still being printed. However, I’ll be giving away a copy or two in each of my sessions, so if you’re feeling lucky, come on by.

In related news, registration opened for the 2014 edition of the Microsoft Exchange Conference, or MEC. I am ridiculously excited about the return of MEC, and not just because it’s in Austin and I might finally get to meet some of my Dell coworkers. The product group has been sharing a bit of what they’ve got planned with the MVPs and I can say, with conviction, that it will be just as good as, if not better than, MEC 2012.

But back to now. Somewhat unusually, I am flying United, connecting through Houston both ways. Normally I wouldn’t, but scheduling dictated it, and with luck I’ll be in Houston long enough to have some of my favorites (plus: Channel 9!). Then it’s a ridiculously short return to Huntsville– basically, long enough to change suitcases and grab my running shoes– before I head to Vermont to run the Leaf Peepers 5K with my lovely sister (note: subscribe to her blog; you’ll be glad you did), thence to Hoboken to meet with customers.

See you at the show!

1 Comment

Filed under Travel, UC&C

Do mailbox quotas matter to Outlook and OWA?

Great question from my main homie Brian Hill:

Is there a backend DB reason for setting quotas at a certain size? I have found several links (like this one) discussing the need to set quotas due to the way the Outlook client handles large numbers of messages or OST files, but for someone who uses OWA, does any of this apply?

Short answer: no.

Somewhat longer answer: no.

The quota mechanism in Exchange is an outgrowth of those dark times when a large Exchange server might host a couple hundred users on an 8GB disk drive. Because storage was so expensive, Microsoft’s customers demanded a way to clamp down on mailbox size, so we got the trinity of quota limits: prohibit send, prohibit send and receive, and warn. These have been with us for a while and persist, essentially unchanged, in Exchange 2013, although it is now common to see quotas of 5GB or more on a single mailbox.
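For reference, here’s what the three quota limits look like when set on a single mailbox from the Exchange Management Shell. The mailbox name and sizes are just examples:

```powershell
# Set the warn / prohibit-send / prohibit-send-and-receive trinity on
# one mailbox, overriding the database defaults. Identity and values
# are illustrative, not recommendations.
Set-Mailbox -Identity "Brian Hill" `
    -IssueWarningQuota 4.5GB `
    -ProhibitSendQuota 4.75GB `
    -ProhibitSendReceiveQuota 5GB `
    -UseDatabaseQuotaDefaults $false
```

Most shops set these at the database level and use per-mailbox overrides only for exceptions, which is why `UseDatabaseQuotaDefaults` has to be flipped off here.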

Outlook has never had a formal quota mechanism of its own, apart from the former limit of 2GB on PST files imposed by the 32-bit offsets used as pointers in the original PST file format. This limit was enforced in part by a dialog that would tell you that your PST file was full and in part by bugs in various versions of Outlook that would occasionally corrupt your PST file as it approached the 2GB size limit. Outlook 2007 and later pretty much extinguished those bugs, and the Unicode PST file format doesn’t have the 2GB limit any longer. Outlook 2010 and 2013 set a soft limit on Unicode PSTs of 50GB, but you can increase the limit if you need to.
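If you do need to raise that soft limit, it’s done through the registry (KB 832925 documents the values, which are expressed in megabytes). A sketch for Outlook 2013 (the 15.0 key; adjust the version and sizes to suit):

```powershell
# Raise Outlook 2013's Unicode PST/OST soft limit to 75 GB, with the
# warning threshold a bit lower. Values are in MB, per KB 832925.
$key = 'HKCU:\Software\Policies\Microsoft\Office\15.0\Outlook\PST'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name MaxLargeFileSize  -Value (75 * 1024)
Set-ItemProperty -Path $key -Name WarnLargeFileSize -Value (72 * 1024)
```

Outlook reads these values at startup, so the change takes effect the next time the client launches.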

Outlook’s performance is driven not by the size of the PST file itself (thought experiment: imagine a PST with a single 10GB item in it as opposed to one with 1 million 100KB messages) but by the number of items in any given folder. Microsoft has long recommended that you keep Outlook item counts to a maximum of around 5,000 items per folder (see KB 905803 for one example of this guidance). However, Outlook 2010 and 2013, when used with Exchange 2010 or 2013, can handle substantially more items without performance degradation: the Exchange 2010 documentation says 100,000 items per folder is acceptable, though there’s no published guidance for Exchange 2013. There’s still no hard limit, though. The reasons why the number of items (and the number of associated stored views) matters are well enumerated in this 2009 article covering Exchange 2007. Some of the mechanics described in that article have changed in later versions of Exchange but the basic truth remains: the more views you have, and/or the more items that are found or selected by those views, the longer it will take Exchange to process them.

If you’re wondering whether your users’ complaints of poor Outlook performance are related to high item counts, one way to find out is to use a script like this to look for folders with high item counts.
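If you don’t want a full script, a quick EMS one-liner gets you most of the way there. The mailbox identity is an example; wrap it in a loop over `Get-Mailbox` to survey everyone:

```powershell
# Find folders in one mailbox that exceed the old 5,000-item guidance.
# Mailbox identity is a made-up example.
Get-MailboxFolderStatistics -Identity "Brian Hill" |
    Where-Object { $_.ItemsInFolder -gt 5000 } |
    Sort-Object ItemsInFolder -Descending |
    Format-Table Name, ItemsInFolder, FolderSize -AutoSize
```

Running this across a whole database can take a while, since `Get-MailboxFolderStatistics` touches every folder in every mailbox it examines.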

Circling back to the original question: there is a performance impact with high item count folders in OWA, but there’s no quota mechanism for dealing with it. If you have a user who reports persistently poor OWA performance on particular folders, high item counts are one possible culprit worth investigating. Of course, if OWA performance is poor across multiple folders that don’t have lots of items, or across multiple users, you might want to seek other causes.

Leave a comment

Filed under UC&C

Microsoft Certified Systems Master certification now dead

I received a very unwelcome e-mail late last night:

Microsoft will no longer offer Masters and Architect level training rotations and will be retiring the Masters level certification exams as of October 1, 2013. The IT industry is changing rapidly and we will continue to evaluate the certification and training needs of the industry to determine if there’s a different certification needed for the pinnacle of our program.

This is terrible news, not just for the community of existing MCM/MCSM holders but also for the broader Exchange community. It is a clear sign of how little Microsoft values the skills of on-premises administrators of all its products (because all the MCSM certifications are going away, not just the one for Exchange). If all your messaging, directory, communications, and database services come from the cloud (or so I imagine the thinking goes), you don’t need to spend money on advanced certifications for your administrators who work on those technologies.

This is also an unfair punishment for candidates who attended the training rotation but have yet to take the exam, for those who were signed up for the already-scheduled upgrade rotations, and for those who were signed up for future rotations. Now they’re stuck unless they can take, and pass, the certification exams before October 1… which is pretty much impossible. It greatly devalues the certification, of course, for those who already have it. Employers and potential clients can look at “MCM” on a resume and form their own value judgement about its worth given that Microsoft has dropped it. I’m not quite ready to consign MCM status to the same pile as CNE, but it’s pretty close.

The manner of the announcement was exceptionally poor in my opinion, too: a mass e-mail sent out just after midnight Central time last night. Who announces news late on Friday nights? People who are trying to minimize it, that’s who. Predictably, and with justification, the MCM community lists are blowing up with angry reaction, but, completely unsurprisingly, no one from Microsoft is taking part, or defending their position, in these discussions.

As a longtime MCM/MCSM instructor, I have seen firsthand the incredible growth and learning that takes place during the MCM rotations. Perhaps more importantly, the community of architects, support experts, and engineers who earned the MCM has been a terrific resource for learning and sharing throughout their respective product spaces; MCMs have been an extremely valuable connection between the real world of large-scale enterprise deployments and the product group.

In my opinion, this move is a poorly-advised and ill-timed slap in the face from Microsoft, and I believe it will work to their detriment.

18 Comments

Filed under FAIL, UC&C

Leaving messages for non-UM-enabled users

Recently I got a good question from a coworker. He was working with a customer who was piloting Exchange Unified Messaging, and the customer was a little confused by a poorly-documented behavior of Exchange UM.

Consider that you have four test users who are UM-enabled: Alex, Brian, Carole, and David. You also have four users with Exchange mailboxes who are not UM-enabled: Magdalena, Nick, Oscar, and Pete.

The customer reported that he could dial the default automated attendant, or into Outlook Voice Access, and use dial by name to call Alex, Brian, Carole, or David.

However, he had Exchange configured to allow callers to leave voice mail messages without ringing the phone first (what I call “the coward setting”; it’s controlled with the SendVoiceMsgEnabled parameter of Set-UMDialPlan). He was able to leave messages for Magdalena and the other non-UM-enabled users, which surprised him and generated the question.

This does seem odd. It’s easy to understand why you can leave a message for the first four users: they are UM-enabled, so they have extensions to which Exchange can transfer the call. But why can you leave a UM message for a user who isn’t UM-enabled? It’s because leaving a voice mail directly for a user doesn’t involve ringing an extension, so not having an extension assigned isn’t an obstacle. When you select that user for a message, UM will play the greeting (which is almost certainly going to be the system-generated TTS version of the user name, as a non-UM-enabled user probably will not have recorded a greeting) and record the message, then deliver it through the standard path.
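For completeness, toggling the “coward setting” is a one-liner against the dial plan. The dial plan name here is hypothetical:

```powershell
# Allow callers to send a voice message from the auto attendant or
# Outlook Voice Access without ringing the user's phone first.
# Dial plan name is a made-up example.
Set-UMDialPlan -Identity "HQ Dial Plan" -SendVoiceMsgEnabled $true

# ...and to turn the feature off again:
Set-UMDialPlan -Identity "HQ Dial Plan" -SendVoiceMsgEnabled $false
```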

The More You Know…

2 Comments

Filed under UC&C

Loading PowerShell snap-ins from a script

So I wanted to launch an Exchange Management Shell (EMS) script to do some stuff for a project at work. Normally this would be straightforward, but because of the way our virtualized lab environment works, it took me some fiddling to get it working.

What I needed to do was something like this:

c:\windows\system32\windowspowershell\v1.0\powershell.exe -command "someStuff"

That worked fine as long as all I wanted to do was run basic PowerShell cmdlets. Once I started trying to run EMS cmdlets, things got considerably more complex because I needed a full EMS environment. First I had to deal with the fact that EMS, when it starts, tries to perform a CRL check. On a non-Internet-connected system, it will take 5 minutes or so to time out. I had completely forgotten this, so I spent some time fooling around with various combinations of RAM and virtual CPUs trying to figure out what the holdup was. Luckily Jeff Guillet set me straight when he pointed me to this article, helpfully titled “Configuring Exchange Servers Without Internet Access.” That cut the startup time waaaaay down.

However, I was still having a problem: my scripts wouldn’t run. They were complaining that “No snap-ins have been registered for Windows PowerShell version 2”. What the heck? Off to Bing I went, whereupon I found that most of the people reporting similar problems were trying to launch PowerShell.exe and load snap-ins from web-based applications. That puzzled me, so I did some more digging. Running my script from the PowerShell session that appears when you click the icon in the quick launch bar seemed to work OK. Directly running the executable by its path (i.e. %windir%\system32\WindowsPowerShell\v1.0\powershell.exe) worked OK too… but it didn’t work when I did the same thing from my script launcher.

Back to Bing I went. On about the fifth page of results, I found this gem at StackExchange. The first answer got me pointed in the right direction. I had completely forgotten about WOW64 file system redirection, the Windows compatibility feature that helps erase the distinction between x64 and x86 binaries by silently mapping system paths to the right architecture’s copy, even when you supply the “wrong” path. In my case, I wanted the x64 version of PowerShell, but that’s not what I was getting: my script launcher is a 32-bit x86 process, so when it launched PowerShell.exe from any path, Windows handed it the x86 version, which can’t load x64 snap-ins and thus couldn’t run EMS.

The solution? All I had to do was read a bit further down in the StackExchange article to see this MSDN article on developing applications for SharePoint Foundation, which points out that you must use %windir%\sysnative as the path when running PowerShell scripts after a Visual Studio build. Why? Because Visual Studio is a 32-bit application, but the SharePoint snap-in is x64 and must be run from an x64 PowerShell session… just like Exchange.

Armed with that knowledge, I modified my scripts to run PowerShell using sysnative vice the “real” path and poof! Problem solved. (Thanks also to Michael B. Smith for some bonus assistance.)
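Put together, the fix looks something like this when launching from a 32-bit process. The snap-in name is the real Exchange 2010 one; the cmdlet inside is a placeholder for whatever your script actually does:

```powershell
# From a 32-bit launcher, %windir%\sysnative bypasses WOW64 file system
# redirection and gets you the real 64-bit PowerShell, which can load
# the x64-only Exchange snap-in. (sysnative only exists for 32-bit
# callers; a 64-bit process should just use System32.)
& "$env:windir\sysnative\WindowsPowerShell\v1.0\powershell.exe" -Command {
    Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010
    # ...EMS cmdlets go here...
}
```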

1 Comment

Filed under General Tech Stuff, UC&C

Some MEC schedule and content updates

Today the Exchange team updated the MECIsBack.com website to share more details of what awaits us in a mere 48 days! The complete schedule is a pretty broad outline, but the session list is quite tantalizing.

Day 1 starts with an opening keynote by Rajesh Jha, but the real goodies start with a technical keynote covering the architecture of what Microsoft is calling “the new Exchange.” (It’s interesting, btw, that SharePoint, Lync, Office 2013, and Windows 8/2012 aren’t calling their products “the new X”. I like the Exchange branding.)  There are a total of 8 additional breakout sessions, all on Exchange 2013, scheduled for the rest of day 1. This is definitely a good news/bad news situation, as these 8 sessions are stuffed into three time slots so you cannot attend them all. That means that we’ll all have to choose which sessions seem most interesting. The arrangement reminds me a bit of past MVP summits when we had to make choices such as “would I rather go to the ‘what’s new in PowerShell’ or ‘storage architecture changes’ session?” This is rather jarring given how lame the last few years’ worth of TechEd content has been for Exchange, but it’s a good problem to have. Fortunately the MEC folks will have the Exchange 2013 day-1 sessions recorded for later viewing. (Personally, I think I will probably hit the high availability, security, and “Apps for Outlook and OWA” sessions.)

Days 2 and 3 are all chalk talks. Microsoft is calling them “classroom sessions” but I picture something more informal than the typical lecture sessions, with lots of back-and-forth Q&A. The preview session content list includes a bunch of sessions on both Exchange 2013 and Exchange 2010. There are some interesting tidbits hidden in the session list: “What’s New In Support Programs with Exchange,” for instance, sounds intriguing given that Microsoft has not yet publicly said anything about upcoming support changes. The sessions on site mailboxes, modern public folders, and what’s new in anti-malware (you did know Exchange 2013 includes malware filtering now, right?) look worthwhile as well.

Microsoft hasn’t yet announced exactly which speakers will be presenting the new Exchange 2013 content. However, if you look at the speaker list you can make some informed guesses. I’d expect all of the Exchange 2013 sessions to be covered by Microsoft speakers (I love it that the Microsoft product group folks are listed under the heading of “Exchange Team Personalities”– I can attest that many of the Exchange folks are, in fact, lively personalities), and if you know who does what on the product team you can probably match session titles to personalities pretty easily.

I’m presenting two sessions: E14.302, “Developing Mobile Applications with Exchange Web Services,” and E14.303, “10 Things You Didn’t Know About Exchange Unified Messaging.” Other presenters include unindicted co-conspirator Tony Redmond, fellow MCM instructor Brian Reid, the formidable Glen Scales, ex-3Sharpie Devin Ganger, and a host of others whose names you’ll probably recognize.

Interestingly, Microsoft is still looking for suggestions for sessions– drop mecideas@microsoft.com a line if there are specific things you want to talk about that aren’t covered. The exhibitors list is now up to date as well, with most of the usual suspects represented– Quest, Binary Tree, Sherpa, and so on.

One open question: there are two evening events, plus an optional post-event activity… I wonder what the MEC planners have up their sleeves for us? I can’t wait to find out. See you there!

Leave a comment

Filed under UC&C

Man-in-the-middle attacks against Exchange ActiveSync

I love the BlackHat security conference, although it’s been a long-distance relationship, as I’ve never been. The constant flow of innovative attacks (and defenses!) is fascinating, but relatively few of the attacks focus on things that I know enough about to have a really informed opinion. At this year’s BlackHat, though, security researcher Peter Hannay presented a paper on a potential vulnerability in Exchange ActiveSync that can result in malicious remote wipe operations. (Hannay’s paper is here, and the accompanying presentation is here.)

In a nutshell, Hannay’s attack depends on the ability of an attacker to impersonate a legitimate Exchange server, then send the device a remote wipe command, which the device will then obey. The attack depends on the behavior of the EAS protocol provisioning mechanism, as described in MS-ASPROV.

Before discussing this in more detail, it’s important to point out three things. First, this attack doesn’t provide a way to retrieve or modify data on the device (apart from erasing it, which of course counts as “modifying” it in the strictest sense.) Second, the attack depends on use of a self-signed certificate. Self-signed certificates are installed and used by Exchange 2007, 2010, and 2013 by default, but Microsoft doesn’t recommend their use for mobile device sync (see the 2nd paragraph here); contrary to Hannay’s claim in the paper, my experience has been that relatively few Exchange sites depend on self-signed certs.

The third thing I want to highlight: this is an interesting result and I’m sure that the EAS team is studying it closely to ensure that the future attacks Hannay contemplates, like stealing data off the device, are rendered impossible. There’s no current cause for worry.

The basis of this attack is that EAS provides a policy update mechanism that allows the server to push an updated security policy to the device when the policy changes. There are three cases in which the EAS Provision command comes into play:

  • when the client contacts the server for the first time. In this case, the client should pull the policy and apply it. (I vaguely remember that iOS devices prompt the user to accept the policy, but Windows Phone devices don’t.)
  • when the policy changes on the server, in which case the server returns a response indicating that the client needs to issue another Provision command to get the update.
  • when the server tells the device to perform a remote wipe.

The client sends a policy key with each command it sends to the server, so the server always knows what version of the policy the device has; that’s how it knows when to send back the response indicating that the device should reprovision.

If the client doesn’t have a policy, or if the policy has changed on the server, the client policy key won’t match the current server policy key, so the server sends back a response indicating that the client must reprovision before the server will talk to it.

There seems to be a flaw in Hannay’s paper, though.

The mechanism he describes in the paper is that used by EAS 12.0 and 12.1, as shipped in Exchange 2007. In that version of EAS, the server returns a custom HTTP error, 449, to tell the device to get a new policy. A man-in-the-middle attack in this configuration is simple: set up a rogue server that pretends to be the victim’s Exchange server, using a self-signed certificate, then when any EAS device attempts to connect, send back HTTP 449. The client will then request reprovisioning, at which time the MITM device sends back a remote wipe command.
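Sketched as a message flow (paraphrased for clarity, not a literal protocol trace), the EAS 12.x version of the attack looks like this:

```
device -> rogue server : any EAS command (device accepts the self-signed cert)
rogue  -> device       : HTTP 449 ("retry after provisioning")
device -> rogue server : Provision command, requesting the current policy
rogue  -> device       : Provision response containing a RemoteWipe element
device                 : acknowledges the wipe and erases itself
```

The only trust check in that exchange is the certificate validation at the first step, which is exactly why the self-signed-certificate configuration matters so much.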

Newer versions of Exchange return an error code in the EAS message itself; the device, upon seeing this code, will attempt to reprovision. (The list of possible error codes is in the section “When should a client provision?” in this excellent MSDN article). I think this behavior would be harder to spoof, since the error code is returned as part of an existing EAS conversation.

In addition, there’s the whole question of version negotiation. I haven’t tested it, but I assume that most EAS devices are happy to use EAS 12.1. I don’t know of any clients that allow you to specify that you only want to use a particular version of EAS. It’s also not clear to me what would happen if you send a device using EAS 14.x (and thus expecting to see the policy status element) the HTTP 449 error.

Having said all that, this is still a pretty interesting result. It points to the need for better certificate-management behavior on the devices, since Hannay points out that Android and iOS devices behaved poorly in his tests. Windows Phone seems to do a better job of handling unexpected certificate changes, although it’s also the hardest of the 3 platforms to deal with from a perspective of installing and managing legitimate certificates.

More broadly, Hannay’s result points out a fundamental flaw in the way all of these devices interact with EAS, one that I’ve mentioned before: the granularity of data storage on these devices is poor. A remote-wipe request from a single Exchange account on the device arguably shouldn’t wipe out data that didn’t come from that server. The current state of client implementations is that they erase the entire device– apps, data, and all– upon receiving a remote wipe command. This is probably what you want if your device is lost or stolen (i.e. you don’t want the thief to be able to access your personal or company data), but when you leave a company you probably don’t want them wiping your entire device. This is an area where I hope for, and expect, improvement on the part of EAS client implementers.

1 Comment

Filed under Security, UC&C

Exchange 2013 preview ships

Yay! Microsoft has released the preview version (which we normal humans might refer to as a beta) of Exchange 2013, SharePoint 2013, Office 2013, and Lync 2013.

I don’t have time to write a full summary of all the changes, but a few highlights:

  • One big piece of news: there are now only two server roles: client access and mailbox. (Raise your hand if that reminds you of the Exchange 2000/2003 front-end/back-end split.) CAS now has a new service, Front-End Transport, that doesn’t do what you probably think. In addition, the RPC Client Access service (RCA) is now gone.
  • MAPI may not officially be dead, but the fact that Outlook 2013 can use Exchange ActiveSync sure makes it look that way.
  • The new Exchange Administration Center (EAC) is going to be polarizing; some admins will love it, while others will hate it.
  • No more multi-master replication for public folders. You should read this FAQ if you’re a public folder aficionado.
  • This is the first release of Exchange or SharePoint that really enables the “better together” story. In-place discovery searches and site mailboxes (about which, more later) will really make a huge difference in how SharePoint and Exchange are used for data management.
  • “By default, malware filtering is enabled in Microsoft Exchange Server 2013 Preview.” Yay!
  • Not a ton of unified messaging changes, but a few welcome ones, including better Voice Mail Preview accuracy and some UI improvements.

There’s a list of “what’s new in this release” items, of course. Keep in mind, though, that Microsoft frequently adds features between preview releases and RTM, so there may well be additional features, or changes to existing features, between this release and the final release later this year.

Download and enjoy!

2 Comments

Filed under UC&C

What "supported" really means

If I had a nickel for every time I had had a discussion like the one below…

<Customer> wants to <do something>. I don’t think it’s a good idea and tried to explain that to them. They want to do it anyway. Is it supported?

The particular discussion that triggered this post was a conversation among MCMs concerning a customer who wanted to know if they could configure an Exchange 2010 server so that it was dual-homed, with one NIC on the LAN and another in their DMZ. There are a number of good reasons not to do this, most related to one of two things: the inability to force Windows and/or Exchange to use only one of the installed NICs for certain operations, or the lack of knowledge about how to configure everything properly in such a configuration. For example, you’d have to be careful to get static routes right so that you only passed the traffic you wanted on each interface. You’d also have to be careful about which AD sites your server appeared to be a member of.
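For what it’s worth, “getting static routes right” on a dual-homed box means something like the commands below, with no default gateway on the DMZ NIC and persistent routes pinning internal traffic to the LAN interface. All the addresses are invented:

```powershell
# Hypothetical dual-homed example: internal traffic goes via the LAN
# NIC's gateway; the DMZ NIC gets no default gateway at all.
# Persistent route for the internal 10.0.0.0/8 network:
route -p add 10.0.0.0 mask 255.0.0.0 10.1.1.1

# Verify which interface and gateway each destination will use:
route print
```

Every one of these routes is something you now have to remember, document, and re-verify after any network change, which is the complexity cost discussed below.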

The big issue for me: that configuration would add complexity. Any time you add complexity, you should be able to clearly articulate what you’re gaining in exchange. Performance, scalability, flexibility, security, cost savings… there has to be some reason to make it worth complicating things. This is a pretty fundamental principle of designing anything technical, from airplanes to washing machines to computer networks, and you violate it at your peril.

In this case, the gain is that the customer wouldn’t need to use TMG or a similar solution. That seems like an awfully small gain for the added complexity burden and the supportability issues it raises.

You might be wondering why I’d bring up supportability in this context. The cherry on the sundae was this comment from the fellow who started the thread: “It’s not written that you can’t do it, so they assume that means you can.” This is a dangerous attitude in many contexts, but especially so here.

I’ve said it before (and so has practically everyone who has ever written about Exchange), but it bears repeating:

Just because something is not explicitly unsupported, that doesn’t mean it is supported.

Microsoft doesn’t – indeed, can’t – test every possible configuration of Exchange. Or Windows. Or any of their other products (well, maybe except for closed consumer systems like Windows Phone and Xbox 360). So there’s a simple process to follow when considering whether something meets your requirements for supportability:

  1. Does Microsoft explicitly say that what you want to do is, or is not, supported?
  2. If they don’t say one way or the other, are you comfortable that you can adequately test the proposed change in your environment to make sure that it only has the desired effects?

Point 1 is pretty straightforward. If Microsoft says something’s explicitly supported, you’re good to go. If they explicitly say something is unsupported, you’re still good, provided you don’t do it.

Brief digression: when Microsoft says something’s unsupported, it can mean one of three specific things:

  • We tested it. It doesn’t work. Don’t do it. (Example: a long list of things involving Lync device provisioning.)
  • We tested it. It works. It’s a bad idea for some other unrelated reason. Don’t do it. (Example: going backupless with a 2-copy DAG.)
  • We didn’t test it. We don’t know if it works. You could probably figure out some way to make it work.  If it doesn’t work, on your own head be it. (Example: the prior stance on virtualization of Exchange roles.)

OK, where was I? Oh yeah: if Microsoft doesn’t make an explicit statement one way or another, that is not an unconditional green light for you to do whatever you want. Instead, it’s an invitation for you to think carefully about what you’ll gain from the proposed configuration. If what you want to do is common, then there will probably be a support statement for it already; the fact that there isn’t should give you pause right there. If you believe the gain is worth the potential risk that comes from an increase in complexity, and you can demonstrate through testing (not just a SWAG) that things will work, only then should you consider proceeding.

(n.b. permission is hereby granted for all you Exchange folks out there to copy this and send it to your customers next time they ask you for something dangerous, ignorant, unsupportable, or otherwise undesirable.)

5 Comments

Filed under Musings, UC&C

The Conversation Action Settings folder

I recently got a query from a Mac-using coworker:

When looking at my email account, I see an extra folder called Conversation Action Settings. Is this something I can safely dispose of?

If you’re used to using Outlook on Windows, you may never have seen this folder: it’s only present on Exchange 2010 mailboxes, and Outlook 2007 doesn’t display it. Outlook 2011 for Mac OS X does show it, though, as does Apple’s Mail.app. This has engendered a lot of discussion about what the folder is and whether it’s safe to get rid of it.

So let me answer those points in reverse order. Yes, it’s safe to remove the folder… but if you do so, it’s just going to come back again. I expect that Apple will update Mail.app in Mac OS X “Lion” to hide the folder; they’ve done similar work to hide other Exchange/Outlook-specific folders in the past.

It’s arguably more interesting to talk about what’s in the folder in the first place. The Conversation Action Settings folder holds (drum roll)… conversation actions. These actions tell Exchange 2010 (and compatible clients, which for now means “OWA 2010” and “Outlook 2010”) what to do with message items under specific circumstances.

One action is the now-famous “Ignore” button (see Clint Boessen’s description if you’re not hip to this very useful feature). When you hit the Ignore button, Outlook creates a conversation action that automatically moves messages in the target thread to your Deleted Items folder. It can do this because Exchange 2010 automatically tags incoming messages with a conversation ID. Related messages (like replies or forwards of an existing message) get the same conversation ID. Exchange uses a variety of heuristics to assign these IDs, and in general they work well to keep related messages together even when people do things like change the subject line mid-thread.
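Exchange’s actual grouping algorithm isn’t public (the real conversation ID is a binary property stamped at transport time), but here’s a toy sketch, in Python, of the subject-normalization flavor of heuristic involved. Everything here is hypothetical and purely illustrative:

```python
import hashlib
import re

def conversation_id(subject: str) -> str:
    """Derive a toy conversation ID by normalizing the subject line.

    Strips reply/forward prefixes (Re:, Fw:, Fwd:) so that replies and
    forwards of a thread hash to the same ID. This is only a sketch of
    the general idea; Exchange 2010 actually stamps a binary
    ConversationIndex property and uses richer heuristics than this.
    """
    normalized = subject.strip()
    # Repeatedly strip leading "Re:" / "Fwd:" / "Fw:" prefixes
    prefix = re.compile(r'^(re|fwd|fw)\s*:\s*', re.IGNORECASE)
    while prefix.match(normalized):
        normalized = prefix.sub('', normalized, count=1)
    # Hash the case-folded remainder so related subjects collide
    return hashlib.sha256(normalized.lower().encode('utf-8')).hexdigest()[:16]
```

With something like this, “Re: RE: Budget review” and “Budget review” map to the same ID, while unrelated subjects don’t, which is the basic behavior the Ignore feature depends on.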

The other data items stored in this folder are Outlook 2010 Quick Steps. I love this feature and use it heavily; in fact, it’s one of the things I miss most when I’m using OWA 2010 and Outlook 2011.

If you’re not using a client that supports these features, then there won’t be anything in the Conversation Action Settings folder. However, just as nature abhors a vacuum, so does Exchange, so if you delete the folder expect to see it come back.

There’s more on conversation actions, and some other interesting Exchange 2010 and Outlook 2010 features, in this article.

1 Comment

Filed under UC&C

Exchange Maestro, day 2

If this is Thursday, it must be time for Thursday Trivia– Maestro style!

Tony’s produced another excellent writeup, this time featuring the second day of our Maestro training festivities (activities? either one works) here in Boston. A few additional notes come to mind.

First, I must confess to a degree of envy for the beautiful Nikon lens that Tony has been using to take pictures, although I would quite like it if he would take pictures of something other than me. I suppose you can’t have everything you wish for. (Ed. note: the actual lens Tony is using is this one. The one I linked to in the preceding sentence is the rough equivalent for my camera, which is why I got them mixed up.)

The RBAC session went quite well, though it ran longer than I wanted it to. RBAC is one of the key areas where Exchange 2010 differs significantly from Exchange 2007. Most Windows administrators are so used to the standard Windows security model, which uses discretionary access control, that the concept of access control based on roles seems very foreign. When I teach RBAC, there are a few principles that I focus on to help keep the most important things at the forefront. First, RBAC role assignments are additive. If I assign you three different roles, you will have the ability to do anything that any of those roles allow. This is a big change from the standard Windows model, with its rules about most restrictive permissions.

Second, I often liken RBAC to sculpting using stone. When you create a new role, you can only take away from the entries that the parent role holds. A child role cannot contain role entries that were never present in the parent. This, again, is quite unlike the standard Windows model.
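Those first two principles can be sketched in a few lines of Python. This is a toy model, not real RBAC code, and the role and cmdlet names are made up for illustration:

```python
class Role:
    """Toy model of two Exchange 2010 RBAC principles:
    assignments are additive, and a child role can only take away
    entries that the parent role already holds (the sculpting rule)."""

    def __init__(self, name, entries, parent=None):
        entries = set(entries)
        if parent is not None and not entries <= parent.entries:
            # Sculpting rule: you can only chip away at the parent;
            # a child can never hold entries the parent never had.
            raise ValueError("child role entries must be a subset of the parent's")
        self.name = name
        self.entries = entries
        self.parent = parent

def effective_permissions(assigned_roles):
    """Additive rule: a user can do anything any assigned role allows."""
    allowed = set()
    for role in assigned_roles:
        allowed |= role.entries
    return allowed
```

So assigning a user both a broad role and a narrow child role just yields the union of their entries, and trying to build a child role with an entry the parent lacks fails outright.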

Third, understanding how the “triangle of power” works is key to understanding RBAC. I will probably include a quick review in tomorrow morning’s review session.

After I finished RBAC, Tony embarked on a lengthy disquisition on the mailbox replication service. This is another major difference in Exchange 2010, and he covered it thoroughly. After a quick lunch of hotel Italian, I covered the high points of the Exchange 2010 transport system. I think the students were probably glad to be on more familiar ground, as the transport system still has a lot in common with previous versions.

The afternoon labs went quite well. I was able to help one student fix a nagging problem with the CAS servers in his production system, which meant he could use Outlook 2011 with his Exchange 2007 system. He was happy, as was I. It’s always very rewarding to be able to teach people things that they can immediately apply to their work environments. After all, that’s why we are here. Abstract knowledge is wonderful, but concrete, practical knowledge is better in my book.

(Speaking of book: Tony’s Exchange 2010 Inside Out is due to be released December 1. In related news, I am no longer the holdup in its production!)

One of the interesting things about this class is that we give the students a reasonably complex virtual environment to work with. This has its challenges, including the requirement for students to bring fairly powerful laptops. However, when I compare this class to other classes I have taught where the instructors provided the equipment, I like this model better. Students are confident in the quality of the equipment because it’s theirs. None of the instructor staff has had to spend any significant amount of time helping students with hardware issues, something that often happens when using rental equipment or equipment provided by a venue. In addition, students can take the lab environments with them when they leave for the day, so if they want to work on them more at home, or next week when they are back in their offices, they can easily do so.

After the class was over, we left the hotel with fellow MVP Lee Benjamin to have dinner at a nearby restaurant. The food was quite good, the service was excellent, and the vintage clothing worn by the waitstaff was remarkable in its variety, a most welcome change from the dull clothing worn by waitstaffs at most other restaurants. On the way back to Lee’s car, I spotted a plaque marking the location of the first long distance telephone call. I thought that was worth a picture; the results are below. I am pleased with how well the brick turned out using the built-in flash on my iPhone.

IMG_0062.JPG

Now I’m off to do a bit more editing work on Tony’s book, along with some last-minute changes to my slides for tomorrow. I’m covering Exchange unified messaging, as well as server sizing, scaling, and planning. Should be a fun day!

Comments Off on Exchange Maestro, day 2

Filed under UC&C

Exchange Connections Fall 2010 call for sessions

My co-chairs and I are working on assembling this year’s Exchange Connections content, which we’ll be presenting November 1-4 in Las Vegas at good ol’ Mandalay Bay. That’s why I’m posting this call for sessions!

Everything you should need to know is in this document.

The deadline for session proposals is May 6 – hurry, hurry, as usual! The sooner you can send in your proposals, though, the better the odds are that we’ll be able to choose your sessions. I’ll try to respond to your submissions on the same business day with any thoughts, requests, or tweaks. The conference has a brochure to get out pretty much ASAP if we’re going to get people to show up, so time is – as always – of the essence.

Note that we’ll be co-located, as usual, with dedicated conferences for Visual Studio, ASP.NET, Windows, SharePoint, and goodness knows what else – so for these proposals, stick strictly with Exchange and OCS topics.

If you want to submit sessions, see the call for sessions. If you have questions, you can ask them here or via e-mail.

Comments Off on Exchange Connections Fall 2010 call for sessions

Filed under UC&C

How I got into the writing business, part 2

In part 1, I started talking about how I got into the writing business. Part 1 ended with me having written a couple of non-Windows-related books (including this) and contributing to several Windows-oriented books (like this). I began to wonder if it made sense for me to get an agent, so I started talking to David Rogelberg, the owner of StudioB. He offered me the tempting possibility of being able to write for O’Reilly, something I had always wanted to do. I signed on as a StudioB client and, true to his word, David got me in touch with O’Reilly about writing a book on programming for the Palm Pilot.

Of course, I didn’t know anything about programming for the Pilot, but I wasn’t about to let a minor technicality stop me.

What did stop me was a communications mixup between Robert Denn, my editor at O’Reilly, and another ORA editor who shall remain nameless. This other editor had signed Rhodes and McKeehan – the experts who had also written a book on Newton development – to write a Palm programming book. That left O’Reilly in the position of having two PalmOS books under contract, only one of which would be written by, y’know, people who knew what they were doing.

Robert offered to let me write a book on another topic. In fact, he even gave me my pick of topics. I wish I could say that I jumped at the chance to write about Exchange, but I didn’t. I had to be more-or-less bullied into it by my agent, who realized the long-term potential of working in the Exchange market. I didn’t know anything about Exchange either, but I was determined to learn quickly, given that I had just signed a contract to write about it. I started joining every Exchange-related mailing list in sight, printed out all the product documentation, and set up Exchange using Virtual PC on my Powerbook. (Yes, that’s right; my O’Reilly Exchange book was written on a Mac – a trend which continues to this day.)

I learned sooooo much from the folks on the swynk Exchange list. Not only were there rock stars like Andy Webb, Missy Koslosky, and Ed Crowley there; there were also a ton of Exchange developers. Just to cite one example, one of the primary perpetrators of the Exchange 5.5 MTA was on the list, as was Laurion Burchall, one of the key ESE developers. Everyone on the list was super generous with their time and knowledge, and it didn’t take me long to get up to speed. (My first “live” exposure to the community, though, was attending the 1998 MEC. I was there when Tony Redmond made his famous “I’ll pass on the clap” remark, and I heard Pierre Bijaoui explain that the average human has one breast and one testicle!)

Coincidentally, at about the same time I got a call from O’Reilly: Windows NT Pro magazine was looking for someone to write a regular Exchange column. Was I interested? You bet I was! I started writing it in September of 1998 and it’s been in print ever since, although it’s morphed into a few different forms.

All this time I was still holding down a real job at LJL Enterprises, writing crypto code on the Mac. Eventually my agent brought me an offer that was too good to refuse: Ford Motor Company wanted someone to write a book about their CAD system. I gave my two weeks’ notice, set up my home office, and got ready to hang out my own shingle as a full-time author. That’s when the real adventures started…

3 Comments

Filed under General Stuff, Musings