Justin Vashisht, whom I’ve never met but already like, unleashed a really good article on his blog this week: the 17 biggest mistakes made in Exchange 2010 deployment. His article stands well on its own; you could maybe quibble over the ordering of the items, but all of the things he cites are potential mistakes that you can easily make if you’re not careful or experienced. (Actually, I think the preferred phrase is now “operationally mature.”)
Justin’s list includes a few things that have lately been topics of discussion in the MCM community, and I wanted to call them out because they’re important.
First, two warnings that you can’t get something for nothing:
- Be very careful about allowing memory overcommitment (or, as Microsoft calls it, dynamic memory). Hyper-V and VMware both let you allocate more than the total amount of physical RAM across your VMs; that is, on a server with 64GB of RAM you could allow your VMs to consume more than 64GB collectively. Virtual memory is certainly useful, but Exchange doesn’t like it much. Exchange 2010’s storage engine will consume as much RAM as it can get, backing off and releasing RAM only if the page fault rate increases excessively. The problem is that virtualized Exchange instances never see the page fault rate increasing; when you enable overcommitment, swapping happens on the host, not in the guests. Memory overcommitment is mostly a cost-saving measure, and there are other, better ways to save money on your Exchange deployments. That’s why Microsoft recommends against using overcommitment with Exchange, though for some reason the word doesn’t seem to have gotten out very well.
- Avoid thin provisioning of storage. The idea behind thin provisioning is sound: you authorize a pool of storage for an application, but you don’t actually allocate it until it’s needed. (Cue the joke about the old man whose son wanted to go to college; when the son expressed his desire, the old man said “Well, son, you have my permission.”) While the up-front savings are tempting, thin provisioning makes it very, very hard to predict exactly what your storage usage or performance patterns will look like, and it makes stress testing difficult or impossible. Justin mentions the performance burden of on-the-fly storage allocation in his mistake #7, and that’s a sound reason to avoid it too.
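To put some numbers on the overcommitment warning, here’s a quick back-of-envelope sketch in Python. The host size, VM names, and allocations are all made up for illustration; the point is simply what a ratio above 1.0 implies:

```python
# Hypothetical host with 64 GB of physical RAM and four VM allocations.
# All names and sizes below are illustrative, not from a real deployment.
host_ram_gb = 64
vm_allocations_gb = {
    "EXCH-MBX1": 24,
    "EXCH-MBX2": 24,
    "EXCH-CAS1": 16,
    "DC1": 8,
}

total_allocated = sum(vm_allocations_gb.values())
overcommit_ratio = total_allocated / host_ram_gb

print(f"Allocated {total_allocated} GB on a {host_ram_gb} GB host "
      f"(ratio {overcommit_ratio:.3f})")
# A ratio above 1.0 means that under pressure the host must page guest
# memory to disk -- paging the guests themselves never observe, so the
# Exchange store keeps its cache inflated instead of backing off.
```

In this made-up case the ratio is 1.125: the VMs have been promised 72GB against 64GB of physical RAM, and the Exchange guests have no way to know it.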
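Thin provisioning is the same kind of arithmetic applied to disk: the array promises more capacity than it physically has, and predicting when actual usage catches up is the hard part. Another made-up example (pool and LUN sizes are purely illustrative):

```python
# Illustrative thin-provisioning arithmetic; all sizes are invented.
pool_physical_gb = 2000                    # actual disk behind the pool
luns_provisioned_gb = [1000, 1000, 1000]   # what each LUN was promised

promised = sum(luns_provisioned_gb)
oversubscription = promised / pool_physical_gb

print(f"Promised {promised} GB against {pool_physical_gb} GB physical "
      f"({oversubscription:.1f}x oversubscribed)")
# The pool only runs dry when *combined actual* usage crosses 2000 GB.
# Databases growing at different rates make that moment hard to predict,
# which is exactly why thin-provisioned Exchange storage is risky.
```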
The third thing I want to mention is something that you may already have heard, but it bears repeating. Stress test your storage system before deployment. This should go without saying; the fact that it’s on Justin’s list (and a similar list I’ve seen circulating around Microsoft) tells me that not everyone’s gotten the memo. You must, must, MUST test your storage hardware using both Jetstress and LoadGen to ensure that it will respond properly under load. There is no substitute, shortcut, or workaround that will excuse you from this requirement. If you don’t test it, you can’t know whether you’ll get suitable performance under load… and if you can’t know that, how do you know whether you’re spending your money wisely?
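At its core, the verdict from a storage test comes down to comparing the IOPS your user population requires against what the hardware actually delivered under load. A deliberately simplified sketch (the per-mailbox profile, overhead factor, and test result are invented for illustration; real sizing comes from Microsoft’s mailbox role requirements calculator and the actual Jetstress report):

```python
# Back-of-envelope check of whether a storage test met requirements.
# Every number here is illustrative, not real sizing guidance.
mailboxes = 5000
iops_per_mailbox = 0.12   # assumed user profile
overhead_factor = 1.2     # assumed headroom for maintenance, etc.

required_iops = mailboxes * iops_per_mailbox * overhead_factor
achieved_iops = 850.0     # hypothetical result from a stress run

passed = achieved_iops >= required_iops
print(f"Required {required_iops:.0f} IOPS, achieved {achieved_iops:.0f}: "
      f"{'PASS' if passed else 'FAIL'}")
```

The arithmetic is trivial; the value of Jetstress and LoadGen is that they produce the “achieved” number under realistic load, which no amount of datasheet reading can substitute for.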
The rest of Justin’s list is good reading too, and I commend it to your attention. I also should mention that the last Exchange Maestro event of 2011 is right around the corner, so if you’re looking for more, shall we say, direct guidance on what to do, and not do, with Exchange, perhaps you should join us there.