My first multisport event: Mountain Deux

Like every other endeavor, endurance sports have their own lexicons. Most people know what a triathlon is, but it’s a little unclear what to call a two-event competition. Turns out, it depends on which two events are involved: a bike/run combo is usually known as a duathlon, and a swim/run combo is apparently known as an aquathlon. I found this out more or less by accident; one of my TRI101 training teammates said he had signed up for Mountain Deux, an aquathlon here in Huntsville. It looked like fun, so I signed up too, then ran it yesterday. The course was a hilly trail totaling just over 2 miles, followed by a 200m swim in a very nice, nearly brand-new saltwater pool. Sounds like fun, right? Turns out that it was!

Friday night I packed my gear: two towels (one to wrap my wet, muddy clothes in), swim cap and goggles, a change of clothes, and my race belt. Then, bright and early Saturday morning, I headed over to the race area, the Three Caves trail system. The race was small, so there wasn’t much in the way of check-in or registration; I found the transition area, staged my gear, and chatted with friends (including borrowing some bug spray and getting my race number inscribed on my arm) until it was time to start. We actually started from the Three Caves themselves, which means the first leg of the race was all uphill. This was not my favorite. After that initial climb, I settled into a slow and fairly steady pace, terrain permitting.

[Image: the race course]

Thursday and Friday the whole Huntsville area got hammered with rain and thunderstorms, so I was expecting a wet trail, and I wasn’t disappointed. I was surprised by how rocky the trail was, though. Big rocks, little rocks, flat rocks, pointy rocks: I didn’t feel like I was running so much as I was dodging, stepping over, or hurdling rocks. The last time I ran a trail race, in 2010, I turned an ankle while fording a creek when I stepped on a rock, so I’ve avoided trails since then. Where the trail was not rocky, it was slick and muddy. This definitely made it more challenging than my typical flat-surface runs. I ended up walking about a third of the trail overall, which hurt my time. The last leg of the race was all downhill, which was even more challenging because of the slippery ground. I nearly face-planted a couple of times, but survived uninjured.

Transition was super easy: off came the long pants, shirt, shoes, and socks; on went the goggles and cap, and splash! Into the pool.

The swim was easy. I wasn’t trying to swim particularly fast, and I didn’t (although my watch mysteriously didn’t capture a time for the swim, and the event didn’t provide split times). I felt smooth in the water though, which is a big improvement. It was only 200m, so the short distance certainly helped make it feel easier.

My finishing time was 36:59, which more or less agrees with my watch time if you add in the missing swim. I didn’t have any particular expectations since this was so different than my normal race fare; I’m happy to have finished, uninjured, and I learned a few things: I can run just fine without music, it’s a bad idea to accidentally spray bug spray on your water bottle’s mouth unless you like numb lips, and trail running actually can be fun.


Filed under Fitness

Does getting a root canal hurt?

tl;dr: not as much as the tooth hurt before the root canal

For years I had one tooth that would, very occasionally, make a popping or clicking sound— very faint, but since it was inside my mouth I could feel it. I couldn’t figure out why so, as one does, I ignored it. Then last July, it abruptly became very sensitive to heat and cold: Toothmageddon. I went to my dentist, who referred me to an endodontist, Matthew Friedt, who told me “keep an eye on it.” It seemed to get better, so I left it alone. Then came “Toothmageddon II: The Revenge.” A couple of weeks ago, the heat sensitivity returned with a vengeance, along with the popping sound; eating anything above mouth temperature would make the tooth throb for hours. I immediately made another endodontist appointment, and yesterday was the big day.

I found a lot of contradictory and confusing information online about root canals, so I thought I’d summarize the process, at least the way my endo did it.

First, he took a couple of X-rays and confirmed, by using both heat and cold tests, which tooth was afflicted. Interestingly, the nerves that sense cold tend to be the first ones to die off, which is why I lost the cold sensitivity. The tooth was definitely heat-sensitive, and Matthew used an optical microscope to examine the tooth surface and saw a few small cracks. Once he had verified that he had the right tooth, he shot me up with plenty of lidocaine (or whatever the cool kids use for face numbing), let it percolate, and then started working.

The work phase was simple: he clamped a little metal ring on my back molar and used it to hold a dental dam and frame, which was a new experience for me. Then he drilled a hole in the top of the tooth and used a series of super-fine probes and drills to clean all the tooth-guts (that’s a technical term) out. Once he finished this phase, I got another X-ray so he could verify that everything was out. Then using the same series of probes, he filled the nerve channels in with some sort of rubber or epoxy or something, then added a cone-shaped plug that sealed the hole in the tooth and tamped the material down. That was it; I didn’t have anything stronger than topical anesthesia, and he didn’t prescribe any antibiotics or pain pills.

After I left I noticed that the entire right side of my face— all the way up to my right temple— was as numb as a steak; my jaw was super stiff from holding it open, and my neck and shoulders were tense from me tensing them. I went home, had a protein shake, took an Advil and a leftover pain pill from Toothmageddon I, and napped for a couple of hours, followed by a light dinner and some Netflix. Now I’m back to normal.

While I wouldn’t say it was fun, it wasn’t as bad as I had feared, and it is nice to be able to enjoy non-room-temperature food without pain. I very highly recommend Dr. Friedt; he explained very clearly what he was going to do, then he did it efficiently and quickly (the whole process from arrival to departure took a little over 2 hours).

The More You Know™…


Filed under General Stuff

The difference between Suunto cadence and bike pods

I spent way too much time trying to figure this out today, so I’m blogging it in hopes that the intertubez will make it easy for future generations to find the answer to this question: what’s the difference between a cadence pod and a bike pod according to Suunto?

See, the Suunto Ambit series of watches can pair with a wide range of sensors that use the ANT+ standard. You can mix and match ANT+ devices from different manufacturers, so a Garmin sensor will work with a Suunto watch, or a Wahoo heart-rate belt will work with a Specialized bike computer. I wanted to get a speed and cadence sensor for my bike. These sensors measure two parameters: how fast you’re going and how rapidly you’re pedaling. (This is a great explanation of what these sensors really measure and how they work.) Ideally you want a nice, steady cadence of 75-90 rpm. I knew I had a variable cadence, and I wanted to measure it to get a sense for where I was at.
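The math behind those two numbers is nothing exotic: the sensor only counts magnet passes, and the head unit turns revolution counts into speed and cadence using the wheel circumference and elapsed time. A minimal sketch of that conversion (the 2.096 m circumference for a 700x23c road wheel is my assumption, not anything from Suunto or Wahoo):

```python
# Convert raw revolution counts from a speed/cadence sensor into
# speed and cadence. The sensor itself only counts magnet passes;
# all the units come from the head unit's arithmetic.

WHEEL_CIRCUMFERENCE_M = 2.096  # assumed 700x23c road wheel

def speed_kmh(wheel_revs: int, elapsed_s: float) -> float:
    """Speed in km/h from wheel revolutions over elapsed seconds."""
    meters = wheel_revs * WHEEL_CIRCUMFERENCE_M
    return meters / elapsed_s * 3.6

def cadence_rpm(crank_revs: int, elapsed_s: float) -> float:
    """Pedaling cadence in revolutions per minute."""
    return crank_revs * 60.0 / elapsed_s

# Example: 150 wheel revolutions and 42 crank revolutions in 60 seconds
print(round(speed_kmh(150, 60.0), 1))  # ~18.9 km/h
print(round(cadence_rpm(42, 60.0)))    # 42 rpm, well below the 75-90 target
```

The watch does the same conversion over much shorter sampling windows, which is what produces the cadence graph for a ride.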

I ordered a Wahoo combined cadence/speed sensor from Amazon and installed it on the bike, which was pretty straightforward. Then I paired it with the watch using the “bike POD” option. (Suunto, for some reason, calls sensors “PODs.”) That seemed to work fine, except that I wasn’t getting any cadence or speed data. But I knew the sensor was working because the watch paired with it. I tried changing the sensor battery, moving the sensor and its magnets around, and creating a new tracking activity that didn’t use GPS to see if I got speed data from the sensor. Then I thought “maybe it’s because I didn’t pair a cadence pod”, so I tried that, but no matter what I did, the watch refused to see the Wahoo sensor as a cadence sensor.

Here’s why: to Suunto, a “bike POD” is a combined speed/cadence sensor. A “cadence pod” is for cadence only. Like Bluetooth devices, each ANT+ device emits a profile that tells the host device what it is. That’s why the watch wouldn’t see the sensor, which reported itself as a combined cadence/speed unit, when I tried to pair a cadence pod. After I figured that out, I quit trying to pair the cadence pod… but I still didn’t get speed or cadence data.
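In software terms, the pairing menu is just filtering on that broadcast profile. Here’s a toy sketch of the idea (the profile strings and the accepted-profile table are illustrative, mapped to my situation; they aren’t real ANT+ type codes or Suunto internals):

```python
# Toy model of profile-based pairing: each ANT+ sensor broadcasts a
# device profile, and the host only accepts profiles the chosen pairing
# mode is looking for. Profile names here are illustrative only.

SENSOR_PROFILE = "speed_and_cadence"  # what a combined Wahoo sensor reports

def pair(pod_type: str, sensor_profile: str) -> bool:
    # A "cadence POD" pairing only accepts a cadence-only profile;
    # "bike POD" is the one that matches a combined speed/cadence unit.
    accepted = {
        "cadence POD": {"cadence"},
        "bike POD": {"speed_and_cadence"},
    }
    return sensor_profile in accepted[pod_type]

print(pair("cadence POD", SENSOR_PROFILE))  # False: the watch never sees it
print(pair("bike POD", SENSOR_PROFILE))     # True
```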

The solution turned out to be simple. For some reason, in the cycling sport activity, the “Bike POD” sensor was unchecked, so the watch wasn’t reading its data stream during the activity. I don’t remember unchecking the box, but maybe I did. In any event, once I checked the “Bike POD” box and updated the watch, I immediately started getting cadence and speed data, so I set out for a ride.

[Image: if you uncheck any of these sensor boxes, the watch will never, ever pay attention to that sensor]

I thought it was a pretty good ride from a speed perspective, even though I took a new route that had a number of hills– I had some trouble with that. But look at my cadence… you can see that it definitely needs work. Sigh. One of the nifty things about Suunto’s web site is that it shows vertical speed when you point at cadence data, so I could see where I was struggling to get up hills (meaning I needed to change gears) or loafing when going downhill. Just one more thing to put on my to-fix list…

[Image: cadence data from the ride]


Filed under Fitness, General Tech Stuff, HOWTO

Creating an Office 365 demo tenant

One of the big advantages of software as a service (SaaS) is supposed to be reduced overhead: there are no servers to install or configure, so provisioning services is supposed to be much easier. That might be true for customers, but it isn’t necessarily true for us as administrators and consultants. Learning about Office 365 really requires hands-on experience; you can only get so far by reading the (voluminous) documentation and watching the (many and excellent) training videos that Microsoft has produced. However, there’s a problem: Office 365 costs money.

There are a few routes to get free access to Office 365. If you’re an MVP, you can get a free subscription, limited (I think) to 25 users. If you’re an MSDN subscriber, you can get a tenant with a single user license, which is fine for playtime but not terribly useful if you need a bigger lab. Microsoft also has a 30-day trial program (for some plans: Small Business Premium, Midsize Business, and Enterprise) that allows you to set up a tenant and use it, but at the end of that 30-day period the tenant goes away if you don’t pay for it. That means you can potentially waste a lot of effort customizing a tenant, creating users, and so on only to have it vanish unless you whip out the credit card.

I was a little surprised to find out recently that there’s another alternative: Microsoft has a tool that will create a new demo tenant on demand for you. You can customize many aspects of the tenant behavior, and you can use the provided user accounts (which include contact photos and real-looking sample emails and documents) or create your own. There are even vertical-specific packs that customize the environment for particular customer types. And it’s all free; no payment information is required. However, you do have to have a Windows Live ID that is associated with a Microsoft Partner Network (MPN) account. If you don’t have one, you can join MPN fairly easily.
All this goodness is available from www.microsoftofficedemos.com. Here’s what you need to do to use it.
  1. Go to http://www.microsoftofficedemos.com/ and log in.
  2. Click the “Get Demo” link in the top nav bar, or the “Create Demo” link on the page, or just go to https://www.microsoftofficedemos.com/Provision_step1.aspx. That will display the page below. Note that you can download VHDs that provide an on-prem version of the demo environment if you want those instead.
    [Screenshot]
  3. Make sure you’ve selected “Office 365 tenant” from the pulldown, then click “Next”. That will display a new page with four choices, all of which are pretty much self-explanatory. If you want an empty tenant to play around with, choose “Create an empty Office 365 tenant”. If you want one that has users, email, documents, and so on, choose “Create new demo environment” instead.
    [Screenshot]
  4. On the next page, you can choose whether you want the standard demo content or a vertical-specific demo pack. This will be a really useful option once Microsoft adds more vertical packs, but for now the only semi-interesting one is retail, and the provided demo guides (IMHO) are more useful for the standard set, so that’s what I’d pick. After you choose a data set, click “Create Your Demo”.
  5. The next page is where you name the tenant, and where Microsoft asks you to prove you’re not a bot by entering a code that they send to your mobile phone. (Bonus points if you know why I picked this particular tenant name!) The optional “Personalize Your Environment” button lets you change the user names (both aliases and full names) and contact pictures, so if you’re doing a demo for a particular customer you can put in the names of the people who will attend the demo to add a little spice. The simple option is to customize a single user; there’s one main user for each of the demos (which I’ll get to in a minute), but you can customize any or all of the 25 default users.
    [Screenshot]
  6. Once you click “Create My Account”, the demo engine will start creating your tenant and provisioning it. This takes a while; for example, yesterday it took about 12 hours from start to finish. Provisioning demos is just about last on Microsoft’s priority list, so if you need a tenant in a hurry, use the “create a blank tenant” option I mentioned earlier. You’ll see a progress page like the one below, but you’ll also get a notification email to the address you provided in step 5 when everything’s finished, so there’s no need to sit and watch it.
    [Screenshot]
Once the tenant is provisioned, you can log into it using any of the test users, or the default “admin” user. How do you know which users are configured (presuming you didn’t customize them, that is)? Excellent question. The demo guides provide a complete step-by-step script both for setting up the demo environment and executing the demo itself. For example, the Office 365 Enterprise “hero demo” is an exhaustive set of steps that covers all the setup you need to do on the tenant and whatever client machines you’re planning on using.
Once the tenant is provisioned, it’s good for 90 days. You can’t renew it, but at any time during the 90 days you can refresh the demo content so that emails, document modification times, and so on are fresh. And on the 91st day, you can just recreate the tenant; there doesn’t seem to be any explicit limit to the number of tenants you can create or the number of times you can create a tenant with a given name.
While the demo data set is quite rich, and the provided demo scripts give you a great walkthrough to show off Office 365, you don’t have to use them. If you just want a play area that you can test with, this environment is pretty much ideal. It has full SMTP connectivity, although I haven’t tested to verify that every federation and sharing feature works properly (so, for example, you might not be able to set up free/busy sharing with your on-prem accounts). I also don’t know whether there are any admin functions that have been RBAC’d to be off limits. (If you see anything like that, please post a comment here.)
Enjoy!


Filed under Office 365, UC&C

Training Tuesday: slow progress is still progress

One of my fellow TRI101 participants shared this excellent article about building on your strengths in one event to bolster your weaknesses in another. I don’t really feel like I have any special strengths in running or cycling, other than “can complete required sprint distance.” But on reflection… that counts too. I definitely feel like my swimming is improving, though, and cycling the 10-15Km routes we use for training has been getting easier: an excellent sign that I need to go either farther or faster. The TRI101 program is scaled for participants who start at a low fitness level, but we are now at the beginning of week 6. If I stopped right now, I’m confident that I could finish a sprint triathlon, which is great news— now I just need to work on doing it faster.

The week’s training:

  • Thursday: 250m swim + swim drills. Slow, but better than nothing. I just wasn’t feeling it that day. As Karen, one of our coaches, pointed out, some days are better than others, and sometimes the best thing to do on a bad swim day is cut the distance short.
  • Friday: I swam 100m without stopping— a big deal for me, since I hadn’t been able to do that before— then 5 x 50m intervals, then another 100m. The intervals are 25m slow and 25m as fast as possible. It was a good workout, and I felt much better than I did after Thursday’s swim.
  • Saturday I did a short run/bike brick with a couple of classmates: 8.5 mi on the bike in 44:26 and 1.1 mi running in 11:34. On the bike I spent a while riding slow circles waiting for all our party to catch up, so I don’t read too much into that time. I also went to the pre-race brief for the Mountain Deux aquathlon, which is this coming Saturday: a ~5Km trail run, followed by a 200m swim. Should be fun, except for the “trail run” part.
  • Sunday I swam with two TRI101 peeps: 100m warmup without stopping, 400m with minimal rest, then another 100m (which I did as 2×50, with the same 25m slow/25m fast). This was the first workout I logged with my new Suunto Ambit 2s triathlon watch.
  • Monday I had a bike fitting. That’s a topic for a post all on its own; look for it next week.

So, the watch. Anyone who knows me knows what a gadget nerd I am. I saw a mention on Facebook of a big watch sale at the DC Rainmaker blog, so I started poking around and was fascinated by the idea of a GPS-powered watch that could track my workouts for me— something I’d been using my phone for, with varying degrees of success. After reading his über-review, I decided to get an Ambit 2S. As much fun as it would be to have the barometric altimeter in the Ambit 2, it wasn’t worth the extra money. It arrived Saturday, and first thing Sunday morning I strapped it on and headed to the pool. Here was my reward: a more-or-less accurate record of my swim activity. I say “more or less” because I think it miscounted laps a couple of times, and the “total distance” column in the laps table doesn’t match the “distance” field at the top. I’ll be using it for cycling and running in the next couple of days, which should give me a better idea of how it works, but I’m most interested in the “multisport” mode for triathlons. I’ll be using that at Mountain Deux next week. The watch can pair with heart rate monitors, cycle sensors, and all sorts of other goodies that I will eventually add.

First swim

The upcoming week’s training should be good stuff— running tonight and Thursday, swimming Friday, and a bike/run brick Saturday. There’s a group that meets Thursday mornings to run a 4.5mi road course; I might join them to get a little distance in on the theory that if I can run 4.5 miles, it’s not that much of a stretch to run 6 miles, which means that I’d be in striking distance of running a 10K once triathlon season calms down some. On the down side, I also have a root canal Thursday (after running), so that may slow me down a bit. Hopefully I’ll bounce back in time for Mountain Deux on Saturday!



Filed under Fitness, General Stuff

Mailbox-level backups in Office 365

Executive summary: there aren’t any, so plan accordingly.

Recently I was working with a customer (let’s call him Joe, as in “Joe Customer”) who was considering moving to Office 365. They went to our executive briefing center in Austin, where some Dell sales hotshots met and briefed them, then I joined in via Lync (with video!) for a demo. The demo went really well, and I was feeling good about our odds of winning the deal… until the Q&A period.

“How does Office 365 provide mailbox-level backups?” Joe asked.

“Well, it doesn’t,” I said. “Microsoft doesn’t give you direct access to the mailbox databases. Instead, they give you deleted item retention, plus you can use single-item retention and various types of holds.” Then I sent him this link.

“Let me tell you why I’m asking,” Joe retorted after skimming the link. “A couple of times we’ve lost our CIO’s calendar. He uses an Outlook add-in that prints out his calendar every day, and sometimes it corrupts calendar items. We need to be able to do mailbox-level backups so that we can restore any damaged items.”

At that point I had to admit to being stumped. Sure enough, there is no Office 365 feature or capability that protects against this kind of logical corruption. You can’t use New-MailboxExportRequest or the EAC to export the contents of Office 365 mailboxes to PST files. You obviously can’t run backup tools that run on the Exchange server against your Office 365 mailbox databases; there may exist tools that use EWS to directly access a mailbox and make a backup copy, but I don’t know of any that are built for that purpose.
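For what it’s worth, the logic of such a tool wouldn’t be complicated; the hard part is the EWS plumbing and throttling, not the backup step itself. A minimal sketch of the storage half, with the EWS fetch stubbed out (fetch_messages and the one-file-per-item layout are my own assumptions for illustration, not a real product):

```python
# Sketch of the storage half of a mailbox-level backup tool: write each
# message out as a standalone .eml file keyed by its item ID. In a real
# tool, fetch_messages() would pull items over EWS; here it's a stub.
import pathlib

def fetch_messages():
    # Stand-in for an EWS query; yields (item_id, raw MIME bytes).
    yield ("AAMkAD-example-1", b"Subject: test\r\n\r\nHello")

def backup_mailbox(dest: str) -> int:
    out = pathlib.Path(dest)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    for item_id, mime in fetch_messages():
        # One file per item makes single-item restores trivial, which is
        # exactly the scenario (corrupted calendar items) Joe cared about.
        (out / f"{item_id}.eml").write_bytes(mime)
        count += 1
    return count

print(backup_mailbox("mailbox_backup"))  # number of items written
```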

I ran Joe’s query past a few folks I know on the 365 team. Apart from the (partially helpful) suggestion not to run Outlook add-ins that are known to corrupt data, none of them had good answers either.

While it’s tempting to view the inability to do mailbox-level backups as a limitation, it’s perfectly understandable. Microsoft spent years trying to get people not to run brick-level backups using MAPI. The number of use cases for this feature is getting smaller each year as both the data-integrity and retention features of Exchange get better. In fact, one of the major reasons that we now have single-item recovery in its current form is because customers kept asking for expanded tools to recover deleted items, either after an accidental deletion or a purge. Exchange also incorporates all sorts of infrastructure to protect against data loss, both for stored data and data in transit, but nothing really helps in this case: the corrupt data comes from the client, and Exchange is faithfully storing and replicating what it gets from the client. In fairness, we have seen business logic added to Exchange in the past to protect against problems caused by malformed calendar entries created by old versions of Outlook, but clearly Microsoft can’t do that for every random add-in that might stomp on a user’s calendar.

A few days after the original presentation, I sent Joe an email summarizing what I’d found out and telling him that, if mailbox-level backup was an absolute requirement, he probably shouldn’t move those mailboxes to Office 365.

The moral of this story, to the extent that there is one, is that Microsoft is engineering Office 365 for the majority of their users and their needs. Just as Word (for instance) is supplemented by specialized plugins for reference and footnote tracking, mathematical typesetting, and chemistry diagrams, Exchange has a whole ecosystem of products that connect to it in various ways, and Office 365 doesn’t support every single one of them. The breadth and diversity of the Exchange ecosystem is one of the major reasons that I expect on-premises Exchange to be with us for years to come. Until it finally disappears, don’t forget to do some kind of backups.


Filed under Office 365, UC&C

Instructor-induced stupidity

First, a quick recap: this past weekend I had two flights planned. One of them went off OK, the other didn’t. My original plan was to fly 706 down to Louisiana with the boys to see my mom, grandmother, and family in Houma, Baton Rouge, and Alexandria. However, David and Tom were both working each day of the long weekend, so that wasn’t going to work. Instead, I planned to take Matt for a $100 hamburger, then take all 3 boys to Atlanta to eat at Ted’s on Monday after Cotton Row.

Matt and I flew to Anniston and had a fantastic meal at Mata’s (thanks to Bo for the recommendation!). That gave me the opportunity to practice hot starts, which are a little tricky with fuel-injected Lycoming engines, at least until you get used to them. Sadly, the weather on Monday worsened before we were able to go anywhere, and this weekend is looking pretty crappy too.

Anyway, enough about that. This week’s FLYING LESSONS newsletter was typically excellent– it put a name to a phenomenon I’ve both seen and demonstrated: instructor-induced stupidity.

That the pilot raised the landing gear even while continuing to flare and touch down suggests what may really have been going on was a condition I call Instructor-Induced Stupidity. I credit a student of mine with coining the phrase “instructor-induced stupidity” to describe the tendency of a flight student to defer decision-making or responding to aircraft indications when there’s an instructor on board.

As a student pilot, it’s natural to defer to the instructor; after all, that’s why you’re there. If you read the entire article (which isn’t very long), you’ll see that the possible outcomes of IIS include gear-up landings, unsafe maneuvers, and general tomfoolery. It is fairly easy to unlearn this habit during initial training, but I can see how it might persist when flying with a new instructor, or in a different type of airplane, even with a well-experienced pilot. I did it once on my private-pilot checkride; the examiner called for a power-on stall, and I gave her one, all right, of such degree that we got to see the chevrons (see this video at about 0:22, except that I was pitching up, not down). The hell of it was, I knew better: a classic case of induced stupidity.

This phenomenon isn’t limited to flight instruction, either; I’ve seen it many times when teaching otherwise intelligent and capable people about Exchange, Windows, and other related topics, and I’ve seen it in consulting engagements too: sometimes people seem to just lose their decision-making ability and judgment when placed in a situation where there is someone who (at least on paper) is more knowledgeable or experienced. Maybe a better phrase for it would be “authority-induced stupidity”.

To counteract it, you have to remember to own what you own: when you’re pilot-in-command, or in charge of an Exchange deployment, or responsible for planning an event, don’t turn off your brain just because an authority is present or involved. Like so many aspects of human behavior, this is easy to say but harder to do!


Filed under aviation

Training Tuesday: week in review

I’m a day late this week; blame the Memorial Day holiday for throwing my schedule off a bit.

Last Thursday I had a big day: swim coaching, then a bike/run brick: 7.16 mi on the bike, followed by a 2.32 mi run. Neither was especially fast, but that’s OK. This was the first time I rode without headphones, using just the speaker on my iPhone for music. It worked fine and made me feel slightly safer; I am super paranoid about sharing the road with cars since there aren’t any bike lanes out here in Limestone County, and not having my ears plugged made it quite a bit easier to hear vehicles behind me.

Swim coaching was a blast! On the advice of some friends from the TRI101 training program, I set up a coaching session with Lisi Bratcher. Over the course of about an hour, she spotted probably half a dozen things I was doing wrong. This is no big trick; almost anyone who’s ever swum a race could watch me and say “dude, you’re not supposed to do that”. The difference is that she taught me what to do about those things. For example, I am turning my head to breathe too late in the stroke, which means that I am working harder than necessary to get air, which explains my pitiful pool endurance. She also gave me some useful advice about my arm stroke, leg kicks, and timing. This week my swim days are Thursday and Friday, and I’ll probably sneak in at least one extra swim over the weekend, and I’m looking forward to putting her advice into practice.

I didn’t do anything Friday, Saturday, or Sunday. Booo hiss. However, the TRI101 schedule (which I am trying hard to follow) had Saturday and Sunday as off days anyway, so no real harm done. I did work a volunteer shift at the packet pickup for the Cotton Row Run, where I bought a cheap triathlon kit at the race expo. I’ll eventually post pictures (of the kit, not of me wearing it), but it basically looks like just-above-the-knee bike shorts with a sleeveless quarter-zip top. The idea behind the kit is that you can wear one outfit for the swim, bike, and run, only changing your footwear. My practice swim suit would be fine for the bike but uncomfortable for the run, and my bike shorts (with their enormous diaper-like groin pad) would be terrible for swimming. I don’t have any opinion about the quality of this kit (it’s Nike) but it was cheap, so I’m sure it’ll be fine.

Monday morning Tom and I ran the Cotton Row 5K, both carrying our flags as we did last year. I ran it in 30:38, which was slower than my last few 5K races. In my defense, I was carrying a big ol’ Marine Corps flag. I ran with the race belt I’ll use during the triathlon; it was really nice to be able to grab a drink in between the water stations, although certainly not a necessity.

Yesterday we had our scheduled TRI101 class; this week the topic de la semaine was cycling again, so we met up outside the Redstone gate and rode an out-and-back circuit. I rode just under 15Km in just over 35 minutes, which was decent for me. I wasn’t pushing especially hard; my pace was roughly on a par with my brick ride from last week. I am getting more used to using the special pedals and shoes on the bike; this week I didn’t fall over, and my mount/dismount mechanics are better. I still need to take my bike in and get it fitted to me, though. I did like riding on the Arsenal because it has a lot of big, wide roads with relatively few cars (at least at that time of day), so I’ll probably use it as a venue occasionally from now on.

Five weeks in! August is looming closer and closer…


Filed under Fitness, General Stuff

N32706 comes home

Well, I finally went and did it: I bought an airplane.

[Image: in Salt Lake City prior to the flight homeward]

I’d been considering buying a plane pretty much nonstop since starting work on my pilot’s license, and even looked at a few while I was still in California. My initial plan was to buy something that could hold me and all 3 boys, plus luggage, and still have a reasonable fuel load. This left out most airplanes, including the Cessna 172, the Piper Arrow family, and the Cirrus. I really liked the Piper Cherokee Six and its derivatives, the Lance, Saratoga, and 6X. They combined decent performance with a huge payload: 6 seats and full tanks meant that I could easily haul the whole herd, with baggage, a good 700 nautical miles from home without stopping. After I moved, I put aside my plane search for a while; I found that the rented 182 I was flying from Redstone could, barely, hold me plus the boys plus full fuel, but with no baggage and sluggish climb performance in warm weather. Worse, we were squashed, and as the boys grew (or, more accurately, gained weight), we’d be in danger of going over gross takeoff weight unless I took fuel or people out… so I started looking again, but I couldn’t see a good way to afford a Cherokee Six, so it was sort of a desultory search.

Then I had an epiphany: I was buying more airplane than I needed. “After all,” I reasoned, “now that David is off at college, he won’t be flying with me much, and in a couple of years Tom will be at college too. So a 182 will work; we can just squeeze into it for a little longer until David is fully out of the nest.” So I started looking for an affordable 182, put a deposit down, and promptly had the deal fall through (a story for another time). Back to the drawing board.

Then I offhandedly mentioned to my financial advisor that I was looking for an airplane. “Oh, my husband’s a pilot,” she said. “Would you be interested in a partnership?” Yes. Yes, I would.

Long story short, Derek (my new partner, and a hell of a guy) scoured the market for Cherokee Sixes. We found one that we really liked and it was sold out from under us. Then we found another one that we really liked, and when I called the seller, he said “oh, that ad shouldn’t still be up there, because the plane was sold months ago.” Third time was the charm: we found N32706 for sale in Salt Lake City, had the prebuy done there (another long and boring story that I’ll eventually post about), and closed the deal on May 15.

John Blevins, one of my flight instructors, flew out to Salt Lake on Delta to pick it up. After dinner at In-N-Out (who knew they were in SLC?) and an overnight at the local Marriott, we departed KSLC about 730am. Our planned route was to go to Los Alamos (KLAM), thence Muskogee, Oklahoma (KMKO) and then back to Huntsville. It looked like that would take about 10 hours total flying time.

The airplane started right up, and we got VFR flight following at 11,500’ heading south past Provo. Right after takeoff, we noticed some oil spray on the windshield, but the oil temperature and pressure remained good, so we pressed on, flying to the Carbon, Canyonlands, and Cortez VORs before descending into Los Alamos. Along the way we were treated to some gorgeous scenery.

A mountain

random mountain off the pilot’s side, about 2000’ below us

unusual rock formations

interesting rock formations; I wonder what causes the striations?

We’d thought it would be a fun place to stop for lunch, and fuel appeared to be relatively cheap. Neither of these things proved to be true. While refueling the airplane, I found heavy grease all over the front of the cowling. The constant-speed propeller on this airplane has inner workings that are lubricated with heavy grease; the good news is that there was no engine oil anywhere it shouldn’t be. John and I conferred for a bit, then started walking into town to the local AutoZone. Our plan: get a screwdriver, take off the propeller spinner, and locate the source of the grease. Why did we walk? Well, the airport was unattended (even though we got there between 7a and 1p, the hours when it was supposed to be attended), and the one taxi company in Los Alamos didn’t answer our phone calls. About halfway there, a fellow pilot whom I’d waved at while fueling the plane drove by, recognized us, and asked if we needed a ride— we hitched with him to AutoZone, bought the stuff we needed, and rode back to the airfield, whereupon he got the mechanic he uses to come over and have a look. (Thank you very much, Gary and JP! Side note for another time: the camaraderie and helpful spirit that is generally present in the aviation community is wonderful.)

We removed the spinner and found that it contained a big streak of grease, almost like someone had smeared it in there like frosting– but only on one side. There was no grease leaking from the Zerk fittings on the prop hub, so we degreased the prop, hub, spinner, cowling, and windshield, put everything back together, and decided that we’d take off as planned, but land at the first sign of any more gunk on the windscreen. Our first alternate was Santa Fe, which is nearby; then Tucumcari, then Amarillo. (I should mention at this point that Los Alamos has some interesting departure and arrival restrictions, and it is right next to a large chunk of restricted airspace, courtesy of LANL. Also, we never did get lunch there.)

KLAM airport sign

The best part of the Los Alamos airport

Takeoff was normal and we had a completely uneventful flight to our next planned stop. Originally we were going to stop in Muskogee but decided instead to stop at Sundance Airpark, just outside of Oklahoma City. The crew at Sundance Aviation could not have been any more friendly; they fueled the plane, loaned us a car, and suggested an area where we’d find some restaurants. After a solid Mexican dinner at Abuelita’s, we took off headed for Huntsville. There was a weird rectangular line of storms lying astride our planned route, so we ended up flying direct to the Little Rock VOR, then direct to Huntsville. 

see what I mean? mostly rectangular

Turns out it’s hard to find archived NEXRAD images but this one shows the funny line of storms

The final leg took us about 3.5 hours, 2.4 of which I logged as night time and 1.5 of which I logged as actual instrument. We started off flying at 9000’, but moved to 7000’ for more favorable winds. That put us in between two cloud layers, which was great because a) it was beautiful and b) the air was super smooth. We discovered that the intercom system had a music input jack, which was great, except that I made the mistake of letting John pick the music. Let’s just say that I don’t want to hear any more Colbie Caillat songs in the next two or three years.

between the layers over Arkansas

We arrived at Huntsville International about 1030p, after 10.5 hours of flying time. Our duty day was lengthened by our two fuel stops, and I was pretty tired by that point so I was happy to have a 12,000’ runway waiting for me. Signature hangared the plane, John filled out my logbook, and I got home just in time for that rectangle of storms to unleash a large, and relaxing, thunderstorm. I slept like a baby that night!

A couple of days later, Derek and I moved the plane from Huntsville to its new home, North Alabama Aviation in Decatur. This weekend, I plan to take it out for some sightseeing, in the first of what I hope will be many trips with, and without, the boys. So when you hear a propeller airplane, look up; it might be me! (Or Derek.)

2 Comments

Filed under aviation

US lawyers and Office 365

Every field has its own unique constraints; the things the owner of a small manufacturing business worries about will have some overlap, but many differences, compared to what the CEO of a multi-billion-dollar energy company is concerned with. The legal industry is no exception; one major area of concern for lawyers is ethics. No, I don’t mean that they’re concerned about not having any. (I will try to refrain from adding any further lawyer jokes in this post unless, you know, they’re funny).

Disclaimer: I am not a lawyer. This is not legal advice. Seriously.

The entire US legal system is based on a number of core principles, including that of precedent, or what laymen might call “tradition”. For that reason, as well as the stiff professional penalties that may result from a finding of malpractice or incompetence, many in the legal profession have been slower to embrace technology than their peers in other industries. When there is no settled precedent to answer a question, someone has to generate precedent, often by taking a case to court. Various professional standards bodies can generate opinions that are considered to be more or less binding on their members, too. To cite one example of what I mean, here’s what the Lawyers’ Professional Responsibility Board of the state of Minnesota has to say about one small aspect of legal ethics, the safeguarding and use of metadata:

…a lawyer is ethically required to act competently to avoid improper disclosure of confidential and privileged information in metadata in electronic documents.

That seems pretty straightforward; the body responsible for “the operation of the professional responsibility system in Minnesota” issued an opinion calling for attorneys in that state to safeguard metadata and refrain from using it in ways that conflict with their other ethical obligations. With that opinion now extant, lawyers in Minnesota can, presumably, be disciplined for failing to meet that standard.

With that as background, let me share this fascinating link: a list of ethics opinions related to the use of cloud services by lawyers and law firms. (I found the list at Sharon Nelson’s excellent “Ride the Lightning” blog, which I commend to your attention.)

Let that sink in for a minute: some of the organizations responsible for setting ethical standards for lawyers in various states are weighing in on the ethics of legal use of cloud services.

This strikes me as remarkable for several reasons. Consider, for example, that there don’t seem to be similar guidelines for e-mail admins, or professional engineers, or cosmetologists, or any other profession that I can think of. In pretty much every other market, if you want to use cloud services, feel free! Oh, sure, you may want to consider the ramifications of putting sensitive or protected data into the cloud, especially if you have specific requirements around compliance or governance. By and large, though, no one is going to punish you for using cloud services in your business if that choice turns out to be inappropriate. On the other hand, if you’re a lawyer, you can be professionally liable for failing to protect your clients’ confidentiality, as might happen in case of a data breach at your cloud provider.

The existence of these opinions, then, means that in at least 14 states, there are now defined standards that practitioners are expected to follow when choosing and using cloud services. For example, the Alabama standard (which I picked because it is simple, because I live in Alabama, and because it was first in the alphabetical list) says:

…a lawyer may use “cloud computing” or third-party providers to store client data provided that the attorney exercises reasonable care in doing so… The duty of reasonable care requires the lawyer to become knowledgeable about how the provider will handle the storage and security of the data being stored and to reasonably ensure that the provider will abide by a confidentiality agreement in handling the data. Additionally, because technology is constantly evolving, the lawyer will have a continuing duty to stay abreast of appropriate security safeguards that should be employed by the lawyer and the third-party provider. If there is a breach of confidentiality, the focus of any inquiry will be whether the lawyer acted reasonably in selecting the method of storage and/or the third party provider.

The other state opinions are generally similar in that they require an attorney to act with “reasonable care” in choosing a cloud service provider. That makes Microsoft’s recent relaunch of the expanded Office 365 Trust Center a great move: it succinctly addresses “appropriate security safeguards” that are applied throughout the Office 365 stack. Reading it will give you a solid grounding in the physical, technical, and operational safeguards that Microsoft has in place.

Compared to its major SaaS competitors, Microsoft’s site has more breadth and depth about security in Office 365, and it’s written in an approachable style that is appropriate for non-technical people… including attorneys. In particular, the top-10 lists provide easily digestible bites that help to reassure customers that their data, and metadata, are safe within Microsoft’s cloud. By comparison, the Google Apps security page is limited in both breadth and depth; the Dropbox page is laughable, and the Box.net page is basically a quick list of bullets without much depth to back them up.

The Office 365 Trust Center certainly provides the information necessary for an attorney to “become knowledgeable about how the provider will handle the storage and security of the data being stored”, and it is equally useful for the rest of us because we can do the same thing. If you haven’t already done so, it’s worth a few minutes of your time to go check it out; you’ll probably come away with a better idea of the number and type of security measures that Microsoft applies to Office 365 operations, which will help you if a) you go to law school and/or b) you are considering moving to Office 365.

4 Comments

Filed under Office 365, UC&C

Exchange Server and Azure: “not now” vs “never”

Wow, look what I found in my drafts folder: an old post.

Lots of Exchange admins have been wondering whether Windows Azure can be used to host Exchange. This is to be expected for two reasons. First, Microsoft has been steadily raising the volume of Azure-related announcements, demos, and other collateral material. TechEd 2014 was a great example: there were several Azure-related announcements, including the availability of ExpressRoute for private connections to the Azure cloud and several major new storage improvements. These changes build on their aggressive evangelism, which has been attempting, and succeeding, to convince iOS and Android developers to use Azure as the back-end service for their apps. The other reason, sadly, is why I’m writing: there’s a lot of misinformation about Exchange on Azure (e.g. this article from SearchExchange titled “Points to consider before running Exchange on Azure”, which is wrong, wrong, and wrong), and you need to be prepared to defuse its wrongness with customers who may misunderstand what they’re potentially getting into.

On its face, Azure’s infrastructure-as-a-service (IaaS) offering seems pretty compelling: you can build Windows Server VMs and host them in the Azure cloud. That seems like it would be a natural fit for Exchange, which is increasingly viewed as an infrastructure service by customers who depend on it. However, there are at least three serious problems with this approach.

First: it’s not supported by Microsoft, something that the “points to consider” article doesn’t even mention. The Exchange team doesn’t support Exchange 2010 or Exchange 2013 on Azure or Amazon EC2 or anyone else’s cloud service at present. It is possible that this will change in the future, but for now any customer who runs Exchange on Azure will be in an unsupported state. It’s fun to imagine scenarios where the Azure team takes over first-line support responsibility for customers running Exchange and other Microsoft server applications; this sounds a little crazy but the precedent exists, as EMC and other storage companies did exactly this for users of their replication solutions back in Exchange 5.5/2000 times. Having said that, don’t hold your breath. The Azure team has plenty of other more pressing work to do first, so I think that any change in this support model will require the Exchange team to buy in to it. The Azure team has been able to get that buy-in from SharePoint, Dynamics, and other major product groups within Microsoft, so this is by no means impossible.

Second: it’s more work. In some ways Azure gives you the worst of the hosted Exchange model: you have to do just as much work as you would if Exchange were hosted on-premises, but you’re also subject to service outages, inconsistent network latency, and all the other transient or chronic irritations that come, at no extra cost, with cloud services. Part of the reason that the Exchange team doesn’t support Azure is because there’s no way to guarantee that any IaaS provider is offering enough IOPS, low-enough latency, and so on, so troubleshooting performance or behavior problems with a service such as Azure can quickly turn into a nightmare. If Azure is able to provide guaranteed service levels for disk I/O throughput and latency, that would help quite a bit, but this would probably require significant engineering effort. Although I don’t recommend that you do it at the moment, you might be interested in this writeup on how to deploy Exchange on Azure; it gives a good look at some of the operational challenges you might face in setting up Exchange+Azure for test or demo use.

Third: it’s going to cost more. Remember that IaaS networks typically charge for resource consumption. Exchange 2013 (and Exchange 2010, too) is designed to be “always on”. The workload management features in Exchange 2013 provide throttling, sure, but they don’t eliminate all of the background maintenance that Exchange is more-or-less continuously performing. These tasks, including GAL grammar generation for Exchange UM, the managed folder assistant, calendar repair, and various database-related tasks, have to be run, and so IaaS-based Exchange servers are continually going to be racking up storage, CPU, and network charges. In fairness, I haven’t estimated what these charges might be for a typical test-lab environment; it’s possible that they’d be cheap enough to be tolerable, but I’m not betting on it, and no doubt a real deployment would be significantly more expensive.

Of course, all three of these problems are soluble: the Exchange team could at any time change their support policy for Exchange on Azure, and/or the Azure team could adjust the cost model to make the cost for doing so competitive with Office 365 or other hosted solutions. Interestingly, though, two different groups would have to make those decisions, and their interests don’t necessarily align, so it’s not clear to me if or when we might see this happen. Remember, the Office 365 team at Microsoft uses physical hardware exclusively for their operations.

Does that mean that Azure has no value for Exchange? On the contrary. At TechEd New Orleans in June 2013, Microsoft’s Scott Schnoll said they were studying the possibility of using an Azure VM as the witness server for DAGs in Exchange 2013 CU2 and later. This would be a super feature because it would allow customers with two or more physically separate data centers to build large DAGs that weren’t dependent on site interconnects (at the risk, of course, of requiring always-on connectivity to Azure). The cost and workload penalty for running an FSW on Azure would be low, too. In August 2013, the word came down: Azure in its present implementation isn’t suitable for use as an FSW. However, the Exchange team has requested some Azure functionality changes that would make it possible to run this configuration in the future, so we have that to look forward to.

Then we have the wide world of IaaS capabilities opened up by Windows Azure Active Directory (WAAD), Azure Rights Management Services, Azure Multi-Factor Authentication, and the large-volume disk ingestion program (now known as the Azure Import/Export Service). As time passes, Microsoft keeps delivering more, and better, Azure services that complement on-premises Exchange, which has been really interesting to watch. I expect that trend to continue, and there are other, less expensive ways to use IaaS for Exchange if you only want it for test labs and the like. More on that in a future post….

5 Comments

Filed under General Tech Stuff, UC&C

Welcome to Training Tuesday: Triathlon Time

Last summer, I went through a rough personal patch after moving here, and that motivated me to restart the exercise habits that had been so valuable when I was in Pensacola. Using Fitocracy regularly got me interested in lifting, which got me involved with the two coached programs I participated in (I’ve already written about them a bunch before, e.g. here). But I’d been thinking that I wanted to choose a goal race, so I decided to train for a sprint triathlon, as I mentioned in my 2014 goal list. As soon as registration opened, I signed up for the Huntsville Sprint, then signed up for the Fleet Feet TRI101 program. So far we’ve had the kickoff meeting; our first group class was cancelled because of severe weather, but I’ve started to work my way through the 16-week training plan. As I progress, I’ll be sharing more observations about the training and my progress, usually on Tuesdays (hence the “Training Tuesday” label).

First: I am super impressed by the Fleet Feet coaches and training program. I had heard that they were good, but I didn’t realize how good. They have been uniformly supportive, effective at motivating us, patient with questions, and generous with sharing knowledge. I don’t really know any of the people in my training group yet but there’s a great mix of ages, sizes, and prior experience levels; we’ve got some accomplished half marathoners and marathoners, some complete noobs, and lots of people in between. It’ll be fun getting to know my fellow future triathletes.

Second: I am a lousy swimmer. Yesterday I swam 400m freestyle in 16 minutes, 29 seconds. The current world record, set in 2009 by Paul Biedermann, is 3 minutes, 40 seconds. So I’ve got some room to improve. However, I am improving. As our coaches like to point out, the only way to improve your swimming is to swim. You can’t buy gear to make you faster, and you can’t just bash your way through with increased effort. The TRI101 program includes four coached swims, where you show up at the pool and work with a coach; this has been really helpful so far, but I may end up working one-on-one with a coach as well. If I can keep my swim time at around 15 minutes or less I’ll be happy; that seems like an approachable goal. My plan to get there is never to swim less than the 400m distance required for this first triathlon, and to go longer when I feel like it. We’ll see how that works out.

Third: there is a ton of gear that even a newbie triathlete needs that I didn’t have. Let’s start with a triathlon suit (which I still don’t have; I just got this racing swimsuit instead), which allows you to wear one suit in the water, on the bike, and on the road. Here’s one example. To prevent a reprise of my MEC appearance, though, I think I’ll be extra careful about posting photos of myself wearing the suit once I get it.

More prosaically, I needed a swim cap, which Fleet Feet provided as part of the class, and goggles. Since I didn’t have swim flip-flops, I bought a pair of those too. I already had a bike, with clipless pedals and appropriate shoes, but I needed a rack to carry it to and from riding locations, plus a reflective harness so I don’t get smashed by a truck. Even my trusty running shoes weren’t immune; I swapped to a pair of Lock Laces (motto: “Win, never tie”) to speed up transition times. Triathlons have two transition periods: T1 is when you move from the water to the bike, and T2 is when you move from the bike to the run. The transition areas have all sorts of rules to keep things more-or-less organized, and your T1 and T2 times are measured separately, so being able to jump out of the pool, run to your bike, walk it out (no riding in the transition area, of course), and then get on the road fast can make a big difference. I am thinking that my transition times are probably the least of my worries so I’m not planning on putting a whole lot of emphasis on buying stuff to shorten my times.

Fourth: this is not the same kind of bicycle riding I did as a kid. Riding in bike shorts feels like wearing a diaper, for one thing. Plus, maintaining a steady cadence takes practice and skill, because it involves shifting gears. Doing it while drinking from a water bottle is even harder. Throw in the fact that your feet are attached to the pedals and it can get tricky. Which reminds me, I should take my bike to have it fit— the seat, handlebar, and pedal positions on my bike can be adjusted to best fit my arm, leg, and torso lengths but I have no idea what the “right” settings are. Bike fits consist of getting on your bike on a trainer and riding it while the bike shop folks watch you, then they adjust a few things, then you ride some more; rinse and repeat. I like riding, and have even briefly entertained the idea that I might like riding a metric century, but I’m not quite to that point yet.

The current training schedule calls for 3 runs and 2 swims per week, with 2 off days; we’ll start working the bike into the schedule in a couple of weeks. Astute readers may note that I haven’t said anything about lifting so far; I plan to keep lifting on my two off days (or maybe on swim days) but will be sticking with the basic big lifts: deadlift (or variations), back squat, and bench press, with a few shoulder exercises thrown in for swimming. I intended Sundays to be brick days— a brick is the triathlon term for multiple-activity workouts, such as bike + run or swim + bike. I’ve gotten in one bike/run brick so far and plan to do them at least once a week, even before the schedule calls for them, but we’ll see how that goes.

Should be an exciting journey!

Leave a comment

Filed under Fitness, General Stuff

Getting ready for TechEd 2014

Wow, this snuck up on me! TechEd 2014 starts in 10 days, and I am nowhere near ready.

A few years ago, I started a new policy: I only attend TechEd to speak, not as a general attendee or press member, because the level of technical content for the products I work with has declined steadily over the years. This is to be expected; in a four-day event, there’s a finite number of sessions that Microsoft can present, and as they add new products, every fiefdom must have its due. There are typically around 30 sessions that involve unified communications in some way; that number has remained fairly constant since 2005 or so. Over the last several years, the mix of sessions has changed to accommodate new versions of Exchange, Lync, and Office 365, but the limited number of sessions means that TechEd can’t offer the depth of MEC, Exchange Connections, or Lync Conference. This year there are 28 Exchange-related sessions, including several that are really about Office 365— about a quarter of the content of MEC.

I can’t keep track of how many previous TechEd events I’ve been to; if you look at the list, you’ll see that they tend to be concentrated in a small number of cities and so they all kind of blend together. (Interestingly, this 2007 list of the types of attendees you see at TechEd is still current.) The most memorable events for me have been the ones in Europe (especially last year’s event in Madrid, where I’d never been before).

This year I was asked to pinch-hit and present OFC-B318, “What’s New in Lync Mobile.” That’s right— so far this year, I have presented on Lync at Lync Conference and MEC, plus this session, plus another Lync session at Exchange Connections! If I am not careful I’ll get a reputation. Anyway, I am about ready to dive into shining up my demos, which will feature Lync Mobile on a variety of devices— plus some special guests will be joining me on stage, including my favorite Canadian, an accomplished motorcycle rider, and a CrossFitter. You’ll have to attend the session to find out who these people are though: 3pm, Monday the 12th— see you there! I’ll also be working in the Microsoft booth area at some point, but I don’t know when yet; stay tuned for updates.

Leave a comment

Filed under UC&C

Speaking at Exchange Connections 2014

I’m excited to say that I’ll be presenting at Exchange Connections 2014, coming up this fall at the Aria in Las Vegas.

Tony posted the complete list of speakers and session titles a couple of days ago. I’m doing three sessions:

  • “Who Wears the Pants In Your Datacenter: Taming Managed Availability”: an all-new session in which the phrase “you’re not the boss of me” will feature prominently. You might want to prepare by reading my Windows IT Pro article on MA, sort of to set the table.
  • “Just Like Lemmings: Mass Migration to Office 365”: an all-new session that discusses the hows and whys of moving large volumes of mailbox and PST data into the service, using both Microsoft and third-party tools. (On the sometimes-contentious topic of public folder migration, I plead ignorance; see Sigi Jagott’s session if you want to know more). There is a big gap between theory and practice here and I plan to shine some light into it.
  • “Deep Dive: Exchange 2013 and Lync 2013 Integration” covers the nuts and bolts of how to tie Lync and Exchange 2013 together. Frankly, if you saw me present on this topic at DellWorld, MEC, or Lync Conference, you don’t need to attend this iteration. However, every time I’ve presented it, the room has been packed to capacity, so there’s clearly still demand for the material!

Exchange Connections always has a more relaxed, intimate feeling about it than the bigger Microsoft-themed conferences. This is in part because it’s not a Microsoft event and in part because it is considerably smaller. As a speaker, I really enjoy the chance to engage more deeply with the attendees than is possible at mega-events. If you’re planning to be there, great— and, if not, you should change your plans!

1 Comment

Filed under Office 365, UC&C

MEC 2014 wrap-up by the numbers

The MEC 2014 conference team sent out a statistical summary of the conference to speakers, and it makes for fascinating reading. I wanted to share a few of the highlights of the report because I think it makes some really interesting points about the state of the Exchange market and community.

First: the 101 sessions were attended by a total of 13,079 people. The average attendance across all sessions was 129, which is impressive (though skewed a bit by the size of some of the mega-sessions; Microsoft had to make a bet that lots of people would attend these sessions, which they did!). In terms of attendance, the top 10 sessions were mostly focused on architecture and deployment:

  • Exchange Server 2013 Architecture
  • Ready, set, deploy: Exchange Server 2013
  • Experts Unplugged: Exchange Top Issues – What are they and does anyone care or listen?
  • Exchange Server 2013 Tips & Tricks
  • The latest on High Availability & Site Resilience
  • Exchange hybrid: architecture and deployment
  • Experts Unplugged: Exchange Deployment
  • Exchange Server 2013 Transport Architecture
  • Exchange Server 2013 Virtualization Best Practices
  • Exchange Design Concepts and Best Practices
RS IV, not life size

To put this in perspective, the top session on this list had just over 600 attendees and the bottom had just under 300. Overall attendance for sessions on the architecture track was about double that of the next contender, the deployment and migration track. That tells me that there is still a large audience for discussions of fundamental architecture topics, in addition to the day-in, day-out operational material that we’d normally see emerging as the mainstay of content at this point in the product lifecycle.

Next takeaway: Tim McMichael is a rock star. He captured the #1 and #2 slots in the session ratings, which is no surprise to anyone who’s ever heard him speak. I am very hopeful that I’ll get to hear him speak at Exchange Connections this year. The overall quality of speakers was superb, in my biased opinion. I’d like to see my ratings improve (more demos!) but there’s no shame in being outranked by heavy hitters such as Tony, Michael, Jeff Mealiffe, Ross Smith IV (pictured at left; not actual size), or the ebullient Kamal Janardhan. MEC provides an excellent venue for speakers to mingle with attendees, too, both at structured events like MAPI Hour and in unstructured post-session or hallway conversations. To me, that direct interaction is one of the most valuable parts of attending a conference, both as a speaker and because I can ask other speakers questions about their particular areas of expertise.

Third, the Unplugged sessions were very popular, as measured both by attendance numbers and session ratings. I loved both the format and content of the ones I attended, but they depend on having a good moderator— someone who is both knowledgeable about the topic at hand and experienced at steering a group of opinionated folks back on topic when needed. While I would naturally be bad at that, the moderators overall did an excellent job, and I hope to see more Unplugged sessions at future events.

When attendees added sessions to their calendars, the event staff used that as a means of gauging interest and assigning rooms based on the likely number of attendees. Looking at the data, though, shows that people flocked to sessions based on word of mouth and didn’t necessarily update their calendars. I calculated the attendance split by dividing the number of people who actually attended a session by the number who said they would attend: if 100 people calendared the session but 50 attended, that would be a 50% split. The average split across all sessions (except one) was 53.8%— not bad considering how dynamic the attendance was. The one session I left out was “Experts Unplugged: Architecture – HA and Storage”, which had a split of 1167%! Of the top 10 splits (i.e. sessions where the largest percentage of people stood by their original plans), 4 were Unplugged sessions.
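If you want to play with the arithmetic yourself, here’s a quick sketch. The totals (13,079 attendees across 101 sessions) are the ones from the report; the per-session calendared/attended pairs below are made-up placeholders, since the real per-session data isn’t public.

```python
# Totals quoted from the MEC report
total_attendance = 13_079
session_count = 101
print(f"average attendance: {total_attendance / session_count:.0f}")  # ~129

def attendance_split(calendared, attended):
    """Percent of the people who calendared a session that actually
    showed up; can exceed 100% when walk-ins outnumber planners."""
    return round(attended / calendared * 100, 1)

# Hypothetical examples only
print(attendance_split(100, 50))   # the 50% case described above
print(attendance_split(30, 350))   # a split north of 1000%, like the HA/Storage session
```

Nothing fancy, but it makes clear why one runaway session can skew an average: a single 1167% split is worth more than twenty typical sessions combined.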

Of course, MEC was much more than the numbers, but this kind of data helps Microsoft understand what people want from future events, measured not just by asking them but by observing their actual preferences and actions. I can’t wait to see what the next event, whenever it may be, will look like!

2 Comments

Filed under UC&C