Race report: 2016 IRONMAN 70.3 North Carolina

Summary: At the end of my season in 2015, I decided that I wanted to try a 70.3 in 2016. I’d heard about how great the Beach2Battleship 70.3 in Wilmington, North Carolina was… in large part because they give pajama pants to all finishers. I signed up in December, right after WTC bought the race and renamed it to IRONMAN 70.3 North Carolina, and then mostly forgot about it because my friend Ingrid was “encouraging” me to sign up for IM 70.3 New Orleans (which I did). I didn’t have a great race at New Orleans and wanted to do better this time. I did, by 46 minutes.

Sunday: new wheels, part 1

My friend Tony finished his season at AtomicMan by smashing the course, and he was kind enough to offer to loan me his race wheels, a Zipp 404/808 pair. I put them on the bike and took off for a ride the day before I was supposed to leave for California. PROTIP: when you change wheels, you have to adjust the derailleur limit screws. If you don’t do this, here’s what happens: you break the derailleur hanger, ruin your chain, put a big dent in your friend’s borrowed wheel, and break several spokes. On Sunday, when no bike shops are open. And while you’re 10 miles from home. A quick Uber ride home and I was left to sort out my plan: Dana would drop my bike off at the shop, they’d fix it, and I’d pick it up Thursday when I got back.

A broken derailleur is a terrible thing

Thursday: new wheels, part 2

Hosanna to the crew at Bicycle Cove. Chris, Parker, and Nick got the parts in and got the bike fixed. When I went to pick it up, though, I was surprised; instead of Tony’s Zipps, it was wearing a pair of Bontrager Aeolus 5s. They wanted Chris to look at Tony’s wheel again before clearing it for riding, you see, so they loaned me a set of their shop wheels to make sure I wouldn’t miss the race. “We’ll settle up later,” they said. “Now go have a great race.” Because we were having thunderstorms at the time, I didn’t get a chance to ride the new wheels; I had to load up and go.

Friday: flying and being pathetic

On Friday, I packed up 706 and filed direct HSV-ILM. A cold front had just passed through Huntsville, and I knew I’d be going through it again en route, but I was looking forward to a nice tailwind. That’s just what I got, with 15-25 knots of wind speeding me along. I landed at ILM after a beautiful flight with only a few bumps. Air Wilmington has a strict 1-hour limit on their courtesy cars, so I grabbed an Uber and headed to the downtown convention center for race check-in. Unfortunately, I failed to note the line in the athlete guide that said check-in closed at 1pm, and I got there about 1:45p. After some nervous waiting in line, Caroline from IRONMAN was kind enough to check me in anyway. She gave me 3 bags: one for T1, one for T2, and one for “morning clothes.” I found a niche to spread out my stuff and started the process of filling the bags. See, this race is a point-to-point-to-point race: you swim from point A to point B, get on the bike and bike to point C, and then run to point D. At each point, you have to change into the appropriate clothes, so before the race you have to stage all your stuff in the right bags. If you forget something, or put it in the wrong bag… well, too bad.

I got my bags packed and found that points B, C, and D were farther apart than I expected. While wandering around, I ran into Nancy and Paula, two fellow members of the Pathetic Triathletes group on Facebook. Nancy recognized me when I mentioned taking an Uber– I’d previously asked her whether Uber was in Wilmington. Thank goodness they found me; they were invaluable in showing me the ropes of this particular race. They were also kind enough to drive me and my bike over to T1 so I could stage my stuff. Then we had a lovely Pathetic meetup at Poe’s Tavern on the beach, where I met several other Pathetics and had a delicious burger. We went to check out the swim exit, where we met a volunteer who explained the swim course to me in great detail. The course was well marked with buoys, which I appreciated since my open water sighting technique still needs some work.

Seems placid, doesn’t it?

The sunset was pretty impressive, too.

Yay for bonus sunsets

Nancy and Paula dropped me off at my Airbnb (summary: nice and quiet, no New Orleans-style murder) and I was in for the night, modulo a quick run to CVS for some Advil. I checked the weather forecast a few dozen times to get some idea of what the winds would be like. As I tried to drift off to sleep, I mulled over what reasonable goal times for the day would be. All I really wanted was to beat my NOLA 70.3 time.

Saturday pre-race: patheticness everywhere

Stan, Karen, Paula, and Nancy were kind enough to let me carpool with them, then stop so I could grab some breakfast, then to loan me $5 because I had pathetically forgotten my wallet. I had a gas station protein bar and a 20oz Coke… breakfast of champions, right? We got to T1 in plenty of time for me to fill my bike bottles with Mercury, pump up my tires, and check once more to make sure I had everything in my “morning clothes” bag that I wanted. See, at the swim start, you leave that bag there, and you don’t get it again until after the race. It’s a good place for things like eyeglasses and cell phones. T1 was crowded, as you’d expect for a race with nearly 3000 athletes. I was way in the far left back corner, which turned out to not be so bad because it was easy to find.

Transition 1 on race morning

Last-minute preparation accomplished, I caught a shuttle to the swim start and met up with my pathetic pals there.  Stan had loaned me a cap, which I was glad to have because it was chilly; I put on my wetsuit earlier than I normally would have, and it helped quite a bit while we waited. I got to the swim start about 8:10, and my wave wasn’t due to start until 9:06, so I had some time to mill around. I found that I still had my chapstick with me, even though I should’ve left it in my run bag. Solution: put it in the top of my swim cap. It survived, luckily, and didn’t get too much extra ocean flavor. Almost before I knew it, they were herding our swim wave across the street and into the waist-high water behind the start line. The water was warmer than the air, and it felt great after I’d been standing outside being cold for an hour.

The swim

39:53, a new PR for me at this distance and roughly 20 minutes faster than my New Orleans performance. This course was linear so my poor sighting didn’t put me at much of a disadvantage, and there was a fast current to boot. I swallowed a good bit of salt water so I was worried about having to vomit– usually an automatic cause for the support staff to pull you– but I ended up OK. At swim exit I was wobbly from all the time spent swimming through chop; the second half of the swim was mostly into the surface chop so I was a little, if not seasick, then seasickish. When I exited the water, I noticed that my watch said “Resume?” and had recorded only about 1030 yards of the swim. I guess I accidentally hit a button with my wetsuit cuff or something. So much for an accurate swim distance.


T1

14:28? Jeez. The run from the swim exit to the bike corral was long, and I did stop in the changing tent to put on sunscreen, a dry shirt, and lots of chamois butter… but I had no idea I was in T1 for this long. I felt really stupid when I saw my race results, because this should’ve been no more than a 5-minute stop. Once I got all my stuff together, I got to the mount line and headed out on the bike.

The bike

Before the race, there was a great uproar because of IRONMAN’s decision to shorten the full-distance bike course. During race week, they announced a couple of route changes (and more were rumored), but by race day they’d settled on one 56-mile route for both half and full-distance races. It was windy, with forecast winds of 13-15 mph from the west. We got all that and more– the wind history at ILM was 12.1mph for 24 hours, with a highest sustained wind of 22mph and peak gusts of 27mph. The bike course itself was a big part of the problem– its structure meant that we went out, did a loop, and then came back, starting from the green dot. The loop was south on 421, then north to the turnaround (where a gas station was selling fried chicken that smelled indecently delicious), then south again. Since the wind was coming from the west, we had very significant wind exposure– more miles than I think we got in the out-and-back New Orleans course.

the bike course

If you look at my lap times, you can absolutely see the last 8 miles of tailwinds… and the other 48 or so of cross/head winds. I averaged 14.5mph on the first 39 miles and 17.6mph on the way back. Despite that gap in speeds, I felt really good on the bike. I passed nearly 100 people, which was an absolute first for me– I usually start at a deficit when coming out of the water that I can’t make up on the road. I held the power target that I wanted, I didn’t wreck in the strong crosswinds, and with the winds, I came in just over my target time of 3:30. (obtw, those Aeolus race wheels were excellent.)
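
Those two splits reconcile with the finish time, if you want to check the arithmetic; a quick back-of-the-envelope sketch (the 39-mile/17-mile split is from my lap data above):

```python
# 39 miles into the cross/head winds at 14.5 mph, then the
# remaining miles of the 56-mile course mostly tailwind at 17.6 mph.
out_miles, out_mph = 39, 14.5
back_miles, back_mph = 56 - out_miles, 17.6

total_hours = out_miles / out_mph + back_miles / back_mph
avg_mph = 56 / total_hours

hours = int(total_hours)
minutes = round((total_hours - hours) * 60)
print(f"total ~ {hours}:{minutes:02d}, average ~ {avg_mph:.1f} mph")
```

That works out to roughly 3:39 overall– just over the 3:30 target, exactly as it felt.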

bike data… too many cadence spikes

There was a fair bit of (justified) complaining online because the aid stations weren’t where the athlete guide promised, and each one had only two porta-potties.

EDITED TO ADD: here’s a video of the bike route provided by relive.cc.


T2

Back through the changing tent and out again; this time it only took me 9:21… still ridiculously slow. That time comprised walking my bike down a long string of rubber mats overlaid on top of the gravel/dirt construction mix on the street where the bike finished, getting into the corral and getting my run bag, hitting the changing tent, and actually changing, then leaving again. I’m still not really sure where the timing mat was.

The run

2:28. That’s really all I have to say about that. Faster than NOLA, but still ~30 minutes slower than my standalone 13.1 pace. Lots of room for improvement here. The run course was semi-scenic; the first leg went through downtown, where there was a moderate crowd, then along an ugly industrial section of Front Street, then over to Greenfield Lake, which is ringed with city-provided signs that say “YES, there are alligators in this lake. Do not feed, harass, or tease them.” It’s a delightfully scenic lake, though, and (unlike the bike) there were plentiful, well-stocked aid stations. The full-distance racers had to do two loops of the course, whereas I only had one, for which I was grateful. I tried Red Bull for the first time on the course; while it didn’t give me superpowers, it also didn’t make my stomach convulse, so I’ll score that as a draw. I saw Pathetic Nancy on the run (I spotted the “#P” marked on her calf as I passed her), and I met Robert Moore, one of the “PPD Heroes” featured by the race sponsor. Then I ran the last mile or so with a lady who was finishing the full-distance race and we chatted a bit– that was a pleasant way to get to the finish line. Oddly, there were fewer spectators out when I came back through downtown on the return.

YES, there are alligators in this lake



The finish line experience was great– I crossed, got my medal and pajama pants, and wandered around for a bit catching my breath. Unfortunately, soon I had to go pick up 3 bags of stuff: my run, bike, and morning clothes bags were all in different places. It took me close to 30 minutes of schlepping around to collect the bags and my bike, which was far longer than I wanted to spend. I grabbed an Uber back to the house, took a badly needed hot shower, and headed over to Hops Supply for dinner. I wasn’t up late.

The trip home

This morning, I woke up at 5 with a goal of being wheels-up by 6. Plot twist: there aren’t any Uber drivers awake that early, apparently. I eventually got a car and got to the airport to find that my plane was parked out on the back 40 and had to be towed to where I could access it. 45min after my desired time, I was airborne for Peachtree-DeKalb to meet my best friend from high school, the illustrious Brian Albro. We had a fantastic but short visit (thank you, Flying Biscuit, for breakfast), then I headed back. My flight was smooth and beautiful. I got to see some Harriers parked on the ramp at ILM, some great river fog, and a lot of greenery.


The fog follows the river path exactly

A great race and a worthy effort. There were a lot of logistical hiccups; for example, the 70.3 athlete tracking on the IM website never worked, and the bike course caused a ton of traffic problems for locals, which means this will be an unpopular race next year. I got 7.5 flying hours and 7.2 hours on the triathlon course, so it was a good trip.


Filed under aviation, Fitness

Flying Friday: the great Gulfstream migration

Y’all may have heard of a little thing called Hurricane Matthew (or, as the Weather Channel continually called it, to the great amusement of my son Matthew, “DEADLY HURRICANE MATTHEW.”) And you may have heard of Gulfstream, the wildly successful purveyor of extremely expensive and capable business jets. But did you know that, for a while, our own Huntsville International Airport hosted nearly a billion dollars worth of Gulfstream hardware?

See, Gulfstream is based in Savannah, Georgia. They have a large factory there, with a satellite facility at Brunswick where they do paint and interior work. With a category 4 hurricane headed their way, Gulfstream made the very wise decision to find another place to park their airplanes until the storm passed, and Huntsville won the toss. On October 6th, I was listening to LiveATC and noticed a few airplanes checking in to Huntsville Approach with callsigns of “Gulftest XXX.” Neat, I thought. These must be test or acceptance flights. Then I heard a few more. Then one of the controllers asked a pilot how many more flights to expect– the pilot nonchalantly replied “oh, 30 or so.” That led me to check FlightRadar24 and, sure enough, the migration was well underway. (Sadly I didn’t think to capture any screen shots).

Last Sunday I drove out to the airport to take a few pictures of the shiny goodness on the ramp. These are links to my Flickr stream, which has lots of other airplane pictures if you’re into that sort of thing:

I was out of town this past week, so I missed the return flight; sadly, they’re gone now. It was fun to see them here, as that’s probably the closest I’ll ever be to such expensive hardware.


Filed under aviation

Vertical constraints and the Avidyne IFD540

When you file and fly an IFR flight plan, you’ll have an assigned altitude for every segment of the flight. You’re not permitted to deviate from this altitude without permission (except in emergencies); the altitude obviously has to be high enough so that you don’t hit anything on the ground, and it may be capped on top to keep you away from other aircraft, clear of military training airspace, and so on.  (There are several different varieties of minimum altitude that you might be assigned on a route, but that’s a distinction for another day.)

These altitude assignments can often be expressed as constraints. ATC might, for instance, say “N32706, descend pilot’s discretion 4000”, which means I’m cleared to go from whatever altitude I’m at down to 4000′ at whatever rate of descent I want. I’m still not cleared to descend below 4000′ though. And of course, there are lots of constraints on instrument approaches, where we’re commonly told to maintain an altitude between an upper and lower limit at different segments of approach.

It’s common for air traffic control to give pilots crossing restrictions, too. For example, airplanes flying into Norcal Approach’s airspace will often be told to cross KLIDE “at or above 12000”. If they’re flying the ILS 30L (an excerpt of which is shown below), they’ll be told to “cross KLIDE at or above 4000, reduce speed to 230 knots”. You can tell that there’s a restriction there because of the horizontal line under the altitude– an underline means “at or above”, an overline means “at or below,” and both together mean “at”.

Those horizontal lines are important

An example I’m personally familiar with is that, when you fly into McCollum Field in north Atlanta from the west, the Atlanta Center controller will typically tell you to cross 35mi west of the airport at a specific altitude. In days of old, I’d just do this manually, which required a bit of math– “a standard 500 foot/minute descent at X airspeed means I need to start descending at Y minutes.” The IFD540 makes crossing restrictions really simple to track, but the interface for doing so is a little weird.
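
The mental math the IFD540 now does for me is simple enough to sketch. A minimal version in Python (the altitudes and groundspeed in the example call are hypothetical; 500 ft/min is just the comfortable descent rate I use):

```python
def top_of_descent_nm(current_alt_ft, target_alt_ft, groundspeed_kt, fpm=500):
    """Distance before the fix at which to begin a constant-rate descent."""
    minutes_of_descent = (current_alt_ft - target_alt_ft) / fpm
    # nautical miles covered during the descent at the given groundspeed
    return groundspeed_kt * minutes_of_descent / 60

# e.g. descending from 8,000' to cross a fix at 5,000' at 120 kt groundspeed:
# 3,000 ft / 500 fpm = 6 minutes, and 6 minutes at 120 kt = 12 nm before the fix
print(top_of_descent_nm(8000, 5000, 120))
```

Doing that in your head while flying the airplane and talking to ATC is exactly the sort of workload the box is good at taking off your plate.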

To specify a vertical constraint, you use the FMS page. The problem is that, by default, the FMS page shows the flight plan as a strip along the right edge of the screen, and you can’t see the fields necessary to enter the constraint. The example below (taken from Avidyne’s excellent Windows simulator) shows what I mean– you can see the flight plan in FMS view but, unless you know this One Simple Trick, you can’t put in crossing restrictions.


Tapping the “FPL” tab on the left edge of the flight plan data strip changes your view, though. In the expanded view, it’s easy to specify the distance and altitude you want, as you can see in this picture from the actual IFD540. ATC told me “cross CARAN at 5000” and so it was a trivial matter to specify a crossing distance of 0nm and an altitude of 5000 feet. I could just as easily have handled an instruction like “descend to 5000 5nm west of CARAN.”


As soon as I put the restriction in place and switched back to the map view, I could see a new “TOD” (top of descent) circle showing me where to start my descent. In addition, notice that CARAN is now bracketed by lines above and below its name– that’s because I specified I wanted to cross at 5000. If I had said “at or above” or “at or below,” the symbology would reflect that.


There’s also an audio alert that sounds just like a doorbell– so if you start a 500fpm descent when you hear the chime at the indicated TOD, you’ll arrive at the specified crossing point at the right altitude. This is a very handy feature, especially since you can set any combination of distance and altitude. Want to make sure you arrive right at pattern altitude when coming into an unfamiliar airport? Set a restriction for, say, 2nm on the approach side of the airport at pattern altitude and voila!


Filed under aviation

Office 365 Exposed episode 6, from Las Vegas

Fresh off the episode we did at Microsoft Ignite this year, Tony and I thought it would be fun to do another short episode while we’re both in Vegas for IT/DevConnections… so we did. Topics include a spiffy new profanity filter (for Office 365 Groups, not Tony and me), the triumphant debut of Focused Inbox on desktop Outlook, and a touching closing segment where Tony mourns the loss of a favored gadget.


Filed under Office 365, Podcasts

Training Tuesday: training with heart rate variability (HRV)

[For some reason this scheduled post didn’t post on Tuesday, so I’m manually reposting it a day late. Selah.]

Normally, I’m not a big podcast listener because I (thankfully) don’t spend much time in the car, and I find having people talking in the background while I work distracting. However, working on the Office 365 Exposed podcast with Tony has helped me see that lots of people do like them, and that I might be missing out, so I’ve expanded my listening a bit. When I found that my coaching team at Complete Human Performance had a podcast, I subscribed. The most recent episode was with Dr. Mike T. Nelson, and he was discussing something called HRV (heart rate variability). It was a fascinating topic so I want to try to summarize what I (think I) learned:

  • HRV refers to the variance in the amount of time between heartbeats (not your heart rate itself)
  • It’s calculated based on heart rate measurements, from an EKG or other methods.
  • The most commonly used scale gives you a rating from 1-100.
  • You want your HRV to be high. A low HRV is generally a bad sign, and may in some cases even indicate impending heart problems.
  • The trend of your HRV is a useful indicator of fatigue, stress, and so on.
  • People who have low HRV values after heart surgery tend to have worse outcomes
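
For the curious, here’s roughly how the number gets computed. This is my own sketch, not any app’s published formula: the usual starting point is RMSSD (the root mean square of successive differences between beat-to-beat intervals), and one commonly cited convention for mapping it onto a ~100-point scale is 20 × ln(RMSSD). The R-R intervals below are made up:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between R-R intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def hrv_score(rr_intervals_ms):
    # Many consumer apps report ln(RMSSD) rescaled to roughly 0-100;
    # the 20x multiplier is a convention I've seen cited, not a standard.
    return 20 * math.log(rmssd(rr_intervals_ms))

# a short run of fabricated resting R-R intervals, in milliseconds
rr = [850, 910, 870, 930, 880, 900, 860, 920]
print(round(hrv_score(rr), 1))
```

The key intuition: a relaxed, well-recovered nervous system produces more beat-to-beat variation (bigger successive differences, higher score), while fatigue and stress flatten it out.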

Tracking HRV is especially useful for endurance athletes because it gives you a data point showing the total amount of stress that your cardiovascular system has been under. Mike Nelson, in the podcast, said that you should think of HRV as reflecting the cost of everything you do. Exercise, diet, rest, job stress, and personal stress all add to this cost. Factoring all those in, and measuring your HRV daily, you should be able to intensify or ease your workouts to keep your day-to-day HRV in the desired range.

“You can measure HRV with an app,” he said, so I did– I paused the podcast, grabbed the iThlete app, read its instructions, and took a reading. Mid-50s.

Nelson went on to outline a number of strategies for adjusting training to take HRV data into account– you want to get the most effective training possible, while still not having a long-term negative impact on HRV. I’m not doing that yet because you need a solid baseline of data to see what your “normal” HRV is. Mine looks like it’s hovering around 60ish. I don’t know enough to know if that’s good, bad, or what considering my age and physical condition, and I don’t yet understand the relationship (if any) between my resting heart rate and my HRV. I’ll keep an eye on it for another month or so and then start considering, in consultation with my coaches, what, if anything, I should use it for besides another nerdy data point.



Filed under Fitness

Creating Exchange dynamic distribution groups with custom attributes

You learn something new every day… I guess that means I’m ahead of schedule for the day.

A coworker asked if there was a way to use PowerShell to create a dynamic distribution group using one of the AD customAttributeX values. I didn’t know the answer offhand (since I create new distribution groups about every 5 years), but a little binging turned up the documentation for New-DynamicDistributionGroup. Turns out that the ConditionalCustomAttributeN parameters will do what he wanted:

# the -Name value here is a placeholder; New-DynamicDistributionGroup requires one
New-DynamicDistributionGroup -Name "IncludedContacts" -IncludedRecipients MailContacts -ConditionalCustomAttribute6 "PeopleToInclude"

It turns out that wasn’t what he really wanted– he wanted to create a dynamic DG to include objects where the custom attribute value was not set to a particular value. The ConditionalXXX switches can’t do that, so he had to use a RecipientFilter instead:

# again, the -Name value is a placeholder; CustomAttribute6 is the filterable
# property that corresponds to ConditionalCustomAttribute6
New-DynamicDistributionGroup -Name "ExcludedContacts" -IncludedRecipients MailContacts -RecipientFilter {CustomAttribute6 -ne "PeopleToExclude"}...


Filed under Office 365, UC&C

Microsoft Exchange engineering and cloud-scale

The Exchange team (or at least Perry Clarke, its fearless leader) has been known to describe Exchange Online as “the gateway drug to the cloud.” But how did that come to pass?

This week at Ignite, I was lucky enough to have dinner with some folks from the Exchange product team and a very, very large customer where we discussed the various ways in which Exchange engineering has blazed a trail the rest of Microsoft’s server products have eventually followed. After a bracing Twitter discussion this afternoon with @swiftonsecurity and some of her other followers, I thought it would be fun to put together a partial list of some of the things we discussed to illustrate how the Exchange team has built a stairway to heaven, or an elevator to the cloud, or something like that.

Let’s start with PowerShell. Love it or hate it, it is here, so we all have to deal with it. In 2007, the idea that Exchange would be built on PS was both revolutionary and, to many, revolting, but it allowed Microsoft to do several important things (not all of which shipped in Exchange 2007, but all of which are critical to cloud operations):

  • Greatly improve testability, both for the developers themselves and for administrators, who now got a suite of protocol and endpoint-related tests they could run as part of troubleshooting– critically important when you have to troubleshoot in a global network of data centers hosting tens of millions of mailboxes
  • Fully enable role-based access control, also critical for cloud deployments where customers want to control who can do what with their data
  • Finally decouple the presentation layer of the UI (EMC, EAC, etc) from business logic
  • Massively improve the tools for scripting, including enabling very large-scale bulk operations– an obvious requirement for a cloud-scale service

Requiring PowerShell was a bold move by the Exchange team, but one that has paid off hugely and has been echoed by the Windows, SharePoint, SQL Server, and Skype teams, all of whom depend on it for managing their own cloud services. (See also: the Microsoft Graph APIs.)

Then there’s storage performance. In ancient days, getting scale from Exchange pretty much required the use of SANs due to Exchange’s IO requirements. Now, thanks to the IOPS diet imposed by Exchange engineering, it doesn’t. Tony does his usual excellent job of summarizing the actual reductions. Summary: Exchange 2016 requires roughly 96% fewer IOPS than Exchange 2003 did. There have been a ton of storage performance improvements in Exchange’s sister products (notably SQL) but those have their own stories that I’m not competent to tell. The relentless drive to cut IOPS requirements was one of the biggest enablers for Exchange Online, since controlling storage provisioning costs is critical for any type of scaled cloud service.

Of course, data protection is critical too. Exchange moved from a single monolithic database to separate property and MIME databases (Exchange 2000), then to software-based database replication with clustering (Exchange 2007), and finally to shared-nothing, fully replicated active/passive database replication (Exchange 2010 and later). Keeping multiple separate database copies (including lagged copies) enables all sorts of DR and HA scenarios that previously required SANs. The ability to reliably use cheap JBOD disks, which thanks to Moore’s Law have embiggened nicely during Exchange’s lifetime, has been a key enabler for Exchange Online.

Then there’s a bunch of other architectural changes and improvements that are really only interesting to Exchange nerds. For the latest example, I present “read from passive,” but there’s also all the stuff covered by the Preferred Architecture.

Oh, I almost forgot: managed availability gives ExO a fair degree of self-healing, although its behavior sometimes surprises on-prem admins who see it do things on their behalf unexpectedly.

Oh, and let’s not forget the conversion of all the Exchange codebase to managed code– that was an important accelerator for the move to the cloud, as well as serving as a lighthouse for other product groups with code of similar vintage.

There are more examples, I’m sure, but these should get the point across– there’s been a steady stream of architectural changes in the nearly 20 years since Exchange 4.0 shipped that have led directly to the capability, power, and reliability of Exchange Online– which really has been the gateway drug for getting Microsoft’s customers to Office 365.




Filed under UC&C