Category Archives: General Tech Stuff

Fixed: Surface Book doesn’t recognize docking state

I got a shiny new Surface Book on Monday and started using it immediately… more specific notes on it later when I have more time. I ran into a problem today, though, and wanted to document what I found.

Symptom: the touchpad and keyboard don’t work. The clipboard switches to tablet mode (if you’ve enabled automatic switching). You can’t use the base unit’s USB ports. The taskbar “undock” icon shows that the base is undocked.

Cause: beats me.

Resolution: boot into the system BIOS by turning the machine off, then holding the power and volume-up keys for 15 seconds. When you get into the BIOS, just exit BIOS setup and the machine will reboot normally. There’s a thread here that outlines the exact procedure.

Overall, I love the machine: the form factor, build quality, screen resolution, performance, and trackpad are all superb. I expect this kind of temporary hiccup, so it hasn’t put me off at all.

2 Comments

Filed under General Tech Stuff

Windows Hello and Microsoft Passport intro

I’ve been working on a white paper explaining how Windows Hello and Microsoft Passport work together in Windows 10– it’s a really neat combination. Over at my work blog, I have a short article outlining what Hello and Passport are and a little about how they work (plus a bonus demo video). If you’re curious, head over and check it out.

Leave a comment

Filed under General Tech Stuff, Security

The difference between Suunto cadence and bike pods

I spent way too much time trying to figure this out today, so I’m blogging it in hopes that the intertubez will make it easy for future generations to find the answer to this question: what’s the difference between a cadence pod and a bike pod according to Suunto?

See, the Suunto Ambit series of watches can pair with a wide range of sensors that use the ANT+ standard. You can mix and match ANT+ devices from different manufacturers, so a Garmin sensor will work with a Suunto watch, or a Wahoo heart-rate belt will work with a Specialized bike computer. I wanted to get a speed and cadence sensor for my bike. These sensors measure two parameters: how fast you’re going and how rapidly you’re pedaling. (This is a great explanation of what these sensors really measure and how they work.) Ideally you want a nice, steady cadence of 75-90 rpm. I knew I had a variable cadence, and I wanted to measure it to get a sense for where I was at.
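
For concreteness, the math a head unit does with these sensors is simple: the speed sensor counts wheel revolutions, the cadence sensor counts crank revolutions, and everything else is multiplication. Here's a rough PowerShell sketch of that arithmetic (the 2.096 m circumference is a typical 700x23c road tire — a configured assumption, not something the sensor reports):

# Rough sketch of what a head unit computes from raw sensor counts.
# The wheel circumference is configured by the user, not measured.
$wheelCircumferenceM = 2.096

function Get-SpeedKmh([int]$WheelRevs, [double]$Seconds) {
    # speed = (revolutions x circumference) / time, converted m/s -> km/h
    ($WheelRevs * $wheelCircumferenceM / $Seconds) * 3.6
}

function Get-CadenceRpm([int]$CrankRevs, [double]$Seconds) {
    # cadence = crank revolutions per minute
    $CrankRevs / $Seconds * 60
}

Get-SpeedKmh -WheelRevs 250 -Seconds 60    # ~31.4 km/h
Get-CadenceRpm -CrankRevs 85 -Seconds 60   # 85 rpm, right in that sweet spot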

I ordered a Wahoo combined cadence/speed sensor from Amazon and installed it on the bike, which was pretty straightforward. Then I paired it with the watch using the “bike POD” option (Suunto, for some reason, calls sensors “PODs”). That seemed to work fine, except that I wasn’t getting any cadence or speed data. But I knew the sensor was working because the watch paired with it. I tried changing the sensor battery, moving the sensor and its magnets around, and creating a new tracking activity that didn’t use GPS to see if I got speed data from the sensor. Then I thought “maybe it’s because I didn’t pair a cadence pod,” so I tried that, but no matter what I did, the watch refused to see the Wahoo sensor as a cadence sensor.

Here’s why: to Suunto, a “bike POD” is a combined speed/cadence sensor. A “cadence pod” is for cadence only. Like Bluetooth devices, each ANT+ device emits a profile that tells the host device what it is. That’s why the watch wouldn’t see the sensor, which reported itself as a combined cadence/speed unit, when I tried to pair a cadence pod. After I figured that out, I quit trying to pair the cadence pod… but I still didn’t get speed or cadence data.
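
If you're curious what that profile looks like in practice: every ANT+ sensor broadcasts a device type number in its channel ID, and the host simply filters on that number when you tell it what kind of pod to pair. Here's an illustrative sketch — the device type numbers come from the published ANT+ device profiles, but the filtering logic is my guess at the behavior, not Suunto's actual firmware:

# Device type numbers from the published ANT+ device profiles.
$antDeviceTypes = @{
    120 = 'Heart Rate'
    121 = 'Bike Speed & Cadence (combined)'   # Suunto's "bike POD"
    122 = 'Bike Cadence only'                 # Suunto's "cadence POD"
    123 = 'Bike Speed only'
}

# A watch pairing a "cadence POD" only accepts device type 122, so a
# combined sensor (type 121) is invisible to it -- exactly what I saw.
function Test-PairingMatch([int]$SensorType, [int]$WantedType) {
    $SensorType -eq $WantedType
}

Test-PairingMatch -SensorType 121 -WantedType 122   # False: no cadence-pod pairing
Test-PairingMatch -SensorType 121 -WantedType 121   # True: pairs as a bike POD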

The solution turned out to be simple. For some reason, in the cycling sport activity, the “Bike POD” sensor was unchecked, so the watch wasn’t reading its data stream during the activity. I don’t remember unchecking the box, but maybe I did. In any event, once I checked the “Bike POD” box and updated the watch, I immediately started getting cadence and speed data, so I set out for a ride.

[Screenshot: the cycling activity’s sensor settings. Hint: if you uncheck any of these boxes, the watch will never, ever pay attention to that sensor]

I thought it was a pretty good ride from a speed perspective, even though I took a new route that had a number of hills– I had some trouble with that. But look at my cadence… you can see that it definitely needs work. Sigh. One of the nifty things about Suunto’s web site is that it shows vertical speed when you point at cadence data, so I could see where I was struggling to get up hills (meaning I needed to change gears) or loafing when going downhill. Just one more thing to put on my to-fix list…

[Screenshot: speed and cadence graphs from the ride]

36 Comments

Filed under Fitness, General Tech Stuff, HOWTO

Exchange Server and Azure: “not now” vs “never”

Wow, look what I found in my drafts folder: an old post.

Lots of Exchange admins have been wondering whether Windows Azure can be used to host Exchange. This is to be expected for two reasons. First, Microsoft has been steadily raising the volume of Azure-related announcements, demos, and other collateral material. TechEd 2014 was a great example: there were several Azure-related announcements, including the availability of ExpressRoute for private connections to the Azure cloud and several major new storage improvements. These changes build on Microsoft’s aggressive evangelism, which has succeeded in convincing many iOS and Android developers to use Azure as the back-end service for their apps. The other reason, sadly, is why I’m writing: there’s a lot of misinformation about Exchange on Azure (e.g. this article from SearchExchange titled “Points to consider before running Exchange on Azure,” which is wrong, wrong, and wrong), and you need to be prepared to defuse it with customers who may misunderstand what they’re potentially getting into.

On its face, Azure’s infrastructure-as-a-service (IaaS) offering seems pretty compelling: you can build Windows Server VMs and host them in the Azure cloud. That seems like it would be a natural fit for Exchange, which is increasingly viewed as an infrastructure service by customers who depend on it. However, there are at least three serious problems with this approach.

First: it’s not supported by Microsoft, something the “points to consider” article doesn’t even mention. The Exchange team doesn’t support Exchange 2010 or Exchange 2013 on Azure, Amazon EC2, or anyone else’s cloud service at present. That may change in the future, but for now any customer who runs Exchange on Azure is in an unsupported state. It’s fun to imagine scenarios where the Azure team takes over first-line support responsibility for customers running Exchange and other Microsoft server applications; that sounds a little crazy, but there’s precedent: EMC and other storage companies did exactly this for users of their replication solutions back in the Exchange 5.5/2000 days. Having said that, don’t hold your breath. The Azure team has plenty of other, more pressing work to do first, so I think any change in this support model will require the Exchange team to buy into it. The Azure team has gotten that kind of buy-in from SharePoint, Dynamics, and other major product groups within Microsoft, so it’s by no means impossible.

Second: it’s more work. In some ways Azure gives you the worst of the hosted Exchange model: you have to do just as much work as you would if Exchange were hosted on-premises, but you’re also subject to service outages, inconsistent network latency, and all the other transient or chronic irritations that come, at no extra cost, with cloud services. Part of the reason the Exchange team doesn’t support Azure is that there’s no way to guarantee that any IaaS provider is offering enough IOPS, low-enough latency, and so on, so troubleshooting performance or behavior problems on a service such as Azure can quickly turn into a nightmare. If Azure could provide guaranteed service levels for disk I/O throughput and latency, that would help quite a bit, but it would probably require significant engineering effort. Although I don’t recommend that you do it at the moment, you might be interested in this writeup on how to deploy Exchange on Azure; it gives a good look at some of the operational challenges you might face in setting up Exchange+Azure for test or demo use.

Third: it’s going to cost more. Remember that IaaS providers typically charge for resource consumption. Exchange 2013 (and Exchange 2010, too) is designed to be “always on.” The workload management features in Exchange 2013 provide throttling, sure, but they don’t eliminate all of the background maintenance that Exchange is more or less continuously performing. These tasks, including GAL grammar generation for Exchange UM, the managed folder assistant, calendar repair, and various database-related tasks, have to run, so IaaS-based Exchange servers are continually racking up storage, CPU, and network charges. In fairness, I haven’t estimated what these charges might be for a typical test-lab environment; it’s possible they’d be cheap enough to be tolerable, but I’m not betting on it, and a real deployment would no doubt be significantly more expensive.
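
To put a rough shape on that math, here's a back-of-the-envelope sketch; every rate below is a made-up placeholder, so plug in current IaaS pricing before drawing any conclusions:

# Hypothetical monthly cost for one always-on IaaS-hosted Exchange server.
# All rates are placeholders for illustration, not actual Azure prices.
$hoursPerMonth  = 730      # always-on means you pay for every hour
$computePerHour = 0.45     # mid-size VM
$storageGb      = 500      # databases plus logs
$storagePerGbMo = 0.10
$egressGb       = 200      # client traffic leaving the data center
$egressPerGb    = 0.12

$monthly = ($hoursPerMonth * $computePerHour) +
           ($storageGb * $storagePerGbMo) +
           ($egressGb * $egressPerGb)
"Estimated monthly cost per server: {0:N2}" -f $monthly   # 402.50 at these rates

The point isn't the specific number; it's that the background workload means the meter never stops running.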

Of course, all three of these problems are soluble: the Exchange team could change its support policy for Exchange on Azure at any time, and/or the Azure team could adjust the cost model to make running Exchange there competitive with Office 365 or other hosted solutions. Interestingly, though, two different groups would have to make those decisions, and their interests don’t necessarily align, so it’s not clear to me if or when we might see this happen. Remember, the Office 365 team at Microsoft runs its operations exclusively on physical hardware.

Does that mean that Azure has no value for Exchange? On the contrary. At TechEd New Orleans in June 2013, Microsoft’s Scott Schnoll said they were studying the possibility of using an Azure VM as the file share witness (FSW) for DAGs in Exchange 2013 CU2 and later. This would be a super feature because it would allow customers with two or more physically separate data centers to build large DAGs that weren’t dependent on site interconnects (at the risk, of course, of requiring always-on connectivity to Azure). The cost and workload penalty for running an FSW on Azure would be low, too. In August 2013, the word came down: Azure in its present implementation isn’t suitable for use as an FSW. However, the Exchange team has requested some Azure functionality changes that would make it possible to run this configuration in the future, so we have that to look forward to.

Then we have the wide world of IaaS capabilities opened up by Windows Azure Active Directory (WAAD), Azure Rights Management Services, Azure Multi-Factor Authentication, and the large-volume disk ingestion program (now known as the Azure Import/Export Service). As time passes, Microsoft keeps delivering more, and better, Azure services that complement on-premises Exchange, which has been really interesting to watch. I expect that trend to continue, and there are other, less expensive ways to use IaaS for Exchange if you only want it for test labs and the like. More on that in a future post….

5 Comments

Filed under General Tech Stuff, UC&C

2-factor Lync authentication and missing Exchange features

Two-factor authentication (or just 2FA) is increasingly important as a means of controlling access to a variety of systems. I’m delighted that SMS-based authentication (which I wrote about in 2008) has become a de facto standard for many banks and online services. Microsoft bought PhoneFactor and offers its SMS-based system as part of multi-factor authentication for Azure, which makes it even easier to deploy 2FA in your own applications.

Customers have been demanding 2FA for Lync, Exchange, and other on-premises applications for a while now. Exchange supports the use of smart cards for authentication with Outlook Anywhere and OWA, and various third parties such as RSA have shipped authentication solutions that support other factors, such as one-time codes or tokens. Lync, however, has been a little late to the party. With the July 2013 release of Lync Server 2013 CU2, Lync supports the use of smart cards (whether physical or virtual) as an authentication mechanism. Recently I became aware that some Lync features aren’t available when the client authenticates with a smart card– that’s because the client talks to two different endpoints. It authenticates to Lync using two-factor authentication, but it can’t currently authenticate to Exchange with the same smart card, so services that depend on Exchange Web Services (EWS) won’t work. The docs say this is “by design,” which I hope means “we didn’t have time to get to it yet.”

This limitation means that Lync 2013 clients using 2FA can’t use several features, including:

  • the Unified Contact Store; you’ll need to use Invoke-CsUcsRollback to disable UCS access for 2FA users if you’ve enabled it (see the sketch after this list)
  • the ability to automatically set presence based on the user’s calendar state; the Lync client will no longer set your presence to “out of office,” “in a meeting,” and so on based on what’s on your calendar (presence that indicates call states, such as “in a conference call,” still works)
  • integration with the Exchange-based Conversation History folder; if you’ve configured Exchange 2013 as a server-side archive for Lync, that still works
  • access to high-definition user photos
  • the ability to see and access Exchange UM voicemail messages from the Lync client
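
For that first item, the rollback itself is a one-liner per user. A hedged sketch — the pool name is a made-up placeholder, and you should test against a single pilot user before doing anything in bulk:

# Move the affected users' contacts back from the Unified Contact Store
# (Exchange 2013) to Lync. Sketch only: lyncpool.contoso.com is made up.
$affectedUsers = Get-CsUser -Filter { RegistrarPool -eq 'lyncpool.contoso.com' }

foreach ($user in $affectedUsers) {
    Invoke-CsUcsRollback -Identity $user.Identity -Confirm:$false
}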

These limitations weren’t fixed in CU3, but I am hopeful that a not-too-distant future version of the client will enable full 2FA use. In the meantime, if you’re planning on using 2FA, keep these limitations in mind.

1 Comment

Filed under General Tech Stuff, UC&C

Need Windows licensing help? Better call Paul

No, I’m not giving it. That would be like me giving advice on how to do a pencil drawing, or what wine goes with In-N-Out Burger.

A year or so ago, I had a very complex Windows licensing question that Microsoft was unable to answer. More to the point, no two Microsoft people were able to give me the same answer. I did a little digging and found Paul DeGroot of Pica Communications, author of the only book on Microsoft licensing that I know of. Paul quickly and clearly answered my question, and a couple of rounds of follow-up questions after that. Armed with his information, I was able to solve my particular problem in a less expensive, less painful way than just buying all the licenses. As I was cleaning out my inbox, I found our discussion and remembered, guiltily, that I had meant to mention Paul’s services earlier. Under the banner of “better late than never,” consider this a belated, and strong, recommendation.

Leave a comment

Filed under General Tech Stuff, UC&C

PC reliability: Apple, Dell, and lessons for servers?

Via Ed Bott, a fascinating article on the real-world reliability of Windows 7 and Windows 8 PCs: Want the most reliable Windows PC? Buy a Mac (or maybe a Dell). You should read the article, which outlines a report issued by Soluto, a cloud-based PC health and service monitoring company. The report analyzes data reported to Soluto’s service by customers to answer the question of which manufacturer’s PCs are the most reliable. Apple’s 13″ MacBook Pro comes out on top, with Acer’s Aspire E1-571 second and Dell’s XPS 13 third. In fact, out of the top 10, Apple has two spots, Acer has two, and Dell has five. Ed points out that it’s odd that Hewlett-Packard doesn’t have any entries in the list, and that Lenovo (which I have long considered the gold standard for laptops not made by Apple) has only one.

The report, and Ed’s column, speculate on why the results came out this way. I don’t know enough about the PC laptop world to have a good feel for how many of the models on their list are consumer-targeted versus business-targeted, although the cost figures they include provide some clues. There’s no doubt that the amount of random crap that PC vendors shovel onto their machines makes a big difference in the results, although I suspect that the quality of vendor-provided drivers makes a bigger one. Graphics drivers are especially critical, since they run in kernel mode and can easily crash the entire machine; the bundled crapware included by many vendors strikes me as more of an annoyance than a reliability hazard (at least in terms of unwanted reboots or crashes).

The results raise the interesting question of whether there are similar results for servers. Given that servers from major vendors such as Dell and H-P come with very clean Windows installs, I wouldn’t expect to see driver issues play a major part in server reliability. My intuition is that the basic hardware designs from tier 1 vendors are all roughly equal in reliability, and that components such as SAN HBAs or RAID controllers probably have a bigger negative impact on overall reliability than the servers themselves– but I don’t have data to back that up. I’m sure that server vendors do, and equally sure that they guard it jealously.

More broadly, it’s fascinating that we can even have this discussion.

First of all, the rise of cloud-based services like Soluto (and Microsoft’s own Windows Intune) means that we now have data that can tell us fascinating things. I remember that during the development of Windows Server 2003, Microsoft spent a great deal of effort persuading customers to send in crash dumps for analysis. The analysis revealed that the top two causes of server failures were badly behaved drivers and administrator error. There’s not much Microsoft can do about problem #2, but they attacked the first problem in a number of ways, including restructuring how drivers are loaded and introducing driver signing as a means of weeding out unstable or buggy drivers. But that was a huge engineering effort led by a single vendor, using data that only they had– and Microsoft certainly didn’t embarrass or praise any particular OEM based on the number of crashes their hardware and drivers caused.

Second, Microsoft’s ongoing effort to turn itself into a software + services + devices company (or whatever they’re calling it this week) means that they are able to gather a huge wealth of data about usage and behavior. We’ve seen them use that data to design the Office fluent interface, redesign the Xbox 360 dashboard multiple times, and push a consistent visual design language across Windows 8, Windows Phone 8, Xbox 360, and apps for other platforms such as Xbox SmartGlass. It’s interesting to think about the kind of data they are gathering from operating Office 365, and what kind of patterns that might reveal. I can imagine that Microsoft would like to encourage Exchange 2013 customers to share data gathered by Managed Availability, but there are challenges in persuading customers to allow that data collection, so we’ll have to see what happens.

To the cloud…

1 Comment

Filed under General Tech Stuff, UC&C

Loading PowerShell snap-ins from a script

So I wanted to launch an Exchange Management Shell (EMS) script to do some stuff for a project at work. Normally this would be straightforward, but because of the way our virtualized lab environment works, it took me some fiddling to get it working.

What I needed to do was something like this:

c:\windows\system32\WindowsPowerShell\v1.0\powershell.exe -command "someStuff"

That worked fine as long as all I wanted to do was run basic PowerShell cmdlets. Once I started trying to run EMS cmdlets, things got considerably more complex because I needed a full EMS environment. First I had to deal with the fact that EMS, when it starts, tries to perform a CRL check. On a non-Internet-connected system, it will take 5 minutes or so to time out. I had completely forgotten this, so I spent some time fooling around with various combinations of RAM and virtual CPUs trying to figure out what the holdup was. Luckily Jeff Guillet set me straight when he pointed me to this article, helpfully titled “Configuring Exchange Servers Without Internet Access.” That cut the startup time waaaaay down.
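
For reference, the underlying fix is to tell the .NET runtime not to generate publisher evidence when powershell.exe starts, since that's what triggers the CRL lookup. Something like the sketch below — the generatePublisherEvidence element is the documented switch, but double-check the article above for the right config file and path for your Exchange and .NET versions before touching anything:

# Sketch: disable .NET publisher-evidence generation (the CRL check at
# startup) for the x64 PowerShell host. The config file path here is my
# assumption; verify the correct one for your environment first.
$configPath = "$env:windir\System32\WindowsPowerShell\v1.0\powershell.exe.config"
@'
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <runtime>
    <generatePublisherEvidence enabled="false"/>
  </runtime>
</configuration>
'@ | Set-Content -Path $configPath -Encoding UTF8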

However, I was still having a problem: my scripts wouldn’t run. They complained that “No snap-ins have been registered for Windows PowerShell version 2.” What the heck? Off to Bing I went, whereupon I found that most of the people reporting similar problems were trying to launch PowerShell.exe and load snap-ins from web-based applications. That puzzled me, so I did some more digging. Running my script from the PowerShell session that appears when you click the icon in the quick launch bar worked OK. Directly running the executable by its path (i.e. %windir%\System32\WindowsPowerShell\v1.0\powershell.exe) worked OK too… but the same thing failed when I did it from my script launcher.

Back to Bing I went. On about the fifth page of results, I found this gem at StackExchange. The first answer got me pointed in the right direction. I had completely forgotten about file system redirection, the WOW64 mechanism that transparently maps %windir%\System32 to %windir%\SysWOW64 for 32-bit processes, so that an x86 binary loads even when you supply the “wrong” path. In my case, I wanted the x64 version of PowerShell, but that’s not what I was getting, because my script launcher is a 32-bit x86 process. Whenever it launched PowerShell.exe, redirection handed it the x86 version, which can’t load x64 snap-ins and thus couldn’t run EMS.

The solution? All I had to do was read a bit further down in the StackExchange article to see this MSDN article on developing applications for SharePoint Foundation, which points out that you must use %windir%\sysnative as the path when running PowerShell scripts after a Visual Studio build. Why? Because Visual Studio is a 32-bit application, but the SharePoint snap-in is x64 and must be run from an x64 PowerShell session… just like Exchange.

Armed with that knowledge, I modified my scripts to run PowerShell using sysnative vice the “real” path and poof! Problem solved. (Thanks also to Michael B. Smith for some bonus assistance.)
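
In concrete terms, the change was tiny. Here's a sketch of the idea, shown from PowerShell (the snap-in name is Exchange 2010's; adjust it for your version):

# From a 32-bit process, System32 silently redirects to SysWOW64, so the
# "real" path yields x86 PowerShell, which can't load the x64 EMS snap-in.
# The sysnative alias bypasses WOW64 redirection and reaches the true
# 64-bit binary. (sysnative only exists when the caller is 32-bit.)
& "$env:windir\sysnative\WindowsPowerShell\v1.0\powershell.exe" -Command {
    Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010
    Get-ExchangeServer | Format-Table Name, ServerRole
}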

1 Comment

Filed under General Tech Stuff, UC&C

Why you should keep multiple backups

I spent the weekend a) in Huntsville with the boys and b) in a fog of cold medication. In fact, I called in sick to work today, which is really unusual for me. A coworker e-mailed me to ask for a couple of documents I’d written, and when I saw her e-mail (I called in sick, not dead, so I was still checking e-mail), I couldn’t find the files in SkyDrive on my Surface Pro. “Oh,” I thought. “I must have checked them in to our SharePoint site.”

Nope.

“Maybe they’re on my work desktop.” A quick RDP connection and… nope.

Now I was beginning to freak out a bit. I knew I had written these documents. I knew right where I’d left them. But they were nowhere to be found.

I went back to my MacBook Pro, which is sort of my desktop now… nope.

Then the fog lifted, oh so briefly, and I figured out what had happened.

For some reason, about a month ago, the SkyDrive client for OS X started pegging the CPU at random intervals. It was still syncing, most of the time, but when it started burning the CPU it would kick the MacBook Pro’s fans into turbo mode, so I started shutting the app off until I explicitly wanted it to sync. (This reminded me of the ancient technology known as Groove, but let us never speak of it again.) Eventually I got tired of this and started troubleshooting the problem. The easiest solution was to remove and reinstall the app, so I did. Before doing so, I made a backup copy of the entire SkyDrive folder, renamed it to “Old SkyDrive,” and let the newly installed app resync from the cloud. Then I deleted the old copy.

Fast-forward to today. I realized what had happened: the documents had been in the old SkyDrive folder, they never got synced, and now they were gone.

But wait! I do regular backups to Time Machine when I’m in Mountain View. I looked in Time Machine… nope.

“Oh, that’s right,” I muttered. “I created those files and ‘fixed’ SkyDrive last time I was in Huntsville.”

But wait! I also use CrashPlan! I fired up the app… nope.

Then I noticed the little “Show deleted files” checkbox. I checked it, typed in the name of the files I wanted, and in 90 seconds had all of them restored to my local disk.

So, the moral of the story is: a) make backups and then b) make backups of your backups. Oh, and go easy on the Benadryl.

Leave a comment

Filed under General Tech Stuff

Coming soon: do-it-yourself armed drones

I recently finished Daniel Suarez’s excellent thriller Kill Decision. The major plot point: parties unknown have been releasing autonomous, armed drones that are killing people in a variety of ways. The drones are capable of insect-level intelligence and swarming behavior, and of autonomously finding human targets and bombing or shooting them. Suarez asks a fairly provocative question: would America’s love affair with drones change if other countries, or criminal syndicates, or even individuals had them and used them as freely in the US as we use them elsewhere? Great plot, well-written, and solid characterizations– by far the best of his books so far. Highly recommended.

Anyway, with that in mind, I saw an article on the Lawfare blog about a guy who equipped an inexpensive commercial drone with a paintball marker. This video shows it in action, hitting targets easily while maneuvering slowly. The video’s a little fear-monger-y, but the narrator is right: “it seems inevitable” that these drones will be used in ways the manufacturer didn’t anticipate.  I sent the video to a couple of coworkers, one of whom asked “I wonder how hard it is to shoot accurately with it?” That got me to thinking… so off the top of my head, I jotted down a few factors that would affect the accuracy of a firearm-equipped drone. Note that here I’m talking about an autonomous UAV, not a remotely-piloted, man-in-the-loop drone. 

  • What’s it for? What kind of range and endurance do you need? It would be easy to build a sort of launch rack that would launch a drone to check out a target that triggered a tripwire, motion detector, etc. It’d be a little harder to build one that could autonomously navigate, but definitely doable– as Paul proved with his Charlie-following project. See also: the Burrito Bomber, which can follow waypoints and then deliver a payload on target. Drones meant to sneak into somewhere and snipe a single target would have different range/payload requirements than a patrol or incident-response drone. The mission drives the weight of the drone (since more range requires more fuel).
  • What’s it packing? The purpose of the drone dictates what kind of firearm you want it to carry. Some of Suarez’s drones had short-barrelled .38 pistols, which are plenty good enough to kill at close range but wouldn’t be very accurate past 35 feet or so. A longer barrel and a heavier round would provide better accuracy, at the cost of weight and size.
  • How much range do you need? A sniper drone that can shoot targets from 1,500 yards is definitely feasible– use a .50 Barrett, for example. It would be heavy and range-limited, though, unless you made it bigger. In general, heavier bullets are more stable in flight and give you better accuracy, but they add weight you have to carry.
  • How stable is the drone? A light drone that’s sensitive to wind, etc. will be harder-pressed to make accurate shots. Gyrostabilizing the gun platform would help, but it would add a weight and cost penalty (including for power for the gyros, plus the gyros themselves). The bigger the drone, the more sensors, power, and ammo you can carry… but the more noise, infrared, and visual signature it creates. A small sneaky drone may be a better deal than a large, more powerful one.
  • What can you see? In other words, what kind of sensors do you have for aiming? How good is their resolution and range? Do they have to be automated? If so, you need to be able to either fire at the centroid of the target or track interesting parts (like the wheels of a truck or a person’s head) using machine vision.
  • Where are you pointing the gun, and how accurate can you be? What kind of angular resolution does the gun-pointing system have? If you’re willing to slow to a dead hover, or nearly so, you can be very accurate (as in the video above). If you want to go faster, you’ll have a more challenging set of requirements– you have to be able to point the gun while the drone’s moving, and changing its aim point means fighting inertia in a way you don’t have to worry about in a hover. The quick bit of trigonometry after this list shows how tight the pointing budget gets.
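
To put numbers on that pointing question: miss distance is just range times the tangent of the pointing error. A quick sketch (my arithmetic, nothing drone-specific):

# Miss distance from angular pointing error: miss = range x tan(error).
function Get-MissDistanceM([double]$RangeM, [double]$ErrorDeg) {
    $RangeM * [math]::Tan($ErrorDeg * [math]::PI / 180)
}

Get-MissDistanceM -RangeM 10   -ErrorDeg 1      # ~0.17 m: a hit at paintball range
Get-MissDistanceM -RangeM 1372 -ErrorDeg 1      # ~24 m at 1,500 yards: hopeless
Get-MissDistanceM -RangeM 1372 -ErrorDeg 0.01   # ~0.24 m: what a sniper drone needs

That's why a dead hover plus a stabilized mount buys you so much: at any real range, pointing error, not ballistics, dominates the error budget.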

There are lots of other, more subtle considerations, I’m sure; these are just what I came up with in 5 minutes. Any engineer, pilot, or armorer could come up with a couple dozen more without too much effort. Of course, you could just buy a premade system like this one from Autocopter. Isn’t it great to know they’ll lease you as many UAVs as you need? Just for a ballpark figure, Autocopter quotes an 8 kg payload on their smallest drones– figure 3 kg for a cut-down M4, and that leaves you a reasonable 5 kg for sensors, guidance, navigation, and control.

What could you do with such drones? The mind boggles. Imagine that, say, your favorite Mexican drug cartel cooked up a bunch of these in their machine shops and used them to guard the pot farms they run in national forests. Or say the white-supremacy militia guys in Idaho built some for sovereign defense. Or suppose you built 100 or so of them, staged them inside an empty 18-wheeler with a tarp over the top, then launched them into Candlestick Park during a 49ers game. There are all sorts of movie-plot-worthy applications for these drones, to say nothing of the ones Suarez wrote about.

Meanwhile, the February 2013 NASA Aviation Safety Reporting System (ASRS) newsletter is full of safety reports filed after drones got into airspace where they weren’t supposed to be… and these were piloted, unarmed drones. How careful do you think these hypothetical armed drones would be about respecting the National Airspace System? I think I’ll be extra careful when flying around… that smudge on the windscreen might turn out to be an armed autonomous drone.

Leave a comment

Filed under aviation, General Tech Stuff, Musings

Surface Pro first impressions

Saturday morning I decided, more or less on the spur of the moment, to try to grab a Surface Pro and try it out. This follows a well-established pattern; I wasn’t going to buy an Xbox 360 when it first came out, or an iPhone, and yet somehow on launch day I ended up with both of those. Anyway…

After some fruitless searching, Tom and I found a local Staples that had a 64GB Surface Pro. This was no mean trick, because Huntsville doesn’t have a Microsoft Store (I know, right?) and the local Best Buys got zero stock. In fact, as far as I could tell, none were shipped to stores in Memphis, Nashville, Birmingham, or Atlanta. I’m betting that at least the Atlanta region got a handful, but those sold out. Anyway, my local Staples stores apparently got one 64GB unit apiece, so I went out and grabbed one. Total cost with the Type Cover and sales tax was $1111.

This isn’t a review; it’s more a collection of observations, since I don’t have time at the moment to string them together into a coherent narrative. Thus this post is worth what you’ve paid for it 🙂

The hardware build quality is superb. It’s true that the device is thicker and heavier than an iPad, but it’s much lighter and smaller than my 15″ MacBook Pro. I was able to comfortably use it on my lap while on the sofa. One thing I didn’t expect: the Type Cover flexes more than I thought it would. I guess I hit the keys hard or something. This was a little disconcerting at first. The kickstand works very well, and I’ve gotten used to the odd feel of having the Type Cover folded around the back of the unit.

Setup was simple: I signed in with my Microsoft account and it synced all of my profile information. SkyDrive works beautifully, as do all the other Microsoft services (notably Xbox LIVE). I’m glad to have multiple accounts on the device, because the kids cannot get enough of playing with it. They’re used to the iPad and don’t think of it as remarkable, but all of them are fascinated by the Surface. David’s used it for two homework assignments– in preference to his Win7 laptop; Tom is fascinated by the pen interface; and Matt likes that he can play all the Flash-based games that don’t work on the iPad.

The Surface Pro is fast. It boots fast, apps run fast, and the UI performance is “fast and fluid,” to coin a phrase. It does have a fan, and in a silent room you can hear it when it kicks in, but it’s not obtrusive– it’s quieter than the fans in my MBP, for example. 

Battery life? Haven’t tested it, don’t much care. If I want to just browse and watch, I’ll use the iPad, with its excellent battery life. The Surface Pro is an adjunct to, and replacement for, my “real” laptop, which means a 4-5 hour battery life will suit me just fine. I do want to see whether I can charge it with my external 10Ah battery (the excellent RAVPower Dynamo), though I’ll need an adapter.

Setting up VPN access to my office network was trivial. Lync MX won’t work until I get some more server-side plumbing set up. I tried to sign in to the desktop version of Lync 2013 and couldn’t because I didn’t have the necessary server certificate– but going to the Windows Server CA page with IE 10 resulted in a message from the server telling me that my browser couldn’t be used to request a certificate, even though all I wanted to do was download the CA chain. I’ll have to look into this.

And speaking of desktop access: I was easily able to turn on RDP access and hit the tablet from my Mac, but there’s a bug in CoRD that sometimes makes the cursor disappear. I haven’t tried Microsoft’s (lame and poorly maintained) RDP client, nor have I tried RDP from a Windows machine. Just to see what would happen, I plugged the cable from my desktop monitor into the Surface Pro’s Mini DisplayPort and immediately got a beautiful, mirrored 1920 x 1080 desktop, as expected.

As many other reviewers have noted, it’s a little disorienting at first to have two separate environments: desktop and Metro. However, since I can Alt+Tab to switch between apps, in practice that has been absolutely no problem for me. The lack of a Start menu is a bit aggravating, but again, there’s an easy solution: tap the Windows key and start typing. Problem solved.

One night, I sat on the sofa using Word 2013 on the Surface Pro to revise a book chapter. This worked very well; I much prefer the UI of Word 2013 to Word 2011 on the Mac. I didn’t try using any pen input as part of my editing workflow, although that’s on my to-do list.

The smaller physical size of the Surface Pro compared to the MBP is a great asset; I’m looking forward to using it on commercial flights. The Ars Technica review shows the Surface as having a larger footprint than the MBP, but that ignores the fact that you have to open the MBP to use it, and when you do, the screen won’t be at 90° to the bottom– it’ll be tilted further back, which is where the footprint problem comes from. In that configuration the MBP screen impinges on the seatback space, which is how laptops get broken by reclining seats.

I tried running Outlook 2013, flipping out the kickstand, and using the Surface as a calendar display sitting next to my main screen. It’s a fantastic size to use as an adjunct display like that; I could have multiple browser windows (American, Delta, and kayak.com) plastered all over my main 2560 x 1440 desktop and still have glanceable calendar access.

Bottom line: I’m well pleased with the Surface Pro so far and will be swapping out my 64GB unit for a 128GB unit as soon as I can find one in stock.

1 Comment

Filed under General Tech Stuff, Reviews

Skype automatic updates: in-app vs Microsoft Update

One of the problems I most often run into when working with Windows machines is the way updates work. Microsoft has made great strides in improving the update experience for Windows and for Microsoft applications; compared to the steaming pile of filth that is Adobe’s updater, for example, or the mishmash of every-app-its-own-update-client behavior common on the Mac, Microsoft Update is pretty smooth.

But what about Skype? It’s now a Microsoft application, so you’d expect it to receive updates through Microsoft Update… and it does. However, it also has its own update mechanism. What gives?

Here’s a solid explanation, which Doug Neal of the Microsoft Update team was kind enough to let me republish (my comments are italicized):

While we’re still working through the best way to complement the updating system available via Skype, here are some insights that may explain the differences:

Skype 5.8 [released nearly a year ago] introduced a Skype-based auto-updating feature unrelated to any Microsoft technology (and before knowledge of the merger). This updating service will remain for the foreseeable future – and is Skype’s method of offering updates on a more frequent basis than Microsoft Update. These settings, and consent to update via Skype’s updating service, can be controlled via the Skype | Options | Automatic Updates setting – which also provides a link to more information on Skype’s updating approach. Updates delivered via the Skype updater can include major and incremental updates. [In other words, the Skype app can pull both minor updates and entire new versions through its built-in update mechanism, as do many other third-party apps on Windows and OS X.]

As a new addition to the products supported via Microsoft Update, only major versions of Skype are made available via MU. Consent to automatically update via Microsoft Update is granted via the Microsoft Update opt-in – the same opt-in experience available via Windows Control Panel | Windows Update | Change Settings. [So MU may offer you major versions, which is useful if you don’t know about…]

So, updating Skype via Skype’s updating service is controlled from within the Skype application. This updating experience may include various Skype-specific reminders and prompts that a newer update is available. Turning off updates here will reduce the number of incremental updates your Skype client receives, assuming Microsoft Update is still enabled to provide the major, less frequent updates to Skype.

Updating via Microsoft Update will only occur for major versions and is controlled within the Windows Update control panel – the same place as all Microsoft product updates. Turning off Microsoft Update is not recommended – it will prevent any updates from Microsoft for all 60+ products supported via Microsoft Update, including security updates. The updating experience for Skype will be the same as for all other Microsoft Updates: unmanaged consumer PCs will see these as Important updates with no UI (applied automatically), and managed PCs will get them via WSUS/SCCM admin approval.

Having a single update mechanism, à la the iTunes App Store and the Windows Phone Marketplace, certainly seems to be the best model for end users: all app updates are packaged and available on demand in a single location. On the other hand, putting the responsibility for applying security-critical updates in the hands of end users, instead of in a centralized patch management system driven by WSUS or equivalent, is a terrible idea for the enterprise. Having a hybrid approach like this is a compromise, albeit an unintentional one, that may deliver the better aspects of each approach. Long-term I’d like to see the major OS vendors offer a flexible method of combining both vendor-specific OS/app updates with opt-in updates provided by third parties– something like the existing Marketplace combined with the controls and reporting in WSUS would be ideal. Here’s hoping…

Leave a comment

Filed under General Tech Stuff, Security

Thursday trivia #84

  • Joel Gascoigne has some interesting advice about the value of setting a morning routine. I haven’t been successful in doing that lately, but the benefits are sure appealing.
  • Amazon’s new AutoRip service is very cool: buy a physical CD and it shows up in your Amazon Cloud Player. The best part: it’s retroactive, so CDs you bought from Amazon in the past are automatically included.
  • “Why the Gun Is Civilization.” Read it and tell me in the comments whether you find it persuasive.
  • If your doctor carries a purse, you should be very afraid. (Bonus: now I know what “fomite” means.)
  • Hey, the Lenovo A720 (which seemed to have gone missing over the holidays) is back, in a single configuration, at Lenovo’s site.
  • Protip: if you use the Lync 2010 topology builder to add a new Lync standard edition server to your topology, do not then try to use the Lync 2013 deployment wizard to install Lync 2013 on it unless you like swearing and error logs.
  • This year is the 150th anniversary of both the Emancipation Proclamation and the London Underground.
  • Nearly done with the unified messaging chapter for the book– it’s a game of incremental progress, but I’m slowly getting back into the groove.

Leave a comment

Filed under General Tech Stuff

CrashPlan “Cannot connect to backup engine” errors on Mac OS X

I recently updated to Java 1.7 for work, and after doing so I noticed that CrashPlan was no longer performing backups. (I’m a bit ashamed to admit how long it took me to notice, though!) The company’s support forum suggests uninstalling and reinstalling the client, which didn’t fix the problem. A bit more searching identified the cause: CrashPlan expects Java 1.6, the official Apple version, and it gets unhappy if you replace that with 1.7. The instructions here outline a workaround: stop the CrashPlan background service, modify its configuration file to point to the official Apple version of Java, and then restart the service. Happy backups!

Leave a comment

Filed under General Tech Stuff

Microsoft wins UK case vs Motorola Mobility/Google

Earlier this year I had the unique (to me) opportunity to serve as a technical expert witness in a court case in the UK. Tony’s already written about the case but I wanted to add my perspective.

I was contacted by Bird and Bird to see if I might be willing and able to act as a technical expert in a court case; that’s all they said at first. The nature of the questions they were asking soon clued me in that the case involved Exchange ActiveSync and multiple-point-of-presence (MPOP) support for presence publishing– two completely separate technologies which Motorola/Google had lumped together in this case.

My role was to perform a wire-level analysis of the protocols in question: EAS, SIP/SIMPLE as implemented in Lync, and the Windows Live Messenger protocol. For each of these protocols, my job was to produce an accurate, annotated packet capture showing exactly what happened when multiple devices synchronized with the same account, and when the status on one device changed.

This isn’t what most people think of when they think of expert testimony; in courtroom dramas and books, it always seems like the expert is being asked to provide an opinion, or being cross-examined on the validity of that opinion. No one wanted my opinion in this case (which is perfectly normal); they just wanted me to accurately and impartially report what was happening on the wire.

This proved to be incredibly interesting from a technical standpoint. Like most administrators, I’d never really had occasion to look into the depths of the EAS protocol to see exactly what bits were being passed on the wire. After a great deal of study of the ActiveSync protocol documentation, and many a late night slaving away over Wireshark and Network Monitor captures, I produced a report showing the actual network traffic that passed between client(s) and server for a variety of test scenarios, along with an explanation of the contents of the packets and how they related to user action on the device.

Along the way, I gained a new appreciation for the economy of design of these protocols– it’s surprising how efficient they are when you look at them at such a low level. (And a shout out to Eric Lawrence for his incredibly useful Fiddler tool, which made it much easier for me to get the required data into a usable format.) I found a few bugs in Wireshark, learned more than I wanted to about SSL provisioning on Windows Phone 7.5 devices, and generally had a grand time. I particularly enjoyed working with the attorneys at Bird and Bird, who were quite sharp and had lovely accents to boot. (I’m not sure they enjoyed my accent quite as much, but oh well.)

When I finished my report, I submitted it to Bird and Bird, and that was the last I heard of the case until today, when Mr. Justice Arnold issued his ruling. My report was submitted as part of Microsoft’s explanation of why its implementations did not infringe Motorola’s patent; the purpose of the annotated packet captures was to clearly illustrate the differences between the innovations claimed in the patent and Microsoft’s implementation, to show why Microsoft wasn’t infringing.

Florian Mueller has a good summary of the case that highlights something I didn’t know: the patent at issue is the only one on which an Android manufacturer is currently enforcing an injunction against Apple. I am no patent attorney, but it would seem that Apple might have grounds to have that injunction lifted. It will be interesting to see what happens in the related German court cases that Mueller cites, but it’s hard for this layman to see any likely result other than a Microsoft win… but we will see.

Leave a comment

Filed under General Tech Stuff, UC&C