The Best and Worst of PDC 2010

I attended Microsoft’s PDC 2010 simulcast in their offices in Alpharetta, Georgia.  Here are my thoughts on the best and worst items from the day.

The Best of PDC 2010

  1. [Re]connecting with Friends & Colleagues – The best part of the day was seeing colleagues I haven’t seen in a while and talking to new people.  Randy, Doug, Jeremy, David, Chad – it was great seeing you guys.  Isaac, I enjoyed meeting you – best wishes in your new role.  The guy from India whose name I didn’t catch – our conversation was very interesting and raised some important points – thanks!
  2. Mark Russinovich on Azure – OK, I know this is old news, but Mark has so much geek cred that seeing him in action with Azure boosts my confidence in the platform.  Previously, I simply had to believe that Microsoft cared about making the guts of the platform accessible and diagnosable for developers.  Now I know there’s a guy in place who will make sure that happens.
  3. Async in C# 5.0 – I didn’t see much of Anders Hejlsberg’s session, but what I did see was striking.  He demonstrated the new async and await keywords, which let the compiler generate asynchronous, potentially multithreaded code under the covers.  Of course multithreading is nothing new, and the Parallel framework in .NET 4.0 already produces multithreaded code, but the addition of these keywords and compiler support will deliver dramatic performance improvements at much lower development cost.
  4. Component Applications – Karandeep Anand’s presentation on AppFabric announced Azure’s upcoming Component Application features, which provide modeling capabilities for connecting various services together (inside Azure or not) and treating them all as a cohesive whole.  In the demo, adding and connecting a new service generated all the glue code between the services.  These capabilities will reduce development time and the risks of hand-written plumbing code.
  5. Azure Management Portal – As with every Azure demo I saw, Karandeep used a new, Silverlight-based management portal, and it was really powerful.  The best example of ‘really powerful’ was during his presentation regarding Component Applications.  Karandeep was modeling several different services on a collection of Azure instances and he could deploy all of it atomically with a single action!  No more deploying individually.  Very cool and big time savings to boot!
  6. Windows VM hosting – Microsoft’s focus on the PaaS side of Azure has been the right course, but migrating existing apps to the cloud almost always involves some software that needs to remain as it is (due to cost, complexity, ‘it just works,’ etc.).  Now Azure will have true IaaS-level virtual machine support, though only for specific versions of Windows Server 2003 and 2008 R2.
  7. RDP to Azure Instances – In the “They Have to Do This Eventually” category, Microsoft has finally announced the ability to use Remote Desktop to access instances and VMs in Azure.  Beta coming by end of year.
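A quick illustration of the async item above: I can’t reproduce Anders’ C# 5.0 demo code, but the same “sequential-looking asynchronous code” shape exists elsewhere.  Here is a rough Python analogue (Python’s asyncio, with illustrative names; this is not the C# compiler transformation he showed):

```python
import asyncio

async def fetch_total(order_id: int) -> float:
    # Stand-in for a slow I/O call (network, database, etc.).
    # While this coroutine is suspended, other work can run.
    await asyncio.sleep(0.01)
    return order_id * 10.0

async def main() -> float:
    # Reads like sequential code, but each await yields control
    # instead of blocking a thread.
    a = await fetch_total(1)
    b = await fetch_total(2)
    return a + b

print(asyncio.run(main()))  # 30.0
```

The point of the keywords is exactly this readability: the calls suspend instead of blocking, yet the code reads top to bottom.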

The Worst of PDC 2010

  1. Buffering! My biggest disappointment of the day was that we seemed to be watching the simulcast feed over the public internet, just as we would have at our own offices.  Every session I saw encountered long pauses due to buffering.  Some sessions were so bad that people just left the room to do something (anything!) valuable with their time.  It was painful! Can’t Microsoft deliver these feeds over their WAN with some QoS support?  Many people in my sessions agreed that this is a make-or-break issue for attending next year.
  2. Azure Management Portal Not Available Until H1-2011 – The new portal will save every Azure developer a lot of time.  But, its first availability, even as a CTP, is set for the entirely nebulous “first half of 2011.”  In case you’re new to this game, that statement roughly translates to:
    • Q1 of 2011 – Wow!  You should be pleasantly surprised (shocked) if it is available in Jan, Feb or Mar
    • Q2 of 2011 – Microsoft’s internal target is probably in this quarter, and probably more toward the end.  No need to be shocked in this time-frame; pleasantly surprised is appropriate.
    • H2 of 2011 – Right, H2 is not part of H1; they don’t even overlap.  But hey, schedule slips happen!
  3. Component Application features, CTP in H1-2011 – Same as above; disappointing that we may not even get to CTP it prior to next summer.  Schedule slips are probably a higher risk for this component than the portal.
  4. Silver-what? There was a noticeable lack of Silverlight references or discussion.  If you took your guidance from yesterday’s keynote and other general sessions, you’d drop Silverlight and immediately switch to HTML5.  Silverlight still has a great future, and Microsoft continues to invest in it; so should you.

Well, that’s about it.  I hope this has been helpful or informative in some way.  If you attended PDC 2010 and think I missed a best or worst, just comment below.  If you think my Best and Worst items or commentary are wrong-headed, post a comment.  If you think I’m a lunatic, well just join the crowd.


Clemens Vasters’ Scheduler-Agent-Supervisor Pattern

Microsoft’s Clemens Vasters published a pattern the Azure team is using, which he calls the Scheduler-Agent-Supervisor pattern (http://blogs.msdn.com/b/clemensv/archive/2010/09/27/cloud-architecture-the-scheduler-agent-supervisor-pattern.aspx).

This pattern is reminiscent of the solution we architected and implemented a few years ago (at a much smaller scale, obviously).  We used SQL Server for persistent storage of pipelines of step-wise tasks, .NET-based agents, and a good ol’ Windows Service to execute the pipelines.  The service polled for pipelines awaiting work, instantiated the appropriate agents, and provided context and threads for agent execution.  After quite a bit of rigorous stress and negative testing, the system proved to be very efficient.  It was also very adaptable to other purposes, since the agents implemented the business logic: new agents, or existing agents used in different ways, could provide different functionality.
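For the curious, that polling loop can be sketched in a few lines.  This is a toy Python stand-in (an in-process queue in place of the SQL Server pipeline tables; all names are illustrative, not our actual code):

```python
import queue
import threading

work = queue.Queue()   # stand-in for the table of pending pipeline steps
results = []

def agent(task):
    # Agents own the business logic; new functionality means new agents.
    results.append(f"done:{task}")

def scheduler_loop():
    # Poll for pipeline steps awaiting work and hand each to an agent thread.
    while True:
        task = work.get()
        if task is None:          # shutdown sentinel
            break
        t = threading.Thread(target=agent, args=(task,))
        t.start()
        t.join()                  # toy version: one step at a time

for step in ["validate", "price", "ship"]:
    work.put(step)
work.put(None)
scheduler_loop()
print(results)  # ['done:validate', 'done:price', 'done:ship']
```

The real service ran many pipelines concurrently, of course; the join-per-step here just keeps the sketch deterministic.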

Ok, enough of what we did before.  Clemens’ post builds on the Scheduler-Agent pattern by adding a Supervisor.  This Supervisor appears to be a sort of über-agent which is scheduled to check for problems in the pipelines (i.e., lists of tasks) and attempt to correct them.  Clemens writes,

“If the agent keels over and dies as it is processing the step [pipeline task] (or right before or right after), it is obviously no longer in a position to let the scheduler know about its fate.”

In these cases, the pipeline is likely in an indeterminate state.  The scheduler has handed responsibility off to the agent, but the agent hasn’t reported that it finished its work (or even that it encountered an error).  Enter the Supervisor.  Just look for incomplete work and start it over, right?  Well, it’s not as simple as it seems at first.  What if the agent had calculated and added tax to a total amount and then died just before reporting that it was done?  The Supervisor shouldn’t trigger this agent to execute again, because the tax would be added a second time.  This particular case is very simple, but you get the point.  The entire Scheduler-Agent-Supervisor pattern requires a level of atomicity in various areas to reduce these risks.
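One common way to reduce that risk, sketched here in Python purely as an illustration (not Clemens’ code), is to make each agent step idempotent by recording completion together with the result it produces:

```python
def apply_tax(order: dict, rate: float = 0.07) -> None:
    # Idempotent agent step: the completion marker lives in the same
    # record as the result, so a supervisor-triggered retry is a no-op.
    if order.get("tax_applied"):
        return
    order["total"] = round(order["total"] * (1 + rate), 2)
    order["tax_applied"] = True

order = {"total": 100.00}
apply_tax(order)
apply_tax(order)  # retry after a simulated crash; tax is not added twice
print(order["total"])  # 107.0
```

In a real system, the total and the tax_applied flag would be written in a single transaction (or a single row update), so a crash can never leave one without the other.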

In case anyone is curious to know how we dealt with this in our solution a few years ago, we used a “supervisor,” too:  a developer who found the broken agent in the database and reset it!  How’s that for sophisticated? 😉

Amazon’s Free Usage Tier

Yesterday Amazon announced its new AWS Free Usage Tier.  The announcement says that this tier will include:

  • 750 hrs / month of compute instance time
  • 750 hrs / month of Elastic Load Balancer time
  • 10 GB Elastic Block Storage
  • 5 GB of S3 Storage
  • 30 GB data transfer (caveats: 15 GB each for ingress & egress; CloudFront excluded)

Additional caveats or restrictions include:

  • Availability is limited to new AWS customers only
  • Ends one year after registration
  • Requires a valid credit card during registration
  • Only available with the smallest compute instance, the micro Linux EC2 instance

So, is this a really big deal?  Is it reshaping cloud computing?  Well, not really.  Some quick calculations indicate that the offering is worth about $450 over 12 months, but that assumes leveraging each aspect to its full free level (e.g., using 5 GB of S3 Storage each month).
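For reference, here is roughly how that figure falls out.  The prices are my recollection of AWS’s current list rates (micro Linux at ~$0.02/hr, ELB at ~$0.025/hr, EBS at ~$0.10/GB-month, S3 at ~$0.15/GB-month, transfer out at ~$0.15/GB), so treat them as approximations:

```python
# Approximate monthly value of the free tier at assumed list prices.
monthly = (
    750 * 0.02     # micro Linux instance hours
    + 750 * 0.025  # Elastic Load Balancer hours
    + 10 * 0.10    # EBS storage, GB-months
    + 5 * 0.15     # S3 storage, GB-months
    + 15 * 0.15    # outbound data transfer, GB (inbound is cheaper)
)
print(round(monthly * 12, 2))  # roughly 453 over 12 months
```

Note the big assumption baked in: you only get the full value if you max out every line item every month.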

That’s no small amount for many individual developers.  Developers working in companies or on teams will probably find the limitations too tight to be useful in their environment.

From a competitive standpoint, Amazon’s new offer isn’t much different from Microsoft Azure’s Introductory Special offering.  Azure also has additional month-to-month plans and developer subscriptions.  (Full disclosure: I haven’t researched AWS developer pricing options.)

UPDATE: Also see PK’s post on this topic


Azure’s Global DNS changes coming tonight!

If you have any production Azure systems or cloud components you depend on (overnight testing, etc.), you should keep an eye on tonight’s DNS switch-over.

As the blog posting on 9/24 announced, Azure’s DNS is “…moving to a new globally distributed infrastructure…” to “…increase performance and improve reliability….”  This change is set to occur at midnight UTC on 10/5.

Midnight UTC on 10/5 maps to:

8:00 PM today (10/4) in Atlanta, New York, Boston, etc.

5:00 PM today (10/4) in Redmond, Silicon Valley, etc.


No Swimming for 30 Minutes After Azure DNS Changes

The Windows Azure team recently published Windows Azure Domain Name System Improvements.  If you only saw this on Twitter, or just took a cursory glance at the announcement, you probably thought, “Good.  Azure’s DNS will be faster,” and moved on to something else.  It’s important to note, however, that the announcement also includes some gotchas.

DNS Entry Propagation

Although DNS entry propagation delays are not new, you’ll need to keep the following issues in mind regarding Azure’s DNS infrastructure:

  1. New DNS entries may take up to 2 minutes to propagate through all of Azure’s global DNS system
  2. Systems that cache responses to DNS queries (such as web browsers, the Windows DNS Client service, etc.) typically cache responses for about 15 minutes.

So, What’s the Problem?

These issues can add up to significant pain when you are launching a new Azure application, service, etc.  The end users of the application or service may receive errors due to these timing issues, and first impressions are often lasting impressions.

The problem caused by the first item above is the more obvious one: when you create a new entry in Azure’s DNS, you should wait at least two minutes before expecting DNS queries to receive the right answer.  Any DNS query occurring inside this two-minute window may encounter an “entry not found,” “DNS Not Found” or similar error.  When this error occurs, it can be compounded by the second problem…

Most systems that query DNS (e.g., Windows’ DNS Client) also cache the response for a period of time – typically 15 minutes.  If a system queries DNS inside the two-minute window and then caches the resulting error, subsequent queries for that entry will typically be resolved from the cache and will return the error.  If you try to use the name in your web browser, for example, it may take as long as 15 minutes before you stop getting the cached error and actually get the right answer.
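If you must hit a just-registered name programmatically, a small retry wrapper can ride out the propagation window.  This is an illustrative Python sketch (the resolve callable is whatever lookup your client performs, e.g. socket.getaddrinfo; note it cannot help once a negative answer is already cached downstream):

```python
import time

def resolve_with_retry(resolve, name, attempts=5, delay=2.0):
    # Retry transient "name not found" errors while a new DNS entry
    # propagates.  socket.gaierror is a subclass of OSError.
    last_err = None
    for i in range(attempts):
        try:
            return resolve(name)
        except OSError as err:
            last_err = err
            time.sleep(delay * (i + 1))   # simple linear backoff
    raise last_err
```

For example, `resolve_with_retry(lambda n: socket.getaddrinfo(n, 80), "myapp.cloudapp.net")` — the host name here is hypothetical.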

You can manually flush the local DNS cache on Windows systems using ipconfig /flushdns, but be aware that this method does not impact other caching systems along the DNS resolution route.

Theoretically, an additional 15 minutes could be incurred if the route for DNS resolution changes between queries, but that scenario is very rare.

Risk Reduction

The best practice for reducing the risk of these errors occurring – and the resulting negative impressions – is to get your DNS house in order in advance.  Simply registering the appropriate names (DNS entries) well in advance of publishing the application or system should eliminate these issues.  Early registration also gives your team an opportunity to verify / test the name resolutions in advance.

In a situation in which we were forced to “live on the edge” and quick-publish software, we would wait at least 30 minutes before announcing that the software was ready to access.

OAuth & WRAP for Development Teams

We posted some code to CodePlex which smooths OAuth integration for development teams.

Good Explanation of Publishing Metadata for WCF

The blog post, Quick WCF Metadata Publication Walkthrough, is not new, but it gives a good explanation of how metadata publication works with WCF.  In particular, it provides a good understanding of the interactions and dependencies among <baseAddresses />, <endpoint /> and <behavior />.  If you follow its guidance and tinker around a bit, you’ll also get a good grasp of how to do IMetadataExchange through HTTP, named pipes or TCP.
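For reference, the pieces the walkthrough discusses fit together roughly like this in app.config.  This is a sketch from memory with placeholder names (MyNamespace.MyService, the ports, etc.), so check it against the post before relying on it:

```xml
<system.serviceModel>
  <services>
    <service name="MyNamespace.MyService" behaviorConfiguration="mexBehavior">
      <host>
        <baseAddresses>
          <!-- Endpoint and metadata addresses resolve relative to these -->
          <add baseAddress="http://localhost:8080/MyService" />
          <add baseAddress="net.tcp://localhost:8081/MyService" />
        </baseAddresses>
      </host>
      <endpoint address="" binding="basicHttpBinding" contract="MyNamespace.IMyService" />
      <!-- IMetadataExchange over HTTP -->
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="mexBehavior">
        <!-- Enables GET of the WSDL at the HTTP base address (?wsdl) -->
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>
```

Swapping the mex endpoint’s binding to mexTcpBinding or mexNamedPipeBinding (with a matching base address) is exactly the kind of tinkering the post encourages.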

Uninstalling a .NET Windows Service using InstallUtil gives “marked for deletion” error

If you are working with a .NET-based Windows service and have trouble uninstalling it, the problem may be due to allowing it to interact with the desktop. I used the Services admin tool to change the service’s security context to Local System.  When I made that change, I also enabled “Allow service to interact with desktop.”  Later, after making some code changes, uninstalling the service failed.  To uninstall, I used

InstallUtil.exe /u <service>.exe

but the uninstall failed saying that the service is “marked for deletion.”  After trawling around a bit, I found this message in the event log:

“The [service name] service is marked as an interactive service.  However, the system is configured to not allow interactive services.  This service may not function properly.”

So, Windows Server 2008 R2’s local policy must default to disallowing interactive services.  I disabled “Allow service to interact…” and the uninstall works again.

BTW, allowing a service to operate interactively is not a good idea in the first place.  I was just going to use it for some output / debugging during development.

A PaaS by any other name…?

David Smith (VP & Gartner Fellow) asks, “Is PaaS Passe yet?”  In his post he tries to deal with:

1) All “XaaS” terms are confusing — OK, he doesn’t really go after all of the XaaS terms, but what he says about PaaS applies generally.  There are no clear standards for what makes something IaaS, PaaS or SaaS.

2) Corporate marketers abuse the terms — Since there are no clear standards or delineations, “creative” people will use them “creatively.”  His example is a Red Hat PaaS announcement.  RH marketers can now include a PaaS check-mark on their collateral.  But, in reality (says Mr Smith), the RH technology is enabling PaaS – a far cry from actually being a PaaS.

3) License != Service — I do like his qualification that, “…if you need a traditional software license it’s not a service.”  And it is, therefore, not in the realm of XaaS.

4) Middleware in the Cloud — Mr Smith says that “middleware in the cloud” is what most people think of when they say PaaS.  I don’t really agree with that assessment, but that doesn’t really matter.  Distinguishing between PaaS and MaaS (or MWaaS) in the current climate is a distinction without much difference.  I don’t mind changing terminology and acronyms, but only if they actually improve the situation significantly.  Without clear distinctions or standards, new terms just become new marketing fodder.

Which Mobile Platform Benefits the Most by Oracle’s Java Lawsuit?

In a recent post regarding Oracle’s Java lawsuit against Google I mentioned the Chinese proverb, “May you live in interesting times.”  In the tech industry, this lawsuit certainly makes things “interesting!”

One question swirling around this suit is, “What effect will the outcome have on mobile phone OSs?”  Not to sound too conspiratorial, but it’s probably valuable to take a “follow the money” approach to this question.  Here’s a quick analysis of the platforms and how they may benefit (or not) from this situation.

  1. Google — Let’s dispense with this quickly.  Even on the high end of conspiracy theories, Google really doesn’t have much to gain in this situation.  The very existence of the legal battle will put a damper on the software industry’s investments in Android.
  2. Apple — The iPhone has been a huge success for Apple, even with its self-imposed support problems.  But Steve & Co. know that Android is eating their lunch (Android’s Mobile Web Consumption Share In The US Is Surging, iOS Share Dropping).  Oracle and Apple aren’t exactly direct competitors — Oracle can’t match Apple’s user-experience and marketing capabilities, and Apple can’t be bothered with such back-office primitives as databases, ERP, etc.  Seems like a great match, right?  But if Apple and Oracle are in league on this “lawsuit to beat up Google,” how does Oracle benefit?  There’s the (potential) direct Java benefit, but that doesn’t require Apple.  I doubt this is the behind-the-scenes reality, but be on the look-out for some kind of co-marketing campaign.
  3. Microsoft — It’s pretty clear that Microsoft needs all the help it can get in the mobile space.  The Windows Mobile platform has been a laggard for years (an eternity in the mobile market).  Mobile phone and computing trends indicate that, at best, Windows Mobile whispers “Don’t forget about me. I’m still here!”  Apple and Android together are the dominant players in the mobile and tablet markets.  Is it possible that Microsoft is in league with Oracle in order to put a big dent in the Apple-Android duopoly?  Microsoft’s Windows Phone 7 platform was released to production this week and, IMHO, is a Hail-Mary attempt at getting back in the mobile game.  In order to become a major contender in the mobility race, Microsoft has to succeed on many fronts, including getting mobile app developers to choose .NET over Java.  Raising FUD over Java’s future, licensing, etc. would certainly benefit .NET.  But…Oracle and Microsoft would be very strange bedfellows – very!
  4. IBM — This is the obligatory inclusion of IBM.  ‘Nuff said.
  5. RIM / Blackberry – Really?  I’m not going to spend much time on this possibility.  It seems to me that RIM’s market share is rapidly dwindling and they really don’t have anything to offer Oracle.
  6. Symbian — Who?  Yes, Symbian is still used by some mobile phones.  They have even less to offer Oracle than RIM does.
  7. Oracle — “What?” you ask.  “How could Oracle benefit in the mobile space?  They don’t even play in the mobile space.”  Right, but that may be the point.  Just as Microsoft is trying to re-enter the mobile space, Oracle needs to get in, too.  Maybe RIM or Symbian are working with Oracle and plan to take Oracle into the mobile space.  “Pretty thin” as Sgt. Murtaugh would say.
  8. VMware — Now that we’re in “pretty thin” territory, I’ll bring VMware into the picture.  We already know that it is working on virtualization solutions for mobile devices, and it put its money on Java by acquiring SpringSource.  VMware made a good strategic move in partnering with Salesforce.com on VMforce.  Could it be promising Oracle inclusion in its mobile plans in exchange for freedom to use Java + Spring in all arenas?

So, where does that leave us?  It seems to me that Apple and Microsoft have the most to gain in the mobile space from Oracle’s suit against Google.  Between the two, Microsoft seems the less likely bedfellow in this scenario.  But then again, do you remember when Microsoft kept Apple alive in the ’90s?

Hmmm.  Interesting times!