The Best and Worst of PDC 2010

I attended Microsoft’s PDC 2010 simulcast in their offices in Alpharetta, Georgia.  Here are my thoughts on the best and worst items from the day.

The Best of PDC 2010

  1. [Re]connecting with Friends & Colleagues – The best part of the day was seeing colleagues I haven’t seen in a while and talking to new people.  Randy, Doug, Jeremy, David, Chad – it was great seeing you guys.  Isaac, I enjoyed meeting you – best wishes in your new role.  The guy from India whose name I didn’t catch – our conversation was very interesting and raised some important points – thanks!
  2. Mark Russinovich on Azure – OK, I know this is old news, but Mark has so much geek cred & seeing him in action with Azure boosts my confidence in the platform.  I formerly just believed that Microsoft cared about making the guts of the platform accessible & diagnosable for developers.  Now I know there’s a guy in place who will make sure that happens.
  3. Async in C# 5.0 – I didn’t see much of Anders Hejlsberg’s session, but what I did see was striking.  He demonstrated the new async and await keywords, which let the compiler generate the asynchronous (and potentially multithreaded) plumbing under the covers; a minimal sketch follows this list.  Of course multithreading is nothing new, and the Parallel framework in .NET 4.0 already makes it easier to write, but these keywords and the compiler support behind them promise responsive, high-performance code at much lower development cost.
  4. Component Applications – Karandeep Anand’s presentation on AppFabric announced Azure’s upcoming Component Application features.  These features provide modeling capabilities for connecting various services together (inside Azure or not) and treating them all as a cohesive whole.  When he added and connected a new service, all of the glue code between the services was generated automatically.  These capabilities will reduce development time and the risks that come with hand-written integration code.
  5. Azure Management Portal – As with every Azure demo I saw, Karandeep used a new, Silverlight-based management portal, and it was really powerful.  The best example of ‘really powerful’ was during his presentation regarding Component Applications.  Karandeep was modeling several different services on a collection of Azure instances and he could deploy all of it atomically with a single action!  No more deploying individually.  Very cool and big time savings to boot!
  6. Windows VM hosting – Microsoft’s focus on the PaaS side of Azure has been the right course, but migrating existing apps to the cloud almost always involves some software that needs to remain as it is (due to cost, complexity, ‘it just works,’ etc.).  Now Azure will have true IaaS-level virtual machine support, though only for specific versions of Windows Server 2003 and 2008 R2.
  7. RDP to Azure Instances – In the “They Have to Do This Eventually” category, Microsoft has finally announced the ability to use Remote Desktop to access instances and VMs in Azure.  Beta coming by end of year.
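
For what it’s worth, here’s a minimal sketch of the async/await syntax referenced in item 3.  It is illustrative only: the class, URLs and HttpClient calls are mine (written against later .NET releases), not code from Anders’ demo, which used the Async CTP.

using System.Net.Http;
using System.Threading.Tasks;

class AsyncSketch
{
    // Each await frees the calling thread while the download runs;
    // the compiler generates the continuation plumbing behind the scenes.
    public static async Task<int> GetTotalLengthAsync()
    {
        using (var client = new HttpClient())
        {
            string home = await client.GetStringAsync("http://example.com/");
            string news = await client.GetStringAsync("http://example.com/news");
            return home.Length + news.Length;
        }
    }
}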

The Worst of PDC 2010

  1. Buffering! My biggest disappointment of the day was that it seemed like we were watching the simulcast feed over the internet, just as we would have at our own offices.  Every session I saw encountered long pauses due to buffering.  Some sessions were so bad that people just left the room to do something (anything!) valuable with their time.  It was painful! Can’t Microsoft deliver these feeds over their WAN with some QoS support?  Many people in my sessions agreed that this is a make-or-break issue for attending next year.
  2. Azure Management Portal Not Available Until H1-2011 – The new portal will save every Azure developer a lot of time.  But, its first availability, even as a CTP, is set for the entirely nebulous “first half of 2011.”  In case you’re new to this game, that statement roughly translates to:
    • Q1 of 2011 – Wow!  You should be pleasantly surprised (shocked) if it is available in Jan, Feb or Mar
    • Q2 of 2011 – Microsoft’s internal target is probably in this quarter, and probably more toward the end.  No need to be shocked in this time-frame; pleasantly surprised is appropriate.
    • H2 of 2011 – Right, H2 is not part of H1; they don’t even overlap.  But hey, schedule slips happen!
  3. Component Application features, CTP in H1-2011 – Same as above; disappointing that we may not even get to CTP it prior to next summer.  Schedule slips are probably a higher risk for this component than the portal.
  4. Silver-what? There was a noticeable lack of Silverlight references or discussion.  If you took your guidance from yesterday’s keynote and other general sessions, you’d drop Silverlight and immediately switch to HTML5.  Silverlight still has a great future, and Microsoft continues to invest in it; so should you.

Well, that’s about it.  I hope this has been helpful or informative in some way.  If you attended PDC 2010 and think I missed a best or worst, just comment below.  If you think my Best and Worst items or commentary are wrong-headed, post a comment.  If you think I’m a lunatic, well just join the crowd.


Clemens Vasters’ Scheduler-Agent-Supervisor Pattern

Microsoft’s Clemens Vasters published a pattern the Azure team is using which he calls the Scheduler-Agent-Supervisor pattern (http://blogs.msdn.com/b/clemensv/archive/2010/09/27/cloud-architecture-the-scheduler-agent-supervisor-pattern.aspx)

This pattern is reminiscent of the solution we architected and implemented a few years ago (at a much smaller scale, obviously).  We used SQL Server for persistent storage of pipelines of step-wise tasks, .NET-based agents and a good ol’ Windows Service to execute the pipelines.  The service polled for pipelines with work waiting, instantiated the appropriate agents for that work, and provided context and threads for agent execution.  After quite a bit of rigorous stress and negative testing, the system proved to be very efficient.  Separately, it was actually very adaptable to other purposes since the agents implemented the business logic — new agents, or existing agents used in different ways, could deliver different functionality.
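
To make that description concrete, here is a rough sketch of the shape of that design.  Every name below is hypothetical (our real implementation used SQL Server tables and different types); it only illustrates the polling-service/agent split.

using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical pipeline step record; ours lived in SQL Server.
public class PipelineStep
{
    public int Id;
    public string AgentType;    // .NET type name of the agent that implements this step
    public string Status;       // e.g., "Waiting", "Running", "Complete"
}

// Agents carry the business logic; new functionality means new agents.
public interface IAgent
{
    void Execute(PipelineStep step);
}

public class PipelineService
{
    private readonly List<PipelineStep> store = new List<PipelineStep>();  // stand-in for the database

    // The Windows Service called something like this on a polling interval.
    public void Poll()
    {
        foreach (PipelineStep step in store.FindAll(s => s.Status == "Waiting"))
        {
            PipelineStep current = step;   // local copy so each queued work item gets its own variable
            current.Status = "Running";
            IAgent agent = (IAgent)Activator.CreateInstance(Type.GetType(current.AgentType));
            ThreadPool.QueueUserWorkItem(_ => agent.Execute(current));   // provide a thread to the agent
        }
    }
}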

Ok, enough of what we did before.  Clemens’ post builds on the Scheduler-Agent pattern by adding a Supervisor.  This Supervisor appears to be a sort of über-agent which is scheduled to check for problems in the pipelines (i.e., lists of tasks) and attempt to correct them.  Clemens writes,

“If the agent keels over and dies as it is processing the step [pipeline task] (or right before or right after), it is obviously no longer in a position to let the scheduler know about its fate.”

In these cases, the pipeline is likely in an indeterminate state.  The scheduler has handed off the responsibility to the agent, but the agent hasn’t reported that it has finished its work (or even that it encountered an error).  Enter the Supervisor.  Just look for incomplete work and start over, right?  Well, it’s not as simple as it seems at first.  What if the agent had calculated and added tax to a total amount and then died just before reporting that it was done?  The Supervisor shouldn’t trigger this agent to execute again because the tax would be added a second time.  This particular case is very simple, but you get the point.  The entire Scheduler-Agent-Supervisor pattern requires a level of atomicity in various areas to reduce these risks.
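
As a hedged illustration of what that atomicity buys you (the types and names below are mine, not from Clemens’ post), one common safeguard is to make each step idempotent by recording a completion marker along with its side effect, so a Supervisor-triggered retry becomes a no-op:

// All names here are illustrative.
public class Order   { public decimal Total; }
public class TaxStep { public bool IsComplete; }

public static class TaxAgent
{
    public static void Execute(Order order, TaxStep step, decimal rate)
    {
        // If a previous run applied the tax but died before reporting back,
        // this marker keeps the retry from adding the tax a second time.
        if (step.IsComplete) return;

        order.Total += order.Total * rate;
        step.IsComplete = true;   // in a real system, persist this atomically with the new total
    }
}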

In case anyone is curious to know how we dealt with this in our solution a few years ago, we used a “supervisor,” too:  a developer who found the broken agent in the database and reset it!  How’s that for sophisticated? 😉

Amazon’s Free Usage Tier

Yesterday Amazon announced its new AWS Free Usage Tier.  The announcement says that this tier will include:

  • 750 hrs / month of compute instance
  • 750 hrs / month of Elastic Load Balancer
  • 10 GB Elastic Block Storage
  • 5 GB of S3 Storage
  • 30 GB data transfer (caveats: 15 GB each for ingress & egress; CloudFront excluded)

Additional caveats or restrictions include:

  • Availability is limited to new AWS customers only
  • Ends one year after registration
  • Requires a valid credit card during registration
  • Only available on the smallest compute instance, a micro Linux EC2 instance

So, is this a really big deal?  Is it reshaping cloud computing?  Well, not really.  Some quick calculations indicate that the offering is worth about $450 over 12 months, but that assumes leveraging each aspect to its full free level (e.g., using 5 GB of S3 Storage each month).

That’s no small amount for many individual developers.  Developers working in companies or on teams will probably find the limitations too tight to be useful in their environment.

From a competitive standpoint, Amazon’s new offer isn’t much different from Microsoft Azure’s Introductory Special offering.  Azure also has additional month-to-month plans and developer subscriptions.  (Full disclosure: I haven’t researched AWS developer pricing options.)

UPDATE: Also see PK’s post on this topic


Azure’s Global DNS changes coming tonight!

If you have any production Azure systems or cloud components you depend on (overnight testing, etc.), you should keep an eye on tonight’s DNS switch-over.

As the blog posting on 9/24 announced, Azure’s DNS is “…moving to a new globally distributed infrastructure…” to “…increase performance and improve reliability….”  This change is set to occur at midnight UTC on 9/5.

Midnight UTC on 9/5 maps to:

8:00 PM today (9/4) in Atlanta, New York, Boston, etc.

5:00 PM today (9/4) in Redmond, Silicon Valley, etc.


OAuth & WRAP for Development Teams

We posted some code to CodePlex which smooths OAuth integration for development teams.

A PaaS by any other name…?

David Smith (VP & Gartner Fellow) asks, “Is PaaS Passe yet?”  In his post he tries to deal with:

1) All “XaaS” terms are confusing — Ok, he doesn’t really go after all of the XaaS terms, but what he says about PaaS applies generally.  There are no clear standards for what makes something IaaS, PaaS or SaaS.

2) Corporate marketers abuse the terms — Since there are no clear standards or delineations, “creative” people will use them “creatively.”  His example is a Red Hat PaaS announcement.  RH marketers can now include a PaaS check-mark on their collateral.  But, in reality (says Mr Smith), the RH technology is enabling PaaS – a far cry from actually being a PaaS.

3) License != Service — I do like his qualification that, “…if you need a traditional software license it’s not a service.”  And it is, therefore, not in the realm of XaaS.

4) Middleware in the Cloud — Mr Smith says that “middleware in the cloud” is what most people think of when they say PaaS.  I don’t really agree with that assessment, but that doesn’t really matter.  Distinguishing between PaaS and MaaS (or MWaaS) in the current climate is a distinction without much difference.  I don’t mind changing terminology and acronyms, but only if they actually improve the situation significantly.  Without clear distinctions or standards, new terms just become new marketing fodder.

Who Cares About the Cloud? (Gartner Webinar)

I listened in to Gartner’s webinar today on Who Really Cares About the Cloud – An Industry Perspective. If you’re looking for a multi-industry analysis of cloud adoption, this webinar is a good place to start.  Check Gartner’s website for the replay.

My key take-aways were:

  1. Companies’ major attraction to cloud computing is cost reduction.  No big news here; it reinforces what we knew.  The ability to implement solutions more quickly seemed to be a distant second.
  2. Companies’ major hurdle is security.  Again, more reinforcement than news.  This meme continues to interest me because I believe cloud security (or lack thereof) is driven more by appropriate architecture and implementation, and less by whether the system is cloud-based or not.  Who owns the data seemed to be the second biggest hurdle.

Again, if this topic is interesting to you, I encourage you to check out Gartner’s site to replay the webinar.

Enable Multiple Action Claims in AppFabric WRAP/SWT Sample

If you’re using the ASPNETSimpleService sample for investigating Azure’s August release of AppFabric (currently in AppFabric Labs), you might like to know about a simple change that enables additional ‘action’ claim values.

The sample provides instructions for creating the necessary ‘reverse’ value for the ‘action’ claim in an ACS Rule Group.  But what happens if you have more than one value for the ‘action’ claim in the rule group (either by adding one or by reusing an existing rule group)?  The sample breaks with a WebException: (401) Unauthorized.  What gives?

The client app retrieves a SWT from AppFabric’s ACS and passes it to the web service.  The web service (implemented in the sample’s default.aspx) verifies that the SWT contains the ‘reverse’ value of the ‘action’ claim.  Unfortunately, the sample expects the SWT to look like:

“action=reverse&Audience=http://localhost:8000/Service/&ExpiresOn=1281474685&Issuer=https://<your-namespace>.accesscontrol.appfabriclabs.com/&HMACSHA256=muXlPgsuZkO9BHNAZqA97TZj4gz4NLKc0Ut58Y=”

This SWT will work fine.  Notice the claim “action=reverse” at the beginning.  If you aren’t aware (I wasn’t), multiple values of a claim are combined into a single, comma-separated list.  So, if you have another value ‘translate’ for the ‘action’ claim, the SWT will look like this:

“action=reverse,translate&Audience=http://localhost:8000/Service/&ExpiresOn=1281474685&Issuer=https://<your-namespace>.accesscontrol.appfabriclabs.com/&HMACSHA256=muXlPgsuZkO9BHNAZqA97TZj4gz4NLKc0Ut58Y=”

And that makes sense.  Why include multiple ‘action=<value>’ sections?  One simple change to the sample enables it to handle this situation.  Around line 90 of the sample’s default.aspx you’ll see

// check for the correct action claim value
if (!actionClaimValue.Equals(this.requiredClaimValue))
{
    this.ReturnUnauthorized();
    return;
}

This bit of code requires that actionClaimValue (which comes from the SWT) equals the expected value (requiredClaimValue, which is set to ‘reverse’).  In the second SWT above, the value of actionClaimValue will be “reverse,translate”, which does not equal “reverse”.

Simply changing Equals to use the Contains method fixes the sample.  Based on this situation, I don’t think using Equals is a good idea for your apps.  Contains is a reasonable alternative, but you may want a more rigorous check, like the one sketched below.
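
Here is one such sketch.  It splits the combined claim value on commas and requires an exact match, so ‘reverse’ can’t accidentally match a longer value that merely contains it.  It reuses the same variables the sample already declares, but I haven’t run it against the sample itself:

// split the comma-separated claim values and require an exact match
string[] actionValues = actionClaimValue.Split(',');

if (Array.IndexOf(actionValues, this.requiredClaimValue) < 0)
{
    this.ReturnUnauthorized();
    return;
}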

CloudStorageAccount.Parse fails on trailing semicolon

Be careful not to let a sneaky little semicolon add itself to the end of your Azure connection string!  It’ll cause CloudStorageAccount.Parse to throw an “Invalid Account String” exception.  For example,

“AccountName=<account>;AccountKey=<key>;DefaultEndpointsProtocol=http”

works as expected, but

“AccountName=<account>;AccountKey=<key>;DefaultEndpointsProtocol=http;”

causes the exception.  Spaces after the semicolon result in the exception, too.  The easiest fix is to use TrimEnd:

azureConnStr = azureConnStr.TrimEnd(new char[] {';'});

I wish Microsoft would code Parse to handle this case more gracefully.  It seems like just a little thing, but when you’re connecting to the cloud, tracking down problems like this takes too long.
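
If you’d rather get a clear failure than the raw exception, something along these lines also works.  It’s a sketch (the class, method and message are mine) and assumes the 1.x StorageClient library, where CloudStorageAccount and its TryParse method live in the Microsoft.WindowsAzure namespace:

using System;
using Microsoft.WindowsAzure;

static class StorageConfig
{
    public static CloudStorageAccount ParseSafely(string azureConnStr)
    {
        // strip the trailing semicolons and spaces that make Parse throw
        azureConnStr = azureConnStr.TrimEnd(';', ' ');

        CloudStorageAccount account;
        if (!CloudStorageAccount.TryParse(azureConnStr, out account))
        {
            throw new FormatException("Azure storage connection string is malformed.");
        }
        return account;
    }
}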

AppFabric goes Production on 4/9/10

Microsoft announced that the Windows Azure AppFabric will go live for production purposes on April 9th.

Pricing: Service Bus will be charged $3.99 per “connection-month”; Access Control will cost $1.99 per 100,000 transactions.