Monday, October 04, 2010

Windows Azure and Cloud Computing Posts for 9/29/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

••• Updated 10/4/2010: New Azure MVPs •••
•• Updated 10/3/2010: Articles marked ••
• Updated 10/2/2010: Articles marked •

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

Richard Seroter blogged Comparing AWS SimpleDB and Windows Azure Table Storage – Part I on 9/30/2010:

We have a multitude of options for storing data in the cloud.  If you are looking for a storage mechanism for fast access to non-relational data, then both the Amazon Web Services (AWS) SimpleDB product and Microsoft Windows Azure Table storage are viable choices.  In this post, I’m going to do a quick comparison of these two products, including how to leverage the .NET API provided by both.

First, let’s do a comparison of these two.

  • Storage metaphor: SimpleDB uses domains (similar to worksheets), items (rows), attributes (column headers) and attribute values (cells); Azure Table storage uses tables, entities (rows) and properties (columns).
  • Schema: neither enforces a schema.
  • “Table” size: a SimpleDB domain holds up to 10 GB, with 256 attributes per item and 1 billion attributes per domain; an Azure table allows 255 properties per entity, 1 MB per entity and up to 100 TB per table.
  • Cost (excluding transfer): SimpleDB is free up to 1 GB and 25 machine hours (time used for interactions), then $0.15/GB per month up to 10 TB plus $0.14 per machine hour; Azure Table storage is $0.15/GB per month.
  • Transactions: SimpleDB offers conditional put/delete for attributes on a single item; Azure Tables offer batch transactions within the same table and partition group.
  • Interface mechanism: SimpleDB supports REST and SOAP; Azure Tables support REST.
  • Development tooling: the AWS SDK for .NET for SimpleDB; Visual Studio .NET and the Development Fabric for Azure Tables.

These platforms are relatively similar in features and functions, and each also leverages aspects of its sister products (e.g., AWS EC2 for SimpleDB), so that could sway your choice as well.

Both products provide a toolkit for .NET developers and here is a brief demonstration of each.

Amazon SimpleDB using the AWS SDK for .NET

You can download the AWS SDK for .NET from the AWS website.  You get some assemblies in the GAC, and also some Visual Studio.NET project templates.


In my case, I just built a simple Windows Forms application that creates a domain, adds attributes and items and then adds new attributes and new items.

After adding a reference to the AWSSDK.dll in my .NET project, I added the following “using” statements in my code:

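Seroter's code appeared as screenshots in the original post; the snippets below are reconstructed approximations rather than his exact code. The using statements, assuming the v1 AWS SDK for .NET namespaces, would be roughly:

using System;
using System.Configuration;
using Amazon;
using Amazon.SimpleDB;
using Amazon.SimpleDB.Model;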

Then I defined a few variables which will hold my SimpleDB domain name, AWS credentials and SimpleDB web service container object.

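A plausible reconstruction of those declarations (the domain name is a made-up placeholder):

private string domainName = "Conferences";      // SimpleDB domain name (placeholder)
private string accessKeyId;                     // AWS access key
private string secretAccessKey;                 // AWS secret key
private AmazonSimpleDB simpleDBClient;          // SimpleDB web service container object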

I next read my AWS credentials from a configuration file and pass them into the AmazonSimpleDB object.

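Something along these lines (the appSettings key names are assumptions):

accessKeyId = ConfigurationManager.AppSettings["AWSAccessKey"];
secretAccessKey = ConfigurationManager.AppSettings["AWSSecretKey"];
// The factory returns the AmazonSimpleDB client used for all subsequent calls
simpleDBClient = AWSClientFactory.CreateAmazonSimpleDBClient(accessKeyId, secretAccessKey);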

Now I can create a SimpleDB domain (table) with a simple command.

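A sketch of the create call, using the SDK's fluent request objects:

CreateDomainRequest createRequest = new CreateDomainRequest().WithDomainName(domainName);
simpleDBClient.CreateDomain(createRequest);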

Deleting domains looks like this:

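A sketch of the delete call:

DeleteDomainRequest deleteRequest = new DeleteDomainRequest().WithDomainName(domainName);
simpleDBClient.DeleteDomain(deleteRequest);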

And listing all the domains under an account can be done like this:

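A sketch of listing the domains (the response exposes the domain names as a string collection):

ListDomainsRequest listRequest = new ListDomainsRequest();
ListDomainsResponse listResponse = simpleDBClient.ListDomains(listRequest);
foreach (string domain in listResponse.ListDomainsResult.DomainName)
{
    Console.WriteLine(domain);
}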

To create attributes and items, we use a PutAttributeRequest object.  Here, I’m creating two items, adding attributes to them, and setting the value of the attributes.  Notice that we use a very loosely typed process and don’t work with typed objects representing the underlying items.

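A sketch of that step; note that the SDK class is actually named PutAttributesRequest, and the item and attribute names here are invented for illustration:

// First item with a couple of loosely typed attributes
PutAttributesRequest putItem1 = new PutAttributesRequest()
    .WithDomainName(domainName)
    .WithItemName("Conference1");
putItem1.Attribute.Add(new ReplaceableAttribute().WithName("Name").WithValue("TechEd"));
putItem1.Attribute.Add(new ReplaceableAttribute().WithName("ConferenceType").WithValue("Technology"));
simpleDBClient.PutAttributes(putItem1);

// Second item with its own attributes
PutAttributesRequest putItem2 = new PutAttributesRequest()
    .WithDomainName(domainName)
    .WithItemName("Conference2");
putItem2.Attribute.Add(new ReplaceableAttribute().WithName("Name").WithValue("MIX"));
putItem2.Attribute.Add(new ReplaceableAttribute().WithName("ConferenceType").WithValue("Technology"));
simpleDBClient.PutAttributes(putItem2);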

If we want to update an item in the domain, we can do another PutAttributeRequest and specify which item we wish to update, and with which new attribute/value.

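A sketch of the update; the Replace flag tells SimpleDB to overwrite an existing attribute value rather than add another:

PutAttributesRequest updateRequest = new PutAttributesRequest()
    .WithDomainName(domainName)
    .WithItemName("Conference1");
updateRequest.Attribute.Add(
    new ReplaceableAttribute().WithName("Attendees").WithValue("10000").WithReplace(true));
simpleDBClient.PutAttributes(updateRequest);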

Querying the domain is done with familiar T-SQL syntax.  In this case, I’m asking for all items in the domain where the ConferenceType attribute equals 'Technology'.

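A reconstructed equivalent of that query:

SelectRequest selectRequest = new SelectRequest()
    .WithSelectExpression("select * from " + domainName + " where ConferenceType = 'Technology'");
SelectResponse selectResponse = simpleDBClient.Select(selectRequest);
foreach (Item item in selectResponse.SelectResult.Item)
{
    Console.WriteLine(item.Name);
}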

Summary of Part I

Easy stuff, eh?  Because of the non-existent domain schema, I can add a new attribute to an existing item (or new one) with no impact on the rest of the data in the domain.  If you’re looking for fast, highly flexible data storage with high redundancy and no need for the rigor of a relational database, then AWS SimpleDB is a nice choice.  In part two of this post, we’ll do a similar investigation of the Windows Azure Table storage option.


• Johnny Halife announced that v1.0.2 of his Windows Azure Storage gem for Ruby is available from GitHub:

Windows Azure Storage library — simple gem for accessing WAZ‘s Storage REST API

A simple implementation of Windows Azure Storage API for Ruby, inspired by the S3 gems and self experience of dealing with queues. The major goal of the whole gem is to enable ruby developers [like me =)] to leverage Windows Azure Storage features and have another option for cloud storage.

The whole gem is implemented based on Microsoft’s specs from the communication and underlying service description and protocol (REST). The API is for ruby developers built by a ruby developer. I’m trying to follow idioms, patterns and fluent type of doing APIs on Ruby.

This work isn’t related at all with StorageClient Sample shipped with Microsoft SDK and written in .NET, the whole API is based on my own understanding, experience and values of elegance and ruby development.

Full documentation for the gem is available at waz-storage.heroku.com

How does this differ from waz-queues and waz-blobs work?

Well, this is a sum up of the whole experience of writing those gems and getting them to work together to simplify the end user experience. Although there are some breaking changes, it’s pretty backward compatible with existing gems.

What’s new on the 1.0.2 version?
  • Completed Blobs API migration to 2009-09-19, _fully supporting_ what third-party tools do (e.g. Cerebrata) [thanks percent20]

Neil MacKenzie offered Examples of the Windows Azure Storage Services REST API in this 8/18/2010 post to his Convective blog (missed when posted):

In the Windows Azure MSDN Azure Forum there are occasional questions about the Windows Azure Storage Services REST API. I have occasionally responded to these with some code examples showing how to use the API. I thought it would be useful to provide some examples of using the REST API for tables, blobs and queues – if only so I don’t have to dredge up examples when people ask how to use it. This post is not intended to provide a complete description of the REST API.

The REST API is comprehensively documented (other than the lack of working examples). Since the REST API is the definitive way to address Windows Azure Storage Services I think people using the higher level Storage Client API should have a passing understanding of the REST API to the level of being able to understand the documentation. Understanding the REST API can provide a deeper understanding of why the Storage Client API behaves the way it does.

Fiddler

The Fiddler Web Debugging Proxy is an essential tool when developing using the REST (or Storage Client) API since it captures precisely what is sent over the wire to the Windows Azure Storage Services.

Authorization

Nearly every request to the Windows Azure Storage Services must be authenticated. The exception is access to blobs with public read access. The supported authentication schemes for blobs, queues and tables are described here. The requests must be accompanied by an Authorization header constructed by making a hash-based message authentication code using the SHA-256 hash.

The following is an example of performing the SHA-256 hash for the Authorization header

private String CreateAuthorizationHeader(String canonicalizedString)
{
    String signature = string.Empty;
    using (HMACSHA256 hmacSha256 = new HMACSHA256(AzureStorageConstants.Key))
    {
        Byte[] dataToHmac = System.Text.Encoding.UTF8.GetBytes(canonicalizedString);
        signature = Convert.ToBase64String(hmacSha256.ComputeHash(dataToHmac));
    }

    String authorizationHeader = String.Format(
          CultureInfo.InvariantCulture,
          "{0} {1}:{2}",
          AzureStorageConstants.SharedKeyAuthorizationScheme,
          AzureStorageConstants.Account,
          signature);

    return authorizationHeader;
}

This method is used in all the examples in this post.

AzureStorageConstants is a helper class containing various constants. Key is a secret key for Windows Azure Storage Services account specified by Account. In the examples given here, SharedKeyAuthorizationScheme is SharedKey.

The trickiest part in using the REST API successfully is getting the correct string to sign. Fortunately, in the event of an authentication failure the Blob Service and Queue Service respond with the authorization string they used, and this can be compared with the authorization string used in generating the Authorization header. This has greatly simplified the use of the REST API.

Table Service API

The Table Service API supports the following table-level operations:

  • Create Table
  • Query Tables
  • Delete Table

The Table Service API supports the following entity-level operations:

  • Insert Entity
  • Query Entities
  • Update Entity
  • Merge Entity
  • Delete Entity

These operations are implemented using the appropriate HTTP VERB:

  • DELETE – delete
  • GET – query
  • MERGE – merge
  • POST – insert
  • PUT – update

This section provides examples of the Insert Entity and Query Entities operations.

Insert Entity

The InsertEntity() method listed in this section inserts an entity with two String properties, Artist and Title, into a table. The entity is submitted as an ATOM entry in the body of a request POSTed to the Table Service. In this example, the ATOM entry is generated by the GetRequestContentInsertXml() method. The date must be in RFC 1123 format in the x-ms-date header supplied to the canonicalized resource used to create the Authorization string. Note that the storage service version is set to “2009-09-19” which requires the DataServiceVersion and MaxDataServiceVersion to be set appropriately.
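Neil's full listing is in his original post. As a rough sketch (not his exact code) of the general shape of such a request, reusing the CreateAuthorizationHeader() method shown earlier and assuming a GetRequestContentInsertXml() helper that builds the ATOM entry and an AzureStorageConstants.Account constant:

String requestMethod = "POST";
String urlPath = "Songs";   // target table name (placeholder)
String dateInRfc1123Format = DateTime.UtcNow.ToString("R", CultureInfo.InvariantCulture);
// Table service Shared Key string-to-sign: VERB \n Content-MD5 \n Content-Type \n Date \n CanonicalizedResource
String canonicalizedResource = String.Format("/{0}/{1}", AzureStorageConstants.Account, urlPath);
String stringToSign = String.Format("{0}\n\napplication/atom+xml\n{1}\n{2}",
    requestMethod, dateInRfc1123Format, canonicalizedResource);
String authorizationHeader = CreateAuthorizationHeader(stringToSign);
Byte[] entityContent = System.Text.Encoding.UTF8.GetBytes(GetRequestContentInsertXml());

Uri uri = new Uri(String.Format("http://{0}.table.core.windows.net/{1}",
    AzureStorageConstants.Account, urlPath));
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
request.Method = requestMethod;
request.Headers.Add("x-ms-date", dateInRfc1123Format);
request.Headers.Add("x-ms-version", "2009-09-19");
request.Headers.Add("DataServiceVersion", "1.0;NetFx");
request.Headers.Add("MaxDataServiceVersion", "2.0;NetFx");
request.Headers.Add("Authorization", authorizationHeader);
request.ContentType = "application/atom+xml";
request.ContentLength = entityContent.Length;
using (Stream requestStream = request.GetRequestStream())
{
    requestStream.Write(entityContent, 0, entityContent.Length);
}
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    // 201 Created indicates the entity was inserted
}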

Neil continues with details and code snippets of the remaining methods of the Table Service API and sections for the Blob and Queue Services APIs.


<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

•• Francois Lascelles compared RESTful Web services and signatures in WS-* (SOAP), OAuth v1 and v2 in a 10/2/2010 post:

A common question relating to REST security is whether or not one can achieve message level integrity in the context of a RESTful web service exchange. Security at the message level (as opposed to transport level security such as HTTPS) presents a number of advantages and is essential for achieving a number of advanced security related goals.

When faced with the question of how to achieve message level integrity in REST, the typical reaction of an architect with a WS-* background is to incorporate an XML digital signature in the payload. Technically, including an XML dSig inside a REST payload is certainly possible. After all, XML dSig can be used independently of WS-Security. However there are a number of reasons why this approach is awkward. First, REST is not bound to XML. XML signatures only sign XML, not JSON, and other content types popular with RESTful web services. Also, it is practical to separate the signatures from the payload. This is why WS-Security defines signatures located in SOAP headers as opposed to using enveloped signatures. And most importantly, a REST ‘payload’ by itself has limited meaning without its associated network level entities such as the HTTP verb and the HTTP URI. This is a fundamental difference between REST and WS-*, let me explain further.

Below, I illustrate a REST message and a WS-* (SOAP) message. Notice how the SOAP message has its own SOAP headers in addition to transport level headers such as HTTP headers.

The reason is simple: WS-* specifications go out of their way to be transport independent. You can take a soap message and send it over HTTP, FTP, SMTP, JMS, whatever. The ‘W’ in WS-* does stand for ‘Web’ but this etymology does not reflect today’s reality.

In WS-*, the SOAP envelope can be isolated. All the necessary information is in there, including the action. In REST, you cannot separate the payload from the HTTP verb because this is what defines the action. You can’t separate the payload from the HTTP URI either because this defines the resource which is being acted upon.

Any signature based integrity mechanism for REST needs to have the signature not only cover the payload but also cover those HTTP URIs and HTTP verbs as well. And since you can’t separate the payload from those HTTP entities, you might as well include the signature in the HTTP headers.

This is what is achieved by a number of proprietary authentication schemes today. For example, Amazon S3 REST authentication and the Windows Azure Platform both use HMAC-based signatures located in the HTTP Authorization header. Those signatures cover the payload as well as the verb, the URI and other key headers.
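Conceptually, such a scheme computes an HMAC over the verb, the URI and the payload and ships it in the Authorization header. A simplified C# illustration (not the exact canonicalization that S3 or Windows Azure uses; the account name and header scheme are placeholders):

// Sign the parts of the request that give it meaning: verb, URI and body
string StringToSign(string httpVerb, string requestUri, string payload)
{
    return httpVerb + "\n" + requestUri + "\n" + payload;
}

string CreateAuthorizationHeader(string httpVerb, string requestUri, string payload, byte[] secretKey)
{
    using (var hmac = new System.Security.Cryptography.HMACSHA256(secretKey))
    {
        byte[] signature = hmac.ComputeHash(
            System.Text.Encoding.UTF8.GetBytes(StringToSign(httpVerb, requestUri, payload)));
        // e.g. Authorization: SharedKey myaccount:Base64(signature)
        return "SharedKey myaccount:" + Convert.ToBase64String(signature);
    }
}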

OAuth v1 also defined a standard signature-based token which does just this: it covers the verb, the URI, the payload, and other crucial headers. This is an elegant way to achieve integrity for REST. Unfortunately, OAuth v2 dropped this signature component of the specification. Bearer-type tokens are also useful but, as explained by Eran Hammer-Lahav in this post, dropping payload signatures completely from OAuth is very unfortunate.

You might be interested also in Francois’ related Enteprise SaaS integration using REST and OAuth (9/17/2010) and OAuth-enabling the enterprise (8/5/2010) posts.


•• See also: Azret Botash of DevExpress will present Introduction to OData on 10/5/2010 from 10:00 AM to 11:00 AM PDT, according to a GoToMeeting.com post of 10/3/2010, in the Cloud Computing Events section below.


•• David Ramel posted Using WebMatrix with PHP, OData, SQL Azure, Entity Framework and More to Redmond Developer News’ Data Driver blog on 9/30/2010:

I've written before about Microsoft's overtures to the PHP community, with last month's release of PHP Drivers for SQL Server being the latest step in an ongoing effort to provide interoperability between PHP and Microsoft technologies.

With a slew of other new products and services released (relatively) recently, such as SQL Azure, OData and WebMatrix, I decided to see if they all work together.

Amazingly, they do. Well, amazing that I got them to work, anyway. And as I've said before, if I can do it, anyone can do it. But that's the whole point: WebMatrix is targeted at noobs, and I wanted to see if a hobbyist could use it in conjunction with some other new stuff.

WebMatrix is a lightweight stack or tool that comes with stripped-down IIS and SQL Server components, packaged together so pros can do quick deployments and beginners can get started in Web development.

WebMatrix is all over PHP. It provides PHP code syntax highlighting (though not IntelliSense). It even includes a Web Gallery that lets you install popular PHP apps such as WordPress (see Ruslan Yakushev's tutorial). Doing so installs and configures PHP and the MySQL database.

I chose to install PHP myself and configure it for use in WebMatrix (see Brian Swan's tutorial).

After doing that, I tested the new PHP Drivers for SQL Server. The August release added PDO support, which allows object-oriented programming.

I used the PDO driver to access the AdventureWorksLTAZ2008R2 test database I have hosted in SQL Azure. After creating a PDO connection object, I could query the database and loop over the results in different ways, such as with the PDO FETCH_ASSOC constant:

while ($row = $sqlquery->fetch(PDO::FETCH_ASSOC))

or with just the connection object:

foreach ($conn->query($sqlquery) as $row)

which both basically return the same results. You can see those results as a raw array on a site I created and deployed on one of the WebMatrix hosting partners, Applied Innovations, which is offering its services for free throughout the WebMatrix beta, along with several other providers. (By the way, Applied Innovations provided great service when I ran into a problem of my own making, even though the account was free.)

Having tested successfully with a regular SQL Azure database, I tried using PHP to access the same database enabled as an OData feed, courtesy of SQL Azure Labs. That eventually worked, but was a bit problematic in that this was my first exposure to PHP and I haven't worked that much with OData's Atom-based XML feed that contains namespaces, which greatly complicated things.

It was simple enough to grab the OData feed ($xml = file_get_contents ("ODataURL"), for one example), but to get down to the Customer record details buried in namespaces took a lot of investigation and resulted in ridiculous code such as this:

echo $xmlfile->entry[(int)$customerid]->children
('http://www.w3.org/2005/Atom')->content->
children
('http://schemas.microsoft.com/ado/2007/08/dataservices/metadata')->
children
('http://schemas.microsoft.com/ado/2007/08/dataservices')->
CompanyName->asXML();

just to display the customer's company name. I later found that registering an XPath Namespace could greatly reduce that monstrous line, but did require a couple more lines of code. There's probably a better way to do it, though, if someone in the know would care to drop me a line (see below).

Anyway, the ridiculous code worked, as you can see here.

I also checked out the OData SDK for PHP. It generates proxy classes that you can use to work with OData feeds. It worked fine on my localhost Web server, but I couldn't get it to work on my hosted site. Microsoft Developer Evangelist Jim O'Neil, who used the SDK to access the "Dallas" OData feed repository, suggested I needed "admin privileges to configure the php.ini file to add the OData directory to the include variable and configure the required extensions" on the remote host. I'm sure that could be done easily enough, but I didn't want to bother the Applied Innovations people any further about my free account.

So I accessed OData from a WebMatrix project in two different ways. But that was using PHP files. At first, I couldn't figure out how to easily display the OData feed in a regular WebMatrix (.cshtml) page. I guess I could've written a bunch of C# code to do it, but WebMatrix is supposed to shield you from having to do that. Which it does, in fact, with the OData Helper, one of several "helpers" for tasks such as using Windows Azure Storage or displaying a Twitter feed (you can see an example of the latter on my project site). You can find more helpers online.

The OData Helper made it trivial to grab the feed and display it in a WebMatrix "WebGrid" using the Razor syntax:

@using Microsoft.Samples.WebPages.Helpers
@{
var odatafeed = OData.Get("[feedurl]");
var grid = new WebGrid(odatafeed, columnNames : new []{"CustomerID",
"CompanyName", "FirstName", "LastName"});

}
@grid.GetHtml();

which results in this display.

Of course, using the built-in SQL Server Compact to access an embedded database was trivial:

@{
var db = Database.OpenFile("MyDatabase.sdf");
var query = "SELECT * FROM Products ORDER BY Name";
var result = db.Query(query);
var grid = new WebGrid(result);
}
@grid.GetHtml();

which results in this.

Finally, I tackled the Entity Framework, to see if I could do a regular old query on the SQL Azure database. I clicked the button to launch the WebMatrix project in Visual Studio and built an Entity Data Model that I used to successfully query the database with both the ObjectQuery and IQueryable approaches. Both of the queries' code and resulting display can be seen here.

Basically everything I tried to do actually worked. You can access all the example results from my Applied Innovations guest site. By the way, it was fairly simple to deploy my project to the Applied Innovations hosting site. That kind of thing is usually complicated and frustrating for me, but the Web Deploy feature of WebMatrix really streamlined the process.

Of course, WebMatrix is in beta, and the Web Deploy quit working for me after a while. I just started FTPing all my changed files, which worked fine.

I encountered other little glitches along the way, but no showstoppers. For example, the WebMatrix default file type, .cshtml, stopped showing up as an option when creating new files. In fact, the options available for me seemed to differ greatly from those I found in various tutorials. That's quite a minor problem, but I noticed it.

As befits a beta, there are a lot of known issues, which you can see here.

Overall, (much like LightSwitch) I was impressed with WebMatrix. It allows easy ASP.NET Web development and deployment for beginners and accommodates further exploration by those with a smattering of experience.

David is Features Editor for MSDN Magazine, part of 1105 Media.


Glenn Berry delivered A Small Selection of SQL Azure System Queries in a 9/29/2010 post:

I had a question come up during my “Getting Started with SQL Azure” presentation at SQLSaturday #52 in Denver last Saturday that I did not know the answer to. It had to do with how you could see what your data transfer usage was in and out of a SQL Azure database. This is pretty important to know, since you will be billed at the rate of $0.10/GB In and $0.15/GB Out (for the North American and European data centers).  Knowing this information will help prevent getting an unpleasant surprise when you receive your monthly bill for SQL Azure.

SQL Server MVP Bob Beauchemin gave me a nudge in the right direction, and I was able to put together these queries, which I hope you find useful:

-- SQL Azure System Queries
-- Glenn Berry 
-- September 2010
-- http://glennberrysqlperformance.spaces.live.com/
-- Twitter: GlennAlanBerry

-- You must be connected to the master database to run these

-- Get bandwidth usage by database by hour (for billing)
SELECT database_name, direction, class, time_period, 
       quantity AS [KB Transferred], [time]
FROM sys.bandwidth_usage
ORDER BY [time] DESC;

-- Get number of databases by SKU for this SQL Azure account (for billing)
SELECT sku, quantity, [time]
FROM sys.database_usage
ORDER BY [time] DESC;

-- Get all SQL logins (which are the only kind) for this SQL Azure "instance"
SELECT * 
FROM sys.sql_logins;

-- Get firewall rules for this SQL Azure "instance"
SELECT id, name, start_ip_address, end_ip_address, create_date, modify_date
FROM sys.firewall_rules;

-- Perhaps a look at future where you will be able to clone/backup a database?
SELECT database_id, [start_date], modify_date, percent_complete, error_code,
error_desc, [error_severity], [error_state]
FROM sys.dm_database_copies
ORDER BY [start_date] DESC;

-- Very unclear what this is for, beyond what you can infer from the name
SELECT instance_id, instance_name, [type_name], type_version, [description],
       type_stream, date_created, created_by, database_name
FROM dbo.sysdac_instances;

•• Adam Mokan showed how to create a Windows Server AppFabric Monitoring oData Service in a 9/29/2010 post:

I recently caught an excellent blog post [Windows Server AppFabric Monitoring - How to create operational analytics reports with AppFabric Monitoring and Excel PowerPivot] by Emil Velenov about how to pull data from the Windows Server AppFabric monitoring database and create some really nice dashboard-like charts in Excel 2010 using the PowerPivot feature. At my current job, we are not yet using the Office 2010 suite, but Emil’s post sparked my interest in publishing the data from AppFabric somehow. I decided to spend an hour and create a WCF Data Service and publish the metrics using the oData format.

imageI’m not going to get into every detail because the aformentioned blog post does a much better job at that. However, I will note that in my case, I only had a need to pull data from two of AppFabric’s database views. I am currently only tracking WCF services and focused on the ASEventSources and ASWcfEvents views in the monitoring database. I did turn off the data aggregation option on the server’s web.config before writing my query. Emil’s post goes into detail on this step. Here is the query I wrote to get the data I felt was important. It gives me a nice simple breakdown of the site my service was running under (I have a couple different environments on this server for various QA and test setups), the name of the service, the service virtual path (I am using “svc-less” configuration, the operation name, event type (either “success” or “error”), the date/time the service was called, and finally the duration.


Query Results

Using this query, I created a view in the AppFabric database called “WcfStats”. Obviously, you could skip that portion and use the query in a client app, create a stored proc, use an ORM, and so-on. I went with a view, created a new SQL Server user account with read-only access and then started working on my WCF Data Service and utilized EF4 to create an entity based off this view.

I will say that I’ve not had the chance to work with a WCF Data Service up until now, so I’m sure I’m missing a “best practice” or something. Drop me a comment, if so.

Data Service Code
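The data service code itself appeared as a screenshot; a minimal WCF Data Service over an Entity Framework model of the WcfStats view might look like the following sketch (the AppFabricMonitoringEntities container name is an assumption):

using System.Data.Services;
using System.Data.Services.Common;

public class WcfStatsService : DataService<AppFabricMonitoringEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Expose the WcfStats entity set as read-only over OData
        config.SetEntitySetAccessRule("WcfStats", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}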

After publishing the service, I was instantly able to interact with it using LINQPad.


 

Results from querying the oData service in LINQPad

URL generated by LINQPad showing the query

Now I have a nice oData service out on my intranet for other developers to consume, to write reports against, and whatever else they can think of. Another developer is already working on a Silverlight 4 monitor which consumes this data. Once Office 2010 is rolled out, its native support for oData will make it trivial to pull this into Excel.

Hopefully this quick experiment will help inspire you to come up with some cool new ways to interact with AppFabric’s monitoring features.


• Ryan Dunn (@dunnry, right) and Steve Marx (@smarx, center) bring you Cloud Cover Episode 28 - SQL Azure with David Robinson (left) Part 11 on 10/1/2010:


Join Ryan and Steve each week as they cover the Microsoft cloud. You can follow and interact with the show at @cloudcovershow.
In this episode:  

  • Listen to David Robinson explain how to migrate your Access databases to SQL Azure.
  • Learn about deploying SQL Azure and Windows Azure services together 

Show Links:

Windows Azure Domain Name System Improvements
Maximizing Throughput in Windows Azure - Part 3 (of 11)
EpiServer CMS on Windows Azure Part 1 (of 11)
F# and Windows Azure with Don Syme Part 3 (of 11)


Benko posted Finding Sasquatch and other data mysteries with Jonathan Carter on MSDN Radio on 10/1/2010:

This week we explore the mountains of data around us by looking at the Open Data Protocol (OData). Jonathan Carter is a Technology Evangelist who helps people explore the possible with these new technologies, including how to build client and server-side solutions to implement and expose data endpoints. Join us and find out how you can leverage these technologies. This show is hosted by Mike Benkovich and Mithun Dhar.

But the question remains…who is Jonathan Carter [pictured at right]?  We know he’s a frequent contributor to channel 9 and speaks regularly at conferences around the world. Join us as we ask the hard questions. Register at http://bit.ly/msdnradio-20 to join the conversation. See you on Monday!


Steve Yi posted Demo: Using Entity Framework to Create and Query a SQL Azure Database In Less Than 10 Minutes to the SQL Azure Team blog on 9/30/2010:

As we were developing this week's posts on utilizing Entity Framework with SQL Azure we had an opportunity to sit down and talk with Faisal Mohamood, Senior Program Manager of the Entity Framework team. 

As we've shared in earlier posts this week, one of the beautiful things about EF is the Model First feature, which lets developers visually create an entity model within Visual Studio.  The designer then creates the T-SQL to generate the database schema.  In fact, EF gives developers the opportunity to create and query SQL Azure databases all within the Visual Studio environment without having to write a line of T-SQL.

In the video below, I discuss some of the basics of Entity Framework (EF) with Mohamood and its benefits as the primary data access framework for .NET developers.


In the second video Faisal provides a real-time demonstration of Model First and creates a rudimentary SQL Azure database without writing any code.  Then we insert and query records utilizing Entity Framework, accomplishing all this in about ten minutes and ten lines of code.  To see all the details, click on "full screen" button in the lower right of the video player.  If you have a fast internet connection you should be able to see everything in full fidelity without pixelation.


Summary

Hopefully the videos in this post illustrate how easy it is to do database development in SQL Azure by utilizing Entity Framework.  Let us know what other topics you'd like us to explore in more depth.  And let us know if video content like this is interesting to you - drop a comment, or use the "Rate This" feature at the beginning of the post.

If you're interested in learning more, there are some outstanding walkthroughs and learning resources in MSDN, located here.


Steve Yi of the SQL Azure Team posted Model First For SQL Azure on 9/29/2010:

One of the great uses for ADO.NET Entity Framework 4.0, which ships with .NET Framework 4.0, is the model first approach to designing your SQL Azure database. Model first means that the first thing you design is the entity model using Visual Studio and the Entity Framework designer; the designer then creates the Transact-SQL for you that will generate your SQL Azure database schema. The part I really like is that the Entity Framework designer gives me a great WYSIWYG experience for the design of my tables and their inter-relationships. Plus, as a huge bonus, you get a middle tier object layer to call from your application code that matches the model and the database on SQL Azure.

Visual Studio 2010

The first thing to do is open Visual Studio 2010, which includes the 4.0 version of the Entity Framework; this version works especially well with SQL Azure. If you don’t have Visual Studio 2010, you can download the Express version for free; see the Get Started Developing with the ADO.NET Entity Framework page.

Data Layer Assembly

At this point you should have a good idea of what your data model is; however, you might not know what type of application you want to build: ASP.NET MVC, ASP.NET WebForms, Silverlight, etc. So let’s put the entity model and the objects that it creates in a class library. This will allow us to reference the class library, as an assembly, from a variety of different applications. For now, create a new Visual Studio 2010 solution with a single class library project.

Here is how:

  1. Open Visual Studio 2010.
  2. On the File menu, click New Project.
  3. Choose either Visual Basic or Visual C# in the Project Types pane.
  4. Select Class Library in the Templates pane.
  5. Enter ModelFirst for the project name, and then click OK.

The next step is to add an ADO.NET Entity Data Model item to the project; here is how:

  1. Right click on the project and choose Add then New Item.
  2. Choose Data and then ADO.NET Entity Data Model


  3. Click on the Add Button.
  4. Choose Empty Model and press the Finish button.


Now you have an empty model view to add entities (I still think of them as tables).

Designing Your Data Structure

The Entity Framework designer lets you drag and drop items from the toolbox directly into the designer pane to build out your data structure. For this blog post I am going to drag and drop an Entity from the toolbox into the designer. Immediately I am curious about how the Transact-SQL will look from just the default entity.

To generate the Transact-SQL to create a SQL Azure schema, right-click in the designer pane and choose Generate Database From Model. Since the Entity Framework needs to know what the data source is in order to generate the schema with the right syntax and semantics, it asks us to enter connection information in a set of wizard steps.

Since I need a new connection, I press the Add Connection button on the first wizard page. Here I enter connection information for a new database I created on SQL Azure called ModelFirst, which you can do from the SQL Azure Portal. The portal also gives me other information I need for the Connection Properties dialog, like my Administrator account name.


Now that I have the connection created in Visual Studio’s Server Explorer, I can continue on with the Generate Database Wizard. I want to uncheck that box that saves my connection string in the app.config file. Because this is a Class Library the app.config isn’t relevant -- .config files go in the assembly that calls the class library.

The Generate Database Wizard creates an Entity Framework connection string that is then passed to the Entity Framework provider to generate the Transact-SQL. The connection string isn’t stored anywhere, however it is needed to connect to the SQL Azure to find out the database version.


Finally, I get the Transact-SQL that generates the table in SQL Azure representing the entity.

-- --------------------------------------------------
-- Creating all tables
-- --------------------------------------------------

-- Creating table 'Entity1'
CREATE TABLE [dbo].[Entity1] (
    [Id] int IDENTITY(1,1) NOT NULL
);
GO

-- --------------------------------------------------
-- Creating all PRIMARY KEY constraints
-- --------------------------------------------------

-- Creating primary key on [Id] in table 'Entity1'
ALTER TABLE [dbo].[Entity1]
ADD CONSTRAINT [PK_Entity1]
    PRIMARY KEY CLUSTERED ([Id] ASC);
GO

This Transact-SQL is saved to a .sql file which is included in my project. The full project looks like this:


I am not going to run this Transact-SQL on a connection to SQL Azure; I just wanted to see what it looked like. The table looks much like I expected it to, and Entity Framework was smart enough to create a clustered index which is a requirement for SQL Azure.
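Once the script has been run against SQL Azure, the generated object layer can be used directly from application code; for example, inserting a row (a sketch; the ModelFirstContainer name is whatever the designer generated for the entity container, so it may differ in your project):

using (var context = new ModelFirstContainer())
{
    // Insert a row through the generated entity type
    context.CreateObjectSet<Entity1>().AddObject(new Entity1());
    context.SaveChanges();
}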

Summary

Watch for our upcoming video and interview with Faisal Mohamood of the Entity Framework team to demonstrate a start-to-finish example of Model First. From modeling the entities, generating the SQL Azure database, and all the way to inserting and querying data utilizing Entity Framework.

Make sure to check back, or subscribe to the RSS feed to be alerted as we post more information. Do you have questions, concerns, comments? Post them below and we will try to address them.


Abhishek Sinha of the SQL Server Team announced SQL Server 2008 SP2 is available for download today! on 9/29/2010:

Microsoft is pleased to announce the availability of SQL Server 2008 Service Pack 2. Both the Service Pack and Feature Pack updates are now ready for download on the Microsoft Download Center. Service Pack 2 for SQL Server 2008 includes new compatibility features with SQL Server 2008 R2, product improvements based on requests from the SQL Server community, and hotfix solutions provided in SQL Server 2008 SP1 Cumulative Update 1 to 8.

Key improvements in Microsoft SQL Server 2008 Service Pack 2 are:

  • Reporting Services in SharePoint Integrated Mode. SQL Server 2008 SP2 provides updates for Reporting Services integration with SharePoint products. SQL Server 2008 SP2 report servers can integrate with SharePoint 2010 products. SQL Server 2008 SP2 also provides a new add-in to support the integration of SQL Server 2008 R2 report servers with SharePoint 2007 products. This now enables SharePoint Server 2007 to be used with SQL Server 2008 R2 Report Server. For more information see the “What’s New in SharePoint Integration and SQL Server 2008 Service Pack 2 (SP2)” section in What's New (Reporting Services).
  • SQL Server 2008 R2 Application and Multi-Server Management Compatibility with SQL Server 2008.
    • SQL Server 2008 Instance Management. With SP2 applied, an instance of the SQL Server 2008 Database Engine can be enrolled with a SQL Server 2008 R2 Utility Control Point as a managed instance of SQL Server. SQL Server 2008 SP2 enables organizations to extend the value of the Utility Control Point to instances of SQL Server 2008 SP2 without having to upgrade those servers to SQL Server 2008 R2. For more information, see Overview of SQL Server Utility in SQL Server 2008 R2 Books Online.
    • Data-tier Application (DAC) Support. Instances of the SQL Server 2008 Database Engine support all DAC operations delivered in SQL Server 2008 R2 after SP2 has been applied. You can deploy, upgrade, register, extract, and delete DACs. SP2 does not upgrade the SQL Server 2008 client tools to support DACs. You must use the SQL Server 2008 R2 client tools, such as SQL Server Management Studio, to perform DAC operations. A data-tier application is an entity that contains all of the database objects and instance objects used by an application. A DAC provides a single unit for authoring, deploying, and managing the data-tier objects. For more information, see Designing and Implementing Data-tier Applications. …

Thanks to all the customers who downloaded Microsoft SQL Server 2008 SP2 CTP and provided feedback.

Abhishek Sinha, Program Manager, SQL Server Sustained Engineering

Related Downloads & Documents:


Scott Kline posted SQL Azure, OData, and Windows Phone 7 on 9/29/2010:

One of the things I really wanted to do lately was to get SQL Azure, OData, and Windows Phone 7 working together; in essence, expose SQL Azure data using the OData protocol and consume that data on a Windows Phone 7 device. This blog will explain how to do just that. This example is also in our SQL Azure book in a bit more detail, but with the push for WP7 I thought I'd give a sneak peek here.

You will first need to download and install a couple of things, the first of which is the OData Client Library for Windows Phone 7 Series CTP, which is a library for consuming OData feeds on Windows Phone 7 series devices. This library has many of the same capabilities as the ADO.NET Data Services client for Silverlight. The install will simply extract a few files to the directory of your choosing.

The next item to download is the Windows Phone Developer Tools, which installs the Visual Studio Windows Phone application templates and associated components. These tools provide integrated Visual Studio design and testing for your Windows Phone 7 applications.

Our goal is to enable OData on a SQL Azure database so that we can expose our data and make it available for the Windows Phone 7 application to consume. OData is a REST-based protocol which standardizes the querying and updating of data over the Web. The first step, then, is to enable OData on the SQL Azure database from the SQL Azure Labs site. You will be required to log in with your Windows Live account; once in the SQL Azure Labs portal, select the SQL Azure OData Service tab. As the home page states, SQL Azure Labs is in Developer Preview.

The key here is the URI at the bottom of the page in the User Mapping section. I'll blog another time on what the User Mapping is, but for now, highlight and copy the URI to the clipboard. You'll be using it later.

Once OData is enabled on the selected SQL Azure database, you are ready to start building the Windows Phone application. In Visual Studio 2010, you will notice new installed templates for the Windows Phone in the New Project dialog. For this example, select the Windows Phone Application.

Once the project is created, you will need to add a reference to the OData Client Library installed earlier. Browse to the directory to which you extracted the OData Client Library and add the System.Data.Services.Client.dll library.

The next step is to create the necessary proxy classes needed to access a data service from a .NET Framework client application. The proxy classes can be generated by using the DataSvcUtil tool, a command-line tool that consumes an OData feed and generates the client data service classes. Use the following as an example to generate the appropriate data service classes. Notice the /uri: parameter. This is the same URI listed in the first image above, and is what DataSvcUtil will use to generate the necessary proxy classes.
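The screenshot of the command line isn't reproduced; a typical invocation would be roughly as follows (the output file name is illustrative, and the /dataservicecollection switch generates the DataServiceCollection support used later in the code):

DataSvcUtil.exe /uri:"https://odata.sqlazurelabs.com/OData.svc/v0.1/<servername>/TechBio"
                /out:TechBio.cs /dataservicecollection /version:2.0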

Once the proxy class is generated, add it to the project. Next, add a new class to your project and add the following namespaces, which provide additional functionality needed to query the OData source and work with collections.

using System.Linq;
using System.ComponentModel;
using System.Collections.Generic;
using System.Diagnostics;
using System.Text;
using System.Windows.Data;
using TechBioModel;
using System.Data.Services.Client;
using System.Collections.ObjectModel;

Next, add the following code to the class. The LoadData method first initializes a new TechBio instance of the generated proxy class, passing in the URI of the OData service. A LINQ query is used to pull the data you want, and the results are loaded into the Docs DataServiceCollection.

public class TechBioModel
{
    public TechBioModel()
    {
        LoadData();
    }

    void LoadData()
    {
        TechBio context = new TechBio(new Uri("https://odata.sqlazurelabs.com/OData.svc/v0.1/servername/TechBio"));

        var qry = from u in context.Docs
                    where u.AuthorId == 113
                    select u;

        var dsQry = (DataServiceQuery<Doc>)qry;

        dsQry.BeginExecute(r =>
        {
            try
            {
                var result = dsQry.EndExecute(r);
                if (result != null)
                {
                    Deployment.Current.Dispatcher.BeginInvoke(() =>
                    {
                        Docs.Load(result);
                    });
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message.ToString());
            }
        }, null);

    }

    DataServiceCollection<Doc> _docs = new DataServiceCollection<Doc>();

    public DataServiceCollection<Doc> Docs
    {
        get
        {
            return _docs;
        }
        private set
        {
            _docs = value;
        }
    }
}

I learned from a Shawn Wildermuth blog post that this callback is not guaranteed to be executed on the UI thread, so the Dispatcher is required to ensure that the call is marshaled back onto the UI thread. Next, add the following code to the App.xaml code-behind. This will get called when the application starts.

private static TechBioModel tbModel = null;
public static TechBioModel TBModel
{
    get
    {
        if (tbModel == null)
            tbModel = new TechBioModel();

        return tbModel;
    }
}

To call the code above, add the following code to the OnNavigatedTo override in MainPage:

protected override void OnNavigatedTo(System.Windows.Navigation.NavigationEventArgs e)
{
    base.OnNavigatedTo(e);

    if (DataContext == null)
        DataContext = App.TBModel;

}

Lastly, you need to go to the UI of the phone and add a ListBox and then tell the ListBox where to get its data. Here we are binding the ListBox to the Docs DataServiceCollection.

<ListBox Height="611" HorizontalAlignment="Left" Name="listBox1"
        VerticalAlignment="Top" Width="474"
        ItemsSource="{Binding Docs}" >
</ListBox>

You are now ready to test. Run the project; when it is deployed to the phone and running, data from the SQL Azure database is queried and displayed on the phone.

In this example you saw a simple example of how to consume an OData feed in a Windows Phone 7 application that gets its data from a SQL Azure database.


Channel9 completed its SQL Azure lesson series with Exercise 4: Supportability - Usage Metrics near the end of September 2010:

Upon commercial availability for Windows Azure, the following simple consumption-based pricing model for SQL Azure will apply:

There are two editions of SQL Azure databases: the Web Edition, with a 1 GB data cap, is offered at USD $9.99 per month; the Business Edition, with a 10 GB data cap, is offered at USD $99.99 per month.

Bandwidth across Windows Azure, SQL Azure and .NET Services will be charged at $0.10/GB for ingress data and $0.15/GB for egress data.

Note: For more information on the Microsoft Windows Azure cloud computing pricing model, refer to:

  • Windows Azure Platform Pricing: http://www.microsoft.com/windowsazure/pricing/
  • Confirming Commercial Availability and Announcing Business Model: http://blogs.msdn.com/windowsazure/archive/2009/07/14/confirming-commercial-availability-and-announcing-business-model.aspx

In this exercise, we will go through the mechanics of programmatically calculating bandwidth and database costs with these steps:


Herve Roggero explained How to implement fire-and-forget SQL Statements in SQL Azure in this 9/28/2010 post:

Implementing a Fire Hose for SQL Azure

While I was looking around in various blogs, someone was looking for a way to insert records in SQL Azure as fast as possible. Performance was so important that transactional consistency was not important.  So after thinking about that for a few minutes I designed a small class that provides extremely fast queuing of SQL commands, and a background task that performs the actual work. The class implements a fire hose, fire-and-forget approach to executing statements against a database.

As mentioned, the approach consists of queueing SQL commands in memory, in an asynchronous manner, using a class designed for this purpose (SQLAzureFireHose). The AddAsynch method frees the client thread from the actual processing time to insert commands in the queue. In the background, the class then implements a timer to execute a batch of commands (hard-coded to 100 at a time in this sample code). Note that while this was done for SQL Azure, the class could very well be used for SQL Server. You could also enhance the class to make it transactional aware and rollback the transaction on error.

Sample Client Code

First, here is the client code. A variable of type SQLAzureFireHose is declared at the top. The client code inserts each command to execute using the AddAsynch method, which by design will execute quickly. In fact, it should take about 1 millisecond (or less) to insert 100 items.

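The client-code screenshot isn't reproduced; based on the description above, the usage amounts to something like this sketch (the connection-string name, table and column are invented):

SQLAzureFireHose fireHose = new SQLAzureFireHose(
    ConfigurationManager.ConnectionStrings["SqlAzure"].ConnectionString);   // placeholder name

// Queue 100 INSERT statements; AddAsynch returns almost immediately
for (int i = 0; i < 100; i++)
{
    fireHose.AddAsynch("INSERT INTO FireHoseTest (Value) VALUES (" + i + ")");
}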

SQLAzureFireHose Class

The SQLAzureFireHose class implements the timer (that executes every second) and the asynchronous add method. Every second, the class fires the timer event, gets 100 SQL statements queued in the _commands object, builds a single SQL string and sends it off to SQL Azure (or SQL Server). This also minimizes roundtrips and reduces the chattiness of the application. …

C# class code omitted for brevity.
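As a rough orientation only (this is not Herve's original class), a minimal version of the idea - a thread-safe queue drained by a one-second timer, up to 100 statements per batch - could look like this:

using System.Collections.Generic;
using System.Data.SqlClient;
using System.Text;
using System.Threading;

public class SQLAzureFireHose
{
    private readonly Queue<string> _commands = new Queue<string>();
    private readonly Timer _timer;
    private readonly string _connectionString;   // SQL Azure connection string (supply your own)

    public SQLAzureFireHose(string connectionString)
    {
        _connectionString = connectionString;
        _timer = new Timer(Flush, null, 1000, 1000);   // fire every second
    }

    // Fire-and-forget: just enqueue the statement and return
    public void AddAsynch(string sqlCommand)
    {
        lock (_commands)
        {
            _commands.Enqueue(sqlCommand);
        }
    }

    private void Flush(object state)
    {
        string batch;
        lock (_commands)
        {
            if (_commands.Count == 0) return;
            var sb = new StringBuilder();
            for (int i = 0; i < 100 && _commands.Count > 0; i++)
            {
                sb.Append(_commands.Dequeue()).Append(";");
            }
            batch = sb.ToString();
        }

        // One round trip for up to 100 statements; errors are swallowed (fire-and-forget)
        try
        {
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand(batch, connection))
            {
                connection.Open();
                command.ExecuteNonQuery();
            }
        }
        catch (SqlException) { /* log if needed */ }
    }
}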


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

Valery Mizonov described Implementing Reliable Inter-Role Communication Using Windows Azure AppFabric Service Bus, Observer Pattern & Parallel LINQ in this 9/30/2010 post to the Windows Server AppFabric Customer Advisory Team blog:

The following post is intended to walk you through the implementation of a loosely coupled communication pattern between processes and role instances running on the Windows Azure platform. The scenario discussed and implemented in this paper is based on a real-world customer project and built upon a powerful combination of the Windows Azure AppFabric Service Bus, the Observer design pattern and Parallel LINQ (PLINQ).

Background

In the world of highly distributed applications, information exchange often occurs between a variety of tiers, layers and processes that need to communicate with each other either across disparate computing environments or within a proximity of given local boundaries. Over the years, the communication paradigms and messaging patterns have been able to support the many different scenarios for inter- and intra-process data exchange, some of which were durable and successful whilst the others have suffered from a limited existence.

The design patterns for high-scale cloud computing may result in breaking up of what could have been monolithic in the past and could communicate through direct method invocation into more granular, isolated entities that need to run in different processes, on different machines, in different geographical locations. Along those lines, the Windows Azure platform brings in its own unique and interesting challenges for creating an efficient implementation of the cross-process communication concepts in the Cloud to be able to connect web and worker roles together in a seamless and secure fashion.

In this paper, we take a look at how the Windows Azure platform AppFabric Service Bus helps address data exchange requirements between loosely coupled cloud application services running on Windows Azure. We explore how multicast datagram distribution and one-way messaging enable one to easily build a fully functional publish/subscribe model for Azure worker roles to be able to exchange notification events with peers regardless of their actual location.

For sake of simplicity, the messaging concept discussed in this paper will be further referenced to as “inter-role communication” although one should treat this as a communication between Azure role instances given the nature of the definition of a “role” in the Windows Azure platform’s vocabulary.

The Challenge

The inter-role communication between Windows Azure role instances is not a brand new challenge. It has been on the agenda of many Azure solution architects for some time until the support for internal endpoints has been released. An internal endpoint in the Windows Azure roles is essentially the internal IP address automatically assigned to a role instance by the Windows Azure fabric. This IP address along with a dynamically allocated port creates an endpoint that is only accessible from within a hosting datacenter with some further visibility restrictions. Once registered in the service configuration, the internal endpoint can be used for spinning off a WCF service host in order to make a communication contract accessible by the other role instances.

Consider the following example where an instance of the Process Controller worker role needs to exchange units of work and other processing instructions with other roles. In this example, the internal endpoints are being utilized. The Process Controller worker role needs to know the internal endpoints of each individual role instance which the process controller wants to communicate with.

Figure 1: Roles exchanging resource-intensive workload processing instructions via internal endpoints.

The internal endpoints provide a simplistic mechanism for getting one role instance to communicate to the others in 1:1 or 1:many fashion. This leads to a fair question: where is the true challenge? Simply put, there aren’t any. However, there are constraints and special considerations that may de-value the internal endpoints and shift focus over to a more flexible alternative. Below are the most significant “limitations” of the internal endpoints:

  • Internal endpoints must be defined ahead of time – these are registered in the service definition and locked down at design time;

  • The discoverability of internal endpoints is limited to a given deployment – the role environment doesn’t have explicit knowledge of all other internal endpoints exposed by other Azure hosted services;

  • Internal endpoints are not reachable across hosted service deployments – this could render itself as a limiting factor when developing a cloud application that needs to exchange data with other cloud services deployed in a separate hosted service environment even if it’s affinitized to the same datacenter;

  • Internal endpoints are only visible within the same datacenter environment – a complex cloud solution that takes advantage of a true geo-distributed deployment model cannot rely on internal endpoints for cross-datacenter communication;

  • The event relay via internal endpoints cannot scale as the number of participants grows – the internal endpoints are only useful when the number of participating role instances is limited and with underlying messaging pattern still being a point-to-point connection, the role instances cannot take advantage of the multicast messaging via internal endpoints.

In summary, once a complexity threshold is reached, one may need to re-think whether or not the internal endpoints may be a durable choice for inter-role communication and what other alternatives may be available. At this point, we organically approached the section that is intended to drill down into an alternative solution that scales, is sufficiently elastic and compensates for the constraints highlighted above.

The Solution

Among other powerful communication patterns supported by the AppFabric Service Bus, the one-way messaging between publishers and subscribers presents a special interest for inter-role communication. It supports multicast datagrams which provide a mechanism for sending a single message to a group of listeners that joined the same multicast group. Simply put, this communication pattern enables communicating a notification to a number of subscribers interested in receiving these notifications.

The support for “publish/subscribe” model in the AppFabric Service Bus promotes the relevant capabilities into a natural choice for loosely coupled communication between Azure roles. The role instances authenticate and register themselves on a Service Bus service namespace and choose the appropriate path from the global hierarchical naming system on which the subscriber would be listening to incoming events. The Service Bus creates all underlying transport plumbing and multicasts published events to the active listener registrations on a given rendezvous service endpoint (also known as topic).


Figure 2: Roles exchanging resource-intensive workload processing instructions via Service Bus using one-way multicast (publish/subscribe).

The one-way multicast eventing is available through the NetEventRelayBinding WCF extension provided by the Azure Service Bus. When compared against internal endpoints, the inter-role communication that uses Service Bus’ one-way multicast benefits from the following:

  • Subscribers can dynamically register themselves at runtime – there is no need to pre-configure the Azure roles to be able to send or receive messages (with an exception of access credentials that are required for connectivity to a Service Bus namespace);

  • Discoverability is no longer an issue as all subscribers that listen on the same rendezvous endpoint (topic) will equally be able to receive messages, irrespectively whether or not these subscribers belong the same hosted service deployment;

  • Inter-role communication across datacenters is no longer a limiting factor - Service Bus makes it possible to connect cloud applications regardless of the logical or physical boundaries surrounding these applications;

  • Multicast is a highly scalable messaging paradigm, although at the time of writing this paper, the number of subscribers for one-way multicast is constrained to fit departmental scenarios with fewer than 20 concurrent listeners (this limitation is subject to change).

Now that all the key benefits are evaluated, let’s jump to the technical implementation and see how a combination of the Service Bus one-way multicast messaging and some value-add capabilities found in the .NET Framework 4.0 help create a powerful communication layer for inter-role data exchange.

Note that the sample implementation below has been intentionally simplified to fit the scope of a demo scenario for this paper.

Defining Application Events

To follow along, download the full sample code from the MSDN Code Gallery. …

Valery continues with a detailed analysis of the sample source code.


Alik Levin announced Windows Identity Foundation (WIF)/Azure AppFabric Access Control Service (ACS) Survival Guide Published to the TechNet Wiki on 10/2/2010:

I have just published the Windows Identity Foundation (WIF) and Azure AppFabric Access Control Service (ACS) Survival Guide.

It has the following structure:

  • Problem Scope
    • What is it?
    • How does it fit?
    • How To Make It Work?
  • Case Studies
  • WIF/ACS Anatomy
    • Architecture
    • Identification (how a client identifies itself)
    • Authentication (how the client's credentials are validated)
    • Identity flow (how the token flows through the layers/tiers)
    • Authorization (how the relying party - application or service - decides to grant or deny access)
    • Monitoring
    • Administration
  • Quality Attributes
    • Supportability
    • Testability
    • Interoperability
    • Performance
    • Security
    • Flexibility
  • Content Channels
    • MSDN/Technet
    • Codeplex
    • Code.MSDN
    • Blogs
    • Channel9
    • SDK
    • Books
    • Conventions
    • Forums
  • Content Types
    • Explained
    • Architecture scenarios
    • Guidelines
    • How-to's
    • Checklists
    • Troubleshooting cheat sheets
    • Code samples
    • Videos
    • Slides
    • Documents
  • Related Technology
  • Industry
  • Additional Q&A

Constructive feedback on how to improve it is much appreciated.

Related Books

Alik Levin posted Azure AppFabric Access Control Service (ACS) v 2.0 High Level Architecture – REST Web Service Application Scenario to his MSDN blog on 9/30/2010:

This is a follow up to a previous post, Azure AppFabric Access Control Service (ACS) v 2.0 High Level Architecture – Web Application Scenario. This post outlines the high level architecture for a scenario where Azure AppFabric Access Control Service (ACS) v2 is involved in the authentication and identity flow process between a client and a RESTful Web Service. A good description of the scenario, including visuals and a solution summary, can be found here - App Scenario – REST with AppFabric Access Control. The sequence diagram can be found here - Introduction (skip to Web Service Scenario).

In this case there is no end-user involvement, so the User Experience part is irrelevant here.

It is important to mention when to use what for token signing. As per Token Signing:

  • Add an X.509 certificate signing credential if you are using the Windows Identity Foundation (WIF) in your relying party application.
  • Add a 256-bit symmetric signing key if you are building an application that uses OAuth WRAP.

These keys or certificates are used to protect tokens from tampering while in transit. These certificates and keys are not for authentication; they help maintain trust between the Azure AppFabric Access Control Service (ACS) and the Web Service.

AppFabric Access Control Service (ACS) v2 and RESTful Web Service Scenario

Try it out yourself using the bootstrap samples available here:

Related Books
More Info

Lori MacVittie (@lmacvittie) claimed Enterprise developers and architects beware: OAuth is not the double rainbow it is made out to be. It can be a foundational technology for your applications, but only if you’re aware of the risks in an introduction to her Mashable Sees Double Rainbows as Google Goes Gaga for OAuth essay of 9/29/2010:

OAuth has been silently growing as the favored mechanism for cross-site authentication in the Web 2.0 world. The ability to leverage a single set of credentials across a variety of sites reduces the number of username/password combinations a user must remember. It also inherently provides for a granular authorization scheme.

Google’s announcement that it now offers OAuth support for Google Apps APIs was widely mentioned this week including Mashable’s declaration that Google’s adoption implies all applications must follow suit. Now. Stop reading, get to it. It was made out to sound like that much of an imperative.

Google’s argument that OAuth is more secure than the ClientLogin model was a bit of a stretch considering the primary use of OAuth at this time is to integrate Web 2.0 and social networking sites – all of which rely upon a simple username/password model for authentication. True, it’s safer than sharing passwords across sites, but the security of data can still be compromised by a single username/password pair.

OAUTH on the WEB RELIES on a CLIENT-LOGIN MODEL

The premise of OAuth is that the credentials of another site are used for authentication but not shared or divulged. Authorization for actions and access to data are determined by the user on an app-by-app basis. Anyone familiar with SAML and assertions or Kerberos will recognize strong similarities in the documentation and underlying OAuth protocol. Similar to SAML and Kerberos, OAuth tokens - like SAML assertions or Kerberos tickets - can be set to expire, which is one of the core reasons Google claims it is “more” secure than the ClientLogin model.

OAuth has been a boon to Web 2.0 and increased user interaction across sites because it makes the process of authenticating and managing access to a site simple for users. Users can choose a single site to be the authoritative source of identity across the web and leverage it at any site that supports OAuth and the site they’ve chosen (usually a major brand like Facebook or Google or Twitter). That means every other site they use that requires authentication is likely using one central location to store their credentials and identifying data.

One central location that, ultimately, requires a username and a password for authentication.

Yes, ultimately, OAuth is relying on the security of the same ClientLogin model it claims is less secure than OAuth. Which means OAuth is not more secure than the traditional model because any security mechanism is only as strong as the weakest link in the chain. In other words, it is quite possible that a user’s entire Internet identity is riding on one username and password at one site. If that doesn’t scare you, it should. Especially if that user is your mom or child and they haven’t quite got the hang of secure passwords or manually change their passwords on a regular basis.

When all your base are protected by a single username and password, all your base are eventually belong to someone else.

Now, as far as the claim that the ability to expire tokens makes OAuth more secure, I’m not buying it. There is nothing in the ClientLogin model that disallows expiration of credentials. Nothing. It’s not inherently a part of the model, but it’s not disallowed in any way. Developers can and probably should include an expiration of credentials in any site but they generally don’t unless they’re developing for the enterprise because organizational security policies require it. Neither OAuth nor the ClientLogin model are more secure because the ability exists to expire credentials. Argue that the use of a nonce makes OAuth less susceptible to replay attacks and I’ll agree.  Argue that OAuth’s use of tokens makes it less susceptible to spying-based attacks and I’ll agree. Argue that expiring tokens is the basis for OAuth’s superior security model and I’ll shake my head.

SECURITY through DISTRIBUTED OBSCURITY

The trick with OAuth is that the security is inherently contained within the distributed, hidden nature of the credentials. A single site might leverage two, three, or more external authoritative sources (OAuth providers) to authenticate users and authorize access. It is impossible to tell which site a user might have designated as its primary authoritative source at any other site, and it is not necessarily (though it is likely, users are creatures of habit) the case that the user always designates the same authoritative source across external sites. Thus, the security of a given site relying on OAuth for authorization distributes the responsibility for security of credentials to the external, authorizing site. Neat trick, isn’t it? Exclusively using OAuth effectively surrounds a site with an SEP field. It’s Somebody Else’s Problem if a user’s credentials are compromised or permissions are changed because the responsibility for securing that information lies with some other site. You are effectively abrogating responsibility and thus control. You can’t enforce policy on any data you don’t control.

The distributed, user-choice aspect of OAuth obscures the authoritative source of those credentials, making it more difficult to determine which site should be attacked to gain access. It’s security through distributed obscurity, hiding the credential store “somewhere” on the web.

CAVEAT AEDIFICATOR

That probably comes as a shock at this point, but honestly the point here is not to lambast OAuth as an authentication and authorization model. Google’s choice to allow OAuth and stop the sharing of passwords across sites is certainly laudable given that most users tend to use the same username and password across every site they access. OAuth has reduced the number of places in which that particular password is stored and therefore might be compromised. OAuth is scriptable, as Google pointed out, and while the username/password model is also scriptable, doing so almost certainly violates about a hundred and fifty basic security precepts. Google is doing something to stop that, and for that they should be applauded.

OAuth can be a good choice for application authentication and authorization. It naturally carries with it a more granular permission (authorization) model that means developers don’t have to invent one from scratch. Leveraging granular permissions is certainly a good idea given the nature of APIs and applications today. OAuth is also a good way to integrate partners into your application ecosystem without requiring management of their identities on-site – allowing partners to identify their own OAuth provider as an authoritative source for your applications can reduce the complexity in supporting and maintaining their identities. But it isn’t more secure – at least not from an end-user perspective, especially when the authoritative source is not enforcing any kind of strict password and username policies. It is that end-user perspective that must be considered when deciding to leverage OAuth for an application that might have access to sensitive data. An application that has access to sensitive data should not rely on the security of a user’s Facebook/Twitter/<insert social site here> password. Users may internally create strong passwords because corporate policy forces them to. They don’t necessarily carry that practice into their “personal” world, so relying on them to do so would likely be a mistake. 

Certainly an enterprise architect can make good use of Google’s support for OAuth if s/he is cautious: the authoritative source should be (a) under the control of IT and (b) integrated into an identity store back-end capable of enforcing organizational policies regarding authentication, i.e. password expirations and strengths. When developing applications using Google Apps and deciding to leverage OAuth, it behooves the developer to be very cautious about which third party sites/applications can be used for authentication and authorization, because the only real point of control regarding the sharing of that data is in the choices allowed, i.e. which OAuth providers are supported. Building out applications that leverage an OAuth foundation internally will enable single-sign on capabilities in a way that eliminates the need for more complex architecture. It also supports the notion of “as a service” that can be leveraged later when moving toward the “IT as a Service” goal.

OAuth is not necessarily “more” secure than the model upon which it ultimately relies, though it certainly is more elegant from an architectural and end-user perspective. The strength of an application’s security when using OAuth is relative to the strength of the authoritative source of authentication and authorization. Does OAuth have advantages over the traditional model, as put forth by Google in its blog? Yes. Absolutely. But security of any system is only as strong as the weakest link in the chain and with OAuth today that remains in most cases a username/password combination.

That ultimately means the strength or weakness of OAuth security remains firmly in the hands (literally) of the end-user. For enterprise developers that translates into a “Caveat aedificator” or in plain English, “Let the architect beware.”


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

My (@rogerjenn) Windows Azure Uptime Report: OakLeaf Table Test Harness for September 2010 (99.54%) post of 10/3/2010 shows Windows Azure meets its SLA for two months in a row with a single instance.



Björn Axéll reported on 10/3/2010 release of Windows Azure Application Monitoring Management Pack v6.1.7686.0 of 10/1/2010:

The Windows Azure Management Pack enables you to monitor the availability and performance of applications that are running on Windows Azure.

Feature Summary

After configuration, the Windows Azure Management Pack offers the following functionality:

  • Discovers Windows Azure applications.
  • Provides status of each role instance.
  • Collects and monitors performance information.
  • Collects and monitors Windows events.
  • Collects and monitors the .NET Framework trace messages from each role instance.
  • Grooms performance, event, and the .NET Framework trace data from Windows Azure storage account.
  • Changes the number of role instances via a task.

NOTE! The management group must be running Operations Manager 2007 R2 Cumulative Update 3 [of 10/1/2010].

Download Windows Azure Application Monitoring Management Pack [v6.1.7686.0] from Microsoft


The Windows Azure Team posted an Update on the ASP.NET Vulnerability on 10/1/2010:

A security update has been released which addresses the ASP.NET vulnerability we blogged about previously.  This update is included in Guest OS 1.7, which is currently being rolled out to all Windows Azure data centers globally.  Some customers will begin seeing Guest OS 1.7 as early as today, and broad rollout will complete within the next two weeks.

If you’ve configured your Windows Azure application to receive automatic OS upgrades, no action is required to receive the update.  If you don’t use automatic OS upgrades, we recommend manually upgrading to Guest OS 1.7 as soon as it’s available for your application.  For more information about configuring OS upgrades, see “Configuring Settings for the Windows Azure Guest OS.”

Until your application has the update, remember to review Scott Guthrie’s blog post to determine if your application is vulnerable and to learn how to apply a workaround for additional protection.
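For reference, a sketch only (not part of the team's post): manual control of the guest OS is expressed with the osVersion attribute in the service configuration file. The value below is the configuration string listed for Guest OS 1.7; the service and role names are placeholders.

<?xml version="1.0"?>
<!-- Pinning the guest OS in ServiceConfiguration.cscfg; omitting the
     osVersion attribute opts the service in to automatic guest OS upgrades. -->
<ServiceConfiguration serviceName="MyService"
                      osVersion="WA-GUEST-OS-1.7_201009-01"
                      xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <Instances count="2" />
  </Role>
</ServiceConfiguration>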


The Windows Azure Team added Windows Azure Guest OS 1.7 (Release 201009-01) to the MSDN Library on 9/30/2010:

Updated: September 30, 2010

The following table describes release 201009-01 of the Windows Azure Guest OS 1.7:

Friendly name: Windows Azure Guest OS 1.7 (Release 201009-01)
Configuration value: WA-GUEST-OS-1.7_201009-01
Release date: October 1, 2010

Features

  • Stability and Security patch fixes applicable to Windows Azure OS
  • Latest security update to resolve a publicly disclosed vulnerability in ASP.NET

Security Patches

The Windows Azure Guest OS 1.7 includes the following security patches, as well as all of the security patches provided by previous releases of the Windows Azure Guest OS:

Bulletin ID Parent KB Vulnerability Description
MS10-047 981852 Vulnerabilities in Windows Kernel could allow Elevation of Privilege
MS10-048 2160329 Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Elevation of Privilege
MS10-049 980436 Vulnerabilities in SChannel could allow Remote Code Execution
MS10-051 2079403 Vulnerability in Microsoft XML Core Services Could Allow Remote Code Execution
MS10-053 2183461 Cumulative Security Update for Internet Explorer
MS10-054 982214 Vulnerabilities in SMB Server Could Allow Remote Code Execution
MS10-058 978886 Vulnerabilities in TCP/IP could cause Elevation of Privilege
MS10-059 982799 Vulnerabilities in the Tracing Feature for Services Could Allow an Elevation of Privilege
MS10-060 983589 Vulnerabilities in the Microsoft .NET Common Language Runtime and in Microsoft Silverlight Could Allow Remote Code Execution
MS10-070 2418042 Vulnerability in ASP.NET Could Allow Information Disclosure

Windows Azure Guest OS 1.7 is substantially compatible with Windows Server 2008 SP2, and includes all Windows Server 2008 SP2 security patches through August 2010.

Note:

When a new release of the Windows Azure Guest OS is published, it can take several days for it to fully propagate across Windows Azure. If your service is configured for auto-upgrade, it will be upgraded sometime after the release date indicated in the table above, and you’ll see the new guest OS version listed for your service. If you are upgrading your service manually, the new guest OS will be available for you to upgrade your service once the full roll-out of the guest OS to Windows Azure is complete.

See Also: Concepts

Configuring Operating System Versions


SpiveyWorks (@michael_spivey, @windowsazureapp) announced on 10/2/2010 a Body Mass Index Calculator running on Windows Azure:


Click Join In on the live page or here for more information about SpiveyWorks Notes.


Gunther Lenz posted Introducing the Nasuni Filer 2.0, powered by Windows Azure to the US ISV Evangelism blog on 10/1/2010:

Nasuni CEO Andres Rodriguez and Microsoft Architect Evangelist Gunther Lenz present Nasuni Filer 2.0. Now supporting Hyper-V, Windows Azure and VFS, the Nasuni Filer turns cloud storage into a file server with unlimited storage, snapshot technology and end-to-end encryption.

Join Nasuni and Microsoft for this free event and learn how to take advantage of the cost-savings and scalability of cloud storage.



Bill Hilf posted Windows Azure and Info-Plosion: Technical computing in the cloud for scientific breakthroughs to the Official Microsoft Blog on 9/30/2010:

Today Microsoft and Japan’s National Institute of Informatics (NII) announced a joint program that will give university researchers free access to Windows Azure cloud computing resources for the “Info-Plosion Project.”  This project is aimed at developing new and better ways to retrieve information and follows a similar agreement with the National Science Foundation to provide researchers with Windows Azure resources for scientific technical computing.

These cloud research engagement projects are helping usher in a new era of technical computing. Technical computing, a.k.a. high performance computing, is all about using many computers simultaneously to do complex calculations on massive amounts of data.  Extracting valuable insights from these oceans of data allows scientists to build sophisticated simulations and models of the world around us in an effort to answer large and complex questions.  Scientific research of all types, along with industries such as manufacturing, finance and digital content creation, is naturally a hotbed for technical computing.  But researchers still need better, simpler ways to take advantage of the technology.

As I’ve described in previous posts, Microsoft’s technical computing initiative is focused on empowering a broader group of people to solve some of the world’s biggest challenges. Our aim is mainstream technical computing with tools, platforms and services that take advantage of computing power across the desktop, servers and the cloud.

Bringing technical computing to the cloud is truly a cornerstone of our initiative.  As Dan Reed, corporate vice president of Technology Strategy and Policy, says “Cloud computing can transform how research is conducted, accelerating scientific exploration, discovery and results.”  By providing academics, as well as those in business and government, with access to virtually unlimited computing horsepower we can empower scientific breakthroughs that impact all walks of life.  And complementing this effort is investment in simpler, more efficient solutions to build technical computing applications.

Moving forward, you will hear more from me and others here at Microsoft about how we are bringing technical computing to the cloud. 

Stay tuned…it’s going to be exciting.

Bill is Microsoft’s General Manager of Technical Computing


David Chou posted Building Highly Scalable Java Applications on Windows Azure (JavaOne 2010) on 9/30/2010:

JavaOne has always been one of my favorite technology conferences, and this year I had the privilege to present a session there. Given my background in Java, previous employment at Sun Microsystems, and the work I’m currently doing with Windows Azure at Microsoft, it’s only natural to try to piece them together and find more ways to use them. Well, honestly, this also gives me an excuse to attend the conference, plus the co-located Oracle OpenWorld, along with 41,000 other attendees. ;)

Slides: Building Highly Scalable Java Applications on Windows Azure - JavaOne S313978

View on Docs.com http://docs.com/8FAZ.

A related article published on InfoQ may also provide some context around this presentation: http://www.infoq.com/news/2010/09/java-on-azure-theory-vs-reality.

Plus my earlier post on getting Jetty to work in Azure - http://blogs.msdn.com/b/dachou/archive/2010/03/21/run-java-with-jetty-in-windows-azure.aspx, which goes into a bit more technical detail on how a Java application can be deployed and run in Windows Azure.

Java in Windows Azure

So at the time of this writing, deploying and running Java in Windows Azure is conceptually analogous to launching a JVM and running a Java app from files stored on a USB flash drive (or files extracted from a zip/tar file without any installation procedures). This is primarily because Windows Azure isn’t a simple server/VM hosting environment. The Windows Azure cloud fabric provides a lot of automation and abstraction so that we don’t have to deal with server OS administration and management. For example, developers only have to upload application assets including code, data, content, policies, configuration files and service models, etc.; while Windows Azure manages the underlying infrastructure:

  • application containers and services, distributed storage systems
  • service lifecycle, data replication and synchronization
  • server operating system, patching, monitoring, management
  • physical infrastructure, virtualization, networking
  • security
  • “fabric controller” (automated, distributed service management system)

The benefit of this cloud fabric environment is that developers don’t have to spend time and effort managing the server infrastructure; they can focus on the application instead. However, the higher abstraction level also means we are interacting with sandboxes and containers, and there are constraints and limitations compared to the on-premise model where the server OS itself (or middleware and app server stack we install separately) is considered the platform. Some of these constraints and limitations include:

  • dynamic networking – requires interaction with the fabric to figure out the networking environment available to a running application. And as documented, at this moment, the NIO stack in Java is not supported because of its use of loopback addresses
  • no OS-level access – cannot install software packages
  • non-persistent local file system – have to persist files elsewhere, including log files and temporary and generated files

These constraints impact Java applications because the JVM is a container itself and needs this higher level of control, whereas .NET apps can leverage the automation enabled in the container. The good news is that the Windows Azure team is working hard to deliver many enhancements to help with these issues, and interestingly, in both directions in terms of adding more higher-level abstractions as well as providing more lower-level control.
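As a rough illustration of the "launch a JVM from the role package" approach David describes, here is a sketch that assumes the JRE and the application jar are packaged as content with the role (paths and names are hypothetical):

using System;
using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class JavaWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // %RoleRoot% points at the root of the deployed role package;
        // application files live under approot.
        string roleRoot = Environment.GetEnvironmentVariable("RoleRoot");

        var startInfo = new ProcessStartInfo
        {
            FileName = roleRoot + @"\approot\jre\bin\java.exe",
            Arguments = "-jar myapp.jar",
            WorkingDirectory = roleRoot + @"\approot\app",
            UseShellExecute = false
        };

        using (Process jvm = Process.Start(startInfo))
        {
            // If the JVM exits, let Run() return so the fabric recycles the
            // role instance and restarts the process.
            jvm.WaitForExit();
        }
    }
}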

Architecting for High Scale

So at some point we will be able to deploy full Java EE application servers and enable clustering and stateful architectures, but for really large scale applications (at the level of Facebook and Twitter, for example), the current recommendation is to leverage shared-nothing and stateless architectures. This is largely because, in cloud environments like Azure, the vertical scaling ceiling for physical commodity servers is not very high, and adding more nodes to a cluster architecture means we don’t get to leverage the automated management capabilities built into the cloud fabric. Add to that the need to design for system failures (service resiliency) as opposed to assuming a fully-redundant hardware infrastructure, as we typically do with large on-premise server environments.


(Pictures courtesy of LEGO)

The top-level recommendation for building a large-scale application in commodity server-based clouds is to apply more distributed computing best practices, because we’re operating in an environment with more, smaller servers, as opposed to fewer, bigger servers. The last part of my JavaOne presentation goes into some of those considerations. Basically - small pieces, loosely coupled. It’s not like traditional server-side development where we’d try to get everything accomplished within the same process/memory space, per user request. Applications can scale much better if we defer (async) and/or parallelize as much work as possible; very similar to Twitter’s current architecture. So we could end up having many front-end Web roles that just receive HTTP requests, persist some data somewhere, fire off event(s) into the queue, and return a response. Then another layer of Worker roles can pick up the messages from the queue and do the rest of the work in an event-driven manner. This model works great in the cloud because we can scale the front-end Web roles independently of the back-end Worker roles, plus not having to worry about physical capacity.
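Here is a minimal sketch of that Web-role-to-Worker-role handoff using the queue API from the StorageClient library in the Windows Azure SDK (queue and method names are placeholders):

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class WorkItemQueue
{
    // Web role side: persist what you must, drop an event on the queue,
    // and return the HTTP response quickly.
    public static void Publish(CloudStorageAccount account, string workItemId)
    {
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("workitems");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(workItemId));
    }

    // Worker role side: drain the queue and do the heavy lifting in an
    // event-driven manner. Processing should be idempotent because a
    // message can be delivered more than once.
    public static void ProcessNext(CloudStorageAccount account)
    {
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("workitems");
        CloudQueueMessage message = queue.GetMessage();
        if (message != null)
        {
            // ... resource-intensive work goes here ...
            queue.DeleteMessage(message);
        }
    }
}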


In this model, applications need to be architected with these fundamental principles:

  • Small pieces, loosely coupled
  • Distributed computing best practices
    • asynchronous processes (event-driven design)
    • parallelization
    • idempotent operations (handle duplicity)
    • de-normalized, partitioned data (sharding)
    • shared nothing architecture
    • optimistic concurrency
    • fault-tolerance by redundancy and replication
    • etc.

Thus traditionally monolithic, sequential, and synchronous processes can be broken down into smaller, independent/autonomous, and loosely coupled components/services. As a result of the smaller footprint of processes and loosely-coupled interactions, the overall architecture will exhibit better system-level resource utilization (it is easier to handle more, smaller, and faster units of work), improved throughput and perceived response time, and superior resiliency and fault tolerance, leading to higher scalability and availability.

Lastly, even though this conversation advocates a different way of architecting Java applications to support high scalability and availability, the same fundamental principles apply to .NET applications as well.

According to Elizabeth White’s Windows Azure with Java at Cloud Expo Silicon Valley post of 10/29/2010, “Microsoft’s David Chou [will] discuss building highly scalable & available applications on Windows Azure with Java” at Cloud Expo Silicon Valley.


Maarten Balliauw posted a link to his PHP on Windows and on Azure slide deck on 9/30/2010:

As promised during my session on PHP Summer Camp in Lisbon, Portugal, here's the slide deck: PHP on Windows and on Azure.

View more presentations from Maarten Balliauw.

Thanks for joining!


Brian Hitney continues his description of how to use a Distributed Cache in Azure: Part II with this 9/30/2010 post:

In my last post, I talked about creating a simple distributed cache in Azure. In reality, we aren’t creating a true distributed cache – what we’re going to do is allow each server to manage its own cache, but we’ll use WCF and inter-role communication to keep them in sync. The downside of this approach is that we’re wasting n times more RAM because each server has to maintain its own copy of the cached item. The upside is: this is easy to do.

So let’s get the obvious thing out of the way. Using the built-in ASP.NET Cache, you can add something to the cache like so, inserting an object that expires in 30 minutes:

Cache.Insert("some key",
    someObj,
    null,
    DateTime.Now.AddMinutes(30),
    System.Web.Caching.Cache.NoSlidingExpiration);

With this project, you certainly could use the ASP.NET Cache, but I decided to use the Patterns and Practices Caching Block. The reason for this: it works in Azure worker roles. Even though we’re only looking at the web roles in this example, it’s flexible enough to go into worker roles, too. To get started with the caching block, you can download it here on CodePlex.

The documentation is pretty straightforward, but what I did was just set up the caching configuration in the web.config file.  For worker roles, you’d use an app.config:

<cachingConfiguration defaultCacheManager="Default Cache Manager">
  <backingStores>
    <add name="inMemory"
         type="Microsoft.Practices.EnterpriseLibrary.Caching.BackingStoreImplementations.NullBackingStore, Microsoft.Practices.EnterpriseLibrary.Caching" />
  </backingStores>

  <cacheManagers>
    <add name="Default Cache Manager"
         type="Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager, Microsoft.Practices.EnterpriseLibrary.Caching"
         expirationPollFrequencyInSeconds="60"
         maximumElementsInCacheBeforeScavenging="250"
         numberToRemoveWhenScavenging="10"
         backingStoreName="inMemory" />
  </cacheManagers>
</cachingConfiguration>

The next step was creating a cache wrapper in the application.  Essentially, it’s a simple static class that wraps all the insert/deletes/etc. from the underlying cache.  It doesn’t really matter what the underlying cache is.  The wrapper is also responsible for notifying other roles about a cache change.   If you’re a purist, you’re going to see that this shouldn’t be a wrapper, but instead be implemented as a full fledged cache provider since it isn’t just wrapping the functionality.   That’s true, but again, I’m going for simplicity here – as in, I want this up and running _today_, not in a week.

Remember that the specific web app dealing with this request knows whether or not to flush the cache.  For example, it could be a customer updates their profile or other action that only this server knows about.  So when adding or removing items from the cache, we send a notify flag that instructs all other services to get notified…

public static void Add(string key, object value, CacheItemPriority priority,
    DateTime expirationDate, bool notifyRoles)
{
    _Cache.Add(key,
        value,
        priority,
        null,
        new AbsoluteTime(expirationDate));

    if (notifyRoles)
    {
        NotificationService.BroadcastCacheRemove(key);
    }
}

public static void Remove(string key, bool notifyRoles)
{
    _Cache.Remove(key);

    if (notifyRoles)
    {
        Trace.TraceWarning(string.Format("Removed key '{0}'.", key));
        NotificationService.BroadcastCacheRemove(key);
    }
}

The Notification Service is surprisingly simple, and this is the cool part about the Windows Azure platform.  Within the ServiceDefinition file (or through the properties page) we can simply define an internal endpoint:

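In ServiceDefinition.csdef, the declaration behind that properties page looks roughly like the following sketch (role and endpoint names are placeholders, not necessarily those in Brian's project):

<WebRole name="WebRole1">
  <Endpoints>
    <!-- TCP internal endpoint used for the WCF inter-role notifications;
         the fabric assigns the actual IP address and port at runtime. -->
    <InternalEndpoint name="NotificationService" protocol="tcp" />
  </Endpoints>
</WebRole>

At runtime each instance resolves its peers through RoleEnvironment.Roles and each instance's InstanceEndpoints collection, which is what makes the "everything magically works" behavior possible as instances are added or removed.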

This allows all of our instances to communicate with one another. Even better, this is all maintained by the static RoleEnvironment class. So, as we add/remove instances in our app, everything magically works. A simple WCF contract to test this prototype looked like so:

[ServiceContract]
public interface INotificationService
{
    [OperationContract(IsOneWay = true)]
    void RemoveFromCache(string key);

    [OperationContract(IsOneWay = true)]
    void FlushCache();

    [OperationContract(IsOneWay = false)]
    int GetCacheItemCount();

    [OperationContract(IsOneWay = false)]
    DateTime GetSettingsDate();
}

In this case, I want to be able to tell another service to remove an item from its cache, to flush everything in its cache, to give me the number of items in its cache as well as the ‘settings date’ which is the last time the settings were updated.  This is largely for prototyping to make sure everything is in sync.

We’ll complete this in the next post where I’ll attach the project you can run yourself, but the next steps are creating the service and a test app to use it.    Check back soon!

Maarten Balliauw posted Remix 2010 slides and sample code on 9/29/2010:

As promised during my session on Remix 10 yesterday in Belgium, here's the slide deck and sample code.

Building for the cloud: integrating an application on Windows Azure

Abstract: “It’s time to take advantage of the cloud! In this session Maarten builds further on the application created during Gill Cleeren’s Silverlight session. The campaign website that was developed in Silverlight 4 still needs a home. Because the campaign will only run for a short period of time, the company chose for cloud computing on the Windows Azure platform. Learn how to leverage flexible hosting with automated scaling on Windows Azure, combined with the power of a cloud hosted SQL Azure database to create a cost-effective and responsive web application.”

SlideShare: Building for the cloud integrating an application on windows azure - remix2010

Source code used in the session: TDD.ChristmasCreator.zip (686.86 kb)

Thanks for joining and bearing with me during this tough session with very sparse bandwidth!


<Return to section navigation list> 

VisualStudio LightSwitch

Beth Massi (@BethMassi) announced on 10/1/2010 that she’ll perform on MSDN Radio on Oct. 11th: LightSwitch with Yours Truly:

On Monday, October 11th at 9:00 AM PST I’ll be talking with Mike Benkovich and Mithun Dhar about Visual Studio LightSwitch on MSDN Radio. MSDN Radio is a weekly Developer talk-show that helps answer your questions about the latest Microsoft news, solutions, and technologies. Head on over to the registration page so you can call in and ask me questions. Here’s the 411 on the show:

MSDN Radio: Switching the Lights on with Beth Massi
Application development is working with data and as the tools become more refined, our jobs become easier and less code is required. The challenge is getting the code right. The latest release of Microsoft Visual Studio LightSwitch brings a great tool for working with data in a streamlined way. It logically organizes the project by data, but behind the scenes is built on Microsoft .NET. This week we talk with Beth Massi on what’s exciting and possible with LightSwitch.

Hope to see.. um.. hear you there! ;-)

Sounds like Beth (a.k.a., the LightSwitch Goddess) is planning on two-way radio.


Beth Massi (@BethMassi) reported Vision Clinic Walkthrough sample code now available in C# on 9/28/2010:

Yesterday we updated the LightSwitch Code Samples page to also include a C# download that complements the Walkthrough: Creating the Vision Clinic Application available in the MSDN library, which was released for Beta 1 with Visual Basic code. The library walkthrough page will be updated in the next refresh but we wanted to get the sample application to you now.

All the samples are available on the LightSwitch Developer Center Learn page along with other learning content like How Do I videos and the Training Kit.  As we release more resources they will show up here on LightSwitch Developer Center (http://msdn.com/lightswitch) so bookmark this page :-)


Enjoy,
-Beth Massi, Visual Studio Community

It’s refreshing to see the VB.NET version issue first.


David Harms promoted T4-based code generation for LightSwitch: A powerful tool with a critical limitation on 9/9/2010 (missed when posted to the DevRoadmaps (Beta) blog):

On August 23 Microsoft released the first beta version of LightSwitch, the company's application development environment for Silverlight line-of-business applications (both web and desktop). Although ostensibly targeted at non-developers (think MS Access), LightSwitch quickly drew a lot of interest from the business software development community.

Having played with LightSwitch for a few days, I have to say it's pretty cool.

It's just not what I hoped it would be. At least not yet.

LightSwitch is an application generator, and a pretty slick one at that. It does all the grunt work of building multi-tiered business software; you can build a basic (or not so basic) CRUD app without writing any code, although there are certainly places where you can plug in your own code. You can also use custom Silverlight controls.

So if  LightSwitch lets me create apps fast, and I can extend those apps, what's not to like?

Here's the rub: while LightSwitch is an application generator, it is not an application code generator. There's very little generated source code; instead LightSwitch creates a model, and it interprets that model at runtime.

There are some advantages to this approach, but there are also some key disadvantages.

I can think of two advantages to not generating actual source code. One, no one's tempted to change that source code and then have their changes wiped out on the next generation cycle. Two, MS is completely free to change the internal implementation of the application, since no one really knows what it is anyway. When LightSwitch is used to create smallish, standalone applications there's no downside here.

A black box is a two-edged sword

Hiding the implementation of a LightSwitch application inside the LightSwitch runtime may be an advantage for Microsoft and for small scale apps, but it's a huge disadvantage for experienced developers building big business apps. And from what I can see a lot of experienced developers are showing interest in LightSwitch.

What if LightSwitch generated actual source code instead of interpreting the model at runtime? I'm not talking about generate-once and modify; I'm talking about generate-repeatedly and don't use (unless you really do want to take it out of the codegen cycle, which I think the vast majority would not want to do).

Now, if you're just generating the code and not modifying it, it may seem like there's no benefit there. But there are at least four good reasons to generate code:

  • Reusability - there would be actual assemblies that could be reused elsewhere
  • Visibility - developers could learn from, and better interact with, the overall application architecture
  • Upgradability - if, for any reason, a LightSwitch app proved insufficient it could be taken over as pure source and enhanced.
  • Security - in the unlikely event that MS abandons LightSwitch, there's always the code.

Let's look at each of these in more detail.

Reusability

Reusability is integral to object-oriented programming; we write code not just for one task, but in such a way that it can be used for other as yet unknown tasks.

But LightSwitch applications are islands unto themselves. They don't encourage reusability by anything outside of the application. You can't take an assembly containing, say, the entity model and reuse that in a WPF application. And you can't use a LightSwitch screen anywhere except in that LightSwitch application.

Note: Presumably it's possible for a non-LightSwitch app to consume the RIA services exposed by the LightSwitch middle tier, although I also assume you'd have to deal with whatever security measures are already in place to prevent unauthorized use of those services.

If LightSwitch generated actual source code then you'd have assemblies, you'd have XAML code, and you could use these anywhere you like. You'd be able to fully leverage LightSwitch's automated approach to application creation.

Visibility

Right now LightSwitch applications are a black box. Black boxes can be good: developers willingly accept abstractions all the time. For instance, I don't need or want to know the internal workings of my ORM layer.

But LightSwitch's abstraction level is crazy high. It hides the entire structure of my application. That may be good enough for a non-developer, but I don't want to just know the app exists, I want and need to know how it works. Understanding application architecture is important for professional developers; it's an essential part of the craft.

Upgradability

LightSwitch is highly extensible, and I think as time goes on we'll see even more ways to extend LightSwitch apps. So for many devs there may never come a time when LS just doesn't get the job done and some part of it needs to be taken to source and modified. Good thing, because there's no way to do that at present.

But what do you do if you invest a bunch of time in LS dev and you hit a roadblock? If LS had an  upgrade (or downgrade, if you prefer) path to source, you'd probably have a way out.

Admittedly this is a weaker argument for having source code generation. Unless you're one of the few who need it.

Security

LightSwitch appears to have the full support of Microsoft and increasing support from developers. I have no reason whatsoever to expect that it will one day be abandoned. But what if that did happen? How would LightSwitch developers move forward?

For small apps, the risk doesn't appear very high. Chances are if LS is replaced by something else, it'll be even easier to build simple apps with that new tool. But what about large scale application development? The bigger the app, the bigger the risk. Unless there's source.

Large scale development and the instant application architecture

In fact, size has everything to do with the suitability of LightSwitch development. The smaller the app, the more appealing LightSwitch looks. Given that MS has indicated LightSwitch really isn't a pro-level tool, perhaps that's good enough.

But the level of interest from experienced developers should be telling Microsoft that there's a huge need out there that LightSwitch could fill.

The great benefit of LightSwitch is not that it lets you build apps without source code, it's that it provides something I like to call the Instant Application Architecture, or IAA.

Building big business applications in .NET is hard work. First you have to work out what kind of application architecture you need. How many tiers will there be? Will you use a service layer? What will your data layer look like? What are your UI tier deployment options?

And once you've decided on the overall architecture, you have to make implementation choices. Will you use standard controls? Third party controls? Custom controls? All three? How will you handle client and server side validation? What about security? Where will your business logic reside? How will you design your user interface?

There are only 87 people* on this planet whose understanding of .NET application design choices is so complete they can architect a major business app without once having to Google something with Bing.

I don't want to be one of those 87 people (well, I do, but I'm not smart enough). What I really want is to be able to reap the benefit of those 87 giant brains; I want them to design the architecture and put it in a box and send it to me. And then I'll add my data and my screens and my custom code and reconstitute everything into a professionally architected business application. Presto! Instant Application Architecture!

So here's what's beautiful, and tempting, about LightSwitch: it really does provide one such IAA. But it's largely non-reusable and completely opaque. That's fine for smaller applications, but when you start talking about hundreds and perhaps thousands of screens, that LightSwitch black box starts looking like one scary big dependency.

It doesn't have to be this way: the black box dependency doesn't have to be there.

As long as LightSwitch can generate actual source code.

How would code generation work?

Microsoft does, in fact, have some wonderful code generation technology in house. It's called T4, and it's been in Visual Studio since 2008. Microsoft uses T4 all over the place, not least in Entity Framework,  ASP.NET MVC and UML tools.

What T4 lacks is any kind of business-oriented modeling front end.

What LightSwitch lacks is a way to generate code out of its business-oriented model.

Anybody see yet where this is headed?

In LightSwitch beta 1 the intelligence that transforms the model into the running application is hidden inside LS itself. But what if Microsoft wrote the code to transform the model into the application as T4 templates? And what if anyone could write their own T4 templates to work with the model, or even to interact with the default T4 templates? You'd suddenly open up a whole new world. You'd have an abstract model with replaceable implementations.
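To make the T4 idea concrete, here is a tiny, purely hypothetical template (not anything LightSwitch actually ships) that walks a hard-coded stand-in for a model and emits partial entity classes:

<#@ template language="C#" #>
<#@ output extension=".cs" #>
<#
    // Stand-in for the model; a real generator would read the LightSwitch
    // model (e.g., the .lsml file) instead of this hard-coded list.
    var entities = new[] { "Customer", "Invoice" };
#>
namespace MyApp.Entities
{
<# foreach (var entity in entities) { #>
    public partial class <#= entity #>
    {
        public int Id { get; set; }
    }
<# } #>
}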

Right now there's at least one major downside to generating code: by interpreting the model at runtime, LightSwitch also makes it possible to make changes to the application at runtime. That capability could be difficult to reconcile with a code generation approach. One way would be to offer two modes: in code generation mode you'd forego the option of making runtime changes to the application.

Bringing LightSwitch's hidden application architecture out into the open would allow for the same kind of rich community feedback that MS has enjoyed on projects like ASP.NET MVC. And it would provide an opportunity for alternative architectures as well. Why shouldn't you be able to use LightSwitch to build WPF desktop apps, or Windows Phone 7 apps?

Generating code does raise the risk that developers will create dependencies on code that may need to change in the future. There are ways to mitigate this risk, such as creating a new template set for a major breaking change such that anyone not wanting to go through the migration can still use the older template set.

Summary

LightSwitch is a promising new tool for small-scale business application development, but it's severely limited for large scale by the insular nature of the applications it creates. Code reuse is difficult or impossible outside of a given application, because the application is a runtime interpretation of a model, rather than generated, compiled source code.

Given the increasing interest in LightSwitch among professional developers,  I think the rewards of generating source code greatly outweigh the risks. It should be possible to achieve code generation without losing the existing functionality and deserved appeal of LightSwitch for smaller-scale application development.


<Return to section navigation list> 

Windows Azure Infrastructure

The Microsoft Most Valued Professionals Team brought the number of Azure MVPs up to 12 in September 2010, according to this list:

Chris J.T. Auld: Windows Azure: Architecture: You can usually find Chris Auld at events around the world by walking up the stack from his crazy yellow shoes. Chris is the Director of Strategy and Innovation at Intergen, a …

Guillaume Belmas: Windows Azure: Architecture: A software architect for four years at Exakis, Guillaume has focused mainly on distributed architecture problems and on the industrialization of develop...

Jim Zimmerman: Windows Azure: Development: Jim Zimmerman is currently a Visual Developer – ASP/ASP.NET MVP. He speaks on various .NET related topics including Azure, Ajax, ORM, and ASP.NET MVC at Code Camps and .NET use...

Michael Wood: Windows Azure: Architecture: Michael Wood is a Microsoft Practice Director for Strategic Data Systems in Centerville, OH, but lives across the river in Kentucky. He describes himself as a problem solving, …

Viktor Shatokhin: Windows Azure: Kiev, Ukraine

Cory Fowler: Windows Azure: Development

Sébastien Warin: Windows Azure: R & D

Michael S. Collier: Windows Azure: Architecture: Michael is an Architect with Centric Consulting in their Columbus, Ohio office. He has nearly 10 years of experience building Microsoft based applications for a wide range of …

Brent Stineman: Windows Azure: Brent’s nearly 20 year career has spanned platforms from the mainframe to mobile devices. Brent started working with the Windows Azure platform early in its CTP phase, and is n...

David Pallmann: Windows Azure: Architecture: David Pallmann is the GM of App Dev for Neudesic, a national Microsoft SI partner, where he leads Windows Azure business development, delivery, and IP. David has 3 decades of e...

Steven Nagy: Windows Azure: Development: Consulting from the cloud

Jason Milgram: Windows Azure: Jason is the founder of Linxter and architect of the Linxter technology, a message-oriented cloud middleware solution. He has been working with computers ever since he bought h...

The actual number as of 10/4/2010 might be 15, because David Makogon, who’s not in the preceding list, blogged I’ve been awarded Azure MVP! on 10/2/2010. Similarly, Panagiotis Kefalidis said “I was awarded with the Windows Azure MVP title” in his Long time, no see.. post of 10/3/2010. Rainier Stropek tweeted on 10/4/2010: “Hurra! Bin seit heute offiziell #MVP for Windows #Azure :-)” (“Hooray! As of today I’m officially an #MVP for Windows #Azure :-)”).


•• Richard Blackden reported “Steve Ballmer will bound into a lecture hall at the London School of Economics on Tuesday in evangelical mood” in a deck to his Cloud computing: will Microsoft and its rivals find a silver lining? article of 10/2/2010 for the Telegraph.co.uk’s Finance section:


Steve Ballmer is likely to get animated when talking about cloud computing Photo: Getty

The chief executive of Microsoft is coming to the UK to explain the multi-billion dollar bet that the world’s biggest software company and a poster boy for corporate America is making.

The wager? That the era in which companies pay to have software installed on computers is drawing to an end. That services such as sending emails, using documents, managing calendars and updating spreadsheets will no longer be tied to an individual computer but be accessible everywhere via the internet, and that companies will only pay for how much they use. And, finally, that many businesses will no longer bother having their own computer networks at all. Welcome, then, to the world of cloud computing.

The phrase, in which the word cloud is used as a metaphor for the internet, has been generating gigabytes of excitement among technologists, developers and futurologists for the past three years. The best analogy to bust the jargon, say experts, is to consider how homes get their electricity. Few have their own generators on the premises - instead people call on an electricity provider to power up a microwave, turn on a kettle or light a room as they need to.

In the era of cloud computing, businesses will treat computing services in the same way, sharing networks with other companies and paying only for what they use. Technology watchers say it’s as fundamental a change as the advent of mainframe computers in the 1960s, the development of servers and the arrival of the internet itself.

“The shift to cloud computing is huge. It’s one of those shifts that happen in technology once a decade or so,” said Sarah Friar, an analyst at Goldman Sachs in San Francisco. “It’s not something that anyone of any size can afford to ignore.”

And it’s no longer just the preserve of theory, either. It’s shaping strategy in boardrooms, has fuelled the boom in technology deals this year and will help define the technology industry’s next generation of winners and losers.

All of which explains why Ballmer will be in London talking clouds. Although the lecture hall will be crammed full of students, his real audience will be the vast sweep of businesses in the UK and Europe - both big and small - who are planning their IT budgets for the next few years. For North America has led the way on spending on cloud computing, accounting for 58pc of total spend this year, according to research firm Gartner, compared with 24pc for western Europe.

The numbers Microsoft gives suggest its bet is a real one. By next year, the Seattle-based company plans to be spending 90pc of its annual $9.5bn research and development budget on cloud computing. It already has a range of web-based software products, including Office Web Apps and Windows Azure, and 70pc of the 40,000 of its staff who work on software are in this field. [Emphasis added.]

But sceptics wonder whether Microsoft’s enthusiasm resembles that of the evangelist who is still trying to convince themselves to really believe.

“The prevailing wisdom is that Microsoft has been dragged kicking and screaming into the cloud by Google,” said David Smith, who tracks the industry for Gartner. Google’s web-based drive into Microsoft’s heartland of e-mail and word processing has been aggressive, and the search engine says it can provide it at less than half the price. There’s no doubt that cloud computing’s embrace by Microsoft is not a completely warm one.

Stephen Elop, who ran Microsoft’s division that sells software to businesses until he left last month to head Nokia, has called cloud a “constructive disruption”. The division enjoys a 64pc profit margin and has, in the eyes of critics of the software giant and its Windows operating system, been a licence to print money.

The majority of Wall Street analysts expect those margins will come under pressure as Microsoft provides more lower-cost web-based alternatives and competition increases. But if the company had been a reluctant convert, the camp that still doubts its seriousness is dwindling.

“Historically they were pushed into it but now they are full[y] embracing it,” said Colin Gillis of BGC Partners. “They are a cloud-first company.” Microsoft, which declines to break out the profits it makes from cloud computing, argues that it should generate more revenue as it looks after companies’ networks and provides more support. …

Read more here.


Todd Hoff claimed “Facebook's own internal clients were basically running a DDOS attack on their own database servers” in his Facebook and Site Failures Caused by Complex, Weakly Interacting, Layered Systems post of 9/30/2010 to the High Scalability blog:

Facebook has been so reliable that when a site outage does occur it's a definite learning opportunity. Fortunately for us we can learn something because in More Details on Today's Outage, Facebook's Robert Johnson gave a pretty candid explanation of what caused a rare 2.5 hour period of down time for Facebook. It wasn't a simple problem. The root causes were feedback loops and transient spikes caused ultimately by the complexity of weakly interacting layers in modern systems. You know, the kind everyone is building these days. Problems like this are notoriously hard to fix and finding a real solution may send Facebook back to the whiteboard. There's a technical debt that must be paid.

The outline and my interpretation (reading between the lines) of what happened is:

  • Remember that Facebook caches everything. They have 28 terabytes of memcached data on 800 servers. The database is the system of record, but memory is where the action is. So when a problem happens that involves the caching layer, it can and did take down the system.
  • Facebook has an automated system that checks for invalid configuration values in the cache and replaces them with updated values from the persistent store. We are not told what the configuration property was, but since configuration information is usually important centralized data that is widely shared by key subsystems, this helps explain why there would be an automated background check in the first place.
  • A change was made to the persistent copy of the configuration value which then propagated to the cache.
  • Production code thought this new value was invalid, which caused every client to delete the key from the cache and then try to get a valid value from the database. Hundreds of thousands of queries a second, which would normally have been served at the caching layer, went to the database, which crushed it utterly. This is an example of the Dog Pile Problem (see the sketch after this list). It's also an example of the age-old reason why having RAID is not the same as having a backup. On a RAID system when an invalid value is written or deleted, it's written everywhere, and only valid data can be restored from a backup.
  • When a database fails to serve a request, applications will often simply retry, which spawns even more requests, which makes the problem exponentially worse. CPU is consumed, memory is used up, long locks are taken, networks get clogged. The end result is something like the Ant Death Spiral picture at the beginning of this post. Bad juju.
  • A feedback loop had been entered that didn’t allow the databases to recover. Even if a valid value had been written to that database it wouldn't have mattered. Facebook's own internal clients were basically running a DDOS attack on their own database servers. The database was so busy handling requests no reply would ever be seen from the database, so the valid value couldn't propagate. And if they put a valid value in the cache that wouldn't matter either because all the clients would still be spinning on the database, unaware that the cache now had a valid value.
  • What they ended up doing was: fix the code so the value would be considered valid; take down everything so the system could quiet down and restart normally.
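
To make the Dog Pile Problem concrete, here is a minimal single-process C# sketch of one common mitigation (my own illustration, not Facebook's code): when a key is missing from the cache, only one caller per key rebuilds it from the database while the others wait and then re-read the cache. A distributed equivalent would take a short-lived lease in memcached itself (for example, an ADD with a TTL) rather than an in-process semaphore.

    using System;
    using System.Collections.Concurrent;
    using System.Threading;

    // Sketch of a dog-pile guard: on a cache miss, only one caller per key hits
    // the database; concurrent callers wait on the same gate and then re-read
    // the cache instead of stampeding the persistent store.
    public class DogPileGuardedCache
    {
        private readonly ConcurrentDictionary<string, string> _cache =
            new ConcurrentDictionary<string, string>();
        private readonly ConcurrentDictionary<string, SemaphoreSlim> _gates =
            new ConcurrentDictionary<string, SemaphoreSlim>();
        private readonly Func<string, string> _loadFromDatabase;  // hypothetical loader supplied by the caller

        public DogPileGuardedCache(Func<string, string> loadFromDatabase)
        {
            _loadFromDatabase = loadFromDatabase;
        }

        public string Get(string key)
        {
            string value;
            if (_cache.TryGetValue(key, out value))
                return value;                                      // fast path: cache hit

            var gate = _gates.GetOrAdd(key, _ => new SemaphoreSlim(1, 1));
            gate.Wait();                                           // only one rebuilder per key proceeds
            try
            {
                if (_cache.TryGetValue(key, out value))
                    return value;                                  // someone else already refilled it

                value = _loadFromDatabase(key);                    // exactly one database hit per miss
                _cache[key] = value;
                return value;
            }
            finally
            {
                gate.Release();
            }
        }
    }

The important property is that a burst of concurrent misses produces one database query, not hundreds of thousands.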

This kind of thing happens in complex systems as abstractions leak all over each other at the most inopportune times. So the typical Internet reply to every failure of "how could they be so stupid, that would never happen to someone as smart as me" doesn't really apply. Complexity kills. Always.

Based on nothing but pure conjecture, what are some of the key issues here for system designers?

  • The Dog Pile Problem has a few solutions, and perhaps Facebook will add one of them, but perhaps their system has been so reliable it hasn't been necessary. Should they take the hit or play the percentages that it won't happen again or that other changes can mitigate the problem? A difficult ROI calculation when you are committed to releasing new features all the time.
  • The need for a caching layer in the first place, with all the implied cache coherency issues, is largely a function of the inability of the database to serve as both an object cache and a transactional data store. Will the need for a separate object cache change going forward? Cassandra, for example, has added a caching layer that, along with its key-value approach, may reduce the need for external caches for database-type data (as opposed to HTML fragment caches and other transient caches).
  • How did invalid data get into the system in the first place? My impression from the article was that maybe someone did an update by hand so that the value did not go through a rigorous check. This happens because integrity checks aren't centralized in the database anymore; they are in code, and that code can often be spread out and duplicated in many areas. When updates don't go through a central piece of code it's an easy thing to enter a bad value. Yet the article seemed to also imply that the value entered was valid, it's just that the production software didn't think it was valid. This could argue for an issue with software release and testing policies not being strong enough to catch problems. But Facebook makes a hard push for getting code into production as fast as possible, so maybe it's just one of those things that will happen? Also, data is stored in MySQL as a BLOB, so it wouldn't be possible to do integrity checks at the database level anyway. This argues for using a database that can handle structured value types natively.
  • Background validity checkers are a great way to slowly bring data into compliance. They are usually applied, though, to data with a high potential to be unclean, like when there are a lot of communication problems and updates get wedged or dropped, or when transactions aren't used, or when attributes like counts and relationships aren't calculated in real-time. Why would configuration data be checked when it should always be solid? Another problem is again that the validation logic in the checker can easily be out of sync with validation logic elsewhere in the stack, which can lead to horrendous problems as different parts of the system fight each other over who is right.
  • The next major issue is how the application code hammered the database server. Now I have no idea how Facebook structures their code, but I've often seen this problem when applications are in charge of writing error recovery logic, which is a very bad policy. I've seen way too much code like this: while (true) { slam_database_with_another_request(); sleep(1); }. This system will never recover when the sh*t really hits the fan, but it will look golden on trivial tests. Application code shouldn't decide policy because this type of policy is really a global policy. It should be moved to a centralized network component on each box that is fed monitoring data and can tell what's happening with the networks and the services running on them. The network component would issue up/down events that communication systems running inside each process would know to interpret and act upon. There's no way every exception handler in every application can embody this sort of intelligence, so it needs to be moved to a common component and out of application code. Application code should never ever have retries. Ever. It's just inviting the target machine to die from resource exhaustion. In this case a third-party application is used, but with your own applications it's very useful to be able to explicitly put back pressure on clients when a server is experiencing resource issues (see the circuit-breaker sketch after this list). There's no need to go Wild West out there. This would have completely prevented the DDOS attack.
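
As one sketch of what moving retry policy out of application code might look like, here is a minimal circuit breaker in C# (my own illustration and an assumption, not the component Facebook uses): after a run of consecutive failures it rejects calls immediately for a cool-down period, so callers back off instead of hammering an already overloaded database.

    using System;

    // Minimal circuit breaker: a shared policy point that stops blind retries.
    // After too many consecutive failures the breaker "opens" and rejects calls
    // for a cool-down period; a later success closes it again.
    // (Not thread-safe as written; a production version would use Interlocked.)
    public class CircuitBreaker
    {
        private readonly int _failureThreshold;
        private readonly TimeSpan _coolDown;
        private int _consecutiveFailures;
        private DateTime _openUntil = DateTime.MinValue;

        public CircuitBreaker(int failureThreshold, TimeSpan coolDown)
        {
            _failureThreshold = failureThreshold;
            _coolDown = coolDown;
        }

        public T Execute<T>(Func<T> call)
        {
            if (DateTime.UtcNow < _openUntil)
                throw new InvalidOperationException("Circuit open: backing off instead of retrying.");

            try
            {
                T result = call();
                _consecutiveFailures = 0;                          // success closes the breaker
                return result;
            }
            catch
            {
                if (++_consecutiveFailures >= _failureThreshold)
                    _openUntil = DateTime.UtcNow + _coolDown;      // shed load for a while
                throw;
            }
        }
    }

A shared instance of something like this (for example, new CircuitBreaker(5, TimeSpan.FromSeconds(30)) wrapping every database call), fed by real monitoring data, is the kind of centralized back-pressure point the bullet above argues for.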

There are a lot of really interesting system design issues here. To what degree you want to handle these types of issues depends a lot on your SLAs, resources, and so on. But as systems grow in complexity, a lot more infrastructure code needs to be written to keep everything working together without the pieces killing each other. Much like a family :-)


David Linthicum asserted “The pervasive use of the cloud without an overall strategy greatly diminishes its value” as a deck for his Lack of cloud computing vision is hurting most enterprises post of 9/29/2010 to InfoWorld’s Cloud Computing blog:

The use of cloud computing within most enterprises and government agencies is rather ad hoc these days. This is understandable, considering the tactical and even experimental nature of most cloud computing deployments, as well as the learning curve that most IT organizations are going through in moving to the cloud.

However, if history is any guide, I suspect that the tactical and ad hoc nature of moving to the cloud will continue to manifest itself as cloud computing continues to mature. Thus, private, public, and hybrid cloud computing services will be used without any kind of overall vision or strategy.

The core issue with a lack of strategy is that cloud computing is about driving a holistic change in IT, but doing so a bit at a time. Thus, there really should be an overarching idea of where the enterprise or agency is heading with cloud computing technology, and then tactical solutions created around that vision or framework.

This issue is really about both value and risk: value, which increases when there is a clearly defined objective of finding better and less expensive ways to do computing, and risk, whose cost increases dramatically when there is no plan.

The fact of the matter is that IT is very good at buying technology and tossing it at business problems, but not so good at planning. Indeed, if you want to be the most hated man or woman in IT, try being the person in charge of making sure that everything adheres to a plan or budget. You won't get invited out to many lunches, trust me.

However, for all this cloud stuff to work, you really need to focus on the end game, more so than with any other fundamental technology shift. Those that don't will end up with another set of new cloud-based systems that are highly dysfunctional in how they work and play together. In other words, cloud computing won't be able to deliver on the expected business value.


Jason Beal recommended Save Yourself 2 Years: Pick A Cloud Specialization Now in a 9/29/2010 post to the MSPMentors blog:

In the past year, we’ve heard a lot about the tremendous opportunities available to channel partners who are willing to jump into cloud computing. And in my previous columns, I’ve described how the cloud offers channel partners a unique chance to provide customers with top notch service, adding more value at lower prices while still making steady profits doing it. But how do you get started? Here are some clues.

After hundreds of conversations with channel partners throughout North America, I’m convinced that those partners who wait for the cloud to envelop our entire industry before making a move will miss out. Much like we saw with managed services, the early movers who embrace the cloud will gain the advantage. Those channel partners who are taking it to the next level and specializing their practice will continue to lead the industry and set new records for growth and profitability.

Everything Old Is New Again

Yet, specializing isn't a new path for channel partners. We've been down this road many times before in the IT channel. A new technology category or delivery model hits the channel, creates a lot of interest and the industry runs with it. The technology or delivery model matures and adoption increases. Then what starts out as a niche service becomes widely accepted and broadly applied, forcing resellers to become jacks-of-all-trades and, in turn, watering down their expertise. A few years later, these channel partners are forced to retrench and find a way to differentiate their business.

That same dynamic is likely to occur with the cloud in that it will go mainstream and many channel partners will try to own all aspects of it. These partners will struggle and many will go down swinging.

My key point: the cloud is simply too broad for any VAR or MSP to really handle every part of it.

Get Going

Channel partners can cut to the chase faster when it comes to cloud solutions by avoiding the generalist path and specializing their practice on a few target technologies that will offer real added value to their customers. Pick a cloud platform (Azure, Force.com, Google, etc.), a technology area in the cloud (cloud security, Infrastructure-as-a-Service, etc.), or a service or application (on-premise to cloud database migration, private cloud building, etc.) and get to work.

The model is proven by successful cloud solution providers and MSPs such as Appirio Inc., which specializes in cloud application development and SaaS implementation services, and Model Metrics, a CRM specialist working with Force.com, Salesforce.com, Google and Amazon. Both are recognized among the fastest-growing partners, with a lot of momentum behind them, because they've specialized.

Real Success, Right Now

Those MSPs aren't spinning their wheels trying to command a seemingly endless list of cloud services such as IaaS, SaaS, PaaS, hybrid, public and private clouds. Instead, they are focusing on narrower components of the cloud and mastering the sale.

Specializing in a portion of cloud computing doesn't mean you should abandon your focus on certain end user markets – whether it's SMB, midmarket, large enterprises or certain industries. It does mean, however, that by homing in on a few areas of the cloud instead of trying to corral them all, you'll be ahead of the game when it comes to marketing, branding, selling and supporting your services. You'll also find yourself better positioned to take advantage of growth on specific platforms or categories in the cloud.

So what are you waiting for? Save yourself two years of headache. Specialize your practice and embrace what the cloud has to offer.

Jason Beal is the director of services sales for Ingram Micro North America. Monthly guest blog entries such as this one are part of MSPmentor’s annual platinum sponsorship. Read all of Jason’s guest blog entries here.


The HPC in the Cloud blog reported a New Research Report Shows Financial Firms Lack Sufficient Infrastructure for Growing Data and Analytics Demand on 9/29/2010:

Two-thirds of financial services firms fear their analytics programs and infrastructures will not be able to handle increasing analytical complexity and data volume, according to a just-released Research Report, featuring a survey of financial services professionals and conducted by Wall Street & Technology in conjunction with Platform Computing, SAS and The TABB Group.  Completed in July 2010, the survey indicates that firms are hampered by a lack of scalability, inflexible architectures and inefficient use of existing computing capacity.  Noteworthy differences exist in the challenges being faced by both buy- and sell-side firms, with sell-side institutions more likely to report a lack of a scalable environment, insufficient capacity to run complex analytics, and contention for computing resources as significant challenges.

"It's clear that organizations need more flexible infrastructures and platforms to enable them to manage the data issues that both buy- and sell-side firms have today, which includes managing an exponentially larger glut of data at compressed speeds" said Robert Iati, partner and global head of consulting at financial services research firm TABB Group.

According to the survey, data proliferation and the need to better manage it are at the root of many of the challenges being faced by financial institutions of all sizes.  Two-thirds (66 percent) of buy-side firms and more than half (56 percent) of sell-side firms are grappling with siloed data sources.  The silo problem is being exacerbated by organizational constraints, including policies prohibiting data sharing and access, network bandwidth issues and input/output (I/O) bottlenecks.  Ever-increasing data growth is also cause for concern, with firms reporting that they are dealing with too much market data.  Sixty-six percent of respondents were not confident that their analytics infrastructures would be able to keep pace with demand over time.

"Siloed data sources have particularly impacted firms in the area of risk management as was evident during the recent financial crisis," noted David M. Wallace, global financial services marketing manager at SAS.  "The improved liquidity and counterparty risk management needed by both the buy and sell side, as reported in the survey, requires greater enterprise data integration across the firm."

In fact, both buy and sell side firms plan to increase their focus on liquidity and counterparty risk in the next twelve months.  Counterparty risk management was ranked as the highest priority for the sell side (45 percent) with liquidity risk following at 43 percent.  Liquidity risk and counterparty risk scored high for the buy side with 36 percent and 33 percent, respectively.

"We found that mid-sized firms are particularly affected by resource constrictions, where one in four respondents reported challenges around limited computing capacity, frequent contention for compute resources and the inability to complete calculations during peak demand periods," said Jeff Hong, head of financial services industry marketing at Platform Computing.

To counter these challenges, financial institutions plan to turn to a combination of technologies including cloud computing and grid technologies.  Within the next two years, 51 percent of all respondents are considering or likely to invest in cluster technology, 53 percent are considering or likely to buy grid technology, and 57 percent are considering or likely to purchase cloud technology.

Survey Results Available through Wall Street and Technology Website

The Wall Street and Technology report entitled "The State of Business Analytics in Financial Services: Examining Current Preparedness for Future Demands" is freely available for download at http://www.grid-analytics.wallstreetandtech.com.  Wall Street and Technology, in conjunction with Platform Computing, SAS, and The TABB Group, will host a webinar to discuss in-depth key findings of the survey on October 7th at 12 pm ET/9 am PT.

For more information, please visit:  http://tinyurl.com/2ulcesm.



Nancy Medica posted Windows Azure’s Future to Common Sense’s GetCS blog on 9/29/2010:


image Microsoft’s Windows Azure is a brand new Cloud Services Platform, released last February, and is one of the most innovative 2010 cloud offerings. Azure is also still in its own process of evolving and growing.

Today we will share with you the current Windows Azure state-of-the-art features and the upcoming ones we'll soon enjoy.

Windows Azure now:

• Load balancing and automatic data replication
Cloud computing automatically balances your server’s load in order to consistently provide users with availability and fast content.

• Automatic data redundancy
Your data will be automatically copied to multiple nodes for both disaster prevention and fast content availability.

• Able to operate as a Content Delivery Network
You can use Azure to hold multiple files throughout the world that everyone can link to and use. So even if you have your own hosting and website, wouldn’t it be great to load files directly from Microsoft servers worldwide?

• For the moment, each SQL Azure database has a limit of 50 GB
It's likely that this limit will be increased in the future.

• SQL-scheduled jobs are not allowed, but they can be replaced with worker roles
Right now you can't have automatic jobs running in your databases. However, you can always use Azure's Worker Roles, which can wake at certain times to perform the processing you need (see the sketch after this list).

• As the infrastructure is shared, big updates that one client makes can affect the performance of another client’s server
Cloud computing at some point means you’ll be sharing your servers with other Microsoft customers. That shouldn’t be a problem, but you may notice slow operations at times if someone else is concurrently making big updates and you unluckily happen to hit the same node at the same time.

• Azure PaaS supports the Entity Framework, and it also supports Microsoft's newer technologies (such as the MVC framework)
The Data Sync Framework is designed to migrate data from SQL Server to SQL Azure.

• IIS-specific configurations are currently not supported, and web roles are not exactly IIS processes
As the server is still shared, it’s not that easy to allow customers to have administrator privileges so that they can modify the configuration — that could affect other users. However, a new approach to this problem is coming (see below).
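
To make the worker-role point above concrete, here is a minimal C# sketch (not production code; NightlyCleanup is a hypothetical placeholder for your own job logic) of a worker role that stands in for a SQL Agent-style scheduled job by waking on a fixed interval:

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    // A worker role that wakes up on a fixed interval to run the work a
    // SQL-scheduled job would normally do against the SQL Azure database.
    public class ScheduledJobWorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            while (true)
            {
                NightlyCleanup();                              // your "job" logic goes here
                Thread.Sleep(TimeSpan.FromHours(24));          // crude schedule; a real role would
                                                               // compute the time until the next run
            }
        }

        private void NightlyCleanup()
        {
            // e.g. open a SqlConnection to SQL Azure and execute the cleanup T-SQL
        }
    }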

Azure’s Coming Soon [to Azure]:

• Azure SDK 1.3 might be available in November
This new version of the platform will have full IIS-specific configurable permissions for web roles and worker roles. It will provide users with a way to configure everything they need for their projects without affecting server configuration settings for other users, which is a nice benefit!

• The SDK 1.3 will support multiple websites in a project (this is not currently possible)

All in all, we know that this platform is continuously growing and improving its offerings. It is a great new way of working, and the upcoming features suggest there are plenty of possibilities for succeeding with this Cloud tool.

Are you working with the cloud? Have you tried Windows Azure PaaS? Share your experience with us!


Phil Wainwright analyzed Moving Collaborative Apps from Groupware to the Cloud in this 9/28/2010 post to the Enterprise Irregulars blog:

Listen to my conversation with Jon Pyke, Founder and CEO of CIMTrek, which provides tools to migrate legacy applications to cloud platforms.

In this podcast, learn why some collaborative applications are holding organisations back from adopting the cloud, and find out some of the possible approaches to migrating groupware-based applications to a cloud platform.

Listen to or download the 7:15 minute podcast below: Download file

—Transcript—

PW: Jon, you recently founded and launched CIMTrek. Give us a brief overview of the reasons for this new venture and what it does.

JP: Well, I’d been spending some time last year writing a book with a couple of colleagues, Peter Fingar and Andy Mulholland. And one of the things — one of the recurring themes that came up when we were doing the research, was that there were so many occasions when people seemed to be assuming that anything to do with the cloud was a green-field site. And I started to question this. I thought, well actually, probably one of the biggest impediments to organizations moving to the cloud is the stuff they’ve already got. And I’m not talking about the big SAP- and ERP-, Oracle-type systems. I’m not talking about those. I’m talking more about the groupware legacy applications, in that space.

And I thought that if we’re going to get businesses to move to the cloud, then we ought to be able to, not just move their mail addresses and directories, but you ought to be able to move the applications they use on an everyday basis. And it’s those simple applications — like room booking, and expenses, and simple CRM — those sort of things. So I thought if we could move those across into a cloud platform then there would be less resistance to move into it.

Right. And so these are the collaborative applications that people are typically using in workgroups. They might be built on Lotus Notes, or Microsoft Access, Novell Groupware, or whatever I guess.

Yeah absolutely, and particularly Notes and Groupware I think. If you think back to the early nineties when we were all talking about CSCW — that’s Computer Supported Collaborative Working — well, I think there’s a new twist to that to become Cloud Supported Collaborative Working.

[laughter] So, one of the things that was going through my mind is, why don’t businesses just carry on with what they’ve got? How important is it for them that these legacy applications don’t have the cloud capabilities?

Well, it’s an interesting question, but I think it comes down to cost and availability and being able to run these systems. Because up until now, — well, let’s just backtrack a little bit. I was talking to a large organization very recently, and the amount of money they spend on keeping these types of applications going, it was just remarkable. They were spending something like nine million pounds a year to keep an organization using very simple end-user type applications, as I was mentioning before.

It seems to me if you can get rid of the ownership of that, and turn that money into being innovative and doing new things rather than trying to keep things going in the background. So it was very much a question of cost reduction, both from a people and a hardware environment. But also I think it’s much greener to do it this way, rather than keeping all of these disks and machine rooms spinning away doing literally nothing most of the time.

Well, yes.

So much cost, I think.

So, is CIMTrek a complete platform, or do you work with other platforms when you migrate these applications into the cloud?

Well, what we do is, we mine those applications and turn what we find into common formats. So typically, a user would mine the Notes environment and then come back and tell you every single application you’ve got, how many forms you’ve got, how many views, and all that kind of stuff. Then we convert that into an internal common format, and then you select where you want it to go. So for instance, if you wanted to move it to a big workflow system like Pega, or Lombardi, or whatever, then you could do that. And then we would take the process across into that environment…

Read the complete article @ The Connected Web


Nancy Gohring quoted Amazon’s Werner Vogels, Citrix’s Simon Crosby and Microsoft’s Doug Hauger in her The Cloud: A Threat to Incumbents, Opportunity for Startups article of 9/28/2010 for PCWorld’s Business Center:

Legacy enterprise software, like well-known CRM and ERP applications, is moving to the cloud, but new kinds of applications will need to be developed to take full advantage of these computing services, said Amazon's Web Services chief.

Such a potential shift to new applications poses a threat to vendors of legacy software, said experts speaking during a panel at TechNW on Monday.

Companies like Oracle, SAP and CA are certifying their software to run on cloud services like Amazon's, said Werner Vogels, CTO of Amazon Web Services. "That doesn't give you immediately all the benefits like scale and elasticity," he said. "For that, new apps have to be developed."

That shift opens doors for new companies to develop services that take full advantage of the cloud but also threatens existing vendors. Vogels said that telecom service offers a good example. For instance, "no one buys their own PBX anymore," he said. Instead, companies are using cloud-based voice services from providers like Twilio, which uses Amazon.

image"The telcos are looking at them and thinking, 'what is happening, why can't I move this fast,'" Vogels said. "What cloud allows you to do is, any young business with a good idea can execute faster than any incumbent. I see it all over the place, like in enterprise software moving to the cloud and actually taking the incumbents, the number one and number two, out to lunch."

Citrix, which offers virtualization technologies, including desktop virtualization, agrees that the shift away from legacy software is a threat to big incumbent vendors. "It's all desperately bad news for Windows," said Simon Crosby, CTO of Citrix. "There's this thing that's paying the bills and it's not Azure." Azure is Microsoft's new cloud platform service. Crosby suggests that Microsoft will struggle to earn enough revenue from Azure to compensate for slowing Windows sales in the future. That slowdown isn't even starting to happen though, with the latest version of Windows selling well.

Still, the executives expect software built on systems from incumbents will be around for a long time. "There are a lot of legs in legacy," said Crosby. Legacy equipment will retire along with the work force that installed it, meaning it will be around for a long time, he said.

Microsoft hopes to be at the confluence of three trends, which it expects will result in a significant new opportunity. The consumerization of IT, mobility and cloud computing are coming together to "light up the experience for the end user," said Doug Hauger, general manager of Windows Azure for Microsoft.

"Innovating in that space where those three come together will be incredibly fruitful for people and there's a lot of space there," he said. Microsoft hopes to nurture third-party vendors that want to build services on Azure to deliver to popular mobile devices, for instance.

Hauger stressed that it's not all about developing solely on offerings from Microsoft. "It's about building loosely coupled distributed application platforms. Building things that are truly services that can be moved back and forth, on premise and off, from Microsoft to Amazon. That mobility is so critical. If applications are built as monolithic platform stacks, it makes them very hard to move back and forth," he said.

That means that a service provider may build an application on Azure for users of the iPhone or build a service on Amazon that is used by Windows PC users, he said.

In addition to new services aimed at end users, startups are building IT services on top of the compute offerings. "There's a rich ecosystem of supplemental services," said Vogels. For instance, third-party vendors are developing geolocation and telephony services that users of cloud services may want to incorporate into their services, he said.


Audrey Watters asked Is IT-as-a-Service the Culmination of Cloud Computing? in this 9/28/2010 post to the ReadWriteCloud:

While they might not be the worst terms in cloud computing, phrases ending with "as-a-service" are certainly becoming more widespread. One of the latest: IT-as-a-service.

For those who fear that adoption of cloud technologies will mean the end of their IT jobs, it's probably not a welcome idea. For others, IT-as-a-service may be seen as just another way of saying "private cloud."

But as Lori MacVittie writes in an article on Dev Central, the idea of IT-as-a-service is too often conflated with a move to the cloud or with what she calls "Infrastructure 2.0." Rather, she argues that the latter two are requirements or building blocks for IT-as-a-service.

"At the top of our technology pyramid," she writes, "we have IT as a Service. IT as a Service, unlike cloud computing, is designed not only to be consumed by other IT-minded folks, but also by (allegedly) business folks. IT as a Service broadens the provisioning and management of resources and begins to include not only operational services but those services that are more, well, businessy, such as identity management and access to resources." MacVittie argues that IT-as-a-service is built on the cloud framework but is a higher level abstraction, so that it isn't simply about provisioning infrastructure resources.


MacVittie says that IT-as-a-service should be a culmination of the layers beneath it and should have as its ultimate goal making self-service possible, even to those that have no technical understanding of what's going on. And while she recognizes that we are probably a long way away from that "push button" service, the end goal is to enable users to easily take advantage of IT services.

What are your thoughts on IT-as-a-service? New buzzword? Or the end-goal of cloud computing?


Michael Biddick analyzes Application Performance Management (APM) in private and public clouds in his 34-page Research: APM Reaches for the Clouds white paper for InformationWeek::Analytics:

Virtualization or Not, APM Is Hot

As enterprises embrace virtualization and move more apps to public and private clouds, they must adapt APM approaches to ensure delivery of appropriate metrics to customers and business partners. Our August survey of 379 business technology professionals reveals insights into the critical evolution of the APM market. Read this in-depth analysis and trending report to learn how vendors and user organizations are adapting their products, services and strategies to virtual environments, and why some IT and business leaders are holding off.

If you’re like most business tech professionals, you’ve moved some of your company’s mission-critical applications to a virtual environment—the time savings and budgetary benefits far outweigh any disadvantages—and you’re considering migrating even more apps to public or private clouds. But no matter how much you’ve improved efficiency by going virtual, you still have to monitor and manage the performance of your apps and share performance data with users and business partners. And that’s not easy, especially in situations where you have limited access to a service provider’s related applications and underlying infrastructure—you can’t determine the root cause of many problems if you can’t get a complete picture of the user experience.

As more vendors enter the APM market and existing APM vendors rush to revamp their offerings to accommodate organizations taking their apps virtual, InformationWeek Analytics polled 379 business technology professionals in its 2010 Application Performance Management Survey. Our goal: to find out how organizations are updating their APM strategies in context of virtualization and cloud computing, to ensure efficient delivery of required metrics as the market evolves. We also set out to identify trends emerging since our 2009 APM survey, and to explore why some companies are holding off on APM altogether—for now. (R1591010)

(Don’t miss “APM Deployment Options,” our companion report done in conjunction with Network Computing, for an in-depth assessment of new APM implementations.)

Download


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

No significant articles today.

<Return to section navigation list> 

Cloud Security and Governance

The CSC Blog’s Administrator posted In the Cloud, ‘Security’ Starts with a ‘T’ on 9/29/2010:

This week, the 6th Annual Information Technology Security Automation Conference (ITSAC) takes place in Baltimore, MD, hosted by the Security Content Automation Protocol (SCAP) and sponsored by the National Institute of Standards and Technology (NIST), the Department of Homeland Security (DHS), the National Security Agency (NSA), and the Department of Defense (DoD).

At the event, CSC’s Ron Knode, Director of Global Security Solutions (and frequent blogger on this site), presented "CloudTrust 2.0 - In the Cloud, ‘Security’ Starts with a ‘T’". The presentation highlights that transparency in the cloud is key to capturing digital trust payoffs for both cloud consumers and cloud providers. The CloudTrust Protocol (CTP) with SCAP offers a simple way to request and receive the fundamental information needed to address the essential elements of transparency. CSC Transparency-as-a-Service (TaaS) will use the CTP to provide a flexible and uniform technique for reclaiming transparency into cloud architectures, configurations, services, and status – responding to both cloud user and cloud provider needs. To be truly effective, Ron noted, transparency protocols need to be accompanied by the operational and contractual terms required to meet business requirements. To learn more about Ron’s presentations and CSC’s Cloud Offerings, including CTP, please visit: http://www.trustedcloudservices.com/.


Bruce Maches posted Planning for Validation of Cloud Based Applications on 9/29/2010 to the HPC in the Cloud blog:

“The validation of any cloud based application involves additional considerations and risks that must be taken into account during the planning process. Any life science company looking to validate cloud based systems must adjust its system qualification process to properly prove to any auditor that the application in question is installed and operating correctly while also meeting the user’s requirements.”

Previous posts have provided a generic overview of what it takes to validate applications to meet FDA 21 CFR Part 11 guidelines. These guidelines are put forth by the FDA and apply to any system maintaining electronic records dealing with the research, testing, manufacturing and distribution of a medical device or drug.

I mentioned in a prior post three of the components that are part of the Part 11 validation process: the Installation Qualification (IQ), the Operational Qualification (OQ) and the Performance Qualification (PQ). There are several other critical documents that make up the overall validation package that would be reviewed by the FDA; they include:

  • Validation plan: the document that describes the software validation strategy, scope, execution process, roles,  responsibilities, and general acceptance criteria for each system being validated
  • Functional Requirements: these are based on the user requirements and define the processes and activities to be supported by the system
  • Traceability Matrix: used to cross reference the functional requirements to actual validation test scripts to ensure that all user requirements are tested and have been proven to be fulfilled
  • Installation Qualification: a set of test scripts that provide verification that the hardware and software are properly installed in the environment of intended operation
  • Operational Qualification: verification that hardware and software are capable of consistently operating as expected and originally predicted
  • Performance Qualification: proving that hardware and software can consistently perform within pre-defined or particular specifications and also meet the requirements as defined
  • Validation Summary Report: a report that summarizes the validation activities and results and provides the approving individuals with the recommendation that the software is acceptable or unacceptable for use

Every life science company must have SOPs that spell out the validation process, roles, responsibilities, and what must be covered in the actual validation package itself. On top of that would be a number of associated procedures that provide additional guidance on such topics as change control, documentation practices, auditing, access controls and development methodologies. There can also be a validation master plan that describes which systems need to be validated within the company (for example, a company’s HR system may not need to be validated) and that provides templates and additional details on how the validation process must occur.

As you can see there is an entire eco-system of procedures, templates, and documents that make up the Part 11 validation process within any company that must adhere to FDA regulatory guidelines. For many large scale deployments the validation process can take nearly as much time and effort as the actual system implementation itself. The question is how does the advent of cloud computing impact all of these processes and protocols? What additional items will the validation process need to take into account when attempting to validate cloud based applications? I will provide some basic information below and then add more detail in a future blog entry.

- Validation Master Plan: the VMP will need to spell out what will or will not need to be added to the validation process due to the use of cloud computing. This includes all three facets of cloud (IaaS, PaaS, and SaaS). This can include such things as performing on-site audits, obtaining vendor employee training records, and auditing vendor change control, software development processes and security access procedures

- System Validation Plan: this will depend on which aspect of the cloud is being used in the system. For internal applications run on IaaS, the validation plan will need to address the additional requirements of validating the infrastructure; if PaaS is being leveraged, the validation plan will need to prove the installation and operation of the PaaS environment; and similarly for SaaS, the plan will need to define the validation process for an externally provisioned application

- Functional Requirements: there may need to be additional functional requirements as part of the cloud validation process for security, access, encryption, latency and response times

- Installation Qualification: as described in the validation plan, the type of cloud services being utilized will have a significant impact on what must be validated. For IaaS, whether it is a public or private cloud can make a big difference; for PaaS, the validation of the supporting infrastructure can be provided by the vendor and evaluated for inclusion in the overall package; for SaaS it would be relatively the same

- Operational Qualification: once again, depending on the type of cloud services being used, the OQ will need additional testing to address the operational effectiveness of the environment and take into account any changes required to qualify the cloud environment

- Performance Qualification: the PQ may incorporate additional scripts to prove that the system is meeting its defined user requirements, other test scripts may be necessary to prove such things as response times, security, or backup/recovery of cloud based applications

- Validation Summary Report: the thrust of the summary report will not change, as its purpose is to collect all of the information generated during the validation process, ensure that it is properly organized, and support the recommendation that the system meets its initial specifications/requirements and is ready to use

By taking into account the validation process changes required to support cloud applications, life science companies can provide the proper levels of assurance to the FDA that their applications running in the cloud meet the necessary Part 11 guidelines.

Bruce Maches is a former Director of Information Technology for Pfizer’s R&D division, current CIO for BRMaches & Associates and a contributing editor for HPC in the Cloud.


Cloud Computing Events

•• Azret Botash of DevExpress will present Introduction to OData on 10/5/2010 from 10:00 AM to 11:00 AM PDT, according to a GoToMeeting.com post of 10/3/2010:

This session will provide an overview of the OData protocol, along with how to expose and consume OData feeds using WCF Data Services.

Register here.


Brian Loesgen is Looking Forward to the SOA-Cloud Symposiums Next Week in Berlin! according to this 9/20/2010 post:


I am really looking forward to speaking at (and being at) the SOA and Cloud Symposiums next week in Berlin. This is my third year speaking at the SOA Symposium, co-located this year with the Cloud Symposium. I hope to share, and learn, a lot.

I was going to start listing and linking to my friends and esteemed colleagues that I would be seeing again next week, and quickly realized that would take too long, there are soooooo many! The cross-industry representation at these events is very impressive.

I have a lot of fond memories of last year in Rotterdam, including the honor of being part of the SOA Manifesto working group, which will be celebrating its one year anniversary, and we’ll be doing a retrospective.

This year will also be the European book launch of SOA with .NET & Windows Azure: Realizing Service-Orientation with the Microsoft Platform. We will be doing a book signing by the authors in attendance, which will be David Chou, Thomas Erl, Herbjorn Wilhelmsen and myself.

This year I’ll be doing a couple of sessions (Azure/BizTalk/ESB of course) and at least one panel, and probably more that I just don’t know about yet.

In addition, on Monday, it will be CloudCamp Berlin, which is a free event, and also looks like fun.


Being in Berlin at Oktoberfest time is of course just PURELY coincidental. Prost!

Follow us on Twitter #SoaCloud


Eric Nelson posted his Slides and Links for Windows Azure Platform Best Practices for DevCon London on 9/29/2010:

Yesterday (28th Sep 2010) I delivered a short (50mins) new session on best practices for the Windows Azure Platform. Big thanks to all who attended – I think we had fun.

I will expand this post with some additional links when I have a little more time but I wanted to get the main bits in place – oh, and download the free ebook.

Slides: Windows Azure Platform best practices by ericnel


Links


Brian Hitney announced on 9/28/2010 Azure Firestarter – coming soon! to the right coast:

I’m excited to announce that our Azure Firestarter series is getting ready to roll!  Registration details are at the bottom.  Basically, I’m teaming up with my colleagues Peter Laudati and Jim O'Neil and we’ll be travelling around.  We’ve intentionally been waiting to do these events so they’re post-PDC – this means we’ll include the new stuff we’re announcing at PDC!  Also, for those wondering what was going on with the @home series, we’ll be doing that here, too, with some revamped ideas…

The Agenda

Is cloud computing still a foggy concept for you? Have you heard of Windows Azure, but aren’t quite sure of how it applies to you and the projects you’re working on? Join your Microsoft Developer Evangelists for this free, all-day event combining presentations and hands-on exercises to demystify the latest disruptive (and over-hyped!) technology and to provide some clarity as to where the cloud and Windows Azure can take you.

8:00 a.m. - Registration

8:30 a.m. - Morning Sessions:

Getting Your Head into the Cloud

Ask ten people to define “Cloud Computing,” and you’ll get a dozen responses. To establish some common ground, we’ll kick off the event by delving into what cloud computing means, not just by presenting an array of acronyms like SaaS and IaaS, but by focusing on the scenarios that cloud computing enables and the opportunities it provides. We’ll use this session to introduce the building blocks of the Windows Azure Platform and set the stage for the two questions most pertinent to you: “how do I take my existing applications to the cloud?” and “how do I design specifically for the cloud?”

Migrating Applications to Windows Azure

How difficult is it to migrate your applications to the cloud? What about designing your applications to be flexible inside and outside of cloud environments? These are common questions, and in this session, we’ll specifically focus on migration strategies and adapting your applications to be “cloud ready.”

We’ll examine how Azure VMs differ from a typical server – covering everything from CPU and memory, to profiling performance, load balancing considerations, and deployment strategies such as dealing with breaking changes in schemas and contracts. We’ll also cover SQL Azure migration strategies and how the forthcoming VM and Admin Roles can aid in migrating to the cloud.

Creating Applications for Windows Azure

Windows Azure enables you to leverage a great deal of your Visual Studio and .NET expertise on an ‘infinitely scalable’ platform, but it’s important to realize the cloud is a different environment from traditional on-premises or hosted applications. Windows Azure provides new capabilities and features – like Azure storage and the AppFabric – that differentiate an application translated to Azure from one built for Azure. We’ll look at many of these platform features and examine tradeoffs in complexity, performance, and costs.

12:15 - Lunch

1:00 - Cloud Play

Enough talk! Bring your laptop or pair with a friend, as we spend the afternoon with our heads (and laptops) in the cloud. Each attendee will receive a two-week “unlimited” Azure account to use during (and after) our instructor-led hands-on lab. During the lab you’ll reinforce the very concepts we discussed in the morning as you develop and deploy a compelling distributed computing application to Windows Azure.

4:00 p.m. The Silver Lining: Evaluations and Giveaways

Registration & Details

Use the links below to register for the Windows Azure Firestarter in the city closest to you.

City Date Registration
Tampa, FL November 8 REGISTER HERE!
Alpharetta, GA November 10 REGISTER HERE!
Charlotte, NC November 11 REGISTER HERE!
Rochester, NY November 16 REGISTER HERE!
Waltham, MA November 30 REGISTER HERE!
New York, NY December 1 REGISTER HERE!
Malvern, PA December 7 REGISTER HERE!
Chevy Chase, MD December 9 REGISTER HERE!

The Microsoft Partner Network announced the opening of Windows Azure Platform University to Microsoft Partners on 10/12/2010 in the Redmond campus’s Building 122 from 8:30 AM to 4:30 PM PDT:

Cloud computing is driving a fundamental change in the economics of the technology industry, altering the ecosystem between development platforms, software vendors, and system integrators, and creating new risks and opportunities for all.
Microsoft is investing heavily in delivering the best Cloud platform and experience for both customers and partners. Windows Azure Platform University is your single best opportunity to:

  • Understand the Cloud market space and Microsoft’s strategy to win
  • Develop important business relationships
    • Meet the Microsoft field, meet our marketers, meet other Microsoft partners
  • Secure the Microsoft resources, tools and support to help you grow your business with Azure

Other dates and locations are:

  • 11/12/2010, 8:30:00 AM - 4:30:00 PM EST, New York (no venue reported), USA
  • 11/30/2010, 8:30:00 AM - 4:30:00 PM EST, Atlanta Marriott Alpharetta, Alpharetta, GA, USA
  • 12/3/2010, 8:30:00 AM - 4:30:00 PM EST, Microsoft Canada Mississauga Office, Mississauga, ON, Canada

The post includes a Submit button for registration.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Steve explained Getting Secure *In* the Cloud: Hosted Two-factor Authentication with a third-party two-factor authentication server in this 10/1/2010 post:

One of the aspects that attracted me to come work at AWS was my sense that the cloud is becoming a logical place in which to build security solutions. Now, two of my favorite notions -- deploying security into the cloud and accelerating the adoption of multi-factor authentication -- have come together with DS3 CloudAS, a two-factor authentication server from Data Security Systems Solutions built as a virtual appliance AMI.

To help you understand why I think this is so cool, a brief security lesson is in order. Three important principles are necessary to ensure that the right people are doing the right things in any information system:

  • Identification. This is how you assert who you are to your computer. Typically it's your user name; it could also be the subject name on a digital certificate.
  • Authentication. This is how you prove your identity assertion. The computer won't believe you until you can demonstrate knowledge of a secret that the computer can then verify. Typically it's your password; it could also be the private key associated with a digital certificate. Authentication sequences never send the actual secrets over the wire; instead, the secrets are used to compute a difficult-to-reverse message. Since (presumably) only you know your secret, your claim is valid.
  • Authorization. This is what you're allowed to do once the computer grants you access.

Unfortunately, humans aren't very good at generating decent secrets and frequently fail at keeping them secret. Multi-factor authentication mitigates this carbon problem by requiring an additional burden of proof. Authentication factors come in many varieties:

  • Something you know. A password; a PIN; a response to a challenge.
  • Something you have. A token; a smartcard; a mobile phone; a passport; a wristband.
  • Something you are or do. A tamper- and theft-resistant biometric characteristic; the distinct pattern of the way you type on a keyboard; your gait. (Note: I disqualify fingerprints as authenticators because they aren't secret: you leave yours everywhere you go and are easy to forge. Fingerprints are identifiers.)

Strong authentication combines at least two of these. My preference is for one from the "know" category and one from the "have" category because individually the elements are useless and because the combination is easy to deploy (you'd quickly tire of having to walk 100 paces in front of your computer each time you logged on!).

Several products in the "have" category compete for your attention. The DS3 CloudAS supports many common tokens so that you have a choice of whose to use. In some cases you might require using a dedicated hardware device that generates a random time-sequenced code. My favorite item in the "have" category is a mobile phone. Let me illustrate why.

Mobile phones provide out-of-band authentication. Phishing succeeds because bad guys get you to reveal your password and then log into your bank account and clear you out. Imagine that a bank's website incorporates transaction authentication by sending a challenge to your pre-registered mobile phone and then waits for you to enter that challenge on the web page before it proceeds. This technique pretty much eliminates phishing as an attack vector -- an attacker would need to know your ID, know your password, and steal your phone. Indeed, the idea isn't really imaginary: it's already in place in many banks around the world. (These are the smart banks who realize that two-factor authentication just for logon isn't sufficient.)
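
To illustrate the idea (purely my own sketch, not DS3's implementation or API), here is a minimal C# one-time transaction code generator and verifier; the issued code would be delivered out of band, for example through an SMS gateway:

    using System;
    using System.Security.Cryptography;

    // Generates a short-lived six-digit code for a single pending transaction and
    // verifies what the user types back. In a real system the code would also be
    // bound to the transaction details (amount, payee) so it cannot be replayed.
    public class OneTimeCodeAuthenticator
    {
        private string _pendingCode;
        private DateTime _expiresAt;

        public string IssueCode()
        {
            var bytes = new byte[4];
            using (var rng = new RNGCryptoServiceProvider())
                rng.GetBytes(bytes);

            int value = (int)(BitConverter.ToUInt32(bytes, 0) % 1000000u);
            _pendingCode = value.ToString("D6");
            _expiresAt = DateTime.UtcNow.AddMinutes(5);
            return _pendingCode;                 // hand this to the SMS gateway, not to the browser
        }

        public bool Verify(string userInput)
        {
            return DateTime.UtcNow <= _expiresAt
                && !string.IsNullOrEmpty(_pendingCode)
                && string.Equals(userInput, _pendingCode, StringComparison.Ordinal);
        }
    }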

The DS3 CloudAS virtual appliance makes it easy to build strong authentication (logon and transaction) into your applications without having to invest in and maintain on-premise authentication hardware. It's a DevPay AMI that provides a complete self-contained pay-as-you-go implementation of DS3's Authentication Server. If this interests you, I encourage you to consider using mobile phones as the second authentication factor. They free your customers from having to purchase expensive and easy-to-lose hardware tokens -- people jealously guard their phones and everyone knows how to use SMS. And yup, there's even an app for that.


Nancy Gohring led with “Amazon's Werner Vogels shared secrets behind the Web Services business” as a deck for her Even Amazon.com doesn't exclusively use the cloud article of 9/30/2010 for NetworkWorld’s Data Center blog:

Amazon Web Services has its roots in the needs of Amazon.com, the retailer, but that doesn't mean that all of the book seller's operations run on Web Services.

image "A large part of Amazon's website runs on Amazon Web Services, but there are also pieces that are not suitable to move, mainly because of the way we built the architecture really specific for some hardware, or we have a very dedicated, highly tuned environment where just moving them over to Amazon Web Services would give no direct benefit," said Werner Vogels, CTO of Amazon Web Services.

Vogels spoke on Tuesday evening at the company's new offices in Seattle during an open house. He offered a few other surprising tidbits about the Web Services operations, described how the business came about and disclosed a few problems it has faced along the way.

While many people think of surges in demand as driving the development of cloud services like those offered by Amazon, the opposite -- drops in demand -- is equally important, Vogels said.

"Scaling isn't only about scaling up," he said. "In reality, scaling down is almost as important as scaling up because that's where you get the cost benefits."

In past years at Amazon, before the holiday season, company engineers requested additional servers in anticipation of a spike in traffic. Even before it built its Web Services platform, Amazon was good at allocating servers quickly, within hours, he said.

"However, the engineers would never release capacity," he said. After the holiday season, the engineers would always say they wanted to hang on to the added capacity for the next expected surge. In the meantime, that capacity would go unused.

"Even though we had efficient mechanisms certainly compared to traditional enterprises that would take weeks or months to get hardware, we needed to do something in the design of our compute servers such that we could radically change that behavior," he said.

Cloud computing made it easier for Amazon to provision capacity but reallocate it too, he said.

Now that Web Services is used by companies around the world, Amazon still has a good handle on the high end of capacity -- making sure that there's enough to go around. Because its customers come from such a wide variety of market segments, there aren't worldwide events that Amazon worries will create massive usage spikes that it can't handle, Vogels said.

But the drops in usage tend to set off alarms. Amazon uses order rates as a metric of the overall health of the system, he said. "So if the order rate drops, it's an indicator that there's a problem," he said.

Once, Amazon noticed that orders dropped to zero in Germany and the company scrambled to figure out what problem had cropped up. It turned out that Germany's soccer team was having an important match and the "whole country was at a standstill," he said. "It's more those events that are surprising than worldwide events."

As one of the largest data center operators around, Amazon faces some unique challenges. "At the scale of Amazon, we have to deal with every possibility as a reality," Vogels said. 



Geva Perry reported AutoCAD in the Cloud Launches Today in a 9/30/2010 post to his Thinking Out Cloud blog:

Although it announced it a few weeks ago, today Autodesk is launching its cloud service, AutoCAD WS, and its iOS apps (iPod Touch, iPhone, iPad) for storing, editing and sharing AutoCAD DWG files in the cloud.

It's a nice follow-up to my Oh, SaaS, is there anything you can't do? post from a couple of months ago. This isn't quite a true SaaS application (yet), but it shows, once again, how cloud computing can apply to a wide variety of applications and how companies are re-thinking the delivery models of their products.

At the time of writing this post, the link to the cloud service, AutoCAD WS, wasn't live yet, but according to the company it should be launched today. My wife is an interior architect and I'm sure she's going to want to play around with this new service. The iPhone and iPad apps are already available on the iTunes store (for free).

I am looking forward to speaking to some of the Autodesk executives to better understand what their plans are for the cloud, as I understand they have a lot more goodies in store.


Chris Czarnecki analyzed AWS Security: Identity and Access Management in a 9/28/2010 post to the Learning Tree blog:

For an organisation adopting Cloud Computing, one of the benefits is the self-service nature of the cloud. If a developer requires a test machine for a short period of time, using an Amazon EC2 instance or an Azure server instance is an obvious, cheap solution. Not only is the machine paid for only while it is being used, there is also no capital investment required.

A question organisations must ask when working with a cloud provider such as Amazon is who will have responsibility for provisioning and releasing resources. One account with a credit card is created, but ideally this would not be shared with all personnel who require cloud access.

The solution for Amazon EC2 is Amazon Identity and Access Management (IAM). This welcome addition to the Amazon toolset allows the creation of multiple users on a single Amazon account. Each user can be assigned permissions on the main account, eliminating the need to share passwords or access keys. This enables fine-grained security to be configured on a per-user basis. For example, an individual user could be allowed permission to start EC2 instances but not terminate them.

Currently, IAM is available through the command-line tools and the API. Plans for incorporating the toolset into the Management Console have also been announced. No new or extra work is required to use IAM with existing AWS APIs – the security is incorporated seamlessly.

In summary, Amazon have provided a cloud-specific, transparent security feature that offers a simple yet elegant way to enable controlled multi-user access to AWS resources. Even better, there is no charge for this service – you just pay for the resources utilised, as before.
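To make the start-but-not-terminate example concrete, here is a minimal sketch using the new AWS SDK for PHP covered later in this post (the SDK lists IAM among its supported services). The AmazonIAM class and snake_case method names are assumptions based on the SDK's naming conventions, and the user name and policy name are hypothetical; the policy document itself follows the standard IAM JSON grammar of Effect, Action and Resource elements:

#!/usr/bin/env php
<?php
require_once("sdk.class.php");

// Assumed IAM client class; check the SDK reference for the exact name
$iam = new AmazonIAM();

// Create a user who may launch and inspect EC2 instances but not terminate them
$response = $iam->create_user('dev-tester');
print("CreateUser: " . ($response->isOK() ? "succeeded" : "failed") . PHP_EOL);

// Build the policy document as an array and encode it as JSON.
// Anything not explicitly allowed (such as ec2:TerminateInstances) is denied by default.
$policy = json_encode(array(
    'Statement' => array(array(
        'Effect'   => 'Allow',
        'Action'   => array('ec2:RunInstances', 'ec2:StartInstances', 'ec2:DescribeInstances'),
        'Resource' => '*'
    ))
));

// Attach the policy to the user as an inline user policy
$response = $iam->put_user_policy('dev-tester', 'ec2-start-only', $policy);
print("PutUserPolicy: " . ($response->isOK() ? "succeeded" : "failed") . PHP_EOL);

The same policy document can equally be attached with the command-line tools mentioned above, so teams that prefer scripting outside PHP are not locked into the SDK.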


Jeff Barr posted New AWS SDK for PHP on 9/28/2010 to the Amazon Web Services blog:

We've got a really nice new AWS SDK for PHP. Like our existing .NET and Java toolkits, this one was designed to be a high-quality SDK with comprehensive feature coverage, documentation, and tutorials. The first release supports a large subset of our infrastructure services including the Amazon Elastic Compute Cloud (EC2), the Amazon Simple Storage Service (S3), Amazon CloudFront, Amazon CloudWatch, Amazon SimpleDB, the Amazon Simple Notification Service (SNS), the Amazon Simple Queue Service (SQS), as well as Amazon Identity and Access Management (IAM).

The new SDK is a derivative of the popular and highly respected CloudFusion toolkit. In fact, the lead CloudFusion developer is now a part of our Developer Resources team (we've got another opening on that team if you are interested)!

Building the SDK in-house means that we'll be able to deliver updates simultaneously with updates to the services. Having more PHP developers on staff means that we'll be in a position to help with PHP-related questions and issues on the AWS Forums.

The AWS SDK for PHP includes the following goodies:

  • AWS PHP Libraries - Build PHP applications using APIs that take the complexity out of coding directly against a web service interface. The toolkit provides APIs that hide much of the lower-level plumbing, including authentication, signatures, and error handling.
  • Code Samples - Practical examples showing how to use the toolkit to build real applications.
  • Documentation - Complete, interactive SDK reference documentation with embedded samples.
  • PEAR Package - You can install the AWS SDK automatically using the PHP Extension & Application Repository (PEAR).

Here's what the documentation browser looks like:

We've also set up a dedicated forum for PHP developers. You can find the AWS SDK for PHP, links to the forums, and additional resources for PHP developers in our revised and expanded PHP Developer Center.

I wrote a little application to experiment with the new Toolkit and to get some experience with Amazon SNS. Using the SDK is as simple as including one file. Once included, I create an access object so that I can make calls to SNS:

#!/usr/bin/env php
<?php
require_once("sdk.class.php");
// Create the SNS object
$sns = new AmazonSNS();

Then I create some new SNS topics. This is an idempotent operation, so there's no harm in trying to re-create a topic that already exists. I am using a batch operation for greater efficiency.

// Create some new topics (the batch() wrapper queues the requests; send() issues them)
foreach (array('currency_usd', 'currency_jpy', 'currency_brl') as $topic)
{
    $sns->batch()->create_topic($topic);
}
$sns->batch()->send();
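For comparison, here is a minimal non-batched sketch (not from the original post): each create_topic() call is issued as its own request and returns its own response object, whereas the batch() wrapper above queues the calls until send() is invoked.

// Non-batched equivalent: one request per topic, with an individual status check
foreach (array('currency_usd', 'currency_jpy', 'currency_brl') as $topic)
{
    $response = $sns->create_topic($topic);
    print("CreateTopic ${topic}: " . ($response->isOK() ? "succeeded" : "failed") . PHP_EOL);
}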

I can now call SNS's get_topic_list function to discover all of my topics:

// Gather up all of the topics
$topics = $sns->get_topic_list();

Then I subscribe my email address to each topic. SNS will send a confirming email after each request. I must click on a link in the email to confirm my intent to subscribe.

// Subscribe an email address to each topic
foreach ($topics as $topic)
{
    $response = $sns->subscribe($topic, "email", "my_email_address@amazon.com");
    print("Subscribe to ${topic}: " . ($response->isOK() ? "succeeded" : "failed") . PHP_EOL);
}

With topics created and subscriptions established, I can now publish notifications with ease:

// Publish something to each topic
foreach ($topics as $topic)
{
    $rand1 = rand(1, 100);
    $rand2 = rand(1, 100);
    $response = $sns->publish($topic, "Start: ${rand1}, End: ${rand2}\n",
        array("Subject" => "Price Change"));
}

Now that this toolkit has been released, I will make some minor changes to the code samples in my new AWS book and make the code available through the SitePoint code archive in the very near future.


<Return to section navigation list> 
