Saturday, September 17, 2011

Windows Azure and //BUILD/ Posts for 9/12/2011+

A compendium of Windows Azure, SQL Azure Database, AppFabric, Windows Azure Platform Appliance and other cloud-computing articles.

Updated 9/28/2011 with a link to the updated version of Valery Mizonov’s Best Practices for Leveraging Windows Azure Service Bus Brokered Messaging API in my Windows Azure and Cloud Computing Posts for 9/26/2011+ post.

Note: This post contains the “cream” of the Windows Azure articles released during the //BUILD/ Windows conference week. Subsequent posts will include additional Windows Azure and general cloud-computing articles for the week.


Azure Blob, Drive, Table and Queue Services

Brad Calder reported Now Available: Geo-Replication and new Blob, Table and Queue features for Windows Azure Storage in a 9/16/2011 post to the Windows Azure blog:

During the BUILD Day two keynote, we announced the release of geo-replication and a new version of the REST API to enable functionality improvements for Windows Azure Blobs, Tables, and Queues. At this time we are now geo-replicating all Windows Azure Blob and Table data between two data centers.

Geo-Replication

Customers have continually emphasized the importance of Disaster Recovery capabilities in Azure as well as other cloud platforms. Wednesday’s announcement on geo-replication helps in this area and does so without increasing costs to our customers. Geo-replication replicates your Windows Azure Blob and Table data between two locations that are hundreds of miles apart and within the same region (i.e., between North Central and South Central US, between North Europe and West Europe, and between East and South East Asia). We do not replicate data across different regions. Geo-replication is now turned on for all Windows Azure Storage accounts for Blobs and Tables. Note that there is no change in existing performance as updates are asynchronously geo-replicated.

New Blob, Table and Queue Features

For REST API improvements, we have just released the new version (“2011-08-18”), which contains:

  • Table Upsert – allows a single request to be sent to Windows Azure Tables to either insert an entity (if it doesn’t exist) or update/replace the entity (if it exists).
  • Table Projection (Select) – allows a client to retrieve a subset of an entity’s properties. This improves performance by reducing the serialization/deserialization cost and bandwidth used for retrieving entities.
  • Improved Blob HTTP header support – improves experience for streaming applications and browser downloads.
  • Queue UpdateMessage – allows clients to have a lease on a message and renew the lease while it processes it, as well as update the contents of the message to track the progress of the processing.
  • Queue InsertMessage with visibility timeout – allows a newly inserted message to stay invisible on the queue until the timeout expires.
Table Upsert

The Table Upsert allows a client to send a single request to either update or insert an entity; the appropriate action is taken based on whether the entity already exists. This saves a call in the scenario where an application wants to insert the entity if it doesn’t exist or update it if it does. This feature is exposed via the InsertOrReplace Entity and InsertOrMerge Entity APIs (a sample request sketch follows the list below).

  • InsertOrReplace Entity – inserts the entity if it does not exist or replaces the existing entity if it does exist.
  • InsertOrMerge Entity – inserts the entity if it does not exist or merges with the existing one if it does exist.
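To make the upsert calls concrete, here is a hedged REST sketch of an InsertOrReplace Entity request; the account, table, and key values are hypothetical, and the authorization signature and entity body are abbreviated. The upsert behavior comes from addressing the entity directly with the 2011-08-18 version header and omitting the If-Match header:

PUT http://cohowinery.table.core.windows.net/Orders(PartitionKey='1',RowKey='100') HTTP/1.1
x-ms-version: 2011-08-18
x-ms-date: Fri, 02 Sep 2011 05:03:21 GMT
Authorization: SharedKeyLite cohowinery:<signature>
Content-Type: application/atom+xml

<entry … entity properties to insert or replace …>

An InsertOrMerge Entity call is the same request sent with the MERGE verb instead of PUT.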
Table Projection (Select)

Table Projection allows you to retrieve a subset of the properties of one or more entities, and only returns those properties/columns from Azure Tables. Projection improves performance by reducing latency when retrieving data from a Windows Azure Table. It also saves bandwidth by returning only the properties of interest.
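For illustration, a hedged sketch of a projection query; the account, table, and property names are hypothetical, and the signature is omitted. The properties to return are listed in the OData $select query option (the MaxDataServiceVersion header shown here is assumed to be required for $select support):

GET http://cohowinery.table.core.windows.net/Orders()?$select=order_id,total_cost HTTP/1.1
x-ms-version: 2011-08-18
MaxDataServiceVersion: 2.0;NetFx
x-ms-date: Fri, 02 Sep 2011 05:03:21 GMT
Authorization: SharedKeyLite cohowinery:<signature>

The response contains only the order_id and total_cost properties of each entity returned.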

Improved Blob Download Experience

We have added additional HTTP header support to Windows Azure Blobs to improve the experience for streaming applications and for resuming downloads. Without this support, some browsers would have to restart reading a blob from the beginning if there was an interruption in the download.

Queue UpdateMessage

With the current Queue API, once a worker retrieves a message from the queue, it has to specify a long enough visibility timeout so that it can finish processing the message before the timeout expires. In many scenarios, the worker may want to extend the visibility timeout if it needs more time to process the message. This new UpdateMessage API enables such scenarios. It allows the worker to use the visibility timeout as a lease on the message, so that it can periodically extend the lease and maintain the ownership of the message until the processing completes.

The UpdateMessage API also supports updating the content of the message. This allows the worker to update the message in the Queue to record progress information. Then if the worker crashes, this allows the next worker to continue processing the message from where the prior worker left off.

This functionality enables worker roles to take on longer running tasks than before. It also allows faster failover time, since the leases can be set at fairly small intervals (e.g. 1 minute) so that if a worker role fails, the message will become visible within a minute for another worker role to pick up.

Queue InsertMessage with Visibility Timeout

We have added support in the InsertMessage API to allow you to specify the initial visibility timeout value for a message, so that a newly inserted message stays invisible on the queue until the timeout expires. This enables scheduling of future work by adding messages that become visible at a later time.

For more information, see our BUILD talk or one of the following blog posts:

To read more about all of the Windows Azure-related announcements made at BUILD, please read the blog post, "JUST ANNOUNCED @ BUILD: New Windows Azure Toolkit for Windows 8, Windows Azure SDK 1.5, Geo-Replication for Windows Azure Storage, and More". For more information about BUILD or to watch the keynotes, please visit the BUILD Virtual Press Room. And follow @WindowsAzure and @STBNewsBytes for the latest news and real-time talk about BUILD.

Brad Calder is General Manager for Windows Azure Storage.

Brad presented Inside Windows Azure storage: what's new and under the hood deep dive on Day 2 (9/14/2011) of the //BUILD/ Windows conference.


Brad Calder and Monilee Atkinson (pictured below) of the Windows Azure Storage Team posted a detailed Introducing Geo-replication for Windows Azure Storage article on 9/15/2011:

We are excited to announce that we are now geo-replicating customers’ Windows Azure Blob and Table data, at no additional cost, between two locations hundreds of miles apart within the same region (i.e., between North and South US, between North and West Europe, and between East and Southeast Asia). Geo-replication is provided for additional data durability in case of a major data center disaster.

Storing Data in Two Locations for Durability

With geo-replication, Windows Azure Storage now keeps your data durable in two locations. In both locations, Windows Azure Storage constantly maintains multiple healthy replicas of your data.

The location where you read, create, update, or delete data is referred to as the ‘primary’ location. The primary location exists in the region you choose at the time you create an account via the Azure Portal (e.g., North Central US). The location where your data is geo-replicated is referred to as the secondary location. The secondary location is automatically determined based on the location of the primary; it is in the other data center that is in the same region as the primary. In this example, the secondary would be located in South Central US (see table below for full listing). The primary location is currently displayed in the Azure Portal, as shown below. In the future, the Azure Portal will be updated to show both the primary and secondary locations. To view the primary location for your storage account in the Azure Portal, click on the account of interest; the primary region will be displayed on the lower right side under Country/Region, as highlighted below.

[Screenshot: Windows Azure Portal showing the storage account’s primary region under Country/Region]

The following table shows the primary and secondary location pairings:

Primary             Secondary
North Central US    South Central US
South Central US    North Central US
North Europe        West Europe
West Europe         North Europe
South East Asia     East Asia
East Asia           South East Asia

Geo-Replication Costs and Disabling Geo-Replication

Geo-replication is included in current pricing for Azure Storage.

If you do not want your data geo-replicated you can disable geo-replication for your account. To turn geo-replication off, please contact Microsoft Windows Azure Support. Note that there is no cost savings for turning geo-replication off.

When you turn geo-replication off, the data will be deleted from the secondary location. If you decide to turn geo-replication on again after you have turned it off, there is a re-bootstrap egress bandwidth charge (based on the data transfer rates) for copying your existing data from the primary to the secondary location to kick start geo-replication for the storage account. This charge will be applied only when you turn geo-replication on after you have turned it off. There is no additional charge for continuing geo-replication after the re-bootstrap is done.

Currently all storage accounts are bootstrapped and in geo-replication mode between primary and secondary storage locations.

How Geo-Replication Works

When you create, update, or delete data in your storage account, the transaction is fully replicated on three different storage nodes across three fault domains and upgrade domains inside the primary location; success is then returned to the client. Then, in the background, the primary location asynchronously replicates the recently committed transaction to the secondary location. That transaction is made durable by fully replicating it across three different storage nodes in different fault and upgrade domains at the secondary location. Because the updates are asynchronously geo-replicated, there is no change in existing performance for your storage account.

Our goal is to keep the data durable at both the primary and secondary locations. This means we keep enough replicas in both locations to ensure that each location can recover by itself from common failures (e.g., a disk, node, rack, or TOR switch failing), without having to talk to the other location. The two locations only have to talk to each other to geo-replicate the recent updates to storage accounts. They do not have to talk to each other to recover data after common failures. This is important, because it means that if we had to fail over a storage account from the primary to the secondary, then all the data that had been committed to the secondary location via geo-replication will already be durable there.

With this first release of geo-replication, we do not provide an SLA for how long it will take to asynchronously geo-replicate the data, though transactions are typically geo-replicated within a few minutes after they have been committed in the primary location.

How Geo-Failover Works

In the event of a major disaster that affects the primary location, we will first try to restore the primary location. Depending on the nature of the disaster and its impact, in some rare cases we may not be able to restore the primary location, and we would need to perform a geo-failover. When this happens, affected customers will be notified via their subscription contact information (we are investigating more programmatic ways to perform this notification). As part of the failover, the customer’s “account.service.core.windows.net” DNS entry would be updated to point from the primary location to the secondary location. Once this DNS change is propagated, the existing Blob and Table URIs will work. This means that you do not need to change your application’s URIs – all existing URIs will work the same before and after a geo-failover.

For example, if the primary location for a storage account “myaccount” was North Central US, then the DNS entry for myaccount.<service>.core.windows.net would direct traffic to North Central US. If a geo-failover became necessary, the DNS entry for myaccount.<service>.core.windows.net would be updated so that it would then direct all traffic for the storage account to South Central US.

After the failover occurs, the location accepting traffic (what used to be the secondary) is considered the new primary location for the storage account. This location will remain the primary location unless another geo-failover occurs. In addition, after a storage account fails over to a new primary, we will bootstrap a new secondary, which will also be in the same region. In the future we plan to support the ability for customers to choose their secondary location (when we have more than two data centers in a given region), as well as the ability to swap their primary and secondary locations for a storage account.

Order of Geo-Replication and Transaction Consistency

Geo-replication ensures that all the data within a PartitionKey is committed in the same order at the secondary location as at the primary location. That said, it is also important to note that there are no geo-replication ordering guarantees across partitions. This means that different partitions can be geo-replicating at different speeds. However, once all the updates have been geo-replicated and committed at the secondary location, the secondary location will have the exact same state as the primary location. Note, though, that because geo-replication is asynchronous, recent updates can be lost in the event of a major disaster if a failover occurs.

For example, consider the case where we have two blobs, foo and bar, in our storage account (for blobs, the complete blob name is the PartitionKey). Now say we execute transactions A and B on blob foo, and then execute transactions X and Y against blob bar. It is guaranteed that transaction A will be geo-replicated before transaction B, and that transaction X will be geo-replicated before transaction Y. However, no other guarantees are made about the respective timings of geo-replication between the transactions against foo and the transactions against bar. If a disaster happened and caused recent transactions to not get geo-replicated, that would make it possible for transactions A and X to be geo-replicated, while losing transactions B and Y. Or transactions A and B could have been geo-replicated, but neither X nor Y had made it. Other combinations are possible as well; the only guaranteed ordering of geo-replicated transactions is among those to the same blob. The same holds true for operations involving Tables, except that the partitions are determined by the application-defined PartitionKey of the entity instead of the blob name. For more information on partition keys, please see Windows Azure Storage Abstractions and their Scalability Targets.

Because of this, to best leverage geo-replication, one best practice is to avoid cross-PartitionKey relationships whenever possible. This means you should try to restrict relationships for Tables to entities that have the same PartitionKey value. Since all transactions within a single PartitionKey are geo-replicated in order, this guarantees those relationships will be committed in order on the secondary.

The only multiple object transaction supported by Windows Azure Storage is Entity Group Transactions for Windows Azure Tables, which allow clients to commit a batch of entities together as a single atomic transaction, since they have the same PartitionKey value. Geo-replication also treats this batch as an atomic operation. Therefore, the whole batch transaction is always committed atomically on the secondary.

Summary

This is our first step in geo-replication, where we are now providing additional durability in case of a major data center disaster. The next steps involve developing features needed to help applications recover after a failover, which is an area we are investigating further.


Jai Haridas (@jaiharidas) described Windows Azure Queues: Improved Leases, Progress Tracking, and Scheduling of Future Work in a 9/15/2011 post:

As part of the “2011-08-18” version, we have introduced several commonly requested features to the Windows Azure Queue service. The benefits of these new features are:

  1. Allow applications to store larger messages
  2. Allow applications to schedule work to be processed at a later time
  3. Allow efficient processing for long running tasks, by adding:
    • Leasing: Processing applications can now extend the visibility timeout on a message they have dequeued and hence maintain a lease on the message
    • Progress Tracking: Processing applications can update the message content of a message they have dequeued to save progress state so that a new worker can continue from that state if the prior worker crashed.
That was then

To better understand these features, let us quickly summarize the messaging semantics of Windows Azure Queues. The Windows Azure Queue service provides a scalable message delivery system that can be used to build workflows and decouple components that need to communicate. With the 2009-09-19 version of the service, users could add messages of up to 8KB to the queue. When adding a message, users specify a time to live (< 7 days) after which the message is automatically deleted if it still exists in the queue. When added to the queue, a message is visible and a candidate to be dequeued and processed by workers. Workers use a 2-phase dequeue/delete pattern. These semantics required workers to estimate, at the time the message is retrieved, how long it would take to process the message; this non-renewable lease period is called the “visibility timeout”. The non-renewable lease period had a limit of 2 hours. When the message is retrieved, a unique token called a pop receipt is associated with the message and must be used for subsequent operations on the message. Once the message is retrieved from the queue, the message becomes invisible in the queue. When a message is completely processed, the worker subsequently issues a request to delete the message using the pop receipt. This 2-phase process ensures that a message is available to another worker if the initial worker crashes while processing the message.
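To make the 2-phase dequeue/delete pattern concrete, here is a hedged sketch of the two requests against the 2009-09-19 version; the queue name, message ID, and pop receipt are placeholders and the signatures are omitted. The worker first dequeues a message, specifying the visibility timeout (here 1 hour) up front:

GET http://cohowinery.queue.core.windows.net/videoprocessing/messages?visibilitytimeout=3600&timeout=30 HTTP/1.1
x-ms-version: 2009-09-19
x-ms-date: Fri, 02 Sep 2011 05:03:21 GMT
Authorization: SharedKey cohowinery:<signature>

Once processing completes, the worker deletes the message using the message ID and pop receipt returned by the GET:

DELETE http://cohowinery.queue.core.windows.net/videoprocessing/messages/<message-id>?popreceipt=<pop-receipt>&timeout=30 HTTP/1.1
x-ms-version: 2009-09-19
x-ms-date: Fri, 02 Sep 2011 05:03:21 GMT
Authorization: SharedKey cohowinery:<signature>

If the worker crashes before the DELETE is issued, the visibility timeout eventually expires and the message becomes visible for another worker to dequeue.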

This is now

With the 2011-08-18 version, we focused on streamlining the use of Windows Azure Queues to make them simpler and more efficient. First, we made it extremely simple for workers to process long running jobs efficiently – this required the ability to extend the lease on the message by providing a new visibility timeout. Without this ability, workers would have had to provide a generous lease period to the “Get Messages” API since the lease period is set before the message is inspected.

To further improve efficiency, we now allow workers to also update the contents of messages they have dequeued. This can be used to store progress information and intermediate state so that if the worker crashes, a new worker can resume the work rather than starting from scratch. Finally, we targeted scenarios that require support for larger messages and allow scheduling of work when adding messages to the queue. To reiterate, the following features in the 2011-08-18 version make working with Windows Azure Queues simpler and more efficient:

  1. The maximum message size has been increased to 64KB, which allows more applications to store the full message in the queue instead of storing the actual message contents in blobs, and to keep progress information in the message.
  2. A message can be added to the queue with a visibility timeout so that it becomes visible to workers at a later time.
  3. A lease on the message can be extended by the worker that did the original dequeue so that it can continue processing the message.
  4. The maximum visibilitytimeout for scheduling future work, dequeuing a message, and updating it for leasing has been extended to 7 days.
  5. The message content can now be updated to save the progress state, which allows other workers to resume processing the message without the need to start over from the beginning.

NOTE: The current storage client library (version 1.5) uses the 2009-09-19 version and hence these new features are not available. We will be releasing an update with these new features in a future release of the SDK. Until that time we have provided some extension methods later in this posting that allow you to start using these new features today.

We will now go over the changes to the Windows Azure Queue service APIs in detail.

PUT Message

The “PUT Message” REST API is used to add messages to the queue. It now allows the message content to be up to 64KB and also provides an optional visibility timeout parameter. For example, you can now put a message into the queue with a visibilitytimeout of 24 hours, and the message will sit in the queue invisible until that time. Then at that time it will become visible for workers to process (along with the other messages in that queue).

By default, the visibilitytimeout used is 0 which implies that a message becomes visible for processing as soon as it is added to the queue. The visibilitytimeout is specified in seconds and must be >= 0 and < 604,800 (7 days). It also should be less than the “time to live”. Time to live has a default value of 7 days after which a message is automatically removed from the queue if it still exists. A message will be deleted from the queue after its time to live has been reached, regardless of whether it has become visible or not.

REST Examples

Here is a REST example of how to add a message that will be visible in 10 minutes. The visibility timeout is provided as a query parameter to the URI called “visibilitytimeout” and is in seconds. The optional expiry time is provided as the messagettl query parameter and is also specified in seconds; it is set to 2 days (172,800 seconds) in this example.

Request:

POST http://cohowinery.queue.core.windows.net/videoprocessing/messages?visibilitytimeout=600&messagettl=172800&timeout=30 HTTP/1.1
x-ms-version: 2011-08-18
x-ms-date: Fri, 02 Sep 2011 05:03:21 GMT
Authorization: SharedKey cohowinery:sr8rIheJmCd6npMSx7DfAY3L//V3uWvSXOzUBCV9Ank=
Content-Length: 100

<QueueMessage>
<MessageText>PHNhbXBsZT5zYW1wbGUgbWVzc2FnZTwvc2FtcGxlPg==</MessageText>
</QueueMessage>
Storage Client Library Example

We will use the extension methods provided at the end of this blog to show how to add messages that are made visible at a later time.

Let us look at the scenario of a video processing workflow for Coho Winery. Videos are uploaded by the Marketing team at Coho Winery. Once these videos are uploaded, they need to be processed before they can be displayed on the Coho Winery web site – the workflow is:

  1. Scan for virus
  2. Encode the video in multiple formats
  3. Compress the video for efficiency and copy the compressed output to the new location that the website picks it up from.

When uploading the videos initially, the component adds a message to the queue after each video is uploaded. However, 1 day is allowed before the video is processed, to give a window in which changes can still be made to the video in the workflow. The message is added to the queue with delayed visibility to provide this 1-day grace period. A set of instructions goes into the message, including the format, the encoder to use, the compression to use, the scanners to use, etc. The idea is that in addition to this information required for processing the message, we will also save the current state in the message. The format used is as follows: the first 2 characters represent the processing state, followed by the actual content.

/// <summary>
/// Add message for each blob in input directory. 
/// After uploading, add a message to the queue with invisibility of 1 day 
/// to allow the blob to be uploaded.
/// </summary>
private static void UploadVideos()
{
    CloudQueueClient queueClient = Account.CreateCloudQueueClient();
    CloudQueue queue = queueClient.GetQueueReference(QueueName);
    queue.EncodeMessage = false;

    string[] content = GetMessageContent();
    for (int i = 0; i < content.Length; i++)
    {
        // upload the blob (not provided for brevity…)

        // Call the extension method provided at the end of this post
        queue.PutMessage(
            Account.Credentials, 
            EncodeMessage(content[i], ProcessingState.VirusScan),
            StartVisibilityTimeout, // set to 1 day
            MessageTtl, // set to 3 days
            ServerRequestTimeout);
        
    }
}

/// <summary>
/// The processing stages for a message
/// </summary>
public enum ProcessingState : int
{
    VirusScan = 1,
    Encoder = 2,
    Compress = 3,
    Completed  = 4
}
/// <summary>
/// Form of the queue message is: [2 digits for state][Actual Message content]
/// </summary>
/// <param name="content"></param>
/// <param name="state"></param>
/// <returns></returns>
private static string EncodeMessage(string content, ProcessingState state)
{
    return string.Format("{0:D2}{1}", (int)state, content);
}
Update Message

The “Update Message” REST API is used to extend the lease period (aka visibility timeout) and/or update the message content. A worker that is processing a message can now determine the extra processing time it needs based on the content of the message. The lease period, specified in seconds, must be >= 0 and is relative to the current time; 0 makes the message immediately visible in the queue as a candidate for processing. The maximum value for the lease period is 7 days. Note that when updating the visibilitytimeout, it can extend beyond the expiry time (or time to live) that was defined when the message was added to the queue, but the expiry time takes precedence and the message will be deleted from the queue at that time.

Update Message can also be used by workers to store the processing state in the message. This processing state can then be used by another worker to resume processing if the former worker crashed or got interrupted and the message has not yet expired.

When getting a message, the worker gets back a pop receipt. A valid pop receipt is needed to perform any action on the message while it is invisible in the queue. Update Message requires the pop receipt returned by the “Get Messages” request or by a previous Update Message. The pop receipt is invalid (400 HTTP status code) if:

  • The message has expired.
  • The message has been deleted using the last pop receipt received either from “Get Messages” or “Update Message”.
  • The invisibility time has elapsed and the message has been retrieved by another “Get Messages” call.
  • The message has been updated with a new visibility timeout and hence a new pop receipt is returned. Each time the message is updated, it gets a new pop-receipt which is returned with the UpdateMessage call.

NOTE: When a worker goes to renew the lease (extend the visibility timeout), if for some reason the pop receipt is not received by the client (e.g., network error), the client can retry the request with the pop receipt it currently has. But if that retry fails with “Message not found” then the client should give up processing the message, and get a new message to process. This is because the prior message did have its visibility timeout extended, but it now has a new pop receipt, and that message will become visible again after the timeout elapses at which time a worker can dequeue it again and continue processing it.

The pop receipt returned in the response should be used for subsequent “Delete Message” and “Update Message” APIs. The new next visibility timeout is also returned in the response header.

REST Examples

Update a message to set the visibility timeout to 30 seconds.

PUT http://cohowinery.queue.core.windows.net/videoprocessing/messages/663d89aa-d1d9-42a2-9a6a-fcf822a97d2c?popreceipt=AgAAAAEAAAApAAAAGIw6Q29bzAE%3d&visibilitytimeout=30&timeout=30 HTTP/1.1
x-ms-version: 2011-08-18
x-ms-date: Fri, 02 Sep 2011 05:03:21 GMT
Authorization: SharedKey cohowinery:batcrWZ35InGCZeTUFWMdIQiOZPCW7UEyeGdDOg7WW4=
Host: 10.200.21.10
Content-Length: 75

<QueueMessage><MessageText>new-message-content</MessageText></QueueMessage>
Storage Client Library Example

Continuing with the example of the video processing workflow for Coho Winery, we will now go over the processing part of the workflow. The video processing task is a long running task and we would like to divide the work into stages defined by the ProcessingState enumeration mentioned above. The workflow is to retrieve a message, then decode its content to get the processing state and the actual content. To retrieve the message, we use the new extension method, since the September 2009 version of the GetMessage API blocked visibility timeouts longer than 2 hours on the client side and therefore won’t support this workflow. ProcessMessages starts a timer to iterate through all the current messages retrieved and renews the lease or deletes the message based on the processing state and when the message will be visible again. ProcessMessages converts the QueueMessage retrieved into a MessageInfo and adds it to the list of messages that need to be renewed. The MessageInfo class exists because the QueueMessage class does not allow updating the pop receipt, which needs to be set on every Update Message call.

[Source code elided for brevity.]

Get Messages

The “Get Messages” REST API is used to retrieve messages. The only change in the 2011-08-18 version is that the maximum visibility timeout has been extended from 2 hours to 7 days.

REST Examples

Get messages with visibility timeout set to 4 hours (provided in seconds).

GET http://cohowinery.queue.core.windows.net/videoprocessing/messages?visibilitytimeout=14400&timeout=30 HTTP/1.1
x-ms-version: 2011-08-18
x-ms-date: Fri, 02 Sep 2011 05:03:21 GMT
Authorization: SharedKey cohowinery:batcrWZ35InGCZeTUFWMdIQiOZPCW7UEyeGdDOg7WW4=
Host: 10.200.21.10
Storage Client Library Example

The example in Update Message covers the invocation of GetMessages extension.

Storage Client Library Extensions

As we mentioned above, the existing storage client library released in SDK version 1.5 does not support the new version, therefore we have provided the sample extension methods described in this blog post so you can start using these new features today. These extension methods can help you issue such requests. Please test them thoroughly before using them in production to ensure they meet your needs.

We have provided 2 extension methods:

  1. PutMessage: implements adding a message to the queue with visibility timeout.
  2. UpdateMessage: implements updating a message (content and/or visibility timeout). It returns the new pop receipt and next visibility timeout. It does not change the CloudQueueMessage type, as the pop receipt and next visibility time are not publicly accessible. …

[Source code elided for brevity.]


Jai Haridas (@jaiharidas) explained Windows Azure Blobs: Improved HTTP Headers for Resume on Download and a Change in If-Match Conditions in a 9/15/2011 post to the Windows Azure Storage Team blog:

In the new 2011-08-18 version of the Windows Azure Blob service, we have made some changes to improve browser downloads and streaming for some media players. We also provided an extension to the Blob service settings to allow anonymous and un-versioned requests to benefit from these changes. The motivations for providing these features are:

  1. Allow browsers to resume download if interrupted. Some browsers require the following:
    • ETag returned as part of the response must be quoted to conform to the HTTP spec.
    • Return Accept-Ranges in the response header to indicate that range requests are accepted by the service. Though this is not mandatory according to the spec, some browsers still require this.
  2. Support more range formats for range requests. Certain media players request a range with the format “Range: bytes=0-”. The Windows Azure Blob service used to ignore this header format. Now, with the new 2011-08-18 version, we will return the entire blob in the format of a range response. This allows such media players to resume playing as soon as response packets arrive rather than waiting for the entire blob to download.
  3. Allow un-versioned requests to be processed using semantics of 2011-08-18 version. Since the above two changes impact un-versioned browser/media player requests and the changes made are versioned, we need to allow such requests to take advantage of the changes made. To allow un-versioned requests to be processed using semantics of 2011-08-18 version, we now take an extra property in “Set Blob Service Properties”, which makes it possible to define the default version for the blob service to use for un-versioned requests to your account.

In addition, another change for the blob service is that we now return “Precondition Failed” (412) if you issue a PUT request with a conditional If-Match and the blob does not exist. Previously, we would have recreated the blob. This change is effective for all versions starting with the 2009-09-19 version.

We will now cover the changes in more detail.

Header Related Changes

In this section we will cover the header related changes that we have done in the Windows Azure Blob service for 2011-08-18 version.

Quoted ETags

ETags returned in response headers for all APIs are now quoted to conform to the RFC 2616 specification. ETags returned by the listing operations as part of the XML response body will remain as is. As mentioned above, this allows browsers to resume a download using the ETag. Unquoted ETags were ignored by certain browsers, while all standards-compliant browsers honor quoted ETags. An ETag is required by a browser when using a conditional Range GET to resume a download of a blob, since it needs to ensure that the partial content it is requesting has not been modified.

With version 2011-08-18, ETags are now returned in this quoted format.

Sample GET Blob Response

HTTP/1.1 200 OK
x-ms-blob-type: BlockBlob
x-ms-lease-status: unlocked
x-ms-meta-m1: v1
x-ms-meta-m2: v2
Content-Length: 11
Content-Type: text/plain; charset=UTF-8
Date: Sun, 25 Sep 2011 22:49:18 GMT
ETag: "0x8CB171DBEAD6A6B"
Last-Modified: Sun, 25 Sep 2011 22:48:29 GMT
x-ms-version: 2011-08-18
Accept-Ranges: bytes
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
Return Accept-Ranges Header

“Get Blob” requests now return “Accept-Ranges” in the response. Though clients should not use this header to infer whether range requests are allowed, certain browsers still expect it. For those browsers, if this header is missing, an interrupted download will resume from the beginning rather than resuming from where it was interrupted.

With version 2011-08-18, we now return this header. The sample REST response above also shows the presence of this new header.

Additional Range Format

Certain media players issue a range request for the entire blob using the format:

Range: bytes=0-

It expects a status code of 206 (i.e., Partial Content) with the entire content being returned and the Content-Range header set to:

Content-Range: bytes 0-10240779/10240780 (assuming the blob was of length 10240780).

On receiving the Content-Range, the media player can then start streaming the blob rather than waiting for the entire blob to be downloaded first.

With version 2011-08-18, we now support this header format.

Sample Range GET Blob Request
GET http://cohowinery.blob.core.windows.net/videos/build.wmv?timeout=60 HTTP/1.1
User-Agent: WA-Storage/6.0.6002.18312
Range: bytes=100-
Host: 10.200.30.18
Sample Range GET Blob Response
HTTP/1.1 206 Partial Content
Content-Length: 1048476
Content-Type: application/octet-stream
Content-Range: bytes 100-1048575/1048576
Last-Modified: Thu, 08 Sep 2011 23:39:47 GMT
Accept-Ranges: bytes
ETag: "0x8CE4217E34E31F0"
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: 387a38ae-fa0c-4fe2-8e60-d6afa2373e56
x-ms-version: 2011-08-18
x-ms-lease-status: unlocked
x-ms-blob-type: BlockBlob
Date: Thu, 08 Sep 2011 23:39:46 GMT

<content …>
If-Match Condition on Non-Existent Blob

The PUT Blob API with an If-Match precondition set to a value other than “*” would previously have succeeded even when the blob did not exist. This should not have succeeded, since it violates the HTTP specification. Therefore, we changed this to return “Precondition Failed” (i.e., HTTP status 412). This breaking change was made to prevent users from inadvertently recreating a deleted blob. It should not impact your service since:

  1. If the application really intends to create the blob, then it will send a PUT request without an ETag, since providing the ETag shows that the caller expects the blob to exist.
  2. If an application sends an ETag, then the intent to just update is made explicit – so if a blob does not exist, the request must fail.
  3. The previous behavior was unexpected, since we recreated the blob when the intent was just to update it. Because of these semantics, no application should be expecting the blob to be recreated.

We have made this change effective in all versions starting with 2009-09-19.
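As an illustration, a hedged sketch of the failing case; the container, blob name, ETag value, and content length are hypothetical. A conditional Put Blob like the following now returns 412 (Precondition Failed) when the blob no longer exists, instead of silently recreating it:

PUT http://cohowinery.blob.core.windows.net/videos/build.wmv HTTP/1.1
x-ms-version: 2011-08-18
x-ms-date: Fri, 02 Sep 2011 05:03:21 GMT
If-Match: "0x8CE4217E34E31F0"
x-ms-blob-type: BlockBlob
Content-Length: 1048576
Authorization: SharedKey cohowinery:<signature>

<content …>

If the blob exists and its ETag matches, the PUT replaces the blob as before; if the ETag does not match or the blob has been deleted, the request fails with HTTP status 412.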

Blob Service Settings and DefaultServiceVersion

Before we get into the changes to the blob service settings, let us understand how versioning works in Windows Azure Storage. The Windows Azure Storage service accepts the version that should be used to process the request in an “x-ms-version” header. The list of supported versions is explained here. However, in certain cases, versions can be omitted:

  1. Anonymous browser requests do not send a version header since there is no way to add this custom header
  2. The PDC 2008 version of the request also did not require a version header

When requests do not have a version header associated with them, we call them un-versioned requests. However, the service still needs to associate a version with these requests, and the rules were as follows:

  1. If a request is versioned, then we use the version specified in the request header
  2. If the version header is not set, then if the ACL for the blob container was set using version 2009-09-19 or later, we will use the 2009-09-19 version to execute the API
  3. Otherwise we will use the PDC 2008 version of the API (which will be deprecated in the future)

Because of the above rules, the changes described above (quoted ETags, the Accept-Ranges header, etc.) would not have taken effect for un-versioned requests in the intended scenarios (e.g., anonymous requests). Hence, we now allow a DefaultServiceVersion property to be set for the blob service for your storage account. It is used only for un-versioned requests, and the new version precedence rules for requests are:

  1. If a request is versioned, then we use the version specified in the request header
  2. If a version header is not present and the user has set DefaultServiceVersion in “Set Blob Service Properties” to a valid version (2009-09-19 or 2011-08-18), then we will use that default version for this request.
  3. If the version header is not set (explicitly or via the DefaultServiceVersion property), then if the ACL for the container was set using version 2009-09-19 or later, we will use 2009-09-19 version to execute the API
  4. Otherwise, we will use the PDC 2008 version of the API, which will be deprecated in the future.

For users who are targeting their blobs to be downloaded via browsers or media players, we recommend setting this default service version to 2011-08-18 so that the improvements can take effect. We also recommend setting this for your service, since we will be deprecating the PDC 2008 version at some point in the future.

Set DefaultServiceVersion property

The existing “Set Blob Service Properties” has been extended in 2011-08-18 version to include a new DefaultServiceVersion property. This is an optional property and accepted only if it is set to a valid version value. It only applies to the Windows Azure Blob service. The possible values are:

  • 2009-09-19
  • 2011-08-18

When set, this version is used for all un-versioned requests. Please note that the “Set Blob Service Properties” request to set DefaultServiceVersion must be made with version 2011-08-18, regardless of which version you are setting DefaultServiceVersion to. An example REST request looks like the following:

Sample REST Request
PUT http://cohowinery.blob.core.windows.net/?restype=service&comp=properties HTTP/1.1
x-ms-version: 2011-08-18
x-ms-date: Sat, 10 Sep 2011 04:28:19 GMT
Authorization: SharedKey cohowinery:Z1lTLDwtq5o1UYQluucdsXk6/iB7YxEu0m6VofAEkUE=
Host: cohowinery.blob.core.windows.net
Content-Length: 200

<?xml version="1.0" encoding="utf-8"?>
<StorageServiceProperties>
    <Logging>
        <Version>1.0</Version>
        <Delete>true</Delete>
        <Read>false</Read>
        <Write>true</Write>
        <RetentionPolicy>
            <Enabled>true</Enabled>
            <Days>7</Days>
        </RetentionPolicy>
    </Logging>
    <Metrics>
        <Version>1.0</Version>
        <Enabled>true</Enabled>
        <IncludeAPIs>false</IncludeAPIs>
        <RetentionPolicy>
            <Enabled>true</Enabled>
            <Days>7</Days>
        </RetentionPolicy>
    </Metrics>
    <DefaultServiceVersion>2011-08-18</DefaultServiceVersion>
</StorageServiceProperties>
Get Storage Service Properties

Using the 2011-08-18 version, this API will now return the DefaultServiceVersion if it has been set.

Sample REST Request
GET http://cohowinery.blob.core.windows.net/?restype=service&comp=properties HTTP/1.1
x-ms-version: 2011-08-18
x-ms-date: Sat, 10 Sep 2011 04:28:19 GMT
Authorization: SharedKey cohowinery:Z1lTLDwtq5o1UYQluucdsXk6/iB7YxEu0m6VofAEkUE=
Host: cohowinery.blob.core.windows.net
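The response is not shown in the original post; the following is a hedged sketch of what the returned body looks like once DefaultServiceVersion has been set, with the Logging and Metrics elements abbreviated:

Sample REST Response
HTTP/1.1 200 OK
Content-Type: application/xml
x-ms-version: 2011-08-18

<?xml version="1.0" encoding="utf-8"?>
<StorageServiceProperties>
    <Logging>…</Logging>
    <Metrics>…</Metrics>
    <DefaultServiceVersion>2011-08-18</DefaultServiceVersion>
</StorageServiceProperties>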
Sample Library and Usage

We provide sample code that can be used to set these service settings. It is very similar to the example provided in the Analytics blog, but it uses the new DefaultServiceVersion property, and we have renamed some classes and methods, using “ServiceSettings” in place of “AnalyticsSettings”.

  • Class SettingsSerializerHelper handles serialization and deserialization of settings.
  • Class ServiceSettings represents the service settings. It also contains the DefaultServiceVersion property, which should be set only for the blob service; the Windows Azure Queue and Table services will return an HTTP 400 (“Bad Request”) status code if it is set.
  • Class ServiceSettingsExtension implements extension methods that can be used to set/get service settings.

The way to use the code is still the same except for the new DefaultServiceVersion property:

CloudStorageAccount account = CloudStorageAccount.Parse(ConnectionString);
CloudBlobClient blobClient = account.CreateCloudBlobClient();
ServiceSettings settings = new ServiceSettings()
        {
            LogType = LoggingLevel.Delete | LoggingLevel.Read | LoggingLevel.Write,
            IsLogRetentionPolicyEnabled = false,
            LogRetentionInDays = 7,
            IsMetricsRetentionPolicyEnabled = true,
            MetricsRetentionInDays = 3,
            MetricsType = MetricsType.All,
            DefaultServiceVersion = "2011-08-18"
        };

blobClient.SetServiceSettings(settings);

Here are the rest of the utility classes. …

[Source code elided for brevity.]


<Return to section navigation list>

SQL Azure Database and Reporting

Zerg Zergling explained Using Active Record with SQL Azure in a 9/14/2011 post to the Windows Azure’s Silver Lining blog:

Active Record is an object-relational mapping (ORM) that makes it easy to work with databases. While there are other ORMs out there for Ruby, Active Record is very popular and I wanted to walk through using it with SQL Azure. Active Record can be installed by running gem install activerecord, however most people probably install it as part of the Rails installation. The examples in this post use Rails, so gem install rails is the command I used to install Rails, Active Record, and all the other bits that Rails comes with.

Next we need an adapter that Active Record can use to talk to SQL Azure. I’m using the SQL Server adapter (gem install activerecord-sqlserver-adapter), though I’m told that if you’re using JRuby, the activerecord-jdbc-adapter works as well.

The Active Record SQL Server adapter can connect to SQL Azure using either ODBC (through the Ruby-ODBC gem) or dblib (through the TinyTDS gem). While both allow connectivity, I’ll be using ODBC since the TinyTDS gem currently requires a manual build process to enable SQL Azure support. If you’re interested in using TinyTDS, I documented my experience building it with SQL Azure support at http://social.technet.microsoft.com/wiki/contents/articles/connecting-to-sql-azure-from-ruby-applications.aspx#tinytds. To install the Ruby-ODBC gem, use gem install ruby-odbc.

Now we just need to provision a new SQL Azure server, create a database, create a Rails application, and modify it to use our SQL Azure database.

Provisioning a SQL Azure Database Server

Perform the following steps to provision a new SQL Azure server:

  1. In your browser, login to http://windows.azure.com and select New Database Server from the Common Tasks section of the ribbon.
  2. In the left pane, select your subscription and then click the Create button in the Server section of the ribbon.
  3. In the Create a New Server dialog, select a region and then click next.
  4. Enter an administrator login and password, and then click next.
  5. Click the Add button and enter the IP address of the machine you will be running the Ruby code on. Finally, click finish.

At this point you will have a new server with a gibberish name like fvwerou3.database.windows.net. You can use the Test Connectivity button on the ribbon at the top of your browser to test whether you can connect to the master database for this server.

Create a database

Active Record can’t automatically create a database on the SQL Azure server that we’ve just provisioned, so we have to do this manually. To create a new database, select the Create button from the Database section of the ribbon.

Enter a new name and click OK. The defaults of Web edition and a size of 1 GB should be sufficient for testing.

Create a Rails application

For a test application, I created a simple blog using the following commands at the command line:

rails new blog
cd blog
rails generate scaffold Post title:string body:text

To configure this application to use SQL Azure, perform the following steps:

  1. Edit the Gemfile and comment out the gem ‘sqlite3’ entry. Add the following entries:
    gem 'activerecord-sqlserver-adapter'
    gem 'ruby-odbc'
  2. Save the Gemfile and run bundle install to ensure that the new gems we added are installed.
  3. Next, open the database.yml file from the blog/config directory and replace the existing development section with the following:
    development:
      adapter: sqlserver
      mode: ODBC
      dsn: Driver={SQL Server};Server=servername.database.windows.net;Uid=user@servername.database.windows.net;Pwd=password;Database=databasename
  4. Replace the servername, username, password, and databasename with the values for your SQL Azure Server, database administrator login, password, and database name.
  5. Save the database.yml file and run rake db:migrate to create the database structure.

At this point you will receive an error stating that ‘Tables without a clustered index are not supported in this version of SQL Server’. To fix this error, go back to the Windows Azure portal in your web browser and perform the following steps:

  1. Select your database and then click the Manage icon in the Database section of the ribbon.
  2. When prompted to login, ensure that the Database name is correct and then enter the database administrator username and password. Click Log on to proceed.
  3. Select the New Query icon from the ribbon, and enter the following statement in the query window:
    CREATE CLUSTERED INDEX [idx_schema_migrations_version] ON [schema_migrations] ([version])
  4. Click the Execute icon on the ribbon to run this statement. Once this has completed, issue the rake db:migrate command again and it will succeed.
Run the Application

You can now start the blog web site by running rails s. Navigate to http://localhost:3000/posts to create and view posts, which will be stored into SQL Azure. If you return to the browser window where we added the clustered index, you should be able to select the ‘posts’ table and view the table structure, data stored in it, etc.

NOTE

In writing this blog post I’ve noticed that something has changed between the Rails 3.1 RC code my work environment is using and the Rails 3.1 release code I installed on my test box. The RC4 code works fine with SQL Azure, but with the 3.1 release I receive the following error:

DBCC command ‘useroptions’ is not supported in this version of SQL Server

This is accurate; useroptions isn’t supported by SQL Azure. So if you're using a newer version of Rails and things are failing, you're not alone. I'll investigate and see if I can find a resolution to this error and post an update here.


James Podgorski posted Understanding SQL Azure Federations No-MARS Support and Entity Framework to the Windows Azure AppFabric CAT blog on 9/13/2011:

In a previous blog posting here I ran through a typical first blocker when using the code-first feature of Entity Framework (EF) with SQL Azure Federations. That first blocker was understanding the correct procedure to submit the USE FEDERATION statement before executing your LINQ statements.

This blog posting will continue with the blockers and provide one key takeaway: Multiple Active Result Sets (MARS) is not supported in SQL Azure Federations. In fact, with MARS enabled, one cannot even submit the USE FEDERATION statement against a federated database without receiving a SqlException. In this blog we build from the code presented earlier and provide a couple of samples to illustrate some important points.

The USE FEDERATION statement is not supported on a connection which has multiple active result sets (MARS) enabled.
Where in Entity Framework Is MARS Used?

MARS is used by the Entity Framework provider for the following operations to load related objects. If you are using EF with SQL Azure Federations, you should avoid the following, as they depend upon the MARS capability of SQL Server.

  • Lazy loading
  • LoadProperty() to explicitly load related objects as specified by navigation properties
  • Load() to explicitly load a collection of related objects
Lazy Loading

The snippet below highlights a common scenario in which someone is looking at an Order (ShowOrder) and then decides to drill further down into the OrderDetails (ShowDetails). A couple of points are worth mentioning. For all the samples, we assume that MARS is disabled by setting the MultipleActiveResultSets property of the connection string to false. And the sample from the previous post was augmented to include an OrderDetail collection for the Order, the implementation of which is shown at the bottom for completeness only.

                var orders =  from x in dc.Orders
                              select x;

                foreach (Order order in orders)
                {
                    bool examine = ShowOrder(order);
                    if (examine)

ShowDetails(order.OrderDetails);

                }

In this example, let’s assume that lazy loading is enabled, the default in EF 4.0 and beyond. In the highlighted text above, an EntityCommandExecutionException is thrown, raised from the Order.OrderDetails navigation property, because MARS is a requirement for such an operation.

There is already an open DataReader associated with this Command which must be closed first.

Fortunately we can disable lazy loading whether using the designer or code. In the code bits below you will see that this was achieved by setting the LazyLoadingEnabled property on the Configuration property for the DbContext to false. But if we do so, we get a NullReferenceException when accessing the OrderDetails collection because the navigation properties, in this case the OrderDetails, are empty/null.

                dc.Configuration.LazyLoadingEnabled = false;

                var orders = from x in dc.Orders
                             select x;

                foreach (Order order in orders)
                {
                    bool examine = ShowOrder(order);
                    if (examine)
                        ShowDetails(order.OrderDetails);
                }
Object reference not set to an instance of an object.
LoadProperty() and Load()

The following code sample shows a possible alternative that, as you would have guessed, fails because there is already an open DataReader associated with the retrieval of the orders. It presents the exact same problem as lazy loading: explicit loading of related objects requires two active result sets, i.e. MARS.

                var orders = from x in dc.Orders
                             select x;

                foreach (Order order in orders)
                {
                    bool examine = ShowOrder(order);
                    if (examine)
                    {
                        dc.Entry(order).Collection(o => o.OrderDetails).Load();
                        ShowDetails(order.OrderDetails);
                    }
                }
Alternatives

Call ToList() or ToArray() before the foreach so that the DataReader is closed before the order details are retrieved. This technique requires that lazy loading is enabled on the context.

                var orders = (from x in dc.Orders
                              select x).ToList();

                foreach (Order order in orders)
                {
                    bool examine = ShowOrder(order);
                    if (examine)
                    {
                        ShowDetails(order.OrderDetails);
                    }
                }

Use eager loading via the Include method on the query, but this means retrieving all values up front.

                var orders = from x in dc.Orders.Include(o => o.OrderDetails)
                             select x;

                foreach (Order order in orders)
                {
                    bool examine = ShowOrder(order);
                    if (examine)
                    {
                        ShowDetails(order.OrderDetails);
                    }
                }

Conclusion

In this blog we took home one takeaway about MARS and SQL Azure Federations and looked at its implications for your Entity Framework client code. In the next post I will continue with another set of samples to guide you with SQL Azure Federations.

Code Sample for Completeness

Included below are the code additions made to the sample from the previous posting. The EF code-first snippet includes a new class called OrderDetail. Note the navigation properties added to both classes.

    public class Order
    {
        [Key, Column(Order = 1)]
        public long order_id { get; set; }

        [Key, Column(Order = 2)]
        public long customer_id { get; set; }

        public decimal total_cost { get; set; }
        public DateTime order_date { get; set; }

        public virtual ICollection<OrderDetail> OrderDetails { get; set; }
    }

    public class OrderDetail
    {
        public long order_id { get; set; }

        [Key, Column(Order = 1)]
        public long order_detail_id { get; set; }

        [Key, Column(Order = 2)]
        public long customer_id { get; set; }

        public long product_id { get; set; }

        public Int16 order_qty { get; set; }

        public decimal unit_price { get; set; }

        [ForeignKey("customer_id,order_id")]
        public virtual Order Orders { get; set; }
    }

    public class SalesEntities : DbContext
    {
        public DbSet<Order> Orders { get; set; }
        public DbSet<OrderDetail> OrderDetails { get; set; }

        public SalesEntities(string connStr)
            : base(connStr)
        {
        }
    }

An update to the underlying SQL Azure tables.

-- Create the table in the first federation member; this will be a federated table.
-- The federation distribution column in this case is customer_id.
CREATE TABLE Orders
(
    order_id bigint not null,
    customer_id bigint,
    total_cost money not null,
    order_date datetime not null,
    primary key (order_id, customer_id)
) FEDERATED ON (range_id = customer_id)
GO

CREATE TABLE OrderDetails
(
    order_id bigint not null,
    order_detail_id bigint not null,
    customer_id bigint not null,
    product_id bigint not null,
    order_qty smallint not null,
    unit_price money not null,
    primary key (order_detail_id, customer_id)
) FEDERATED ON (range_id = customer_id)
GO

ALTER TABLE OrderDetails
ADD CONSTRAINT FK_OrderDetails_Sales FOREIGN KEY (order_id, customer_id)
REFERENCES Orders (order_id, customer_id)
GO

-- Insert 160 orders (2 order details per order) into the federated tables
DECLARE @order_id int
DECLARE @customer_id int
DECLARE @order_detail_id int

SET @customer_id = 1
SET @order_detail_id = 1
SET @order_id = @customer_id

WHILE @customer_id < 81
BEGIN
    INSERT INTO Orders VALUES (@order_id, @customer_id, 10, getdate())
    INSERT INTO OrderDetails VALUES (@order_id, @order_detail_id, @customer_id, 1, 1, 10)
    SET @order_detail_id = @order_detail_id + 1
    INSERT INTO OrderDetails VALUES (@order_id, @order_detail_id, @customer_id, 2, 2, 20)

    SET @order_id = @order_id + 1
    SET @order_detail_id = @order_detail_id + 1
    INSERT INTO Orders VALUES (@order_id, @customer_id, 20, getdate())
    INSERT INTO OrderDetails VALUES (@order_id, @order_detail_id, @customer_id, 1, 1, 10)
    SET @order_detail_id = @order_detail_id + 1
    INSERT INTO OrderDetails VALUES (@order_id, @order_detail_id, @customer_id, 2, 2, 20)

    SET @order_id = @order_id + 1
    SET @order_detail_id = @order_detail_id + 1
    SET @customer_id = @customer_id + 1
END
GO

<Return to section navigation list>

MarketPlace DataMarket and OData

Sudhir Hasbe reported Windows Azure Marketplace News from BUILD: Announcing NEW Data Offerings and International Availability in a 9/15/2011 post to the Windows Azure blog:

Yesterday at BUILD, Microsoft Server and Tools Business President Satya Nadella made two announcements around the Windows Azure Marketplace and shared details on how Ford Motor Company and eBay are using the Marketplace to add further value to their business. This post will dive deeper into both of these announcements.

International Availability

Microsoft announced the upcoming availability of the Windows Azure Marketplace in 25 new markets around the world, including: Austria, Belgium, Canada, the Czech Republic, Denmark, Finland, France, Germany, Hungary, Ireland, Italy, Netherlands, Norway, Poland, Portugal, Spain, Sweden, Switzerland, the UK, Australia, Hong Kong, Japan, Mexico, New Zealand, and Singapore. Customers in these new markets will be able to discover, explore and subscribe to premium data and applications on the Marketplace starting next month.

Starting today, partners can submit their applications & datasets to publish on the marketplace. Interested partners can learn how to get started here.

BING Data Available on Windows Azure Marketplace

Microsoft also announced the coming availability of a number of exciting data offerings on the Windows Azure Marketplace. The first of these, the Microsoft Translator APIs, are available today, alongside a fast-growing collection of data sets and applications, with more being introduced through the remainder of the year. The Microsoft Translator APIs, which were previously available here, allow developers and webmasters to provide translation and language services in more than 35 languages as part of their applications, websites or services. This is the same cloud service that delivers translations to millions of users every day via Bing, Microsoft Office and other Microsoft products.

Through the Windows Azure Marketplace, Microsoft will make available both a free, limited throughput version of the Microsoft Translator APIs, as well as a number of paid, higher throughput versions of the APIs. Starting today, Microsoft is offering customers a 3 month promotional period during which the higher throughput versions of the APIs will be available free of charge.

Developers can now start using the Microsoft Translator APIs through the Windows Azure Marketplace in web or client applications to perform machine translations from or to any of the following languages (list updated regularly); a usage sketch follows the language list below.

Arabic, Bulgarian, Catalan, Hindi, Norwegian, Finnish, French, German, Portuguese, Greek, Japanese, Korean, Latvian, Vietnamese, Lithuanian, Slovenian, Spanish, Swedish, Dutch, Thai, Chinese (Traditional), Chinese (Simplified), Haitian Creole, Hungarian, Turkish, Czech, Hebrew, Polish, Romanian, Ukrainian, Danish, English, Indonesian, Russian, Estonian, Italian, and Slovak.
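For a sense of what consuming the Translator offer might look like from .NET, here is a minimal, hypothetical sketch. The DataMarket endpoint, the query parameters (Text, To) and the Basic-authentication scheme shown here are assumptions based on how Marketplace data offers are typically consumed; check the offer's service details page in the Marketplace for the exact URL and parameters.

using System;
using System.IO;
using System.Net;

class TranslatorSample
{
    static void Main()
    {
        // Hypothetical value: substitute your own Marketplace account key.
        const string accountKey = "<your-marketplace-account-key>";
        string text = Uri.EscapeDataString("Hello, world");

        // Assumed DataMarket URL and query parameters; verify against the offer's service page.
        string url = "https://api.datamarket.azure.com/Bing/MicrosoftTranslator/v1/Translate" +
                     "?Text=%27" + text + "%27&To=%27de%27";

        var request = (HttpWebRequest)WebRequest.Create(url);
        // DataMarket offers are typically secured with HTTP Basic authentication using the account key.
        request.Credentials = new NetworkCredential(accountKey, accountKey);

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // The response is an OData (Atom) feed containing the translated text.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}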

How are others using the Windows Azure Marketplace?

Ford Motor Company

Ford will launch its first battery-powered electric passenger vehicle at the end of the year. Fully charging the vehicle at home or a business should take just over 3 hours to complete; however, as the cost of electricity can vary by the time of day, when you charge the vehicle can have an important impact on the cost of ownership. So, every new Focus Electric will offer the Value Charging system powered by Microsoft, to help owners in the US charge their vehicles at the cheapest utility rates, lowering cost of ownership. To do this, Ford will rely on an electric utility rates dataset on the Windows Azure Marketplace that currently has information from 100 utilities covering more than 10,000 US zip codes and 1,500 Canadian postal codes.

eBay

eBay has a popular mobile application on Windows Phone 7 called eBay mobile, with more than 300k downloads to date. In the coming weeks, eBay will release a major update including faster payment flows and selling capabilities as well as the ability to have listing details automatically translated to and from 37 different languages. This is accomplished by leveraging the Microsoft Translator API, which is now available in the Windows Azure Marketplace. By leveraging the Translator API, eBay is able to create a more global product - delivering product listings in multiple languages to a broad global audience.

ESRI

Esri, a leading provider of geospatial software and services, is extending their ArcGIS system to Windows Azure Platform. With ArcGIS Online customers can create “intelligent maps” (starting with Bing, topography, ocean and other base maps) to visualize, access, consume and publish data-sets from Windows Azure Marketplace and their own data services. This will make a rich set of geographic tools, once only available to geographic information professionals, broadly available to anyone interested in working with geospatial data e.g. environmental scientists interested in visualizing air quality metrics against specific geographies. These maps can then be served up to the cloud and shared between individuals and their defined groups, across organizations and devices. This solution is available today, and can be accessed here.

To read more about all of the Windows Azure-related announcements made at BUILD, please read the blog post, "JUST ANNOUNCED @ BUILD: New Windows Azure Toolkit for Windows 8, Windows Azure SDK 1.5, Geo-Replication for Windows Azure Storage, and More". For more information about BUILD or to watch the keynotes, please visit the BUILD Virtual Press Room. And follow @WindowsAzure and @STBNewsBytes for the latest news and real-time talk about BUILD.

Visit the Windows Azure Marketplace to learn more.


Shayne Burgess (@shayneburgess) reported Updates [to] Item Templates for Microsoft Visual Studio 11 Express for Windows Developer Preview in a 9/15/2011 post to the WCF Data Services blog:

Microsoft recently released the developer preview of the next version of Windows as well as a developer preview of Visual Studio 11 for building apps on the new version of Windows. Some of you may have noticed that the “New Item” template for creating a WCF Data Service isn’t working correctly (this is pre-release “preview” software after all). We have created a fix for these templates that you can use to unblock creating a WCF Data Service. The details of applying the fix are below – keep in mind this version of Visual Studio is a preview release only; the fix should only be used with that version of Visual Studio and should not be applied to any other beta/RTM version of Visual Studio.

Installation Instructions for the Updated Templates:

1. Download and unzip the templates attached to this post.

2. Install the new templates. There are 4 templates to install, and installing them simply requires that you copy the attached templates over the ones in the Visual Studio directory. The names of the templates and the location to copy them are in the table below (each location is relative to the base Visual Studio install directory).

Template and location:

AdoNetDataServiceCSharpWap.zip: <Program Files>\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\CSharp\Web\1033
AdoNetDataServiceVBWap.zip: <Program Files>\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\VisualBasic\Web\1033
AdoNetDataServiceCSharpWebsite.zip: <Program Files>\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\Web\CSharp\1033
AdoNetDataServiceVBWebsite.zip: <Program Files>\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\Web\VisualBasic\1033

3. Update Visual Studio.
a. Close any active instances of Visual Studio 11.
b. Open an elevated Visual Studio 11 Developer Command Prompt. You can find a tile for this on the Start screen; right-click the tile, select the Advanced button and select Run as administrator.
c. In the developer command prompt enter the command “devenv /installvstemplates”
d. Wait for the command to complete and then start Visual Studio and the Item Templates should be working.

If you have any comments or questions on using WCF Data Services in the developer preview of Visual Studio please don’t hesitate to send them to us.


Glenn Gailey (@ggailey777) listed Some Resources for OData and Azure in a 9/14/2011 post:

imageWith the rapidly growing popularity of the Windows Azure Platform and Microsoft’s Cloud Services offerings, I thought it a good idea to put up a quick post about where you can find out about publishing OData services to the Azure cloud, especially since it’s so easy to do with the Windows Azure Tools for Visual Studio. Anyway, the following is some of the more complete and useful content available out there that shows you how to publish WCF Data Services to Azure:

Windows Azure and SQL Azure Tutorials - Tutorial 2.1: Creating an OData Service

The Windows Azure and SQL Azure tutorials are a series of articles on the TechNet Wiki that show you, tutorial style, how to create and deploy a Windows Azure hosted application. This application is extended and modified throughout the series. Because the second tutorial adds a SQL Azure database to the application, it is very easy to extend this application (an ASP.NET Web role) to add both an Entity Framework model and a data service. Also, I wrote it, so you know who to complain to.

Data Service in the Cloud

This article, written for the Data Developer Center by the great Julie Lerman, is pretty much the definitive article on how to create an Azure-hosted OData service. While she covers the traditional Visual Studio-authored ASP.NET Web role in Azure accessing SQL Azure, Julie also rockets on down the road to consume the same OData feed in an application hosted as an ASP.NET MVC UI Web Role (also in Azure). Seeing an OData service consumed in an ASP.NET MVC app is (to my mind) worth the price of admission (which in this case is actually free).

Deploying an OData Service in Windows Azure

This is the original blog post from Shayne on the OData team at Microsoft that showed how to use the Windows Azure Tools for Visual Studio to create and publish a WCF Data Services-based OData service as an ASP.NET Web role, which uses the Entity Framework provider to access data from a SQL Azure database. It is still very useful, although both the Azure site and tools have been redesigned so the screen captures aren’t really correct anymore.


Note that all of these articles use the Entity Framework provider to access SQL Azure, because that is frankly the most common and interesting data provider scenario for a cloud-based Azure data service. If you wanted to publish a data service that implements a custom data service provider, you would need to follow the same basic steps (and yes the custom provider is usually a bit more work—see this if you are interested in custom providers). OK, there is one other interesting Azure scenario for data services, which is using Azure Blob Storage with a streaming provider—I hope to be able to demonstrate this very soon in my streaming provider series.

Also, if you know of any other really good OData on Azure content, please leave a comment on this post to let me know about it. If I get enough responses, I’ll create a new topic on the TechNet Wiki where we can maintain the complete list of content (because the wiki is useful for that kind of stuff—and other folks can help me maintain it).


<Return to section navigation list>

Windows Azure AppFabric: Apps, Access Control, WIF and Service Bus

Clemens Vasters (@clemensv) reported Now Available: The Service Bus September 2011 Release in a 9/16/2011 post to the Windows Azure blog:

As announced at BUILD this week, the Service Bus September 2011 Release is now available. This is the biggest feature update in the production environment since the service launched in January 2010.

The Service Bus provides secure connectivity and messaging capabilities that enable building distributed and loosely coupled applications in the cloud, as well as hybrid applications across both on-premises and the cloud. It enables a variety of communication and messaging protocols and patterns, and eliminates the need for the developer to worry about delivery assurance, reliable messaging and scale. You can learn more about the Service Bus here.

This release introduces enhancements to the Service Bus that improve pub/sub messaging by supporting features such as Queues, Topics and Subscriptions, and Rules. This release also enables new scenarios on the Windows Azure platform, such as:

  • Asynchronous Cloud Eventing – Distribute event notifications to occasionally connected clients (for example, phones, remote workers, kiosks, and so on)
  • Event-driven Service Oriented Architecture (SOA) – Building loosely coupled systems that can easily evolve over time
  • Advanced Intra-App Messaging – Load leveling and load balancing for building highly scalable and resilient applications

The new messaging features that materialize in Queues and Topics were first made available in the May 2011 CTP as a service preview, and are now in the Service Bus production environment. Several of the detailed capabilities of the new messaging features, like the unique support for sessions along with a facility to track session processing state, were directly informed by the needs of customer projects that we’ve followed in early adopter programs and other Microsoft development efforts that are taking a long-term architectural bet on Service Bus.

What Changed

What developers will notice right away after looking at the documentation and exploring the samples is that the API for the new messaging capabilities has changed quite a bit since the May 2011 CTP release –in response to direct customer feedback. One key goal of the API changes was to streamline the API and reduce the number of lines of code that are needed to use the new Service Bus features; I’ll give a few examples below.

Another goal was to make the runtime pieces of the API much more robust. For example, developers had to handle exceptions due to torn network connections explicitly in the CTP and take explicit steps to replace ‘faulted’ receiver or sender objects; in this production release the messaging client API automatically tries to reconnect, just like the Relay listeners of Service Bus, and client objects won’t go into a ‘faulted’ state requiring explicit recovery by the application.

What Didn’t Change

What developers will not see is a discontinuity in service or a disruptive change to Service Bus behavior in production. Even though we’ve proverbially pulled the tablecloth out from under everyone and put a new one on – with a very much expanded feature set – none of the crystal, china, or silverware on the table moved. Existing apps using Service Bus still run and the Microsoft.ServiceBus.dll assembly 1.0.x.x from the previous production SDK ‘just works’ – there is no work required to adapt existing apps to the new release.

How to Use the New Features

Taking advantage of all of the new Service Bus capabilities from .NET does, however, require using the new SDK with the new Microsoft.ServiceBus.dll version 1.5.x.x that contains the new Messaging API. Our recommendation is that even applications that are only using Relay features be recompiled and tested against this newest assembly, and that customer deployments of the 1.0.x.x assemblies start being phased out as part of regular upgrade and deployment cycles.

Keep in mind that the new 1.5.x.x assembly is only available for the full .NET 4.0 framework – for access to the new messaging capabilities from Silverlight, the new Windows 8 client profile, the .NET 4.0 client profile, or from applications requiring .NET 3.5, developers can leverage the client samples provided for Silverlight to gain access to the vast majority of features through the Service Bus REST API. These Silverlight samples, as well as code for accessing Service Bus from Java, PHP, and other platforms, will be made available over the course of the upcoming weeks – each will use the appropriate native runtime APIs, but echo the .NET API in terms of terminology and patterns.

Using the REST API is also a good choice, for this release, if applications depending on Service Bus Queues and Topics need access from tightly managed networking environments where outbound HTTPS access is possible, but TCP 9354 cannot be made available for outbound traffic.

The grand upside of using the new .NET API directly is that the TCP protocol used by the new Microsoft.ServiceBus.dll client bits is vastly more efficient than HTTP. The TCP protocol is also, for now, a prerequisite for several advanced features like sessions and transaction support.

New Opportunities for Existing Applications

And even though we’re committed to backwards compatibility, there are a few things developers with existing Service Bus apps may want to consider looking at over the next few months. We will provide more concrete guidance in coming posts, but for now, here are a few high-level examples:

  • For the vast majority of use cases, the new Queue capability is a better choice than the Message Buffer. There are some special use-cases where the Message Buffer may be preferred, such as where the application relies on the Message Buffer being ephemeral (it automatically expires and vanishes) or where the application relies on the Message Buffer’s overflow policy to exert backpressure on the sender – in most other cases, moving to the Queue is a splendid idea.
  • The NetEventRelayBinding provides relayed multicast one-way messaging – albeit within rather narrow limits of up to 25 concurrent listeners. For many cases, it’s well worth looking into replacing these paths with a Topic where every destination has its own Subscription and receives messages using the new NetMessagingBinding. That provides more scale (up to 2000 concurrent subscriptions) and the subscriptions can also be filtered. The remaining differentiation for NetEventRelayBinding is that it does, as all direct TCP-based connectivity mechanisms do, exert backpressure into the client if the listeners have a slower receive rate than the sender’s theoretically possible send rate.
API Changes Compared to the May CTP

Based on customer feedback and our own experiences in building apps with the API we’ve presented back in May, we’ve made a significant number of API improvements, some of which also benefit the existing Service Bus Relay capabilities – while staying compatible with existing code.

One of the most apparent changes is around security where we’ve introduced the notion of ‘token providers’ instead of feeding credentials directly into the API. Using the credential classes and setting credentials on the TransportClientEndpointBehavior is still possible for the Relay, but is now labeled as deprecated. Service Bus uses federated security with access tokens issued by the Windows Azure AppFabric Access Control service (ACS). The API factoring now reflects that federated nature, with token providers being independent entities that acquire and dispense tokens into the Service Bus client infrastructure as needed. The upside is that this new factoring allows customer code to plug new token providers into the API that acquire credentials and interact with the Access Control service in special ways, possibly popping a dialog window that hosts a web-browser control asking for a Facebook login, passing the resulting Facebook token to ACS, and then returning the Service Bus access token to the application.

The management API surface also got a bit of a makeover. ServiceBusNamespaceClient is now called NamespaceManager and there are now – of course – methods that allow checking whether a particular Queue, Topic, or Subscription already exists. The Namespace Manager also now allows direct management of Subscription rules. Creating rules on Subscriptions is done using filters and the respective classes have also been slightly reorganized, e.g. SqlFilterExpression is now just SqlFilter.
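To make the new management surface concrete, here is a minimal sketch of creating messaging entities with the 1.5 API. The namespace name, issuer name and key are placeholders, and the exact overloads should be confirmed against the September 2011 SDK documentation.

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class NamespaceSetupSketch
{
    static void Main()
    {
        // Placeholder namespace and shared-secret credentials.
        Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", string.Empty);
        TokenProvider tokenProvider =
            TokenProvider.CreateSharedSecretTokenProvider("owner", "<issuer-key>");

        var namespaceManager = new NamespaceManager(serviceUri, tokenProvider);

        // The September 2011 API lets you check for existence before creating entities.
        if (!namespaceManager.QueueExists("orders"))
        {
            namespaceManager.CreateQueue("orders");
        }

        // Topics and filtered subscriptions follow the same pattern; SqlFilter replaces
        // the CTP-era SqlFilterExpression.
        if (!namespaceManager.TopicExists("auditevents"))
        {
            namespaceManager.CreateTopic("auditevents");
            namespaceManager.CreateSubscription("auditevents", "highpriority",
                new SqlFilter("Priority > 2"));
        }
    }
}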

The most significant changes – and, as we hope, improvements – were done in the core Messaging API where we’ve changed a number of things ‘under the hood’, allowing us to shed quite a bit of complexity in the API surface.

What you will find browsing through the new samples is that the MessagingFactory and all objects dispensed by it now have a much simpler state management model. You no longer need to ‘Open’ the objects for use and they also no longer go into a faulted state. As much as those changes may seem like minor details, they are the direct result of improvements in the underlying binary protocol that now allows connection sharing across multiple senders and receivers, message prefetching, and – probably most importantly – automatically reconnects sessions when connections get lost as mentioned earlier. In other words, an application can now have a pending ‘Receive’ request, get disconnected or even be hibernated along with the OS, and will reconnect and automatically retry getting the message once the network is available again.
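The following sketch shows the simplified runtime model: a MessagingFactory is created once, clients are used without an explicit Open call, and reconnection after a dropped connection is handled by the client library. The "orders" queue name and the issuer credentials are placeholders; treat this as an illustration rather than production code.

using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class QueueRoundTripSketch
{
    static void Main()
    {
        // Placeholder namespace and credentials; there is no Open() call and no
        // faulted state that the application must recover from.
        Uri serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", string.Empty);
        TokenProvider tokenProvider =
            TokenProvider.CreateSharedSecretTokenProvider("owner", "<issuer-key>");

        MessagingFactory factory = MessagingFactory.Create(serviceUri, tokenProvider);

        QueueClient sender = factory.CreateQueueClient("orders");
        sender.Send(new BrokeredMessage("order 42"));

        QueueClient receiver = factory.CreateQueueClient("orders", ReceiveMode.PeekLock);
        BrokeredMessage message = receiver.Receive(TimeSpan.FromSeconds(30));
        if (message != null)
        {
            Console.WriteLine(message.GetBody<string>());
            message.Complete();   // removes the message from the queue
        }

        factory.Close();
    }
}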

NetMessagingBinding is the new name for the binding for Queues and Topics that is providing full integration with WCF, and is functionally similar to its cousin NetMsmqBinding. On the service-side, the NetMessagingBinding provides an automatic message-pump that pulls messages off a Queue or Subscription and it’s integrated with WCF’s ReceiveContext mechanism.

Where? How?

As mentioned above - the new release is available right now and you can instantly use all of these features. The required client assemblies and samples are available for download here. You can also easily install the runtime assemblies through NuGet or the Web Platform Installer as part of the Windows Azure SDK and start building applications.

Please note that the September 2011 release is only available in the regular production environment for Service Bus and not in the CTP environment (“appfabriclabs.com”) that you may have been using before. Applications that run against the CTP environment should be migrated to the regular Service Bus service no later than October 31, 2011 as the environment may change, unannounced, after that date.

Here is a list of resources for learning more about this release:

To read more about all of the Windows Azure-related announcements made at BUILD, please read the blog post, "JUST ANNOUNCED @ BUILD: New Windows Azure Toolkit for Windows 8, Windows Azure SDK 1.5, Geo-Replication for Windows Azure Storage, and More". For more information about BUILD or to watch the keynotes, please visit the BUILD Virtual Press Room. And follow @WindowsAzure and @STBNewsBytes for the latest news and real-time talk about BUILD.

Clemens Vasters is the Principal Technical Lead on Service Bus. Follow Clemens @clemensv.


Vittorio Bertocci (@vibronet) describe Using ACS in Metro Style Applications in a 9/14/2011 post:

I am sure that many of you, amazed by the fantastic news about the Windows Developer Preview capabilities, wondered if it will be possible to take advantage of ACS even from Metro Style applications.

The answer is “yes, absolutely”.

We’ve been working with the appropriate Windows engineering team to make sure we connect to ACS from Metro style applications by making proper use of the new Windows security features: today we are sharing with you some of the outcomes of that conversation.

More precisely:

Below I’ll add some details on the first two items.
Please keep in mind at all times that, in line with what you heard these days, what we are sharing on this topic is a developer preview.

John Shewchuk’s Demo in Today’s //BUILD Keynote

If you weren’t following the keynote, or if you were spacing out right at the crucial moment, here is a brief summary of what John demonstrated today (minus the notification parts; I am focusing just on authentication here). The demo John showed is a version of the app I built with the help of the Windows guys: Wade did a great job polishing it and placing it in a realistic scenario, turning the rough developer-oriented prototype into a nice-looking demo. In my session on Thursday you are going to see things in detail.
I don’t want to spoil the simplicity of the scenario by hitting you with the explanation of what’s going on behind the scenes; not yet. I’ll get to that in a moment.

The application is a very simple travel management utility, with the typical look of the Metro Style app:

[Screenshot: the travel application’s home screen]

If you hit the login button, you’ll be prompted to sign in by choosing among four well-known identity providers.

[Screenshot: sign-in page listing four identity providers]

Let’s pick Facebook: the familiar Facebook authentication UI appears in what looks like a dialog.

[Screenshot: Facebook authentication dialog]

Upon successful authentication the application lets you in and retrieves your data.

[Screenshot: the application after successful sign-in]

Simple, right? But there’s more.

If the application runs on a trusted device, and you logged on to the machine using Windows Live ID, you are in for a nice surprise. If you launch the travel application on another trusted device, you won’t have to go through the authentication phase again; you will find that you are already logged in!

Now that you have seen how the user experience unfolds, let’s take a quick peek under the hood.

The App

The application is a Metro style app based on HTML. All the code running on the client side is JavaScript. And in JavaScript tradition, it is extremely simple.

The Identity Providers

The list of identity providers displayed at sign in time is, surprise surprise, retrieved from ACS. As many of you loyal readers know by now, ACS offers the list of configured identity providers (and their sign-in URLs) in form of a JSON list. And how hard is it to retrieve a JSON list via Javascript? Thought so.

function SignIn() {
    try {
        Show('signon-block');

        var request = new XMLHttpRequest();
        request.open("GET", IPSFeedURL("https://xxxxxxxxxxx.accesscontrol.windows.net"), false);
        request.send(null);

        var jsonString = request.responseText;
        var jsonlist = ParseIPList(jsonString);

        BindJsonToList(jsonlist);
    } catch (e) {
        ShowDialog(e);
    }
}
Web Authentication Experience in a Metro Style App

The dialog which displayed the Facebook authentication UI is part of a new Windows runtime feature. I don’t want to go too much in details, as I am sure that the Windows guys will talk at length about it and they are THE authoritative source of information about their feature. Here I’ll stick to the talking points they gave me: the WebAuthenticationBroker is a surface that developers can use to host authentication experiences for online services, just like the demo did for Facebook (and would have done for any other provider, had we picked a different one).

function ItemSelected(element) {
    try {
        var acsURL = ipList[element.detail.itemIndex].LoginUrl;

        var startURI = new Windows.Foundation.Uri(acsURL);
        var endURI = new Windows.Foundation.Uri(callbackURL);

        Windows.Security.Authentication.Web.WebAuthenticationBroker.authenticateAsync(
            Windows.Security.Authentication.Web.WebAuthenticationOptions.none,
            startURI,
            endURI).then(callbackACSAuth, callbackACSAuthError);
    } catch (e) {
        ShowDialog(e);
    }
}
Invoking a Service With OAuth

Upon successful authentication, the flow bounces from Facebook to ACS, where a slim, RESTful SWT token is minted. The token is returned from the broker to the application (more about how that happens in the section about the toolkit). The token is then used for securing an OAuth 2.0 call to a service on the travel app backend (on Windows Azure of course, but technically it could live anywhere). The operation in itself is absolutely trivial to implement in JavaScript; it’s just a matter of putting the token in the Authorization HTTP header according to the OAuth 2.0 syntax.

function GetTravelerInfo(token, serviceUrl) {
    try {
        var authHeader = "OAuth " + token;

        var request = new XMLHttpRequest();
        request.open("GET", serviceUrl, false);
        request.setRequestHeader("Authorization", authHeader);
        request.send();

        return request.responseText;
    } catch (e) {
        ShowDialog(e);
    }
}

The token validation on the service side is done via WIF, but given the simplicity of the operation it could be done even directly in the app code.
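For readers curious what that direct validation might look like, here is a minimal, hypothetical C# sketch that only checks the HMACSHA256 signature of a Simple Web Token using the ACS token signing key (base64-encoded, as shown in the ACS management portal). A real service should also verify the Issuer, Audience and ExpiresOn values, and WIF remains the recommended route.

using System;
using System.Security.Cryptography;
using System.Text;
using System.Web;

static class SwtValidator
{
    // Sketch only: validates the trailing HMACSHA256 signature of an SWT token.
    public static bool IsSignatureValid(string token, string base64SigningKey)
    {
        const string signatureLabel = "&HMACSHA256=";
        int index = token.LastIndexOf(signatureLabel, StringComparison.Ordinal);
        if (index < 0) return false;

        string unsignedPart = token.Substring(0, index);
        string signature = HttpUtility.UrlDecode(token.Substring(index + signatureLabel.Length));

        using (var hmac = new HMACSHA256(Convert.FromBase64String(base64SigningKey)))
        {
            byte[] computed = hmac.ComputeHash(Encoding.UTF8.GetBytes(unsignedPart));
            return Convert.ToBase64String(computed) == signature;
        }
    }
}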

Roaming Tokens

This is all fine and dandy, I can almost hear you say: but how did you pull off the trick of avoiding the need to re-authenticate on the other device? The answer lies in another great new feature of Windows, the Credentials Vault. The considerations I made earlier about the WebAuthenticationBroker are valid for the Vault, too: in fact, it is coming from the same awesome Windows feature team.
Here I will just say that, as you have seen in the keynote and big picture sessions, Windows is introducing phenomenal new roaming capabilities: if you save your tokens in the Vault, and the correct conditions are met, you can take advantage of those roaming capabilities too.

// Saving the token in the Vault
var vault = new Windows.Security.Credentials.PasswordVault();
var cred = new Windows.Security.Credentials.PasswordCredential(
    url,
    username,
    token);
vault.add(cred);

Now, that was quite the whirlwind tour! Don’t get fooled by the length of the post, that is due to the fact that there are so many new things to describe. In fact, I keep being amazed by how little, non-esoteric code all this requires on the client side when you develop Metro Style apps.
What’s that? You want to try it by yourself? Keep reading, then!

The ACS Sample in the Windows Azure Toolkit for Windows 8

Want to take advantage from the Windows Developer Preview of the Windows Azure services you already know and love? Want to learn more about the new Windows Push Notification Services?

Then download the Windows Developer Preview, install it and get yourself a copy of the Windows Azure Toolkit for Windows 8.

The Windows Azure Toolkit for Windows 8 contains a sample ACS application which demonstrates the same flow described earlier for the keynote demo. The notable differences are that the UI is much less fancy (but still metro style!) and the backend is designed to run on the local Windows Azure simulation environment, which makes it especially handy.

In this post I won’t drill too deep in the code, that’s for a future installment (or, if you are at //BUILD and you are interested, come by on Thursday). For now I just want to give you few tips for finding your way through the sample and run it successfully.

Setup

Install the Windows Developer Preview; download the Windows Azure Toolkit for Windows 8 and launch it. That’s all you need to do. The (metaphorically) award-winning Dependency Checker takes care of tracking down everything you need and offers you the right links for downloading/installing it. In fact, I used it to get Visual Studio 2010 installed and configured side by side with the IDE that comes out of the box in this preview, Visual Studio 11 Express.

A few suggestions:

  • Some of the entries in the dependency checker can take a long time. Be patient when
    • downloading & installing WebPI (if you don’t have it already)
    • enabling the Internet Information Services 7 (IIS7) feature
  • The ACS sample is in c:\WindowsAzure\WATWindows\Samples\ACS.
    In order to use the sample, you need to run SetupSample.cmd in the same folder. You can’t skip this, as the setup needs to adapt the code to the ACS namespace you’ll use and update the namespace itself accordingly
    • The setup will ask you for one ACS namespace and its management key. I suggest getting those info in advance: instructions on how to do that are in the readme of the toolkit, in Appendix II
    • If you want to use Facebook, you’ll need to create one Facebook app tied to your ACS namespace; the setup will ask for the app ID and the secret. Again, I suggest getting those values in advance
The Sample

The sample includes two solutions:

  • ACSMetroClient, the Metro style application
  • ModernCloudIdentity, the service called by the client (and some other stuff)


The Metro style application templates are available only in Visual Studio 11; and as of today, the Windows Azure tools for Visual Studio will work only with VS 2010. This means that you need to open ACSMetroClient with VS11 and ModernCloudIdentity with VS10.


Pro tip: although you can open ACSMetroClient.sln by double-clicking it, it is suggested that you open ModernCloudIdentity.sln by first launching VS10 via the shortcut to VisualStudio2010WindowsAzure.cmd that the setup placed on your desktop, and then you open the solution from there.

The Service solution


The solution contains three projects:

  • DPE.OAuth, a class library containing an OAuth implementation for WIF (from the good old FabrikamShipping)
  • ModernCloudIdentity, the cloud project hosting the web role for the service
  • ModernCloudIdentity.Web, a web role containing the service and a couple of utility pages (Default.aspx and Bouncer.aspx)

The service part is well known, basically the same as every REST-based sample or hands-on lab we released in the past.

The Default page here is a little trick in this sample for bridging the different ways in which ACS issues tokens in redirect scenarios (in a POST) and the WebAuthenticationBroker expects them (in the querystring of the URL that has been defined as callback). As I mentioned, I don’t want to go into details here. I’ll just say that when Default.aspx receives the SWT token from ACS in a classic wresult POST, it extracts the token and adds it to the querystring of a redirect to Bouncer.aspx; but Bouncer.aspx is the designated callback URL, hence the broker retrieves the token from the querystring and returns. More complicated to explain than to do; and in any case, please keep in mind that this is just a sample based on developer preview software.

Hit F5 to start the simulation environment: you’ll get a couple of browsers complaining that you didn’t send a token, don’t mind them and move to the client solution.

The Metro Style client


The client code is really straightforward. It is basically the default app as it comes out of the template, with just few modifications to default.html and default.js:

  • The UI is really a series of DIVs, which are made visible or invisible depending on where in the app the user is. There is a splash screen, a home realm discovery screen, and a service invocation UI. If you follow the flow described for the keynote demo, you’ll see them in that order. The main difference, apart from the look&feel, is that the call to the service does not happen automatically right after the authentication experience but takes place when you click on “Invoke”. Note that the token text is available on the page, so that you can experiment with tampering with the string before sending it and get an “invalid signature” on purpose.
    If there is already a token in the vault from former sessions, the app will skip the authentication and go straight to the invoke screen.
  • The JS contains the logic for moving through the app, getting the list of IPs and presenting it to the user (databinding is fun! Thanks Giorgio and Jaime for all your help on that), using the WebAuthenticationBroker, the Vault, and performing calls. As mentioned, I won’t go into details yet: we’ll get to this in tomorrow’s session.

Exciting times!

Among all the great news of the last two days, it’s nice to see that claims-based identity is the gift that keeps on giving. I just *love* to see how ACS can really simplify your life even when used with these brand-new development technologies. Looking forward to your feedback!

Vito presented an Identity and access management for Windows Azure apps session at the //BUILD/ Windows conference on 9/15/2011.


Valery Mizonov described Best Practices for Leveraging Windows Azure Service Bus Brokered Messaging API in a 9/14/2011 post to the Windows Azure AppFabric CAT blog:

This article was replaced on 9/28/2011 with a link to updated version of Valery Mizonov’s Best Practices for Leveraging Windows Azure Service Bus Brokered Messaging API in my Windows Azure and Cloud Computing Posts for 9/26/2011+ post.


<Return to section navigation list>

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Shaun Xu announced Windows Azure SDK 1.5 Arrived in a 9/15/2011 post:

At the BUILD event, Microsoft announced the latest Windows Azure SDK 1.5, the Visual Studio Tools for Windows Azure SDK 1.5, and the Windows Azure AppFabric SDK 1.5. You can find them simply from the Web Platform Installer.


One thing I found so far is that the database for the local storage emulator has been changed in this version. That means you need to recreate the storage database by running the DSInit command from the folder (assuming you installed the SDK on drive C) C:\Program Files\Windows Azure SDK\v1.5\bin\devstore.


More information about the new features of SDK 1.5 please refer to the MS announcement here.


Vijay Rajagopalan (@vijayrvr) reported Announcing: Windows Azure SDK 1.5, Windows Azure Tools for Microsoft Visual Studio 2010 and new Service Management Features in a 9/15/2011 post to the Windows Azure Team blog:

During yesterday’s Day two keynote at BUILD, Microsoft announced the availability of the Windows Azure SDK 1.5 and Windows Azure Tools for Microsoft Visual Studio 2010. You can download the tools here using the Web Platform Installer. All three of these releases are aimed at simplifying development, deployment, and management on the Windows Azure platform.

The Windows Azure SDK includes the following new features:

  • Re-architected emulator enabling higher fidelity between local & cloud developments & deployments.
  • Many fundamental improvements (performance of emulator and deployment, assembly/package validation before deployment)
  • Support for uploading service certificates in csupload.exe and a new tool csencrypt.exe to help manage remote desktop encryption passwords.
  • Many enhancements and fixes to Windows Azure SDK 1.4.

Also available are several new enhancements in Windows Azure Tools for Visual Studio for developing & deploying cloud applications. These enhancements include:

  • Add Windows Azure Deployment project from Web Application project.
  • Profile applications running in Windows Azure.
  • Create ASP.NET MVC3 Web Roles.
  • Manage multiple service configurations in one cloud project.
  • Improved validation of Windows Azure packages.

Now, it is easier to add a Windows Azure deployment project to common web projects like ASP.NET MVC, ASP.NET Web Forms or WCF. Based on the type of web project that you have, the project properties for assemblies are also updated, if the service package requires additional assemblies for deployment.

With profiling support in the Windows Azure Tools you can easily detect performance bottlenecks in your application while it is running in Windows Azure.

The tools now support creating ASP.NET MVC3 web roles. The new template includes the new universal ASP.NET providers that support SQL Azure and it will also make sure that ASP.NET MVC assemblies are deployed with your application when you publish to Windows Azure.

If you want to maintain different settings for different deployment environments, the Windows Azure tools now support multiple service configurations in the same Windows Azure Project. This is especially useful for managing different Windows Azure Storage connection strings for local debugging and running in the cloud.

Finally the new tools will help you avoid some of the common problems when you deploy your application to Windows Azure. If you forget to include a local assembly in your package or you publish with a local Azure Storage connection string, the tools will let you know.

Read more about the recent features here.

The Windows Azure Platform Training Kit has also been updated for the new tools. The Windows Azure Platform Training Kit includes a comprehensive set of technical content including hands-on labs, presentations, and demos that are designed to help you learn how to use the Windows Azure platform. You can download it here.

New Windows Azure Service Management API Features

Introduction:

We are also excited to announce the release of new service management APIs for the following scenarios:

  • Rollback an In-Progress Configuration Update or Service Upgrade
  • Ability to Invoke Multiple “write” Operations on an Ongoing Deployment
  • More Descriptive Status for Role Instances
  • New API Method: Get Subscription

Background:

The Windows Azure Service Management API enables Windows Azure customers to programmatically administer their subscriptions, hosted services, and storage accounts.

Rollback an In-Progress Configuration Update or Service Upgrade

The API now exposes a new method – Rollback Update or Upgrade – which can be called on an in-progress service update or upgrade. The effects of rolling back an in-progress deployment are as follows:

  • Any instances which had already been updated or upgraded to the new version of the service package (*.cspkg) and/or service configuration (*.cscfg) files will be rolled back to the previous version of these files.
  • Note that the customer does not need to resupply the previous version of these files – the Windows Azure platform will retain these for the duration of the update or upgrade.
    • Any instances, which had not yet been updated or upgraded to the new version, will not be updated or upgraded, since those instances are already running the target version of the service.
    • Typically, such instances are not even restarted by the Windows Azure Fabric Controller as part of the Upgrade/Update → Rollback sequence.

Here are some additional details about the new Rollback Update or Upgrade method:

  • As above, Rollback can be invoked on an ongoing service configuration update (triggered via Change Deployment Configuration) or service upgrade (triggered via Upgrade Deployment).
  • It only makes sense to call Rollback on an in-place update or upgrade since VIP swap upgrades entail atomically replacing one entire running instance of your service with another.
  • Rollback can be applied to upgrades performed in either manual or automatic mode.
  • Note that Rollback itself can be called in automatic or manual mode as well.
    • Rollback can only be called when an update (configuration change) or upgrade is in progress on the deployment, which can be detected by the client via checking whether the value of the “RollbackAllowed” flag – as returned by Get Deployment or Get Hosted Service Properties – is “true”.
    • In order to invoke the version of these methods which returns the RollbackAllowed field, you must use the following version (or greater) in the request header: “x-ms-version: 2011-10-01”. For more information about versioning headers, see Service Management Versioning.
    • An update or upgrade is considered “in progress” as long as there is at least one instance in the service, which has not yet been updated to the new version.

What’s an example of when I might use this?

Suppose you are rolling out a major in-place upgrade to your Windows Azure hosted service. Because your new release is substantially different from the old, you want to control the rate at which the rollout proceeds and so you call Upgrade Deployment in manual mode and begin to Walk Upgrade Domains. Role instances in the 1st and 2nd upgrade domains appear to come up healthy after being upgraded but, as you’re walking the 3rd upgrade domain, some role instances in the 1st and 2nd upgrade domains become unresponsive. So you call Rollback on this upgrade, which will (1) leave untouched the instances which had not yet been upgraded and (2) roll back instances which had been upgraded (i.e., those in the 1st and 2nd upgrade domains as well as any in the 3rd to which the upgrade had already been applied) to the previous service package and configuration.

Can’t I achieve the same effect by calling Update or Upgrade on a service – in order to roll that service to the previous version? IOW, what does Rollback buy me?

Without Rollback, if you were in the process of updating or upgrading your service from version X to version X+1 and decided that you wanted to go back to version X, you first had to update or upgrade all role instances to X+1 then, after that completed, start a new update or upgrade to X. With Rollback, it’s possible to short-circuit that process (changing the target version from X+1 to X, in the midst of the upgrade to X+1), which results in less service interruption/churn. Moreover, the Windows Azure platform now retains (for the duration of an update or upgrade) the service package (*.cspkg) and service configuration (*.cscfg) files from the version of the service before the update or upgrade began (X), which means that the customer does not need to resupply these in the event that he wants to go back to the pre-upgrade version.

Ability to Invoke Multiple “write” Operations on an Ongoing Deployment

In order to provide customers more flexibility in administering their hosted services, we are relaxing the constraints on when mutable operations can be invoked on deployments. The mutable or write operations are: Change Deployment Configuration, Upgrade Deployment, Update Deployment Status (used to Start or Stop a deployment), Delete Deployment, and Rollback Update or Upgrade. In particular, prior to this release, customers were only able to have a single “in-progress” mutable operation on a deployment: once such an operation was started, the customer had to wait for that operation to complete before starting another one. That is, the deployment was locked.

With this new Service Management API release, a couple methods (Get Deployment and Get Hosted Service Properties) return a new field, which explicitly informs customers as to whether a given deployment is “Locked” (unable to have write operations performed on it). Moreover, the period of time during which a lock is held (for a given deployment) is substantially reduced, which enables parallelizing or interrupting certain workflows.

  • As with the RollbackAllowed field, in order to invoke the version of these API methods which return the Locked field, you must use the following version (or greater) in the request header: “x-ms-version: 2011-10-01”.

What’s an example of when I might use this?

Suppose you’re performing an upgrade and there is a bug in the new version of the role code which causes the upgraded role instances to repeatedly crash. This will prevent the upgrade from making progress – because the Fabric Controller will not move onto the next upgrade domain until a sufficient number of instances in the previous one are healthy. This is referred to as a “stuck deployment” and, with this Windows Azure release, customers can now get themselves “unstuck.” In particular, in that case, you could elect to apply a fresh Update or Upgrade over top of the toxic one.

More Descriptive Status for Role Instances

In order to provide better diagnostic and service health monitoring capabilities, customers can now obtain more descriptive information from Get Deployment about the state of their role instances than was previously available. Two new fields will be returned (InstanceStateDetails and InstanceErrorCode) and an existing field (InstanceStatus) will contain new values, including: RoleStateUnknown, CreatingVM, StartingVM, CreatingRole, StartingRole, ReadyRole, BusyRole, StoppingRole, StoppingVM, DeletingVM, StoppedVM, RestartingRole, CyclingRole, FailedStartingVM, UnresponsiveRole.

  • In order to invoke this method, you must use the following version (or greater) in the request header: “x-ms-version: 2011-10-01”. For more information about versioning headers, see Service Management Versioning.

New API Method: Get Subscription

With this Service Management API release, we introduce a new method, Get Subscription, which enables obtaining basic information about a subscription (the subscription name, status, and email addresses of the Account and Service Administrators) as well as the current and max usage as far as number of storage accounts, hosted services, and cores. That is, with this new method you can programmatically obtain the quotas associated with your subscription.

  • In order to invoke this method, you must use the following version (or greater) in the request header: “x-ms-version: 2011-10-01”. For more information about versioning headers, see Service Management Versioning.

What’s an example of when I might use this?

There are a couple immediate use cases for this new method. First, for security compliance purposes, you might have a program which periodically confirms that the configured service administrators for a given subscription are as expected (i.e. that no rogue values for AccountAdmin and ServiceAdmin have been configured). Secondly, this method provides visibility into a key component of your Windows Azure bill. Namely, the CurrentCoreCount value tells you how many cores all of your hosted services’ deployments together are using. The “compute hours” portion of your bill is calculated based on how many cores were used by your services over the billing period.
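As a rough illustration of calling the new method, the sketch below issues an authenticated GET against the subscription URI with the new versioning header. The exact URI format is an assumption on my part; the subscription ID and certificate thumbprint are placeholders, and the call is authenticated with a management certificate as with the rest of the Service Management API.

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class GetSubscriptionSketch
{
    static void Main()
    {
        // Placeholder subscription ID and management certificate thumbprint.
        const string subscriptionId = "<subscription-id>";
        const string thumbprint = "<management-certificate-thumbprint>";

        // Assumed Get Subscription URI; the versioning header is the one called out above.
        var request = (HttpWebRequest)WebRequest.Create(
            "https://management.core.windows.net/" + subscriptionId);
        request.Headers.Add("x-ms-version", "2011-10-01");

        // Service Management calls are authenticated with a client certificate
        // uploaded to the subscription as a management certificate.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        var certs = store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
        store.Close();
        request.ClientCertificates.Add(certs[0]);

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd());   // XML including CurrentCoreCount, etc.
        }
    }
}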

To read more about all of the Windows Azure-related announcements made at BUILD, please read the blog post, "JUST ANNOUNCED @ BUILD: New Windows Azure Toolkit for Windows 8, Windows Azure SDK 1.5, Geo-Replication for Windows Azure Storage, and More". For more information about BUILD or to watch the keynotes, please visit the BUILD Virtual Press Room. And follow @WindowsAzure and @STBNewsBytes for the latest news and real-time talk about BUILD.

Vijay Rajagopalan is Principal Group Program Manager for Windows Azure. Follow Vijay at @vijayrvr.


Nick Harris (@cloudnick) reported ANNOUNCING the Windows Azure Toolkit for Windows 8 in a 9/14/2011 post to the Windows Azure Team blog:

The Windows Azure Toolkit for Windows 8 is designed to make it easier for developers to create a Windows Metro style application that can harness the power of Windows Azure Compute and Storage. It includes a Windows 8 Cloud Application project template for Visual Studio that makes it easier for developers to create a Windows Metro style application that utilizes services in Windows Azure. This template generates a Windows Azure project, an ASP.NET MVC 3 project, and a Windows Metro style JavaScript application project. Immediately out of the box, the client and cloud projects integrate to enable push notifications with the Windows Push Notification Service (WNS). In addition, the Windows Azure project demonstrates how to use the WNS recipe and how to leverage Windows Azure Blob and Table storage.

The Windows Azure Toolkit for Windows 8 is available for download.

Push Notification Cloud Service Architecture

For those of you who are familiar with working with Windows Phone 7 and the Microsoft Push Notification Service (MPNS), you will be happy to know that the Windows Push Notification Service (WNS) is quite similar. Let’s take a look at a bird’s-eye architectural view of how WNS works.

The process of sending a notification requires a few steps:

  1. Request a channel. Utilize the WinRT API to request a Channel Uri from WNS. The Channel Uri will be the unique identifier you use to send notifications to an application instance.
  2. Register the channel with your Windows Azure cloud services. Once you have your channel you can then store your channel and associate it with any application specific data (e.g user profiles and such) until your services decide that it’s time to send a notification to the given channel
  3. Authenticate against WNS. To send notifications to your channel URI you are first required to authenticate against WNS using OAuth 2.0 to retrieve a token to be used for each subsequent notification that you push to WNS (a sketch of this call appears after this list).
  4. Push notification to channel recipient. Once you have your channel, notification payload and WNS access token you can then perform an HttpWebRequest to post your notification to WNS for delivery to your client.

Fortunately, the Windows Azure Toolkit for Windows 8 accelerates development by providing a set of project templates that enable you to start delivering notifications from your Windows Azure cloud service with a simple file new project experience. Let’s take a look at the toolkit components.

Toolkit Components

The Windows Azure Toolkit for Windows 8 contains a rich set of assets including a Dependency Checker, Windows Push Notification Service recipe, Dev 11 project templates, VS 2010 project templates and Sample Applications.

Dependency Checker

The dependency checker is designed to help identify and install the missing dependencies required to develop both Windows Metro style apps and Windows Azure solutions on Windows 8.

Dev 11 Windows Metro style app

The Dev 11 Windows Metro style app provides a simple UI and all the code required to demonstrate how to request a channel from WNS using the WinRT API. For example, the following listing requests a Channel URI from WNS:

var push = Windows.Networking.PushNotifications;
var promise = push.PushNotificationChannelManager.createPushNotificationChannelForApplicationAsync();
promise.then(function (ch) {
    var uri = ch.uri;
    var expiry = ch.expirationTime;
    updateChannelUri(uri, expiry);
});

Once you have your channel, you then need to register this channel to your Windows Azure cloud service. To do this, the sample app calls into updateChannelUri where we construct a simple JSON payload and POST this up to our WCF REST service running in Windows Azure using the WinJS.xhr API.

function updateChannelUri(channel, channelExpiration) {
    if (channel) {
        var serverUrl = "https://myservice.com/register";
        var payload = { Expiry: channelExpiration.toString(),
                        URI: channel };
        var xhr = new WinJS.xhr({
            type: "POST",
            url: serverUrl,
            headers: { "Content-Type": "application/json; charset=utf-8" },
            data: JSON.stringify(payload)
        }).then(function (req) { … });
    }
}

VS 2010 Windows Azure Cloud Project Template

The Windows Azure Cloud project provided by the solution demonstrates several assets for building a Windows Azure service for delivering push notifications. These assets include:

1. A WCF REST service that your client applications use to register channels; it demonstrates how to store them in Windows Azure Table Storage using a TableServiceContext. In the following code listing you can see the simple WCF REST interface exposed by the project.

[ServiceContract]
public interface IWNSUserRegistrationService
{
    [WebInvoke(Method = "POST", BodyStyle = WebMessageBodyStyle.Bare)]
    void Register(WNSPushUserServiceRequest userChannel);

    [WebInvoke(Method = "DELETE", BodyStyle = WebMessageBodyStyle.Bare)]
    void Unregister(WNSPushUserServiceRequest userChannel);
}
2. An ASP.NET MVC 3 portal to build and send Toast, Tile and Badge notifications to clients using the WNS recipe.

3. An example of how to utilize Blob Storage for Tile and Toast notification images.


4. A Windows Push Notification Recipe used by the portal that provides a simple managed API for authenticating against WNS, constructing payloads and posting the notification to WNS.
using Windows.Recipes.Push.Notifications;
using Windows.Recipes.Push.Notifications.Security;
...
//Construct a WNSAccessTokenProvider which will acquire an access token from WNS
IAccessTokenProvider _tokenProvider = new WNSAccessTokenProvider("ms-app%3A%2F%2FS-1-15-2-1633617344-1232597856-4562071667-7893084900-2692585271-282905334-531217761", "XEvTg3USjIpvdWLBFcv44sJHRKcid43QXWfNx3YiJ4g");
//Construct a toast notification for a given ChannelUrl
var toast = new ToastNotification(_tokenProvider)
{
    ChannelUrl = "https://db3.notify.windows.com/?token=AQI8iP%2OtQE%3d",
    ToastType = ToastType.ToastImageAndText02,
    Image = "https://127.0.0.1/devstoreaccount1/tiles/WindowsAzureLogo.png",
    Text = new List<string> {"Sending notifications from a Windows Azure WebRole"}
};
//Send the notification to WNS
NotificationSendResult result = toast.Send();
5. As you can see, the Windows Push Notification Recipe reduces the code required to send your notification to just three lines.
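To round out item 1 above, the following fragment is a minimal sketch of how a registered channel might be persisted with the StorageClient library's TableServiceContext. The entity shape, table name and use of development storage are assumptions for illustration only; the toolkit's actual types and storage wiring differ.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical entity: one row per registered channel.
public class ChannelRegistration : TableServiceEntity
{
    public string ChannelUri { get; set; }
    public DateTime Expiry { get; set; }
}

public static class ChannelStore
{
    public static void Register(string userId, string channelUri, DateTime expiry)
    {
        // Development storage used here for simplicity; a real role would read its
        // connection string from the service configuration.
        CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
        CloudTableClient tableClient = account.CreateCloudTableClient();
        tableClient.CreateTableIfNotExist("ChannelRegistrations");

        TableServiceContext context = tableClient.GetDataServiceContext();
        context.AddObject("ChannelRegistrations", new ChannelRegistration
        {
            PartitionKey = "Channels",
            RowKey = userId,
            ChannelUri = channelUri,
            Expiry = expiry
        });
        context.SaveChangesWithRetries();
    }
}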

The net result of each of these assets is a notification, as demonstrated in the screenshot below of a Toast delivered using the Windows Azure Toolkit for Windows 8.

As an exercise, it is worth spending some time using the portal website to explore the rich set of templates available for each of the Toast, Tile and Badge notification types.

Sample applications

At present there are also two sample applications included in the toolkit that demonstrate the usage of other Windows Azure features:

  1. PNWorker: This sample demonstrates how you can utilize Windows Azure Storage Queues to offload the work of delivering notifications to a Windows Azure Worker Role. For more details please see the CodePlex documentation.
  2. ACSMetroClient: An example of how to use ACS in your Windows Metro style applications. For more details please see this post by Vittorio Bertocci.
  3. Margie’s Travel: As seen in the keynote demo by John Shewchuk, Margie’s Travel is a sample application that shows how a Metro style app can work with Windows Azure. For more details please see this post by Wade Wegner. This sample application will ship shortly after the //BUILD conference.
Summary

The Windows Azure Toolkit for Windows 8 provides developers with a rich set of reusable assets that demonstrate how to start using Windows Azure quickly from Metro style applications in Windows 8. To download the toolkit and see a step-by-step walkthrough, please see the Windows Azure Toolkit for Windows 8.

Nick Harris is a Technical Evangelist for Windows Azure. Follow Nick at @cloudnick.


Bruce Kyle suggested that you Bind Windows Azure Storage Data Using Java Persistence API in a 9/14/2011 post to the US ISV Evangelism blog:

jpa4azure is an ORM (Object Relational Mapper) that binds Java objects to Azure Tables, making it very easy for Java developers to leverage Windows Azure Storage from on-premises or cloud applications.

Any intermediate Java developer can take an object model and, with some simple annotations, be on their way to using Azure as a persistence mechanism in minutes. The project is hosted at http://jpa4azure.codeplex.com and available to Maven builds from http://repo1.maven.org/maven2/com/codeplex/jpa4azure/

Key features
  • jpa4azure implements parts of the well-known JPA (Java Persistence API) specification for object/relational binding, so interacting with Azure Storage becomes a familiar activity for Java developers (http://jcp.org/aboutJava/communityprocess/final/jsr317/index.html).
  • Support for parent-child relationships. This is required for object mapping and is unsupported in the raw Azure Storage SDK (.NET or any other variety).
  • Automatic table creation from the entity model, driven by annotations.
  • Automated key-generation strategy for UUIDs.

Paolo Salvatori explained How to use a WCF custom channel to implement client-side caching in a Windows Azure-hosted application in a 9/13/2011 post:

Introduction

Some months ago I created a custom WCF protocol channel to transparently inject client-side caching capabilities into an existing WCF-enabled application just by changing its configuration file. Since I published my first post on this subject I have received positive feedback on the client-side caching pattern, hence I decided to create a version of my component for Windows Azure. The new implementation is almost identical to the original version and introduces some minor changes. The most relevant addition is the ability to cache response messages in the Windows Azure AppFabric Caching Service. The latter makes it easy to create a cache in the cloud that Windows Azure roles can use to store reference and lookup data.

Caching can dramatically increase performance by temporarily storing information from backend stores and sources. My component boosts the performance of existing WCF-enabled applications running in the Azure environment by injecting caching capabilities into the WCF channel stack that transparently exploit the functionality offered by Windows Azure AppFabric Caching to avoid redundant calls against remote WCF and ASMX services. The new version of the WCF custom channel lets you choose among three caching providers:

  • A Memory Cache-based provider: this component internally uses an instance of the MemoryCache class contained in the .Net Framework 4.0.
  • A Web Cache-based provider: this provider utilizes an instance of the Cache class supplied by ASP.NET to cache response messages.
  • An AppFabric Caching provider: this caching provider leverages the Windows Azure AppFabric Caching Service. To further improve performance, it’s highly recommended that the client application use the Local Cache to store response messages in-process.

As already noted in the previous article, client-side caching and server-side caching are two powerful and complementary techniques to improve the performance of WCF-enabled applications. Client-side caching is particularly well suited to applications, like a web site, that frequently invoke one or multiple back-end systems to retrieve reference and lookup data, that is, data that is typically static (like the catalog of an online store) and that changes quite infrequently. By using client-side caching you can avoid making redundant calls to retrieve the same data over time, especially when these calls take a significant amount of time to complete.

For more information on Windows Azure AppFabric, you can review the following articles:

Problem Statement

The problem statement that the cloud version of my component intends to solve can be formulated as follows:

  • How can I implicitly cache response messages within a service or an application running in a Windows Azure role that invokes one or multiple underlying services using WCF and a Request-Response message exchange pattern without modifying the code of the application in question?

To solve this problem, I created a custom protocol channel that you can explicitly or implicitly use inside a CustomBinding when specifying client endpoints within the configuration file or by code using the WCF API.

Design Pattern

The design pattern implemented by my component can be described as follows: the caching channel lets you extend existing WCF-enabled cloud applications and services with client-caching capabilities without explicitly changing their code to exploit the functionality supplied by Windows Azure AppFabric Caching. To achieve this goal, I created a class library that contains a set of components to configure, instantiate and use this caching channel at runtime. My custom channel checks for the presence of the response message in the cache and behaves accordingly:

  • If the response message is in the cache, the custom channel immediately returns the response message from the cache without invoking the underlying service.

  • Conversely, if the response message is not in the cache, the custom channel calls the underlying channel to invoke the back-end service and then caches the response message using the caching provider defined in the configuration file for the actual call (see the sketch after this list).
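The following C# fragment is a minimal sketch of that cache-aside logic inside a request channel. The class name, the dictionary-based store and the Action-based key are simplifications I made up for illustration; the real component wraps an inner IRequestChannel created by the rest of the binding, serializes responses through the configured caching provider, and uses the key-creation methods described later in this article.

using System;
using System.Collections.Concurrent;
using System.ServiceModel.Channels;

// Sketch only: not the article's actual channel implementation.
public class CachingRequestChannelSketch
{
    private readonly IRequestChannel innerChannel;
    private static readonly ConcurrentDictionary<string, MessageBuffer> store =
        new ConcurrentDictionary<string, MessageBuffer>();

    public CachingRequestChannelSketch(IRequestChannel innerChannel)
    {
        this.innerChannel = innerChannel;
    }

    public Message Request(Message request, TimeSpan timeout)
    {
        // Key creation reduced to the WS-Addressing Action header (the "Action" method).
        string key = request.Headers.Action;

        MessageBuffer cached;
        if (store.TryGetValue(key, out cached))
        {
            // Cache hit: return a copy of the buffered response without calling the service.
            return cached.CreateMessage();
        }

        // Cache miss: invoke the underlying channel, then buffer and store the response.
        Message response = innerChannel.Request(request, timeout);
        MessageBuffer buffer = response.CreateBufferedCopy(int.MaxValue);
        store[key] = buffer;
        return buffer.CreateMessage();
    }
}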

You can exploit the capabilities of the caching channel in 2 distinct ways:

  • You can configure the client application to use a CustomBinding. In this case, you have to specify the ClientCacheBindingElement as the topmost binding element in the binding configuration. This way, at runtime, the caching channel will be the first protocol channel to be called in the channel stack (a programmatic sketch of this option follows this list).
  • Alternatively, you can use the ClientCacheEndpointBehavior to inject the ClientCacheBindingElement at the top of an existing binding, for example the BasicHttpBinding or the WsHttpBinding.
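For reference, the first option looks roughly like this when built in code rather than in configuration. ClientCacheBindingElement comes from the article's ExtensionLibrary and its parameterless constructor is assumed here; the service contract and address are placeholders.

using System.ServiceModel;
using System.ServiceModel.Channels;

// Build the custom binding with the caching channel at the top of the stack.
Binding binding = new CustomBinding(
    new ClientCacheBindingElement(),           // caching channel runs first
    new TextMessageEncodingBindingElement(),
    new HttpTransportBindingElement());

// IAuthorsService and the endpoint address are illustrative placeholders.
var factory = new ChannelFactory<IAuthorsService>(
    binding,
    new EndpointAddress("http://contoso.cloudapp.net/AuthorsWebService.svc"));
IAuthorsService client = factory.CreateChannel();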

Scenarios

This section describes 2 scenarios where you can use the caching channel in a cloud application.

First Scenario

The following picture depicts the architecture of the first scenario, which uses Windows Azure Connect to establish a protected connection between a web role in the cloud and a local WCF service, and uses the Windows Azure AppFabric Caching provider to cache response messages in the local and distributed cache. The diagram below shows an ASP.NET application running in a Web Role that invokes a local, on-premises WCF service running in a corporate data center. In particular, when the Service.aspx page first loads, it populates a drop-down list with the names of writers from the Authors table of the well-known pubs database, which runs on-premises in the corporate domain. The page lets the user retrieve the list of books written by the author selected in the drop-down list by invoking the AuthorsWebService WCF service located in the organization’s network, which retrieves data from the Titles table of the pubs database. The page allows the user to choose among the following call modes:

  1. Directly invoke the WCF service via Windows Azure Connect: the Service.aspx page always calls the AuthorsWebService to retrieve the list of titles for the current author using Windows Azure Connect (see below for more information).

  2. Use the Caching API explicitly and via Windows Azure Connect: the page explicitly uses the cache-aside programming pattern and the Windows Azure AppFabric Caching API to retrieve the list of titles for the current author. When the user presses the Get Titles button, the page first checks whether the data is in the cache: if so, it retrieves the titles from the cache; otherwise it invokes the underlying WCF service, writes the data to the distributed cache and then returns the titles (see the sketch after this list).

  3. Use the Caching API via the caching channel and Windows Azure Connect: this mode uses the caching channel to transparently retrieve data from the local or distributed cache. The caching channel makes it possible to transparently inject client-side caching capabilities into an existing WCF-enabled application without changing its code. In this case, it’s sufficient to change the configuration of the client endpoint in the web.config.

  4. Directly invoke the WCF service via the Service Bus: the Service.aspx page always calls the AuthorsWebService to retrieve the list of titles for the current author via the Windows Azure AppFabric Service Bus. In particular, the web role invokes a BasicHttpRelayBinding endpoint exposed in the cloud by the AuthorsWebService via the relay service.

  5. Use Caching API via caching channel and Service Bus: this mode uses the Service Bus to invoke the AuthorsWebService and the caching channel to transparently retrieve data from the local or distributed cache. The ClientCacheEndpointBehavior replaces the original BasicHttpRelayBinding specified in the configuration file with a CustomBinding that contains the same binding elements and injects the ClientCacheBindingElement at the top of the binding. This way, at runtime, the caching channel will be the first channel to be invoked in the channel stack.
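As a point of comparison, call mode 2 (the explicit cache-aside pattern mentioned in the list above) boils down to a fragment along these lines; the proxy, its GetTitles operation and the cache key shape are placeholders rather than the sample's actual code:

using System;
using System.Collections.Generic;
using Microsoft.ApplicationServer.Caching;

// The factory reads the dataCacheClients section from the role's configuration file.
DataCacheFactory cacheFactory = new DataCacheFactory();
DataCache cache = cacheFactory.GetDefaultCache();

string key = "Titles_" + authorId;                      // authorId comes from the drop-down list
var titles = cache.Get(key) as List<string>;
if (titles == null)
{
    titles = proxy.GetTitles(authorId);                 // invoke AuthorsWebService on a cache miss
    cache.Put(key, titles, TimeSpan.FromMinutes(30));   // cache for subsequent requests
}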

[Diagram: Windows Azure Connect scenario]

Let’s analyze what happens when the user selects the third option to use the caching channel with Windows Azure Connect.

Message Flow:

  1. The user chooses an author from the drop-down list, selects the Use Caching API via WCF channel call mode and finally presses the Get Titles button.

  2. This event triggers the execution of the GetTitles method that creates a WCF proxy using the UseWCFCachingChannel endpoint. This endpoint is configured in the web.config to use the CustomBinding. The ClientCacheBindingElement is defined as the topmost binding element in the binding configuration. This way, at runtime, the caching channel will be the first protocol channel to be called in the channel stack.

  3. The proxy transforms the .NET method call into a WCF message and delivers it to the underlying channel stack.

  4. The caching channel checks whether the response message is in the local or distributed cache. If the ASP.NET application is hosted by more than one web role instance, the response message may have been previously put in the distributed cache by another role instance. If the caching channel finds the response message for the actual call in the local or distributed cache, it immediately returns this message to the proxy object without invoking the back-end service.

  5. Conversely, if the response message is not in the cache, the custom channel calls the inner channel to invoke the back-end service. In this case, the request message goes all the way through the channel stack to the transport channel that invokes the AuthorsWebService.

  6. The AuthorsWebService uses the authorId parameter to retrieve a list of books from the Titles table in the pubs database.

  7. The service reads the titles for the current author from the pubs database.

  8. The service returns a response message to the ASP.NET application.

  9. The transport channel receives the stream of data and uses a message encoder to interpret the bytes and to produce a WCF Message object that can continue up the channel stack. At this point each protocol channel has a chance to work on the message. In particular, the caching channel stores the response message in the distributed cache using the AppFabricCaching provider.

  10. The caching channel returns the response WCF message to the proxy.

  11. The proxy transforms the WCF message into a response object.

  12. The ASP.NET application creates and returns a new page to the browser.

Second Scenario

The following picture depicts the architecture of the second scenario where the web role uses the Windows Azure AppFabric Service Bus to invoke the AuthorsWebService and Windows Azure AppFabric Caching provider to cache response messages in the local and distributed cache.

[Diagram: Service Bus scenario]

Let’s analyze what happens when the user selects the fifth option to use the caching channel with the Service Bus.

Message Flow:

  1. The user chooses an author from the drop-down list, selects the Use Caching API via WCF channel call mode and finally presses the Get Titles button.

  2. This event triggers the execution of the GetTitles method that creates a WCF proxy using the UseWCFCachingChannel endpoint. This endpoint is configured in the web.config to use the CustomBinding. The ClientCacheBindingElement is defined as the topmost binding element in the binding configuration. This way, at runtime, the caching channel will be the first protocol channel to be called in the channel stack.

  3. The proxy transforms the .NET method call into a WCF message and delivers it to the underlying channel stack.

  4. The caching channel checks whether the response message is in the local or distributed cache. If the ASP.NET application is hosted by more than one web role instance, the response message may have been previously put in the distributed cache by another role instance. If the caching channel finds the response message for the actual call in the local or distributed cache, it immediately returns this message to the proxy object without invoking the back-end service.

  5. Conversely, if the response message is not in the cache, the custom channel calls the inner channel to invoke the back-end service via the Service Bus. In this case, the request message goes all the way through the channel stack to the transport channel that invokes the relay service.

  6. The Service Bus relays the request message to the AuthorsWebService.

  7. The AuthorsWebService uses the authorId parameter to retrieve a list of books from the Titles table in the pubs database.

  8. The service reads the titles for the current author from the pubs database.

  9. The service returns a response message to the relay service.

  10. The relay service passes the response message to the ASP.NET application.

  11. The transport channel receives the stream of data and uses a message encoder to interpret the bytes and to produce a WCF Message object that can continue up the channel stack. At this point each protocol channel has a chance to work on the message. In particular, the caching channel stores the response message in the distributed cache using the AppFabricCaching provider.

  12. The caching channel returns the response WCF message to the proxy.

  13. The proxy transforms the WCF message into a response object.

  14. The ASP.NET application creates and returns a new page to the browser.

Quotes_Icon NOTE
In the context of a cloud application, the use of the caching channel not only improves performance, but also decreases the traffic over Windows Azure Connect and the Service Bus, and therefore the cost of the operations performed and the bandwidth used.

Windows Azure Connect

In order to establish an IPsec-protected IPv6 connection between the Web Role running in the Windows Azure data center and the local WCF service running in the organization’s network, the solution exploits Windows Azure Connect, which is the main component of the networking functionality that will be offered under the Windows Azure Virtual Network name. Windows Azure Connect enables customers of the Windows Azure platform to easily build and deploy a new class of hybrid, distributed applications that span the cloud and on-premises environments. From a functionality standpoint, Windows Azure Connect provides a network-level bridge between applications and services running in the cloud and in on-premises data centers. Windows Azure Connect makes it easier for an organization to migrate its existing applications to the cloud by enabling direct IP-based network connectivity with its existing on-premises infrastructure. For example, a company can build and deploy a hybrid solution where a Windows Azure application connects to an on-premises SQL Server database, a local file server or LOB applications running in the corporate network.

For more information on Windows Azure Connect, you can review the following resources:

Windows Azure AppFabric Service Bus

The Windows Azure AppFabric Service Bus is an Internet-scale Service Bus that offers scalable and highly available connection points for application communication. This technology makes it possible to create a new range of hybrid and distributed applications that span the cloud and corporate environments. The AppFabric Service Bus is designed to provide connectivity, queuing, and routing capabilities not only for cloud applications but also for on-premises applications. The Service Bus, and in particular the Relay Service, supports the WCF programming model and provides a rich set of bindings to cover a complete spectrum of design patterns:

  • One-way communications
  • Publish/Subscribe messaging
  • Peer-to-peer communications
  • Multicast messaging
  • Direct connections between clients and services

The Relay Service is a service residing in the cloud whose job is to assist in connectivity, relaying the client calls to the service. Both the client and the service can reside either on-premises or in the cloud.

For more information on the Service Bus, you can review the following resources:

We are now ready to delve into the code.

Solution

The solution code has been implemented in C# using Visual Studio 2010 and the .NET Framework 4.0. The following picture shows the projects that comprise the WCFClientCachingChannel solution.

[Diagram: WCFClientCachingChannel solution structure]

A brief description of the individual projects is indicated below:

  • AppFabricCache: this caching provider implements the Get and Put methods to retrieve and store data items from and to Windows Azure AppFabric Caching.
  • MemoryCache: this caching provider provides the Get and Put methods to retrieve and store items to a static in-process MemoryCache object.
  • WebCache: this caching provider provides the Get and Put methods to retrieve and store items to a static in-process Web Cache object.
  • ExtensionLibrary: this assembly contains the WCF extensions to configure, create and run the caching channel at runtime.
  • Helpers: this library contains the helper components used by the WCF extensions objects to handle exceptions and trace messages.
  • Pubs WS: this project contains the code for the AuthorsWebService.
  • Pubs: this test project contains the definition and configuration for the Pubs web role.
  • PubsWebRole: this project contains the code of the ASP.NET application running in Windows Azure.

As I mentioned in the introduction of this article, the three caching providers have been modified for use in a cloud application. In particular, the AppFabricCache project has been modified to use the Windows Azure AppFabric Caching API in place of its on-premises counterpart. Windows Azure AppFabric uses the same cache client programming model as the on-premises Windows Server AppFabric. However, the two APIs are not identical, and there are relevant differences when developing a Windows Azure AppFabric Caching solution compared to developing an application that leverages Windows Server AppFabric Caching. For more information on this topic, you can review the following articles:

Quotes_Icon NOTE
The client APIs of Windows Server AppFabric Caching and Windows Azure AppFabric Caching have the same fully-qualified names. So, what happens when you install the Windows Azure AppFabric SDK on a development machine where Windows Server AppFabric Caching is installed? The setup process of Windows Server AppFabric installs the Cache Client API assemblies in the GAC, whereas the Windows Azure AppFabric SDK copies the assemblies to the installation folder but does not register them in the GAC. Therefore, if you create an Azure application on a development machine hosting both the on-premises and cloud versions of the Cache Client API, even if you reference the Azure version in your web or worker role project, when you debug the application within the Windows Azure Compute Emulator your role will load the on-premises version, that is, the wrong version of the Cache Client API. Fortunately, the on-premises and cloud versions of the API have the same fully-qualified names but different version numbers, hence you can include the following snippet in the web.config file of your role to reference the right version of the API.

<!-- Assembly Redirection -->
<runtime>
  <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
    <dependentAssembly>
      <assemblyIdentity name="Microsoft.ApplicationServer.Caching.Client"
                        publicKeyToken="31bf3856ad364e35"
                        culture="Neutral" />
      <bindingRedirect oldVersion="1.0.0.0"
                        newVersion="101.0.0.0"/>
    </dependentAssembly>
    <dependentAssembly>
      <assemblyIdentity name="Microsoft.ApplicationServer.Caching.Core"
                        publicKeyToken="31bf3856ad364e35"
                        culture="Neutral" />
      <bindingRedirect oldVersion="1.0.0.0"
                        newVersion="101.0.0.0"/>
    </dependentAssembly>
  </assemblyBinding>
</runtime>

Configuration

The following table shows the web.config configuration file of the PubsWebRole project. …

[347 lines of configuration data elided for brevity]

Please find below a brief description of the main elements and sections of the configuration file:

  • Lines [4-10] define the config sections. For Windows Azure AppFabric Caching features to work, the configSections element must be the first element in the application configuration file. It must contain child elements that tell the runtime how to use the dataCacheClients element.

  • Lines [28-56] contain the dataCacheClients element that is used to configure the cache client. Child elements dataCacheClient define cache client configuration; in particular, the localCache element specifies the local cache settings.

  • Lines [148-175] contain the client section that defines a list of endpoints the test project uses to connect to the test service. In particular, I created 4 different endpoints to demonstrate how to configure the caching channel:

    • The first endpoint, called DirectCallNoCache, does not use the caching channel and always invokes the underlying service directly using Windows Azure Connect and the BasicHttpBinding.

    • The second endpoint, called UseWCFCachingChannel, uses the CustomBinding as a recipe to create the channel stack at runtime. The custom binding is composed of 3 binding elements: clientCaching, textMessageEncoding and httpTransport. As you can see at lines [181-203], the clientCaching binding element allows you to accurately configure the runtime behavior of the caching channel at a general level and on a per-operation basis. Below I will explain in detail how to configure the clientCaching binding element.

    • The third endpoint, called UseSBWithoutCaching, does not use the caching channel and always invokes the underlying service directly using the Service Bus and the BasicHttpRelayBinding.

    • The fourth endpoint, called UseSBWithCaching, adopts the Service Bus and the BasicHttpRelayBinding to communicate with the underlying service. However, the endpoint is configured to use the cachingBehavior, which at runtime replaces the original binding with a CustomBinding made up of the same binding elements and adds the clientCaching binding element as the first element in the binding element collection. This technique is an alternative way to use and configure the caching channel.

  • Lines [265-324] contain the extensions element, which defines the cachingBehavior behavior extension and the clientCaching binding element extension. In addition, this section registers the WCF extensions introduced and required by the Service Bus (see the NOTE box below for more information).

As you can easily notice, both the cachingBehavior and clientCaching components share the same configuration, which is defined as follows:

cachingBehavior and clientCaching elements:

  • enabled property: gets or sets a value indicating whether the WCF caching channel is enabled. When the value is false, the caching channel always invokes the target service. This property can be overridden at the operation level, which allows caching to be enabled or disabled on a per-operation basis.
  • header property: gets or sets a value indicating whether a custom header is added to the response to indicate the source of the WCF message (cache or service). This property can be overridden at the operation level.
  • timeout property: gets or sets the default amount of time the object should reside in the cache before expiration. This property can be overridden at the operation level.
  • cacheType property: gets or sets the cache type used to store items. The component currently supports two caching providers: AppFabricCache and WebCache. This property can be overridden at the operation level.
  • maxBufferSize property: gets or sets the maximum size in bytes for the buffers used by the caching channel. This property can be overridden at the operation level.
  • indexes property: gets or sets a string containing a comma-separated list of indexes of parameters to be used to compute the cache key. This property is used only when keyCreationMethod = Indexed.
  • keyCreationMethod property: gets or sets the method used to calculate the key for cache items. The component provides 5 key creation methods (a key-creation sketch follows this list):
    • Action: this method uses the value of the Action header of the request as the key for the response. For obvious reasons, this method can be used only for operations without input parameters.
    • MessageBody: this method uses the body of the request as the key for the response. This method doesn’t work when the request message contains DateTime elements that could vary from call to call.
    • Simple: this method creates the string [A](P1)(P2)…(Pn) for an operation with n parameters P1-Pn and Action = A.
    • Indexed: this method works like the Simple method, but it allows you to specify which parameters to use when creating the key. For example, the Indexed method creates the string [A](P1)(P3)(P5) for an operation with n parameters P1-Pn (n >= 5) and Action = A when the value of the Indexes property is equal to “1, 3, 5”. This method can be used to exclude DateTime parameters from the computation of the key.
    • MD5: this method uses the MD5 algorithm to compute a hash from the body of the request message.
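To make the Simple and Indexed key shapes concrete, the following helper is a rough sketch of how such keys could be composed; it is illustrative only and not the component's actual implementation:

using System;
using System.Text;

static string CreateKey(string action, object[] parameters, int[] indexes = null)
{
    // Simple: [Action](P1)(P2)...(Pn). Indexed: only the 1-based positions listed in indexes.
    var key = new StringBuilder("[").Append(action).Append("]");
    for (int i = 0; i < parameters.Length; i++)
    {
        if (indexes == null || Array.IndexOf(indexes, i + 1) >= 0)
        {
            key.Append("(").Append(parameters[i]).Append(")");
        }
    }
    return key.ToString();
}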

operation element:

  • action property: gets or sets the WS-Addressing action of the request message.
  • enabled property: gets or sets a value indicating whether the WCF caching channel is enabled for the current operation, identified by the action property.
  • header property: gets or sets a value indicating whether a custom header is added to the response to indicate the source of the WCF message (cache or service) at the operation level.
  • timeout property: gets or sets the default amount of time the object should reside in the cache before expiration at the operation level.
  • cacheType property: gets or sets the cache type used to store responses for the current operation. The component currently supports two caching providers: AppFabricCache and WebCache.
  • maxBufferSize property: gets or sets the maximum size in bytes for the buffers used by the caching channel for the current operation.
  • indexes property: gets or sets a string containing a comma-separated list of indexes of parameters to be used to compute the cache key for the current operation. This property is used only when keyCreationMethod = Indexed.
  • keyCreationMethod property: gets or sets the method used to calculate the key for cache items.

Quotes_Icon NOTE
When you install the Windows Azure AppFabric SDK on your development machine, the setup process registers the Service Bus relay bindings, binding elements and behaviors as WCF extensions in the machine.config. However, these extensions are not installed by default in a Windows Azure VM when you deploy a Windows Azure application to the cloud. The documentation on MSDN suggests performing the following steps when you develop a Windows Azure-hosted application that uses the Service Bus to invoke a remote service:

  1. In Solution Explorer, under the WorkerRole or WebRole node (depending on where you have your code), add the Microsoft.ServiceBus assembly to your Windows Azure project as a reference.
    This step is the standard process for adding a reference to an assembly.

  2. In the Reference folder, right-click Microsoft.ServiceBus. Then click Properties.

  3. In the Properties dialog, set Copy Local to True. Doing so makes sure that the Microsoft.ServiceBus assembly will be available to your application when it runs on Windows Azure.

  4. In your ServiceDefinition.csdef file, set the enableNativeCodeExecution field to true.

However, the above steps are not sufficient. In order to use the relay bindings in a Web Role or a Worker Role running in the cloud, you need to register them as WCF extensions in the application configuration file or use a Startup Task to register them in the machine.config. In the former case, make sure to add the following XML snippet to the configuration/system.serviceModel section of the configuration file. …

[Source code elided for brevity]

Source Code

This section contains the code of the main classes of the solution. You can find the rest of the components in the source code that accompanies this article. The following table contains the source code for the Service.aspx page running in the web role. …

[Source code elided for brevity]

The following table contains the source code for the Cache class in the AppFabricCache project. This class implements the Windows Azure AppFabric Caching caching provider used by the caching channel. …

[Source code elided for brevity]

Performance Results

The following performance results were collected under the following conditions:

  • The AuthorsWebService and the pubs database were running on my laptop in Milan, Italy.
  • The Web Role and Windows Azure AppFabric Caching were running in the South Central US Azure data center. I intentionally deployed the Web Role in the US and not in Western Europe to emphasize the distance between the cloud application and the remote service.
  • The local cache was enabled in the cache client configuration.

[Performance results table elided]

Quotes_Icon NOTE
According to my tests, invoking a WCF service running on-premises in the organization's network from a Web Role using Windows Azure Connect and the BasicHttpBinding is slightly faster than invoking the same service via the Service Bus and the BasicHttpRelayBinding. Nevertheless, more accurate and exhaustive performance tests should be conducted to confirm this result.

Conclusions

The caching channel shown in this article can be used to transparently inject client-side caching capabilities into an existing Windows Azure application that uses WCF to invoke one or multiple back-end services. The use of client-side caching techniques can dramatically increase performance and reduce the costs due to the use of Windows Azure Connect and the Service Bus. The source code that accompanies the article can be downloaded here. As always, any feedback is more than welcome!

Additional Resources/References

For more information on the topic discussed in this blog post, please refer to the following:


Brian Swan (@brian_swan) described Getting Role Instance Information with the Windows Azure SDK for PHP Command Line Tools in a 9/13/2011 post to the Windows Azure’s Silver Lining blog:

Last week I wrote a couple of posts (here and here) about the command line tools in the Windows Azure SDK for PHP. And, as I pointed out in the latter of those posts, I found it necessary to extend the functionality of the command line tools. After a bit of reflection, I started wondering two things:

  1. Would others find my extension to the command line tools useful? If yes, is my design for the extension useful? Is it designed in a way that you would expect?
  2. More broadly, how do developers envision using the command line tools? I have ideas about scenarios where the tools are useful, but I’m just one person.

So, in this post, I’ll share a bit more detail about the work I’ve done in extending the command line tools in hopes of stimulating some discussion around the questions above. (BTW, I've submitted the code for the functionality below as a patch for the Windows Azure SDK for PHP. Hopefully, my patch will be accepted soon. EDIT: My patch has been accepted to the SDK, so the functionality described below is now available!)

In a nutshell, I added 3 operations to the deployment command line tool:

  • getRoleInstances: Returns the number of instances that are running for a specified deployment as well as the name and status for each instance.
  • getRoleInstanceProperties: Returns the status, upgrade domain, fault domain, and size of a specified instance.
  • getRoleInstanceProperty: Returns the role name, status, upgrade domain, fault domain, or size of a specified instance by instance name and property name.

Note: All the information I’m surfacing in these operations is already available via the PHP API of the Windows Azure SDK, it just wasn’t accessible via the command line.

Here’s how these operations are used, along with example outputs:

getRoleInstances

Example command (getting role instances by specifying the deployment slot – production or staging):

deployment getRoleInstances -F="C:\config.ini" --Name="dns_prefix" --BySlot=Production

Example output:

InstanceCount:2

Instance0
Name:PhpOnAzure.Web_IN_0
Status:Ready

Instance1
Name:PhpOnAzure.Web_IN_1
Status:Ready

You can get the same information by specifying a deployment by name (instead of by “slot”, as shown above).

getRoleInstanceProperties

Example command (getting role instance properties for a specified deployment and specified instance):

deployment getRoleInstanceProperties -F="C:\config.ini" --Name="dns_prefix" --BySlot=Production --InstanceName="PhpOnAzure.Web_IN_1"

Example output:

Status:Ready
Upgrade domain:1
Fault domain:1
Size:Small

You can get the same information by specifying a deployment by name (instead of by “slot”, as shown).

getRoleInstanceProperty

Example command:

deployment getRoleInstanceProperty -F="C:\config.ini" --Name="dns_prefix" --BySlot=Production --InstanceName="PhpOnAzure.Web_IN_0" --PropertyName="Status"

Example output:

Ready

Again, you can also specify the deployment by name instead of by “slot”.

So, what good are these new operations? Well, at the risk of showing my inexperience with batch scripts, you can now write a script that will create a service, deploy a package, and return when all instances are running:

@echo off
echo Creating service...
call service create -F="C:\config.ini" --Name="dns_prefix" --Location="North Central US" --Label="service_label" --WaitFor
echo.
echo Creating deployment...
call deployment createfromlocal -F="C:\config.ini" --Name="dns_prefix" --DeploymentName="deployment_name" --Label="deployment_label" --Production --PackageLocation="path\to\.cspkg" --ServiceConfigLocation="path\to\.cscfg" --StorageAccount="your_storage_account_name" --WaitFor
echo.
echo Starting instances...
:loop
call deployment getRoleInstances -F="C:\config.ini" --Name="dns_prefix" --BySlot=Production --WaitFor > status.txt
set I=0
set J=0
FOR /F "tokens=1,2 delims=:" %%G in (status.txt) DO (
if [%%G]==[InstanceCount] set J=%%H
if [%%G]==[Status] if [%%H]==[Ready] set /a I+=1
)
if %I% NEQ %J% goto loop
echo.
echo All instances in Ready state.
echo Services:
call service list -F="C:\config.ini"
echo.
echo New deployment properties:
call deployment getproperties -F="C:\config.ini" --Name="dns_prefix" --BySlot="production" --WaitFor

Of course, that is just one possibility. You can now write scripts that become the tools for easily deploying and managing your Azure deployments. Which brings me back to my original questions: Basically, does this seem useful?


<Return to section navigation list>

Visual Studio LightSwitch and Entity Framework 4.1+

The Visual Studio LightSwitch Team (@VSLightSwitch) reported Grid Logic Releases the Office Integration Pack for LightSwitch in a 9/13/2011 post:

Grid Logic has released a LightSwitch extension that helps you work with Microsoft Office, called the Office Integration Pack. This extension makes it easy for LightSwitch developers to manipulate the 2010 versions of Microsoft Excel, Microsoft Word and Microsoft Outlook in a variety of ways common in desktop business applications. Create documents, PDFs, spreadsheets, email and appointments using data from your LightSwitch applications. This extension is completely FREE and includes source code!

You can download and install the Office Integration Pack directly from Visual Studio LightSwitch via the Extension Manager, or you can download it manually from the Visual Studio Gallery.

Available downloads include:

For more details about the Office Integration Pack see Grid Logic’s website: http://www.gridlogic.com/OfficeIntegrationPack


Return to section navigation list>

Windows Azure Infrastructure and DevOps

I (@rogerjenn) asserted “Microsoft made the Windows Server 8 and Azure connection clearer, raising hopes for easier development in the cloud” in a deck for my Developments in the Azure and Windows Server 8 pairing post of 9/15/2011 to Tech Target’s SearchCloudComputing.com blog:

ANAHEIM, Calif. -- The shroud of secrecy surrounding Windows Server 8 and Azure has been lifted. What lies behind it is greater symmetry between the cloud computing and virtualized HA infrastructures, improved storage and an Azure toolkit that promises to help enterprises easily develop an Azure service and deploy it to end users.

Windows 8 ultimately will provide the underpinnings of the Windows Azure platform, with the intent to democratize high-availability (HA) clusters and push the "scale-up envelope" with features previously reserved for the high-performance computing versions of Windows Server 2008 R2.

This means DevOps teams will need to gain expertise with Windows 8's new features to obtain maximum return on investment with public cloud computing as well as with private and hybrid clouds.

Windows 8 will include new alternative disk storage architectures called Storage Pools and Spaces, Satya Nadella, president of Microsoft's servers and tools business, said here in his BUILD conference keynote. Storage Pools aggregate commodity disk drives into isolated JBOD (Just a Bunch of Disks) units and attach them to Windows for simplified management. Storage Spaces do the same for virtual machines.

Azure also saw its share of storage improvements, which team member Brad Calder outlined in the "Inside Windows Azure storage: what's new and under the hood deep dives" session:

  • Geo-replication, which helps with disaster recovery, and a new version of the REST API that enables functionality improvements for Windows Azure binary large objects (blobs), tables and queues.
  • Table Upsert allows a single request to be sent to Windows Azure Tables to either insert an entity, if it doesn't exist, or update and replace an existing entity.
  • Table Query Projection (Select) allows a client to retrieve a subset of an entity's properties. This improves performance by reducing the serialization/deserialization cost and bandwidth used for retrieving entities.
  • Improved blob HTTP header support aids streaming applications and browser downloads.
  • Queue UpdateMessage allows clients to have a lease on a message and renew the lease while the system processes it as well as update the contents of the message to track processing progress.
  • Queue InsertMessage with visibility timeout allows a newly inserted message to stay invisible on the queue until the timeout expires.

Windows 8 client development tools
Details about programming Windows 8 applications with Visual Studio 11 Express also were previewed in the Windows 8 client developer tools. MSDN subscribers can also download Windows 8 Server from the Developer Network. In addition to the OS bits, both developer previews include the following:

  • Microsoft Visual Studio 11 Express for Windows Developer Preview
  • Microsoft Expression Blend 5 Developer Preview
  • Windows SDK for Metro style apps
  • 28 Metro style apps including the BUILD Conference app

Windows Azure developers probably will want to download and install the Windows Azure Toolkit for Windows 8 from CodePlex. According to CodePlex:

"This toolkit has all the tools to make it easy to develop a Windows Azure service and deploy it to your users. In addition to documentation, this toolkit includes Visual Studio project templates for a sample Metro style app and a Windows Azure cloud project. This tool is designed to accelerate development so that developers can start enabling Windows 8 features, such as notifications, for their app with minimal time and experience. Use this toolkit to start building and customizing your own service to deliver rich Metro style apps."

Windows Azure AppFabric Service Bus and TFS on Azure also include the following improvements:

  • Asynchronous Cloud Eventing allows developers to distribute event notifications to occasionally connected clients, for example, phones, remote workers, kiosks and so on.
  • Event-driven Service Oriented Architecture enables you to build loosely coupled systems that can evolve over time.
  • Advanced Intra-App Messaging provides load-leveling and load-balancing, so developers can build highly scalable, resilient applications.

Developers and operations folks were enthusiastic about new Windows 8 Server features and their potential contribution to future Windows Azure upgrades.

More on Microsoft Azure:

Full disclosure: I’m a paid contributor to SearchCloudComputing.com.


Tony Bailey suggested that you Plug in Your Numbers for Windows Azure Pricing in a 9/15/2011 post to the Windows Azure, Windows Phone and Windows Client blog:

I mentioned in an earlier blog post that we had plugged in some estimates for Windows Azure usage for different types of application scenarios.

I’ve posted more detailed analyses of the scenarios here:

  1. Windows Azure Pricing Scenario: Asset Tracking Application
  2. Windows Azure Pricing Scenario: E-Commerce Web Site
  3. Windows Azure Pricing Scenario: Sales Training Application
  4. Windows Azure Pricing Scenario: Social Media Application

Remember that with Windows Azure you must look at the total development costs. There is no configuration of the platform.

The platform is a service, in point of fact.

Developer hours can be reduced because developers are using familiar tools, debugging locally and then putting up their finished work on the platform service. Developers are not trying to figure out what infrastructure configuration best meets their needs and they are no longer maintaining that infrastructure.

If you think Windows Azure platform is the right choice, check it out.

A quick live chat or phone call will get you a no-credit-card required Windows Azure platform free pass.



Steve Plank (@plankytronixx) answered What’s the best cloud platform: IaaS or PaaS? in a 9/12/2011 post:

Analysts such as Gartner say the cloud platform business has a PaaS future. But when you look at the market today, it’s the IaaS market that is strong, healthy and growing very fast. Let’s stick with the two main protagonists of each approach – Amazon for IaaS and Microsoft for PaaS – despite the rosy future of PaaS, Amazon still continues to show strong growth. How can that be? Is it that PaaS just hasn’t captured the imagination of IT folks yet? I don’t think so. It has more to do with the number of applications that can easily take advantage of IaaS technologies today, compared to PaaS technologies.

There is a continuum from control to abstraction. At the control end, you get complete power over the platform. You decide when and how you will patch, apply security fixes, service packs and so on. You control the network, the OS, the runtimes and middleware (meaning platforms like Java and .Net), the app’s data and the application itself. At the abstraction end, you are denied all but the most simple and necessary elements of control. You can’t do backup and restore operations, you can’t apply patches no matter how important you think they are; in fact you might not even know what OS, hardware, runtime platforms etc. are sitting under the software you are using. Maybe the provider will let you create your own users, rather than phoning a help-desk to do it for you: more a move of convenience on the part of the service provider than anything we could really call “control”. That’s the SaaS extreme. Let’s place various technologies on that continuum.

[Diagram: technologies placed on the control-to-abstraction continuum]

The above diagram is my personal view of where they fit.

With great power comes great responsibility” – Peter Parker, Spiderman…

It doesn’t matter whether you consider a global social networking site or a simple CRM system, the value of the application is in what is delivered to its end-users. There is no inherent value in the behind-the-scenes operational management of it. It’s something that cannot be ignored: it’s a necessary evil. Without that kind of management, applications would go offline, corrupt data, become unreliable and as a result ultimately fail to provide the end-users with the service they need. But consider the processes of making sure the servers don’t overheat, taking regular backups to cope with the inevitable failures that will occur, monitoring for gradually increasing disk errors, monitoring for memory leaks and unexplained high CPU-utilisation. None of those things actually gives value to the users. They enable the application to give value, but they don’t give value themselves.

But it’s when these things go wrong or when the processes that look after them aren’t robust enough that users fail to receive the value the application gives them. The result – they may not be able to communicate with their friends and family on the other side of the world for a day, maybe two. They may not get a large sales order in to their system which skews the quarter-end results causing investors to retreat. A couple of these well timed failures could result in the collapse of a company.

The result over the years has been the need for more and more monitoring, more and more control. Installing a piece of hardware which has no knobs, buttons or software controls is fine, until the post-mortem (witch-hunt) after the failure when it is realised it was never able to give a continuous commentary concerning a sustained growth in errors. Had it been noticed before it got too bad, it could have been dealt with.

So the growth in instrumentation, telemetry, automatic monitoring and systems management technologies has itself spawned an entire industry, growing applications which themselves deliver a value to their users – the value of keeping them in control of the applications that, in their turn deliver value to the end-users. It’s all about control. It’s almost certainly the lineage of IT that has generated this notion. The “men in white coats” syndrome: back in the early days, IT was a scientific endeavour.

It has become so normal that the default mindset is that in order to deliver good service, IT needs to have this thing called control. I think that is one aspect that is making IaaS platforms such a success. Though the IT departments that deploy their applications to public IaaS providers may not own the equipment, they can still control it, just like they always have. They can set up load-balancers and firewall configurations, they can apply security fixes, patches, service packs, they can set up VPN configurations and so on. I can see how it’s an uncomfortable feeling to let all that go and simply “trust” that somebody else will do as good a job as the IT department.

The other thing is that IaaS, to a large extent, replicates what is in the on-premise data-centre. So there is great familiarity with this – but also, many applications grew up in on-premise data-centres that were very close relatives of the IaaS model. So when it comes to moving an existing application, there is greater symmetry between the two environments.

To fully take advantage of a cloud platform (whether IaaS or PaaS) it’s necessary to assume failures will occur and build the application architecture around this assumption. So applications need to run on load-balanced machines, be designed to scale out (not up) in order to cope with increased load, be stateless so that failure of a load-balanced machine can be coped with and so on. It’s just the way a lot of modern applications are developed. The unit of compute in both of these environments tends to be the virtual machine, which can be provisioned and de-provisioned within minutes, which leads to modern applications truly being able to take advantage of modern architectures and especially the cloud.

It’s when we get to older legacy applications that the symmetry between the on-premise data-centre and the IaaS environment looks attractive. It’s possible to “lift and shift” an older application straight in to an IaaS data-centre. There is still a sense of “control” because a good proportion of the infrastructure management is still offered to the application’s owner and the advantage of no longer owning the hardware and other necessary infrastructure (cooling, power, data-centre rack-space etc). Of course with that sense of control, also comes a never-ending responsibility to monitor, maintain and look after the infrastructure. It’s up to the application-owner to understand the OS image they are running and how to patch, update and service pack it for the application they are running.

The application, if it is a traditional, single-instance beast, is just as vulnerable to failure in its new environment as it was in its previous data-centre. Even IaaS cloud operators recommend applications should be built the modern way: to expect failures and deal with them appropriately. The difference between IaaS and PaaS in this environment is that you can move a legacy application to an IaaS data-centre. OK, so it goes with the same set of risks as when it was in a private data-centre, but it can be moved.

The level of abstraction is more pronounced in a PaaS cloud data-centre. As said earlier, the unit of compute is the virtual machine. This is taken much more literally in the PaaS world. The set of machine instructions, and the data they operate on (say, for example, the code of an ASP.Net web site, plus the html, jpg, css files etc. in the site), is the unit of execution. Changes to the local file system are not durable across reboots, for example. This is actually a very good thing in a modern application. The unit of execution will always be instantiated to a known state, rather like creating an instance of a class in a piece of code: the object always appears in a pre-defined, known state. PaaS uses this notion several thousand feet higher up the application stack. Application data itself is stored either in a separate storage sub-system or in a Database-as-a-Service store.

It heralds a new way of thinking about applications where the truly major components (compute, storage, database etc) can be considered as instances of a class. These major instances are defined in a service model.

Imagine creating an object from, say, the Java or .Net classes. You really don’t need to concern yourself with the internals of these platforms. If a bug is discovered, the platform is patched. The next time you create that object you go through exactly the same motions, but you now have an object that doesn’t have the bug. It’s the same with, say, compute instances in a PaaS model. The service model specifies the type of compute instance you need. The cloud platform itself takes care of its internal make-up. If there is a bug in the OS, it is patched. This is actually done in such a way that even while your application is running, it can be patched underneath you and you need not concern yourself with how your app will continue to run – it just will.

It does mean though, that the platform itself has the control, not you, the individual. But let’s remember what we said right at the start – that the control and management of an application has no inherent value, it’s the service the application gives to the user where the value is derived.

So I believe those IT departments who are moving existing legacy applications to the cloud are the ones who are mostly using IaaS. And as there are more existing legacy applications in the world than modern or greenfield app projects, there will be a natural skew toward the platform that allows it with the fewest barriers – we called it “lift and shift” earlier. But as time advances, as more net-new applications are developed, as more legacy applications are updated to the newer modern architectures, there will be a greater movement toward PaaS platforms. “With great power comes great responsibility”, and that responsibility exists in perpetuity. We can’t say there is no place for IaaS; there clearly is. It continues to grow and I imagine it will do well, for a time. Then when all that low-hanging fruit has been picked and the only fruit on the tree is a huge collection of modern applications well suited to PaaS, we’ll see a big change. I think this is really why the analysts say PaaS has the bright future: you only do two things with a PaaS platform, supply the app and the data. The rest is abstracted away. It’s a way we’ll all gradually think, and I think it’ll come sooner rather than later.

The driver, I’m sure, is the consumerisation of IT. Almost everybody in Generation Y (those born after 1981) is using IT at work, at rest and at play. There are a host of applications we don’t even know we need yet, and Generation Y are going to develop them. They never really grew up with the idea of IT being a scientific endeavour. They’ll be great consumers of cloud services to provide the power behind their services. I’m convinced they’ll only want to write great apps. The idea of managing the platforms, doing operational stuff, monitoring – it’s just not sexy, it’s just not going to appeal. Especially if somebody else can do it – the PaaS operators of the future.

<Return to section navigation list>

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds


No significant articles today.


<Return to section navigation list>

Cloud Security and Governance

Ed Moyle asserted “Companies that have invested in SSO need to address authentication issues before moving to the cloud” in a deck for his Cloud authentication: Avoiding SSO land mines in the cloud post to SearchCloudSecurity.com of 9/12/2011:

It’s hard enough to think through all the security and compliance issues of a cloud move at the macro level during the planning phases of a complex deployment, but the details of the technical “plumbing” can lend their own complexities to the process. One area of particular challenge to organizations from an architectural standpoint is authentication; add cloud authentication issues like single sign-on (SSO) to the mix, and it becomes even more challenging.

Firms invested in SSO that are considering moving to the cloud -- or firms that may wish to invest in SSO in the future -- need to consider a few key architectural questions ahead of time, to make sure the money saved or flexibility gained by moving to the cloud isn’t offset by losses on their SSO investments.

The drive to reduce passwords

Even if an organization hasn’t invested in a large, expensive SSO system, chances are still good that it has an economic stake in SSO. This comes about through incremental investments made over time to narrow the password base. Since passwords cost money by increasing support calls, offset security investment by encouraging password reuse, and decrease user satisfaction, there’s an industry-wide desire to reduce the number of passwords. This pressure translates to consistent, incremental decisions to integrate new technology into the existing authentication infrastructure.

For example, consider a hospital looking to deploy a system that provides physicians access to patient test results via a Web interface (a fairly common use case in the health care sector). One of the products it evaluates uses Windows Domain authentication (i.e., Active Directory); it allows physicians to authenticate using domain credentials. The other product supports proprietary credentials, putting into play a new password the physician will need to remember. Obviously, the first model (using AD) has advantages: provisioning costs are lower, help desk support is reduced and users are less irritated. Even if the hospital picks the second system, it may choose to invest resources in customization to leverage credentials it already has. Either way, the focus is on reducing the sign-on footprint.

Now what happens when the server supporting that product is virtualized and put into the cloud? Will authentication still function the same way? Depending on the cloud model in use and your planning, authentication might not tie together as seamlessly after the move as it did before. For example, a straight “physical to virtual” replacement of an existing device (a scenario fairly common in an IaaS deployment) might preserve the authentication model without interruption, but other scenarios, such as movement of the application into a PaaS cloud, might not. The point is, authentication is technically important and must be specifically addressed as part of the planning process. There are key architectural decisions that need to be made to make sure service continues uninterrupted.

Technical preplanning: cloud authentication

Just as you would address the high-level compliance and security requirements of a cloud move before you start the move, it is imperative to address authentication as part of that pre-planning. There are so many different possible variations of cloud technology (IaaS, PaaS, SaaS in public, private or hybrid models) that the specifics will vary greatly according to use case, and planning can change mid-stream. However, no matter what model you choose to employ, systematic analysis of the current authentication “ecosystem” is critical to understand what could be impacted.

Like most things, this planning starts with an inventory. You’ll need an accurate accounting of what’s in place now and how the pieces tie together, so that your planning accounts for each and every application that’s slotted to move to the cloud infrastructure. There should be no surprises when the time comes to actually make the move. In particular, organizations should take note of the following (a minimal, illustrative inventory sketch follows the list):

  • Authentication principals -- Who are the users? Are they only internal employees or are there external entities as well that need to be authenticated? Internal employees are probably a given, but also consider external vendors or partners that may need access.
  • Authentication providers -- What is the source of user account information? Active Directory is a common one, but are there other user data repositories as well? This could include Web-based SSO systems (OpenSSO, CA Technologies), in-house developed authentication systems, as well as cloud-based authentication providers.
  • Authentication consumers -- What is being authenticated to? What will need to make use of authentication providers? This could include everything from applications and servers to databases and individual application subcomponents.
  • Authentication methods -- How are users authenticating? Where is two-factor required and how is it currently implemented? What specific technology is deployed to enable authentication? This might include RADIUS servers, other authentication technologies like servers supporting two-factor authentication, certificates (for example, for inter-component authentication) or even AD credentials.
  • Provisioning flow -- How is access provisioned? How do authentication credentials get created?
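Purely as an illustration (none of this comes from the article, and every application name, provider and field below is an assumption), the inventory can be captured in a form that is easy to query later, so that the applications slated to move can be cross-checked against the authentication providers that will actually be reachable from the target cloud, along the lines of the comparison recommended in the closing paragraph below.

```python
# Illustrative sketch of an authentication inventory and a simple cross-check.
from dataclasses import dataclass

@dataclass
class AuthRecord:
    application: str    # authentication consumer (what is being authenticated to)
    principals: list    # who authenticates: employees, vendors, partner systems...
    provider: str       # e.g. "Active Directory", "OpenSSO", an in-house system
    methods: list       # e.g. ["password"], ["password", "two-factor"]
    provisioning: str   # how credentials for this application are created

inventory = [
    AuthRecord("patient-results-portal", ["physicians"], "Active Directory",
               ["password"], "HR-driven AD provisioning"),
    AuthRecord("claims-api", ["partner systems"], "in-house certificate store",
               ["client certificate"], "manual issuance"),
]

def unresolved_after_move(records, providers_reachable_from_cloud):
    """Flag applications whose provider will not be reachable after the move."""
    return [r.application for r in records
            if r.provider not in providers_reachable_from_cloud]

print(unresolved_after_move(inventory, {"Active Directory"}))  # -> ['claims-api']
```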

Obviously, this is a lot of data to collect, but thoroughness is key. As far as how to collect the data, technical conversations with application and system SMEs are one way, but that can be overly time-intensive given aggressive cloud implementation timelines. Other documentation can be leveraged to round that data out, such as application inventories or artifacts generated from the business impact analysis (BIA) phase of BCP/DR planning.

Once you have this data collected, a technical dialog with your cloud provider about how it will support your particular use cases is in order. Depending on what technologies are in scope, discussions with your provider can help steer you toward (or away from) the service offerings it provides. For example, if you need two-factor authentication and your current architecture doesn’t support that easily given what (and how) you plan to move, your vendor may offer technical services that can help fill the need.

Finally, once you’ve compiled the inventory and made sure you’re leveraging (or have at least evaluated) any technical service your provider may offer, the harder technical heavy lifting begins. Compare the list of applications that are moving against your authentication providers and provisioning flows to make sure authentication can continue uninterrupted (ideally without breaking current integration). This may sound like extra legwork, but it’s better to find out before a move that an application’s authentication model will break than to find out afterwards.

About the author: Ed Moyle is a senior security strategist with Savvis as well as a founding partner of Security Curve.


<Return to section navigation list>
