Friday, May 20, 2011

Windows Azure and Cloud Computing Posts for 5/18/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


• Updated 5/20/2011 with new articles marked by The System Center Team, Wade Wegner, Vittorio Bertocci, Kenon Owens, Turker Keskinpala, Karsten Januszewski, Bruce Kyle, Lynn Langit, Jason Bloomberg, and Clemens Vasters.

Note: Further updates will be limited this week due to preparation for my Moving Access Tables to SharePoint 2010 or SharePoint Online Lists Webcast of 3/23/2011. See Three Microsoft Access 2010 Webcasts Scheduled by Que Publishing for March, April and May 2011 for details of the other two members of the series.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display it as a single article; the links then navigate to the section you want.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Channel9 posted Cihan Biyikoglu’s Building Scalable Database Solutions Using Microsoft SQL Azure Database Federations session video on 5/18/2011:

SQL Azure provides an information platform that you can easily provision, configure and use to power your cloud applications. In this session we explore the patterns and practices that help you develop and deploy applications that can exploit the full power of the elastic, highly available, and scalable SQL Azure Database service.

The session details modern scalable application design techniques such as sharding and horizontal partitioning and dives into future enhancements to SQL Azure Databases.


Steve Yi (pictured below) reported a TechEd 2011: The Data and BI Platform for Today and Tomorrow session in a 5/18/2011 post to the SQL Azure Team blog:

Quentin Clark, Vice President for SQL Server, delivered a session after the TechEd keynote yesterday about the future of SQL Server for data and BI and moving towards the cloud in the upcoming “Denali” release.

There was a great demo by Roger Doherty that illustrated how well SQL Azure and SQL Server work together by making it much easier to move back and forth between on-premises and cloud, utilizing improvements in the DAC Framework.

Additionally, new developer tools in “Denali”, specifically SQL Server Developer Tools Codename “Juneau”, provide a central point for developers to create database applications. Database edition- and cloud-aware, “Juneau” provides one tool for any SQL Server or SQL Azure development.

I encourage you to watch it in its entirety.  However, if you want to skip ahead the SQL Azure demo starts at the 23:00 mark.

Steve’s embedded video didn’t work for me, so here’s a link to Channel9’s session video: Microsoft SQL Server: The Data and BI Platform for Today and Tomorrow:

Foundational Sessions bridge the general topics outlined in the keynote address and the in-depth coverage in breakout sessions by sharing the company’s vision, strategy and roadmap for particular products and technologies, and are delivered by Microsoft senior executives. Attend this session for a demo-intensive visual tour through the new SQL Server and discover what’s possible for end users, developers and IT alike to help organizations unlock the value of exploding data volumes, create business solutions fast, and deliver mission critical capabilities at low TCO in their enterprise and in the cloud.


Steve Yi announced a Video How To: Advanced Business Intelligence with Cloud Data on 5/18/2011:

We’ve created a new readiness video that introduces some of the features of SQL Azure Reporting and demonstrates the use of advanced analytical tools, such as using SQL Azure data with Excel and PowerPivot. Specifically, users will learn how to use SQL Azure data with Excel and create two pivot tables using employee expense report data. The conclusion points to some additional resources to help users get started.


Neil MacKenzie described the new SQL Azure Management REST API in a 5/17/2011 post:

The SQL Azure Management REST API was released with the May 2011 SQL Azure release. This API is very similar to the Windows Azure Service Management REST API that has been available for some time. They both authenticate using an X.509 certificate that has been uploaded as a management certificate to the Windows Azure Portal.

The primary differences are that the SQL Azure Management REST API uses:

  • service endpoint: management.database.windows.net:8443
  • x-ms-version: 1.0

while the Windows Azure Service Management REST API uses:

  • service endpoint: management.core.windows.net
  • x-ms-version: 2011-02-25

This post is a sequel to an earlier post on the Windows Azure Service Management REST API.

SQL Azure Management Operations

The SQL Azure Management REST API supports the following operations:

The Create Server, Set Server Administrator Password and the Set Server Firewall Rule operations all require a request body adhering to a specified XML schema.
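
For example, a Create Server request body can be built with LINQ to XML in the same way as the firewall-rule body shown later in this post. The following is only a hedged sketch: the element names (Server, AdministratorLogin, AdministratorLoginPassword, Location) are assumptions based on the published schema and should be checked against the Create Server documentation.

XDocument GetRequestBodyForCreateServer(
    String administratorLogin, String administratorLoginPassword, String location)
{
    // Same SQL Azure schema namespace used by the samples below.
    XNamespace ns = XNamespace.Get("http://schemas.microsoft.com/sqlazure/2010/12/");

    // Element names here are assumptions; verify them against the API documentation.
    return new XDocument(
        new XDeclaration("1.0", "utf-8", "no"),
        new XElement(ns + "Server",
            new XElement(ns + "AdministratorLogin", administratorLogin),
            new XElement(ns + "AdministratorLoginPassword", administratorLoginPassword),
            new XElement(ns + "Location", location)));
}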

Creating the HTTP Request

Each operation in the SQL Azure Management REST API requires that a specific HTTP request be made against the service management endpoint. The request must be authenticated using an X.509 certificate that has previously been uploaded to the Windows Azure Portal. This can be a self-signed certificate.

The following shows how to retrieve an X.509 certificate, identified by thumbprint, from the Personal (My) level of the certificate store for the current user:

X509Certificate2 GetX509Certificate2(String thumbprint)
{
    X509Certificate2 x509Certificate2 = null;
    X509Store store = new X509Store("My", StoreLocation.CurrentUser);
    try
    {
        store.Open(OpenFlags.ReadOnly);
        X509Certificate2Collection x509Certificate2Collection =
             store.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
        x509Certificate2 = x509Certificate2Collection[0];
    }
    finally
    {
        store.Close();
    }
    return x509Certificate2;
}

The following shows how to create an HttpWebRequest object, add the certificate (for a specified thumbprint), and add the required x-ms-version request header:

HttpWebRequest CreateHttpWebRequest(
    Uri uri, String httpWebRequestMethod, String version)
{
    X509Certificate2 x509Certificate2 = GetX509Certificate2("THUMBPRINT");

    HttpWebRequest httpWebRequest = (HttpWebRequest)HttpWebRequest.Create(uri);
    httpWebRequest.Method = httpWebRequestMethod;
    httpWebRequest.Headers.Add("x-ms-version", version);
    httpWebRequest.ClientCertificates.Add(x509Certificate2);
    httpWebRequest.ContentType = "application/xml";

    return httpWebRequest;
}

Making a Request on the Service Management API

As with other RESTful APIs, the Service Management API uses a variety of HTTP operations – with GET being used to retrieve data, DELETE being used to delete data, and POST or PUT being used to add elements.

The following example invoking the Get Servers operation is typical of those operations that require a GET operation:

XDocument EnumerateSqlAzureServers(String subscriptionId)
{
    String uriString =
       String.Format("https://management.database.windows.net:8443/{0}/servers",
           subscriptionId);
    String version = "1.0";

    XDocument responseDocument;
    Uri uri = new Uri(uriString);
    HttpWebRequest httpWebRequest = CreateHttpWebRequest(uri, "GET", version);
    using (HttpWebResponse httpWebResponse =
        (HttpWebResponse)httpWebRequest.GetResponse())
    {
        Stream responseStream = httpWebResponse.GetResponseStream();
        responseDocument = XDocument.Load(responseStream);
    }
    return responseDocument;
}

The response containing the list of servers is loaded into an XML document where it can be further processed as necessary. The following is an example response:

<Servers xmlns="http://schemas.microsoft.com/sqlazure/2010/12/">
  <Server>
    <Name>SERVER</Name>
    <AdministratorLogin>LOGIN</AdministratorLogin>
    <Location>North Central US</Location>
  </Server>
</Servers>
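
Because the response is plain XML, pulling values out of the returned XDocument is straightforward with LINQ to XML. Here is a minimal sketch (assuming the same 2010/12 schema namespace shown above) that lists each server’s name and location:

void ListSqlAzureServers(XDocument responseDocument)
{
    XNamespace ns = "http://schemas.microsoft.com/sqlazure/2010/12/";

    // Write the name and location of each server in the Get Servers response.
    foreach (XElement server in responseDocument.Descendants(ns + "Server"))
    {
        Console.WriteLine("{0} ({1})",
            (String)server.Element(ns + "Name"),
            (String)server.Element(ns + "Location"));
    }
}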

Some operations require that a request body be constructed. Each operation requires that the request body be created in a specific format – and a failure to do so causes an error when the operation is invoked.

The following shows how to create the request body for the Set Server Firewall Rule operation:

XDocument GetRequestBodyForAddFirewallRule(String startIpAddress, String endIpAddress)
{
    XNamespace defaultNamespace =
        XNamespace.Get("http://schemas.microsoft.com/sqlazure/2010/12/");
    XNamespace xsiNamespace =
        XNamespace.Get("http://www.w3.org/2001/XMLSchema-instance");
    XNamespace schemaLocation = XNamespace.Get("http://schemas.microsoft.com/sqlazure/2010/12/FirewallRule.xsd");

    XElement firewallRule = new XElement(defaultNamespace + "FirewallRule",
        new XAttribute("xmlns", defaultNamespace),
        new XAttribute(XNamespace.Xmlns + "xsi", xsiNamespace),
        new XAttribute(xsiNamespace + "schemaLocation", schemaLocation),
        new XElement(defaultNamespace + "StartIpAddress", startIpAddress),
        new XElement(defaultNamespace + "EndIpAddress", endIpAddress));

    XDocument requestBody = new XDocument(
        new XDeclaration("1.0", "utf-8", "no"),
        firewallRule
    );
    return requestBody;
}

This method creates the request body and returns it as an XML document. The following is an example:

<FirewallRule
    xmlns="http://schemas.microsoft.com/sqlazure/2010/12/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://schemas.microsoft.com/sqlazure/2010/12/FirewallRule.xsd">
  <StartIpAddress xmlns="">10.0.0.1</StartIpAddress>
  <EndIpAddress xmlns="">10.0.0.255</EndIpAddress>
</FirewallRule>

The following example shows the invocation of the Set Server Firewall Rule operation:

void AddFirewallRule(String subscriptionId, String serverName, String ruleName,
      String startIpAddress, String endIpAddress)
{
    String uriString = String.Format(
        "https://management.database.windows.net:8443/{0}/servers/{1}/firewallrules/{2}",
        subscriptionId, serverName, ruleName);
    String apiVersion = "1.0";

    XDocument requestBody = GetRequestBodyForAddFirewallRule(
         startIpAddress, endIpAddress);
    String StatusDescription;
    Uri uri = new Uri(uriString);
    HttpWebRequest httpWebRequest = CreateHttpWebRequest(uri, "PUT", apiVersion);
    using (Stream requestStream = httpWebRequest.GetRequestStream())
    {
        using (StreamWriter streamWriter =
            new StreamWriter(requestStream, System.Text.UTF8Encoding.UTF8))
        {
            requestBody.Save(streamWriter, SaveOptions.DisableFormatting);
        }
    }
    using (HttpWebResponse httpWebResponse =
     (HttpWebResponse)httpWebRequest.GetResponse())
    {
        StatusDescription = httpWebResponse.StatusDescription;
    }
}


<Return to section navigation list> 

MarketPlace DataMarket and OData

• Turker Keskinpala reported OData Service Validation Tool Updated – JSON Payload Validation and new rules on 5/19/2011:

imageWe pushed another new update to the OData Service Validation Tool last Friday. This update has the following:

  • Added support for JSON payload validation.
  • Added 9 new JSON rules
  • Added 2 new Atom/XML metadata rules
  • Updated 4 Atom/XML rules (3 metadata and 1 feed)

As always, please give it a try and let us know what you think using the OData Mailing List.

• Karsten Januszewski described The Rise Of JSON in a 5/17/2011 post to the VisitMIX blog:

I’ve been prototyping a new service, sketching out the different pieces: payload protocol, storage, data model, transport, client/server communication, etc. And, upon completion of the prototype, I stepped back and looked at the decisions made. For example, how are we storing the data? Raw JSON. How are we serving data? As JSON.

It suddenly struck me: there was never even a question of what format we would serialize to; JSON was assumed. And the idea that we would support XML as well as JSON wasn’t even considered.

The fact that this architecture was almost assumed instead of deliberated upon got me thinking: when was it that JSON won? After all, the format isn’t that old. But its rise has been quick and triumphant. I’m not the first one to observe this of course. There’s a great piece called “The Stealthy Ascendency of JSON” on DevCentral which does some digging across a range of available web APIs, discovering an increase in the percentage of APIs that support JSON as compared to XML in the last year. The ProgrammableWeb has a piece called “JSON Continues its Winning Streak Over XML” which similarly documents this trend. And there are also the much-blogged-about facts that Twitter has removed XML support from its streaming API and that Foursquare’s v2 API only supports JSON.

All this begs a different question: Why is JSON so popular? There is the simple fact that JSON is smaller as a payload than XML. And no doubt JSON is less verbose than XML. But there’s much more to it than just size. The crux has to do with programming. JSON is natively tied to Javascript. As an object representation of data, it is so easy to work with inside Javascript. Its untyped nature flows perfectly with how Javascript itself works. Compare this to working with XML in Javascript: ugh. There’s a pretty fascinating piece by James Clark called “XML vs. The Web” that really dives into this.

JSON’s untyped nature flows with how the web itself works. The web does not like typing; it doesn’t like schemas; it doesn’t like things to be rigid or too structured. Just look at the failure of XHTML. A beautiful idea for the purists, but for the web, its lack of adoption underscores its platonic ideals.

Not to dismiss XML.  It turns out, XML works fantastically well with strongly typed languages.  Perhaps XML’s crowning glory these days is how it maps to object graphs — elements as objects, attributes as properties — for the purposes of creating client user interfaces.  Consider mobile development. Look at both Android development (Java/XML) and Windows Phone development (.NET/XAML).  Both models extensively use XML to represent user interface which map directly to an object graph.  And both models use this XML representation to facilitate WYSIWYG editors like one finds in Expression Blend, Visual Studio and Eclipse. I wrote about this a fair amount quite a while ago in a paper called The New Iteration about designer/developer workflow with XAML.

Where you find an impedance mismatch is using a loosely typed payload format like JSON with a strongly typed language.  This happens all the time on the server, especially if you are a .NET developer. Historically, this has caused plenty of headaches. I know I’ve spent too much time dealing with serialization/deserialization issues on the server when parsing JSON.

Well, I’m happy to say that this mismatch seems to have finally gone away with the dynamic keyword in .NET 4 combined with some great open source work by the WCF team that has resulted in a library called Microsoft.Runtime.Serialization.Json.

Consider the following c# code, which downloads my Foursquare profile:

WebClient webClient = new WebClient();
dynamic result = JsonValue.Parse(webClient.DownloadString("https://api.foursquare.com/v2/users/self?oauth_token=XXXXXXX"));
Console.WriteLine(result.response.user.firstName);

Notice how the parsing of the JSON returns an object graph that I can drill into.  So elegant! If you want to learn more about these new APIs, check out this post called JSON and .NET Nirvana Hath Arrived.

It’s great to see this marriage between JSON and .NET, because it’s clear JSON isn’t going away any time soon.


Glenn Gailey (@ggailey777) described Accessing an OData Media Resource Stream from a Windows Phone 7 Application (Streaming Provider Series-Part 3) in a 5/17/2011 post to the WCF Data Services Team blog:

image In this third post in the series on implementing a streaming data provider, we show how to use the OData client library for Windows Phone 7 to asynchronously access binary data exposed by an Open Data Protocol (OData) feed. We also show how to asynchronously upload binary data to the data service. This Windows Phone sample is the asynchronous equivalent to the previous post Data Services Streaming Provider Series-Part 2: Accessing a Media Resource Stream from the Client; both client samples access the streaming provider that we create in the first blog post in this series: Implementing a Streaming Provider. This post also assumes that you are already somewhat familiar with using the OData client library for Windows Phone 7 (which you can obtain from the OData project in CodePlex), as well as phone-specific concepts like paged navigation and tombstoning. For more information about OData and Windows Phone, see the topic Open Data Protocol (OData) Overview for Windows Phone.

OData Client Programming for Windows Phone 7

This application consumes an OData feed exposed by the sample photo data service, which implements a streaming provider to store and retrieve image files, along with information about each photo. This service returns a single feed (entity set) of PhotoInfo entries, which are also media link entries. The associated media resource for each media link entry is an image, which can be downloaded from the data service as a stream. The following represents the PhotoInfo entity in the data model:

[Diagram: PhotoInfo entity in the data model]

This sample streaming data service is demonstrated in Implementing a Streaming Provider. You can download this streaming data service as a Visual Studio project from Streaming Photo OData Service Sample on MSDN Code Gallery. In our client phone application, we bind data from the PhotoInfo feed to UI controls in the XAML page.

First we need to create a Window Phone application that references the OData client library. (Note that the same basic APIs can be used to access and create media resources from a Silverlight client, except for the tombstoning functionality, which is specific to Windows Phone.) I won’t go into too much detail on the XAML that creates the pages in the application, since this is not a tutorial on XAML. You can review for yourself the XAML pages in the downloaded ODataStreamingPhoneClient project. Here are the basic steps to create this application:

  1. Download and install the OData client library for Windows Phone 7. This includes the System.Data.Services.Client.dll assembly and the DataSvcUtil.exe tool.
  2. Create the Windows Phone project.
  3. Run the DataSvcUtil.exe program (included in the OData client library for Windows Phone 7 download) to generate the client data classes for the data service.
    Your command line should look like this (except all on one line):

    DataSvcUtil.exe /out:"PhotoData.cs" /language:csharp /DataServiceCollection
    /uri:http://myhostserver/PhotoService/PhotoData.svc/ /version:2.0

  4. Add a reference to the System.Data.Services.Client.dll assembly.
  5. Create a ViewModel class for the application named MainViewModel. This ViewModel helps connect the view (controls in XAML pages) to the model (OData feed accessed using the client library) by exposing properties and methods required for data binding and tombstoning. The following represents the MainViewModel class that supports this sample:
    [Diagram: MainViewModel class]
  6. Implement tombstoning to store application state when the application is deactivated and restore state when the application is reactivated. This is important because deactivation can happen at any time, including when the application itself displays the PhotoChooserTask to select a photo stored on the phone. To learn more about how to tombstone using the DataServiceState object, see Open Data Protocol (OData) Overview for Windows Phone.
  7. The MainPage.xaml page displays a ListBox of PhotoInfo objects, which includes the media resources as images downloaded from the streaming data service.
    [Screenshot: MainPage displaying the photo ListBox]
  8. When one of the items in the ListBox is tapped, details of the selected PhotoInfo object are displayed in a Pivot control on the PhotoDetailsPage:
    [Screenshots: PhotoDetailsPage Pivot control]
Querying the Data Service and Binding the Streamed Data

The following steps are required to asynchronously query the streaming OData service. All code that access the OData service is implemented in the MainViewModel class.

  1. Declare the DataServiceContext used to access the data service and the DataServiceCollection used for data binding.

    // Declare the service root URI.
    private Uri svcRootUri =
        new Uri(serviceUriString, UriKind.Absolute);

    // Declare our private binding collection.
    private DataServiceCollection<PhotoInfo> _photos;

    // Declare our private DataServiceContext.
    private PhotoDataContainer _context;

    public bool IsDataLoaded { get; private set; }

  2. Register a handler for the LoadCompleted event when the binding collection is set.  

    public DataServiceCollection<PhotoInfo> Photos
    {
        get { return _photos;}
        set
        {
            _photos = value;

            NotifyPropertyChanged("Photos");

            // Register a handler for the LoadCompleted event.
            _photos.LoadCompleted +=
                new EventHandler<LoadCompletedEventArgs>(Photos_LoadCompleted);
        }
    }

  3. When MainPage.xaml is navigated to, the LoadData method on the ViewModel is called; the LoadAsync method asynchronously executes the query URI.

    // Instantiate the context and binding collection.
    _context = new PhotoDataContainer(svcRootUri);
    Photos = new DataServiceCollection<PhotoInfo>(_context);

    // Load the data from the PhotoInfo feed.
    Photos.LoadAsync(new Uri("/PhotoInfo", UriKind.Relative));

  4. The Photos_LoadCompleted method handles the LoadCompleted event to load all pages of the PhotoInfo feed returned by the data service.

    private void Photos_LoadCompleted(object sender, LoadCompletedEventArgs e)
    {
        if (e.Error == null)
        {
            // Make sure that we load all pages of the Customers feed.
            if (_photos.Continuation != null)
            {
                // Request the next page from the data service.
                _photos.LoadNextPartialSetAsync();
            }
            else
            {
                // All pages are loaded.
                IsDataLoaded = true;
            }
        }
        else
        {
            if (MessageBox.Show(e.Error.Message, "Retry request?",
                MessageBoxButton.OKCancel) == MessageBoxResult.OK)
            {
                this.LoadData();
            }
        }
    }

  5. When the user selects an image in the list, PhotoDetailsPage.xaml is navigated to, which displays data from the selected PhotoInfo object.
Binding Image Data to UI Controls

This sample displays images in the MainPage by binding a ListBox control to the Photos property of the ViewModel, which returns the binding collection containing data from the returned PhotoInfo feed. There are two ways to bind media resources from our streaming data service to the Image control.

  • By defining an extension property on the media link entry.
  • By implementing a value converter.

Both of these approaches end up calling GetReadStreamUri on the context to return the URI of the media resource for a specific PhotoInfo object, which is called the read stream URI. We ended up going with the extension property approach, which is rather simple and ends up looking like this:

public partial class PhotoInfo
{
       // Returns the media resource URI for binding.
       public Uri StreamUri
       {
           get
           {
               return App.ViewModel.GetReadStreamUri(this);
           }
       }
}
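
For comparison, the value-converter alternative (which the sample does not use) might look roughly like the sketch below; the StreamUriConverter class name is hypothetical, and the converter would be declared as a resource and referenced in the Image binding instead of the StreamUri extension property:

// Hypothetical sketch of the value-converter approach: converts a bound
// PhotoInfo entity into its read stream URI by calling GetReadStreamUri.
public class StreamUriConverter : System.Windows.Data.IValueConverter
{
    public object Convert(object value, Type targetType,
        object parameter, System.Globalization.CultureInfo culture)
    {
        PhotoInfo photo = value as PhotoInfo;
        return photo == null ? null : App.ViewModel.GetReadStreamUri(photo);
    }

    public object ConvertBack(object value, Type targetType,
        object parameter, System.Globalization.CultureInfo culture)
    {
        // One-way binding only; converting back is not supported.
        throw new NotSupportedException();
    }
}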

When you bind an Image control using the read stream URI, the runtime does the work of asynchronously downloading the media resource during binding. The following XAML shows this binding to the StreamUri extension property for the image source:

<ListBox Margin="0,0,-12,0" Name="PhotosListBox" ItemsSource="{Binding Photos}" 
            SelectionChanged="OnSelectionChanged" Height="Auto">
    <ListBox.ItemsPanel>
        <ItemsPanelTemplate>
            <toolkit:WrapPanel ItemHeight="150" ItemWidth="150"/>
        </ItemsPanelTemplate>
    </ListBox.ItemsPanel>
    <ListBox.ItemTemplate>                      
        <DataTemplate>
            <StackPanel Margin="0,0,0,17" Orientation="Vertical"
                    HorizontalAlignment="Center">
            <Image Source="{Binding Path=StreamUri, Mode=OneWay}"
                   Height="100" Width="130" />
                <TextBlock Text="{Binding Path=FileName, Mode=OneWay}"
                            HorizontalAlignment="Center" Width="Auto"
                        Style="{StaticResource PhoneTextNormalStyle}"/>
            </StackPanel>
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>

Because the PhotoInfo class now includes the StreamUri extension property, the client also serializes this property in POST requests that create new media link entries in the data service. This causes an error in the data service when this unknown property cannot be processed. In our sample, we had to rewrite our requests to remove the StreamUri property from the request body. This payload rewriting is performed in the PhotoDataContainer partial class (defined in project file PhotoDataContainer.cs), which follows the basic pattern described in this post.  I cover this and other binding issues related to media resource streams in more detail in my blog.
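
That rewriting code isn’t reproduced in this excerpt, but a common way to do it is to handle the context’s WritingEntity event and strip the client-only property from the ATOM entry before it goes on the wire. The following is a hedged sketch (not necessarily identical to the sample’s PhotoDataContainer.cs) that assumes the usual OData data-service namespaces and requires System.Linq and System.Xml.Linq:

public partial class PhotoDataContainer
{
    // Generated contexts call this partial method from their constructors.
    partial void OnContextCreated()
    {
        XNamespace d = "http://schemas.microsoft.com/ado/2007/08/dataservices";
        XNamespace m = "http://schemas.microsoft.com/ado/2007/08/dataservices/metadata";

        this.WritingEntity += (sender, e) =>
        {
            // Remove the serialized StreamUri element, if present, so the
            // data service never sees the client-only extension property.
            XElement streamUri = e.Data.Descendants(m + "properties")
                                       .Elements(d + "StreamUri")
                                       .FirstOrDefault();
            if (streamUri != null)
            {
                streamUri.Remove();
            }
        };
    }
}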

Uploading a New Image to the Data Service

The following steps are required to create a new PhotoInfo entity and binary image file in the data service.

  1. When the user taps the CreatePhoto button to upload a new image, we must create a new MLE object on the client. We do this by calling DataServiceCollection.Add in the MainPage code-behind page:

    // Create a new PhotoInfo instance.
    PhotoInfo newPhoto = PhotoInfo.CreatePhotoInfo(0, string.Empty,
        DateTime.Now, new Exposure(), new Dimensions(), DateTime.Now);

    // Add the new photo to the tracking collection.
    App.ViewModel.Photos.Add(newPhoto);

    // Select the newly added photo.
    this.PhotosListBox.SelectedItem = newPhoto;

    In this case, we don’t need to call AddObject on the context because we are using a DataServiceCollection for data binding.
  2. When the new PhotoInfo is selected from the list, the following SelectionChanged handler is invoked:

    var selector = (Selector)sender;
    if (selector.SelectedIndex == -1)
    {
        return;
    }

    // Navigate to the details page for the selected item.
    this.NavigationService.Navigate(
        new Uri("/PhotoDetailsPage.xaml?selectedIndex="
            + selector.SelectedIndex, UriKind.Relative));

    selector.SelectedIndex = -1;

    This navigates to the PhotoDetailsPage with the index of the newly created PhotoInfo object in the query parameter.
  3. In the code-behind page for the PhotoDetailsPage, the following method handles the OnNavigatedTo event: 

    if (chooserCancelled == true)
    {
        // The user did not choose a photo so return to the main page;
        // the added PhotoInfo is already removed.
        NavigationService.GoBack();

        // Void out the binding so that we don't try and bind
        // to an empty PhotoInfo object.
        this.DataContext = null;

        return;
    }

    // Get the selected PhotoInfo object.
    string indexAsString = this.NavigationContext.QueryString["selectedIndex"];
    int index = int.Parse(indexAsString);
    this.DataContext = currentPhoto
        = (PhotoInfo)App.ViewModel.Photos[index];

    // If this is a new photo, we need to get the image file.
    if (currentPhoto.PhotoId == 0
        && currentPhoto.FileName == string.Empty)
    {
        // Call the OnSelectPhoto method to open the chooser.
        this.OnSelectPhoto(this, new EventArgs());
    }

    If we have a new PhotoInfo object (with a zero ID), the OnSelectPhoto method is called.
  4. In the PhotoDetailsPage, we must initialize the PhotoChooserTask in the class constructor: 

    // Initialize the PhotoChooserTask and assign the Completed handler.
    photoChooser = new PhotoChooserTask();
    photoChooser.Completed +=
        new EventHandler<PhotoResult>(photoChooserTask_Completed);

  5. In the OnSelectPhoto method (which also handles the SelectPhoto button tap) we display the photo chooser:

    // Start the PhotoChooser.
    photoChooser.Show();

    At this point, the PhotoChooserTask is displayed and the application itself is deactivated, to be reactivated when the chooser closes—hence the need to implement tombstoning. 
  6. When the photo chooser is closed, the Completed event is raised. When the application is fully reactivated, we handle the event as follows to set PhotoInfo properties based on the selected photo:

    // Get back the last PhotoInfo object in the collection,
    // which we just added.
    currentPhoto =
        App.ViewModel.Photos[App.ViewModel.Photos.Count - 1];

    if (e.TaskResult == TaskResult.OK)
    {
        // Set the file properties for the returned image.               
        currentPhoto.FileName =
            GetFileNameFromString(e.OriginalFileName);
        currentPhoto.ContentType =
            GetContentTypeFromFileName(currentPhoto.FileName);

        // Read remaining entity properties from the stream itself.
        currentPhoto.FileSize = (int)e.ChosenPhoto.Length;

        // Create a new image using the returned memory stream.
        BitmapImage imageFromStream =
            new System.Windows.Media.Imaging.BitmapImage();
        imageFromStream.SetSource(e.ChosenPhoto);

        // Set the height and width of the image.
        currentPhoto.Dimensions.Height =
            (short?)imageFromStream.PixelHeight;
        currentPhoto.Dimensions.Width =
            (short?)imageFromStream.PixelWidth;

        this.PhotoImage.Source = imageFromStream;

        // Reset the stream before we pass it to the service.
        e.ChosenPhoto.Position = 0;

        // Set the save stream for the selected photo stream.
        App.ViewModel.SetSaveStream(currentPhoto, e.ChosenPhoto, true,
            currentPhoto.ContentType, currentPhoto.FileName);
    }
    else
    {
        // Assume that the select photo operation was cancelled,
        // remove the added PhotoInfo and navigate back to the main page.
        App.ViewModel.Photos.Remove(currentPhoto);
        chooserCancelled = true;
    }

    Note that we use the image stream to create a new BitmapImage, which is only used to automatically set the height and width properties of the image.
  7. When the Save button in the PhotoDetailsPage is tapped, we register a handler for the SaveChangesCompleted event in the ViewModel, start the progress bar, and call SaveChanges in the ViewModel: 

    App.ViewModel.SaveChangesCompleted +=
        new MainViewModel.SaveChangesCompletedEventHandler(ViewModel_SaveChangesCompleted);

    App.ViewModel.SaveChanges();

    // Show the progress bar during the request.
    this.requestProgress.Visibility = Visibility.Visible;
    this.requestProgress.IsIndeterminate = true;

  8. In the ViewModel, we call BeginSaveChanges to send the media resource as a binary stream (along with any other pending PhotoInfo object updates) to the data service:

    // Send an insert or update request to the data service.           
    this._context.BeginSaveChanges(OnSaveChangesCompleted, null);

    When BeginSaveChanges is called, the client sends a POST request to create the media resource in the data service using the supplied stream. After it processes the stream, the data service creates an empty media link entry. The client then sends a subsequent MERGE request to update this new PhotoInfo entity with data from the client.
  9. In the following callback method, we call the EndSaveChanges method to get the response to the POST request generated when BeginSaveChanges was called:

    private void OnSaveChangesCompleted(IAsyncResult result)
    {
        EntityDescriptor entity = null;
        // Use the Dispatcher to ensure that the response is
        // marshaled back to the UI thread.
        Deployment.Current.Dispatcher.BeginInvoke(() =>
        {
            try
            {
                // Complete the save changes operation and display the response.
                DataServiceResponse response = _context.EndSaveChanges(result);

                // When we issue a POST request, the photo ID and
                // edit-media link are not updated on the client (a bug),
                // so we need to get the server values.
                if (response != null && response.Count() > 0)
                {
                    var operation = response.FirstOrDefault()
                        as ChangeOperationResponse;
                    entity = operation.Descriptor as EntityDescriptor;

                    var changedPhoto = entity.Entity as PhotoInfo;

                    if (changedPhoto != null && changedPhoto.PhotoId == 0)
                    {
                        // Verify that the entity was created correctly.
                        if (entity != null && entity.EditLink != null)
                        {
                            // Detach the new entity from the context.
                            _context.Detach(entity.Entity);

                            // Get the updated entity from the service.
                            _context.BeginExecute<PhotoInfo>(entity.EditLink,
                                OnExecuteCompleted, null);
                        }
                    }
                    else
                    {
                        // Raise the SaveChangesCompleted event.
                        if (SaveChangesCompleted != null)
                        {
                            SaveChangesCompleted(this, new AsyncCompletedEventArgs());
                        }
                    }
                }
            }
            catch (DataServiceRequestException ex)
            {
                // Display the error from the response.
                MessageBox.Show(ex.Message);
            }
            catch (InvalidOperationException ex)
            {
                MessageBox.Show(ex.GetBaseException().Message);
            }
        });
    }

    When creating a new photo, we also need to execute a query to get the newly created media link entry from the data service, after first detaching the new entity. We must do this because of a limitation in the WCF Data Services client POST behavior where it does not update the object on the client with the server-generated values or the edit-media link URI. To get the updated entity materialized correctly from the data service, we first detach the new entity and then call BeginExecute to get the new media link entry.
  10. When we handle the callback from the subsequent query execution, we assign the returned object to a new instance to properly materialize the new media link entry:

    private void OnExecuteCompleted(IAsyncResult result)
    {
        // Use the Dispatcher to ensure that the response is
        // marshaled back to the UI thread.
        Deployment.Current.Dispatcher.BeginInvoke(() =>
        {
            try
            {
                // Complete the query by assigning the returned
                // entity, which materializes the new instance
                // and attaches it to the context. We also need to assign the
                // new entity in the collection to the returned instance.
                PhotoInfo entity = _photos[_photos.Count - 1] =
                    _context.EndExecute<PhotoInfo>(result).FirstOrDefault();

                // Report that the media resource URI is updated.
                entity.ReportStreamUriUpdated();
            }
            catch (DataServiceQueryException ex)
            {
                MessageBox.Show(ex.Message);
            }
            finally
            {
                // Raise the SaveChangesCompleted event.
                if (SaveChangesCompleted != null)
                {
                    SaveChangesCompleted(this, new AsyncCompletedEventArgs());
                }
            }
        });
    }

    Because we detached the new media link entry, we must also assign the now tracked PhotoInfo object to the appropriate instance in the binding collection, otherwise the binding collection is out of sync with the context.
  11. Finally, the SaveChangesCompleted event is raised by the ViewModel, to inform the UI that it is OK to turn off the progress bar, which is handled in the following code in the PhotoDetailsPage:

    // Hide the progress bar now that save changes operation is complete.
    this.requestProgress.Visibility = Visibility.Collapsed;
    this.requestProgress.IsIndeterminate = false;

    // Unregister for the SaveChangedCompleted event now that we are done.
    App.ViewModel.SaveChangesCompleted -=
        new MainViewModel.SaveChangesCompletedEventHandler(ViewModel_SaveChangesCompleted);
    NavigationService.GoBack();

    Unfortunately, when the navigation returns to the MainPage, the binding again downloads the images. This is because of the application deactivation that occurs when the PhotoChooserTask is displayed. To avoid this re-download from the data service after tombstoning, you could instead use the GetReadStream method to get a stream that contains the image data and use it to create an image in isolated storage. Then, your binding could access the stored version instead of the web version of the image, but this is outside the scope of this sample.
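
Purely as an illustration of that out-of-scope suggestion, a hedged sketch of caching a media resource in isolated storage from the ViewModel might look like the following; the CachePhoto method name is hypothetical, and on the phone the asynchronous BeginGetReadStream/EndGetReadStream pair stands in for GetReadStream:

// Hypothetical sketch: download a photo's media resource and cache it in
// isolated storage so bindings can use the local copy after tombstoning.
private void CachePhoto(PhotoInfo photo)
{
    _context.BeginGetReadStream(photo, new DataServiceRequestArgs(), result =>
    {
        DataServiceStreamResponse response = _context.EndGetReadStream(result);
        using (IsolatedStorageFile store =
            IsolatedStorageFile.GetUserStoreForApplication())
        using (IsolatedStorageFileStream file = store.CreateFile(photo.FileName))
        {
            // Copy the response stream into the isolated storage file.
            byte[] buffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = response.Stream.Read(buffer, 0, buffer.Length)) > 0)
            {
                file.Write(buffer, 0, bytesRead);
            }
        }
    }, null);
}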

Glenn added the following in a 5/17/2011 New Resources for OData, Streaming, and Windows Phone post to his personal blog:

To support this blog post, I’ve also uploaded a new Windows Phone application (on which this blog post is based) to the MSDN Code Gallery:

OData Streaming Client for Windows Phone

Some additional streaming, XAML, binding resources are:

Revisited: Guidance on Binding Media Resource Streams to XAML Controls


<Return to section navigation list> 

AppFabric: Access Control, WIF and Service Bus

• Clemens Vasters (@clemensv) listed Service Bus May 2011 CTP Resources in a 5/19/2011 post:

Here’s an (incomplete) snapshot of what’s out there in terms of material for the new Service Bus CTP:

  • First read the release notes where we provide a summary of what’s new and what changed and also point out some areas of caution on parallel installs of the CTP and production SDKs.  
  • You can get the SDK bits from here. Get the right set of binaries for your machine (x64 or x86) and the right set of samples (CS or VB), and definitely get the user guide. We will have a NuGet package shortly that will allow you to integrate the Service Bus assembly and all necessary config incantations straight into your apps without even having the SDK on your machine.
  • The reference docs are located here. This is a CTP and the documentation is likewise in CTP state, so there are some gaps that we are working to fill.
  • My introduction to the CTP is on the new AppFabric blog here.
  • At the same location you’ll find David Ingham’s primer on Queues.
  • My TechEd talk on the new features is now posted on Channel 9.
  • We have a video series providing high-level overviews on Service Bus.
  • Neudesic’s Rick Garibay provides a community insider’s perspective on the new features. Matt Davey also likes what he sees.
  • The forums. Go there for questions or suggestions.

There’s more on the way. Let me know if you write a blog post about what you find out so I can link to it.


Bruce Kyle reported CTP for Single Sign-On Extends Windows Identity Foundation (WIF) v3.5 for SAML 2.0 in a 5/19/2011 post to the US ISV Evangelism blog:

Windows Identity Foundation (WIF) v3.5 Extension for SAML 2.0 Protocol enables .NET developers to build applications for the enterprise and government that require SAML 2.0 Protocol support and interoperate with identity services on a wide variety of platforms.

The community preview was announced at Tech•Ed and on the team blog at Announcing the WIF Extension for SAML 2.0 Protocol Community Technology Preview.

Key features of this extension include:

  • Service Provider initiated and Identity Provider initiated Web Single Sign-on (SSO) and Single Logout (SLO)
  • Support for the Redirect, POST, and Artifact bindings
  • All of the necessary components to create a SP-lite compliant service provider application

The CTP release that includes the extension and samples is now available here.

For more information, see the Windows Identity Foundation website.


• Vittorio Bertocci (@vibronet) described Adding a Custom OpenID Provider to ACS… with JUST ONE LINE of PowerShell Code in a 5/19/2011 post:

ACS offers you a variety of identity providers you can integrate with. Many of you will be familiar with the list shown by the management portal at the beginning of the add new identity provider wizard.

[Screenshot: the identity provider list in the ACS management portal]

Some of you may also know that ACS integrates with Yahoo! and Google using OpenID; however, from your point of view that doesn’t matter much: the details are abstracted away by ACS.

A less-known factlet is that ACS also supports integration with other OpenID providers; however, that capability is not exposed via the portal, so you can only set it up via the management APIs. We do have a tutorial which shows you how to do that step by step using myOpenID; you can find it here.

It’s not hard, that’s just OData after all, but it is still 6 printed pages. Now, how would you feel if I told you that if you use the ACS cmdlets you can do exactly the same in ONE line of PowerShell code? Mind == blown, right?

Here we go:

PS C:\Users\vittorio\Desktop> Add-IdentityProvider -Namespace "myacsnamespace" -ManagementKey "XXXXXXXX" -Type "Manual" -Name "myOpenID" -Protocol OpenId -SignInAddress "http://www.myopenid.com/server"

That’s it! With the -Type "Manual" switch I can explicitly create any IP type. In order to maintain my boast that one line of code is enough, I used the inlined syntax for passing the namespace and the management key directly. In the announcement post I first obtained a management token with Get-AcsManagementToken, assigned it to a variable and passed it along for all subsequent commands, which is more appropriate for longer scripts (hence from now on I’ll use it instead).

That did the equivalent of the tutorial; however, that’s not enough to use myOpenID with the application yet. We still need to create rules that will add some claims, or ACS won’t even send a token back. Luckily, that’s just another line of PowerShell code:

PS C:\Users\vittorio\Desktop> Add-Rule -MgmtToken $mgmtToken -GroupName "Default Rule Group for myRP" -IdentityProviderName "myOpenID"

Here I didn’t specify any input or output claim, which essentially results in a pass-through rule. NOW we’re ready! Let’s see what happens when I hit F5 on a plain vanilla Windows Azure web role project where I added the SecurityTokenDisplayControl (you can find the VS2010 version in the identity training kit labs about ACS).

[Screenshot: the sign-in page now listing the myOpenID option]

Oh hello, myOpenID option! It’s there, good sign. Let’s hit it.

[Screenshot: myOpenID authentication page]

As expected, we end up on the auth page of myOpenID. Once successfully authenticated, we get to the consent page:

[Screenshot: myOpenID consent page]

Note that the consent page does not mention any attributes; this fact will become relevant in a moment. Let’s click continue and…

[Screenshot: the application showing the token returned via myOpenID]

congratulations! You just added an arbitrary OpenID provider, and all it took was just 2 lines of PowerShell (without even touching your application or opening the ACS management portal).

Now, you may notice one thing about this transaction: we got an awfully low amount of information about the user, just the OpenID handle in fact. I am not very deep in OpenID, I’ll readily admit. Luckily Oren, Chao and Andrew from the ACS team came to the rescue (thank you guys) and explained that ACS gets claims in OpenID via Attribute Exchange, which myOpenID does not support (they use Simple Registration).

Bummer! I really wanted to show passing name and email. Luckily, adding another OpenID provider which supports AX is just a matter of hitting the up arrow a couple of times in the PowerShell ISE and changing the name and sign-in address accordingly. In the end I settled on http://hyves.net, since I was just recently in the Netherlands:

Add-IdentityProvider -MgmtToken $mgmtToken -Type "Manual" -Name "hyves.net OpenID" -Protocol OpenId -SignInAddress https://openid.hyves-api.nl/

Add-Rule -MgmtToken $mgmtToken -GroupName "Default Rule Group for myRP" -IdentityProviderName "hyves.net OpenID"

Another F5…

[Screenshot: the sign-in page now listing the hyves.net option]

…and the new option for hyves.net shows up. Good! Let’s hit it.

[Screenshot: hyves.net authentication page]

We get to their auth page. Let’s log in, we’ll get to the consent page.

[Screenshot: hyves.net consent page]

Now this looks more promising. Hyves.net asks permission to share the email address with the ACS endpoint, as expected. Let’s grant it and see what happens.

[Screenshot: the application showing the name and email claims]

Bingo! This time ACS (hence the RP) got the name and email claims, just like I wanted.

Soo, let me recap. I just enabled users from two arbitrary OpenID providers to authenticate with my application; and all it took was writing two commands in the window below to provision the first provider, then modifying those two commands for provisioning the second. We are talking minutes here, and just because I am not a very good typist nor an expert in PowerShell.

[Screenshot: the PowerShell window with the provisioning commands]

I know it’s bad form that I am the one saying it: but isn’t this really awesome? Come on, do something with the cmdlets too! I am super-curious to see what you guys will be able to accomplish.


The AppFabric Team announced New Sample Access Control Cmdlets Available in a 5/17/2011 post to the team blog:

As announced at TechEd North America, we have released PowerShell cmdlets which wrap the management API of the Windows Azure AppFabric Access Control service.

Read about this in the blog post by Vittorio Bertocci. [See below.]

This is a very useful addition to your Access Control service management options that makes it easy to streamline and automate management actions.

As a reminder, we recently released major enhancements to the Access Control service which make it easy to enable single sign-on (SSO) in web applications.

In addition, we have a promotion period in which we are not charging for Access Control usage for billing periods before January 1st, 2012.

If you have not signed up for Windows Azure AppFabric and would like to start using Access Control, be sure to take advantage of our free trial offer. Just click on the image below and get started today!


Vittorio Bertocci (@vibronet) posted Announcing: Sample ACS Cmdlets for the Windows Azure AppFabric Access Control Service on 5/17/2011:


Long story short: we are releasing on Codeplex a set of PowerShell cmdlets which wrap the management API of the Windows Azure AppFabric Access Control Service.

This is hopefully for the joy of our IT admin friends who want to add ACS to their arsenal, but I bet that this will make many developers happy as well. I’ve never really used PowerShell before, and I’ve been using those cmdlets like crazy since the very first internal drop!

You can use those new cmdlets to save repetitive provisioning processes in the form of PowerShell scripts, and consistently reuse them just by passing as parameters the targeted namespace and corresponding management key. You can use them for backing up your namespace settings to a file and restoring them at a later time, or for copying settings from one namespace to another. You can easily integrate ACS management in your existing scripts, or even just use the cmdlets to perform quick queries and adjustments to your namespace directly from the PowerShell ISE or the command line. In fact, you can do whatever you are used to doing with PowerShell cmdlets.


The initial set we are releasing today is not 100% exhaustive; for example, we don’t touch the service identities yet, but it already enables most of the scenarios we encountered. The command names are self-explanatory:

[Screenshot: the 23 ACS cmdlet names]

There are just 23 of them, and we might shrink the set further in the future. For example: do we really need an Add-DefaultPassThroughRules cmdlet, or can we just rely on Add-Rule? You tell us!
All cmdlets support Get-Help, including the -Full option, although things are not too verbose at the moment: in subsequent releases we’ll tidy things up, but we wanted to put this in your hands ASAP.

Now for the usual disclaimer: those cmdlets are distributed in source code form and are not part of the product. You should consider them a code sample, even if we provide you with a setup that will automatically compile and install them so that you can use them without ever opening the project in Visual Studio if you don’t want to. Of course we are happy to take your feedback, especially now that the package is still a bit rough around the edges, but you should always remember that those cmdlets are unsupported.
Other disclaimer: this release has been thoroughly tested only on Windows 7 x64 SP1, and quickly tested on Windows 7 x86 SP1 and Windows 2008 R2 x64 SP1. There are known problems on older platforms, which we’ll iron out moving forward. Think of this release as a preview.

That said, I am sure you’ll have a lot of fun using the cmdlets for exploring the features that ACS offers.

Some More Background, and One Example

If you want to manage your ACS namespaces, there’s no shortage of options: you can take advantage of the management portal (new in 2.0) or you can use the OData API exposed via management service.

In my team we make pretty heavy use of ACS, both for our internal tooling (for example managing content and events) and for the samples, demos and hands on lab we produce.
In order to enable the scenario we want to implement, at setup time all of those deliverables require us to go through fixed sets of configuration steps in ACS. For example, when you use the template in the Windows Azure Toolkit for Windows Phone 7 to generate an ACS-ready project, the initialization code needs to:

  • Add Google as an IP
  • Add Yahoo! as an IP
  • Remove any RP which may collide with the new one
  • Create the new RP
  • Get rid of all the rules which may already be in the rule group we are targeting
  • Generate all the pass-through rules for the various IPs

This is a relatively simple sequence of operations; other setups we have to do, like the enterprise subscription provisioning flow we follow when we handle a new FabrikamShipping subscriber, are WAY more complicated.
In order to automate those processes, we progressively populated a class library of C# wrappers for the ACS management APIs. Then we started including that library in the Setup folder of various projects, together with a console app which calls those wrappers in the sequence that the specific sample being set up dictates; for example, the sequence described above for the Windows Azure Toolkit for Windows Phone 7.
In that specific case, the setup solution (it’s C:\WAZToolkitForWP7\Setup\acs\AcsSetup.sln if you have the toolkit and you are curious) is almost 580 lines of code.

Now, multiply that by all the projects we have (for the newest ones see this post) and the number starts to look significant. Add to that the frequent requests we get from customers to extend the cmdlets we created for Windows Azure to other services in the Windows Azure platform, and you’ll see why we decided to create a set of cmdlets for ACS.
Quite frankly, it was also because it was low-hanging fruit for us. We already had our wrapper library for the ACS management API, and we had the cmdlets wrapper solution we used for generating the Windows Azure cmdlets; putting the two together was pretty straightforward.

Once we had the right set of cmdlets, we went ahead and re-created the sequence above in the form of a PowerShell script, and the improvement with respect to the AcsSetup.sln approach is impressive. Check it out:

# Coordinates of your namespace
$acsNamespace = "<yourNamespace>";
$mgmtKey = "<yourManagementKey>";

# Constants
$rpName = "WazMobileToolkit";
$groupName = "Default Rule Group for $rpName";
$signingSymmetricKey = "2RGYmQiFT9uslnxTTUn9MFr/nU+HeVwkmMJ6MwBNGuQ=";

$allowedIdentityProviders = @("Windows Live ID","Yahoo!", "Google");

# Include ACS Management SnapIn
Add-PSSnapin ACSManagementToolsSnapIn;

# Get the ACS management token for securing all subsequent API calls
$mgmtToken = Get-AcsManagementToken -namespace $acsNamespace -managementKey $mgmtKey;

# Configure Preconfigured Identity Providers
Write-Output "Add PreConfigured Identity Providers (Google and Yahoo!)...";
$googleIp = Add-IdentityProvider -mgmtToken $mgmtToken -type "Preconfigured" -preconfiguredIPType "Google";
$yahooIp = Add-IdentityProvider -mgmtToken $mgmtToken -type "Preconfigured" -preconfiguredIPType "Yahoo!";

# Remove RP (if it already exists)
Write-Output "Remove Relying Party ($rpName) if exists...";
Remove-RelyingParty -mgmtToken $mgmtToken -name $rpName;

# Remove All Rules In Group (if they already exist)
Write-Output "Remove All Rules In Group ($groupName) if exists...";
Get-Rules -mgmtToken $mgmtToken -groupName $groupName | ForEach-Object { Remove-Rule -mgmtToken $mgmtToken -rule $_ };

# Create Relying Party
Write-Output "Create Relying Party ($rpName)...";
$rp = Add-RelyingParty -mgmtToken $mgmtToken -name $rpName -realm "uri:wazmobiletoolkittest" -tokenFormat "SWT" -allowedIdentityProviders $allowedIdentityProviders -ruleGroup $groupName -signingSymmetricKey $signingSymmetricKey;

# Generate default pass-through rules
Write-Output "Create Default Passthrough Rules for the configured IPs ($allowedIdentityProviders)...";
$rp.IdentityProviders | ForEach-Object { Add-DefaultPassthroughRules -mgmtToken $mgmtToken -groupName $groupName -identityProviderName $_.Name }

Write-Output "Done";

Excluding the comments (but counting the Write-Output) those are 20 lines of very understandable code, which you can modify in notepad (typically just for the namespace and namespace key) and run with a simple double-click; or, if you are fancy, you can open it up in PowerShell ISE and execute it line by line if you want to. Does it show that I am excited about this?

Let’s play a bit more. Let’s say that you now want to add Facebook as an identity provider. First you’ll need to add some config values at the beginning of the script:

$fbAppIPName = "Facebook IP";
$fbAppId = "XXXXXXXXXXXXX";
$fbAppSecret = "XXXXXXXXXXXXX";

We can even be fancy and subordinate the Facebook setup to the existence of non-empty Facebook app coordinates in the script:

$facebookEnabled = (($fbAppId -ne "") -and ($fbAppSecret -ne ""));

Then we just add those few lines right where we create the preconfigured IPs:

# Configure Facebook App Identity Provider
if ($facebookEnabled)
{
    Write-Output "Add Facebook App Identity Provider ($fbAppIPName)...";
    # Remove FB App IP (if exists)
    Remove-IdentityProvider -mgmtToken $mgmtToken -name $fbAppIPName;
    # Add FB App IP
    $fbIp = Add-IdentityProvider -mgmtToken $mgmtToken -type "FacebookApp" -name $fbAppIPName -fbAppId $fbAppId -fbAppSecret $fbAppSecret;
}

Super straightforward; and the part that I love is that you can just test those commands one by one and see the results immediately, saving them in the script only when you are certain they do what you want them to do. For management tasks, it definitely beats fiddling with the debugger and the immediate window.

Want to play a bit more? Sure. One thing I often need to do is wipe a namespace clean after I do a demo during a session. Sometimes I have many sessions in a day, from time to time even back to back: as you can imagine, clicking around the portal to delete entities is neither fun nor very fast. But now I can just double-click the following script and I am done!

$acsNamespace = "holacsfederation";
$mgmtKey = "XXXXXXXXXXXXXXXXXXXX";
# Include ACS Management SnapIn
Add-PSSnapin ACSManagementToolsSnapIn;

$mgmtToken = Get-AcsManagementToken -namespace $acsNamespace -managementKey $mgmtKey;
Write-Output "Wiping IPs (and associated rules)";
Get-IdentityProviders -mgmtToken $mgmtToken | where {$_.SystemReserved -eq $false} | ForEach-Object { Remove-IdentityProvider -mgmtToken $mgmtToken -name $_.Name };
Write-Output "Wiping RPs (and associated rules)";
Get-RelyingParties -mgmtToken $mgmtToken | where {$_.SystemReserved -eq $false} | ForEach-Object { Remove-RelyingParty -mgmtToken $mgmtToken -name $_.Name };
Write-Output "Wiping Rule Groups";
Get-RuleGroups -mgmtToken $mgmtToken | where {$_.SystemReserved -eq $false} | ForEach-Object { Remove-RuleGroup -mgmtToken $mgmtToken -name $_.Name };

Here I delete all IPs (which will delete the associated rules), all RPs and all rule groups. All three commands have the same structure. Let’s pick the IP one:

Get-IdentityProviders -mgmtToken $mgmtToken |
    where {$_.SystemReserved -eq $false} |
        ForEach-Object { Remove-IdentityProvider -mgmtToken $mgmtToken -name $_.Name };

Get-IdentityProviders returns all IPs in the namespace; the where clause excludes the system-reserved ones (Windows Live ID), which we'd be unable to delete anyway; then ForEach-Object cycles through all the IPs and removes them. You've got to love PowerShell piping.

Well, this barely scratches the surface of what you can do with the ACS cmdlets. Please do check them out! We look forward to your feedback, and for once not just from developers!


The AppFabric Team posted An Introduction to Service Bus Queues on 5/17/2011:

In the new May CTP of Service Bus, we're adding a brand-new set of cloud-based, message-oriented-middleware technologies including reliable message queuing and durable publish/subscribe messaging. We'll walk through the full set of capabilities over a series of blog posts but I'm going to begin by focusing on the basic concepts of the message queuing feature. This post will explain the usefulness of queues and show how to write a simple queue-based application using Service Bus.

Let’s consider a scenario from the world of retail in which sales data from individual Point of Sale (POS) terminals needs to be routed to an inventory management system which uses that data to determine when stock needs to be replenished. I’m going to walk through a solution that uses Service Bus messaging for the communication between the terminals and the inventory management system as illustrated below:

Each POS terminal reports its sales data by sending messages to the DataCollectionQueue. These messages sit in this queue until they are received by the inventory management system. This pattern is often termed asynchronous messaging because the POS terminal doesn’t need to wait for a reply from the inventory management system to continue processing.

Why Queuing?

Before we look at the code required to set up this application, let’s consider the advantages of using a queue in this scenario rather than having the POS terminals talk directly (synchronously) to the inventory management system.

Temporal decoupling

With the asynchronous messaging pattern, producers and consumers need not be online at the same time. The messaging infrastructure reliably stores messages until the consuming party is ready to receive them. This allows the components of the distributed application to be disconnected, either voluntarily, e.g., for maintenance, or due to a component crash, without impacting the system as a whole. Furthermore, the consuming application may only need to come online during certain times of the day, for example, in this retail scenario, the inventory management system may only need to come online after the end of the business day.

Load leveling

In many applications system load varies over time whereas the processing time required for each unit of work is typically constant. Intermediating message producers and consumers with a queue means that the consuming application (the worker) only needs to be provisioned to accommodate average load rather than peak load. The depth of the queue will grow and contract as the incoming load varies. This directly saves money in terms of the amount of infrastructure required to service the application load.

Load balancing

As load increases, more worker processes can be added to read from the work queue. Each message is processed by only one of the worker processes. Furthermore, this pull-based load balancing allows for optimum utilization of the worker machines even if the worker machines differ in terms of processing power as they will pull messages at their own maximum rate. This pattern is often termed the competing consumer pattern.

Loose coupling

Using message queuing to intermediate between message producers and consumers provides an inherent loose coupling between the components. Since producers and consumers are not aware of each other, a consumer can be upgraded without having any effect on the producer. Furthermore, the messaging topology can evolve without impacting the existing endpoints – we’ll discuss this further in a later post when we talk about publish/subscribe.

Show me the Code

Alright, now let’s look at how to use Service Bus to build this application.

Signing up for a Service Bus account and subscription

Before you can begin working with the Service Bus, you'll first need to sign up for a Service Bus account within the Service Bus portal at http://portal.appfabriclabs.com/. You will be required to sign in with a Windows Live ID (WLID), which will be associated with your Service Bus account. Once you've done that, you can create a new Service Bus Subscription. In the future, whenever you log in with your WLID, you will have access to all of the Service Bus Subscriptions associated with your account.

Creating a namespace

Once you have a Subscription in place, you can create a new service namespace. You’ll need to give your new service namespace a unique name across all Service Bus accounts. Each service namespace acts as a container for a set of Service Bus entities. The screenshot below illustrates what this page looks like when creating the “ingham-blog” service namespace.

Further details regarding account setup and namespace creation can be found in the User Guide accompanying the May CTP release.


Using the Service Bus

To use the Service Bus namespace, an application needs to reference the AppFabric Service Bus DLLs, namely Microsoft.ServiceBus.dll and Microsoft.ServiceBus.Messaging.dll. You can find these in the SDK that can be downloaded from the CTP download page.

Creating the Queue

Management operations for Service Bus messaging entities (queues and publish/subscribe topics) are performed via the ServiceBusNamespaceClient which is constructed with the base address of the Service Bus namespace and the user credentials. The ServiceBusNamespaceClient provides methods to create, enumerate and delete messaging entities. The snippet below shows how the ServiceBusNamespaceClient is used to create the DataCollectionQueue.

Uri baseAddress = ServiceBusEnvironment.CreateServiceUri("sb", "ingham-blog", string.Empty);
string name = "owner";
string key = "abcdefghijklmopqrstuvwxyz";

ServiceBusNamespaceClient namespaceClient = new ServiceBusNamespaceClient(
    baseAddress, TransportClientCredentialBase.CreateSharedSecretCredential(name, key));

namespaceClient.CreateQueue("DataCollectionQueue");

Note that there are overloads of the CreateQueue method that allow properties of the queue to be tuned, for example, to set the default time-to-live to be applied to messages sent to the queue.

Sending Messages to the Queue

For runtime operations on Service Bus entities, i.e., sending and receiving messages, an application first needs to create a MessagingFactory. The base address of the ServiceBus namespace and the user credentials are required.

Uri baseAddress = ServiceBusEnvironment.CreateServiceUri("sb", "ingham-blog", string.Empty);
string name = "owner";
string key = "abcdefghijklmopqrstuvwxyz";

MessagingFactory factory = MessagingFactory.Create(
    baseAddress, TransportClientCredentialBase.CreateSharedSecretCredential(name, key));

From the factory, a QueueClient is created for the particular queue of interest, in our case, the DataCollectionQueue.

QueueClient queueClient = factory.CreateQueueClient("DataCollectionQueue");

A MessageSender is created from the QueueClient to perform the send operations.

MessageSender ms = queueClient.CreateSender();

Messages sent to, and received from, Service Bus queues are instances of the BrokeredMessage class which consists of a set of standard properties (such as Label and TimeToLive), a dictionary that is used to hold application properties, and a body of arbitrary application data. An application can set the body by passing in any serializable object into CreateMessage (the example below passes in a SalesData object representing the sales data from the POS terminal) which will use the DataContractSerializer to serialize the object. Alternatively, a System.IO.Stream can be provided.
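For reference, here is a minimal sketch of what a type like SalesData could look like. The class name comes from the example below, but its members are invented here purely for illustration; any serializable type will do.

using System.Runtime.Serialization;

// Hypothetical POS payload; the DataContract/DataMember attributes let the
// DataContractSerializer turn an instance into the message body.
[DataContract]
public class SalesData
{
    [DataMember]
    public string ItemId { get; set; }

    [DataMember]
    public int Quantity { get; set; }

    [DataMember]
    public decimal UnitPrice { get; set; }
}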

BrokeredMessage bm = BrokeredMessage.CreateMessage(salesData);

bm.Label = "SalesReport";
bm.Properties["StoreName"] = "Redmond";
bm.Properties["MachineID"] = "POS_1";

ms.Send(bm);

Receiving Messages from the Queue

Messages are received from the queue using a MessageReceiver which is also created from the QueueClient. MessageReceivers can work in two different modes, named ReceiveAndDelete and PeekLock. The mode is set when the MessageReceiver is created, as a parameter to the CreateReceiver operation.

When using the ReceiveAndDelete mode, receive is a single-shot operation, that is, when Service Bus receives the request, it marks the message as being consumed and returns it to the application. ReceiveAndDelete mode is the simplest model and works best for scenarios in which the application can tolerate not processing a message in the event of a failure. To understand this, consider a scenario in which the consumer issues the receive request and then crashes before processing the message. Since Service Bus will have marked the message as consumed, when the application restarts and begins consuming messages again it will have missed the message that was received prior to the crash.
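As a rough sketch, a ReceiveAndDelete receive looks something like this, assuming (as described above) that the mode is passed to CreateReceiver; the ReceiveMode enum name here is an assumption, so check the CTP SDK for the exact type, and ProcessMessage is the helper used in the PeekLock snippet further down.

// Single-shot receive: the message is marked as consumed as soon as it is returned.
MessageReceiver radReceiver = queueClient.CreateReceiver(ReceiveMode.ReceiveAndDelete);
BrokeredMessage message = radReceiver.Receive();
ProcessMessage(message); // if processing crashes here, the message is gone for good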

In PeekLock mode, receive becomes a two stage operation which makes it possible to support applications that cannot tolerate missing messages. When Service Bus receives the request, it finds the next message to be consumed, locks it to prevent other consumers receiving it, and then returns it to the application. After the application finishes processing the message (or stores it reliably for future processing), it completes the second stage of the receive process by calling Complete on the received message. When Service Bus sees the Complete, it will mark the message as being consumed.

Two other outcomes are possible. Firstly, if the application is unable to process the message for some reason then it can call Abandon on the received message (instead of Complete). This will cause Service Bus to unlock the message and make it available to be received again, either by the same consumer or by another competing consumer. Secondly, there is a timeout associated with the lock and if the application fails to process the message before the lock timeout expires (e.g., if the application crashes), then Service Bus will unlock the message and make it available to be received again.

One thing to note here is that in the event that the application crashes after processing the message but before the Complete request was issued, the message will be redelivered to the application when it restarts. This is often termed At Least Once processing, that is, each message will be processed at least once but in certain situations the same message may be redelivered. If the scenario cannot tolerate duplicate processing, then additional logic is required in the application to detect duplicates, which can be achieved based upon the MessageId property of the message, which will remain constant across delivery attempts. This is termed Exactly Once processing.
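As a hedged illustration (this is application code, not part of the Service Bus API), duplicate detection keyed on MessageId might look like the following. It assumes mr is the PeekLock MessageReceiver created in the snippet below, and uses an in-memory set (System.Collections.Generic) that a real system would replace with durable storage.

// Naive duplicate detection: remember the MessageIds we have already processed.
// A production system would persist these (e.g., in a database) and expire old entries.
HashSet<string> processedMessageIds = new HashSet<string>();

BrokeredMessage received = mr.Receive();
if (received != null)
{
    if (processedMessageIds.Add(received.MessageId))
    {
        ProcessMessage(received);   // first delivery of this MessageId
    }
    // else: a redelivered duplicate - skip the work but still complete it
    received.Complete();
}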

Back to the code, the snippet below illustrates how a message can be received and processed using the PeekLock mode which is the default if no ReceiveMode is explicitly provided.

MessageReceiver mr = queueClient.CreateReceiver();
BrokeredMessage receivedMessage = mr.Receive();

try
{
    ProcessMessage(receivedMessage);
    receivedMessage.Complete();
}
catch (Exception e)
{
    receivedMessage.Abandon();
}

Wrapping up and request for feedback

Hopefully this post has shown you how to get started with the queuing feature being introduced in the new May CTP of Service Bus. We've only really just scratched the surface here; we'll go into more depth in future posts.

Finally, remember one of the main goals of our CTP release is to get feedback on the service. We’re interested to hear what you think of the Service Bus messaging features.

We’re particularly keen to get your opinion of the API, for example, do you think it makes sense to have PeekLock be the default mode for receivers? We have a survey for that question.

For other suggestions, critique, praise, or questions, please let us know at the AppFabric CTP forum. Your feedback will help us improve the service for you and other users like you.


The AppFabric Team announced New Service Bus and AppFabric Videos Available in a 5/17/2011 post:

As promised, we have released new videos as part of our Windows Azure AppFabric Learning Series available on CodePlex.

The new videos cover the new capabilities that enable advanced pub/sub messaging in Service Bus, which have been released as a CTP, and the capabilities that enable you to compose and manage applications with AppFabric, which were announced at TechEd.

More videos and accompanying code samples will be released soon, so keep checking back!

The following videos and code samples are currently available or are coming soon:

Name Link to Sample Code Link to Video
Windows Azure AppFabric - An Introduction to Windows Azure AppFabric N/A Click here
Composing and Managing Applications with Windows Azure AppFabric N/A Click here
Windows Azure AppFabric - An Introduction to Service Bus N/A Click here
Windows Azure AppFabric - An Introduction to Service Bus Relay N/A Click here
Windows Azure AppFabric - An Introduction to Service Bus Queue N/A Click here
Windows Azure AppFabric - An Introduction to Service Bus Topics N/A Click here
Windows Azure AppFabric - How to use Service Bus Relay Coming Soon Coming Soon
Windows Azure AppFabric - How to use Service Bus Queues Coming Soon Coming Soon
Windows Azure AppFabric - How to use Service Bus Topics Coming Soon Coming Soon
Windows Azure AppFabric Cache - Introduction to the Windows Azure AppFabric Cache N/A Click here
Windows Azure AppFabric Cache - How to Set Up and Deploy a Simple Cache Click here Click here
Windows Azure AppFabric Cache - Caching SQL Azure Data Click here Click here
Windows Azure AppFabric Cache - Caching Session State Click here Click here

We hope the learning series helps you better understand what AppFabric is and how to use it.

The enhancements to Service Bus are available on our LABS previews environment at: http://portal.appfabriclabs.com/. So be sure to log in and start checking out these new capabilities. Please remember that there are no costs associated with using the CTPs, but they are not backed by any SLA.

If you haven't signed up for Windows Azure AppFabric you can take advantage of our free trial offer. Just click on the image below and get started today!


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

The Windows Azure Connect Team explained Windows Azure Connect–Certificate Based Endpoint Activation in a 5/6/2011 post (missed when posted):

If you have deployed a Windows Azure Connect endpoint before, you know that the endpoint is required to present an activation token (which you can get from the Windows Azure Management Portal) for activation. This activation token can be specified in the .cscfg file for Windows Azure Roles (this can also be done via Visual Studio). For the endpoints that live on your corporate network (local endpoints), the activation token is part of the install link. We are happy that you like the ease of use and simplicity of this approach. We also heard some of you request an option for secure activation.

To address this feedback, we introduced certificate-based activation in our latest CTP Refresh. You can now choose to use the existing activation model (token based only - this is the default) or certificate-based activation (token + certificate). In this refresh, certificate-based activation is only available for local endpoints.

If you already have PKI and/or have a mechanism to securely distribute X509 certificates (private + public key pairs) to your endpoints within your organization, you are just a few steps away from benefiting from this new feature:

1. On your corporate network, pick a Certificate Issuer that issues certificates to endpoints via manual/auto-enrollment policies.

secactivationcert

For example, the above snapshot shows a machine that receives a certificate from the issuer with CN=SecIssuer. In this case, the public key (.cer file) of CN=SecIssuer will need to be exported and saved for step 3 below.

Note: If you have deeper PKI hierarchy (example: CN=RootIssuer -> CN=SecIssuer -> CN=myendpoint), make sure you export the public key of the direct/immediate issuer i.e., CN=SecIssuer.

2. From the Windows Azure Management Portal, click on "Activation Options" as shown in the snapshot below.

image

3. This will bring up the certificate endpoint activation dialog (shown in the snapshot below):

a. Check the box that says “Require endpoints to use trusted certificate for activation”.

b. Click on “Add” button and choose the certificate (.cer file with public key only) file from Step 1 above.

image

4. At this point, all the new endpoints (excluding Azure roles) in this subscription will be required to prove their strong identity via the possession of a certificate issued by the issuer in step 1 above. The endpoint must have private keys for this certificate, but there is no requirement for the subject name to match the endpoint’s FQDN or hostname (example: CN=myendpoint can be used on a machine with name ContosoHost.Corp.AdventureWorks.com).

5. If you run into activation issues with this model, you can troubleshoot by checking the event viewer for any error messages such as below:

image

a. Verify that you have a certificate in the Local Computer\Personal\Certificates store. This certificate should have been directly issued by the issuer in step 1 above.

b. Verify that there is a private key for this certificate.
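If you would rather script checks (a) and (b) than click through the Certificates MMC snap-in, a small sketch along these lines can confirm that a matching certificate with a private key is present. The issuer name below is the CN=SecIssuer example from step 1; substitute your own.

using System;
using System.Security.Cryptography.X509Certificates;

class EndpointCertCheck
{
    static void Main()
    {
        // Local Computer\Personal store, where the activation certificate must live.
        var store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
        store.Open(OpenFlags.ReadOnly);

        foreach (X509Certificate2 cert in store.Certificates)
        {
            // Match certificates issued by the example issuer from step 1.
            if (cert.Issuer.Contains("CN=SecIssuer"))
            {
                Console.WriteLine("{0} - private key present: {1}",
                    cert.Subject, cert.HasPrivateKey);
            }
        }

        store.Close();
    }
}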


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Wade Wegner (@wadewegner) announced VB.NET and Bug Fixes for Windows Azure Toolkit for Windows Phone 7 (v1.2.1) on 5/19/2011:

Today's release of the Windows Azure Toolkit for Windows Phone 7 (v1.2.1) has two important updates:

  • Support for Visual Basic
  • Bug fixes

You can download the latest drop here: http://watoolkitwp7.codeplex.com/releases/view/61952

Visual Basic

When we released on Monday we did not include updated project templates for Visual Basic – turns out that creating these project templates takes a long time, and we did not have enough cycles to complete it in time. However, we've had a few additional days to finish this work, and we're now providing Visual Basic support – this means you can use this toolkit with Visual Basic and benefit from all the updates provided as part of the 1.2 release.


Bug Fixes

We also fixed a few bugs in this release.  Many thanks to those of you who helped by reporting them so quickly!

  • [Fixed] The VS on-screen documentation for "Windows Phone 7 Empty Cloud App" template is missing.
  • [Fixed] Modify VSIX installation scripts to avoid errors when users have PowerShell Profile Scripts.
  • [Fixed] The CopyLocal property of the Microsoft.IdentityModel assembly is set to false in the sample and project template solutions. It has now been set to true.
  • When creating a new project using the project template wizard:
    • [Fixed] If the user sets an invalid ACS Namespace and Management Key, the application shows a non-descriptive error and generates an inconsistent solution.
    • [Fixed] Using a real Azure Storage account and not selecting the Use HTTPS option makes the generated solution fail when running it.
    • [Fixed] An error is displayed when creating a new ‘Windows Phone Cloud Application’ project in Visual Web Developer 2010 Express.
    • [Fixed] An error is displayed when creating a new ‘Windows Phone Cloud Application’ project in Visual Studio 2010 Express for Windows Phone.
    • [Fixed] The value of the PushServiceName setting is not replaced in the code generated with the project templates.
  • [Fixed] The ACS sample was shipped with an invalid configuration and wrong instructions in the Readme document to configure them appropriately.
  • [Fixed] The ApplePushNotification class is not used in the samples nor by the code generated with the project templates.
  • [Fixed] The solutions for VWD and VPD do not build since some projects are missing.
  • [Fixed] The error message shown in the ‘login’ page when there are connectivity issues is non-descriptive.

Again, many thanks to those of you who helped by reporting these bugs.  Keep the feedback coming!


Brent Stineman (@BrentCodeMonkey) posted Windows Azure Endpoints – Overview on 5/18/2011:

You ever have those days where you wake up, look around, and wonder how you got where you are? Or better yet, what the heck you're doing there? This pretty much sums up the last 6 months or so of my career. I have recently realized that I've been doing so much Windows Azure evangelizing (both internally and externally), as well as working on actual projects and potential projects, that I haven't written a single technical blog post in almost 5 months (since my Azure App Fabric Management Service piece in November).

So I have a digital copy of Tron Legacy running in the background as I sit down to write a bit on something that I have found confusing. And judging by the questions I've been seeing on the MSDN forums, I'm likely not alone. So I thought I'd share my findings about Windows Azure Endpoints.

For every ending, there is a beginning

Originally, back in the CTP days, Windows Azure did not allow you to connect directly to a worker role. The “disconnected communications” model was pure. The problem with purity is that it’s often limiting. So when the platform moved from CTP to its production form in November of ’09, they introduced a new concept, internal endpoints.

Before this change, input endpoints were how we declared the ports that web roles would expose indirectly via a load balancer. Now with internal endpoints, we could declare one or more ports on which we could directly access the individual instances of our services. And they would work for ANY type of role.

It was a bit of a game-changer. We were now able to reduce latency by allowing direct communication between roles. It was a nice and necessary addition.

What does internal mean?

The question of "what does internal mean" is something I struggled with. I couldn't find a clear answer on whether internal endpoints could be reached from outside of Windows Azure, or more specifically outside of the boundary created by a hosted service.

We're told that when you create a Windows Azure Service, all instances of that service are put on their own branch within the datacenter network to isolate them from other tenants of the datacenter. The Windows Azure load balancer sits in front of this network branch, and handles the routing of all traffic to the input endpoints that have been declared by that service. This is also why the ports your services' input endpoints were configured for (the external-facing ones) and the ports that your service is hosted on (behind the firewall) can be different.

So where does this leave internal endpoints? I ran some tests and I've asked around, and unfortunately I was not able to come to a definitive answer. What appears to be happening is that an internal endpoint is not visible outside of Windows Azure because it's not managed by the load balancer. Therefore the internal endpoints simply aren't visible to anything on the public side of the Azure load balancer. Even if you know the port number to connect to, the IP addresses assigned to the instances aren't known on the public side of the Azure load balancer.
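To make that concrete, here is a hedged sketch of how one instance can enumerate the internal endpoints of its peers via the service runtime; the role name "WorkerRole" and endpoint name "InternalService" are placeholders. Note that you are addressing a specific instance's IP and port, with no load balancer in the path.

using System;
using System.Net;
using Microsoft.WindowsAzure.ServiceRuntime;

// Enumerate every instance of a hypothetical "WorkerRole" and read the IP:port
// of the internal endpoint it declares as "InternalService".
foreach (RoleInstance instance in RoleEnvironment.Roles["WorkerRole"].Instances)
{
    IPEndPoint endpoint = instance.InstanceEndpoints["InternalService"].IPEndpoint;

    // The address and port identify that one instance directly; the Azure
    // load balancer is not involved for internal endpoints.
    Console.WriteLine("{0} -> {1}:{2}", instance.Id, endpoint.Address, endpoint.Port);
}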

But is this secure enough?

It turns out that this isn't the end of how the endpoints are secured. I ran across a great blog post on what the Azure Fabric does when it configures the instance VMs. Namely, it configures the internal firewall.

I really want to salute Michael Washam for his info on the firewall. This wasn’t information I would have gone looking in the guest VM for. It’s also information I hadn’t seen anywhere else. In his post he discusses how the role’s firewall is set for restricting access to only the roles and their IP’s within a Windows Azure service. Presumably, these settings are automatically managed by the fabric agent as roles are spun up and down during normal service operation.

This would be important because, as you recall from earlier, I discussed how each branch of the network is protected. So the firewall configuration complements this by further ensuring that traffic is restricted. Now if we could just apply the same filters to Azure storage services.

But we're missing something here…

But this all has me thinking. There are likely a couple of items missing yet. Windows Azure is a PaaS offering. It's supposed to take away the mundane tasks we have to do today and replace them with features/services that do these tasks for us. If so, there are two things missing.

As Michael Washam points out, if you’re spinning up internal ports yourself, you have to configure the firewall manually. IMHO, there should be a way to register connections with the Azure Fabric at run-time and have them auto-magically configured. Admittedly, this is a bit of an edge case and not something that’s being asked for.

The more important need that I see is one that is illustrated by a recent MSDN article, Load Balancing Private Endpoints on Worker Roles. In his article, Joseph Fultz talks about how they needed a “private” service but also one that’s load balanced. The internal endpoints were a perfect solution for privacy, but not load balancing. In Joseph’s article, he explains a couple of approaches to creating a load balanced internal service.

These approaches are valid and work well. But having to do this seems to fly in the face of what PaaS solutions are about. I don't want to have to build my own load-balanced solution just because I want to keep a service private. It's PaaS; I shouldn't have to do this. To me, this shortcoming is almost as high on the feature list as fixed IPs.

To this end, I have created my own entry in the MyGreatWindowsAzureIdea list. I have proposed being able to flag a load balanced input endpoint as either public or private. If it’s public, you get behavior as we see if today. If you flag it as private, then it’s accessible only to instances that are within its service boundary.

Learning more

I normally try to make sure I’m adding to the discussion on any topic I cover in my blog. Unfortunately, with this topic I’m not adding anything new as much as aggregating several sources of information that I found scattered around the web. So I figured I’d leave this update with a list of some other sources on Azure Endpoints I recommend.

Until next time!

PS – It's often asked how many endpoints you can have. The simple answer is that any role can have any combination of input and internal endpoints, up to a maximum of 5 endpoints. Additionally, if you plan on enabling RDP, this will consume one of those endpoints.

Update: My post-script isn’t entirely correct (but still works if you just want to keep things simple), so my buddy David did a new post just moments after this that clarifies some additional info about the 5 endpoints per role limitation. He managed to get 50 total endpoints!


Elizabeth White asserted “Open Betas Provide Visibility into Azure Performance, Storage and Subscription Utilization” as a deck for her Cloud Computing: Quest Software Released Tools for Microsoft Azure post of 5/18/2011:

image "We identified three of the biggest challenges users currently experience working with Azure, and want to support our customers who are making investments in the cloud by providing tools that address those challenges quickly and easily," said Douglas Chrystall, chief architect at Quest Software.

Quest on Monday released three open beta tools that help simplify Microsoft Azure infrastructure and account management: Spotlight on Azure, Quest Cloud Storage Manager for Azure, and Quest Cloud Subscription Manager for Azure.

As more organizations incorporate cloud solutions into their IT infrastructures, they not only face the inherent complexities of dealing with combinations of on-premise, virtual and cloud architectures, but also the challenge of ensuring they get the most from their cloud investments. Being able to monitor cost and resource utilization, cloud storage, and application performance is critical to maximizing an investment in cloud technology; but, unlike on-premise architectures, these tasks currently are difficult to perform in Azure. Quest's new tools address these critical areas, making it easier to conduct Azure cost analysis, and smoothing the overall transition from on-premise to cloud-based applications.

Specific capabilities offered by the beta releases include:

  • Spotlight on Azure - provides in-depth monitoring and performance diagnostics of Azure environments from the individual resources level up to the application level
  • Quest Cloud Storage Manager for Azure - provides file and storage management that enables users to easily access multiple storage accounts in Azure using a simple GUI interface
  • Quest Cloud Subscription Manager for Azure - drills into Azure subscription data, providing a detailed view of resource utilization with customized reporting and project mapping

Douglas Chrystall noted "Through our own research, we learned that over 90 percent of organizations are overspending on the cloud, and Azure developers don't have the visibility they need into things like storage and performance to properly manage their cloud environments. With Quest's Azure management offerings, users will gain that visibility into their Azure infrastructure, be able to easily map their cloud resources, and understand exactly where their dollars are being spent."


Brian Loesgen offered developers a Reminder: One Role Instance in Windows Azure does NOT give you High Availability in a 5/18/2011 post:

My team had an incident recently where an ISV's application went down at a very inopportune time. Upon looking into it, we found that they only had a single role instance running. So, it seemed like a good idea to do a post to remind people that you need at least two role instances running in order to have high availability.

Cloud providers, including Windows Azure, run on commodity hardware. Hardware WILL fail. At Microsoft, we have Service Level Agreements (SLAs) in place about accessibility of our services, and we incur financial penalties if we fail to meet those service levels.

For Azure compute (Web and Worker Roles), when you create a new project, the service configuration file will default to a single role instance. I'm assuming this was done to preserve client resources when running in the emulator, but that's just a guess. It works just fine for development, but if you deploy and only have a single role instance, you have a single point of failure, and the SLA will not apply. You can specify how many role instances you want either through the portal, by changing/uploading a new ServiceConfiguration.cscfg file, or using the Service Management API.
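As a belt-and-braces check, a role can even inspect its own instance count at startup and flag the single-instance case. Here is a minimal sketch using the service runtime; the trace message is just an example.

using System.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

// Warn (e.g., from OnStart) when this role is running as a single instance,
// because the compute SLA requires at least two instances per role.
int instanceCount = RoleEnvironment.CurrentRoleInstance.Role.Instances.Count;
if (instanceCount < 2)
{
    Trace.TraceWarning(
        "Role {0} is running {1} instance(s): no high availability and no SLA coverage.",
        RoleEnvironment.CurrentRoleInstance.Role.Name, instanceCount);
}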

Of course you are incurring a cost for every role instance, but compare that to the cost of an outage, and assess the risk. In some cases, outages can be tolerated (e.g., queued async operations); in others (e.g., a customer-facing portal), they can't.


Mary Jo Foley (@maryjofoley) reported Microsoft chases Amazon in taking SAP to the cloud in a 5/18/2011 post to ZDNet’s All About Microsoft blog:

On May 18, both Microsoft and its cloud rival Amazon made dueling announcements involving SAP and their respective cloud strategies.

The difference? Microsoft’s SAP announcement is full of “future plans,” while Amazon is offering certain SAP wares today via the Amazon Web Services platform.

Amazon and SAP announced at the Sapphire conference that the pair will be providing a certified suite of SAP's enterprise software (other than the SAP ERP products) running on the Amazon cloud.

Microsoft and SAP announced plans for "innovations to people-centric applications development for SAP software as well as virtualization and cloud computing without disruption of customer IT landscapes." Somewhat more specifically, the pair said they will provide integration between SAP's "upcoming landscape management software," Microsoft System Center and Windows Server Hyper-V — a k a, Microsoft's private-cloud stack. (SAP's landscape management software has nothing to do with gardening, in case you were wondering. It is SAP's name for its public and private cloud provisioning/management technologies.)

“In the future, SAP and Microsoft plan to continue their collaboration to support deployment of SAP applications on the Windows Azure Platform,” the SAP press release added.

There is no timetable as to when SAP and Microsoft plan to deliver any of this private or public cloud management/integration.

There was some slightly less cloudy Microsoft and SAP news at Sapphire today, however. Microsoft and SAP are extending their existing Duet partnership. Duet is a jointly developed Microsoft-SAP product that integrates SAP applications’ business processes and Microsoft Office and SharePoint.

Microsoft and SAP said they are connecting their respective development platforms, integrating future versions of Visual Studio and the .Net Framework development tools with SAP's Business Suite of applications. (Maybe that will happen next year, with Visual Studio v.Next, a k a Visual Studio 2012?) Again, no timetables were provided. At some point SAP also plans to extend its SAP NetWeaver Gateway with a new software development kit for Windows Azure, enabling .Net developers to create private or public Azure apps that connect to on-premises SAP systems without leaving their development environment, according to the SAP press release.


The Windows Azure Team posted How to Deploy a Hadoop Cluster on Windows Azure on 5/18/2011:

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing. If you're looking for guidance on deploying a Hadoop cluster on Windows Azure, then be sure to check out the latest blog post, "Hadoop in Azure", by Microsoft Principal Architect Mario Kosmiskas.

In this post, he demonstrates how to create a typical cluster with a Name Node, a Job Tracker and a customizable number of Slaves. He also outlines how to dynamically change the number of Slaves using the Windows Azure Management Portal.

Learn more about Hadoop here.


Yves Goeleven (@yvesgoeleven) continued his series with Building Global Web Applications With the Windows Azure Platform – Dynamic Work Allocation and Scale out on 5/17/2011:

image Today I would like to finish the discussion on ‘understanding capacity’ for my ‘Building Global Web Applications With the Windows Azure Platform’ series, by talking about the holy grail of cloud capacity management: Dynamic work allocation and scale out.

The basic idea is simple: keep all roles at full utilization before scaling out.

To make optimal use of the capacity that you're renting from your cloud provider, you could design your system in such a way that it is aware of its own usage patterns and acts upon them. For example, if role 3 is running too many CPU-intensive jobs and role 1 has excess capacity, the system could decide to move some CPU-intensive workloads from role 3 to role 1. The system repeats these steps for all workload types and tries to maintain a balance below 80% overall capacity before deciding to scale out.

It turns out, though, that implementing this is not so straightforward…

First of all, you need to be able to move workloads around at runtime. Every web and worker role needs to be designed in such a way that it can dynamically load workloads from some medium and start executing them. But it also needs to be able to unload a workload; in effect, your web or worker role becomes nothing more than an agent that administers the workloads on the machine instead of executing them itself.

In the .NET environment this means that you need to start managing separate AppDomains or processes for each workload. Here you can find a sample where I implemented a worker role that can load other workloads dynamically from blob storage into a separate AppDomain, in response to a command that you can send from a console application. This sort of proves that moving workloads around should be technically possible.
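Yves links to a full sample; purely as a hedged illustration of the idea (the assembly name, type name and IWorkload interface below are made up), loading a workload into its own AppDomain and unloading it again boils down to something like this:

using System;

// Host a workload in an isolated AppDomain so the agent can load and unload it at will.
AppDomain workloadDomain = AppDomain.CreateDomain("CpuIntensiveWorkload");

// The workload type must derive from MarshalByRefObject so we get a cross-domain
// proxy back rather than a copy of the object.
IWorkload workload = (IWorkload)workloadDomain.CreateInstanceAndUnwrap(
    "Acme.Workloads",                        // hypothetical assembly name
    "Acme.Workloads.CpuIntensiveWorkload");  // hypothetical type name

workload.Start();
// ... later, when the balancer decides this workload should move elsewhere ...
workload.Stop();
AppDomain.Unload(workloadDomain);            // tears the workload down and frees its resources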

Even though it is technically quite feasible to move workloads around, the hardest part is the business logic that decides what workloads should be moved, when and where to. You need to take quite a few things into account!

  • Every workload consumes a certain amount of CPU, memory and bandwidth, but these metrics cannot be derived from traditional monitoring information, as that only shows overall usage. So you need to define and compute additional metrics for each individual workload in order to know what the impact of moving that specific workload would be.
  • Workloads tend to be rather temporal as well, so heavy CPU usage right now does not mean it will consume the same amount in 5 seconds. So simply moving workloads around when you detect a problem is not going to cut it.
  • In other words, you need to find ways to accurately predict future usage based on past metrics and user supplied information.
  • You need to ensure a workload is moved well before it actually would start consuming resources as moving the workload itself takes time as well.
  • These same problems repeat themselves on the target side, where you would move the workload to as that role’s utilization is in continuous flux as well.
  • I'm only touching the tip of the iceberg here; there is much more to it…

Lots of hard work… but in time you will have to go through it. Please keep in mind that this is the way most utility companies make their (enormous amounts of) money: by continuously looking for more accurate ways to use and resell excess capacity.

Alright, now that you understand the concept of capacity and how it can help you keep your costs down, it is time to move to the next section of this series: how to make your application globally available.


<Return to section navigation list> 

Visual Studio LightSwitch and Entity Framework 4+

Andrew Peters and Rowan Miller announced EF Power Tools CTP1 Released in a 5/18/2011 post to the ADO.NET Team blog:

Last month we announced the RTW of Entity Framework 4.1 (Magic Unicorn Edition). As we worked on the EF 4.1 release we had a series of Community Technology Previews (CTPs) and it was great to have consistent feedback coming in from you all. So, with EF 4.1 released we thought it was time to get some more pre-release stuff in your hands…

Today we are releasing a preview of some Power Tools for EF 4.1 that integrate with Visual Studio. This first preview of the EF Power Tools is focused on providing design-time tools for Code First development.

Where Do I Get It?

EF Power Tools CTP1 is available on the Visual Studio Gallery

You can also install the power tools directly from Visual Studio by selecting ‘Tools –> Extension Manager…’ then searching for “Entity Framework Power Tools” in the Online Gallery.

Support?

This release is a preview of features that we are considering for a future release and is designed to allow you to provide feedback on the design of these features. EF Power Tools CTP1 is not intended or licensed for use in production. If you have questions please use the “Q & A” tab on the Entity Framework Power Tools Visual Studio Gallery page.

What Does It Add To Visual Studio?

EF Power Tools CTP1 is focused on Code First development and adds some options to context menus in Visual Studio:

When right-clicking on a C# project an ‘Entity Framework’ sub-menu is added:

ProjectMenu

  • Reverse Engineer Code First
    This command allows one-time generation of Code First classes from an existing database (see the sketch after this list for a rough idea of what gets generated). This option is useful if you want to use Code First to target an existing database as it takes care of a lot of the initial coding. The command prompts for a connection to an existing database and then reverse engineers POCO classes, a derived DbContext and Code First mappings that can be used to access the database.
    • You will need to have EF 4.1 installed or have added the EntityFramework NuGet package to your project before running reverse engineer.
    • The reverse engineer process currently produces a complete mapping using the fluent API. Items such as column name will always be configured, even when they would be correctly inferred by conventions. This allows you to refactor property/class names etc. without needing to manually update the mapping.
      • In the future we may look at options that allow only minimal mapping to be generated along with the option to use Data Annotations for mapping rather than the fluent API. We would really like to hear your feedback on this.
    • A connection string is added to the App/Web.config file and is used by the context at runtime. If you are reverse engineering to a class library you will need to copy this connection string to the App/Web.config file of the consuming application(s).
    • This process is designed to help with the initial coding of a Code First model. You may need to adjust the generated code if you have a complex database schema or are using advanced database features.
    • Running this command multiple times will overwrite any previously generated files, including any changes that have been made to generated files.
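To give a feel for the output, here is a hand-written approximation for a hypothetical Blogs table; this is not the tool's actual generated code, but the reverse engineer command produces a POCO class, a derived DbContext and fluent mappings roughly along these lines.

using System.Data.Entity;

public class Blog
{
    public int BlogId { get; set; }
    public string Title { get; set; }
}

public class BloggingContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // The tool emits explicit mappings even where conventions would infer them,
        // so renaming classes or properties later does not break the database mapping.
        modelBuilder.Entity<Blog>().ToTable("Blogs");
        modelBuilder.Entity<Blog>().HasKey(b => b.BlogId);
        modelBuilder.Entity<Blog>().Property(b => b.Title).HasColumnName("Title");
    }
}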
When right-clicking on a file containing a derived DbContext class an ‘Entity Framework’ sub-menu is added:

ItemMenu

  • View Entity Data Model (Read-only)
    Displays the Code First model in the Entity Framework designer.
    • This is a read-only representation of the model; you cannot update the Code First model using the designer.
  • View Entity Data Model XML
    Displays the EDMX XML representing the Code First model.
  • View Entity Data Model DDL SQL
    Displays the DDL SQL to create the database targeted by the Code First model.
  • Optimize Entity Data Model
    Generates pre-compiled views used by the EF runtime to improve start-up performance. Adds the generated views file to the containing project.
    • View compilation is discussed in the Performance Considerations article on MSDN.
    • If you change your Code First model then you will need to re-generate the pre-compiled views by running this command again.

Where Are Enums/Spatial/Migrations/…?

Our team is working on many new features including the ones mentioned above. This power tools preview is just one of a series of previews we have planned as we work on the next release. Rest assured, we appreciate that these are very important features and we will be reaching out for your feedback on them.


Paul Patterson posted a link to a Microsoft LightSwitch – Microsoft TechEd 2011 Presentation session video on 5/17/2011:

Building Business Applications with Microsoft Visual Studio LightSwitch

Visual Studio LightSwitch is the simplest way to build business applications for the desktop and cloud. LightSwitch simplifies the development process by letting you concentrate on the business logic, while LightSwitch handles the common tasks for you. In this demo-heavy session, see end-to-end how to build and deploy a data-centric business application using LightSwitch. Finally, see how to take advantage of the underlying LightSwitch application architecture to implement your application's custom business logic.


<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

Jon Brodkin claimed that you can “Take your Microsoft server apps to the cloud with license mobility” in a deck for his Microsoft's cloud licensing changes: what you need to know post of 5/18/2011 to Network World’s Data Center blog (site registration required):

imageStarting July 1, Microsoft customers will be able to use their current license agreements to move server applications from internal data centers to cloud computing services, and contracts signed after that date will offer the same benefit.

But what Microsoft calls "license mobility" will only be available to customers who pay extra for Software Assurance, which can nearly double the cost of a license. Also, while the license mobility will apply to server applications, it won't affect the server operating system itself.

Here are the details of the changes and how they might affect you.

In other words, customers will gain the benefit for SQL Server, Exchange, SharePoint, Lync, System Center and Dynamics CRM. But customers who want to spin up a Windows Server instance in a cloud service would not be able to use their current licenses.

Although numerous customers do run Windows Server instances in cloud services, Microsoft official Mark Croft said the Windows Server license is typically provided by the hosting company.

"The idea here is that on the Windows Server level, infrastructure hosters already provide the fabric, if you will, and some level of sophisticated virtualization," says Croft, director of product management for worldwide licensing. "The infrastructure layer, we think, is very the much the hoster environment. The most valuable scenario is to let people move their workloads around."

With Amazon's Elastic Compute Cloud - the largest competitor to Microsoft's Windows Azure - customers simply pay Amazon on a per-hour basis for access to Windows instances, and don't have to worry about purchasing a license from Microsoft.

But those server applications - Exchange, SQL Server etc. - are ones that customers have running internally and may want to lift up and move to, say, the Rackspace cloud service. Without license mobility, a customer that moves a workload to a cloud service without purchasing a new license would be out of compliance, Croft says. "To be in compliance, you would have gone to the hoster and have the hoster buy those licenses on your behalf," he says.

But starting July 1, "customers can just take those existing on-prem investments they already have and simply deploy them on a hosted infrastructure," Croft says. Of course, "the hoster is still in charge of how they want to cost out that hosting fee. But we've taken a major element out of the cost equation by allowing this mobility."

In addition to providing incentive to purchase Software Assurance, license mobility for cloud applications will "make cloud computing an easier decision for organizations which will absolutely increase the number who take advantage of it," according to licensing consultant Cynthia Farren, who blogs for Network World's Microsoft Subnet.

However, Farren adds that "I would like to see this extended to the Windows Server OS as well."

Microsoft previewed the end-user licensing changes in March and discussed them further this month at the annual Tech-Ed North America conference.

Microsoft has separate licensing agreements for cloud hosters, known as SPLAs (service provider license agreements). Microsoft lowered the price for most SPLA Windows Server licenses in January, has announced more affordable packages and an easing of workload restrictions for July, and could make further adjustments to both end-user and service provider licenses in October.

Prices vary widely for customer licenses, depending on the product, number of users and the customer's skill in negotiating. July 1, by the way, is the start of Microsoft's next fiscal year, and the end of the current fiscal year is a good time for customers to wring concessions out of salespeople while negotiating contracts.

List pricing for Exchange Server 2010 Standard Edition is $700 and the Enterprise Edition is $4,000.

As a general rule of thumb, adding Software Assurance will not quite double the cost of a license but will add substantially to it. Software Assurance for servers is typically 25% of the cost of a license, multiplied by three years, and 28% of the cost of a license for an application, according to Farren. A three-year server contract with Software Assurance is therefore 175% of the license cost, and a three-year application contract with Software Assurance is 184% of the license cost.
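To put numbers on that rule of thumb using the list price above: treating Exchange Server 2010 Standard Edition as a server license at 25%, Software Assurance adds roughly 3 x 0.25 x $700 = $525, for a three-year total of about $1,225, which is where the 175% figure comes from; running the same math at 28% for an application license yields the 184% figure.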

Enterprise agreements with Software Assurance are usually applicable to customers with 250 or more users and devices, Croft says.

Smaller customers without volume licensing agreements therefore have to pay again if they decide to move a workload from an in-house data center to a cloud service. It's obviously a carrot to get customers to upgrade, but Software Assurance does come with other benefits, such as guaranteed upgrades to new versions of products for three years.

When asked why Microsoft isn't extending license mobility benefits to smaller customers, Croft says "we think they should be a Software Assurance customer."

Cloud computing isn't the only growing technology that is affecting Microsoft license policies. The spread of server virtualization convinced Microsoft to offer unlimited virtualization rights through the so-called "Datacenter Edition" Windows Server license, which costs more but doesn't limit the number of virtual machines that can be installed on a physical host.

It stands to reason that more changes are coming to enable greater mobility across virtualization and cloud environments, but Microsoft won't go too far if it starts to affect profit margins.

"We don't have any plans on record" to make further mobility changes, Croft says. "We're certainly going to be keenly watching the adoption from an enterprise customer point of view."

There are also a few other changes to Microsoft licensing agreements that customers should be aware of. The Enterprise Agreement is being expanded to Office 365, Windows Intune and Dynamics CRM, making several Microsoft cloud services eligible for volume licensing.

Software Assurance is also being bolstered with System Center Advisor, a cloud service for assessing server configurations to avoid problems, and Windows Thin PC, a stripped-down, more secure version of Windows 7 designed for thin clients. Windows Thin PC was launched in beta this year and will become widely available by the end of this quarter. 



Maarten Balliauw (@maartenballiauw) reported about Microsoft .NET Framework 4 Platform Update 1 KB2478063 Service Pack 5 Feature Set 3.1 R2 November Edition RTW in a 3/18/2011 post:

As you can see, a new .NET Framework version just came out. Read about it at http://blogs.msdn.com/b/endpoint/archive/2011/04/18/microsoft-net-framework-4-platform-update-1.aspx. Now why does my title not match the title from the blog post I referenced? Well… How is this going to help people?

For those who don’t see the problem, let me explain… If we get new people on board that are not yet proficient enough in .NET, they all struggle with some concepts. Concepts like: service packs for a development framework. Or better: client profile stuff! Stuff that breaks their code because stuff is missing in there! I feel like this is going the Java road where every version has a billion updates associated with it. That’s not where we want to go, right? The Java side?


As I'm saying: why not make things clear and call these "updates" something like .NET 4.1 or so? Simple major/minor versions. We're developers, not marketeers. We're developers, not IT pros who enjoy these strange names to bill yet another upgrade to their customers.

How am I going to persuade my manager to move to the next version? By telling him that we should now use "Microsoft .NET Framework 4 Platform Update 1 KB2478063" instead of saying "hey, there's a new .NET 4! It's .NET 4.1 and it's shiny and new!"?

It seems I’m not alone with this thought. Hadi Hariri also blogged about it. And I expect more to follow... If you feel the same: now is the time to stop this madness! I suspect there’s an R2 November Edition coming otherwise…

[Edit @ 14:00] Here's how to use it in NuGet. Seems this thing is actually ".NET 4.0.1" under the hood.
[Edit @ 14:01] And here's another one. And another one.


Richard L. Santaleza (@InfoLawGroup) reported NIST Releases New DRAFT Cloud Computing Synopsis in a 5/17/2011 post to the Information Law Group blog:

The National Institute of Standards and Technology (NIST) recently released a new cloud computing draft special publication for public review and comment (see associated press release), which NIST is billing as "its most complete guide to cloud computing to date." Public comments to NIST on the 84-page SP 800-146 DRAFT Cloud Computing Synopsis and Recommendations (PDF 1.9MB) are due by June 13, 2011, and should be submitted via email to 800-146comments@nist.gov.

According to NIST, "Draft Special Publication 800-146, NIST Cloud Computing Synopsis and Recommendations explains cloud computing technology in plain terms and provides practical information for information technology decision makers interested in moving into the cloud."

We'll be reviewing and commenting on this latest cloud draft from NIST in a future post, and have been following NIST's ongoing and comprehensive efforts in the area of cloud computing closely.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

• Kenon Owens described Showing the Private Cloud on our own Private Cloud at TechEd 2011 in a 5/19/2011 post:

Wow, I can't believe TechEd 2011 is over. It was a fantastic show, and I enjoyed every minute of it: from the sessions to the TLC booths to the treks between the different buildings, it was all worth it. One thing that was going to be difficult, however, was how to set up the demo environments at the different SIM Pods for all of the different products we have. With System Center our products work together, and that isn't something we can show with just 1 VM. We can't really rely on RDP'ing back to Corp, and it is really challenging to bring extra equipment (Pelican cases, extra baggage charges, breakage, etc.). What we ended up with was a brilliant little plan concocted by us and one of our partners, HynesITe.

They built us a Private Cloud to host all of the demo VMs for the SIM TLC Pods. Using System Center Virtual Machine Manager 2012 Beta, they created a "Cloud" for each pod, and we were able to log into the system and see only the VMs we had access to, but could manage them ourselves as needed. This demonstrated the true tenets of Private Cloud: we had a shared environment that was elastic as resource needs changed, allowed for self-service, and metered us by use.

Here is a description of the solution:

Challenge

TechEd offers our demo stations a workstation-class machine with 8GB RAM. Product teams in the System Center and Identity Management tracks are showing environments consisting of dozens of VMs, at times with high disk and memory requirements. Examples:

1. Service Manager: 9 VM’s, 40GB RAM, 450GB Storage

2. Orchestrator: 24 VM’s, 40GB RAM, 250 GB Storage

In addition, we wanted to showcase some of Microsoft key private cloud solutions and how we are actually using those solutions to solve our own challenges.

Hardware

Sponsor HP provided two C7000 bladecenter systems containing 4 blades each. Blades were 72GB RAM per blade, with SSD local storage. The enclosures were connected together with a 4GB fiber backplane for storage and 10GB Ethernet.

Storage was provided by an HP EVA storage array containing 24 400GB SAS drives configured as RAID 10.

Storage and Hyper-V hosts were configured and deployed using VMM. Entire build time for the solution was 8 hours, including operating system installation, storage provisioning, cluster creation and management.

Clustering

Each enclosure was configured as a failover cluster (named StormCloud and Cyclone, respectively). Storage connectivity was 4 Gb Fibre Channel with multiple redundant paths.


Individual demo VMs were deployed as highly available resources and assigned a primary node. Live migration was used to move running demo environments between nodes as needed for maintenance. In addition, this allowed the intelligent placement and dynamic optimization features in SCVMM 2012 to keep all demo environments evenly balanced across the nodes.

Cloud Implementation

SCVMM 2012 includes the ability to present physical resources as “clouds”. A cloud is an abstract view of resources that you own, which hides all aspects of the underlying physical implementation. In addition, clouds can be shaped to ensure that no one cloud or cloud owner can consume more resources than they have been allocated.

1. SCVMM was used to bare metal deploy the Hyper-V hosts.

2. SCVMM was used to allocate storage to the servers.

3. SCVMM was used to deploy and configure the cluster.

4. SCVMM was used to create clouds representing slices of the overall fabric (hardware) to be allocated to users.

5. SCVMM PowerShell was used to create the clouds and self-service roles as a batch.

6. SCVMM PowerShell was used to manage cloud and VM properties.

Sample: Get-VM -Name *ERDC* | Set-VM -Cloud (Get-SCCloud -Name *Orchestrator*) was used to assign all VMs in a group to a cloud.

For each demo station, three steps are performed (a sketch of how these steps might be scripted follows the list):

1. A cloud is created.

2. A self-service user is created.

3. The VMs are dynamically provisioned to the correct cloud.
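
The post doesn’t include the actual batch script, but a minimal sketch of these three steps in VMM 2012 PowerShell might look like the following. The cmdlet and parameter names (New-SCCloud, New-SCUserRole, Set-SCUserRole with -VMHostGroup, -UserRoleProfile, -AddMember and -AddScope) are assumed from the VMM 2012 beta and may differ in later builds; the pod name, host group and account are placeholders rather than the real demo environment:

# Hedged sketch only: cmdlet and parameter names are assumptions, not the team's script.
$podName   = "Orchestrator"                      # placeholder pod name
$hostGroup = Get-SCVMHostGroup -Name "SIM Pods"  # placeholder host group

# 1. Create a cloud carved out of a slice of the shared fabric
$cloud = New-SCCloud -Name "$podName Cloud" -VMHostGroup $hostGroup

# 2. Create a self-service role, add the station owner, and scope it to the cloud
$role = New-SCUserRole -Name "$podName Demo Owner" -UserRoleProfile SelfServiceUser
Set-SCUserRole -UserRole $role -AddMember "CONTOSO\demo-owner" -AddScope $cloud

# 3. Provision the pod's VMs into the cloud (same pattern as the sample above)
Get-VM -Name "*$podName*" | Set-VM -Cloud $cloud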

Each demo station owner is provided a standard Windows desktop and a copy of the VMM 2012 administration console. When a demo station owner launches the console, they are prompted to choose a self-service role. Each self-service role is mapped to one or more clouds with specific rights. The following series of screenshots depicts aspects of the solution.

Figure 1: Choosing a role

Figure 2: Delegated view of a cloud as a self-service user

Figure 3: Delegated activities for each self-service user

Figure 4: Administrator view of all clouds

Figure 5: Cloud shaping to control resource usage

This was a fantastic solution for the SIM pods, and it allowed us to showcase our solutions running on our own solution.

Benefits Realized

1. Users do not need to know anything about the physical implementation to access resources. All details of physical implementation are fully hidden from the cloud manager, while still giving the cloud manager administrative access to all resources.

2. Resources are automatically highly available and fault tolerant. VMs are migrated between cluster nodes as needed for maintenance and capacity management.

3. Deployment and setup were very quick and easy. The platform was deployed from bare metal to fully operational in under 8 hours.

4. Automation via PowerShell has resolved most troubleshooting and support issues. PowerShell scripts were used for repeated tasks such as cloud creation, troubleshooting, and VM management.

5. SCVMM offers extensive control over user resource consumption, allowing fair, managed allocation of resources.

If you would like more information on the solution, please contact HynesITe at the details below, or comment here. We would love to hear your feedback.

Contact

Also, don’t forget that the System Center Virtual Machine Manager 2012 Community Evaluation Program starts May 26th, 2011. Go to http://connect.microsoft.com/site1211 for more details, or email mscep@microsoft.com for more information.


The System Center Team reported Microsoft’s Hyper-V Cloud Fast Track Accelerates Private Cloud Deployment in a 5/19/2011 post:

image What an exciting week at Tech Ed for Private Cloud solutions from Microsoft and our great partners!   It started with the announcement of NetApp and Cisco joining the Hyper-V Cloud Fast Track program and bringing their solution to market immediately.  We had a session where Alex Jauch from NetApp did a very cool demo. He showed provisioning of Cisco UCS blades via an Opalis workflow and PowerShell. He followed that with a Disaster Recovery scenario – bringing down a private cloud in Seattle and bringing the infrastructure back up in Tacoma without losing connectivity to the hosted applications.      

Next, HP’s private cloud offering in the Fast Track program provided an incredible display of power, supporting thousands of VMs on just a 16-node configuration. It was amazing to see this system in action, specifically the quick provisioning and de-provisioning of virtual machines, the automated workload balancing, and the ability to keep the infrastructure available through advanced monitoring and automation. This live Fast Track implementation clearly demonstrates the benefit of shared resource pools with advanced automation and management.

And to top it all off, Fujitsu announced on Wednesday that its Fast Track offering is coming to market, based on the Fujitsu PRIMERGY BX900 blade server system and ETERNUS storage systems.

Keep looking here for updates on how to implement private clouds in your organization, today, with Hyper-V Cloud Fast Track offerings from partners around the globe.


Tim Anderson (@timanderson) asked Three questions about Microsoft’s cloud play at TechEd 2011 in a 5/17/2011 post:

image This year’s Microsoft TechEd is subtitled Cloud Power: Delivered, and sky blue is the theme colour. Microsoft seems to be serious about its cloud play, based on Windows Azure.


Then again, Microsoft is busy redefining its on-premise solutions in terms of cloud as well. A bunch of Windows Servers on virtual machines managed by System Center Virtual Machine Manager (SCVMM) is now called a private cloud – note that the forthcoming SCVMM 2012 can manage VMware and Citrix XenServer as well as Microsoft’s own Hyper-V. If everything is cloud then nothing is cloud, and the sceptical might wonder whether this is rebranding rather than true cloud computing.

I think there is a measure of that, but also that Microsoft really is pushing Azure heavily, as well as hosted applications like Office 365, and does intend to be a cloud computing company. Here are three side-questions which I have been mulling over; I would be interested in comments.


Microsoft gets Azure – but does its community?

At lunch today I sat next to a delegate and asked what she thought of all the Azure push at TechEd. She said it was interesting, but irrelevant to her as her organisation looks after its own IT. She then added, unprompted, that they have a 7,000-strong IT department.

How much of Microsoft’s community will actually buy into Azure?

Is Microsoft over-complicating the cloud?

One of the big announcements here at TechEd is about new features in AppFabric, the middleware part of Windows Azure. When I read about new features in the Azure service bus I think about how this shows maturity in Azure, but with the niggling question of whether Microsoft is now replicating all the complexity of on-premise software in a new cloud environment, rather than bringing radical new simplicity to enterprise computing. Is Microsoft over-complicating the cloud, or is it more that the same need for complex solutions exists wherever you deploy your applications?

What are the implications of cloud for Microsoft partners?

TechEd 2011 has a huge exhibition and of course every stand has contrived to find some aspect of cloud that it supports or enables. However, Windows Azure is meant to shift the burden of maintenance from customers to Microsoft. If Azure succeeds, will there be room for so many third-party vendors? And what about the whole IT support industry, internal and external: are their jobs at risk? It seems to me that if moving to a multi-tenanted platform really does reduce cost, there must be implications for IT jobs as well.

The stock answer for internal staff is that reducing infrastructure cost is an opportunity for new uses of IT that are beneficial to the business. Staff currently engaged in keeping the wheels turning can now deliver more and better applications. That seems to me a rose-tinted view, but there may be something in it.

Related posts:

  1. Microsoft TechEd 2010 wrap-up: cloud benefits, cloud sceptics
  2. Microsoft maybe gets the cloud – maybe too late
  3. PDC day one: Windows in the cloud

<Return to section navigation list> 

Cloud Security and Governance

• Jason Bloomberg asserted “Hackers will be quietly stealing your data before you know what happened” as a deck for his Data Remanence: Cloud Computing Shell Game article of 5/20/2011:

Everybody knows that dragging a file into the trash and then emptying the trash doesn't actually erase the file. It simply indicates to the file system that the file is deleted, but the data in the file remain on the hard drive until the file system eventually overwrites the file. If you require the actual erasure of deleted files, then you must take an active step to erase the portion of the drive that contained the file, perhaps by explicitly overwriting each bit of the original file. Even then, it may be possible (although generally quite difficult) to recover parts of the original file, due to the magnetic properties of the storage medium. We call this problem data remanence.
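
To make that "active step" concrete, here is a minimal PowerShell sketch of overwriting a local file with zeros before deleting it. It is an illustration of the idea only (the path is a placeholder), and even this gives no hard guarantee on journaling file systems, SSDs, or drives that remap sectors:

# Illustration only: overwrite a file with zeros, then delete it.
# On journaling file systems, SSDs and remapped sectors the original
# bits may still survive; in the cloud you cannot even attempt this.
$path   = "C:\temp\secret.dat"            # placeholder path
$length = (Get-Item $path).Length
$stream = [System.IO.File]::OpenWrite($path)
try {
    $zeros   = New-Object byte[] 65536
    $written = 0
    while ($written -lt $length) {
        $chunk = [Math]::Min($zeros.Length, $length - $written)
        $stream.Write($zeros, 0, $chunk)
        $written += $chunk
    }
    $stream.Flush()
}
finally {
    $stream.Dispose()
}
Remove-Item $path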

Cloud Computing complicates the data remanence issue enormously. You typically have no visibility into the physical location of your data in the Cloud, so overwriting the physical media is virtually impossible. The Cloud infrastructure may distribute your storage or virtual machine instance across multiple physical drives. And furthermore, deprovisioning that instance is similar to dragging it to the trash: the data that your instance wrote to the various drives remain until the Cloud provider eventually gets around to reallocating the sectors you were using to other instances. And even then, an enterprising hacker might be able to read your data by looking at the bits in their newly provisioned instance.

Unfortunately, the current state of the art for dealing with data remanence in the Cloud is a shell game: applications relegate the solution to the infrastructure level, while the infrastructure considers the problem to be at the application level. To make matters worse, no one seems to be focusing on the data remanence problem in the Cloud. That is, except for the hackers, who will be quietly stealing your data before you know what happened.

Encryption: Necessary but not Sufficient
Encryption is the obvious first line of defense against the data remanence problem. Make sure all the data you store in the Cloud are encrypted. Manage your keys locally, rather than putting them in the Cloud. In this way, not only are your data confidential, but all you have to do to securely delete your data is to delete (or expire) the key.

Problem solved, right? Not so fast.

Such application-level encryption has a major limitation. There's simply not much you can do with encrypted data unless you decrypt them, other than simply store them or move them around. If you decrypt your data in the Cloud, then the data remanence problem once again rears its ugly head. As a result, application-level encryption can only solve the data remanence problem when you're using the Cloud for storage only. If you want to process your data in the Cloud, the approach is insufficient.

Perhaps we should handle encryption below the application level, say, at the media layer. With media encryption, you essentially have an encrypted volume in the Cloud. You must present the appropriate credential to mount the volume, just as you would a local hard drive that has media encryption. Media encryption protects you from stolen hard drives (or your Cloud provider going bankrupt and putting the drives on eBay), but it is still insufficient for dealing with the data remanence issue.

The limitation of media encryption in the Cloud is that it only protects read/write operations to the file systems or databases that are physically present on the encrypted media. Other operations, however, may not have adequate protection, for example, message queuing, data caching, and logging. In a traditional on-premise server environment, your systems people are fully in control over how and where they handle such operational or transitory data. However, in the Cloud you have no such control. The Cloud provider's underlying provisioning infrastructure may use a caching scheme as part of its elastic load balancing, and you'd be none the wiser. Remember, you may believe queues or caches are inherently temporary, but the data remanence issue centers on situations where "temporary" really means "unpredictably persistent."

One approach to addressing this problem that is gaining in popularity is "Virtual Private Storage," or VPS. With VPS, encryption and decryption (among other capabilities) take place transparently on an intermediary that negotiates all interactions with the Cloud. For example, buy one of the new generation of Cloud appliances, put it in your DMZ, and configure it to encrypt everything going from your network up to the Cloud, while decrypting in the other direction. From the user's perspective such security measures are entirely transparent; they don't have to worry about confidentiality or data remanence in the Cloud. From the perspective of the Cloud, none of your data are ever unencrypted, whether written to a hard drive or temporarily stored in a queue or a cache somewhere.

The Missing Piece: Meaningful Use
Unfortunately, neither VPS nor media encryption is a complete solution, because they both limit what you can do in the Cloud environment. In essence, all of the encryption approaches we've discussed treat the Cloud as a storage option. It's true that Cloud storage is an essential part of the Infrastructure-as-a-Service (IaaS) story. But what if you want to do more with the Cloud than IaaS?

A wonderful example of this question comes from the healthcare industry. And even if you're not in healthcare, the same challenges may apply to your organization. As you might expect, there are stringent, heavily regulated standards for the confidentiality of Electronic Health Records (EHRs). Encryption techniques traditionally provide sufficient confidentiality for these sensitive data. As solution providers build Cloud-based EHR applications, however, the data remanence issue rears its ugly head.

Cloud storage itself isn't the issue. Put EHRs in the Cloud, move them around, and bring them back from the Cloud: no problem there. But the regulations require more than storage. In the US, for example, the HITECH Act "promotes the adoption and meaningful use of health information technology." It then goes into quite a bit of detail as to what "meaningful use" means, and it's a lot more than IaaS can provide. For example, e-prescribing (eRx) and clinical decision support are two obvious meaningful uses of EHRs that the healthcare industry requires from Cloud-based solutions.

The challenge is that both eRx and clinical decision support necessitate actually doing something interesting with EHRs in the Cloud, and that means decrypting EHRs in the Cloud, which brings us back to the data remanence issue. IaaS simply cannot fully solve this problem, because it's at the application level. Software-as-a-Service (SaaS) also cannot fully resolve the problem, because SaaS solutions alone cannot deal with the remanence issues inherent in having decrypted data in the Cloud.

The ZapThink Take
Fortunately, there is a third Cloud service model: Platform-as-a-Service (PaaS). ZapThink has lambasted PaaS as warmed-over middleware in the Cloud, and truth be told, many PaaS solutions are still little more than thinly veiled middleware. The fact still remains that it's up to the PaaS vendors to solve the Cloud data remanence problem, since all of the gaps in media encryption and application-level encryption are within the realm of PaaS.

It's not clear, however, that any PaaS vendor has fully solved this problem yet. There are many moving parts to a platform, after all: messaging, transactionality, data storage and caching, framework APIs, and more. Place those capabilities into the dynamically provisioned Cloud environment. Then, ensure the platform never writes unencrypted data to physical media, even for data in transit.

Essentially, the PaaS vendors must rise to this challenge and build their offerings from the ground up with data remanence in mind. Until they do, no organization should trust them with EHRs or data of similar sensitivity. Of course, with challenge comes opportunity. Are you a vendor who is working on a solution to the Cloud data remanence problem, or a Cloud user who is struggling to find such a solution? Drop us a line, or better yet, check out our new online Cloud Security Fundamentals course.

Jason Bloomberg is Managing Partner and Senior Analyst at Enterprise Architecture advisory firm ZapThink LLC


David Linthicum asserted “With the recent admission by Dropbox that it can see your data, businesses have more reason to avoid public cloud services” in a deck to his Cloud providers must learn to keep secrets post of 5/19/2011 to InfoWorld’s Cloud Computing blog:

image Just when you think it is safe to put your embarrassing college photos in the cloud, we learn that some cloud providers have the ability to look at your files: "Dropbox now asserts that it can decrypt and pass your data on to a third party if Dropbox feels it needs to do so, in order to protect its property rights."

image In other words, if somebody shows up with a legal reason to grab your data, Dropbox will gladly decrypt your data and hand it over. Moreover, it can see your data if it wants to do so -- nice.

The myth thus far has been that cloud data storage security meant only you had the magical encryption key needed to see your stored data. Those files were safe, at least as secure as local storage, considering that local storage could itself fall into the wrong hands.

With this shocking revelation from Dropbox, we learned that the notion of security is only as good as your trust in your cloud computing provider. Worse, it seems there's a difference between the posted privacy policy and what can really happen. I suspect other clouds have the same issue.

What's good about this fiasco is that other providers will update their privacy policies with the fact that your stuff is less than private and that they may be willing to sell you out when pressured. Moreover, there will be a disclosure that they have the ability to see your files, and even if they have a policy against viewing data, I imagine that looking at your vacation videos is a good way to pass the time for cloud providers' third-shift guys. You'll never know.

The cloud providers need to learn from this event. If they can't be trusted, enterprises won't use them. After all, if you can't keep secrets, nobody will share them with you.


<Return to section navigation list> 

Cloud Computing Events

• Lynn Langit (@llangit) reported on 5/19/2011 FREE Developer Azure Training in SoCal in June 2011:

image Register as below:

image

Also get your FREE Windows Azure Trial Account: go to http://windowsazurepass.com/ and use code DPEWR02.


Bruno Terkaly listed three Azure Bootcamps–San Francisco in a 5/17/2011 post:

image Three Dates – May 31st, June 3rd, June 7-9: three opportunities to learn about cloud computing in the Bay Area:

 

Important Bay Area Cloud Dates and Registration Links

Azure Bootcamp - San Francisco - May 31st, 2011 - Registration Link

Join us for this one-day Azure Bootcamp, an immersive experience that will help you explore and learn how to leverage the Windows Azure platform and get started with the available tools and architecture. Register now, as space is limited!

https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032486466&Culture=en-US#


Azure Bootcamp - Mountain View - June 3rd, 2011 - Registration Link

Join us for this one-day Azure Bootcamp, an immersive experience that will help you explore and learn how to leverage the Windows Azure platform and get started with the available tools and architecture. Register now, as space is limited!

https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032486469&Culture=en-US#


Azure ISV Workshop - San Francisco - June 7 to 9th, 2011 - Registration Link

The Microsoft Developer & Platform Evangelism team is pleased to announce a Metro Workshop event focused on developers and covering the Windows Azure Platform, for Microsoft partners and customers. Delivered through workshop-style presentations and hands-on lab exercises, the workshop will focus on three key services of the Windows Azure Platform: Windows Azure, SQL Azure and .NET Services.

Attendee Prerequisites

This workshop is aimed at developers, architects and systems designers with at least 6 months practical experience using Visual Studio 2008 and C#.

https://msevents.microsoft.com/cui/EventDetail.aspx?culture=en-US&EventID=1032487064&IO=aCQY6NobMYQvOqaGF5O67A%3D%3D



Neil MacKenzie reported on 5/18/2011 about two of the three above Windows Azure Boot Camps on the West Coast in late May and early June 2011:

image A Windows Azure Boot Camp is a great way to learn Windows Azure. The boot camp is a mixture of presentations and hands-on labs where you can try out Windows Azure.


Boot camps are normally two days. However, I am going to do a couple of one-day mini boot camps in San Francisco and Mountain View in Northern California.

May 31, 2011 – San Francisco
Microsoft San Francisco Office. (register)

June 3, 2011 – Mountain View
Microsoft Silicon Valley Center (register)

Doing the hands-on labs is an important part of the boot camp so you should come prepared to do them. This means you need to install some Windows Azure software on your laptop. The Windows Azure Boot Camp website has links to all the software needed.

If you can’t come to one of these boot camps you should check the schedule to see if there is a boot camp that you can go to. They take place across the globe – but not so far in Antarctica or Greenland. Brian Prince, who started the boot camps, also holds online Windows Azure “Office Hours” where you can ask questions directly. The schedule for these is here.

Agenda
  • Introduction to Cloud Computing and Azure
  • Basic Roles
  • Hello Windows Azure VS2010 [LAB]
  • Advanced Roles
  • SQL Azure
  • Intro to SQL Azure [LAB]
  • Diagnostics and Service Management
  • Windows Azure Deployment VS2010 [LAB]
  • Storage Basics
  • Using Azure Tables
  • Using BLOB Storage
  • Queues
  • Exploring Windows Azure Storage [LAB]
  • Introduction to Windows Azure AppFabric
  • Introduction to AppFabric ACS V2 [LAB]
  • Cloud Computing Patterns & Scenarios

Brian Hitney announced in a 5/18/2011 post a series of Azure Tech Jam[s] on the East Coast in June 2011:

image You’ve heard about cloud computing and already know it’s the greatest thing since sliced bread – and maybe you’ve already attended a Microsoft Azure Boot Camp or other event introducing you to the cloud and detailing the various parts of the Windows Azure platform.  Well we’ll do that too… in the first half hour!  The rest of the time we’ll have a bit of fun with Azure by taking a look at some cool demos and poking under the hood.  We’ll then take a look at some of the innovative uses of cloud computing that Windows Azure customers have already produced. 


After lunch, we’ll introduce the genesis and creation of the Rock Paper Azure Challenge… AND run our very own Challenge on-site, exclusive to attendees only, complete with prizes like an Xbox 360/Kinect bundle, a stand-alone Kinect, and a $50 gift certificate. This is an interactive way to learn about developing and deploying to the cloud, with a little friendly competition thrown in for fun.

So bring your laptop, Windows Azure account credentials and a sense of adventure and join us for this FREE, full-day event as Peter, Jim, and I take you “to the cloud!”

Prerequisites:

  • Windows Azure Account – don’t have one? We’re offering a free Windows Azure 30-day pass for all attendees. Apply for yours now as it can take 3 days to receive. Use code AZEVENT
  • Laptop with Azure Tools and SDK installed

Want a leg up on the competition? Visit the Rock Paper Azure Challenge web site and begin coding your winning bot today.

Location/Date

  • Charlotte, NC: June 2
  • Malvern, PA: June 7
  • Pittsburgh, PA: June 9
  • Ft. Lauderdale, FL: June 14
  • Tampa, FL: June 16

Due to the hands-on nature of this event seating is limited. Reserve your spot by registering today!


Eric Nelson (@ericnel) warned UK developers in a 5/17/2011 post that Microsoft UK TechDays 2011 [is] less than one week off:

image It has been a couple of weeks since I last posted about TechDays.

Today I completed the final review of all the decks for the Windows Azure developer tracks for Monday and Tuesday. All except for the keynote (always troublesome) and the roadmap session (equally troublesome) are now final – which gives me a warm and likely misplaced feeling that we are in good shape.

Yesterday we had the T-1 week meeting in which everyone was smiling. Good stuff. In my case, it was also my first experience of using Lync to join a meeting (vs Live Meeting et al). I’m impressed – all worked flawlessly.


Now… back to that roadmap deck…

Related Links:


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Jeff Barr (@jeffbarr) posted an Updated AWS Security White Paper; New Risk and Compliance White Paper on 5/18/2011:

We have updated the AWS Security White Paper and we've created a new Risk and Compliance White Paper.  Both are available now.

The AWS Security White Paper describes our physical and operational security principles and practices.

It includes a description of the shared responsibility model, a summary of our control environment, a review of secure design principles, and detailed information about the security and backup considerations related to each part of AWS including the Virtual Private Cloud, EC2, and the Simple Storage Service.

The new AWS Risk and Compliance White Paper covers a number of important topics including (again) the shared responsibility model, additional information about our control environment and how to evaluate it, and detailed information about our certifications and third-party attestations. A section on key compliance issues addresses a number of topics that we are asked about on a regular basis.

The AWS Security team and the AWS Compliance team are complementary organizations responsible for the security infrastructure, practices, and compliance programs described in these white papers. The AWS Security team is headed by our Chief Information Security Officer and is based outside of Washington, DC. Like most parts of AWS, this team is growing and has a number of open positions:

We also have a number of security-related positions open in Seattle:


David Linthicum claimed “Although Google's Chromebooks seems like a boon for cloud computing, the platform will fall far short of expectations” in an introduction to his Chromebooks and the cloud: The ugly truth post of 5/17/2011 to InfoWorld’s Cloud Computing blog:

image These days when Google talks, we all listen. This was especially true of the announcement at Google I/O last week of the forthcoming release of the long-awaited Chromebooks platform. You can think of Chromebooks as browser-only netbooks that have to rely on connectivity to cloud-based services for their applications and file storage.

image The price tag is not very compelling. Chromebooks will cost $349 to $469 if you buy one outright from Acer or Samsung, or $28 per month to rent from Google. Last I looked, I could get basic but fully functional laptops and netbooks for that money running Windows 7 and providing tons of local storage -- as well as a browser I'm not forced to use for everything.

Of course Google and many in the press are promoting the Chromebook as a "cloud device," something that exists as a client for a new era of cloud services and thus has very little functionality when disconnected from the Internet. But I believe the Chromebook's dependence on cloud services for word processing, email, and other common functions is its weakness rather than its strength.

Cloud computing is indeed the wave of the future. Today, I use on-demand services to provide both a platform for sharing and as an efficient substitute for client-based software. The difference with that use of cloud computing compared to the Chromebook's is that I'm not forced to be completely dependent on the cloud for these services; I can mix and match them to meet my specific needs. I don't think I'm alone in wanting that freedom.

Thus, Chromebooks could be to laptops what Google TV was to cable TV: a great idea in concept, but not thought through as to how the device would be used in the real world by real people. The innovative nature of the Chromebook won't get around its inherent limitations. I suspect the Chromebook will be a concept that does not take flight, at least in the next couple of years. That's not a knock on the cloud, but a knock on this specific use case. That's the ugly truth.


Chris Hoff (@Beaker) posted a Quick Ping: VMware’s Horizon App Manager – A Big Bet That Will Pay Off… on 5/17/2011:

image It is so tempting to write about VMware‘s overarching strategy of enterprise and cloud domination, but this blog entry really speaks to an important foundational element in their stack of offerings which was released today: Horizon App Manager.

Check out @Scobleizer’s interview with Noel Wasmer (Dir. of Product Management for VMware) on the ins-and-outs of HAM.

Frankly, federated identity and application entitlement is not new.

Connecting and extending identities from inside the enterprise using native directory services to external applications (SaaS or otherwise) is also not new.

What’s “new” with VMware’s Horizon App Manager is that we see the convergence and well-sorted integration of a service-driven federated identity capability that ties together enterprise “web” and “cloud” (*cough*)-based SaaS applications with multi-platform device mobility powered by the underpinnings of freshly-architected virtualization and cloud architecture.  All delivered as a service (SaaS) by VMware for $30 per user/per year.

[Update: @reillyusa and I were tweeting back and forth about the inside -> out versus outside -> in integration capabilities of HAM.  The SAML Assertions/OAuth integration seems to suggest this is possible.  Moreover, as I alluded to above, solutions exist today which integrate classical VPN capabilities with SaaS offers that provide SAML assertions and SaaS identity proxying (access control) to well-known applications like SalesForce.  Here's one, for example.  I simply don't have any hands-on experience with HAM or any deeper knowledge than what's publicly available to comment further -- hence the "Quick Ping."]

Horizon App Manager really is a foundational component that will tie together the various components of VMware’s stack of offerings for seamless operation, including such products/services as Zimbra, Mozy, SlideRocket, CloudFoundry, View, etc. I predict even more interesting integration potential with components such as elements of the vShield suite — providing identity-enabled security policies and entitlement at the edge to provision services in vCloud Director deployments, for example (esp. now that they’ve acquired NeoAccel for SSL VPN integration with Edge.)

“Securely extending the enterprise to the Cloud” (and vice versa) is a theme we’ll hear more and more from VMware. Whether it’s thin clients, virtual machines, SaaS applications, PaaS capabilities, etc., fundamentally what we all know is that for the enterprise to be able to assert control to enable “security” and compliance, we need entitlement.

I think VMware — as a trusted component in most enterprises — has the traction to encourage the growth of their supported applications in their catalog ecosystem which will in turn make the enterprise excited about using it.

This may not seem like it’s huge — especially to vendors in the IAM space or even Microsoft — but given the footprint VMware has in the enterprise and where they want to go in the cloud, it’s going to be big.

/Hoff

(P.S. It *is* interesting to note that this is a SaaS offer with an enterprise virtual appliance connector.  It’s rumored this came from the TriCipher acquisition.  I’ll leave that little nugget as a tickle…)

(P.P.S. You know what I want? I want a consumer version of this service so I can use it in conjunction with or in lieu of 1Password. Please.  Don’t need AD integration, clearly)

Related articles


<Return to section navigation list> 
