Thursday, April 07, 2011

Windows Azure and Cloud Computing Posts for 4/6/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

Avkash Chauhan provided a workaround for problems Downloading WORD and EXCEL files from Windows Azure Storage in an ASP.NET Web Role in a 4/5/2011 post:

While opening Microsoft Office Word and Excel files downloaded from Windows Azure Storage in an ASP.NET Web Role (opening them directly rather than saving them first), a few partners reported the following error:

The file <File name> cannot be opened because there are problems with the contents.

Note: The problem did not occur when downloading PDF and image files; it occurred only with Word and Excel files.

Following is the correct code to solve this problem:

C# Code:

using System;
using System.IO;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace WebRole
{
    public partial class _Default : System.Web.UI.Page
    {
        private static CloudStorageAccount account;
        private static CloudBlobClient blobClient;
        private static CloudBlobContainer container;
        private static CloudBlob blob;
 
        protected void Page_Load(object sender, EventArgs e)
        {
            DownloadBlob("HelloWorld.docx");
        }

        public void DownloadBlob(string blobName)
        {
            account = CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
            blobClient = account.CreateCloudBlobClient();
            container = blobClient.GetContainerReference("<YOUR_CONTAINER_NAME>");
            blob = container.GetBlobReference(blobName);
            MemoryStream memStream = new MemoryStream();
            blob.DownloadToStream(memStream);
            Response.ContentType = blob.Properties.ContentType;
            Response.AddHeader("Content-Disposition", "Attachment; filename=" + blobName);
            Response.AddHeader("Content-Length", blob.Properties.Length.ToString());
            Response.BinaryWrite(memStream.ToArray());
        }
    }
}

VB.NET Code:

Imports System.IO
Imports Microsoft.WindowsAzure
Imports Microsoft.WindowsAzure.StorageClient
Imports Microsoft.WindowsAzure.ServiceRuntime

Public Class _Default
    Inherits System.Web.UI.Page

    Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
DownloadBlob("HelloWorld.docx");
    End Sub

    Private Sub DownloadBlob(ByVal blobName As String)
        Dim account As CloudStorageAccount
        Dim blobClient As CloudBlobClient
        Dim container As CloudBlobContainer
        account =
        CloudStorageAccount.Parse(RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"))
        blobClient = account.CreateCloudBlobClient()
        container = blobClient.GetContainerReference("<Your_Container_Name>")
        Dim blob As CloudBlob
        blob = container.GetBlobReference(blobName)

        ' Download the blob into a MemoryStream and write it to the response.
        Dim ms As New MemoryStream()
        Using ms
            blob.DownloadToStream(ms)
            Response.ContentType = blob.Properties.ContentType
            Response.AddHeader("Content-Disposition", "Attachment; filename=" & blobName)
            Response.AddHeader("Content-Length", blob.Properties.Length)
            Response.BinaryWrite(ms.ToArray())
        End Using
    End Sub
End Class
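
For larger documents, the same approach can avoid buffering the whole file in memory by streaming the blob directly to the HTTP response. The following C# sketch is an editorial variation on the sample above, not part of Avkash's original post; it assumes the same "DataConnectionString" setting and container-name placeholder, and calls FetchAttributes so ContentType and Length are populated before any bytes are written:

// Variation on the sample above (illustrative sketch only): add this method to the
// same _Default page class. It assumes the same configuration setting and container
// name placeholder used in the original code.
public void DownloadBlobStreamed(string blobName)
{
    CloudStorageAccount account = CloudStorageAccount.Parse(
        RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
    CloudBlobClient blobClient = account.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference("<YOUR_CONTAINER_NAME>");
    CloudBlob blob = container.GetBlobReference(blobName);

    // Populate blob.Properties (ContentType, Length) without downloading the content.
    blob.FetchAttributes();

    Response.Clear();
    Response.ContentType = blob.Properties.ContentType;
    Response.AddHeader("Content-Disposition", "Attachment; filename=" + blobName);
    Response.AddHeader("Content-Length", blob.Properties.Length.ToString());

    // Stream the blob bytes straight into the response instead of buffering them.
    blob.DownloadToStream(Response.OutputStream);
    Response.End();
}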


<Return to section navigation list> 

SQL Azure Database and Reporting

Steve Yi (pictured below) posted a Real World SQL Azure: Interview with Gregory Kim, Chief Technology Officer, Accumulus case study to the SQL Azure Team blog on 4/7/2011:

As part of the Real World SQL Azure series, we talked to Gregory Kim, Chief Technology Officer at Accumulus, about using SQL Azure to extend its subscription management service with rich business intelligence capabilities to the cloud. Here's what he had to say:

MSDN: Can you tell us about Accumulus and the services you offer?

Kim: Accumulus addresses the complex process that online companies face with subscription-based billing. Our solution helps businesses manage subscription billing and automate the customer lifecycle, everything from customer sign-up and activation to billing and payment processing.

MSDN: What were some of the challenges that Accumulus faced prior to adopting SQL Azure?

Kim: Before founding Accumulus, our employees had a long history in subscription billing processes, which were typically managed through an on-premises server infrastructure model. However, we knew that with Accumulus we wanted to adopt a cloud-based, software-as-a-service model. It's no secret that the cloud helps you inherently avoid massive server infrastructure costs and offers a model that lets you pay for what you use, and that's what our customers wanted, too. That said, we needed a cloud services provider that offered a robust relational database because our vision was to offer rich business intelligence as a competitive differentiator.

MSDN: Why did you choose SQL Azure as your solution?

Kim: In addition to a relational database, we also wanted a cloud services provider that would enable us to use our existing developer and IT skills, which are firmly rooted in Microsoft products and technologies. The Windows Azure platform, with SQL Azure, was the clear choice for us. The platform met all our business requirements, and it has the backing of the reliable Microsoft infrastructure.

MSDN: Can you describe how Accumulus is using SQL Azure and Windows Azure to help address your need to provide business intelligence in the cloud?

Kim: The front end of the Accumulus solution is hosted in web roles in Windows Azure and uses worker roles to handle the back-end processing from web role requests. We use the Queue service in Windows Azure for persistent messaging between the web and worker roles, but also take advantage of Blob Storage in Windows Azure for messaging tasks. Our primary database is SQL Azure, which we use to cross-reference relevant customer, product, payment, and promotional data in a relational data structure. The SQL Azure database is deployed in a multitenant environment so that customers share the Accumulus application, but their data is safeguarded and kept separate from each other. Customers access Accumulus through a rich user interface based on the Microsoft Silverlight browser plug-in, and the application integrates with their own IT infrastructure through REST-based application programming interfaces.

MSDN: What makes your solution unique?

Kim: Business intelligence, pricing agility, and the ability to manage subscriptions across platforms. We recognize that providing customers with business insight into their billing and customer life cycles distinguishes us from competitors. It allows them to strategically price their products, services, and content across a number of access channels. At the heart of our competitive advantage is SQL Azure, which gives us the relational database that is so vital to the business intelligence we offer. The ability to have a relational data model where we can cross-reference all our data and yield business intelligence for our customers is critical to our success.

MSDN: In addition to business intelligence in the cloud, what other benefits is Accumulus realizing with the Windows Azure platform?

Kim: The scalability we achieved for compute and storage needs is key. We can easily scale up web roles and worker roles in Windows Azure and simply add partitions to our multitenant SQL Azure database to scale up for increased storage needs. It's also great that we didn't have a steep learning curve with the platform and could draw from our existing knowledge and skillset to develop the solution. We saved approximately four months by developing our solution for Windows Azure and SQL Azure, representing a 25 percent productivity savings compared to if we had used Amazon EC2 or Google App Engine. Add to that, we can develop locally and deploy immediately to the cloud, which means we can deploy updates and new features in half the time than we could with other providers.

Read the full story at:
www.microsoft.com/casestudies/casestudy.aspx?casestudyid=4000009546

To read more SQL Azure customer success stories, visit:
www.sqlazure.com


Anton Staykov (@astaykov) described how to implement an SQL Azure Agent-like service in SQL Azure with a CodePlex project updated on 3/29/2011 (missed when updated):

Proof of concept project to show how you can achieve SQL Server Agent-like functionality for SQL Azure.

Credits
This project is based on the three-part "I miss you SQL Server Agent" series published a while back on the SQL Azure Team Blog.
Extensions to the original articles include:
  • Execution of multiple tasks (not only one predefined task) on multiple SQL Azure servers/databases
  • Use of the Transient Fault Handling Framework for Azure Storage, Service Bus & SQL Azure from the Microsoft AppFabric CAT best practices
Where to start from?
Download the SQL Azure Agent Service Beta 1.0 and check out the Documentation for instructions.
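
To give a rough idea of the pattern such a service implements, the following C# sketch shows a worker role that wakes up on a fixed interval and runs a T-SQL job against SQL Azure with a naive retry loop. It is illustrative only and is not code from the CodePlex project: the connection-string setting name, job command, and schedule are placeholders, and the real project supports multiple tasks across multiple servers/databases and uses the Transient Fault Handling Framework instead of this simple retry.

// Rough sketch of a SQL Server Agent-like worker role (placeholders throughout).
using System;
using System.Data.SqlClient;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class AgentWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            // Hypothetical job: the stored procedure name is a placeholder.
            ExecuteJobWithRetry("EXEC dbo.usp_NightlyCleanup", 3);
            Thread.Sleep(TimeSpan.FromMinutes(15)); // schedule interval (placeholder)
        }
    }

    private static void ExecuteJobWithRetry(string commandText, int retries)
    {
        for (int attempt = 1; attempt <= retries; attempt++)
        {
            try
            {
                using (var connection = new SqlConnection(
                    RoleEnvironment.GetConfigurationSettingValue("SqlAzureConnectionString")))
                using (var command = new SqlCommand(commandText, connection))
                {
                    connection.Open();
                    command.ExecuteNonQuery();
                    return;
                }
            }
            catch (SqlException)
            {
                if (attempt == retries) throw;
                Thread.Sleep(TimeSpan.FromSeconds(5 * attempt)); // back off and retry
            }
        }
    }
}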
Related good readings on SQL Azure:

Anton is a newly appointed Azure MVP for Bulgaria.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Glenn Gailey (@ggailey777) announced A Wiki-Based Source for OData Content and Info in a 4/6/2011 post:

While I am (obviously) a big fan of the content on the Open Data Protocol (OData) that we produce (including the WCF Data Services content, Silverlight and Windows Phone topics, and of course the OData.org site), I am always excited to see other folks outside of the OData team creating and publishing content about OData. That is why I was jazzed to see that Chris Woodruff, with Greg Duncan’s help, has put together a wiki site (based on the ScrewTurn wiki platform) that is an homage to all things OData. This is another great source of OData stuff, videos, presentations, podcasts, and the like, but this time posted externally and independently by fans of the OData Protocol.

Feel free to check it out, and post any OData stuff you have to the OData Primer wiki site.

Let the ecosystem grow!


Marcelo Lopez Ruiz announced a Live datajs sample showing OData.read in a 4/6/2011 post:

Brought to you by Taggart Software Consulting, available at http://blog.ctaggart.com/2011/04/odata-from-javascript-netflix-genres.html anywhere browsers are run...

Enjoy!


Sudhir Hasbe announced on 4/5/2011 a one-hour Leverage ParcelAtlas data on DataMarket to create Innovative Solutions Webinar held on 4/6/2011 and available on-demand:

Date/Time: Wed Apr 6, 2011, 8:00 AM, USA Pacific

Duration: 1 Hour

Contact: Academy Live

Cloud technologies give businesses running geospatial software and data a combination that lowers implementation risks and increases margins. ParcelAtlas has changed the very notion of location in the world from a point on a map to a piece of the map itself, the area within its parcel boundary. The implications for transaction accuracy and throughput automation touch nearly every business sector. Rather than adding an NPL to your clients' operations by jumping on an ISP server, jump instead onto Azure, the Microsoft cloud. BSI's patented technology has contributed to BSI being a sustained leader in both the technology and the data-sharing policy development needed to complete SEAMLESS USA, a 3,141-county open-records NPL.

During this webinar you will learn:

  • Business benefits associated with Cloud based GeoSpatial Information Services.
  • How the DataMarket datasets helped establish ParcelAtlas
  • Relationship between Cloud and other GeoSpatial Information Services.
  • ParcelAtlas Database Scope and vast GeoSpatial functionality supported by API.
  • How ParcelAtlas is like subscribing to SEAMLESS USA - Business opportunities in expediting the NPL.
  • How an open records 3,140 County National Parcel Layer will be a wellspring to a 100,000 business ops

[Register to view at] Leverage ParcelAtlas data on DataMarket to create Innovative Solutions (APP26CAL) - EventBuilder.com.


Marcelo Lopez Ruiz reported datajs at MIX - data in an HTML5 world in a 4/5/2011 post:

I'm happy to announce that Asad [Khan] and I will be doing a session on datajs at MIX - Data in an HTML5 World.

We'll be talking about the state of affairs today, how things change with HTML5 capabilities, what datajs is doing about that and how everyone can participate. Drop me a message if you're attending and would like to ask questions, share your opinions or just have a chat.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

Wade Wegner (@WadeWegner, pictured below) added more detail on 4/5/2011 about the Article: Introducing the Windows Azure AppFabric Caching Service he co-authored with Karandeep Anand:

I was excited to receive my MSDN Magazine in the mail this week, as an article that Karandeep Anand, a Principal Group Program Manager on the AppFabric team, and I wrote was finally published.  These things generally take a month or so from the time you finish writing them, which is often challenging in a services world.

You can now read this article online: Introducing the Windows Azure AppFabric Caching Service


Don’t let the title deceive you – it’s not your typical “Introducing…” or “Getting Started” tutorial.  We wanted to delve deeper, and included:

  • Under the Hood
  • Architectural Guidance
  • Setup
  • Using Cache in Your App
  • What’s Next?

Karan and I are also giving a talk at MIX11 entitled Build Fast Web Applications with Windows Azure AppFabric Caching, where we’ll not only build on the foundational principles laid out in this article but also show a lot of compelling demonstrations.

The best part is that you can start using the Caching service today as part of the Community Technology Preview (CTP).  Sign up at: https://portal.appfabriclabs.com/.
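
For readers who haven't tried the CTP yet, here is a minimal sketch of what the caching client API looks like, assuming a cache has been provisioned at the Labs portal, the caching client assemblies are referenced, and a dataCacheClient configuration section points at your cache endpoint. This is an editorial illustration, not code from Wade and Karan's article:

// Minimal sketch of the Windows Azure AppFabric Caching CTP client API.
using System;
using Microsoft.ApplicationServer.Caching;

class CacheQuickStart
{
    static void Main()
    {
        // Reads endpoint and security settings from the dataCacheClient config section.
        var factory = new DataCacheFactory();
        DataCache cache = factory.GetDefaultCache();

        // Put an item with a 10-minute time-to-live, then read it back.
        cache.Put("greeting", "Hello from the cache", TimeSpan.FromMinutes(10));
        var value = (string)cache.Get("greeting");
        Console.WriteLine(value);
    }
}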


Paolo Salvatori of the AppFabric CAT Team described How to use a WCF custom channel to implement client-side caching in a 4/5/2011 post:

Introduction

A couple of months ago Yossi Dahan told me that one of his customers in the UK was searching for a solution to transparently cache the response messages resulting from a WCF call. I immediately thought that this design pattern could be implemented using a custom channel, so I proposed this solution to Yossi and sent him the code of a custom WCF channel that I had built for another project. He created a first prototype to test the feasibility of the outlined approach, and I then extended the component to include support for Windows Server AppFabric Caching and many additional features that I’ll explain in this article.

The idea of using Windows Server AppFabric Caching to manage caching is not new, but all the samples I have seen so far on the internet implement server-side caching using a custom component that implements the standard IOperationInvoker interface. Conversely, my component implements client-side caching using a custom protocol channel. Moreover, my extension library lets you choose between two caching providers:

  • A Web Cache-based provider: this component doesn’t need the installation of Windows Server AppFabric Caching, as it internally uses an instance of the Cache class supplied by ASP.NET.

  • An AppFabric Caching provider: as the name suggests, this caching provider requires and leverages Windows Server AppFabric Caching. To further improve performance, it’s highly recommended that the client application use the local cache to store response messages in-process.

Client-side caching and server-side caching are two powerful and complementary techniques for improving the performance of a server application. Client-side caching is particularly suited to applications, like a web site, that frequently invoke one or more back-end systems to retrieve reference and lookup data, that is, data that is static and changes quite rarely. By using client-side caching you avoid making redundant calls to retrieve the same data, especially when the calls in question take time to complete. My component lets you extend existing server applications with client-side caching capabilities without changing their code to explicitly use the functionality supplied by Windows Server AppFabric Caching.

For more information on how to implement server side caching, you can review the following articles:

WCF Messaging Runtime

Before diving into the code, let’s take a quick look at how WCF messaging actually works. The WCF runtime is divided into two primary layers, as shown in the following picture:

  • The Service Layer aka Service Model defines the mechanisms and attributes used by developers to define and decorate service, message and data contracts.

  • The Messaging Layer is instead responsible for preparing a WCF message for transmission on the send side and producing a WCF message for the dispatcher on the receive side. The messaging layer accomplishes this task using a channel stack, a pipeline of channel components that handle different processing tasks. Each channel stack is composed of exactly one transport channel, one message encoder, and zero or more protocol channels.

[Figure: the WCF runtime layers and channel stack]

It’s the responsibility of the proxy component on the client side and dispatcher component on the service side to mediate and translate between the two layers. In particular, the proxy component transforms .NET method calls into Message objects, whereas the dispatcher component turns WCF Messages into .NET method calls. WCF uses the Message class to model all incoming/outgoing messages within the Messaging Layer. The message represents a SOAP envelope, and therefore it’s composed of a payload and a set of headers. A typical WCF communication can be described as follows:

  1. The client application creates one or more input parameters. Each of these parameters is defined by a data contract.

  2. The client application invokes one of the methods of the service contract exposed by the proxy.

  3. The proxy delivers a WCF Message object to the channel stack.

  4. At this point each protocol channel has a chance to operate on the message before the transport channel uses a message encoder to transmit the final Message as a sequence of bytes to the target service. Each protocol channel can modify the content or the headers of the message to implement specific functionalities or WS-* protocols like WS-AtomicTransaction, WS-Security.

  5. The raw stream of data is transmitted over the wire.

  6. On the service side, the transport channel receives the stream of data and uses a message encoder to interpret the bytes and to produce a WCF Message object that can continue up the channel stack. At this point each protocol channel has a chance to work on the message.

  7. The final Message is passed to the Dispatcher.

  8. The Dispatcher receives the WCF Message from the underlying channel stack, identifies the target service endpoint using the destination address and the Action property contained in the Message, and deserializes the content of the WCF Message into objects.

  9. Finally the target service method is invoked.

After a slightly long-winded but necessary introduction, we are now ready to introduce the problem statement and examine how to leverage my component in three different application scenarios.

Problem Statement

The problem statement that my component intends to solve can be formulated as follows:

  • How can I implicitly cache response messages within a consumer application that invokes one or multiple underlying services using WCF and a Request-Response message exchange pattern without modifying the code of the application in question?

To solve this problem, I created a custom protocol channel that you can explicitly or implicitly use inside a CustomBinding when specifying client endpoints within the configuration file or by code using the WCF API.
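
As a rough illustration of the "by code" option, the following C# sketch composes a CustomBinding programmatically. The caching binding element's type name is assumed here and is shown commented out; check the downloaded source for the exact class exposed by the extension library.

// Illustrative sketch only: composing the channel stack in code instead of config.
// The caching protocol binding element must be the first element in the stack; its
// type name below is hypothetical and should be verified against Paolo's source.
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;

static class CachingBindingFactory
{
    public static Binding Create()
    {
        return new CustomBinding(
            // new ClientCacheBindingElement { Enabled = true },   // hypothetical caching element, first in the stack
            new TextMessageEncodingBindingElement { MessageVersion = MessageVersion.Soap11 },
            new HttpTransportBindingElement());
    }

    // Example usage with the ITestService contract from the Tests project:
    // var factory = new ChannelFactory<ITestService>(Create(),
    //     new EndpointAddress("http://localhost:8732/TestService/"));
    // ITestService client = factory.CreateChannel();
}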

Scenarios

The design pattern implemented by my component can be described as follows: a client application submits a request to a WCF service hosted in IIS/AppFabric and waits for a response. The service invoked by the client application uses a WCF proxy to invoke a back-end service. My custom channel is configured to run first in the channel stack. It checks for the presence of the response message in the cache and behaves accordingly:

  • If the response message is in the cache, the custom channel immediately returns the response message from the cache without invoking the underlying service.

  • Conversely, if the response message is not in the cache, the custom channel calls the underlying channel to invoke the back-end service and then caches the response message using the caching provider defined in the configuration file for the actual call.

First Scenario

The following picture depicts the architecture of the first scenario, which uses the AppFabric Caching provider to cache response messages in the AppFabric local and distributed caches.

[Figure: first scenario, AppFabric Caching provider]

Message Flow

  1. The client application submits a request to a WCF service and waits for a response.
  2. The WCF Service invokes one of the methods exposed by the WCF proxy object.
  3. The proxy transforms the .NET method call into a WCF message and delivers it to the underlying channel stack.
  4. The caching channel checks for the presence of the response message in the AppFabric Caching local cache or on the cache cluster. If the service in question is hosted by a web farm, the response message may have been previously put in the distributed cache by another service instance running on the same machine or on another node of the farm. If the caching channel finds the response message for the actual call in the local or distributed cache, it immediately returns this message to the proxy object without invoking the back-end service.

  5. Conversely, if the response message is not in the cache, the custom channel calls the underlying channel to invoke the back-end service and then caches the response message using the AppFabric Caching provider.

  6. The caching channel returns the response WCF message to the proxy.

  7. The proxy transforms the WCF message into a response object.

  8. The WCF service creates and returns a response message to the client application.

Second Scenario

The following diagram shows the architecture of the second scenario. In this case, the service uses the Web Cache provider, therefore each node of the web farm has a private copy of the response messages.

[Figure: second scenario, Web Cache provider]

Message Flow

  1. The client application submits a request to a WCF service and waits for a response.
  2. The WCF Service invokes one of the methods exposed by the WCF proxy object.
  3. The proxy transforms the .NET method call into a WCF message and delivers it to the underlying channel stack.
  4. The caching channel checks for the presence of the response message in the in-process Web Cache and, if it is present, returns it to the proxy object without invoking the back-end service.

  5. Conversely, if the response message is not in the cache, the custom channel calls the underlying channel to invoke the back-end service and then caches the response message in the Web Cache.

  6. The caching channel returns the response WCF message to the proxy.

  7. The proxy transforms the WCF message into a response object.

  8. The WCF service creates and returns a response message to the client application.

Third Scenario

Finally, the following figure shows how to take advantage of my component in a BizTalk Server application:

[Figure: third scenario, BizTalk Server application]

Message Flow

  1. The client application submits a request to a WCF receive location and waits for a response.

  2. The XML disassembler component within the XmlReceive pipeline recognizes the document type and promotes the MessageType context property.

  3. The Message Agent publishes the document to the MessageBox database.

  4. The inbound request starts a new instance of a given orchestration.

  5. The orchestration posts to the MessageBox database a request message for a back-end service.

  6. The request message is processed by a WCF-Custom send port that is configured to use the CustomBinding. In particular, the binding is composed of a transport binding element, a message encoder, and one or more protocol binding elements. The first of these components is the binding element that at runtime is responsible for creating the ChannelFactory, which in turn creates the caching channel.

  7. The WCF-Custom Adapter transforms the IBaseMessage into a WCF Message and relays it to the channel stack.

  8. The caching channel checks for the presence of the response message in the local or distributed cache. If it is present, it retrieves the response message from the cache and returns it to the WCF-Custom Adapter without invoking the back-end service. Conversely, if the response message is not in the cache, the custom channel calls the underlying channel to invoke the back-end service and then caches the response message in both the local and distributed caches. The WCF-Custom Adapter transforms the WCF Message into an IBaseMessage.

  9. The WCF send port publishes the message to the MessageBox database.

  10. The orchestration consumes the response message and prepares a response message for the client application.

  11. The orchestration publishes the response message to the MessageBox database.

  12. The response message is retrieved by the WCF receive location.

  13. The response message is returned to the client application.

We are now ready to analyze the code.

The Solution

The code was written in C# using Visual Studio 2010 and the .NET Framework 4.0. The following picture shows the projects that make up the WCFClientCachingChannel solution:

[Figure: the WCFClientCachingChannel solution in Solution Explorer]

The following is a brief description of individual projects:

  • AppFabricCache: this caching provider implements the Get and Put methods to retrieve and store data items to the AppFabric local and distributed cache.

  • WebCache: this caching provider implements the Get and Put methods to retrieve and store items in a static in-process Web Cache object.

  • ExtensionLibrary: this assembly contains the WCF extensions to configure, create and run the caching channel at runtime.

  • Helpers: this library contains the helper components used by the WCF extensions objects to handle exceptions and trace messages.

  • Scripts: this folder contains the scripts to create a named cache in Windows Server AppFabric Caching and the scripts to start and stop both the cache cluster and individual cache hosts.

  • Tests: this test project contains the unit and load tests that I built to verify the runtime behavior of my component.

  • TestServices: this project contains a console application that opens and exposes a test WCF service.

Configuration

The following table shows the app.config configuration file of the Tests project.

   1: <?xml version="1.0" encoding="utf-8" ?>
   2: <configuration>
   3:   <!--configSections must be the FIRST element -->
   4:   <configSections>
   5:     <!-- required to read the <dataCacheClient> element -->
   6:     <section name="dataCacheClient"
   7:        type="Microsoft.ApplicationServer.Caching.DataCacheClientSection, Microsoft.ApplicationServer.Caching.Core, Version=1.0.0.0,Culture=neutral, PublicKeyToken=31bf3856ad364e35"
   8:        allowLocation="true"
   9:        allowDefinition="Everywhere"/>
  10:   </configSections>
  11:  
  12:   <dataCacheClient>
  13:     <!-- (optional) Local Cache Settings -->
  14:     <localCache  isEnabled="true" sync="TimeoutBased" objectCount="100000" ttlValue="300" />
  15:     <!-- Security Settings -->
  16:     <securityProperties mode="None" protectionLevel="None" />
  17:     <!-- Hosts -->
  18:     <hosts>
  19:       <host
  20:          name="localhost" cachePort="22233"/>
  21:     </hosts>
  22:     <!-- Transport Properties -->
  23:     <transportProperties connectionBufferSize="131072"
  24:                          maxBufferPoolSize="268435456"
  25:                          maxBufferSize="8388608"
  26:                          maxOutputDelay="2"
  27:                          channelInitializationTimeout="60000"
  28:                          receiveTimeout="600000"/>
  29:   </dataCacheClient>
  30:  
  31:   <system.serviceModel>
  32:  
  33:     <client>
  34:       <endpoint address="http://localhost:8732/TestService/"
  35:                 binding="customBinding"
  36:                 bindingConfiguration="customBinding"
  37:                 contract="TestServices.ITestService"
  38:                 name="CustomBinding_ITestService" />
  39:       <endpoint address="http://localhost:8732/TestService/"
  40:                 binding="basicHttpBinding"
  41:                 behaviorConfiguration="basicHttpBinding"
  41:                 bindingConfiguration="basicHttpBinding"
  42:                 contract="TestServices.ITestService"
  43:                 name="BasicHttpBinding_ITestService" />
  44:     </client>
  45:  
  46:     <bindings>
  47:       <customBinding>
  48:         <binding name="customBinding"
  49:                  closeTimeout="00:10:00"
  50:                  openTimeout="00:10:00"
  51:                  receiveTimeout="00:10:00"
  52:                  sendTimeout="00:10:00">
  53:           <clientCaching enabled="true"
  54:                          header="true"
  55:                          timeout="00:05:00"
  56:                          cacheType="AppFabricCache"
  57:                          cacheName="WCFClientCache"
  58:                          regionName="Messages"
  59:                          maxBufferSize="65536"
  60:                          keyCreationMethod="Simple">
  61:             <operations>
  62:               <operation action="TestAction"
  63:                          keyCreationMethod="Action"
  64:                          cacheType="AppFabricCache"
  65:                          timeout="00:20:00" />
  66:               <operation action="TestSimple"
  67:                          keyCreationMethod="Simple"
  68:                          cacheType="AppFabricCache"
  69:                          timeout="00:20:00" />
  70:               <operation action="TestMD5"
  71:                          keyCreationMethod="MD5"
  72:                          cacheType="WebCache"
  73:                          timeout="00:10:00" />
  74:               <operation action="TestMessageBody"
  75:                          keyCreationMethod="MessageBody"
  76:                          cacheType="AppFabricCache"
  77:                          timeout="00:20:00" />
  78:               <operation action="TestIndexed"
  79:                          keyCreationMethod="Indexed"
  80:                          cacheType="AppFabricCache"
  81:                          timeout="00:20:00"
  82:                          indexes="1,2" />
  83:             </operations>
  84:           </clientCaching>
  85:           <textMessageEncoding messageVersion="Soap11" />
  86:           <httpTransport />
  87:         </binding>
  88:       </customBinding>
  89:       <basicHttpBinding>
  90:         <binding name="basicHttpBinding"  
  91:                  closeTimeout="00:10:00" 
  92:                  openTimeout="00:10:00" 
  93:                  receiveTimeout="00:10:00" 
  94:                  sendTimeout="00:10:00" 
  95:                  allowCookies="false" 
  96:                  bypassProxyOnLocal="false" 
  97:                  hostNameComparisonMode="StrongWildcard" 
  98:                  maxBufferSize="10485760" 
  99:                  maxBufferPoolSize="1048576" 
 100:                  maxReceivedMessageSize="10485760" 
 101:                  messageEncoding="Text" 
 102:                  textEncoding="utf-8" 
 103:                  transferMode="Buffered" 
 104:                  useDefaultWebProxy="true">
 105:         </binding>
 106:       </basicHttpBinding>
 107:     </bindings>
 108:  
 109:     <extensions>
 110:       <behaviorExtensions>
 111:         <add name="cachingBehavior"                type="Microsoft.AppFabric.CAT.Samples.WCF.ClientCache.ExtensionLibrary.ClientCacheBehaviorExtensionElement, Microsoft.AppFabric.CAT.Samples.WCF.ClientCache.ExtensionLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=8f6257ebc688af7c"/>
 112:       </behaviorExtensions>
 113:       <bindingElementExtensions>
 114:         <!-- This item is required to register our custom binding element -->
 115:         <add name="clientCaching"                 type="Microsoft.AppFabric.CAT.Samples.WCF.ClientCache.ExtensionLibrary.ClientCacheBindingExtensionElement, Microsoft.AppFabric.CAT.Samples.WCF.ClientCache.ExtensionLibrary, Version=1.0.0.0, Culture=neutral, PublicKeyToken=8f6257ebc688af7c"/>
 116:       </bindingElementExtensions>
 117:     </extensions>
 118:  
 119:     <behaviors>
 120:       <endpointBehaviors>
 121:         <!-- basicHttpBinding Endpoint Behavior -->
 122:         <behavior name="basicHttpBinding">
 123:           <cachingBehavior enabled="true"
 124:                            header="true"
 125:                            timeout="00:05:00"
 126:                            cacheType="AppFabricCache"
 127:                            cacheName="WCFClientCache"
 128:                            regionName="Messages"
 129:                            maxBufferSize="65536"
 130:                            keyCreationMethod="Simple">
 131:             <operations>
 132:               <operation action="TestAction"
 133:                          keyCreationMethod="Action"
 134:                          cacheType="AppFabricCache"
 135:                          timeout="00:20:00" />
 136:               <operation action="TestSimple"
 137:                          keyCreationMethod="Simple"
 138:                          cacheType="AppFabricCache"
 139:                          timeout="00:20:00" />
 140:               <operation action="TestMD5"
 141:                          keyCreationMethod="MD5"
 142:                          cacheType="WebCache"
 143:                          timeout="00:10:00" />
 144:               <operation action="TestMessageBody"
 145:                          keyCreationMethod="MessageBody"
 146:                          cacheType="AppFabricCache"
 147:                          timeout="00:20:00" />
 148:               <operation action="TestIndexed"
 149:                          keyCreationMethod="Indexed"
 150:                          cacheType="AppFabricCache"
 151:                          timeout="00:20:00"
 152:                          indexes="1,2" />
 153:             </operations>
 154:           </cachingBehavior>
 155:         </behavior>
 156:       </endpointBehaviors>
 157:     </behaviors>
 158:   </system.serviceModel>
 159: </configuration>

Please find below a brief description of the main elements and sections of the configuration file:

  • Lines [4-10] define the config sections. For AppFabric caching features to work, the configSections element must be the first element in the application configuration file. It must contain child elements that tell the runtime how to use the dataCacheClient element.

  • Lines [12-29] contain the dataCacheClient element that is used to configure the cache client. Child elements define cache client configuration; in particular, the localCache element specifies the local cache settings, whereas the hosts element defines the DNS name and port of available cache hosts.

  • Lines [33-44] contain the client section that defines the list of endpoints the test project uses to connect to the test service. In particular, I created two different endpoints to demonstrate how to configure the caching channel:

    • The first endpoint uses the CustomBinding as a recipe to create the channel stack at runtime. The custom binding is composed of three binding elements: clientCaching, textMessageEncoding, and httpTransport. As you can see at lines [47-88], the clientCaching binding element lets you accurately configure the runtime behavior of the caching channel at a general level and on a per-operation basis. Below I will explain in detail how to configure the clientCaching binding element.

    • The second endpoint adopts the BasicHttpBinding to communicate with the underlying service. However, the endpoint is configured to use the cachingBehavior that at runtime replaces the original binding with a CustomBinding made up of the same binding elements and adds the clientCaching binding element as the first element to the binding element collection. This technique is an alternative way to use and configure the caching channel.

  • Lines [109-117] contain the extensions element, which registers the cachingBehavior behavior extension and the clientCaching binding element extension.

  • Lines [122-154] contain the basicHttpBinding endpoint configuration.

As you can see, both the cachingBehavior and clientCaching components share the same configuration, which is defined as follows:

cachingBehavior and clientCaching elements:

  • enabled property: gets or sets a value indicating whether the WCF caching channel is enabled. When the value is false, the caching channel always invokes the target service. This property can be overridden at the operation level, which makes it possible to enable or disable caching on a per-operation basis.

  • header property: gets or sets a value indicating whether a custom header is added to the response to indicate the source of the WCF message (cache or service). This property can be overridden at the operation level.

  • timeout property: gets or sets the default amount of time the object should reside in the cache before expiration. This property can be overridden at the operation level.

  • cacheType property: gets or sets the cache type used to store items. The component currently supports two caching providers: AppFabricCache and WebCache. This property can be overridden at the operation level.

  • cacheName property: gets or sets the name of the cache used for storing messages in the AppFabric distributed cache. This property is used only when the value of the cacheType property is equal to AppFabricCache.

  • regionName property: gets or sets the name of the region used for storing messages in the AppFabric distributed cache. This property is used only when the value of the cacheType property is equal to AppFabricCache. If the value of this property is null or empty, the component will not use any named region.

  • maxBufferSize property: gets or sets the maximum size in bytes for the buffers used by the caching channel. This property can be overridden at the operation level.

  • indexes property: gets or sets a string containing a comma-separated list of indexes of parameters to be used to compute the cache key. This property is used only when the keyCreationMethod = Indexed.

  • keyCreationMethod property: gets or sets the method used to calculate the key for cache items. The component provides 5 key creation methods:

    • Action: this method uses the value of the Action header of the request as key for the response. For obvious reasons, this method can be used only for operations without input parameters.

    • MessageBody: this method uses the body of the request as key for the response. This method doesn’t work when the request message contains DateTime elements that could vary from call to call.

    • Simple: this method creates the string [A](P1)(P2)…(Pn) for an operation with n parameters P1-Pn and Action = A (a standalone sketch of this scheme appears after the operation element properties below).

    • Indexed: this method works like the Simple method, but it lets you specify which parameters to use when creating the key. For example, the Indexed method creates the string [A](P1)(P3)(P5) for an operation with n parameters P1-Pn (n >= 5), Action = A, and the Indexes property equal to “1, 3, 5”. This method can be used to exclude DateTime parameters from the computation of the key.

    • MD5: this method uses the MD5 algorithm to compute a hash from the body of the request message.

operation element:

  • action property: gets or sets the WS-Addressing action of the request message.

  • enabled property: gets or sets a value indicating whether the WCF caching channel is enabled for the current operation identified by the Action property.

  • header property: gets or sets a value indicating whether a custom header is added to the response to indicate the source of the WCF message (cache or service) at the operation level.

  • timeout property: gets or sets the default amount of time the object should reside in the cache before expiration at the operation level.

  • cacheType property: gets or sets the cache type used to store responses for the current operation. The component currently supports two caching providers: AppFabricCache and WebCache.

  • maxBufferSize property: gets or sets the maximum size in bytes for the buffers used by the caching channel for the current operation.

  • indexes property: gets or sets a string containing a comma-separated list of indexes of parameters to be used to compute the cache key for the current operation. This property is used only when the keyCreationMethod = Indexed.

  • keyCreationMethod property: gets or sets the method used to calculate the key for cache items.
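
As a standalone illustration of the Simple key creation scheme described above (this is not Paolo's actual implementation), a key of the form [Action](P1)(P2)…(Pn) can be built like this:

// Illustration of the "Simple" cache key scheme: [Action](P1)(P2)...(Pn).
using System;
using System.Text;

static class CacheKeyBuilder
{
    public static string CreateSimpleKey(string action, params object[] parameters)
    {
        var builder = new StringBuilder();
        builder.Append('[').Append(action).Append(']');
        foreach (object parameter in parameters)
        {
            builder.Append('(').Append(parameter).Append(')');
        }
        return builder.ToString();
    }
}

// For example, CacheKeyBuilder.CreateSimpleKey("TestSimple", 1, "abc") returns "[TestSimple](1)(abc)".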

Paolo continues with several hundred lines of C# source code …

Conclusions

The caching channel shown in this article can be used to extend existing applications that use WCF to invoke one or multiple back-end services and inject caching capabilities. The solution presented in this article can be further extended to implement new channel types other than the IRequestChannel and new key creation algorithms. The source code that accompanies the article can be downloaded here. As always, any feedback is more than welcome!

Authored by: Paolo Salvatori
Reviewed by: Christian Martinez

References

For more information on the AppFabric Caching, see the following articles:

  • “Windows Server AppFabric Cache: A detailed performance & scalability datasheet” whitepaper on Grid Dynamics.
  • “Windows Server AppFabric Caching Logical Architecture Diagram” topic on MSDN.
  • “Windows Server AppFabric Caching Physical Architecture Diagram” topic on MSDN.
  • “Windows Server AppFabric Caching Deployment and Management Guide” guide on MSDN.
  • “Lead Hosts and Cluster Management (Windows Server AppFabric Caching)” topic on MSDN.
  • “High Availability (Windows Server AppFabric Caching)” topic on MSDN.
  • “Security Model (Windows Server AppFabric Caching)” topic on MSDN.
  • “Using Windows PowerShell to Manage Windows Server AppFabric Caching Features” topic on MSDN.
  • “Expiration and Eviction (Windows Server AppFabric Caching)” topic on MSDN.
  • “Concurrency Models (Windows Server AppFabric Caching)” topic on MSDN.
  • “Build Better Data-Driven Apps With Distributed Caching” article on the MSDN Magazine.
  • “AppFabric Cache – Peeking into client & server WCF communication” article on the AppFabric CAT blog.
  • “A Configurable AppFabric Cache Attribute For Your WCF Services” article on the AppFabric CAT blog.
  • “Guidance on running AppFabric Cache in a Virtual Machine (VM)” article on the AppFabric CAT blog.
  • “Tagging Objects in the AppFabric Cache” article on Stephen Kaufman’s WebLog.
  • “Pre-Populate the AppFabric Cache” article on Stephen Kaufman’s WebLog.


The Identity and Access Team posted AD FS 2.0 Step-by-Step Guide: Federation with IBM Tivoli Federated Identity Manager to the Claims-Based Identity Blog on 4/4/2011:

We have published a step-by-step guide on how to configure AD FS 2.0 and IBM Tivoli Federated Identity Manager to federate using the SAML 2.0 protocol. You can view the guide as a web page and soon also in Word and PDF formats. This is the fifth in a series of these guides; the guides are also available on the AD FS 2.0 Step-by-Step and How-To Guides page.

Isn’t this team also known as Venice? My Windows Azure and Cloud Computing Posts for 3/30/2011+ post contains the following excerpt from a Microsoft job description:

We are the Venice team and are part of the Directory, Access and Identity Platform (DAIP) team which owns Active Directory and its next generation cloud equivalents. Venice's job is to act as the customer team within DAIP that represents the needs of the Windows Azure and the Windows Azure Platform Appliance teams. We directly own delivering in the near future the next generation security and identity service that will enable Windows Azure and Windows Azure Platform Appliance to boot up and operate.


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Microsoft PR announced Microsoft and Toyota Announce Strategic Partnership on 4/6/2011:


Companies will build telematics services – the fusing of telecommunications and information technologies in vehicles – on the Windows Azure platform.

News: Read the Press Release

Webcast: Watch the Announcement

Clips: Announcement Excerpts

According to Wikipedia’s entry on telematics:

Telematics typically is any integrated use of telecommunications and informatics, also known as ICT (Information and Communications Technology). Hence the application of telematics is with any of the following:

[Image caption: Lexus Gen V navigation system]

  • The technology of sending, receiving and storing information via telecommunication devices in conjunction with effecting control on remote objects.
  • The integrated use of telecommunications and informatics, for application in vehicles and with control of vehicles on the move.
  • Telematics includes but is not limited to Global Positioning System technology integrated with computers and mobile communications technology in automotive navigation systems.
  • Most narrowly, the term has evolved to refer to the use of such systems within road vehicles, in which case the term vehicle telematics may be used.

In contrast telemetry is the transmission of measurements from the location of origin to the location of computing and consumption, especially without effecting control on the remote objects. Telemetry is typically applied in testing of flight objects.

Although the majority of devices that integrate telecommunications and information technology are not vehicles but rather mobile phones and the like, their use is not included in telematics.


Mary Jo Foley reported Microsoft and Toyota to build telematics platform based on Microsoft cloud in a 4/6/2011 post to ZDNet’s All About Microsoft blog:

Microsoft and Toyota announced on April 6 a partnership via which Toyota’s next-generation telematics platform will be built using Microsoft Windows Azure and SQL Azure.

The pair are committing to invest $12 million in Toyota Media Service Co., the Toyota subsidiary that provides digital information systems to Toyota automotive. The new telematics platform will encompass GPS, energy management and multimedia technologies, according to Microsoft’s press release.

The first cars to take advantage of the new platform will be Toyota’s electric and plug-in hybrid vehicles in 2012. Toyota’s longer-term goal is to establish a complete global cloud platform by 2015 that will provide telematics services to its customers globally.

Microsoft is positioning this deal as more than a simple Azure customer win. CEO Steve Ballmer is participating in an hour-long Webcast today at 4 p.m. ET to go over terms of the deal.

I’m wondering how Microsoft’s decision to refocus its Hohm energy-management application from a home-energy-monitoring app to one that will manage electrical-vehicle charging at home plays into this. I’ve asked Microsoft for comment and will update if/when I hear back.

Update: During the Webcast, Ballmer emphasized power-management and remote administration of cars from cell phones as examples of the kinds of telematics services that Azure will enable.


The Windows Azure Team posted Microsoft and Toyota Announce Strategic Partnership To Build Next-Generation Telematics on Windows Azure on 4/6/2011:

We're excited to share with you that the Windows Azure platform is at the center of an announcement today by Microsoft and Toyota Motor Corp. (TMC).  The two companies have formed a strategic partnership to build a global platform for TMC's next-generation telematics services based on Windows Azure.  As part of the partnership, the two companies will participate in a 1 billion yen (approximately $12 million) investment in Toyota Media Service Co., a TMC subsidiary that offers digital information services to Toyota automotive customers. The two companies will help develop and deploy telematics applications on Windows Azure starting with TMC's electric and plug-in hybrid vehicles in 2012. TMC's aim is to establish a complete global cloud platform by 2015 that will provide affordable and advanced telematics services to Toyota automotive customers around the world.

Windows Azure will help TMC integrate telematics with its smart-grid activities aimed at achieving a low-carbon society through efficient energy use. TMC is conducting trials in Japan of its "Toyota Smart Center" pilot program, which links people, automobiles and homes for integrated control of energy consumption. TMC believes that, as electric and plug-in hybrid vehicles become more popular, such systems will rely more heavily on telematics services for achieving efficient energy management, and Windows Azure is a crucial foundation for providing these services.

Microsoft has a long history of delivering platforms and services to the automotive market, including in-car infotainment systems built on the Windows Embedded Automotive platform, in-car mapping services with Bing and TellMe, and many other consumer solutions. Now, Microsoft's Windows Azure platform expands the reach of automotive technology into the cloud.

For more information on the announcement, please read the press release.


Dom Green (@domgreen) described Azure SDK in built MSBuild Packaging Target in a 4/6/2011 post:

image When running an automated build to deploy your application into Windows Azure you need to at some point run a packaging script, this will take all the components of your application and turn them into a .cspkg file ready to be deployed to Azure with the correct configuration.

I would normally have done this using the CSPack tool and something like this:

cspack #{CSDEF}
        /role:WorkerRole1;#{WORKER_ROLE_DIR};#{WORKER_ROLE_DLL}
        /role:WebRole1;#{WEB_ROLE_DIR}
        /sites:WebRole1; Web;c:\web
        /out:#{CSPKG_FILE}

This shows a CSPack command creating a web role and a worker role for an application (values tokenized).

This gets annoying to keep updating and can get quite confusing, especially when the SDK upgrades and you need to figure out what changes to make to your packaging command to ensure the .cspkg file is created correctly.

However, there is good news: if you are using MSBuild you can call upon the build targets that are part of the Windows Azure SDK and let them do all of the packaging for you.

The targets are typically located here:

C:\Program Files (x86)\MSBuild\Microsoft\Cloud Service\1.0\Visual Studio 10.0\Microsoft.CloudService.targets

You can then add these targets to your own MSBuild targets like so:

<PropertyGroup>
  <CloudExtensionsDir Condition=" '$(CloudExtensionsDir)' == '' ">C:\Program Files (x86)\MSBuild\Microsoft\Cloud Service\1.0\Visual Studio 10.0\</CloudExtensionsDir>
</PropertyGroup>
<Import Project="$(CloudExtensionsDir)Microsoft.CloudService.targets" />

Then call on the following targets:

    • CorePublish – This packages up your application as a .cspkg file to be published into the public cloud.
    • CorePackageComputeService – This packages the application so that it can be run on the local development fabric.

These two targets use the CSPack command under the hood but allow you to defer the responsibility to the Azure targets rather than having to write the CSPack command yourself.

Here is how I have used these targets on a previous project:

<Target Name="CloudDeploy" DependsOnTargets="CorePublish"
        Condition=" '$(DeployType)' == 'Cloud' ">
    ...
</Target>

<Target Name="LocalDeploy" DependsOnTargets="CorePackageComputeService"
        Condition=" '$(DeployType)' == 'Local' ">
    ...
</Target>
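
Assuming these targets live in a project file named build.proj (the file name is illustrative), the deployment type can then be chosen when invoking MSBuild:

msbuild build.proj /t:CloudDeploy /p:DeployType=Cloud
msbuild build.proj /t:LocalDeploy /p:DeployType=Local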


Brian Hitney explained Getting a Windows Azure account for Rock, Paper, Azure in a 4/6/2011 post:

If you’re interested in getting a Windows Azure account to play in Rock, Paper, Azure (RPA), there are a few options available to you, from least painful to most painful (in my opinion, anyway):

Method 1 – Windows Azure Pass

The way most people are getting an account is through the Windows Azure Pass program (using code PLAYRPA).  More details can be found on the Get Started page under step 1.  But this certainly isn’t the only way to get an account, and – for a few of you – it might not be possible.  The Azure Pass is limited to one Live ID, so if you got an account through the Azure Pass program, say, six months ago, you can’t get another one.  (I won’t say anything if you just sign up for another Live ID.)

Method 2 – Windows Azure Trial Account

Sign up for the Windows Azure Free Trial.   This gives you 750 hours of an extra small compute instance, and 25 hours of a small compute instance.  You do need a credit card to cover overages.  Note: the Bot Lab project by default is set up as a small compute instance.  If you go this route, I highly recommend you change the Bot Lab to be an Extra Small instance.  You can do this by double-clicking the role and changing the VM size:

[Screenshot: role properties showing the VM size setting]

Method 3 – MSDN Subscriptions

Have an MSDN Premium or Ultimate subscription?   You already have account hours you can use.  Log into your MSDN account for more information.   This step does require a credit card (or other billing arrangement) to handle overages, but you are not billed as long as you stay within plan.  As of the time of this writing, please note that Extra Small compute instances are beta and not included in the MSDN hours – so be sure to stick with a small instance.  As usual, we recommend taking down deployments once you’re done to avoid wasting compute time.

Method 4: Pay as You Go Specials

Check out the current offers.  There are a few different options based on your needs (and some are available specifically for partners).  The introductory special is the best way to get started, but if you’re using Windows Azure consistently, the Windows Azure Core offers a great value.  If you’re just interested in playing the game and willing to pay, or aren’t able to receive other offers for some reason, deploying the Bot Lab as an Extra Small instance costs $0.05 per hour.  If you were to play during the week and leave the Bot Lab deployed 24 hours a day, you’d be looking at roughly $5.  (If you only code in the evenings for a few hours, pulling down the deployment overnight and when not in use will bring that down substantially.)

See you on the battlefield!


The Windows Azure Team reported New Content Available: Windows Azure Code Quick Start Topics on 4/5/2011:

Windows Azure Code Quick Start Topics are now available on MSDN to take you through samples of C# code that demonstrate basic ways of interfacing with Windows Azure. You'll find code and detailed step-by-step instructions for storing a file in Windows Azure storage, creating and deploying a WCF service, and creating a client application that uses a WCF service. The important aspects of each piece of code are explained and called out so that, with a few changes, you can copy and use the code as is.

To work through the quick start topics completely, you will need an edition of Microsoft Visual Studio 2010 and the Windows Azure Tools for Microsoft Visual Studio. To see your code working in the cloud, you will also need an active subscription for Windows Azure. However, you can work through some of the quick start topics using only the development environment. Go to http://www.microsoft.com/windowsazure/getstarted/default.aspx to download what you need to get started.

Check out all the code quick starts, and find out how code works on Windows Azure. And watch for more quick starts coming soon.


Mark Rendle (@markrendle) asserted DataCenter/Region choice matters in a 4/5/2011 post:

British author’s apology: I spell DataCentre wrong in this article as a sacrifice to the search engine gods.

image Last week I was investigating a performance issue with a brand-new Orchard CMS site hosted on Windows Azure. The home-page, which isn’t particularly complicated, was taking several seconds to load, and the problem was clearly with the page generation rather than download speeds. The creator of the site was blaming Orchard, saying that it was equally slow on the development platform, but I know that several high-profile Microsoft sites run on Orchard, including those for NuGet and MIX, so that seemed an unlikely culprit.

I got the source code for the site and checked it over, and couldn’t see anything untoward.

So I took a look at the deployment to see if the role (singular, still in dev mode) was recycling due to an error, or some such thing. I had been poking around for quite a while, and was in the process of uploading a package to the staging slot with a definition change to use a Medium instance instead of a Small one, when I noticed that the Hosted Service and Windows Azure Storage account were located in the West Europe region, while the SQL Azure server was in North Europe.

Now, this wasn’t a conscious decision on the part of the guy who created the site. The subscription he was given to deploy on already had a SQL Azure server running, so he put the database on there without paying much attention to where it was. And then he just randomly selected West Europe for the Hosted Service.

I know some people do this because they think it strikes the best balance between Europe and the United States. Fun fact: the North Europe DataCenter is 700 miles west of the West Europe DataCenter.

That’s 700+ miles of cables, switches, routers and suchlike between the server running the site and the one(s) running the database. The SQL Server protocol was never really designed for wide-area networking, so there’s a fair amount of latency on a request across the internet. And like most CMSes, Orchard is quite chatty with its database, so that latency gets multiplied up to a very noticeable level very quickly.

I set up a new SQL Azure instance in the West Europe region and copied the Orchard database across (by scripting it ‘with data’ and running the script against the new database). Thanks to the Azure module for Orchard, I was able to point the site at the new SQL Azure server by simply uploading a modified settings.txt file to Blob storage and rebooting the instance. And as if by magic, the site started responding in the sub-second times that we expect from Cloud-hosted .NET applications.

So the moral of this story is: sometimes, you really should put all your eggs in one basket.

Mark is a newly minted Windows Azure MVP in the UK.


Avkash Chauhan explained WCF Web Role Deployment with fixed IP Address/Port will cause error after deployment in a 4/5/2011 post:

If you deploy a WCF Web Role-based Windows Azure application, it is wise to change the "Web Server" setting to use "Auto-assign Port" instead of a fixed IP address and port. If you deploy the application with a fixed IP address/port, it will not work in the cloud and will return errors. In my case the error was logged as below:

"Role instances recycled for a certain amount of times during an update or upgrade operation. This indicates that the new version of your service or the configuration settings you provided when configuring the service prevent role instances from running. The most likely reason for this is that your code throws an unhandled exception. Please consider fixing your service or changing your configuration settings so that role instances do not throw unhandled exceptions. Then start another update or upgrade operation. Until you start another update or upgrade operation, Windows Azure will continue trying to update your service to the new version or configuration you provided Dr. Watson Diagnostic ID ...”

The following screen shows how it should be set for "Auto-assign Port":


Brian Hitney claimed We’re Ready to Rock… with Rock, Paper, Azure in a 4/4/2011 post:

image… but there are some changes to Rock, Paper, Azure we need to discuss based on our test week.   First things first:  I highly recommend you download the latest Bot Lab and MyBot project here

For the most part, old bots will continue to work, but since this was a test round anyway, I highly recommend you upgrade – there are a few edge cases that might break bots, or cases where a bot doesn’t do what you think it would.  Let’s discuss all of the changes we’ve made.  I’ll save the best for last.

CanBeat and LosesTo – Don’t Use

We re-scoped the CanBeat and LosesTo methods on the Move class from public to internal.  Most people weren’t using them, they weren’t really intended for outside consumption, and we received a couple of questions about their behavior.  The reason for the change is this – and some of it is historical: the way the engine works, it goes through a series of steps to determine a winner.  It first looks at whether one move can beat another:

internal bool CanBeat(Move move)
{
    if (!ValidMoves.Contains(GetType().Name))
        return false;

    if (!ValidMoves.Contains(move.GetType().Name))
        return true;

    return CanBeatLegalMove(move);
}

So the RockMove implementation of CanBeatLegalMove, for example, returns true if the other move is Scissors – that check lives in the RockMove class itself.  The original game had moves in different assemblies for multiple rounds, so the win-determination path had to waterfall through and examine the rules of the game – such as whether or not you have any dynamite remaining.  The short answer is that there are cases where these methods return unexpected results, so it’s best not to expose them.  Performance-wise, it’s better to have your own implementation anyway.  If your bot is currently using them, sorry to say this is a breaking change.
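
If you need that kind of check in your own bot, here is a minimal do-it-yourself sketch.  It compares move type names the same way the engine snippet above does; the move class names (RockMove, PaperMove, ScissorsMove, DynamiteMove, WaterBalloonMove) and the rules assumed (dynamite beats the three basic moves, only the water balloon beats dynamite, and the basic moves beat the water balloon) are assumptions – double-check them against the current Bot Lab before relying on this.

// Sketch only: a bot-side "does myMove beat theirMove?" helper.
// Requires: using System; and using System.Collections.Generic;
private static readonly Dictionary<string, string[]> Beats = new Dictionary<string, string[]>
{
    { "RockMove",         new[] { "ScissorsMove", "WaterBalloonMove" } },
    { "PaperMove",        new[] { "RockMove", "WaterBalloonMove" } },
    { "ScissorsMove",     new[] { "PaperMove", "WaterBalloonMove" } },
    { "DynamiteMove",     new[] { "RockMove", "PaperMove", "ScissorsMove" } },
    { "WaterBalloonMove", new[] { "DynamiteMove" } }
};

private static bool Defeats(Move myMove, Move theirMove)
{
    string mine = myMove.GetType().Name;
    string theirs = theirMove.GetType().Name;
    return Beats.ContainsKey(mine) && Array.IndexOf(Beats[mine], theirs) >= 0;
}

For example, Defeats(new RockMove(), opponent.LastMove) tells you whether rock would have beaten the opponent’s previous throw – just remember LastMove is null on the first move and can be an ExceptionMove.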

Game Summary

At the top of the log file, you’ll now see a game summary that is helpful for info-at-a-glance.  It looks like this:

Game Summary        bhitney      JohnSmith
Points              953 (L)      1000 (W)
Dynamite Used       45           100
Time Deciding (s)   0.04885      0.06507

Time Deciding shows how much time your bot has used for the entire round.  It’s interesting to see how your bot compares to others, but it’s also useful in relation to the timeouts discussed below.

Timeout

The server has some strict timeouts – we’re always tweaking the exact logic, and we’re hesitant to give too much information because, in a multithreaded environment, the time slice each bot gets isn’t exclusive.  But your bot has a maximum of 1 second (subject to change) per match.  Typically that is no problem, as you can see in the game summary.  Once your bot crosses the 1-second limit, it stops making moves (essentially forfeiting the rest of its moves).
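
A simple defensive pattern is to track your own cumulative decision time and fall back to a cheap move as you approach the budget.  The sketch below is purely illustrative – the MakeMove signature, the GameRules parameter and the DecideCarefully helper are stand-ins for whatever your MyBot template actually exposes, and the 800 ms threshold is an arbitrary safety margin, not an official number.

// Illustrative only: bail out to a cheap move when close to the ~1-second budget.
private static readonly System.Diagnostics.Stopwatch DecisionClock =
    new System.Diagnostics.Stopwatch();

public Move MakeMove(Player you, Player opponent, GameRules rules) // hypothetical signature
{
    DecisionClock.Start();
    try
    {
        if (DecisionClock.ElapsedMilliseconds > 800) // arbitrary safety margin
        {
            return new RockMove(); // cheap fallback; pick whatever default you like
        }

        return DecideCarefully(you, opponent, rules); // your real (possibly expensive) logic
    }
    finally
    {
        DecisionClock.Stop();
    }
}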

Server Messaging

In the log file, you’ll now see clearer messaging when something notable happened – for example, if a player threw dynamite but was out of dynamite, the log will indicate this next to the score. 

ExceptionMove

We standardized the error moves to “ExceptionMove.”  If a player times out or throws an exception, their last move will be an ExceptionMove.  This is visible in the log, and detectable like so:

if (opponent.LastMove is ExceptionMove)
{
    // player threw an exception or timeout
}

This is a good time to mention that the opponent’s LastMove is null on the first move.  Also, if a player throws “illegal dynamite” (that is, throws dynamite when he/she is out), their last move will still show as Dynamite.  It’s up to you to figure out that they were out of dynamite!
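
One way to figure it out is to count the opponent’s dynamite throws yourself.  The sketch below is an example built on assumptions: it infers dynamite use from the opponent’s LastMove type name and assumes each player starts a match with 100 sticks (the allotment the game summary above suggests) – verify that number against the current rules.

// Sketch: infer whether the opponent could still legally throw dynamite.
private const int AssumedDynamitePerMatch = 100; // assumption based on the game summary
private int opponentDynamiteSeen = 0;

// Call this once per move, before deciding your own throw.
private void TrackOpponent(Player opponent)
{
    // LastMove is null on the very first move of a match.
    if (opponent.LastMove != null && opponent.LastMove.GetType().Name == "DynamiteMove")
    {
        opponentDynamiteSeen++;
    }
}

private bool OpponentLikelyOutOfDynamite()
{
    return opponentDynamiteSeen >= AssumedDynamitePerMatch;
}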

Fresh Upgrade

One of Azure’s strengths is flexibility in deployment – when deploying, you have the option of an in-place upgrade, or you can deploy to staging and do a VIP (virtual IP) swap that lets you seamlessly move a new deployment into production. 

Because of the significant changes to our Bot Lab and new features, we recommend deleting the old deployment first, and then deploying the new package and not doing an in-place upgrade.

Player Log

And now the best part!  There’s a private player log you can write to.  This information will be appended to the bottom of the game log – but it’s only visible to you.  We’re still testing this feature, but it should be helpful in isolating issues with your bot.  To write to your log, you can do something like:

if (!you.HasDynamite)
{
    you.Log.AppendLine("I'm out of dynamite!");

    // opponent's log is always null!
    if (opponent.Log == null) { }
}

Note:  Your opponent’s log is always null, so don’t try it!  Also, logs are limited to a total of 300k – this should be more than enough, but once the log passes this mark, the engine won’t write to the log (but no exception is raised).


<Return to section navigation list> 

Visual Studio LightSwitch

Ravi Eda described How to Elevate Permissions in Server Code (Ravi Eda) in a 4/7/2011 post to the Visual Studio LightSwitch Team Blog:

This article introduces the concept of permission elevation in LightSwitch. It presents a scenario in the shipping department of a store to show the need for elevation of privileges. Code samples to add, remove, and control the scope of elevation are included.

Permission elevation in server code is a new feature introduced in LightSwitch Beta 2. This feature allows restricting access to entities when they are manipulated through the UI, while still allowing changes via a process that runs on the server on behalf of the end user.

Most business applications support multiple users. These users are categorized using roles assigned to them. Permissions assigned to a role restrict access to portions of the application. Often, a user requires elevated permissions to complete certain tasks. In such cases, granting the user higher privileges and promptly removing them as soon as the task is complete can be tricky.

For example, consider a business application designed for a small or medium-sized department store. In the shipping department, the employee who receives the shipment will have access to the Receivables screen. This screen allows the user to input the type and quantity of items received, along with other logistics. This user will not have access to any other screens, such as Inventory, Billing or Customer details. Generally, based on the item and quantity received, there will be additional tasks that need to be performed. For instance:

  1. If the shipment contains item ‘A’ of quantity greater than 100 then update the Inventory table immediately.
  2. Increase the price of item ‘B’ by 0.5% when the quantity received is less than 15.
  3. Send an Email to the first customer who is in the waitlist for item ‘C’.

To perform the three additional tasks there are two solutions. First, grant the person receiving the shipment access to all three screens. Second, have a user with access to the Inventory, Billing and Customer details log in to the application and do the necessary operations. Both of these solutions have a disadvantage. The first makes the application less secure and defeats the purpose of roles. The second comes at the cost of additional resources, i.e., the employee has to find a co-worker who has access to the Inventory, Billing and Customer screens.

A better solution would be to elevate the user’s access level temporarily on the server. In the shipping department example, the employee finishes entering the details of the items received and clicks save. Within this save operation, the system grants the additional permissions that the user requires to perform the other three tasks. This elevation of privileges during save is possible in LightSwitch thanks to the availability of various server-pipeline interception methods.

A developer of a LightSwitch application can elevate permissions within the server-pipeline logic and can control the scope of the elevation within the save operation. Once the save operation concludes, the server state vanishes, so there is no way to make an elevation last longer than one save operation.

Add Or Remove Permission:

The following two APIs allow adding and removing of permissions on the current user:

  • AddPermissions(params string[] permissions)
  • RemovePermissions(params string[] permissions)

Here are some examples that show the usage of these APIs:

Application.Current.User.AddPermissions(Permissions.InventoryMaster, Permissions.CustomerSupport);
Application.Current.User.RemovePermissions(Permissions.InventoryMaster, Permissions.GenerateBill);
Application.Current.User.AddPermissions(Permissions.AllPermissions);
Application.Current.User.RemovePermissions(Permissions.AllPermissions);

Both APIs return an IDisposable, so you can call Dispose (or wrap the call in a using block) to end the scope of the elevation. Within the scope of the elevation, all calls to HasPermission() and DemandPermission() will use the new set of permissions.
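
For example, here is a minimal sketch of scoping an elevation with a using block, reusing the sample’s Permissions.InventoryMaster from the examples above:

// Dispose() at the end of the using block removes the elevation again.
using (Application.Current.User.AddPermissions(Permissions.InventoryMaster))
{
    // Within this block, HasPermission(Permissions.InventoryMaster) returns true,
    // so server code that requires that permission can run on the user's behalf.
}
// Outside the block the extra permission is gone.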

Where and Where Not To Elevate Privileges:

LightSwitch allows permission elevation in save-related methods that run on the server. Here is a list of such methods:

Data Source Methods:

  • SaveChanges_Executing
  • SaveChanges_Executed

General Methods:

  • <Table Name>_Deleted
  • <Table Name>_Deleting
  • <Table Name>_Inserted
  • <Table Name>_Inserting
  • <Table Name>_Updated
  • <Table Name>_Updating

Permission elevation outside save-related methods, or in any client-side methods, will cause a “System.InvalidOperationException: Permissions can only be modified from within save-related user methods”. Permission elevation inside query operations will cause the same exception. Here is a list of methods where permission elevation is not allowed:

General Methods:

  • <Table Name>_Created

Security Methods:

  • <Table Name>_CanDelete
  • <Table Name>_CanExecute
  • <Table Name>_CanInsert
  • <Table Name>_CanRead
  • <Table Name>_CanUpdate

Query Methods:

  • <Table Name>_All_ExecuteFailed
  • <Table Name>_All_Executed
  • <Table Name>_All_Executing
  • <Table Name>_All_PreprocessQuery
  • <Table Name>_Single_ExecuteFailed
  • <Table Name>_Single_Executed
  • <Table Name>_Single_Executing
  • <Table Name>_SingleOrDefault_ExecuteFailed
  • <Table Name>_SingleOrDefault _Executed
  • <Table Name>_SingleOrDefault _Executing
  • <Table Name>_SingleOrDefault_PreprocessQuery
  • <Table Name>_Single_PreprocessQuery

Example: Department Store Application

To demonstrate the department store scenario mentioned earlier, a simple LightSwitch application has been created. This application consists of four tables: Inventory, Customer, Billing and Receivable. Corresponding to these tables there are four screens. To control access to these screens, four permissions have been defined. Figure 1 shows the Access Control tab where the permissions are created.

clip_image002

Figure 1 Define Permissions to Control Access to Screens

Control Access to Data:

Using these permissions, the developer can write code that controls access to these four tables. In the shipping department scenario, the employee should not have permission to update the Inventory and Billing tables, and should not be able to read customer information from the Customer table. The following code implements this logic:

partial void Inventories_CanUpdate(ref bool result)
{
    result = Application.Current.User.HasPermission(Permissions.InventoryMaster);
}

partial void Billings_CanUpdate(ref bool result)
{
    result = Application.Current.User.HasPermission(Permissions.GenerateBill);
}

partial void Customers_CanRead(ref bool result)
{
    result = Application.Current.User.HasPermission(Permissions.CustomerSupport);
}

Control Access to UI:

The shipping department employee should have access only to the Receivable screen. The other three screens should not be available in the navigation menu. This logic is implemented through the code shown below:

public partial class Application
{
    partial void ReceivableScreen_CanRun(ref bool result)
    {
        result = Application.Current.User.HasPermission(Permissions.ReceiveShipment);
    }

    partial void InventoryScreen_CanRun(ref bool result)
    {
        result = Application.Current.User.HasPermission(Permissions.InventoryMaster);
    }

    partial void BillingScreen_CanRun(ref bool result)
    {
        result = Application.Current.User.HasPermission(Permissions.GenerateBill);
    }

    partial void CustomerScreen_CanRun(ref bool result)
    {
        // To access Customer Screen the user should have 'CustomerSupport' permission and 
        // should NOT have 'InventoryMaster' and 'ReceiveShipment' permissions
        if ((Application.Current.User.HasPermission(Permissions.CustomerSupport)) &&
            !(Application.Current.User.HasPermission(Permissions.InventoryMaster) && 
            Application.Current.User.HasPermission(Permissions.ReceiveShipment)))
        {
            result = true;
        }
        else
        {
            result = false;
        }
    }
}

Grant the ‘ReceiveShipment’ permission for debugging, as shown in Figure 1, and run the application (F5). Now only the ‘Receivable Screen’ is displayed, as shown in Figure 2:

image

Figure 2 Shipping Department Employee can access Receivable Screen

Permission Elevation in Server Code:

The shipping department employee enters the shipment details and clicks save. During this save, the other three tasks need to be completed. This can be achieved by writing permission elevation code inside the ‘Receivables_Inserting’ method, as shown below:

partial void Receivables_Inserting(Receivable entity)
{
    EntityChangeSet itemsReceived = this.DataWorkspace.ApplicationData.Details.GetChanges();

    foreach (Receivable received in itemsReceived.AddedEntities)
    {
        //  1.   If the shipment contains Item A of quantity greater than 100 
        //       then update the Inventory table immediately.
        if (received.ItemName == "Item A" && received.UnitsReceived > 100)
        {
            if (!Application.Current.User.HasPermission(Permissions.InventoryMaster))
            {
                // Grant 'InventoryMaster'.
                Application.Current.User.AddPermissions(Permissions.InventoryMaster);
                
                // Locate 'Item A' record in the Inventory table.
                var invRec = (from p in this.DataWorkspace.ApplicationData.Inventories 
                              where p.ItemName == "Item A" select p).SingleOrDefault();

                // Update Inventory.
                invRec.UnitsInStock += received.UnitsReceived;
            }
        }

        //  2.    Increase the price of Item B by 0.5% when the quantity received is less than 15.
        if (received.ItemName == "Item B" && received.UnitsReceived < 15)
        {
            if (!Application.Current.User.HasPermission(Permissions.GenerateBill))
            {
                // Grant 'GenerateBill'.
                Application.Current.User.AddPermissions(Permissions.GenerateBill);

                // Locate 'Item B' record.
                var priceRec = (from b in this.DataWorkspace.ApplicationData.Billings 
                                where b.ItemName == "Item B" select b).SingleOrDefault();

                // Increase the price 1.05 times.
                priceRec.Price *= 1.05M;
            }

        }

        //  3.    Send an Email to the first customer who is in the waitlist for Item C.
        if (received.ItemName == "Item C" && received.UnitsReceived > 0)
        {
            // To access Customer Screen the user should have 'CustomerSupport' permission and
            // should NOT have 'InventoryMaster' and 'ReceiveShipment' permissions.

            // Just remove all permissions.
            Application.Current.User.RemovePermissions(Permissions.AllPermissions);

            // Grant only the permission needed to access Customers table.
            Application.Current.User.AddPermissions(Permissions.CustomerSupport);

            // Extract the email address of the first customer who has Item C in the waitlist.
            var custRec = (from c in this.DataWorkspace.ApplicationData.Customers 
                           where c.WaitlistItems.Contains("Item C") select c).FirstOrDefault();

            // Send Email to the customer.
            SendWaitlistEmail(custRec.EmailAddress);
        }
    }
}

The above code runs on the server within the save operation. On the client side, the shipping department employee only ever had the ‘ReceiveShipment’ permission; the elevation happened only on the server. When the save pipeline ends, the permission elevation also ceases.

The shipping department scenario demonstrated the need for elevation of privileges. LightSwitch’s permission elevation concept allowed restricting access to the Inventory, Billing and Customer tables but still allowed changes via the elevation that happened through the server code. Thus, LightSwitch gives the developer a simple and efficient way to control access in a business application.


Andy Kung posted Course Manager Sample Part 1 – Introduction (Andy Kung) to the Visual Studio LightSwitch Team Blog on 4/7/2011:

With the release of Visual Studio LightSwitch Beta 2, the team also published a sample project called “Course Manager.”

This sample is designed for the Office of the Registrar of a fictional art school. It tracks students, instructors, courses, enrollments, etc. Office staff can use Course Manager to browse the course catalog, create new students, and register courses for existing students.

Course Manager aims to showcase what you can achieve with LightSwitch out of the box. It demonstrates how you can customize the intelligent defaults LightSwitch gives you to suit your business needs and workflows.

In this introductory post, we will focus on learning what Course Manager does from the end-user’s perspective. We will walk through some basic user scenarios and features. The team will publish a series of supplement posts to go into development details.

Run the Sample

Before we start, make sure you have Visual Studio LightSwitch Beta 2 properly installed. Course Manager is available in both VB and C#. You can download the projects from MSDN Code Sample Gallery. Unzip the project file, double click on the solution to launch Visual Studio LightSwitch. After the project is loaded, hit F5 to run the application!

clip_image002

Home Screen

Course Manager now launches and shows the home screen. The first thing you will notice is that Course Manager is a Desktop application with Windows authentication. You can see your name displayed on the welcome screen as well as on the lower right corner of the application. The welcome screen displays the school logo, title, welcome text (with different font styles), and some entry points (with big pretty icons) to common screens. Since you (as the developer) are in debug mode, you also have access to all the permission-based screens, such as the screens under “Administration” menu group.

clip_image004

Browse Course Catalog

A student may call the office and ask for information about a particular course. A staff member needs to be able to quickly find the course the student is looking for and answer questions about availability, instructors, meeting times, etc.

To browse the course catalog, open the “Course Catalog” screen from the menu. This screen shows a list of course sections. A course can have many sections (e.g., the Basic Drawing course is offered in two different time slots). A student can enroll in a course section. Each section has a maximum enrollment number allowed. The remaining space is calculated based on the current enrollments. You can filter this list by selecting a course category from the auto-complete box. Clicking on the Title link will take you to the details screen for the section. Clicking on the Instructor link will take you to the details screen for the instructor.

clip_image006
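
(A quick aside for developers peeking at the project: the “remaining space” value described above is a computed field. A LightSwitch computed property is implemented with a _Compute partial method; the sketch below is illustrative only, and the actual property and table names in the Course Manager sample may differ.)

// Illustrative sketch of a computed "spaces remaining" field on a Section entity.
// SpacesLeft, MaxEnrollment and Enrollments are assumed names, not necessarily the sample's.
// Requires: using System.Linq;
public partial class Section
{
    partial void SpacesLeft_Compute(ref int result)
    {
        // Remaining space = maximum enrollment minus current enrollment count.
        result = this.MaxEnrollment - this.Enrollments.Count();
    }
}
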
Create a Student

Say somebody really likes a course the school is offering, but has never taken courses at the school before. We need a screen to allow the office staff to add a new student record to the database before registering the course for the student.

To create a student, open “Create New Student” screen from the menu. Provide a student photo and fill out the student information. Click Save.

clip_image008

After the data is saved, the details screen for the student will appear. We can now register a course for the student. To do so, click “Register Course” button on the ribbon.

clip_image010

Register Course

If you open “Register Course” screen from the menu, the registration screen requires you to select a student and a course in order to create an enrollment. Since we launch the same “Register Course” screen from a student details screen, the student field is pre-selected for you. All we need to do now is to pick a course to register.

clip_image012

Click on the picker for “Select a course section” to see a list of available courses this student has not yet enrolled in. If you have selected the Academic Year/Quarter via the auto-complete boxes, the section list will be further filtered down accordingly.

Pick a course and click Save. The registration screen will close and you’re back on the student details screen. Click the refresh button on the enrollment grid to see the newly added course.

clip_image014

Find a Student

Last but not least, imagine an existing student calls to change their mailing address. A staff member needs to find the student record quickly and update the information.

To search for a student record, open “Search Students” screen from the menu. Use the search box to find a student by last name. Click on the student link to open the student details screen.

clip_image016

This is the same student details screen we saw earlier but for a different student. Notice the name of the screen tab is based on the student name. This is very useful if you have multiple student records open at the same time.

We can go ahead and update the address on the screen. You can also find the student’s registered courses at the bottom. You can filter the list by Academic Year and Academic Quarter. You can also add or remove an enrollment by clicking the “Register Course” (as we saw earlier) or “Drop Course” (it will prompt for confirmation) button on the grid.

clip_image018

What’s Next?

We’ve run through some basic user scenarios and workflows. Admittedly, this is a very simple application; surely a real-world course registration system would be many times more complex. But believe it or not, this application covers many LightSwitch features that you will often use to build your fancy applications! Off the top of my head, this sample covers:

  • Create summary and calculated field for a table
  • Create permissions and enable authentications
  • Create many-to-many relationships and UI
  • Write middle-tier logic and validation
  • Add static images and text with different font styles
  • Create screen parameters and parameterized queries
  • Customize screens and work with different layouts

Over the next couple of weeks, we will publish supplemental walkthroughs on different parts of the sample, so stay tuned! Some topics have already been covered by other posts, in which case we will highlight and reference them. If you are particularly interested in a topic we haven’t covered, please let us know and we can prioritize accordingly.

I hope you’re motivated to find out how to build Course Manager and anxious to learn more behind-the-scenes tips and tricks!

Coming up next: An overview of tables, relationships, and queries in Course Manager.


Stuart Kent asked Is Model Driven Development Feasible? and answered “Yes” in a 4/7/2011 post to his MSDN blog:

This is the question asked in the title of a post on our modeling and tools forum, and rather than answer directly there, I thought it would make more sense to post an answer here, as it’s then easier to cross-reference in the future.

The body of the post actually reads:

Various Microsoft attempts at MDD have failed or been put on the back burner: WhiteHorse, Software Factories, Oslo.

Does Microsoft have any strategy for Model Driven Development? Will any of the forementioned tools ever see the light of day?

First we need to clarify some definitions. I distinguish between the following:

Model-assisted development, where models are used to help with development but are not the prime artifacts – they don’t contribute in any direct way to the executing code. Examples would be verifying or validating code against models or even just using models to think through or communicate designs. UML is often used for the latter.

Model-driven development, where models are the prime artifacts, with large parts of the executing code generated from them. Custom, hand-written code is used to complete the functionality of the software, filling in those parts which are not represented in the model.

Model-centric development, where models are the prime artifacts, but interpreted directly by the runtime. Custom, hand-written code is used to complete the functionality, with an API provided to access model data and talk to the runtime as necessary.

These are not separated approaches, and development projects may use a combination of all three. In fact, I imagine a continuous dial that ranges from model-assisted development through to model-centric development, such as the one illustrated below (starting at the left, the first five tabs are examples of model-assisted development).

image

The challenge is to make the movement through the dial as seamless and integrated as possible, and also to make sure that these approaches can be adopted incrementally and in a way that integrates with mainstream development (agile) practices. Only then will we be able to unlock the potential productivity benefits of these approaches for the broader development community.

In Microsoft, there are a number of shipping products which support or employ one or more of these approaches.

In Visual Studio, we have DSL Tools and T4, which together support model-driven development. New functionality was added to both in VS2010, and T4, for template-based code generation, continues to see broader adoption – for example, in the ASP.NET community, as evidenced by this blog post. Many customers use DSL Tools for their own internal projects, and we continually get questions about them on this forum and through other channels.

Visual Studio 2010 introduced a set of architecture tools: a set of UML tools (built using DSL Tools), a tool for visualizing dependencies between code elements as graphs, and a tool for validating expected dependencies between different software components (layers). I would place all these tools under the category of model-assisted development, although VS2010 Feature Pack 2 does provide some support for generating code from UML models, which allows these tools to be used for model-driven development, if MDD with UML is your thing.

Visual Studio LightSwitch combines a model-driven (generates code) and model-centric (directly interprets the model) approach to the development of business applications. It is a great example of how models can really make the developer more productive.

Outside of Visual Studio, the Dynamics line of products also takes a combined model-centric/model-driven approach to the domains of ERP and CRM business applications. For example, take a look at this blog post.

I am an architect with the Visual Studio Ultimate team, which is responsible for DSL Tools, T4, and the architecture tools. In that team, we are now looking into how to take these tools forward, focused very much on how to make software developers and testers more productive. Part of this will be to consolidate and integrate some of what we already have, as well as to integrate better with other parts of Visual Studio and mainstream development practices. Another part will be to add new features and capabilities and to target support at specific frameworks and domains – as LightSwitch does. We’re focused on addressing the challenge set out above and on delivering value incrementally.

As we start to deliver the next wave of tools, I look forward to a continued conversation with our customers about the direction we’re headed.


<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

The Microsoft Partner Network and Ingram Micro Cloud announced the Microsoft Azure ISV Pilot Program in a 4/7/2011 e-mail to Microsoft partners:

image Microsoft Azure ISV Pilot Program: June-December, 2011

Ingram Micro Deliverables:
  • image Ingram Micro Cloud Platform: Marketplace with links to your website
    • Your Cloud Service landing page with your services descriptions, specs, features and benefits
    • Post your service assets (training videos, pricing, fact sheets, technical documents)
    • Education section: post your white papers and case studies
  • Banner ads on Ingram Micro Cloud and on your microsite
  • Ingram Micro Services and Cloud newsletter presence (June and September)
  • Face-to-face events

Cloud Summit | June 1-2, 2011
Arizona Grand Resort, Phoenix

register

  • Email -- announcement of the Microsoft Azure ISV Program with your company and service name represented, along with other pilot ISV partners
  • Webinar -- one per quarter per ISV to introduce your specific service to Ingram Micro reseller partners
  • Email -- one per ISV webinar to our resellers' partners, introducing your specific service, increasing awareness and interest, and driving registration for your webinar
  • Event Upgrade Option -- VTN Invitational (Oct. 16-19, 2011 in Las Vegas)
Getting Started
  1. View the Ingram Micro Cloud webinar for program overview (April 28 at 10 a.m., PST)
  2. View the Ingram Micro Cloud Microsoft Azure ISV Pilot Program details 1,289K, PDF

For answers to your questions and to learn more about our Ingram Micro Cloud Services marketing program, designed specifically for Microsoft Azure ISVs, email Mary Ann Burns, senior marketing manager, or call (714) 382-2039.


Dana Gardner asserted “A proliferation of on-premises applications and the ongoing data explosion are posing serious threats to businesses worldwide” in an introduction to his Outmoded Applications and Data Explosion Hamper Innovation in Enterprises post of 4/7/2011:

A proliferation of on-premises applications, many of them outdated, and the ongoing data explosion are posing serious threats to businesses worldwide, according to a recent survey of companies in Europe and America by Capgemini.

The first annual Application Landscape report found that millions of applications are obsolete and no longer deliver full business value. The result, says Capgemini, is a need to rationalize and retire applications, freeing up valuable resources to drive innovation and future growth, rather than maintain outdated systems.

The sheer number of applications supported – up to 10,000 for global enterprises – combined with an estimated average data growth of five percent per month means application management is on track to quickly become an issue of real significance. Moreover, as companies move toward the transfer of applications to the cloud, the need for systematic and well-managed application retirement will accelerate.

Outmoded applications
In in-depth interviews with CIOs and IT leaders in the US and Europe, Capgemini found that:

  • Some 85 percent say their application portfolios are in need of rationalization
  • Almost 60 percent of enterprise companies say they currently support "more" or "far more" applications than are necessary to run their business
  • Only 4 percent say that every IT system they use is considered to be business critical
  • Half agree that up to 50 percent of their application portfolio needs to be retired
  • Another 61 percent say they keep all data beyond its expiration date "just in case"
  • Also, 56 percent of large companies and enterprises say that half or more of their applications are custom-built, increasing the technical complexity of required platforms and technologies
  • Only 13 percent say their application development and maintenance teams are aligned. And half (48 percent) say their teams are only in synch for 50 percent of the time or even less.

Ron Tolido, CTO at Capgemini for Application Services Continental Europe, said: “Our research reveals that key goals for CIOs are value creation, improving efficiencies and cutting costs. Despite the fact that data archiving and application retirement can result in significant cost savings, process efficiencies and increased agility, it still does not rank high enough on the agenda. This report shows that successful application management – achieved through a true lifecycle approach of 'build, deploy, maintain and retire' – can deliver tangible business benefits in tough economic times.”

In addition to acknowledging the growing importance of this issue, the report also reveals the numerous current barriers to effective application management, including the cost of retirement projects, the lack of immediate ROI, cultural resistance to change, regional differences, the lack of qualified developers to migrate retired application data and, most importantly, the fact that applications are not considered a key priority.



Ernest De Leon (a.k.a. “The Silicon Whisperer”) posted ‘Cloud Washing’ hits feverish pitch as enterprises begin to migrate to the Cloud on 4/7/2011:

Over the last few months, I have had several discussions with CIOs, CTOs and Directors of IT about Cloud Computing. I have also had these same discussions with research firms seeking to understand what the 'State of the Cloud' is today and where it is going in the near term. While I have seen small adoption in the Fortune 500 space, the SMB/SME space is exploding in the cloud. This has not gone unnoticed by many hardware and software manufacturers, resulting in the 'Cloud Washing' phenomenon taking place in the industry.

Having worked closely with people in the environmental sustainability movement while engaged with large scale virtualization projects in the Bay Area of California, I remember the constant discussions about companies attempting to 'Green Wash' their products or offerings. While some claims were legitimate, far more were amazing stretches of the truth or outright lies about the 'greenness' of their offerings. When 'green' became hot, all of a sudden, every product and service on the market was as emerald as they come. We now have the same thing happening in the IT industry, but instead of 'green' we have 'cloud.'

We have defined 'Cloud Computing' several times on this blog, but as you know, definitions of new technology can shift over time as adoption and offerings grow. I have often said that it is much easier to define what Cloud is not rather than what it is. Using this approach, it is much easier to sift out products and services that are not truly 'cloud' and only include those that meet a certain minimum criteria in a discussion.

Let's start with a broad statement: In order for something to be 'cloud,' it needs to be accessible via a network. In tech speak, the 'cloud' has historically been associated with the internet as a whole. IT people always talk about their pipe to the cloud, or rather, the bandwidth of their internet connection. In that one sentence, notice that the cloud is an external entity that they are connecting their business to. The cloud is not inside their data center. The cloud is the internet itself. That is a rather loose description, but it defines the base premise that on-site resources are not true cloud. There is no such thing as a 'Private Cloud.' I always ask the question: "Is someone selling you hardware?" If so, their offering is not 'cloud.' If a vendor is still talking in terms of physical hardware, RAM, processor speed, etc., even in a virtual sense, it is not cloud. That should knock out about 50% of the products and services out there that are parading themselves as 'cloud' but are truly nothing of the sort. With that said, let us look at some characteristics of what true cloud offerings have.

The single largest differentiator between traditional data center technology and cloud computing is scale. Can you scale your usage up as high as you want without ever discussing hardware? Can you then scale down usage when you do not need it? Are you only paying for your actual resource usage or are you paying by some other older standard like server count? If you are able to scale up and down as much and as frequently as needed and only pay for what you actually use (Utility Billing) then chances are high that you are in a true cloud.

The second largest differentiator is multi-tenancy. This means that the aggregate compute power of the cloud provider is shared among all of the users (tenants), not divided up into silos that limit scalability. From a consumer perspective, this should be transparent but understood. Measures are taken to ensure that users can scale up and down without negatively affecting other users in the same cloud while still managing the entire cloud in terms of aggregate resources. If you are the only user sitting on a given set of resources, then you are not in a true cloud. Your costs will also likely be higher as this is no different from renting a few dedicated servers and using them as a pool for your application. You may also be billed a flat monthly fee (or similar set fee) instead of utility billing as you are not sharing the costs with other users for the same underlying resources.

The third area where a true cloud offering differs from 'cloud washed' offerings is how resources are provisioned. Are you able to provision an instance immediately from a portal and have it operational within a few minutes? If so, you are working with a true cloud. If you have to go through an archaic process of requesting resources, such as opening a ticket to have someone build out a 'server' somewhere for you, then you are not truly in the cloud. Chances are, the vendor is using common virtualization offerings and building out virtual servers for you in that infrastructure. Things such as increasing the resources you need are manual or ticket driven instead of on-the-fly via a portal. User self-provisioning is key when looking for true cloud offering.

So, with all that said, it is actually quite easy to find which products and offerings are truly cloud and which are cloud washed versions of products that manufacturers have sold all along. The biggest red flag in terms of cloud washing versus true cloud is Utility Billing. If you are being billed in flat fees instead of for what you actually consume, there is a problem. Secondly, if you have to put ANY of the costs from a cloud product or offering as CAPEX instead of OPEX, there is a problem. You should not be buying hardware or software. You are buying a service. That service should be billed by usage. If a potential cloud product or offering makes it past these two tests, then you can look at scalability, multi-tenancy and provisioning to ensure that you are getting a true cloud offering. I hope this article helps to peel away the cloud washing and marketing speak out there and offers insight into 'true cloud.' If you have any comments or questions, please leave them below and I'll be happy to address them.


David Linthicum claimed “The IEEE hopes to solve the cloud interoperability problem, but vendors have every reason to sabotage it, and users don't seem to care” as a deck for his IEEE's cloud portability project: A fool's errand? article of 4/7/2011 for InfoWorld’s Cloud Computing blog:

image IEEE, the international standards-making organization, is jumping with both feet into the cloud computing space and announcing the launch of its new Cloud Computing Initiative. The IEEE is trying to create two standards for how cloud applications and services would interact and be portable across clouds.

image The two standards are IEEE P2301, Draft Guide for Cloud Portability and Interoperability Profiles, and IEEE P2302, Draft Standard for Inter-cloud Interoperability and Federation.

image The goal of IEEE P2301 is to provide a road map for cloud vendors, service providers, and other key cloud providers for use in their cloud environments. If IEEE P2301 does what it promises and is adopted, the IEEE says it would aid users in procuring, developing, building, and using standards-based cloud computing products and services, with the objective of enabling better portability, increased commonality, and interoperability.

The goal of IEEE P2302 is to define the topology, protocols, functionality, and governance required to support cloud-to-cloud interoperability.

Don't expect anything to happen any time soon. The standards process typically takes years and years. Even the first step has yet to occur for these standards: the formation of their working groups. However, IEEE is good at defining the details behind standards, as evidenced by its widely used platform and communication standards. By contrast, most of the standards that emerge from organizations other than the IEEE are just glorified white papers -- not enough detail to be useful.

The cloud industry has already been working toward interoperability, as have some other standards organizations. But none of those efforts has exactly set the cloud computing world on fire. I like the fact that the IEEE is making this effort, versus other standards organizations whose motivations are more about undercover marketing efforts than unbiased guidelines to aid users.

But reality gets in the way, and I have my doubts that anything useful will come out of the IEEE efforts in any reasonable timeframe. The other standards groups involved in cloud computing have found that many of the cloud providers are more concerned with driving into a quickly emerging market and being purchased for high multiples than about using standards.


As I noted in my earlier Windows Azure and Cloud Computing Posts for 4/4/2011+ post, “Everyone wants to get in on the standards act.”


Lori MacVittie (@lmacvittie) asserted It’s called a feedback loop, not a feedback black hole as a preface to her Now Witness the Power of this Fully Operational Feedback Loop post of 4/6/2011 to F5’s DevCentral blog:

image One of the key components of a successful architecture designed to mitigate operational risk is the ability to measure, monitor and make decisions based on collected “management” data. Whether it’s simple load balancing decisions based on availability of an application or more complex global application delivery traffic steering that factors in location, performance, availability and business requirements, neither can be successful unless the components making decisions have the right information upon which to take action.

Monitoring and management is likely one of the least sought after tasks in the data center. It’s not all that exciting and it often involves (please don’t be frightened by this) integration. Agent-based, agentless, standards-based. Monitoring of the health and performance of resources is critical to understanding how well an “application” is performing on a daily basis. It’s the foundational data used for capacity planning, to determine whether an application is under attack and to enable the dynamism required of a dynamic, intelligent infrastructure supportive of today’s operational goals.

YOU CAN’T REACT to WHAT you CAN’T SEE

We talk a lot about standards and commoditization and how both can enable utility-style computing as well as the integration necessary at the infrastructure layers to improve the overall responsiveness of IT. But we don’t talk a lot about what that means in terms of monitoring and management of resource “health” – performance, capacity and availability.

The ability of any load-balancing service depends upon the ability to determine the status of an application. In an operationally mature architecture, that status includes all components related to the delivery of the application, including other application services such as middleware, databases and external application services. When IT has control over all components, traditional agent-based approaches work well to provide that information. When IT does not have control over all components, as is increasingly the case, it cannot collect that data nor access it in real time. If the infrastructure components upon which successful application delivery relies cannot “see” how any given resource is performing, let alone whether it’s available or not, there is a failure to communicate that ultimately leads to poor decision making on the part of the infrastructure.

We know that in a highly virtualized or cloud-computing model of application deployment it’s important to monitor the health of the resource, not the “server”, because the “server” has become little more than a container, a platform upon which a resource is deployed and made available. With the possibility of a resource “moving”, it is even more imperative that operations monitor resources. Consider IT organizations that may want to leverage more PaaS (Platform as a Service) to drive application development efforts forward faster. Monitoring and management of those resources must occur at the resource layer; IT has no control or visibility into the underlying platforms – which is kind of the point in the first place.  

YOU CAN’T MAKE DECISIONS without FEEDBACK

image

The feedback from the resource must come from somewhere. Whether that’s an agent (doesn’t play well with a PaaS model) or some other mechanism (which is where we’re headed in this discussion) is not as important as getting there in the first place. If we’re going to architect highly responsive and dynamic data centers, we must share all the relevant information in a way that enables decision-making components (strategic points of control) to make the right decisions. To do that resources, specifically applications and application-related resources, must provide feedback.

This is a job for devops if ever there was one. Not the ops who apply development principles like Agile to their operational tasks, but developers who integrate operational requirements and needs into the resources they design, develop and ultimately deploy. We already see efforts to standardize APIs designed to promote security awareness and information through efforts like CloudAudit. We see efforts to standardize and commoditize APIs that drive operational concerns like provisioning with OpenStack. But what we don’t see is an effort to standardize and commoditize even the simplest of health monitoring methods. No simple API, no suggestion of what data might be common across all layers of the application architecture that could provide the basic information necessary for infrastructure services to take actions appropriately.
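
Purely as an illustration of the kind of “common data” being described – not any existing standard – a minimal health-feedback contract might look something like this:

// Hypothetical sketch only: a minimal, layer-agnostic health payload that an
// application or middleware component could expose (for example as JSON at a
// well-known endpoint) for a load balancer or global delivery service to poll.
using System;

public enum ResourceStatus { Available, Degraded, Unavailable }

public class HealthReport
{
    public ResourceStatus Status { get; set; }          // can this resource take traffic?
    public int ActiveConnections { get; set; }          // rough load indicator
    public int CapacityRemainingPercent { get; set; }   // headroom for new requests
    public double AverageResponseTimeMs { get; set; }   // recent performance
    public DateTime MeasuredAtUtc { get; set; }         // freshness of the sample
}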

The feedback regarding the operational status of an application resource is critical in ensuring that infrastructure is able to make the right decisions at the right time regarding each and every request. It’s about promoting dynamic equilibrium in the architecture; an equilibrium that leads to efficient resource utilization across the data center while simultaneously providing for the best possible performance and availability of services.

MORE OPS in the DEV

It is critical that developers not only understand but take action regarding the operational needs of the service delivery chain. It is critical because in many situations the developer will be the only ones with the means to enable the collection of the very data upon which the successful delivery of services relies. While infrastructure and specifically application delivery services are capable of collaborating with applications to retrieve health-related data and subsequently parse the information into actionable data, the key is that the data be available in the first place. That means querying the application service – whether application or middle-ware and beyond – directly for the data needed to make the right decisions. This type of data is not standard, it’s not out of the box, and it’s not built into the platforms upon which developers build and deploy applications. It must be enabled, and that means code.

That means developers must provide the implementation of the means by which the data is collected; ultimately one hopes this results in a standardized health-monitoring collection API jointly specified by ops and dev. Together.


Mario Meir-Huber continued his series with Windows Azure Series - Roles Offered by Windows Azure, subtitled “The development environment”, with a 4/6/2011 post:

Part 1 of the Windows Azure Series provided an introduction to Windows Azure, and Part 2 provided a look inside the Windows Azure datacenters. Part 3 will discuss the Windows Azure Roles and the Development Environment. The last one is especially important; we will focus on it for the next couple of articles and look at the API as well. This article will focus on the "Compute" part of Windows Azure.

The Roles Explained
Windows Azure currently (April 2011) has three different roles. In case I didn't mention it before, Windows Azure is a Platform as a Service offering, so developers have to think a little differently than they would about IaaS platforms. If you run more than one instance of a role, Windows Azure does the load balancing for you. There is no need to pay extra money for a load-balancing service or to handle this on your own. But now back to the roles, which are the WebRole, the WorkerRole and the VmRole. Each role serves a different need for modern Software as a Service applications.

The WebRole
The WebRole is the main role for displaying web content to users. This is the role people would expect to run in IIS (and in fact, it does run on IIS). The WebRole supports different technologies: with Microsoft technologies, developers can use "classic" ASP.NET or ASP.NET MVC. To date, .NET 4.0 and the ASP.NET MVC 2.0 Framework are supported. The WebRole supports other frameworks as well – Windows Azure allows you to run applications via FastCGI, which means that frameworks such as PHP can run in Windows Azure. The goal of the WebRole is to display website content; tasks that require more compute power should be offloaded to other roles, and a good way to achieve this is with a WorkerRole. Another great feature of the WebRole is the ability to run applications in Full IIS, which allows developers to run multiple sites in a single WebRole instance.

The WorkerRole
As already mentioned, the WorkerRole's primary purpose is to run background work. This allows easier scalability of web applications - and the cloud is all about scaling out your applications. When you first look at the WorkerRole, it might look strange: it does only one thing, which is run in an infinite loop. You can run .NET applications, as well as a large variety of other technologies such as Java. To communicate with other roles, various techniques are available; one good choice is to use message queues (a minimal sketch follows below).
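
To make the "infinite loop plus a message queue" idea concrete, here is a minimal sketch of a WorkerRole that polls a Windows Azure queue, in the style of the 1.x StorageClient library used elsewhere in this roundup. The queue name and connection-string name are placeholders, and configuration-setting publishing and error handling are omitted.

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // "DataConnectionString" and "tasks" are placeholder names.
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("tasks");
        queue.CreateIfNotExist();

        // The WorkerRole's whole job: loop forever, pulling work off the queue.
        while (true)
        {
            CloudQueueMessage message = queue.GetMessage();
            if (message != null)
            {
                ProcessMessage(message.AsString);   // do the actual background work
                queue.DeleteMessage(message);       // remove it once handled
            }
            else
            {
                Thread.Sleep(TimeSpan.FromSeconds(5)); // back off when the queue is empty
            }
        }
    }

    private void ProcessMessage(string payload)
    {
        // Application-specific work goes here (resize an image, send an email, ...).
    }
}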

The VmRole
The VmRole is not as "PaaSy" as the other two roles. It was introduced to allow easier migration to the cloud, since some applications are not easy to bring to the cloud otherwise. It's necessary to mention that you might lose some of the advantages you have with the other two roles.

Instance Sizes
Windows Azure includes a variety of instance sizes. Currently (April 2011) there are five different instances with a variety of memory, instance storage and virtual CPU sizes. The instances are as follows:

  • Extra Small: Consists of a 1 Gigahertz virtual CPU with 768 MB of RAM and 20 GB instance storage. This instance is useful for demo applications or applications that need low performance. The negative aspect is the low IO performance but it's very cheap.
  • Small: Consists of a 1.6 Gigahertz virtual CPU with 1.75 GB of RAM and 225 GB instance storage. With this instance, average tasks can be handled, and it could be used for a production system. It costs more than twice as much as the Extra Small instance.
  • Medium: Consists of two 1.6 Gigahertz virtual CPUs with 3.5 GB of RAM and 490 GB instance storage. This instance already allows some parallelization since it comes with two cores and the RAM is already powerful. A major plus of this instance is its high IO performance.
  • Large: Consists of four 1.6 Gigahertz virtual CPUs with 7 GB of RAM. The instance storage is 1 Terabyte. This instance allows complex computing tasks and is made for memory-intensive tasks. The IO performance is high.
  • Extra Large: This is the strongest instance with eight 1.6 Gigahertz virtual CPUs. The memory is 14 GB large and the instance has 2 TB of instance storage. If you need to compute the real heavy stuff, this is definitely the instance for you. The IO performance is good but this instance has its price. One compute hour is almost $1.

For detailed pricing, please visit this link: http://www.microsoft.com/windowsazure/pricing/
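For reference, the size is simply an attribute on the role in the service model, and the number of instances of that size is set in the service configuration. A minimal sketch, with placeholder role name and count:

<!-- ServiceDefinition.csdef: pick the size per role -->
<WorkerRole name="WorkerRole1" vmsize="ExtraSmall">
  ...
</WorkerRole>

<!-- ServiceConfiguration.cscfg: pick how many instances of that size to run -->
<Role name="WorkerRole1">
  <Instances count="2" />
</Role>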

Windows Azure Developer Opportunities
Microsoft integrates a lot of different platforms and tools on Windows Azure. The most popular framework for Windows Azure is Microsoft's own framework, the .NET Framework. To date, the .NET Framework 4 is fully supported on Windows Azure. If you are a fan of ASP.NET MVC like I am, note that you can't run ASP.NET MVC 3 applications on Windows Azure from a simple template the way you can with MVC 2, which is already included. [1]

However, there are a couple of other tools that are supported on Windows Azure. PHP is a great example; it can be run, as can any other framework, tool or language supported via the FastCGI role on IIS in Windows Azure. Microsoft wants to make Java a first-class citizen on Windows Azure, and there is a framework for Java to query Windows Azure Services. In fact, Windows Azure uses REST a lot, so there is an easy way to include numerous platforms and tools on Windows Azure.

The Development Environment for .NET
My favorite platform though is the .NET Platform, so this series will focus on that platform. However, later in the series I will show how to integrate other platforms such as Java or PHP. If you also use .NET, the best way to develop Windows Azure applications is to use Visual Studio. It's not necessary to buy the commercial version; you can also use the free version, Visual Web Developer 2010. However, you might lose the benefits of Version Control, ALM or other items that are included with the full version. In my demos I will only use the free version.

Figure 1: Create a new Windows Azure Project

Once the Windows Azure tools are installed, you can start to develop your Windows Azure applications. In Figure 1 you can see the available Windows Azure Roles with the Visual Web Developer. I'll create a short sample in one of the later articles, since there are still a lot of basics, such as storage, that we need to address prior to source code examples.

Reference

  1. How to run ASP.NET MVC 3 on Windows Azure: http://blog.smarx.com/posts/asp-net-mvc-in-windows-azure

This article is part of the Windows Azure Series on Cloud Computing Journal. The Series was originally posted on Codefest.at, the official Blog of the Developer and Platform Group at Microsoft Austria. You can see the original Series here: http://www.codefest.at/?tag=/azure+tutorial.


Nicole Hemsoth reported Redmond Sets Sights on Manufacturing in a 4/6/2011 post to the HPC in the Cloud Blog:

image The concept of digital manufacturing forms an umbrella over any number of computationally-driven technological enhancements that feed the overall manufacturing supply chain. Generally speaking, this includes anything from 3D rendering and prototyping of new products, the use of modeling and simulation to speed time to market or test for quality, or to plan and collaborate throughout the entire lifecycle of any given product. In short, there’s far more than initially meets the eye involved here…

Last year the news that touched on this, at least in the cloud sphere, was somewhat limited. Much of what did emerge involved advancements in SaaS solutions, including announcements from Autodesk about its Project Cumulus and Project Centaur, for instance. Even during the HPC360 conference, which had a manufacturing bent, there was an incredible amount of interest in what clouds could do for the industry, but solutions and the pesky "small" implementation details were definitely lacking from vendor conversations—SaaS-based or otherwise.

image A number of companies that attended that event were in the process of making decisions about how clouds fit into their infrastructure, cost, performance, and other goals, but I think if they were to jump ahead just one year they’d be finding far more answers—or at least good starting points. After all, this is technology we’re talking about and to say a lot can change in one year is a profound understatement.

This will be the year when vendors and manufacturing alike start to see (and then act on) the fact that digital manufacturing and cloud computing are a good fit; they complement one another technologically and logically. Since many manufacturers rely on cutting-edge modeling and simulation tools, for instance, this once meant they needed cutting-edge hardware to churn out ideas and speed lifecycles along, which added to upfront cost.

Now that cloud possibilities have nipped some hardware investment concerns in the bud (at least initially—we could argue at length about that sticky ROI with cloud for the long-haul issue, of course) what advances the technological/software end could equally advance the cloud computing adoption/use end.  Am I glossing over some realities here? Yes. Yes, I am. But this scenario is possible—and playing out—for some small to mid-size manufacturers—and without such smaller players feeding the supply chain the whole house of cards would collapse anyway.

Despite some of the hubbub about this (really, really important) sector of the economy snatching up cloud opportunities, there haven’t been many companies actively courting manufacturers. At least not outside of industry-focused events that set aside specific time to present to possible new customers. Microsoft, however, performed the equivalent of writing personalized invitations for the manufacturers of the world this week with an announcement that hints at a much broader manufacturing focus around the bend.

A couple of days ago the company launched its Reference Architecture Framework for Discrete Manufacturers Initiative to “accelerate cloud computing and improved collaboration across the value chain.”

More specifically, this focused push to the clouds across the manufacturing sector--from the top of the pyramid to the base—is intended to help companies collaborate on a global scale via the power of an increasing number of mobile devices connected via the cloud. And preferably its cloud offerings.

The group behind the effort has pulled in manufacturing giants, including Siemens MES and Tata Consultancy Services as well as other smaller, more focused organizations like Camstar Systems and Rockwell Automation.

According to Sanjay Ravi, who oversees Microsoft’s Worldwide Discrete Manufacturing Industry division, the combination of globalization and new technology and devices has “fragmented industry value chains, making them more complex and unable to quickly respond to increased competition and shorter product life cycles.” He goes on to identify the emergence of cloud alternatives as the key to putting the pieces back together but notes that manufacturers are still looking for guidance about how they can benefit from cloud.

Presumably, this is the impetus behind the new initiative, which Ravi claims will provide a response to this need for guidance “while offering a pragmatic solution road map for IT integration and adoption.” The company got an earful from respondents to its recent Discrete Manufacturing Cloud Computing Survey, which gathered the opinions of 152 IT and other leaders from a number of manufacturing sectors, including aerospace, electronics and heavy equipment makers--there just isn't enough information or guidance.

There were many noteworthy elements in that survey, but the most important takeaway here, at least in this writer's opinion, is that Microsoft smells blood.

Now I realize I’m going out into left field with this analogy here, but manufacturing is like that baby antelope on the National Geographic channel; abandoned by its mother in the vast savanna—it doesn’t know much and is prone to wandering aimlessly…And...well...

Am I saying that Microsoft is the lion discreetly watching it walk on wobbly legs in this mini-fable? Not really—it might be that it is more of a shepherd to lead it to a safe, stable patch. And when it comes to shepherds, I guess the first big, strong one on the scene will do just nicely.

Microsoft does have the power to appeal to this huge customer base and it uses the keywords that are most likely to entice this segment of the market.

Ravi claims that “current cloud computing initiatives are targeted at cost reduction but a growing number of forward-looking companies are exploring new and innovative business capabilities uniquely delivered through the cloud.” He notes that this is taking hold in product design and what he terms “social product development” projects, as well as staking a claim for the value of added collaboration via the cloud.

If some of this sounds like vendor hype behind clouds that approaches the topic far too generally, you might be right, but then again that seems to be the norm in terms of anything cloud-related these days. It’s more about who goes all carpe diem on an industry at the moment it smells weakness (in this case a lack of knowledge about implementation, practicality, etc.). ‘Approach with caution’ can be read as ‘be as general as possible,’ but nonetheless, by tackling the fact that education and guidance are the missing pieces, Microsoft might win itself a few manufacturing converts.

To go back to that HPC360 event from last year, there were a lot of questions about clouds in general but no one really answered them completely. This might be because even “way back then” (October) companies were still fleshing out their own strategies to take to this particular market. One of the main takeaways from Microsoft's recent survey was that while there’s interest, there’s just as much confusion, and Microsoft is seizing this opportunity to tout itself as the expert—and lead the flock to a new era of digital manufacturing.

Now that I’ve had my say I’ll go back to my NatGeoTV and see if that weak, lost little antelope suddenly kicks up its heels and makes its own path.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), App-V, Hyper-V and Private/Hybrid Clouds

Steve Plank (@plankytronixx) answered Why is Windows Server App-V a good thing for cloud computing? on 4/6/2011:

imageWith all the pizzazz of a Vegas Showgirl, Server App-V was announced at the Microsoft Management Summit (MMS) in Las Vegas last week. Well, OK, Brad Anderson stood there in polo-shirt and chinos to make the announcement but he was as excited as a Vegas Showgirl when he did it… Watch the keynotes on video here.

And this is a very good thing for cloud computing everywhere, from every product-vendor and every cloud-operator. Why? Because it ushers in a new way of looking at the cloud computing platform. Although multi-tenant SaaS cloud applications have been selling for some time now (think of Microsoft’s BPOS, Office Web Apps and soon-to-be-released Office 365), we, as technologists, have tended to think of the serious developer platform as some derivative of an entire virtualised operating system.

imageOne of the very cool things the designers of Windows Azure came up with for the PaaS model it uses is a separation between the OS and the application. This separation is so extreme that you don’t even perform OS maintenance tasks on the thing that’s running your application: the OS. But it doesn’t all disappear entirely. You can, for example, get an RDP connection to a running instance and explore the OS. You can use Event Viewer to open the error logs, you can use the file system to open up log files, you can investigate network configuration and you can hunt around IIS to try and fathom out why your application isn’t running.

If you (foolishly) applied a patch, hotfix, service pack, security fix etc., you’d find, over the long term, it wouldn’t stay. Microsoft maintains the instance image in the way the fabric controller and the data-centre operators know will provide the greatest stability and security. This gives isolation from the thing which has no inherent value to applications – management. That’s because it’s Microsoft’s image. Of course the exception to this is with VM Role. I made a point of highlighting the role differences in a previous post, where I said:

  • A Web Role is your web site hosted on IIS.
  • A Worker Role is your application hosted on Microsoft’s OS image.
  • A VM Role is your pre-loaded application hosted on your OS image.

What Server App-V does is increase and exaggerate the isolation between the OS and the application. It allows for the separation of application configuration and state from the underlying operating system in a data center environment. It converts traditional server applications into “XCopyable” images without requiring code changes to the applications themselves.

OK, so right now, in the Beta, this only really applies to machines running in your on-premise data-centre. There are challenges right now. For example state and configuration are stored locally and we know that durability of storage across instance reboots is not guaranteed. But Windows Azure does have a highly scalable and durable storage architecture: Windows Azure Storage. It’s only a matter of time before these things are improved.

One limitation of the PaaS model as it stands today is that applications have to be specifically built with the model in mind: to be stateless, scale-out, multi-instance-capable entities. That means if you have a traditional server application, and let’s face it, there are rather a lot of those around, you have to put your white lab-coat on and do some re-engineering. The result for those organisations looking to “get some quick wins” is that they install the app on to a VM and send the VM to a hoster or IaaS cloud operator. They simply replicate what they have in their data-centre in somebody else’s data-centre. As a Microsoft shareholder, I obviously don’t like hearing about this. I know the PaaS model is a better long-term bet for them (and for me – remember, I said I’m a shareholder). However, what if they could get all the advantages of the PaaS model but with a traditional server application? Well – go back and read a couple of paragraphs back. I said “It converts traditional server applications into “XCopyable” images without requiring changes to the applications themselves”.

Of course, you can’t possibly get the advantages of a stateless, multi-instance environment, if your app is designed as a huge monolithic lump. This is where Server App-V comes in. There may be significant parts of the application that are difficult to modify, but other parts that can be modified more easily. Using Server App-V as the deployment mechanism to Windows Azure can go a long way to solving those problems. And as time advances, remember what the band D:Ream said – “things can only get better”.

So with Server App-V, you can get those “quick wins” and in the same breath remove that thing I talked about that has no inherent value – management. Microsoft’s Fabric Controller will do it all for you.

Why is this good for the cloud industry? Because such a substantial techno-business proposition is impossible to ignore. All other vendors will be looking at this and I’m sure if they didn’t already have plans to offer a similar technology for a PaaS model, they will be in the process of creating a strategy as we speak. Take Amazon for example. They are the most successful cloud operator with their IaaS model, but they see the benefits of the PaaS model and have introduced their own with Elastic Beanstalk. I believe you can be guaranteed that within a few years, they will also have a Server App-V competitor. I’m sure all the other major cloud operators will do the same. It’s a great stimulator of competition. With competition comes choice. Of course, as a shareholder, I rather hope you’ll choose Microsoft! It seemed a few years ago that other than a few bolt-on services, moving virtualisation to the cloud was about all you could do to innovate. I hope Server App-V really sets the cat among the pigeons and we start to see more and more creative levels of innovation coming out across the entire cloud industry.

I think Microsoft has taken the view of the long game. When you think about it, with the core infrastructure components Microsoft has at its disposal, it would have been easy to just host VMs in cloud data-centres on Hyper-V. But that’s not what happened. And to support the long game, there are even additional services to make plugging enterprise applications in to the cloud easy. I’m thinking here of App Fabric Service Bus and App Fabric Access Control Service. With the new ACS in App Fabric Labs (in CTP as I write), it’s not far off selecting a few check boxes and typing in a few URLs to get your Active Directory authenticating users to your cloud applications. As time advances, these sorts of configuration will get simpler and simpler. In fact, it’d be pretty difficult to make them much simpler for accepting authentication from popular sites like Facebook, Google, Yahoo and Live ID. In the new ACS portal, you just click a button!

As more and more core infrastructure services wend their way in to PaaS cloud technologies, it is going to become easier and easier to deploy applications to the cloud. I can see a day, some years in the future, when it will be no more difficult to deploy an application to a PaaS cloud like Windows Azure than it is to deploy to an IaaS cloud.image

What I’m showing in the graph are the two counter-balancing areas of difficulty that have to be considered when selecting either a PaaS or IaaS model. PaaS applications are more difficult to deploy because the apps have to be specifically engineered to work as scale-out, multi-instance entities. IaaS applications can “just be”. So the movement of an existing application to IaaS is simpler. Create the VM, install the application, fire the VM up in the cloud operator’s datacentre.

It’s once the application has been deployed that the on-going task of managing the underlying OS has to be taken in to consideration. It’s a sort of trade-off: you get ease of deployment up front, but it bites later when you are wedded to the management of the OS and everything in the stack above it for the life of that application.

With PaaS, there is initial difficulty in migrating an application to be a good PaaS citizen, but that is a one-time piece of work. Once done, you no longer worry about the management of the OS, the middleware, the runtimes etc. As application virtualisation develops, we’ll see PaaS operators move closer to the bottom left corner of the graph. IaaS can never do that with application virtualisation because the OS is the management problem.

As I said – I think this is a great thing for the industry. Many cloud vendors will need to offer similar services and that gives choice. Innovation stimulates the market which makes it better for all of us. I just hope you choose Microsoft.


Christopher Hoff (@Beaker) posted Incomplete Thought: Cloudbursting Your Bubble – I call Bullshit… on 4/5/2011:

My wife is in the midst of an extended multi-phasic, multi-day delivery process of our fourth child.  In between bouts of her moaning, breathing and ultimately sleeping, I’m left to taunt people on Twitter and think about Cloud.

Reviewing my hot-button list of terms that are annoying me presently, I hit upon a favorite: Cloudbursting.

It occurred to me that this term brings up a visceral experience that makes me want to punch kittens.  It’s used by people to describe a use case in which workloads that run first and foremost within the walled gardens of an enterprise magically burst forth into public cloud based upon a lack of capacity internally and a plethora of available capacity externally.

I call bullshit.

Now, allow me to qualify that statement.

Ben Kepes suggests that cloud bursting makes sense to an enterprise “Because you’ve spent a gazillion dollars on on-prem h/w that you want to continue using. BUT your workloads are spiky…” such that an enterprise would be focused on “…maximizing returns from on-prem. But sending excess capacity to the clouds.”  This implies the problem you’re trying to solve is one of scale.

I just don’t buy this.

Either you build a private cloud that gives you the scale you need in the first place, patterning your operational models after public cloud, and/or design a solid plan to migrate, interconnect or extend platforms to the public [commodity] cloud using this model (not bursting, but completely migrating capacity). What you don’t do is stop somewhere in the middle with the same old crap internally and a bright, shiny public cloud you “burst things to” when you get your capacity knickers in a twist:

The investment and skillsets needed to rectify two often diametrically-opposed operational models doesn’t maximize returns, it bifurcates and diminishes efficiencies and blurs cost allocation models making both internal IT and public cloud look grotesquely inaccurate.

Christian Reilly suggested I had no legs to stand on making these arguments:

Fair enough, but…

Short of workloads such as HPC in which scale really is a major concern, if a large enterprise has gone through all of the issues relevant to running tier-1 applications in a public cloud, why on earth would you “burst” to the public cloud versus execute on a strategy that has those workloads run there in the first place?

Christian came up with another ringer during this exchange, one that I wholeheartedly agree with:

Ultimately, the reason I agree so strongly with this is because of the architectural, operational and compliance complexity associated with all the mechanics one needs to allow for interoperable, scaleable, secure and manageable workloads between an internal enterprise’s operational domain (cloud or otherwise) and the public cloud.

The (in)ability to replicate capabilities exactly across these two models means that gaps arise — gaps that unfairly amplify the immaturity of cloud for certain things and its stellar capabilities in others.  It’s no wonder people get confused.  Things like security, networking, application intelligence…

NOTE: I make a wholesale differentiation between a strategy that includes a structured hybrid cloud approach of controlled workload placement/execution versus a purely overflow/capacity movement of workloads.*

There are many workloads that simply won’t or can’t *natively* “cloudburst” to public cloud due to a lack of supporting packaging and infrastructure.**  Some of them are pretty important.  Some of them are operationally mission critical. What then?  Without an appropriate way of understanding the implications and complexity associated with this issue and getting there from here, we’re left with a strategy of “…leave those tier-1 apps to die on the vine while we greenfield migrate new apps to public cloud.”  That doesn’t sound particularly sexy, useful, efficient or cost-effective.

There are overlay solutions that can allow an enterprise to leverage utility computing services as an abstracted delivery platform and fluidly interconnect an enterprise with a public cloud, but one must understand what’s involved architecturally as part of that hybrid model, what the benefits are and where the penalties lay.  Public cloud needs the same rigor in its due diligence.

[update] My colleague James Urquhart summarized well what I meant by describing the difference in DC-DC (cloud or otherwise) workload execution as what I see as either end of a spectrum: VM-centric package mobility or adopting a truly distributed application architecture.  If you’re somewhere in the middle, things like cloudbursting get really hairy.  As we move from IaaS -> PaaS, some of these issues may evaporate as the former (VM’s) becomes less relevant and the latter (Applications deployed directly to platforms) more prevalent.

Check out this zinger from JP Morgenthal which much better conveys what I meant:

If your Tier-1 workloads can run in a public cloud and satisfy all your requirements, THAT’S where they should run in the first place!  You maximize your investment internally by scaling down and ruthlessly squeezing efficiency out of what you have as quickly as possible — writing those investments off the books.

That’s the point, innit?

Cloud bursting — today — is simply a marketing term.

Thoughts?

/Hoff

* This may be the point that requires more clarity, especially in the wake of examples that were raised on Twitter after I posted this, such as using eBay and Netflix as examples of successful “cloudbursting” applications.  My response is that these fine companies hardly resemble a typical enterprise but that they’re also investing in a model that fundamentally changes the way they operate.

** I should point out that I am referring to the use case of heterogeneous cloud platforms such as VMware to AWS (either using an import/conversion function and/or via VPC) versus a more homogeneous platform interlock such as when the enterprise runs vSphere internally and looks to migrate VMs over to a VMware vCloud-powered cloud provider using something like vCloud Director Connector, for example.  Either way, the point still stands, if you can run a workload and satisfy your requirements outright on someone else’s stack, why do it on yours?


Congratulations to @Beaker and his family on the birth of his new daughter, Sophie!


<Return to section navigation list> 

Cloud Security and Governance

Andy Cross (@andybareweb) described Restricting Access by IP in Azure Web Role V1.4 in a 4/6/2011 post:

image This post demonstrates the use of IP restrictions inside a Windows Azure Web Role to allow or prevent access by known IP addresses. This question was asked of me by Robin Osborne (@rposbo), a good friend and leading technical expert in London, and in exploring his question I thought I’d share the results with the wider community.

image

Source code is provided.

In a given application there may be sensitive areas of the system that need protecting. Traditional authentication mechanisms may suffice, but in certain circumstances it may be necessary to lock down a location at a lower level. One way of doing this is to allow or deny access by IP Address.

This feature is supported in IIS7, and so is available for our use in Windows Azure. From the documentation on IIS7, this is the summary of what is possible with IIS7 IP Security:

In IIS 7, all Internet Protocol (IP) addresses, computers, and domains can access your site by default. To enhance security, you can limit access to your site by creating an allow rule that grants access to all IP addresses (the default), a specific IP address, a range of IP addresses, or a specific domain. For example, if you have a site on an intranet server that is connected to the Internet, you can prevent Internet users from accessing your intranet site by allowing access only to members of your intranet.

http://technet.microsoft.com/en-us/library/cc731598(WS.10).aspx

This feature is very easy to configure and use, since in IIS7 we can do most of our configuration in our web.config file.

Jumping straight into some setup, we first need a new Cloud Project with a single Web Role inside.

Basic Site Setup

Since we are very concerned about IP addresses, I modified the Site.Master page to add a basic piece of ASP.NET to show the user’s IP Address.

<h1>
    You are: <%: Request.UserHostAddress %>
</h1>

Following this, we need to open our Web.config, and add in some basic rules. These rules can be complex, allowing or denying access to certain paths based on IP Address. For my example, I found my own IP address (by going to http://www.whatismyip.com) and denied myself access to the ~/Account/ folder within my application. This looks like:

Basic IP Sec config

This setting means that all users have access to the path ~/Account (because allowUnlisted is true), but the specified IP address is denied access to the folder. I will test that access is still available to other IP addresses using another device, such as my smart phone.

Note that if we try this in the Azure compute emulator, we will need to modify the ipAddress for local to 127.0.0.1. I am testing against Azure staging instead, so my ipAddress remains static.
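Because the configuration itself only appears as a screenshot above, here is a minimal sketch of what that web.config section might look like; the denied address below is a documentation-range placeholder, not the real IP from the post:

<location path="Account">
  <system.webServer>
    <security>
      <!-- allowUnlisted="true": everyone may connect except the addresses listed below -->
      <ipSecurity allowUnlisted="true">
        <!-- placeholder address to deny; substitute the IP you want to block -->
        <add ipAddress="203.0.113.17" allowed="false" />
      </ipSecurity>
    </security>
  </system.webServer>
</location>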

That’s all there is to it! Well almost.

If we load up the project in Azure, we find that actually we can access the page even though our IP address should be banned. What’s going on?

Access is allowed, why?

It turns out that IIS7 doesn’t have the module installed for IP Security in Azure (or on any default Windows Server 2008 installation). Therefore we must go ahead and install the module ourselves, before the application starts. The way to achieve this is to create a startup task.

Create a .cmd file (Save as Unicode Without Signature) called Startup.cmd in the root of your web application. Mark this as Content and “Copy Always”. In this file we are going to put some shell commands to make the Web Role install the correct IIS module during its startup. Those commands are:

@echo off
@echo Installing "IPv4 Address and Domain Restrictions" feature
%windir%\System32\ServerManagerCmd.exe -install Web-IP-Security
@echo Unlocking configuration for "IPv4 Address and Domain Restrictions" feature
%windir%\system32\inetsrv\AppCmd.exe unlock config -section:system.webServer/security/ipSecurity

Then go into the ServiceDefinition.csdef and add the startup task like so:

<Startup>
  <Task commandLine="Startup.cmd" executionContext="elevated" taskType="simple" />
</Startup>

Note that this startup task does increase the time taken to start a role considerably (~5 minutes on a Small instance size). This is clearly due to installing new IIS components.

Now when we deploy again to Windows Azure, we find that our IP Address is no longer allowed access to the folder or any of its contents.

Access denied error (based on my IP)

When I access the same path on my smart phone (clearly it will have a different IP), I get the login screen:

Success on a different IP

That’s about all, source code is attached here: IPRestrictions


<Return to section navigation list> 

Cloud Computing Events

Jim O’Neill reported multiple Learning Opportunities to Embrace the Cloud in this 4/7/2011 post:

image In case there was ever a doubt that Microsoft is “all in” when it comes to the cloud, hopefully the several opportunities I’m outlining below – all free by the way - will help dispel that notion. 

This certainly isn’t an exhaustive list by any means, but here are some of the events and activities that I thought might be of interest, especially if you are still wondering what this cloud hype and Windows Azure is all about!

image The RockPaperAzure Challenge is a great way to get introduced to Windows Azure, have some fun, and maybe even win an Xbox.  We just kicked off the first of six weekly challenges, so there’s plenty of time to take part.

image

You can get your own 30-day free Azure account to play along and still have plenty of time to explore Windows Azure for your own application needs.  Join the competition anytime, but if you want a primer on Windows Azure and how to play RockPaperAzure, catch us on one of the following Tuesday webcasts (all times are Eastern US).


Capturing the Cloud is a six-segment webcast series, three sessions focused on business decision-makers and three focused on technical decision-makers, running through April and May. 

Microsoft World Wide Events
Each session is co-presented by a Microsoft Windows Azure specialist and a partner – sign up for one or all!  Each webcast runs an hour and begins at 2 p.m. ET.


Jeffrey Richter from Wintellect will be presenting a free, in-person Windows Azure Deep Dive at four locations on the East Coast during May (with the first location also being simulcast).  Here are the venues:


Bruce Kyle reported on 4/6/2011 MSDN Events Presents ‘Understanding Azure’ for Developers, Architects to the US ISV Evangelism Blog:

image Cloud Development is one of the fastest growing trends in our industry. Don't get left behind. Join us for an overview of developing with Windows Azure.

We'll cover both where and why you should consider taking advantage of the various Windows Azure services in your application, as well as providing you with a great head start on how to accomplish it.

imageJoin us for this FREE, half-day event and learn about the benefits and nuances of hosting web apps and services in Windows Azure, as well as taking advantage of SQL Azure and the ins and outs of Windows Azure storage.

image

Locations and dates:
  • Denver, CO – Apr. 11
  • San Francisco, CA – Apr. 12
  • Tempe, AZ – Apr. 15
  • Bellevue, WA – Apr. 18
  • Portland, OR – Apr. 19
  • Irvine, CA – Apr. 20
  • Los Angeles, CA – Apr. 21


The Windows Azure Team recommended that you Don't Miss These Sessions at MIX11 April 12-14, 2011 To Learn How to Build Websites on Windows Azure on 4/6/2011:

image

If you plan to attend MIX11 next week in Las Vegas, NV, don't miss these sessions to learn how the Windows Azure platform and ASP.NET CMS providers can enable you to build highly scalable, rich and compelling web experiences quickly and seamlessly. (Click on each title for a full description.)

Tuesday, April 12, 2011

11:30 AM - 12:30 PM:     Deconstructing Orchard:  Build, Customize, Extend, Ship

Wednesday, April 13, 2011

2:00 - 2:25 PM:      DotNetNuke and Windows Azure: Taking Your Business to the Cloud

2:00 - 2:25 PM:     Life in the Fast Lane: Rapidly Deploy Umbraco CMS on Windows Azure

2:35 - 3:00 PM:     Building Your Websites with Kentico CMS on Windows Azure

2:35 - 3:00 PM:     Sharpen Your Web Development Skills with Razor and Umbraco CMS

image All times PDT; session schedule information is subject to change.

If you can't make it to MIX11, you can join the live keynote broadcasts on April 12 and 13, 2011 at 9:00 AM PDT and watch or download the sessions online approximately 24 hours after they're recorded.


Adron Hall (@adronbh) promoted “Git + AppHarbor + Nuget as .NET Rubyized for Railing” in his Cloud Formation presentation of 4/5/2011 to the Bellingham .NET Users Group:

image Here are the presentation materials that I’ve put together for tonight.

Cloud Formation

image

Check my last two posts regarding the location & such:


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Todd Hoff (@toddhoffious) reviewed Erik Meijer’s and Gavin Bierman’s Paper: A Co-Relational Model of Data for Large Shared Data Banks in a 4/7/2011 post to his High Scalability blog:

imageLet's play a quick game of truth or sacrilege: are SQL and NoSQL really just two sides of the same coin? That's what Erik Meijer and Gavin Bierman would have us believe in their "we can all get along and make a lot of money" article in the Communications of the ACM, A Co-Relational Model of Data for Large Shared Data Banks. You don't believe it? It's math, so it must be true :-) Some key points:

In this article we present a mathematical data model for the most common noSQL databases—namely, key/value relationships—and demonstrate that this data model is the mathematical dual of SQL's relational data model of foreign-/primary-key relationships

...we believe that our categorical data-model formalization and monadic query language will allow the same economic growth to occur for coSQL key-value stores.

...In contrast to common belief, the question of big versus small data is orthogonal to the question of SQL versus coSQL. While the coSQL model naturally supports extreme sharding, the fact that it does not require strong typing and normalization makes it attractive for "small" data as well. On the other hand, it is possible to scale SQL databases by careful partitioning.

What this all means is that coSQL and SQL are not in conflict, like good and evil. Instead they are two opposites that coexist in harmony and can transmute into each other like yin and yang. Because of the common query language based on monads, both can be implemented using the same principles.

I'm certainly in no position to judge this work, or what it means at some deep level. After reading a thousand treatments on monads I still have no idea what they are. But, like the Standard Model in physics, it would be satisfying if some unifying principles underlay all this stuff. Would we all get along? That's a completely different question...

Erik and Gavin included references to LINQ and Microsoft Research’s Dryad and DryadLINQ in their ACM paper. See my Windows Azure and Cloud Computing Posts for 4/2/2011+ post.
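To make the "common query language based on monads" point a bit more concrete, here is a small C# illustration of my own (it is not taken from the paper or from Todd's post): the same LINQ comprehension runs over in-memory collections standing in for a key-value store, and would compile unchanged against an IQueryable relational source such as a LINQ to SQL table.

using System;
using System.Collections.Generic;
using System.Linq;

class Category { public int Id; public string Name; }
class Product { public int Id; public int CategoryId; public decimal Price; }

class Duality
{
    static void Main()
    {
        // In-memory collections standing in for a key-value (coSQL) store.
        var categories = new List<Category> { new Category { Id = 1, Name = "Movies" } };
        var products   = new List<Product>  { new Product  { Id = 10, CategoryId = 1, Price = 4.99m } };

        // The same comprehension syntax works over IEnumerable<T> here and over
        // IQueryable<T> (for example, a relational table) without change.
        var query = from p in products
                    join c in categories on p.CategoryId equals c.Id
                    where p.Price < 10m
                    select new { c.Name, p.Price };

        foreach (var row in query)
            Console.WriteLine("{0}: {1}", row.Name, row.Price);
    }
}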


The HPC in the Cloud blog quoted Lydia Leong (@cloudpundit) in a Gartner Maps Out the Rapidly Evolving Market for Cloud Infrastructure as a Service post of 4/7/2011:

image Cloud infrastructure as a service (IaaS) represents a spectrum of services; there is no "one size fits all" service, and no single provider successfully addresses all segments of the market, according to Gartner, Inc. The market is poised for strong growth with worldwide IaaS forecast to grow from an estimated $3.7 billion in 2011 to $10.5 billion in 2014.

image "We are still at the beginning of the adoption cycle for cloud compute IaaS," said Lydia Leong, research vice president at Gartner. "This is a rapidly evolving market that represents the transformation of IT infrastructure over 10 to 20 years; however, the next five years represent a significant revenue opportunity — as well as a critical period for vendors who need to lay their foundations for the future."

Cloud IaaS is the capability provided to the consumer to provision processing, storage, networks and other fundamental computing resources where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage and deployed applications, and possibly limited control of selected networking components (e.g., host firewalls).

IaaS can be delivered by an internal IT organization (insourced) or by an external service provider (outsourced). The underlying infrastructure can be hosted within an organization's data center or in an external data center. That underlying infrastructure can be dedicated to a single customer ("private cloud"), shared between a consortium of customers ("community cloud"), or shared with a provider's customer base in general ("public cloud").

Because the market and the associated technologies are immature, customers frequently settle for what they can get now, rather than what they actually want or need. They currently tend to make primarily tactical decisions rather than long-term strategic commitments. Ms. Leong warned that while capturing the customer at this stage has value, service providers need to work hard to ensure that they retain these customers as their needs evolve.

"Startup IaaS pure-plays, Web hosters, carriers and data center outsourcers are all competing in the cloud computing IaaS market. However, many providers have a market viewpoint that is restricted by the particular use cases that they see in their sales pipeline, and this can lead to tunnel vision," Ms. Leong said. "In reality, customer requirements and use cases for cloud compute IaaS are diverse and evolving quickly. Cloud IaaS represents a spectrum of services; there is no 'one size fits all,' and no single service provider successfully addresses all segments of the market."

Ms. Leong said that in order to understand the market evolution, providers must first understand what prospective customers intend to do with cloud IaaS — both now and in the future. They also need to be aware that customers are often not fully aware of either their needs or the options available in the market and as a result, significant education needs to take place.

The on-demand nature of the cloud means that customers will want to try it with a minimal amount of fuss and cost. Providers should not underestimate the value of a frictionless sale to someone outside the usual IT procurement process. Getting a foot in the door is not only extremely valuable for a provider; it also helps the prospective customer demonstrate immediate value to his or her organization.

Providers must also be prepared to pay attention to the different buyer constituencies in each segment: IT operations, other technical personnel (such as application developers, engineers and scientists) and business buyers, all of whom may have different needs. The needs of an organization (and the buyer) may also change over time; for instance, initial cloud IaaS adoption is often driven by application developers, but as the organization's use grows, consolidated sourcing becomes the province of IT operations.

"Cloud IaaS is an evolving, emerging market," said Ms. Leong. "Service providers must remain flexible, be prepared to respond quickly to changing market demands and be agile in their adoption of new technologies in order to make the most of its potential."



Chris Czarnecki asked rhetorically Is Amazon Now the Only Choice for Cloud Computing? in a 4/6/2011 post to the Learning Tree blog:

image Cloud Computing is big news, that is without question. It seems every time I open a newspaper or watch a TV program, Microsoft informs me how my life, both business and personal, would be improved by Cloud Computing. When I search on Google, it seems every other advert is suggesting Google Apps could make my life easier. I mentioned in a previous post EMC running Cloud Computing banners at Heathrow airport. In fact all the major vendors have significant marketing campaigns aimed at convincing customers their cloud solutions are of significant benefit. Except one, that is. I have yet to see an advert for Amazon AWS and its comprehensive set of cloud computing facilities.

image The only thing I ever hear from Amazon is news on new service features and improvements – and there are lots of these. No marketing, no fuzziness, just pure useful functionality delivered without fuss or fanfare. Just looking at the month of March, Amazon announced the following:

  • A second AWS availability zone in Tokyo
  • EC2 dedicated instances
  • Windows Server 2008 R2 support
  • Virtual Private Cloud internet Access
  • Identity and Access Management Support for CloudFront
  • VM Import connector for VMWare vCenter
  • AWS support in Japanese

image These announcements, some significant (especially for private clouds), some more nice-to-haves, are all on top of what is already the most wide-ranging, comprehensive set of cloud services available from any one vendor. So, does this mean that Amazon is the go-to vendor for Cloud Computing? Not necessarily, but they are making a strong case for being the one. The reality is that much depends on what your organisation and projects require. For instance, Amazon is an Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) provider, not a Software as a Service (SaaS) provider. But when it comes to IaaS or PaaS, Amazon is really hard to beat, and appears to be increasing its range of functionality over competitors. Google, Microsoft and the rest have their work cut out to catch up – and do not currently appear to be making up any ground on Amazon.

Making the correct choice of Cloud Computing vendor is critical, and a thorough understanding of the products and services is vital in helping make this choice. Learning Tree’s hands-on Cloud Computing course equips attendees with the skills required to make the correct choice for their organisations. Why not consider attending?

Chris doesn’t answer his question, but it’s obvious to me that the answer is “No.” Amazon’s only PaaS offering is Elastic Beanstalk for Java. David Linthicum debunked EC2 Dedicated Instances in an article below.


Alex Williams (@alexwilliams, pictured below) described A Google App Engine for the Enterprise in a 4/6/2011 post to the ReadWriteCloud blog:

image What are the alternatives to a private cloud? The question surfaced as I spent some time over breakfast this morning talking with Sinclair Schuller, CEO of Apprenda, which has developed what it calls a Google App Engine for the enterprise: SaaSGrid. In a sense - a private SaaS.

SaaSGrid, part of Apprenda, is a service that runs between an enterprise's infrastructure and the application. It changes the runtime behavior of .NET applications so they can run in a neutral environment. It provides middleware that abstracts away the issues that developers face inside the enterprise. It serves as a private SaaS environment for managing applications. In that way, it's similar to Google App Engine.

image The SaaSGrid Web site states that the service is deployable on-premises or in the cloud. It stitches together Windows Server, SQL Server, and IIS instances into a single SaaS platform instance for your .NET Web and SOA applications. It has architecture qualities like multi-tenancy, scale-out and high availability. It integrates into SaaSGrid services that provide capabilities like application patching, customer provisioning, pricing management and billing/collections workflows - all manageable through point and click web portals. [Emphasis added.]

SaaSGrid_Diagram_Enterprise.png

What does that mean?

For one thing, it helps developers who use Agile methodologies work faster with system administrators. It can be accessed internally like a Google App Engine. From within that internal platform, the developer can develop and launch applications while at the same time abiding by IT requirements.

That serves as a platform that has use cases in a few ways. Franchises, for instance, can use it to point store managers to an application that is on the SaaSGrid. Benefits providers can create an internal network that provides members with an account page. And it works internally for business critical applications.

There are alternatives to the private cloud, which still has its own relative value. But a service like SaaSGrid offers a way to have an internal platform that allows developers to iterate faster on applications while staying within the guidelines of the IT department.

Interesting potential competition for Windows Azure and SQL Azure. Will SaaSGrid speed WAPA’s release to a wider audience?


Ed Scannell and Carl Brooks (@eekygeeky) reported IBM to battle Amazon in the public cloud in a 4/6/2011 post to TechTarget’s SearchCloudComputing blog:

IBM has opened the door on a public cloud Infrastructure as a Service offering for the enterprise.

image The company has quietly added the Smart Business Cloud - Enterprise to its Smart Business Cloud product line. SBC - Enterprise is a pay-as-you-go, self service (for registered IBM customers) online platform that can run virtual machines (VMs) in a variety of formats, along with other services from IBM.

image The Smart Business Cloud - Enterprise is a big step up from IBM's previous pure Infrastructure as a Service (IaaS) offering, the Smart Business Development and Test Cloud. Users can provision stock VMs running Red Hat Enterprise Linux, SUSE Linux Enterprise Server and Microsoft Windows Server, or they can choose from an arsenal of preconfigured software appliances that IBM, in part, manages and that users consume. These include Industry Application Platform, IBM DB2, Informix, Lotus Domino Enterprise Server, Rational Asset Manager, Tivoli Monitoring, WebSphere Application Server, Cognos Business Intelligence and many others.

In what may tell the back story of IBM's own cloud computing development path, all of the images and applications being offered from SBC - Enterprise presently run in Amazon's Elastic Compute Cloud environment. Not all of the announced IBM cloud applications run in the IBM compute environment. [Emphasis added.]

Screenshots and a video demo show a familiar sight by now for cloud users: a point-and-click provisioning system in a Web interface. Apparently sensitive to its enterprise audience, IBM has taken some pains to offer higher-end features, including access and identity management control, security and application monitoring tools, and VPN and VLAN capabilities, as well as VM isolation. IBM technical support will also be available.

IBM has been a consistent supporter of open source software for the enterprise market and the inclusion of Red Hat Enterprise Linux (RHEL) and SUSE Linux are indications that IBM feels those products have the chops for big customers running Linux. Being based on Linux (Blue Insight) also makes it easy for IBM to port its technology into an Amazon-type network.

Analyzing IBM's entrance into public cloud
Some industry observers say they believe the cloud-based product disclosure is overdue, and a step in the right direction. The move is a natural one if IBM wants to extend the lofty position it currently holds in on-premises application integration.

"IBM needs to start talking about integration as a service (in the cloud)," said Dana Gardner, principal with Interarbor Solutions in Gilford, N.H. "They are a market leader in enterprise apps integration in the physical space, so the next step is to be a leader in the cloud space. They seem to be going in the right direction."

Gardner, along with other analysts and IT pros, have criticized IBM's overall cloud strategy as "scattershot" the past couple of years. After getting off to a promising start through a number of key acquisitions and technology introductions, the company's efforts appear to have lost momentum and focus.

"IBM stepped into the cloud early, but the market has been very dynamic the past two years," Gardner added. "When people think of the cloud now, they think about mobile, big data and analytics, along with cost reductions and simplifying. IBM hasn't stepped up to the latest zeitgeist around cloud to take all this on."

Other analysts said they believe IBM's delivery of its SBC - Enterprise is coming in the nick of time. It fully anticipates that fast-moving competitors, such as Salesforce.com, Google and Amazon, will debut much more capable products and services in areas where IBM still hasn't ventured.

"[IBM] is still looking at a pack of hungry competitors who have little trepidation about moving their cloud initiatives forward, moving way beyond the Infrastructure as a Service level offering things like run-time support and tools," said one analyst who requested anonymity. "Services are going to get a lot more sophisticated from the Googles of the world."

Competitive pricing disclosed
An official IBM Charge Schedule lays out price per instance in four tiers: Copper, Bronze, Silver and Gold. A Copper RHEL instance costs $0.19 per hour, and Gold is $0.46 per hour. Windows instances start at $0.10 per hour.

IBM will also sell reserved capacity, just as Amazon Web Services (AWS) does, for those looking for a long-term discount. Reserved Capacity is significantly cheaper, although a six-month commitment will cost a minimum of $1850 per month.

A Reserved Capacity RHEL instance costs $0.0154 per hour and a Gold instance will run you $0.30 per hour. Windows pricing starts at $0.064 per hour. Licensing is either determined by the application you consume or bring-your-own-licensing (BYOL). The pricing appears to be in line with other public cloud services like AWS and Rackspace.
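To put those rates in context (my arithmetic, not IBM's): at $0.19 per hour, a Copper RHEL on-demand instance running around the clock works out to roughly $139 for a 730-hour month, while the $0.0154 reserved rate comes to about $11 for the same month, before the $1,850 monthly reserved-capacity minimum is taken into account.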

IBM makes its entrance to the public cloud market in a big way. The service is apparently live and available to order right now. SearchCloudComputing.com recently put IBM in its top ten list of cloud providers based just on the strength of its success with the Smart Business Test and Development Cloud, a $30 million business after little more than a year in operation.

With its long history of delivering IT services, software and support to the Global 2000, some believed it was only a question of time until IBM made a decisive move into the cloud computing marketplace. It's too early to say if it will dominate the public cloud space, but it has everything it needs to do so, including global data centers, connectivity, service delivery capabilities, and the all-important enterprise user base.

More on IBM and the cloud:

Ed Scannell is an Executive Editor and Carl Brooks is the Senior Technology Writer for SearchCloudComputing.com.

IBM won’t win any awards for the name of its new cloud initiative.

Full disclosure: I’m a paid contributor to SearchCloudComputing.com


Todd Hoff (@toddhoffious) explained Netflix: Run Consistency Checkers All the time to Fixup Transactions in a 4/6/2011 post to the High Scalability blog:

image You might have consistency problems if you have: multiple datastores in multiple datacenters, without distributed transactions, and with the ability to alternately execute out of each datacenter;  syncing protocols that can fail or sync stale data; distributed clients that cache data and then write old back to the central store; a NoSQL database that doesn't have transactions between updates of multiple related key-value records; application level integrity checks; client driven optimistic locking.

Sounds a lot like many evolving, loosely coupled, autonomous, distributed systems these days. How do you solve these consistency problems? Siddharth "Sid" Anand of Netflix talks about how they solved theirs in his excellent presentation, NoSQL @ Netflix : Part 1, given to a packed crowd at a Cloud Computing Meetup.

You might be inclined to say how silly it is to have these problems in the first place, but just hold on. See if you might share some of their problems, before getting all judgy:

  • Netflix is in the process of moving an existing Oracle database from their own datacenter into the Amazon cloud. As part of that process the webscale data, the data that is proportional to user traffic and thus needs to scale, has been put in NoSQL databases like SimpleDB and Cassandra, in the cloud. Complex relational data still resides in the Oracle database. So they have a bidirectional sync problem. Data must sync between the two systems that reside in different datacenters. So there are all the usual latency, failure and timing problems that can lead to inconsistencies. Eventual consistency is the rule here. Since they are dealing with movie data and not financial data, the world won't end.
  • Users using multiple devices also leads to consistency issues. This is a surprising problem and one we may see more of as people flow seamlessly between various devices and expect their state to flow with them. If you are watching a video on your Xbox, pause it, then start watching the video on your desktop, should the movie start from where you last paused it? Ideally yes, and that's what Netflix tries to do. But think for a moment all the problems that can occur. All these distributed systems are not in a transactional context. The movie save points are all stored centrally, but since each device operates independently, they can be inconsistent. Independent systems usually cache data which means they can write stale data back to the central database which then propagates to the other devices. Or changes can happen from multiple devices at the same time. Or changes can be lost and a device will have old data. Or a device can contact a different datacenter than another device and get a different value. For a user this appears like Netflix can't pause and resume correctly, which is truish, but it's more tricky than that. The world never stands still long enough to get the right answer.
  • One of the features NoSQL databases have dropped is integrity checking. So all constraints must be implemented at the application layer. Applications can mess up and that can leave your data inconsistent.
  • Another feature NoSQL databases have dropped is transactions spanning multiple records. So if you update one record that has a key pointing to another record that will be updated in a different transaction, the two can get out of sync.
  • Syncing protocols are subject to the same failures as any other program. Machines can go down, packets can be dropped or reordered. When that happens your data might be out of sync.

How does Netflix handle these consistency issues?

  • Optimistic locking. One approach Netflix uses to address the consistency issues is optimistic locking. Every write carries a timestamp, and the newest timestamp wins, which may or may not always be correct, but given the types of data involved, it's usually good enough. NTP is used to keep the clocks on all the nodes in sync. (A minimal last-write-wins sketch follows after this list.)
  • Consistency checkers. The heavy-hitter strategy they use to bring their system back into a consistent state is consistency checkers that run continuously in the background. (See the checker-loop sketch below.) I've used this approach very effectively in situations where events that are used to synchronize state can be dropped. So what you do is build applications that are in charge of reaching out to the various parts of the system and making them agree. They make the world stand still long enough to come to an agreement on some periodic basis. Notice they are not trying to be accurate; the ability to be accurate has been lost. What's important is that all the different systems agree. If, for example, a user moves a movie from the second position on their queue to the third position on one device and that change hasn't propagated correctly, what matters is that all the systems eventually agree on where the item is in the queue; the real position matters less than the systems agreeing and the user seeing a consistent experience. Dangling references can be checked and repaired. Data transforms from failed upgrades can be fixed. Any problems from devices that use different protocols can be fixed. Any order that was approved but may not now have the inventory must be addressed. Any possible inconsistency must be coded for, checked, and compensated for. Another cool feature that can be tucked into consistency checkers is aggregation operations, like calculating counts, leader boards, totals, that sort of thing.
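
To make the last-write-wins idea concrete, here is a minimal C# sketch. It is not Netflix's code; the MovieBookmark type and its fields are hypothetical, standing in for whatever record two devices might both update.

using System;

public class MovieBookmark
{
    public string MovieId { get; set; }
    public TimeSpan Position { get; set; }
    public DateTime UpdatedUtc { get; set; }   // stamped by the writing device; clocks kept in sync via NTP
}

public static class LastWriteWins
{
    // Keep whichever copy carries the newer timestamp. Not always "correct",
    // but for save points and queue positions it is usually good enough.
    public static MovieBookmark Merge(MovieBookmark stored, MovieBookmark incoming)
    {
        if (stored == null) return incoming;
        return incoming.UpdatedUtc >= stored.UpdatedUtc ? incoming : stored;
    }
}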

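A consistency checker, stripped to its skeleton, is just a background sweep that re-reads related records from each store and forces them to agree. Below is a hedged sketch, assuming a hypothetical IQueueStore abstraction over the two copies of a user's movie queue; none of these names come from Netflix.

using System;
using System.Linq;
using System.Threading;

public interface IQueueStore
{
    string[] GetQueue(string userId);                 // movie ids, in order
    void PutQueue(string userId, string[] queue);
    DateTime GetLastUpdatedUtc(string userId);
}

public class QueueConsistencyChecker
{
    private readonly IQueueStore _primary;   // e.g. the relational system of record
    private readonly IQueueStore _replica;   // e.g. the NoSQL copy in the cloud

    public QueueConsistencyChecker(IQueueStore primary, IQueueStore replica)
    {
        _primary = primary;
        _replica = replica;
    }

    // Runs forever in the background. The goal is agreement, not "the right answer".
    public void Run(string[] userIds, TimeSpan sweepInterval, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            foreach (var userId in userIds)
            {
                var a = _primary.GetQueue(userId);
                var b = _replica.GetQueue(userId);
                if (a.SequenceEqual(b)) continue;

                // Disagreement: let the most recently written side win and copy it across.
                bool primaryNewer = _primary.GetLastUpdatedUtc(userId) >= _replica.GetLastUpdatedUtc(userId);
                if (primaryNewer) _replica.PutQueue(userId, a);
                else _primary.PutQueue(userId, b);
            }
            Thread.Sleep(sweepInterval);
        }
    }
}

The same loop is a natural home for the aggregation work mentioned above (counts, leader boards, totals), since it already visits every record on a schedule.
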
Loosely coupled, autonomous, distributed systems are complex beasts that are eventually consistent by nature. Netflix is on the vanguard of innovation here. They have extreme scale, they are transitioning into a cloud system, and they have multiple independent devices that must cooperate. It's great that they have shared their experiences and how they are tackling their problems with us.

The Problem of Time in Autonomous Systems

    One thing this article has brought up for me again is how we have punted on the problem of time. It's too hard to keep clocks in sync, so we don't even bother. Vector clocks are the standard technique for deciding which version of data to keep, but in an open, distributed, autonomous system, not all nodes can or will want to participate in the vector clock paradigm.

    We actually do have an independent measure that can be used to put an order on events. It's called time. What any device can do is put a very high precision timestamp on data. Maybe it's time to tackle the problem of time again?
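
    For reference, the vector-clock comparison mentioned above is easy to write down; getting every node in an open system to maintain one is the hard part. A minimal sketch, not tied to any particular database:

using System.Collections.Generic;
using System.Linq;

public static class VectorClock
{
    // Returns -1 if version a happened-before b, 1 if b happened-before a,
    // and 0 if the two versions are concurrent (a conflict the application must resolve).
    public static int Compare(IDictionary<string, long> a, IDictionary<string, long> b)
    {
        bool aBehind = false, bBehind = false;
        foreach (var node in a.Keys.Union(b.Keys))
        {
            long countA = a.TryGetValue(node, out var va) ? va : 0;
            long countB = b.TryGetValue(node, out var vb) ? vb : 0;
            if (countA < countB) aBehind = true;
            if (countB < countA) bBehind = true;
        }
        if (aBehind && !bBehind) return -1;
        if (bBehind && !aBehind) return 1;
        return 0; // identical or concurrent
    }
}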


    David Linthicum claimed “With Amazon's new 'dedicated instances,' it's clear that enterprises are still looking to hug their servers” as a deck for his The downside of dedicated instances, aka the new hosting post to InfoWorld’s Cloud Computing blog of 4/5/2011:

    image Amazon.com now offers Dedicated Instances, with which administrators can set up a private cloud using Amazon VPC (Virtual Private Cloud) that contains server partitions keeping data in one physical location. Specific hardware resources are yours for the duration; Amazon does not move processes and storage from server to server as clouds typically do when they support multitenancy and resource pooling.

    image This is nothing new. Back when I was CTO and CEO of cloud companies, customers often required that they have their own private server, typically for peace of mind and not because of a regulatory or security requirement. It used to be called hosting. …

    image I'm a little worried today to see hosting -- excuse me, "dedicated instances" -- growing as a cloud approach, as that could provide the wrong option for many enterprises.

    The problem many have with cloud computing is that it flies in the face of direct control of IT resources. Thus, the reasons for not moving to public clouds are more about the people making the decisions than about the functional requirements of the business. That's led to the popularity of private clouds, and now to the ability to use brand-name public cloud computing providers as private clouds.

    Private clouds, including the AWS Dedicated Instance offering, are a valid architectural approach in many enterprises. Many have to deal with compliance and regulatory issues, and they must keep their information and their servers under direct control.

    However, I suspect many businesses are using them to punch their "I'm using cloud" tickets and not because they're the most cost-effective solution. Indeed, I suspect that many who use dedicated instances, whether from Amazon.com or elsewhere, are not realizing the cost advantages that cloud computing should provide.

    In running the numbers and considering the costs of dedicated cloud instances, I find that it's cheaper to keep your systems in the data center. In the case of dedicated instances, it's silly to go to the cloud.


    Guy Harrison described Graph Databases and the Value They Provide in a 4/5/2011 article for the April 2011 issue of Database Trends and Applications:

    The relational database is primarily oriented toward the modeling of objects (entities) and relationships.  Generally, the relational model works best when there are a relatively small and static number of relationships between objects.  It has long been a tricky problem in the RDBMS to work with dynamic, recursive or complex relationships.  For instance, it's a fairly ordinary business requirement to print out all the parts that make up a product - including parts which, themselves, are made up of smaller parts.  However, this "explosion of parts" is not supported consistently across relational databases.  Oracle, SQL Server and DB2 have special, but mutually inconsistent, syntax for these hierarchical queries, while MySQL and PostgreSQL lack specific support.

    To make the situation worse, the World Wide Web and many Web 2.0 sites exhibit far more complicated networks of relationships than were expected when SQL was designed.  The network of hyperlinks that connect all the pages on the World Wide Web is extraordinarily complex and almost impossible to model efficiently in an RDBMS.  Similar issues are involved in modeling the social networks of Twitter, Facebook and other comparable sites.

    Graph databases aspire to overcome these problems by adopting a data model in which the relationships between objects or entities are equally as important as the objects themselves.  In a graph database, data is represented by nodes, edges and properties.  Nodes are the familiar objects that we might model in a RDBMS or key-value store - customers, products, parts, web pages, etc.  Edges represent some relationship between two nodes - friendships, links, composition, etc.   Both nodes and edges can have properties, so the nature of a relationship can be given quite specific characteristics.  Traversing - walking - the network of relationships is a fundamental and efficient operation in a graph database.
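
    To make the node/edge/property model concrete, here is a toy in-memory sketch in C#. It is nothing like a real graph database engine, and the type names are made up for illustration, but it shows how the "explosion of parts" mentioned earlier becomes a plain recursive walk over edges.

using System;
using System.Collections.Generic;

public class Node
{
    public string Id;
    public Dictionary<string, object> Properties = new Dictionary<string, object>();
    public List<Edge> OutEdges = new List<Edge>();
}

public class Edge
{
    public string Label;          // e.g. "MADE_OF"
    public Node Target;
    public Dictionary<string, object> Properties = new Dictionary<string, object>();  // e.g. quantity
}

public static class Traversal
{
    // Prints a product and, recursively, every part it is made of.
    // Assumes the bill of materials is acyclic; a real engine would also track visited nodes.
    public static void ExplodeParts(Node product, int depth = 0)
    {
        Console.WriteLine(new string(' ', depth * 2) + product.Id);
        foreach (var edge in product.OutEdges)
        {
            if (edge.Label == "MADE_OF")
                ExplodeParts(edge.Target, depth + 1);
        }
    }
}

    In a graph database this kind of traversal is the primitive operation, which is why it stays fast even when the relationships run many hops deep.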

    The two best known graph databases are probably FlockDB and Neo4J.

    Neo4j is a Java-based graph database that can easily be embedded in any Java application, or run as a standalone server.  Neo4j has been around in various forms for many years, with the first official production release in early 2010.  Neo4j supports billions of nodes, ACID-compliant transactions and multi-version consistency.

    Multi-server scalability is still a work in progress for the Neo4j team.  Most NoSQL databases horizontally scale across multiple servers by partitioning or "sharding" data, so that all the data of interest usually can be found on a particular host.  For instance, all the information pertaining to a particular user might be sharded to a specific server.  The nature of graph databases makes this fairly difficult, since, by definition, the edges define relationships between multiple nodes so they can't easily be isolated to a specific server.

    Last year, Twitter open sourced its graph database, FlockDB.  FlockDB overcomes the difficulty of horizontally scaling the graph by limiting the complexity of graph traversal.  In particular, FlockDB does not allow multi-hop graph walks, so it can't do a full "explosion of parts."  However, it is very fast and scalable if you only want to access first-level relationships.

    There are other graph-related projects of note, including attempts to add graph-like capabilities to the Hadoop ecosystem through the Hama project, the Gremlin graph programming language, and other commercial and open source graph databases such as InfiniteGraph and GraphDB.

    Like most NoSQL database systems, graph databases are not a general replacement for the relational databases that have been the mainstay of business data storage for the last two decades. Rather, they are highly specialized systems tailored for very specific use cases.  Those use cases are increasingly moving from niche to mainstream, however, so I expect to see increasing interest in graph databases.


    Romin Irani reported Google App Engine Brings Python Features to Java in a 4/5/2011 post to the ProgrammableWeb blog:

    image Google continues to churn out releases to its platform-as-a-service Google App Engine API. Since the beginning of the year, there have been new releases every 4-6 weeks. The latest is out, with a focus on bringing parity between the Java and Python runtimes, a couple of new APIs that allow writing applications that monitor incoming live data and read/write files, and some task queue/cron updates.

    image Google App Engine allows applications to be written in either the Java or Python programming language. For quite a while now, various APIs have been available in the Python world but not in Java. This new release corrects some of that discrepancy by bringing Remote API and Deferred API support to Java. The Remote API allows working with the application datastore from your local machine, while the Deferred API is a good mechanism for writing and executing ad hoc tasks without the full overhead of dedicated task handlers.

    One feature introduced for the Java runtime that is bound to raise some eyebrows is the Concurrent Requests feature. The blog post says that until now, Java applications relied on starting additional instances to dynamically scale up for higher traffic levels; in other words, as the documentation puts it, by default App Engine sends requests serially to a given web server. This does seem like a penalty for developers who had written thread-safe servlets in any case. With this feature, however, all developers need to do to allow concurrent requests is declare in their appengine-web.xml that their code is thread safe.
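
    The declaration itself is a single element in appengine-web.xml. A minimal fragment is shown below; the application-specific elements (application id, version and so on) are omitted.

<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
  <!-- Tell App Engine the servlets are thread safe, so one instance may serve
       concurrent requests instead of extra instances being spun up. -->
  <threadsafe>true</threadsafe>
</appengine-web-app>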

    The Python runtime has a new API named the Prospective Search API that is currently experimental. The Prospective Search API allows you to create applications that can effectively monitor incoming live data. The way you use it is by registering search queries and then matching them against new documents in real time, as the documents arrive in your application. You can then easily raise alerts based on the matches.  For the experimental release, users will be allowed 10,000 subscriptions with the Prospective Search API; pricing details will be announced when the API moves into production later on. Python users also get a Testbed unit test framework that works against App Engine API stubs, which was previously available to Java users.
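
    To make the register-then-match idea concrete, here is a toy sketch of the pattern in C#. It is emphatically not the App Engine Prospective Search API, just an illustration of inverting the usual search flow; the class and its naive keyword matching are hypothetical.

using System;
using System.Collections.Generic;
using System.Linq;

public class ProspectiveMatcher
{
    // subscription id -> keywords that must all appear in a matching document
    private readonly Dictionary<string, string[]> _subscriptions = new Dictionary<string, string[]>();

    public void Subscribe(string subscriptionId, string query)
    {
        _subscriptions[subscriptionId] = query.ToLowerInvariant().Split(' ');
    }

    // Called for every incoming document; returns the subscriptions it satisfies,
    // so the application can raise alerts in near real time.
    public IEnumerable<string> Match(string document)
    {
        var words = new HashSet<string>(document.ToLowerInvariant().Split(' '));
        return _subscriptions
            .Where(s => s.Value.All(words.Contains))
            .Select(s => s.Key)
            .ToList();
    }
}

    A subscriber registered with Subscribe("azure-alerts", "azure outage") would be returned by Match as soon as a document containing both words arrived.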

    The Files API is new in this release and is available to both the Python and Java runtimes. It allows you to read and write files using the Blobstore API. This is a welcome addition, since reading and writing files has been an often-requested feature from users.

    For full details on the App Engine 1.4.3 release, refer to the post on the App Engine blog, and see the release notes for Java and Python for the new features and fixed issues.


    Steven S. Warren (@stevenswarren) asserted “Despite lacking some developer-friendly tools offered by other cloud services, Citrix OpenCloud provides seven solutions that together can meet almost any cloud-based need” in a deck for his Citrix OpenCloud: The Seven-legged Cloud Service post of 4/4/2011 to the DevX blog:

    image A turnkey solution -- that's how Citrix describes its "OpenCloud" offering. Whether a business is looking to expand its current cloud program or has yet to integrate cloud-based services, the OpenCloud concept accommodates either path. While many cloud services are simply "storage" solutions, Citrix offers everything from elastic cloud storage to on-demand applications to virtualization, and anything else a business needs to run from the cloud. Even though Citrix gives you the ability to integrate third-party cloud applications and services along with its products, a business starting from scratch with no on-site or cloud storage could use OpenCloud exclusively.

    image While the solutions that Citrix provides are great tools for the enterprise, there is no consumer or developer-level product. Unlike Amazon EC2, which can accommodate anything from an individual developer to a massive corporation, Citrix is business-centric only. It offers no simple-storage solution (even though it's certainly capable of one) and no native mobile platform.

    image The OpenCloud actually consists of seven different solutions that together can meet almost any cloud-based need:

    • On-Demand Apps
    • On-Demand Demos
    • On-Demand Desktops
    • Compliance
    • On-boarding
    • Disaster Recovery
    • Application Development and Testing

    In this article, I explore each of these offerings and explain how they combine to make up Citrix OpenCloud.

    Citrix On-Demand Cloud: Enterprise Ready, Developer "Unfriendly"

    On-Demand applications allow a business to host numerous applications off-site and access them through multiple platforms, including mobile devices. Even though Web-based applications usually operate slower than natively installed apps, the ability for multiple clients to access a variety of applications on-demand will make a cloud solution well worth it.

    Gmail is an example of this kind of application in its simplest form. Any mobile device, desktop OS and thin client with a browser can access Gmail, Calendar and Contacts and run them without installation. Similarly, cloud applications such as those created in Citrix OpenCloud can be used immediately without having to wait for a download.

    Some cloud services provide ready-to-use applications or even templates to use as starting points. Unfortunately, Citrix does not provide such tools, and OpenCloud is not a development platform. Most applications ported to the Citrix framework will be developed externally, adding complexity for the end user and the third-party developer. And as stated previously, even though cloud applications managed by Citrix run in a virtual environment without the need for downloads, a natively installed application will always perform better than a cloud-based one.

    The Citrix On-Demand Demo solution refers to the ability for a business to create a virtual environment that will showcase an application or solution for client presentation. Since the OpenCloud can integrate with your current solution, you can use the Citrix environment to showcase an app while hosting it on a native server on-site. Alternately, Citrix also provides an end-to-end solution that includes storage capacity, "network topology" and customizable templates to meet individual clients' needs. The OpenCloud is advertised as able to create a custom proof-of-concept environment in minutes.

    Citrix OpenCloud Compliance, On-Boarding and Disaster Recovery

    Citrix also implements a strong "Compliance" solution, protecting all your data whether on site or in the cloud. Using the OpenCloud offering, all data and applications are forced into the backup solution without any modification to the applications themselves. As you'd expect, the Citrix compliance solution also provides end-to-end encryption and powerful authentication tools to keep your data safe and secure. And since OpenCloud works with whatever hardware and software is already implemented, any data created before the Citrix implementation can be protected in the cloud as well.

    When a business attempts a move to the cloud, another concern is the fate of current applications already developed on first-party platforms. Another powerful tool afforded by Citrix is "On-Boarding." This feature makes it easy to migrate your current applications to the cloud for virtual implementation. OpenCloud supports numerous virtual platforms and requires little adjustment to current apps. Once moved to the cloud, these application workloads can be managed via a virtual dashboard. Even when they are in the cloud, applications will behave as though they are still native to a platform and won't require an additional learning curve for users.

    Along with these numerous features, OpenCloud offers a robust disaster recovery solution to make sure your data is always safe. Using replication and secure communication between the main data center and the cloud, data can always be rebuilt from the ground up. If a data center were lost, you could still operate applications and a desktop environment from the cloud. Citrix will also sync your current data solution -- whether it is a physical server or a public or private cloud -- and integrate disaster recovery automatically.



    The SearchCloudComputing.com blog posted a 00:13:25 Cloud Cover TV video segment, Randy Bias dumps on enterprise clouds, on 3/30/2011 (missed when published):

    image Randy Bias, CTO and co-founder of cloud consulting firm Cloudscaling, joins us on this week's episode of Cloud Cover TV to share his thoughts on "enterprise clouds." Spoiler alert: he thinks they're doomed to failure.

    We also discuss Cisco's purchase of NewScale and the new "dedicated instances" from Amazon Web Services.

    image For the rest of the episodes, check out the Cloud Cover TV home page.

    Full disclosure: I’m a paid contributor to SearchCloudComputing.com.


    <Return to section navigation list> 
