Monday, February 28, 2011

Windows Azure and Cloud Computing Posts for 2/28/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Amina Saify of the Microsoft JDBC Driver Team Blog posted Microsoft SQL Server JDBC 3.0 and SQL Azure on 2/21/2011 (missed when posted):

We are very pleased to announce the availability of an updated version of Microsoft SQL Server JDBC Driver 3.0 supporting Microsoft SQL Azure database:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=ae924066-2946-40a7-93c6-c7e83f54072f&displaylang=en

This updated version supersedes the release version of SQL Server JDBC Driver 3.0 and also addresses an issue with the JDBC getSchemas API. For further information on the getSchema issue, please refer to http://support.microsoft.com/kb/2504052 and the corresponding release notes.

Let us know how we’re doing and give us your feedback through the Microsoft SQL Server Data Access Forum, Microsoft Connect, or this blog.

The previous release version of Microsoft SQL Server JDBC Driver 3.0 is available here for reference:

http://www.microsoft.com/downloads/en/details.aspx?FamilyID=a737000d-68d0-4531-b65d-da0f2a735707&displaylang=en

If you do not plan to connect to SQL Azure or are not impacted by the getSchema issue, you can continue to use the previous release of Microsoft SQL Server JDBC Driver 3.0.


<Return to section navigation list> 

MarketPlace DataMarket and OData

Matt Stroshane (@mattstroshane) started a new series with Introducing the OData Client Library for Windows Phone, Part 1 on 2/28/2011:

This is the first part of the 3-part series, Introducing the OData Client Library. In this post, we’ll discuss what proxy classes are, download the library, and create a set of proxy classes for the Northwind OData service.

I’ve felt that, most of the time, the process of using the library is skipped over pretty fast. In this series I’m going to slow it down and add a little more context. I’m hoping that if you’re new to OData proxy classes, this will be a nice stroll into OData-land. If you’ve worked with this library before, I hope that the added context will further your understanding of the concepts. This post is also part of the OData + Windows Phone blog series.

Prerequisites

This post assumes you’ve done a baseline setup for Windows Phone Development. For more information, see Download Links for Windows Phone and Windows Azure Developers.

Proxy Classes Defined

Before I begin, let’s agree to what I mean by proxy in the term proxy classes. In using that term, I lean heavily on the standard definitions of proxy, as defined by the Bing search results for “define proxy”:

  1. authorized capacity of substitute: the function or power of somebody authorized to act for another person
  2. substitute: somebody authorized to act for another person
  3. authorization document for stand-in: a document authorizing somebody to act for another person

Proxy classes in your Windows Phone application are a set of classes that act as substitutes/stand-ins for the data that the service provides. Rather than having to write code that operates on XML payloads, you can program against .NET objects, the OData proxy classes. The OData class library for Windows Phone takes care of all the busy work of serializing to and from XML.

Note: When using an MVVM programming style with OData, proxy classes collectively make up the model in model-view-viewmodel. For more information about MVVM, see Implementing the Model-View-ViewModel Pattern in a Windows Phone Application.
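To make that payoff concrete before we generate anything, here is a rough, hedged sketch of what Windows Phone code that consumes the generated proxy classes can look like. The NorthwindEntities context and Customer entity names are assumptions about what DataSvcUtil emits for the Northwind service used later in this post, and the sketch is in C# even though the walkthrough below generates a VB file.

using System;
using System.Data.Services.Client;  // OData Client Library for Windows Phone
using NorthwindModel;               // assumed namespace of the generated proxy classes

public class CustomersViewModel
{
    // Root URI of the public Northwind OData service.
    private static readonly Uri ServiceRoot =
        new Uri("http://services.odata.org/Northwind/Northwind.svc/");

    // DataSvcUtil generates a DataServiceContext subclass; the name is assumed here.
    private readonly NorthwindEntities context = new NorthwindEntities(ServiceRoot);

    public DataServiceCollection<Customer> Customers { get; private set; }

    public void LoadCustomers()
    {
        Customers = new DataServiceCollection<Customer>(context);

        // LoadCompleted fires when the asynchronous request finishes.
        Customers.LoadCompleted += (sender, e) =>
        {
            if (e.Error != null)
            {
                // Surface the error to the UI instead of swallowing it.
            }
        };

        // Request the Customers entity set asynchronously; no XML handling required.
        Customers.LoadAsync(new Uri("Customers", UriKind.Relative));
    }
}

Part 2 of Matt’s series walks through the real generated types, so treat this as a preview rather than the series’ actual code.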

Download the OData Client Library

The OData Client Library for Windows Phone is created by the Microsoft WCF Data Services (OData) team, and is not currently included with the Windows Phone development tools. To download the client library, go to the OData Client for Windows Phone CodePlex page and download ODataClient_BinariesAndCodeGenToolForWinPhone.zip:

OData Client for Windows Phone download page

The most important components of this download are:

  • The assemblies: System.Data.Services.Client.dll and System.Data.Services.Design.dll
  • The code (proxy class) generation tool: DataSvcUtil.exe
Unblock and Unzip the Download

If you have Windows 7, unblock the download before you unzip it. Just right-click on the ODataClient_BinariesAndCodeGenToolForWinPhone.zip file and click Unblock:

Unblock the OData Client for Windows Phone zip file

Exploring DataSvcUtil.exe

The first thing you may notice in the unzipped folder is the WCF Data Service Client Utility, DataSvcUtil.exe. This is the tool you’ll use to generate the OData proxy classes. You tell it where the OData service is, the service root URI, and it does its thing. For a long list of URI examples, see the right column of the tables on the OData.org Producers page.

To see the full list of switches for the utility, run it with the /help option. When you do so, you get the following results:

C:\Temp\ODataClient_BinariesAndCodeGenToolForWinPhone7>DataSvcUtil.exe /help

Microsoft (R) DataSvcUtil version 1.0.0.0
Copyright (C) 2008 Microsoft Corporation. All rights reserved.

/in:                    The file to read the conceptual model from
/out:                   The file to write the generated object layer to
/language:CSharp        Generate code using the C# language
/language:VB            Generate code using the VB language
/Version:1.0            Accept CSDL documents tagged with m:DataServiceVersion=1.0 or lower
/Version:2.0            Accept CSDL documents tagged with m:DataServiceVersion=2.0 or lower
/DataServiceCollection  Generate collections derived from DataServiceCollection
/uri:                   The URI to read the conceptual model from
/help                   Display the usage message (short form: /?)
/nologo                 Suppress copyright message

...

In this output, note the initial call to the /help command and the /language parameter, which determines the language in which the code is generated.

Example: Northwind OData Service

In the spirit of its role as the classic sample database for SQL Server, Northwind has now moved to the cloud to demonstrate OData. To take DataSvcUtil.exe for a test drive, we’ll use the Northwind OData service to generate OData class libraries for a Windows Phone application. For more information about what Northwind is, see The Story of Pubs, Northwind, and AdventureWorks.

To create the proxy classes, you need to specify the service root URI in the /uri switch of DataSvcUtil.exe. The service root URI is the root of the OData service, much like www.Microsoft.com is the root URL of the Microsoft Web site. The root is important because every OData service is required to show all the available resources when the root URI is requested.

Upon receiving the list of resources, DataSvcUtil.exe then creates the .NET proxy classes based on the descriptions of the resources that the OData service provides. From the /out and /language switches, the utility knows what to name the output file and which language the code should be in (it defaults to C# when /language is not specified). For example:

datasvcutil.exe /uri:http://services.odata.org/Northwind/Northwind.svc/ /out:.\NorthwindModel.vb /language:VB /Version:2.0 /DataServiceCollection

When the utility completes, your folder should look like this:

OData proxy class file, NorthwindModel.vb, in Windows Explorer

Next

In Introducing the OData Client Library for Windows Phone Part 2, we’ll take a look at what’s inside NorthwindModel.vb and discuss how to set up your application to use the OData proxy classes.

See Also: Community Content

Netflix Browser for Windows Phone 7 – Part 1, Part 2
OData v2 and Windows Phone 7
Data Services (OData) Client for Windows Phone 7 and LINQ
Learning OData? MSDN and I Have the videos for you!
OData and Windows Phone 7, OData and Windows Phone 7 Part 2
Fun with OData and Windows Phone
Developing a Windows Phone 7 Application that consumes OData
Lessons Learnt building the Windows Phone OData browser

See Also: Documentation

Open Data Protocol (OData) Overview for Windows Phone
How to: Consume an OData Service for Windows Phone
Connecting to Web and Data Services for Windows Phone
Open Data Protocol (OData) – Developers
WCF Data Services Client Library
WCF Data Service Client Utility (DataSvcUtil.exe)


<Return to section navigation list> 

Windows Azure AppFabric: Access Control and Service Bus

No significant articles today.


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

Avkash Chauhan explained how to overcome Installing Windows Azure Connect Client may return error: "Not a valid Win32 application" on 2/28/2011:

When someone installs the Azure Connect client using the link provided on the Azure Portal (shown below) after Azure Connect is enabled in the Windows Azure application...

">https://waconnecttokenpage.cloudapp.net/Default.aspx?token=<Customer_Token_ID>

... in some cases the installation may fail with the message "Not a valid Win32 application".

This message could occur only if the Windows Azure Connect client binaries are being installed on an OS that Windows Azure Connect doesn't support.

Windows Azure Connect supports Windows Vista, Windows 7, Windows Server 2008, and Windows Server 2008 R2 (full details at http://msdn.microsoft.com/en-us/library/gg508836.aspx). Windows XP and Windows Server 2003 are not supported.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The Windows Azure Team suggested Using MSBuild to Deploy to Multiple Windows Azure Environments in a 2/28/2011 post:

If you've used the Windows Azure Tools for Visual Studio, you'll know how easy it is to publish your applications to Windows Azure straight from Visual Studio with just a couple of clicks. But while this process is great for individual projects, most teams use automated build and deployment processes to get more predictability and control over what gets built and where it gets deployed. In most real-world scenarios, you'll probably want to deploy to multiple Windows Azure managed services or subscriptions (for example, test, UAT and production) and use different configuration settings for each.

Tom Hollander from the Windows Azure Technology Adoption Program team has just published a blog post and sample project that shows one way to extend Visual Studio 2010 and Team Foundation Server 2010 to automatically build and deploy your Windows Azure applications to multiple accounts and use configuration transforms to modify your Service Definition and Service Configuration files. The goal is to make it easy to set up as many TFS Build definitions as required, enabling you to build and deploy to a chosen environment on-demand or as part of a regular process such as a Daily Build.

To learn more, check out Tom's post.


Cory Fowler (@SyntaxC4) continued his series with Installing PHP on Windows Azure leveraging Full IIS Support: Part 3 on 2/28/2011:

In my last two posts in this series [Part 1, Part 2] we created a start-up script to install PHP on our Windows Azure Web Role and created the necessary Cloud Service Definition and Cloud Service Configuration files.

In this post we’ll look at how to use the command-line tools for Windows Azure to package our deployment and how to deploy that package to the Windows Azure Platform Portal.

Packaging a Windows Azure PHP Deployment

I know what you’re thinking: “Hold up, we need to package our deployment? Whatever happened to good old FTP?”

FTP works great for a single point of access, but we’re looking to be able to rapidly deploy our application across many servers in a matter of minutes. Microsoft uses these packages for just that purpose, rapid deployment.

What is the Cloud Service Package (.cspkg)

The Cloud Service Package is essentially a compressed file that holds your application, the Cloud Service Definition and a number of other files required by Windows Azure. These files are encrypted by the tool in case the package is intercepted while being transported over HTTP.

You are able to leave the cspkg unencrypted using one of two methods. I would suggest using the environment variable [setting _CSPACK_FORCE_NOENCRYPT_ to true], mostly because there is a good chance you won’t be using MSBuild for your PHP application.
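If you prefer the command line to the GUI walkthrough that follows, one quick way to set that variable permanently for your user account is the setx command (the variable name is the one from the paragraph above; open a new command prompt afterward so that cspack picks up the value):

setx _CSPACK_FORCE_NOENCRYPT_ true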

Setting an Application Variable

Open the Control Panel. Find and Click on System.


Click on the Advanced System Settings link in the Right hand Menu. This will open the System Properties Dialog.


Click on Environment Variables…


Then on New…


Creating a Build Script to Automate Deployment Packaging

As a developer, it is a good idea to optimize repetitive tasks in order to be more productive. One of the optimization points you should be looking at is build automation. Because PHP is a scripting language and does not need to be compiled, this is a very simple thing to accomplish. Here is how simple your “build script” can be:

cspack ServiceDefinition.csdef

In our previous post we added the Windows Azure Command-Line Tool Path to our System Path. This enables the above script to execute the tools and create the Cloud Service Package.

To make your build script more powerful, you could add additional functionality, like executing a PowerShell script that leverages the Windows Azure PowerShell cmdlets to deploy your latest build to your staging environment in Windows Azure.

Deploying your Service Package to Windows Azure

There are many different ways of getting your Windows Azure project into the cloud. Once you’ve had a chance to evaluate the different methods, I would suggest using the one you find most suitable.

Here is a list of different methods of Deploying to Windows Azure:

Note: Barry Gervin and I have put together a video on how to deploy using the Windows Azure Platform Portal.

  • Deploying using Visual Studio
Test Startup Scripts Locally using RDP

As developers we all know that testing will always win out over trial and error, and this is no different when creating a startup script for Windows Azure. Save yourself from pulling out your hair by testing the executable files that will be called by your start-up tasks, by RDPing into a Compute Instance.

I’ve created a few blog posts that will aid you in setting up RDP into Windows Azure:

Note: Installers that have a “Quiet Install” option (/q) in their command-line switches are your absolute best friend. Otherwise, trying to mimic a user’s acceptance of a EULA or other dialog is rather difficult. WebPI allows us to accept a EULA with a switch (/AcceptEula).


The Windows Azure Team reported NOW AVAILABLE: CTP of the Windows Azure Starter Kit for Java on 2/28/2011:

Windows Azure developers have always had options when it comes to which languages (such as .NET, PHP, Ruby or Java) and development tools (such as Visual Studio or Eclipse) to use to build applications on the Windows Azure platform.  Today, there's yet another choice with the availability of the Community Technology Preview (CTP) of the Windows Azure Starter Kit for Java, which was announced last Wednesday on the Interoperability @ Microsoft blog.

The goal of this CTP is to get feedback from Java developers, and to nail down the correct experience for Java developers, particularly to make sure that configuring, packaging and deploying to Windows Azure integrates well with common practices.

Designed to work as a simple command line build tool or in the Eclipse integrated development environment (IDE), the starter kit is an open source project released under the Apache 2.0 license and is available for download here.

To learn more about the CTP, please read the full blog post about this announcement on the Interoperability @ Microsoft blog here.


Wely Lau described the “CommunicationObjectFaultedException was unhandled” error when working on a Windows Azure project with SourceSafe and suggested a workaround on 2/28/2011:

If you encounter this error message, you may well guess that the most probable cause lies in the WCF configuration, since it mentions System.ServiceModel, which is the class library for WCF. You can certainly troubleshoot the error from the WCF point of view. But what if that doesn't solve the problem no matter what you've tried, and you are sure that your WCF configuration is exactly correct?


Then let me tell you about another possible cause of this error message, which I see as a bug in Visual Studio or, more precisely, in the VS Tools for Windows Azure. If you face the problem when developing your Windows Azure project with source control such as SourceSafe, Team Foundation Server, or Subversion (SVN), you have most probably encountered the issue I am talking about.

Visual Studio modifies your web.config file when the project is running

If you watch carefully, each time you run your Windows Azure project, Visual Studio actually does something to your web.config file. You can see this if your web.config file is open in the editor: a dialog will prompt you that web.config has been modified and ask whether you want to reload it.


What does VS actually add?

<machineKey> element

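Since the screenshot doesn't reproduce here, the element Visual Studio adds inside <system.web> looks roughly like the following sketch; the key values are placeholders, not the actual generated keys, and the exact attributes may vary with your tools version:

<system.web>
  <machineKey decryptionKey="PLACEHOLDER-DECRYPTION-KEY"
              validationKey="PLACEHOLDER-VALIDATION-KEY"
              decryption="AES"
              validation="SHA1" />
</system.web>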

What is that?

The machineKey element is used by an ASP.NET website to configure the algorithms and keys used for encryption, decryption, and validation of forms-authentication data and view-state data, and for out-of-process session-state identification.

What's the reason for adding it?

As you know, any VM instances hosted on Windows Azure will be load-balanced automatically. Since they are load-balanced, nobody can guarantee that the same instance will handle a particular request. Because we need to ensure that any session or view-state data sent to the browser can be encrypted and decrypted properly by whichever instance receives the next request, the machine key of each instance must be consistent. That's why Visual Studio modifies or adds the machine key.

Solution

The easy way is to check out your web.config file. That gives Visual Studio write access to web.config so it can add or modify the machineKey element.

Huh… is there a better solution? As far as I know, there is none at this moment. However, Microsoft has captured this "bug" and I do hope it will be fixed in an upcoming release of the Azure SDK.


<Return to section navigation list> 

Visual Studio LightSwitch

Edu Lorenzo described Implementing Access Control on a Visual Studio LightSwitch Application in a 2/27/2011 post:

Alright.. after my previous blogs looking at Visual Studio LightSwitch from under the hood, building a small demo application using Visual Studio LightSwitch, implementing a simple workflow with Visual Studio LightSwitch, and noting how cool customizing a Visual Studio LightSwitch application is (with changes made during runtime still being in the undo stack at design time), this time I will try to illustrate the answer to an offline question I got about LightSwitch.

"How would one implement role-based authentication with Visual Studio LightSwitch? How is Windows Authentication done in Visual Studio LightSwitch, if it can be done, and what about non-Windows Authentication?"

So I went along and told the person.. I haven’t tried it yet, but I’ll try it for you and blog the result.

So here goes.

The first thing that comes to mind is to add a login form, right? But no, it didn't turn out that way. As you go along creating a Visual Studio LightSwitch application, you get the idea that most of the tasks you perform are adding objects (either entities or screens) and then changing properties. So I tried adding a User entity and then creating a screen for it. It didn't quite work out, but I think I found the way.

By looking around and trying stuff I found this

I right-clicked on the project and chose Properties, then went ahead and clicked the Access Control tab and added a couple of experiments.

I highlighted several areas on the screenshot above.

First are the option buttons. Here you will see that Visual Studio LightSwitch offers both Windows Authentication and Forms Authentication, as well as a disabled authentication mode. For this blog, I'll choose Windows Authentication.

Next you will see that I highlighted the descriptions that I added. “on the screen” and “on the entity”

Here I illustrate that Visual Studio LightSwitch can implement access control on both (or either) the entity that you create and the screen.

It is also worth noticing that Visual Studio LightSwitch has a default access role called SecurityAdministration. That was there when I got there.. so.. that's the default. Hahahaha!

Okay, so how do you implement these roles now?

Just poke around the IDE and you will find this:

There is actually a stub where you can place code to determine whether the patientSearchScreen can be run!

So I go into that part of the code and..

It takes me to Application.cs, the code for the whole application, where I insert that line of code.

partial void CanRunPatientSearchScreen(ref bool result)
{
    // Set result to the desired field value
    result = this.User.HasPermission(Permissions.canViewPatientScreen);
}

The CanRunPatientSearchScreen method effectively returns true or false (through its result parameter, of course), so what we do is set result depending on whether the User logged in to the application has the permission.

So there… we have implemented access control in Visual Studio LightSwitch at the screen level. But what about the entity level that I tried before? You may stop reading this blog for a while and try out the steps yourself by also clicking the "Write Code" drop-down on the Therapist entity to see if the code stubs are there.

I got this:

And since the access control role that I declared for this entity is "Can Add Therapist", we should go to the Therapists_CanInsert function and add our code there.
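The screenshot doesn't reproduce here, so the following is a minimal sketch of what that entity-level stub might look like; the permission name CanAddTherapist is an assumption, so use whichever name you defined on the Access Control tab.

partial void Therapists_CanInsert(ref bool result)
{
    // Entity-level check: only users holding the (assumed) CanAddTherapist
    // permission are allowed to insert Therapist records.
    result = this.Application.User.HasPermission(Permissions.CanAddTherapist);
}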

Notice the slight difference in code?

Some key things are worth noticing:

  • We have now seen how Visual Studio LightSwitch implements Access Control using Windows Authentication.
  • Visual Studio LightSwitch allows you to define access control both on the screen level and entity level.
  • Visual Studio LightSwitch creates the User object and Permissions class for us, making it easier to implement access control using Windows Authentication.

Stay tuned for my next blog on how to implement Forms Authentication and adding new Users and setting the roles within Visual Studio LightSwitch.


<Return to section navigation list> 

Windows Azure Infrastructure

Buck Woody (@buckwoody) posted Windows Azure Use Case: High-Performance Computing (HPC) on 2/28/2011:

This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx

Description:

High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests and parses them out to “worker nodes”. This is useful in cases such as scientific simulations, drug research, MatLab work and other situations where large compute loads are required. It’s not the immediate-result type of computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers, the worker nodes work on individual parts of the calculations, and they return the work to the scheduler or head node for the requestor in a batch-request fashion.

This is typical to the way that many mainframe computing use-cases work. You can use commodity-based computers to create an HPC Cluster, such as the Linux application called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster that you can read more about here.

The issue with HPC (from any vendor) that some organizations have is the number of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic.

Implementation:

Recently, Microsoft announced an internal partnership between the HPC group (now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding Compute Nodes in Windows Azure, using a “Broker Node”.  You can purchase time to add machines, and then stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC but need to “burst” that load under certain conditions.

The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations.

References:

Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx

Links for further research on HPC, includes Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx


Joe Weinman (@joeweinman) posted his 31-page Smooth Operator: The Value of Demand Aggregation working paper on 2/27/2011. From the abstract:

In industries such as cloud computing, lodging, and car rental services, demand from multiple customers is aggregated and served out of a common pool of resources managed by an operator. This approach can drive economies of scale and learning curve effects, but such benefits are offset by providers' needs to recover SG&A and achieve a return on invested capital. Does aggregation create value or are customers' costs just swept under a provider's rug and then charged back?

Under many circumstances, service providers—which one might call “smooth operators”—can take advantage of statistical effects that reduce variability in aggregate demand, creating true value vs. fixed, partitioned resources serving that demand.

If each customer's demand is flat or varies identically except for a scaling factor, aggregating the demand and serving it out of a shared pool of resources has no value. In fact, no matter what else happens off-peak, if a set of customers that have a build-to-peak capacity strategy all have a simultaneous peak, there is no benefit to aggregation.

When customers' demands are independent but have the same mean and standard deviation, combining them has substantial value: the coefficient of variation (a measure of the degree of variability) is reduced by a factor of √n, i.e., the aggregate is smoother than its components.
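(A quick editorial sketch of where the √n comes from, assuming n independent demands each with mean μ and standard deviation σ: the aggregate has mean n·μ and, because variances add for independent demands, standard deviation √n·σ, so its coefficient of variation is (√n·σ)/(n·μ) = (σ/μ)/√n.)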

Aggregation increases the likelihood of meeting an availability Service Level Agreement with the same number of resources, or, equivalently, lets such an SLA be met with fewer resources.

If demands are independent but normally distributed, then with the same quantity of resources, the penalty cost associated with the excess capacity and/or unserved demand is reduced to 1/√n of the unaggregated penalty cost.

Customers need to build capacity near peak; service providers can build capacity near average.

Sade, in her hit song Smooth Operator, sings of “higher heights,” and how “minimum waste” is tied to “maximum joy.” Operators who smooth variability via demand aggregation can enable individual customers to achieve higher peaks, while minimizing wasted resources. Joy may be in the eye of the beholder, but minimizing waste surely helps maximize economic value.

Joe leads Communications, Media and Entertainment Industry Solutions for Hewlett-Packard. Contact Joe at http://www.joeweinman.com/contact.htm.


MW Research posted Small businesses…top FIVE reasons for considering cloud computing on 2/24/2011 (missed when posted):

“Cloud computing” seems to be infiltrating our society, yet many entrepreneurs remain confused and skeptical as to whether or not it is worth their time and energy. In an effort to showcase some of its general benefits, here is a list of the top 5 reasons you might want to give it a second thought…or investigate it further.

1. Circumvent costly disaster impact with remote, secure data storage

Have you ever experienced a computer crash? Have you ever experienced a nasty virus hit? Have you ever lost all of your information?

Aside from the sick feeling in the pit of your stomach, recovering that information is expensive and time-consuming – assuming that you can recover it, of course. If it isn’t accessible, well…that sick feeling will intensify. Many entrepreneurs operate with one PC, perhaps with a smart phone or tablet as well. But do you save your information to a separate hard drive? Consider losing accounting, finance, customer, and new product/service information – all in one fell swoop.

2. Peace of mind from automatic backup scheduling

Automatic backup is the key phrase here. Many people have a separate backup system – I’m thinking about those handy USB hard drives here – but how often do you save your data to them? Maybe once a week? When you remember to do so? This isn’t always a top-of-mind activity. Consider setting up an internet/cloud based system, so that your information will be backed-up regularly – without your having to remember to do so.

3. Predictable monthly billing options

Running a business – any business – has hidden costs associated with it, not to mention surprise costs, like recovering lost data. With a low monthly fee – something affordable – you can buy yourself peace of mind, plus the knowledge and comfort that your information is safe and protected from attack.

4. Fast and guaranteed data recovery capabilities, with minimal downtime.

Sooner or later your system will be hit by an attack – and I don’t mean to foster fear-mongering; this is just a basic fact of life in the online world. It might be an innocuous email, it might be a website we shouldn’t have visited, but our information and identities are valuable to someone – who will try to access them. When this happens, instead of waiting for a specialist to restore your system, be assured that you will be able to access your data – saved from the last automatic backup – quickly and securely.

5. And perhaps the number one reason: Easy access, easy set-up, and no training required!

Entrepreneurs and small business employees wear many hats. A basic fact is that IT security, data management, data storage, and data recovery aren’t typical hats. Storing information in the cloud means that you don’t have to know how the inner workings of data storage and security operate. Leave that to the professionals. Focus on your key business practices, with the knowledge that someone else is taking care of your data.



<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Hybrid/Private Clouds

My (@rogerjenn) Cloud Management/DevOps Sessions at Microsoft Management Summit 2011 post of 2/28/2011 begins:


The Microsoft Management Summit 2011 (@mmsteam, #mms2011) takes place on 3/21 through 3/25/2011 at the Mandalay Bay hotel, Las Vegas, NV. Following are links to and abstracts of 28 sessions in the event’s “Cloud Management” track as of 2/28/2011.

Sessions covering Windows Azure, SQL Azure and related topics are marked

The post continues with more information about the sessions, many of which involve private and hybrid cloud deployment.


Andrew Brust (@andrewbrust) touches on WAPA and SQL Azure when answering Can Microsoft Build Appliances? on 2/28/2011:

Billy Hollis, my Visual Studio Live! colleague and fellow Microsoft Regional Director, said recently, and I am paraphrasing, that the computing world, especially on the consumer side, has shifted from one of building hardware and software that makes things possible to do, to building products and technologies that make things easy to do.  Billy crystalized things perfectly, as he often does.

In this new world of “easy to do,” Apple has done very well and Microsoft has struggled.  In the old world, customers wanted a Swiss Army Knife, with the most gimmicks and gadgets possible.  In the new world, people want elegant cutlery.  They may want cake cutters and utility knives too, but they don’t want one device that works for all three tasks.  People don’t want tools, they want utensils.  People don’t want machines.  They want appliances.

Microsoft Appliances: They Do Exist

Microsoft has built a few appliance-like devices.  I would say Xbox 360 is an appliance.  It’s versatile, mind you, but it’s the kind of thing you plug in, turn on and use, as opposed to set up, tune, and open up to upgrade the internals.  Windows Phone 7 is an appliance too.  It’s a true smartphone, unlike Windows Mobile which was a handheld computer with a radio stack.  Zune is an appliance too, and a nice one.  It hasn’t attained much traction in the market, but that’s probably because the seminal consumer computing appliance – the iPod – got there so much more quickly.

In the embedded world, Mediaroom, Microsoft’s set-top product for the cable industry (used by AT&T U-Verse and others) is an appliance.  So is Microsoft’s Sync technology, used in Ford automobiles. 

Even on the enterprise side, Microsoft has an appliance: SQL Server Parallel Data Warehouse Edition (PDW) combines Microsoft software with select OEMs’ server, networking and storage hardware.  You buy the appliance units from the OEMs, plug them in, connect them and go.

I would even say that Bing is an appliance.  Not in the hardware sense, mind you.  But from the software perspective, it’s a single-purpose product that you visit or run, use and then move on.  You don’t have to install it (except the iOS and Android native apps where it’s pretty straightforward), you don’t have to customize it, you don’t have to program it.  Basically, you just use it.

Microsoft Appliances that Should Exist

But Microsoft builds a bunch of things that are not appliances.  Media Center is not an appliance, and it most certainly should be.  Instead, it’s an app that runs on Windows 7.  It runs full-screen and you can use this configuration to conceal the fact that Windows is under it, but eventually something will cause you to abandon that masquerade (like Patch Tuesday).

The next version of Windows Home Server won’t, in my opinion, be an appliance either.  Now that the Drive Extender technology is gone, and users can’t just add and remove drives into and from a single storage pool, the product is much more like an IT server and less like an appliance-premised one.  Much has been written about this decision by Microsoft.  I’ll just sum it up in one word: pity.

Microsoft doesn’t have anything remotely appliance-like in the tablet category, either.  Until it does, it likely won’t have much market share in that space either.  And of course, the bulk of Microsoft’s product catalog on the business side is geared to enterprise machines and not personal appliances.

Appliance DNA: They Gotta Have It.

The consumerization of IT is real, because businesspeople are consumers too.  They appreciate the fit and finish of appliances at home, and they increasingly feel entitled to have it at work too.  Secure and reliable push email in a smartphone is necessary, but it isn’t enough.  People want great apps and a pleasurable user experience too.  The full Microsoft Office product is needed at work, but a PC with a keyboard and mouse, or maybe a touch screen that uses a stylus (or requires really small fingers), to run Office isn’t enough either.  People want a flawless touch experience available for the times they want to read and take quick notes.  Until Microsoft realizes this fully and internalizes it, it will suffer defeats in the consumer market and even setbacks in the business market.  Think about how slow the Office upgrade cycle is…now imagine if the next version of Office had a first-class alternate touch UI and consider the possible acceleration in adoption rates.

Can Microsoft make the appliance switch?  Can the appliance mentality become pervasive at the company?  Can Microsoft hasten its release cycles dramatically and shed the “some assembly required” paradigm upon which many of its products are based?  Let’s face it, the chances that Microsoft won’t make this transition are significant.

But there are also encouraging signs, and they should not be ignored.  The appliances we have already discussed, especially Xbox, Zune and Windows Phone 7, are the most obvious in this regard.  The fact that SQL Server has an appliance SKU now is a more subtle but perhaps also more significant outcome, because that product sits so smack in the middle of Microsoft’s enterprise stack.  Bing is encouraging too, especially given its integrated travel, maps and augmented reality capabilities.  As Bing gains market share, Microsoft has tangible proof that it can transform and win, even when everyone outside the company, and many within it, would bet otherwise.

That Great Big Appliance in the Sky

Perhaps the most promising (and evolving) proof points toward the appliance mentality, though, are Microsoft’s cloud offerings -- Azure and BPOS/Office 365.  While the cloud does not represent a physical appliance (quite the opposite in fact) its ability to make acquisition, deployment and use of technology simple for the user is absolutely an embodiment of the appliance mentality and spirit.  Azure is primarily a platform as a service offering; it doesn’t just provide infrastructure.  SQL Azure does likewise for databases.  And Office 365 does likewise for SharePoint, Exchange and Lync. You don’t administer, tune and manage servers; instead, you create databases or site collections or mailboxes and start using them. Upgrades come automatically, and it seems like releases will come more frequently.  Fault tolerance and content distribution is just there.  No muss.  No fuss.  You use these services; you don’t have to set them up and think about them.  That’s how appliances work. 

To me, these signs point out that Microsoft has the full capability of transforming itself.  But there’s a lot of work ahead.  Microsoft may say they’re “all in” on the cloud, but the majority of the company is still oriented around its old products and models.  There needs to be a wholesale cultural transformation in Redmond.  It can happen, but product management, program management, the field and executive ranks must unify in the effort. So must partners, and even customers.  New leaders must rise up and Microsoft must be able to see itself as a winner.  If Microsoft does this, it could lock-in decades of new success, and be a standard business school case study for doing so.  If not, the company will have missed an opportunity, and may see its undoing.


Herman Mehling asserted “Setting up and configuring a hybrid cloud gets cheaper, easier, and faster -- thanks to an innovation from Skytap, a provider of self-service cloud automation solutions” in a preface to his The 10-minute Hybrid Cloud -- Coming Soon to a Workspace Near You post of 2/28/2011 to DevX.com:

Skytap claims it can help developers and other IT folks to create a hybrid cloud in about 10 minutes. [See post below.]

Exactly what is a hybrid cloud? The answer is, well, 'cloudy'.

A hybrid cloud can mean either two separate clouds joined together (public, private, internal or external), or a combination of virtualized cloud server instances used with real physical hardware.

Hybrid clouds usually require three things:

  • an onsite data center
  • a public cloud
  • an IT person dedicated to spending days connecting the two

Despite the benefits of hybrid clouds, setting up and configuring them has proven to be costly, complex, and time-consuming for most companies.

Skytap has plenty of competition in the hybrid cloud space, notably from EMC/VMware, the HP hydrid cloud portfolio, IBM, and Rackspace.

Rapidly Growing Market for Cloud Services

Gartner's Public Cloud Services Worldwide forecast report stated the market for cloud services was worth $58.6 billion in 2009 and will grow to be worth $148.8 billion by 2014.

A recent IDC white paper, sponsored by EMC: "Hybrid Cloud on the Rise: A Key Strategy to Business Growth in Asia Pacific" suggests that the hybrid approach to cloud computing will be the rule rather than the exception in 2011 as cloud computing gains a bigger foothold across the Asia Pacific region.

The white paper found that the shift to hybrid clouds is largely being accelerated by the desire of IT departments to find ways to build applications that meet the needs of smartphone and tablet users, while continuing to support core legacy and mission-critical applications.

"Current tools require additional investments in either VPN hardware and set-up, or significant configuration, coding and testing," said Sundar Raghavan, chief product and marketing officer at Skytap.

"People who want to use the cloud for development or testing and IT sandbox projects have limited time and resources to overcome these challenges," said Raghavan. "They need a simple, secure, self-service solution."

Raghavan added that Skytap's point-and-click UI gives customers everything they need to get started.

"Hybrid clouds promise the best of both worlds," said Cameron Haight, a vice president at researcher Gartner.

"These hybrid clouds enable enterprises to manage their cloud resources with the security of internal systems, but with the scale offered by a cloud service provider," Haight added.

However, Haight noted that the complexity of deploying a dynamic hybrid cloud environment presents a barrier to adoption.

"To ease the burden, IT should look for technologies that enable the rapid set-up and operational management of this emerging unified computing infrastructure," said Haight.

Switch Network Connections Dynamically

Skytap enables network connections to be switched dynamically, eliminating the need to stop a cloud machine to perform this task. This feature enables rapid test and development cycles and reduces a user's reliance on IT.

"Skytap continues to rapidly introduce new innovations that accelerate enterprise cloud adoption and extend our leadership in delivering the industry's most usable cloud solutions," said Raghavan. "Enterprise users can now secure their cloud environments, easily connect them to in-house systems, and reduce the time, cost, and IT support burden associated with hybrid clouds."

Rapidly Deploy Secure, Self-Service Hybrid Clouds

Skytap accelerates cloud integration with in-house systems by providing administrators a GUI to set up, test and configure IPSec connections from Skytap to their data centers.

Once IT authorizes and provides IPSec set-up parameters, Skytap administrators can set up, test and enable a hybrid cloud. Administrators can enhance security by restricting those connections to IP ranges permitted in Skytap. They can also explicitly include or exclude specific remote subnets.


James Staten (@staten7) asked You Must Go Further To Get Private Cloud Right . . . But How Much Further? in a 2/28/2011 to his Forrester Research blog:

Lately it's starting to seem like private clouds are a lot like beauty – in the eye of the beholder. Or more accurately, in the eye of the builder. Sadly, unlike art and beauty, the value that comes from your private cloud isn't as fluid, and the closer you get in your design to a public cloud, the greater the value. While it may be tempting to paint your VMware environment as a cloud or to automate a few tasks such as provisioning and then declare "cloud," organizations that fall short of achieving true cloud value may find their investments miss the mark. But how do you get your private cloud right?

For the most part, enterprises understand that virtualization and automation are key components of a private cloud, but at what point does a virtualized environment become a private cloud? What can a private cloud offer that a virtualized environment can’t? How do you sell this idea internally? And how do you deliver a true private cloud in 2011?

In London, this March, I am facilitating a meeting of the Forrester Leadership Board Infrastructure & Operations Council, where we will tackle these very questions. If you are considering building a private cloud, there are changes you will need to make in your organization to get it right and our I&O council meeting will give you the opportunity to discuss this with other I&O leaders facing the same challenge.

If you’re interested in being among the first to talk through this idea with me and other members of the I&O Council, please consider joining us on March 16 in London. To attend, you must be a senior-level infrastructure and/or operations executive in a $1B+ organization. Click here for more details on Forrester’s I&O Council. Are you working on a private cloud initiative in your organization, and if so, how are you defining it? Do you find that using the word “cloud” actually helps or hinders your ability to get support within IT for the project?


Kenneth van Sarksum announced Release: Skytap Hybrid Cloud Feb 2011 edition in a 2/28/2011 post:

Skytap has released an update to its Cloud Platform adding a feature called the Skytap Self-Service Hybrid Cloud.

With Self-Service Hybrid Cloud, which is a graphical user interface for setting up IPSec connections, Skytap claims that it can lower the time to create a hybrid cloud to as little as ten minutes, where it previously took two or three weeks, mainly because of the additional back-and-forth communication required between the provider and the customer.

Other new features are:

  • Creation of VPN aware Skytap templates and snapshots
  • Ability to switch network connection dynamically without the need to stop machines



<Return to section navigation list> 

Cloud Security and Governance

Wayne Jansen and Timothy Grance updated Guidelines on Security and Privacy in Public Cloud Computing in 1/2011. From the Abstract:

Cloud computing can and does mean different things to different people. The common characteristics most share are on-demand scalability of highly available and reliable pooled computing resources, secure access to metered services from nearly anywhere, and dislocation of data from inside to outside the organization. While aspects of these characteristics have been realized to a certain extent, cloud computing remains a work in progress.

This publication provides an overview of the security and privacy challenges pertinent to public cloud computing and points out considerations organizations should take when outsourcing data, applications, and infrastructure to a public cloud environment.

NIST’s new Cloud Computing Collaboration wiki site is here. Here’s a slide from the nascent Cloud Provider Strawman Model v0.2 of 2/21/2011 from the NIST Cloud Computing Reference Architecture and Taxonomy Working Group:

[Slide from the NIST Cloud Provider Strawman Model v0.2]


<Return to section navigation list> 

Cloud Computing Events

Richard L. Santalesa announced on 2/28/2011 Upcoming Webinar This Wed. - Legal Issues of Cloud Computing to be held 3/2/2011 at 9:30 AM PST:

Information Law Group attorneys Richard Santalesa and Boris Segalis are presenting the first in a series of monthly cloud computing-related webinars, beginning this Wednesday, March 2, 2011, at 12:30 pm ET.

Click here or http://tinyurl.com/6dhfppf to register for this free webinar.  Cloud computing’s very real benefits have taken the IT market by storm, combining quick implementation, low overhead and the ability to scale as needed. However, the potential legal risks raised by diving into cloud computing without a full and joint assessment by IT, legal and operations are very real.

In this introductory webinar on cloud computing, we will review the cloud landscape, cover basic definitions and provide an overview of legal and other risks that should be considered when moving to the cloud, regardless of the cloud infrastructure being planned for or already in use.


As always, we will be taking your questions for follow-up in our Q&A session as time allows. We look forward to seeing you!


Brjann announced Identity and Access [sessions] coming to MMS 2011 in a 2/26/2011 post to the Identity and Access Management blog:

Microsoft Management Summit in Las Vegas, March 21-25, will host breakout sessions and an ask-the-experts booth on Identity and Access for the first time. I’m very excited to be able to deliver four different sessions during the week that should give attendees a good overall view of the important role IDA plays in datacenter and cloud management, with BI05 focusing on System Center and private cloud and BI08 focusing a bit more on public cloud SaaS and PaaS offerings.

The center of our Identity and Access solutions is, of course, Active Directory, so all sessions will look at extending that platform with integration into the cloud using technologies such as AD Federation Services in BI07 and Forefront Identity Manager in BI06.

BI05 Identity & Access and System Center: Better Together
System Center and IDA are already integrated and leverage Active Directory, but couldn’t you do more? This session talks about roles and identities in System Center products and how we can leverage IDA solutions, with demos of how we can use a tool such as Forefront Identity Manager in conjunction with the roles and access policies in System Center.

BI06 Technical Overview of Forefront Identity Manager (FIM)
Join us for a lap around Forefront Identity Manager going through architecture, deployment and business scenarios in a presentation filled with demos. This session should give you a good understanding of what FIM is and what it can do to put you in control of identities across different directories and applications.

BI07 Technical Overview of Active Directory Federation Services 2.0
Join us for a lap around Active Directory Federation Services 2.0 covering architecture, deployment and business scenarios for using AD FS to extend single sign on from on-premises Active Directory to Office 365, Windows Azure and partners.

BI08 Identity & Access and Cloud: Better Together
Organizations have started using services such as Office 365 and Windows Azure but are worried about security and how to manage identities and provide access. This session goes through how Microsoft Identity and Access solutions such as Forefront Identity Manager, AD Federation Services, Windows Identity Foundation and Windows Azure AppFabric Access Control Services play together to enable organizations to access cloud applications and also to enable access for their customers and partners.

Hope to see you in Las Vegas in a few weeks.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Alex Popescu (@al3xandru) reported Cloudera’s Distribution for Apache Hadoop version 3 Beta 4 in a 2/28/2011 post to his MyNoSQL blog:

Cloudera’s Distribution for Apache Hadoop version 3 Beta 4:

New version of Cloudera’s Hadoop distro — complete release notes available here:

CDH3 Beta 4 also includes new versions of many components. Highlights include:

  • HBase 0.90.1, including much improved stability and operability.
  • Hive 0.7.0rc0, including the beginnings of authorization support, support for multiple databases, and many other new features.
  • Pig 0.8.0, including many new features like scalar types, custom partitioners, and improved UDF language support.
  • Flume 0.9.3, including support for Windows and improved monitoring capabilities.
  • Sqoop 1.2, including improvements to usability and Oracle integration.
  • Whirr 0.3, including support for starting HBase clusters on popular cloud platforms.

Plus many scalability improvements contributed by Yahoo!.

Cloudera’s CDH is the most popular Hadoop distro bringing together many components of the Hadoop ecosystem. Yahoo remains the main innovator behind Hadoop.


Chris Czarnecki asserted Google App Engine Channel API enables Push Data Application Development in a 2/28/2011 post:

The demand for ever more sophisticated user interfaces in Web applications means that the skill set required of the developers of these applications is not just server-side scripting in languages such as Python or development in Java, for example. Developers must also be expert in client-side development in JavaScript, writing code that interacts with the server-side code asynchronously.

This model of development is a pull-data approach, where the client code has to pull data from the server. In applications where the data on the server is only needed on the client if the data has changed, the client typically polls the server periodically, pulling the data and then checking to see if it has changed. Such an approach is not ideal for many reasons. Firstly, unnecessary work is being placed on both the client and the server, as requests will undoubtedly be made when there are no changes to the data. Secondly, changes in server-side data will not be detected by the client until the next pull of data, meaning that if the frequency of checking is not high, the user interface will display stale data and may even miss some data changes.

The above problem occurs because HTTP is a stateless protocol, with every request potentially on a new network connection. To solve it, those using the Google App Engine (GAE) can make use of a new library known as the Channel API. This API allows a persistent connection to be made between a browser-based application client and the server-side code running on GAE servers. The solution comprises server-side libraries in Java and Python as well as a JavaScript library for the client-side code. The Channel API allows the server-side code to 'push' data changes to the client-side code as and when needed.

The Channel API released by Google is an example of the innovation of the major Cloud vendors, providing Cloud based solutions to long existing problems in a clean lightweight manner. If you require a push data Web application, this API is definitely worthy of consideration. For more details, take a look here. If you would like to know more in general about the Google App Engine and how it compares to other Cloud Computing products such as Azure and Amazon AWS, why not consider attending the Learning Tree Cloud Computing course.


CloudTimes posted What To Consider Before Choosing An IaaS Provider on 2/27/2011:

Infrastructure as a service provides cost savings, scale, and flexibility to a business IT department. Companies with complex applications are perfect candidates for an infrastructure transition from on-premises to IaaS.

Many Cloud hosting service providers offer varying IaaS services. Replacing a business's on-premises IT infrastructure with a Cloud IaaS infrastructure requires in-depth analysis of cloud service providers and their offerings.

Here is a list of requirements to look for in an IaaS provider:

  • Can the cloud service provider add value to the IT department (cost savings, quick and easy application deployment, improved agility of computing power, data security, and storage capacity when needed)?
  • Do they have a clear understanding of the IT specific technical and operational requirements inherent within the company?
  • Can they prove that they possess the resources to replace on-premise infrastructure without compromising functionality?
  • Can they provide a targeted solution to the business IT unique needs?
  • How innovative is the service provider, and does it keep pace with constantly changing IT requirements?
  • Can the service provider prove that the technology is tested and proven reliable, with performance indicators, case studies, customer testimonials?
  • Does the cloud service possess the ability to scale with the business and expand with the business's global expansion?
  • Can the IaaS provider allow a tour of their data centers, and test or evaluate back-end systems?

Moving from an in-house to a cloud infrastructure requires modifications to IT operations tasks, management, support and control, which is why ease of implementation comes into play. It is therefore important for businesses to partner with a well-rounded cloud service provider that has the expertise to guide a smooth transition and provide continuing support beyond implementation.


<Return to section navigation list> 

Technorati Tags: Windows Azure, Windows Azure Platform, Azure Services Platform, Azure Storage Services, Azure Table Services, Azure Blob Services, Azure Drive Services, Azure Queue Services, SQL Azure Database, SADB, Open Data Protocol, OData, Windows Azure AppFabric, Azure AppFabric, Windows Server AppFabric, Server AppFabric, Cloud Computing, Visual Studio LightSwitch, LightSwitch, JDBC, Java, Hadoop, HBase, Hive, Pig, Flume, Sqoop, Whirr, NIST