Thursday, June 03, 2010

Windows Azure and Cloud Computing Posts for 6/2/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this daily series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in June 2010 for the January 4, 2010 commercial release. 

Azure Blob, Drive, Table and Queue Services

Steve Marx explains Computing the Total Size of Your Blobs in this 6/1/2010 post:

A question came up on the Windows Azure MSDN forum recently about how to find the total number of bytes used by the blobs in a particular container. There’s no API that retrieves that information at the container level, but you can compute it by enumerating the blobs, as in the following one-liner:

var totalBytes = (from CloudBlob blob in
                  container.ListBlobs(new BlobRequestOptions() { UseFlatBlobListing = true })
                  select blob.Properties.Length
                 ).Sum();

To go one step further, we can enumerate all the containers too. Here’s how to sum the sizes of all the blobs in a particular account (still a one-liner):

var totalBytes = (from container in blobClient.ListContainers()
                  select
                  (from CloudBlob blob in
                   container.ListBlobs(new BlobRequestOptions() { UseFlatBlobListing = true })
                   select blob.Properties.Length
                  ).Sum()
                 ).Sum();
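
For readers trying these one-liners outside the context of the forum thread, here is a minimal setup sketch using the StorageClient library that shipped with the 1.x SDK; the account credentials and container name are placeholders, not values from Steve’s post:

using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Placeholder credentials -- substitute your own account name and key,
// or use CloudStorageAccount.DevelopmentStorageAccount for local testing.
var account = CloudStorageAccount.Parse(
    "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey");
var blobClient = account.CreateCloudBlobClient();

// "container", as used in the first one-liner above:
var container = blobClient.GetContainerReference("mycontainer");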

Note that this does not reflect the number of bytes you’re billed for. Things like empty pages in page blobs, uncommitted blocks in block blobs, snapshots, metadata, etc. all affect the total storage used in your account. The code snippets above simply sum the “sizes” (if you were to download them, for example) of all the blobs.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Wayne Walter Berry explains Finding Your Clustered Indexes in this 6/2/2010 post to the SQL Azure Team blog:

Before migrating your SQL Server schema to SQL Azure, you need to create a clustered index on every table. You can read about why in this blog post. In this post, we will show how to find all the tables that don’t have a clustered index.

One of the techniques for migration is to update your SQL Server database so that its schema is compatible with SQL Azure, then transfer that same schema to SQL Azure using the Generate Scripts Wizard as we wrote about in this blog post. This is different from migrating between two different schemas, one on SQL Server and one on SQL Azure.

If you are using the first technique, you can use this script on your SQL Server to find all the tables without clustered indexes:

SELECT DISTINCT [TABLE] = OBJECT_NAME(OBJECT_ID)
FROM SYS.INDEXES
WHERE INDEX_ID = 0 AND OBJECTPROPERTY(OBJECT_ID,'IsUserTable') = 1
ORDER BY [TABLE]

Once you have the list, you can target those tables that need a clustered index. Depending on the data type and the type of data stored, one way to quickly get a clustered index is to convert a nonclustered index into a clustered index. You can do this with SQL Server Management Studio. Here is how:

  1. Expand the table, then the table’s indexes.
  2. Choose a nonclustered index that you want to convert to a clustered index.
  3. Right click on that index and choose Script Index as, then DROP AND CREATE To, then choose New Query Editor Window.
  4. SQL Server will generate a Transact-SQL script that will DROP and recreate your index.
  5. Find the NONCLUSTERED reference in the CREATE INDEX statement and convert it to CLUSTERED.
  6. Execute the script.

Example of the script created by SQL Server Management Studio:

/****** Object:  Index [PK_GroupFeature]  
  Script Date: 06/01/2010 17:00:47 ******/
IF  EXISTS (SELECT * FROM sys.indexes 
    WHERE object_id = OBJECT_ID(N'[dbo].[GroupFeature]') 
    AND name = N'PK_GroupFeature')
ALTER TABLE [dbo].[GroupFeature]
 DROP CONSTRAINT [PK_GroupFeature]
GO

USE [Kulshan]
GO

/****** Object:  Index [PK_GroupFeature] 
   Script Date: 06/01/2010 17:00:48 ******/
ALTER TABLE [dbo].[GroupFeature] ADD
  CONSTRAINT [PK_GroupFeature] 
  PRIMARY KEY CLUSTERED 
(
    [GroupFeature_Id] ASC
)WITH (PAD_INDEX  = OFF, 
    STATISTICS_NORECOMPUTE  = OFF,
    SORT_IN_TEMPDB = OFF,
    IGNORE_DUP_KEY = OFF, ONLINE = OFF, 
    ALLOW_ROW_LOCKS  = ON,
    ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
GO

Once you have created clustered indexes for every table that you want to migrate from SQL Server to SQL Azure, you are one step closer to deploying to SQL Azure. Do you have questions, concerns, comments? Post them below and we will try to address them.

Wayne Walter Berry claims Unicode Works Better With SSIS in this 6/2/2010 post to the SQL Azure Team blog:

Now that you have decided to move your SQL Server data to SQL Azure, it is a good time to change your varchar(max) and text data types to nvarchar(max). Changing your data types to Unicode will make your migration from SQL Server to SQL Azure easier when using SQL Server Integration Services (SSIS).

When migrating from SQL Server to SQL Azure using SSIS you need to use two different data sources: the SQL Server Native Client 10.0 data source to connect to SQL Server and the .NET Framework Data Provider for SqlServer data source to connect to SQL Azure. When using these data sources, SSIS transforms your non-Unicode data types (varchar and text) to Unicode and back again, which works fine if the length of the data in the column is shorter than 8000 characters. However, if the text is longer than 8000 characters you will get an error like this:

Error 0xc0204016: SSIS.Pipeline: The "output column "State_Text" (168)" has a length that is not valid. The length must be between 0 and 8000. (SQL Server Import and Export Wizard)

If you modify your source and destination columns that are varchar(max) and text to nvarchar(max), SSIS will not have to transform your strings between the two providers and the columns will move successfully.

Example Code

Here is a single table in non-Unicode:

CREATE TABLE [State] (State_Id int, State_Text varchar(max))

Here is some example Transact-SQL that will change the column type.

ALTER TABLE [State] ADD [Temp] nvarchar(max) NULL
GO
UPDATE [State]
SET [State].[Temp] = [State_Text]
ALTER TABLE [State] DROP COLUMN [State_Text]
EXECUTE sp_RENAME '[State].[Temp]' , 'State_Text', 'COLUMN'

In this example, I have a table called State and a column called State_Text that is data type text.

Summary

In case you didn’t know, you should make all your columns Unicode for better localization support, and this post presented another reason to make the change. Do it before you run the SQL Server Import and Export Wizard and the process of moving your SQL Server data into the cloud will be much easier.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus, CDN

The “Geneva” Team provides provisioning proxy server help to IT admins in its AD FS 2.0 Proxy Management post of 6/2/2010:

Since the AD FS 2.0 release candidate (RC), the AD FS product team has received feedback that the experience of setting up an AD FS proxy server and making it work with the AD FS Federation Service is cumbersome, as it involves multiple steps across both the AD FS proxy and AD FS Federation Service machines.

In AD FS 2.0 RC, after the IT admin installs the AD FS 2.0 proxy server on the proxy machine, she runs the proxy configuration wizard (PCW) and needs to:

  • Select or generate a certificate as the identity of the AD FS 2.0 proxy server.
  • Add the certificate to the AD FS Federation Service trusted proxy certificates list.
  • Outside of the AD FS management console, make sure the certificate’s CA is trusted by the AD FS Federation Service machines.

These steps are needed to set up a level of trust between the AD FS proxy server and the AD FS Federation Service. The AD FS proxy server might live in the DMZ and provide a layer of insulation from outside attacks.

The AD FS administrator needs to keep track of the proxy identity certificate lifetime and proactively renew it to make sure it does not expire and disrupt the service.

There are several pain points around the AD FS proxy setup and maintenance experience in the AD FS 2.0 RC version:

  • Setting up proxy involves touching multiple machines (both proxy and Federation Service machines)
  • Maintaining AD FS proxy working state involves manual attention and steps

In RTW, the above issues are addressed by:

  • Easy provisioning: The AD FS admin sets up the proxy with the AD FS Federation Service by specifying the username/password of an account that is authorized by the AD FS Federation Service to issue a proxy trust token to identify AD FS proxy servers. The proxy trust token is a form of identity issued by the AD FS Federation Service to the AD FS proxy server to identify established trust. By default, domain accounts that are part of the Administrators group on the AD FS Federation Service machines, or the AD FS Federation Service domain service account, are granted the privilege to provision trust by proxy from the AD FS Federation Service. This privilege is expressed via an access control policy and is configurable via PowerShell. By default the proxy trust token is valid for 15 days.
  • Maintenance free: Over time, the AD FS proxy server periodically renews the proxy trust token from the AD FS Federation Service to keep itself in a working state. By default the AD FS proxy server tries to renew the proxy trust token every 4 hours.
  • Revocation support: If, for whatever reason, established proxy trust needs to be revoked, the AD FS Federation Service has both PowerShell and UI support to do that. All proxies are revoked at the same time. There is no support for individual proxy server revocation.
  • Repair support: When proxy trust expires or is revoked, the AD FS administrator can repair the trust between the AD FS proxy server and the AD FS Federation Service by running the PCW in UI mode or command-line mode (fspconfigwizard.exe).

Management support

Several management aspects are involved in the new trust mechanism.  Events are added to the proxy server for:

  • AD FS proxy is set up correctly with AD FS Federation Service
  • AD FS proxy server has renewed trust with AD FS Federation Service
  • AD FS proxy failed to talk to Federation Service due to expired or invalid trust

Events are added to Federation Service server for:

  • AD FS proxy trust is established from a proxy machine
  • AD FS proxy trust is renewed from a proxy machine

A generic authorization event will be logged when:

  • Some party tries to establish or renew proxy trust using invalid credentials.

Proxy trust token issuance is audited just as any other issued token when AD FS audit is turned on. There are several knobs to turn to configure various proxy trust parameters:

  • AD FS proxy trust token lifetime
  • AD FS proxy trust renew frequency …

The post continues with illustrated details for provisioning workflow. Very helpful!

Ron Jacobs announced the availability of an endpoint.tv - Windows Server AppFabric - Server Farm Setup episode on 6/2/2010:

You asked for it...here it is. In this episode, Byron Tardif is back to explain how you can set up Windows Server AppFabric in a server farm environment.

For more information, see the Windows Server AppFabric Developer Center on MSDN

In case you missed the news in my Windows Azure and Cloud Computing Posts for 5/28/2010+ post, here it is again:

The Windows Azure Team’s Announcing Pricing for the Windows Azure CDN post of 5/28/2010 begins:

Last November, we announced a community technology preview (CTP) of the Windows Azure Content Delivery Network (CDN). The Windows Azure CDN enhances end user performance and reliability by placing copies of data at various points in a network, so that they are distributed closer to the user. The Windows Azure CDN today delivers many Microsoft products – such as Windows Update, Zune videos, and Bing Maps - which customers know and use every day. By adding the CDN to Windows Azure capabilities, we’ve made this large-scale network available to all our Windows Azure customers.

To date, this service has been available at no charge. Today, we’re announcing pricing for the Windows Azure CDN for all billing periods that begin after June 30, 2010. The following three billing meters and rates will apply for the CDN:

  • $0.15 per GB for data transfers from European and North American locations
  • $0.20 per GB for data transfers from other locations
  • $0.01 per 10,000 transactions

With 19 locations globally (United States, Europe, Asia, Australia and South America), the Windows Azure CDN offers developers a global solution for delivering high-bandwidth content. The Windows Azure CDN caches your Windows Azure blobs at strategically placed locations to provide maximum bandwidth for delivering your content to users.  You can enable CDN delivery for any storage account via the Windows Azure Developer Portal.

Windows Azure CDN charges will not include fees associated with transferring this data from Windows Azure Storage to CDN. Any data transfers and storage transactions incurred to get data from Windows Azure Storage to the CDN will be charged separately at our normal Windows Azure Storage rates. CDN charges are incurred for data requests it receives and for the data it transfers out to satisfy these requests.
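
To make the meters concrete, here is a rough back-of-the-envelope sketch of the CDN portion of a bill; the traffic volumes below are invented purely for illustration:

// Hypothetical month: 500 GB served from North American edge locations,
// 50 GB served from other locations, and 2,000,000 CDN transactions.
double monthlyCdnCost = 500 * 0.15                     // NA/Europe data transfer
                      + 50 * 0.20                      // other locations
                      + (2000000 / 10000.0) * 0.01;    // per-10,000-transaction meter
// monthlyCdnCost = 75 + 10 + 2 = 87 (USD), excluding the separate
// Windows Azure Storage charges described above.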

All usage for billing periods beginning prior to July 1, 2010 will not be charged. To help you determine which pricing plan best suits your needs, please review the comparison table, which includes this information.

To learn more about the Windows Azure CDN and how to get started, please be sure to read our previous blog post or visit the FAQ section on WindowsAzure.com.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

My Azure Uptime Report: OakLeaf Table Test Harness for May 2010 (99.65%) post of 6/2/2010 shows that single instances of Windows Azure projects, which don’t qualify for Microsoft’s Service Level Agreement (SLA) guarantee of 99.9% availability, come close but don’t achieve three-nines.

Marius Oiaga offers Microsoft’s How to Run Java Applications in Windows Azure video tutorial in this 6/2/2010 Softpedia post:

When building Windows Azure, one of the aspects of the operating system emphasized time and time again by the company was interoperability. And, indeed, early adopters as well as customers of the Cloud platform from Microsoft can testify to the fact that Windows Azure plays nice with a range of non-Microsoft technologies. The Redmond giant wanted to make sure that developers leveraging PHP and Java would feel right at home on the new platform. A new video tutorial made available on MSDN is designed to show devs how to run a Java application on Windows Azure.

After all, I bet there are people in Java-centric shops that need a bit of convincing.
“For those of you interested in Windows Azure, an open platform on which applications written in .NET, PHP, or Java can run, the Windows Azure team has just posted a new video to MSDN on how to create and run a Java application in Windows Azure,” Peter Galli, the Open Source Community manager for Microsoft's Platform Strategy Group, revealed.

Microsoft finalized Windows Azure in November 2009, but only launched the new platform for customers in early 2010. As of February 1st, 2010, Windows Azure was available in no less than 21 markets around the world, and that number grew to 41 countries and regions in the first half of April. Come July 1st, the Redmond company will start charging for the usage of the Windows Azure Content Delivery Network, according to these pricing details.

“Windows Azure is an open platform. This means you can run applications written in .NET, PHP, or Java. In this video Scott Golightly will show how to create and run an application written in Java in Windows Azure. We will create a simple Java application that runs under Apache Tomcat and then show how that can be packaged up and deployed to the Windows Azure development fabric,” Microsoft stated.

Kevin Griffin posted his Beginner Guide to Preparing for Azure on 6/1/2010:

We have a project that we are considering moving over to Azure.  However, the move to the cloud can be a bit daunting if you’re not fully expecting certain things.  Cost is really one of the big factors driving our decisions.  I sat down and chatted with my friend David Makogon, and discussed some of the best Azure scenarios for our deployment.  A lot of good information came out of that conversation, and I feel it might be a lot of good information to share if you have a similar situation.

Our scenario: Our application is built on ASP.NET WebForms, and is running on shared hosting.  It’s connected to a shared SQL Server instance through the same host.  The host (who shall remain nameless) is a piece of crap.  Random downtime, poor support, and an overall bad experience.  We were looking at two different options: a dedicated server or the cloud.

Kevin continues with descriptions of Roles, Bandwidth, and Data, and concludes:

Additional Thoughts: ASP.NET Gotchas

I’ve written several ASP.NET applications, and I’m a fan of session state.  I’m sure many other ASP.NET developers out there will agree with me.  However, Azure does not have anything in place for managing session state.  Azure is really a “stateless” system.  There are 3rd party examples out there of how to do this, but right now there is nothing official.

There is a built in membership and roles provider, so don’t worry about that!

Note About Worker Roles

At first, I was considering the thought of not having to run a worker role.  I’m developing a web application, why would I need one?  However, there are a few good reasons why you might want to think about baking in the cost of a worker role.

First, Azure provides a lot of data on the roles and how they’re performing.  Worker roles can collect and analyze this information on the fly.  Ever have a web application crash?  Worker roles can collect crash information, and send a nice email or log it for future reference.  All this is done independent of the web role.

Second, worker roles can help you programmatically scale other roles.  Imagine a site that normally sees less than 1000 hits a day.  Somebody posts a link to the site on Digg or Reddit, and the internet goes crazy trying to access this website.  Normal shared hosting or even dedicated hosting would buckle under the weight of all the requests.  In Azure, your worker role can watch the site and start new instances as they are required.  This removes the strain from a single process, and balances it between several.  After the internet goes to sleep, the worker role can kill unneeded web roles.  You do have to pay for the roles that it starts up, but hopefully it wouldn’t be for that long.
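
As a rough illustration of the watcher pattern Kevin describes, here is a hypothetical sketch of such a worker role; the LoadMonitor and ScalingClient types are placeholders rather than SDK classes, and in mid-2010 the actual instance-count change would be made by submitting an updated service configuration through the Windows Azure Service Management API:

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class AutoScaleWorker : RoleEntryPoint
{
    public override void Run()
    {
        while (true)
        {
            // Read a load signal, e.g. requests/sec aggregated from the
            // Windows Azure Diagnostics performance-counter table.
            double requestsPerSecond = LoadMonitor.GetCurrentLoad();

            // Trivial policy, for illustration only: scale out above a threshold.
            int targetInstances = requestsPerSecond > 100 ? 3 : 1;
            ScalingClient.SetInstanceCount("WebRole1", targetInstances);

            Thread.Sleep(TimeSpan.FromMinutes(5));
        }
    }
}

// Placeholder stubs -- a real implementation would query the
// WADPerformanceCountersTable and call the Service Management API.
static class LoadMonitor
{
    public static double GetCurrentLoad() { return 0.0; }
}

static class ScalingClient
{
    public static void SetInstanceCount(string roleName, int count) { /* management API call */ }
}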

Azure is a great solution, and I’m looking forward to using it on future projects.  The start-up costs of Azure hosting can be scary, but you need to think about it long term.  Imagine the costs of having to buy and house several servers for a farm.  Imagine having to pay for a dedicated pipe for all the servers to connect to the internet through.  Imagine the costs of system administrators.  Then imagine how much less running multiple roles on Azure would cost.  The cost of Azure will grow as your site grows.  But the goal is that as the site grows, so will your pockets.

I’m really hoping that I didn’t provide any misleading information in this post.  If you were considering Azure at all, either for a new project or an existing one, hopefully you now have more information than you did before.

The Windows Azure Platform Team suggests you check out new "How Do I?" Videos for Windows Azure Platform and “[w]atch webcasts and how-to videos covering Windows Azure, AppFabric [still called “.NET Services” here], and SQL Azure.”

Maarten Balliauw describes Running on Windows Azure - ChronoRace – Autoscaling in his 6/1/2010 post about RealDolmen’s race-timing application for the Bruxelles 20 km race of 5/31/2010:

At RealDolmen, we had the luck of doing the first (known) project on Windows Azure in Belgium. Together with Microsoft, we had the opportunity to make the ChronoRace website robust enough to withstand large sports events like the 20km through Brussels.

ChronoRace is a Belgian company based in Malmédy, specialised in electronic timing for large sports events (around 340 per year) throughout Europe in different disciplines like jogging, cycling, sailing, … Each participant is registered through the website, can consult their results and can view a high-definition video of their arrival at the finish line. Of course, these results and videos are also used to brag to co-workers and family members, resulting in about 10 result and video views per participant. Imagine 20.000 or more participants on large events… No wonder their current 1 webserver – 1 database server setup could not handle all the load.

Extra investments in hardware, WAN connection upgrades and video streaming were considered, however found too expensive to have these running for 365 days a year while on average this extra capacity would only be needed for 14 days a year. Ideal for cloud computing! Especially with an expected 30.000 participants for the 20km through Brussels... (which would mean 3 TB of data transfers for the videos on 1 day!!!)

Microsoft selected RealDolmen as a partner for this project, mainly because of the knowledge we built over the past year, since the first Azure CTP. Together with ChronoRace, we gradually moved the existing SQL Server database to SQL Azure. After that, we started moving the video streaming to blob storage, implemented extra caching and automatic scaling.

You probably have seen the following slides in various presentations on cloud computing:

Capacity cloud computing

All marketing gibberish, right? Nope! We actually managed to get very close to this model using our custom autoscaling mechanism. Here are some figures we collected during the peak of the 20km through Brussels:

Windows Azure Auto Scaling

More information on the technical challenges we encountered can be found in my slide deck I presented at KAHOSL last week:

Running in the Cloud - First Belgian Azure project


If you want more information on scalability and automatic scaling, feel free to come to the Belgian Community Day 2010 where I will be presenting a session on this topic.

Oh and for the record: I’m not planning on writing marketing posts. I just was so impressed by our actual autoscaling graph that I had to blog this :-)

Microsoft Pinpoint added CloudBlaze to its list of Online Azure Applications on 6/1/2010:

Application Description

CloudBlaze is a Solution Accelerator that eliminates the non-trivial burden on the development team to create the underlying “plumbing code”.  It allows the team to focus on building the core software required to meet the business needs of your customer. The CloudBlaze framework enables an ISV's transition from an existing business application to a SaaS model on the Microsoft Windows Azure Platform, or to build a SaaS-enabled application from scratch.

CloudBlaze provides implicit functionality (i.e. multi-tenancy, provisioning, metering, billing and logging etc.) that requires no effort on the ISV's part and is available out of the box. The amount of time it takes to port an application is highly dependent on the size of the application, the architecture of the application, the ISV's comfort level with Azure and the extent of the explicit interactions. Given that you would have to port your application to Azure anyway, CloudBlaze provides the fastest path to building cloud applications on the Microsoft Windows Azure Platform, since the requirements from CloudBlaze upon the application developer are fairly minute.

Features:

  • Provisioning and Multi-tenancy
  • Metering and Billing
  • Client, User and Role Management and Licensing
  • Monitoring, Notifications, and Custom Workflow
  • Security
  • Exception Handling
  • Logging
  • Tenant and System Administration UI

Benefits to ISVs:

  • Your existing application in the Azure Cloud up to 70% faster
  • Minimize or reallocate development resources
  • Significantly less development risk with a tested framework
  • Focus on the core product, not the underlying “plumbing”
  • Configuration vs. customization that allows you to change business rules quickly
  • Transparent and fast framework integration through APIs and SDKs

HimaBindu Vejella claims “The Open Source PHP Azure SDK provides interoperability for PHP application developers to work with the Windows cloud services platform” in his PHP Interoperability for Windows AZURE post of 6/1/2010 to the PCQuest blog:

Microsoft's Windows Azure is a cloud platform that increases choice for developers and allows them to use PHP, one of the most popular programming languages on the web. The main purpose of Windows Azure is to provide an infrastructure that is robust, reliable, and scalable for hosting Web applications. Development tools like Visual Studio or Eclipse can be used to build applications that run on Windows Azure. The Windows Azure platform supports Internet protocols like HTTP, XML, SOAP and REST. PHP runs on Azure using FastCGI, and therefore Azure services can be called from other platforms and programming languages like PHP. It is a cloud environment that helps PHP applications to share the stored information with .NET applications and vice-versa.

PHP Tool Kit for Windows Azure
The Open Source PHP Azure SDK (PHP Azure) provides interoperability for PHP application developers to work with the Windows cloud services platform. PHP Azure provides a set of PHP classes for HTTP transport, AuthN/AuthZ, REST and Error Management. Windows Azure Tools for Eclipse for PHP Developers enhances cloud interoperability. It enables PHP developers to create Web applications targeting Windows Azure. It includes a Windows Azure storage explorer that allows developers to browse data contained in the Windows Azure tables, blobs or queues. PHP Tool Kit for Windows Azure is an open source application that facilitates PHP developers to use ADO.NET Data Services included in the .NET Framework.

The Windows Azure PHP tool kit, developed in PHP, requires the PHP extensions for XML, XSL and CURL to be installed. This can be installed on Windows or Linux operating systems. The way it works is that a PHP proxy file that contains class definitions is generated based on the metadata interpreted by ADO.NET Data Services. PHP client applications can use those classes to access the data service. The PHP tool kit is independent of the .NET framework and communicates with data services using HTTP and XML or JSON. The Tool Kit offers a basic authentication mechanism and also has classes that support authentications for developers. …

Toddy Mladenov explains Collecting Event Logs in Windows Azure in this 6/1/2010 post:

While playing with Windows Azure Diagnostics for my last post Windows Azure Diagnostics – Where Are My Logs? I noticed a few things related to collecting Event Logs.

Configuring Windows Event Logs Data Source

First let’s configure the Event Logs data source. Here are the four simple lines that do that:

DiagnosticMonitorConfiguration dmc = DiagnosticMonitor.GetDefaultInitialConfiguration();
dmc.WindowsEventLog.DataSources.Add("Application!*");
dmc.WindowsEventLog.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
DiagnosticMonitor.Start("DiagnosticsConnectionString", dmc);

I would highly recommend you define the “Application!*” string globally, and make sure it is not misspelled. I spent quite some time wondering why my Event Logs were not showing up in my storage table, and the reason was my bad spelling skills. BIG THANKS to Steve Marx for opening my eyes!

If you want to see other event logs like System and Security, you should add those as data sources before you call the DiagnosticMonitor.Start() method, as in the sketch below.
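
For example, additional sources are registered the same way (a sketch; the source-name strings follow the same pattern as “Application!*”):

// Add additional event logs before calling Start:
dmc.WindowsEventLog.DataSources.Add("System!*");
dmc.WindowsEventLog.DataSources.Add("Security!*"); // Development Fabric only -- see the update at the end of this post
DiagnosticMonitor.Start("DiagnosticsConnectionString", dmc);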

Filtering Events

The first thing I noticed is that the configuration above will transfer everything from the Application Event Log to WADWindowsEventLogsTable at one minute intervals. With “everything” I literally mean everything – it doesn’t matter whether the event is generated by your application or something else running in parallel. There are two reasons why you don’t want to dump everything into your storage table: 1. it is too much noise, and 2. (but more important) the more data you dump, the higher the bill you will get (you will be charged for transactions and for data stored).

My suggestion is to always capture filtered events, and transfer only those to the storage table. Steve has a very short but useful post explaining how to capture filtered Windows Events with Windows Azure Diagnostics using XPath expressions. In my case I filtered to only events generated by my Web Role.
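
Here is a sketch of what that filtered registration looks like; the provider name comes from the XPath expression Toddy shares in the update at the end of this post, and you would substitute your own event source:

// Transfer only the events written by a specific provider (XPath filter):
dmc.WindowsEventLog.DataSources.Add(
    "Application!*[System[Provider[@Name='SmileyButton']]]");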

How are Windows Event Logs Transferred to Storage?

Once I had events transferred to WADWindowsEventLogsTable in my storage account, I used Cerebrata’s Azure Diagnostics Manager to browse through the data. I was running my tests on Development Fabric and storing the data in Development Storage, and surprisingly for me I saw the events duplicated in WADWindowsEventLogsTable. My first thought was: “This is a bug!”, and then the second: “Will I be double-charged?” Nothing to worry about! (And again thanks to Steve for opening my eyes).

The explanation is simple. Because every Role Instance runs in a separate VM, it has its own Windows Azure Monitoring Agent running. The Monitoring Agent for a particular Role Instance reads the events from the Event Log for that particular VM, and transfers them to WADWindowsEventLogsTable. Thus you are able to see the events for every Role Instance.

If you look at the WADWindowsEventLogsTable schema you will see that there is a column RoleInstance that identifies the Role Instance from which this event came. What happens in Development Fabric is that there is only one Event Log to read from, and because I had two instances running I was seeing the events duplicated.

Guidelines for Capturing Windows Events Using Windows Azure Diagnostics

As a conclusion here are some guidelines for capturing Windows Event Logs using Windows Azure Diagnostics.

  • As you can read from my previous post Windows Azure Diagnostics – Where Are My Logs? by default no logs are transferred to Windows Azure Storage tables. You need to explicitly set DiagnosticMonitorConfiguration.WindowsEventLog.DataSources if you want to receive events into WADWindowsEventLogsTable.
  • Always capture filtered Windows Events using XPath expressions in WindowsEventLog.DataSources in order to avoid noise and unnecessary charges
  • Keep in mind that Event Logs are collected in a distributed environment, and you will need to mind the data you receive in your table. Using tools like Cerebrata’s Azure Diagnostics Manager can help here.

Using Windows Azure Diagnostics is a good way to debug your application in the cloud; however, you need to be careful not to grow your bill unnecessarily.

Update: Two updates based on feedback that I received in the last few days:

  • Although I mention above that you can configure Security as event source you should be aware that this works only in Windows Azure Development Fabric. You will not be able to collect Security events in Windows Azure cloud environment because your role has no admin privileges.
  • I received the suggestion (guess from whom) to post the Xpath expression I use to filter the events to only my role. Here it is:
    Application!*[System[Provider[@Name='SmileyButton']]]

Jim Nakashima explains how to run WCF on Windows Azure in this detailed 5/27/2010 tutorial:

One thing we’ve had for a while but I wanted to bring some attention to is our “wcfazure” code gallery page that has a useful set of known issues and a number of samples on using WCF on Windows Azure.

It’s been updated recently to be more clear and to use all of the latest tools and such so even if you knew about it before, I recommend taking another look.

For the most part, WCF on Windows Azure just works.  That said, there are a couple of issues to be aware of that are covered on the wcfazure code gallery page.  By and large, the most common issue I get asked about around WCF and Windows Azure is Generating Proxies, so let me walk through this scenario.

First, I’m using Visual Studio 2010, but I’ll be using .NET 3.5 instead of .NET 4 because at this point in time (end of May 2010) we only support .NET 3.5 on Windows Azure.  That will be changing with our next release. 

The other reason to use .NET 3.5 is that the scenario in .NET 4 pretty much works out of the box.  I’ll talk more about that after I explain how this works on .NET 3.5.

Create a new .NET 3.5 Cloud Service by clicking on File | New | Project… and select the Windows Azure Cloud Service template under the Cloud category in Visual C# or Visual Basic. Click to continue.


In the Feb 2010 release of the Windows Azure Tools 1.1, even if you select .NET Framework 4, you’ll still get .NET 3.5 roles but that won’t be the case when .NET 4 support comes online.

Add a WCF Service WebRole to the solution and click OK.


Hit F5.

Two things to notice.  First, the browser will open to http://127.0.0.1:81/Service1.svc and you’ll see that even though the service is running on port 81, the page says to click on a link that points to port 5100, which will fail if you click it.


Jim continues with the patch to solve the preceding failure, as well as the failure that occurs “[i]f you navigate to the Common7\IDE directory inside of the Visual Studio install directory and run WcfTestClient.exe, right click on ‘My Service Projects’ and select ‘Add Service…’.”

<Return to section navigation list> 

Windows Azure Infrastructure

Adron explains Scalability vs. Elasticity in this 6/2/2010 post:

In recent conversations, two words have bubbled up that for some have one and the same meaning and for others have their respective true meanings.  Scalability and elasticity are two concepts that often are primary selling points of using cloud environments.

Scalability as Wikipedia defines:  “In telecommunications and software engineering, scalability is a desirable property of a system, a network, or a process, which indicates its ability to either handle growing amounts of work in a graceful manner or to be readily enlarged.”

Elasticity is a different beast to define entirely.  On Wikipedia you get a whole host of options, economic and otherwise.  I dug around on the Internet for a more accurate description that would lay out specifically what Cloud Elasticity is.  With a short hop over to Timothy Fritz's blog I found what I was looking for in his entry Cloud Elasticity.

Tim writes,

"Elasticity is a critical measurement: the time it takes to start a node up, and your minimum time commitment per node. Short lived but massively parallel tasks that were once impossible thrive in a highly elastic world. Big prediction: Clouds are going to get more elastic indefinitely; they?ll trend with Moore?s law."

and I agree.  Cloud elasticity, as described here is the specific rolling on and rolling off of nodes (or roles or other processing power) to handle huge tasks.

To summarize, scalability is the ideal of a system, network, or process which can handle a growing amount of work in a graceful manner, whereas elasticity is the actual action of a cloud handling the growing (and decreasing) amounts of system activity.

This leads to the grand discussion of, how does a cloud (AWS, Azure, or otherwise) scale up or scale down, how does the pricing work, and what is required to maintain said elasticity.  That however, is a discussion for another day.

Lori MacVittie warns Cloud Lets You Throw More Hardware at the Problem Faster in this 6/2/2010 post to F5’s DevCentral blog:

Hidden deep within an article on scalability was a fascinating insight. Once you read it, it makes sense, but because cloud computing forces our attention to the logical (compute resources) rather than the physical (hardware) it’s easy to overlook.

“Cloud computing is actually making this problem a little bit worse,” states Leach [CTO of domain registrar Name.com], “because it is so easy just to throw hardware at the problem. But at the end of the day, you’ve still got to figure, ‘I shouldn’t have to have all this hardware when my site doesn’t get that much traffic.’”

The “problem” is scalability and/or performance. The solution is (but shouldn’t necessarily be) “more hardware.”

Now certainly you aren’t actually throwing more “hardware” at the problem, but when you consider what “more hardware” is intended to provide you’ll probably quickly come to the conclusion that whether you’re provisioning hardware or virtual images, you’re doing the same thing. The way in which we’ve traditionally approached scale and performance problems is to throw more hardware at the problem in order to increase the compute resources available. The more compute resources available the better the performance and the higher the capacity of the system. Both vertical and horizontal scalability strategies employ this technique; regardless of the scalability model you’re using (up or out) both increase the amount of compute resources available to the application.

As Leach points out, cloud computing exacerbates this problem by making it even easier to simply increase the amount of compute resources available by provisioning more or larger instances. Essentially the only thing that’s changed here is the time it takes to provision more resources: from weeks to minutes. The result is the same: more compute resources.

ISN’T THAT the BEST WAY TO ADDRESS SCALABILITY and PERFORMANCE?

Not necessarily. The reason the ease of provisioning can actually backfire on IT is that it doesn’t force us to change the way in which we look at the problem. We still treat capacity and performance issues as wholly dependent on the amount of processing power available to the application. We have blinders on that don’t allow us to examine the periphery of the data center and the rest of the ecosystem in which that application is deployed and may in fact be having an impact on capacity and performance.

Other factors that may be impacting performance and scalability:

  • Network performance and bandwidth
  • Performance and scalability of integrated systems (data stores, identity management, Web 2.0 integrated gadgets/widgets)
  • Logging processes
  • Too many components in the application delivery chain
  • Ratio of concurrent users to open TCP connections (typically associated with AJAX-based applications)
  • Cowboy applications architecture (doing everything in the application – acceleration, security, SSL, access control, rate limiting)

Any one, or combination thereof, of these factors can negatively impact the performance and scalability of an  application. Simply throwing more hardware (resources) at the problem will certainly help in some cases, but not all. It behooves operations to find the root cause of the performance and scalability issues, and address them rather than pushing the “easy” button and spinning up more (costly) instances.

Improving the capacity of a single instance by leveraging application delivery techniques such as offloading TCP connection management, centralizing SSL termination, employing intelligent compression, and leveraging application acceleration solutions where possible can dramatically improve the capacity and performance of an application without dramatically increasing the bottom line. A unified application delivery strategy can be employed across all applications and simultaneously improve performance and capacity while sharing its costs across all applications, which results in lower operational costs universally.

NICKEL and DIME

Until cloud computing is more mature in terms of its infrastructure service offerings, organizations are unlikely to have motivation to move away from the “throw more hardware at the problem” strategy of dealing with scalability and performance. Cloud computing does nothing right now to encourage a move away from such thinking because at this point cloud computing models are based entirely on the premise that compute is cheap and it’s okay to overprovision if necessary. Certainly a single instance of an application is vastly more inexpensive to deploy in the cloud today, but in the case of cloud computing the whole is greater than the sum of its parts. Each instance adds up and up and up until, as the old adage goes, your budget has been nickel and dime’d to death.

IT architects must seriously start considering the architecture of applications from a data center perspective. Architects need to evaluate all mitigating factors that impact capacity and performance whether those factors are network, application, or even end-point originated. Without a holistic view of application development and deployment, cloud computing will continue to promote more of the same problems that have always plagued applications in the data center.

The change brought about by cloud computing and virtualization cannot be simply technological; it must also be architectural. And that means IT, as technologists and architects, must change.

Toddy Mladenov describes Update, Upgrade and VIP-Swap for Windows Azure Service - What are the Differences? in this 6/1/2010 post:

Some time ago I wrote the definitions of In-Place Upgrade and VIP-Swap, but just reading those is not enough to know when to use one or the other. For a completely unrelated investigation I had to do in the last few days I spent some time digging into the differences, and here are some more details.

Updating Windows Azure Hosted Service

Updating the hosted service is a very simple concept. You can think of it as a configuration update because this is what it is – you are changing the hosted service configuration only. There are several ways to do that – either through the Windows Azure Developer Portal or the management APIs – but the simplest one is to edit the CSCFG file on the Portal. Alternatively you can upload a new CSCFG file through the Portal, but make sure that the uploaded files (CSPKG and CSCFG) match the service definition (CSDEF) for your hosted service. Here are the changes that will result in an update operation for your hosted service:

  • Change the OS Version for your hosted service
  • Change the number of instances for particular role in your hosted service
  • Change the value (but not the name) of a configuration setting
  • Change the thumbprint for a certificate
  • Change the algorithm used to generate the thumbprint for a certificate

Keep in mind that the names of the configuration settings and the certificates are defined in the service definition file (CSDEF) and you should not change those in CSCFG as part of the update, or else you will receive errors.
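
For reference, here is a minimal ServiceConfiguration (CSCFG) sketch showing the kinds of values an update can touch; the service, role and setting names are placeholders:

<?xml version="1.0"?>
<ServiceConfiguration serviceName="MyHostedService"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole1">
    <!-- Changing the instance count is an update operation -->
    <Instances count="3" />
    <ConfigurationSettings>
      <!-- Setting values may change during an update; the names must match the CSDEF -->
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>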

Here are the things you cannot do as part of the hosted service update:

  • Change the service definition (CSDEF) file. This includes change in the number of roles or endpoints
  • Deploy new code

Windows Azure updates your hosted service using the upgrade domain concept – the changes are done by walking upgrade domain by upgrade domain.

One important thing you should remember is that the hosted service update does not change the VIP (Virtual IP address) of your hosted service.

Upgrading Windows Azure Hosted Service

Upgrading the hosted service is very similar to updating, and all the limitations that apply to update (except the one about deploying new code) also apply to upgrade. There is one caveat though: although you should not be allowed to change the service definition file, you can add or change custom configuration settings (the ConfigurationSettings element) in CSDEF and your upgrade will execute without errors.

Upgrade uses the upgrade domain concept the same way update does.

As with update, upgrading the hosted service does not change the VIP of the hosted service.

VIP-Swap

Before I dig deeper into when you can use VIP-Swap, here is a short explanation of what VIP-Swap exactly is:

  • VIP-Swap is done between the Production and Staging deployments in Windows Azure
  • During VIP-Swap the VIP of the Production deployment is assigned to the Staging deployment, and the VIP of the Staging deployment is assigned to the Production one (both VIPs are swapped – hence the term VIP-Swap)
  • During VIP-Swap your service incurs no downtime

In general the concept is very compelling because it allows you to deploy and test your service before putting it in Production. Sometimes this may work very well, while at other times it may not be very helpful (for example if your application sets cookies for a particular domain).

Nevertheless the following changes to your service definition (or changes in CSDEF file) will work with VIP-Swap:

  • Changing the .Net trust level
  • Changing the VM size
  • Adding or changing configuration settings
  • Adding or changing local storage
  • Adding or changing certificate

Here are some of the restrictions that apply to VIP-Swap:

  • You cannot do VIP-Swap if you change the number of roles for your service
  • You cannot do VIP-Swap if you change the number of endpoints for your service (with or without changing the number of roles). For example you cannot add HTTPS endpoint to your Web Role

VIP-Swap doesn’t change the Virtual IP address of your service.


<Return to section navigation list> 

Cloud Security and Governance

David Aiken points to Security Best Practices for Developing Windows Azure Applications in this 6/2/2010 post:

This is worth a read http://download.microsoft.com/download/7/3/E/73E4EE93-559F-4D0F-A6FC-7FEC5F1542D1/SecurityBestPracticesWindowsAzureApps.docx.

“This paper focuses on the security challenges and recommended approaches to design and develop more secure applications for Microsoft’s Windows Azure platform. Microsoft Security Engineering Center (MSEC) and Microsoft’s Online Services Security & Compliance (OSSC) team have partnered with the Windows Azure team to build on the same security principles and processes that Microsoft has developed through years of experience managing security risks in traditional development and operating environments.”

THIS POSTING IS PROVIDED “AS IS” WITH NO WARRANTIES, AND CONFERS NO RIGHTS

Scott Sanchez asks “Can you use cloud computing and still satisfy the German data privacy laws?” as a preface to his The German Data Protection Act (BDSG) and Cloud Computing post of 6/2/2010:

Over the years I've had the opportunity to deal with the German Data Protection and Privacy Laws (Bundesdatenschutzgesetz, or BDSG) many times, including recently in a cloud computing context.  I thought it would be helpful to share some of the key things that are required, and how you can address these in the context of cloud.

The basic premise of the German Data Protection laws is to protect privacy.  The rules go something like this:

  • Don't collect any data that can identify an individual without express permission (this includes obvious things like name and date of birth, as well as less obvious things like phone numbers, address, etc.)
  • The permission that an individual grants must specify how, where, how long, and for what purposes that data may be used
  • The individual can revoke that permission at any time
  • Where personal data is processed or used, the organization needs policies, procedures and controls in place to protect this data that meets the BDSG data protection requirements
  • These policies, procedures and controls need to take in account the different types and categories of personal data being stored, and how they are protected
  • There are real penalties for breaking the law

The key areas of control that relate to Cloud Computing are:

  • Ensure only authorized access to the systems where personal data is stored, processed or used (access control)
  • Ensure that proper access control to personal data is enforced during storage, processing, access or use (create, read, update, delete)
  • Ensure that data is protected while "in motion" and being transmitted and can not be viewed, changed or deleted without proper authorization
  • Ensure that you have the ability to establish and verify when and by whom personal data was entered into a computer system, as well as when and by whom this data was updated or removed
  • Ensure and have audit trails to verify that personal data is stored, processed and transmitted in accordance with the instructions and approval of the principal (individual or entity referenced in the data)
  • Ensure that personal data is protected against accidental destruction or loss (availability control)
  • Ensure that data collected for different purposes is stored, processed, used and transmitted separately

If you went down the list of the 10 well established domains of security, you would see that these control areas fit in nicely.  It seems like Security 101.  But what makes the BDSG such a serious concern is the penalties for non-compliance, 50,000-300,000 EUR and potential for seizure of profits, and the fact that there are real enforcement efforts that take place (unlike most of the US privacy regulations to date).

Why are the issues for BDSG any different in a cloud computing environment? People see risks, some of which I believe are real and others of which I believe are just perception.

Scott continues with “Perceived BDSG Risks of cloud computing” and “Real BDSG risks of cloud computing” topics and concludes:

Bottom line:

You can use cloud computing and be BDSG compliant, but don't expect to just drag and drop your already compliant applications and data on the cloud and continue to be compliant.  Use this opportunity to take a fresh, top to bottom look at your compliance efforts, identify the gaps, and plan for remediation.  Make sure you do your research and select a cloud computing provider that understands the issues you face with BDSG and has invested in their cloud organization to help your business be compliant.  In my experience, compliance is easiest when you select a provider inside the borders of Germany, even though technically if your auditors are friendly enough, they might let you put the data anywhere in the EU.

<Return to section navigation list> 

Cloud Computing Events

IDC and IDG Services announced their Cloud Leadership Forum to be held 6/13 – 6/15/2010 at the Hyatt Regency Santa Clara in a 6/2/2010 e-mail:

Security. Interoperability. Private vs. Public vs. Hybrid Clouds. 

Join early cloud adopters and industry experts at the IDC / IDG Cloud Leadership Forum to discuss those key issues and more, at the first conference where enterprise IT and industry leaders will hash out the complexities and realities of incorporating cloud concepts and technologies into IT strategy and infrastructure.

Register now and reference promo code IDC to join this important conversation. You'll get all your questions answered, such as:

  • What do my peers' cloud implementations look like, and how and why did they make the choices they did?
  • What are my key vendors doing to advance cloud security and interoperability?
  • How are my peers overcoming any concerns about security and compliance?

This event is complimentary for qualified executives by registering here with the promo code: IDC
 
Learn from speakers such as:

  • Alan Boehme, SVP, IT Strategy and Enterprise Architecture, ING Americas
  • Jessica Carroll, Managing Director, Information Technologies, United States Golf Association
  • Erich Clementi, General Manager of Enterprise Initiatives, IBM
  • Ken Harris, CIO & SVP, Shaklee Corp.
  • Geir Ramleth, SVP & CIO, Bechtel Group, Inc.
  • James Rinaldi, CIO, NASA Jet Propulsion Laboratory
  • John Rizzi, VP, Product Management and Strategy, Tickets.com

SESSIONS INCLUDE:

  • CIO panels on private cloud and preparing your IT organization for the cloud
  • CIO keynotes on committing completely to the cloud and extending your business into the cloud
  • Expert-led sessions on decision frameworks for choosing cloud-based services, the evolution of cloud and more
  • A town hall where you can ask your strategic vendors the key questions you need answered to advance your cloud strategy

Find the full agenda here: Cloud Leadership Forum

Register now at www.cloudleadershipforum.com and reference promo code IDC to attend this important event at no charge.  You may also call 800-366-0246 or email us at executiveprograms@cxo.com to process your registration.

Jeremy Geelan asks “Cloud and virtualization are the Power Couple of enterprise IT - did you submit to speak @ 7th Cloud Expo in Santa Clara yet?” in his Cloud Expo Silicon Valley: Call for Papers Closing post of 5/31/2010:

In short, questions abound. But so do answers, which is why we had a record level of submissions to 7th Cloud Expo, being held November 1-4, 2010, at the Santa Clara Convention Center in Santa Clara, CA. Submissions came from all over the globe, and from companies representing every level of the Cloud ecosystem. They came from Cloud vendors and from Cloud users alike.

The Call for Papers for 7th Cloud Expo closes today, May 31, 2010. So if you have not submitted your speaking proposal, and were hoping to speak in Santa Clara, do please hurry and get your submission to us. The submissions link is here. …

Bernard Golden’s Cloud Computing in Moscow: From Russia With Love post of 6/2/2010 for NetworkWorld’s DataCenter blog begins:

I had the privilege of keynoting a data center conference in Moscow this week, speaking on the topic of what cloud computing means to the data center of the future. This is the largest data center conference in Russia, and attracts a mix of internal data center facilities executives as well as hosting providers. The conference was extremely well-attended, with record numbers of people registering.

As background, Russia is coming out of a severe financial dislocation; its economy shrank 7.9 percent in 2009. It appears to be on a growth track today. Furthermore, its penetration of IT use in the economy is far lower than what we see in the U.S. and western Europe - it can be characterized as an emerging economy, though, if my observations are anything to go by, one with tremendous vitality and ambition.

What that means to the future of data centers is quite intriguing. Unlike the U.S., for example, a much lower percentage of companies in Russia have significant existing data center infrastructure. For many companies, the choice of computing infrastructure presents an interesting greenfield opportunity. The question, therefore, is what path to pursue as companies build out their infrastructure of the future. …

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Phil Wainwright asks SaaSGrid Express: irresponsible or indispensable? in this 6/2/2010 post to the Enterprise Irregulars blog:

SaaS purists like me ought to worry about a product like SaaSGrid Express, launched this week by .NET platform-as-a-service specialist Apprenda. Surely it’s the height of irresponsibility to disseminate a free, downloadable SaaS application platform? This gives the worst imaginable encouragement to hundreds if not thousands of unskilled, poorly resourced developers to set up the worst possible kind of amateur cloud implementations, with predictably calamitous results, both for the customers and for the reputation of the wider industry.

And yet, I can’t bring myself to condemn it. These ISVs are going to do their own thing anyway. By making its product a free download, Apprenda is giving them a means of playing around and starting to learn the skills they need without the pressure of having to fund license fees or a monthly subscription. And by downloading what is by all accounts a properly architected SaaS platform, at least they’ll be learning from an example of best practice. Perhaps they’ll realize more quickly how hopeless it would be to try to engineer all of their own as-a-service infrastructure from scratch. Apprenda has hosting partners, too, so once an ISV is ready to go into production, there’s no obligation to build and manage their own infrastructure; there are third-party alternatives available that can provide a robust, ready-made operational instance of the Apprenda platform.

As Apprenda’s CEO Sinclair Schiller explained to me last month in a pre-briefing, the overriding aim of the Express product is to galvanize more small to mid-size ISVs into taking the plunge into SaaS. “How can we catalyze more to convert to SaaS?” he asked. “There’s an honest-to-goodness need to get the market moving.”

It also acknowledges an inherent problem with the vertically integrated develop-and-deploy PaaS offerings from the likes of Salesforce.com, Google AppEngine and others (one that’s been partially answered by the introduction of VMware’s open PaaS strategy). When Apprenda first launched, it offered its platform as a service but has now changed to a more conventionally licensed platform approach. The change was due to the realization that ISVs varied in their hosting requirements, and it wasn’t possible to meet all of those different needs with a single hosted proposition. So Apprenda has now moved to a more horizontally integrated model where it sells its middleware platform separately as software and lets customers choose the underlying hosting platform…

Phil Rosenbush wonders As Cloud Computing Ramps Up, Where Are the Opportunities? in this 6/1/2010 post to the DailyFinance blog:

As I noted last week, enterprises are beginning to embrace the concept of cloud computing, in which applications and computing power are moved from individual desktops to a shared network. Over the next few years, this shift will account for a greater share of IT spending and a huge part of its growth. This week, we'll take a look at how that growth will evolve and at a few companies that stand to benefit.

A study by IDC shows that global spending on cloud computing is growing at a rate of 27% a year, or nearly four times as fast as the overall information technology market. Total spending on cloud computing -- which includes business applications, servers, storage, application development and deployment and infrastructure software -- will more than double between 2008 and 2012, according to IDC. It was $16.23 billion, or 4%, of the $383 billion global IT market in 2008.

Spending on the cloud is expected to rise to $42.27 billion, or 9% of the $493.71 billion IT market in 2012.

Cloud computing is becoming more important to every element in the tech food chain, from suppliers of infrastructure and applications to the clients who buy it. The growing importance is even more dramatic when one considers just how much of the market's growth is being channeled into the construction of the cloud. The IT market is expected to grow from $462 billion in 2011 to $493 billion in 2012. The cloud is expected to account for 25% of that $31 billion in new spending.

"The implication for IT suppliers is clear. During the next several years, IT suppliers must position themselves as leaders in IT cloud services, or forfeit an ever-expanding portion of the industry's growth," Frank Gens of IDC concludes.

The Hot Spots

Where, precisely, will this growth occur? The biggest part of the market is business applications, which accounts for more than half of all spending on the cloud. About 52% of the cloud market will be focused on business applications by 2012. That's down a bit from 57% in 2008, but still a huge opportunity for big IT companies.

That's one reason why Microsoft (MSFT) was upgraded last week by brokerages such as FBR Capital Markets (FBCM). Software analyst David Hilal raised his rating on the company to outperform and put a price target of $32 on the company, up by $1. (The stock closed Friday at $25.80.) The software giant is embracing the concept of cloud computing, and new versions of its Office productivity suite have fully developed Web offerings, matching the functionality of its traditional desktop product. Goldman Sachs (GS) rated Microsoft a buy earlier this year, noting the strength of its enterprise business.

SAP (SAP), bolstered by its acquisition of Sybase, is rushing to catch up in the cloud, where it had fallen behind Microsoft and Oracle (ORCL). But the deal should help it catch up.

The greatest opportunity may occur in the cloud server and storage markets. The server market is expected to hit 8% of total spending by 2012, up from 5% in 2008. The storage market is expected to account for 13% of the market, up from 5%, during the same time frame. Just as users of Google Docs -- and, increasingly, Microsoft Office -- can store their documents online, instead of on their desktop, enterprise customers are beginning to shift some of their storage to the cloud as well.

As in many aspects of the tech market -- from social networking to the iPhone -- consumers are leading the way for the enterprise. That is particularly true of the cloud, which has driven popular consumer applications for the last few years.

See full article from DailyFinance: http://srph.it/aVSSAc

<Return to section navigation list> 
