Tuesday, April 13, 2010

Windows Azure and Cloud Computing Posts for 4/13/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated in April 2010 to reflect the January 4, 2010 commercial release.

Azure Blob, Table and Queue Services

Eugenio Pace’s Windows Azure Guidance – Using Shared Key Signatures for images in a-Expense continues his blob storage guidance:

As described before, a-Expense static content (mainly the scanned images uploaded by users) is stored in blobs. As with many other things in life there are quite a few options when it comes to how those images are made available to users.

A key design consideration is that in Windows Azure all storage subsystems (tables, blobs and queues) are directly addressable from anywhere, not just from your web tier.

For example, a blob in Azure is normally reachable through a URL like:

https://<application>.blob.core.windows.net/<container>/<blob>

So, the classic tiered app that we have come to love and blindly draw:

[diagram: the classic tiered application]

looks more like this one, provided you have the right credentials.

[diagram: the same application with storage directly addressable by clients]

This contrasts quite a bit with typical on-premises arrangements, where there are often network barriers to the storage subsystems (e.g. firewalls, subnets, private LANs, etc.).

Note: SQL Azure is also directly reachable from the internet, but it incorporates a firewall that you have control over. That is, you can define which source IP addresses are allowed for incoming connections into a SQL Azure database.

Given this, one might wonder what the best way of referencing blob content from a web page is, and as usual, there are tradeoffs and “it depends” :-).

Eugenio continues with detailed analyses of the options.
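
If you want to experiment with the Shared Access Signature option Eugenio analyzes, here is a minimal sketch using the Microsoft.WindowsAzure.StorageClient library from the SDK. The account, container and blob names are hypothetical placeholders; this is illustrative only, not code from the a-Expense sample.

using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class SharedAccessSignatureSketch
{
    static void Main()
    {
        // Hypothetical connection string; substitute your own account and key.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<key>");
        var client = account.CreateCloudBlobClient();

        // Reference an existing blob (container and blob names are placeholders).
        var blob = client.GetContainerReference("receipts")
                         .GetBlobReference("scan-0001.png");

        // Ask for a read-only Shared Access Signature valid for 30 minutes.
        string sas = blob.GetSharedAccessSignature(new SharedAccessPolicy
        {
            Permissions = SharedAccessPermissions.Read,
            SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(30)
        });

        // The URL you embed in the page is just the blob URI plus the SAS query string.
        Console.WriteLine(blob.Uri.AbsoluteUri + sas);
    }
}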

Steve Marx completes his three-part Uploading Windows Azure Blobs from Silverlight [series with] Part 3: Handling Big Files by Using the Block APIs on 4/13/2010:

In this series of blog posts, I’ll show you how to use Silverlight to upload blobs directly to Windows Azure storage. At the end of the series, we’ll have a complete solution that supports uploading multiple files of arbitrary size. You can try out the finished sample at http://slupload.cloudapp.net.

Part 3: Handling Big Files by Using the Block APIs

In Part 1, we saw how to handle authentication using Shared Access Signatures. In Part 2, we saw how to enable cross-domain access to blobs so we could upload data using Silverlight. In this post, we’ll see how to handle large blobs by breaking them into smaller blocks.

Using blocks

In Part 2, we used the Put Blob method to upload blobs using Silverlight. This operation uploads an entire blob in a single web request. For larger blobs, though, we may want to upload the blob in smaller chunks, called blocks. In fact, for blobs larger than 64MB, it’s mandatory that we use the block API (Put Block and Put Block List) to upload the blob in chunks.

Uploading a blob as a series of blocks generally gives us a number of advantages:

  1. It allows us to upload blobs larger than 64MB.
  2. It allows us to upload blocks in parallel.
  3. It allows us to resume failed uploads by retrying only the blocks that weren’t already uploaded.

For our example, we’ll only be taking advantage of #1.

Steve continues with the details of the Block API.
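
Steve's Silverlight client calls the Put Block and Put Block List REST operations directly. As a rough server-side illustration of the same protocol, here is a sketch using the StorageClient library's CloudBlockBlob type; the 4 MB block size and the file-based source are assumptions, and this is not Steve's code.

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.WindowsAzure.StorageClient;

class BlockUploadSketch
{
    const int BlockSize = 4 * 1024 * 1024;   // 4 MB, the maximum size of a single block

    static void UploadInBlocks(CloudBlockBlob blob, string path)
    {
        var blockIds = new List<string>();
        using (var file = File.OpenRead(path))
        {
            var buffer = new byte[BlockSize];
            int bytesRead, index = 0;
            while ((bytesRead = file.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Block IDs must be Base64-encoded and the same length for every block.
                string blockId = Convert.ToBase64String(BitConverter.GetBytes(index++));
                using (var chunk = new MemoryStream(buffer, 0, bytesRead))
                {
                    blob.PutBlock(blockId, chunk, null);    // Put Block
                }
                blockIds.Add(blockId);
            }
        }
        blob.PutBlockList(blockIds);                        // Put Block List commits the blob
    }
}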

David Aiken recommends De-Normalizing your data in this 4/13/2010 post:

Yesterday I mentioned how we use some trickery to maintain the top views table in Bid Now.

Items are stored in a table, but to generate a quick top views view, we maintain a separate table. This is done using the Task/Queue/Task pattern described yesterday here.

Since TOP isn’t a keyword that Windows Azure table storage deals with, we have to use some trickery.

Items are stored in the AuctionItems table. But when you visit the home page, you get several different lists of items. Each of these lists is in fact backed by a separate table, which is kept up to date using the Task/Queue/Task pattern.


For the Most Viewed list – we use the MostViewedItems table. The table contains PartitionKey, RowKey, Timestamp, Title, EndDate, ItemId, PhotoUrl, ShortDescription & ThumbnailUrl: in fact, just enough information to display the list and enable a click-through to the item details.

The query to return the top 5 items is simple – we just return the first 5 items from that table – which is super fast. How, you say? Well, we use the PartitionKey to order the table! …
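
Here is a hedged sketch of what that query might look like with the StorageClient library's table support. The MostViewedItem class below is a guess at the entity shape from the column list above; the trick is simply that the PartitionKey is written so the rows already sort most-viewed-first, so Take(5) returns the top 5.

using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical entity matching the columns David lists.
public class MostViewedItem : TableServiceEntity
{
    public string Title { get; set; }
    public DateTime EndDate { get; set; }
    public string ItemId { get; set; }
    public string PhotoUrl { get; set; }
    public string ShortDescription { get; set; }
    public string ThumbnailUrl { get; set; }
}

public static class MostViewedQuery
{
    public static List<MostViewedItem> Top5(CloudStorageAccount account)
    {
        var context = new TableServiceContext(
            account.TableEndpoint.AbsoluteUri, account.Credentials);

        // Because the PartitionKey sorts most-viewed-first, the first 5 rows are the top 5.
        return context.CreateQuery<MostViewedItem>("MostViewedItems")
                      .Take(5)
                      .ToList();
    }
}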

Joannes Vermorel introduces “Fat entities for a strong-typed fine-grained Table Storage” in his Fat Entities post of 4/13/2010:

Table Storage is another powerful and scalable storage service offered as a part of Windows Azure. Compared to Blob Storage, Table Storage offers much lower I/O costs, actually up to 100x lower costs through Entity Group Transactions for fine-grained data.

This page assumes that you have some basic understanding of Table Storage; otherwise, have a look at the Walkthrough.

Why do you need the Table Storage?

There are some frequent misunderstandings about Table Storage (TS). In particular, TS is in no way an alternative to SQL Azure. TS features nothing you would typically expect from a relational database.

TS does feature a query language (a pseudo-equivalent of SQL) that supposedly supports querying entities against any property. Unfortunately, for scalability purposes, TS should never be queried without specifying row keys and/or partition keys. Specifying arbitrary properties may give the false impression that it just works; yet to perform such queries, TS has no alternative but to scan the entire storage, which means that your queries become intractable as soon as your storage grows.

Note: If your storage is not expected to grow, then don't even bother with Table Storage; go for SQL Azure instead. There is no point in dealing with the quirks of a NoSQL store if you don't need to scale in the first place.

Back to the original point: TS features cheaper data-access costs, and obviously this aspect had to be central in Lokad.Cloud - otherwise it would not have been worthwhile to even bother with TS in the first place.

Fat Entities

To some extent, Lokad.Cloud puts aside most of the property-oriented features of TS. Indeed, queries against properties don't scale anyway (except for the system ones).

Thus, the first idea was to go for fat entities. Here is the entity class shipped in Lokad.Cloud:

    public class CloudEntity<T>
    {
        public string RowKey { get; set; }
        public string PartitionKey { get; set; }
        public DateTime Timestamp { get; set; }
        public T Value { get; set; }
    }

Lokad.Cloud exposes the 3 system properties of TS entities. Then, the CloudEntity class is generic and exposes a single custom property of type T.

When an entity is pushed to TS, it is serialized using the usual serialization pattern applied within Lokad.Cloud.
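
To make the fat-entity idea concrete, here is a small hypothetical example built on the CloudEntity<T> class shown above. The OrderSnapshot payload and the DataContractSerializer call are illustrative assumptions only; Lokad.Cloud ships its own serializer and storage providers.

using System;
using System.IO;
using System.Runtime.Serialization;

// Hypothetical payload; any serializable type can play the role of T.
[DataContract]
public class OrderSnapshot
{
    [DataMember] public string Customer { get; set; }
    [DataMember] public double Amount { get; set; }
}

class FatEntitySketch
{
    static void Main()
    {
        var entity = new CloudEntity<OrderSnapshot>
        {
            PartitionKey = "2010-04",              // e.g. group entities by month
            RowKey = "order-000123",
            Value = new OrderSnapshot { Customer = "contoso", Amount = 99.5 }
        };

        // Illustration of the idea: the whole Value is serialized into a single
        // property before the entity is pushed to Table Storage.
        using (var stream = new MemoryStream())
        {
            new DataContractSerializer(typeof(OrderSnapshot))
                .WriteObject(stream, entity.Value);
            Console.WriteLine("Serialized payload: {0} bytes", stream.Length);
        }
    }
}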

Joannes continues with a detailed analysis of Lokad.Cloud.

<Return to section navigation list> 

SQL Azure Database, Codename “Dallas” and OData

Alex James’ OData weekly roundup #2 post of 4/13/2010 to the OData Blog contains links to recent OData content:

Since the last roundup, we've received a ton of new feedback and there have been some really exciting new developments too.

Here’s [the first few items of] the last couple of weeks’ feedback:

  • OData Wiki: As mentioned last time, there is now an OData community wiki called ODataPrimer. We had originally planned to create a wiki on OData.org, but we won't be doing that anytime soon because ODataPrimer looks perfect for now.
  • OData Mailing list: We always planned an OData Mailing List, and are now almost ready to go. So you can expect to see something up on the same site that hosts the M-Specification mailing list soon.
  • Open Source the .NET producer libraries: This is something that we've now heard from a number of people.  We are working through the feasibility and associated logistics at the moment.  We will let you know as we make progress on our thinking around this request.
  • Create or Foster producer libraries for different platforms: Closely related is the need to make it easy to create OData Services on all major platforms. The most common requests are Java (because of its importance to the enterprise), PHP, Python and Ruby (OData on Rails?). Doug Purdy has a blog post asking for your opinions here. Again we are investigating options here. We are also trying to figure out how we can help simplify creating libraries and services. So far we've released the OData Validation Toolkit to help you validate your service. Can you think of anything else beyond just libraries that would help? We'd love to hear your suggestions here.
  • Sample Producer code on different platforms: We've heard a lot of requests for sample code to create an OData Service in language X on platform Y. You might think this is dependent on the above, and to implement the whole of OData it certainly is, but OData is flexible enough that you can do a rudimentary OData service - in any language - very easily. Check out  Implementing only certain aspects of OData for more. …

Alex concludes with additional OData-related links in the “other developments” category.

Martin Heller reports “Visual Studio 2010 proper sports a cleaner start page, adds SQL Azure support, and behaves better when running in virtualized environments” in his feature-length InfoWorld review: Visual Studio 2010 delivers report of 4/13/2010 for the IDG News Service, which begins:

As a daily user of Visual Studio from its inception, and of Visual C++ and Visual InterDev before that, I have been following the evolution of Microsoft's development environment quite closely. In the Visual Studio 2010 IDE, Microsoft has taken several large steps away from its legacy code. That was a gutsy and potentially risky move on the part of the Visual Studio team, but one that worked out well and will lay the foundation for future product growth.

Visual Studio 2010 is a major upgrade in functionality and capability from its predecessor. It includes some major rewriting of core features, as well as many new features. Developers, architects, and testers will all find areas where the new version makes their jobs easier. Despite the higher pricing for this version, most serious Microsoft-oriented shops will upgrade to Visual Studio 2010 and never look back. …

I’m sold.

Brad Abrams’ Silverlight 4 + RIA Services - Ready for Business: Ajax Endpoint post of 4/12/2010 starts:

Continuing in our series, I wanted to touch on how RIA Services can expose your service in JSON. This is very handy for Ajax clients.

The great thing about enabling the JSON endpoint is that it requires NO changes whatsoever to the DomainService. All you need to do to enable it is add the JSON endpoint in web.config:

<system.serviceModel>
  <domainServices>
    <endpoints>
      <add name="JSON"
           type="Microsoft.ServiceModel.DomainServices.Hosting.JsonEndpointFactory, Microsoft.ServiceModel.DomainServices.Hosting, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
      <add name="OData"
           type="System.ServiceModel.DomainServices.Hosting.ODataEndpointFactory, System.ServiceModel.DomainServices.Hosting.OData, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
      <add name="Soap"
           type="Microsoft.ServiceModel.DomainServices.Hosting.SoapXmlEndpointFactory, Microsoft.ServiceModel.DomainServices.Hosting, Version=4.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
    </endpoints>
  </domainServices>
</system.serviceModel>

As you can see, the above snippet shows adding the JSON endpoint from the RIA Services toolkit, as well as the OData and Soap ones. [Emphasis added.] …

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

The Azure AppFabric Team, which still calls itself the netsqlservicesteam, is Announcing commercial availability of Windows Azure platform AppFabric as of 4/12/2010:

Earlier in March, we informed you about the upcoming commercial availability of Windows Azure platform AppFabric. Today, we are absolutely thrilled to let you know that AppFabric has now become a fully SLA-supported paid service. Customers using AppFabric will see billing information in their accounts, as usage charges have begun accruing as of 11:00 PM (UTC) on April 12, 2010.

Windows Azure platform AppFabric includes Service Bus and Access Control services. AppFabric Service Bus connects services and applications across network boundaries to help developers build distributed applications. AppFabric Access Control provides federated, claims-based access control for REST web services.

You can find pricing for AppFabric here. You can also go to the pricing section of our FAQs for additional information as well as our AppFabric pricing blog post. If you do not wish to be charged for AppFabric, please discontinue use of these services.

Also, we are happy to let you know that the Microsoft Windows Azure Platform, including Windows Azure, SQL Azure, and Windows Azure platform AppFabric has expanded general availability to an additional 20 countries, making our flexible cloud services platform available to a global customer base and partner ecosystem across 41 countries. Details about this announcement are available here.

Thank you for your business and your continued interest in AppFabric.

Now that the Azure AppFabric Team has released their product for paid production, A Calculator for the Windows Azure platform AppFabric Billing Preview makes more sense:

Last week we released a Billing Preview that provided a usage summary to help customers prepare for when billing begins in April. Usage statistics in this billing preview are consistent with our new Connection-based billing for the AppFabric Service Bus that we announced in January. A key feature of this model allows customers to purchase Connections individually on a pay-as-you-go basis, or in Connection Packs if they anticipate needing larger quantities.

To provide greater insight into the Billing Preview and the options available to customers, a member of our team created an Excel-based calculator that converts usage numbers into costs.  The calculator also allows customers to see how much their usage would have cost with each of the different reserved pack options.

While we hope customers find this tool quite useful, please do note that it is being provided only as a courtesy and is not guaranteed against possible bugs. The calculator also does not guarantee the amount of future bills.

You will find the calculator in an attachment to this post: AppFabric Billing Preview Calculator.xlsx

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

The Windows Azure Team posted Real World Windows Azure: Interview with Richard Prodger, Technical Director at Active Web Solutions on 4/13/2010:

As part of the Real World Windows Azure series, we talked to Richard Prodger, Technical Director at Active Web Solutions (AWS), about using the Windows Azure platform to deliver the company's search-and-rescue application and the benefits that Windows Azure provides. Here's what he had to say:

MSDN: Tell us about Active Web Solutions and the services you offer.

Prodger: AWS specializes in Web application and custom software development. In 2006, the Royal National Lifeboat Institution contracted AWS to build an automated alerting system for fishing vessels in the United Kingdom. We developed a location-based service infrastructure, code-named GeoPoint, that transmits position data to a centralized tracking and alerting system. We then used GeoPoint to build MOB Guardian, a search-and-rescue application for fishing vessels (MOB stands for "man overboard").

MSDN: What was the biggest challenge Active Web Solutions faced with GeoPoint and MOB Guardian before migrating to the Windows Azure platform?

Prodger: Our original infrastructure could handle approximately 10,000 boats, but we wanted to offer MOB Guardian to the 500,000 leisure craft in the U.K. and the millions of marine users worldwide. However, as a small company, we would find it hard to accommodate the massive infrastructure that would be required to offer MOB Guardian more broadly.

MSDN: Can you describe the solution you built with Windows Azure and Windows Azure platform AppFabric to help address your need for a cost-effective, scalable solution?

Prodger: We migrated our existing application to Windows Azure very quickly. Now, instead of passing emergency messages by satellite to physical servers, messages are transferred by satellite using the Simple Mail Transfer Protocol (SMTP) and delivered to a number of message queues. Multiple service instances read from the queues, process the messages, and store the data using Table storage in Windows Azure. Emergency alarms are then relayed through the AppFabric Service Bus to the end-user monitoring application in the search-and-rescue operations center. We also used the AppFabric Service Bus to connect cloud-based GeoPoint to on-premises databases without exposing the data to the public Internet. …
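
The queue-to-table flow Prodger describes is a common Windows Azure pattern. The following is a generic sketch of such a worker loop using the StorageClient library (not AWS's actual code); the queue name, table name and entity shape are invented for illustration.

using System;
using System.Threading;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// Hypothetical entity persisted for each incoming alert message.
public class PositionReport : TableServiceEntity
{
    public string RawMessage { get; set; }
}

public class AlertProcessor
{
    public static void Run(CloudStorageAccount account)
    {
        var queue = account.CreateCloudQueueClient().GetQueueReference("incoming-alerts");
        queue.CreateIfNotExist();

        var tables = account.CreateCloudTableClient();
        tables.CreateTableIfNotExist("PositionReports");

        while (true)
        {
            var message = queue.GetMessage();
            if (message == null) { Thread.Sleep(1000); continue; }

            var context = tables.GetDataServiceContext();
            context.AddObject("PositionReports", new PositionReport
            {
                PartitionKey = DateTime.UtcNow.ToString("yyyyMMdd"),
                RowKey = Guid.NewGuid().ToString(),
                RawMessage = message.AsString
            });
            context.SaveChanges();

            queue.DeleteMessage(message);   // delete only after the row is stored
        }
    }
}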

Josh Holmes describes his Easy Setup for PHP On Azure Development in a 4/13/2010 post:

I just got back from the JumpIn Camp in fantastic Zurich, Switzerland. I’ll blog about that whole experience shortly. In the meantime, however, I thought I’d get some resources out here that would have been useful last week. :) Specifically in this post, I thought I’d tackle the Windows Azure 4 Eclipse tooling setup.

There are two major things that we need to do. First is to get the Windows Azure SDK installed. The reality is that this is all that you *really* need to do Windows Azure development and testing. However, we want to do our PHP development with the Eclipse toolset for Azure. This gives us debugging support and a lot of great helpers.

Josh continues with “Installing the Windows Azure 1.1 February 2010 SDK,” “Installing Eclipse with the Windows Azure for Eclipse tooling” and “Quick Tips to Make Things Smoother” topics and then delves into:

Zend Debugger and Windows Azure DevFabric Storage

Changing the Zend Debugging Port

The first thing is that the Windows Azure DevFabric Storage uses port 10000. This is not a bad thing, except that the Zend Debugger, by default, also uses port 10000. The end result is that if you launch Eclipse and then try to start the development fabric storage engine, you’ll get a conflict.

Specifically, the error is from the Development Storage engine – “The process cannot access the file because it is being used by another process”. This is a bizarre error that doesn’t actually give you correct information. The way to fix this is in Eclipse, go to Windows | Preferences, find the Zend Debugger settings and edit the port.

He concludes with “Download Windows Azure for PHP Contrib Project” and “Creating a Hello World Azure Application with Eclipse” topics.

David Makogon analyzes Azure: Staging and Compute-Hour Metering in this 4/13/2010 post to the RDA blog:

In this post, I talked about how web and worker roles are considered live the moment they are deployed, whether stopped or running. I received a few follow-up questions about billing when using the Staging area, as well as clarification on how compute-hours are measured.

Production and Staging

When deploying to Azure, you have a choice between Production and Staging:

[screenshot: the Production and Staging deployment slots]

This comes in handy when upgrading your application. Let’s say you have an application already in Production, and you now have a new version you’d like to upgrade to. You can deploy that new version to a separate Staging area, which provides you with a separate “test URL” as well as any worker and web roles your application needs. You can run this app just like you’d run your production app. When you’re done testing, swap staging and production. This is effectively a Virtual IP address swap, so your end-users simply see an upgraded application as soon as you choose to execute the swap.

This is a terrific feature, allowing you to test an application without service disruption. It also allows you to quickly swap back to the previously deployed version if something goes wrong after deploying your new version to production.

Once you’re sure your new version is working ok, consider deleting your service from Staging. Be aware that your staging area is also consuming virtual machine instances. Staging instances and Production instances are equivalent: Each instance is a Virtual machine; Staging instances are billed just like Production instances. If you leave your service deployed to both Production and Staging, you will be accruing Compute-Hour charges for both. Just keep this in mind when estimating your monthly Azure costs.
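
To make the cost impact concrete, here is a quick back-of-the-envelope calculation (assuming the published US$0.12 per hour rate for small compute instances; your instance size and rate may differ):

  1 web role + 1 worker role, deployed to both Production and Staging = 4 instances
  4 instances x 24 hours x $0.12/hour ≈ $11.52 per day, accrued whether the deployments are running or stopped.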

How the hour is metered

As each application is deployed, its virtual machines are created. The moment those virtual machines are in place, metering begins. This includes stopped services. For instance: assuming we clicked Deploy on the Production area of our service, and uploaded an application comprising one worker role and one web role, we’d then see our deployment in a stopped state: …

[screenshot: the deployment shown in a Stopped state]

Bill Zack’s Developing for Windows Azure in Visual Studio 2010 post of 4/13/2010 to the Innovation Showcase blog points to the Windows Azure Team’s How To Develop for the Cloud in Visual Studio 2010 blog of 4/19/2010, which in turn links to two recent magazine articles, Cloud Development in Visual Studio 2010 and Developing and Deploying Windows Azure Apps in Visual Studio 2010:

[T]hat explain how to build cloud services and applications with Visual Studio 2010. Written by Windows Azure Tools experts Hani Atassi, Anson Horton, Jim Nakashima and Danny Thorpe, these articles provide step-by-step instructions showing you how to use Visual Studio 2010 for the entirety of the Windows Azure application development lifecycle.

Looks like a severe case of linkitis to me.

Bill Zack asks about Windows Azure Performance? in his 4/13/2010 post:

The Extreme Computing Group at Microsoft Research Labs now has a web site (in Azure of course) devoted to Windows Azure performance information. This site will present the results of regularly running benchmarks on the Windows Azure platform.

On this site a series of micro-benchmarks measure the basic operations in Azure such as reading and writing to the various data stores the platform provides and sending communications from one node to another. In addition to these, a set of macro-benchmarks are also provided that illustrate performance for common end-to-end application patterns. They provide the source code for each, which you can use as examples for your development efforts.

You can read more about it here.

The Windows Azure Team updated its Case Studies pages on 4/12/2010. Check ‘em out.

Peter Tweed announced Slalom Consulting San Francisco Custom Dev Challenge is live! in this 4/12/2010 post:

The Slalom Consulting San Francisco Custom Dev Challenge is live at www.slalomchallenge.com!!!!!

Slalom Consulting employs world-class technical consultants who take on groundbreaking projects.  Please take the Slalom Custom Dev Challenge to see how you compare to the level of knowledge we look for in our technical consultants.  The online quiz is focused on General .NET at this time and will be growing to include other technical topics in the future.

This application is written in C#, Silverlight and WCF, and is deployed in the cloud on Windows Azure, working with SQL Azure and Blob Storage.

Peter is a consultant with Slalom Consulting and “will show how easy it is to build quality Silverlight applications on top of SharePoint 2010” at the 4/14/2010 meeting of the East Bay .NET Users Group.

David Chou updated his 3/21/2010 Run Java with Jetty in Windows Azure post on 3/28/2010 with Jetty configuration information:

Configure Jetty (updated 2010.03.28)

There are many ways Jetty can be configured to run in Azure (such as for embedded server, starting from scratch instead of the entire distribution with demo apps), thus earlier I didn’t include how my deployment was configured, as each application can use different configurations, and I didn’t think my approach was necessarily the most ideal solution. Anyway, here is how I configured Jetty to run for the Worker Role code shown in this post.

First, I had to change the default NIO ChannelConnector that Jetty was using, from the new/non-blocking I/O connector to the traditional blocking IO and threading model BIO SocketConnector (because the loopback connection required by NIO doesn’t seem to work in Windows Azure).

This can be done in etc/jetty.xml, where we set connectors, by editing the New element to use org.eclipse.jetty.server.bio.SocketConnector instead of the default org.eclipse.jetty.server.nio.SelectChannelConnector, and removing the few additional options for the NIO connector. The resulting block looks like this:

<Call name="addConnector">
  <Arg>
    <New class="org.eclipse.jetty.server.bio.SocketConnector">
      <Set name="host"><SystemProperty name="jetty.host" /></Set>
      <Set name="port"><SystemProperty name="jetty.port" default="8080" /></Set>
      <Set name="maxIdleTime">300000</Set>
    </New>
  </Arg>
</Call>

Now, I chose the approach of using the JRE and Jetty files simply as application assets (read-only), mostly because this is a simpler exercise, and because I wanted to streamline the Worker Role implementation as much as possible, without doing more configuration on the .NET side than necessary (such as allocating a separate local resource/storage for Jetty, copying files over from the deployment, and using that as a runtime environment where it can write to files).

As a result of this approach, I needed to make sure that Jetty doesn’t need to write to any files locally when the server is started. Description of what I did:

  • etc/jetty.xml – commented out the default “RequestLog” handler so that Jetty doesn’t need to write to that log
  • etc/jetty.xml – changed addBean “org.eclipse.jetty.deploy.WebAppDeployer”’s “extract” property to “false” so that it doesn’t extract the .war files
  • contexts/test.xml – changed <Set name="extractWAR">’s property to “false” so that it doesn’t extract the .war files as well

This step above is considered optional, as you can also create a local storage and copy the jetty and jre directories over, then launch the JRE and Jetty using those files. But you’ll also need more code in the Worker Role to support that, as well as using a different location to find the JRE and Jetty to launch the sub-process (the Tomcat Solution Accelerator used that approach).
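
For readers wondering what launching the JRE and Jetty as a sub-process looks like from the .NET side, here is a rough sketch of a Worker Role Run() method along the lines David describes. The endpoint name, paths and arguments are assumptions based on his description, not his actual code.

using System;
using System.Diagnostics;
using System.IO;
using Microsoft.WindowsAzure.ServiceRuntime;

public class JettyWorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // Jetty and the JRE are deployed as read-only application assets under approot.
        string appRoot = Environment.GetEnvironmentVariable("RoleRoot") + @"\approot";
        string javaExe = Path.Combine(appRoot, @"jre\bin\java.exe");
        string jettyHome = Path.Combine(appRoot, "jetty");

        // Hand the port assigned by the fabric to Jetty via the jetty.port system property.
        // "HttpIn" is a hypothetical endpoint name from the service definition.
        int port = RoleEnvironment.CurrentRoleInstance
            .InstanceEndpoints["HttpIn"].IPEndpoint.Port;

        var startInfo = new ProcessStartInfo
        {
            FileName = javaExe,
            Arguments = string.Format("-Djetty.port={0} -jar start.jar", port),
            WorkingDirectory = jettyHome,
            UseShellExecute = false
        };

        using (var jetty = Process.Start(startInfo))
        {
            jetty.WaitForExit();    // keep the role instance alive while Jetty runs
        }
    }
}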

Download the project

The entire solution (2.3MB), including the Jetty assets as configured (but minus the ~80MB of JRE binaries), can be downloaded from SkyDrive.

Amy Sorokas posits “The mass amounts of social data being created every day called for a system to make sense of it all. FUSE Labs created project ‘Twigg’” in her Project "Twigg" post to the FuseLabs blog:

Originally envisioned and built by FUSE Labs engineers, project "Twigg" is Microsoft's internal-only data ingestion platform. It is a real-time information processing infrastructure built to process social network information, hosted on the Azure computing and storage infrastructure. It provides interfaces for this information to be consumed by various teams across Microsoft, including Microsoft Research and Bing. The result for Bing users is that they can easily find trending topics and results from Twitter in their own searches.

Steve Ickman, FUSE Labs Principal SDE, was one of the engineers who envisioned this system and came up with the idea of building the first iteration of the Twigg architecture. "One day over coffee I asked myself: if I was to go and count tweets in Twitter, as many tweets as I could, and if I would look at and expand the links included in the tweets and count how many times I would see the same link, could I find anything interesting?" Just a few weeks after asking this question Steve had a basic system up and running that used Twitter search to review tweets, crawl the links included in the tweets and generate data from them.

This basic system was the first version of Twigg, a project that is now in its third iteration of the architecture and is the infrastructure behind two of FUSE Labs projects: Bing Twitter search and Bing Twitter Maps. In addition to these two FUSE projects, Twigg is also used as the data ingestion infrastructure for projects from other teams and has continued to generate interest and excitement across Microsoft.

Twigg leverages the Windows Azure Platform's storage and compute capabilities to ingest and process immense amounts of data. When Bing Twitter first launched in November 2009 it processed millions of tweets a day. Today it processes nearly double the original number of tweets a day, and the processing demand continues to grow. "Azure is an amazing platform. Without it this would have been impossible," says Steve. "Azure makes the hard pieces of this project, like storage, fairly trivial."

Continually processing more data, the combination of Azure and Twigg brings interesting ways to find and interact with the lively world of social networks for anyone using Bing today. FUSE Labs continues to work and grow that experience.

<Return to section navigation list> 

Windows Azure Infrastructure

Lori MacVittie asserts “When you combine virtualization with auto-scaling without implementing proper controls you run the risk of scaling yourself silly or worse – broke” in her The Goldfish Effect post of 4/13/2010:


You virtualized your applications. You set up an architecture that supports auto-scaling (on-demand) to free up your operators. All is going well, until the end of the month.

Applications are failing. Not just one, but all of them. After hours of digging into operational dashboards and logs and monitoring consoles you find the problem: one of the applications, which experiences extremely heavy processing demands at the end of the month, has scaled itself out too far and too fast for its environment. One goldfish has gobbled up the food and has grown too large for its bowl.

It’s not as crazy an idea as it might sound at first. If you haven’t implemented the right policies in the right places in your shiny new on-demand architecture, you might just be allowing for such a scenario to occur. Whether it’s due to unforeseen legitimate demand or a DoS-style attack without the right limitations (policies) in place to ensure that an application has scaling boundaries you might inadvertently cause a denial of service and outages to other applications by eliminating resources they need.

Automating provisioning and scalability is a Good Thing. It shifts the burden from people to technology, and it is often only through the codification of the processes IT follows to scale an application in a more static, manual network that inefficiencies can be discovered and subsequently eliminated. But an easily missed variable in this equation is the limitations that were once imposed by physical containment. An application can only be scaled out as far as its physical containers, and no further. Virtualization breaks applications free from their physical limitations and allows them to ostensibly scale out across a larger pool of compute resources located in various physical nooks and crannies across the data center.

But when you virtualize resources you will need to perform capacity planning in a new way. Capacity planning becomes less about physical resources and more about costs and priorities for processing. It becomes a concerted effort to strike a balance between applications in such a way that resources are efficiently used based on prioritization and criticalness to the business rather than what’s physically available. It becomes a matter of metering and budgets and factoring costs into the auto-scaling process. …

Lori concludes with “A NEW KIND of NETWORK is REQUIRED” topic.

The ARN Staff reports Azure lands in Australia in this 4/13/2010 post:

Microsoft’s cloud platform, Windows Azure, is officially available in Australia.

The Windows Azure platform includes Windows Azure, SQL Azure and platform AppFabric, which offers a flexible and scalable environment for developers to create cloud applications and services.

The Microsoft cloud platform also supports Australian currency. The pricing structure allows partners to optimise cloud elasticity and reach new markets while having low barriers to entry, the vendor claimed.

In a release, Microsoft director, developer and platform evangelism, Gianpaolo Carraro, said the Windows Azure platform would be especially beneficial to local application development and deployment organisations.

“In one sense, it is essentially the Microsoft experience that developers and customers know and are familiar with but with Cloud-class grunt,” he stated.

The platform is available in different types of packages, depending on customer needs. The business edition starts from $110 per database per month.

Local details of the Azure launch were announced in February. At the time, early adopters of the Azure included Adslot, Ajiliti, Joomla, Dataract, Avanade, Fujitsu Australia, MYOB, Object Consulting and Soul Solutions.

Early reports indicated that the billing system wasn’t working, but it is now. Currency is in AUS$.

William Vambenepe claims Steve Ballmer gets Cloud in this 4/12/2010 post:

Devops? What’s devops? See these articles:

<Return to section navigation list> 

Cloud Security and Governance

Laura Smith reports Identity management in cloud computing courts enterprise trust in a 4/13/2010 post to the SearchCIO.com site:

Identity management (IDM) in cloud computing is a nebulous application for most enterprises. While new products and standards efforts promote cost savings and management efficiencies, it all boils down to trust.

"I worry about authentication in the cloud," said Phil Kramer, chief technology officer of Systems Solutions Technologies LLC In Old Hickory, Tenn., a consultancy and systems integrator with more than 30 years of experience in enterprise-wide deployments of network infrastructure and information security. "I worry that encryption will be tightly coupled with weak authentication. Username and password would not be enough for me."

Federating identity management makes sense, especially in a cloud environment where users are logging on to multiple systems within and outside the firewall, Kramer acknowledged. Internal IDM is all about account provisioning, assigning user access to systems and resetting end user passwords; interbusiness IDM is about identity mapping within a partner's context.

"Do I really want to authenticate every buyer that comes to my systems? Do I want to support password resets and handle the help desk calls for all external users? It would make sense to have a partner authenticate its user and send the credentials, but it's a matter of trust. I trust you to authenticate your users fully, then your systems communicate with mine and I provision the account and grant access. But would you rely on a password for this?" Kramer asked.

Like Kramer, most enterprise CIOs have shied away from moving their IDM applications to the cloud. Midsized and smaller companies have embraced IDM as a service, but larger companies tend to use the cloud for less critical services, such as storage, according to Andrew Sroka, CEO of Fischer International Identity LLC in Naples, Fla., which recently received the number one ranking in the Brown-Wilson Group's top 10 list of outsourced identity and access management technologies for the second consecutive year. …

<Return to section navigation list> 

Cloud Computing Events

Barnard Crespi reports that the MIT Sloan CIO – Cloud Computing Panel will be held on 5/19/2010:

Microsoft CEO Steve Ballmer hails technology as the gift that keeps on giving. If technology is the gift, then Cloud Computing is the ribbon around it. Eric Schmidt, Chairman and Chief Executive of Google(R), calls cloud computing the ‘defining technological shift of our generation’, noting that its impact on technology and business may prove more significant than the PC revolution of the 1980s and that it is likely to affect a similarly wide range of professional disciplines.

Join us on May 19, 2010 as our panel of industry experts discuss the implications of the Cloud across all industry segments. Harnessing this momentous shift in how businesses leverage IT services falls on the CIO. He or she must effectively communicate Cloud Computing holistically to business leaders and IT professionals, approaching it from the perspective of both technology and traditional business models.

The Cloud keynote panel addresses the practicality and applications necessary to ensure that Cloud applications and services fit seamlessly into existing processes. Strategically, this objective can only be achieved for enterprise-level IT processes if applications running in the Cloud conform to established security and governance policies. Concurrently, they must integrate with existing systems and operational procedures without disrupting existing budgets.

The panelists are an all-star cast of experts who have had a big dose of industry experience, including:

  • Ms. Kirsten Wolberg is Chief Information Officer at Salesforce.com, leading the Information Technology organization responsible for building and maintaining the global technology infrastructure and business applications for all Salesforce.com employees and business units.
  • Mr. Michael Kirwan is Yahoo!’s Chief Information Officer. In this role, Mr. Kirwan has global responsibility for Yahoo!’s Corporate Systems group, which includes the IT Infrastructure, Corporate Applications, CRM and Premium Services Infrastructure teams. These teams ensure Yahoo!’s internal systems and billing / anti-fraud services are available 24/7.
  • Mr. Mark Forman leads KPMG’s Federal Performance and Technology Advisory Services, focusing on strategy, business transformation, governance, and technology initiatives across the federal government. Mr. Forman works closely with government leaders on major transformation initiatives in the defense, intelligence, and civilian agencies. He also leads the North American IT Advisory Services for the Government Industry, and represents the Americas on KPMG Global Government IT Advisory Services Leadership team.
  • Mr. Sanjay Mirchandani is the Senior Vice President and Chief Information Officer of EMC Corporation. As Chief Information Officer, Mr. Mirchandani is responsible for extending EMC’s operational excellence, and driving technological innovations to meet the current and future needs of the business.

Barnard invites you to:

Join your peers in shaping the future. Be a part of the MIT Sloan CIO Symposium, May 19, 2010. Learn and Register.

ICS Solutions announced presentation of its Windows Azure: the next big thing in Cloud computing from Microsoft - 3 Hour Seminar on 4/21, 5/5, 5/20 and 6/17/2010:

The Windows Azure Platform (Azure) is an internet-scale cloud services platform hosted in Microsoft datacenters, which provides an operating system and a set of developer services that can be used individually or together. Azure’s flexible and interoperable platform can be used to build new applications to run from the cloud or enhance existing applications with cloud-based capabilities.

Azure aligns with the Microsoft vision of Software-plus-Services by offering customers, developers, and businesses a transformation in connecting devices, business, productivity, and software. Azure provides developers with a quick on-ramp to authoring and running applications in the Cloud and has been marked as the next big thing in Cloud computing.

Register for one of ICS Solutions’ Azure seminars to be amongst the first people in the world to see how easy it is to build, deploy and manage applications in the Cloud and gain insight into the hottest platform currently available from Microsoft.

Brad Reed reports “Interop chief says cloud computing, virtualization and iPad will be talk of show” in his Cloud computing, virtualization top Interop agenda post of 4/12/2010 to NetworkWorld’s Data Center blog:

Last year's Interop was all about how to survive the economic apocalypse created by a housing bubble and a subsequent Wall Street meltdown. This year, Interop wants to help companies rise from the ashes and get themselves back to growth.

Interop general manager Lenny Heymann says that this year's show is going to try to give companies strategies they can use to revitalize their industries and spearhead an economic recovery in the United States. In this Q&A with Heymann, we discuss the major themes of this year's show, the impact of cloud computing on the enterprise and how the rise of the iPad might affect IT.

What are the big themes at this year's Interop?

The biggest theme is the revival of the tech economy itself. Our show is big enough and broad enough that it acts as a barometer for the health of the industry. Clearly things have improved since last year and the community will get a chance to take the industry's temperature themselves and see if the recovery is on. So the biggest theme this year, instead of focusing on individual technologies, is that the market is back on.

Moving on from there, we see that cloud computing is number one with a bullet in terms of the new action around it and the decisions that people in IT will have to make around cloud computing. Clearly cloud computing is a game changer and it's one technology that we think people need to know about. In fact on Wednesday morning our keynote speeches largely revolve around cloud computing. …

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Jason Kincaid reports Cloud Computing – Google Plans To Expand Cloud Computing Services in this 4/12/2010 TechCrunch post:

When you think of developer-focused web computing services, the first thing that probably comes to mind is Amazon’s hugely popular AWS, which includes S3 (storage) and EC2 (processing).  Google has its own web computing service — namely, Google App Engine — and the search giant is looking to significantly expand its offerings. During a roundtable discussion this afternoon at Google Headquarters, Dave Girouard, President of Google Enterprise, hinted at this, saying that Google was looking to give developers more value-added services in the cloud.

So what does that mean? Google doesn’t want to keep expanding into commoditized services like online storage. Instead, it wants to offer services that go beyond those, allowing developers to tap into Google technology that they haven’t previously had access to.  One example that came up during the roundtable was to give developers access to Google’s automated translation services, which can translate Email and webpages with a fair amount of accuracy almost instantly.

That’s about as detailed as the roundtable got, but we’ve heard elsewhere that Google is considering a variety of other value-added services.  One of these could include online video encoding as a service; another could focus on location/geo services.

It sounds like Google is still considering which features it wants to offer and we don’t have a timetable.  But it’s quite clear that we’ll be hearing more about this soon (and remember, Google I/O is just over a month away).

<Return to section navigation list> 
