Wednesday, March 10, 2010

Windows Azure and Cloud Computing Posts for 3/9/2010+

Windows Azure, SQL Azure Database and related cloud computing topics now appear in this weekly series.

 
• Updated 3/10/2010: Dare Obasanjo on RDBMS scalability; Chris Hoff on the “Abstraction is Distraction” issue with virtualized networking; Lori MacVittie discusses “the pay for what you consume concept”; many new Live Windows Azure Apps, APIs, Tools and Test Harnesses and Cloud Computing Events.

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single article you want to navigate.

Cloud Computing with the Windows Azure Platform published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock.)

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now download and save the following two online-only chapters in Microsoft Office Word 2003 *.doc format by FTP:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available from the book's Code Download page; these chapters will be updated for the January 4, 2010 commercial release in February 2010. 

Azure Blob, Table and Queue Services

Shaun Xu’s Azure - Part 4 - Table Storage Service in Windows Azure of 3/9/2010 is a detailed tutorial for using Windows Azure table storage services:

In Windows Azure platform there are [three] storage [services] we can use to save our data on the cloud. They are the Table, Blob and Queue. Before the Chinese New Year Microsoft announced that Azure SDK 1.1 had been released and it supports a new type of storage – Drive, which allows us to operate NTFS files on the cloud. I will cover it in the coming few posts but now I would like to talk a bit about the Table Storage.

Concept of Table Storage Service

The most common development scenario is to retrieve, create, update and remove data from data storage; normally we communicate with a database. When we move our application to the cloud, the most common requirement is a storage service. Windows Azure provides a built-in service for storing structured data, called the Windows Azure Table Storage Service.

The data stored in the table service is a collection of entities, and the entities are similar to rows or records in a traditional database. An entity has a partition key, a row key, a timestamp and a set of properties. You can treat the partition key as a group name, the row key as a primary key and the timestamp as the identifier used to resolve concurrency conflicts.

Unlike a table in a database, the table service does not enforce a schema, which means you can have two entities in the same table with different property sets.

The partition key is used by the Azure OS for load balancing and for entity group transactions. In the cloud you never know which machine is hosting your application and your data; it can be moved based on transaction weight and the number of requests. If the Azure OS finds that many requests hit your Book entities whose partition key equals “Novel”, it can move that partition to another, idle machine to increase performance. So when choosing the partition key for your entities, make sure it indicates the category or group information so that the Azure OS can load-balance as you wish. …
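For readers who want to see these concepts in code, here is a minimal sketch using the StorageClient library from the Azure SDK 1.0/1.1. The BookEntity class, its properties and the “Books” table name are illustrative assumptions that follow Shaun’s Book/“Novel” example rather than his actual code:

using System;
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// An entity picks up PartitionKey, RowKey and Timestamp from TableServiceEntity.
public class BookEntity : TableServiceEntity
{
    public BookEntity() { }

    public BookEntity(string category, string isbn)
    {
        PartitionKey = category; // e.g. "Novel", the grouping/load-balancing key
        RowKey = isbn;           // unique within the partition, like a primary key
    }

    // Arbitrary property set; two entities in the same table can differ here.
    public string Title { get; set; }
    public string Author { get; set; }
}

public class BookContext : TableServiceContext
{
    public BookContext(string baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials) { }

    // Filtering on PartitionKey keeps a query inside a single partition, which
    // is what lets the fabric move and load-balance partitions independently.
    public IQueryable<BookEntity> Novels
    {
        get
        {
            return CreateQuery<BookEntity>("Books")
                   .Where(b => b.PartitionKey == "Novel");
        }
    }
}

Against the development fabric you would construct the context from CloudStorageAccount.DevelopmentStorageAccount, passing its TableEndpoint and Credentials properties.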

The preceding members of Shaun’s series are parts 1 through 3; the four episodes (so far) are detailed and fully illustrated.

<Return to section navigation list> 

SQL Azure Database (SADB, formerly SDS and SSDS)

Brian Hitney’s Ip2Location (and IPinfoDB) Performance Tips post of 3/10/2010 provides suggestions for improving performance of IP location lookup operations in SQL Azure, which also applies to the SaaS versions:

I’ve done a number of talks lately on Worldmaps and typically in side conversations/emails, people are curious about the databases and converting IP addresses to geographic locations.   And, often when you dive into using the data, it seems there are a number of performance considerations and I thought I’d share my input on these topics.

First up, the data.  Worldmaps uses two databases for IP resolution.  The primary/production database is Ip2Location.  I’ve found this database to be very accurate.  For development/demo purposes, I use IPinfoDB.  I haven’t had too much time to play with this database yet, but so far seems accurate also.   The latter is free, whereas Ip2Location is not.

In either case, the schema is nearly identical:

The BeginIp and EndIp columns are a clustered primary key.  In the case of IPinfoDB, there is no EndIp field (and it’s not really needed).  When performing a resolution, a string IP address is converted into a 64 bit integer and then used in searching the table.  That’s why having a clustered key on the BeginIp (and optionally EndIp) is crucial to performance.

Brian continues with times for an SQL Azure clustered index seek without and with hints and with joins to a Countries table.
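For context, the string-to-integer conversion Brian mentions is simple bit arithmetic. A hedged C# sketch follows; BeginIp and EndIp come from his description, while the table name and selected columns in the commented query are assumptions:

using System;
using System.Net;

static class IpLookup
{
    // Convert a dotted-quad IPv4 address to the integer form stored in the
    // BeginIp/EndIp columns, e.g. "1.2.3.4" -> 16909060. (IPv4 only.)
    public static long ToIpNumber(string ipAddress)
    {
        byte[] octets = IPAddress.Parse(ipAddress).GetAddressBytes(); // network (big-endian) order
        return ((long)octets[0] << 24) | ((long)octets[1] << 16)
             | ((long)octets[2] << 8)  |  (long)octets[3];
    }
}

// A seek against the clustered key then looks something like:
//   SELECT TOP 1 CountryCode, Region, City
//   FROM dbo.IpToLocation
//   WHERE @ipNumber BETWEEN BeginIp AND EndIp
//   ORDER BY BeginIp;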

Note: Ip2Location’s demo service shows my fixed IP address (AT&T DSL) as San Francisco (94101), a few miles from my actual location in Oakland (94610). IPinfoDB’s service doesn’t return a location. See also Lori MacVittie’s The IP Address - Identity Disconnect post of 3/4/2010 in my Windows Azure and Cloud Computing Posts for 3/4/2010+ post.

Dare Obasanjo’s Building Scalable Databases: Are Relational Databases Compatible with Large Scale Websites? surveys recent blog posts such as Todd Hoff’s MySQL and Memcached: End of an Era? and Dennis Forbes’ Getting Real about NoSQL and the SQL-Isn't-Scalable Lie, which I covered, together with responses, in my recent Windows Azure and Cloud Computing Posts for 3/4/2010+ post:

… There’s lots of good food for thought in both blog posts. Todd is right that a few large scale websites are moving beyond the horizontal scaling approach that Dennis brought up in his rebuttal based on their experiences. What tends to happen once you’ve built a partitioned/sharded SQL database architecture is that you notice you’ve given up most of the features of an ACID relational database. You give up the advantages of relationships by eschewing foreign keys, triggers and joins since these are prohibitively expensive to run across multiple databases. Denormalizing the data means that you give up on Atomicity, Consistency and Isolation when updating or retrieving results. In the end all you have left is that your data is Durable (i.e. it is persistently stored), which isn’t much better than what you get from a dumb file system. Well, actually you also get to use SQL as your programming model, which is nicer than performing direct file I/O operations. …

Dare continues with references to switches by Digg and Twitter to the Apache Cassandra database and concludes:

… I expect we’ll see more large scale websites decide that instead of treating a SQL database as a denormalized key-value pair store they would rather use a NoSQL database. However I also suspect that a lot of services who already have a sharded relational database + in-memory cache solution can get a lot of mileage from more judicious usage of in-memory caches before switching. This is especially true given that you still need caches in front of your NoSQL databases anyway. There’s also the question of whether traditional relational database vendors will add features to address the shortcomings highlighted by the NoSQL movement. Given that the sort of companies adopting NoSQL are doing so because they want to save costs on software, hardware and operations, I somehow doubt that there is a lucrative market here for database vendors versus adding more features that the banks, insurance companies and telcos of the world find interesting.

Users of SQL Azure continue to wait for Microsoft’s promised best sharding practices white paper for “SQL Server in the Cloud” and an increase in maximum database size from 10 GB to “tens of gigabytes.” In the interim, secondary indexes on Windows Azure tables would be welcome, too.

<Return to section navigation list> 

AppFabric: Access Control and Service Bus

The Windows Azure Platform AppFabric team is Announcing upcoming commercial availability of Windows Azure platform AppFabric on 4/9/2010 in this 3/10/2010 post:

We are happy to share that starting April 9, 2010, Windows Azure platform AppFabric will be commercially available on a paid and fully SLA-supported basis.  

In order to help you get familiarized with AppFabric billing, we released the AppFabric Billing Preview yesterday March 9. With this billing preview, you will be able to download a usage summary from the AppFabric Developer Portal, with information similar to that of the "daily usage summary" currently available to you for Windows Azure and SQL Azure.

If you are already using AppFabric, we will begin charging as of April 9, 2010 at 12:00 AM GMT. You can find our pricing for AppFabric here. You can also go to the pricing section of our FAQs for additional information as well as our pricing blog post. If you do not desire to be charged for AppFabric, please remove any applications that are currently utilizing these services prior to this date. 

The Windows Azure Platform AppFabric team writes in its Announcing availability of Billing Preview for Windows Azure platform AppFabric post of 3/9/2010:

Here’s a ‘Thank You’ to all our customers and partners, for creating a Windows Azure platform AppFabric commercial account. The trust you have placed in the Windows Azure platform is tremendously appreciated. As you may remember from our previous announcement, AppFabric will become fully SLA-supported in April, and your account will begin to accrue charges.

In order to help prepare you for the onset of billing, today we are releasing the AppFabric Billing Preview. To use this billing preview, at any time you may visit the AppFabric Developer Portal and download a daily AppFabric usage summary, which will have information similar to that currently available for Windows Azure and SQL Azure.  The report will summarize your usage of AppFabric starting today. The Billing Preview will be available until you start seeing actual billing charges in April. We hope you find the billing preview useful in understanding how you will be charged when actual billing starts.

You can find our pricing information here. For additional detail, you can also go to the pricing section of our FAQs as well as our previous pricing blog post. If you have feedback on the billing preview or need further information, please visit our Discussion Forum.

Eugenio Pace (@eugenio_pace) announced A Guide to Claims based Identity – Released - The strategy behind it and our plans on 3/8/2010:

As most of you know, the Guide for Claims based Identity is officially released. We’ve been “technically done” for a couple of months, but it just takes some time for all content to be pushed to MSDN, an ISBN to be approved, the final PDF to be ready for publishing and the process with the printer to be started.

So now that this is done I wanted to take an opportunity to share with you the reasons we invested in this guide and our plans moving forward.

There should be little doubt that Microsoft is betting heavily on “the cloud”. We knew this for quite a while, way before Windows Azure was even called that. So one key question we asked ourselves at that time was: what can we do today to help customers prepare for that? And “today” is a very important constraint. Back in June 2009, we knew Azure was coming, but features and implementation details were constantly changing. Investing in Azure specifics would not have been wise, as the shelf-life of our deliverables would have been very short.

Looking at various scenarios for application development, it became quite clear to us that identity management was a pretty basic thing that you had to get right before considering serious development on the cloud, especially if you are a company with quite a bit of on-premises investment considering moving some of it to the cloud. And this is a big segment of our customers.

So you take this, you add the fact that key technologies were in the last phase of being released (e.g. WIF and ADFS), and you now see why writing a guide on claims based identity made sense to me… It’d be a small, simple deliverable, but a key stepping stone to our work on the cloud. …

Eugenio continues with the patterns & practices group’s plans for Azure and claims-based Id over the next six months.

Ajoy Krishnamoorthy (@ajoyk) interviews Eugenio Pace in this 00:13:14 p&p Claims Identity and Access Control Guide is now available Channel9 video segment of 3/3/2010:

Back in October, Eugenio gave us an overview of the Claims Identity and Access Control guide. At that time the team was just getting started with the project. You can see that video here.

We are thrilled to announce the availability of the final version of this guide. In this video, Eugenio gives us a quick overview of the layout of the guide and the scenarios covered in this guide.

Ajoy and Eugenio are members of Microsoft’s patterns & practices group.

<Return to section navigation list>

Live Windows Azure Apps, APIs, Tools and Test Harnesses

• Dave Bailey’s Users give their verdict on Azure post of 3/10/2010 quotes Rob Fraser, co-founder and software architect, RiskMetrics: “There’s no genuine log integration in Azure, so we’ve had to engineer a system that deals with those log files ourselves”:

Some of the first wave of UK adopters met in London recently to air their views on Microsoft’s cloud computing platform. Dave Bailey listened in.

The competition within the cloud computing market is hotting up, and as it is between some of the biggest names in software, all eyes are on this space.

Google, Amazon and Microsoft are in on the game, with the latter’s Azure being the newest entrant in the market.

The service went live in early January, becoming available to firms on a commercial basis in February. The platform comprises three elements: Windows Azure itself, an online operating system; SQL Azure, a cloud database solution; and the Azure platform AppFabric, which connects cloud services to on-premise applications.

So what do the early adopters think of the platform? Computing attended a recent roundtable event, where several customers discussed some of the problems they were facing with the platform.

Dave continues with Azure pros and cons from early UK adopters.

• Dario Solera announces that Threeplicate Srl is Launching Amanuens Beta in this 3/10/2010 post. Amanuens is a Windows Azure-based translation service for localizing .NET RESX files:

Amanuens is a web application that allows you, or your translators, to translate RESX files. The peculiarity of our approach is that the application talks directly with your Subversion repository. Moreover, translations are done entirely in the web browser, without the need of any desktop application such as Microsoft Visual Studio. On the Amanuens website there is a 2-minute introductory video that will help you understand how it works and how it can help you and your team.

Amanuens was born to simplify the localization process of ScrewTurn Wiki. You know, sending RESX files via email, waiting weeks, and then (trying) to merge the translated resources is just plain ugly and error prone. We refined the application a bit, and we received good feedback from some STW contributors, so we decided to build a service for everyone.

I have to admit that I did not realize the potential of such an approach until Joannes Vermorel, founder of Lokad, mentioned the need for a continuous localization process. He described the process in a slightly different context, but I think that we, as software developers, are used to continuous* processes (most notably continuous integration). Why not strive for continuous localization of software UIs too? …

We use a lot of open-source applications on a daily basis, and they’re of great help for our work. For this reason, we’re committed to make Amanuens totally free for open-source projects.

Tech Stuff

And now a few juicy technical details for all the geeks who might come across this post.

Amanuens is built entirely in ASP.NET MVC 1.0, with C# code. It runs on Windows Azure and makes use of both SQL Azure and Blob Storage. I am personally very happy with ASP.NET MVC 1.0, as it feels like fresh air compared to Web Forms. As for Windows Azure, it still has quite some rough edges, but I think Microsoft is going in the right direction. After all, it’s a 1.0 product, so we can expect a lot of improvements in the near future.

Windows Azure is an infinitely-scalable Platform as a Service, but what about the application? Well, Amanuens is built in a totally stateless way and we do not even use ASP.NET’s SessionState. This means that the application can scale horizontally very well. The first probable bottleneck will be the SQL database (as usual), but I expect that SQL Azure can handle loads of traffic without problems. At any rate, we constantly monitor the response time of the application, and we’ll be able to migrate data off the database in case it becomes necessary.
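Dario doesn’t show code, but the blob-storage half of that stack is straightforward with the 1.x StorageClient library. Here’s a hedged sketch (not Amanuens’ actual code; the container/file naming and connection string are placeholders) of stashing an uploaded RESX file in blob storage so any stateless web role instance can serve the next request:

using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

static class ResxBlobStore
{
    // Persist the RESX content in a blob rather than in session state or on
    // local disk, keeping the web role stateless and horizontally scalable.
    public static void Save(string connectionString, string project, string fileName, string resxXml)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobContainer container = account.CreateCloudBlobClient()
            .GetContainerReference(project.ToLowerInvariant()); // container names must be lowercase
        container.CreateIfNotExist();

        container.GetBlobReference(fileName).UploadText(resxXml);
    }

    public static string Load(string connectionString, string project, string fileName)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        return account.CreateCloudBlobClient()
            .GetContainerReference(project.ToLowerInvariant())
            .GetBlobReference(fileName)
            .DownloadText();
    }
}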

• Nuno Silva continues his Azure Services tutorial with part 3, A simple, yet interesting scenario: creating the silverlight client, and adds a link to a live demo on 3/9/2010:

After creating my WCF service and hosting it on Azure, it’s time to take care of the last of my requirements: consuming it in a Silverlight control. Before I go into the Silverlight code, let me point out that I had to make a couple of changes to my Windows Azure service: one regarding Silverlight consumption, the other regarding hosting the service in a reachable manner.

As you are probably aware, you can’t access external (different from the current domain) URLs from a Silverlight application unless you configure it to run on the desktop with elevated privileges (which is always an option that your users must allow). In this case, my requirement clearly states that this Silverlight application will run in the context of a browser, so that is not an option for me. In order for Silverlight to access my service, I added a file called ClientAccessPolicy.xml in the root of my service application in Windows Azure. … [Sample code omitted for brevity.]

This basically allows this service to be called from everywhere. You might want to restrict things a bit more, but in my case it will do perfectly.

The second change I made to the service was adding a new service behavior configuration. It turns out I couldn’t add a service reference to my service hosted in Azure. I didn’t have that error before because I built my console client using the service on my local machine and then simply changed the endpoint to target my hosted service. I found this article that explains the problem I was having:

http://code.msdn.microsoft.com/wcfazure/Wiki/View.aspx?title=KnownIssues

After that, I could successfully add a service reference to my service in Visual Studio. So, I guess it’s time to start developing some Silverlight code. A warning first though, I’m the most terrible designer you are likely to meet in the near future so I didn’t try to do anything too fancy, except for a couple of simple animations that will run as we do the initial load of information. If I had enough time I could probably make this a whole lot prettier and more “Silverlightish” but the truth is I can’t afford to spend too much time on this. And to be honest, the main purpose of this is to show how easy it is to connect to a WCF service in Azure, so there is no point in spending too much time in the UI design. …
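To underline Nuno’s point about how little client code is involved, here’s a hedged sketch of calling an Azure-hosted WCF service asynchronously from Silverlight. The MyAzureServiceClient proxy, its GetData operation and the ResultTextBlock control are hypothetical stand-ins for whatever “Add Service Reference” generates against your own service and whatever your XAML defines:

using System;
using System.Windows;
using System.Windows.Controls;

public partial class MainPage : UserControl
{
    public MainPage()
    {
        InitializeComponent();

        // Hypothetical proxy generated by "Add Service Reference".
        var client = new MyAzureServiceClient();

        // Silverlight only exposes the event-based async pattern; the completed
        // event is raised back on the UI thread, so touching controls here is safe.
        client.GetDataCompleted += (sender, e) =>
        {
            if (e.Error != null)
            {
                MessageBox.Show("Service call failed: " + e.Error.Message);
                return;
            }
            ResultTextBlock.Text = e.Result; // assumed TextBlock in the XAML
        };

        client.GetDataAsync(42);
    }
}

If the call fails with a cross-domain error, the ClientAccessPolicy.xml file Nuno describes is the first thing to check.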

David Linthicum asks 2010's big cloud question: Where should I run my application? and writes “Now you have to deal with the location for development and deployment, whether you go traditional, public, or private” in his 3/9/2010 post to InfoWorld’s Cloud Computing blog:

I enjoyed Bill Claybrook's recent article "Cloud vs. in-house: Where to run that app?" While he covered the basic host-versus-outsource decisions, as well as the new architectural options of public and private clouds, what was most interesting is the dilemma many in IT are finding around having new choices.

Let's limit this question to new applications: Where should you build them? On-premise using traditional platforms? Public cloud? Or private cloud? While the questions in the past were just around the hardware, software, and development platform, now you have to deal with the location for development and deployment, and whether you go traditional, public cloud, or private cloud. …

The best approach that I've found is to consider your requirements first, and see if you can knock a few platform contenders off the list. You need to examine -- in order -- security, privacy, compliance, performance, and then features and functions of the platform. …

Dave continues with specific recommendations.

Lewis Benge describes Building a Distributed Commerce Infrastructure in the Cloud using Azure and Commerce Server in this 3/9/2010 post:

One of the biggest questions I routinely get asked is how scalable Commerce Server is. Of course the text book answer is that the product has been around for 10 years and powers some of the largest e-Commerce websites in the world, so it scales horizontally extremely well. One argument, though, is what if you can't predict the growth of demand required of your Commerce Platform, or need the ability to scale up during busy seasons such as Christmas for a retail environment, but are hesitant about maintaining the infrastructure on a year-round basis? The obvious answer is to utilise the many elastic cloud infrastructure providers that are establishing themselves in the ever-growing market; the problem, however, is that Commerce Server is still a product with a legacy, tightly coupled dependency on Windows and IIS components.

Commerce Server 2009 codename "R2", however, introduced the concept of an n-tier deployment of Microsoft Commerce Server, meaning you are no longer tied to the core objects API but instead have serializable Commerce Entity objects and business logic, allowing Commerce Server to be built into a WCF-based SOA architecture. Presentation layers no longer need to remain on the same physical machine as the application server, meaning you can now build the user experience with multiple technologies and host it in multiple places – leveraging the transport benefits that a WCF service may bring, such as message queuing, security, and multiple endpoints. All of this logic will still need to remain in your internal infrastructure, for two reasons. Firstly, cloud-based computing infrastructure does not support PCI security requirements, and secondly, even though many of the legacy Commerce Server dependencies have been abstracted away within this version of the application, it is still not fully supported to deploy it exclusively into the cloud.

If you do wish to benefit from the scalability of the cloud, however, you can still achieve a great Commerce Server and Azure setup by utilising the Azure AppFabric (its Service Bus and authentication services) together with Windows Azure to host any online presence you may require. The architecture would be something similar to this:

This setup would allow you to construct your Commerce Services as part of your on-site infrastructure. These services would contain all of the channel’s custom business logic and provide the overall interface back into the underlying Commerce Server components. It is recommended that services be constructed around the specific business domain of the application, which based on your business model would usually consist of separate services around Catalogue, Orders, Search, Profiles, and Marketing. …
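As a rough illustration of the on-premises half of that picture, here’s a hedged sketch of exposing an internal catalogue service through the AppFabric Service Bus relay. The contract, the service namespace and the issuer credentials are all placeholders, and the binding and credential types should be checked against the AppFabric SDK release you’re using:

using System;
using System.ServiceModel;
using Microsoft.ServiceBus; // from the Windows Azure platform AppFabric SDK

[ServiceContract]
public interface ICatalogService
{
    [OperationContract]
    string GetProductName(string sku); // placeholder operation
}

public class CatalogService : ICatalogService
{
    public string GetProductName(string sku)
    {
        // A real implementation would call into Commerce Server's catalog system;
        // hard-coded here to keep the sketch self-contained.
        return "Sample product for " + sku;
    }
}

class Program
{
    static void Main()
    {
        // Relay endpoint in the Service Bus; "contoso-commerce" is a placeholder namespace.
        Uri address = ServiceBusEnvironment.CreateServiceUri("sb", "contoso-commerce", "Catalog");

        var credentials = new TransportClientEndpointBehavior
        {
            CredentialType = TransportClientCredentialType.SharedSecret
        };
        credentials.Credentials.SharedSecret.IssuerName = "owner";            // placeholder
        credentials.Credentials.SharedSecret.IssuerSecret = "base64-secret";  // placeholder

        var host = new ServiceHost(typeof(CatalogService));
        var endpoint = host.AddServiceEndpoint(typeof(ICatalogService), new NetTcpRelayBinding(), address);
        endpoint.Behaviors.Add(credentials);

        host.Open(); // opens an outbound connection to the relay; no inbound firewall holes needed
        Console.WriteLine("Catalog service listening on " + address);
        Console.ReadLine();
        host.Close();
    }
}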

Scott Densmore describes My Azure Setup for his part in p&p’s Azure Guidance project in this 3/9/2010 post:

The first thing I did to get started on our new Azure Guidance project is setup the environment. Since I always seem to forget or people keep asking me what my setup is I thought I would write it down so I don't have to keep reminding myself over and over.

The Tools

Not all of the tools are required to work with Azure, yet they sure help. Another one that I will mention just because it makes it easier to work with SQL Azure is SQL Express 2008 R2.

I already have Windows 7, which ships with PowerShell. The only things you really need to install are Visual Studio 2008 SP1 and the PowerShell cmdlets. You can install all the tools by hand and configure IIS so that it will run Azure projects, or you can get the Web Platform Installer (WPI). I am assuming most will want the easy way.

Once you have the WPI set up and open, there are options to show other tabs. Go into the options and choose the Developer Tools tab. Once you have the Developer Tools tab open, you will see an option for Visual Studio Tools. The Azure Tools for Visual Studio are listed there and you can check them. WPI will pick all the dependencies and set everything up for you. This makes life much easier. …

Scott continues with more details on his setup. He’s a member of Microsoft’s patterns & practices group.

Ajoy Krishnamoorthy (@ajoyk) describes patterns & practices for Windows Azure in this 3/8/2010 post:

“It's the next step, it's the next phase, it's the next transition,…” – Steve Ballmer said that about Cloud Computing during his speech last week at University of Washington. You can watch web cast on demand here or read the transcript of his speech here.  At patterns & practices, what are we doing to provide guidance for the Cloud?

Eugenio, Scott and several members from our team have just started working on a project that will take an iterative approach to provide scenario based guidance for Windows Azure. Scott wrote an excellent blog post detailing our plan.

Excerpts from Scott’s blog post:

The program will follow three major themes:

  1. Moving to the Cloud
  2. Integrating with the Cloud
  3. Leveraging the Cloud

These themes will allow us to categorize the scenarios that we will be delivering. Most of the scenarios are challenges that customers face today. And as we progress the program forward we will have more scenarios. The following is a simple picture of our plans on delivering on this project.

[Figure: project delivery plan diagram]

One of the questions we get asked is about the availability of Enterprise Library for Windows Azure. As Scott explains in his blog, that is one of the first things the team is tackling. We are in the process of validating and identifying any changes that are needed to get Enterprise Library to work on Windows Azure. We will share more information on this shortly. …

Mikael Ricknäs reports “the company is transforming from a system integrator to 'a service assembler,' Capgemini said” in his Capgemini to offer cloud help with new unit story of 3/8/2010 for IDG News Service:

The technical services group of Capgemini has traditionally helped companies with system integration, but cloud computing is changing that. The company is increasingly assembling lots of different software-as-a-service applications, a phenomenon that has led Capgemini to create a new business unit.

"We have a very large blue-chip firm in the U.K. for whom we have just assembled 16 different software-as-a-service solutions to provide the overall shop window," said David Boulter, vice president of Capgemini's new Infostructure Transformation Services, which launched on Monday.

Service assembly has gone from a niche to something more mainstream, according to Boulter. The integration of software-as-a-service applications is more of a business issue than a technical one – for example, providing a single SLA (Service Level Agreement) for the client, Boulter said.

If done successfully, service assembly will help companies differentiate themselves from the competition and cut the time it takes to develop new offerings, according to Boulter. …

The Windows Azure Team’s Real World Windows Azure: Ben Riga at Channel 9 post of 3/9/2010 promotes Ben’s earlier interview of Rob Fraser of the RiskMetrics Group:

Ben Riga, a platform advisor on the Windows Azure evangelism team, has begun publishing a new series of video interviews with customers and partners that are building on the Windows Azure platform today.  In this series Ben interviews technical leaders and explores how companies are using Windows Azure, Microsoft SQL Azure, and other components, such as Windows Azure AppFabric – giving you the chance to apply their lessons learned to your own development efforts.

Jeff Widmer begins a three-part illustrated tutorial with Getting Started with Windows Azure: Part 0 – Where do I go to get started? of 3/2/2010:

I am trying to get started with Windows Azure for the first time so I am going to dive in and see how things go.  Follow along as I learn all about Windows Azure.

Part 0 - Where do I go to get started? <<< YOU ARE HERE
Part 1 - Setting up your development environment
Part 2 - Creating a Windows Azure Hello World Application

The first thing I did was go to the Windows Azure Platform home page.  There are several different parts to the Windows Azure Platform (Windows Azure, SQL Azure, AppFabric) which I still have to learn about, but since I wasn’t quite ready for those I watched the high-level “What is Windows Azure” video.  It is a great video by Steve Marx to get you oriented with Windows Azure and I highly recommend it. …

Jeff continues with the details of using the Windows Azure Developer Portal.

Aleksey Savateyev’s UPDATE: Porting Silverlight RIA to Windows Azure (Data Access) post begins:

Given all the latest changes to SDKs and tools, I think a significant update is needed for my old blog post on creating a data access layer for Azure Table Storage in Silverlight (Porting Silverlight RIA to Windows Azure - Part 1). It is now part of a whitepaper I'm working on, soon to be released on MSDN.

Note: Steps 1-3 represent one server-side model class, step 4 represents the server-side data service class, and steps 5-7 represent a third, client-side view model class.

Aleksey is a Microsoft architect working with global ISVs on building software based on new and emerging technologies, mostly in the Web and S+S space.

<Return to section navigation list> 

Windows Azure Infrastructure

Lori MacVittie writes “Or Why Carr’s Analogy is Wrong. Again.” in her If I Had a Hammer … post of 3/10/2010:

Nicholas Carr envisioned compute resources being delivered in a means similar to electricity. Though providers and consumers alike use the terminology to describe cloud computing billing and metering models, the reality is that we’ve just moved from a monthly server hosting model to a more granular hourly one, and the delivery model has not changed in any way as we’ve moved to this more “on-demand” model of IT resources.

There’s very little difference between choosing amongst a list of virtual “servers” and a list of physical “servers” with varying memory capacity and compute power. Instead of choosing “Brand X Server with a specific memory and CPU spec”, you’re choosing “generic image with a specific memory and CPU spec.” You are still provisioning based on a concrete set of resources, though arguably the virtual kind can be much more easily modified than its physical predecessors. Still, you are provisioning – and ultimately paying – for a defined set of resources and you’re doing so every hour that it remains active. You may provision the smallest amount of resources possible as a means to better perform capacity planning and keep costs lower, but you’re still paying for unused resources no matter how you slice it (pun intended).

PAY ONLY for WHAT you USE NEED

The pay for what you consume concept doesn’t actually apply to cloud computing today unless you look at “use” from the application or virtual machine point of view, and even then it breaks down. True, the application is using resources as long as it is powered on, but it’s not using all the resources it is allocated (likely) and thus you aren’t paying for what you use, you’re paying for the minimum you need. The difference is significant. Bandwidth can be metered in a way similar to electricity and its delivery model is almost exactly the same as the electrical grid. You can, in fact, deliver and consume bandwidth based on the same model – pay only for what you use. We don’t, but we could. 

Lori continues with a more detailed analysis of the issue, which most Windows Azure and many SQL Azure subscribers have faced since billing began in February 2010: developers are charged for Windows Azure compute time regardless of whether their application is being accessed or not. This is unlike Google App Engine, which charges only for app execution and offers no-charge thresholds for compute time, persistent storage, and data ingress/egress.

• Chris Hoff (@Beaker) claims “Abstraction is distraction” in the network stack for virtualized environments in his Incomplete Thought: The Other Side Of Cloud – Where The (Wild) Infrastructure Things Are… post of 3/9/2010 that includes a pointer to Cisco Systems’ new 322 Terabit per second switch:

While our attention has turned to the wonders of Cloud Computing — specifically the elastic, abstracted and agile delivery of applications and the content they traffic in — an interesting thing occurs to me related to the relevancy of networking in a cloudy world:

All this talk of how Cloud Computing commoditizes “infrastructure” and challenges the need for big iron solutions, really speaks to compute, perhaps even storage, but doesn’t hold true for networking.

The evolution of these elements runs on different curves.

Networking ultimately is responsible for carting bits in and out of compute/storage stacks.  This need continues to reliably intensify (beyond linear) as compute scale and densities increase.  You’re not going to be able to satisfy that need by trying to play packet ping-pong and implement networking in software only on the same devices your apps and content execute on.

As (public) Cloud providers focus on scale/elasticity as their primary disruptive capability in the compute realm, there is an underlying assumption that the networking that powers it is magically and equally as scaleable and that you can just replicate everything you do in big iron networking and security hardware and replace it one-for-one with software in the compute stacks.

The problem is that it isn’t and you can’t.

Cloud providers are already hamstrung by how they can offer rich networking and security options in their platforms given architectural decisions they made at launch – usually the pieces of architecture that provide for I/O and networking (such as the hypervisor in IaaS offerings.)  There is very real pain and strain occurring in these networks.  In Cloud IaaS solutions, the very underpinnings of the network will be the differentiation between competitors.  It already is today.

See Where Are the Network Virtual Appliances? Hobbled By the Virtual Network, That’s Where… or Incomplete Thought: The Cloud Software vs. Hardware Value Battle & Why AWS Is Really A Grid… or Big Iron Is Dead…Long Live Big Iron… and I Love the Smell Of Big Iron In the Morning.

With the enormous I/O requirements of virtualized infrastructure, the massive bandwidth requirements that rich applications, video and mobility are starting to place on connectivity, Cloud providers, ISPs, telcos, last mile operators, and enterprises are pleading for multi-terabit switching fabrics in their datacenters to deal with load *today.*

I was reminded of this today, once again, by the announcement of a 322 Terabit per second switch.  Some people shrugged. Generally these are people who outwardly do not market that they are concerned with moving enormous amounts of data and abstract away much of the connectivity that is masked by what a credit card and web browser provide.  Those that didn’t shrug are those providers who target a different kind of consumer of service.

Abstraction has become a distraction.

Raw networking horsepower is still a huge need, especially for those who have to move huge amounts of data between all those hyper-connected cores running hundreds of thousands of VMs or processes.

Hoff’s new title appears to be “Senior Director, Cloud Security, Cisco Solutions” (see Cloud Computing Events). He also recorded a 00:04:54 Cloud Computing in Government Insights by Chris Hoff @ Cisco Govmt Solutions Forum video segment on 3/2/2010:

Chris Hoff provides a wrap up on his cloud session at the Cisco Government Solution Forum, including insights on the benefits (cost savings, agility, flexibility, and collaboration) cloud computing...

Beaker says in a 3/10/2010 tweet:

@rogerjenn It's Director of Cloud & Virtualization Solutions (Security Technology Business Unit, Cisco) ;) Thx.

• Mike Wickstrand requested current Azure customers by email on 3/10/2010 to complete a brief Windows Azure 2010 Q1 Customer Satisfaction Survey:

Here’s what you can do today to make your voice heard – click on the link below to participate in the Windows Azure 2010 Q1 Customer Satisfaction Survey. We place a high value on your feedback and the time you invest in sharing it. We want you to know that we carefully read every response and share what we learn with the entire Windows Azure Team.

http://www.zoomerang.com/Survey/?p=WEB22AC8QQA3Z7

As incentive for completing the survey, I’ll distribute $50 American Express Gift Cards to 10 randomly selected people who enter the survey sweepstakes! Of course, if you have any questions, comments or suggestions, please don’t hesitate to contact me.

Mike is the Windows Azure Platform’s senior director for product planning.

Lori MacVittie asserts “Thought those math rules you learned in 6th grade were useless? Think again… some are more applicable to the architecture of your data center than you might think” in her The Order of (Network) Operations post of 3/9/2010:

Remember back when you were in the 6th grade, learning about the order of operations in math class? You might recall that you learned that the order in which mathematical operators were applied can have a significant impact on the result. That’s why we learned there’s an order of operations – a set of rules – that we need to follow in order to ensure that we always get the correct answer when performing mathematical equations.


  • Rule 1:   First perform any calculations inside parentheses.
  • Rule 2:   Next perform all multiplications and divisions, working from left to right.
  • Rule 3:   Lastly, perform all additions and subtractions, working from left to right.

Similarly, the order in which network and application delivery operations  are applied can dramatically impact the performance and efficiency of the delivery of applications – no matter where those applications reside. …

HERE COMES the SCIENCE MATH

Let’s do some math to prove our theory, shall we? Consider the following “table” of the time it takes to execute certain network operations. Note that these are completely arbitrary in that they do not represent actual performance statistics, though the values are relative to one another based on real metrics. The actual time to execute a given operation will be highly dependent on load and the device performing the operation, thus it will be variable. However, what is static is that each operation will consume “time” on a given system to execute, and this table is designed to represent that basic truism.

Architecture #1


Let’s assume for a moment that our architecture is simple: two network devices, both will need to inspect the payload to apply security or routing policies, and an application. Assuming that the application is responsible for compression and SSL operations, this means that on the ingress (inbound) requests, both network devices must necessarily decrypt and then re-encrypt the request in order to apply policies. The application, because it is assuming it handled the SSL, also needs to decrypt.

Based on our completely arbitrary and fictitious table of operational costs, this means the time to execute on ingress is: SSL: 25 units + Compression: 9 units + Inspection: 14 units = 48 units and our total CPU cycle utilization is: SSL: 50 units + Compression: 21 units + Inspection: 16 units = 87 units

On egress (outbound) our total time to execute will be: SSL: 25 units + Compression: 15 units + Inspection: 14 units = 54 units and total CPU cycle utilization at: SSL: 50 units + Compression: 35 units + Inspection: 16 units = 101 units

Our total time to execute 1 transaction using this order of operations is 102 units with a total CPU cycle utilization of 188 units. Now let’s compare that with a more strict order of operations in the architecture, delegating responsibility for compression and SSL operations to Network Device #1.

Lori then applies her analysis to a second architectural sample and concludes:

Rule 1:   Offload all cryptographic or obfuscating (like compression) functions to the last device in the delivery network which needs to inspect the payload to reduce the impact of redundant operations.

Stephen Forte’s Yet another Windows Azure billing blog post…. of 3/9/2010 is an impassioned plea for a low- or no-cost developer-only account:

By now there have been a lot of blog posts on Windows Azure billing. I have stayed out of it since I figured that the billing scheme would generate some sticker shock on our end and some rethinking on Microsoft's end. For the most part it has, but I now want to tell my story since I think most early Azure users are thinking along my lines.

When Windows and SQL Azure went live, I wanted to deploy an application using some of Telerik’s products to “production”. I put my free MSDN hours into the Azure system for billing and uploaded the application. I actually could not get it to work and left it up there, figuring I would get back to it and fix it later. Periodically I would go in and poke around with it, and I eventually fixed it. For the most part I had nothing more than an advanced “Hello World” and simple Northwind data over forms via SQL Azure up there.

Recently, I received a bill for $7 since I went over my free 750 hours by about 65 hours. (I guess I had a test and a production account running at the same time for a while.) Even though for the most part I had no hits other than myself a few times, I still incurred charges since I left my service “live” in production. My bad; I learned a lesson as to how Azure works, and luckily it was only a $7 lesson. …

That said, I was in Redmond a month or two ago and had a chance to talk to the head of MSDN. I complained about how the MSDN subscription offer was only for 8 months, etc. He told me that for the first time in Microsoft’s history, they have hard physical assets that have to be paid for with this service. It is not like giving me a free copy of Windows, which does not cost Microsoft anything except the bandwidth for me to download it (a fixed cost). I get that, and I am sure that there will be a cost-effective MSDN-Azure “developer only” subscription option in the future. Or at least there should be. :)

I agree with Stephen. Microsoft will regret its lack of a low- or no-cost developer account for Windows Azure and Windows Azure AppFabric as Google builds developer mind-share for the App Engine with its free quotas for small projects.

Everest Group reports Everest ITO Study: Cloud Computing Can Benefit Traditional Enterprise Setups, Offers Little Gain for Virtualized Centers in this 3/9/2010 press release:

Cloud computing as an IT Outsourcing (ITO) strategy presents a viable business case for companies with traditional enterprise platforms, but the cloud’s utility is not apparent when compared to a virtualized setup, according to Everest, a global consulting and research firm.

A buyer’s investment in a cloud infrastructure can save 40-50 percent over a traditional enterprise platform, according to a new Everest ITO study, Hype and Reality of Cloud Computing – Mind the Gap! The cloud proposition over enterprise setups is based on IT suppliers’ abilities to leverage scale to improve asset utilization, lower costs for asset procurement, standardize delivery and processes, and offer labor flexibility that reduces labor support ratios and delivers services from less expensive locations. On the other hand, large organizations running virtual datacenters may not find added value from cloud services unless IT demand is very volatile or through labor savings gained by large-scale sourcing options.

“The IT industry is working to woo enterprise buyers to the cloud, and many suppliers are positioned to deliver on most of the cloud’s benefits,” said Ross Tisnovsky, Vice President, Research. “However, the cloud conundrum lies in the fact that IT demand best served by the cloud is also the most challenging to serve from the cloud. While cloud services offer a strong business case over a traditional enterprise setup, the buyer’s cloud adoption strategy should not be based on cost savings alone. Buyers also must factor in associated risks, which need to be understood in the early stages while evaluating long-term strategies and approaches to cloud computing.”

Enterprises face multiple challenges to adopting cloud computing such as fragmented application portfolios, lack of cloud standards, security, system performance and management control. Security breaches, downtime, business disruption, and regulatory non-compliance issues pose significant concerns to buyers, and Everest predicts broad-based standards won’t come for 18 months or longer. …

The release continues with other negative conclusions about the economics of cloud computing.

David Vellante claims “A lack of integration is holding up progress in two key areas of online enterprise IT” in his IT's Online Enterprise Integration Crisis post of 3/4/2010:

Virtualization and cloud computing have always gone hand in glove. Virtualization simplifies computing generally and cloud computing specifically and is a key enabler to delivering infrastructure as a service. Virtualization not only cuts costs and speeds the deployment of resources; very importantly, it provides a means of reselling the same IT asset in a multi-tenant world, meaning suppliers are essentially "time sharing" the same resources to different customers. I call it "double-dipping," and cloud vendors don't like it much when I speak to them about it. But it's the way of the world today, and it is just good business.

In my discussions with organizations, I've found senior executives on the business side are much more willing to outsource IT to cloud providers than ever before but are aware that security, privacy, and compliance risks can't be ignored. So for higher-value applications they're virtualizing their internal data centers.

The dilemma that both IT practitioners and cloud providers face, however is that there's a lack of integration across storage, backup, networking, security, and management functions in virtualized environments. This lack of integration is constricting their ability to more aggressively pursue virtualization and cloud computing and host more applications from their portfolios on both internal and external clouds.

What's being done about the lack of integration? The answer is plenty, but going is slow and resource intensive. …

David continues with a summary of what his research indicates about the leading IT integration challenges -- and possible solutions.

<Return to section navigation list> 

Cloud Security and Governance

David Linthicum claims “While the focus on security is on attacks from the outside world, many attacks will occur within clouds -- by insiders” in his Afraid of outside cloud attacks? You're missing the real threat post of 3/10/2010 to InfoWorld’s Cloud Computing blog:

Many IT managers are not moving to the cloud due to security concerns. I suspect they're envisioning a group of Eastern Bloc hackers in a one-room apartment attacking their cloud providers and stealing their data. While I'm sure that does indeed go on, the true threat around cloud computing security issues are going to come from within the cloud.

The more common issues will be around those who have been trusted with cloud computing access and who walk off with data, typically before resigning or being fired. While this occurs all of the time now with the advent of cheap USB thumb drives, the cloud makes it a bit easier, considering that employees with access to cloud-based resources have access to large amounts of typically sensitive data from anywhere in the world.

The only way to defend against this, other than doing a good background check, is to make sure that no single user has access to all of the data in a downloadable format. This includes limiting use of the data-oriented APIs provided by the cloud computing provider to only a few authorized and trusted people. …

Root Labs’ Attacking RSA exponentiation with fault injection post of 3/8/2010 begins:

A new paper, “Fault-Based Attack of RSA Authentication” (pdf) by Pellegrini et al, is making the rounds. The general idea is that an attacker can disrupt an RSA private key operation to cause an invalid signature to be returned, then use that result to extract the private key. If you’re new to fault injection attacks on RSA, I previously wrote an intro that should help.

The main concept to grasp is that public key crypto is brittle. In the case of RSA’s CRT operation, a single bit error in one multiplication result is enough to fully compromise your private key. We’ve known this since 1997. The solution is simple: validate every signature with the public key before returning it to the caller.

The authors noticed something curious. OpenSSL does verify signatures it generates before returning them, but if it detects a problem, it does not just return an error. It then tries again using a different exponentiation process, and then returns that signature without validating it.

Think about this for a moment. What conditions could cause an RSA private key operation to compute an invalid answer? An innocent possibility is cosmic radiation, bad RAM, etc. In this case, all computations should be considered unreliable and any retried operation should be checked very carefully. The other and more likely possibility is that the system is under attack by someone with physical proximity. In this case, OpenSSL should generate a very obvious log message and the operation should not be retried. If it is, the result should be checked very carefully.

For whatever reason, the OpenSSL programmers decided to retry with fixed-window exponentiation and trust that since there were no published fault attacks for it, they didn’t have to validate its result. This is a foolhardy attitude — not something you want to see in your crypto library. There had been many other fault injection attacks against various components or implementation approaches for RSA, including right-to-left exponentiation. There was no reason to consider left-to-right exponentiation invulnerable to this kind of attack. [Emphasis added.] …

The report, which continues with more detailed analyses, doesn’t speak highly of OpenSSL’s system design.
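The countermeasure the Root Labs post calls for, validating every signature with the public key before releasing it, is easy to express in outline. A minimal C# sketch of the fail-closed pattern (an illustration only, not OpenSSL’s code):

using System.Security.Cryptography;

static class FaultHardenedSigner
{
    // Sign, then verify with the public key before returning the signature.
    // If a hardware fault (or an induced glitch) corrupted the private-key
    // operation, fail closed rather than hand back a faulty signature that
    // an attacker could use to recover the key.
    public static byte[] Sign(RSACryptoServiceProvider rsa, byte[] message)
    {
        byte[] signature = rsa.SignData(message, new SHA1CryptoServiceProvider());

        if (!rsa.VerifyData(message, new SHA1CryptoServiceProvider(), signature))
            throw new CryptographicException(
                "Signature failed self-verification; possible fault injection.");

        return signature;
    }
}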

Tim Greene claims “Trusting corporate data to the cloud is a risk to be dealt with, experts say” in his feature-length Cloud security, cyberwar dominate RSA Conference story of 3/4/2010 for NetworkWorld:

Cloud security loomed over the RSA Conference this week as a major concern of business, but worry about the threat of cyberwar was also strong, with officials from the White House and FBI weighing in to encourage private participation in government efforts to defend information and communications networks.

During the highest profile panel at the conference, a former technical director of the National Security Agency bluntly said he doesn't trust cloud services. Speaking for himself and not the agency, Brian Snow said cloud infrastructure can deliver services that customers can access securely, but the shared nature of the cloud leaves doubts about attack channels through other users in the cloud. "You don't know what else is cuddling up next to it," he said.

In his keynote address, Art Coviello, the president of RSA, the security arm of EMC, agreed that customers need to be assured the cloud is safe. Coviello told the 4,000 attendees gathered for his talk that cloud services will inevitably be adopted widely because of the huge financial benefits they offer. "But you won't want any part of that unless service providers can demonstrate their ability to effectively enforce policy, prove compliance and manage multi-tenancy," he said.

The big problem is trust, he said. His own company announced at the show a partnership with Intel and VMware to improve trust by enabling measurement of cloud providers' security. The effort would let customers of cloud infrastructure services weigh the security of the service and get metrics to deliver to auditors who are sent to determine whether businesses comply with government and industry security standards. "Service providers should be able to tell compliance officers and auditors just about anything they need to know -- with verifiable metrics," Coviello said. …

Ellen Messmer’s CISOs rain on cloud-computing parade at RSA article for NetworkWorld of 3/4/2010 begins:

Economic pressures are driving more businesses and governments to nervously eye cloud computing, despite myriad unanswered questions that swirl around a single central concern: security. This was backdrop for a panel discussion between CISOs at this week's RSA Conference.

"We're all in dire straits," said Seth Kulakow, Colorado's CISO. "Cloud computing is obviously on everybody's mind." But even if cloud-computing looks like a bargain, "it's got to have the same kind of risk controls you have now."

"It's imperative we look at it," said Nevada's CISO Christopher Ipsen, who had noted that the economic crisis and housing-market collapse have left his state's financial situation "extremely bad."

"We are doing some cloud services with e-mail," said California's CISO, Mark Weatherford. "It's very efficient. We can't ignore the benefits in the cloud, but we have to proceed carefully." The Los Angeles Police Department is regarded as the state's early adopter in all this since it's moving to a cloud-computing arrangement with Google. …

<Return to section navigation list> 

Cloud Computing Events

Allison Watson says “We’re adding more than 200 customers per day to our Windows Azure system that’s now up and billing in the United States” in her 00:09:42 Allison Watson Unplugged - We're All In video presentation of 3/10/2010:

Watch Allison Watson’s webcast on cloud computing opportunities for Microsoft partners: join Allison Watson, corporate vice president of Microsoft’s Worldwide Partner Group, as she speaks to members of the Microsoft Partner Network about the opportunities cloud computing offers for Microsoft partners.

She continues with a reference to the City of Miami’s adoption of Windows Azure for its live Miami311 non-emergency services portal. Rutrell Yasin provides more background on Miami311 in his The city of Miami is moving citizen services to the cloud with the launch of Miami 311 post of 3/10/2010.

Allison is Microsoft’s Corporate Vice President, Worldwide Partner Group.

Brian Prince and Joey Snow start a four-part “Real World Azure” series of Channel9 video segments with a 00:20:06 Real World Azure: An Overview of Cloud Computing episode on 3/8/2010:

IT Pro Technical Evangelist Joey Snow invites Senior Architect Evangelist Brian Prince to Redmond to begin unravelling the mystery around cloud computing in this first video of the four-part Real World Azure series. Joey and Brian keep it light as they fill in the vocabulary for you. They describe what SaaS, Software plus Services and cloud computing are, and point out the types of applications which are best suited for the cloud.

Brian is happy to reveal how IT professionals move up the value food chain and do more of what they aspire to as they begin to leverage cloud computing. The discussion rounds out with a whiteboard chat about the spectrum of applications that might run on-premises, hosted, or in the cloud for an enterprise.

• Akamai asserts “Web based threats are becoming increasingly malicious and sophisticated every day” while announcing its Cloud Based Security Services: Saving Cloud Computing Users from Evil-Doers Webinar on 3/31/2010 at 9:00 AM PST:

The timing couldn’t be worse, as more companies are adopting cloud-based infrastructure and moving their enterprise applications online. In order to make the move securely, distributed defense strategies based on cloud-based security solutions should be considered.

Join Akamai and a panel of leading specialists for a discussion that will delve into IT’s current and future security threats. This in-depth conversation will cover the trends that are spreading fear, uncertainty and doubt amongst the corporate IT security community.

Topics will include web application security, vulnerabilities, threats and mitigation/defense strategies, and tactics. Get real-life experiences and unique perspectives on the escalating requirements for Internet security from three diverse companies: Cisco, WhiteHat, and Akamai.
We will discuss:

  • Individual perspectives on the magnitude and direction of threats, especially to Web Applications
  • Options for addressing these challenges in the near term, and the long-term implications for how enterprises will respond
  • Methods to adopt and best practices to fortify application security in the cloud

Participants include:

  • Dana Gardner, President & Principal Analyst, Interarbor Solutions Inc.
  • Jeremiah Grossman, Founder and CTO, WhiteHat Security
  • Chris Hoff, Sr. Director, Cloud Security, Cisco Solutions
  • Andy Ellis, Sr. Director of Information Security and Chief Security Architect, Akamai

SecureCloud 2010 claims “Pamela Jones Harbour, Commissioner of the US Federal Trade Commission to Keynote First European Event to Focus on Cloud Security” in its SecureCloud 2010 Announces Agenda for Inaugural Two Day Cloud Security Conference in Barcelona press release of 3/10/2010:

The SecureCloud 2010 conference today announced it has published the agenda for its inaugural cloud security conference, which will be held from March 16 - 17 at the Majestic Hotel and Spa in Barcelona, Spain. SecureCloud 2010 is an educational and networking event designed to help IT professionals adopt and share best practices related to secure cloud computing. The conference will feature two keynote sessions: the first by Dr. Udo Helmbrecht, Executive Director of ENISA and the second by Pamela Jones Harbour, Commissioner of the US Federal Trade Commission. The event is free to attend but space is limited so participants are encouraged to register early to guarantee entry: http://securecloud2010.eventbrite.com/

"Cloud computing presents its own unique security risks and threats that we're still in the process of understanding," said Dr. Udo Helmbrecht, Executive Director of ENISA. "SecureCloud 2010 will provide a great opportunity for European IT practitioners to get out in front of these issues and understand how to best mitigate these risks. We're confident that this event will help our industry establish new protocols and identify best practices that will dramatically improve the future of cloud security."

The two-day, dual-track event is being hosted by the Cloud Security Alliance (CSA), European Network and Information Security Agency (ENISA), the IEEE Standards Association, and ISACA, four of the leading organizations shaping the future of Cloud Computing Security. SecureCloud 2010 is the first European event to focus specifically on state-of-the-art practices designed to promote security, privacy, and trust within cloud computing environments. …

Joe Healy will present a four-hour MSDN Simulcast Event: Take Your Applications Sky High with Cloud Computing and the Windows Azure Platform (Level 200) on 3/11/2010 at 10:00 AM PST:

Join your local MSDN Events team as we take a deep dive into Microsoft Windows Azure. We'll start with a developer-focused overview of this brave new platform and the cloud computing services that can be used either together or independently to build amazing applications. As the day unfolds, we’ll explore data storage, Microsoft SQL Azure, and the basics of deployment with Windows Azure.

Joe is a Microsoft developer-evangelist. Register here.
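If you want to poke at the SQL Azure portion before the session, connecting from ADO.NET is mostly a matter of getting the connection string right. The sketch below is only an illustration; the server, database, and credentials are placeholders, not real endpoints.

// Minimal ADO.NET sketch for connecting to SQL Azure; the server, database,
// and credentials below are placeholders.
using System;
using System.Data.SqlClient;

class SqlAzurePing
{
    static void Main()
    {
        const string connectionString =
            "Server=tcp:yourserver.database.windows.net;" +
            "Database=yourdatabase;" +
            "User ID=youruser@yourserver;" +
            "Password=yourpassword;" +
            "Encrypt=True;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT GETUTCDATE()", connection))
        {
            connection.Open();
            Console.WriteLine("SQL Azure time (UTC): " + command.ExecuteScalar());
        }
    }
}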

The CloudConnect event will conduct three cloud-related IT workshops on 3/15/2010 from 9:00 AM – 5:00 PM at the Santa Clara Convention Center in Santa Clara, CA:

  • An Introduction to Cloud Computing
  • Cloud Business Summit
  • Cloud Apps Bootcamp

Overlapping the three preceding business-related workshops on 3/15/2010 will be these six targeting developers:

  • Building on Google App Engine (9:00 AM to 12:00 PM)
  • Cloud Performance Optimization (9:00 AM to 12:00 PM)
  • Building Your First Amazon App (1:00 PM to 2:45 PM)
  • NoSQL, no Join, noRDBMS: Understanding Cloud Data (1:00 PM to 2:45 PM)
  • Creating Applications on Force.com (3:15 PM to 5:00 PM)
  • Developing for Microsoft Windows Azure Platform (3:15 PM to 5:00 PM) [Emphasis added]

Microsoft Technical Architect David Chou will present the Azure workshop.

David Makogon of RDA Corporation presents Webcast: Microsoft Azure - The Business Benefits of Cloud Services on 3/18/2010 from 10:00 to 10:45 AM:

Microsoft Windows Azure is a platform for developing and deploying cloud applications and services. The promise of cloud-based solutions for reducing operational and infrastructure costs and simplifying companies’ ability to adapt to demand for their products and services is well-documented. However, one often overlooked benefit of the cloud is the ability to create secure, highly scalable and reliable cloud-based services that expose line-of-business and back-office systems and services to a company’s own on- or off-premises applications and its trading partners.

This webcast will explore how Windows Azure may be used by organizations to leverage their existing systems and services in an effort to create new product and revenue streams or improve upon existing business processes.

Agenda:

  • The prevalent technologies in Azure-based solutions
  • The common patterns for communicating with line-of-business systems in the cloud
  • The common challenges and best practices for working with line-of-business systems in the cloud

Presenter: David Makogon is a Senior Consultant at RDA with 25 years of software creation experience. His current passion is Microsoft’s Azure cloud computing platform, and he has worked with companies such as Coca-Cola Enterprises to develop cloud-based solutions.
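To make the “expose line-of-business systems” idea a bit more concrete, here is a rough WCF-style sketch of the kind of contract a cloud-hosted service layer might publish to partners. It is not RDA’s code, and every name in it is hypothetical.

// Hypothetical sketch: a WCF contract an Azure-hosted service layer might
// expose so partners and off-premises applications can reach an on-premises
// line-of-business system. All names are illustrative.
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OrderStatus
{
    [DataMember] public string OrderNumber { get; set; }
    [DataMember] public string Status { get; set; }
}

[ServiceContract]
public interface IOrderStatusService
{
    [OperationContract]
    OrderStatus GetOrderStatus(string orderNumber);
}

public class OrderStatusService : IOrderStatusService
{
    public OrderStatus GetOrderStatus(string orderNumber)
    {
        // Placeholder: a real implementation would relay the call to the
        // back-office system, for example through a queue or a service bus.
        return new OrderStatus { OrderNumber = orderNumber, Status = "Shipped" };
    }
}

Hosting a contract like this in a Windows Azure role, or relaying it through the AppFabric Service Bus, is one way to turn an internal system into the kind of partner-facing service the webcast describes.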

<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Colin Clark asserts “Typical of event processing applications that do things are those in Capital Markets like algorithmic trading, pricing and market making” in his Cloud Event Processing – Where For Art Thou oh CEP? post of 3/10/2010:

In a recent post, Louis Lovas of Progress Apama explains why the first generation CEP vendors don’t have many, if any, cloud deployments.  Here’s a quote from his post:

“Typical of event processing applications that do things are those in Capital Markets like algorithmic trading, pricing and market making. These applications perform some business function, often critical in nature in their own right. Save for connectivity to data sources and destinations, they are the key ingredient or the only ingredient to a business process. In the algo world CEP systems tap into the firehose of data, and the data rates in these markets (Equities, Futures & Options, etc.) are increasing at a dizzying pace. CEP-based trading systems are focused on achieving the lowest latency possible. Investment banks, hedge funds, and others in the arms race demand the very best in hardware and software platforms to shave microseconds off each trade. Anything that gets in the (latency) way is quickly shed.”

Shall I hear more, or shall I speak at this? (Romeo’s reply…)

Louis goes on to describe additional possible reasons for the current lack of CEP in the cloud.  Amongst them he cites that many businesses have not migrated their key business processes to the Cloud, and that having those business processes in the Cloud is a requirement for CEP in the cloud.

Without citing potential evidence to the contrary, I think that Louis’s point is valid. It’s valid because Progress Apama’s current customer base, like those of Streambase and Sybase, is predominantly composed of and focused upon Capital Markets participants actively engaged in high-frequency, or algo, trading. And I agree with Louis that it currently doesn’t make any sense to move algo trading to the cloud, no matter which, if any, of the 21 definitions of cloud you identify with.

Meanwhile, Back at the Ranch

We’re busy hooking up some big data visualization to our TwitYourl – and we’ll be sharing that shortly. And although TwitYourl could hardly be described as a key business process, using Cloud Event Processing-based Continuous Intelligence can certainly help refine and focus those business processes.

For more background on Cloud (or Complex) Event Processing and Microsoft’s Reactive Extensions (Rx) for .NET, see Colin Clark’s Cloud Event Processing: CEP in the Cloud entry in my Windows Azure and Cloud Computing Posts for 3/8/2010+ post.
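For readers who haven’t tried Rx, the sketch below shows the continuous-query style it brings to .NET event streams: treat a feed as an observable, filter it, and aggregate over rolling time windows. It is only an illustration of the pattern, not code from Colin’s or Louis’s posts; the simulated feed, score threshold, and window size are hypothetical, and it assumes the System.Reactive (Rx) library.

// Minimal Rx sketch of a continuous query over a simulated event stream.
using System;
using System.Reactive.Linq;

class CepSketch
{
    static void Main()
    {
        var random = new Random();

        // Simulated feed: one "mention" every 50 ms with a random relevance score.
        var mentions = Observable
            .Interval(TimeSpan.FromMilliseconds(50))
            .Select(_ => new { Url = "http://example.com/" + random.Next(5),
                               Score = random.NextDouble() });

        // Continuous query: keep high-scoring mentions, count them per one-second window.
        var countsPerSecond = mentions
            .Where(m => m.Score > 0.8)
            .Buffer(TimeSpan.FromSeconds(1))
            .Select(window => window.Count);

        using (countsPerSecond.Subscribe(
            count => Console.WriteLine("High-scoring mentions in the last second: " + count)))
        {
            Console.WriteLine("Press Enter to stop.");
            Console.ReadLine();
        }
    }
}

The same shape (source, filter, windowed aggregate, subscriber) is what a TwitYourl-style feed would use; only the source changes.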

Chelsi Nakano claims Google's Marketplace Spells Trouble for Microsoft in this 3/10/2010 post to the CMSWire blog:

Google set off an enterprise-tastic bomb last night when they announced the opening of Google Marketplace, an online store for business apps. Now, we can’t help but wonder, is Microsoft an impending casualty? 

The Strip

We noted Microsoft’s attempt to be “the very best option” for cloud lovers late last year when they announced their app marketplace for Windows Azure. Dubbed PinPoint, the online store helps users find related experts, applications and professional services.

In addition to PinPoint, Microsoft released an information marketplace called Dallas. This part of Azure is designed to provide developers with content (data, imagery, real-time web services) from third-party providers through clean, consistent APIs. It’s the same idea as Salesforce's AppExchange and Apple's infamous iPhone App store.

Imagine all those little stores residing next to each other in a virtual strip mall. Combined, they form what is undoubtedly the largest directory of IT companies and their offerings we’ve got. Now, picture a Texas-sized, G-shaped supermarket dropping right down in the center of it all.

Google’s Mega Outlet

Google's marketplace will connect developers with their whopping 25 million Apps users and the 2 million businesses that have gone Google. Better yet, from what we can tell, the store is simple and straightforward. Here are some high points from the presentation:

  • Google says everything businesses need is now in the cloud
  • Developers don’t have to use App Engine to build—you can use whatever you want
  • Google asks for a one-time fee of US$100 and a 20% revenue share
  • Big G already has over 50 launch partners, including Zoho, Box.net, Atlassian and Aviary …

Chelsi continues with links to an explanatory video and discusses G's acquisition of DocVerse, “a startup that allows people to collaborate with MS Office documents online.”

• Rich Miller reports Amazon S3 Now Hosts 100 Billion Objects in this 3/9/2010 post to the Data Center Knowledge blog:

Amazon Web Services has quietly passed an interesting benchmark: the company’s S3 storage service now hosts more than 100 billion objects. This factoid was noted this morning at Data Center World, when keynote speaker Brian Lillie of Equinix said that Amazon now is hosting 102 billion objects in S3 (Simple Storage Service).

Over the past year, the number of objects stored on S3 has grown from 54 billion to 100 billion, according to Amazon CTO Werner Vogels, who mentioned this startling growth curve in his recent presentation at the Cebit computer trade show in Germany.

It’s a fuzzy milestone, to be sure, as we don’t know how much infrastructure is required to store those 100 billion objects, or how much revenue Amazon is generating from them. But in an industry where we’re used to big numbers, 100 billion is an eye-popping total. By any measure, that’s a huge storage cloud, and likely a sign of things to come.

Carl Brooks (@eekygeeky) claims "most of those objects are 6KB files named 176033-078-43789arg-rec-num-blerfblerfblerf.log, but..." in this Tweet of 3/10/2010.
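Taking Carl Brooks’s tongue-in-cheek 6 KB figure literally, purely to get a feel for the scale, a back-of-envelope calculation lands in the hundreds of terabytes. The sketch below is hypothetical arithmetic, not an Amazon figure.

// Back-of-envelope only: the 6 KB average is Brooks's joke, not a measurement.
using System;

class S3Estimate
{
    static void Main()
    {
        const double objectCount = 100e9;        // roughly 100 billion objects reported
        const double avgObjectBytes = 6 * 1024;  // hypothetical 6 KB average size
        double totalBytes = objectCount * avgObjectBytes;
        double totalTebibytes = totalBytes / Math.Pow(1024, 4);
        Console.WriteLine("Roughly {0:N0} TiB at a 6 KB average object size", totalTebibytes);
    }
}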

<Return to section navigation list> 
