Sunday, October 31, 2010

Windows Azure and Cloud Computing Posts for 10/28/2010+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.

•• Updated 10/30/2010 with PDC 2010-related articles marked ••

• Updated 10/29/2010 with articles marked •

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single post containing the article you want to navigate to.


Cloud Computing with the Windows Azure Platform was published 9/21/2009. Order today from Amazon or Barnes & Noble (in stock).

Read the detailed TOC here (PDF) and download the sample code here.

Discuss the book on its WROX P2P Forum.

See a short-form TOC, get links to live Azure sample projects, and read a detailed TOC of electronic-only chapters 12 and 13 here.

Wrox’s Web site manager posted on 9/29/2009 a lengthy excerpt from Chapter 4, “Scaling Azure Table and Blob Storage” here.

You can now freely download by FTP and save the following two online-only PDF chapters of Cloud Computing with the Windows Azure Platform, which have been updated for SQL Azure’s January 4, 2010 commercial release:

  • Chapter 12: “Managing SQL Azure Accounts and Databases”
  • Chapter 13: “Exploiting SQL Azure Database's Relational Features”

HTTP downloads of the two chapters are available at no charge from the book's Code Download page.


Tip: If you encounter articles from MSDN or TechNet blogs that are missing screen shots or other images, click the empty frame to generate an HTTP 404 (Not Found) error, and then click the back button to load the image.

Azure Blob, Drive, Table and Queue Services

•• Jai Haridas presented Windows Azure Storage Deep Dive at PDC 2010:

Windows Azure Storage is a scalable cloud storage service that provides the ability to store and manipulate blobs, structured non-relational entities, and persistent queues. In this session you will learn tips, performance guidance, and best practices for building new applications or migrating existing applications that use Windows Azure Storage.



<Return to section navigation list> 

SQL Azure Database, Azure DataMarket and OData

•• James Staten lauds the Windows Azure DataMarket in his Unlock the value of your data with Azure DataMarket post of 10/29/2010 to the Forrester Research blogs:

If the next eBay blasts onto the scene but no one sees it happen, does it make a sound? Bob Muglia, in his keynote yesterday at the Microsoft Professional Developers Conference, announced a slew of enhancements for the Windows Azure cloud platform but glossed over a new feature that may turn out to be more valuable to your business than the entire Platform as a Service (PaaS) market. That feature (so poorly positioned as an “aisle” in the Windows Azure Marketplace) is Azure DataMarket, the former Project Dallas. The basics of this offering are pretty underwhelming – it’s a place where data sets can be stored and accessed, much like Public Data Sets on Amazon Web Services and those hosted by Google labs. But what makes Microsoft’s offering different is the mechanisms around these data sets that make access and monetization far easier.

Every company has reams of data that are immensely valuable: sales data, marketing analytics, financial records, customer insights and intellectual property it has generated in the course of business. We all know what data in our company is of value, but like our brains, we are lucky if we mine a tenth of its value. Sure, we hit the highlights. Cable and satellite television providers know what shows we watch and, from our DVRs, whether we fast forward through commercials. Retailers know, by customer, what products we buy, how frequently and whether we buy more when a sales promotion has been run. But our data can tell us so much more, especially when overlaid with someone else’s data.

For example, an advertising agency can completely change the marketing strategy for Gillette with access to Dish Network’s or TiVo’s DVR data (and many do). Telecom companies collect GPS pings from all our smart phones. Wouldn’t it be nice if you could cross correlate this with your retail data and find out what type of buyers are coming into your stores and leaving without buying anything? Wouldn’t it be even better if you knew the demographics of those lookie-loos and what it would take to get them to open their wallets? Your sales data alone can’t tell you that.

Wouldn’t it be nice if you could cross correlate satellite weather forecast data with commodity textile inventories and instruct your factories to build the right number of jackets of the right type and ship them to the right cities at just the right time to maximize profits along all points of the product chain? These relationships are being made today, but not easily. Data seekers need to know where to find data providers, negotiate access rights to the data, then bring in an army of data managers and programmers to figure out how to integrate the data, acquire the infrastructure to house and analyze the data, and the business analysts to tweak the reports. Who has time for all that?

What eBay did for garage sales, DataMarket does for data. It provided four key capabilities:

  1. A central place on the Internet where items can be listed, searched for and marketed by sellers
  2. A simple, consistent ecommerce model for pricing, selling and getting paid for your items
  3. A mechanism for separating the wheat from the chaff.
  4. A simple and trusted way of receiving the items

With Azure DataMarket, you can now unlock the potential of your valuable data and get paid for it, because it provides these mechanisms for assigning value to your data, licensing and selling it, and protecting it.

If you anonymized your sales data, what would it be worth? If you could get access to DirecTV’s subscriber data, AT&T’s iPhone GPS pings, Starbucks’ sales data or the statistics necessary to dramatically speed up a new drug test, what would it be worth to you? Now there is a commercial-market means of answering these questions.

Azure DataMarket solves the third problem with some governance. DataMarket isn't a completely open market where just anyone can offer up data. Microsoft is putting in place mechanisms for validating the quality of the data and the provider's authorization to vend it. And through data publication guidelines it is addressing the fourth factor - assurance that the data can be easily consumed.

A key feature of DataMarket is preparing the data in the OData format, making it easy to access directly through such simple business intelligence tools as Microsoft Excel. Where other information services give you access to raw data in a variety of cryptic or proprietary formats, Azure DataMarket sets can be pulled right into pivot tables -- no programming required.

Now, there isn’t a huge number of datasets in the market today; it will take time for it to reach its full potential. This presents an opportunity for first movers to capture significant advantage. And while there is a mechanism for pricing and selling your data, there’s little guidance on what the price should be. And yes, you do think your data is worth more than it really is…now. I’d like to see a data auction feature that will bring market forces into play here.

There are a variety of companies already making money off their datasets, some making nearly as much from selling their data as they do from leveraging that data themselves. Are you? Could you? It’s time to find out.


•• Pat Wood (@patrickawood, pictured below) posted Microsoft Unveils New Features for SQL Azure on 10/30/2010:

Steve Yi’s PDC 2010 video, “What’s New in SQL Azure?”, reveals the new features Microsoft is now adding to SQL Azure. The features include a new SQL Azure Developer Portal and web-based database management tools. The new SQL Azure Data Sync CTP2 enables SQL Azure databases or tables to be synchronized between SQL Azure and on-premises SQL Server databases. Additionally, the new SQL Azure Reporting CTP enables developers to save SQL Azure Reports as Microsoft Word, Microsoft Excel, and PDF files.

Microsoft announced that all of these features will be available before the end of the year. The SQL Azure Reporting CTP and the SQL Azure Data Sync CTP2 are beginning to be made available to developers now. You can apply for the CTPs here.

Visit our Gaining Access website to learn more about SQL Azure and download our SQL Azure and Microsoft Access demonstration application.

Other free downloads include the Appointment Manager which enables you to save Access Appointments to Microsoft Outlook.


•• Chris Koenig [pictured below] posted OData v2 and Windows Phone 7 on 10/30/2010:

Yesterday at PDC10, Scott Guthrie demonstrated a Windows Phone 7 application that he was building using Silverlight.  During this time, he mentioned that a new OData stack had been released for .NET which included an update to the library for Windows Phone 7. Now you might think that this was just a regular old upgrade – you know… bug fixes, optimizations, etc.  It was, but it also signaled a rewrite of the existing CTP that had been available for Windows Phone 7.  In this rewrite, there are some important feature changes that you need to be aware of, as announced by Mike Flasko on the WCF Data Services Team Blog yesterday:

LINQ support in the client library has been removed as the core support is not yet available on the phone platform.  That said, we are actively working to enable this in a future release.  Given this, the way to formulate queries is via URIs.

We’ve added a LoadAsync(Uri) method to the DataServiceCollection class to make it simple to query services via URI.

So you can easily support the phone application model, we’ve added a new standalone type ‘DataServiceState’ which makes it simple to tombstone DataServiceContext and DataServiceCollection instances.

In this post, I’ll go through the first two of the changes, explain what they mean to you, and show you how to adapt your application to use the new library.  In a future post, I’ll explore the new support for the phone application model and tombstoning.

Where are the new bits?

First things first – we need to get our hands on the new bits.  There are 3 new download packages available from the CodePlex site for OData:

  1. OData for .NET 4.0, Silverlight 4 and Windows Phone 7 (Source Code, if you need it)
  2. OData Client Libraries and Code Generation Tool (just the binaries, and a new DataSvcUtil tool)
  3. OData Client Sample for Windows Phone 7 (great for learning)

After downloading, don’t forget to “unblock” the ZIP files, or Visual Studio will grouch at you.  Just right-click on the ZIP file and choose Properties from the menu.  If there is a button at the bottom labeled “Unblock”, click it.  That’s it.

Once you get the Client Libraries and Code Generation Tool zip file unblocked, and unzipped, replace the assembly reference in your project from the old CTP version of the System.Data.Services.Client.dll to the new version supplied in this new download.

You’ll also need to re-generate the client proxy in your code using the new version of DataSvcUtil.  Remember how?  Just run this command from a Command Prompt opened to the folder where you unzipped the tools:

DataSvcUtil /uri:http://odata.netflix.com/catalog /dataservicecollection /version:2.0 /out:NetflixCatalog.cs

Note: You must include the /dataservicecollection attribute (which itself requires the /version:2.0 attribute) to get INotifyPropertyChanged attached to each of your generated entities.  If you’re not going to use the DataServiceCollection objects as I will, then you might not need this, but I’m using it in my samples if you’re following along at home.

Now you can replace your old proxy classes with these new ones.  Now is when the real fun begins…

Wherefore art thou LINQ?

The most impactful change has got to be the removal of LINQ support by the DataServiceProvider.  From Mike’s post:

LINQ support in the client library has been removed as the core support is not yet available on the phone platform.  That said, we are actively working to enable this in a future release.  Given this, the way to formulate queries is via URIs.

Wow.  I don’t know about you, but I have come to depend on LINQ for almost everything I do in .NET anymore, and this one really hits me square between the eyes. Fortunately, these URI-based queries aren’t too complicated to create, and Mike also points out that they’ve added a new method on the context to make this a bit easier on us:

We’ve added a LoadAsync(Uri) method to the DataServiceCollection class to make it simple to query services via URI.

With my custom URI and this new method, it’s actually almost as simple as before, sans LINQ doing all my heavy lifting.

Surgery time

So – to finish upgrading my Netflix application, I’ve got to make some changes to the existing MainViewModel.LoadRuntimeData method.  Here’s what it looks like from the last implementation:

[Screenshot in the original post: the previous LINQ-based LoadRuntimeData implementation]

As you can see, this contains quite a bit of LINQ magic. Unfortunately, that’s going to have to go. The only thing we can really salvage is the instantiation of the NetflixCatalog class based on the Uri to the main OData service endpoint.  This is important – we don’t use a URI with all the query string parameters in one shot because the DataContext class needs to have a reference to the base URI for the service, and future queries will be based on that base URI.

Rebuild the query

Now that we have a reference to the main service, we have to build our own query. To do this, we’ll need to manually convert the LINQ query into something that the previous DataContext would have done for us.  There are a couple of ways to figure this out. First, we could dive into the OData URI Conventions page on the OData web site and read all about how to use the various $ parameters on the URL string to get the results we want.  The sneaky way, however, would be to run the old code with the old provider and look at the URI that got sent to the Netflix service by snooping on it with Fiddler.   The results are the same – one long URL that has all the parameters included.

[Screenshot in the original post: the full query URL captured with Fiddler]

Although it’s pretty long, note how clean the URL is – Where becomes $filter, OrderBy becomes $orderby and Take becomes $top. This is what is so great about OData: clean, clear, easy to read URLs.

Armed with our new URI, we can go in and replace the LINQ query with a new Uri declaration. Since we already have the new DataContext based on the service’s URI, we can remove that from here and just use the stuff that comes after:

[Screenshot in the original post: the relative query URI declaration]

Don’t forget to mark this new URI as UriKind.Relative.  This new URI is definitely NOT an absolute URI, and you’ll get an error if you forget. Here’s what the new code looks like so far:

[Screenshot in the original post: the updated code with the DataContext and relative query URI]
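
Since the screenshots don’t reproduce here, the following is a rough sketch of what this step might look like in code. It is only an illustration: the entity set (Titles), the properties used in the filter and ordering, and the page size are assumptions, not the exact query from the original post’s screenshots.

    // Hypothetical sketch - the actual query in the original screenshots may differ.
    // NetflixCatalog is the client proxy generated earlier with DataSvcUtil.
    private NetflixCatalog _context;
    private Uri _titlesQuery;

    private void BuildQuery()
    {
        // The context holds only the *base* address of the OData service;
        // individual queries are expressed as URIs relative to this address.
        _context = new NetflixCatalog(new Uri("http://odata.netflix.com/catalog/"));

        // What used to be a LINQ expression now becomes $filter / $orderby / $top
        // options on a *relative* URI (UriKind.Relative is required, as noted above).
        _titlesQuery = new Uri(
            "Titles?$filter=ReleaseYear ge 2000&$orderby=AverageRating desc&$top=20",
            UriKind.Relative);
    }
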
ObservableCollection –> DataServiceCollection

Now that the DataContext is created, and the replacement query is built, it’s time to load up the data. But what are we going to load it up into? Depending on which version of my old application you were using, you might see code like I listed above with ObservableCollection or you might have the version that I converted to use the DataServiceCollection.  This new model definitely wants us to use DataServiceCollection as it has some neat ways to manage loading data for us.  That means we will have to swap out our definition of the Items property with a DataServiceCollection.

First, replace instances of ObservableCollection with DataServiceCollection.  Second, remove the initializer for the Items property variable – it’s no longer needed. Third, and this is optional, you can tap into the Paging aspects of the DataServiceCollection by adding a handler to the DataServiceCollection.Loaded event.

Note: I don’t need this feature now, so I’m not going to add code for it.  I’ll leave it as an exercise for the reader, or you can hang on for a future post where I add this back in.

Run the query

Now that my query URIs are defined, and my DataServiceCollection objects are in place, it’s time to wire up the final changes to the new query.  For this, all I have to do is initialize the Items property with a DataServiceCollection and ask it to go run the query for us.

[Screenshot in the original post: initializing the DataServiceCollection and starting the query]

Notice the simplified version of the loading process.  Instead of having to go through and manually load up all the items in the ObservableCollection, here the DataServiceQuery handles all that hard work for us.  The main thing we need to remember is to initialize it with the DataContext before calling out to it.
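
In case the screenshot doesn’t come through, here is a minimal sketch of that wiring, reusing the hypothetical context and query URI from the earlier sketch (the entity type Title and the member names are assumptions, not the code from the post):

    // Hypothetical sketch - not the original code from the post's screenshots.
    public DataServiceCollection<Title> Items { get; private set; }

    private void RunQuery()
    {
        // Initialize the collection with the DataServiceContext *before* loading,
        // so the collection knows which service the relative query URI refers to.
        Items = new DataServiceCollection<Title>(_context);

        // Kick off the asynchronous query using the relative URI built earlier.
        // (To use the paging support mentioned above, you would also attach a
        // handler to the collection's load-completed event before this call.)
        Items.LoadAsync(_titlesQuery);
    }
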

Wrapping it all up

Now that we’ve got everything working, let’s take a look at the whole LoadRuntimeData method:

[Screenshot in the original post: the complete updated LoadRuntimeData method]

Except for a few minor changes to the ViewModel properties (all we really did was change a type from ObservableCollection to DataServiceQuery) the actual code changes were pretty minimal. I still don’t like that I have to write my own URL string, but the team is going to address that for me in the future, so I guess I can hang on until then.

I’ve uploaded the project to my Skydrive, so you can download this version to see it in action.  It’s still not a very exciting application, but it does show off how to use the new OData library.  As always, thanks for reading, I hope you found it valuable, and let me know if you have any questions.

Chris Koenig is a Senior Developer Evangelist with Microsoft, based in Dallas, TX. Prior to joining Microsoft in March of 2007, Chris worked as a Senior Architect for The Capital Group in San Antonio, and as an Architect for the global solution provider Avanade.

Chris Woodruff observed in a comment to the preceding post:

To work with the new OData Client Library when developing Windows Phone 7 applications, there is a hint and two gotchas:

  • Hint -- If you are a LINQ'er and want to get your URIs written quickly, go to my blog post "Examining OData – Windows Phone 7 Development — How to create your URI’s easily with LINQPad" at http://www.chriswoodruff.com/index.php/2010/10/28/examining-odata-how-to-create-your-uris-easily-with-linqpad/
  • Gotcha -- It seems the ability to use the Reactive Framework has been broken with this new update of the OData Client Library. To do async loads of data calls on the DataServiceCollection<T>, you need to have an observer tied to your class like most MVVM patterns show. Rx is different, as it allows Observers to subscribe to your events, which does not seem to be handled in the code for the DataServiceCollection<T> written for WP7.
  • One more Gotcha -- If you have the new Visual Studio Async CTP installed on your machine you will not be able to compile the source of the OData Client Library that is on the CodePlex site. It has variables named async that conflict with the new C#/VB.NET async and await keywords.


•• Yavor Georgiev reported wcf.codeplex.com is now live in a 10/29/2010 post to The .NET Endpoint blog:

Over the last few weeks the WCF team has been working on a variety of new projects to improve WCF’s support for building HTTP-based services for the web. We have also focused on a set of features to enable JavaScript-based clients such as jQuery.

We are proud to announce that these projects are now live and available for download on http://wcf.codeplex.com. You can get both the binaries and the source code, depending on your preference. Please note that these are prototype bits for preview purposes only.

For more information on the features, check out this post, this post, this PDC talk, and the documentation on the site itself.

Our new CodePlex site will be the home for these and other features, and we will continue iterating on them with your help. Please download the bits and use the CodePlex site’s Issue Tracker and Discussion tab to let us know what you think!


•• Peter McIntyre posted a detailed syllabus of Data access for WCF WebHttp Services for his Computer Science students on 10/29/2010:

At this point in time, you have created your first WCF WebHttp Service. In this post, intended as an entry-level tutorial, we add a data access layer to the service.

Right at the beginning here, we need to state a disclaimer:

Like many technologies, WCF WebHttp Services offers functionality that spans a wide range, from entry-level needs, through to high-scale, multi-tier, enterprise-class requirements. This post does NOT attempt to cover the range.

Instead, we intend to provide an entry-level tutorial, to enable the developer to build a foundation for the future, where the complexity will be higher. We are intentionally limiting the scope and use case of the topics covered here, so that the new WCF service developer can build on what they know.

WCF WebHttp Services, as delivered in .NET Framework 4, are still fairly new as of this writing (in October 2010). The MSDN documentation is typically very good. However, the information needed by us is scattered all over the documentation, and there’s no set of tutorials – by Microsoft or others – that enable the new WCF service developer to get success quickly. The blog posts by the ADO.NET Entity Framework and WCF teams are good, but the examples tend to be focused on a specific scenario or configuration.

We need a tutorial that is more general, consistent, and predictable – a tutorial that enables the developer to reliably add a data access layer to a WCF WebHttp Service, and get it working successfully. Hopefully, this tutorial meets these needs.

Topics that this post will cover

In the last blog post, titled “WCF WebHttp Services introduction”, you learned about the programming model, and got started by creating a simple project from the template. Make sure you cover that material before you continue here.

This  blog post will cover the following topics:

  • Principles for adding a data access layer to a WCF WebHttp Service
  • Brief introduction to POCO
  • A brief introduction to the enabling C# language features (generics, LINQ, lambdas)
  • Getting started by creating a data-enabled service
Principles for adding a data access layer to a WCF WebHttp Service

The Entity Framework is the recommended data access solution for WCF WebHttp Services.

Review – Entity Framework data model in a WCF Data Service

In the recent past, you gained experience with the creation of Entity Framework data models, as you programmed WCF Data Services. That programming model (WCF Data Services) is designed to expose a data model in a RESTful manner, using the OData protocol.

A feature of the OData protocol is that the data is delivered to the consumer in AtomPub XML, or in JSON. Let’s focus on the XML for a moment.

The AtomPub XML that expresses the entities is “heavy” – it includes the AtomPub wrapping structure, schema namespace information, and related attributes for the properties and values.

The image below is the XML that’s returned by a WCF Data Service, for the Northwind “Categories” collection. Only the first object in the collection is shown. As you can see, the actual data for the Category object is near the bottom, in about four lines of code.

Preview – Entity Framework data model in a WCF WebHttp Service

Now, let’s preview the use of an Entity Framework data model in a WCF WebHttp Service.

The default XML format for an entity, or a collection of entities, is known as entity serialization. It too is “heavy”, although not as heavy as WCF Data Service AtomPub XML. Entity serialization includes schema namespace information, and related attributes for the properties and values.

The image below is the XML that’s returned by a non-optimized WCF WebHttp Service, for the Northwind “Categories” collection. Only the first object in the collection is shown. As you can see, the actual data for the Category object is near the bottom, in about four lines of code.

WCF WebHttp Services should consume and provide data in easy-to-use, concise formats. Therefore, we optimize the entity serialization process by performing one additional step after we create our Entity Framework data model: we generate POCO classes. Generating plain old CLR object (POCO) classes enables WCF to serialize (and deserialize) objects in a very concise XML format.

The image below is the XML that’s returned by a WCF WebHttp Service that has had POCO classes generated for the data model, for the Northwind “Categories” collection. Only the first object in the collection is shown. As you can see, most of the XML describes the actual data for the Category object.
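
To make the idea concrete, a POCO entity is just an ordinary class with public properties and no persistence-aware base class or serialization attributes. The sketch below is an assumption of what a generated Northwind Category POCO might look like, not the exact output of the POCO template:

    using System.Collections.Generic;

    // Hypothetical sketch of a POCO entity class for the Northwind Categories table.
    // No EntityObject base class and no data-contract attributes - just plain
    // properties, which is what lets WCF serialize it in a very concise XML shape.
    public class Category
    {
        public int CategoryID { get; set; }
        public string CategoryName { get; set; }
        public string Description { get; set; }

        // Navigation property to another generated POCO class (Product).
        public virtual ICollection<Product> Products { get; set; }
    }
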

The following is a summary, a side-by-side comparison, of these three XML formats:

AtomPub | EF serialization | POCO serialization

Peter continues with a “Brief Introduction to POCO” and other related topics.


•• Azure Support shared my concern about the lack of news about SQL Azure Transparent Data Encryption (TDE) and data compression in a SQL Azure Announcements at PDC 2010 post of 10/29/2010:

Microsoft saves its big announcements for Azure till its PDC conference, so what did we get at PDC 2010? The biggie was SQL Azure Reporting; this was a major announcement and a much-requested feature for SQL Azure. In the demonstrations the compatibility with SQL Server 2008 R2 Reporting Services looked very good – simply drag a report onto the design surface in an ASP.NET or ASP.NET MVC Web Role, deploy it to Azure, and then the report will be visible on the web page. SQL Azure Reporting is currently in CTP and will only be available by invitation later in 2010.

SQL Azure Data Sync CTP 2 was announced, with the major new feature being the ability to sync different SQL Azure databases. You can sign up for the Data Sync and Reporting CTPs here.

Project Houston, an online tool for managing and querying SQL Azure databases, is now named Database Manager for SQL Azure. This is essentially a lightweight, online version of SSMS and addresses some of the weaknesses of the quite frankly under-powered Azure developer portal.

And What We Didn’t Get…

First and foremost, we didn’t get any smaller versions of SQL Azure. This is a pity, since a smaller version would be an excellent complement to the new ultra-small Windows Azure instances. The high barrier to entry is the most persistent complaint about Azure, and the 1GB starter size for a SQL Azure database is definitely way too large for most new web projects. There was also a deafening silence on backups (we were promised two backup types – clone and continuous – by the end of 2010, but this is almost certain not to be met). Also no news on compression and encryption.


•• The OData Blog posted New OData Producers and Consumers on 10/28/2010:


•• Cihangir Biyikoglu continued his series with Building Scalable Database Solution in SQL Azure - Introducing Federation in SQL Azure of 10/28/2010:

In the previous post [see below], I described the scale-out vs. scale-up techniques and why scale-out was the best thing since sliced bread, and why sharding (horizontal partitioning) specifically wins as the best pattern in the world of scale-out.

I also talked about the advantages of using SQL Azure as the platform for building sharded apps. In this post, I’d like to address a few areas that make the life of the admin and the developer more challenging when working with sharded apps, and how SQL Azure will address these issues in the future. Basically, we’ll address these two challenges:

#1: …so I built an elastic app that can repartition, but the repartitioning operation requires quite a lot of acrobatics. How do I easily repartition?

#2: Robust runtime connection routing sounds great. How do I do that? How about routing while repartitioning of the data is happening at the back end? What if the data partition, the atomic unit, moves by the time I look up and connect to a shard?

First things first: Introducing Federation in SQL Azure

To address some of these challenges and more, we will be introducing federations in SQL Azure. Federations are key to understanding how scale-out will work. Here are the basic concepts:

  • A Federation represents all the data being partitioned. It defines the distribution method as well as the domain of valid values for the federation key. In the picture below, you can see that customer_federation is part of sales_db.
  • The Federation Key is the key used for partitioning the data.
  • An Atomic Unit (AU) represents a single instance value of the federation key. Atomic units cannot be separated; thus, all rows that contain the same instance value of the federation key always stay together.
  • A Federation Member (a.k.a. shard) is the physical container for a range of atomic units.
  • The Federation Root is the database that houses the federations and the federation directory.
  • Federated Tables are tables in federation members that contain partitioned data, as opposed to Reference Tables, which contain data that is repeated in federation members for lookup purposes.

The following figure shows how it all looks when you put these concepts together. Sales_DB is the database that contains a federation (a.k.a. the federation root). The federation called customer_federation is blown up to show you the details. Federation members contain ranges of atomic units (AUs), such as [min to 100). AUs contain the collection of all data for a given federation key instance, such as 5 or 25.

[Diagram in the original post: Sales_DB with the customer_federation, its federation members and their ranges of atomic units]

Repartitioning Operations with Federations

Federations provide operations for online repartitioning. The SPLIT operation allows spreading a federation member’s data (a collection of atomic units) across many federation members. MERGE allows gluing federation members’ data back together. This is exactly how federations address challenge #1 above. With the SPLIT and MERGE operations, administrators can simply trigger repartitioning operations on the server side and, most importantly, they can do this without downtime!

Connecting to Federations

Federations also allow connecting to the federation’s data using a special statement: “USE FEDERATION federation_name(federation_key_value)”. All connections are established to the database containing the federation (a.k.a. the root database). However, to reach a specific AU or federation member, instead of requiring a database name, applications only need to provide the federation key value. This eliminates the need for apps to cache any routing or directory information when working with federations. Regardless of any repartitioning operation in the environment, apps can reliably connect to the AU or to the federation member that contains the given federation key value, such as tenant_id 55, simply by issuing USE FEDERATION orders_federation(55). Again, the important part is that SQL Azure guarantees you will always be connected to the correct federation member, regardless of any repartitioning operation that may complete right as you are establishing a connection. This is how federations address challenge #2 above.
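
To make the routing pattern concrete, here is a minimal sketch of how an application might connect to the atomic unit for tenant 55. It assumes the USE FEDERATION statement can be issued like any ordinary T-SQL batch over an open connection; the connection string, the Orders table name and the query are placeholders, and the final federations syntax may differ from this pre-release description:

    using System.Data.SqlClient;

    // Hypothetical sketch based on the syntax described above.
    public static class TenantDataAccess
    {
        public static void ProcessOrdersForTenant(string rootDatabaseConnectionString, int tenantId)
        {
            using (var connection = new SqlConnection(rootDatabaseConnectionString))
            {
                connection.Open();

                // Route the connection to the federation member (shard) that currently
                // holds the atomic unit for this federation key value. SQL Azure performs
                // the directory lookup, so the app never caches shard locations itself.
                using (var route = connection.CreateCommand())
                {
                    route.CommandText = "USE FEDERATION orders_federation(" + tenantId + ")";
                    route.ExecuteNonQuery();
                }

                // From here on, queries run against the correct federation member.
                using (var query = connection.CreateCommand())
                {
                    query.CommandText = "SELECT * FROM Orders WHERE customer_id = @id"; // placeholder table
                    query.Parameters.AddWithValue("@id", tenantId);
                    using (var reader = query.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Process each order row for the tenant.
                        }
                    }
                }
            }
        }
    }
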

I’d love to hear feedback from everyone on federations. Please make sure to leave a note here or raise issues in the SQL Azure forums here.


• Moe Khosravy sent a 10/28/2010 e-mail message: Former Microsoft Codename Dallas will be taken offline on January 28, 2011. New Portal Announced:

Former Microsoft Codename Dallas will be taken offline on January 28, 2011.

Microsoft Codename Dallas, now Windows Azure DataMarket, is commercially available!

How does this impact you?

With this announcement, the old Dallas site will be taken offline on January 28, 2011. Until then, your applications will continue to work on the old site, but customers can register on our new portal at https://datamarket.azure.com. Additional details on the transition plan are available HERE.

Commercial Availability Announced

Today Microsoft announced commercial availability of DataMarket, a.k.a. Microsoft Codename “Dallas”, which is part of the Windows Azure Marketplace. DataMarket will include data, imagery, and real-time web services from leading commercial data providers and authoritative public data sources.


DataMarket enables developers worldwide to access data as a service. DataMarket exposes APIs which can be leveraged by developers on any platform to build innovative applications. Developers can also use service references to these APIs in Visual Studio, making it easy for developers to consume the datasets.

Millions of information workers will now be able to make better data purchasing decisions using rich visualization on DataMarket. DataMarket connectivity from Excel and PowerPivot will enable information workers to perform better analytics by mashing up enterprise and industry data.

Today’s key announcements around Dallas include:

  • Windows Azure Marketplace is the all-up marketplace brand for Microsoft’s cloud platform, and Dallas is the DataMarket section of Windows Azure Marketplace
  • DataMarket has been released with a new user experience and expanded datasets from leading content providers!
  • Excel Add-in for DataMarket is now available for download at https://datamarket.azure.com/addin
  • DataMarket is launching with 40+ content provider partners, with content ranging from demographic, geospatial, financial, retail, sports, weather, and environmental data to health care.

Thank you for being an early adopter of the Dallas service; we hope to see you on the new DataMarket.


The SQL Server Team posted Announcing an updated version of PowerPivot for Excel on 10/28/2010:

Today, Microsoft is announcing the launch of Windows Azure Marketplace (formerly codenamed “Dallas”). The Windows Azure Marketplace is an online marketplace to find, share, advertise, buy and sell building block components, premium data sets and finished applications. Windows Azure Marketplace includes a data section called DataMarket, aka Microsoft Codename Dallas.

With this announcement, the SQL Server team has released an updated version of PowerPivot for Excel with functionality to support true 1-click discovery and service directly to DataMarket. Using this new PowerPivot for Excel add-in, customers will be able to directly access trusted premium and public domain data from DataMarket. Download the new version of PowerPivot at http://www.powerpivot.com

FAQ

1. I’ve installed the new PowerPivot for Excel add-in… now what do I do?

Once you install the new add-in, go to the PowerPivot work area and you’ll notice under the “Get External Data” section a new icon that allows you to directly discover and consume data from DataMarket.

[Screenshot in the original post: the new DataMarket icon in the “Get External Data” section]

2. In addition to being able to directly connect to DataMarket, is there any other new functionality in PowerPivot for Excel or PowerPivot for SharePoint?

This updated version only adds direct access to the Marketplace; no additional functionality is being shipped in PowerPivot for Excel. There are no changes in PowerPivot for SharePoint either.

3. I just installed the latest version of PowerPivot from the Download Center, but I don’t see the DataMarket icon.

It takes about 24 hours to replicate the new bits to all the servers worldwide. Please be patient and try again after a few hours.


• Wayne Walter Berry (@WayneBerry) suggested on 10/28/2010 watching Steve Yi’s  prerecorded Video: What's New in SQL Azure? session for PDC 2010:

SQL Azure is Microsoft’s cloud data platform. It initially offered a relational database as a service; there are now new enhancements to the user experience and additions to the relational data services provided. Most notably, updates to SQL Azure Data Sync enable data synchronization between on-premises SQL Server and SQL Azure.

Additionally, SQL Azure Reporting will soon become available for customers and partners to provide reporting on SQL Azure databases. This session will provide an overview and demonstration of all the enhancements to the SQL Azure services, and explore scenarios on how to utilize and take advantage of these new features.

View it Pre-Recorded From PDC 2010.


• Steve Yi [pictured below] recommended on 10/29/2010 Eric Chu’s pre-recorded Video: Introduction to Database Manager for SQL Azure from PDC 2010’s Player app:

The database manager for SQL Azure (previously known as Microsoft® Project Code-Named “Houston”) is a lightweight and easy-to-use database management tool for SQL Azure databases. The web-based tool is designed specifically for Web developers and other technology professionals seeking a straightforward solution to quickly develop, deploy, and manage their data-driven applications in the cloud.  In this session, we will demonstrate the database manager in depth and show how to access it from a web browser, how to create and edit database objects and schema, edit table data, author and execute queries, and much more.

View Introduction to Database Manager for SQL Azure by Eric Chu as a pre-recorded video from PDC 2010.

Steve also pointed on 10/29/2010 to Liam Cavanagh’s (@liamca) PDC 2010 Video: Introduction to SQL Azure Data Sync:


In this session we will show you how SQL Azure Data Sync enables on-premises SQL Server data to be easily shared with SQL Azure, allowing you to extend your on-premises data to begin creating new cloud-based applications.  Using SQL Azure Data Sync’s bi-directional data synchronization support, changes made either on SQL Server or SQL Azure are automatically synchronized back and forth.

Next we show you how SQL Azure Data Sync provides symmetry between SQL Azure databases to allow you to easily geo-distribute that data to one or more SQL Azure data centers around the world.  Now, no matter where you make changes to your data, it will be seamlessly synchronized to all of your databases whether that be on-premises or in any of the SQL Azure data centers.

View Introduction to SQL Azure Data Sync by Liam Cavanagh video from PDC 2010.

To learn more about SQL Azure Data Sync and how to sign up for the upcoming CTP, visit: http://www.microsoft.com/en-us/sqlazure/datasync.aspx


• Cihangir Biyikoglu explained Building Scalable Database Solutions Using SQL Azure – Scale-out techniques such as Sharding or Horizontal Partitioning on 10/28/2010:

As I spend time at conferences and customer events, the top FAQ I get has to be this one: SQL Azure is great, but a single database is limited in size and computational capacity. How do I build applications that need larger computational capacity? The answer invariably is: you build it the same way most applications are built in the cloud! That is, by using scale-out techniques such as horizontal partitioning, commonly referred to as sharding.

Rewinding back to the top…

When building large-scale database solutions, there are a number of approaches that can be employed.

Scaling up refers to building apps using a single large, unified piece of hardware and typically a single database that can house all the data of an app. This approach works as long as you are able to find hardware that can handle the peak load and you are OK with the typically exponential incremental cost for a non-linear increase in scalability. Scale-up class hardware typically has high administration costs due to its complex configuration and management requirements.

Scaling out refers to building apps using multiple databases spread over multiple independent nodes. Typically the nodes are cost-effective, commodity-class hardware. There are multiple approaches to scale-out, but among the alternatives, patterns such as sharding and horizontal partitioning provide the best scalability. With these patterns, apps can decentralize their processing, spread their workload across many nodes, and harness the collective computational capacity. You can achieve a linear cost-to-scalability ratio as you add more nodes.

How do you shard an app?

You will find many definitions and approaches when it comes to sharding, but here are the two principles I recommend for the best scalability characteristics.

1. Partition your workload into independent parts of data. Let’s call a single instance of this data partition an atomic unit. The atomic unit can be a tenant’s data in a multi-tenant app, a single customer in a SaaS app, a user in a web app, or a store for a branch-enabled app, etc. The atomic unit should be the focus of the majority of your app workload. This boils down to ensuring that transactions are scoped to atomic units and that the workload mostly filters to an instance of this atomic unit. In the case of a multi-tenant app, for example, the tenant_id, user_id, store_id, customer_id, etc. is present in most interactions of the app with the database, such as the following queries:

SELECT … FROM … WHERE … AND customer_id = 55, or UPDATE … WHERE … AND customer_id = 55

2. Build elastic apps that understand that the data is partitioned and that have robust dynamic routing to the partition that contains the atomic unit. This is about developing apps that discover at runtime where a given atomic unit is located and do not tie themselves to a specific static distribution of data. This typically entails building apps that cache a directory of where data is at any given time.

Clearly, the two requirements above place constraints on the developers, but they result in great scalability and price-performance characteristics in the end. Once you have built the app with these principles, app administrators can plan capacity flexibly based on the load they expect. At the inception of the app, maybe a few databases are enough to handle all the traffic to hundreds of atomic units. As your workload grows (more tenants, more traffic per tenant, larger tenants, etc.), you can provision new databases and repartition your data. If your workload shrinks, you can again repartition your data and de-provision existing databases.
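
As a rough illustration of principle #2 (this is a generic sketch, not SQL Azure-specific code; the key ranges, shard names and connection strings are invented for the example), an elastic app keeps a cached directory that maps ranges of the partitioning key to shard connection strings, routes every request through it, and refreshes it whenever a repartitioning operation changes the layout:

    using System;
    using System.Collections.Generic;

    // Hypothetical sketch of a cached shard directory with runtime routing.
    public class ShardDirectory
    {
        // Each entry maps the inclusive lower bound of a key range to the
        // connection string of the shard that currently holds that range.
        private readonly SortedList<int, string> _rangesByLowKey = new SortedList<int, string>
        {
            { 0,    "Server=shard0.database.windows.net;Database=sales_0;..." },
            { 100,  "Server=shard1.database.windows.net;Database=sales_1;..." },
            { 1000, "Server=shard2.database.windows.net;Database=sales_2;..." }
        };

        // Resolve the shard for a given atomic unit (e.g. customer_id) at runtime,
        // so the app never hard-codes which database a customer lives in.
        public string GetConnectionStringFor(int customerId)
        {
            string match = null;
            foreach (var entry in _rangesByLowKey)
            {
                if (entry.Key <= customerId) match = entry.Value;
                else break;
            }
            if (match == null) throw new ArgumentOutOfRangeException("customerId");
            return match;
        }

        // After a SPLIT/MERGE-style repartitioning, reload the directory so that
        // routing follows the data to its new location.
        public void Refresh(IDictionary<int, string> latestRanges)
        {
            _rangesByLowKey.Clear();
            foreach (var entry in latestRanges)
            {
                _rangesByLowKey.Add(entry.Key, entry.Value);
            }
        }
    }
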

Why is SQL Azure the best platform for sharded apps?

1. SQL Azure helps reduce administrative complexity and cost. It boils down to no physical administration for the OS and SQL, no patching or VMs to maintain, and built-in HA with load balancing to best utilize the capacity of the cluster.

2. SQL Azure provides great elasticity with easy provisioning and de-provisioning of databases. You don’t need to buy hardware, wire it up, and install Windows and SQL on top; you can simply run “CREATE DATABASE” and be done. Better yet, you can run “DROP DATABASE” and no longer incur a cost.

Given the benefits, many customers using the SQL Azure service build large-scale database solutions, like multi-tenant cloud apps such as the Exchange Hosted Archive service, or Internet-scale apps such as TicketDirect that require massive scale on the web.

This is the way life is today. In the next few weeks, with PDC and PASS 2010, we will be talking about features that will make SQL Azure an even better platform for sharded apps, enhancing the lives of both developers and DBAs. Stay tuned to PDC and PASS and to my blog. I’ll be posting details on the features here as we unveil the functionality at these conferences.


Mike Flasko of the WCF Data Services Team reported Data Services Client for Win Phone 7 Now Available! on 10/28/2010:

Today, at the PDC, we announced that a production-ready version of the WCF Data Services client for Windows Phone 7 is available for download from http://odata.codeplex.com.  This means that it is now simple to create an app that connects your Windows Phone 7 to all the existing OData services as well as the new ones we’re announcing at this PDC.

The release includes a version of System.Data.Services.Client.dll that is supported on the phone (both the assembly and the source code) and a code generator tool to generate phone-friendly, client-side proxies.

The library follows most of the same patterns you are already used to when programming with OData services on the desktop version of Silverlight.  The key changes from Silverlight desktop to be aware of are:

  • LINQ support in the client library has been removed as the core support is not yet available on the phone platform.  That said, we are actively working to enable this in a future release.  Given this, the way to formulate queries is via URIs.
  • We’ve added a LoadAsync(Uri) method to the DataServiceCollection class to make it simple to query services via URI.
  • So you can easily support the phone application model, we’ve added a new standalone type ‘DataServiceState’ which makes it simple to tombstone DataServiceContext and DataServiceCollection instances.

Liam Cavanagh (@liamca) posted Announcing SQL Azure Data Sync CTP2 on 10/28/2010:

Earlier this week I mentioned that we would have one additional sync session at PDC that would open up after the keynote. Now that the keynote is complete, I am really excited to point you to this session, “Introduction to SQL Azure Data Sync,” and tell you a little more about what was announced today.

In the keynote today, Bob Muglia announced an update to SQL Azure Data Sync (called CTP2) to enable synchronization of entire databases or specific tables between on-premises SQL Server and SQL Azure, giving you greater flexibility in building solutions that span on-premises and the cloud.

As many of you know, using SQL Azure Data Sync CTP1, you can now synchronize SQL Azure databases across datacenters. This new capability will allow you to not only extend data from your on-premises SQL Servers to the cloud, but also enable you to easily extend data to SQL Servers sitting in remote offices or retail stores. All with NO CODING required!

Later in the year, we will start on-boarding customers to this updated CTP2 service. If you are interested in getting access to SQL Azure Data Sync CTP2, please go here to register.

If you would like to learn more and see some demonstrations of how this will work and some of the new features we have added to SQL Azure Data Sync, please take a look at my PDC session recording. Here is the direct video link and abstract.

Video: Introduction to SQL Azure Data Sync (Liam Cavanagh) – 27 min

In this session we will show you how SQL Azure Data Sync enables on-premises SQL Server data to be easily shared with SQL Azure, allowing you to extend your on-premises data to begin creating new cloud-based applications. Using SQL Azure Data Sync’s bi-directional data synchronization support, changes made either on SQL Server or SQL Azure are automatically synchronized back and forth. Next we show you how SQL Azure Data Sync provides symmetry between SQL Azure databases to allow you to easily geo-distribute that data to one or more SQL Azure data centers around the world. Now, no matter where you make changes to your data, it will be seamlessly synchronized to all of your databases whether that be on-premises or in any of the SQL Azure data centers.


<Return to section navigation list> 

AppFabric: Access Control and Service Bus

The Windows Azure AppFabric Team published a detailed Windows Azure AppFabric PDC 2010 Brief as a *.docx file that didn’t receive large-scale promotion. Here’s a reformatted HTML version:

Overview

Businesses of all sizes experience tremendous cost and complexity when extending and customizing their applications today. Given the constraints of the economy, IT must increasingly find new ways to do more with less, while simultaneously finding new, innovative ways to keep up with the changing needs of the business. Developers are now starting to evaluate newer cloud-based platforms as a way to gain greater efficiency and agility; the promised benefits of cloud development are impressive, enabling greater focus on the business and not on the running of infrastructure.

However, customers already have a very large base of existing heterogeneous and distributed business applications spanning different platforms, vendors and technologies. The use of the cloud adds complexity to this environment, since the services and components used in cloud applications are inherently distributed across organizational boundaries. Understanding all of the components of your application – and managing them across the full application lifecycle – is tremendously challenging. Finally, building cloud applications often introduces new programming models, tools and runtimes, making it difficult for customers to transition from their existing server-based applications.

To address these challenges, Microsoft has delivered Windows Azure AppFabric.

Windows Azure AppFabric: Comprehensive Cloud Middleware

Windows Azure AppFabric provides a comprehensive cloud middleware platform for developing, deploying and managing applications on the Windows Azure Platform. It delivers additional developer productivity, adding in higher-level Platform-as-a-Service (PaaS) capabilities on top of the familiar Windows Azure application model. It also enables bridging your existing applications to the cloud through secure connectivity across network and geographic boundaries, and by providing a consistent development model for both Windows Azure and Windows Server. Finally, it makes development more productive by providing a higher abstraction for building end-to-end applications, and simplifies management and maintenance of the application as it takes advantage of advances in the underlying hardware and software infrastructure.

There are three key components of Windows Azure AppFabric:

  1. Middleware Services: platform capabilities as services, which raise the level of abstraction and reduce complexity of cloud development.
  2. Composite Applications: a set of new innovative frameworks, tools and composition engine to easily assemble, deploy, and manage a composite application as a single logical entity
  3. Scale-out application infrastructure: optimized for cloud-scale services and mid-tier components.


Following is a high-level overview of these Windows Azure AppFabric components and features.

Middleware Services

Windows Azure AppFabric provides pre-built, higher level middleware services that raise the level of abstraction and reduce complexity of cloud development. These services are open and interoperable across languages (.NET, Java, Ruby, PHP…) and give developers a powerful pre-built “class library" for next-gen cloud applications. Developers can use each of the services stand-alone, or combine services to provide a composite solution.


  • Service Bus (Commercially available now; updated CTP delivered October 2010) provides secure messaging and connectivity capabilities that enable building distributed and disconnected applications in the cloud, as well as hybrid applications across both on-premise and the cloud. It enables using various communication and messaging protocols and patterns, and removes the need for the developer to worry about delivery assurance, reliable messaging and scale.
  • Access Control (Commercially available now; updated CTP delivered August 2010) enables an easy way to provide identity and access control to web applications and services, while integrating with standards-based identity providers, including enterprise directories such as Active Directory®, and web identities such as Windows Live ID, Google, Yahoo! and Facebook.
  • Caching (New CTP service delivered October 2010; commercially available in H1 CY11) accelerates performance of Windows Azure and SQL Azure based apps by providing a distributed, in-memory application cache, provided entirely as a service (no installation or management of instances, dynamically increase/decrease cache size as needed). Pre-integration with ASP.NET enables easy acceleration of web applications without having to modify application code (see the usage sketch below).
  • Integration (New CTP service coming in CY11) will provide common BizTalk Server integration capabilities (e.g. pipeline, transforms, adapters) on Windows Azure, using out-of-box integration patterns to accelerate and simplify development. It will also deliver higher level business user enablement capabilities such as Business Activity Monitoring and Rules, as well as self-service trading partner community portal and provisioning of business-to-business pipelines.
  • Composite App (New CTP service coming in H1 CY11) will provide a multi-tenant, managed service which consumes the .NET based Composition Model definition and automates the deployment and management of the end to end application - eliminating manual steps needed by both developers and ITPros today. It also executes application components to provide a high-performance runtime optimized for cloud-scale services and mid-tier components (automatically delivering scale out, availability, multi-tenancy and sandboxing of application components). Finally, it delivers a complete hosting environment for web services built using Windows Communication Foundation (including WCF Data Services and WCF RIA Services) and workflows built using Windows Workflow Foundation.

It’s a key characteristic of all AppFabric Middleware Services that they are consistently delivered as true multi-tenant services – you simply provision, configure, and use (no installation or management of machines/instances).
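
To give a feel for the “provision, configure, and use” model, here is a minimal sketch of consuming the Caching service mentioned in the list above from .NET code. The cache name, keys and values are invented, the service endpoint and authentication token are assumed to already be in the application’s configuration file, and the DataCacheFactory/DataCache API is shown as an illustration of the CTP-era programming model rather than a definitive reference:

    using Microsoft.ApplicationServer.Caching;

    // Hypothetical sketch: a simple cache-aside pattern over the AppFabric cache.
    public class ProductDescriptionCache
    {
        private readonly DataCache _cache;

        public ProductDescriptionCache()
        {
            // Reads the cache client settings (service URL, authentication token)
            // from the application configuration file.
            var factory = new DataCacheFactory();
            _cache = factory.GetDefaultCache();
        }

        public string GetDescription(string productId)
        {
            // Try the distributed, in-memory cache first...
            var cached = _cache.Get(productId) as string;
            if (cached != null) return cached;

            // ...and fall back to the data store on a miss, then populate the
            // cache so subsequent requests are served from memory.
            string fromStore = LoadDescriptionFromStore(productId);
            _cache.Put(productId, fromStore);
            return fromStore;
        }

        private static string LoadDescriptionFromStore(string productId)
        {
            return "description for " + productId; // placeholder for a real lookup
        }
    }
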

Composite Applications

AppFabric Composition Model & Visual Designer

As developers build next-generation applications in the cloud, they are increasingly assembling their application as a composite from many pre-built components and services (either developed in house or consumed from third parties cloud services). Also, given that many cloud applications need to access critical on-premises business applications and data, there is a need to be able to compose from on-premises services easily and securely. Finally, the highly distributed nature of these composite applications require more sophisticated deployment and management capabilities for managing all of the distributed elements that span the web, middle-tier and database tier.

Microsoft is advancing its Windows Azure AppFabric cloud middleware platform to provide a full composite application environment for developing, deploying and managing composite applications. The AppFabric composition environment delivers three main benefits:

Composition Model: A set of .NET Framework extensions for composing applications on the Windows Azure platform. This builds on the familiar Azure Service Model concepts and adds new capabilities for describing and integrating the components of an application. It also provides a consistent composition model for both Windows Azure and Windows Server.


Visual Design Experience: A new Visual Studio based designer experience allows you to assemble code from your existing application components, along with newer cloud services, and tie them together as a single logical entity.

Managed as a service: The Composite Application service is a multi-tenant, managed service which consumes the Composition Model definition and automates the deployment and management of the end-to-end application - eliminating manual steps needed by both developers and ITPros today.

The composite application environment offers the following benefits:

  • Greater developer productivity through rapid assembly, linking of components and automated deployment of the entire end-to-end application;
  • Easier configuration and control of entire application and individual components;
  • End-to-end application monitoring (events, state, health and performance SLAs);
  • Easier troubleshooting (through richer diagnostics and debugging of the whole application);
  • Performance optimization of the whole application (scale-out/in, fine-tuning, migration, etc);
  • Integrated operational reporting (usage, metering, billing).
Scale-out Application Infrastructure

Both the AppFabric Services and your own composite applications built using the Composition Model are built upon an advanced, high-performance application infrastructure that has been optimized for cloud-scale services and mid-tier components. The AppFabric Container provides base-level infrastructure such as automatically ensuring scale out, availability, multi-tenancy and sandboxing of your application components. The main capabilities provided by the AppFabric Container are:


  • Composition Runtime This manages the full lifecycle of an application component including loading, unloading, starting, and stopping of components. It also supports configurations like auto-start and on-demand activation of components.
  • Sandboxing and Multi-tenancy This enables high-density and multi-tenancy of the hosted components. The container captures and propagates the tenant context to all the application and middleware components.
  • State Management This provides data and persistence management for application components hosted in the container.
  • Scale-out and High Availability The container provides scale-out by allowing application components to be cloned and automatically distributed; for stateful components, the container provides scale-out and high availability using partitioning and replication mechanisms. The AppFabric Container shares the partitioning and replication mechanisms of SQL Azure.
  • Dynamic Address Resolution and Routing In a fabric-based environment, components can be placed or reconfigured dynamically. The container automatically and efficiently routes requests to the target components and services.
Bridging On-Premises and Cloud

Finally, one of the important capabilities needed by businesses as they begin their journey to the cloud is being able to leverage existing on-premise LOB systems and to expose them selectively and securely into the cloud as web services. However, since most organizations are firewall protected, the on-premise LOB systems are typically not easily accessible to cloud applications running outside the organization’s firewall.

AppFabric Connect allows you to leverage your existing LOB integration investments in Windows Azure using the Windows Azure AppFabric Service Bus and Windows Server AppFabric. This new set of simplified tooling extends BizTalk Server 2010 to help accelerate hybrid on-/off-premises composite application scenarios, which we believe are critical for customers starting to develop hybrid applications.

image

(AppFabric Connect is available now as a free add-on for BizTalk Server 2010 to extend existing systems to both Windows Azure AppFabric and Windows Server AppFabric.)


Clemens Vasters announced the availability of his pre-recorded PDC10 session in a PDC10 Windows Azure AppFabric - Service Bus Futures post of 10/29/2010:

image My PDC10 session is available online (it was pre-recorded). I talk about the new ‘Labs’ release that we released into the datacenter this week and about a range of future capabilities that we’re planning for Service Bus. Some of those future capabilities that are a bit further out are about bringing back some popular capabilities from back in the .NET Services incubation days (like Push and Service Orchestration), some are entirely new.

One important note about the new release at http://portal.appfabriclabs.com – for Service Bus, this is a focused release that provides mostly new features and doesn’t cover the full capability scope of the production system and SDK. The goal here is to provide insight into an ongoing development process and an opportunity for feedback as we continue to evolve AppFabric. So don’t draw any implications from this release about what we’re going to do with the capabilities already in production.

Click here to go to the talk.


 Rajesh Makhija summarizes Windows Azure Identity and Access In the Cloud in this 10/29/2010 post:

Identity & Access in the Cloud

image For building applications that leverage the Windows Azure Platform, one needs to put some thought into how to manage Identity and Access, especially in scenarios where the application needs to leverage on-premise resources as well as Cloud Services for enterprise and inter-enterprise collaboration. Three key technologies that can ease this task are:

  1. Windows Azure AppFabric Access Control Service
  2. Active Directory Federation Services 2.0
  3. Windows Identity Foundation

The Windows Azure AppFabric Access Control Service helps build federated authorization into your applications and services, without the complicated programming that is normally required to secure applications that extend beyond organizational boundaries. With its support for a simple declarative model of rules and claims, Access Control rules can easily and flexibly be configured to cover a variety of security needs and different identity-management infrastructures. It acts as a Security Token Service in the cloud.

Active Directory Federation Services 2.0 is a server role in Windows Server that provides simplified access and single sign-on for on-premises and cloud-based applications in the enterprise, across organizations, and on the Web. AD FS 2.0 helps IT streamline user access with native single sign-on across organizational boundaries and in the cloud, easily connect applications by utilizing industry standard protocols and provide consistent security to users with a single user access model externalized from applications.

Windows® Identity Foundation (WIF) is a framework for building identity-aware applications. The framework abstracts the WS-Trust and WS-Federation protocols and presents developers with APIs for building security token services and claims-aware applications. Applications can use WIF to process tokens issued from security token services and make identity-based decisions at the web application or web service.
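
For illustration, here is a minimal sketch of what a claims-aware ASP.NET page might look like with WIF. It assumes a reference to the WIF assembly (Microsoft.IdentityModel) and a site already configured for federated sign-in; the page name and output logic are hypothetical:

```csharp
// Illustrative sketch only: enumerating claims in an ASP.NET page protected by WIF.
// Assumes a reference to Microsoft.IdentityModel.dll and a site already configured
// for federated authentication; the page name and output are hypothetical.
using System;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public partial class ClaimsDemoPage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // After WIF has processed the incoming security token, the thread principal
        // carries the issued claims.
        var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
        if (identity == null)
        {
            Response.Write("Not a claims-aware identity.");
            return;
        }

        foreach (Claim claim in identity.Claims)
        {
            // An application would typically make authorization decisions here,
            // for example on a role or email claim.
            Response.Write(Server.HtmlEncode(claim.ClaimType + ": " + claim.Value) + "<br/>");
        }
    }
}
```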

Scenarios

All application scenarios that involve AppFabric Access Control consist of three service components:

  • Service provider: The REST Web service.
  • Service consumer: The client application that accesses the Web service.
  • Token issuer: The AppFabric Access Control service itself.

For this release, AppFabric Access Control focuses on authorization for REST Web services and the AppFabric Service Bus. The following is a summary of AppFabric Access Control features:

  • Cross-platform support. AppFabric Access Control can be accessed from applications that run on almost any operating system or platform that can perform HTTPS operations.
  • Active Directory Federation Services (ADFS) version 2.0 integration. This includes the ability to parse and publish WS-Federation metadata.
  • Lightweight authentication and authorization using symmetric keys and HMACSHA256 signatures (a signature-check sketch in C# follows this list).
  • Configurable rules that enable mapping input claims to output claims.
  • Web Resource Authorization Protocol (WRAP) and Simple Web Token (SWT) support.
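
To make the HMACSHA256/SWT item above concrete, here is a hedged sketch of how a REST service might verify the signature on a Simple Web Token it receives; the token parsing is simplified and the SwtValidator helper is made up for illustration, not part of the ACS client libraries:

```csharp
// Hedged sketch: verifying the HMACSHA256 signature on a Simple Web Token (SWT).
// The token handling is simplified and SwtValidator is a made-up helper,
// not part of the ACS client libraries.
using System;
using System.Security.Cryptography;
using System.Text;
using System.Web;

public static class SwtValidator
{
    // An SWT is a form-encoded string whose last parameter is HMACSHA256=<signature>,
    // where the signature covers everything before that parameter.
    public static bool IsSignatureValid(string token, byte[] issuerKey)
    {
        const string marker = "&HMACSHA256=";
        int index = token.LastIndexOf(marker, StringComparison.Ordinal);
        if (index < 0) return false;

        string signedPortion = token.Substring(0, index);
        string claimedSignature = HttpUtility.UrlDecode(token.Substring(index + marker.Length));

        using (var hmac = new HMACSHA256(issuerKey))
        {
            byte[] hash = hmac.ComputeHash(Encoding.UTF8.GetBytes(signedPortion));
            string computedSignature = Convert.ToBase64String(hash);
            return claimedSignature == computedSignature;
        }
    }
}
```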

Acm.exe Tool

The Windows Azure AppFabric Access Control Management Tool (Acm.exe) is a command-line tool you can use to perform management operations (CREATE, UPDATE, GET, GET ALL, and DELETE) on the AppFabric Access Control entities (scopes, issuers, token policies, and rules).

View:

Downloads & References


• Tim Anderson (@timanderson) described AppFabric [as] Microsoft’s new middleware in this 10/29/2010 post:

I took the opportunity here at Microsoft PDC to find out what Microsoft means by AppFabric. Is it a product? a brand? a platform?

image The explanation I was given is that AppFabric is Microsoft’s middleware brand. You will normally see the word in conjunction with something more specific, as in “AppFabric Caching” (once known as Project Velocity) or “AppFabric Composition Runtime” (once known as Project Dublin). The chart below was shown at a PDC AppFabric session:

image

Of course if you add in the Windows Azure prefix you get a typical Microsoft mouthful such as “Windows Azure AppFabric Access Control Service.”

Various AppFabric pieces run on Microsoft’s on-premise servers, though the emphasis here at PDC is on AppFabric as part of the Windows Azure cloud platform. On the AppFabric stand in the PDC exhibition room, I was told that AppFabric in Azure is now likely to get new features ahead of the on-premise versions. The interesting reflection is that cloud customers may be getting a stronger and more up-to-date platform than those on traditional on-premise servers. [Emphasis added.]

Related posts:

  1. PDC day one: Windows in the cloud
  2. Microsoft PDC big on Azure, quiet on Silverlight
  3. Microsoft TechEd 2010 wrap-up: cloud benefits, cloud sceptics

Wade Wegner summarized New Services and Enhancements with the Windows Azure AppFabric on 10/28/2010:

image Today’s an exciting day!  During the keynote this morning at PDC10, Bob Muglia announced a wave of new building block services and capabilities for the Windows Azure AppFabric.  The purpose of the Windows Azure AppFabric is to provide a comprehensive cloud platform for developing, deploying and managing applications, extending the way you build Windows Azure applications today.  

At PDC09, we announced both Windows Azure AppFabric and Windows Server AppFabric, highlighting a commitment to deliver a set of complementary services both in the cloud and on-premises. While this has long been an aspiration, we haven’t yet delivered on it – until today!

Let me quickly enumerate some of the new building block services and capabilities:

  • Caching (CTP at PDC) – an in-memory, distributed application cache to accelerate the performance of Windows Azure and SQL Azure-based applications.  This Caching service is the complement to Windows Server AppFabric Caching, and provides a symmetric development experience across the cloud and on-premises.
  • Service Bus Enhancements (CTP at PDC) – enhanced to add durable messaging support, load balancing for services, and an administration protocol.  Note: this is not a replacement of the live, commercial Service Bus offering, but instead a set of enhancements provided in the AppFabric LABS portal.
  • Integration (CTP in CY11) – common BizTalk Server integration capabilities (e.g. pipeline, transforms, adapters) as a service on Windows Azure.
  • Composite Application (CTP in CY11) – a multi-tenant, managed service which consumes the .NET-based Composition Model definition and automates the deployment and management of the end-to-end application.

As part of the end-to-end environment for composite applications, there are a number of supporting elements:

  • AppFabric Composition Model and Tools (CTP in CY11) – a set of .NET Framework extensions for composing applications on the Windows Azure platform.  The Composition Model provides a way to describe the relationship between the services and modules used by your application.
  • AppFabric Container (CTP in CY11) – a multi-tenant, high-density host optimized for services and mid-tier components.  During the keynote, James Conard showcased a standard .NET WF4 workflow running in the container.

Want to get up to speed quickly?  Take a look at these resources for Windows Azure AppFabric:

Watch these great sessions from PDC10 which cover various pieces of the Windows Azure AppFabric:

Also, be sure to check out this interview [embedded in Wade’s article] with Karandeep Anand, Principal Group Program Manager with Application Platform Services, as he talks about the new Windows Azure AppFabric Caching service.

Early next week my team, the Windows Azure Platform Evangelism team, will release a new version of the Windows Azure Platform Training Kit – in this kit we’ll have updated hands-on labs for the Caching service and the Service Bus enhancements. Be sure to take a look!

Of course, I’ll have more to share over the next few days and weeks.


The Windows Azure AppFabric team posted Introduction to Windows Azure AppFabric Caching CTP on 10/28/2010:

This blog post provides a quick introduction to Windows Azure AppFabric Caching.  You might also want to watch this short video introduction.

As mentioned in the previous post announcing the Windows Azure AppFabric CTP October Release, we've just introduced Windows Azure AppFabric Caching, which provides a distributed, in-memory cache, implemented as a cloud service. 

Earlier this year we delivered Windows Server AppFabric Caching, which is our distributed caching solution for on-premises applications.  But what do you do if you want this capability in your cloud applications?  You could set up a caching technology on instances in the cloud, but you would end up installing, configuring, and managing your cache server instances yourself.  That really defeats one of the main goals of the cloud - to get away from managing all those details.

So as we looked for a caching solution in Windows Azure AppFabric, we wanted to deliver the same capabilities available in Windows Server AppFabric Caching, and in fact the same developer experience and APIs for Windows Azure applications, but in a way that provides the full benefit of cloud computing.  The obvious solution was to deliver Caching as a service. 

To start off, let's look at how you set up a cache. First you'll need to go to the Windows Azure AppFabric LABS environment developer portal (http://portal.appfabriclabs.com/ ) to set up a Project, and under that a Service Namespace. Then you simply click the "Cache" link to configure a cache for this namespace.

With no sweat on your part you now have a distributed cache set up for your application.  We take care of all the work of configuring, deploying, and maintaining the instances. 

The next screen gives you the Service URL for your cache and an Authentication token you can copy and paste into your application to grant it access to the cache. 

So how do you use Caching in your application?

First, the caching service comes with out-of-the-box ASP.NET providers for both session state and page output caching.  This makes it extremely easy to leverage these providers to quickly speed up your existing ASP.NET applications by simply updating your web.config files.  We even give you the configuration elements in the developer portal (see above) that you can cut and paste into your web.config files. 

You can also programmatically interact with the cache to store and retrieve data, using the same familiar API used in Windows Server AppFabric Caching.  The typical pattern used is called cache-aside, which simply means you check whether the data you need is in the cache before going to the database.  If it's in the cache, you use it, speeding up your application and alleviating load on the database.  If the data is not in the cache, you retrieve it from the database and store it in the cache so it's available the next time the application needs it.
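
As an illustration of the cache-aside pattern described above, here is a minimal sketch using the AppFabric Caching client API (DataCacheFactory/DataCache). The Product type, the 30-minute timeout, and the data-access method are assumptions for the example; the cache client is expected to pick up its endpoint and authentication token from the configuration elements the portal provides:

```csharp
// Minimal cache-aside sketch using the AppFabric Caching client
// (Microsoft.ApplicationServer.Caching). The Product type, the 30-minute timeout,
// and LoadProductFromDatabase are assumptions for illustration only; the cache
// client reads its endpoint and token from the config supplied by the portal.
using System;
using Microsoft.ApplicationServer.Caching;

[Serializable]
public class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public class ProductRepository
{
    // DataCacheFactory is relatively expensive to create, so reuse a single instance.
    private static readonly DataCacheFactory CacheFactory = new DataCacheFactory();
    private static readonly DataCache Cache = CacheFactory.GetDefaultCache();

    public Product GetProduct(string productId)
    {
        // 1. Check the cache first.
        var product = Cache.Get(productId) as Product;
        if (product != null) return product;

        // 2. Cache miss: fall back to the database...
        product = LoadProductFromDatabase(productId);

        // 3. ...and store the result so the next request is served from the cache.
        Cache.Put(productId, product, TimeSpan.FromMinutes(30));
        return product;
    }

    private Product LoadProductFromDatabase(string productId)
    {
        // Hypothetical data-access call; replace with your own query.
        throw new NotImplementedException();
    }
}
```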

The delivery of Caching as a service can be seen as our first installment on the promise of AppFabric - to provide a consistent infrastructure for building and running applications whether they are running on-premises or in the cloud. You can expect more cross-pollination between Windows Server AppFabric and Windows Azure AppFabric in the future.

We invite you to play with the Caching CTP and give us feedback on how it works for you and what features you would like to see added.  One great way to give feedback is to fill out this survey on your distributed caching usage and needs.

As we move towards commercial launch, we'll look to add many of the features that make Windows Server AppFabric Caching extremely popular, such as High Availability, the ability to emit notifications to clients when they need to refresh their local cache, and more.


Windows Azure AppFabric team reported Windows Azure AppFabric SDK October Release available for download on 10/28/2010:

A new version of the Windows Azure AppFabric SDK V1.0 is available for download starting 10/27. The new version introduces a fix for an issue that caused the SDK installation to roll back on 64-bit Windows Server machines with BizTalk Server installed.

The fix is included in the updated SDK download.  If you are not experiencing this issue, there is no need to download the new version of the SDK.


<Return to section navigation list> 

Windows Azure Virtual Network, Connect, and CDN

Daniel J. St. Louis explained PDC: Why Steve Jobs' Pixar uses Microsoft Windows Azure in this 10/29/2010 post:

image

Everyone knows Pixar, the studio that makes fantastic computer-graphics movies. Most people probably realize they couldn’t just make one of those films with their desktop PC. But few people may know that’s not necessarily because of the CG software, but mainly because of the way Pixar converts all that computer data into video.

(Steve Jobs is one of the three founding fathers of Pixar Animation Studios)

Through a process called rendering, a CG movie is converted from data to video frame by frame. There are 24 frames per second in film. With all the data that go into one frame of, say, "Up," it would take one computer more than 250 years to render a Pixar movie, said Chris Ford, a business director with the studio.

"Generally, if you’re a studio you’ll have a data center, known as a render farm, typically with 700 to 800 processors," he said today. Ford joined Bob Muglia, president of the Microsoft Business Division, on stage at the Professional Developers Conference in Redmond.

Why was he in Redmond? Because, in a proof of concept, Pixar has taken its industry-leading rendering technology — software called RenderMan — and put it in the cloud. The idea is that anybody can use Renderman — via Microsoft’s Windows Azure cloud-computing platform — taking advantage of thousands of processors, connected via the Internet, to render CG video at relatively quick speeds.

All this data — each balloon, each car, each building, each light source, each texture — are compiled into CG files that must be rendered to create one frame of "Up." (Image courtesy of Pixar)

With the tool, users could choose the speed at which their video is rendered, paying more for rush jobs and paying less if they have some time to wait. That’s possible because of the elasticity of cloud platforms such as Azure — they can automatically manage how many processors are working on projects at a time.

Ford said Pixar — which, by the way, was co-founded by Apple CEO Steve Jobs — likes Windows Azure also because it is dependable and is a choice that’s guaranteed to be around for a while. Cloud-based RenderMan, of course, represents a revenue source for Pixar not that typical for a movie studio.

Pixar’s RenderMan technology is widely used across the industry. Ford said just about every CG shot you’ve seen on the silver screen since the 1995 release of "Toy Story" was rendered using RenderMan.

Through Windows Azure, Pixar wants to make sure every CG shot you see on the small screen is as well.

Microsoft’s Bob Muglia, left, and Pixar’s Chris Ford talk to the audience at PDC in Redmond. (Image courtesy of Nick Eaton/seattlepi.com)


•• Jim O’Neill posted a paean to interoperability in his Windows Azure for PHP and Java article of 10/29/2010:

Historically Microsoft’s Professional Developer Conference (PDC) is the pre-eminent developer conference for Microsoft technologies, and while that remains true, you may have noticed this year’s session titles included technologies like “PHP” and “Java” – typically with “Azure” in close proximity.  Although there’s a lingering perception of Microsoft as ‘closed’ and ‘proprietary’, Windows Azure is actually the most open of any of the cloud Platform-as-a-Service vendors today, and recent announcements solidify that position.

You might have seen my post on the Azure Companion a few weeks ago, and today at PDC there were even more offerings announced providing additional support for Java and PHP in the cloud:

  1. The Windows Azure tools for Eclipse/Java, an open source project sponsored by Microsoft, and developed and released by partner Soyatec.  We expect them to make a Community Technology Preview of the Windows Azure tools for Eclipse/Java available by December 2010.
  2. Release of version 2.0 of the Windows Azure SDK for Java, also from Soyatec.
  3. November 2010 CTP of the Windows Azure Tools for Eclipse/PHP as well as the November 2010 CTP of the Windows Azure Companion.
  4. The launch of a new website dedicated to Windows Azure and PHP.

All of the sessions at PDC are available online, so if you’re looking for more details on using PHP or Java in the cloud, check out these sessions (and actually let me know as well!):

To stay engaged with the conversation on interoperability of Windows Azure (and other Microsoft technologies) with PHP, Java, and more, tap into the resources below:

See also James Governor’s detailed Windows is the dominant (Java) Developer OS: So what is Microsoft going to do about it? post of 10/28/2010 and Bruce Kyle’s Java Developers Get New Tools for Windows Azure at PDC10 of 10/31/2010.


•• The Windows Azure Virtual Network Team invited you on 10/28/2010 to be notified when the Windows Azure Connect CTP is available:

image

Click here to sign up with your Windows Live ID.


•• Anthony Chavez prerecorded his Understanding Windows Azure Connect session for PDC 2010:

image Windows Azure Connect is a new Windows Azure service that enables customers to set up secure, IP-level network connectivity between their Windows Azure compute services and existing, on-premise resources. This eases enterprise adoption by allowing Windows Azure applications to leverage and integrate with a customer’s current infrastructure investments. For example, using the Windows Azure Virtual Network, customers can migrate a line-of-business application to Azure that requires connectivity to an on-premise SQL Server database which is secured using Active Directory-based Windows Integrated Authentication. In this session, we will give an overview of Windows Azure Virtual Network's functionality, discuss usage scenarios, and walk through the steps involved in setting up and managing a Virtual Network.

image

My Twitter panel message reads: @pdcevent The PPTX file for Anthony Chavez's #azure #pdc2010 #cs67 presentation isn't available from the link in Download M[aterials] #PDC10.

The failing URL for Download Materials is: http://az8714.vo.msecnd.net/presentations/CS67-Chavez.pptx. The http://az8714.vo.msecnd.net/downloads/cs67_thumb_ppt.PNG URL returns the expected thumbnail.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

•• Steve Fox explained Integrating SharePoint [2010] with Windows Azure in this pre-recorded PDC 2010 session:

image SharePoint 2010 provides many ways to integrate with Windows Azure. From simple SQL Azure data-centric applications to complex workflow that leverages custom Azure services, there is great potential to integrate these two growing technologies. This session will provide a code-centric view of the ways in which you can integrate with Azure, covering areas such as web part development, data programmability, service consumption, and Business Connectivity Services integration. If you’re looking to take your SharePoint solutions into the cloud with Azure, then you can’t miss this session.


•• Chris Mayo pre-recorded his SharePoint in the Cloud: Developing Solutions PDC 2010 session:

With the most recent release of SharePoint comes the ability to build and deploy applications of many types and flavors. Using SharePoint Server 2010, you can develop a wide variety of applications for the enterprise using .NET, Silverlight, JavaScript, and much more. But with the release of SharePoint Online on our horizon, what are the possibilities and the boundaries here? And how is the design and development process different? If you want to develop for SharePoint Online, but are unsure about the boundaries then you’ll not want to miss watching this demo-heavy session.

image


•• J. D. Meier presented the starter version of a Windows Azure Scenarios Map on 10/31/2010:

image The Windows Azure scenarios map is a consolidated and shared view of the common scenarios and tasks for developing applications on the Windows Azure platform. You will find Getting Started and Architecture scenarios first, followed by other common areas. Scenarios in each group should represent common tasks developers on this platform would face.

image

Your call to action here is simply to scan the Windows Azure Scenarios Map below and either share your scenarios in the comments or email your scenarios to me at feedbackandthoughts at live.com.  Be sure to share your scenarios in the form of “how to blah, blah, blah …” – this makes it much easier to act on and update the map.

For a quick review of what a good Scenarios Map looks like, see my related post, 5 Keys to Effective Scenario Maps.

Categories

  • Getting Started
  • Architecture and Design
  • Access Control
  • ASP.NET Applications
  • Caching
  • Configuration
  • Data Access / Storage
  • DataMarket (“Dallas”)
  • Deployment
  • General
  • Logging / Health Monitoring
  • Performance
  • Security
  • Service Bus
  • SQL Azure
  • Transactions
  • WCF
  • WIF (Windows Identity Foundation)
  • Windows Azure VM (Virtual Machine) Role
  • Worker Role
  • Workflow

Windows Azure Scenarios Map

Getting Started

  • How to set up your development environment for Windows Azure development.
  • How to build a web site in a web role.
  • How to deploy an application to Windows Azure.
  • How to debug deployed applications.
  • How to build a worker process in a worker role.

Architecture and Design

  • How to implemented layered architecture in the cloud.
  • How to run an on-premise app in the cloud.
  • How to design an application to scale across multiple data centers.
  • How to design a loosely coupled system.
  • How to design around latency issues.
  • How to dynamically increase or decrease the number of role instances based on load.
  • How to use Azure diagnostics to troubleshoot production issues.
  • How to provide incremental progress feedback from a worker role (for ex: progress bar).
  • How to design for integration (custom cloud applications / finished services (BPOS) / on premise / ESB)
  • How to call on-premise data stores from Windows Azure.
  • How to decide if your application is right for Windows Azure (on-premise vs. cloud, advantages/disadvantages).
  • How to scale horizontally.
  • How to scale vertically.
  • How to manage state in the cloud.
  • How to manage logs.
  • How to cache data.
  • How to design for asynchronous work.
  • How to design a tightly bound system.
  • How to segregate application logic between Azure roles.
  • How to create a small-to-medium Web app.
  • How to create a large Web application.
  • How to manage separation of responsibilities in functional roles.
  • How to build a system using both hosted data and on-premises data.
  • How to coordinate multiple worker roles.
  • How to create a parallel processing application.
  • How to use a distributed cache.

Access Control

  • How to authorize access to a REST interface.
  • How to implement complex logic in claims mapping.
  • How to configure my application for multiple authentication methods?
  • How to perform sign-out from my claims aware application?
  • How to Enable Tracing
  • How to use Windows Azure platform AppFabric Access Control to obtain a Simple Web Token (SWT) after providing a SAML token.
  • How to integrate Windows Azure platform AppFabric Access Control with ADFS.
  • How To: Configure Facebook as an Identity Provider for Azure AppFabric Access Control Service (ACS)
  • How To: Configure Google as an Identity Provider for Azure AppFabric Access Control Service (ACS)
  • How To: Configure Yahoo! as an Identity Provider for Azure AppFabric Access Control Service (ACS)

ASP.NET Applications

  • How to connect to SQL Azure.
  • How to connect to Windows Azure Storage.
  • How to authenticate users using Live ID.
  • How to implement a RESTful interface in an ASP.NET application.
  • How to access certificates.
  • How to manage state in an application.
  • How to connect to a WCF service with an internal endpoint.
  • How to encrypt a value using RSA encryption.
  • How to monitor health of other VM instances.
  • How to access performance counters from code.

Caching

  • How to leverage a distributed cache (e.g. Velocity)
  • How to swap out cache providers.
  • How to cache data effectively.
  • How to expire the cache.
  • How to use Azure's VM local storage.

Configuration

  • How to configure a web role.
  • How to configure a worker role.
  • How to cache configuration data.
  • How to decide what settings should go in ServiceConfiguration vs. Web/App Configs.
  • How to programmatically change configuration settings.

Data Access / Storage

  • How to access Azure Storage tables.
  • How to access Azure Storage queues.
  • How to connect to SQL Azure.
  • How to decide whether to use Azure Table Storage vs. SQL Azure
  • How to access Windows Azure Storage from Silverlight
  • How to upload files to BLOB storage.
  • How to handle connection timeouts with Azure Storage.
  • How to design an extensible schema that will never need to be updated.
  • How to choose a partition key for different entities.
  • How not to get too much data into one partition.
  • How to load initial/domain data (ETL)
  • How to do BI in the cloud.
  • How to store BLOB data for an on premise application.
  • How to organize your containers and blobs efficiently.
  • How to track/retrieve additional information/properties about blobs
  • How to authorize access to containers/blobs
  • How to name storage containers in WAS (what are the restrictions for naming?)
  • How to design a scalable partitioning strategy for WAS.
  • How to authorize access to BLOBs using Shared Access Signatures
  • How to persist a VM drive to Azure Drives.
  • How to track/retrieve additional BLOB properties.
  • How to use queues for IPC.
  • How to deploy data to an Azure Drive.
  • How to create a WCF Data Services interface for Windows Azure Storage.
  • How to expose SQL Azure through a WCF Data Services interface.
  • How to support transactional data in Azure Storage.
  • How to repartition your live data.
  • How to repartition data.
  • How to programmatically reset and obtain storage access keys.

DataMarket (“Dallas”)

  • How to use DataMarket from my application
  • How to address security, billing, auditing, and authenticating

Deployment

  • How to install an SSL certificate for an Azure ASP.NET app.
  • How to determine number of instances of roles to deploy.
  • How to roll out a deployment.
  • How to roll back a deployment.
  • How to create and install a deployment certificate.
  • How to deploy applications programmatically through the portal API’s.

General

  • How to push peak loads to the cloud to reduce the size of an on premise data center.
  • How to decide if your application is right for Windows Azure (on-premise vs. cloud, advantages/disadvantages)
  • How to run your own VM in the cloud.
  • How to develop with a team of developers.

Logging / Health / Monitoring

  • How to determine your log destination (EventLog, TableStorage, Flatfile, etc)
  • How to view logs
  • How to monitor the health of a deployed application
  • How to log information from IIS (until IIS Logs are available).
  • How to monitor web roles.
  • How to monitor worker roles.
  • How to alert/alarm if needs are beyond Windows Live Alerts (currently what Azure provides)
  • How to throttle your logging.

Performance

  • How to design around Azure throttling.
  • How to simulate load.
  • How to access/view performance counters.
  • How to do capacity planning.
  • How to compare BLOB storage against VM drives.
  • How to measure performance against CRUD.

Security

  • How to encrypt values stored in configuration files.
  • How to perform single sign on (Federation).
  • How to turn your application into a claims aware application.
  • How to authenticate callers.
  • How to identify callers.
  • How to manage personally identifying information / sensitive data in the cloud.
  • How to leverage roles (Membership).
  • How to leverage claims.
  • How to turn claims into roles.
  • How to sanitize logging events for sensitive data.
  • How to prevent CSRF attacks.
  • How to protect configuration settings.
  • How to encrypt persisted data.
  • How to integrate with my Membership Provider.
  • How to secure any sensitive data that is sent between cloud applications.
  • How to store sensitive data in the cloud.
  • How to secure sensitive data sent to a cloud app.
  • How to build an STS.
  • How to integrate with Active Directory.

Service Bus

  • How to use the service bus to expose on-premise services to Windows Azure hosted applications.
  • How to use the service bus from a Silverlight client.
  • How to expose “discoverable” services via the service bus.
  • How to authenticate service bus access with AppFabric Access Control.

SQL Azure

  • How to decide between Windows Azure Storage and SQL Azure.
  • How to implement separation of privileges in SQL Azure.
  • How to avoid SQL Azure throttling.
  • How to deploy SQL Azure TSQL or DB schemas as part of application deployment process.
  • How to backup SQL Azure databases.
  • How to restore SQL Azure databases.
  • How to use SQL roles and accounts in conjunction with claims based authentication mechanisms.

Transactions

  • How to implement 2-phase commit.
  • How to roll back.
  • How to update multiple pieces of data at the same time.
  • How to lock effectively.

WCF

  • How to set up transport security for WCF on Windows Azure.
  • How to use client certs with a WCF service on Windows Azure.
  • How to use on-premise user stores for authentication and authorization.
  • How to use internal endpoints with a WCF service.
  • How to expose an on-premise WCF service to a Windows Azure hosted client.
  • How to build a WCF service in a worker role.

WIF (Windows Identity Foundation)

  • How to set up ADFS as an STS for Active Directory.
  • How to create a custom STS.
  • How to create a federation provider STS.
  • How to use a custom claims repository.
  • How to: Using the FederatedPassiveSignIn ASP.NET User Control
  • How to use WSTrustChannelFactory and WSTrustChannel
  • How to identify from Windows Phone to ASP.NET web site?
  • How to identify from Windows Phone to WCF service?
  • How to identify from iPad/iPhone to WCF service?
  • How to identify from droid to WCF service?
  • How to identify from Silverlight to WCF service?
  • How to Enable Tracing
  • How to log WCF and WIF traces to different tracing sources
  • How to use svctraceviewer.exe for troubleshooting
  • How to use svctraceviewer.exe to inspect WIF traces
  • How to: Establishing Trust from an ASP.NET Relying Party Application to an STS using FedUtil
  • How to package and deploy claims aware application on-prem?
  • How to package and deploy claims aware application to Azure?
  • How to request an ActAs token.

Windows Azure VM (Virtual Machine) Role

  • How to choose between VM Role and designing for a Web or Worker Role
  • How to design for the VM Role

Worker Role

  • How to communicate between different types of worker roles.
  • How to schedule work.
  • How to group different types of work.
  • How to determine the number of worker roles.
  • How to determine if multiple threads should be used

Workflow

  • How to design for asynchronous work.
  • How to design for integration (custom cloud applications / finished services (BPOS) / on premise / ESB)

Contributors and Reviewers

  • External Contributors / Reviewers – Adam Grocholski, Andy Eunson, Bill Collette, Kevin Lam, Terrance Snyder, Will Clevenger
  • Microsoft Contributors / Reviewers – Alik Levin, Carlos Farre, Julian Gonzalez, Mohit Srivastava, Rob Boucher

My Related Posts


•• Jim Nakashima (@jnakashima) pointed to his PDC10 presentation in a Rapid Developer Deploy to Windows Azure post of 10/29/2010:

image At PDC10, I did a session Building, Deploying and Managing Windows Azure Applications that covered the end to end experience of using Windows Azure.

To watch the recording, go to http://player.microsoftpdc.com/session and click on “Guide”; my session is at 11:30 AM on Thursday.

As part of the session I was able to show some of the cool new features we have coming in our next release (coming sometime in November or December of 2010) and I’m following up with a series of blog posts.

The first in this series shows the rapid developer deploy feature.

As part of developing a web site that runs on Windows Azure, it is really common for the developer not only to iterate on F5 but also to iterate in the cloud.

Today, iterating on the cloud requires a full redeployment which can be a lengthy operation.  We set out to make this scenario better.

I’ll start with a Cloud Service that has a single Web role and I’ll just use the default ASP.NET template.

image

To enable rapid developer deploy, right click on the Cloud Service node in Solution Explorer, select publish and on the publish dialog, enable remote desktop (post coming soon) and select the checkbox to “Enable Web Deploy”:

image

Once the deployment has completed, view the web page in IE:

image

and you’ll see the standard ASP.NET template. Since this is not live at the time of this writing, you’ll see that I’m showing this in one of our test clusters, cloudapp-preview.net.

image

From here, I can go back and make any changes I want to my web site. In this case, I’ll just update the text on that main page.

image

Hit “save” and right click on the WebRole1 project this time and select publish.

image

This will bring up a dialog where the settings to deploy to the web role are already setup.  Make sure you fill out the username/pass at the bottom of the dialog.

image

Note: The settings for this profile are saved to the project folder; in my sample they are saved in a file called WebRole1.Publish.xml. The tools create this file on your behalf.

After the web deploy publish completes – really in a matter of seconds (depending on the extent of the changes and payload, of course) – I can go back to IE, refresh the browser and…

image

The changes were deployed to Windows Azure. Yes, in a matter of seconds. As someone who iterates a lot on the cloud, I’m really happy we have this feature coming in v1.3.

What’s Going On?

The checkbox on the cloud service deploy dialog sets up MS Deploy on the web role instance; we then use the standard Visual Studio MS Deploy tooling to deploy to that instance.

We need to have Remote Desktop enabled, not because it uses remote desktop but because it needs a user account.  This requirement is likely to be removed over time.

Why Developer Deploy?

This is supported only for a single-instance developer scenario because we’re only modifying the instance; we’re not modifying the package that Windows Azure will use to create new instances.

Additionally, because Windows Azure instances are stateless (to support scale out) an OS update or other management operation can result in that instance being replaced by a different instance which means that the changes you made through web deploy will be lost.

As you can imagine, those behaviors are not acceptable for production but are totally fine for the developer scenario.

We’ll eventually get a full story around this, but we didn’t want to wait on that before solving the developer case.

Finally

This will be coming between now and the end of the year (2010) – let me know what you think.


•• Brian Harry supplemented his PDC 2010 presentation with a TFS on Windows Azure at the PDC post of 10/28/2010:

image Hosting of ALM in the cloud as software as a service is gradually becoming more and more popular.  The vision, of course, is ALM as a seamless service – making it really easy to get started, easy to scale, easy to operate, easy to access, …  You’ve seen me write from time to time about our work with 3rd party hosting and consulting companies offering TFS services.  We did a bunch of work in TFS 2010 on both the technical and licensing front to enable a new generation of cloud based TFS services.

Several months ago, I wrote a post about our initial investigation into porting TFS to the Windows Azure platform.  Since then, we’ve continued to pursue it and today, at this year’s PDC, I demoed Team Foundation Server running on the Windows Azure platform.  We announced that we’ll deliver a CTP (basically an early preview) next year.  We aren’t, by any means, done with the technical work, but, for now,  it’s a great case study to see what is involved in porting a large scale multi-tier enterprise application to Azure.

The demo I did today represents an important step forward in getting TFS running on the Azure cloud platform.  When I wrote my post a few months ago, I talked about a few of the challenges porting the TFS data tier from SQL Server to SQL Azure.  What I demoed today included not only that but also the remaining components of TFS running in the cloud – the ASP.NET application tier (running as a Web Role), the TFS Job Service (formerly a Windows service for periodic background tasks, now running as a worker role) and the TFS Build controller/agent (running in an Azure VM role).  I demoed connecting from a web browser, Visual Studio and from Microsoft Test Manager.

One of the cool things (though it makes for a mundane demo :)) is that, for the end user, TFS in the cloud looks pretty much like TFS on-premises.  Other than the fact that you can log in with an internet identity rather than a Windows identity, you’ll find that the Visual Studio experience, for example, looks pretty much identical to a local TFS scenario.  Here’s a screenshot of the “new experience” of logging in with an internet ID:

TFSintheCloud-NewProject2

And here’s a screenshot of VS connected to TFS on Azure (as you can see, there’s not much difference):

TFSintheCloud-VS

The good news is that a lot of the work we did in TFS 2010 to improve the TFS architecture to scale from simple client installs (TFS Basic) to very complex, centrally hosted enterprise installs really helped prepare us for the port to Azure.  The result is that it has been a pretty reasonable project.  I’ll describe some aspects of the effort.

Porting TFS to SQL Azure

As I mentioned, step 1 was getting the SQL ported to SQL Azure.  TFS is a very large SQL app and therefore, we feared this would not be a simple task.  TFS has over 900 stored procedures and over 250 tables.  Despite how involved the app is, the system was up and running with an investment of 2 people for about 1 month.  The biggest issues we had to deal with were:

  • OpenXML – SQL Azure does not support the OpenXML mechanism that SQL 2008 does for passing lots of data to a stored procedure.  Our initial port was to move from OpenXML to the XML data type.  However, we’ve found that to not be the best solution (some performance issues and XML escaping issues) and ultimately ported to Table Valued Parameters (TVPs); a small C# sketch of the TVP approach follows this list.
  • Select INTO – SQL Azure does not support “select into” because it requires that all tables have a clustered index and select into doesn’t allow for that.  We ported all of our select into occurrences to use explicit temp tables.
  • Sysmessages – SQL Azure doesn’t allow you to add messages to sysmessages.  This means you can’t properly raise custom errors (something we make heavy use of).  The truth is, even if you could, it wouldn’t be a great solution because sysmessages doesn’t really have a good localization store in a multi-tenant environment with different tenants with different languages.  We have created a new mechanism for raising errors from SQL and ensuring that the end user ultimately gets a localized, intelligible message.
  • Clustered indexes – As I mentioned above, SQL Azure requires clustered indexes on all tables.  Some of ours didn’t have them.  It was pretty straightforward to go back and add them, though.
  • No full text index – SQL Azure does not yet support full text indexing.  For now we’ve disabled it in TFS (the only place we used it was in work item tracking) and are figuring out what our long term plan for that will be.
  • Database directives – There are a lot of database directives that we use that SQL Azure doesn’t support – partitioning, replication, filegroups, etc.  Removing them was not a particularly big issue.
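
As a rough illustration of the OpenXML-to-TVP port mentioned in the first bullet, here is a hedged sketch of passing a set of IDs to a stored procedure as a table-valued parameter with ADO.NET. The table type and procedure names are hypothetical, not actual TFS schema objects:

```csharp
// Hedged sketch of replacing an OpenXML-style "pass lots of rows" call with a
// table-valued parameter (TVP). "dbo.IdTable" and "dbo.prc_GetWorkItems" are
// hypothetical names, not actual TFS schema objects.
using System.Data;
using System.Data.SqlClient;

public static class TvpSample
{
    // Assumes a server-side type such as:
    //   CREATE TYPE dbo.IdTable AS TABLE (Id INT NOT NULL PRIMARY KEY);
    public static void GetWorkItems(SqlConnection connection, int[] workItemIds)
    {
        var table = new DataTable();
        table.Columns.Add("Id", typeof(int));
        foreach (int id in workItemIds)
        {
            table.Rows.Add(id);
        }

        using (var command = new SqlCommand("dbo.prc_GetWorkItems", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            SqlParameter idsParameter = command.Parameters.AddWithValue("@ids", table);
            idsParameter.SqlDbType = SqlDbType.Structured;
            idsParameter.TypeName = "dbo.IdTable";

            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Process each returned row here.
                }
            }
        }
    }
}
```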

While this is not an exhaustive list of the issues, it’s a pretty good list of the larger ones we had to deal with.

Porting to the Web Role

While the TFS app tier is also pretty big, a straight port of it was surprisingly simple – about 1 person, 1 month.  Azure’s implementation of ASP.NET is pretty faithful to the on-premises solution so most stuff ports pretty well.  The harder parts of this (that have taken a few people several months) have been adapting to some of the cloud platform differences – with identity being the biggest one.  Here’s some detail…

  • Identity – Windows Azure doesn’t have Windows identities for use in the same way you would use them in an on-premises app.  You need to use an internet identity system.  Our first cut at this was to attempt direct use of LiveID – um, it turned out to be much more complicated than we originally expected.  After getting our head out of the sand, we realized the right thing was to use App Fabric Access Control Services (otherwise known as ACS).  ACS gives us an authentication system that is OpenID compatible and supports many providers, such as LiveID, Yahoo, Google, Facebook, and Active Directory Federation.  This enables people to connect to our Azure based TFS service with whatever identity provider they choose.  TFS was pretty baked to understand Windows identities so it was a fair amount of work to port and had some pretty non-obvious ramifications.  For instance, we can’t put up a username/password dialog any more.  These services (LiveID, etc) require that you use their web page to authenticate.  That means that everywhere we used to get Windows credentials, we now have to pop up a web page to enable the user to enter their username/password with their preferred provider.  From a technical stand point, identity has been the most disruptive change.
  • Service identities – Not only do we have to deal with user identities (as described above) but TFS also has a number of cooperative services and they need to authenticate with each other.  The end user oriented authentication mechanisms don’t really support this.  Fortunately, ACS has introduced something called “Service identities” for this exact purpose.  They enable headless services to authenticate/authorize each other using ACS as the broker.
  • Calls from the cloud – The relationship between a TFS server and a TFS build controller (in TFS 2010 and before) is essentially peer to peer.  Connections can be initiated from either side.  This is potentially a big issue if you want your TFS server in the cloud and a build machine potentially on the other side of a Network Address Translation (NAT) device.  One option is to use the Azure Service Bus to broker the bi-directional communication.  We looked at our alternatives and ultimately decided to change the communication protocol so that the build controller acts like a pure client – it contacts the TFS server but the TFS server never contacts it.  This is going to allow us, long term, to have simpler trust relationships between them and simplify management.
  • Deployment – Another sizable chunk of work has been changing TFS to deploy properly to Azure.  TFS used an MSI, wrote to the Registry, the GAC, etc.  None of this is allowed on Azure – Azure apps are deployed as a zip file with a manifest that basically controls where files go.
  • Tracing – We had work to do to hook our tracing (System.Diagnostics) and event logging into the Azure diagnostics store – neither of these is automatic.
  • Admin console – The admin console had cases where it accessed the TFS database directly but that’s not very desirable in Azure.  The SQL Azure authentication system is different.  We didn’t want to complicate the world by exposing two authentication systems to administrators so we’ve eliminated the direct database access from the admin console and replaced it with web service calls.

Again, this isn’t a comprehensive list but captures many of the largest things we’ve had to do to get TFS on Azure.

Tuning for Azure

There’s a class of work we are doing that I call “Tuning for Azure” and by this, I mean, it’s not strictly necessary to provide a TFS service on Azure but it makes the service more practical.  My best examples off the top of my head are:

  • Blob storage – In our on-premises solution, all TFS data is stored in SQL Server – work item attachments, source code files, Intellitrace logs, …  This turns out not to be a good choice on Azure – really for 2 reasons: 1) SQL Azure databases are capped at 50GB (might change in the future, but that’s the limit today) and 2) SQL Azure storage is more than 50X more expensive per GB than Windows Azure storage.  As a result we’ve done work to move all of our blob data out of SQL and into a blob storage service that can be hosted on Windows Azure blobs.  We believe this will both significantly reduce the pressure on the database size limit and reduce our operational cost. (A minimal sketch of writing to Windows Azure blob storage follows this list.)
  • Latency and bandwidth – The fact that your TFS server will always be across the WAN means you need to think even harder about how you use it.  As much as can be local needs to be local.  Calls to the server really need to be async and cancelable.  You have to really minimize round trips and payload size, etc.
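
Here is a minimal, hypothetical sketch of the kind of blob write described in the first bullet, using the v1.x StorageClient library; the connection string, container name, and method are placeholders rather than TFS's actual blob service:

```csharp
// Hypothetical sketch of writing blob data (for example, a work item attachment)
// to Windows Azure blob storage with the v1.x StorageClient library. The connection
// string, container name, and method are placeholders, not TFS's actual blob service.
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

public static class AttachmentStore
{
    public static void SaveAttachment(string connectionString, string attachmentName, byte[] content)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlobClient client = account.CreateCloudBlobClient();

        // One flat "attachments" container is an assumption for the example.
        CloudBlobContainer container = client.GetContainerReference("attachments");
        container.CreateIfNotExist();

        CloudBlob blob = container.GetBlobReference(attachmentName);
        blob.UploadByteArray(content);
    }
}
```
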
Running as a Service

The biggest category of work we have left is stuff I call “Running as a Service”.  Most all of this applies to running a traditional on-premises 3 tier app as a service but it becomes more important the larger the scale of the service is and a worldwide service like Azure makes it really important.  An example of something in this category is upgrades.  In TFS today, when you upgrade your TFS version, you take the server down, do the upgrade and then bring the server back up.  For a very large on-premises installation, that might entail a few hours of down time up to about a day (for crazy big databases).  For a service the size of a global service, you can’t take the service down for days (or even hours) while you upgrade it.  It’s got to be 24x7 with minimal interruptions for small segments of the population.  That means that all upgrades of the TFS service have to be “online” without ever taking a single customer down longer than it takes to update their own data.  And that means running multiple versions of the TFS simultaneously during the upgrade window.  It’s by no means the only big “running as a service” investment but it’s a good example of how expectations of this kind of service are just different.

We also haven’t yet tackled what billing would look like for this kind of online service.  That can also be way more complicated than you might, at first, think.

That’s a lot of detail but it’s a pretty cool milestone to be able to demo the great progress we’ve made and share our experiences porting a large existing on-premises app to a cloud architecture.  As always, it’s pretty fun to be messing around with the latest and greatest technology.  As I said at the beginning, we’re still primarily looking at this as technological exercise and aren’t ready to talk about any product plans.  For now, TFS provides a great on-premises solution and we have a growing set of partners providing hosted services for those that want to go that route.  The move to the cloud is certainly gaining momentum and we’re making sure that your investment in TFS today has a clear path to get you there in the future.

Brian noted in response to a comment:

No details about joining preview programs yet.

The extent of customization is still being worked out.  Customization will certainly be supported, but we believe there will be some limits compared to what you can do with on-premises TFS.


•• Mary Jo Foley (@maryjofoley) posted Orleans: More on Microsoft's cloud programming model in the sky on 10/28/2010 to ZDNet’s All About Microsoft blog:

image Today, Microsoft’s programming model for the cloud is .Net. At some point in the future, it may become Orleans.

Orleans is a Microsoft Research project about which I’ve written a bit earlier this year. At Microsoft’s Professional Developers Conference in Redmond this week, the company is showing off Orleans as one of its research demos.

Orleans is currently a project in Microsoft’s eXtreme Computing Group, which is chartered with research and development “on the cutting edge of ultrafast computing.” A prototype of Orleans exists and a couple of other Microsoft research projects, like the Horton online-query execution tool, are built on Orleans, according to company officials.

Here’s an updated description of Orleans from Microsoft’s Web site:

Orleans “offers a simple programming model built around grains, a unit of computation with private and shared state that communicates exclusively by sending messages to other grains and receiving and replying to requests from clients. Combined with the Orleans runtime, which provides functionality commonly used in this type of system, Orleans raises the level of abstraction and helps developers build scalable, correct Cloud applications.”

An overview diagram of Orleans from the research site:

Orleans has three main components: The programming model, the programming language and tools, and a runtime system. Orleans uses standard .Net languages (currently only C#) with custom attributes, according to the Web site.

“Developers write cloud applications in a .NET language and declare persistence, replication and consistency as attributes on grains,” the site explains.

There’s no timetable or definitive commitment as to when Orleans will become the programming model for the Microsoft cloud. But it is still interesting to see what might come to be at some point….


• Gavin Clarke prefixed his Microsoft's Azure cloud plan favors Java article of 10/29/2010 for The Register with “From killer to fluffer”:

image PDC 2010 PDC is a Microsoft event, right? And .NET is a Microsoft architecture? Microsoft built C# and the Common Language Runtime (CLR) to kill Java, yeah? And they premiered C# and the CLR with .NET at the Professional Developers' Conference in 2000.

So why is Microsoft at PDC ten years later talking about making Java a first-class citizen on Windows Azure?

The enterprise.

image Doug Hauger [pictured at right], general manager of the cloud infrastructure services product management group, told The Reg that Azure has seen a surprisingly huge amount of adoption by its enterprise customers. They now want their server-side Java Enterprise Edition apps to work better on Azure. People are testing data sets on Azure or extending existing data and apps to the cloud.

Better? But Microsoft has been telling us for more than a year that Java already runs on Azure, like PHP and Ruby, as part of an attempt to convince us that a cloudy Microsoft is an open Microsoft.

"Better" means more than just running your Java Virtual machine. It means improving its performance and providing better Eclipse tooling and client libraries for Widows Azure.

But just how far will Microsoft go in making Java a first-class citizen?

Based on the Azure roadmap that server and tools president Bob Muglia laid out Thursday at PDC, there's plenty of scope for Java to tap Azure's growing compute and storage power.

Changes are planned to Azure's networking, management, media, caching, and security that fill gaps in the next year. Microsoft will deliver new capabilities and add features that help simplify the tasks that developers are struggling to achieve on their own.

You'll get IP-based network connectivity between on-premises and cloud-based apps following a Community Technology Preview (CTP) by the end of this year. Later this year, full support for Internet Information Services (IIS) will be added, so you won't have to spin up multiple instances of IIS. Instead, you'll be able to host multiple sites on the server. Applications are getting a boost with the addition of Dynamic Content Caching in 2011.

Azure users will be able to secure the content they deliver using SSL/TLS in 2011.

Azure's SQL Azure relational database service is also getting a boost with reporting and analytics. SQL Azure Reporting – which will let you embed reports from Word and Excel and add PDFs to Windows Azure applications – will be available as a CTP by the end of this year and will hit general availability in the first half of 2011. SQL Azure Data Sync, adding geo-location to SQL Azure data and the ability to synchronize with cloud and mobile apps, is planned for the end of 2010 with release in the first half of 2011.

Database Manager for SQL Azure, web-based database management and querying for SQL Azure, will be delivered by the end of this year.

Microsoft is also making it easier to move existing onsite apps to Azure. Muglia announced a beta of the Virtual Machine Role for Windows Server 2008 R2 that's due by the end of this year. Server Application Virtualization will be released as a CTP before the end of this year and delivered as final code in the second half of next year.

Java – enterprise Java especially – has been largely bypassed in the apps voortrek[king] into the cloud. Attention has gone on web-friendly and more fashionable dynamic languages, like PHP or Ruby.

Only in June did somebody finally step up to put Java in the cloud: Salesforce and VMware's Spring said they would build VMforce, for hosting Spring- and Tomcat-based Java applications on the Force.com service.

Other languages are running on top of services like Amazon. Heroku hosts Ruby apps with its own monitoring and pricing on top of the bookseller's cloud.

Amazon this week upped the competitive pressure by cutting the price for the smallest compute and storage instance on its cloud from $0.02 per hour to free. At PDC, Microsoft introduced its smallest instance – the Extra Small Windows Azure Instance, with a quarter of the bandwidth, processing, and cache of its previous smallest option. Extra Small is priced at $0.05 per compute hour, or roughly $36 for a 720-hour month.

Are Microsoft and Amazon in a price war? Not according to Hauger, who says the two aren't comparable. Azure already offers much more and will deliver more in 2011 - platform capabilities such as CDN and IP connection to on-site apps, he said.

These are also the kinds of things Java will experience as a first-class citizen running on top of Azure. "Java will benefit from everything we are doing there as long as we make Java the first class platform. It's a huge opportunity for us," Hauger said.

But this sounds like a JVM play, and Java frameworks such as Spring will remain second-class citizens - at least for now. "We're not sure what we want to say on that," Hauger said.


• Toddy Mladenov explained Remote Desktop Connections to Windows Azure Instances in this fully illustrated 10/28/2010 tutorial:

image It has been a long time since my last post related to Windows Azure. We have been concentrating on delivering all the new and exciting features for PDC2010, and of course we wanted to keep them secret.

In this morning’s PDC2010 keynote Bob Muglia revealed the secret, and I would like to start with my favorite feature – establishing a Remote Desktop connection to any Windows Azure instance. Here is a step-by-step guide on how to enable the feature, and how to use it.

Configuring Windows Azure Deployment for RDP

imageThe first thing I did was to create a new basic Cloud Project in Visual Studio. I called it Hello RDP. Nothing new here. I modified the Default.aspx page just to make it say “Hello, RDP!”.

Here are the steps (quite simple) that you need to go through to enable Remote Desktop in your deployment:

  1. In Visual Studio right-click on the Cloud Project (Hello RDP in my case) and select Publish. You will be presented with the Deploy Windows Azure project dialog below:
    image
    I have selected the Create Service Package Only option, but you can also deploy directly from Visual Studio to the cloud if you want.
  2. Next, click the Configure Remote Desktop connection link above the OK button. You will see the Remote Desktop Configuration dialog, where you can select the certificate you want to use to encrypt the credentials, the credentials you want to use to log in to the instance, and the expiration date for the password.
    image
    In the Certificates drop-down you can select a certificate from your local certificate store or create a new one that will be stored locally. After you fill in all the information, click the OK button in the current dialog as well as in the deployment dialog.
    Your package will be created on the local hard drive.
    Note: If you do not have a Hosted Service created, or if you don’t have the certificate already uploaded through the Windows Azure Portal, you should create the package locally and deploy it through the Portal. See the next section for more details.
Configuring the Windows Azure Hosted Service

In the current implementation (as of PDC10) you need to create the hosted service and upload the certificate before you do the deployment. There may be some changes for the official release of the new UI to make it more streamlined. Here is the current workflow:

  1. Load the Windows Azure Portal and click on Compute, Storage & CDN in the navigation area
    image
  2. Click on the Compute Services node in the tree. In the grid you will see your subscription (or subscriptions if you have more than one)
    image
  3. Click on the New Hosted Service button, and you will get the dialog to fill in Hosted Service and Deployment information
    image
    In the new Windows Azure Portal UI we combined the creation of the Hosted Service and the Deployment into one step. However, for RDP you should choose the “Do not deploy” option, because you will receive an error if you don’t have the certificate uploaded.
  4. When you fill in all the information click the OK button and you will see your Hosted Service appear below your subscription.
    image
  5. Expand the Compute Services node in the tree and select the Service Certificates node
    image
  6. Click on Add Certificate to open the Upload Certificate dialog. Select the same certificate that you selected in the Visual Studio dialog when configuring your package, and click the Create button.
    Note: Make sure you select the right Hosted Service in the Target Hosted Service dialog
    image
  7. After it is uploaded you will see your certificate in the grid
    image
  8. Go back to the Compute Services view and click on New Production Deployment (or New Staging Deployment if you want to deploy to staging). The Create a new Deployment dialog will appear, where you can select the package and the configuration file you created in Visual Studio above.
    image
  9. After clicking OK you will see a new line item in the grid that represents your deployment. In addition, you will receive frequent updates on what is happening with the deployment, and it will expand once the roles and instances are spun up. Really cute, isn’t it?  We refresh the status every 10 seconds.
    image
    Now, you just need to wait until your deployment is complete.
Connecting to the Windows Azure Instance

Once the deployment is complete the instances will be in Ready state, and when you select any one of them the Remote Desktop Connect button in the ribbon will light up.

image

When you click the Connect button you will be asked to download an RDP file from the Windows Azure Portal. (Note: Silverlight will also present you with a security warning the first time you click the Remote Access Connect button; you can select the checkbox in the warning if you don’t want to be warned in the future.) You can select Open in the download prompt and the connection to the Windows Azure instance will be established. Optionally, you can save the RDP file locally for future use.

image

Voila! You are in!

image

Finally, it is highly recommended that you turn off Remote Desktop access to your Windows Azure instances when you don’t need it in order to avoid security risks. You can do this very easily from the Portal – just select the Role row, and uncheck the Remote Access Enabled checkbox:

image

Final Notes and Disclaimers About the Remote Desktop Feature

A couple of things I would like to mention at the end:

  • First, the Remote Desktop feature is scheduled to be released at the end of November, which means we are still working on it and there may be some changes in the workflow
  • And second, the new Windows Azure Portal UI is also scheduled for release at the end of November, and there may be some changes in the UI as well. I used the environment we use for the PDC demos to make the screenshots above, and if you are attending the conference you will see the same UI in the hands-on labs

I will try to update this blog post with any changes we make between PDC and the final release.

Hope you enjoy the new features in Windows Azure, and as always your feedback is highly appreciated.


Dmitry Sotnikov and Einar Mykletun described Quest Software's Azure Services in a 00:18:17 Channel9 video on 10/28/2010:

In this interview, Dmitry Sotnikov (Director of Cloud Solutions) and Einar Mykletun (Security and Compliance Architect) from Quest Software discuss building Quest’s new OnDemand product line – cloud-based IT management services to help IT professionals manage their on-premise Active Directory and server infrastructure. We talk about what it took to build the services on top of the Windows Azure platform, focusing specifically on security.

Quest is an early adopter of the Windows Azure platform.  They’ve built out an extensive services framework as well as a few initial service offerings that sit on top of the Windows Azure platform and Windows Identity Foundation. One of the key elements in the design of Quest’s framework is secure communication and authentication between all the service components and layers, whereby encryption is based on certificates stored within the dedicated certificate store provided by Windows Azure. On this topic, Einar explains the similarities between developing software for on-premises deployment and for Windows Azure, as well as highlighting a few key differences.

Meanwhile, Dmitry dives into the benefits of the claims-based authentication and authorization leveraged by Quest’s OnDemand solutions, and shows that a customer’s Security Token Service (STS) can be interconnected with other, public STS systems to provide access to the cloud-based solutions. Dmitry shows us a few key code samples from Quest’s STS implementation highlighting the use of Windows Identity Foundation (WIF) classes, and Einar shows the code necessary to implement STS federation for Quest’s SSO for existing Quest support customers.
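
The WIF code samples themselves appear only in the video, but the general pattern is straightforward. The fragment below is a generic sketch (not Quest’s code) of how a claims-aware ASP.NET application reads the claims an STS issues, using WIF’s Microsoft.IdentityModel.Claims types; the e-mail and “Administrator” role checks are hypothetical examples.

```csharp
using System.Linq;
using System.Threading;
using Microsoft.IdentityModel.Claims;

public static class ClaimsInspection
{
    // After WIF's federation modules process the incoming security token,
    // Thread.CurrentPrincipal.Identity is an IClaimsIdentity carrying the
    // claims the STS issued for the current user.
    public static void DumpClaims()
    {
        var identity = Thread.CurrentPrincipal.Identity as IClaimsIdentity;
        if (identity == null) return; // not a claims-aware request

        // Pull a specific claim value, e.g. the user's e-mail address.
        string email = identity.Claims
            .Where(c => c.ClaimType == ClaimTypes.Email)
            .Select(c => c.Value)
            .FirstOrDefault();

        // Authorization decisions can key off role claims issued by the STS.
        bool isAdmin = identity.Claims
            .Any(c => c.ClaimType == ClaimTypes.Role && c.Value == "Administrator");
    }
}
```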

Finally, our guests talk about the benefits Quest Software gains from Windows Azure datacenters – such as security, recoverability, replication and compliance – when building solutions on top of the Windows Azure platform, and how customers’ internal networks can actually be MORE secure if they adopt the Quest OnDemand Log Management service built on top of Windows Azure.

image

[Watch] the interview and then:


Robert Rowley, MD reported Practice Fusion Teams up With Microsoft's Windows Azure MarketPlace to Support Health Research in a 10/28/2010 post to the EHR Bloggers blog:

image The buzz at the Microsoft PDC conference in Redmond today is all about data. Windows Azure MarketPlace, formerly called Codename "Dallas," is officially launching to provide an integrated platform for researchers to access and analyze information from top national sources including the Associated Press, Data.gov, Zillow.com, PitneyBowes and NASA.

image Practice Fusion's Research Division teamed up with Microsoft for the program to offer a set of de-identified health information to researchers entirely for free. The clinical dataset provided includes insight on vitals, diagnoses, medications, prescriptions, immunizations and allergies in a de-identified, HIPAA-compliant format.

imageWe're very excited about the potential for public health discoveries stemming from this data. See Practice Fusion's VP of Product Development, Matthew Douglass, announce the health research initiative in a video overview of the Microsoft and Practice Fusion research initiative.

image


<Return to section navigation list> 

Visual Studio LightSwitch

Patrick Emmons asked Why Create Lightweight Business Applications Using Microsoft LightSwitch? in this 10/30/2010 article for CMS Wire:

image22242[Visual Studio Magazine’s] VSLive2010 developer’s event is all about development in the Visual Studio environment, and one of the big announcements this year was the launch date for Visual Studio LightSwitch. LightSwitch is a rapid development environment that will allow technical and somewhat-technical people to create lightweight Line of Business applications. While many developers don’t think LightSwitch will be useful for creating apps, we think it can be very beneficial to use in the right circumstances. Here are some reasons why.

Right-Sized vs Enterprise Ready

In recent years there has been a growing philosophy that everything needs to be enterprise ready. The prevailing thought is all solutions need to be scalable, flexible, anything-able. While that is true for anything that really does need to be enterprise ready, there are situations where enterprise ready is TOO much.

Imagine you are a small start-up. You are not focused on being enterprise ready. You are focused on getting through your first year. Alternatively, you might be an established organization that is considering getting into a new line of business. Focusing on getting something up and running to let your employees share information in a cost-effective way would ensure you are not risking valuable resources (i.e., capital). In today’s economy capital budgets are limited, and in some companies non-existent.

Best of Both Worlds

Traditionally, we have seen tools such as Access, Excel and more recently SharePoint, act as a useful starting point for a low cost prototype. The best thing that can be said of those initial forays in developing Line of Business applications is that usually all of the necessary data points have been identified and that there is a working prototype. We find that having a working prototype when starting an enterprise application development effort is immeasurably helpful.

While Access and Excel solutions do provide value when moving to the next level of maturation, LightSwitch can provide even more. Since LightSwitch can connect to Microsoft SQL Server or Oracle databases, the application can utilize either of those databases during the initial development.

LightSwitch also generates an ADO.NET Entity Framework class structure that can be used in the next iteration of development. Finally, the interface is rendered as a Silverlight application.

Recently, we had a customer request a simple application for generating quotes for customers and tracking them in a web format. Taking this use case, we decided to give LightSwitch a go. We were able to build a working prototype for the need within 4 hours, complete with the database tables, class structure and Silverlight interface. Normally this would have taken close to 40 hours to get to the same point in a traditional web development environment.

Efficiency vs. Maturation

Some people point out that if this right-sized application is successful, it will need to be rebuilt, usually from the ground up. While this is mostly true, it’s relevant to restate that having a working prototype does reduce the risk (risk = time + money) in starting a new application.

So would it be more efficient to build the enterprise ready version of the application first? The assumption there is that you are going to get the application right the first time. Or that the application will be used for a period of time to recover its ROI. But aren’t those two very big assumptions? Furthermore, aren’t those two very expensive assumptions?

Also, it’s relevant to say that enterprise software endeavors are never guaranteed successes. We all know the high rate of failure for traditional development, whether it is done using an agile or waterfall approach. As noted in a recent Gartner report, approximately 50% of all features are either never used or rarely used. Why not develop those features inexpensively first and then decide what needs to be in your final application? These are the types of benefits Microsoft’s LightSwitch can provide, making it something to consider moving forward.

About the Author

Patrick Emmons is co-founder of Adage Technologies and an accomplished technical architect with more than 15 years of programming and web development experience. Prior to Adage, Patrick was a principal at another web development firm and also worked as a developer and consultant for Ameritech, Motorola and Baker Robbins.


<Return to section navigation list> 

Windows Azure Infrastructure

My (@rogerjenn) Microsoft Announces Cloud Essentials for Partners with Free Windows Azure and Office 365 Benefits post updated 10/30/2010 explains a new benefits program:

• Updated 10/30/2010 9:30 AM PDT for the alternative Microsoft Cloud Accelerate program (see end of post).

image Haris Majeed, a Microsoft Windows Azure Admin, sent the following Status Update message to contributors to the 2nd-ranked Continue Azure offering free for Developers idea on the Windows Azure Feature Voting Forum on 10/29/2010 at ~4:00 PM PDT:

image

Here’s the original idea:

image

Note that the existing MSDN Subscriber benefits were extended by eight months on 10/25/2010 (see the article in the Live Windows Azure Apps, APIs, Tools and Test Harnesses section of my Windows Azure and Cloud Computing Posts for 10/25/2010+ post).

And here are the details of the Cloud Essentials Pack from the new Microsoft Cloud Partner site:

image

image Note that the availability date has been updated from 2011H1 to 1/7/2011, but BPOS hasn’t been updated to Office 365. The Office 365 licenses should be of interest to Microsoft Access 2010 developers (and users) who want to host multiuser Web Databases on SharePoint Server 2010 in the cloud or upsize an Access database to SQL Azure and link it to an Access front-end. …

The post continues with a description of an Alternative Microsoft Cloud Accelerate Program for “Competency” Partners “Already in the Cloud.”


•• My (@rogerjenn) Adam Langley Points Out That SSL/TLS Isn’t Computationally Expensive Any More post of 10/30/2010 recommends using HTTPS (SSL) for all Internet connections to Windows Azure projects, not just SQL Azure:

image Adam Langley from Google updated his Overclocking SSL essay of 6/25/2010 on 10/26/2010 for OpenSSL v1.0.0a. The point of his post, and of the Velocity 2010 presentation on which it was based, is that SSL/TLS is no longer computationally expensive.

SQL Azure requires Secure Sockets Layer (SSL) encryption for all connections made over the Internet. HTTPS (HTTP over SSL/TLS) is optional for Internet connections to Windows Azure Web roles and storage services. Many developers are hesitant to specify HTTPS for Windows Azure projects because of concern about the CPU resources consumed by encryption and decryption. The same concern arises with SQL Azure.
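
To make the point concrete, the sketch below shows both sides (server, account, and key values are placeholders, not from Adam’s essay): a SQL Azure connection string that requests encryption explicitly, and a Windows Azure StorageClient account constructed with HTTPS turned on.

```csharp
using System.Data.SqlClient;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class SecureConnections
{
    static void Main()
    {
        // SQL Azure rejects unencrypted sessions, so ask for encryption explicitly.
        var sqlAzure = new SqlConnection(
            "Server=tcp:myserver.database.windows.net;Database=Northwind;" +
            "User ID=admin@myserver;Password=...;Encrypt=True;TrustServerCertificate=False;");

        // For Windows Azure storage, SSL is opt-in: passing true for useHttps makes
        // every blob, table, and queue request travel over HTTPS.
        var account = new CloudStorageAccount(
            new StorageCredentialsAccountAndKey("myaccount", "<base64 key>"),
            true /* useHttps */);
        CloudBlobClient blobs = account.CreateCloudBlobClient();
    }
}
```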

Here’s the description of the 6/24/2010 Overclocking SSL session by Adam Langley (Google), Nagendra Modadugu (Google), Wan-Teh Chang (Google):

At Google we’re fanatical about speed and security so we’re working hard on making HTTPS faster in Google Chrome. In this talk we’ll cover details of the SSL/TLS protocol and where latency is introduced. We’ll describe how sites can configure themselves for optimal performance and the changes to SSL/TLS that we’re experimenting with to help in the future.

  • The current costs of HTTPS over HTTP in Google’s serving infrastructure (dependent on getting internal approvals to release the data)
  • Corking, records and the interaction of TLS with TCP congestion control
  • Session tickets
  • What are CRLs and OCSP and why your users are spending hundreds of milliseconds on them.
  • OCSP stapling
  • OCSP disk caching in future versions of Firefox and Google Chrome
  • Cut-through/False start mode
  • OCSP multi-stapling
  • TLS cached info

The post continues with a full-text version of Adam’s essay.


•• Chris Czarnecki posted Microsoft Announces New VMRole for Azure to the Learning Tree blog on 10/29/2010:

image Microsoft have been offering their Azure Platform as a Service (PaaS) for some time now. This provides an elegant deployment environment for applications developed to take advantage of the scalable features Azure provides. Developing Web applications requires the application to be packaged in Web roles, with heavyweight computation assigned to Worker roles. I mention this because these are new terms to .NET Web application developers and therefore a new skill set is required. Moving existing applications to Azure requires at least some modification to a traditional .NET Web application. Further, although applications deployed to the Azure platform run on IIS, IIS is not visible to users of Azure – no configuration is allowed. I am utterly convinced that this approach, whilst elegant, has had a significant impact on the rate of adoption of Azure.

imageA recent announcement from Microsoft has now changed this. They are offering two new things. Firstly, Microsoft are to provide elevated privileges for Azure with full IIS. This provides developers and administrators with greater flexibility. For example, it will enable the deployment of multiple IIS sites per Web role as well as the ability to install IIS modules. This will certainly make Azure more attractive and cost effective.

Secondly, Microsoft have announced a new role, the VM role. The aim of this new addition is to make migrating existing applications to the cloud easier and faster. Developers can upload their existing applications to a VM role and maintain total control whilst gaining the advantage of Azure’s load balancing and elastic scaling. A new costing model has also been introduced by Microsoft, enabling virtual machine instance sizes to be selected. For example, an extra small instance starts from $0.05 per hour, up to an extra large instance at $0.96 per hour.

With these new offerings, Microsoft has now not only continued to offer Azure as a PaaS, but also with the VM role entered the market for Infrastructure as a Service (IaaS) and the domain of Amazon EC2. It will be interesting to see how the functionality evolves over the coming months.

If you are interested in learning more about Cloud Computing and the solutions offered by major providers such as Microsoft, Amazon, Google etc., why not come along to the Learning Tree Cloud Computing course. If you are sure Azure is for you, learn the details of how to extract the maximum by attending the Learning Tree Azure course.


•• Peter Bright wrote Future of Windows Azure -- platform is the service on 10/30/2010 for Ars Technica; it was then picked up by CNN International:

imageAt PDC this week, Microsoft unveiled its roadmap for the Windows Azure cloud computing platform.

image Moving beyond mere Infrastructure-as-a-Service (IaaS), the company is positioning Windows Azure as a Platform-as-a-Service offering: a comprehensive set of development tools, services, and management systems to allow developers to concentrate on creating available, scalable applications.

image Over the next 12-18 months, a raft of new functionality will be rolled out to Windows Azure customers. These features will both make it easier to move existing applications into the cloud, and enhance the services available to cloud-hosted applications.

The company believes that putting applications into the cloud will often be a multistage process. Initially, the applications will run unmodified, which will remove patching and maintenance burdens, but not take advantage of any cloud-specific functionality.

Over time, the applications will be updated and modified to start to take advantage of some of the additional capabilities that the Windows Azure platform has to offer.

Microsoft is building Windows Azure into an extremely complete cloud platform. Windows Azure currently takes quite a high-level approach to cloud services: applications have limited access to the underlying operating system, and software that requires Administrator installation isn't usable.

Later in the year, Microsoft will enable Administrator-level access and Remote Desktop to Windows Azure instances.

For even more compatibility with existing applications, a new Virtual Machine role is being introduced.

This will allow Windows Azure users to upload VHD virtual disks and run these virtual machines in the cloud. In a similar vein, Server Application Virtualization will allow server applications to be deployed to the cloud, without the need either to rewrite them or package them within a VHD. These features will be available in beta by the end of the year.

Next year, virtual machine construction will be extended to allow the creation of virtual machines within the cloud. Initially, virtual machine roles will support Windows Server 2008 R2; in 2011, Windows Server 2003 and Windows Server 2008 with Service Pack 2 will also be supported.

Microsoft also has a lot to offer for applications that are cloud-aware. Over the past year, SQL Azure, the cloud-based SQL Server version, has moved closer to feature parity with its conventional version: this will continue with the introduction of SQL Azure Reporting, bringing SQL Server's reporting features to the cloud.

New data syncing capabilities will also be introduced, allowing SQL Azure to replicate data with on-premises and mobile applications. Both of these will be available in previews by the end of the year, with final releases in 2011.

A range of new building-block technologies are also being introduced, including a caching component (similar to systems such as memcached) and a message bus (for reliable delivery of messages to and from other applications or mobile devices).
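
As an illustration of how the announced caching building block is likely to be used, here is a hedged sketch of the cache-aside pattern. It assumes the Azure cache exposes the same DataCacheFactory/DataCache client API as Windows Server AppFabric Caching (code-named “Velocity”); the Product type and LoadProductFromSqlAzure call are placeholders, and the CTP bits may differ in detail.

```csharp
using System;
using Microsoft.ApplicationServer.Caching;

class CacheAside
{
    // The factory reads cache endpoints from the application configuration file.
    static readonly DataCacheFactory Factory = new DataCacheFactory();
    static readonly DataCache Cache = Factory.GetDefaultCache();

    public static Product GetProduct(int id)
    {
        string key = "product:" + id;

        // Cache-aside: try the cache first, fall back to the database on a miss,
        // then populate the cache so the next request is served from memory.
        var product = Cache.Get(key) as Product;
        if (product == null)
        {
            product = LoadProductFromSqlAzure(id);              // placeholder data-access call
            Cache.Put(key, product, TimeSpan.FromMinutes(10));  // keep it for ten minutes
        }
        return product;
    }

    static Product LoadProductFromSqlAzure(int id) { /* ... */ return new Product(); }
}

[Serializable]
class Product { }
```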

A smaller, cheaper tier of Windows Azure instances is also being introduced, comparable to Amazon's recently-released Micro instances of EC2.

The breadth of services that Microsoft is building for the Windows Azure platform is substantial.

Compared to Amazon's EC2 or Google's AppEngine, Windows Azure is becoming a far more complete platform: while EC2 and AppEngine both offer a few bits and pieces that are comparable (EC2 is particularly strong at hosting existing applications in custom virtual machines, for example), they aren't offering the same cohesive set of services.

Nonetheless, there are still areas that could be improved. The billing system is currently inflexible, and offers no ability for third parties to integrate with the existing Windows Azure billing.

This means that a company wishing to offer its own building blocks for use by Windows Azure applications has to also implement its own monitoring and billing system. Windows Azure also has no built-in facility for automating job management and scaling.

Both of these gaps were pertinent to one of yesterday's demonstrations. Animation studio Pixar has developed a prototype version of its RenderMan rendering engine that works on Windows Azure.

Traditionally, RenderMan was only accessible to the very largest animation studios, as it requires considerable investment in hardware to build render farms.

By moving RenderMan to the cloud, smaller studios can use RenderMan for rendering jobs without having to maintain all those systems. It allows RenderMan to be sold as a service to anyone needing rendering capabilities.

Neither job management -- choosing when to spin up extra instances, when to power them down, how to spread the different frames that need rendering between instances -- nor billing are handled by Windows Azure itself. In both cases, Pixar needed to develop its own facilities.

Microsoft recognizes that these are likely to be useful to a broad range of applications, and as such good candidates for a Microsoft-provided building block. But at the moment, they're not a part of the platform.

Microsoft CEO Steve Ballmer speaks at this week's Microsoft PDC conference.

Microsoft CEO Steve Ballmer has said that Microsoft is "all in" with the cloud. The company is certainly working hard to make Windows Azure a better platform, and the commitment to the cloud extends beyond the Windows Azure team itself.

Ars was told that all new development of online applications within Microsoft was using Windows Azure, and with few exceptions, existing online applications had migration plans that would be implemented in the next two years.

The two notable exceptions are Hotmail and Bing, both of which already have their own, custom-built, dedicated server farms.

This internal commitment is no surprise given the history of the platform. Windows Azure was originally devised and developed to be an internal platform for application hosting.

However, before there was any significant amount of internal usage, the company decided to offer it as a service to third parties. Now that the platform has matured, those internal applications are starting to migrate over. As such, this makes Windows Azure, in a sense, the opposite to both EC2 and AppEngine.

Those products were a way for Amazon and Google to monetize their preexisting infrastructure investment, an investment that had to be made simply to run the companies' day-to-day business.

With the newly announced features, there's no doubt that Windows Azure is shaping up to be a cloud computing platform that is both powerful and flexible. Microsoft is taking the market seriously, and its "all in" position seems to represent a genuine commitment to the cloud.

What remains to be seen is whether this dedication will be matched by traditionally conservative businesses and developers, especially among small and medium enterprises.

A move to the cloud represents a big change in thinking, and the new Windows Azure features will do nothing to assuage widespread fears such as a perceived loss of control.

It is this change in mindset, not any technological issue, that represents the biggest barrier to widespread adoption of Windows Azure, and how Microsoft aims to tackle the problem is not yet clear.


Kevin McLaughlin reported Microsoft Racks Up Another Jaw-Dropping Quarter in a 10/28/2010 article for Computer Reseller News (CRN):

image Microsoft (NSDQ:MSFT)'s traditional cash cows in its client and server software business continue to propel the software giant to quarterly financials that make competing CEOs drool with envy, even as the company transitions into the great unknown of cloud computing.

image In Microsoft's first quarter earnings call Thursday, the company reported profit of $5.4 billion, or 62 cents per share, a 51 percent jump from the $3.6 billion and 40 cents per share it racked up in the year-ago quarter. Microsoft also saw bookings rise 24 percent during the quarter.

Microsoft Q1 revenue grew 25 percent year-over-year to $16.2 billion. Wall Street analysts had expected revenue of $15.8 billion and earnings of 55 cents per share, according to Thomson Reuters.

The trusty revenue engines of Exchange, SharePoint, Dynamics CRM and Lync (formerly Office Communications Server) all grew "healthy double digits," Microsoft CFO Peter Klein said in the call. Overall, Microsoft's Server and Tools Business saw revenue grow 12 percent during the quarter.

Microsoft Office 365, its newly unveiled suite of cloud apps that includes the re-branded Business Productivity Online Suite (BPOS), will enable new growth opportunities for Microsoft and its partners and will be available "in calendar 2011", said Klein.

imageWindows Azure subscriptions grew 40 percent compared to last quarter, and Microsoft (NSDQ:MSFT) is adding new enhancements to help customers build new apps and migrate existing ones to the cloud, Klein said. The number of business customers licensing Microsoft's cloud services has more than tripled during the past year, he said. [Emphasis added.]

"Customers of all sizes are buying Microsoft cloud services," Klein said during the call. "Microsoft has significant cloud momentum and we are leading the industry through this transformation."

Microsoft's Online Services, which hasn't been doing well lately, grew 8 percent during the quarter, including 13 percent growth for Bing, which is now powering Yahoo's algorithmic and paid search results in the U.S.

On the client side, Windows 7, for which Microsoft has sold more than 240 million licenses to date, had its fourth straight quarter of double digit growth, with OEM revenue growing 11 percent during the quarter. Windows revenue rose 66 percent to $4.8 billion, compared to $2.9 billion during last year's Q1. Desktop PC sales growth is stronger in businesses than it is with consumers, Klein said.

Microsoft noted that its results included $1.5 billion in deferred revenue from last year's Q1 that stemmed from its Windows 7 upgrade program and Windows 7 licenses sold to customers in advance of the October 2009 launch. Microsoft's revenue grew 13 percent when this is taken into account.



R “Ray” Wang posted Event Report: PDC10 Focuses Developers On Cloud And Devices Opportunity on 10/28/2010:

Microsoft Bolsters Developer Tools To Inspire Innovation

image At the October 28th, 2010 Microsoft Professional Developers Conference, CEO Steve Ballmer presided over 3 key announcements that affirm the software giant’s ambitions to retain and grow the mind share and market share of worldwide developers.  In fact, the 3 main announcements foreshadow Microsoft’s convergence in the five pillars of consumer tech:

  • IE9 showcases advancements in hardware acceleration and user experiences. Dean Hachamovitch demonstrated how hardware acceleration and site-centric design can transform the web experience.  A comparison of Google Chrome vs Microsoft IE9 showed significant performance gains with IE9 at 60 frames per second (FPS) on HTML5 versus 40 FPS on the latest version of Chrome.  The hardware acceleration definitely improves the audio, video, text, and canvas experience.  The new browser supports HTML5 semantic tags, CSS3 2D transforms, and F12 capabilities.

    Point of View (POV):
    IE9 transforms the browser market.  Early demos show how quickly developers can build rich HTML5 experiences.  Hardware acceleration and site-centric design will drive all future development capabilities as new devices such as tablets, surface computing, and mobile innovations enter the market.  Of equal importance, Microsoft shows support for open standards in HTML5 while also betting on Silverlight for more complex apps that can simultaneously build in HTML5 and video.  Unfortunately, a few beta users have reported stability issues on IE9.  As with any early release, expect improved stability over the next 1 to 3 months.
  • image Windows Phone 7 attempts to gain traction. WP7 launched one week ago, and Scott Guthrie highlighted the key features of the platform.  Adoption remains fierce, with over 1,000 applications and games uploaded into the integrated marketplace.  During the Brandon Watson demo, attendees saw integrated experiences with Xbox Live, a native Facebook application, and an exclusive Amazon Kindle app.  Developers can build on Visual Studio Express, take advantage of OData support, and concurrently use Expression Blend with Visual Studio to design digital assets and apply animation timelines.  Visual Studio Express and Expression Blend will be available to developers at no cost.

    (POV):
    WP7 shows promise in bringing Microsoft’s mobile efforts out of the doldrums.  With Palm facing a brain drain post-HP acquisition, WP7 enters a competitive 4-way race among iOS4, Google Android, and RIM’s WebWorks development platform for mind share.  However, Microsoft’s taken the lessons learned from a decade of catching up.  By improving developer productivity through better tools, closer partnerships with device manufacturers on standards, and development of a marketplace, buyers and partners should not count Microsoft out.  Additionally, the industry can expect the tablet wars to heat up as these markets converge.  Of note to iPhone users, many early users of WP7 on AT&T’s network claim that they have fewer dropped calls than with Apple’s iPhone.  Software Insider expects AT&T to start publishing dropped call results by device and OS.
  • imageWindows Azure provides a key PaaS entry point for apps developers. Bob Muglia described the key tenets of platform as a service (PaaS) in Windows Azure.  As the creation layer in the flavors of cloud computing, Microsoft shows its commitment to a true multi-tenant platform.  VMs are controlled at the app level, patch service and release management is maintained by the user, services are delivered ready-made, tools are standardized, peak loads scale to demand, and the system is designed to withstand failure.  Mark Russinovich walked attendees through new features such as extra small instances, remote desktop, full IIS support, virtual network, elevated privileges, Windows Server 2008 R2 roles, and multiple administrator capabilities.  Brian Harry showed the audience how Team Foundation Server (TFS) can manage global deployments and leverage other identities beyond Active Directory.  Project Dallas now evolves into Windows Azure DataMarket with 35+ content partners, 60+ data offerings, and 100+ offers coming soon.

    (POV):
    The Azure opportunity should accelerate innovation for developers.  Microsoft is the largest general purpose PaaS on the market.  Azure builds on a history of services built on Microsoft SQL Azure, Microsoft .NET, Microsoft Visual Studio, and Microsoft System Center.  Despite the familiarity, partners will still need to understand the business model to support Windows Azure in order to profitably build the last-mile solutions customers seek. Windows Azure Marketplace provides a key component in creating market demand.  While Azure does support Java, developers looking at the Java world may want to first consider VMForce through the VMware and Salesforce.com alliance.  Keep in mind, Microsoft is the only full-service cloud offering across SaaS, DaaS, PaaS, and IaaS.

The Bottom Line:  Microsoft Invests In A Trifecta For Success

Microsoft’s taken the lessons learned from competitors and has improved on them.  As it catches up in the browser, mobile, and cloud, developers can rest assured that the Redmond giant has regained its mojo.  Microsoft remains a fast follower, incorporating and improving competitor innovations by introducing ease of use, scale, and marketplaces to complete the ecosystem.  Those who count Microsoft out will be disappointed.  Those who discount Microsoft’s leadership in a full-service cloud offering will miss the boat. …

Ray continues his analysis here.


David Linthicum claimed “As the tech community finally gets to work on cloud computing, these tips will help you move in the right direction” in a preface to his 3 elements of good clouds post to InfoWorld’s Cloud Computing blog of 10/28/2010:

image We in IT finally seem to be getting to work on this whole cloud computing thing, rather than standing around arguing the benefits of private versus public clouds or trying to define elasticity. Good for us, but considering that most organizations have no experience building clouds, I put together a few items that should be a part of the process. (Note: I'm focusing on clouds built by enterprises.)

image Loose coupling. To build a cloud or cloud-based applications on an infrastructure or platform system, you must create distributed components, such as processes, that are not tightly bound to each other. As a test, a cloud component should be able to stop working, yet not halt other components. Loose coupling leads to good architecture, good SOA, and good clouds. It provides you with options in the future when looking to relocate functions from cloud to cloud, including private, public, hybrid, or community, whether on site or off.
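
A minimal sketch of that test on the Windows Azure platform (account and queue names are placeholders) uses a queue to decouple producer from consumer: the front end keeps accepting work even if every worker is down, and a message that fails mid-processing simply reappears for another worker.

```csharp
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class LooseCoupling
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=<base64 key>");
        CloudQueue orders = account.CreateCloudQueueClient().GetQueueReference("orders");
        orders.CreateIfNotExist();

        // Producer (e.g. a Web role): enqueue and move on. There is no direct call
        // into the consumer, so a stopped worker doesn't halt the front end.
        orders.AddMessage(new CloudQueueMessage("order:42"));

        // Consumer (e.g. a Worker role): poll, process, then delete. If processing
        // fails, the message becomes visible again after its timeout and no work is lost.
        CloudQueueMessage msg = orders.GetMessage();
        if (msg != null)
        {
            // ... process the order ...
            orders.DeleteMessage(msg);
        }
    }
}
```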

Governance and security are systemic. I often hear, "Oh yeah, security," or "What is governance?" In the world of cloud computing, that's not good. Both governance and security are architectural patterns that should be considered during scoping, design, and development. Security, for example, needs to be engineered inside the cloud, including the way to approach identity management and encryption for data in flight and at rest. You can't bolt on technology and hope for the best. On the upside, it's much easier to do this right than you think, and there are great technologies available to leverage.

Testing and staging. Don't forget about testing your company's cloud as it is being built. While you may be tempted to test within a production cloud, that's typically a bad idea. Just as with traditional system development, you need environments that replicate production to test new functionality. Otherwise, you risk crashing a production cloud. Yes, clouds can crash. Don't be the crasher or the crashee.

Of course, there are many more quick-and-dirty tips for making better clouds. These are just my top three.


David Lemphers explained Why Vertical Clouds Are The Future! in this 10/28/2010 post:

image Over the last two and a half years I’ve spent in the cloud space, one question that has always concerned me has been, “Is there money in cloud computing?” I ask it from a provider perspective: I think there is a lot of money to be made in the cloud ecosystem, but is there money to be made as a direct cloud service provider?

I think there is, but I don’t think it’s going to be as a commodity platform provider. Why? Because there are just too many business issues that aren’t addressed by cloud computing providers to date.

See, if you’re a business, and you’re looking to make money by offering a service via the web by leveraging a cloud platform, you’ll need some basic business capabilities. Let’s use tax as a good example. States are permitted to enact their own tax laws, as is the federal government. They don’t have to agree on a standard, meaning one state might declare that tax is applicable if a service is delivered to a resident in their state, while another might declare that tax is applicable if the service is provided out of their state. So as a service provider, you’re going to need some fine-grained insight into what’s happening with your service. Let’s keep moving forward with this example. Suppose you are hosting your service with a cloud provider that has two data centers, in different states. Let’s say that for one part of the month, your service is running out of state x. Then something happens, and for a period of time, your service is moved to data center y, and your service is served out of there. Then, the something is fixed, and your service moves back to data center x. Both data centers are in the provider’s “region a”, just in different states. This introduces a complexity, because what happens if state y considers your tax liability to be based on whether you ran a service out of their state? And if you didn’t know your service was running in state y, because the bill you get from the service provider simply says “region a”, you might be in a sticky situation.

Again, this is why purely generic, commodity clouds are a challenge. They don’t have enough “business smarts” baked in, to help you as the customer, stay compliant. It’s not that they need to be tax smart, but they at least need to let you be tax smart.

Taxation is just one area of complexity; compliance is another. For example, what if you’re trying to develop a service to sell to financial services customers? There are myriad compliance requirements that financial services customers have. If your service doesn’t support your customers in being compliant, then that’s going to be an insurmountable barrier to them buying your service. And if you try to use a cloud platform provider who doesn’t have the requisite hooks in place for you to build the controls required to offer compliance, the whole venture is a non-starter.

This is why I think vertical clouds are the future. See, right now, buying and connecting up servers is not a challenge; money is the only factor here. However, standing up a truly automated cloud, one that can dynamically allocate resources based on demand, like the big players have, is far harder, mainly because there aren’t many tried and true options out there. OpenStack is definitely an interesting option, as it’s open source, demonstrated (NASA and RackSpace use it), and provides all the bits you need to get a commodity cloud up and running. OK, so now I have the platform in place, and it’s open source and pluggable, so I have some room to move. Say I decide I’m going to be the premier provider of cloud platform services to the financial services industry. I go and build a bunch of goodies into the base platform that support all the compliance requirements (tax, legal, etc.) of my industry (the same applies to healthcare, etc.). Wow, now I have something interesting, because now I can offer customers a high-value cloud platform service that makes it easy for them to do business. What’s more, the future of cloud is absolutely composability and redundancy, so I may have an app that has a financial services capability delivered by FinCloudX and a healthcare capability delivered by HealthCloudY. Ahhhh. And all the appropriate compliance requirements are taken care of by each cloud.

David was on Microsoft’s Windows Azure Team for most of the “two and a half years [he] spent in the cloud space.” He’s presently Director of Cloud Computing for PricewaterhouseCoopers.


Dina Bass asserted Microsoft Woos Toyota, Duels Amazon in $10 Billion Cloud Bet in her 10/27/2010 article for Bloomberg News:

image Five years after unveiling a plan to shift into cloud computing, Microsoft Corp. may finally be making headway.

Microsoft’s then-Chief Technology Officer Ray Ozzie penned a memo in October 2005, saying the company was at risk if it didn’t reinvent itself as a provider of software and computing services over the Web.

Heeding the warning, Microsoft has signed up customers including Toyota Motor Corp., 3M Co. and Lockheed Martin Corp. for its cloud product. By March, 90 percent of the company’s engineers will be working on cloud-related products, says Chief Executive Officer Steve Ballmer. And the server unit may generate $10 billion in annual revenue from cloud services in a decade, says Microsoft President Bob Muglia.

“We will see those numbers and more,” Muglia said in an interview.

imageDevelopers will learn more about the cloud strategy at a conference today at the company’s headquarters in Redmond, Washington, where Microsoft will highlight tools that make it easier to move applications to the cloud. In a report due after the market closes today, Microsoft will say net income rose 34 percent last quarter to $4.8 billion, or 55 cents a share, as sales gained 22 percent, according to analysts’ predictions.

Further progress in cloud computing will hinge on whether Microsoft can narrow Amazon.com Inc.’s lead. Microsoft in November released its flagship cloud product, Azure, which stores and runs customers’ programs in its own server farms.

Lagging Behind Amazon

That came three years after market leader Amazon introduced a suite of cloud services that let companies rent, rather than buy, servers, the powerful machines that run networks and handle complex computing tasks.

Still, Microsoft ranks high in surveys asking chief information officers which cloud vendors they plan to use, said Sarah Friar, a software analyst at Goldman Sachs Group Inc.

“What Microsoft is doing well from a cloud perspective is they are enterprise class and understand what it means to both sell to large enterprise but also meet all their requirements,” said Friar, who is based in San Francisco.

The approach won over Toyota City, Japan-based Toyota. The automaker is using Azure to track the 2,000 calls daily that come into the Lexus roadside and crash assistance service. The program took days to implement, compared with weeks for an internal database, said Glen Matejka, a Toyota manager.

3M, whose products range from Post-It notes to flu tests, uses Azure to host a new program, called Visual Attention Service, that lets website designers test which parts of a site catch the human eye.

3M Cuts Costs

3M, based in St. Paul, Minnesota, halved costs by entrusting the program to Microsoft’s machines instead of its own, said Jim Graham, technical lead for the program.

Cloud computing can make it affordable for companies to tackle projects that previously would have required purchasing and maintaining tens of thousands of servers.

“Look at the so-called quants on Wall Street,” said Microsoft General Manager Bill Hilf. “They say, ‘I want to ask a question but it’s going to take me 20,000 servers to answer.’” Each time that happens, banks don’t want to build a new server farm. They want to just access those machines as needed.

Getting customers on board wasn’t easy. During Microsoft’s mid-year reviews, a series of meetings in hotel ballrooms near Microsoft’s campus early this year, executives got a sobering message from sales staff, Muglia said. Customers weren’t convinced Microsoft took its cloud push seriously, Muglia said.

Bet Hedging No More

“When we talked about choice, what they heard was we were hedging our bets,” he said.

Microsoft sales chief Kevin Turner decided that the pitch needed to change. Rather than discussing various options, Microsoft’s sales force has altered its pitch to “lead with the cloud” and now focuses customer meetings on cloud technologies, Muglia said.

Microsoft’s biggest challenge in cloud computing may come from Seattle-based Amazon. The online commerce provider generates about $500 million in sales from cloud services, more than five times Microsoft’s cloud-related revenue, according to Friar.

Amazon Chief Executive Officer Jeff Bezos has said Amazon Web Services, which includes the company’s cloud services, will eventually be as large as his company’s electronic-commerce business, which makes up 97 percent of the company’s sales. Neither Amazon nor Microsoft discloses cloud-related revenue.

Mid-Sized Businesses

Among small and medium-sized businesses, 77 percent surveyed by Goldman Sachs said they used Amazon, compared with 10 percent for Microsoft and 17 percent apiece for Google Inc. and Salesforce.com Inc. Larger businesses reported 12 percent use for Microsoft, compared with 18 percent for Salesforce and 11 percent for Google. Amazon wasn’t included in that survey.

Amazon isn’t encountering a lot of competition when it bids for customers, said Adam Selipsky, an Amazon vice president.

“We’re really not being told by customers or prospects about material adoption of broad cloud offerings that are currently out there from other vendors,” he said.

International Business Machines Corp. started offering cloud programs in March that let businesses develop and store programs in IBM data centers.

VMware Inc., a software maker, has an edge over Microsoft in so-called private clouds, which store programs in a company’s own servers, Friar said. Google and Salesforce have some high- end features Azure lacks, according to David Smith, an analyst at Gartner Inc. in Stamford, Connecticut.

Margin Pressure

Selling software as a service may also squeeze Microsoft’s profit margins because it entails additional costs of operating data centers to run customer programs, Friar says.

Still, the company should be able to generate more revenue from cloud products than for its regular software, Muglia said.

Cloud products, which are sold as a subscription, may help Microsoft get more customers on multiyear agreements. Those make revenue flow more predictably and lessen reliance on the vagaries of product cycles and computer upgrades, Smith said.

Microsoft may not lag behind rivals long, Goldman Sachs research suggests. When the firm’s analysts asked companies about the future, 29 percent listed Microsoft in their cloud purchasing plans. IBM came in second, with 25 percent, Salesforce racked up 24 percent and Google garnered 20 percent. Amazon trailed, with 8 percent.

Muglia likens the advent of cloud computing to the shift away from mainframes toward personal computers and servers. He says he doesn’t want to see Microsoft consigned to a role as leader in a has-been technology, citing IBM.

“The threat is that we become like IBM is on the mainframe,” Muglia said. “People still use mainframes an awful lot but it’s not where the future lies. I just have no interest in running the Microsoft mainframe business.”

Graphic credit: Steve Ballmer, chief executive officer of Microsoft Corp. Photographer: Matthew Staver/Bloomberg


Barbara Darrow claimed VARs ponder Microsoft Azure, Amazon Web Service options in a 10/27/2010 article for Search IT Channel.com:

image What does Azure, Microsoft's ambitious Platform-as-a-Service play, mean for the thousands of VARs, systems integrators and others in Microsoft's partner ecosystem?

Some VARs still rely at least partly on outright product sales and many of them view Azure as a disintermediation threat because it delivers software/services directly from Microsoft to their customers. Other partners that write custom code or customize applications for customers see it as an additional deployment option for applications that typically run on customer or partner premises. Integrators maintain that as long as disparate applications remain live, they will need to be tied together, regardless of where they run.


The Azure delivery problem

image Microsoft is making an Azure play for partners -- especially developers -- but at this point it's unclear whether Azure will pay off for traditional VARs.

Microsoft bit off more than it could chew with Azure, said one Boston-area, Microsoft Gold partner who does a lot of e-commerce work atop the Microsoft stack but deploys it on Amazon Web Services (AWS). And, in his eyes, Microsoft reneged on its pledge to offer parity between on-premises, partner-hosted and Microsoft-hosted deployment models.

Microsoft claims 10,000 customers for Azure, which went live last February, but this VAR and some others said they see little traction for a variety of reasons, especially because Azure runs only on Microsoft-hosted servers.

"The big picture reason -- something it can't fix overnight -- is that a few years ago, [Microsoft COO] Kevin Turner got up and talked software plus services … and Microsoft's ability to offer on- and off-premises capabilities. It was a pretty good pitch and one Google couldn't make. The [implied] message was Azure will do that," the VAR said.

But that's not what happened. It became clear that Azure would be a Microsoft-hosted play only. Azure would run customer and partner applications but only in Microsoft's own data centers -- basically the same as the Google Apps model.

"You have to program stuff just for Azure and if you want to take your stuff to run elsewhere, you can't," he said. The Boston VAR called this a non-starter for many customers that fear vendor or host lock-in.

Another problem, in this VAR's view, is that Azure fell victim to Microsoft's tendency to pack everything-but-the-kitchen-sink into one big "massively complicated project," he said.

That sort of all-in monolithic approach is what sunk Microsoft's Longhorn-aka-Vista effort, he said. And while Microsoft was getting all its Azure ducks in a row, Amazon continued to hone, enhance and add to its AWS stable, gaining more customers and credibility along the way.

In AWS's more incrementally rolled out Infrastructure-as-a-Service (IaaS) world, VARs and developers can use development tools of their choice and deploy the resulting applications on an AWS-hosted Windows or Linux stack. Then, as needs change, they can move that work on premises or to another partner's hosting service. There is no lock-in.

The Azure strategy can be looked at two ways. Some VARs see Azure as a means to get more customer applications dependent on a Microsoft infrastructure, thus tightening the vendor's customer account control. These VARs prefer to see themselves as the customer's trusted adviser and gatekeeper. Others said Microsoft had no choice but to counter cloud and Software-as-a-Service efforts from competitors ranging from Google to Salesforce.com.

Microsoft: Azure as the true cloud

For its part, Microsoft said that Azure, unlike AWS, offers true cloud deployment capabilities. And for that reason, the company said a more revolutionary approach is needed.

"We built Azure for what the cloud really is," said Tim O'Brien, director of Microsoft's platform strategy group. "It's for net new cloud-native applications that take full advantage of this scale-out infrastructure. AWS is fine for those taking existing applications and putting them out there."

Microsoft did acknowledge the on-premises deployment concern with its Azure Appliance announcement in July. At that time, the company said it would make Azure available to run in prescribed and very large data centers built by Dell, Hewlett-Packard and Fujitsu. Those deployments were slated to come online this year -- news is likely to come from the Microsoft Professional Developers Conference, the company said then. In theory, Azure will be made available for smaller data centers that smaller partners could host for customers, but there is no timeline on that.

Azure targets .NET world, ease-of-use fans

Many VARs love that AWS accepts and runs existing applications and prices usage very aggressively. But AWS, with its arcane interfaces and data center terminology, is truly for techies whereas Microsoft Azure aims for the more point-and-click-minded .NET cognoscenti. And there are thousands of those people, although Azure does require them to embrace the latest .NET 4 technology. It will not run old DLLs or other applications. What it will do is make it relatively easy for .NET developers to get aboard without a huge learning curve.

"I love Amazon as a cloud vendor, but that stuff is for real geeks and not for the faint of heart," said John Landry, former CTO of Lotus Development Corp., who has since launched smaller ISV companies in the Microsoft world. "Microsoft makes Azure more accessible to mere mortals."

Landry has used Azure and said he is particularly enthusiastic about SQL Azure, which gives partners a huge opportunity to use pay-as-you-go database services.

"As a given SQL Server application needs to scale up based on a demand spike, with SQL Azure, you can just change the connection string and point it to the cloud vs. at your on-premises SQL Server. They've made that really easy," Landry said.

And Azure signed on some major ISVs including some that competed with Microsoft in other arenas. In January, Intuit Inc. said it will support Azure as a platform (AWS is another) for its AppCenter application store for small business applications. Currently two out of 64 total applications are on Azure, with more to come, said Alex Chriss, director of the Intuit Partner Platform.

Microsoft Azure partner call to action

Of course, Microsoft sees Azure as a great big sandbox for thousands of .NET developers. But its call to action is for traditional VARs to get acclimated to Azure and use it to augment their existing businesses.

"If you look at what VARs do, they will have to wire up whatever they're already selling to what's in Azure," said Microsoft's O'Brien. "If you're selling software in a box or in the cloud, integration still has to happen and there are tremendous opportunities to be had both on the reseller and integrator side."

There is definitely interest in the field. Laurus Technologies, an Itasca, Ill.-based VAR, recently added a Microsoft practice around Business Productivity Online Suite (BPOS), SharePoint and Lync Server. But it also plans to embrace Azure, said Stephen Christiansen, Microsoft practice leader. Partners will be needed for assessment services to help customers decide what makes sense to migrate to Microsoft-hosted Azure or (eventually) internally-hosted Azure, Christiansen said.

In order to attract and retain VARs, Microsoft needs to make Azure pricing affordable, transparent and predictable. One Microsoft VAR that once blasted the Azure pricing model as opaque has since been reassured.

"The complexity of [Microsoft's] billing models is getting clearer, and they've alleviated my real terror, which was getting a project or a customer running and not really being able to estimate the cost. They've done a bunch of things, including BizSpark, that give you a free or low-cost runway for a year," said Tim Huckaby, CEO of InterKnowlogy, in Carlsbad, Calif.

BizSpark provides Azure access for a year, and in that time VARs can learn from their traffic patterns and estimate costs accurately, he said.

Indeed, BizSpark, which launched a few years ago to simplify Windows platform adoption, was later expanded to include Azure. "With BizSpark, you can see how much compute power you use, how much storage, the I/O, etc. -- and it will tell you the cost," O'Brien said.

For many, it'll be both Azure and AWS

Still, while a lot of companies paint it as an either-or proposition, the reality is that the way forward for many ISVs and VARs will be both Azure PaaS and AWS IaaS.

Propelware, for example, is an Intuit partner writing applications for the Azure-hosted Intuit AppCenter. But it also uses AWS for its public-facing websites, according to CEO Joe Dwyer.

"I like AWS, but we're a heavy-duty .NET shop and I don't want to be an 'IT guy,'" Dwyer said.

SearchITChannel.com assistant editor Pat Ouellette contributed to this story.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA)

•• Michael Poulin asserted A Private Cloud - it is just a non-Public Cloud in a 10/30/2010 post to ebizQ’s Business Ecology Initiative & Service-Oriented Solution blog:

Last month Joe McKendrick posted his blog entry "Do Whatever it Takes to Keep IT In-House, says ebizQ's Michael Poulin," in which he quoted my statement, "I am saying that the enterprise, if it wants to live a long good life, has to keep its IT in house," a bit out of context; I was talking about some of the business consequences of using Cloud Computing. Well, I assume that, probably, old-fashioned people still believe that IT and Cloud Computing are not the same thing.

At the same time, Joe explained my actual position regarding Cloud Computing very well: "Remember this as well: Just because a lot of technology resources may reside outside the organization doesn't mean an enterprise won't need tech-savvy executives that know how to make the most of technology. Such individuals are needed more than ever, to help identify and coordinate both internal and external technology-based services." These "tech-savvy executives" are the IT people I meant when I advised keeping IT in house.

Moreover, my experience tells me that a Private Cloud hosted by a particular trusted provider outside of the client company is the way to go for organisations of all sizes. Such hosting may easily be organised under the direct control and monitoring of the client company, in accordance with its specifications, transparently and reliably from the client's business perspective. You, as a client company, always know what is going on with your data and software in the hosting software and hardware environment; you define and dictate when the hosting provider has to patch the operating systems without disturbing your own clients, as well as when and how to apply the compliance measures dictated by industry or country laws and regulations. With Private Clouds, you are in control of your business assets.

A Public Cloud is probably cheaper, but I have counted 15 pure business risks and 8 mixed technical-business risks for it. This type of Cloud, as it stands today, is suitable for non-critical IT needs such as quick complex calculations or proofs of concept, or for businesses that cannot yet afford a Private Cloud from an external vendor.

Also, when I read ZapThink's comparison of mission-critical software solutions with office furniture, put forward by Jason Bloomberg, I was jolted. If we accept the "furniture logic," Jason's proposal sounds to me like having a chair in one room while your table with dinner served is in another room. Companies write their own software because they cannot find appropriate products with a reasonable cost and time of customisation; I hope many would agree with me on this. Certainly, some software products become commodities and may be offered from the Cloud's "shelves," but talk of a massive transition into Clouds is a bit reckless nowadays. This means only one thing: despite the immediate financial benefit of placing corporate software into the Cloud, play a small chess game: calculate a few steps ahead from the perspective of your corporate business (if you cannot do this, I can help you) and then decide whether the immediate benefit is worth the future risks. Bringing your software back may be much more costly than placing it into the Cloud.

Before jumping into the Cloud, be sure you know how to get out (if needed).


James Staten (@staten7) reported Windows Azure Crosses Over To IaaS on 10/28/2010:


At its Professional Developers Conference this week, Microsoft made the long-awaited debut of its Infrastructure as a Service (IaaS) solution, under the guise of the “VM role,” putting the service in direct competition with Amazon Web Services’ Elastic Compute Cloud (EC2) and other IaaS competitors. But before you paint its offering as a "me too" (and yes, there is plenty of fast-follower behavior in today’s announcements), this move is a differentiator for Microsoft because much of its platform-as-a-service (PaaS) value carries down to this new role, resulting in more of a blended offering that may be a better fit for many modern applications.


First off, it’s not a raw VM you can put just anything into. As of its tech preview, opening up later this year, it will only host applications running on two versions of Windows Server — no Linux or other operating systems ... yet. Second, many of the PaaS services on Azure span PaaS and IaaS deployments, letting you easily mix VM role and Worker role components in the same service, sharing the same network configuration, shared components, caching, messaging, and network storage repositories. Third, Microsoft is also debuting, in preview, the ability to host virtualized application images in its Worker role, another deployment option that can accommodate even more applications and components that previously were not a fit on the Azure platform. And SQL Azure Data Sync replicates and synchronizes on-premise SQL Server databases to Azure. All these moves give traditional Windows developers significantly higher degrees of freedom to try, and then deploy to, the Azure platform. If you have held back on trying Azure because you didn’t feel like writing your entire application on the Azure platform, the bar is now a bit lower — assuming everything you write runs on Windows.

One could argue that these moves are really a "so what," given that nearly every public IaaS cloud delivers these same degrees of freedom — more, frankly, since any app running on any OS can be deployed to a VM-based cloud. But we forget that there is a deeper level of developer skill required to leverage IaaS (see figure below). If your developer skills center on building applications in .NET, Java, PHP, Ruby or other languages, and not on the layer below (installing, configuring, and managing the middleware and operating system), IaaS is a steep learning curve. And frankly, having to manage these lower layers cuts into go-to-market productivity. That’s always been the appeal of PaaS offerings. But PaaS carries two significant downsides which have kept its adoption lower than IaaS in our ForrSight surveys: fear of lock-in due to the abstracted middleware services, and the difficulty of migrating on-premise code to the platform.

[Figure: Depth of knowledge and cloud abstraction levels]

Neither fear is as real as it sounds, by the way. PaaS lock-in really isn’t that different than the old Java app server wars. WebLogic and WebSphere both supported the Java standard and you could port your app back and forth unless you tapped into their value added services. The same is true on most PaaS platforms. If you value the added services you will use them; and you will be locked in.

The truth about PaaS platform migration isn’t as daunting either. Most PaaS cloud services support one or multiple programming frameworks and adhere to the usual standards. How certain services are provided is usually where the work resides, not in the core functions of your application.

Much of the rest of Microsoft’s Azure news consisted either of catch-up responses to EC2 announcements — extra small instances, dynamic and secure caching — or of enhancements to improve Visual Studio developers’ and System Center administrators’ familiarity with Azure — Team Foundation integration, better reporting and management capabilities. Both are strong motives yielding good results. Microsoft’s culture is that of a fast follower, but one with a strong listening engine and a priority on customer retention. As a result it mostly takes others’ ideas, enhances them for its installed base, and innovates where it sees unique crossover opportunities between these aims. Usually the outcomes are easier-to-use and more enterprise-ready solutions.

Perhaps the most interesting of these cross-overs is DataMarket. The former Project Dallas is a "me too" of AWS’ Public Data Sets but one that is significantly easier to use and comes with a commercialization component for information providers that has no match among the cloud competitors. I’ll blog about this next.

James serves Infrastructure & Operations Professionals for Forrester Research.

<Return to section navigation list> 

Cloud Security and Governance

•• Chris Hoff (@Beaker) sought interview participants with a Got Cloud [Security]? I’d Like To Talk To You… post on 10/29/2010:

Blogging is very much a broadcast medium.  Sure, people comment every now and then, but I like talking to people; I like to understand what *they* think.

I have some folks I’d like to “interview” for my blog on the topic of Cloud – specifically ops, compliance, risk, and security. 

I don't want anecdotes or ill-defined polls, and I also don't want to regurgitate my interpretation of what someone else said. I want to hear you say it and let others know directly what you said.

The structure would be somewhat similar to my Take 5 interviews.  I’d preferably like folks in the architect or CISO/CSO role who have broad exposure to initiatives in their large enterprise or service provider companies.

We can keep it anonymous.

Email me [choff @ packetfilter.com] if you’re interested.

Thanks,

/Hoff


<Return to section navigation list> 

Cloud Computing Events

•• Tim Anderson (@timanderson) sums up his PDC experiences in Reflections on Microsoft PDC 2010 of 10/29/2010, written en route to the UK:

I’m in Seattle airport waiting to head home – so here are some quick reflections on Microsoft’s Professional Developers Conference 2010.

Let’s start with the content. There was a clear focus on two things: Windows Azure, and Windows Phone 7.

On the Azure front, Microsoft’s cloud platform impressed. Features are being added rapidly, and it looks solid and interesting. The announcements at PDC mean that Azure provides pretty much the complete Windows Server platform, should you want it. You will get elevated privileges for complete control over a server instance, and full IIS functionality including support for multiple web sites and the ability to install modules. You will also be able to remote desktop into your Azure servers, which is going to make Windows admins feel more comfortable with Azure.

The new virtual machine role is also a big deal, even though in some ways it goes against the multi-tenanted philosophy by leaving the customer responsible for patches and updates. Businesses with existing virtual servers can simply move them to Azure if they no longer wish to run their own hardware. There are also existing tools for migrating physical servers to virtual.

I asked Bob Muglia, president of server and tools at Microsoft, whether having all these VMs maintained by customers and potentially compromised with malware posed a security threat to the platform. He assured me that they are fully isolated, and that the main danger is to the customer who might consume unexpected amounts of bandwidth.

Simply running on an Azure VM does not take full advantage of the platform though. It makes more sense to hook into Azure services such as SQL Azure, or the non-relational storage services, and deploy to Azure web or worker roles where Microsoft take care of maintenance. There is also a range of middleware services called AppFabric; see here for a few notes on these.
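
For readers who have not looked at the web/worker model, a worker role is essentially a managed background process hosted by the Azure fabric. The following is a minimal sketch using the RoleEntryPoint base class from the Windows Azure SDK; the work performed inside the loop is a hypothetical placeholder, not part of Anderson's article.

using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

// Minimal shape of an Azure worker role: the fabric calls OnStart once,
// then keeps the instance alive for as long as Run() does not return.
public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // One-time initialization (diagnostics, configuration) goes here.
        return base.OnStart();
    }

    public override void Run()
    {
        while (true)
        {
            // Hypothetical unit of background work, e.g. draining a queue
            // or processing messages handed off by a web role.
            Trace.WriteLine("Working", "Information");
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}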

If there was one gap in the Azure story at PDC, it was a lack of partner announcements. Microsoft says there are more than 20,000 applications running on Azure, but we did not hear much about them, or about notable large customers embracing Azure. There is still a lot of resistance to the cloud among customers. I asked some attendees at lunch whether they expect to use Azure; the answer was “no, we have our own datacenter”.

I think the partner announcements will come. Microsoft is firmly behind Azure now, and it makes sense for its customers. I expect Azure to succeed; but whether it will do well enough to counter-balance the cost to Microsoft of migration away from on-premise servers is an open question.

Alongside Azure, though hardly mentioned at PDC, is the hosted application business originally called BPOS and now called Office 365. This is not currently hosted on Azure, though Muglia told me that most of it will in time move there. There are some potential synergies here, for example in Azure workflow applications that handle SharePoint forms or documents. [Emphasis added.]

Microsoft’s business is primarily based on partners selling Windows hardware and licenses for on-premise or client software. Another open question is how easily the company can re-orient itself to be a cloud platform and services company. It is a massive shift.

What about Windows Phone? Microsoft has some problems here, and they are not primarily to do with the phone itself, which is decent. There are a few issues over the design of the launch devices, and features that are lacking initially. Further, while the Silverlight and XNA SDK forms a strong development platform, there is a need for a native code SDK and I expect this will follow at some point.

The key issue though is that outside the Microsoft bubble there is not much interest in the phone. Google Android meets the needs of the OEM hardware and operator partners, being open and easily customised. Apple owns the market for high-end devices with the design quality and ease of use that comes from single-vendor control of the whole stack. The momentum behind these platforms is such that it will not be easy for Microsoft to grab much market share, or attention from third-party app developers. It deserves to do well; but I will not be surprised if it under-performs relative to its quality.

There was also some good material to be found on the PDC sidelines, as it were. Anders Hejlsberg presented on new asynchronous features coming in C# 5.0, which look like a breakthrough in making concurrent programming safer and easier. He also showed a bit of Microsoft’s work on compiler as a service, which has huge potential. Patrick Smacchia has an enthusiastic report on the C# presentation. Herb Sutter gave a brilliant talk on lambdas.

The PDC site lets you stream pretty much all the sessions and seems to work very well. The player application is written in Silverlight. Note that there are twice as many sessions as appear in the schedule, since many were pre-recorded and only show in the full session list.

Why did Microsoft run such a small event, with only around 1000 attendees? I asked a couple of people about this; the answer seems to be partly as a cost-saving measure – it is much cheaper to run an event on the Microsoft campus than to hire an external venue and pay transport and expenses for all the speakers and staff – and partly to emphasise the virtual aspect of PDC, with a global audience tuning in.

This does not altogether make sense to me. Microsoft is still generating a ton of cash, as we heard in the earnings call at the event, and PDC is a key opportunity to market its platform to developers and influencers, so it should not worry too much about the cost. Second, you can do virtual as well as physical; they are not alternatives. You get more engagement from people who are actually present.

One of the features of the player is that you see how many are currently streaming the content. I tuned into Mark Russinovich’s excellent session on Azure – he says he has “drunk the cloud kool-aid” – while it was being streamed live, and was surprised to see only around 300 virtual attendees. If that figure is accurate, it is disappointing, though I am sure there will be thousands of further views after the event.

Finally, what about all the IE9/HTML 5 vs Silverlight discussion generated at PDC? Clearly Microsoft’s messaging went badly awry here, and frankly the company has only itself to blame. It cannot be surprised if, after making a huge noise about how IE9 forms a great client for web applications, standards-based and integrated with Windows, people question what sort of role is envisaged for Silverlight. It did not help that a planned session on Silverlight futures was apparently cancelled, probably for innocent reasons such as not being quite ready to show, but this increased speculation that Silverlight is now being downplayed.

Microsoft chose to say nothing on the subject, other than some remarks by Bob Muglia to freelance journalist Mary-Jo Foley which seem to confirm that yes, Silverlight is no longer Microsoft’s key technology for cross-platform web applications.

If that was not quite the message Microsoft intended, then why not clarify the matter to press, myself included, as we sat in the press room on Microsoft’s campus?

My take is that while Silverlight is by no means dead, it seems destined for a lesser role than was once envisaged – a shame, as it is an excellent cross-platform .NET client.

Related posts:

  1. Microsoft PDC big on Azure, quiet on Silverlight
  2. Reflections on Microsoft PDC 2009
  3. AppFabric – Microsoft’s new middleware

•• Ben Day reported Beantown .NET Meeting on Thursday, 11/4/2010: Andy Novick, “Partitioning SQL Server” at a so-far undisclosed Boston, MA location:

Beantown .NET is going to be meeting Thursday, 11/4/2010.  This month we have Andy Novick presenting “Partitioning SQL Server Tables, Views and Indexed Views”. 

As always, our meeting is open to everyone so bring your friends and co-workers – better yet, bring your boss.  Please RSVP by email (beantown@benday.com) by 3pm on the day of the meeting to help speed your way through building security and to give us an idea how much pizza to order.

Future meetings:

  • December 2 – TBA
  • January 6 – Richard Hale Shaw, “On Time & Under Budget”
  • February 3 – TBA

The Windows Azure Team claimed on 10/28/2010 You spoke. We listened, and responded in this summary of new and forthcoming Windows Azure features:


Today, at PDC 2010, we announced new Windows Azure functionality that will make it easier for customers to run existing Windows applications on Windows Azure, enable more affordable platform access and improve the Windows Azure developer and IT Professional experience. This new Windows Azure functionality is driven by extensive customer and partner feedback collected through forums such as mygreatwindowsazureidea.com. We thank you for your valuable input over the past year and look forward to your continued feedback.

Improved Support for Enterprises

We are adding the following capabilities to Windows Azure in order to make it easier to integrate resources between the cloud and traditional IT systems, and provide better support for existing Windows applications.

  • Support for more types of new and existing Windows applications will soon be available with the introduction of the Virtual Machine (VM) role. Customers can move more existing applications to Windows Azure reducing the need to make costly code or deployment changes.
  • Development of more complete applications using Windows Azure is now possible with the introduction of Elevated Privileges and Full IIS. The new Elevated Privileges functionality for the Web and Worker role will provide developers with greater flexibility and control in developing, deploying and running cloud applications. The Web role will soon provide Full IIS functionality, which enables multiple IIS sites per Web role and the ability to install IIS modules.
  • Remote Desktop functionality enables customers to connect to a running instance of their application or service in order to monitor activity and troubleshoot common problems.
  • We're also introducing a range of new networking functionality under the Windows Azure Virtual Network name. Windows Azure Connect (formerly Project Sydney), which enables a simple and easy-to-manage mechanism to setup IP-based network connectivity between on-premises and Windows Azure resources, is the first Virtual Network feature that we'll be making available as a CTP later this year.

With the introduction of these enterprise-ready service enhancements, both enterprise customers and the systems integrators who support their businesses can use Windows Azure to extend and optimize their IT capabilities.

More Affordable Platform Access

Cost is one of the key drivers of cloud adoption, and Windows Azure will soon include an offering that provides access to compute instances at a substantially lower cost. This new 'Extra Small' Windows Azure Instance will provide developers with a cost-effective training and development environment. Developers can also use the 'Extra Small' instance to prototype cloud solutions at a lower cost. We are also announcing a new "Cloud Essentials Pack" offer that replaces our existing partner offers. This offer provides free access to the Windows Azure platform, including 750 extra small instance hours and a SQL Azure database per month at no charge, and will be available to Microsoft Partner Network members beginning January 7, 2011.

Better Developer and IT Professional Experience

We heard that while you appreciated the reduced management burden that the cloud offers, you also place a high value on retaining the flexibility to see and control how your applications and services are running in the cloud. To address these needs, we are announcing the following developer and operator enhancements at PDC 10:

  • A completely redesigned Silverlight based Windows Azure portal to ensure an improved and intuitive user experience
  • Access to new diagnostic information including the ability to click on a role to see role type, deployment time and last reboot time
  • A new sign-up process that dramatically reduces the number of steps needed to sign up for Windows Azure.
  • New scenario based Windows Azure Platform forums to help answer questions and share knowledge more efficiently.

One of the most exciting things we're announcing today is the new Windows Azure Marketplace, which is an online marketplace for you to share, buy and sell building block components, premium data sets, training and services needed to build Windows Azure platform applications. This is a great way to help you quickly create and monetize applications on Windows Azure. We are delivering the first section in the Windows Azure Marketplace, DataMarket (formerly Microsoft Codename "Dallas"), which is now commercially available and will enable you to leverage public data in your applications, creating richer experiences for your users. Already there are more than 35 data providers offering subscriptions on DataMarket. Visit the DataMarket (Dallas) blog for more information. A beta of the application section of the Windows Azure Marketplace will be available later this year.
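
Because DataMarket datasets are exposed as OData feeds, consuming one from .NET is mostly a matter of issuing an authenticated HTTP GET. The C# sketch below is illustrative only: the dataset URL is hypothetical, and the assumption that the account key is passed as a Basic-authentication password should be verified against the DataMarket documentation for the feed you actually subscribe to.

using System;
using System.Net;

class DataMarketSketch
{
    static void Main()
    {
        // Hypothetical dataset URL and account key; substitute the values
        // shown on your own DataMarket subscription page.
        var feedUrl =
            "https://api.datamarket.azure.com/SomeProvider/SomeDataset/Items?$top=10";
        var accountKey = "{your-account-key}";

        using (var client = new WebClient())
        {
            // Assumption: the account key is supplied as a Basic-auth password.
            client.Credentials = new NetworkCredential("accountKey", accountKey);

            // OData feeds return Atom/XML by default; print the first part of it.
            string atom = client.DownloadString(feedUrl);
            Console.WriteLine(atom.Substring(0, Math.Min(500, atom.Length)));
        }
    }
}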

With these investments, Windows Azure provides the broadest range of access and connection to the cloud, and a differentiated model for cross premises connectivity. We're excited to continue working with you to build out the most comprehensive cloud platform on the market and appreciate the feedback you have provided thus far - keep that feedback coming! All the features above will be available to customers by the end of the calendar year 2010. The VM role and Extra Small instance will be available in Beta, while Windows Azure Connect will be available in CTP. All other features will be Generally Available by the end of the year. If you would like to be notified when these Windows Azure features are available, and when we're accepting registrations for the VM role and Extra Small instance Beta as well as the Windows Azure Connect CTP, please click here.

Coming in CY2011

At PDC 10, we are also announcing that the following important features will be made available to customers in CY2011.

  • Dynamic Content Caching: With this new functionality, the Windows Azure CDN can be configured to cache content returned from a Windows Azure application.
  • CDN SSL Delivery: Users of the Windows Azure CDN will now have the capability to deliver content via encrypted channels with SSL/TLS.
  • Improved global connectivity: We will add new Windows Azure CDN nodes in the Middle East and improve existing connectivity in the US and Brazil.
  • Constructing VM role images in the cloud: We will enable developers and IT Professionals to build VM images for VM role directly in the cloud. This will be offered as an alternative to the current approach of building images on-premises and uploading them over the Internet.
  • Support for Windows Server 2003 and Windows Server 2008 SP2 in the VM role.
  • Improved Java Enablement: Microsoft plans to make Java a first-class citizen on Windows Azure. This process will involve improving Java performance, Eclipse tooling and client libraries for Windows Azure. Customers can choose the Java environment of their choice and run it on Windows Azure.

Since this year's PDC is a fully virtualized event, there is much more guidance and information available to you online. To attend PDC 2010 virtually, please visit: http://www.microsoftpdc.com/.  For additional information about today's news, check out the PDC Virtual Press Room.   


Kathleen Richards reported  Microsoft Focuses on Windows Phone, IE 9, Cloud in PDC Keynote in a two-page analysis for Redmond Developer News on 10/28/2010:

Microsoft kicked off its 2010 Professional Developers Conference (PDC) today by offering developers worldwide an update on its emerging cloud computing, Windows Phone and Internet Explorer 9 development platforms.

The two-day event, held at the Redmond campus, is being streamed live to developers via a Silverlight media player hosted on Windows Azure. More than 16,000 people viewed the keynote via the live stream.

"Make no mistake about it, when it comes to Windows Phone, we're all in," said Microsoft CEO Steve Ballmer during the two-hour keynote. "I've gotten asked various questions, 'what will you do if this or that or blah, blah, blah..."

"Boom, baby!" he enthusiastically told the crowd of developers who had descended on the Redmond campus for PDC. "That's what we're going to do. Continue to work, continue to drive, to continue to improve, but man I think we've got a great opportunity for you and for us with the Windows Phone."

During the keynote, Microsoft continued its march towards the release of IE9 expected in early 2011, comparing its performance to Chrome and reiterating its investments in HTML5. The company cited more than 10 million downloads of the beta since its release in September. IE 9 Platform Preview 6 is available to developers starting today.

"I’d be a lot more excited about HTML5 if I could program it using C#," said Rocky Lhokta, a Microsoft MVP and the principal technology evangelist at consultancy Magenic, who watched the keynote via live stream. "I love Silverlight, and try as I might, I have a hard time getting excited about HTML. That said, the capabilities provided by IE9 are nothing short of amazing!"

Windows Phone Call to Action
App development for Windows Phone 7, which is slated for release in the United States on November 8th, and already available in several countries, was promoted with free devices from Samsung, LG and HTC for all registered attendees, who will also get to bypass the $99 registration fee for the Windows Phone Marketplace.

Several upcoming Window Phone 7 apps were demonstrated including PopCap Games, Facebook, Amazon Kindle and Intuit's TurboTax companion application. Scott Guthrie, corporate vice president in the Microsoft Developer Division, announced a new OData Library and showed off his own "hot deals" app that utilized an OData end point on eBay.

Guthrie also offered a "sneak peek" at an upcoming performance analysis tool that runs an instrumented version of the app on devices and provides profiler data in Visual Studio. It allows Windows Phone developers to profile their apps' performance on the device by providing summary data on frames per second, CPU utilization and storyboard animations. Developers can drill down in the summary to see detailed stats on how much time is spent in the CPU and GPU and drill down further to see a Visual Tree that details how long it takes elements to render on screen. The profiler also suggests how to troubleshoot performance issues.

"I am happy to see the performance profiling capabilities, since that’s an area every phone developer must face when building applications," said Lhokta, whose team's first Windows Phone 7 application hit the Marketplace yesterday.

Ballmer said that with about 1,000 apps at launch, Microsoft is ahead of its competitors in their marketplace launch phases.

"Over 1000 apps is good, but they need to build up to the big leagues in the next 6 months…" said Al Hilwa, IDC program director, Applications Development Software, in an email. "From what I am seeing in the tools and how jazzed developers are, I think they will hit some big milestones for apps on a really fast schedule."

Azure Platform Services Start To Arrive


During the second hour of the keynote, Bob Muglia, president of Microsoft's Server and Tools Business, outlined the company's commitment to cloud services in three areas: infrastructure as a service, platform as a service and software as a service. More than 20,000 applications are hosted in Windows Azure, according to Microsoft.

Muglia announced a Virtual Machine role for Windows Server 2008 R2, which allows users to create a virtual instance of the server in the cloud, to ease migrations. A public beta is expected by the end of 2010. Support for Windows Server 2008 SP2 and Windows Server 2003 is planned in 2011, according to Muglia. Another migration feature announced today is Server Application Virtualization, which enables users to deploy an app image to a Windows Azure worker role. A preview of this functionality is expected by the end of the year, with final release planned for late 2011.

As Microsoft has asserted since Windows Azure was first announced at PDC08, Windows Azure and SQL Azure are the foundation for the company's Platform as a Service vision. Two years later, some of those services are finally closer to reality, according to announcements during today's keynote. Muglia said that the same set of services will also be supported in the Windows Azure Platform Appliance (WAPA), announced this summer, for private clouds. Developers can expect more information on WAPA at TechEd Europe in November, according to Muglia.

Developers have long requested SQL Azure Reporting Services, and Data Sync Services for on-premise, cloud and mobile. CTPs of both services are expected by the end of the year, with general availability planned for the first half of 2011.

"Microsoft’s been promising hosted SQL Azure Reporting Services since the transition from SQL Server Data Services, so I’m gratified that we can expect a CTP of SQL Azure Reporting by the end of the year," said Roger Jennings, principal consultant of OakLeaf Systems, in an email. "But there’s no news about a full-text search implementation, which was promised in the SDS/SSDS era. I’m also disappointed with the lack of news about future availability of SQL Azure Transparent Data Encryption for data privacy or data compression to increase effective storage capability within the current 50-GB database size limit." …

Read more on page 2 of Kathleen's article; my comments continue there.


Microsoft published Server and Tools Business News for PDC 2010 as a 10/28/2010 press release in *.docx format. Here’s the HTML version:


Platform as a service (PaaS) is where developers and businesses will ultimately see the true value of the cloud, and the operating system for platform as a service is Windows Azure. The Windows Azure platform, composed of Windows Azure and SQL Azure, is supported by a rich set of development tools, management and services from Microsoft Corp. It’s built to be flexible and give customers the ability to run the technologies they choose to achieve the power the cloud promises.

At the Professional Developers Conference (PDC), Microsoft is announcing a host of new enhancements and services that make it easier to move to platform as a service, to enhance current applications and workloads, and to transform applications to take full advantage of the underlying platform. (For more information about Microsoft’s cloud offerings, please visit the Cloud Computing: A Guide for IT Leaders website.)

Here are the details of the PDC announcements:

Making it easier to move existing applications and run them more efficiently, Microsoft is providing a bridge to PaaS from IaaS.

• Windows Azure Virtual Machine Role eases the migration of existing Windows Server applications to Windows Azure by eliminating the need to make costly application changes and enables customers to quickly access their existing business data from the cloud. Today at PDC 2010, Microsoft announced Virtual Machine Role support for Windows Server 2008 R2 in Windows Azure. A public beta will be available by the end of 2010.

Server Application Virtualization enables customers to deploy virtualized application images onto the Windows Azure worker role (single role, single instance) rather than the VM Role. Through this approach, customers can more easily migrate their traditional applications to Windows Azure without the need to rewrite them or to package them within a VM. Once the application is deployed with server application virtualization on Windows Azure, customers can benefit from the automated service management capabilities of Windows Azure including automatic configuration and ongoing operating system management. Server Application Virtualization for Windows Azure will be available as a community technology preview (CTP) before the end of 2010, and the final release will be available to customers in the second half of 2011.

Constructing VM role images in the cloud. Microsoft is enabling developers and IT professionals to build VM images for VM role directly in the cloud. This will be offered as an alternative to the current approach of building images on-premises and uploading them over the Internet. This update will be available in 2011.

Support for Windows Server 2003 and Windows Server 2008 SP2 in the VM Role. Microsoft supports Windows Server 2008 R2 in the Guest OS. In 2011, Microsoft will add support for Windows Server 2003 and Windows Server 2008 SP2.

Enhance applications and workloads with rich new services and features.

SQL Azure Reporting allows developers to embed reports into their Windows Azure applications, including rich data visualization and export to popular formats, such as Microsoft Word, Microsoft Excel and PDF, enabling the users of these applications to gain greater insight and act on their line-of-business data stored in SQL Azure databases. A CTP will be available to customers by the end of 2010. The final release of SQL Azure Reporting will be generally available in the first half of 2011.

SQL Azure Data Sync is another important building block service to help developers rapidly build cloud applications on the Windows Azure platform using Microsoft’s cloud database. It allows developers to build apps with geo-replicated SQL Azure data and synchronize on-premises with cloud and mobile applications. A CTP will be available by the end of 2010. A final release of SQL Azure Data Sync is set to be released in the first half of 2011.

Database Manager for SQL Azure is a new lightweight, Web-based database management and querying capability for SQL Azure. This capability was formerly referred to as “Project Houston,” and allows customers to have a streamlined experience within the Web browser without having to download any tools. Database Manager for SQL Azure will be generally available by the end of 2010.

Windows Azure AppFabric helps developers rapidly build cloud applications on the Windows Azure platform.

  • AppFabric Caching, which helps developers accelerate the performance of their applications.
  • AppFabric Service Bus enhancements will help developers build reliable, enterprise quality delivery of data or messages, to and from applications to third parties or mobile devices.

CTPs will be available at PDC, and both of these important building-block technologies will be generally available the first half of 2011.

Windows Azure Marketplace is a single online marketplace for developers and IT professionals to share, find, buy and sell building block components, training, services, and finished services or applications needed to build complete and compelling Windows Azure platform applications. Developers and ISVs will find that the Marketplace is an ideal way to monetize and publicize their offerings to cloud customers, and customers will find that the Marketplace offers an array of technologies they can purchase and use in one stop.

DataMarket is best thought of as an “aisle” in the Windows Azure Marketplace that provides developers and information workers with access to premium third-party data, Web services, and self-service business intelligence and analytics, which they can use to build rich applications. Today there are more than 35 data providers offering data on DataMarket, with over 100 more coming soon.

At PDC 2010, DataMarket (formerly code-named “Dallas”) was released to Web, and a Windows Azure Marketplace beta will be released by the end of the year.

TFS on Windows Azure. Microsoft demoed Team Foundation Server on Windows Azure, which shows that steps have been made toward cloud-hosted Application Lifecycle Management. It also demonstrates that Windows Azure is capable of running complex, enterprise workloads such as Team Foundation Server with marginal effort. A CTP will be available in 2011.

Windows Azure AppFabric

  • Windows Azure AppFabric Access Control enhancements help customers build federated authorization into applications and services without the complicated programming that is normally required to secure applications beyond organizational boundaries. With support for a simple declarative model of rules and claims, Access Control rules can easily and flexibly be configured to cover a variety of security needs and different identity-management infrastructures. These enhancements are currently available to customers.
  • Windows Azure AppFabric Connect allows customers to bridge existing line-of-business (LOB) integration investments over to Windows Azure using the Windows Azure AppFabric Service Bus, and connecting to on-premises composite applications running on Windows Server AppFabric. This new set of simplified tooling extends Microsoft BizTalk Server 2010 to help accelerate hybrid on- and off-premises composite application scenarios, which are critical for customers starting to develop hybrid applications. This service is freely available today.

Windows Azure Virtual Network. New networking functionality is being introduced under the Windows Azure Virtual Network name. The first Virtual Network feature, Windows Azure Connect (previously known as “Project Sydney”), enables a simple and easy-to-manage mechanism to set up IP-based network connectivity between on-premises and Windows Azure resources. A CTP of Windows Azure Connect will be available by the end of 2010, and it will be generally available in the first half of 2011.

Extra Small Windows Azure Instance. At PDC 2010 Microsoft announced the Extra Small Instance, which will be priced at $0.05 per compute hour in order to make the process of development, testing and trial easier for developers. This will make it affordable for developers interested in running smaller applications on the platform. A beta of this role will be available before the end of 2010.
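
To put that price in perspective, here is a quick back-of-the-envelope calculation in C#, using only the $0.05-per-hour compute rate stated above and treating a month as 744 hours; storage, bandwidth and other meters are billed separately and are not included.

using System;

class ExtraSmallCostSketch
{
    static void Main()
    {
        // Cost of one Extra Small instance running around the clock
        // at the announced $0.05 per compute hour.
        const double ratePerHour = 0.05;
        double hoursPerMonth = 24 * 31;          // 744 hours in a 31-day month

        Console.WriteLine("~${0:N2} per month for compute",
                          ratePerHour * hoursPerMonth);   // about $37.20
    }
}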

Remote Desktop enables IT professionals to connect to a running instance of their application or service to monitor activity and troubleshoot common problems. Remote Desktop will be generally available later this year.

Elevated Privileges. The VM role and Elevated Privileges functionality removes roadblocks that today prevent developers from having full control over their application environment. For small changes such as configuring Internet Information Service (IIS) or installing a Microsoft Software Installer (MSI), Microsoft recommends using the Elevated Privileges admin access feature. This approach is best suited for small changes and enables the developer to retain automated service management at the Guest OS and the application level. Elevated Privileges will be generally available to customers later this year.

Full IIS Support enables development of more complete applications using Windows Azure. The Web role will soon provide full IIS functionality, which enables multiple IIS sites per Web role and the ability to install IIS modules. The full IIS functionality enables developers to get more value out of a Windows Azure instance. Full IIS Support will be generally available to customers later this year.

Windows Server 2008 R2 Roles. Windows Azure will now support Windows Server 2008 R2 in its Web, worker and VM roles. This new support will enable customers to take advantage of the full range of Windows Server 2008 R2 features such as IIS 7.5, AppLocker, and enhanced command-line and automated management using PowerShell Version 2.0. This update will be generally available later this year.

Multiple Admins. Windows Azure will soon support multiple Windows Live IDs to have administrator privileges on the same Windows Azure account. The objective is to make it easy for a team to work on the same Windows Azure account while using their individual Windows Live IDs. The Multiple Admins update will be generally available later this year.

Dynamic Content Caching. With this new functionality, the Windows Azure CDN can be configured to cache content returned from a Windows Azure application. Dynamic Content Caching will be available to customers in 2011.

CDN SSL Delivery. Users of the Windows Azure CDN will now have the capability to deliver content via encrypted channels with SSL/TLS. This update will be available in 2011.

Improved global connectivity. Microsoft will add new Windows Azure CDN nodes in the Middle East and improve existing connectivity in the U.S. and Brazil in 2011.

Improved Java Enablement. Microsoft plans to make Java a first-class citizen on Windows Azure. This process will involve improving Java performance, Eclipse tooling and client libraries for Windows Azure. Customers can choose the Java environment of their choice and run it on Windows Azure. Improved Java Enablement will be available to customers in 2011.

Transform applications to do new things in new ways, highly scalable and highly available

Windows Azure AppFabric Composition Model and Composite App Service provides an end-to-end “composite” application development environment to help developers streamline the process of assembling, managing and deploying various home-grown and third-party services that span the Web, middle tier and database in the cloud.

  • The AppFabric Composition Model will help developers compose applications on the Windows Azure Platform with extensions to the .NET Framework and tie them all together through a new Microsoft Visual Studio-based designer experience. A CTP will be available in the first half of 2011.
  • The AppFabric Composite App Service allows developers to take the Composition Model and automate the deployment, configuration, control, monitoring, troubleshooting, reporting and optimization of an application without the usual manual steps. A CTP is also due the first half of 2011.

Windows Azure Enhancements. While developers and IT professionals appreciate the reduced management burden that Windows Azure offers, they also place a high value on retaining the flexibility to see and control how their applications and services are running in the cloud. Developers and IT professionals need clear visibility into their cloud applications, along with a high level of control over how these applications are running.

To address these needs, Microsoft is announcing the following developer and operator enhancements at PDC 2010:

  • A completely redesigned Microsoft Silverlight-based Windows Azure portal to ensure an improved and intuitive user experience
  • Access to new diagnostic information including the ability to click on a role to see type and deployment time
  • A new sign-up process that dramatically reduces the number of steps needed to sign up for Windows Azure
  • New scenario-based Windows Azure Platform forums to help answer questions and share knowledge more efficiently

These Windows Azure enhancements will be generally available by the end of 2010.

“Windows Azure Platform Cloud Essentials for Partners” is an offer that replaces Microsoft’s existing partner offers. This offer will go live on Jan. 7, 2011, and provide free access to the Windows Azure platform, including 750 Extra Small Instance hours and a SQL Azure database per month at no additional charge. Partners can sign up for the Cloud Essentials Pack at http://www.microsoftcloudpartner.com. …


Alex Williams posted Microsoft PDC 10: Live Blogging and an Interview with Bob Muglia on 10/28/2010:

We're going to the Microsoft Professional Developers Conference, which starts at 9 a.m. PST, and we have some things in store to give a perspective on the event and the strategy behind Windows Azure.

Our own Frederic Lardinois will be live blogging the keynotes. I will be snapping pictures to capture a sense of the event.

After the keynote, I will do a live, streaming interview with Bob Muglia, who oversees Windows Azure. Muglia reports to Microsoft CEO Steve Ballmer. Earlier this year, Muglia filled the role previously held by Ray Ozzie, a key player in crafting the strategy for Microsoft's cloud initiative. The interview begins at 1:20 p.m. PST.

To see the interview, you have a few options:

  1. Come back here tomorrow to see the live stream or view the interview on our Justin.tv web page.
  2. Watch live video from ReadWriteWeb on Justin.tv

We'll be looking for perspective from Muglia, especially after spending some time this week in Las Vegas, where we interviewed Anant Jhingran at the IBM Information on Demand Conference. Jhingran is an IBM Fellow, Vice President and CTO for IBM's Information Management Division. He is an important player in developing IBM's cloud strategy.

Here's the recording of that interview. It's a bit choppy, but Jhingran's views about Hadoop and cloud computing provide some context for what these trends mean for customers.


Sharon Pian Chan (@sharonpianchan) asserted “As Microsoft kicks off its Professional Developers Conference in Redmond on Thursday, the company will continue its pitch to get developers to come over to the cloud” as a deck for her Developers starting to ride on Microsoft's cloud article for the Seattle Times’ 10/28/2010 issue:

Imagine Microsoft has built a virtual Mall of America. Each store can immediately explode into a Costco-size warehouse or collapse into an airport kiosk. Each store's inventory can expand and contract on demand, like the sliding gun racks in the movie, "The Matrix."

That's what cloud computing is for Microsoft — a new technology space where companies can build new software, as well as distribute and sell their wares.

Close to a year after Microsoft cut the ribbon, however, most of the storefronts remain empty. But tech companies say interest is brewing.


As Microsoft kicks off its Professional Developers Conference in Redmond on Thursday, the company will continue its pitch to get developers to come over to the cloud and the Azure development platform Microsoft has built for it.

pdc10.gif"The last three years have been kind of like the Moses experience — charging through the desert, preaching cloud computing especially on Azure, and seeing a lot of people have interest but really no one taking hold or taking the next step," said Danny Kim, chief technology officer of Full Armor. His Boston-based company helps organizations like education departments move software management into the cloud.

Kim has started to see more developer interest. "In the last three to four months we've been seeing a huge uptick in engagement," he said. "It's getting beyond just kicking the tires to people actually doing test drives."

The cloud is a bet-the-company move by Microsoft, Chief Executive Steve Ballmer has said.

The PC's prominence as a computing device is eroding with the explosion of smartphones, introduction of devices like the iPad and advances with Web-connected television. The cloud is Microsoft's antidote to "a post-PC world," as retiring Chief Software Architect Ray Ozzie wrote in a blog post Monday.

"It's important that all of us do precisely what our competitors and customers will ultimately do: close our eyes and form a realistic picture of what a post-PC world might actually look like, if it were ever truly to occur," Ozzie wrote. "... We're moving toward a world of 1) cloud-based continuous services that connect us all and do our bidding, and 2) appliance-like connected devices enabling us to interact with those cloud-based services."

In the cloud, Internet-connected devices access software and data stored in giant remote data centers. Microsoft, Google, Amazon.com, Salesforce.com and IBM are all jockeying to bring software developers to their cloud platforms.

Microsoft is "all in," as Ballmer says, when it comes to cloud investment. The company has built immense data centers around the world, in Ireland, Chicago, San Antonio and Quincy, Wash., each costing hundreds of millions of dollars. It has developed Azure and SQL Azure, the database platforms that support cloud efforts.

The company launched Azure in January and began charging customers in February.

For companies that weren't comfortable with putting their software and data in Microsoft's data centers, Microsoft announced an Azure appliance in July, a cloud in a box for people who want to build cloud applications but want to keep the servers on their own property.

As of July, Azure had 10,000 customers. The company has not disclosed revenue numbers.

Microsoft declined to comment for this story.

"I think it's a pretty good start. It doesn't show any kind of weakness, it doesn't show any great strength, it just shows that there's interest," said Rob Sanfilippo, vice president at independent research firm Directions on Microsoft in Kirkland.

The pricing has been confusing to people, although Microsoft has options for developers to try out the service free. Microsoft usually sells software with licenses and is starting to charge for online services per user.

Charges for using Azure are based on computing power, bandwidth and storage, an entirely new pricing model for software companies to wrap their heads around.
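
A rough C# sketch of how such a consumption-based estimate is assembled appears below; the unit rates are hypothetical placeholders rather than Microsoft's published prices, and real bills also include meters such as storage transactions and SQL Azure databases.

using System;

class AzureBillSketch
{
    static void Main()
    {
        // Hypothetical unit rates for illustration only; substitute the
        // published Windows Azure prices before using this for a real estimate.
        const double computePerHour = 0.12;    // per small-instance hour (assumed)
        const double storagePerGbMonth = 0.15; // per GB stored per month (assumed)
        const double egressPerGb = 0.15;       // per GB of outbound bandwidth (assumed)

        double instanceHours = 2 * 24 * 30;    // two small instances for 30 days
        double storedGb = 50;
        double outboundGb = 100;

        double total = instanceHours * computePerHour
                     + storedGb * storagePerGbMonth
                     + outboundGb * egressPerGb;

        Console.WriteLine("Estimated monthly charge: ${0:N2}", total);
    }
}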

"Initially it was a little difficult to unwind for customers how pricing was broken down and what they should expect as time wears on," said Mike Lucaccini, president of Archetype, a company in Emeryville, Calif., that builds media-management software. He has started offering hosting services on Azure to his customers who don't want to buy new servers and manage the infrastructure.

People who are building on Azure say the slow adoption is more about general fear around privacy, security and reliability of moving to the cloud, rather than what Microsoft is offering compared with its competitors.

"We believe this is one of the finest platforms that Microsoft has got, and they got it right the first time," said Rajesh Gupta, global head for the Microsoft practice at InfoSys, a large technology company based in India.

Gupta says he wants to see Microsoft evangelize the cloud.

"Microsoft needs to take a leading role in terms of championing the adoption of cloud technology," he said. "There needs to be a forum joining hands."


The Mobile Cloud Computing Forum invites you to Register for Free Live Streaming of its one-day conference in London, UK on 12/1/2010:

Show Highlights:

  • 1-day conference and exhibition on Enterprise Mobile Cloud Computing and Enterprise Apps
  • Watch the event streamed LIVE online free of charge, with the option to ask questions from your desktop - click here to register
  • Hear from leading case studies on how they have integrated Mobile into their working practices
  • Learn from the key players offering Mobile products and services
  • Benefit from our pre-show online meeting planner
  • Network in our combined exhibition and catering area
  • Evening networking party for all attendees


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

•• Dave McCrory (@mccrory) created a Public Cloud Hourly Cost Comparison between Windows Azure, Amazon EC2, Rackspace Cloud Servers, and Joyent SmartMachines in an Excel spreadsheet:

After receiving feedback from several readers and the desire to see the results, I have updated and reformatted my previous Cloud Cost Comparison. I have included comparisons between Microsoft Windows Azure, Amazon EC2, Rackspace Cloud Servers, and Joyent SmartMachines in this spreadsheet. This spreadsheet includes normalized data across CPU, Memory, Disk Storage, Disk Throughput, and Cost.

A few important notes about this data:

  1. It is current as of October 30th, 2010
  2. The data in the spreadsheet is normalized and a WINDOWS GUEST OS IS ASSUMED (this means, for example, that on Amazon a Linux instance would be roughly 33% cheaper)
  3. I want feedback to improve the accuracy of this spreadsheet
    (Fields with a ? mean that I have made an assumption)

Without further delay, here is the new Public Cloud Hourly Cost Comparison:
(Click on the Image to Download the Spreadsheet)

- Please Leave my Blog Link in or credit me if you decide to copy/use this
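As a rough illustration of the kind of normalization McCrory describes, the Python sketch below divides an hourly price by each instance type's resources; the instance names, specs and prices are invented placeholders, not figures from his spreadsheet.

```python
# Hypothetical instance types; specs and prices are placeholders,
# not values taken from McCrory's spreadsheet.
instances = [
    {"name": "provider-a-small", "cpu_cores": 1, "ram_gb": 1.75, "disk_gb": 225, "usd_per_hour": 0.12},
    {"name": "provider-b-small", "cpu_cores": 1, "ram_gb": 1.70, "disk_gb": 160, "usd_per_hour": 0.12},
    {"name": "provider-c-1gb",   "cpu_cores": 4, "ram_gb": 1.00, "disk_gb": 40,  "usd_per_hour": 0.06},
]

def normalized_costs(inst):
    """Hourly cost per core, per GB of RAM, and per GB of disk."""
    price = inst["usd_per_hour"]
    return {
        "per_core": price / inst["cpu_cores"],
        "per_gb_ram": price / inst["ram_gb"],
        "per_gb_disk": price / inst["disk_gb"],
    }

for inst in instances:
    print(inst["name"], {k: round(v, 4) for k, v in normalized_costs(inst).items()})
```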


•• Nicole Hemsoth reported Intel Lays Groundwork to Fulfill 2015 Cloud Vision in a 10/28/2010 post to the HPC in the Cloud blog:

According to IDC forecasts, “by 2015, over 2.5 billion people with more than 10 billion devices will access the Internet,” which means that capacity will be stretched to over twice what it is now. Data centers are already feeling the effects of increased demand, and operators building out existing facilities are under cost and efficiency pressure to learn quickly how to become far more efficient while still delivering peak performance.

What is needed is an overhaul of current theories about efficient data center operation so that flexibility and cloud architectures are given sufficient weight. These are all issues that Intel addressed recently via a string of announcements geared toward creating a more open, accessible, flexible and efficient cloud.

This week Intel announced its Cloud 2015 Vision, which sets forth its mission to create a “federated, automated and client-aware” environment that adheres to its three pillars of cloud (efficiency, simplification and security) as well as its goal to “create solutions that are open, multi-vendor and interoperable.” By packaging a small set of rhetoric-driven announcements around a hard-to-disagree-with list of topics that challenge cloud adoption, Intel took some steps toward making itself heard in the "cloudosphere" on some of the major issues that vendors in niche cloud spaces have often discussed at length.

Key Challenges for the Next Five Years

Intel’s goals over the next five years address some inherent challenges that are holding back the paradigm shift to the cloud. These include:

• Maintaining the stability of mission critical applications during the cloud migration process.

• Finding ways to negotiate issues related to privacy, security and the protection of intellectual property.

• Automating and flexibly allocating resources while cloud tools themselves are still evolving.

• Finding solutions that will meet goals of interoperability and maintain flexibility.

• Making sure that cloud-based applications enable user productivity, no matter what device is being used.

Read more: this excerpt is page 1 of 4 on the HPC in the Cloud site.

What qualifies a chip peddler to create a “federated, automated and client-aware” cloud-computing environment?


Chirag Mehta published Challenging Stonebraker’s Assertions On Data Warehouses - Part 1 on 10/28/2010:

I have tremendous respect for Michael Stonebraker. He is a genuine visionary. What I like most about him is his drive and passion for commercializing academic concepts. ACM recently published his article “My Top 10 Assertions About Data Warehouses”; if you haven’t read it, I would encourage you to do so.

I agree with some of his assertions and disagree with a few. I am grounded in reality, but I do have a progressive viewpoint on this topic. This is my attempt to bring an alternate perspective to the rapidly changing BI world as I see it. I hope readers take it as constructive criticism. This post has been sitting in my drafts folder for a while; I finally managed to publish it. This is Part 1, covering assertions 1 through 5. Part 2, with the rest of the assertions, will follow in a few days.

“Please note that I have a financial interest in several database companies, and may be biased in a number of different ways.”

I appreciate Stonebraker’s disclaimer. I do believe that his view is skewed toward what he has seen and invested in, but I don’t believe there is anything wrong with that. I like it when people put their money where their mouth is.

As you might know, I work for SAP, but this is my independent blog and these are my views and not those of SAP’s. I also try hard not to have SAP product or strategy references on this blog to maintain my neutral perspective and avoid any possible conflict of interest.

Assertion 1: Star and snowflake schemas are a good idea in the data warehouse world.

This reads like an incomplete statement. Star and snowflake schemas are a good idea because they have proven to perform well in the data warehouse world with row and column stores. However, there are emergent NoSQL-based data warehouse architectures I have started to see that are far from a star or a snowflake; they are, in fact, schemaless.
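For readers new to the terminology, here is a minimal star-schema sketch using Python and SQLite; the fact and dimension tables, columns and data are invented purely for illustration.

```python
import sqlite3

# A toy star schema: one fact table surrounded by dimension tables.
# Table and column names are invented for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, calendar_date TEXT, quarter TEXT);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales  (
    date_id    INTEGER REFERENCES dim_date(date_id),
    product_id INTEGER REFERENCES dim_product(product_id),
    units      INTEGER,
    revenue    REAL
);
""")
conn.execute("INSERT INTO dim_date VALUES (1, '2010-10-28', 'Q4')")
conn.execute("INSERT INTO dim_product VALUES (10, 'Widget', 'Hardware')")
conn.execute("INSERT INTO fact_sales VALUES (1, 10, 5, 49.95)")

# A typical star-join query: aggregate facts grouped by dimension attributes.
for row in conn.execute("""
    SELECT d.quarter, p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date d    ON f.date_id = d.date_id
    JOIN dim_product p ON f.product_id = p.product_id
    GROUP BY d.quarter, p.category
"""):
    print(row)
```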

“Star and Snowflake schemas are clean, simple, easy to parallelize, and usually result in very high-performance database management system (DBMS) applications.”

The following statement contradicts the statement above.

“However, you will often come up with a design having a large number of attributes in the fact table; 40 attributes are routine and 200 are not uncommon. Current data warehouse administrators usually stand on their heads to make "fat" fact tables perform on current relational database management systems (RDBMSs).”

There are a couple of problems with this assertion:

  1. The schema is not simple: 200 attributes, fact tables, and complex joins. What exactly is simple?
  2. Efficient parallelization of a query depends on many factors beyond the schema: how the data is stored and partitioned, the performance of the database engine, and the hardware configuration, to name a few.

"If you are a data warehouse designer and come up with something other than a snowflake schema, you should probably rethink your design.”

Really?

The requirement that the schema be perfect upfront has introduced most of the problems in the BI world. I call it design-time latency: the time between a business user deciding what report or information to request and when she finally gets it (mostly the wrong one). The problem is that you can only report on what you already have in your DW and what has been tuned.

This is why the schemaless approach seems more promising: it can cut down the design-time latency by allowing business users to explore the data and run ad hoc queries without locking into a specific structure.
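As a rough sketch of that schemaless, ad hoc style, the snippet below groups and sums loosely structured records that share no predefined schema; the records and field names are invented for illustration.

```python
# Loosely structured records: not every record has the same fields.
events = [
    {"region": "west", "revenue": 120.0, "channel": "web"},
    {"region": "east", "revenue": 80.0},
    {"region": "west", "revenue": 45.5, "promo": "fall-sale"},
]

def ad_hoc_total(records, group_key, measure):
    """Group-and-sum over whatever fields happen to exist in the records."""
    totals = {}
    for rec in records:
        key = rec.get(group_key, "unknown")
        totals[key] = totals.get(key, 0.0) + rec.get(measure, 0.0)
    return totals

# A business user can change group_key or measure without redesigning a schema.
print(ad_hoc_total(events, group_key="region", measure="revenue"))
```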

Assertion 2: Column stores will dominate the data warehouse market over time, replacing row stores.

This assertion assumes that there are only two ways of organizing data: in a row store or in a column store. This is not true. Look at my NoSQL explanation above, and at my post “The Future Of BI In The Cloud,” for an alternate storage approach.

This assertion also assumes that access performance depends tightly on how the data is stored. While this is true in most cases, many vendors are challenging the assumption by introducing an acceleration layer on top of the storage layer. A clever acceleration architecture, acting as an access layer, makes it feasible to achieve consistent query performance regardless of how the data is stored and organized.
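For readers who want the row-versus-column distinction made concrete, here is a minimal sketch, with invented data, of the two layouts, a single-column scan, and the run-length compression that repeated values in a column enable.

```python
from itertools import groupby

# The same three records laid out row-wise and column-wise (invented data).
rows = [
    {"product": "widget", "region": "west", "units": 5},
    {"product": "widget", "region": "west", "units": 3},
    {"product": "gadget", "region": "east", "units": 7},
]
columns = {
    "product": ["widget", "widget", "gadget"],
    "region":  ["west", "west", "east"],
    "units":   [5, 3, 7],
}

# Scanning one attribute touches only one column in the column layout...
total_units = sum(columns["units"])

# ...and repeated values within a column run-length encode well.
def run_length_encode(values):
    return [(value, len(list(group))) for value, group in groupby(values)]

print(total_units)                           # 15
print(run_length_encode(columns["region"]))  # [('west', 2), ('east', 1)]
```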

“Since fact tables are getting fatter over time as business analysts want access to more and more information, this architectural difference will become increasingly significant. Even when "skinny" fact tables occur or where many attributes are read, a column store is still likely to be advantageous because of its superior compression ability."

I don’t agree that the solution is fatter fact tables whenever business analysts want more information. Even if it is, how will a column store stay advantageous once the data grows beyond the point where compression is no longer that useful?

“For these reasons, over time, column stores will clearly win”

Even if it were only about rows versus columns, the column store may not be a clear commercial winner in the marketplace. Runtime performance is just one of many factors that customers consider while investing in DW and business intelligence.

“Note that almost all traditional RDBMSs are row stores, including Oracle, SQLServer, Postgres, MySQL, and DB2.”

Exactly!

Row stores, with optimization and acceleration, have demonstrated good enough performance to stay competitive. Not that I favor one over the other, but not every row-based DW is so large, growing so rapidly, or suffering such serious performance issues that it warrants a switch from rows to columns.

This leads me to my last issue with this assertion: what about a hybrid store, row and column? Many vendors are trying to figure this one out, and if they are successful it could change the BI outlook. I will wait and watch.

Assertion 3: The vast majority of data warehouses are not candidates for main memory or flash memory.

I am assuming that he is referring to flash used as a memory tier and not flash as block storage; the latter, as SSDs, has huge potential in the BI world.

“It will take a long time before main memory or flash memory becomes cheap enough to handle most warehouse problems.”

Not all DW are growing at the same speed. One size does not fit all. Even if I agree that the price won’t go down significantly, at the current price point, main memory and flash memory can speed up many DW without breaking the bank.

The cost of flash memory is a small fraction of the overall cost of a DW: hardware, licenses, maintenance, and people. If the added cost of flash makes the business more agile, reduces maintenance cost, and allows the company to make faster decisions based on smarter insights, it is worth it. The upfront capital cost is not the only deciding factor for BI systems.
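A back-of-the-envelope sketch of that argument, using invented figures rather than real price data:

```python
# Back-of-the-envelope illustration with invented figures: even a generous
# flash budget is a modest share of a DW's total cost of ownership.
costs = {
    "hardware": 400_000,
    "licenses": 600_000,
    "maintenance": 300_000,
    "staff": 700_000,
    "flash_upgrade": 100_000,  # hypothetical added flash/SSD spend
}
total = sum(costs.values())
print(f"flash share of total cost: {costs['flash_upgrade'] / total:.1%}")  # ~4.8%
```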

“As such, non-disk technology should only be considered for temporary tables, very "hot" data elements, or very small data warehouses.”

This is easier said than done. Customers will spend significantly more time and energy on a complicated architecture to isolate the hot elements and run them on a different software and hardware configuration.

Assertion 4: Massively parallel processor (MPP) systems will be omnipresent in this market.

Yes, MPP is the future; no disagreements. The assertion says nothing about on-premise versus the cloud, but I truly believe the cloud is the future for MPP. Other BI issues need to be addressed before the cloud becomes a good platform for a massive-scale DW, but the cloud will beat any other platform when it comes to MPP with computational elasticity.
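As a toy illustration of the MPP scatter-gather pattern, the sketch below fans a scan-and-aggregate out across worker processes and combines the partial results; a real MPP warehouse shards data across nodes, which this single-machine example only imitates.

```python
from multiprocessing import Pool

def partial_sum(partition):
    """Each worker scans its own partition and returns a partial aggregate."""
    return sum(partition)

if __name__ == "__main__":
    # Pretend each list is a shard of a fact-table column on a separate node.
    partitions = [[5, 3, 7], [2, 8], [4, 4, 4, 1]]
    with Pool(processes=len(partitions)) as pool:
        partials = pool.map(partial_sum, partitions)
    print(sum(partials))  # 38
```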

Assertion 5: "No knobs" is the only thing that makes any sense.

“In other words, look for "no knobs" as the only way to cut down DBA costs.”

I agree that “no knobs” is what customers should strive for to simplify and streamline their DW administration, but I don’t expect these knobs to significantly drive down the overall operational cost, or even the cost associated just with the DBAs. Not all DBAs have a full-time job managing and tuning the DW. DW deployments go through a cycle of tasks including schema design, requirements gathering, and ETL design; tuning, or using the “knobs,” is just one of many tasks a DBA performs. I absolutely agree that no knobs would take some burden off a DBA’s shoulders, but I disagree that it would result in significant DBA cost savings.

For a fairly large deployment, there is significant cost associated with the number of IT layers responsible for channeling reports to the business users. There is an opportunity to invest in the right architecture, the right technology stack for the DW, and the tools on top of it to increase the ratio of business users to BI IT. This should also speed up decision-making based on the insights gained from the data. Isn’t that the purpose of having a DW to begin with? I see self-service BI as the only way to make IT scale. Instead of cutting DBA cost, I would rather focus on scaling BI IT with the same budget and broader coverage among the business users in an organization.

In his current role, Chirag helps SAP explore, identify, and execute on growth opportunities by leveraging its next-generation technology platform (in-memory computing, cloud computing, and modern application frameworks) to build simple, elegant, and effective applications that meet the latent needs of customers.


<Return to section navigation list>