Monday, May 16, 2011

Windows Azure and Cloud Computing Posts for 5/16/2011+

A compendium of Windows Azure, Windows Azure Platform Appliance, SQL Azure Database, AppFabric and other cloud-computing articles.


Dedicated to TechEd North America 2011 Day 1

Note: This post is updated daily or more frequently, depending on the availability of new articles in the following sections:

To use the above links, first click the post’s title to display the single post, then navigate to the article you want.


Azure Blob, Drive, Table and Queue Services

No significant articles today.


<Return to section navigation list> 

SQL Azure Database and Reporting

Zane Adam announced “Project Austin” for SQL Azure in a 5/15/2011 post:

We are continuing our innovation in the Windows Azure Platform and releasing at a frequent cadence of high-value new features and enhancements to our public cloud services.  Today at TechEd I’m excited to make a few new announcements:

For AppFabric, this past month we released the Windows Azure AppFabric Caching service and major enhancements to the Access Control service. At TechEd today we announced two new capabilities for both developers and IT Pros:

  • Today we released new features and enhancements as part of the Windows Azure AppFabric May CTP, which adds more comprehensive pub/sub messaging to the Service Bus and enables new scenarios on the Windows Azure platform.  These scenarios include:
    • Async Cloud Eventing – Distribute event notifications to occasionally connected clients (e.g. phones, remote workers, kiosks, etc);
    • Event-driven SOA – Loose coupling to enable easy evolution of messaging topology over time; and
    • Intra-App Messaging – Load leveling and load balancing for building highly scalable, resilient applications.
  • We also announced capabilities coming in the Windows Azure AppFabric June CTP, which will provide the AppFabric Application Manager and Developer Tools that make it easy for developers to build, deploy, manage and monitor multi-tier (web, business logic, database) applications as a single logical entity. This CTP will also enable getting these benefits when running Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF) services on AppFabric.

For SQL Azure, we also recently released the new Import/Export feature as a CTP.  At TechEd today we announced a set of new developer and IT Pro enhancements:

  • A new capability (Codename “Austin”) which will make Microsoft StreamInsight’s complex event processing capabilities (available in SQL Server today) available as a service on the Windows Azure platform. This allows customers and partners to build event-driven applications where the analysis of the events is performed in the cloud.  This is in private CTP for now, but will be opened up for public CTP later in H2 on the SQL Azure Labs site. 
  • We are integrating the import/export features into the management portal. These features help simplify archiving SQL Azure databases and provide migration between SQL Server on-premises and SQL Azure.  This makes it significantly easier to move databases back and forth between on-premises and cloud.  Additionally, this enables new hybrid scenarios to extend on-premises data beyond the firewall to the cloud and reach both web and mobile users.  By exporting an on-premises database to the cloud and utilizing SQL Azure Data Sync, data will be truly available and kept up to date anywhere – in any of Microsoft’s global datacenters, customer datacenters, and in branch offices for scenarios such as retail point of sale systems.
  • We will also provide an enhanced management experience through the portal, including enhancements to the web based database manager (formerly called “Project Houston”) for additional schema management, and a new service for managing SQL Azure databases via web services with an OData endpoint.

I’d also like to talk a little more about Codename “Austin”, which adds another dimension to our growing portfolio of cloud data services (to learn more about StreamInsight, you can start here).  Codename “Austin” offers the capability to create event-driven systems in the cloud that have high data rates, continuous queries, and low-latency requirements.  Industries such as manufacturing, utilities, and web analytics use StreamInsight to identify meaningful trends and patterns as they happen and trigger immediate response actions.  Some example scenarios include:

  • Collecting data from manufacturing applications (e.g. real-time events from plant-floor devices and sensors);
  • Financial trading applications (e.g. monitoring and capitalizing on current market conditions with very short windows of opportunity);
  • Web analytics (e.g. immediate click-stream pattern detection and response with targeted advertising.); and
  • “Smart grid” management (e.g. infrastructure for managing electric grids and other utilities, such as immediate response to variations in energy to minimize or avoid outages or other disruptions of service).

Complementary to BI, complex event processing (CEP) enables real-time insight into vast volumes of streaming data, while BI enables analytics and insight into a set of existing data to inform future decision making.  Taking StreamInsight into the cloud with Codename “Austin” provides the opportunity to use it as a service instead of implementing it yourself and, more importantly, to collect and process events from anywhere on the planet and derive trends from a vastly increased series of events, since that data is sent to the cloud.

To read more on the AppFabric announcements you can read the post on the AppFabric Team Blog and visit www.appfabric.net.

To read more on the latest about the SQL Azure announcements at TechEd, check out the SQL Azure team blog.

We continue to listen to your feedback and come out with new and exciting enhancements to our cloud services. So check out the new capabilities and let us know what you think!


Steve Yi described the SQL Azure May 2011 Update in a 5/16/2011 post to the SQL Azure team blog:

Announced earlier today at TechEd, there are several key updates to the SQL Azure service that I wanted to share with you.  Zane Adam covered these at a high level on his blog this morning as well as talking about improvements in AppFabric.  The theme of this service release was to continue improving on making SQL Azure databases easier to manage, and the service enhancements go a long way towards making automation and keeping track of geo-distributed deployments more convenient.

For the May 2011 service update, there are four key improvements the engineering teams have been hard at work on:

  1. SQL Azure Management REST API – a web API for managing SQL Azure servers.
  2. Multiple servers per subscription – create multiple SQL Azure servers per subscription.
  3. JDBC Driver – updated database driver for Java applications to access SQL Server and SQL Azure.
  4. DAC Framework 1.1 – making it easier to deploy databases and in-place upgrades on SQL Azure.

Let’s go through these one by one in a little more detail.  For deeper technical details you can read more in the MSDN documentation here.

SQL Azure Management REST API: With the latest release, managing databases can be accomplished via a web API to programmatically manage SQL Azure servers along with configuring the firewall rules.  While managing the servers via the Windows Azure developer portal is straightforward enough, doing this via an API provides the ability to automate these tasks.  Many SQL Azure solutions created by our partner ISVs create new databases or add a firewall rule when onboarding new customers.  This API makes scenarios such as this much more convenient and efficient.   The REST API we’ve implemented utilizes standard and open web protocols to make it easy to use from any variety of programming platforms.
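
As a rough illustration (not from the official documentation), a minimal C# sketch of calling the server-listing operation might look like the following; the endpoint URL and x-ms-version header reflect the MSDN reference linked above as I read it, and the subscription ID and certificate thumbprint are placeholders:

// Minimal sketch: list SQL Azure servers through the management REST API.
// Assumes a management certificate is already uploaded to the subscription.
using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class ListSqlAzureServers
{
    static void Main()
    {
        const string subscriptionId = "00000000-0000-0000-0000-000000000000"; // placeholder
        const string certThumbprint = "YOUR-MANAGEMENT-CERT-THUMBPRINT";       // placeholder

        var request = (HttpWebRequest)WebRequest.Create(
            "https://management.database.windows.net:8443/" + subscriptionId + "/servers");
        request.Method = "GET";
        request.Headers.Add("x-ms-version", "1.0");

        // Same certificate-based authentication as the Windows Azure Service Management API.
        var store = new X509Store(StoreName.My, StoreLocation.CurrentUser);
        store.Open(OpenFlags.ReadOnly);
        request.ClientCertificates.Add(
            store.Certificates.Find(X509FindType.FindByThumbprint, certThumbprint, false)[0]);
        store.Close();

        using (var response = request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            Console.WriteLine(reader.ReadToEnd()); // XML list of servers
        }
    }
}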

Multiple servers per subscription: You can now create multiple SQL Azure servers per subscription, making it easier to manage multiple database deployments across different servers – whether they’re in the same datacenter, or a geo-distributed deployment across worldwide Windows Azure platform datacenters.

JDBC Driver: Java developers can download the updated driver here.  The SQL Server JDBC Driver 3.0, a Type 4 JDBC driver, is now available.  This version fully supports both SQL Server and SQL Azure and is free of charge.  This enables on-premises Java applications to communicate with SQL Azure to make data available in the cloud, or to deploy a Java app to Windows Azure and utilize SQL Azure as the underlying data store.  More information on Windows Azure cross-platform capabilities is here.

DAC Framework 1.1: At the end of March I put up a post about the new import/export feature for SQL Azure that makes migrating databases between on-premises SQL Server and SQL Azure pretty simple, and that will be tightly integrated into the database tools shipping with the next version of SQL Server (“Denali”).  Both schema and data are packaged together into a .bacpac file format (and no, I wasn’t involved in the naming of that file extension :).

The improvements to the DAC Framework in 1.1 take this a step further by enabling in-place upgrades of SQL Azure databases, changing the database schema as necessary while still preserving the data.  Very cool.  Used in conjunction with SQL Azure Data Sync, synchronizing data across on-premises and cloud creates very compelling opportunities to extend data from on-premises to reach users on the web, phone, tablets, and in next generation web apps via AJAX and jQuery.

Over the next few weeks I’ll post more updates and examples of the new service enhancements.  Leave a comment and let me know which of these features you’re most interested in learning more about.


The SQL Server Team posted Microsoft Introduces the Cloud-Ready Information Platform on 5/16/2011:

TechEd North America is upon us today and we couldn’t be more thrilled. Microsoft continues to evolve and invest in the Information Platform and today during the TechEd Foundational Session, Microsoft SQL Server: The Data and BI Platform for Today and Tomorrow, Quentin Clark, Corporate Vice President, unveiled the Microsoft Cloud-Ready Information Platform.

The data explosion is happening at every level across every imaginable device, application and individual. Meanwhile IT needs to balance the proliferation of applications, globalization, increasingly powerful commodity hardware, demand for accessible insights, and new form factors such as the cloud, appliances and mobile devices. And they have to do this with an uptime and level of compliance that is simply expected.

The Cloud-Ready Information Platform brings customers a versatile database and business intelligence platform that will help them tackle the data explosion and evolve into the future through integrated private and public cloud offerings, optimized appliances, complete and scalable data warehouse offerings, scalable end-to-end  business intelligence, and of course continued investments in the SQL Server database software.

So what does this mean? Well, it means you can start to break free from tradition and move the business forward with mission critical confidence, breakthrough insight and cloud on your terms.

  • Mission-Critical Confidence
    The cloud-ready information platform will protect an organization’s infrastructure – getting you the nines and performance you need at the right price, especially for mission-critical workloads, with the new availability solution SQL Server AlwaysOn and blazing-fast performance with Project “Apollo.” With Microsoft, customers reduce the need to trade off uptime for security patching: SQL Server continues to lead major database vendors with the fewest vulnerabilities [nist.org]. Customers who bet on Microsoft get more than a trusted platform; they get a trusted business partner and a huge ecosystem of experienced vendors.
  • Breakthrough Insight
    Customers will quickly unlock breakthrough insights across thousands of users through highly interactive web-based data visualizations with Project “Crescent” and managed self-service analytics with PowerPivot all unified with the new BI semantic model. Insights are backed by credible consistent data made possible by new Data Quality Services and complete data warehousing solutions including Parallel Data Warehouse and Fast Track.
  • Cloud On Your Terms
    Additionally, SQL Server will offer organizations the agility to quickly create and scale solutions that solve challenges and fuel new business opportunity from server to private or public cloud linked together by common tools—build once, deploy and manage wherever with SQL Server Developer Tools code name “Juneau.”

What does cloud-ready mean for you?
SQL Server Code Name “Denali” will help customers bridge applications and workloads from traditional servers to private cloud to public cloud. Customers can take advantage of private and public clouds by scaling on demand with flexible deployment options, a common set of developer tools, and solutions to extend the reach of data across private and public cloud environments. “Denali” will deliver key features that help customers move to the cloud when they are ready and without the need to rewrite or retool investments.

  • Scale on demand with flexible deployment options:  A common architecture which spans traditional, private cloud and public cloud environments giving customers the ability to scale beyond the constraints of any one environment for maximum flexibility in deployments
  • Fast time to market:  A range of options for rapidly provisioning resources  and reducing IT burden including Fast Track reference architectures to build private cloud solutions, pre-configured and optimized SQL Server appliances for ready-made solutions, and public cloud data services with SQL Azure
  • Common set of tools across on-premises and cloud:  Integrated set of developer tools and management tools for developing and administering applications across private cloud and public cloud giving developers and IT professionals maximum productivity, faster time to solution and lower on-ramp to building cloud solutions. Also, enhancements to the Data-tier Application Component (DAC) introduced in SQL Server 2008 R2 further simplify the management and movement of databases from on premises to cloud.
  • Solutions to extend the reach of data:  Support for technologies such as SQL Azure Data Sync to synchronize data across private and public cloud environments, OData to expose data through open feeds to power multiple user experiences and Windows Azure Marketplace DataMarket to monetize data or consume from multiple data providers.

Get ready for the evolution
The next SQL Server Code Name “Denali” CTP is coming this summer! Sign up today for the notification and be among the first to test the exciting new features and functionality.


Steve Yi listed SQL Azure And Data Services at TechEd 2011: What’s Coming This Week in a 5/16/2011 post to the SQL Azure Team Blog:

TechEd is this week!  May 16-19 to be exact, in Atlanta.  Unfortunately I won’t be there; I was at InterOp in Vegas last week (I’ll send a recap of that in the next few days), so I’ll be sitting this one out, catching up on some work, and starting work on an idea for a cloud application that I’ve been thinking about for a while.

My colleague, Tharun Tharian, will be there so I encourage you to say hi if you see him.  Of course he’ll be joined by the SQL Azure, DataMarket, and OData engineering teams delivering the latest on the respective services and technologies, and also manning the booth to answer your questions.  You’ll find them hanging out at the Cloud Data Services booth if you want to chat with them during the conference.

For those of you going, below are the SQL Azure and OData sessions that I encourage you to check out – either live or over the web.  I also encourage you to attend either Quentin Clark’s or Norm Judah’s foundational sessions right after the keynote to learn more about how to bridge on-premises investments with the Windows Azure platform, and how SQL Server “Denali” and SQL Azure will work together in both the services offered and updates in the tooling and development experience.  You may even hear some new announcements about SQL Azure and other investments in cloud data services.

Enjoy the conference!

****

Foundational Sessions:

FDN04 | Microsoft SQL Server: The Data and BI Platform for Today and Tomorrow

FDN05 | From Servers to Services: On-Premise and in the Cloud

  • Speaker(s): Norm Judah
  • Monday, May 16 | 11:00 AM - 12:00 PM | Room: Sidney Marcus Auditorium

SQL Azure Sessions:

DBI403 | Building Scalable Database Solutions Using Microsoft SQL Azure Database Federations

  • Speaker(s): Cihan Biyikoglu
  • Monday, May 16 | 3:00 PM - 4:15 PM | Room: C201

DBI210 | Getting Started with Cloud Business Intelligence

  • Speaker(s): Pej Javaheri, Tharun Tharian
  • Monday, May 16 | 4:45 PM - 6:00 PM | Room: B213

COS310 | Microsoft SQL Azure Overview: Tools, Demos and Walkthroughs of Key Features

  • Speaker(s): David Robinson
  • Tuesday, May 17 | 10:15 AM - 11:30 AM | Room: B313

DBI323 | Using Cloud (Microsoft SQL Azure) and PowerPivot to Deliver Data and Self-Service BI at Microsoft

  • Speaker(s): Diana Putnam, Harinarayan Paramasivan, Sanjay Soni
  • Tuesday, May 17 | 1:30 PM - 2:45 PM | Room: C208

DBI314 | Microsoft SQL Azure Performance Considerations and Troubleshooting

  • Wednesday, May 18 | 1:30 PM - 2:45 PM | Room: B312

DBI375-INT | Microsoft SQL Azure: Performance and Connectivity Tuning and Troubleshooting

  • Speaker(s): Peter Gvozdjak, Sean Kelley
  • Wednesday, May 18 | 3:15 PM - 4:30 PM | Room: B302

COS308 | Using Microsoft SQL Azure with On-Premises Data: Migration and Synchronization Strategies and Practices

  • Speaker(s): Mark Scurrell
  • Thursday, May 19 | 8:30 AM - 9:45 AM | Room: B213

DBI306 | Using Contained Databases and DACs to Build Applications in Microsoft SQL Server Code-Named "Denali" and SQL Azure

  • Speaker(s): Adrian Bethune, Rick Negri
  • Thursday, May 19 | 8:30 AM - 9:45 AM | Room: B312

DataMarket & OData Sessions:

DEV308 | Creating and Consuming OData Services

  • Tuesday, May 17 | 3:15 PM - 4:30 PM | Room: B402

COS307 | Building Applications with the Windows Azure DataMarket

  • Speaker(s): Christian Liensberger, Roger Mall
  • Wednesday, May 18 | 3:15 PM - 4:30 PM | Room: B312

DEV325 | Best Practices for Building Custom Open Data Protocol (OData) Services with Windows Azure

  • Speaker(s): Alex James
  • Thursday, May 19 | 1:00 PM - 2:15 PM | Room: C211

Development Sessions:

DEV312 | Code First Development in EF 4.1

  • Tuesday, May 17 | 10:15 AM - 11:30 AM | Room: C307

DEV313 | Entity Framework 4 & Beyond: Building Real-World Apps

  • Wednesday, May 18 | 10:15 AM – 11:30 AM | Georgia Ballroom 1

DEV207 | Introducing SQL Server Developer Tools, Codename “Juneau”

  • Wednesday, May 18 | 3:00 PM - 4:15 PM | Room: B406

DEV314 | SQL Server Developer Tools, Codename “Juneau” & EF: Best Friends Forever

  • Wednesday, May 18 | 5:00 PM - 6:15 PM | Georgia Ballroom 3

Interactive Sessions

DEV374-INT | OData Unplugged

  • Wednesday, May 18 | 1:30 PM - 2:45 PM | B303

Product Demo Stations (all week during Expo hours)

  • Entity Framework & SQL Server Developer Tools, Codename “Juneau”
  • Data in the Cloud (OData, SQL Azure, Data Market)


<Return to section navigation list> 

MarketPlace DataMarket and OData

See Steve Yi listed SQL Azure And Data Services at TechEd 2011: What’s Coming This Week in a 5/16/2011 post to the SQL Azure Team Blog in the SQL Azure Database and Reporting section above.


<Return to section navigation list> 

Windows Azure AppFabric: Access Control, WIF and Service Bus

The AppFabric Team sported a new blog URL when Announcing the Windows Azure AppFabric CTP May and June Releases on 5/16/2011:

Today at the TechEd conference we announced that we have released enhancements to Service Bus as part of the Windows Azure AppFabric CTP May release, and the upcoming release of the AppFabric Developer Tools and Application Manager as part of the Windows Azure AppFabric CTP June release.

Service Bus is already a production service supported by a full SLA. We also have a CTP version of the service showcasing future enhancements in our LABS previews environment. Today we have released the Windows Azure AppFabric CTP May release that adds more comprehensive pub/sub capabilities to the Windows Azure platform which enhance and enable new scenarios.

The Service Bus enhancements include capabilities that enable more advanced pub/sub messaging through Queues with a durable store and Topics that enable subscriptions. You can learn more on these capabilities in this blog post: Introducing the Windows Azure AppFabric Service Bus May 2011 CTP. [See below.]

In addition, these enhancements enable connectivity to Service Bus from any platform or operating system through the REST/HTTP API. For example, it is possible for Java and PHP applications to connect to the Service Bus using the REST/HTTP API.

We are also adding more videos and code samples to the Windows Azure AppFabric Learning Series available on CodePlex regarding these capabilities. Here is the list of videos on these capabilities that have already been released, or are coming soon:

  • An Introduction to the Windows Azure AppFabric Service Bus
  • An Introduction to Windows Azure AppFabric
  • Coming Soon: An Introduction to Service Bus Queues
  • Coming Soon: An Introduction to Service Bus Topics
  • Coming Soon: An Introduction to Service Bus Relay
  • Coming Soon: How to use Service Bus Topics + Code Sample
  • Coming Soon: How to use Service Bus Queues + Code Sample
  • Coming Soon: How to use Service Relay + Code Sample
  • Coming Soon: How to connect to Service Bus from Java and PHP applications Code Samples

In addition, we announced at TechEd that the upcoming Windows Azure AppFabric CTP June release will introduce new capabilities that make it easy for developers to build, deploy, manage and monitor multi-tier applications (across web, business logic and database tiers) as a single logical entity.

These new capabilities consist of:

  • AppFabric Developer Tools, which are enhancements to Visual Studio that enable you to visually design and build end-to-end applications on the Windows Azure platform.
  • AppFabric Application Manager, a set of runtime capabilities that enable automatic deployment, management and monitoring of the end-to-end application, supported by visual monitoring and analytics from within the cloud management portal.
  • Composition Model, a set of .NET Framework extensions for composing applications on the Windows Azure platform. It builds on the familiar Azure Service Model concepts and adds new capabilities for describing and integrating the components of an application. The Composition Model is created by the AppFabric Developer Tools and used by the AppFabric Application Manager.

This CTP will also make it easy to build, deploy, manage and monitor Windows Communication Foundation (WCF) and Windows Workflow Foundation (WF) services on AppFabric.

You can learn more by watching the Composing and Managing Applications with Windows Azure AppFabric video which we added to the Windows Azure AppFabric Learning Series available on CodePlex.

We will share more details on these capabilities with the release of the Windows Azure AppFabric June CTP.

The enhancements to Service Bus are available on our LABS previews environment at: http://portal.appfabriclabs.com/. So be sure to login and start checking out these new capabilities.
You should also download the new Windows Azure AppFabric SDK V2.0 CTP – May Update in order to be able to use these capabilities. Please remember that there are no costs associated with using the CTPs, but they are not backed by any SLA.

We would really like to get your feedback on these new capabilities showcased in the CTP, so make sure to ask questions and provide feedback on the Windows Azure AppFabric CTP Forum.

For questions and feedback on the production Service Bus please use the Connectivity for the Windows Azure Platform Forum.

To learn more about Windows Azure AppFabric:

If you have not signed up for Windows Azure AppFabric yet be sure to take advantage of our free trial offer. Just click on the image below and get started today!


Read the MSDN Library’s Windows Azure AppFabric CTP May Release topic.


Clemens Vasters (@clemensv) of the AppFabric Team posted Introducing the Windows Azure AppFabric Service Bus May 2011 CTP on 5/16/2011:

A lot of partners and customers we talk to are telling us that they think of Service Bus as one of the key differentiators of the Windows Azure platform because it enables customers to build and interconnect applications that reflect the reality of where things stand with regard to moving workloads to cloud infrastructures: Today and for years to come, applications and solutions will straddle desktop and devices, customer-owned and operated servers and datacenters, and private and public cloud assets.

After a decade and more of application integration and process streamlining, no line-of-business application is and should ever again be an island.

If applications move to the cloud or if cloud-based SaaS solutions are to be integrated into enterprise solutions for individual customers, integration invariably requires capabilities like seamless access to services and secure, reliable message flow across network and trust boundaries. Also, as more and more applications are federated across trust boundaries and are built to work for multiple tenants, classic network federation technologies such as VPNs are often no longer adequate since they require a significant degree of mutual trust between parties as they permit arbitrary network traffic flow that needs to be managed.

We just released a new Community Technology Preview that shows that we’re hard at work and committed to expand Service Bus into a universal connectivity, messaging, and integration fabric for cloud-hosted and cloud-connected applications – and we invite you to take a look at our Windows Azure AppFabric SDK V2.0 CTP – May Update and accompanying samples. 

Service Bus is already unique amongst platform-as-a-service offerings in providing a rich services relay capability that allows for global endpoint federation across network and trust boundaries.

As of today, we’re adding a brand-new set of cloud-based, message-oriented-middleware technologies to Service Bus that provide reliable message queuing and durable publish/subscribe messaging both over a simple and broadly interoperable REST-style HTTPS protocol with long-polling support and a throughput-optimized, connection-oriented, duplex TCP protocol.

The new messaging features, built by the same team that owns the MSMQ technology, but on top of a completely new technology foundation, are (of course) integrated with Service Bus’s naming and discovery capabilities and the familiar management protocol surface and allow federated access control via the latest release of the Windows Azure AppFabric Access Control service. 

Queues

Service Bus Queues are based on a new messaging infrastructure backed by a replicated, durable store.

Each queue can hold up to 100MB of message content in this CTP, which is a quota we expect to expand by at least an order of magnitude as the service goes into production. Messages can have user-defined time-to-live periods with no enforced maximum lifetime.

The size of any individual message is limited to 256KB, but the session feature allows creating unlimited-size sequences of related messages; sessions are pinned to particular consumers, which enables chunking of payloads of arbitrary sizes. The session state facility furthermore allows transactional recording of the progress a process makes as it consumes messages from a session, and we also support session-based correlation, meaning that you can build multiplexed request/reply paths in a straightforward fashion.

Queues support reliable delivery patterns such as Peek/Lock, both on the HTTP API and the .NET API, that help ensure processing integrity across trust boundaries where common mechanisms like distributed 2-phase transactions are challenging. Along with that, we have built-in detection of inbound message duplicates, allowing clients to re-send messages without adverse consequences if they’re ever in doubt whether a message has been logged in the queue due to intermittent network issues or an application crash.

In addition to a dead-letter facility for messages that cannot be processed or expire, Queues also allow deferring messages for later processing, for instance when messages are received out of the scheduled processing order and need to be safely put on the side while the process waits for a particular message to permit further progress.

Queues also support scheduled delivery – which means that you can hand a message to the queue infrastructure, but the message will only become available at a predetermined point in time, which is a very elegant way to build simple timers.
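
To make the peek/lock pattern concrete, here is a minimal C# sketch of sending and receiving a message. The class names (NamespaceManager, MessagingFactory, QueueClient, BrokeredMessage) follow the Microsoft.ServiceBus.Messaging .NET client library; the CTP’s exact API surface may differ in places, so treat this as a pattern sketch rather than copy-and-paste code, and note that the namespace, issuer and key values are placeholders:

// Pattern sketch: send and peek/lock-receive a message over a Service Bus queue.
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class QueueSketch
{
    static void Main()
    {
        var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "YOUR-KEY"); // placeholders
        var baseAddress = ServiceBusEnvironment.CreateServiceUri("sb", "YOUR-NAMESPACE", string.Empty);

        // Create the queue if it does not exist yet.
        var namespaceManager = new NamespaceManager(baseAddress, tokenProvider);
        if (!namespaceManager.QueueExists("orders"))
        {
            namespaceManager.CreateQueue("orders");
        }

        var factory = MessagingFactory.Create(baseAddress, tokenProvider);
        var queueClient = factory.CreateQueueClient("orders", ReceiveMode.PeekLock);

        // Send a message; properties can carry application metadata.
        var message = new BrokeredMessage("order #42");
        message.Properties["Region"] = "EMEA";
        queueClient.Send(message);

        // Peek/lock receive: the message stays locked until explicitly completed,
        // so a crash before Complete() makes it visible again for another receiver.
        var received = queueClient.Receive(TimeSpan.FromSeconds(10));
        if (received != null)
        {
            Console.WriteLine(received.GetBody<string>());
            received.Complete(); // or received.Abandon() to release the lock
        }
    }
}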

Topics

Service Bus Topics provide a set of new publish-and-subscribe capabilities and are based on the same backend infrastructure as Service Bus Queues – and have all the features I just outlined for Queues.

A Topic consists of a sequential message store just like a Queue, but allows for many (up to 2000 for the CTP) concurrent and durable Subscriptions that can independently yield copies of the published messages to consumers.

Each Subscription can define a set of rules with simple expressions that specify which messages from the published sequence are selected into the Subscription; a Subscription can select all messages or only messages whose user-or system defined properties have certain values or lie within certain value ranges. Rules can also include Actions, which allow modifying message properties as messages get selected; this allows, for instance, selecting messages by certain criteria and affinitizing those messages with sessions or to stamp messages with partitioning keys, amongst many other possible patterns.

The filtered message sequence represented by each Subscription functions like a virtual Queue, with all the features of Queues mentioned earlier. Thus, a Subscription may have a single consumer that gets all messages or a set of competing consumers that fetch messages on a first-come-first-served basis.

To name just a few examples, Topics are ideal for decoupled message fan-out to many consumers requiring the same information, can help with distribute work across partitioned pools of workers, and are a great foundation for event-driven architecture implementations.

Topics can always be used just like Queues by setting up a single, unfiltered subscription and having multiple competing consumers pull messages from the subscription. The great advantage of Topics over Queues is that additional subscriptions can be added at any time to allow for additional taps on the message sequence for any purpose; audit taps that log pre-processing input messages into archives are a great example here.
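
Here is a similarly hedged C# sketch of the topic and subscription pattern just described: an unfiltered “audit” subscription that taps every message, plus a subscription filtered on a message property. The same caveats as the queue sketch apply (class names follow the .NET client library, credentials are placeholders):

// Pattern sketch: a Topic with an audit tap and a filtered Subscription.
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

class TopicSketch
{
    static void Main()
    {
        var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "YOUR-KEY"); // placeholders
        var baseAddress = ServiceBusEnvironment.CreateServiceUri("sb", "YOUR-NAMESPACE", string.Empty);
        var namespaceManager = new NamespaceManager(baseAddress, tokenProvider);

        if (!namespaceManager.TopicExists("orders"))
        {
            namespaceManager.CreateTopic("orders");
            // Audit tap: receives every published message.
            namespaceManager.CreateSubscription("orders", "audit");
            // Filtered subscription: only high-value orders are selected in.
            namespaceManager.CreateSubscription("orders", "highvalue", new SqlFilter("Amount > 1000"));
        }

        var factory = MessagingFactory.Create(baseAddress, tokenProvider);

        var topicClient = factory.CreateTopicClient("orders");
        var message = new BrokeredMessage("order #42");
        message.Properties["Amount"] = 2500;
        topicClient.Send(message);

        // Each subscription behaves like a virtual queue with its own consumers.
        var subscriptionClient = factory.CreateSubscriptionClient("orders", "highvalue", ReceiveMode.PeekLock);
        var received = subscriptionClient.Receive(TimeSpan.FromSeconds(10));
        if (received != null)
        {
            Console.WriteLine(received.GetBody<string>());
            received.Complete();
        }
    }
}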

Access Control Integration

This new CTP federates with the appfabriclabs.com version of the Access Control service, which is compatible with the Access Control “V2” service that has been commercially available since April. The current commercially available version of Service Bus federates with Access Control “V1”.

The Service Bus API to interact with Access Control for acquiring access tokens has not changed, but we are considering changes to better leverage the new federation capabilities of Access Control “V2”.
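
For reference, the token acquisition is a simple WRAP v0.9 form POST. The C# sketch below shows the shape of the request the client library issues under the covers; the issuer name and key are placeholders, and for the CTP the namespace lives on the appfabriclabs.com Access Control endpoint rather than the production endpoint shown here:

// Sketch: acquire a Service Bus token from Access Control via the WRAP protocol.
using System;
using System.Collections.Specialized;
using System.Net;
using System.Text;
using System.Web; // reference System.Web for HttpUtility

class WrapTokenSketch
{
    static void Main()
    {
        const string serviceNamespace = "YOUR-NAMESPACE"; // placeholder
        var acsUri = "https://" + serviceNamespace + "-sb.accesscontrol.windows.net/WRAPv0.9/";
        var relyingPartyScope = "http://" + serviceNamespace + ".servicebus.windows.net/";

        using (var client = new WebClient())
        {
            var values = new NameValueCollection
            {
                { "wrap_name", "owner" },        // issuer name (placeholder)
                { "wrap_password", "YOUR-KEY" }, // issuer key (placeholder)
                { "wrap_scope", relyingPartyScope }
            };

            byte[] responseBytes = client.UploadValues(acsUri, "POST", values);
            string response = Encoding.UTF8.GetString(responseBytes);

            // The response is form-encoded; the token is in wrap_access_token and is
            // then passed to Service Bus in an Authorization header: WRAP access_token="...".
            string token = HttpUtility.ParseQueryString(response)["wrap_access_token"];
            Console.WriteLine(token);
        }
    }
}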

Customers who are setting up access control rules for Service Bus programmatically will find that there are significant differences between the management APIs of these two versions of the Access Control service. The current plan is to provide a staged migration for customers with custom access control rules on their Service Bus namespaces; migration will be an option for some period of time during which we will operate the V1 and V2 versions of the Access Control Service side-by-side. We will publish concrete guidance for this migration over the next several months, with initial details coming this week here on this blog.

What Changed and What’s Coming?

We believe that providing these capabilities in the cloud – paired with the features we already have available in Service Bus – will open up a whole new range of possibilities for cloud-hosted and cloud-enhanced applications. We have seen amazing business solutions built on Service Bus and based on customer feedback we’re convinced that the addition of a fully featured set of message-oriented middleware capabilities will enable even more powerful solutions to be built.  Our intention is to make all capabilities contained in this preview commercially available in the second half of 2011.

The load balancing and traffic optimization features for the Relay capability of Service Bus that were added in the PDC’10 CTP of Service Bus have been postponed and are no longer available in this CTP. However, “postponed” does not mean “removed” and we are planning on getting these features back into a CTP release soon. We’ve traded these features for capabilities that we expect will be even more important for many customers: Full backwards compatibility between the current production release of Service Bus and the new version we’re presenting in this CTP, even though we have changed a very significant portion of the Service Bus backend. We are committed to provide full backwards compatibility for Service Bus when the capabilities of this CTP go into production, including backwards compatibility with the Microsoft.ServiceBus.dll that you already have deployed.

To help customers writing apps on platforms other than .NET we will also release Java and PHP samples for the new messaging capabilities in the next few weeks. These samples will be versions of the chat client implemented in the Silverlight and Windows Phone chat samples included in the SDK for this CTP release.

Lastly, and most importantly, the purpose of a Community Technology Preview is to collect feedback from the community.   If you have suggestions, critique, praise, or questions, please let us know at http://social.msdn.microsoft.com/Forums/en-US/appfabricctp/

You can also Twitter me personally at @clemensv and I’ll let the team know what you have to say.


Ron Jacobs posted a Goodbye .NET Endpoint, Hello AppFabric Team Blog swansong to the .NET Endpoint blog on 5/16/2011:

Over the last year you’ve heard us talking about both Windows Azure AppFabric and Windows Server AppFabric.  Today we are adding our final post to the .NET endpoint team blog and announcing the new AppFabric Team blog.  This blog will be the home of content about all things AppFabric from Service Bus to Caching to WCF and WF.

If you’ve been following us here on the .NET Endpoint please be sure to follow the AppFabric Team Blog.

endpoint.tv becomes AppFabric.tv

As part of this change, endpoint.tv will now become AppFabric.tv.  We will continue to bring you the best content about WF and WCF as well as the rest of the AppFabric world.  Check it out at http://appfabric.tv



The Windows Identity and Access team posted Announcing the WIF Extension for SAML 2.0 Protocol Community Technology Preview! on 5/16/2011:

It is our pleasure to announce the availability of the first CTP release of the WIF (Windows Identity Foundation) Extension for the SAML 2.0 Protocol! We heard your feedback about the necessity to have support for the SAML 2.0 protocol in WIF. Today, we announce an extension to WIF that delivers on that feedback.

This WIF extension allows .NET developers to easily create claims-based SP-Lite compliant Service Provider applications that use SAML 2.0 conformant identity providers such as AD FS 2.0.

This CTP release includes a set of samples that illustrate how to use the extension. You can download the package that includes the WIF Extension for SAML 2.0 Protocol and samples from here.

Key features of this extension include:

  • Service Provider initiated and Identity Provider initiated Web Single Sign-on (SSO) and Single Logout (SLO)
  • Support for the Redirect, POST, and Artifact bindings
  • All of the necessary components to create an SP-Lite compliant service provider application

We’ll be looking for your questions, comments, and other feedback on the claims based identity forum here.  Watch this blog for future posts about the roadmap of this WIF extension. 

WIF has a new Microsoft.com home page here.


Vittorio Bertocci (@vibronet) chimes in with Attention ASP.NET Developers: SAML-P Comes to WIF on 5/16/2011:


If I had a dollar for every time a customer or partner asked me if they could use WIF for consuming the SAML 2.0 protocol… ok, I would not exactly buy a villa in Portofino, but let’s just say that this is one of the most requested features since WIF came out.

Well, dear .NET developers, rejoice:  you no longer need to envy your friend the ADFS2 administrator. From now on you are gifted the ability to use ASP.NET for writing SAML-P SP-Lite compliant relying parties, which in fact I should probably call service providers just to add some local color.

The WIF team just released, and here I quote verbatim, the CTP of the WIF Extensions for SAML 2.0 Protocol.

At its core, what makes those extensions tick is the Saml2AuthenticationModule, which looks very similar (i.e. raises ~the same events, etc.) to the WSFederationAuthenticationModule and is in fact inserted in the pipeline more or less in the same way. The module lives in the assembly Microsoft.IdentityModel.Protocols.dll, together with the (lots of) classes it needs to implement the details of the SAML protocol.

The programming model may be similar, as one would expect, but of course the extensions implement features that are paradigmatically SAMLP. Examples? POST, Redirect and Artifact bindings; SP initiated and (can you believe that?) IP initiated SSO and SLO (single log out).

The package contains various other goodies: a good set of cassini-based samples, documentation that will get you started and that will help you to use ADFS2 as IP instead of the sample IP provided in the package. But my favourite is definitely the SamlConfigTool: it is a slightly more raw counterpart of fedutil/add STS reference, which can consume metadata from one IP and generate the corresponding SP config settings. And just like fedutil, it can generate the SP metadata so that the IP can easily consume it for automating the SP provisioning as well.

The WIF Extensions for SAML 2.0 Protocol unlock some new, interesting scenarios; and of course, this being a CTP, the WIF team really wants your feedback. If you’ll play with the extensions, please take a moment to chime in and let us know what you think!


<Return to section navigation list> 

Windows Azure VM Role, Virtual Network, Connect, RDP and CDN

No significant articles today.


<Return to section navigation list> 

Live Windows Azure Apps, APIs, Tools and Test Harnesses

Michael S. Collier (@michaelcollier) provided details of the Update to Windows Azure Toolkit for Windows Phone 7 in a 5/16/2011 post:

Today Microsoft is releasing the third update to the Windows Azure Toolkit for Windows Phone 7. This is the third update in less than 2 months! It is great to see the teams moving so quickly to get new updates out. The second update (v1.1) included a little bit of new functionality, namely support for the Microsoft Push Notification Service, bug fixes, and some usability enhancements. This third update (v1.2) is a much more substantial update to the toolkit. Wade Wegner recently teased a few of these updates. Let’s take a quick look at some of them.

Support for Access Control Services

From the start you’ll notice at least two new software requirements: Windows Identity Foundation Runtime and Microsoft Windows Identity Foundation SDK. If you don’t have the WIF Runtime or SDK, the dependency checker will detect the missing software and provide you direct links to pages where you can download the appropriate version for your environment.

The Windows Identity Foundation (WIF) is used in support of what just may be the most exciting new feature of the toolkit – built-in support for Windows Azure Access Control Services. To get started, one of the first things you’ll want to do is create an ACS (v2) namespace. You can do that by opening the developer portal at http://windows.azure.com and going to the “Service Bus, Access Control & Caching” section.

You’ll need to grab the ACS namespace and management service key. To get that, you’ll want to click the desired namespace in the main window, and then the green “Access Control Service” button in the top ribbon. Additional details on this process can be found in the toolkit’s documentation. Enter the namespace and key into the wizard when prompted. The toolkit will handle the rest for you. Pretty nice, huh?

Now you can have your Windows Phone 7 application integrate with Windows Live, Yahoo!, and Google. For mobile users that are likely to have social identities to begin with, this gives them an easy on-ramp to your application! They can easily select the identity provider of choice.

Sign-in screens for Google, Yahoo!, and Windows Live ID

Once you authenticate, you’ll be prompted to complete the process by storing some information from the identity provider in a new Users table created on your behalf by the toolkit wizard.

Support for Windows Azure Queues

Another great feature of the latest revision is support for Windows Azure Queues. This essentially completes the trifecta – support for Windows Azure tables, blobs, and now queues. Just like you can with tables and blobs, the toolkit now allows you to list existing queues in your storage account.

You can easily list queues in your account.

The toolkit also contains an easy sample to show how to add a new message to the queue.

If you add, then you should be able to delete too.  By selecting the “dequeue” button, that’s exactly what will happen.  The topmost message in the queue is removed from the queue.
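
The phone client reaches the queue through the toolkit’s own wrapper classes rather than hitting the storage account directly, but for reference the list, enqueue and dequeue operations shown above correspond to these calls in the standard Windows Azure storage client library (the connection string is a placeholder):

// Reference sketch: equivalent queue operations with Microsoft.WindowsAzure.StorageClient.
using System;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

class QueueOperationsSketch
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true"); // placeholder
        var queueClient = account.CreateCloudQueueClient();

        var queue = queueClient.GetQueueReference("messages");
        queue.CreateIfNotExist();

        // "Enqueue": add a new message to the queue.
        queue.AddMessage(new CloudQueueMessage("Hello from the toolkit sample"));

        // "Dequeue": get the topmost message and delete it once processed.
        CloudQueueMessage message = queue.GetMessage();
        if (message != null)
        {
            Console.WriteLine(message.AsString);
            queue.DeleteMessage(message);
        }

        // Listing queues, as shown in the toolkit's management screen.
        foreach (var q in queueClient.ListQueues())
        {
            Console.WriteLine(q.Name);
        }
    }
}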

Improved User Interface

Finally, a really nice update to the toolkit is a much more stylish user interface. Previously the toolkit’s web admin interface was pretty, well, generic – straight out of the default ASP.NET MVC project styling.  This latest update has a much improved UI that makes using the toolkit a much more enjoyable experience.

Troubleshooting

This version of the Windows Azure Toolkit for Windows Phone 7 comes with much more help relating to some problems you might encounter. One that I’m very happy to see mentioned is help for those with Resharper installed. If you have Resharper installed and try to use the Windows Azure Toolkit for Windows Phone 7, you’ll likely run into a less than optimal experience – either errors when creating a new project and/or problems simply starting a new project. While the problem still exists, the 1.2 version of the troubleshooting documentation of the toolkit points out the problem and provides instructions on how to disable Resharper while you use the toolkit. I’m sure the team is hard at work on a permanent solution.

The v1.2 update looks to be a pretty impressive update to the Windows Azure Toolkit for Windows Phone 7.  Be sure to check it out – I think you’ll enjoy it too!


Wade Wegner (@wadewegner) reported NOW AVAILABLE: Windows Azure Toolkit for Windows Phone 7 v1.2 on 5/16/2011:

Here it is – the Windows Azure Toolkit for Windows Phone 7 v1.2!

As I mentioned last week when I spoke about Updates Coming Soon to the Windows Azure Toolkit for Windows Phone 7, we have some really important and valuable additions to the toolkit.

  • Support and tooling for the Access Control Service 2.0
  • Support for Windows Azure Storage Queues
  • Updated UI/UX for the management web application

These are significant updates – particularly the support for ACS.  Given the number of updates since version 1.0 – don’t forget that we added Microsoft Push Notification support, and more, in version 1.1 – I decided to redo the Getting Started with the Windows Azure Toolkit for Windows Phone 7 video.

I highly recommend you take a look at the following resources to learn more:


Getting Started with the Windows Azure Toolkit for Windows Phone 7

by Wade Wegner


Getting Started with ACS and the Windows Azure Toolkit for Windows Phone 7

by Vittorio Bertocci

We also have a fantastic set of articles on CodePlex that you should take a look at:

Version 1.2 Updates

In version 1.1 we introduced support for Microsoft Push Notification Services.  In version 1.2 we default to adding this service, but we give you the option of excluding it if it’s not required.  Additionally, we now let you choose whether you want to support Apple Push Notification Services:


Then, you can easily use the Windows Azure Toolkit for iOS to work with this service running in Windows Azure.

As mentioned extensively by Vittorio, you can also choose to use ACS instead of the simple ASP.NET membership service.


Take a look at this article if you’re trying to determine which type of user authentication you should use.  If you go with ACS, this produces a very nice login experience where you can choose one of your existing identity providers.


As with Blobs and Tables, we now provide full support for Windows Azure Queues.  This allows you to enqueue and dequeue messages from your application.


Finally, we were not particularly pleased with the out-of-the-box ASP.NET theme, so we updated it.  Inspired by the Metro Design guidelines for Windows Phone 7, we came up with something nice and fresh.


Breaking Changes

We’ve come far along enough now that it’s more important for us to track changes, in particular when they are breaking changes.  If you used version 1.0 or 1.1 of this toolkit, I highly recommend you take a look at the Change Log.  If you’ve started to use the toolkit for building applications, there are potentially a couple changes you should review.  The two I’ll call out here are:

  • In the AuthenticationService we changed the LoginModel class to Login. This means that you may have to update how you authenticate to the membership service.
  • We changed the CreateUserModel to RegistrationUser, and changed the name of its UserName property to Name.  This class is used by the AuthenticationService to register new users.

An effect of these changes could be an error when interacting with user data stored in table storage.  For local development, the best thing to do would be to reset local storage so that you don’t have the old schema stored in a table.


Next Steps

We’ll have at least a couple more releases of this toolkit over the next month or two, and I’ll share the details with you as soon as they are locked.  For now, please be sure to let us know if you think we should target other scenarios.  Submit your feedback on CodePlex and join the discussion!

I hope this helps!


Vittorio Bertocci (@vibronet) adds his two cents with Bring Your Active Directory in Your Pockets with ACS, OAuth 2.0 and the New Windows Azure Toolkit for Windows Phone 7 quickstart of 5/16/2011:

As promised last week, today Wade released the 1.2 version of the Windows Azure Toolkit for Windows Phone 7. And again as promised, this version has full support for ACS!

Using the new feature is super straightforward: you can see that by yourself in the quickstart video in which I walk through the simplest ACS configuration.


As mentioned in the teaser post last week, we purposefully kept the VS template very simple. As a result, the initial setup ends up with one application which supports Windows Live ID, Yahoo! and Google, the identity providers which are pre-configured within ACS.

However, once the project has been created nothing prevents you from working directly on your ACS namespace to add new identity providers, such as Facebook or even your own Active Directory; those identity providers will automagically show up in the list of IPs in the login screen without the need to change a single line of code. Want proof? Read on!

Adding a New WS-Federation Provider (like ADFS2)

Here I’ll assume that you went through the steps I show in the quickstart video and you ended up with the basic ACS application as created by the toolkit.

If you want to enable your users to sign in to the application using their Active Directory accounts, you can go directly to the ACS namespace you are using and add your AD (assuming you have ADFS2 deployed) as an identity provider.

Note: Right now the toolkit bits are still handling access at the account level, which means that your users will need to go through the same sign-up step you have for users coming from social providers; however this does not subtract anything from the joy of being able to reuse your domain credentials on a device, instead of having to memorize yet another password. In the future the integration will be even more seamless: think claim mapping rules, along the lines of what we’ve done for integrating ADFS2 with Umbraco.

Well, let’s do it then: it will only take a minute!

As usual I don’t have an ADFS2 instance on my laptop, hence I’ll simulate it using SelfSTS. This time I picked the  SelfSTS1 folder from the assets of the ACS and Federation lab, copied it under c:\temp and modified it a bit to emit a different set of claims:


I also changed its port from the config, generated a new certificate, hit Start and refreshed the federationmetadata.xml file (hint: use the URL from the metadata field to open the file in Notepad, then save it over the old metadata file). Those may not be strictly necessary, but I always do that to avoid collisions.

Now that you have your ADFS2 simulation up & running, go to your namespace in the ACS portal at https://YOURNAMESPACE.accesscontrol.windows.net/v2/mgmt/web. From there go to Identity providers, hit Add, keep the default ws-federation and hit Next.


From here you can add your SelfSTS instance. Remember, it simulates your AD! If you have an ADFS2 instance, use that instead. Enter whatever name you want in the display name and login text link fields, upload the metadata file from your SelfSTS, scroll to the bottom of the page and hit Save.


Now click Rule groups from the left-hand menu, click on Default Rule Group for WazMobileToolkit, hit the Generate link, accept the defaults and click the Generate button. You can also hit Save for good measure, if you are superstitious.

And you’re done! Go back to Visual Studio and start the portal/service and the phone client as shown in the quickstart. Here’s what you’ll see in the phone app:


That’s right! Just like that, your AD now appears as one of the options. Neat. If you pick that option, ACS will contact SelfSTS to authenticate you. If you were using a real ADFS2 instance, at this point you would be prompted for your credentials; but SelfSTS is a test utility which automatically authenticates you, hence you’ll go straight to the next screen:


…and from now on everything is exactly as for the social providers: the user gets an entry in the system, which will be used for handling authorization. You can see it in the Users table in the management portal.


Let me reiterate: in addition to being able to use credentials from Windows Live ID, Yahoo and Google, the user can now reuse his domain credentials to sign in from one Windows Phone client to one application whose backend is in the cloud (Windows Azure). And enabling all that took just few clicks on the ACS management portal, no code changes required.

Now, do you want to hear a funny story? We did not plan for this. I am not kidding. I am not saying that we are surprised (I totally expected this); what I mean is that this scenario didn’t take any specific effort to implement, it came out “for free” while implementing support for ACS and social providers.
When you admit users from social providers in your application, you don’t receive very detailed (or verifiable, excluding email) information about the users; hence the usual practice is to create an account for the user and mainly outsource credential management to the external identity providers.
In the walkthrough above I treated my simulated ADFS2 in exactly the same way, and everything worked thanks to the fact that we are relying on standards and ACS isolates the phone application and the backend from the differences between identity providers.  It’s the usual federation provider pattern, with the twist demonstrated in the ACS + WP7 lab; in a future blog post I’ll go a bit deeper into the architecture of this specific solution.

What could we accomplish if we explicitly planned for an identity provider like ADFS2? Well, for one: provision-less access, one of the holy grails of identity and access control. ADFS2 sources data from AD, hence can provide valuable information about its users (roles, job functions, spending limits, etc.) which carries the reputation of the business running that AD instance. This means that we could greatly simplify the authorization flow, skipping the user registration step and authorizing directly according to the attributes in input. As mentioned above, we already have a good example of that in the ACS Extensions for Umbraco; that’s a feature that is very likely to make its way into the toolkit, too.

There you have it. The bits are in your hands now, and we can’t wait to find out what you’ll accomplish with them! If you have feedback, please do not hesitate to visit the discussion section in http://watoolkitwp7.codeplex.com/. Happy coding!


The Windows Azure Team put its spin on JUST ANNOUNCED: Windows Azure Toolkit for Windows Phone 7 v1.2 in a 5/16/2011 post:

During today’s keynote at TechEd North America, Microsoft’s Drew Robbins demonstrated how to build an application using the new Windows Azure Toolkit for Windows Phone 7 v.1.2. Now available for download here, this version includes some important new features, including:

  • Support and tooling for the Access Control Service 2.0 (i.e. use identity federation like Live ID, Facebook, Google, Yahoo! and ADFS)
  • Support for Apple Push Notification Services (with the Windows Azure Toolkit for iOS)
  • Support for Windows Azure storage queues (simple enqueue and dequeue operations)
  • Updated UI/UX for the management web application
  • Code refactoring, simplification, and bug fixes

Check out these new videos on Channel 9 to help you get started with the toolkit:

Getting Started with the Windows Azure Toolkit for Windows Phone 7

Getting Started with ACS and the Windows Azure Toolkit for Windows Phone 7

You can also find additional information and guidance in the following related blog posts:

Click here to read more about today’s announcements from TechEd North America.  Click here to learn more about Windows Azure Toolkits for Devices.


The Windows Azure Team posted a brief Just Announced at Tech-Ed: Travelocity Launches Analytics System on Windows Azure case study on 5/16/2011:

Microsoft Corporate Vice President Robert Wahbe delivered the opening keynote this morning at Microsoft Tech-Ed North America 2011 in Atlanta, Georgia. During his talk, Wahbe outlined how the cloud is changing IT and demonstrated how Microsoft and Windows Azure are helping customers move their businesses to the cloud.  One of the examples he used was Travelocity; their story is worth delving into because it illustrates the benefits a move to the cloud can create for an organization.

Founded in 1996, Travelocity is an online travel agency that connects millions of travelers with airlines, hotels, car-rental companies, and other services.  In March 2010, business partners asked Travelocity to collect website metrics on customer shopping patterns. Travelocity decided to deploy the application in the cloud to avoid burdening its own data center.

Travelocity uses Windows Azure to provide compute power and storage for its business intelligence and analysis system. In doing so, it avoids burdening the capacity of its on-premises infrastructure. Thanks to cloud computing, Travelocity has fulfilled its partners’ requests for a system that collects metrics on customer interactions.

The company is also experiencing a shift in how it manages its development efforts. Because Microsoft manages the servers, configuration, and maintenance, Travelocity is able to build and deploy applications on a per-month subscription basis. This reduces costs while reaching its large customer base.

The company also benefits from the enormous scalability offered by Windows Azure, which ensures that customers from around the world can access Travelocity’s services reliably. Plus, because of a faster time-to-market and a flexible development environment, the company can experiment with new offerings and enhance the customer experience.

Click here to watch Wahbe’s opening keynote.  Click here to read more about today’s announcements. You can read the Travelocity case study here.


 <Return to section navigation list> 

Visual Studio LightSwitch

Andy Kung continued his series with Course Manager Sample Part 5 – Detail Screens (Andy Kung) on 5/16/2011:

I’ve been writing a series of articles on the Course Manager Sample; in case you missed them:

In Part 4, we identified the main workflow we want to implement. We created screens to add a student, search for a student, and register a course. In this post, we will continue and finish the rest of the workflow. Specifically, we will create detail screens for student and section records and a course catalog screen that allows users to filter sections by category.

clip_image002

Screens

Student Detail

Remember the Search Students screen we built in Part 4? If you click on a student link in the grid, it will take you to a student detail screen.

clip_image003

This is pretty cool. But wait… we didn’t really build this screen! In reality, LightSwitch recognizes that this is a common UI pattern, and therefore it generates a default detail screen for you on the fly. Of course, we can always choose to build and customize a detail screen, as we’re about to do for Student and Section.

Adding a detail screen

Create a screen using the “Details Screen” template on the Student table. In this screen, we also want to include the student’s enrollment data, so let’s check the “Student Enrollments” box.

Make sure “Use as Default Details Screen” is checked. It means that this detail screen will be used as the detail screen for all student records by default. In other words, if you click on a student link, it will take you to this detail screen instead of the auto-generated one. As a side note, if you forget to set it as the default details screen here, you can also set it later via a property of the Student table (in the table designer).

clip_image004

By default, the details screen template lays out the student info on top and the related enrollment data on the bottom. We can make layout tweaks to the student portion similar to those we made for the “Create New Student” screen in Part 4 (such as moving the student picture to its own column).

clip_image005

Including data from related tables

I’d like to draw your attention to the Enrollments portion of the screen. Since Enrollment is a mapping table between Student and Section, the grid shows you a student (shown as a summary link) and a section (shown as a picker). Neither of the fields is very useful in this context. What we really want is to show more information about each section (such as title, meeting time, instructor, etc.) in the grid. Let’s delete both Enrollment and Section under Data Grid Row.

clip_image006

Use the “+ Add” button and select “Other Screen Data.”

clip_image007

It will open the “Add Screen Data” dialog. Type “Section.Title”. You can use Intellisense to navigate through the properties in this dialog. Click OK.

clip_image008

The Title field of the Section will now appear in the grid. Follow similar steps to add some other section fields. The “Add Screen Data” dialog is a good way to follow the table relationship and include data that is many levels deep.

clip_image009

Making a read-only grid

Now we have an editable grid showing the sections this student is enrolled in. However, we don’t expect users to directly edit the enrollment data in this screen. Let’s make sure we don’t use editable controls (e.g. TextBox) in grid columns. A quick way to do this is to select the “Data Grid Row” node and check “Use Read-only Controls” in Properties. It will automatically select read-only controls for the grid columns (e.g. TextBox becomes Label).

clip_image010

We also don’t expect users to add and delete enrollment data directly in the data grid. Let’s delete the commands under the data grid’s “Command Bar” node. In addition, the data grid also shows an “add-new” row for inline adds.

clip_image011

We can turn it off by selecting the “Data Grid” node and unchecking “Show Add-new Row” in Properties.

clip_image012

Launching another screen via code

In Part 4, we’ve enabled the Register Course screen to take a student ID as an optional screen parameter. The Student picker will be automatically set when we open the Register Course screen from a Student detail screen. Therefore, we need a launch point in the student detail screen. Let’s add a button on the enrollment grid.

Right click on the Command Bar node, select Add Button.

clip_image013

Name the method RegisterCourse. This is the method called when the button is clicked.

clip_image014

Double click on the added button to navigate to the screen code editor.

clip_image015

Write code to launch the Register Course screen, which takes a student ID and a section ID as optional parameters.

Private Sub RegisterCourse_Execute()
    ' Launch the Register Course screen, pre-selecting the current student.
    ' The section parameter is left as Nothing so the user can pick one.
    Application.ShowRegisterCourse(Student.Id, Nothing)
End Sub

That’s it for Student Detail screen. F5 and go to a student record to verify the behavior.

clip_image016

Section Detail

Now that we’ve gone through customizing the student detail screen, let’s follow the same steps for Section. Please refer to the sample project for more details.

  1. Create a screen using the “Details Screen” template on the Section table. Include the section’s enrollment data
  2. Tweak the UI and make the enrollment grid read-only
  3. Add a button to the enrollment grid to launch Register Course screen

Course Catalog

In the Course Catalog screen, we’d like to display a list of course sections. We’d also like to filter the list by course category. In Part 2, we created a custom query for exactly this purpose called SectionsByCategory. It takes a category ID as a parameter and returns a list of sections associated with the category. Let’s use it here!

Create a screen using the “Search Data Screen” template. Choose SectionsByCategory as the screen data.

clip_image017

In the screen designer, you will see SectionsByCategory has a query parameter called CategoryId. It is also currently shown as a TextBox on the screen, so a user can enter a category ID via a text box to filter the list. This is not the most intuitive UI. We’d like to show a category dropdown menu on the screen instead.

clip_image018

Select SectionCategoryId (you can see it is currently bound to the query parameter) and hit DELETE to remove this data item. After it is removed, the text box will also be removed from the visual tree.

clip_image019

Click “Add Data Item” button in the command bar. Use the “Add Data Item” dialog to add a local property of Category type on the screen.

clip_image020

Select the CategoryId query parameter and set its binding to the new Category property.

clip_image021

Drag and drop the Category property to the screen content tree (above the Data Grid). Set the “Label Position” of Category to “Top” in Properties.

clip_image022

Follow the Course Manager sample for some layout tweaks and show some columns as links (as we did in Search Students). Now, if you click on a section link, it will open the Section Detail screen we customized!

clip_image023

Conclusion

In this post, we’ve completed the main workflow in Course Manager. We are almost done! All we need is a Home screen that provides some entry points to start the workflow.

clip_image025

Setting a screen as the home (or startup) screen is easy. Creating one that displays some static text and pictures requires some extra steps. In the next post, we will conclude the Course Manager series by finishing our app with a beautiful Home screen!

clip_image027


Robert Green posted My TrainingCourses LightSwitch Demo with source code on 5/16/2011:

In the past month I have given the LightSwitch Beyond the Basics talk at both VS Connections in Orlando and VSLive in Las Vegas. Today I am giving the Building Business Applications with LightSwitch talk at TechEd. All three talks use the same basic application, although in different variants. I promised I would make it available, and here is the demo script.

The application starts with existing SQL Server data, so the first step is to create a TrainingCourses database in SQL Server or SQL Express. To populate the database run the TrainingCourses.sql file. That will create the tables and relationships and also add sample data to the Customers, Courses and Orders tables. Next, create a new VB or C# LightSwitch project named TrainingCourses.

For the SharePoint part of the demo, create a Training Courses team site. Create a Course Notes list with Title, Course Number and Notes, all strings. Then import the items in the CourseNotesList workbook.

Open the attached file: Building-Business-Apps-with-LightSwitch.zip


<Return to section navigation list> 

Windows Azure Infrastructure and DevOps

Yung Chou explained Windows Azure Fault Domain and Update Domain Explained for IT Pros in a 5/16/2011 post:

Some noticeable advantages of running applications in Windows Azure are high availability and fault tolerance, achieved by the so-called fault domain and update domain. These two terms represent important strategies adopted by Windows Azure for deploying and updating applications. In this post and in all my articles, it should be noted that when discussing Windows Azure applications, Windows Azure and Fabric Controller (FC) are used interchangeably to represent the cloud OS in the Windows Azure Platform, unless otherwise stated. And in the context of cloud computing, an application and a service are considered the same, since all user applications are generally delivered as services.

Fault Domain

The scope of a physical unit of failure is a fault domain, which is in essence a single point of failure. The purpose of identifying and organizing fault domains is to prevent a single point of failure. In its simplest form, a computer by itself connected to a power outlet is a fault domain: if the connection between the computer and its power outlet is cut, the computer is down, hence a single point of failure. Likewise, a rack of computers in a datacenter can be a fault domain, since a power outage of the rack will take out the collection of hardware in the rack, similar to what is shown in the picture here. Notice that how a fault domain is formed has much to do with how hardware is arranged, and a single computer or a rack of computers is not necessarily an automatic fault domain. Nonetheless, in Windows Azure a rack of computers is indeed identified as a fault domain. The allocation of a fault domain is determined by Windows Azure at deployment time. A service owner cannot control the allocation of a fault domain, but can programmatically find out which fault domain a service is running within.

Specifically, the Windows Azure Compute service SLA guarantees the level of connectivity uptime for a deployed service only if more than one instance of each role of the service is specified by the service owner in the application definition, i.e. the csdef file. Under this assumption, Windows Azure by default deploys the role instances of an application into at least two fault domains, which ensures fault tolerance and allows an application to remain available even if a server hosting one role instance of the application fails.
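
As a quick illustration of the programmatic side (a minimal sketch, assuming a role project that references Microsoft.WindowsAzure.ServiceRuntime; the trace message is invented):

using Microsoft.WindowsAzure.ServiceRuntime;

public static class DomainInfo
{
    // Logs which fault domain the current role instance was placed in.
    // The value is assigned by the Fabric Controller at deployment time;
    // code can read it but cannot influence it.
    public static void ReportFaultDomain()
    {
        if (RoleEnvironment.IsAvailable)
        {
            RoleInstance instance = RoleEnvironment.CurrentRoleInstance;
            System.Diagnostics.Trace.TraceInformation(
                "Instance {0} is running in fault domain {1}",
                instance.Id,
                instance.FaultDomain);
        }
    }
}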

Upgrade Domain

On the other hand, an upgrade domain is a strategy to ensure an application stays up and running, i.e. highly available, while undergoing an update. Windows Azure distributes the role instances of an application evenly, when possible, into multiple upgrade domains, with each upgrade domain acting as a logical unit of the application’s deployment. When upgrading an application, the upgrade is then carried out one upgrade domain at a time. The steps are: stop the instances of an intended role running in the first upgrade domain, upgrade the application, bring the role instances back online, and then repeat the steps in the next upgrade domain. An application upgrade is complete when all upgrade domains have been processed. By stopping only the instances running within one upgrade domain, Windows Azure ensures that an upgrade takes place with the least possible impact to the running service. A service owner can optionally control how many upgrade domains are used with an attribute, upgradeDomainCount, in the service definition, i.e. the application’s csdef file. Below is what MSDN documents for the attribute. It is, however, not possible to specify which role instance is allocated to which domain.

image
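
At run time the current instance’s upgrade domain can be read much like its fault domain (again a hedged sketch; note the API exposes the value under the name UpdateDomain):

using Microsoft.WindowsAzure.ServiceRuntime;

public static class UpgradeDomainInfo
{
    // Reads the upgrade (update) domain of the current instance.
    // How many upgrade domains exist is governed by the optional
    // upgradeDomainCount attribute on the ServiceDefinition element
    // of the .csdef file; which instance lands in which domain is
    // decided by Windows Azure, not by the service owner.
    public static int CurrentUpdateDomain()
    {
        return RoleEnvironment.CurrentRoleInstance.UpdateDomain;
    }
}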

Observations

Within a fault domain, there is no concept of fault tolerance. Only when more than one fault domain is managed as a whole is fault tolerance applicable. In addition to fault domains and update domains, to ensure fault tolerance and high availability Windows Azure also has network redundancy built into routers, switches, and load balancers. FC also sets checkpoints and stores state data across fault domains to ensure reliability and recoverability.


Doug Rehnstrom described Windows Azure Interoperability in a 5/16/2011 post to the Learning Tree blog:

I was talking to a colleague last week and he asked, “If you deploy an application to Azure, can you move it to a different cloud provider later on?” To answer this question, let’s look at three things. First, what is Azure really? Second, does writing an Azure application lock you into Azure? And third, is there a need for standards in cloud-computing?

Azure is Just Windows Server in a Virtual Machine

Let’s say you write a Web application using ASP.NET. That application will run on any Windows server. Azure is really just a virtual machine running an instance of Windows Server. So, moving an application from an Azure virtual machine to an Amazon EC2 virtual machine is no different than moving an application from a Dell server to an HP server. In fact, it would be quicker, because you wouldn’t have to buy the hardware or install the software. That is not to say there are no issues, but it’s really not that big a deal.

You might want to check out this article for a Look Inside a Windows Azure Instance.

Does using Azure Lock you into Microsoft as your Cloud Vendor

There are changes you’d likely make in your Web application to take optimal advantage of Azure’s architecture. For example, you would likely use Azure Storage for sessions, membership and online data. Check out the following article to find out why, Windows Azure Training Series – Understanding Azure Storage.

Surely, that would lock you into Azure. Well, not really. Azure Storage can be accessed from anywhere via HTTP. So, while you might be using Azure Storage from an Azure application, you could just as easily use it from an application running on EC2, Google App Engine or from your local area network. So, if you had an application using Azure-specific features and wanted to move it, it would not mean a rewrite.
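
To make that concrete, here is a minimal sketch of reading a blob over plain HTTP from any platform (the account, container and blob names are invented; this assumes a public container, while private blobs, tables and queues would instead need a signed Authorization header or a Shared Access Signature):

using System;
using System.Net;

public class BlobFetchSample
{
    public static string DownloadPublicBlob()
    {
        // Hypothetical storage account and public container; any platform
        // that can issue an HTTP GET (EC2, App Engine, on-premises) can
        // read this blob. No Azure compute instance is required.
        var blobUri = new Uri(
            "http://myaccount.blob.core.windows.net/public-container/report.xml");

        using (var client = new WebClient())
        {
            return client.DownloadString(blobUri);
        }
    }
}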

Is there a need for Standards in Cloud Computing

Whenever people start talking about a need for standards I get worried. Standards mean committees and meetings and great long documents. I would argue we already have the standards we need. All of Windows Azure is made available using HTTP and a REST-based API. That means any platform that can make an HTTP request can use Windows Azure. The same can be said of Amazon Web Services and Google App Engine.

Microsoft provides Windows Azure for compute services, SQL Azure for relational database and Azure Storage for data services. Amazon has EC2, Elastic Beanstalk, S3 and RDS, which collectively provide the same services. Google offers App Engine and Bigtable. You can mix and match the services from these providers any way you think is best, and move between them over time.

Summary

So, yes, you can deploy an Azure application today and move it elsewhere later. To learn more about Windows Azure, check out Learning Tree course 2602, Windows Azure Platform Introduction: Programming Cloud-Based Applications. Or, to learn more about cloud computing in general, come to Learning Tree’s Cloud Computing course.


Jason Zander (@jlzander) described new DevOps features while Announcing ALM Roadmap in Visual Studio vNext at Teched in this 5/16/2011 post:

I get a lot of questions about the future of Visual Studio; while I can't talk about everything we're doing, I am excited because today at Teched North America, I announced our vision for Application Lifecycle Management (ALM) in the next version of Visual Studio. Our vision for ALM can be broken down into three main themes:

When we asked people what the biggest problem they faced in successfully delivering software was, they identified the need for better collaboration. We know that building software takes a team of people including developers, testers, architects, project planners, and more. Out of this observation, we created the strategy for our ALM offering, which focuses on helping people collaborate in very tightly integrated ways:

  • Collaboration – focus on the flow of value between team members no matter what role.
  • Actionable Feedback – when feedback is required between team members, it should be in a form which is directly applicable to solving the problem at hand.  For example when a tester communicates a defect to development it should include videos, screen shots, configuration information, and even an IntelliTrace log making it easier to find and fix the root problem.
  • Diverse Work Styles – provide the best possible tool for each team member whether that is the Visual Studio IDE, the web browser, SharePoint, Office, or dedicated tooling.
  • Transparent Agile Processes – Enable all of the above to work on a “single source of truth” from engineering tasks through project status.  TFS provides this core that brings together all team members and their tools.

VS2005, VS2008, and VS2010 have all delivered new value following this path.  For example VS2010 added deep Architect <-> Developer and Test <-> Developer interaction through solutions like architectural discovery, layering enforcement, automated testing, and IntelliTrace.

In the keynote today, I talked about how we have continued on this path by incorporating two additional important roles: stakeholders and operations. Even though this diagram greatly simplifies the flows throughout the application lifecycle, it captures the essence of planning, building, and managing software:

There are a number of scenarios that span the next version of Visual Studio for ALM. These scenarios improve the creation, maintenance and support of software solutions by focusing on improving the workflow across the entire team as well as across the entire lifecycle.

  • Agile Planning Tools – create transparency across the planning process and full team participation through solutions like the new backlog and task board.
  • Lightweight Requirements – a natural way to capture and receive feedback on requirements early in the process.
  • Stakeholder Feedback – working code which matches the expectations of stakeholders.
  • Continuous Testing – unit test coverage ensures quality in the final product.
  • Agile Quality Assurance – increased code quality with code review support, enhanced unit testing frameworks and new exploratory testing support.
  • Enhanced User Experience – more time ‘in the zone’, through improved experiences for day-to-day tasks.
  • Aligning Development with Operations – increased connections and insight between the operations and development teams lowering the time it takes to fix a bug in production.

Here are just a few of the screenshots from the demos today – a link to my keynote is below. 

Agile Planning
Planning a successful release requires making a lot of tradeoffs.  We need to decide what core customer problems we want to solve and in what priority (product backlog).  We only have so many resources available to us which must be factored in (capacity planning).  Once we’ve determined what comes next we actually have to plan out the work (sprint planning).  Finally we need to track our progress against the plan and make adjustments as we go (managing tasks).  With Visual Studio vNext we have introduced a new web based interface that implements the scrum model providing solutions for all of these issues.  Because the solution is built on TFS, the data is easily accessed in your favorite tool of choice.  You can see some examples here:

          

Lightweight Requirements
How often have you built exactly what the customer asked for but not what they wanted?  In general customers are working hard to provide their requests and the engineering team is working equally hard to make them happy.  At the same time there are a lot of steps involved and each point can introduce the potential for divergence between the stakeholder and the engineering team.  Visual Studio vNext provides a natural way to work through stakeholder feedback using something we are all familiar with, PowerPoint.  Using the Storyboarding plug-in for PowerPoint the product owner can quickly mock up the solution and get feedback directly from the customer before implementation begins.  Because the solution leverages PowerPoint, you can show linking between screens in your application, demonstrate user actions (like touch), and easily share your proposal.  The following screen shots demonstrate some of these concepts:

   

      

Stakeholder Feedback
The longer the time between a user asking for a feature and the team delivering, the more likely we are to see disconnects.  Ideally we would get feedback on a regular basis to make sure what is being built is what was asked for.  To help with this, Visual Studio vNext introduces a new feedback tool (based on the Test Professional support in VS2010) which allows a product owner to use the new features and provide actionable feedback.  The tool collects video and audio of the feedback session as well as the action logs (which can be turned into test cases).  This kind of rich feedback done in a timely way keeps everyone in sync.

    

Continuous Testing
Having great unit test coverage is one of the best ways to ensure quality in the final product.  With Visual Studio vNext we are enabling you to use your favorite unit testing framework, integrated deeply into the IDE.  We will support MSTest, xUnit, and NUnit with vNext.  You will also be able to target both .NET and native C++ code.  Adding test frameworks is an extensibility point as well, so if you don’t see your favorite one listed here, you can easily add it.
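
Whichever framework is plugged in, the tests themselves stay simple; for example, a minimal MSTest-style test might look like this (OrderCalculator and its 10% discount rule are hypothetical, purely for illustration):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class OrderCalculatorTests
{
    // OrderCalculator is a hypothetical class under test.
    [TestMethod]
    public void Total_AppliesTenPercentDiscount_ForLargeOrders()
    {
        var calculator = new OrderCalculator();

        decimal total = calculator.Total(quantity: 100, unitPrice: 2.00m);

        // 100 * 2.00 = 200, minus the assumed 10% volume discount.
        Assert.AreEqual(180.00m, total);
    }
}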

           

Agile Quality Assurance
Once our new software has been finished by the development team, we need to do our quality assurance passes.  This will include a set of functional tests that can utilize the manual and coded testing features of VS2010. For Visual Studio vNext we also wanted to provide a way for a QA professional to explore the product and look for additional issues. The new Exploratory Testing tool provides a great way to do this.  As you explore the product, the tool records the entire session (audio and video), the action log of steps taken, etc. From the tool you can create a new test case or bug, take snapshots, and take detailed notes. The filtering tool also allows you to select the steps that led to finding a defect. All of these are examples of providing actionable feedback.

[Updated Screenshot]

System Center and Visual Studio Team Foundation Server 2010

The relationship between the application development and the IT operations teams is critical.  We’ve just released a CTP of the new connector between System Center and Visual Studio Team Foundation Server (TFS) 2010 that facilitates the alignment between development and operations. The connector enables an operations team to assign application performance monitoring and diagnostic information gathered by System Center to TFS for immediate attention and application incident triage by the engineering team. Using this support, the operations team can easily capture key data (like call stacks) and automatically deliver them to the engineering team (another example of actionable feedback). Microsoft will deliver a CTP of the new connector later today.  Be sure to check it out and let us know what you think!

Find Out More

The “Visual Studio vNext: Application Lifecycle Management” whitepaper is available today.  This is a comprehensive whitepaper that covers these topics in much more detail.  I recommend you check it out, along with the Visual Studio Roadmap.

The best way to take advantage of the benefits of ALM is to start using Visual Studio 2010 today and obtain an MSDN subscription.   Additional helpful links to get you started can be found on our websites:

A video of my keynote will be available later on the Teched North America site:

Check out Cameron Skinner, Brian Harry, and Amit Chatterjee’s blogs over the next couple of months for more details on ALM in Visual Studio vNext.


Rob Gillen (@argodev, pictured below) recommended David Pallmann’s Windows Azure Handbook: v1 book in a 5/16/2011 post:

Earlier this year, a fellow Windows Azure MVP, David Pallmann, wrote the first book in a series on Windows Azure. For whatever reason, I started reading the book with a great deal of skepticism (fast moving technology, books are outdated long before they reach press, etc.) but was pleasantly surprised by the strong business relevance of this book.

In fact, I’d go so far as to say that most any consultant working in the Azure field (and other cloud fields as well) should take the time to read this book. There are a number of worksheets and thought processes that David walks through that illustrate a maturity (i.e. lack of hype) to the cloud engagement process.

Of particular interest to me were the later chapters, which moved a bit away from the technology specifics and focused more on the reasoning behind why one would consider a move to the cloud, what should be considered, how to identify good/bad cloud application candidates, how to plan an effective pilot, etc.


<Return to section navigation list> 

Windows Azure Platform Appliance (WAPA), Hyper-V and Private/Hybrid Clouds

The System Center Team posted New Hyper-V Cloud Fast Track Partners Double Down on Private Cloud Vision on 5/16/2011:

Tech Ed North America 2011 http://bit.ly/mdfwpA kicked into high gear today with news that Cisco and NetApp are participating in Microsoft’s Hyper-V Cloud Fast Track program.   Together, NetApp and Cisco have developed a pre-validated private cloud solution for customers who want to deploy more quickly, with less risk and more flexibility and choice.

Their solution is interesting because it offers the flexibility of the cloud with the higher levels of control needed to meet security and regulatory guidelines.  It uses Windows Server 2008 R2 Hyper-V and Microsoft System Center to ease management and enable automation and self-service provisioning of the entire private cloud infrastructure, including automated self-service provisioning via native Opalis Integration Packs.  It is scalable vertically or horizontally, paving the way for greater efficiency, agility and cost savings.  Check out the insider insights from NetApp and Cisco engineers in a TechNet Edge video here: http://bit.ly/jWAplo.  

NetApp and Cisco join six of our largest OEM partners who are betting on Microsoft’s private cloud strategy, and our commitment to meet customers where they are and help them plan for the future.  Together with our Hyper-V Cloud Fast Track partners, our customers will have the ability to leverage the best of what they have and then move to other options when the time is right, on their terms. Customers get the benefit of proven private cloud solutions now available from eight vendors who represent more than 80% share of the world’s server market.  

Customers are on a path toward virtualization and cloud, driving increasing interest in Hyper-V.  There is growing recognition that management is key.  Our solutions include enterprise-class virtualization and comprehensive management technologies to deliver private cloud services, and powerful application insights to ensure maximum performance and availability across private and public cloud assets.  The Windows Hyper-V Cloud Fast Track program ensures customers have a broad choice of predefined, validated configurations for private cloud deployments comprising compute, storage, networking resources, virtualization and management software.

Congratulations to NetApp and Cisco on their decision to Fast Track with us to the cloud!

What I really wanted was details about WAPA’s release.


<Return to section navigation list> 

Cloud Security and Governance

Lori MacVittie (@lmacvittie) asserted Shared resources do benefit organizations, there’s no arguing about that. But when resources forming the basis of identity are trusted and then inadvertently shared, you may find your (IP) identity misappropriated in an introduction to her (IP) Identity Theft in Cloud Computing Environments post of 3/16/2011 to F5’s DevCentral blog:

In the past two years there have been interesting stories floating around about what happens when IP addresses are “shared” in public cloud computing environments. You’ve no doubt heard how someone spun up an instance and was immediately blacklisted by some other website because the last application assigned that IP address was naughty on the Internets.

Organizations have struggled with such issues, and admittedly they are somewhat unique to the shared, up and down nature of cloud computing. In the history of the Internet this type of scenario has certainly happened with dynamically assigned IP addresses from ISPs, and even statically assigned addresses that were formerly assigned to someone who got the IP address listed as a SPAMmer or otherwise tagged as a carrier of illegal or malicious intent. But these are not the only issues with shared IP addresses. It’s not really that the organizations dealing with such skeletons in their IP address closet are experiencing reputation damage, it’s that technical – and often very static – security systems are tightly coupled to IP addresses. They haven’t adequately, yet, evolved to deal with the reality of highly volatile IP address assignment. We still use IP address as a “unique” identifier even though today, particularly with public cloud computing, it’s anything but.

And if you thought being tagged as guilty by (IP) association was bad enough, you haven’t considered the potential for application “identity” theft that could leave applications open to back-door attacks.

MISTAKEN IDENTITY
So let’s assume that two applications are deployed in a cloud computing environment. Let us further assume that both use multiple instances and a load balancing service to maintain availability and scalability. What happens when Application A drops an instance – and therefore its associated IP address – and that IP address is subsequently assigned to Application B?

Usually nothing bad, that’s for sure. That’s just another minute’s work in a public cloud computing environment. But what if the load balancing service for Application A did not drop the IP address?

Yeah. Isn’t that interesting? Application A is going to break 25% of the time (assuming a four-member pool and the rudimentary round robin algorithm commonly used in cloud computing). Every fourth request will result almost assuredly in either an error or the wrong content. But that would actually be a good thing, considering the alternative possibilities.

1. The customer who now “owns” the IP address is a legitimate customer whose only intention is to deploy an application in the cloud. The customer is angrified because this case of mistaken (IP) identity is costing them money in terms of bandwidth and processing power to constantly field requests that should not be directed to them in the first place. Those requests are not valid for their application, but they pay to process and reject them nonetheless.

2. The customer who now “owns” the IP address is an evil genius with a plan to now use the mistaken (IP) identity to serve up malicious traffic. Perhaps a browser vulnerability-based injection. Perhaps some content that looks like it belongs on the site that tricks the user into confirming their login details, providing an easy means of credential theft. Or perhaps it’s the perfect means of delivering what looks like valid content to the page that’s nothing more than a phishing attack. Oh, the options if you’re an evil genius just go on and on. After all, the IP address is trusted by the application it is serving content to, so a lot of the obstacles that attackers must  navigate to deliver malicious content on other sites are eliminated in this scenario.

Think such a scenario is far-fetched? It’s not. It’s not widespread (yet), either, and it’s difficult to target an application in such a manner because of the (seemingly) random nature of IP assignment in cloud computing environments, but mass SQL injection attacks aren’t exactly targeted, either. They’re designed to take advantage of a vulnerability when one is found, and part of its usage is the discovery of such an existing vulnerability. A maliciously minded person could certainly set up such an attack method and then sit back – and wait. After all, most attackers aren’t using their money to perpetrate such attacks anyway, so they’re unlikely to care that they’ll rack up a lot of money in instance costs while waiting for a successful scenario to come about.

HOW IT HAPPENED
There are a couple of possibilities for how such a scenario could (and did) happen. Based on the nature of load balancing and dynamic environments, however, it is easy enough to extrapolate how such a situation could easily arise.

First and foremost this scenario is enabled by a tight-coupling between “application” and “IP” address. This tight-coupling is a dependency that is no longer necessary in well-architected high-availability environments and, as we are learning, can be dangerous to the health and security of applications in highly dynamic environments such as cloud computing where IP addresses are shared and cannot be relied upon as a definitive identifier of anything. This is particularly impactful in load balanced environments, where IP addresses are still used to identify nodes and where operations has failed to separate the application identity from an IP address. 

Load balancing works by applying an algorithm to choose a resource from a pool of resources dynamically. There are numerous algorithms that can factor in a number of variables (resource capacity, performance, connection limits, etc…) but all rely upon information gathered from the application to determine suitability to fulfill any given request. In healthy load balanced environments there is a very nearly symbiotic relationship between the load balancing service and the applications, as it is from the application and its supporting infrastructure that the load balancing service receives the status information crucial to making decisions.

In this particular scenario, it is likely that a failure to properly monitor the application compounded a process failure to remove the resource from the pool in the first place. In a cloud computing style environment, this may have been the result of an automation failure.

1. AUTOMATION FAILURE

The load balancing service was either local to the cloud provider or at the customer’s site, and upon release of the instance/IP address, whatever automation or process was in place that should have removed that IP address from the pool of available resources did not.  This could have been due to a disconnect in the way the instance was de-provisioned; perhaps the customer used a method that was unanticipated by the provider and thus no corresponding entry point into an automated process was provided, making it impossible for an automated response from the load balancing service. Or perhaps the load balancing service was external to the provider, and the resources were integrated into an existing load balanced system internal to the customer’s data center in a hybrid-style architecture, where no such automation exists and manual intervention was relied upon instead. Human error is a common instigator of operational failures, so this is a distinct possibility as well.

2. MONITORING FAILURE

All load balancing solutions use some kind of monitoring to ensure that a resource is actually available. This monitoring occurs on a fixed interval basis as determined by the service/configuration, and can be implemented using a variety of techniques – from ICMP replies to advanced content verification. ICMP, the most rudimentary but least costly (in terms of bandwidth and processing) method, is wholly inadequate to determine the availability of an application because it is designed only to garner a response at the network (IP) layer. TCP-based monitoring, as well, is designed to garner a response at the application or web server layer, and is better than ICMP but still fails to determine whether an application is actually available in the sense that it is (1) running and (2) returning valid responses. Advanced health monitoring that verifies the availability at the network, server and application layer is the only certain method to guarantee true application availability. Such methods involve making a request and receiving a response from the application and then verifying the content is what was expected. If it is not, the node (member, resource) can be marked as “unavailable” and taken out of the rotation for the load balancing algorithm, ensuring that no further requests to that resource are made.
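
For illustration only, the essence of that last kind of check can be reduced to a few lines (a simplified sketch in C#; in practice this logic lives in the load balancer’s health monitor configuration, for example a receive-string match, rather than in application code):

using System;
using System.Net;

public class ContentHealthMonitor
{
    // Issues an HTTP request against a pool member and verifies not just
    // that something answered, but that the expected content came back.
    // A failure on either count marks the member as unavailable so the
    // load balancing algorithm stops sending it traffic.
    public static bool IsHealthy(Uri memberUri, string expectedContent)
    {
        try
        {
            using (var client = new WebClient())
            {
                string body = client.DownloadString(memberUri);
                return body.Contains(expectedContent);
            }
        }
        catch (WebException)
        {
            // Connection refused, timeout, or HTTP error status:
            // the member is not serving the application.
            return false;
        }
    }
}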

AUTOMATION needs CONTEXT
Regardless of what disconnect caused the load balancing service to fail to remove the instance when it was decommissioned, proper health monitoring techniques would have caught the problem and resolved it, such that the original assignee of the IP address would not be serving up invalid content (or failures) and the inheritor of the IP address would not be inundated with requests that resulted in extraneous and unnecessary costs.

More importantly, however, proper health monitoring would prevent a scenario in which the (application) identity of what may be a trusted service can be misappropriated and used to further some evil genius’ plans. Automation of any kind – and make no mistake, health monitoring that can automatically pull a resource out of a pool is a form of automation – requires context to ensure it makes the right decisions at the right time. It’s simply not enough to PING or open a TCP connection and assume that means anything other than basic networking and application platform services are available. In an environment where networks and resources are shared, the ability to ascertain availability of an application is critical not only to the successful delivery of an application but, as we are learning, its security as well.

 

No significant articles today.


<Return to section navigation list> 

Cloud Computing Events

No significant articles today.


<Return to section navigation list> 

Other Cloud Computing Platforms and Services

Tim Anderson (@timanderson) reported Mono splits from Novell/Attachmate to form basis of new company on 5/16/2011:

Mono is an open source implementation of .NET, formerly sponsored by Novell, and its future following Novell’s acquisition by Attachmate has been the subject of speculation.

Today Mono leader Miguel de Icaza has revealed new plans. In a blog post, he announces Xamarin, a new company focused on Mono. This company will build new commercial .NET tools for Apple iOS and Google Android, to enable .NET development on those platforms. Note that they will not be called MonoTouch and MonoDroid, the Novell offerings for this, but will be “source compatible”. I am sure there are brand and intellectual property ownership issues here; but de Icaza is no stranger to negotiating tricky issues of this kind, bearing in mind Mono’s relationship with Microsoft .NET. However I am not sure why the new company cannot acquire the existing brands, since it seems that Attachmate will no longer be able to support them.

The plans are not exactly new, but have been forced by Attachmate’s decision to lay off the entire Mono team:

We have been trying to spin Mono off from Novell for more than a year now. Everyone agreed that Mono would have a brighter future as an independent company, so a plan was prepared last year.

To make a long story short, the plan to spin off was not executed. Instead on Monday May 2nd, the Canadian and American teams were laid off; Europe, Brazil and Japan followed a few days later. These layoffs included all the MonoTouch and MonoDroid engineers and other key Mono developers.

Apparently Xamarin has “angel funding” but is looking for more.

The advent of MonoTouch and MonoDroid has been good for Mono, since it gives the project a stronger business model than it had previously. These mobile platforms are hot, and the ability to code for them in C# is great for Microsoft Platform developers. This factor could enable Xamarin to succeed.

On the other hand, Novell’s name gave Mono enterprise credibility as well as the backing of a large company, and these it now lacks.

The curious thing is that Mono is valuable to Microsoft. The company seems at times to hate Mono, because it removes the need for Windows, and at other times to love it, because it extends the breadth of .NET to include Linux and now iOS and Android. Microsoft gave some sort of official status to Moonlight, the Mono implementation of Silverlight, though the company’s support for Moonlight has always seemed hesitant.

So can we expect now that the company which can afford $8.5 billion for Skype, could expend a few million in support of Xamarin? Or will it stand by and hope that Mono fades away?

I have no idea, though I would like to see both Mono and Xamarin succeed. It is no threat to Microsoft, but does take .NET to places that Microsoft will never support. Without Mono, C# is merely a language for programming Windows.

Related posts:

  1. Mono project: no plans for cross-platform WPF
  2. Trying out MonoTouch – C# for Apple’s iPhone and iPad
  3. Moonlight 2 released; no Microsoft codecs unless you get it from Novell


Lydia Leong (@cloudpundit) reported The forthcoming Public Cloud IaaS Magic Quadrant in a 5/16/2011 post to her CloudPundit blog:

image Despite having made various blog posts and corresponded with a lot of people in email, there is persistent, ongoing confusion about our forthcoming Magic Quadrant for Public Cloud Infrastructure as a Service, which I will attempt to clear up here on my blog so I have a reference that I can point people to.

1. This is a new Magic Quadrant. We are doing this MQ in addition to, and not instead of, the Magic Quadrant for Cloud IaaS and Web Hosting (henceforth the “cloud/hosting MQ”). The cloud/hosting MQ will continue to be published at the end of each calendar year. This new MQ (henceforth the “public cloud MQ”) will be published in the middle of the year, annually. In other words, there will be two MQs each year. The two MQs will have entirely different qualification and evaluation criteria.

2. This new public cloud MQ covers a subset of the market covered by the existing cloud/hosting MQ. Please consult my cloud IaaS market segmentation to understand the segments covered. The existing MQ covers the traditional Web hosting market (with an emphasis on complex managed hosting), along with all eight of the cloud IaaS market segments, and it covers both public and private cloud. This new MQ covers multi-tenant clouds, and it has a strong emphasis on automated services, with a focus on the scale-out cloud hosting, virtual lab environment, self-managed virtual data center, and turnkey virtual data center segments. The existing MQ weights managed services very highly; by contrast, the new MQ emphasizes automation and self-service.

3. This is cloud compute IaaS only. This doesn’t rate cloud storage providers, PaaS providers, or anything else. IaaS in this case refers to the customer being able to have access to a normal guest OS. (It does not include, for instance, Microsoft Azure’s VM role.) [Emphasis added.]

4. When we say “public cloud”, we mean massive multi-tenancy. That means that the service provider operates, in his data center, a pool of virtualized compute capacity in which multiple arbitrary customers will have VMs on the same physical server. The customer doesn’t have any idea who he’s sharing this pool of capacity with.

5. This includes cloud service providers only. This is an MQ for the public cloud compute IaaS providers themselves — the services focused on are ones like Amazon EC2, Terremark Enterprise Cloud, and so forth. This does not include any of the cloud-enablement vendors (no Eucalyptus, etc.), nor does it include any of the vendors in the ecosystem (no RightScale, etc.).

6. The target audience for this new MQ is still the same as the existing MQ. As Gartner analysts, we write for our client base. These are corporate IT buyers in mid-sized businesses or enterprises, or technology companies of any size (generally post-funding or post-revenue, i.e., at the stage where they’re looking for serious production infrastructure). We expect to weight the scoring heavily towards the requirements of organizations who need a dependable cloud, but we also recognize the value of commodity cloud to our audience, for certain use cases.

At this point, the initial vendor surveys for this MQ have been sent out. They have gone out to every vendor who requested one, so if you did not get one and you wanted one, please send me email. We did zero pre-qualification; if you asked, you got it. This is a data-gathering exercise, where the data will be used to determine which vendors get a formal invitation to participate in the research. We do not release the qualification criteria in advance of the formal invitations; please do not ask.

If you’re a vendor thinking of requesting a survey, please consider the above. Are you a cloud infrastructure service provider, not a cloud-building vendor or a consultancy? Is your cloud compute massively multi-tenant? Is it highly automated and focused on self-service? Do you serve enterprise customers and actively compete for enterprise deals, globally? If the answers to any of these questions are “no”, then this is not the MQ for you.


Judith Hurwitz (@jhurwitz) asked Can the Power of Open Source Change Industry Dynamics? in a 5/16/2011 post:

After spending two days at the Red Hat Summit last week, I started thinking about the power of open source software and how it has transformed the software industry. When I was writing my new book, Smart or Lucky, How Technology Leaders Turn Change into Success, I analyzed the successes and failures of companies that attempted to cement their offerings as a standard in the market.  In the 1980s and 1990s companies like Microsoft were able to establish their platforms as de facto standards. Fast forward another couple of decades and it is becoming clear that open source has changed the rules of the game.

This transition did not happen overnight.  In the early days of open source it was hard to get a customer to take the movement seriously.  Customers were afraid of being stuck with orphaned software that was hard to manage and left to the whims of developers.  In those days customers had a right to be worried. All open source offerings were not created equal. There were many instances of companies with failing software products that attempted to keep those products alive by offering them to the open source community.  Many of these efforts lasted a few short and painful months before the products went to the forgotten product graveyard.  More recently companies like Sun Microsystems, before its acquisition by Oracle, proclaimed that portions of the Java language and its Solaris operating system would become open source. Ironically, one of Sun’s early triumphs was its ability to take one of its most important assets in the late 1980s – its network file system – and put it into the public domain.  While this bold action, as I talk about in Chapter 1 of my new book, had a very positive yet short-term impact on Sun’s fortunes, the company was not able to sustain its early success.

But things started to change when software companies like Red Hat, with a strong commercial approach to managing and maintaining the Linux operating system, JBoss middleware, and now cloud computing software, began actively collaborating with well-respected standards organizations.  Organizations with strong leadership like the Apache Foundation, the Open Group, the OMG, the Eclipse Foundation, and the Distributed Management Task Force (DMTF) have partnered closely with vendors ranging from IBM to VMware to support the requirements for interoperability.

The most important consequence of the growing importance and legitimacy of open source is that it levels the playing field.  In reality, it is a lot harder for a single company to corner a market based simply on convincing partners and customers to standardize on a single system.  Now I am not naive; it is still possible for a company to gain dominance in a market segment with a proprietary approach.  For example, while Apple uses a Unix-based operating system, its software is proprietary.  Apple’s loyal customers accept this because of the perceived elegance of the solution.  In essence, you can only corner a market when you offer customers something that solves a problem or changes the customer experience in a significant way.

Cloud computing adds an interesting dimension to the open source approach.  As cloud computing matures, more customers will demand interoperability at the API level, the data level, and the applications level.  So, I was particularly interested in Red Hat’s latest open source efforts related to cloud computing.  The company is focused on two initiatives: CloudForms and OpenShift. CloudForms is intended as an Infrastructure as a Service (IaaS) initiative to help customers implement a private or hybrid cloud from a lifecycle perspective.   This initiative focuses on portability, virtualization, and integration with public cloud services.  OpenShift is focused on Platform as a Service (PaaS); it is designed to provide an open development platform to support a variety of language options including Java, Ruby, and Python.  It is too early to determine whether this effort will become widely adopted by customers, but it is indeed a step in the right direction.  The fact that Red Hat is working closely with the Apache Foundation is also encouraging.  As with any open source initiative, it will require that a broad community of developers work on expanding and deepening the implementation of this approach, so it will take time and a lot of effort.  The downside of this open development process is that it takes a long time for efforts like CloudForms and OpenShift to mature to the point where they are ready for the average commercial corporation.  However, the benefit of an open source development effort is the power and wisdom of thousands of contributors.


<Return to section navigation list> 
